The paper provides a general representation for the errors of delta hedging derivatives contracts under misspecified asset price processes. A new 'Greek' is developed that quantifies the dependence between the prospective hedging errors and the volatility forecast errors. The analysis applies to any contingent claim that can be spanned under a general diffusion process. The hedging errors are studied in more detail for a standard vanilla option, a geometric average rate option, and an up-and-out call option with a continuously monitored barrier. Two alternative approaches are provided for deriving the conditional and unconditional distributions of hedging errors: a binomial tree and kernel estimation. The binomial tree method exploits the Markovian nature of hedging errors: by evolving hedging errors forward, all the moments of the conditional and unconditional distributions are obtained. Alternatively, kernel estimation provides local estimates and confidence intervals for hedging errors. The analysis provides techniques for assessing the absolute and relative difficulty of hedging different instruments or portfolios of instruments.

The paper is devoted to an approach to solving optimal stopping problems for multidimensional diffusion processes. The approach is based on the connection between the boundary problem for diffusion processes and the Dirichlet problem for elliptic PDEs. The solution of the Dirichlet problem is considered as a functional of the available continuation regions, and this functional is optimized by variational methods. Unlike the heuristic "smooth pasting" method, the proposed approach allows one, in principle, to obtain necessary and sufficient conditions for the optimality of a stopping time within a given class of continuation regions. The approach is applied to solving an optimal stopping problem for a two-dimensional geometric Brownian motion whose objective functional is the expectation of a homogeneous function. We intend to discuss an application of this optimal stopping problem to real option theory and to the optimal timing of investment.

This paper analyzes how large investors form optimal equity portfolios when the return on each investment depends on the size of the investment. The solution weighs the benefits of higher returns, obtained through private benefits of control and monitoring efforts when an investor purchases a large equity block, against the costs of bearing diversifiable risk. The model predicts that budget-constrained investors will be more likely to buy controlling blocks in smaller firms and in firms where higher private benefits of control can be secured. Numerical analysis of the solution shows that investor utility is strictly increasing in the capital allocated to purchasing controlling blocks. The classical result that the efficient frontier is the same for all investors regardless of their wealth does not hold in the setting of this paper. Investors holding portfolios of controlling equity blocks earn strictly positive abnormal returns at the expense of small investors.

In this paper we deal with the problem of pricing a guaranteed life insurance participating policy, traded in the Italian market, which embeds a surrender option. This feature is an American-style put option that enables the policyholder to sell back the contract to the insurer at the surrender value. Employing a recursive binomial formula patterned after the Cox, Ross and Rubinstein discrete option pricing model, we first compute the total price of the contract, which also includes compensation for the participation feature (the "participation option" henceforth). This price is then split into the value of three components: the "basic contract", the "participation option" and the "surrender option". The numerical implementation of the model allows us to capture some comparative statics properties and to tackle the problem of suitably fixing the contractual parameters so as to match the premium computed by insurance companies according to standard actuarial practice.
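The recursive valuation on a Cox-Ross-Rubinstein tree can be illustrated with a minimal sketch. The code below prices a plain American put by backward induction, which is the same mechanism used to value an early-surrender feature; the participating-policy cash flows and the paper's contractual parameters are not reproduced, and all inputs here are illustrative.

```python
import math

def crr_american_put(s0, k, r, sigma, t, n):
    """Price an American put on a CRR binomial tree by backward induction.

    At each node the holder takes the larger of the continuation value and
    the immediate exercise payoff -- the same comparison used to value an
    early-surrender feature."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability

    # terminal payoffs at maturity
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # roll back through the tree, exercising early whenever it pays
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(k - s0 * u**j * d**(step - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

price = crr_american_put(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, n=200)
```

For these illustrative inputs the early-exercise premium lifts the price above the corresponding European put value of about 5.57.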

The valuation of American-style options gives rise to an optimal stopping problem involving the computation of a time dependent exercise boundary over the whole life of the contract. An exact computational formula for this time dependent optimal boundary is not known. Nevertheless, several numerical approaches have been proposed in the literature to approximate the optimal boundary.

In particular, in this contribution we study three different numerical techniques: an improved lattice method, a randomization approach based on the American option valuation procedure proposed by Carr, and an analytic approximation procedure proposed by Bunch and Johnson.

The three techniques studied are tested and compared through a wide empirical analysis.

This paper tests whether there are mean and volatility spillovers between the US and the German stock market. Based on a newly compiled sample of intra-day data for the Dow Jones Industrial Average and the DAX, we find significant spillovers from the US to the German market and vice versa. We also differentiate between contemporaneous correlation and spillovers. In addition, we identify spurious spillovers associated with the opening quotes, and show that estimating volatility spillovers in isolation can result in a trade-off between a mean and a volatility spillover.

We study a rational valuation and hedging principle for contingent claims that combine tradable and untradable sources of risk. The principle is based on the preferences of a rational investor with constant absolute risk aversion and uses exponential utility indifference arguments.

In a Cox-Ito model with multiple assets and mutual dependencies between tradable and untradable sources of risk, constructive results on the utility-based valuation and hedging strategy (and, as an aside, on the optimal investment strategy) are obtained in terms of reaction-diffusion equations. Possible applications include credit- and rating-dependent securities. Further properties, such as diversification, and computational methods are obtained in a semicomplete product model.

Compared to the large body of research on the US market, there is little published evidence on the relationship between stock returns and macroeconomic variables for markets outside the US, especially in developing countries. In this paper we examine the response of the Tunisian stock market to macroeconomic announcements such as inflation, the money stock (M2), money market interest rate variations, and real activity. First, we analyze the response of the stock market on announcement days by combining the daily returns of the stock market index with the unanticipated components of the various announcements; the behavior of the latter is captured using ARIMA and VAR models. Second, we examine the variability of the stock market's responses to macroeconomic announcements across the states of the economy. This analysis suggests that the variation of the money market interest rate and the growth rate of industrial production are the only information to which the stock market adjusts rapidly. Empirical findings show evidence of an asymmetric stock market response to announcements of M2. In addition, we analyze the sensitivity of the monthly real return of the BVMT stock market index to macroeconomic announcements. Our results seem to confirm that the money market interest rate and M2 are the only economic information affecting the monthly returns of the stock market index. Results concerning the announcement effect of other economic indicators vary with the method used to classify economic states, and the test of the asymmetry hypothesis of stock market responses to macroeconomic announcements also seems to depend on the classification method.

Keywords: Macroeconomic announcements, stock market, classification of the states of the economy, ARIMA models, VAR models.

Because of its properties (persistence/antipersistence and self-similarity), the fractional Brownian motion (fBm) has been suggested as a useful mathematical tool in many applications, including finance.

In this paper, we discuss the extension to the multi-dimensional case of the Wick-Ito integral with respect to fractional Brownian motion and apply this approach to study the problem of minimal variance hedging in a (possibly incomplete) market driven by m-dimensional fBm. The mean-variance optimal strategy is obtained by projecting the option value on a suitable space of stochastic integrals with respect to the fBm, which represents the attainable claims.

We first prove a multi-dimensional Ito-type isometry for such integrals, which is used in the proof of the multi-dimensional Ito formula. These results are then applied to provide a necessary and sufficient characterization of the optimal strategy as the solution of a differential equation involving Malliavin derivatives.

A Radner-Shepp model for the value of an insurance company is considered. The value is modelled as a Brownian motion with drift, minus the dividend flow. The dividend flow plays the role of a control, and the total discounted payoff is maximized.

For the case of zero liquidation value, the optimal dividend policy was determined by M. Jeanblanc and A. Shiryaev. In this presentation, an explicit solution is given for the general case of a nonnegative dissolution value. We treat separately three kinds of admissible controls: a continuous dividend yield at a bounded rate, discrete dividends with transaction costs, and a general non-negative, non-decreasing, right-continuous dividend flow.

In today's equity derivatives market the trend is clearly towards long-dated multi-asset products with strong path dependency. The purpose of this article is to present a Monte Carlo framework for valuing and risk-managing these products.

The framework accurately reproduces a given Vanilla options market while being as computationally efficient as a standard Monte Carlo routine sampling log-normally distributed assets. Sensitivities to a changing market are readily available since no calibration of model parameters is required. The framework can be extended to capture additional market information such as basket and forward starting Vanilla option prices. This is achieved by modeling spatial and temporal correlation skew using copulas.

The paper gives a closed-form dynamic programming solution to the discrete time mean-variance hedging problem with proportional transaction costs that can be easily implemented on a Markov chain. It then compares the performance of the dynamically optimal strategy with the Leland and Black-Scholes hedging strategies for realistic (leptokurtic) return distributions and transaction costs. We find that the dynamically optimal strategy outperforms the Leland strategy for high transaction costs (1%), but that the replication error of the best hedging strategy is very high. Furthermore, in terms of performance there is little difference between hedging once a day and once a week. On the theoretical level, this paper generalizes and combines the analyses of Leland (1985) and Schweizer (1995).

We consider a monopolistic firm which produces and sells a good that can be stored. It acts in continuous time over a period [0,T] and plans its production/sales schedule by dynamically maximizing its revenue from sales, diminished by its production and storage costs. We assume that the firm faces a nonincreasing marginal revenue and a nondecreasing marginal production cost, but we do not assume that the marginal storage cost is monotone. The firm's problem therefore turns out to be a nonconvex optimal control problem. Nevertheless, we give an existence result and a characterization of the solution (from which we deduce a constructive resolution of the problem when it is convex). The optimal plan consists in disposing of the starting inventories in the best way, in particular keeping the marginal revenue and the marginal production cost equal. Hence, if there is no initial inventory, the firm does not exploit its storage capability.

The Black-Scholes formula assumes the price of the underlying asset evolves as a geometric Brownian motion with constant volatility, and so the probability distribution of the asset price at a fixed time in the future is lognormal. However, a single volatility parameter cannot describe the prices of all traded options in general, due to volatility smiles.

To use the Black-Scholes formula, an implied volatility surface is required. Given such a surface, call options with any strike and time to expiry can be priced, and these prices are sufficient to infer the risk-neutral implied probability distribution for the asset price at any time.

This paper obtains expressions for the cumulative distribution function and probability density function implied by an arbitrary choice of smile, by transforming to a dimensionless smile parameterised by log-moneyness. Since a cumulative distribution function has obvious constraints, the smile must necessarily obey certain constraints which are presented here.
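The link between call prices and the implied distribution is the Breeden-Litzenberger relation: the risk-neutral CDF and PDF are the first and second derivatives of the call price with respect to strike (up to discounting). The sketch below illustrates this numerically under the simplest possible smile, a flat one, where the implied density should recover the lognormal; the function names and parameter values are illustrative, not the paper's.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    # standard Black-Scholes call price
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def implied_cdf_pdf(s, r, sigma_of_k, t, k, dk=1e-3):
    """Breeden-Litzenberger by finite differences:
    CDF(K) = 1 + e^{rt} dC/dK,  PDF(K) = e^{rt} d2C/dK2,
    with the smile entering through sigma_of_k(K)."""
    c = lambda kk: bs_call(s, kk, r, sigma_of_k(kk), t)
    grow = math.exp(r * t)
    cdf = 1.0 + grow * (c(k + dk) - c(k - dk)) / (2 * dk)
    pdf = grow * (c(k + dk) - 2.0 * c(k) + c(k - dk)) / dk**2
    return cdf, pdf

# flat smile: the implied density should be the risk-neutral lognormal
cdf, pdf = implied_cdf_pdf(100.0, 0.05, lambda k: 0.2, 1.0, 100.0)
```

With S = 100, r = 5%, a flat 20% smile and T = 1, the implied CDF at K = 100 matches the lognormal value Phi(-0.15), roughly 0.44.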

This paper provides a discrete time algorithm, in the framework of the standard binomial pricing model of Cox-Ross-Rubinstein, to evaluate Parisian options. The algorithm is based on a combinatorial tool for counting the number of paths of a particle, performing a random walk, that remain beyond a barrier for a period strictly smaller than a pre-specified time interval. Once this number is obtained, we use it to develop a binomial pricing model for Parisian options with both a constant barrier and an exponential boundary. The algorithm proposed here is very easy to implement and, moreover, numerical results show that it produces highly accurate prices.
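The paper's combinatorial path count is not reproduced here, but the pricing problem itself can be sketched by brute force: on a CRR tree, augment each node with the number of consecutive steps spent at or above the barrier, and knock the option out once that count reaches the Parisian window. All parameter values below are illustrative.

```python
import math
from functools import lru_cache

def parisian_up_out_call(s0, k, barrier, window, r, sigma, t, n):
    """Parisian up-and-out call on a CRR tree.  Each state carries the number
    of consecutive monitoring dates spent at or above the barrier; the option
    is knocked out once that count reaches `window` steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability

    @lru_cache(maxsize=None)
    def value(step, ups, age):
        s = s0 * u**ups * d**(step - ups)
        new_age = age + 1 if s >= barrier else 0
        if new_age >= window:              # excursion above barrier too long
            return 0.0
        if step == n:
            return max(s - k, 0.0)
        return disc * (p * value(step + 1, ups + 1, new_age)
                       + (1 - p) * value(step + 1, ups, new_age))

    return value(0, 0, 0)

# with an unreachable barrier the price reduces to a vanilla CRR call
vanilla = parisian_up_out_call(100.0, 100.0, 1e12, 10, 0.05, 0.2, 1.0, 100)
pari = parisian_up_out_call(100.0, 100.0, 115.0, 10, 0.05, 0.2, 1.0, 100)
```

The state augmentation multiplies the tree size by the window length, which is what the paper's combinatorial counting avoids.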

We analyze the question of completeness in a market containing countably many assets. In such a market, portfolios involving an infinite number of assets may be formed. By making use of a cylindrical stochastic integral, we define a notion of self-financing "generalized" portfolio as a limit of "naive" portfolios, where a "naive" portfolio is instantaneously based on a finite number of assets, while a generalized portfolio involves infinitely many assets. The market is said to be complete if every contingent claim can be replicated either by a generalized portfolio or by a naive portfolio. We relate completeness in the large market to completeness in the finite sub-markets and to completeness on the set of claims depending on finitely many assets. Finally, we characterize completeness in very simple factor models, where diversification allows one to complete an otherwise incomplete market.

The question we solve is the optimal design of the minimum guarantee in a Defined Contribution Pension Fund Scheme.

We endogenize the investment in the financial market by assuming that the pension fund optimizes its remuneration, which is a part of the surplus. We then define the optimal guarantee as the solution of the contributor's optimization program and find the solution explicitly. Finally, we analyze the impact of the main parameters, and particularly of the sharing rule between the contributor and the pension fund. We find that sharing rules favorable to the pension fund lead to conservative guarantees for the contributor: the sharing rule creates a continuum between the two extremes of Defined Benefit and Defined Contribution Pension Schemes, and allows partial risk transfer between the contributor and the pension fund manager.

The CAPM is generally contested on an empirical basis. Tests conducted with data from financial markets do not generally support the model as correctly describing the range of expected returns. Explaining the size of the residuals by factors missing from the hypotheses of the CAPM may lead to a better understanding of the origins of the empirical weaknesses of this model. We believe that the distributional properties of asset returns are not correctly described by the assumption of joint normality. We therefore study the distribution of asset returns and its impact on the empirical performance of the CAPM. Specifically, we explore the relationship that may exist between the ineffectiveness of the CAPM and the non-normality of the return distribution. More to the point, can we find a descriptive statistic of the return distribution that predicts the ineffectiveness of the beta for a given asset?

A number of methodologies have so far been proposed in the academic literature for the estimation of implied risk-neutral densities (RNDs). The present work develops a new non-parametric methodology for the pricing of contingent claims and the extraction of the stochastic discount factor. When no restrictions are imposed on the underlying process, the associated measure is a random distribution and, under some weak assumptions, it admits an expansion with stochastic coefficients. The major advantage of the proposed estimator is that it results in a weak-form solution, since no strict properties are imposed apart from the measurability of the coefficients of the expansion. Treating the underlying process as time-varying, a new non-parametric methodology is developed for the estimation of implied RNDs, and a number of theoretical issues associated with their estimation are addressed using empirical data.

We investigate some general properties of American option prices when the volatility is time-dependent and level-dependent. We use a time change due to Janson and Tysk to derive monotonicity in the volatility for certain (not necessarily convex) contract functions. We also consider convexity in the underlying stock price when the contract function is convex, time decay and continuity in the volatility.

We study an optimal investment-consumption problem for a small investor whose wealth is divided between a riskless asset (a bank account) and a risky asset (a stock) with log-returns following a Lévy process. The investor preferences, in contrast to the standard von Neumann-Morgenstern time-additive preferences, allow for cumulative consumption patterns with possible jumps/singular sample paths and they incorporate the notion of local substitution. The dynamic programming equation of this singular stochastic control problem is a degenerate elliptic integro-differential variational inequality (a free boundary problem). Herein we present and analyze a Markov chain approximation scheme for solving the investment-consumption problem. A feature of the suggested numerical scheme is that it is based on a simplified dynamic programming equation obtained by approximating the original Lévy process by a simpler and more tractable (Lévy) process which can be written as an independent sum of a drift, a Brownian component, and a finite number of compound Poisson processes. This approximation reduces the integral operator in the dynamic programming equation to a finite series operator. The convergence analysis of the numerical scheme is based on the theory of viscosity solutions.

In this work we provide a simple method to locate and measure dependence in the tails of a time series, based on the "runs test", a nonparametric statistical test for detecting dependence in a 0-1 series.

We plot the p-value of the runs test against several values of the threshold to construct what we call the dependence plot. This allows us not only to measure dependence in a quantitative way, but also to locate the thresholds at which the dependence "turns on".
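Under the normal approximation, the Wald-Wolfowitz runs test reduces to a z-statistic on the number of runs. A minimal sketch of the test and of the dependence plot described above, returned here as a list of (threshold, p-value) pairs rather than an actual plot; the function names are illustrative.

```python
import math

def runs_test_pvalue(bits):
    """Two-sided Wald-Wolfowitz runs test on a 0-1 series
    (normal approximation to the distribution of the number of runs)."""
    n1 = sum(bits)
    n0 = len(bits) - n1
    n = n0 + n1
    if n0 == 0 or n1 == 0:
        return 1.0                       # degenerate series: nothing to test
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mu = 1.0 + 2.0 * n0 * n1 / n
    var = 2.0 * n0 * n1 * (2.0 * n0 * n1 - n) / (n**2 * (n - 1))
    z = (runs - mu) / math.sqrt(var)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def dependence_plot(returns, thresholds):
    """p-value of the runs test on the exceedance indicators, per threshold."""
    return [(u, runs_test_pvalue([1 if x > u else 0 for x in returns]))
            for u in thresholds]
```

Both a perfectly alternating series (too many runs) and a strongly clustered one (too few runs) yield a p-value near zero, which is what the dependence plot is designed to reveal in the tails.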

The empirical results on daily data reveal a very strong departure from independence; the effect seems to die out quickly if we consider returns over longer periods.

We show that in most cases dependence in the tails can be successfully modeled through a two state Markov chain. A possible interest in this kind of analysis could be found for Value at Risk computation purposes.

Exotic options are complicated derivative instruments whose structure does not, in general, allow for closed-form solutions, making their pricing and hedging a difficult task. To avoid additional complexity, such products are, as a rule, priced within a Black & Scholes framework, assuming a geometric Brownian motion (GBM) for the dynamics of the underlying asset. This paper develops a more realistic framework for the pricing of exotic derivatives and derives closed-form analytic solutions for the pricing and hedging of basket options. We relax the simplistic assumption of a GBM process by introducing the Bernoulli jump-diffusion (BJD) process. Assuming a different stochastic process from the GBM, rather than an alternative distribution, is preferable for exotic products because it can provide better hedging rules. A potential extension of the model using an Edgeworth series expansion (ESE) is also discussed. Monte Carlo simulation confirms the validity of the proposed BJD model.

The objective of any portfolio manager is to deduce the optimal portfolio strategy from his qualitative information. This paper tackles this problem by investigating a way to translate a given piece of information into a trend and a volatility process, in order to make use of standard portfolio theory. When the agent's information is formulated as the "knowledge of the distribution of a random variable of interest", we provide an explicit link between this information and the trend process, the volatility process being revealed by option prices. The optimal strategies can then readily be computed and their performances linked to the entropy of the private information. Closed-form formulae are obtained in the Gaussian case and allow us to provide practical examples.

The approximation of stochastic integrals arises in stochastic finance when continuously adjusted portfolios are replaced by discretely adjusted ones, where equidistant time nets are often used. With respect to the quadratic risk, there are situations in which a significantly better asymptotic quadratic error is obtained via arbitrary deterministic time nets (for example, for the binary option).

We investigate how much the asymptotics for equidistant nets and arbitrary deterministic nets differ from each other in case of European type options obtained by deterministic pay-off functions applied to a price process modeled by a diffusion.

In particular, we give a characterization of when the approximation rate for the quadratic risk for equidistant nets (with $n$ time knots) behaves like $n^{-\eta}$, $0<\eta\le 1/2$, in terms of (1) the asymptotics of certain variances of the hedging strategy, (2) the asymptotics of a certain $L_2$-convexity of the value process, and (3) a decomposition of the portfolio using the K-functional from interpolation theory.

We study an investment problem in a multi-factor interest rates framework like in Duffie and Kan (1996). We investigate a class of utility functions which extends the HARA family in a natural way while maintaining its nice properties. The optimal investment strategy is obtained explicitly by assuming that the (stochastic) volatility matrix of the financial assets has a particular form, which is discussed and justified by an equilibrium argument.

There are two types of Asian options in the financial markets which differ according to the role of the average price. We give a symmetry result between the floating and fixed-strike Asian options. The proof involves a change of numéraire and time reversal of Brownian motion. Symmetries are very useful in option valuation and in this case, the result allows the use of more established fixed-strike pricing methods to price floating-strike Asian options.

Keywords: Asian options, floating strike Asian options, put call symmetry, change of numéraire, time reversal, Brownian motion.

AMS: 60G44, 91B28
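The symmetry can be checked numerically in a special case. The sketch below prices a floating-strike Asian call and a fixed-strike Asian put (strike equal to spot) by Monte Carlo in the driftless case r = 0, with discrete arithmetic averaging at equally spaced dates including both endpoints, where the time-reversal argument gives equality of the two prices; the paper's result is more general, and all parameter values here are illustrative.

```python
import math
import random

def asian_prices(s0, sigma, t, n_steps, n_paths, seed=42):
    """Monte Carlo prices of a floating-strike Asian call, payoff (S_T - A)+,
    and a fixed-strike Asian put struck at spot, payoff (S0 - A)+, under a
    driftless (r = 0) Black-Scholes model.  The average A is arithmetic over
    n_steps + 1 equally spaced dates including both t = 0 and t = T, a
    monitoring set symmetric under time reversal."""
    rng = random.Random(seed)
    dt = t / n_steps
    call_sum = put_sum = 0.0
    for _ in range(n_paths):
        s = s0
        total = s0                       # running sum of monitored prices
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * z)
            total += s
        a = total / (n_steps + 1)
        call_sum += max(s - a, 0.0)      # floating-strike call
        put_sum += max(s0 - a, 0.0)      # fixed-strike put, K = S0
    return call_sum / n_paths, put_sum / n_paths

c_float, p_fix = asian_prices(100.0, 0.2, 1.0, 8, 100000)
```

The two estimates agree up to Monte Carlo noise, consistent with the symmetry; with nonzero rates the change-of-numéraire argument of the paper is needed.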

An extension of the classical Merton model with consumption is considered when the diffusion coefficient of the asset prices depends on some economic factor. The objective is to maximize total expected discounted HARA utility of consumption. Optimal controls are provided as well as a characterization of the value function in terms of the associated Hamilton-Jacobi-Bellman equation.

This paper considers the effect of nonnormality on the unconditional inference of the market model when the joint distribution of asset returns and a given portfolio return is elliptically distributed. Based on the conditional covariance matrix formula by Chu (1973) in the class of elliptical distributions, we evaluate the asymptotic covariance matrix of the maximum likelihood estimator derived under joint normality when the joint distribution is in fact in the class of elliptical distributions as well as in the class of scale mixtures of multivariate normal distributions. The effect of nonnormality on the asymptotic covariance matrix of the maximum likelihood estimator is shown in terms of the weighting function of the representation of the underlying probability density function as an integral of a set of multivariate normal probability density functions. The properties of marginal and conditional distributions in the class of elliptical distributions are used to derive the results.

Which loss function should be used when estimating and evaluating option pricing models? Many different functions have been suggested, but no standard has emerged. We do not promote a particular function, but instead emphasize that consistency in the choice of loss functions is crucial. First, for any given model, the loss function used in parameter estimation and model evaluation should be identical, otherwise suboptimal parameter estimates may be obtained. Second, when comparing models, the estimation loss function should be identical across models, otherwise unfair comparisons will be made. We illustrate the importance of these issues in an application of the so-called Practitioner Black-Scholes (PBS) model to S&P500 index options. We find reductions of over 50 percent in the root mean squared error of the PBS model when the estimation and evaluation loss functions are aligned. We also find that the PBS model outperforms a benchmark structural model when the estimation loss functions are identical across models, but otherwise not. The new PBS model with aligned loss functions thus represents a much tougher benchmark against which future structural models can be compared.
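The alignment issue can be reproduced in miniature: generate prices from a skewed smile, then fit a constant-volatility Black-Scholes model by minimizing either squared price errors or squared implied-volatility errors. Each estimate wins under its own loss, which is the paper's first point. Everything below (the toy smile, strikes, and grid search) is illustrative and not the paper's PBS specification.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, sigma, t):
    # zero-rate Black-Scholes call, enough for the illustration
    d1 = (math.log(s / k) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    return s * norm_cdf(d1) - k * norm_cdf(d1 - sigma * math.sqrt(t))

s0, t = 100.0, 1.0
strikes = [70, 80, 90, 100, 110, 120, 130]
true_iv = {k: 0.2 + 0.3 * math.log(k / s0)**2 for k in strikes}   # a toy smile
market = {k: bs_call(s0, k, true_iv[k], t) for k in strikes}

def price_rmse(sig):
    return math.sqrt(sum((bs_call(s0, k, sig, t) - market[k])**2
                         for k in strikes) / len(strikes))

def iv_rmse(sig):
    # the constant-vol model's implied vol is just sig itself
    return math.sqrt(sum((sig - true_iv[k])**2 for k in strikes) / len(strikes))

grid = [0.15 + 0.001 * i for i in range(251)]
sig_p = min(grid, key=price_rmse)   # estimated under the price loss
sig_v = min(grid, key=iv_rmse)      # estimated under the implied-vol loss
```

The price loss implicitly vega-weights the options and pulls the estimate toward at-the-money volatility, so the two estimates differ, and evaluating one model's fit with the other loss understates its performance.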

This paper presents the results of estimating the zero coupon yield curve from default-free Australian treasury instruments based on weekly observations over a recent time period, January 1992 to January 2001. Pure discount bonds and implied forward rates, although not observable for the entire yield curve, are extremely useful for pricing, modelling and analyzing financial securities; hence the need to extract the theoretical yield curve from noisy prices observed in the market place. Two popular curve-fitting models, together with two specifications, are adopted for estimating zero coupon and forward yield rates. The six-parameter Svensson model outperforms the more parsimonious four-parameter Nelson-Siegel functional form. Over the time period considered, a structural break is detected in the zero coupon time series.
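The two functional forms compared in the paper can be written down directly. The sketch below implements the Nelson-Siegel zero-coupon yield (four parameters) and the Svensson extension (six parameters), with illustrative parameter values; the short end tends to b0 + b1 and the long end to b0, which is what makes the parameters interpretable as level, slope and curvature.

```python
import math

def nelson_siegel(m, b0, b1, b2, tau):
    """Nelson-Siegel zero-coupon yield at maturity m (in years)."""
    x = m / tau
    loading = -math.expm1(-x) / x          # (1 - e^{-x}) / x
    return b0 + b1 * loading + b2 * (loading - math.exp(-x))

def svensson(m, b0, b1, b2, b3, tau1, tau2):
    """Svensson extension: a second hump/trough term with its own decay."""
    x1, x2 = m / tau1, m / tau2
    l1 = -math.expm1(-x1) / x1
    l2 = -math.expm1(-x2) / x2
    return (b0 + b1 * l1 + b2 * (l1 - math.exp(-x1))
            + b3 * (l2 - math.exp(-x2)))

# illustrative parameters: 6% level, -2% slope, small curvature
short_end = nelson_siegel(1e-8, 0.06, -0.02, 0.01, 1.5)
long_end = nelson_siegel(1e4, 0.06, -0.02, 0.01, 1.5)
```

With b3 = 0 the Svensson form collapses to Nelson-Siegel, which is why the comparison in the paper is one of nested models.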

The subject of this work is the following Stochastic Delay Differential Equation (SDDE) \begin{equation} \frac{dS(t)}{S(t)}=r\,dt+\sigma(t,S_t)\,dW(t), \end{equation} where $S_t=\{S(t+\theta),\theta\in [-\tau,0]\}$, $\tau>0$, and $\sigma(\cdot,\cdot)$ is a continuous function of the time $t$ and of the segment of the stock price path on the interval $[t-\tau,t]$. The delay reflects the reality that responses are usually delayed, a feature normally ignored in the literature.

We show that a continuous-time equivalent of the GARCH(1,1) model gives rise to a stochastic volatility model with delayed dependence on the stock value. We then derive an analogue of Ito's lemma for this type of SDDE and obtain an integro-differential equation for the option price, with boundary conditions specified according to the type of option to be priced. In the case of a vanilla call option, we obtain a closed-form solution, and the results carry over directly to European puts through put-call parity. For a sample set of parameters, we observe that the original Black-Scholes price overvalues in-the-money call options because it ignores the delay.

In this paper we propose a general framework for quantification of model risk. This framework allows one to allocate regulatory capital to positions in a given market depending on the extent to which this market can be reliably modeled. Our approach is based on computing worst-case risk measures over sets of models that are in some appropriate sense close to a nominal model. The method is general in the sense that it can be applied with any of the usual risk measures such as Value-at-Risk and Tail Conditional Expectation. Insofar as risk measures can also be used as pricing tools or as determinants of margin requirements, the paper provides a quantification of model risk in these settings as well. We present applications both to stock portfolios and to derivative products: we find that, for usual specifications, misspecification risk is much more important than estimation risk.

In this paper we investigate the ruin probability in the classical risk model under a positive constant interest force. We restrict ourselves to the case where the claim size is heavy-tailed, i.e. the equilibrium distribution function of the claim size belongs to a wide subclass (named $\mathcal{A}$) of the subexponential distributions. Two-sided estimates for the ruin probability are developed by reduction from the classical model without interest force.

In this paper we consider a credit risk estimation problem that we call "Default Boundary Problem" and a related inverse problem for a random walk. This latter problem is formulated as follows: find a boundary such that the first hitting time has a known probability distribution function. We demonstrate that a Monte Carlo approach is applicable to solve the Default Boundary Problem in the discrete time setting. We also consider numerical aspects of the computation of conditional default probabilities in the joint market and credit risk framework.

In finance, multivariate stationary diffusions play an important role; for example, they are used to model the dynamics of a portfolio of financial instruments or the term structure of interest rates. When several risk factors are investigated, the dependence structure is of high importance, and from the point of view of risk management it is important to know about the large fluctuations of these models. Based on theoretical results derived in Kunz (2002), we suggest methods to assess the goodness-of-fit of a multivariate stationary diffusion model by comparing the theoretical asymptotic behavior of its maximum with the empirical maximum of a dataset. These tests are applied to a bivariate Vasicek model and a bivariate diffusion model with stationary gamma distribution, first on simulated data. Portfolios of interest rate swaps are also analysed.

A game contingent claim is a generalization of an American contingent claim which also enables the seller to terminate it before maturity, but at the expense of a penalty. We present different approaches to analyze such contracts in the context of incomplete financial markets.

We propose two nonparametric transition density-based specification tests for continuous-time diffusion models. By introducing an appropriate data transform and correcting the boundary bias of kernel estimators, our tests are robust to persistent dependence in the data and provide reliable inferences for sample sizes often encountered in empirical finance. Simulation studies show that our tests have reasonable size and good power against a variety of alternatives in finite samples even for data with highly persistent dependence. Besides single-factor diffusion models, our tests can be applied to a broad class of dynamic economic models, such as discrete time series models, time-inhomogeneous diffusion models, stochastic volatility models, jump-diffusion models, and multi-factor term structure models. When applied to daily Eurodollar interest rates, our tests overwhelmingly reject some popular spot rate models, including single-factor diffusion models, GARCH models, regime-switching models and jump-diffusion models.

In this paper we consider the distributional difference between forward swap rates as implied by the lognormal forward-Libor model (LFM) and the lognormal forward-swap model (LSM) respectively. To measure this distributional difference, we resort to a "metric" in the space of distributions, the Kullback-Leibler information (KLI). We explain how the KLI can be used to measure the distance of a given distribution from the lognormal family of densities, and then apply this framework to our models' comparison. The volatility of the projection of the LFM swap-rate distribution onto the lognormal family is compared to a synthetic swap volatility approximation used by the industry. Finally, for some instantaneous covariance parameterizations of the LFM we analyze how the KLI changes according to the parameter values and to the parameterizations themselves, in an attempt to characterize the situations where LFM and LSM are distributionally close, as is often assumed by market practice.
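As a rough illustration of the KLI machinery, the sketch below measures the distance of a positive random variable from the lognormal family by projecting onto the lognormal density with matching log-moments and integrating numerically (the kernel density estimate and moment-matching projection are illustrative assumptions of this sketch, not the paper's model-implied construction):

```python
import numpy as np

def kli_to_lognormal(samples, grid_size=500):
    """Approximate the Kullback-Leibler information between the density of
    positive samples and its projection onto the lognormal family.  The
    projection matches the mean and variance of the log-samples."""
    logs = np.log(samples)
    mu, sig = logs.mean(), logs.std(ddof=1)
    # Gaussian kernel density estimate of the log-samples (Silverman bandwidth)
    h = 1.06 * sig * len(samples) ** (-0.2)
    x = np.linspace(logs.min() - 3 * h, logs.max() + 3 * h, grid_size)
    p = np.exp(-0.5 * ((x[:, None] - logs[None, :]) / h) ** 2).mean(axis=1) \
        / (h * np.sqrt(2 * np.pi))
    # projected (fitted) Gaussian density of the log-variable
    q = np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    dx = x[1] - x[0]
    mask = p > 1e-12
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx
```

For data that is already lognormal the KLI should be close to zero, so the statistic directly quantifies how far a swap-rate distribution is from the lognormal family.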

Financial turbulence is a phenomenon occurring in anti-persistent markets. In contrast, financial crises occur in persistent markets. A relationship can be established between these two extreme phenomena of long term market dependence and the older financial concept of financial (il-)liquidity. The measurement of the degree of market persistence and the measurement of the degree of market liquidity are related. To accomplish the two research objectives of measurement and simulation of different degrees of financial liquidity, I propose to boldly reformulate and reinterpret the classical laws of fluid mechanics into cash flow mechanics and to incorporate the results into dynamic term structure analysis. At first this approach may appear contrived and artificial, but the end results of these reformulations and reinterpretations are useful quantifiable financial quantities, which will assist us with the measurement, analysis and proper characterization of modern dynamic financial markets in ways that the methodology of classical comparative static financial-economic analysis does not allow. For example, this new approach allows for easy implementation and interpretation of wavelet multiresolution analysis of financial rates of return series in the time-frequency domain, such that Reynolds numbers of illiquidity can be computed.

In financial markets, not only prices and returns can be considered as random variables, but also the waiting time between two transactions varies randomly. In the following, we analyse the statistical properties of General Electric stock prices, traded at NYSE, in October 1999. These properties are critically revised in the framework of theoretical predictions based on a continuous-time random walk model.

On a filtered probability space where $W$ and $N$ are respectively a standard Brownian motion and a simple Poisson process with constant intensity $\lambda>0$, we consider the process $Y$ such that $Y_0\in\mathbb{R}$ and
$$dY_t = a_t\,dt + \sigma_t\,dW_t + \gamma_t\,dN_t, \qquad t\leq T,$$
where $a$ and $\sigma$ are predictable bounded stochastic processes, and $\gamma$ is a predictable process which is bounded away from zero. A discrete record of $n+1$ observations $\{Y_0, Y_h,\ldots,Y_{(n-1)h}, Y_{nh}\}$ is available, with $nh=T$. Using such observations we construct estimators of $N_{t_i}(\omega)$, $i=1,\ldots,n$, of $\lambda$, and of $\gamma_{\tau_j}(\omega)$, where the $\tau_j$ are the instants of jump within $[0,T]$. These estimators are consistent and asymptotically controlled as the number of observations increases while the step $h$ between them tends to zero.
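One common way to make such estimators concrete (a textbook-style threshold construction, not necessarily the authors' exact scheme) is to attribute an increment to a jump of $N$ whenever it exceeds a threshold of order $h^\alpha$ with $\alpha$ slightly below $1/2$, since diffusion increments are of order $\sqrt{h}$:

```python
import numpy as np

def detect_jumps(y, h, c=3.0, alpha=0.49):
    """Threshold estimator: an increment larger than c * h**alpha is
    attributed to a jump of N; returns the jump indices, the estimated
    Poisson intensity lambda, and the estimated jump sizes gamma_{tau_j}."""
    dy = np.diff(y)
    thr = c * h ** alpha
    jump_idx = np.where(np.abs(dy) > thr)[0]
    lam_hat = len(jump_idx) / (len(dy) * h)   # jumps per unit time over [0, T]
    gamma_hat = dy[jump_idx]                  # increment at a jump ~ gamma
    return jump_idx, lam_hat, gamma_hat
```

As $h \to 0$ the diffusion part almost never crosses the threshold while jumps always do, which is the intuition behind consistency.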

We model in a game theoretic context managerial intervention directed towards value enhancement in the presence of uncertainty and spillover effects. Two firms face real investment opportunities, and before making the irreversible decisions, they have options to enhance value by doing more R&D and/or acquiring more information. Due to spillovers, firms act strategically by optimizing their behavior, conditional on the actions of their counterpart. They face two decisions that are solved for interdependently in a two-stage game. The first decision is: what is the optimal level of coordination between them? The second-stage decision is: what is the optimal effort for a given level of the spillover effects and the cost of information acquisition? For the solution we adopt an option pricing framework that allows analytic tractability.

We study the optimal reinsurance policy of an insurance company which gives part of its premium stream to another company in exchange for an obligation to cover the difference between the amount of the claim and some retention level. This contract is known as excess of loss reinsurance. The objective of the insurance company is to maximize the expected utility of its reserve at some planning horizon and under a nonnegativity constraint. We suppose that reinsurance incurs a cost proportional to the size of the risk run by the reinsurance company.

We first prove existence and uniqueness results for this optimization problem by using stochastic control methods. In the second part, we solve the associated Bellman equation numerically by using an algorithm based on policy iteration.

We propose to estimate stochastic volatility models using recently developed methods for the statistical analysis of Hidden Markov Models. The realization of HMMs developed by Borkar in 1992 can be used to establish a link between HMMs, linear stochastic systems and $L$-mixing processes, assuming that the underlying Markov chain satisfies the Doeblin condition. This connection is exploited to design a promising change-point detection method for stochastic volatility models, extending a similar method for ARMA processes. Our approach is compared with the recent results of Berkes, Horváth and Kokoszka on the estimation of GARCH processes.

The purpose of this paper is to extend the Hull-White (1990) one-factor model to the jump-diffusion case. Our model is based on Shirakawa's (1991) model of the Heath-Jarrow-Morton term structure of interest rates under jump-diffusions, which we parameterise by an appropriate choice of forward rate volatility function. In this framework, the European call bond option price is explicitly derived. We also consider the extension of the Hull-White two-factor model to the jump-diffusion case.

In this paper we apply two well-known models generally used for option pricing to forecast volatility in very volatile markets and compare their performances. The models used in our evaluation are the Stochastic Volatility (SVOL) model and the Affine Jump-Diffusion (AJD) model with jumps in return and volatility (originally proposed by Duffie, Pan and Singleton).

In our study, we first implement both models, SVOL and AJD, and calibrate them using one year of real data; then we forecast the return and volatility series using the calibrated models. Finally, the volatility estimated by the two models is compared with the actual one. Our results show how jumps in return and volatility play an important role in volatility forecasting, especially in highly volatile markets. In addition, from the comparison between the forecasting capabilities of the analysed models, we suggest possible modifications of the two models for improving their results.

There has recently been much interest in products that provide exposure to the realised volatilities or variances of asset returns (or covariances between asset returns), while avoiding direct exposure to the underlying assets themselves. These products are attractive to investors who either wish to hedge volatility risk or who wish to take a view on future realised volatilities. We consider the pricing of this family of products, especially volatility and variance swaps. We take a stochastic volatility model as our starting point, and under risk-neutral valuation we provide closed-form formulae for volatility-average and variance swaps, and show how other related products can be priced. A general pricing equation is introduced for derivatives that depend on four state variables: the asset value S, the time t, the volatility, and a running average, denoted by I, which represents our knowledge to date of the average that will determine the payoff. We consider an asymptotic analysis under which we derive approximate solutions to this equation, valid when volatility is fast mean-reverting over the typical lifetime of options and other contracts. We illustrate the procedure with a mean-reverting lognormal model, though others can also be considered. The analysis is simpler for strictly volatility products; in particular, having a first-order solution for the value of the volatility swap, we are able to compare it with the explicit results obtained before.

You have just developed a new way to compute prices of derivative securities, and your old Monte Carlo pricer is now of no use. You think everything is lost... the Model Control Variate methodology will surely help you. In this document we present an efficient way to increase the accuracy of a Monte Carlo procedure, as in the control variate methodology, when closed formulae are available for the price of the same product under a different model.
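A minimal sketch of the idea: price a call under a hypothetical "new" CEV-type model by Monte Carlo, simulating a Black-Scholes asset with the same random numbers and using its closed-form price as the control (the CEV dynamics, exponent and all parameters below are illustrative assumptions):

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes closed form: the 'old model' price used as control."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return s0 * cdf(d1) - k * exp(-r * t) * cdf(d2)

def mc_price_with_model_control(s0=100.0, k=100.0, r=0.02, t=1.0, sigma=0.2,
                                beta=0.9, n_paths=50_000, n_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s_cev = np.full(n_paths, s0)
    s_bs = np.full(n_paths, s0)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        # Euler step for the CEV model (ATM volatility matched to sigma)
        s_cev = np.maximum(
            s_cev + r * s_cev * dt
            + sigma * s0 ** (1.0 - beta) * s_cev ** beta * np.sqrt(dt) * z,
            1e-8)
        # exact step for the Black-Scholes control, driven by the SAME z
        s_bs = s_bs * np.exp((r - 0.5 * sigma ** 2) * dt
                             + sigma * np.sqrt(dt) * z)
    disc = exp(-r * t)
    pay = disc * np.maximum(s_cev - k, 0.0)
    ctrl = disc * np.maximum(s_bs - k, 0.0)
    # optimal control coefficient, then variance-reduced estimator
    b = np.cov(pay, ctrl)[0, 1] / ctrl.var()
    return pay.mean() - b * (ctrl.mean() - bs_call(s0, k, r, sigma, t))
```

Because the two payoffs are highly correlated, subtracting the simulated control error removes most of the Monte Carlo noise.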

Economic time series with heavy-tailed marginal distributions will be described by ARMA models driven by an i.i.d. innovation process with normal-inverse Gaussian distributions. Following earlier work by Gerencser we develop a new method for analyzing the full information maximum likelihood estimates. This result is then applied to analyze partially adaptive estimates, suggested by Phillips in a different context. A very accurate description of the estimation error process is given in both cases, which can be applied to analyze the performance of adaptive predictors.

We consider a continuous time dynamic model of pension funding in a defined-benefit plan of an employment system. The benefit liabilities are random, given in the general case by a path-continuous Itô process. Three different situations are studied regarding the investment decisions taken by the sponsoring employer: in the first one the fund is invested at a constant and risk-free rate of interest; in the second one the promoter invests in a portfolio with n risky assets and a risk-free security; finally, it is supposed that the rate of return is stochastic. Modelling the preferences of the manager such that her or his main objective is to minimize both the contribution rate risk and the solvency risk, we find that the optimal behavior leads to a spread method of funding. This is achieved with a natural selection for the force of interest used to value the actuarial functions intervening in the design of the pension plan. The aim of this paper is to determine the optimal funding behavior in this stochastic, continuous time framework, over an unbounded horizon.

The Banxico Option was created by the Central Bank of Mexico to increase its international reserves without affecting its exchange rate. This option has the following characteristics: i) The option is sold through a public auction among the Mexican banks on the last day of each month. ii) The option gives the banks the right to sell a fixed amount of dollars, in one or several transactions, on any trading day of the following month. iii) The strike price is the fix price of the dollar on the day before the date of exercise. The fix price is determined by the market and is published once a day by the Central Bank. iv) In order to buy dollars when the market is offering them, the Central Bank imposes an additional restriction for exercise: the strike price must be less than or equal to the average of the fix price over the last twenty days before the date of exercise.

This is an exotic option whose strike price is stochastic and depends on the path of the fix price.

The purpose of the poster is to present the results we obtained in the study of the valuation and the optimal time of exercise of the Banxico Option, with and without restriction iv). The binomial and the Black-Scholes models are considered. We apply the point of view of the Dynamic Programming Principle and the Optimal Stopping Time Theorem. With these results we propose an exercise rule for the Banxico option that lets us value the option via a Monte Carlo method. Numerical results are presented.
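A toy version of such a Monte Carlo valuation, under an ad hoc exercise rule (exercise on the first admissible day with positive intrinsic value), might look as follows; the geometric Brownian dynamics, parameters and rule below are illustrative assumptions, not the poster's calibrated model:

```python
import numpy as np

def banxico_mc(s0=10.0, sigma=0.10, mu=0.0, days=20, n_paths=100_000, seed=1):
    """Monte Carlo value (per dollar of notional) of a stylized Banxico
    option: strike = yesterday's fix, exercisable only when the strike is at
    or below its trailing 20-day average (restriction iv)."""
    rng = np.random.default_rng(seed)
    n_days = 20 + days           # 20 days of history plus one month of fixes
    z = rng.standard_normal((n_paths, n_days))
    log_s = np.log(s0) + np.cumsum(
        (mu - 0.5 * sigma ** 2) / 252.0 + sigma / np.sqrt(252.0) * z, axis=1)
    s = np.exp(log_s)
    payoff = np.zeros(n_paths)
    for t in range(20, n_days):
        strike = s[:, t - 1]                  # yesterday's fix
        avg20 = s[:, t - 20:t].mean(axis=1)   # trailing 20-day average
        live = payoff == 0.0                  # paths not yet exercised
        ex = live & (strike <= avg20) & (strike > s[:, t])
        payoff[ex] = strike[ex] - s[ex, t]    # sell at strike, buy at market
    return payoff.mean()
```

Replacing this naive rule by the rule derived from the Dynamic Programming Principle would give the optimal value.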

The intensity approach to Credit Risk is a celebrated tool in Credit Risk modelling. Duffie and Singleton (1999) used this to apply the Heath-Jarrow-Morton methodology to Credit Risk. We generalize their model to infinite dimensions by use of Random Fields. The resulting model arises naturally if one considers parameter uncertainty in a finite dimensional context.

The construction is performed under the objective measure, and an equivalent martingale measure is obtained to price Credit Derivatives. Hedging and calibration issues are also treated.

In a complete financial market model, in which the price processes of risky assets are described as diffusions with unobservable drifts, we treat the ``shortfall-risk'' minimization problem at the terminal date for a seller of a derivative security $F$. We adopt the worst conditional expectation of the shortfall as the measure of this risk, ensuring that the minimized risk satisfies some desirable properties as the dynamic measure of risk after Cvitanic and Karatzas (1999). The terminal value of the optimized portfolio is a binary functional, dependent on $F$ and $\widehat{Z}_T$, the projection of the Radon-Nikodym density of the minimal local martingale measure onto the available information for the hedger. In particular, we observe that there exists a positive number $x^*$, which is less than the replicating cost $x^F$ of $F$, and that the strategy minimizing the expectation of the shortfall is optimal if the hedger's capital lies in $[x^*,x^F]$.

We consider an investment model proposed by Bielecki and Pliska. The prices of the securities in which we can invest are affected by some economic factors which evolve as a diffusion process. The goal is to maximize the expected utility over an infinite time horizon. We assume that the utility function is HARA, so we can use the dynamic programming approach to derive the Bellman equation, which is a nonlinear partial differential equation. It is important to study this equation, since each solution is associated with a candidate optimal investment policy. We study the structure of the solutions of this equation and obtain a special solution which is relevant to the investment problem. Our approach differs from the conventional one in that we do not need to introduce function spaces in the argument.

We study an optimal consumption and investment problem in which there is a critical wealth level such that, from the first time the wealth process reaches it, the investment opportunity gets better. Under appropriate regularity conditions, we derive the value function and optimal rules of consumption and investment for CRRA class utility functions, and we discuss their economic meaning when the investor starts with a wealth level below the critical one.

We develop a theoretical arbitrage-free and complete model to price options on the stocks of companies involved in a merger or acquisition deal allowing for the possibility that the deal might be called off at an intermediate time possibly creating discontinuous impacts on the stock prices. Our model is intended to be a valuable tool for market makers to quote prices for options on stocks involved in merger or acquisition deals and also for risk arbitrageurs and options traders to gauge the fair value for such options. Although we specifically consider stock-for-stock and cash acquisition deals, our basic framework is applicable to the investigation of much more general deals.

We model the stock price processes as jump diffusions with the jumps representing the price impacts of the deal being called off. We show that fundamental economic considerations imply specific functional forms for the price impact functions that allow us to obtain analytical expressions for the option prices. We demonstrate the completeness of our model when there are marketed securities that represent the fundamental values of the stocks involved in the deal, i.e. the values in the absence of the synergies associated with the deal. In the situation where such securities do not exist, we derive the optimal risk-minimizing strategies in the underlying stocks and a risk-free bond for any option on either stock. These strategies show how one may hedge the risks associated with merger or acquisition deals using traded options on the stocks involved. We also show how one may use the model to infer the probabilities of success of deals from observed option price data.

Finally, we test our model on real option price data. We investigate several merger deals and show that the model is able to explain observed option prices on stocks involved in such deals remarkably well. In the process, we also conclusively demonstrate the inadequacy of the Black-Scholes framework in explaining observed option prices.

This paper analyzes causality and cointegration relationships among stock markets in Latin America and the United States. Within a simple framework, causality and cointegration are tested for Argentina, Brazil, Chile, Colombia, Mexico, Peru, Venezuela and the US. We find no evidence of cointegration among these stock markets, but short-run causality cannot be rejected. Furthermore, we use impulse response functions to analyze the relative impact of shocks in the US stock index (Dow Jones) on Latin American indexes. Evidence suggests that the responses differ significantly among these countries. These findings imply that there are valuable opportunities for international investors to diversify across US and Latin American stocks.

In this paper we provide an economic and econometric justification for using a log-linear form to estimate stock value based on accounting information. A log-linear form stands in contrast to the more traditional linear form. We state conditions under which log-linear regression provides minimum variance unbiased estimates of log value, as well as the appropriate transformation that yields the minimum variance unbiased estimate of value. Specification tests are suggested to infer conformity of the data to model assumptions, and these are applied to a recent sample of public companies.

We describe a methodology for identifying and hedging large portfolios of derivative instruments. The method, matching projection pursuit, is a variant of a popular statistical tool. We describe how the method can be used to hedge the portfolio with liquid, traded instruments. We present a numerical example, in which the most important vanilla structures are identified in a hierarchical manner, for a portfolio of exotic options.

When real investment opportunities are open to competing firms in the same line of business, strategic considerations become extremely important in determining investment/entry policies. We develop an equilibrium framework for strategic (real) option exercise where the focus is on the effect of first-mover advantages. The generality of our framework stems from the fact that we allow such advantages to be either temporary or permanent in nature. When the latter is true, economically identical competing firms might end up investing at very distinct times simply due to the effect of first-mover advantages. When such advantages are substantial but temporary in nature, the rival entry times are drawn further apart as uncertainty increases.

We analyse the entry decisions of competing firms in a two-player stochastic real option game, when rivals can extract different but correlated uncertain profits from operating. In the presence of entry costs, decision thresholds exhibit hysteresis, the range of which is decreasing in the correlation between competing firms. The expected time of each firm being active in the market and the probability of both rivals entering in finite time are explicitly calculated. The former (latter) is found to decrease (increase) with the volatility of relative firm profitabilities, implying that market leadership is shorter-lived the more uncertain the industry environment. In an application of the model to the aircraft industry, we find that Boeing's optimal response to Airbus' launch of the A3XX super carrier is to accommodate entry and supplement its current product line, as opposed to the riskier alternative of committing to the development of a corresponding super jumbo.

Although the multivariate normality of stock returns is a crucial assumption in many asset pricing models, the modern econometric literature abounds with evidence against this hypothesis. Instead, we model the multivariate process of returns using a two-step decomposition approach: First, we estimate a non-stationary covariance structure and second, we model the relatively stable, quasi-stationary, but heavy-tailed and asymmetric residuals. Numerical experiments indicate that the dynamics of multivariate returns for a variety of financial instruments can be adequately explained using our approach.

We use a notion of stochastic time, called volatility time, to show convexity of option prices in the underlying asset when the contract function is convex, as well as continuity and monotonicity of the option price in the volatility. Earlier results on the convexity of option prices require the volatility to be differentiable, or at least Lipschitz, in the underlying asset. Using the volatility time, we show that the volatility does not even need to be continuous in time and only needs to satisfy a local Hölder condition in the underlying asset. Our method also yields new results on the continuity of option prices in the volatility.

Let us assume that we observe a trajectory of a standard linear Brownian motion on the time interval [0,1].

Our aim is to stop at a moment which is, in some sense, closest to the moment at which the trajectory attains its ultimate maximum. This problem is very natural if we suppose that stock prices evolve as Brownian motion.

In 1999 Graversen, Peskir and Shiryaev solved a problem of this type in their paper "Stopping Brownian Motion without Anticipation as Close as Possible to its Ultimate Maximum".

The present work deals with problems which arise in the situation considered above. In particular, we find the stopping time that is closest to the moment at which Brownian motion attains its maximum, under a measure of closeness different from the one considered by Graversen, Peskir and Shiryaev. The work provides explicit solutions of some similar problems for other processes.

We derive accurate, analytical and easily computable lower and upper bounds for the price of discretely sampled European-style arithmetic Asian options with fixed and floating strike. Adapting the idea of Rogers and Shi (1995) to the case of discrete averaging and using results based on comonotonic risks, we obtain a closed-form expression for a lower bound which generalizes the lower bound of Nielsen and Sandmann (2002). For an upper bound we follow two different approaches, one that is based on a general technique for deriving bounds for stop-loss premiums of sums of dependent random variables from Kaas, Dhaene and Goovaerts (2000), and another that again follows the ideas of Rogers and Shi and of Nielsen and Sandmann. We compare our approaches and compare our results to those in the literature. We also study the hedging Greeks of the lower and upper bounds. Several sets of numerical results are included.
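The conditioning idea behind a Rogers-Shi-type lower bound can be sketched for a discrete arithmetic Asian call: since the call payoff is convex, replacing the average by its conditional expectation given the terminal Brownian value gives, by Jensen's inequality, a lower bound on the price. The Monte Carlo comparison below is a generic sketch under Black-Scholes dynamics, not the closed-form comonotonic bounds derived in the paper:

```python
import numpy as np

def asian_lower_bound_and_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2,
                             t_mat=1.0, n_avg=12, n_paths=100_000, seed=0):
    """Crude MC price of a discrete arithmetic Asian call, together with the
    Jensen lower bound obtained by conditioning on the terminal Brownian
    value W_T (for GBM, E[S_ti | W_T] is known in closed form)."""
    rng = np.random.default_rng(seed)
    ti = t_mat * np.arange(1, n_avg + 1) / n_avg
    disc = np.exp(-r * t_mat)
    # crude Monte Carlo price of the arithmetic Asian call
    z = rng.standard_normal((n_paths, n_avg))
    dt = np.diff(np.concatenate([[0.0], ti]))
    w = np.cumsum(np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp((r - 0.5 * sigma ** 2) * ti + sigma * w)
    mc = disc * np.maximum(s.mean(axis=1) - k, 0.0).mean()
    # lower bound: condition on W_T, then apply Jensen's inequality
    wT = np.sqrt(t_mat) * rng.standard_normal(n_paths)
    cond = s0 * np.exp((r - 0.5 * sigma ** 2) * ti[None, :]
                       + sigma * ti[None, :] / t_mat * wT[:, None]
                       + 0.5 * sigma ** 2 * ti[None, :]
                       * (1.0 - ti[None, :] / t_mat))
    lb = disc * np.maximum(cond.mean(axis=1) - k, 0.0).mean()
    return lb, mc
```

This particular conditioning variable is known to make the bound very tight, which is why the closed-form versions in the paper are so accurate.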

We consider the problem of shortfall risk minimisation in the binomial model when the loss function is not specified, and analyse the robustness of the shortfall risk minimising strategy with respect to the loss function. We find closed-form solutions for the cases of convex and concave loss functions, both for the strategy and for the expected shortfall. We also find that in the particular case of minimising lower partial moments of order $\kappa$ of the final wealth, the optimal strategies are continuous with respect to $\kappa$ for $\kappa \geq 1$, and there can be a discontinuity for $\kappa < 1$.

A major issue in financial economics is the behavior of asset returns over long horizons. Various estimators of long range dependence have been proposed. Even though some have known asymptotic properties, it is important to test their accuracy by using simulated series of different lengths.

We test R/S analysis, Detrended Fluctuation Analysis and periodogram regression methods on samples drawn from Gaussian white noise. The DFA statistic turns out to be the unanimous winner. Unfortunately, no asymptotic distribution theory has been derived for this statistic so far. We were able, however, to construct empirical (i.e. approximate) confidence intervals for all three methods. The obtained values differ largely from heuristic values proposed by some authors for the R/S statistic, and are very close to asymptotic values for the periodogram regression method. In the concluding part of the paper we apply the results of the previous sections to a number of financial datasets, illustrating their usefulness.
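For reference, a minimal DFA-based Hurst exponent estimator with linear detrending (a generic sketch, not the exact statistic studied in the paper) can be written as:

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Detrended Fluctuation Analysis: integrate the series, detrend it
    linearly within windows of size s, and regress log-fluctuation on
    log-scale; the slope estimates the Hurst exponent."""
    n = len(x)
    y = np.cumsum(x - x.mean())                # integrated profile
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(8), np.log10(n // 4), 12).astype(int))
    fluct = []
    for s in scales:
        m = n // s
        segs = y[: m * s].reshape(m, s)
        t = np.arange(s)
        f2 = 0.0
        for seg in segs:
            coef = np.polyfit(t, seg, 1)       # linear detrend per segment
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(f2 / m))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope
```

On Gaussian white noise the estimate should cluster around 0.5; the spread of such estimates across simulated samples is exactly what the empirical confidence intervals above capture.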

We derive a closed-form solution for an optimal intertemporal hedging policy using instantaneous forward contracts to hedge a continuum of non-tradable exposures. The optimal control turns out to be a value hedge involving the total current value of future earnings. More importantly, the hedging decision is independent of the risk preferences of the firm or agent. Our model has several implications for the risk management policy of a firm. In order to ``freeze profits'' a hedge increase is recommended in favourable states of nature, while in bad states the firm should decrease the hedge and wait.

Key words: Optimal hedging; Financial forwards and futures; Long-term exposure; Separability; Hodograph transformation.

JEL classification: C61, G13, C11

In an incomplete market, option prices depend on investors' utility functions. In this paper, we establish the connection between risk preference and optimal hedging strategy, and price options according to the principle of utility indifference. Taking the exponential utility function, we completely characterize risk-neutral valuation for jump-diffusion processes. By using a recent duality result of Delbaen et al. (2000), we prove that the pricing measure for risk-neutral valuation is precisely the minimal entropy equivalent martingale measure. We show that risk aversion contributes a price spread from the risk-neutral price. We also show, however, that risk-neutral valuation does not correspond to any practical hedging strategy. The minimal-variance hedging strategy is discussed. A parallel analysis is carried over to a discrete setting with multinomial random walks, and efficient numerical methods are developed. Numerical examples show that our model reproduces ``crash-o-phobia" and other features of market prices of options.

Convertible bonds are hybrid securities whose pricing relies on a set of complex inter-dependencies due to the sensitivity to interest rate risk, underlying (equity) risk, FX risk, and credit risk, and due to the convertible bond's early-exercise American feature. We present a two-factor model of interest rate and equity risk that is implemented using the Crank-Nicolson technique on the discretized pricing equation with projected successive over-relaxation. This paper extends a methodology proposed in the literature (TF[98]) to deal with credit risk in a self-consistent way, and proposes a new methodology to deal with FX-sensitive cross-currency convertibles. A technique for extracting the price of vanilla options struck on a synthetic asset, the foreign equity in domestic currency, is employed to obtain the implied volatility for these options. These implied volatilities are then used to obtain the local volatility for use in the numerical routine. The model is designed to deal with most of the usual contractual features such as coupons, dividends, and continuous and/or Bermudan call and put clauses. We suggest that credit spread adjustments can be made in the boundary conditions to account for the negative correlation between spreads and equity. A detailed description of the numerical methods and the discretization schemes, together with their accuracy, is provided.

This paper discusses the allocation of capital over time with several risky assets. The capital growth log utility approach is used with conditions requiring that specific goals are achieved with high probability. The stochastic optimization model uses a disjunctive form for the probabilistic constraints, which identifies an outer problem of choosing an optimal set of scenarios, and an inner (conditional) problem of finding the optimal investment decisions for a given scenario set. The multiperiod inner problem is composed of a sequence of conditional one-period problems. The theory is illustrated for the dynamic allocation of wealth in stocks, bonds and cash equivalents.
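The capital growth log utility criterion can be illustrated in its simplest one-period, single-asset form, where the log-optimal (Kelly) fraction is found by grid search over a scenario set; the returns, probabilities and grid below are illustrative assumptions, not the paper's multiperiod model with probabilistic constraints:

```python
import numpy as np

def kelly_fraction(returns, probs, rf=0.0, n_grid=1000):
    """Grid search for the log-optimal fraction of wealth invested in a
    single risky asset with scenario returns and probabilities; the
    remainder earns the risk-free rate rf."""
    fs = np.linspace(0.0, 0.999, n_grid)
    growth = [np.dot(probs, np.log(1.0 + rf + f * (returns - rf)))
              for f in fs]                    # expected log-growth per f
    return fs[int(np.argmax(growth))]
```

For the classical even-money bet won with probability p, the maximizer of 0.6 log(1+f) + 0.4 log(1-f) is f* = 2p - 1, so p = 0.6 gives f* = 0.2, which the grid search recovers.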