We model the term structure of interest rates by taking the forward rate to be the solution of a stochastic hyperbolic partial differential equation. We study the arbitrage-free model of the term structure and explore the completeness of the market. We then derive results for the pricing of general contingent claims.

Under perfect market equilibrium, option prices are determined as if the economic agents were risk neutral. This paper develops a simple two-period model to analyze the impact of imperfect hedging on the equilibrium pricing of derivatives. In a partial equilibrium analysis, we show how a bid-ask option price spread is generated. In particular, we show how the equilibrium bid and ask prices depend on the market-makers' risk aversion and competition between market-makers. We argue that a monopolist market-maker is crucial to the existence of a Nash equilibrium in prices. Neither transaction costs nor asymmetric information is considered.

In this paper we construct risk-neutral dynamics for the at-the-money implied volatility in order to price implied volatility futures and forward starting compound options. These are exotic derivative contracts whose payoffs depend on future implied volatility. Starting from a description of the mechanism by which implied volatilities are actually quoted by option traders in option markets, we derive the risk-neutral dynamics of the stochastic implied volatility. In particular, we obtain the risk-neutral drift restriction that must be satisfied by each stochastic implied volatility, individually considered, on a volatility surface invariant both to time to maturity and to relative futures prices with the same time to maturity. We exhibit the risk-neutral process of the instantaneous spot volatility towards which the at-the-money market implied volatility converges in the absence of maturity arbitrage.

In this paper we derive a closed form approximation to a stochastic volatility option-pricing model and propose a variant of EGARCH for parameter estimation. The model thereby provides a consistent approach to the problem of option pricing and parameter estimation. Using Swedish stocks, the model provides a good fit to the heteroscedasticity prevalent in the time-series. The stochastic volatility model also prices options on the underlying stock more accurately than the traditional Black-Scholes formula. This result holds for both historic and implied volatility. A large part of the volatility smile that is observed for options of different maturity and exercise prices is thereby explained.

We present in our paper an original variance reduction technique for Monte Carlo methods.

By an elementary version of Girsanov's theorem, we introduce a drift term into the price computation. Our main idea is then to use a truncated version of the Robbins-Monro (RM) algorithm to find the optimal drift that reduces the variance. We prove that, for a large class of payoff functions, this version of the RM algorithm converges a.s. to the optimal drift. We then illustrate the method with an application to the pricing of some options.
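
As a hedged illustration of this approach (the payoff, step size, clipping level and iteration counts below are our own choices for the sketch, not the paper's), a Robbins-Monro iteration can search for the variance-minimizing drift of a Gaussian importance-sampling estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(g):
    # Illustrative payoff: E[max(G, 0)] = 1/sqrt(2*pi) for G ~ N(0, 1)
    return np.maximum(g, 0.0)

# Robbins-Monro search for the drift theta minimizing the variance of the
# importance-sampling estimator f(G + theta) * exp(-theta*G - theta^2/2).
# Its second moment is v(theta) = E[f(G)^2 exp(-theta*G + theta^2/2)], with
# stochastic gradient f(G)^2 exp(-theta*G + theta^2/2) * (theta - G).
theta = 0.0
for n in range(1, 50_001):
    g = rng.standard_normal()
    grad = payoff(g) ** 2 * np.exp(-theta * g + 0.5 * theta**2) * (theta - g)
    grad = np.clip(grad, -5.0, 5.0)   # crude clipping, standing in for truncation
    theta -= 0.5 * grad / n           # step size gamma_n = c / n

# Importance-sampling estimate with the learned drift (unbiased for any theta)
N = 100_000
g = rng.standard_normal(N)
est = np.mean(payoff(g + theta) * np.exp(-theta * g - 0.5 * theta**2))
print(theta, est)
```

The final estimator is unbiased for any drift; the RM step only moves the drift towards the variance minimizer.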

This paper models UK stock market returns in a smooth transition regression (STR) framework. We employ a variety of financial and macroeconomic series that are assumed to influence UK stock returns, namely GDP, interest rates, inflation, money supply and US stock prices. We estimate STR models where the linearity hypothesis is strongly rejected for at least one transition variable. These non-linear models describe the in-sample movements of the stock returns series better than the corresponding linear model. Moreover, the US stock market appears to play an important role in determining the UK stock market returns regime.

We discuss the utility maximization problem of an economic agent who obtains utility both from a perishable and a durable good. Introducing the gradient approach to this mixed classical/singular control problem, we show how the first order conditions for optimality reduce to a BSDE-variant of Skorohod's obstacle problem. The solution of this problem is obtained via a representation theorem for optional processes which characterizes the time-varying minimal storage level of durable goods to be held at each point in time. We explain how to derive this process from the given price and preference structure and provide some explicit solutions. We close by presenting an efficient and easily implementable algorithm which allows one to compute the minimal storage level process in a discrete time setting.

In this paper we address two main issues: the computation of the default probability implicit in emerging markets bond prices and the impact on portfolio risks and returns of expected changes in default probability. Using a reduced-form model of the Duffie-Singleton (1999) type, weekly estimates of default probabilities for US Dollar denominated Global bonds of twelve emerging markets are extrapolated for the sample period 1997-2001. The estimation of a logit type econometric model shows that weekly changes in the default probabilities can be explained by means of some capital markets factors. Recursively estimating the logit model using rolling windows of data, out-of-sample forecasts for the dynamics of default probabilities are generated and used to form portfolios of bonds. The practical application provides interesting results, both in terms of testing whether a naive trading strategy based on the model forecasts can outperform a "customized benchmark", and in terms of the model's ability to actively manage the portfolio risk (evaluated in terms of VaR) relative to a constant proportion allocation.

In this paper we solve a two-factor convertible bonds model that fits the observed term structure, calibrates the volatility parameters to market data and allows for correlation between the state variables. We propose the Method of Characteristics together with Finite Elements for time and space discretization. Compared with the traditionally used numerical schemes of finite differences and lattices, our methodology offers clear advantages for solving two-dimensional problems in terms of flexibility in incorporating a number of final, boundary and jump conditions, generality in pricing a wide array of exotic two-colored options, and accuracy. An empirical investigation into the pricing of National Grid Group's convertible issue produced prediction errors of less than 5\% over 215 successive trading days, a clear indication that the method supports reliable, commercially usable valuation of convertible bonds.

We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE representation of the stock process and computing the logarithm of the transition probability matrix of an approximating Markov chain. The option price is computed as a function of the underlyings, thus allowing for computation of deltas. The results of numerical tests in five dimensions are promising.
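
For intuition on the transition-matrix step, the sketch below (a toy two-state chain, not the paper's irregular-grid construction) recovers a generator-type matrix as the logarithm of a one-step transition probability matrix via eigendecomposition:

```python
import numpy as np

# One-step transition matrix of an illustrative 2-state Markov chain
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Matrix logarithm via eigendecomposition: Q = V diag(log lambda) V^{-1}.
# Q plays the role the approximated PDE operator plays on the grid.
lam, V = np.linalg.eig(P)
Q = (V @ np.diag(np.log(lam)) @ np.linalg.inv(V)).real

print(Q)
print(Q.sum(axis=1))  # rows of a generator sum to zero
```

In higher dimensions and on irregular grids the same idea applies, though the matrix logarithm is usually computed with more robust numerical routines.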

We consider forward rate models of Heath-Jarrow-Morton type, as well as more general infinite dimensional SDEs, where the volatility/diffusion term is stochastic in the sense of being driven by a separate hidden Markov process. Within this framework we use the abstract realization theory developed by Björk and Svensson in order to provide general necessary and sufficient conditions for the existence of finite dimensional Markovian realizations for the stochastic volatility models. We illustrate the theory by analyzing a number of concrete examples.

Cross-sections of option prices embed the risk-neutral probability density functions (PDFs) for the future values of the underlying asset. Theory suggests that risk-neutral PDFs differ from market expectations due to risk premia. Using a utility function to adjust the risk-neutral PDF to produce subjective PDFs, we can obtain measures of the risk aversion implied in option prices. Using FTSE 100 and S&P 500 options, and both power and exponential utility functions, we show that subjective PDFs accurately forecast the distribution of realizations, while risk-neutral PDFs do not. The estimated coefficients of relative risk aversion are all reasonable. The relative risk aversion estimates are remarkably consistent across utility functions and across markets for given horizons. The degree of relative risk aversion declines with the forecast horizon and is lower during periods of high market volatility.
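
As a small numerical illustration of the adjustment step (the lognormal risk-neutral density and the risk-aversion coefficient below are assumptions for the sketch, not estimates from the paper), a power utility with marginal utility U'(x) = x^(-gamma) transforms the risk-neutral PDF q into a subjective PDF proportional to q(x) * x^gamma:

```python
import numpy as np

def integrate(f, x):
    # simple trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Assumed lognormal risk-neutral density q for the terminal asset value
x = np.linspace(1e-3, 5.0, 5000)
mu, sigma = 0.0, 0.2
q = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

# Power utility: subjective density p(x) proportional to q(x) / U'(x) = q(x) * x**gamma
gamma = 3.0  # assumed coefficient of relative risk aversion
p = q * x**gamma
p /= integrate(p, x)

rn_mean = integrate(x * q, x)
subj_mean = integrate(x * p, x)
print(rn_mean, subj_mean)  # the subjective mean exceeds the risk-neutral mean
```

The gap between the two means reflects the risk premium implied by the assumed risk aversion.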

Let $Z_{t,z}^\nu$ be an $\mathbb{R}^{d+1}$-valued mixed diffusion process controlled by $\nu$ with initial condition $Z_{t,z}^\nu(t)$ $=$ $z$. In this paper, we characterize the set of initial conditions such that $Z_{t,z}^\nu$ can be driven above a given stochastic target at time $T$ by proving that the corresponding value function is a discontinuous viscosity solution of a variational partial differential equation. As applications of our main result, we study two examples: a problem of optimal insurance under self-protection and a problem of option hedging under jumping stochastic volatility where the underlying stock pays a random dividend at a fixed date.

This paper proposes a methodology to provide risk measures for portfolios during extreme events. The approach is based on splitting the multivariate extreme value distribution of the assets of the portfolio into two parts: the distributions of each asset and their dependence function. The estimation problem is also investigated. Then, stress-testing is applied to market index portfolios and Monte-Carlo based risk measures -- Value-at-Risk and Expected Shortfall -- are provided.
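
Once loss scenarios have been simulated from the fitted model, the two risk measures are straightforward to compute. A minimal sketch (standard normal losses stand in here for draws from the multivariate extreme value model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated portfolio losses; in the paper's setting these would be drawn from
# the fitted multivariate extreme value model, here simply standard normal
losses = rng.standard_normal(1_000_000)

alpha = 0.99
var = np.quantile(losses, alpha)       # Value-at-Risk: the alpha-quantile of losses
es = losses[losses >= var].mean()      # Expected Shortfall: mean loss beyond VaR
print(var, es)
```

By construction Expected Shortfall always dominates Value-at-Risk at the same level.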

In this work we consider different parametric assumptions for the instantaneous covariance structure of the Libor market model. We examine the impact of each parameterization on the evolution of the term structure of volatilities in time, on terminal correlations and on the joint calibration to the caps and swaptions markets.

We present a number of cases of calibration in the Euro market. In particular, we consider calibration to swaptions via a parameterization establishing a controllable one-to-one correspondence between instantaneous covariance parameters and swaption volatilities, and assess the possible benefits of smoothing the input swaption matrix before calibrating.

Finally, we hint at the problem of measuring the divergence between the swap rate distribution coming from the Libor model and the exponential family of lognormal densities, this distance being related to the quality of the algebraic approximation replacing Monte Carlo evaluation in the calibration procedure.

This paper develops a general equilibrium economy in which options are not redundant and studies the impact of heterogeneity in beliefs on option prices and their trading volume. The model, calibrated to survey data, predicts that: the difference in beliefs can account for observed option trading volume; option trading volume is more likely to be driven by changes in the difference in beliefs than by wealth shocks; the open interest in options can proxy for the difference in beliefs; and the model can partly explain the frequencies of violations of one-factor models reported by Bakshi, Cao and Chen (2000). The model is fitted to S&P 500 and LIFFE options prices and trading volumes, and series of option-implied difference in beliefs (OIDB) are constructed. Studying the dynamics of OIDB we find that (1) up to 30 percent of the time variation in OIDB can be explained by the difference in beliefs implied by surveys; (2) a one standard deviation shock to OIDB increases the implied volatility by up to 120 basis points contemporaneously and increases the volatility smile slope by up to 10 basis points; and (3) option trading volume is more sensitive to shocks to OIDB than to shocks to the value of the underlying asset.

This article investigates the role of option contracts in a supply chain when the demand curve is downward sloping. We consider call (put) options that provide the retailer with the right to reorder (return) goods at a fixed price. We show that the introduction of options causes the wholesale price to increase and the volatility of the retail price to decrease. In general, options are not zero-sum games. Conditions are derived under which the manufacturer prefers to use options. When this happens the retailer may also benefit or be worse off. Specifically, if the uncertainty in the demand curve is high, the introduction of options alters the equilibrium prices in a way that hurts the retailer. Finally, we demonstrate that if either the manufacturer or the retailer wants to hedge the risk, contracts that pay out according to the square of the price of a traded security are required.

A number of models have been proposed in order to account for observed implied volatility smiles. Popular solutions include local volatility, stochastic volatility, jumps in the underlying and/or in the volatility, and combinations of those. In practice one is faced with two basic issues: i) how to compute effectively and accurately the implied volatility surface that each model generates? ii) how to choose between those models and how to calibrate their parameters on market data? It is the purpose of the talk to address these topics. An essential idea is that all the models exhibit very different qualitative behaviours. In particular several asymptotic regimes will be given, in an attempt to classify those models. One can then take advantage of the asymptotics to propose well-posed calibration procedures in some cases. This talk is based on recent joint works with H. Berestycki (EHESS, Paris), R. Cont (Ecole Polytechnique, Paris) and I. Florent (CCF, HSBC Group, Paris).

We consider the problem of an executive who receives call options as compensation in a dynamic setting. He can influence the stock price return through his effort. In addition, he determines the level of volatility of the stock through the choice of projects. The executive is risk-averse and experiences disutility from effort. In this framework, we introduce the problem of the company that wants to maximize the final expected value of the price of the stock minus the cost of the compensation package. The company has to design a compensation package such that the executive reaches the minimum level of utility or opportunity cost (individual rationality constraint). We characterize the optimal strike price the company should choose, and compute it numerically for the logarithmic case.

We consider a financial model with mild conditions on the dynamic of the underlying asset. The trading is only allowed at some fixed discrete times and the strategy is constrained to lie in a closed convex cone. In this context, we derive closed formulae to compute the super-replication prices of any contingent claim which depends on the values of the underlying at the discrete times above.

As an application, when the underlying follows a stochastic differential equation including stochastic volatility or Poisson jumps, we compute those super-replication prices for a range of European and American style options, including Asian, Lookback or Barrier Options.

Distantly maturing forward rates represent the market's long-term (risk-neutral) expectations about interest rates. As such, they are the fundamental ingredient of the pricing kernel. In most equilibrium models, interest rates mean revert, and so long forward rates are asymptotically constant.

However, from US Treasury STRIPs data, forward rates slope increasingly downwards, and do not attenuate in volatility, as maturity increases beyond about 15 years. We model this in an equilibrium framework, by showing that most of the volatility in long forward rates is ``short term'', coming from a predictable, tightly mean reverting factor whose mean reversion is absent under the risk neutral probabilities. We verify this predictable behavior in the STRIPs data, and also in T-Bond futures data, where we can observe the risk neutral dynamics directly. It is striking that this short term behavior has such a large persistent effect on the forward rates.

This paper presents the results of stability tests for alpha and beta coefficients over bull and bear market conditions for a sample of stocks traded on the Athens Stock Exchange. Our daily data cover the period July 1998 to June 2000, which splits into two sub-periods: one bullish (July 1998 to mid-September 1999) and one bearish (mid-September 1999 to June 2000). The tests are based on two alternative models: the Single Index Market Model (SIMM) and a modified version which introduces a binary (dummy) variable accounting for the bullish period. Our results show that the vast majority of the shares examined have greater beta coefficients in the period of the declining market and smaller betas during the market up-trend. Also, the majority of stocks appear to have beta values below one during bullish market conditions and above one when the market is bearish.
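
The modified model amounts to adding a dummy-interaction term to the SIMM regression. A sketch on synthetic data (all numbers below are made up for illustration, not estimates from the Athens sample):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns: market and one stock whose beta shifts between regimes
n = 500
bull = (np.arange(n) < 300).astype(float)   # dummy D = 1 in the bullish sub-period
rm = rng.standard_normal(n) * 0.01
alpha_true, beta_bear, beta_shift = 0.0002, 1.2, -0.4  # bear beta 1.2, bull beta 0.8
ri = alpha_true + (beta_bear + beta_shift * bull) * rm + rng.standard_normal(n) * 0.005

# Modified SIMM: r_i = a + b*r_m + c*(D*r_m) + e, so b is the bear-market beta
# and b + c the bull-market beta
X = np.column_stack([np.ones(n), rm, bull * rm])
coef, *_ = np.linalg.lstsq(X, ri, rcond=None)
a, b, c = coef
print(b, b + c)  # estimated bear and bull betas
```

A standard t-test on the interaction coefficient c then decides whether beta is stable across the two regimes.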

In mathematical finance, the capital of a self-financing strategy is defined as a stochastic integral. If the time horizon is infinite, the capital of a self-financing strategy is specified as the almost sure limit of the capital at finite times.

We introduce the notion of the improper stochastic integral and propose to use it for defining the terminal capital of a self-financing strategy. (The existence of an improper stochastic integral is more restrictive than the existence of the almost sure limit of the stochastic integrals up to finite times.)

It has been found that the First and the Second Fundamental Theorems of Asset Pricing remain true if one accepts this new definition of the terminal capital.

Although the Black Scholes formula is widely used by market practitioners, when applied to (``vanilla'') call and put options it is very often reduced to a means of quoting option prices in terms of another parameter, the implied volatility. The implied volatility of call options at a given date is a function of the strike price and exercise date: this function is called the implied volatility surface.

Two features of this surface have captured the attention of researchers in financial modeling. First, the non-flat instantaneous profile of the surface, whether it be a ``smile'', a ``skew'' or the existence of a term structure, points to the insufficiency of the Black Scholes model for matching a set of option prices at a given time instant. Second, the level of implied volatilities changes with time, deforming the shape of the implied volatility surface. The evolution in time of this surface captures the evolution of prices in the options market.

We present here an empirical study of this evolution: how does the implied volatility surface actually behave? How can this behavior be described in a parsimonious way?

Our study is based on time series of implied volatilities of FTSE, S&P 500 and DAX options. Starting from market prices for European call and put options, we describe a non-parametric smoothing procedure for constructing a time series of smooth implied volatility surfaces. We then study some of the statistical properties of these time series. By applying a Karhunen-Loève decomposition to the daily variations of the volatility surface, we extract the principal deformations of the surface, study the time series of the corresponding factor loadings and examine their correlation with the underlying asset.
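
The decomposition step can be sketched as follows on synthetic data (the surfaces below are simulated with one dominant ``level'' mode; the grid sizes and noise scales are arbitrary choices, not the FTSE/S&P/DAX data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: T daily implied vol surfaces, each flattened to a vector
# over a (strike x maturity) grid, driven by one dominant "level" factor
T, K, M = 250, 10, 5
grid = np.ones(K * M)
level = rng.standard_normal(T) * 0.02
surfaces = 0.2 + np.outer(level, grid) + rng.standard_normal((T, K * M)) * 0.002

# Karhunen-Loeve decomposition of the daily variations of the surface
dv = np.diff(surfaces, axis=0)
dv_centered = dv - dv.mean(axis=0)
U, s, Vt = np.linalg.svd(dv_centered, full_matrices=False)

explained = s**2 / np.sum(s**2)   # share of variance per eigenmode
loadings = U * s                  # time series of factor loadings
print(explained[:3])
```

The rows of `Vt` are the principal deformations of the surface; the columns of `loadings` give the factor time series whose correlation with the underlying can then be examined.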

Based on these findings, we show that the implied volatility surface can be modeled as a mean reverting random field with a covariance structure matching the empirical observations. We propose a two factor model for the evolution of the surface which adequately captures the observed properties, and indicate simple methods for estimating the model coefficients. This model extends and improves the well known "constant smile" model used by practitioners. The consequences of these findings for the measurement and management of volatility risk are then outlined.

In this paper, I show that an extension of CCAPM that incorporates the idea of limited participation and the time-varying and predictable nature of labor's share of output goes a long way toward explaining several empirical regularities in asset pricing, including the equity premium puzzle, the riskfree rate puzzle, the equity volatility puzzle, and the time-varying and predictable nature of asset returns.

The structure of the model is such that the aggregate labor income can be interpreted as the habit stock of aggregate consumption. Under this interpretation, our model nests several popular habit formation models in the asset pricing literature. Since the aggregate labor income growth rate is locally stochastic and arbitrarily correlated with the aggregate consumption growth rate, our model gives rise to a realistic model of the real term structure.

Using trade and quote data for Toronto Stock Exchange listed securities, this paper employs nonparametric estimation to measure the effect of being interlisted on a US exchange on the percentage bid-ask spread (and other trading properties). Unlike previous studies, I use kernel-based matching estimates in addition to variants of the standard nearest-neighbor approach for constructing matched samples of interlisted stocks and non-interlisted stocks. I explore the sensitivity of results to: (i) using different bandwidth parameters and caliper-matching criteria; (ii) using different matching characteristics; (iii) the exclusion/inclusion of firms. I highlight instances when kernel-based and nearest-neighbor matching estimation techniques produce significantly different results.

Default correlation has become a key issue in the trading of debt-related products. Credit spreads of individual issuers imply risk-neutral default distributions but provide little or no information about the relation between different issuers.

Several different approaches to this problem are available, based on both 'reduced form' and 'structural' models. In previous papers the author and Violet Lo introduced reduced-form `infection models' in which interaction effects can be handled in a straightforward way. The philosophy is to specify directly a mechanism by which different issuers interact, leading to readily-computable finite-state Markov models where the states represent various combinations of defaulted/non-defaulted issuers.

In this paper we consider:

* Calibration of simple models and implications for counterparty default risk in default swaps.
* Estimation of interaction parameters from market data.
* More complex models allowing for stochastic interest and hazard rates.
* Applications to pricing and hedging of basket default swaps.

In April 2001 Swiss banks held over CHF 500 billion in mortgages. This important segment accounts for about 63% of all the loan portfolios of Swiss banks. In this paper we restrict our attention to residential mortgages held by private clients, i.e. borrowers who finance their property with the loan, and we model the probability distribution of the number of defaults using a non-parametric intensity based approach. We consider the time-to-default and, by conditioning on a set of predictors for the default event, we obtain a log-additive model for the conditional intensity process of the time-to-default, where the contribution of each predictor is described by a smooth function. We estimate the model using a local scoring algorithm from the generalized additive model framework.

A large shareholder who undertakes costly effort to improve a firm's dividends faces a tradeoff. Selling shares will likely lower the share price (as the market anticipates a reduction in effort), while holding the shares implies a less diversified investment portfolio. Moreover, in a dynamic setting a time-consistency problem emerges: once some shares are sold, the incentive to sell additional shares may increase since the large shareholder is less exposed to the resultant price declines. We analyze a multi-period general equilibrium model for the optimal trading strategy of a large shareholder. We consider the case in which the large shareholder can commit to a trading strategy, and the case in which such commitment is impossible. Absent commitment, the problem is similar to a durable goods monopoly: the share price today depends on the shares expected to be sold in the future. We show that while the large shareholder's stake ultimately converges to the efficient risk-sharing allocation, this solution entails inefficient monitoring. Moreover, even with continuous trading, with sufficient moral hazard the large shareholder adjusts his stake gradually over time. While the trading strategy (and therefore the dividend process) is complicated, our results produce a simple formula for the equilibrium share price in this setting: the trading strategy of the large shareholder can be ignored, and today's share price is simply the present value of dividends given constant holdings by the large shareholder, but adjusted by a risk premium that reflects the large shareholder's (rather than investors') risk aversion. We apply our model to provide a rationale for both IPO underpricing and the use of lockup provisions. Finally, we generalize our results outside the moral hazard framework.

We present a model in which a bond issuer subject to possible default is assigned a "continuous" rating R(t) in [0,1] that follows a jump-diffusion process. Default occurs when the rating reaches 0, which is an absorbing state. An issuer that never defaults has rating 1 (unreachable). The value of a bond is the sum of "default-zero-coupon" bonds (DZC), priced as follows:

D(t,x,R) = exp(-L(t,x) - psi(t,x,R)),    where x = T - t

The default-free yield y(t,x,1)=L(t,x)/x follows a traditional interest rate model (e.g. HJM, BGM, "string", etc.). The "spread field" psi(t,x,R) is a positive random function of two variables R and x, decreasing with respect to R and such that psi(t,0,R)=0. The value psi(t,x,0) is given by the bond recovery value upon default. The dynamics of psi is represented as the solution of a finite dimensional SDE. Given psi such that dpsi/dR <0 a.s., we compute what should be the drift of the rating process R(t) under the risk-neutral probability, assuming its volatility and possible jumps are also given.
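
To make the pricing formula concrete, here is a toy parametrization of the spread field (the functional form and numbers are our assumptions, chosen only to respect the stated monotonicity in R and the boundary behavior, not the paper's calibrated dynamics):

```python
import numpy as np

# Toy parametrization: flat default-free yield y, spread field
# psi(t, x, R) = s * x * (1 - R)**k, which is positive, decreasing in R,
# and vanishes at x = 0 and at R = 1 (the never-defaulting issuer)
def dzc_price(x, R, y=0.04, s=0.08, k=2.0):
    L = y * x                      # default-free log-discount L(t, x) = y * x
    psi = s * x * (1.0 - R) ** k   # credit spread term, psi(t, x, 1) = 0
    return np.exp(-L - psi)

x = 5.0
print(dzc_price(x, 1.0))   # default-free bond: exp(-0.2)
print(dzc_price(x, 0.5))   # lower rating -> cheaper bond
```

Any parametrization with dpsi/dR < 0 produces the same qualitative ordering of DZC prices across ratings.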

For several bonds, ratings are driven by correlated Brownian motions and jumps are produced by a combination of economic events.

Credit derivatives are priced by Monte-Carlo simulation. Hedge ratios are computed with respect to underlying bonds and CDS's.

Most other credit models (Merton, Jarrow-Turnbull, Duffie-Singleton, Hull-White, etc.) can be seen either as particular cases or as limit cases of this model, which has been specially designed to ease calibration.

Long-term statistics on yield spreads in each rating and seniority category provide the diffusion factors of psi. The rating process is, in a first step, statistically estimated using rating migration statistics from rating agencies (each agency rating is associated with a range for the continuous rating). Then its drift is replaced by the risk-neutral value, while the historical volatility and the jumps are left untouched.

We propose a market-based approach to the modeling of implied volatility, in which the implied volatility surface is directly used as the state variable to describe the joint evolution of market prices of options and their underlying asset. We model the evolution of an implied volatility surface by representing it as a randomly fluctuating surface driven by a finite number of orthogonal random factors. Our modeling approach is based on empirical studies of the statistical behavior of implied volatility time series [2].

We illustrate how this approach extends and improves the accuracy of the well-known ``sticky moneyness'' rule used by option traders for updating implied volatilities. Our approach justifies the use of ``Vegas'' for measuring volatility risk and provides a decomposition of volatility risk as a sum of independent contributions from empirically identifiable factors.

We examine the existence of arbitrage free realizations of such stochastic implied volatility models and show that they lead to simple Delta-Vega hedging strategies for portfolios of options.

We introduce the intensity-based defaultable Lévy term structure model. It generalizes the default-free Lévy term structure model by Eberlein and Raible, and the intensity-based defaultable Heath-Jarrow-Morton approach of Bielecki and Rutkowski. Furthermore, we include the concept of multiple defaults, based on Schönbucher, within this generalization.

We extend the maximum likelihood estimation method of Ait-Sahalia (2002) for time-homogeneous diffusions to time-inhomogeneous ones. We derive a closed-form approximation of the likelihood function for discretely sampled time-inhomogeneous diffusions, and prove that this approximation converges to the true likelihood function and yields consistent parameter estimates. Monte Carlo simulations for several financial models reveal that our method largely outperforms other widely used numerical procedures in approximating the likelihood function. Furthermore, parameter estimates produced by our method are very close to the parameter estimates obtained by maximizing the true likelihood function, and superior to estimates obtained from the Euler approximation.

This paper considers learning when the distinction between risk and ambiguity matters. Working within the framework of recursive multiple-priors, the paper formulates a counterpart of the Bayesian model of learning about an uncertain parameter from conditionally i.i.d. signals. The framework permits a distinction between noisy and indistinguishable signals and also between indistinguishable and identical experiments. Other noteworthy features include: The set of conditional probabilities agents use for forecasting expands or shrinks in response to new data. Ambiguous signals may increase the volatility of conditional actions and may prevent ambiguity from vanishing in the limit. Properties of the model are illustrated with two applications. A dynamic portfolio choice model suggests that agents should exit (enter) the stock market after a string of bad (good) returns. A representative agent asset pricing model shows how large unanticipated shocks are amplified by the prospect of ambiguous news.

We provide the definition and a complete characterization of regular affine processes. This type of process unifies the concepts of continuous-state branching processes with immigration and Ornstein-Uhlenbeck type processes. We show a wide range of financial applications, including the CIR and Vasicek short rate models and Heston's stochastic volatility model.

The talk is concerned with the modelling of dependence between defaults in current models for credit portfolio management. We clarify the mathematical structure of the existing industry models and provide links between them. We study the model risk due to an inappropriate modelling of the dependence between defaults inherent in various modelling approaches. Finally we discuss the calibration of several models to default data.

We consider increasing strictly concave utility functions that are finite valued on the whole real line and a continuous-time model of a financial incomplete market where asset price processes are semimartingales. We show that if the conjugate of the utility function satisfies a rather weak assumption, then there exists an optimal solution to utility maximization from terminal wealth and to the dual problem of the minimization of generalized divergence distances. This assumption is shown to be equivalent to the condition of reasonable asymptotic elasticity of the utility function, as defined by Schachermayer (1999). The proof is based on a direct application of the properties of the solution of the dual problem.

We use a calibrated dynamic general equilibrium economy with heterogeneous beliefs to study the effects of a short-selling constraint on stock prices, stock price volatility, and interest rates. We find large stock price and volatility effects from heterogeneous beliefs, without introducing short-selling constraints. When we introduce short-selling constraints into this economy, we find small additional stock price valuation effects, with large and offsetting effects on equilibrium interest rates and Sharpe ratios.

In Treasury-Bond markets, several options are traded whose underlyings are illiquid instruments, usually referred to as Off-the-Run bonds (OffTR in short). These are opposed to the reference or benchmark instruments, equally known as On-the-Run bonds (OnTR in short). In general, mainly due to a liquidity premium, OffTR instruments tend to trade at a spread above the OnTR bonds and display a different historical volatility when compared to the benchmark set.

When trying to price a derivative whose underlying is an OffTR bond, we face the issue of pricing the option so as to avoid possible arbitrage with the swap market. This also stems from the fact that implied volatilities are not available for OffTR products. To achieve this goal, we introduce two alternative solutions that both use information gathered from money markets. Although a swaption in a frictionless market may be seen as a portfolio of zero-coupon bonds, things in reality are far more complicated. In real markets, Libor curves and bond yield curves differ substantially due to liquidity effects, REPOs and collateralisation features, just to mention a few. In addition, only forward swap-rate volatilities are quoted, with the implicit assumption of lognormality under the swap measure (the one where the PVBP asset is the numeraire). Therefore, a direct equivalent of the swap-rate volatility surface does not exist in the bond option market.

We place ourselves in a market driven by a generic stochastic volatility model for the forward swap rates under the swap measure. The main idea consists in finding a suitable swaption (or swaption portfolio) to replicate the payoff of the bond option. The hypothesis we introduce and justify assumes that yield curves can be recovered from swap curves through a deterministic (although maturity-dependent) zero-coupon spread. Alternative possibilities are also discussed. In a first attempt to solve the problem, we determine an arbitrage relationship between OffTR bond options and swaptions that only involves a position in ATM options. We then generalize this approach to include the possibility of statically replicating the bond option payoff through a portfolio of swaptions at different strikes. In the second scenario, we are then able to price an option on an OffTR bond consistently with the swaption smile.

This paper develops one- and two-factor Adaptive Binomial Models (ABM) to price financial derivatives. By building several levels of finer trees around critical regions, the ABM improves upon the binomial model by significantly reducing pricing errors for all tree-based derivative valuations. The improvement in efficiency is on the order of $4^L$, with $L$ being the level of fine mesh employed. The model is applied to price several derivative securities, e.g. discrete barrier options, knock-out swaps, and index knock-out bonds. For this class of derivatives, the pricing error comes mainly from the price discontinuity around the knock-out region. Numerical results show significant improvement in pricing accuracy. For the index knock-out bond, such improvement proves to be critical in getting an accurate price, because limitations of computer resources prevent us from obtaining an accurate price at all with the standard binomial model.
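
The abstract does not spell out the ABM's local mesh refinement, so the sketch below implements only the uniform-step CRR binomial baseline that the ABM refines, as a reference point for the pricing errors discussed above (all parameters are illustrative):

```python
import math

def crr_binomial_call(S0, K, r, sigma, T, n):
    """Price a European call on a standard (uniform-step) CRR binomial tree.

    This is the baseline lattice that the Adaptive Binomial Model refines;
    the ABM's finer meshes around critical regions are not reproduced here.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1.0 / u                               # down factor
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)

    # terminal payoffs at the n+1 leaves
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction to the root
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

print(round(crr_binomial_call(100, 100, 0.05, 0.2, 1.0, 500), 3))
```

With 500 steps the price is within about a cent of the Black-Scholes value of 10.45; for barrier-type payoffs the same uniform tree converges far more slowly, which is the inefficiency the ABM targets.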

We consider the arbitrage-free equilibrium pricing problem in an incomplete market. We provide a method to find the minimal distance measure defined by Goll and R\"{u}schendorf (2001) or the maximum entropy measure over a finite dimensional subset of the set of all equivalent martingale measures. We use the cross entropy to measure the pseudo-distance between two equivalent martingale measures. We interpret the cross entropy as model risk and prove that it induces a Riemannian geometric structure. A numerical optimization algorithm on the Riemannian manifold is applied to solve the pricing problem.
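
A minimal sketch of the minimal-entropy idea in a one-period trinomial market invented for illustration: the equivalent martingale measures form a one-parameter family, and we select the one with the smallest relative entropy with respect to the real-world measure by grid search (the Riemannian optimization of the paper is not reproduced).

```python
import math

# One-period market: S0 = 100, three terminal states, zero interest rate.
# (Illustrative numbers, not taken from the paper.)
prices = [90.0, 100.0, 120.0]
p = [0.3, 0.4, 0.3]            # real-world probabilities

def kl(q, p):
    """Relative entropy KL(Q || P) = sum q_i log(q_i / p_i)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

# The martingale constraint E_Q[S_T] = 100 forces q = (2t, 1 - 3t, t)
# for t in (0, 1/3), so the EMMs form a one-parameter family.
best_t, best_kl = None, float("inf")
for i in range(1, 3333):
    t = i / 10000.0
    q = [2 * t, 1 - 3 * t, t]
    if min(q) <= 0:
        continue
    d = kl(q, p)
    if d < best_kl:
        best_t, best_kl = t, d

q_star = [2 * best_t, 1 - 3 * best_t, best_t]
# sanity check: q_star is indeed a martingale measure
assert abs(sum(s * q for s, q in zip(prices, q_star)) - 100.0) < 1e-9
print([round(x, 3) for x in q_star], round(best_kl, 4))
```

In the paper's setting the same pseudo-distance is minimized over a finite-dimensional manifold of measures rather than a one-dimensional grid.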

Recent papers by Jean-Michel Courtault et al. (2000) and Murad S. Taqqu (2001) have cast new light on Bachelier's work and his time. In both articles, Bernard Bru's research seems to be the most influential. In the case of Louis Bachelier and his field of activity, the dominant French point of view is entirely natural, and everybody is convinced by the results. The aim of the present paper is to add a few tesserae from other countries to the picture that is known about the birth of mathematical finance and its probabilistic environment. Franck Jovanovic and Philippe LeGall (2000) investigated the work of Bachelier's French predecessors Jules Regnault and Henri Lefevre. In our talk we discuss and compare especially the contributions to finance and stochastics made by Charles Castelli (1871, 1882), Francis Ysidro Edgeworth (1886, 1888), and Gustav Theodor Fechner (1860, 1897).

Pricing exotic type options such as American and lookback options often translates to optimal stopping time problems. The key to solving these problems is to apply the ``principle of smooth fit'' to find the ``free boundary'' between the continuation region and the stopping region.

We will review some recent free boundary problems in option pricing in both classical Black-Scholes and regime switching models. For the Black-Scholes model, we will talk about pricing perpetual American type lookback options where the smooth fit principle is necessary but not sufficient in solving this problem. For the regime switching model, we will investigate some examples where explicit solutions can be obtained via extending the smooth fit technique to allow jump discontinuities.

Part of the talk is from a joint work with Larry Shepp.

This paper considers the pricing of contingent claims using an approach developed and used in insurance pricing. The approach is of interest and significance because of the increased integration of insurance and financial markets and also because insurance related risks are trading in financial markets as a result of securitisation and new contracts on futures exchanges. This approach uses probability distortion functions as the dual of the utility functions used in financial theory. The pricing formula is the same as the Black-Scholes formula for contingent claims when the underlying asset price is log-normal. The paper compares the probability distortion function approach with that based on financial theory. The theory underlying the approaches is set out and limitations on the use of the insurance based approach are illustrated. We extend the probability distortion approach to the pricing of contingent claims for more general assumptions than those used for Black-Scholes option pricing.

In this paper, we provide a definition of equilibrium in terms of risk measures, and present necessary and sufficient conditions for equilibrium in a market with finitely many traders (whom we call ``banks") who trade with each other in a financial market. We assume that each bank has a preference relation on random payoffs which is monotonic, complete, transitive, convex and continuous, and we show that this, together with the current position of the bank, leads to a family of valuation measures for the bank. We show that a market is in equilibrium if and only if there exists a (possibly signed) measure which, for each bank, agrees with a positive convex combination of all valuation measures used by that bank on securities traded by that bank.

This paper examines the efficiency of stock based compensation by valuing stock and options from the executive's point of view. Companies give compensation in the form of stock in order to align incentives by providing a link between executive wealth and the stock price performance of the company. However, it requires the executive to be exposed to firm-specific risk, and thus hold a less than fully diversified portfolio. Since firm-specific risk is not priced, this leads to the executive placing less value on the options than their cost to the company, given by their market value.

We propose a continuous time, utility maximisation model to value the executive's compensation. We endogenise allocation of the executive's non-option wealth as the executive may invest in the market portfolio. Executives trade the market portfolio to adjust exposure to market risk, but are subject to firm-specific risk for incentive purposes.

By distinguishing between these two types of risks, we are able to examine the effect of stock volatility, firm-specific risk, market risk and the correlation between the stock and the market, on the value to the executive and incentives. We can prove that there is a negative relationship between firm-specific risk and value, if volatility is fixed. However, the value may increase or decrease with firm-specific risk if market risk is fixed. The same ambiguous relationship is found if we consider value as a function of volatility, so executives will not always aim to increase the volatility of the stock price.

Just as the value of the compensation to the executive is overstated in a Black-Scholes model, the Black-Scholes model also exaggerates the incentives for the executive to increase the stock price. We address the question of how the company can maximise incentives (for a given cost) and show that if stock compensation replaces cash remuneration, it is optimal to compensate with stock, rather than options.

We investigate the structure of the pricing kernels in a general dynamic investment setting by making use of their duality with the self financing portfolios. We generalize the variance bound on the intertemporal marginal rate of substitution introduced in Hansen and Jagannathan (1991) along two dimensions, first by looking at the variance of the pricing kernels over several trading periods, and second by studying the restrictions imposed by the market prices of a set of securities.

The variance bound is the optimal Sharpe ratio which can be achieved through dynamic trading. It may be further enhanced by investing dynamically in some additional securities. We exhibit the kernel which leads to the smallest possible increase in optimum dynamic Sharpe ratio while agreeing with the current market quotes of the additional instruments.

We study the practical implications of imposing "consistency" on the choice of the initial curve in the calibration of an HJM model. Consistency, simply stated, means that the initial curve should come from the class of forward rate curves that will be generated by the model at future dates. We perform the analysis on both simulated and market data using the extended Vasicek model. Our results show that the initial curve has a significant impact on the estimates of the parameters of the model. We identify a family, consistent with the model, which, on market data, shows more stable estimates, as well as better fitting and forecasting capabilities.

In this talk we examine the dependence of option prices in a jump-diffusion model on the choice of martingale pricing measure. Since the model is incomplete there are many equivalent martingale measures. Each of these measures corresponds to a choice for the market price of diffusion risk and the market price of jump risk. The main result is that for convex payoffs the option price is increasing in the jump-risk parameter. We apply this result to deduce general inequalities comparing the prices of contingent claims under various pricing measures which have been proposed in the literature as candidate pricing measures.

The proofs are based on couplings of stochastic processes. If there is only one possible jump size then there is a second coupling which can be used to extend the results to include stochastic jump intensities.

This paper explores derivative security valuation when the underlying price process is controlled. Our particular application is valuing defaultable debt when the process for firm value is controlled by the firm's manager. Specifically, the manager uses forward contracts to control (continuously) the firm's net investment in a risky technology. It is assumed that the manager has an incentive compensation contract based on firm value and seeks to maximize the expected utility associated with that compensation. Via a state variable transformation, we are able to generate analytic solutions for this control problem. That allows us to examine the effect on default probabilities and bond pricing of altering key parameters such as volatility and the degree of managerial risk aversion. We also compare the situation with and without managerial control, and provide insights on the relative benefits for lenders. Over and above the specific results for defaultable debt, our basic methodology appears broadly applicable to valuing payoffs on portfolios where the investment allocation is subject to managerial control.

Motivated by the implied stochastic volatility literature (Britten-Jones and Neuberger (1998), Derman and Kani (1997), Ledoit and Santa-Clara (1998)) this paper proposes a new and general method for constructing smile-consistent stochastic volatility models. The method is developed by recognizing that option pricing and hedging can be accomplished via the simulation of the implied risk neutral distribution. We devise an algorithm for the simulation of the implied distribution, when the first two moments change over time. The algorithm can be implemented easily, and it is based on an economic interpretation of the concept of mixture of distributions. It can also be generalized to cases where more complicated forms for the mixture are assumed.

This paper addresses Merton's portfolio optimization problem in the general setting of an exponential Lévy stock model. We investigate three canonical examples of utility functions, $-e^{-x}$, $x^p/p$, and $\log x$, and in each case give the general solutions of both the primal and dual optimization problems. To study the dual problem directly, we introduce a generalized notion of Hellinger process such that the solution of the dual problem is the supermartingale which minimizes the Hellinger process at each instant in time. We are especially interested in when this solution is a martingale: we find it fails to be a martingale in cases when there is a no borrowing/short-selling constraint which becomes binding.

This talk studies the relative error in the crude Monte Carlo pricing of some familiar European path-dependent multi-asset options. For the crude Monte Carlo method, it is well known that the convergence rate $O(n^{-1/2})$, where $n$ is the number of simulations, is independent of the dimension of the integral. We show that for a large class of pricing problems in the multi-asset Black-Scholes market, the constant in $O(n^{-1/2})$ is also independent of the dimension. The main tool to prove this result is the isoperimetric inequality for Wiener measure.
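
The $O(n^{-1/2})$ rate can be illustrated with a crude Monte Carlo sketch for a call on the average of independent Black-Scholes assets (a simplified stand-in for the path-dependent options of the talk; all parameters are illustrative): the estimated standard error shrinks like $n^{-1/2}$ as the number of simulations grows, for any dimension $d$.

```python
import math, random

def basket_call_mc(n_paths, d, S0=100.0, K=100.0, r=0.05, sigma=0.2,
                   T=1.0, seed=1):
    """Crude Monte Carlo price of a European call on the average of d
    independent Black-Scholes assets. Returns (price, standard_error);
    the standard error decays like n^{-1/2} regardless of d.
    (Illustrative example, not the exact payoffs of the talk.)
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    disc = math.exp(-r * T)
    total = total_sq = 0.0
    for _ in range(n_paths):
        # terminal basket value: average of d independent GBM draws
        basket = sum(S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
                     for _ in range(d)) / d
        payoff = disc * max(basket - K, 0.0)
        total += payoff
        total_sq += payoff * payoff
    mean = total / n_paths
    var = total_sq / n_paths - mean * mean
    return mean, math.sqrt(var / n_paths)

for n in (1_000, 4_000, 16_000):
    price, se = basket_call_mc(n, d=10)
    print(f"n={n:6d}  price={price:6.3f}  std.err={se:.3f}")
```

Quadrupling the sample size roughly halves the reported standard error, and repeating the experiment with a different $d$ leaves the error behaviour essentially unchanged.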

It is often assumed that financial markets are frictionless. This assumption serves well in some instances. However, in bond markets it prevents researchers from obtaining an estimate of the term structure of interest rates (TS). This is because bond markets are illiquid and bond prices are observed with errors. These errors are so large that they lead to violations of no-arbitrage conditions in the market. Researchers have had to settle for a second-best estimate of the TS (obtained via regression) at the cost of the economically unrealistic assumption of symmetric market frictions. The true shape of market frictions, however, is not known and is generally a highly complex issue. The methodology developed here avoids making detrimental assumptions. It facilitates empirical investigation of the shape of the market frictions and of the TS, which are simultaneously imputed from market data. Our methodology is based on no-arbitrage arguments and the assumption that in "efficient" markets frictions will minimize the maximum net arbitrage. The empirical investigation is performed in the Canadian and the US markets. In both markets it is found that market frictions are not symmetric, and the estimates of the TS produced via regression and via the methodology developed here differ significantly. This difference is more pronounced in the Canadian market, which corresponds to the fact that the US Treasury market is much more liquid than the Canadian bond market.

In this talk we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce ``uncertainty structures'' for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures used to estimate the market parameters. We demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
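
A minimal numerical sketch of the key reduction behind such reformulations, assuming an ellipsoidal uncertainty set for the mean returns (the numbers are invented for illustration, not estimated from data): the worst-case expected return admits a closed form, which the code checks against a brute-force scan of the ellipsoid's boundary.

```python
import math

# Two-asset sketch of the robust-return reduction used in SOCP
# reformulations: over the ellipsoid { mu_hat + S d : ||d|| <= kappa },
# the worst-case portfolio return mu' w equals mu_hat' w - kappa ||S' w||.
# (mu_hat, S, kappa, w are illustrative, not from the talk.)
mu_hat = [0.10, 0.06]
S = [[0.04, 0.01],       # "square root" of the estimation-error covariance
     [0.01, 0.03]]
kappa = 1.5              # confidence-region radius
w = [0.7, 0.3]           # candidate portfolio weights

St_w = [S[0][0] * w[0] + S[1][0] * w[1],
        S[0][1] * w[0] + S[1][1] * w[1]]       # S' w
closed_form = (mu_hat[0] * w[0] + mu_hat[1] * w[1]
               - kappa * math.hypot(*St_w))

# Brute force: scan perturbation directions on the ball's boundary.
worst = float("inf")
for i in range(100_000):
    a = 2.0 * math.pi * i / 100_000
    d = [kappa * math.cos(a), kappa * math.sin(a)]
    mu = [mu_hat[j] + S[j][0] * d[0] + S[j][1] * d[1] for j in range(2)]
    worst = min(worst, mu[0] * w[0] + mu[1] * w[1])

print(round(closed_form, 6), round(worst, 6))
```

Because the worst case is the nominal return minus a norm penalty, maximizing it over $w$ is a second-order cone program, which is the tractability result the talk exploits.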

One of the central questions in financial economics is the determination of asset prices, such as the value of a stock. Over the past three decades, research on this topic has converged on a concept called the "state-price density". However, a puzzle has arisen. On the one hand, Cox, Ingersoll, and Ross (1985) and others argue that the ratio of the state-price density to the statistical probability density, which is commonly known as the pricing kernel, should decrease monotonically as the aggregate wealth of an economy rises. On the other hand, recent empirical work on options on the S&P 500 index suggests that, for a sizable range of index levels, the pricing kernel is increasing instead of decreasing.

We investigate theoretical explanations for this puzzle. Our existing work has ruled out some alternative hypotheses, such as data imperfections and methodological problems. We provide a representative agent model where volatility is a function of a second momentum state variable. This model is capable of generating the empirical patterns in the pricing kernel. We estimate the model through GMM.

We study the investment optimization problem when the horizon is a random time. We present a martingale approach and a dynamic programming approach. In the case of a CRRA utility function, we prove that the problem can be solved via a backward equation.

We establish necessary and sufficient conditions for a linear taxation system to be neutral - within the multi-period discrete time ``no arbitrage'' model - in the sense that valuation is invariant to the exact sequence of tax rates, realization dates as well as immune to timing options attempting to twist the time profile of taxable income through wash sale transactions.

We present a method to aggregate heterogeneous individual beliefs, given a competitive equilibrium in a dynamic complete financial market, into a single "market probability", such that it generates, if shared by all agents, the same marginal valuation of assets by the market as well as by each individual investor.

In this process the market portfolio may have to be scalarly adjusted and we derive from there an adjusted CCAPM formula where the adjustment coefficient summarizes the heterogeneity of the beliefs.

We present new results in a ``general" theory of financial markets with transaction costs, including no-arbitrage criteria and hedging theorems. Our approach is based on a geometric formalism which allows us to incorporate ideas from convex analysis and stochastic optimal control.

The option-adjusted spread (OAS) is a standard measure in the evaluation of mortgage-backed securities. In calculating the OAS, a prepayment model is incorporated to generate prepayment cashflows; however, little attention has been paid to the prepayment process and its associated probability measure. To illustrate the situation, we examine two categories of prepayment models, the proportional hazards and the Poisson regression models. We formulate prepayment processes as point processes and find constraints on the point processes under which these prepayment models are reproduced. The prepayment rate is specified by the intensity of the point processes, and the intensity depends on a probability measure. We caution against implicitly adopting the real-world measure in the OAS approach.
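
A stylized sketch of the proportional-hazards category, assuming a made-up seasoning-ramp baseline hazard and a single rate-incentive covariate (the paper's point-process constraints and its measure-change discussion are not reproduced here):

```python
import math

def prepayment_prob(base_hazard, beta, covariates, t0, t1):
    """Probability that a loan prepays in (t0, t1] under a proportional-
    hazards intensity lambda(t) = lambda0(t) * exp(beta . x).

    base_hazard: function t -> lambda0(t); covariates x are held fixed.
    (A stylized sketch, not the paper's full point-process formulation.)
    """
    mult = math.exp(sum(b * x for b, x in zip(beta, covariates)))
    # integrate the baseline hazard with a simple trapezoidal rule
    n = 1000
    h = (t1 - t0) / n
    integral = 0.0
    for i in range(n):
        a, b = t0 + i * h, t0 + (i + 1) * h
        integral += 0.5 * (base_hazard(a) + base_hazard(b)) * h
    cum = mult * integral            # cumulative hazard over (t0, t1]
    return 1.0 - math.exp(-cum)      # P(prepay before t1 | alive at t0)

# Hypothetical inputs: seasoning ramp lambda0(t) = 0.002 * min(t, 30)
# (t in months) and one covariate (rate incentive) with beta = 1.2.
p = prepayment_prob(lambda t: 0.002 * min(t, 30.0), [1.2], [0.5], 0.0, 12.0)
print(round(p, 4))
```

The probability depends on the intensity through the measure under which the covariate paths are generated, which is exactly why the choice of measure in the OAS calculation matters.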

Risk management of non-maturing liabilities is a relatively unstudied issue of significant practical importance. Non-maturing liabilities include most of the traditional deposit accounts like demand deposits, savings accounts and short time deposits and form the basis of the funding of depository institutions.

In this talk we propose a stochastic three-factor model as general quantitative framework for liquidity risk and interest rate risk management for non-maturing liabilities. It consists of three building blocks: market rates, deposit rates and deposit volumes. Our approach to liquidity risk management is based on the term structure of liquidity, a concept which forecasts for a specified period and probability what amount of cash is available for investment. For interest rate risk management we compute the value, the risk profile and the replicating bond portfolio of non-maturing liabilities using arbitrage-free pricing.

A classical problem in mathematical finance is the computation of optimal portfolios, where optimal here refers to maximization of expected utility from terminal wealth or consumption. In this talk, we consider logarithmic utility because it is intuitive and allows us to obtain explicit solutions even in inhomogeneous, incomplete markets.

We aim to provide an, in some sense, definitive answer to the log-optimal portfolio problem: for the most general semimartingale model, the optimal investment is determined explicitly in terms of the semimartingale characteristics of the price process. Like transition densities and stochastic differential equations, this notion describes the local behaviour of a stochastic process. Previously obtained explicit solutions in the literature are easily recovered as special cases.

Moreover, we consider neutral derivative pricing in incomplete markets.

Prepayment modeling is the dominant consideration of MBS valuation. Projections of future prepayments are typically derived from historical data. Periods of rampant refinancings, such as the fall of 2001, inevitably give rise to "new and improved" prepayment models. "Burnout" (the slow-down following major refinancing activity) is usually modeled by changing parameters.

We introduce a new CLEAN* approach to valuing mortgage-backed securities. "Baseline" prepayments that do not depend on interest rates are modeled using a vector of prepayment speeds, while refinancings are modeled using an option-based approach. The full spectrum of refinancing behavior is described using the notion of refinancing efficiency. There are financial engineers who refinance at just the right time, leapers who do it too early, and laggards who wait too long.

The initial mortgage pool is partitioned into "efficiency buckets", whose sizes are calibrated to market prices. The composition of the seasoned pool is then determined by the amount of excess refinancings over baseline prepayments. Leapers are eliminated first, then financial engineers, and finally laggards. As the pool ages, its composition gradually shifts towards laggards, and this automatically accounts for burnout.

A distinguishing feature of our approach is the rigorous analysis of mortgages. It requires an optionless mortgage yield curve that is not explicitly observable but is implied by market data. Mortgage rates and MBS rates are represented as two perfectly correlated lattices: one determines mortgage refinancings; the other values the MBS. This formulation allows for recursive valuation that is orders of magnitude faster than conventional Monte Carlo analysis, resulting in increased accuracy and superior performance.

*patent pending

We analyze the implications of a fixed fraction of assets under management fee, the most commonly used fee by mutual funds, on portfolio choice decisions in a continuous time model. In our model, the investor has a log utility function and is allowed to dynamically allocate his capital between an actively managed mutual fund and a money market account. The optimal fund portfolio is shown to be the one that maximizes the market values of the fees received, and is independent of the manager's utility function. The presence of dynamic flows induces "flow hedging" on the part of the fund, even though the investor has log utility. We predict a positive relationship between a fund's proportional fee rate and a fund's volatility. This is a consequence of higher fee funds holding more extreme equity positions. However, the overall dollar amount of equity held by a fund can be independent of the fee rate, as a higher fee also implies that investors allocate a smaller fraction of their wealth to the fund. While both the fund portfolio and the investor's trading strategy depend on the proportional fee, the equilibrium value functions do not. Finally, we show that our results hold even if, in addition to trading the fund and the money market account, the investor is allowed to directly trade some of the risky securities, but not all.

Network commodities, e.g. point-to-point bandwidth capacity, are commodities whose substitution possibilities can be described using a network graph. This means that the price dynamics of the individual commodities influence one another according to the network structure. We describe an adapted form of the HJM model, the network-HJM model, which directly includes the graph of substitution possibilities in the forward price dynamics. In analogy with the standard HJM model, a drift term is required to maintain the martingale property of the forward prices, but in this case it relates to the effects of the network structure. We consider the network structure as a pricing opportunity and demonstrate option designs which are separately sensitive to the existence of the network and to the details of its topology.

We consider an investor who has sold a contingent claim and intends to minimize the maximal expected weighted shortfall. Here, the maximum is taken over a family of models. We call the associated minimizing strategy robust-efficient. The problem to determine a robust-efficient strategy is closely related to the statistical problem of testing a composite hypothesis against a composite alternative. The solution to this statistical testing problem is provided on a general level by means of a least-favorable pair. We apply these results to derive the robust-efficient strategy for a class of Binomial-models with uncertain transition probabilities and for a class of generalized Black-Scholes models where volatility is subject to a random jump with uncertain mean and variance.

It is an interesting question to analyse the stochastic nature of long-term rates in interest rate markets. Dybvig, Ingersoll, and Ross (1996) show that long forward and zero-coupon rates can never fall. In their proof they implicitly use an ``ergodicity'' assumption, which is economically reasonable but need not hold in an arbitrage-free interest rate model. We prove, without any additional assumption, that long forward rates can never fall, if they exist.

We investigate portfolio problems consisting of maximizing the expected terminal wealth under the constraint of an upper bound for the risk, where we measure risk by the variance, but also by the Capital-at-Risk (CaR). The solution of the mean-variance problem has the same structure for any price process which follows an exponential Lévy process. The mean-CaR problem involves a quantile of the corresponding wealth process of the portfolio. We derive a weak limit law for its approximation by a simpler Lévy process, often the sum of a drift term, a Brownian motion and a compound Poisson process. Certain relations between a Lévy process and its stochastic exponential are also investigated.

The inability to predict the earnings of growth stocks, such as biotechnology and internet stocks, leads to the high volatility of share prices and difficulty in applying the traditional valuation methods. This paper attempts to demonstrate that the high volatility of share prices can nevertheless be used in building a model that leads to a particular size distribution, which can then be applied to price a growth stock relative to its peers. The model focuses on both transient and steady state behavior of the market capitalization of the stock, which in turn is modeled as a birth-death process. In addition, the model gives an explanation to an empirical observation that the market capitalization of internet stocks tends to be a power function of their relative ranks.

In this paper we consider the optimization problem of an agent who wants to maximize the total expected discounted utility from consumption over an infinite horizon. The agent is under obligation to pay down a debt at a fixed rate until he/she declares bankruptcy. At that point, after paying a fixed cost, the agent will be able to keep a certain fraction of the present wealth, and the debt will be forgiven. The selection of the bankruptcy time is at the discretion of the agent.

The novelty of this paper is that at the time of bankruptcy the wealth process has a discontinuity, and that the agent continues to invest and consume after bankruptcy. We show that the solution of a free boundary problem satisfying some additional conditions is the value function of the above optimization problem. Particular examples such as the logarithmic and the power utility functions will be provided, and in these cases explicit forms will be given for the optimal bankruptcy time, investment and consumption processes.

This paper deals with the problem of price formation in a market with asymmetric information and several risky assets. As in Back (1992) and Cho (1997), we consider a model with a single insider, and we extend to a continuous-time framework the multivariate security model of Caballé and Krishnan (1994). We first state a general verification theorem, then we allow the insider to have two kinds of behaviour: risk-neutral, or risk-averse with an exponential utility.

This paper develops in a Brownian information setting an approach for analyzing the nonindifference for the timing of resolution of uncertainty, a question that motivates the stochastic differential utility (SDU) due to Duffie and Epstein (1992). For a class of Backward Stochastic Differential Equations (BSDEs) including SDU, we formulate the information neutrality property as an invariance principle when the filtration is coarser (or finer) and characterize this property. Furthermore, we provide a concrete example of heterogeneity in information that illustrates explicitly the neutrality property for some particular BSDEs.

Open interest in a financial contract is the total number of contracts held long. This information is quoted at the end of each trading day in addition to price and volume. Our paper investigates the risk-sharing rationale for option demand and the resulting shape of the open interest curve in calls across strikes in an equilibrium setup. We argue that skewness of the terminal stock price distribution drives equilibrium demand in options and that the observed shape of the open interest curve is the result of favorable trade-offs between skewness and variance. We explain that open interest curves are sensitive to the distributional assumptions made for the underlying security; an analysis of open interest in addition to price and volume could therefore enrich current empirical studies.

We deal with the minimal initial investment that is needed to hedge an American option when one is allowed to fail with positive probability. We develop general formulas for the problem and give a sharp answer in the case of call options in the Black-Scholes model. Bounds are provided in the general submartingale case. Finally we present a general result on optimal stopping in a restricted setup.

This paper develops a three-factor corporate bond valuation model that incorporates a stochastic default barrier. The default barrier is considered as the bond issuer's liability. A corporate bond defaults when the bond issuer's leverage ratio (liability-to-asset ratio) increases above a predefined default-triggering value. The payoff to the bondholders in case of default is a constant fraction of the value of a default-free security with the same face value as the corporate bond. A closed-form solution of the corporate bond price is derived to obtain credit spreads. The dynamics of the default barrier proposed in the three-factor model is more general than that proposed by Longstaff and Schwartz (1995) and Saà-Requejo and Santa-Clara (1999). The model is capable of producing quite diverse shapes of the term structures of corporate credit spreads. The numerical results show that credit spreads exhibit complex relationships with the parameters of the model.

We propose a new, risk-aware definition of tameness within the model of security prices as Itô processes. We give a new definition of arbitrage and characterize it. We then prove a theorem that can be seen as an extension of the second fundamental theorem of asset pricing, and a theorem for the valuation of contingent claims of the American type. The valuation of European and American contingent claims that we obtain does not require the full range of the volatility matrix. The formulas obtained to price American contingent claims are closer in spirit to a computational approach.

This paper considers the problem of continuous investment of capital in risky assets in a dynamic capital market. The information filtration process and the capital allocation decisions are considered separately. The filtration is based on a Bayesian model for asset prices, and an (empirical) Bayes estimator for current price dynamics is developed from the price history. Given the conditional price dynamics, investors allocate wealth to achieve their financial goals efficiently in the time domain. The price updating and wealth reallocations occur when control limits on the wealth process are attained. A Bayesian fractional Kelly strategy is optimal at each rebalancing, assuming that the risky assets are jointly lognormally distributed. The strategy minimizes the expected time to the upper wealth limit while maintaining a high probability of reaching that goal before falling to a lower wealth limit. The fractional Kelly strategy is a blend of the log-optimal portfolio and cash and is equivalently represented by a negative power utility function. By rebalancing when control limits are reached, the wealth goals approach provides greater control over downside risk and upside growth.
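The fractional Kelly strategy described above, a blend of the log-optimal portfolio and cash, can be sketched for the simplest case of a single lognormal asset. All names and the single-asset Merton-style formula are illustrative assumptions, not the paper's exact multi-asset construction:

```python
def fractional_kelly_weight(mu, r, sigma, f):
    """Risky-asset weight of a fractional Kelly strategy.

    For a single lognormal asset with drift mu, volatility sigma and cash
    rate r, the full Kelly (log-optimal) weight is (mu - r) / sigma**2.
    A fractional Kelly investor holds a fraction f of that position and
    keeps the remainder in cash -- equivalent to a negative power utility.
    """
    kelly = (mu - r) / sigma ** 2
    return f * kelly
```

For example, with an 8% drift, 2% cash rate, 20% volatility and f = 0.5, the investor holds 75% of wealth in the risky asset.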

We present three generic constructions of martingales that all have the Markov property with known and prespecified marginal densities. These constructions are further investigated for the special case when the prespecified marginals satisfy the scaling property, so that the datum for the construction reduces to the density at unit time. Interesting relations with stochastic orders are presented, along with numerous examples of the resulting martingales.

We show that a central planner with two selves, or two "pseudo welfare functions", is sufficient to deliver the market equilibrium that prevails among any (finite) number of heterogeneous individual agents acting competitively in an incomplete financial market. Furthermore, we are able to exhibit a recursive formulation of the two-central-planner problem. In that formulation, every aspect of the economy can be derived one step at a time, by a process of backward induction as in dynamic programming.

We consider a continuous-time short rate model. While a number of methods exist for estimating the parameters of such models, the estimation of possible latent factors (e.g., stochastic volatility) in the model has received relatively little attention.

When the underlying setup is nonlinear and non-Gaussian, the most frequently used extended Kalman filter (EKF) leads to inconsistent estimates of the parameters, though without high bias (de Jong (2000)). We use the Kitagawa-type stochastic filtering algorithm, which provides a method to obtain the likelihood function deterministically, to estimate the model and the unobserved components.

A comparison on simulated data shows that the Kitagawa method provides better results than the EKF for parameter values atypical in financial applications. In cross-sectional estimation, use of the Kitagawa method solves the inconsistency problem of the EKF without the need for numerically more demanding methods such as the efficient method of moments or indirect inference.
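The deterministic likelihood evaluation at the heart of the Kitagawa approach can be illustrated in one dimension. The sketch below assumes a latent Gaussian AR(1) state observed with Gaussian noise; the model, grid and parameter names are illustrative, not the short-rate specification of the paper:

```python
import numpy as np

def grid_filter_loglik(y, phi, q, r, grid):
    # Kitagawa-type deterministic filter for a latent AR(1) state
    # x_t = phi * x_{t-1} + N(0, q), observed as y_t = x_t + N(0, r):
    # discretize the state on a fixed grid, propagate and update the
    # filtering density point by point, and accumulate the log-likelihood.
    grid = np.asarray(grid, dtype=float)
    K = np.exp(-0.5 * (grid[:, None] - phi * grid[None, :]) ** 2 / q)
    K /= K.sum(axis=0, keepdims=True)          # columns: transition kernels
    p = np.full(grid.size, 1.0 / grid.size)    # flat prior over the grid
    loglik = 0.0
    for yt in y:
        p = K @ p                              # prediction step
        lik = np.exp(-0.5 * (yt - grid) ** 2 / r) / np.sqrt(2 * np.pi * r)
        c = float((lik * p).sum())             # c = p(y_t | past observations)
        loglik += np.log(c)
        p = lik * p / c                        # update step
    return loglik
```

Because every step is a deterministic quadrature, the resulting likelihood surface is smooth in the parameters, which is what makes maximum-likelihood estimation feasible without simulation noise.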

We propose to model share prices with exponentials of time-changed $\alpha$-stable Lévy processes. In particular, if $X(t)=S(T(t))$ are logarithmic returns, we find empirical evidence, using high-frequency data, of heavy tailedness of returns in the natural and changed time scales, and of long memory in the process $T(t)$ modelling the so-called business time scale. These results are quite puzzling, as they cannot be explained by any of the subordinated models recently introduced in the financial and mathematical literature. Furthermore, we partially extend this approach to model the joint distribution of returns with a certain class of operator stable laws with diagonal exponent, thus allowing each return to have a different index of tail thickness. The resulting model features a rich dependence structure and is again consistent with prices described by the exponential of a Lévy process. Finally, we discuss estimation and simulation issues related to this type of process.

Using asymptotic expansion techniques, a family of arbitrage free term structure models are constructed that approximate the LIBOR Market Model. The models can be mapped to a Gaussian lattice, or other efficient numerical algorithm. This enables the rapid solution of Bellman equations to price exotic interest rate derivatives. Some analytic option pricing formulae are developed. A two-factor model is demonstrated with mean reversion and non-lognormal dynamics. Approximation errors are shown to be low for vanilla and callable swaptions.

We introduce two general classes of analytically-tractable diffusions for modeling forward LIBOR rates under their canonical measure.

The first class is based on the assumption of forward-rate densities given by a mixture of known basic densities. We consider two fundamental examples: i) a mixture of lognormal densities, and ii) a mixture of densities associated with "hyperbolic-sine" processes. We derive explicit dynamics, existence and uniqueness results for the related SDEs, and closed-form formulas for cap prices.
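In the lognormal-mixture case, the closed-form caplet price is simply a weighted average of Black prices, one per mixture component. A minimal sketch (undiscounted price; the common-mean normalization of the components and all names are illustrative assumptions):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_call(F, K, v):
    # Black (1976) undiscounted call on a forward F with strike K and
    # total volatility v = sigma * sqrt(T)
    if v <= 0.0:
        return max(F - K, 0.0)
    d1 = (log(F / K) + 0.5 * v * v) / v
    return F * norm_cdf(d1) - K * norm_cdf(d1 - v)

def mixture_caplet(F, K, T, weights, vols):
    # Under a lognormal-mixture forward-rate density whose components share
    # the mean F, the caplet price is the weight-averaged Black price.
    return sum(w * black_call(F, K, s * sqrt(T)) for w, s in zip(weights, vols))
```

A two-component mixture with well-separated volatilities already produces a smile-shaped implied-volatility curve when the prices are inverted through the single-volatility Black formula.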

The second class is based on assuming a smooth functional dependence, at expiry, between a forward rate and an associated Brownian motion. This class is highly tractable: it implies explicit dynamics, known marginal and transition densities and explicit option prices at any time. As an example, we analyze the linear combination of geometric Brownian motions with perfectly correlated (decorrelated) returns.

Examples of the implied-volatility curves produced by the considered models are finally shown.

In this paper the role of the convexity hypothesis on preferences is analysed. In particular, it is proved that the set of price functionals viable for the set of monotone and continuous preferences is the same as the set of price functionals viable for the set of preferences that are also convex. This result is given in a weaker form for locally convex, Hausdorff topological vector spaces.

Proving this result allows one to build explicitly the convex preference starting from a non-convex one. In this way a "minimal" preference relation is defined.

By introducing the definition of a sub-gradient for a binary relation, we give a representation of no-arbitrage price functionals.

We give a necessary and sufficient condition for the a.s. existence of a positive integer N such that for all n larger than N, there is a probability measure which minimizes the relative entropy with respect to the empirical distribution associated with the n first variables of a sequence of i.i.d. random variables, under a "moment" constraint.

Under this condition, we prove that a.s. this probability measure converges weakly to the generalized solution of the constrained minimization of the relative entropy with respect to the common law of the i.i.d. random variables.

In the presence of model uncertainty, Bayesian model averaging (BMA) is a useful tool for accounting for model risk. The paper considers the practical implementation of BMA when models are specified only via moment conditions and no further assumptions are made about the parametric specification of the probabilistic structure underlying the data sampling process. The paper proposes calculating Bayes factors using empirical likelihood. Application of the proposed technique to the portfolio choice problem is considered.

There are two basic questions in the asset pricing theory of financial mathematics. The first question is how to price primitives (e.g. risky stocks) and the second is how to price derivatives (e.g. options). In semimartingale-based financial mathematics, primitives are treated as solutions of stochastic differential equations with respect to semimartingales. This method of pricing is backed by arguments based on various forms of the efficient market hypothesis. In the talk we present alternative arguments leading to the pricing of primitives using a solution of the evolution representation problem. The resulting model of pricing of primitives includes the semimartingale-based pricing in a different (pathwise) form.

Also, in the talk we mention two other questions in the context of our model:

(1) a relation between discrete and continuous time models, and

(2) pricing of derivatives.

In the talk we present some concepts in the valuation of portfolio-dependent structures like Collateralized Debt Obligations or Basket Credit Derivatives. The starting point is a structural approach similar to Merton's Asset Value Model. For a small set of underlying credits, as in Credit Derivatives, we model the default time as a first hitting time of a transformation of a multivariate correlated Brownian motion. We discuss three different approaches to match the distribution of the first hitting time with the forward default probability curve. In the second part of the presentation, larger collateral pools, as for CDO or ABS structures, are considered. The term structure of defaults in different tranches of the CDO is analysed and compared with term structures implied by corporate bond ratings. The analysis is based on the approximation of annual losses in the collateral portfolio by different distribution functions.

When supervisors have imperfect information about the soundness of the banks they monitor, they may be unaware of insolvency problems that develop rapidly in the intervals between on-site examinations. This paper analyses the trade-offs that supervisors face between the cost of bank examination and the need to supervise banks effectively. We first characterize the optimal policy of an independent supervisor, both in terms of the frequency of examinations and of the target zones requiring action. We then extend our analysis by making bank supervisors accountable for deposit insurance losses. We find that, while depositor preference laws give supervisors an incentive to be stiffer with problem banks, these laws also lead supervisors to be more lenient with solvent banks, potentially leaving them with more opportunities to take risks.

The statistics that summarise the probability distributions implied from option prices can be used to assess market expectations about future uncertainty, asymmetry and the probability of extreme movements in asset prices. This paper considers implied pdfs with a constant horizon of three months for S&P 500, FTSE 100, eurodollar and short-sterling. A time series analysis of the summary statistics provides some stylised facts about the behaviour of different elements of market expectations, their historical distribution and the relationships between them. The distributions of these measures provide information on past revisions to market expectations including the relative likelihood of upward rather than downward revisions and the extent to which these revisions were large. The similarity and relative stability of alternative measures for each element of market expectations is assessed to select a subset of summary statistics that can sufficiently reflect the information contained in the implied pdfs. Relationships between implied pdf summary statistics and movements in underlying assets are considered. Cross asset and cross country comparisons between the summary statistics series are also useful in revealing relations and/or associations between market participants’ expectations about equity price and interest rate movements. Finally the information content of the implied pdfs for future macroeconomic and financial variables is assessed.

In this paper, we show that, contrary to the claim made in Longstaff, Santa-Clara and Schwartz (2000a,b), discrete string models are not more parsimonious than market models. In fact, they are found to be observationally equivalent. We derive that for the estimation of both a K-factor discrete string model and a K-factor Libor Market Model for N forward rates, the number of parameters that needs to be estimated equals NK - K(K-1)/2, and not K(K+1)/2 and NK, respectively.
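The parameter count above can be checked with a one-line function. The rotational-redundancy comment is the standard factor-analysis argument, offered here as an illustration rather than quoted from the paper:

```python
def string_model_params(N, K):
    # Free parameters in a K-factor model of N forward rates: an N x K
    # loading matrix has N*K entries, but rotating the K factors leaves
    # the implied covariance unchanged, removing K*(K-1)/2 of them.
    return N * K - K * (K - 1) // 2
```

For instance, with N = 10 forward rates and K = 3 factors, both model classes require 30 - 3 = 27 parameters, which is the observational equivalence the paper establishes.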

We study an economic agent who has an exogenously determined initial amount of debt. The agent is equipped with a constant relative risk aversion utility function and a deterministic terminal wealth (before debt interest payments) and faces a debt allocation problem: The choice between fixed interest rate debt or floating interest rate debt. The problem is thus related to the seminal Merton (1969,1971) asset allocation problem. In order to model fixed and floating interest rates we use a simple term structure model based on a Heath-Jarrow-Morton formulation of the Vasicek model.

First, the static case is considered, where no rebalancing of debt is allowed after the initial point in time. Next, the dynamic case is treated where the debt portfolio can be rebalanced continuously at no cost. Numerical examples indicate a surprisingly low increase in welfare, measured by expected utility, in the dynamic case compared to the static case. The optimal debt portfolio in the dynamic case is sensitive to the shape of the initial forward rates and therefore may or may not resemble the static case.

We propose a probabilistic numerical method based on quantization to solve some multidimensional stochastic control problems that arise, $e.g.$, in Mathematical Finance for portfolio optimization purposes. This leads us to consider controlled diffusions in which most components are free of control. The space discretization of this part of the diffusion is achieved by a closest-neighbour projection of the Euler-scheme increments of the diffusion onto some grids. The resulting process is a discrete-time inhomogeneous Markov chain with finite state spaces. The induced control problem can be solved using the dynamic programming formula. {\em A priori} $L^p$-error bounds are produced, and we show that the space-discretization error term is minimal at some specific grids. A simple recursive algorithm is devised to compute these grids by induction, based on a Monte Carlo simulation. A numerical implementation of the method is carried out to solve a mean-variance hedging problem for an underlying stock with stochastic volatility.
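The closest-neighbour projection step can be sketched in one dimension. The function below is an illustrative fragment only; the paper's grids are themselves optimized and multidimensional:

```python
import numpy as np

def project_to_grid(points, grid):
    # Closest-neighbour projection of simulated (Euler-scheme) values onto a
    # finite grid: each point is replaced by the nearest grid point, so the
    # projected process takes only finitely many values and can be treated
    # as a finite-state Markov chain in the dynamic programming recursion.
    points = np.asarray(points, dtype=float)
    grid = np.asarray(grid, dtype=float)
    idx = np.abs(points[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]
```

The quantization error is the distance between each point and its projection; choosing the grid to minimize the $L^p$ norm of this error is what the recursive Monte Carlo algorithm in the paper is for.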

Pricing contingent claims on power presents numerous challenges due to (1) the nonlinearity of power price processes, and (2) time-dependent variations in prices. We propose and implement an equilibrium model in which the spot price of power is a function of two state variables: demand (load or temperature) and fuel price. In this model, any power derivative price must satisfy a PDE with boundary conditions that reflect capacity limits and the non-linear relation between load and the spot price of power. Moreover, since power is non-storable and demand is not a traded asset, the power derivative price embeds a market price of risk. Using inverse problem techniques and power forward prices from the PJM market, we solve for this market price of risk function. During 2000, the market price of risk represented as much as 50 percent of PJM power forward prices for delivery during summer months. This is plausibly due to the extreme right skewness of power prices; this induces left skewness in the payoff to short forward positions, and a large risk premium is required to induce traders to sell power forwards. This huge risk premium suggests that the power market is not fully integrated with the broader financial markets. The data also suggest that power forward prices overreact to current demand shocks.

This paper introduces a benchmark approach for the modelling of continuous, complete financial markets when an equivalent risk neutral measure does not exist. This approach is based on the unique characterization of a benchmark portfolio, the growth optimal portfolio, which is obtained via a generalization of the mutual fund theorem. The discounted growth optimal portfolio with minimum variance drift is shown to follow a Bessel process of dimension four. Some form of arbitrage can be explicitly measured by arbitrage amounts. Fair contingent claim prices are derived as conditional expectations under the real world probability measure. The Heath-Jarrow-Morton forward rate equation remains valid despite the absence of an equivalent risk neutral measure.

We develop methods of risk-sensitive impulsive control theory in order to solve an optimal asset allocation problem with transaction costs and a stochastic interest rate. The optimal trading strategy and the risk-sensitized expected exponential growth rate of the investor's portfolio are characterized in terms of a nonlinear quasi-variational inequality. This problem can then be interpreted as the ergodic Hamilton-Jacobi-Isaacs equation associated with a min-max problem. We use a numerical method based on an extended two-stage Howard-Gaubert algorithm and provide numerical results for the case of two assets and one factor that is a Vasicek interest rate.

Using company-level data from 17 countries that have suffered a currency crisis during the past decade, this paper documents that firms have increasing leverage and declining profitability prior to a crisis. After sorting companies into two groups based on their exchange rate exposure, we show that companies that benefit from currency depreciations have higher leverage, lower earnings-to-revenue ratios and lower interest coverage ratios compared to firms that are harmed by currency depreciations. We also provide evidence that the Asian crisis is different from the previous European and Latin American ones: in Asia firms become more fragile after the crises and their profitability declines further, whereas in Europe and Latin America there are clear signs of recovery.

A new approach to modelling and pricing derivative securities based on many assets will be presented. The ultimate, practical aim is to price such derivatives properly when the underlyings show a volatility smile/skew. Our theory allows one to sample from an entirely new type of dynamics, fully compatible with earlier theory that consistently accounts for the observed volatility surfaces of a range of securities.

Within a local volatility framework we write in an analytic fashion the joint dynamics of the underlying securities affecting the derivative prices. A general treatment of a mixture-of-lognormals dynamical model in a multifactor setting will be presented, along with an operative proposal for using this model to deduce skews on baskets of many stocks. A natural extension to the case of the Libor Market Model would allow one to compute quasi-analytically the swap rates' smile given the smiles of the individual caplets.

We investigate the classical Arbitrage Pricing Theory and look for conditions on the market parameters which ensure the existence of an equivalent risk-neutral measure.

The paper assumes that the implied volatility of options with some fixed expiration is a quadratic function of the moneyness with stochastic coefficients. In addition, the options that are close to being at-the-money are traded without frictions, while options that are away from the money are traded with frictions. The implied convenience yield of an illiquid option is defined as the rate of riskless profit obtained from hedging this option using the liquid at-the-money options as hedging instruments. The paper assumes the convenience yield for the liquid options is zero and derives an explicit formula for the implied convenience yield of the non-liquid options and for the hedging coefficients.

This paper studies the limit distributions of Monte Carlo estimators of diffusion processes. Two types of estimators are examined. The first one is based on the Euler scheme applied to the original processes; the second applies the Euler scheme to a variance-stabilizing transformation of the processes. We show that the transformation increases the speed of convergence of the Euler scheme. The limit distribution of this estimator is derived in explicit form and is found to be non-centered. We also study estimators of conditional expectations of diffusions with known initial state. Expected approximation errors are characterized and used to construct second order bias corrected estimators. These enable us to eliminate the size distortion of asymptotic confidence intervals and to examine the relative efficiency of estimators. Finally, we derive the limit distributions of Monte Carlo estimators of conditional expectations with unknown initial state. The variance-stabilizing transformation is again found to increase the speed of convergence. Our results are illustrated in the context of the dynamic portfolio choice problem. They enable us to develop efficient designs of Monte Carlo estimators of diffusions.
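The effect of a variance-stabilizing transformation on the Euler scheme can be illustrated with geometric Brownian motion, where the transform Y = log S has a constant diffusion coefficient. This is a deliberately simple sketch under assumed GBM dynamics, not the paper's general diffusion setting:

```python
import numpy as np

def euler_terminal_mean(mu, sigma, s0, T, n_steps, n_paths, rng, transform=False):
    # Monte Carlo estimate of E[S_T] for dS = mu*S dt + sigma*S dW, either by
    # an Euler scheme on S itself or by an Euler scheme on the
    # variance-stabilizing transform Y = log S, whose SDE
    # dY = (mu - sigma^2/2) dt + sigma dW has constant diffusion.
    dt = T / n_steps
    dw = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    if transform:
        y = np.log(s0) + np.cumsum((mu - 0.5 * sigma ** 2) * dt + sigma * dw,
                                   axis=1)
        s_T = np.exp(y[:, -1])
    else:
        s = np.full(n_paths, float(s0))
        for k in range(n_steps):
            s = s + mu * s * dt + sigma * s * dw[:, k]
        s_T = s
    return float(s_T.mean())
```

For GBM the transformed scheme is exact in distribution, so the only remaining error is Monte Carlo noise; the untransformed scheme carries an additional discretization error, which is the asymmetry the paper's limit distributions quantify for general diffusions.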

While models with one or two stochastic drivers might price swaptions as well as higher order models, when it comes to hedging, we show that models with three or four factors perform significantly better. We address whether these higher order multifactor models are by themselves capable of explaining and hedging derivative instruments or whether other factors, such as unspanned stochastic volatility, are sufficiently important to warrant explicit inclusion.

The goal of the paper is to propose a family of processes to model electricity prices in deregulated markets. Our class is meant to be broad enough to encompass different regions worldwide. Besides mean reversion, a property they share with other commodities, electricity prices exhibit the unique feature of big spikes, generated by extreme weather conditions or other sources of supply disruption in the absence of storability. To address this key characteristic, we introduce a set of discontinuous semimartingales displaying a feature of jump-reversion while preserving the Markov property. We propose a calibration procedure based on approximate maximum likelihood. Simulation tests investigate the qualitative properties of sample paths and exhibit the quality of the statistical estimation on a database of US power markets.
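The jump-reversion feature can be sketched with an Euler simulation of a mean-reverting diffusion hit by occasional upward spikes. The parameterization below (compound Poisson jumps with exponential sizes) is an illustrative assumption, not the paper's exact class of semimartingales:

```python
import numpy as np

def simulate_spot(x0, kappa, mu, sigma, lam, jump_scale, T, n, rng):
    # dX = kappa*(mu - X) dt + sigma dW + dJ, with J a compound Poisson
    # process (intensity lam, exponential jump sizes with mean jump_scale).
    # A spike pushes the price up; the same mean-reversion term then pulls
    # it back toward mu, producing the characteristic spiky sample paths
    # while keeping the process Markov.
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        jump = rng.exponential(jump_scale) if rng.random() < lam * dt else 0.0
        x[i + 1] = (x[i] + kappa * (mu - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal() + jump)
    return x
```

With a large kappa relative to the jump intensity, spikes decay within a few time steps, which is qualitatively what deregulated power price series display.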

The coherent risk principles are applied to quantify risk in derivatives under optimal hedging. Some theoretical aspects are discussed, and a fairly general dynamic programming solution is described. For convex European options and risk scenarios defined in terms of interval probabilities, computations are further reduced to a very simple recursion. An application on S&P 500 option data illustrates the results.
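The "very simple recursion" for convex payoffs under interval probabilities can be sketched on a binomial tree: at each node the worst-case transition probability is an endpoint of the interval, so only two candidates need checking. This is an illustrative fragment under assumed recombining-tree dynamics, not the paper's full dynamic programming solution:

```python
def interval_tree_value(terminal_payoffs, p_lo, p_hi, disc):
    # Backward recursion on a recombining binomial tree whose up-probability
    # is only known to lie in [p_lo, p_hi].  At every node we take the
    # worst case (here: the maximal hedging cost); for a convex payoff the
    # extremum over the interval is attained at one of its endpoints.
    v = list(terminal_payoffs)
    while len(v) > 1:
        v = [disc * max(p * v[j + 1] + (1.0 - p) * v[j] for p in (p_lo, p_hi))
             for j in range(len(v) - 1)]
    return v[0]
```

When p_lo equals p_hi the recursion collapses to ordinary risk-neutral binomial pricing, so the interval width directly measures the robustness premium.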

Keywords: Coherent Risk Measures; Robust Hedging; Interval Model; Limited Downside Risk; Option Pricing; Delta-hedging; Binomial tree; Incomplete Markets

New results generalizing the work of Jarrow and Yu (2001) are derived. These results concern the modeling of mutually dependent default times of several credit names, as well as the valuation of some related defaultable credit instruments. In addition, we study dependent credit migrations and valuation issues for related credit instruments in an analogous framework.

This paper argues that the profitability of momentum strategies can be tied to the dynamics of firm-specific factors. We frame our argument within a model and test its predictions. We show that momentum can exist when the log market value of equity is increasing and convex (or decreasing and concave) with respect to the log price of the commodity produced by the firm. The addition of growth options will increase the convexity, and thus the profits from a momentum strategy. Costs, on the other hand, decrease convexity and lower momentum. The first contribution of this paper is that our model produces testable hypotheses that are largely borne out in the data. The most convincing evidence in favor of our theory is that, as predicted by the model, a momentum strategy implemented in a subsample of low cost-leverage or high market-to-book firms produces significantly greater returns than a momentum strategy implemented in a subsample of high cost-leverage or low market-to-book firms. The second contribution of this paper is that it ties the microeconomics of the firm to asset pricing. Although momentum might arise under different scenarios, we argue and present evidence that it can be traced directly to firm-specific factors.

We consider distribution-free inference to test for positive quadrant dependence, i.e. for the probability that two variables are simultaneously small (or large) being at least as great as it would be were they independent. Tests for its generalisation in higher dimensions, namely positive orthant dependence, are also analysed. We propose two types of testing procedures. The first procedure is based on the specification of the dependence concepts in terms of distribution functions, while the second procedure exploits the copula representation. For each specification a distance test and an intersection-union test for inequality constraints are developed, depending on the definition of the null and alternative hypotheses. An empirical illustration is given for US and Danish insurance claim data. Practical implications for the design of reinsurance treaties are also discussed.

The paper considers the evolution of portfolio rules in markets with stationary returns and endogenous prices. The ultimate success of a portfolio rule is measured by the wealth share the rule is eventually able to conquer in competition with other portfolio rules. We give necessary and sufficient conditions for portfolio rules to be evolutionary stable. In the case of i.i.d. returns we identify a simple portfolio rule to be the unique evolutionary stable strategy. Moreover we demonstrate that mean-variance optimization is not evolutionary stable while the CAPM-rule always imitates the best portfolio rule and survives.

Models which postulate lognormal dynamics for interest rates which are compounded according to market conventions, such as forward LIBOR or forward swap rates, can be constructed initially in a discrete tenor framework. Interpolating interest rates between maturities in the discrete tenor structure is equivalent to extending the model to continuous tenor. The present paper sets forth an alternative way of performing this extension; one which preserves the Markovian properties of the discrete tenor models and guarantees the positivity of all interpolated rates.

Starting from a general Ito process model with more assets than driving Brownian motions, we study the term structure model endogenously induced by this complete market. In the Markovian diffusion case, we provide the resulting HJM description and point out a link to finite factor models. But the main contribution is the conceptual approach of considering assets and interest rates within one model which is completely specified by the dynamics of the assets alone. This allows endogenous derivations of dynamic relations between assets and interest rates from global structural assumptions (homogeneity and some spherical symmetry) on the market. In particular, we obtain connections between the short rate, risk premia, the numeraire portfolio (the inverse of the pricing kernel in fact) and certain observable indices. The presented work is related to studies of Fisher & Gilles (2000), Kazemi (1992) and E. Platen (1999, 2000).

In this paper we provide solutions to the optimal lifetime consumption-portfolio problem in a possibly incomplete, competitive securities market with essentially arbitrary (continuous) price dynamics, and preferences that can take any (smooth) recursive homothetic form. Homothetic recursive preferences include but are not limited to the usual HARA time-additive expected utility, and the CES specification made popular by Epstein and Zin. Moreover, we also provide a complete solution for a non-homothetic exponential recursive form, generalizing CARA time-additive utility, which can be viewed as a limiting case of homothetic specifications. The securities market considered can be incomplete, in the sense that not all sources of uncertainty are hedgeable (but we do not consider the case of a non-tradeable income stream in this paper). The solution is given in closed form in terms of the solution to a single backward stochastic differential equation (BSDE). We show how to derive the solutions using either the utility gradient (or martingale) approach, or the dynamic programming approach, without relying on any underlying Markovian structure, making clear the relationship of the two approaches. Further imposing a Markov structure on the underlying dynamics reduces the BSDE to a quasilinear partial differential equation that can be tackled numerically, and in some parametric specifications can be further reduced to a simpler system of ordinary differential equations. Applying a technique from Schroder and Skiadas (2002), the solutions also extend readily to incorporate habit formation in the preferences.

Asian options are widely used financial derivatives which provide non-linear payoffs on the arithmetic average of an underlying asset. Their valuation has intrigued finance theorists for over a decade now, even in the Black-Scholes context. Pursuing the Laplace transform approach proposed by Yor and Geman in 1993, this talk discusses our derivation of series and asymptotic expansions for computing benchmark prices for Asian options. Our point of view is that these formulas are consequences of the manifold interrelations this valuation problem has with other central parts of mathematics.

The punchline of the talk is thus as follows. Since the Yor-Geman Laplace transforms are not those of the Asian option prices and, moreover, have been derived only under the restriction that the risk-neutral drift is not less than half the variance, we re-address the notion of Laplace transforms of Asian option prices in the first part of the talk. While we will not discuss how, in joint work with Peter Carr, we have been able to lift the second of these restrictions, we will discuss how ideas of Peter Carr's have salvaged the relevance of the Yor-Geman results for valuing Asian options.

In the second part of the talk we will sketch the ideas, based on methods from complex analysis and special functions, behind our analytic inversion of the Laplace transform, and compare the resulting integral for the Asian option price with Yor's 1992 triple integral for it.

In the third part, we will discuss series and asymptotic expansions for computing our integral, and we will indicate some of the background ideas from the analysis on the Poincaré upper half-plane and modular forms.

In this paper we present a new approach to incorporating dynamic default dependency in intensity-based default risk models. The model uses an arbitrary default dependency structure, specified by the copula of the times of default, which is combined with individual intensity-based models for the defaults of the obligors without losing the calibration of the individual default-intensity models. The dynamics of the survival probabilities and credit spreads of individual obligors are derived, and it is shown that in situations with positive dependence, the default of one obligor causes the credit spreads of the other obligors to jump upwards, as is observed empirically in situations with credit contagion. For the Clayton copula these jumps are proportional to the pre-default intensity. If information about other obligors is excluded, the model reduces to a standard intensity model for a single obligor, thus greatly facilitating its calibration. To illustrate the results they are also presented for Archimedean copulae in general, and Gumbel and Clayton copulae in particular. Furthermore, it is shown how the default correlation can be calibrated to a Gaussian dependency structure of CreditMetrics type.

Keywords: Credit Risk, Default Correlation, Credit Derivatives

JEL Classifications: G13
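The Clayton-copula mechanism described above can be illustrated with a small simulation. The sketch below is not the paper's model (which combines the copula with general individual intensity models); it merely samples dependent default times via the standard gamma-frailty representation of the Clayton copula, with hypothetical constant intensities.

```python
import numpy as np

def clayton_default_times(theta, lambdas, n_sims, rng):
    """Sample default times whose dependence is a Clayton(theta) copula,
    using the gamma-frailty (Marshall-Olkin) representation.
    Marginals are exponential with constant intensities lambdas[i]."""
    k = len(lambdas)
    # Shared frailty V ~ Gamma(1/theta, 1); its Laplace transform
    # psi(s) = (1 + s)^(-1/theta) matches the Clayton generator inverse.
    v = rng.gamma(1.0 / theta, 1.0, size=(n_sims, 1))
    e = rng.exponential(1.0, size=(n_sims, k))
    u = (1.0 + e / v) ** (-1.0 / theta)       # Clayton-dependent uniforms
    # Invert the marginal survival function P(tau_i > t) = exp(-lambda_i t)
    return -np.log(u) / np.asarray(lambdas)

rng = np.random.default_rng(0)
taus = clayton_default_times(theta=2.0, lambdas=[0.02, 0.03], n_sims=50000, rng=rng)
```

For the Clayton copula, Kendall's tau equals theta/(theta + 2), so theta = 2 corresponds to strong positive dependence and clustered default times.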

This paper is a contribution to the valuation of derivative securities in a stochastic volatility framework. The derivatives to be priced are of European type and their payoff can depend in general on both the level of the stock and the volatility at expiration.

A utility-based method is first developed which yields the price by comparing the maximal expected utilities with and without trading the derivative. The price turns out to be the solution of a certain quasilinear partial differential equation, with the nonlinearity reflecting the unhedgeable risk component. In general, the indifference price quasilinear PDE does not have a closed-form solution. Therefore, it is desirable to obtain bounds that carry some natural insight, as well as numerical and asymptotic approximations to the indifference price.

Two sets of bounds on the indifference prices are deduced. The first pair is a direct consequence of the monotonic behavior of the nonlinearities with respect to the square of the correlation. The second set of bounds involves more sophisticated analysis; these bounds are still derived from appropriate sub- and supersolutions to the pricing equation, which are constructed by the so-called sequential risk factorization method.

The problem is also analyzed by asymptotic methods in the limit of the volatility being a fast mean reverting process. The analysis relates the utility based valuation and the no arbitrage approach in such incomplete markets.

This paper first investigates the dynamics of implied probability density functions (PDFs) and of implied cumulative distribution functions (CDFs). Subsequently, it presents new algorithms for simulating the evolution of these probabilities through time. Understanding the dynamics of implied distributions is essential for effective risk management, for ''smile-consistent'' arbitrage pricing with stochastic volatility, and for economic policy decisions. We investigate the number and shape of the shocks that drive implied PDFs and CDFs by applying Principal Components Analysis (PCA). Under a variety of criteria, two components are identified. A Procrustes-type rotation is performed on the CDF components and reveals an intuitive interpretation: the first component affects the location of the implied distribution while the second affects its variance and kurtosis.

The proposed algorithms are arbitrage-free and they can be implemented easily. The only inputs that they require are the known initial implied distribution and the PCA output. An out-of-sample application for Value-at-Risk (VaR) purposes is provided. The application reveals that the algorithms forecast accurately the range within which the future VaR level will lie.

JEL classification: G11, G12, G13.

Keywords: Implied Probability Density Function, Implied Cumulative Distribution Function, Principal Components Analysis, Monte Carlo Simulation, Value-at-Risk.
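The PCA step can be sketched on synthetic data. The snippet below is only illustrative, assuming a hypothetical panel of implied CDFs driven by a location shock and a scale shock; it extracts components of the daily CDF changes via an SVD, as one would before the rotation step.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
# Hypothetical panel: 250 daily implied CDFs on a 20-point moneyness grid,
# driven by a location shock and a scale shock (normal shapes for simplicity).
grid = np.linspace(-2.0, 2.0, 20)
loc = rng.normal(0.0, 0.1, size=(250, 1))
scale = 1.0 + 0.05 * rng.normal(size=(250, 1))
cdf = 0.5 * (1.0 + np.vectorize(erf)((grid - loc) / (scale * np.sqrt(2.0))))

# Principal components of the daily CDF changes
d = np.diff(cdf, axis=0)
d = d - d.mean(axis=0)
_, s, _ = np.linalg.svd(d, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance share per component
```

Because the synthetic panel has exactly two driving shocks, the first two singular directions should dominate, mirroring the two-component finding reported in the paper.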

In many applications it is necessary to use a simple and therefore highly misspecified econometric model as the basis for decision-making. We propose an approach to developing a possibly misspecified econometric model that will be used as the beliefs of an objective expected utility maximiser. A discrepancy between model and 'truth' is introduced that is interpretable as a measure of the model's value for this decision-maker. Our decision-based approach utilises this discrepancy in the estimation, selection, inference and evaluation of parametric or semiparametric models. The methods proposed nest quasi-likelihood methods as a special case, which arises when model value is measured by the Kullback-Leibler information discrepancy, and they also provide an econometric approach for developing parametric decision rules (e.g. technical trading rules) with desirable properties. The approach is illustrated and applied in the context of a CARA investor's decision problem, for which analytical, simulation and empirical results suggest it is very effective.

In this paper the risk-sensitive portfolio optimization model introduced by Bielecki and Pliska is studied. It is assumed that the prices of assets depend on economic factors. Some factors are completely observed and the others only partially. We want to maximize the risk-sensitized growth rate of the capital process.

The problem is to find a solution to the suitable Bellman equation which corresponds to the problem. For this purpose a discounted risk-sensitized problem is considered. Letting the discount factor go to 1, we show that a suitably normalized value function of the discounted control problem approaches a solution to the Bellman equation of the original problem.

The optimal strategies of the risk-sensitive control problem depend on the current observation of the observable factors and on a normalization of the measure-valued process which provides information about the unobserved factors.

Assuming that the price process is locally bounded and admits an equivalent local martingale measure with finite entropy, we show, without further assumptions, that in the case of exponential utility the optimal portfolio process is a martingale with respect to each local martingale measure with finite entropy. Moreover, the optimal value can always be attained on a sequence of uniformly bounded portfolios.

We model the dynamics $X_t$ of a stock index as a non-lognormally-distributed generalization of geometric Brownian motion. In detail, $X_t$ is supposed to be a weak solution of a one-dimensional stochastic differential equation of the form

$$ dX_t = b(X_t) dt + \sigma X_t dW_t, $$

with volatility $\sigma > 0$ and Brownian motion $W_t$. For a dichotomous Bayesian decision problem concerning the size of the drift $b$, we investigate the (average) reduction of decision risk that can be obtained by observing the path of $X$. We also show some connections with relative entropy and with contingent claim pricing.
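For illustration, a weak solution of this equation can be approximated by an Euler-Maruyama scheme. The drift choice below is hypothetical (an affine b recovers geometric Brownian motion); the sketch is not tied to the decision problem studied in the paper.

```python
import numpy as np

def euler_path(x0, b, sigma, t_grid, rng):
    """Euler-Maruyama discretization of dX = b(X) dt + sigma * X dW."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i] = x[i - 1] + b(x[i - 1]) * dt + sigma * x[i - 1] * dw
    return x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
# Illustrative affine drift b(x) = 0.05 * x recovers geometric Brownian motion
path = euler_path(100.0, lambda x: 0.05 * x, 0.2, t, rng)
```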

This study is a proposal to create financial markets out of environmental improvements. We shall explain how to do it, and why the financial approach is the only one able to stop and invert environmental degradation. We shall concentrate our attention on deforestation and show how to produce effective reforestation. As a model of the actual and future (after applying our approach) situation we consider X(t), a 0-dimensional squared Bessel process with drift, and we justify our choice. To put the project into practice funds are needed, but contrary to an infinity of failed projects, the use of resources would be apolitical, transparent and efficient. In the game of reforestation versus deforestation we formulate the agents' optimization problem for the first move needed to start the project, as well as the global optimization problem. On the other hand, because we aim for the creation of a developed market, we also solve the classical Merton problem with underlying asset X(t).

We present a new and flexible computational approach to derivative hedging. The method is based on the use of least squares to compute option sensitivities. This methodology can be readily applied to a huge class of derivative contracts under the assumptions of market completeness and continuous Markov price processes.

We illustrate this technique with a series of examples. In particular, we test it on plain vanilla and exotic options with both European and American exercise styles, written on a single underlying asset with either constant or stochastic volatility. We show that the numerical accuracy achieved is always greater than or comparable with that of the best simulation and semi-analytic techniques.
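The least-squares idea can be illustrated in a toy Black-Scholes setting, not the authors' implementation: spread the initial spot over an interval, simulate one terminal value per spot, regress the discounted payoffs on the spot by a polynomial least-squares fit, and differentiate the fit to estimate the delta. All parameters below are hypothetical.

```python
import numpy as np
from math import erf, log, sqrt

def bs_delta(s, k, r, sigma, t):
    """Closed-form Black-Scholes call delta, used as a benchmark."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    return 0.5 * (1.0 + erf(d1 / sqrt(2.0)))

rng = np.random.default_rng(0)
k, r, sigma, t = 100.0, 0.05, 0.2, 1.0

# Spread the initial spot over an interval and simulate one GBM terminal
# value per spot.
s0 = rng.uniform(80.0, 120.0, size=200000)
z = rng.normal(size=s0.size)
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)

# Least-squares fit of the pricing function s0 -> price, then differentiate
# the fitted polynomial to obtain the delta at s0 = 100.
coef = np.polynomial.polynomial.polyfit(s0, payoff, deg=4)
delta_ls = np.polynomial.polynomial.polyval(
    100.0, np.polynomial.polynomial.polyder(coef))
```

The regression smooths the Monte Carlo noise, so the derivative of the fitted polynomial is a stable sensitivity estimate without resimulating under bumped inputs.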

In this paper we provide the characterization of all finite-dimensional Heath--Jarrow--Morton models that admit arbitrary initial yield curves. It is well known that affine term structure models with time-dependent coefficients (such as the Hull--White extension of the Vasicek short rate model) perfectly fit any initial term structure. We find that such affine models are in fact the only finite-factor term structure models with this property. We also show that there is usually an invariant singular set of initial yield curves where the affine term structure model becomes time-homogeneous. We further argue that volatility structures other than functionally dependent ones -- such as local state-dependent volatility structures -- cannot lead to finite-dimensional realizations. Finally, our geometric point of view is illustrated by several further examples.

We present two Monte Carlo Euler methods for a weak approximation problem of the HJM term structure model, which is based on Ito stochastic differential equations in infinite dimensional spaces, and prove error estimates useful for the approximation of contingent claims' prices.

These error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling.

Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is mild compared to the work to approximate contingent claims' prices with the Monte Carlo Euler method.

Numerical examples with known exact solution are included in order to show the behavior of the error estimates.

In this paper we analyse the mean-variance hedging approach in an incomplete market under the assumption of additional market information, which is represented by a given, finite set of observed prices of non-attainable contingent claims. Due to no-arbitrage arguments, our set of investment opportunities increases and the set of possible equivalent martingale measures shrinks. Therefore, we obtain a modified mean-variance hedging problem, which takes into account the observed additional market information.

Solving this by means of the techniques developed by Gouriéroux, Laurent and Pham (1998), we obtain an explicit description of the optimal hedging strategy and an admissible, constrained variance-optimal signed martingale measure, that generates both the approximation price and the observed option prices.

We study the consumption-portfolio problem of an investor who has access to multiple risky stocks and faces capital gain taxes. The optimal investment strategy depends on the correlation between the stocks as well as on whether the investor is allowed to sell short or use derivatives. With multiple stocks that are highly correlated, we find instances in which the investor optimally holds an undiversified equity portfolio. When short selling is allowed, the investor shorts stocks for two reasons: ex post, to imperfectly short the box in order to reduce aggregate equity exposure; and ex ante, to minimize future tax-induced trading costs. We term this second strategy a trading flexibility option. A similar trading flexibility option is implicitly present if the investor cannot short but is allowed to buy put options. We find, somewhat surprisingly, that for investors who are prohibited from shorting, the benefit of trading separately in the two stocks is not economically significant, while, on the other hand, the welfare benefit is significant for investors who can short at a low cost and for those who can trade in derivatives.

Many of the assumptions used by Black & Scholes (1973) and Black (1976) to derive option pricing formulae are violated. Extensive empirical evidence has shown that most financial time series reject the assumption of Geometric Brownian Motion (GBM). Volatility is not constant, nor is the price path continuous, as many markets display dynamics consistent with jump processes. Furthermore, other “perfect market” assumptions such as the absence of transaction costs and perfect divisibility of the underlying assets are also violated. The introduction of unspanned sources of risk (and frictions) implies that option prices include a risk premium. Prima facie evidence of such a risk premium is the implied volatility smile patterns that have been extensively identified and discussed in the literature.

A non-trivial problem is isolating the risk premium from implied volatility smiles. It is clear that a logical starting point would be to consider alternative models to Black & Scholes (1973). However, there is no consensus regarding what the appropriate model is and what the nature of the risk premium would be if such a model includes non-traded sources of risk. Some authors have suggested a stochastic volatility risk premium, some have argued for jump risk, and still others point to market frictions. This paper isolates the risk premium in options on Bond futures by first considering whether a rich class of alternative models that include jumps and stochastic volatility can account for the empirical dynamics of Bond futures. Bond futures and options are examined for two reasons. Firstly, these are among the most active exchange-traded derivatives on interest rates, and secondly, much of the work in this area has concentrated on Stock Index option markets. Thus, this work extends work by Bates (2000) and Pan (2001) to Bond Futures and Options markets.

We find that single- and multi-factor stochastic volatility models with jumps can both explain the empirical regularities we observed in Bond futures. Secondly, assuming a feasible change of measure with no risk premium, options consistent with this model are priced. Such option prices, consistent with the dynamics of the underlying markets, are compared to actual options on these markets and a risk premium is isolated. It is observed that for three Bond futures and options markets -- US T-Bonds, UK Gilts and German Bunds -- the nature of the risk premium is extremely similar regardless of the choice of the underlying stochastic volatility model. While this research does not attempt to assign economic significance to this risk premium, such consistency suggests that market agents are assuming a similar functional form for the risk premium, and may provide insights into models of the risk premium that have recently been proposed.

Given a multi-dimensional Markov diffusion $X$, the Malliavin integration by parts formula provides a family of representations of the conditional expectation $E[g(X_2)|X_1]$. The different representations are determined by some {\it localizing functions}. We discuss the problem of variance reduction within this family. We characterize an exponential function as the unique integrated-variance minimizer among the class of separable localizing functions. For general localizing functions, we provide a PDE characterization of the optimal solution, if it exists. This allows us to draw the following observation: the separable exponential function does not minimize the integrated variance, except in the trivial one-dimensional case. We provide an application to a portfolio allocation problem, by use of the dynamic programming principle.

It is well-known that prices, expressed in units of constant value, of European options on one underlying asset decay with time and are convex in the underlying asset if the contract function is convex. In this article, options on several underlying assets are studied and we prove that if the volatility matrix is independent of time, then the options prices decay with time if the contract function is convex. However, the option prices are no longer necessarily convex in the underlying assets. If a time-dependent volatility is allowed we note that the option prices do not necessarily decay with time. We then formulate conditions on the volatility matrix for convexity to be preserved. These conditions show that even if the price processes are independent, convexity is for instance not preserved if the volatilities are of the form used in the constant elasticity of variance model.

Even if returns are truly forecasted by variables such as the dividend yield, the noise in such a predictive regression may overwhelm the signal of the conditioning variable and render estimation, inference and forecasting unreliable. Unfortunately, traditional asymptotic approximations are not suitable to investigate the small sample properties of forecasting regressions with excessive noise. To systematically analyze predictive regressions, it is useful to quantify a forecasting variable's signal relative to the noisiness of returns in a given sample. We define an index of signal strength, or information accumulation, by renormalizing the signal-noise ratio. The novelty of our parameterization is that this index explicitly influences rates of convergence and can lead to inconsistent estimation and testing, unreliable R2's, and no out-of-sample forecasting power. Indeed, we prove that if the signal-noise ratio is close to zero, as is the case for many of the explanatory variables previously suggested in the finance literature, model based forecasts will do no better than the corresponding simple unconditional mean return. Our analytic framework is general enough to capture most of the previous findings surrounding predictive regressions using dividend yields and other persistent forecasting variables.
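The weak-signal phenomenon is easy to reproduce on synthetic data. In the sketch below, returns truly depend on a persistent predictor, but the slope is tiny relative to the return volatility, so the in-sample R² is negligible; the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # twenty years of monthly observations

# Persistent predictor (a dividend-yield-like variable), AR(1) with
# autocorrelation 0.98.
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.98 * x[i - 1] + rng.normal(0.0, 0.1)

beta = 0.002                                              # tiny signal
ret = beta * x[:-1] + rng.normal(0.0, 0.04, size=n - 1)   # noisy returns

# OLS slope and in-sample R^2 of the predictive regression
xc = x[:-1] - x[:-1].mean()
b_hat = xc @ (ret - ret.mean()) / (xc @ xc)
resid = ret - ret.mean() - b_hat * xc
r2 = 1.0 - resid.var() / ret.var()
```

With this signal-noise ratio the population R² is below a tenth of a percent, so the fitted slope is dominated by sampling noise and the conditional forecast barely improves on the unconditional mean return.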

We present a new framework for fractional Brownian motion in which processes with all indices can be considered under the same probability measure. Our results extend recent contributions by Hu, Øksendal, Duncan, Pasik-Duncan and others. As an application we develop option pricing in a fractional Black-Scholes market with a noise process driven by a sum of fractional Brownian motions with various Hurst indices.
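For concreteness, a path of fractional Brownian motion with a given Hurst index can be sampled exactly by Cholesky factorization of its covariance kernel. This generic sketch is independent of the paper's measure-theoretic construction.

```python
import numpy as np

def fbm_sample(hurst, t_grid, rng):
    """Exact sampling of fractional Brownian motion on a positive time grid
    via Cholesky factorization of the covariance
    Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.asarray(t_grid, dtype=float)
    h2 = 2.0 * hurst
    cov = 0.5 * (t[:, None]**h2 + t[None, :]**h2
                 - np.abs(t[:, None] - t[None, :])**h2)
    # Small jitter keeps the factorization numerically stable
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
    return chol @ rng.normal(size=len(t))

rng = np.random.default_rng(0)
t = np.linspace(0.01, 1.0, 100)
path = fbm_sample(0.7, t, rng)   # H = 0.7: persistent increments
```

Since Var(B_t) = t^{2H}, sampled terminal values at t = 1 should have unit variance for any Hurst index.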

The paper gives a necessary and sufficient condition that must be satisfied by a model of the term structure of interest rates in order that an equilibrium economy exists with a bond market governed by that model. The condition is that the term structure model is arbitrage-free and that an optimal strategy of any risk-averse bond market investor has a finite expected utility. It is shown on the example of several models proposed in the literature that this condition is not always satisfied.

The focus of our paper is on the implications of model uncertainty for the cross-sectional properties of returns. We perform our analysis in the context of a tractable single-period mean-variance framework. We show that there is an uncertainty premium in equilibrium expected returns on financial assets and study how the premium varies across the assets. In particular, the cross-sectional distribution of expected returns can be formally described by a two-factor model, where expected returns are derived as compensation for the asset's contribution to the equilibrium risk and uncertainty of the market portfolio. In light of the large empirical literature on the cross-sectional characteristics of asset returns, understanding the implications of model uncertainty even in such a simple setting would be of significant value. By characterizing the cross-section of returns we are also able to address some of the observational equivalence issues raised in the literature. That is, whether model uncertainty in financial markets can be distinguished from risk, and whether uncertainty aversion at an individual level can be distinguished from risk aversion.

The actual economy is large and complex, and one might argue that not every economic agent is sufficiently sophisticated to know in advance every economic consequence of a chosen set of actions. We study competitive equilibrium in a model in which some or all agents do not know about some states of nature. Over time, certain agents will be surprised if the economy evolves inconsistently with their endowed state spaces.

We analyze the value of an investor's real option to delay the decision to fire a poorly performing portfolio manager. It is shown in a Bayesian decision framework that the value of delaying the firing of a poorly performing manager, until some uncertainty over the manager's quality is resolved, partially explains empirical observations of investor tolerance of very poor portfolio performance before changing manager (Sirri and Tufano 1998, Goetzmann and Peles 1997), and further why mutual funds with shorter track records grow more quickly (Ippolito 1992, Chevalier and Ellison 1997). It is also shown that the real option value gives portfolio managers an incentive to inject noise into their performance results. Simulation results for the case of an investor with a 10-year investment horizon demonstrate that the real option value is economically significant, and highlight the dependence of the value of active portfolio management on the cost of changing portfolio manager.

We apply Heston's Stochastic Volatility Model to the valuation of first generation exotic options in Foreign Exchange Markets and compare the prices of one-touch options with market prices as they are commonly derived by exotics traders.

In this article we study arithmetic Asian options when the underlying stock is driven by special semimartingale processes. We show that the inherently path dependent problem of pricing Asian options can be transformed into a problem without path dependency in the payoff function. We also show that the price satisfies a simpler integro-differential equation in the case the stock price is driven by a process with independent increments, Lévy process being a special case.
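As a baseline for the arithmetic-average payoff discussed above, the sketch below prices an arithmetic Asian call by plain Monte Carlo under geometric Brownian motion (a special case of the semimartingale setting; the parameters are illustrative). The transformation that removes path dependence is beyond a snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
m, n = 50, 100000          # monitoring dates, simulated paths
dt = t / m

# Simulate GBM paths and average the price over the monitoring dates
z = rng.normal(size=(n, m))
log_s = np.log(s0) + np.cumsum(
    (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)
asian = np.exp(-r * t) * np.maximum(s.mean(axis=1) - k, 0.0)
european = np.exp(-r * t) * np.maximum(s[:, -1] - k, 0.0)
price_asian, price_euro = asian.mean(), european.mean()
```

Averaging lowers the effective volatility of the payoff, so in this setting the arithmetic Asian call comes out cheaper than the European call with the same strike.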

The problem of continuous-time modeling of the term structure of interest rates is studied in a new framework that is a methodological generalization of the HJM framework \cite{HJM}. The LIBOR market model is the most important example of our framework, because it is not completely described within the HJM framework.

The new framework is introduced so as to admit both the LIBOR market model and the HJM model, and it shows that the LIBOR market model is described as a limit of the sequence of the HJM models. In other words, the set of HJM models is topologically dense in our framework.

In particular, our results show that the LIBOR market model has some financial properties similar to those of the Jamshidian model (1997) (cf. Musiela and Rutkowski (1997)).

We expect that our approach is applicable to constructing other financial models which may not fit within the HJM framework, for example a credit model based on LIBOR.

This paper provides an alternative approach to Duffie and Lando [6] for obtaining a reduced-form credit risk model from a structural model. Our model provides an explicit formula for the default intensity based on an Azéma martingale, and we use excursion theory of Brownian motion to price risky debt.

Expected utility maximisation problems in mathematical finance lead to a generalisation of the classical definition of entropy. It is demonstrated that a necessary and sufficient condition for the second law of thermodynamics to operate is that any one of the generalised entropies should tend to its minimum value of zero.

Increasing competition has led the insurance industry to introduce more complicated and innovative policies. Modern policies come with guarantees on the minimum rate of return, bonus provisions and surrender options. We study the problem of asset and liability management for participating insurance policies with guarantees. The asset and liability management problem results in a nonlinear optimization model. We discuss this problem in terms of the policies offered by various insurers, and present numerical results. In particular, we discuss the critical issue of the structure of the bonus policy, and how insurers can use the results of the ALM problem to make bonuses attractive to investors without eroding shareholder value.

We study the convergence rate of the numerical approximation of the quantiles of the marginal laws of $(X_t)$, where $(X_t)$ is a diffusion process, when one uses a Monte Carlo method combined with the Euler discretization scheme. Our convergence rate estimates are obtained under two sets of hypotheses: either $(X_t)$ is uniformly hypoelliptic, or the inverse of the Malliavin covariance of the marginal law under consideration satisfies a Condition (M). We then show that Condition (M) seems to be widely satisfied in applied contexts. We particularly study two financial applications: the computation of the VaR of a portfolio, and the computation of a model risk measurement for the Profit and Loss of a misspecified hedging strategy. In addition, we make the constants in the convergence rate estimate precise by proving an accurate lower bound for the density of the Profit and Loss.
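The Monte Carlo Euler quantile computation can be sketched for a one-dimensional example where the exact marginal law is known, so the discretization and statistical errors can be checked directly; the parameters are illustrative.

```python
import numpy as np
from math import exp, sqrt

rng = np.random.default_rng(0)
x0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0
steps, n = 100, 200000
dt = t / steps

# Euler scheme for dX = mu X dt + sigma X dW
x = np.full(n, x0)
for _ in range(steps):
    x *= 1.0 + mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)

# Empirical 1% quantile of the simulated marginal law (a VaR-style statistic)
q_euler = np.quantile(x, 0.01)

# Exact lognormal quantile for comparison (z_{0.01} = -2.32635...)
q_exact = x0 * exp((mu - 0.5 * sigma**2) * t
                   + sigma * sqrt(t) * (-2.3263478740408408))
```

Comparing q_euler with q_exact separates the time-discretization bias from the finite-sampling error at the quantile, in the spirit of the error decomposition studied in the paper.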

We consider the problem of maximizing expected utility from consumption in a constrained incomplete semimartingale market with a random endowment process, and establish a general existence and uniqueness result using techniques from convex duality. The notion of asymptotic elasticity of Kramkov and Schachermayer is extended to the time-dependent case. By imposing no smoothness requirements on the utility function in the temporal argument, we can treat both pure consumption and combined consumption/terminal wealth problems, in a common framework. To make the duality approach possible, we provide a detailed characterization of the enlarged dual domain which is reminiscent of the enlargement of $L^1$ to its topological bidual $(L^{\infty})^*$, a space of finitely-additive measures. As an application, we treat the case of a constrained It\^ o-process market-model.

A Meyer-Tanaka formula involving weighted local time is derived for fractional Brownian motion and geometric fractional Brownian motion. The formula is applied to the study of the stop-loss-start-gain (SLSG) portfolio in a fractional Black-Scholes market. As a consequence, we obtain a fractional version of the Carr-Jarrow decomposition of the European call and put option prices into their intrinsic and time values.