
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY
Appl. Stochastic Models Bus. Ind. 2008; 24:507–524
Published online 18 February 2008 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/asmb.709

Bayesian parameter inference for models of the Black and Scholes type

Henryk Gzyl, Enrique ter Horst*,† and Samuel W. Malone

Instituto de Estudios Superiores de Administración IESA, Caracas, DF, Venezuela

SUMMARY

In this paper, we describe a general method for constructing the posterior distribution of the mean and volatility of the return of an asset satisfying $dS = S\,dX$ for some simple models of $X$. Our framework takes as inputs the prior distributions of the parameters of the stochastic process followed by the underlying, as well as the likelihood function implied by the observed price history for the underlying. As an application of our framework, we compute the value at risk (VaR) and conditional VaR (CVaR) measures for the changes in the price of an option implied by the posterior distribution of the volatility of the underlying. The implied VaR and CVaR are more conservative than their classical counterparts, since they take into account the estimation risk that arises due to parameter uncertainty. Copyright 2008 John Wiley & Sons, Ltd.

Received 20 September 2007; Revised 21 November 2007; Accepted 10 December 2007

KEY WORDS: Bayesian analysis; Black and Scholes; option pricing; risk-neutral measure; mean reversion

1. INTRODUCTION

In this article, we present a novel method for doing Bayesian inference by constructing the likelihood of a stochastic process using the Radon–Nikodym derivative of the physical measure $P$ with respect to the risk-neutral measure $Q$ used to price options on the underlying. To illustrate the method, we construct the posterior distributions of the parameters in the Ornstein–Uhlenbeck (O–U) and geometric Brownian motion (GBM) models by combining the likelihood function that is implied by the underlying stochastic process with the prior distributions that are specified as the views of the market participant.

The work closest in spirit to ours includes the sequential testing of a precise hypothesis concerning the drift of a Brownian motion as in [1] and the computation of Bayes factors between different models as in [2]. Karolyi [3] and Darsinos and Satchell [4] are perhaps the previous work most closely related to our own.

* Correspondence to: Enrique ter Horst, Instituto de Estudios Superiores de Administración IESA, Caracas, DF, Venezuela.

†E-mail: [email protected], [email protected]


In their articles, the authors use the Bayesian technique of integrating out parameters to derive the posterior distribution in closed form for a European call option when the underlying follows a GBM. To do this, however, they use the sufficiency property of the unbiased estimator of $\sigma^2$ from discretely sampled observations. The method we propose differs crucially in this respect: by obtaining the posterior probability distribution of the model parameters from the underlying, we avoid what may be considered an inconsistency in previous methods, which impose another likelihood by introducing a term that captures observational error. This is important on practical grounds, because the different likelihoods, except in a special case, will lead to different posterior distributions for the parameters, and this affects inference.

In general, the methodology developed in this paper is able to extend and yield the posterior distribution, in closed form, as in the work of Karolyi [3] and Darsinos and Satchell [4]. The method described in the remainder of our paper serves as an illustration of the linkages to be made between the broad areas of mathematical finance, on the one hand, and Bayesian probability, on the other.

The outline of this paper is as follows. Section 2 presents the methodology and describes how to find a likelihood function by the use of the Cameron–Martin–Girsanov formula. Section 3 illustrates how to construct a likelihood function with respect to the risk-neutral measure in order to perform a Bayesian inference in the case of two popular processes: GBM and the O–U process. In Section 4, we show how to compute the posterior expected value at risk (VaR) and conditional VaR (CVaR) once the posterior distribution of $\sigma$ is obtained, and finally compare them with their classical historical simulation counterpart. Section 5 concludes.

2. THE FRAMEWORK

2.1. Change of measure and the likelihood function in option pricing

It is common to use the following form when modeling the underlying $S_t$ of a derivative:

$$S_t = \exp(X_t) \qquad (1)$$

where $X_t$ can be modeled using a variety of different processes.‡ For us, the logarithmic return $X_t$ will be defined on a probability space $(\Omega, \mathcal{F}, P)$, such that the filtration $\mathbb{F} = (\mathcal{F}_t)_{0 \le t \le T}$ is continuous and complete (the null sets of $P$ are contained in $\mathcal{F}_0$). Furthermore, in our specific examples, $dX_t = f(X_t)\,dt + \sigma\,dW_t$ with constant $\sigma \neq 0$; therefore, we can consider the $\mathcal{F}_t$ to be the completions of $\sigma(W_s : s \le t)$ with respect to $P$, the Wiener measure on $\Omega = C[0,T]$. Henceforth, we will refer to $P$ as the real-world probability measure of $S_t$.

The core result of this section, which we will motivate and state formally in what follows, is that the Radon–Nikodym derivative $(dP/dQ|_{\mathcal{F}_t})_{t>0}$ is the likelihood function appropriate for performing inference on the parameters of the process.

We shall denote by $\theta$ the parameters that define the model, and we shall assume that $\theta \in \Theta$, which is taken to be an open subset of some $\mathbb{R}^n$. For example, if we consider $f(x) = \mu x$, then $\theta = (\mu, \sigma)$.

The classical framework for option pricing supposes a European-type option $C(t, S_t, \theta)$ whose payoff $H(S_T)$ depends on our underlying $S_t$. If we assume that the market model is complete and denote by $Q$ the unique risk-neutral measure, then the price of the option can be computed via the following integration:§

‡ See Section A.2 for the simple case when $X_t$ is a jump diffusion.

$$C(t, S_t, \theta) = \exp(-r(T-t))\, E_Q\{H(S_T) \mid \mathcal{F}_t\}$$

Define $Z_t \equiv (dQ/dP|_{\mathcal{F}_t})_{t>0} > 0$. The process $Z_t$ is often called the density process or the Radon–Nikodym derivative in the literature. General integration theory states that, by invoking the Radon–Nikodym theorem, the option price can alternatively be computed using the following formula:

$$C(t, S_t, \theta) = \exp(-r(T-t))\, \frac{E_P\{Z_T H(S_T) \mid \mathcal{F}_t\}}{E_P\{Z_T \mid \mathcal{F}_t\}}$$

We will now proceed in two steps. First, we will recall the results that allow us to obtain a representation for $Z_t$. Second, we will show how to combine prior information on the parameters with the Radon–Nikodym derivative $Z_t$ to compute the option price. As part of the second step, we will see that the Radon–Nikodym derivative is naturally identified with the likelihood function.

If in $S_t = \exp(X_t)$ we assume $X_t$ to be the unique strong solution to $dX_t = f(X_t)\,dt + \sigma\,dW_t$ with continuously differentiable $f(x)$ satisfying $|f(x)| \le k(1+|x|)$, it is a simple computation to verify that the exponential martingale

$$Z_t = \exp\left( \int_0^t g(X_s)\,dW_s - \frac{1}{2}\int_0^t g(X_s)^2\,ds \right), \qquad 0 \le t \le T$$

where $g(x) = (\bar{f}(x) - r)/\sigma$ and $\bar{f}(x) = f(x) - \sigma^2/2$, is such that

$$E_P[Z_t \exp(X_t)] = \exp(X_0 + rt)$$

Define $dQ = Z_T\,dP$. Then $Z_t = (dQ/dP|_{\mathcal{F}_t})_{t>0}$ and the previous identity is equivalent to $E_Q[S_t \exp(-rt) \mid \mathcal{F}_s] = S_s \exp(-rs)$ for $s \le t$, that is, $Q$ is a martingale measure equivalent to $P$. Keep in mind that $P$, $Q$, and $Z$ are functions of the model parameter $\theta$. To understand the role of the likelihood function $Z_t^{-1}$, note that for a prior $\pi(d\theta)$ on the parameter space $\Theta$, the price of a European option having $S_t$ as underlying, averaged over all possible values of $\theta$, is

$$
\begin{aligned}
C(t, S_t) &= \exp(-r(T-t)) \int_\Theta C(t, S_t, \theta)\,\pi(d\theta) \\
&= \exp(-r(T-t)) \int_\Theta \pi(d\theta)\, \frac{E_P\{Z_T H(S_T) \mid \mathcal{F}_t\}}{Z_t} \\
&= \exp(-r(T-t)) \int_\Theta \pi(d\theta) \left(\frac{dP}{dQ}\Big|_{\mathcal{F}_t}\right) E_P\{Z_T H(S_T) \mid \mathcal{F}_t\} \\
&= \exp(-r(T-t)) \int_\Theta g(S_u : u \le t)\,\pi(d\theta \mid \mathcal{F}_t)\, E_P\{Z_T H(S_T) \mid \mathcal{F}_t\}
\end{aligned}
$$

§ For a European call option, the payoff function is equal to $\max(S_T - K, 0)$, where $K$ is the strike price at termination date $T$.


where $\theta \in \Theta$, $\Theta$ is the parameter space, $(Z_t)^{-1} = (dP/dQ|_{\mathcal{F}_t})_{t>0}$, and

$$g(S_u : u \le t) = \int_\Theta \pi(d\theta) \left(\frac{dP}{dQ}\Big|_{\mathcal{F}_t}\right)$$

is the marginal probability distribution of the underlying $S_t$, which is thus a constant given the information $\mathcal{F}_t$. The posterior distribution $\pi(d\theta \mid \mathcal{F}_t)$ comes from a direct application of Bayes' rule:

$$\pi(d\theta \mid \mathcal{F}_t) = \frac{\pi(d\theta) \left(\dfrac{dP}{dQ}\Big|_{\mathcal{F}_t}\right)}{g(S_u : u \le t)}$$

$$\pi(d\theta \mid \mathcal{F}_t)\, g(S_u : u \le t) = \pi(d\theta) \left(\frac{dP}{dQ}\Big|_{\mathcal{F}_t}\right)$$

This important result shows that in order to integrate out the uncertainty related to the governing parameters from the probability distribution of the underlying, one needs to use the likelihood that arises automatically from the specification of the underlying through $(dP/dQ|_{\mathcal{F}_t})$.

Comment 1
The fact that $(Z_t)^{-1}$ is the likelihood function has already been noted in the literature; see, for example, Sections III.3 and X.2 of Jacod and Shiryaev [5].

This fact helps us to build directly the likelihood function for deriving posterior parameter distributions that can be further used to calibrate theoretical option price formulas that are functions of the parameter vector $\theta$. In the following section, we illustrate how to use Comment 1 with some common models used in mathematical finance.
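As a minimal numerical sketch of this mechanism (not part of the original paper), the posterior $\pi(d\theta \mid \mathcal{F}_t)$ can be approximated on a finite grid of parameter values by weighting the prior with the likelihood $(Z_t)^{-1}$ evaluated on the observed path. The function z_inv below is a hypothetical stand-in for whichever model-specific Radon–Nikodym derivative is being used (a concrete GBM version is sketched in Appendix A.1).

import numpy as np

def grid_posterior(theta_grid, prior_pdf, z_inv):
    # Approximate pi(theta | F_t) on a finite grid of parameter values.
    #   theta_grid: iterable of candidate parameter values
    #   prior_pdf:  callable returning the prior density pi(theta)
    #   z_inv:      callable returning (dP/dQ | F_t)(theta) for the observed path
    weights = np.array([prior_pdf(th) * z_inv(th) for th in theta_grid], dtype=float)
    return weights / weights.sum()   # normalized posterior weights on the grid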

3. PARAMETER POSTERIORS FOR GBM AND THE O–U PROCESSES

In this section, we will present two examples illustrating the theory developed above. In the first example, we will conduct a Bayesian parameter inference assuming a GBM for the process. In the second example, we will do the same in the case that the underlying follows an O–U process. GBM was the stochastic process originally proposed by Black and Scholes for the modeling of equity prices and is a tractable, if not necessarily the most descriptively accurate, process for this purpose. The O–U process, on the other hand, allows for the modeling of mean reversion, which is an important feature of certain time series, such as interest rates and commodity prices.

3.1. Geometric Brownian motion

GBM is given as follows, where $X_t$ satisfies

$$dX_t = \mu\,dt + \sigma\,dW_t \qquad (2)$$


The parameter vector of interest is given by $\theta = (\mu, \sigma)$, and knowing the value of $\sigma$ allows one to compute the value of the option given by the Black and Scholes formula. In order for us to compute the posterior probability distributions of both $\sigma$ and $\mu$, we need to find the likelihood function of $\theta$. Given the form of GBM and the normality assumption, the Radon–Nikodym derivative $(dP/dQ|_{\mathcal{F}_t})_{t>0}$ can be further decomposed into the product of $(dP/d\lambda|_{\mathcal{F}_t})_{t>0}$ times $(dQ/d\lambda|_{\mathcal{F}_t})^{-1}_{t>0}$, where $\lambda$ is the Lebesgue measure. The log returns of the stock price are, therefore, given by the physical measure $P$ with respect to the Lebesgue measure $\lambda$. Therefore,

$$\frac{\log(S_t/S_0)}{t} \sim N\left( \mu - \frac{\sigma^2}{2},\ \frac{\sigma^2}{t} \right)$$

thus the conditional likelihoods for $\mu$ and $\sigma^2$ in the GBM model are proportional to (see the Appendix)

$$L\left(\mu \,\Big|\, \log\frac{S_t}{S_0}, \sigma^2\right) \propto \exp\left\{ -\frac{t}{2\sigma^2}\left( \frac{\log(S_t/S_0)}{t} - \left(\mu - \frac{\sigma^2}{2}\right) \right)^2 \right\} \qquad (3)$$

$$L\left(\sigma^2 \,\Big|\, \log\frac{S_t}{S_0}, \mu\right) \propto (\sigma^2)^{1/2-1} \exp\left\{ -\frac{1}{2}\left( \frac{t\left(\mu - \frac{\log(S_t/S_0)}{t}\right)^2}{\sigma^2} + \frac{t\sigma^2}{4} \right) \right\} \qquad (4)$$

The conditional likelihood of $\sigma^2$ given $\mu$ is proportional to a generalized inverse Gaussian with parameters $\lambda$, $\delta$, and $\gamma$ (in short, GIG($\lambda, \delta, \gamma$)), whose density is equal to

$$f(x \mid \lambda, \delta, \gamma) = \left(\frac{\gamma}{\delta}\right)^{\lambda} \frac{1}{2K_{\lambda}(\delta\gamma)}\, x^{\lambda-1} \exp\left\{ -\frac{1}{2}\left( \frac{\delta^2}{x} + \gamma^2 x \right) \right\} \qquad (5)$$

where we use the same parametrization as in [6]. We note that Equation (4) for the conditional variance is proportional to a GIG($\lambda, \delta, \gamma$) with parameters equal to

$$\lambda = \frac{1}{2}, \qquad \delta^2 = t\left( \mu - \frac{\log(S_t/S_0)}{t} \right)^2$$

and $\gamma^2 = t/4$, and that the conditional posterior distribution of $\mu$ given $\sigma^2$ is normally distributed as

$$N\left( \frac{\log(S_t/S_0)}{t} + \frac{\sigma^2}{2},\ \frac{\sigma^2}{t} \right)$$

We state these results formally, for the appropriate prior distributions in the following two lemmas.

Lemma 1
When choosing flat (improper¶) uniform prior distributions on $\mathbb{R}$ and $\mathbb{R}^+$ for $\mu$ and $\sigma^2$ in the GBM model, respectively, their posterior conditional distributions are

$$\pi\left(\mu \,\Big|\, \sigma^2, \log\frac{S_t}{S_0}\right) = N\left( \frac{\log(S_t/S_0)}{t} + \frac{\sigma^2}{2},\ \frac{\sigma^2}{t} \right)$$

and

$$\pi\left(\sigma^2 \,\Big|\, \mu, \log\frac{S_t}{S_0}\right) = \mathrm{GIG}(\lambda, \delta, \gamma)$$

respectively, where

$$\lambda = \frac{1}{2}, \qquad \delta^2 = t\left( \mu - \frac{\log(S_t/S_0)}{t} \right)^2$$

and $\gamma^2 = t/4$, for uniform priors defined on $\mathbb{R}$ and $\mathbb{R}^+$, respectively.

Proof
See the Appendix. □

Lemma 2
When choosing a normal prior distribution ($\pi(\mu) = N[m, s]$) on $\mathbb{R}$ for $\mu$, and a $\pi(\sigma^2) = \mathrm{GIG}(\lambda, \delta, \gamma)$ prior on $\mathbb{R}^+$ for $\sigma^2$ in the GBM model, their posterior conditional distributions are

$$\pi\left(\mu \,\Big|\, \sigma^2, \log\frac{S_t}{S_0}\right) = N\left( \frac{\log\frac{S_t}{S_0} + \frac{\sigma^2 t}{2} + \frac{m\sigma^2}{s^2}}{t + \frac{\sigma^2}{s^2}},\ \frac{\sigma^2}{t + \frac{\sigma^2}{s^2}} \right)$$

and

$$\pi\left(\sigma^2 \,\Big|\, \mu, \log\frac{S_t}{S_0}\right) = \mathrm{GIG}(\lambda', \delta', \gamma')$$

where $\lambda' = \lambda - \frac{1}{2}$,

$$\delta'^2 = t\left( \mu - \frac{\log(S_t/S_0)}{t} \right)^2 + \delta^2 \qquad \text{and} \qquad \gamma'^2 = \frac{t}{4} + \gamma^2$$

¶ In the case of the normal probability model, the posterior distribution for $\mu$ is proper when using an improper prior (see [7]).

Proof
See the Appendix. □

The posterior distributions for both $\mu$ and $\sigma^2$ are consistent. The proof of this, which is straightforward, is available upon request. These results provide the direct inputs that are necessary to perform Gibbs sampling,∥ a procedure that yields the posterior distributions of the parameters conditional only upon the data. Given these posterior distributions, it is then straightforward to calculate the posterior distribution of the option price conditional upon the data. It is interesting to note that, for the case of GBM considered here, the posterior distribution of the option price will depend only upon the initial and final values of the underlying $S_t$.

∥ See [8] for an explanation regarding the Gibbs sampling scheme.
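As an illustration only (not from the paper), the two conditional posteriors of Lemma 1 can be alternated in a simple Gibbs sampler. The sketch below assumes that scipy's geninvgauss(p, b, scale) corresponds to GIG($\lambda, \delta, \gamma$) via p = $\lambda$, b = $\delta\gamma$, scale = $\delta/\gamma$; this mapping is an assumption that should be checked against the parametrization in [6].

import numpy as np
from scipy.stats import geninvgauss

def gibbs_gbm(log_St_S0, t, n_iter=5000, sigma2_init=0.04, seed=0):
    # Gibbs sampler for (mu, sigma^2) under the flat priors of Lemma 1.
    rng = np.random.default_rng(seed)
    a = log_St_S0 / t
    mu_draws = np.empty(n_iter)
    s2_draws = np.empty(n_iter)
    sigma2 = sigma2_init
    for i in range(n_iter):
        # mu | sigma^2, data  ~  N(log(S_t/S_0)/t + sigma^2/2, sigma^2/t)
        mu = rng.normal(a + sigma2 / 2.0, np.sqrt(sigma2 / t))
        # sigma^2 | mu, data  ~  GIG(1/2, delta, gamma) with
        # delta^2 = t (mu - log(S_t/S_0)/t)^2 and gamma^2 = t/4
        delta = max(np.sqrt(t) * abs(mu - a), 1e-12)   # guard against delta = 0
        gamma = np.sqrt(t) / 2.0
        sigma2 = geninvgauss.rvs(p=0.5, b=delta * gamma,
                                 scale=delta / gamma, random_state=rng)
        mu_draws[i], s2_draws[i] = mu, sigma2
    return mu_draws, s2_draws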

3.2. The O–U process

The O–U-driven asset is given by Equation (1), where $X_t$ satisfies

$$dX_t = \kappa(\mu - X_t)\,dt + \sigma\,dW_t \qquad (6)$$

Here, $\kappa$ is called the rate of mean reversion and $\mu$ is the long-term mean to which the process reverts.

The parameter vector of interest is given by $\theta = (\mu, \kappa, \sigma)$, and knowing $\theta$ allows one to compute the value of the derivative. In order for us to compute the posterior probability distribution of $\theta$, we need to find the likelihood function. A full derivation of how to compute the likelihood function is given in the Appendix. In the same spirit as for GBM, we need to choose a prior distribution for each parameter. Lemmas 3 and 4 state the conditional posterior distributions for the parameter $\mu$ for uniform and normal priors, respectively.

Lemma 3
When choosing flat uniform prior distributions on $\mathbb{R}$ for $\mu$ in the O–U model, the posterior conditional distribution is

$$\pi\left(\mu \,\Big|\, \kappa, \sigma^2, \log\frac{S_t}{S_0}\right) = N\left( \frac{\displaystyle\int_0^t \frac{dS_u}{S_u} - \kappa\int_0^t \left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right) du}{(1+\kappa)t},\ \frac{\sigma^2}{(1+\kappa)t} \right)$$

for a uniform prior defined on $\mathbb{R}$.

Proof
See the Appendix. □
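For readers who want to evaluate the posterior of Lemma 3 on discretely sampled data, the two path integrals can be approximated by sums. The sketch below is illustrative only (not from the paper) and assumes equally spaced observations with spacing dt.

import numpy as np

def ou_mu_posterior_params(S, dt, kappa, sigma2):
    # Discretized posterior mean and variance of mu from Lemma 3 (flat prior),
    # approximating int_0^t dS_u/S_u and int_0^t (log(S_u/S_0) + sigma^2 u/2) du
    # by Riemann sums over the sampled path S.
    S = np.asarray(S, dtype=float)
    t = dt * (len(S) - 1)
    u = dt * np.arange(len(S) - 1)
    int_dS_over_S = np.sum(np.diff(S) / S[:-1])
    int_X = np.sum((np.log(S[:-1] / S[0]) + sigma2 * u / 2.0) * dt)
    mean = (int_dS_over_S - kappa * int_X) / ((1.0 + kappa) * t)
    var = sigma2 / ((1.0 + kappa) * t)
    return mean, var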


Lemma 4
When choosing a normal prior distribution ($\pi(\mu) = N[m, s]$) on $\mathbb{R}$ for $\mu$ in the O–U model, the posterior conditional distribution is

$$\pi\left(\mu \,\Big|\, \kappa, \sigma^2, \log\frac{S_t}{S_0}\right) = N\left( \frac{(1+\kappa)\left(\displaystyle\int_0^t \frac{dS_u}{S_u} - \kappa\int_0^t \left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right) du\right) + \frac{m\sigma^2}{s^2}}{(1+\kappa)t},\ \frac{\sigma^2}{(1+\kappa)^2 t + \frac{\sigma^2}{s^2}} \right)$$

Proof
See the Appendix. □

Lemmas 5 and 6 state the conditional posterior distributions for the parameter $\kappa$ for uniform and normal priors, respectively.

Lemma 5
When choosing flat uniform prior distributions on $\mathbb{R}$ for $\kappa$ in the O–U model, the posterior conditional distribution is

$$\pi\left(\kappa \,\Big|\, \sigma^2, \mu, \log\frac{S_t}{S_0}\right) = N\left( \frac{\mu\displaystyle\int_0^t \frac{dS_u}{S_u} - \mu^2 t}{\mu^2 t + \displaystyle\int_0^t \left(\log\frac{S_u}{S_0}\right)^2 du},\ \frac{\sigma^2}{\mu^2 t + \displaystyle\int_0^t \left(\log\frac{S_u}{S_0}\right)^2 du} \right)$$

for a uniform prior defined on $\mathbb{R}$.

Proof
See the Appendix. □

Lemma 6
When choosing a normal prior distribution ($\pi(\kappa) = N[m, s]$) on $\mathbb{R}$ for $\kappa$ in the O–U model, the posterior conditional distribution is

$$\pi\left(\kappa \,\Big|\, \sigma^2, \mu, \log\frac{S_t}{S_0}\right) = N\left( \frac{\mu\displaystyle\int_0^t \frac{dS_u}{S_u} - \mu^2 t + \frac{m\sigma^2}{s^2}}{\mu^2 t + \displaystyle\int_0^t \left(\log\frac{S_u}{S_0}\right)^2 du + \frac{\sigma^2}{s^2}},\ \frac{\sigma^2}{\mu^2 t + \displaystyle\int_0^t \left(\log\frac{S_u}{S_0}\right)^2 du + \frac{\sigma^2}{s^2}} \right)$$

Proof
See the Appendix. □

Unlike the posteriors for $\mu$ and $\kappa$, the posterior distribution for $\sigma^2$ is mathematically more involved under the O–U model, since by looking at the likelihood function of the O–U process one cannot recognize any familiar probability distribution, unlike in the GBM model, where we could do so for all of the parameters. Because of this, it is difficult to derive the conditional posterior probability distribution for $\sigma^2$ in closed form. Nonetheless, it is straightforward to perform Bayesian inference on the O–U model using the conditional posterior distributions for $\mu$ and $\kappa$ derived above. The standard procedure for doing this is to perform Gibbs sampling using the above conditional posteriors in an algorithm that incorporates a Metropolis step for generating draws from the conditional posterior distribution of $\sigma^2$.
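A schematic version of such a sampler is sketched below; it is not from the paper. The callables draw_mu, draw_kappa, and log_post_sigma2 are hypothetical placeholders: the first two would implement the normal conditionals of Lemmas 3–6, and the third would return the log of the (prior times) O–U likelihood as a function of $\sigma^2$, evaluated from the $(Z_t)^{-1}$ expression in Appendix B.2.

import numpy as np

def metropolis_within_gibbs(draw_mu, draw_kappa, log_post_sigma2,
                            mu0, kappa0, sigma2_0, n_iter=5000,
                            prop_scale=0.1, seed=0):
    # Gibbs steps for mu and kappa (exact conditionals), random-walk
    # Metropolis on log(sigma^2) so that proposals stay positive.
    rng = np.random.default_rng(seed)
    mu, kappa, sigma2 = mu0, kappa0, sigma2_0
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        mu = draw_mu(kappa, sigma2)
        kappa = draw_kappa(mu, sigma2)
        prop = sigma2 * np.exp(prop_scale * rng.standard_normal())
        # log acceptance ratio includes the Jacobian of the log-scale proposal
        log_alpha = (log_post_sigma2(mu, kappa, prop) - log_post_sigma2(mu, kappa, sigma2)
                     + np.log(prop) - np.log(sigma2))
        if np.log(rng.uniform()) < log_alpha:
            sigma2 = prop
        draws[i] = (mu, kappa, sigma2)
    return draws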

In the following section, we provide an application of how the posterior probability distribution of the parameters of the underlying process can be used to derive a posterior probability distribution for the profit and loss of derivative contracts. This is useful for computing measures such as VaR,** which is a commonly used metric of interest to risk managers in commercial and investment banks.

4. APPLICATION TO RISK MANAGEMENT

4.1. Computing Bayesian VaR

The approach we will take for computing the VaR of a derivative, in our case a call option whose value is denoted by $C$, will be to approximate the change in the value of $C$ by the formula

$$\Delta C_i \approx \delta\,\Delta S_i + 0.5\,\gamma\,(\Delta S_i)^2 \qquad (7)$$

where $\Delta C_i$ and $\Delta S_i$ are the price changes from the call option and the underlying, respectively, for the time periods $i = 1, \ldots, n$. The delta ($\delta$) in the above formula is the marginal sensitivity of the call option price to the change of the value in the underlying. The gamma ($\gamma$) in the above formula is related to the second-order effect on the price of the call option due to the changes in the underlying. From the Black and Scholes option pricing model, these quantities are computed as

$$\delta = N(d_1), \qquad \gamma = \frac{N'(d_1)}{S_t \sigma \sqrt{T-t}}$$

where

$$d_1 = \frac{\log\left(\frac{S_t}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)(T-t)}{\sigma\sqrt{T-t}}$$

$r$ is the risk-free rate,†† $N(d_1)$ is the cumulative distribution function for the standard normal, and $N'(d_1)$ is the derivative of $N(d_1)$ with respect to $d_1$. The delta and gamma of an option ($\delta, \gamma$) not only allow us to estimate option price changes due to price changes in the underlying but also allow us to construct portfolios with desirable properties such as 'delta neutrality'. A portfolio is said to have the property of delta neutrality when its value remains unchanged in response to a small change in the value of the underlying security, and this is useful for hedging purposes.
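The Greeks above and the second-order approximation (7) are straightforward to code; the following is a minimal sketch (not from the paper), with payoff-specific inputs such as the strike K and the LIBOR proxy for r left to the user.

import numpy as np
from scipy.stats import norm

def bs_delta_gamma(S, K, r, sigma, tau):
    # Black-Scholes delta and gamma of a European call, tau = T - t.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    delta = norm.cdf(d1)
    gamma = norm.pdf(d1) / (S * sigma * np.sqrt(tau))
    return delta, gamma

def delta_gamma_change(dS, delta, gamma):
    # Second-order approximation (7): dC ~ delta*dS + 0.5*gamma*dS^2.
    return delta * dS + 0.5 * gamma * dS**2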

Note that the formulas above for $\delta$ and $\gamma$ are functions of, among other things, the parameter $\sigma$. They do not depend on $\mu$, as pricing in the Black and Scholes setup does not depend upon the drift of the underlying process. To compute the VaR of the option price using the approximation stated above, we will need to be able to draw from the posterior distribution of $\sigma^2$ in order to compute values of $\delta$ and $\gamma$. In addition to this, we will also need to be able to sample from the distribution of changes in the underlying $\Delta S$.

** See Dowd [9].
†† USD LIBOR is used as the risk-free interest rate, as suggested by Bliss and Panigirtzoglou [10].

To compute the VaR of $\Delta C$, we will generate a posterior distribution for $\Delta C$. To generate this posterior distribution, we will use Monte Carlo to sample from the posterior distribution of $\sigma^2$, which will allow us to compute $\delta$ and $\gamma$, and we will use the historical distribution of price changes to generate draws for $\Delta S$. With respect to this last choice, we follow what is common practice in risk management institutions and among traders. Another alternative for generating $\Delta S$ would be to use the posteriors for $\mu$ and $\sigma^2$ to generate values for the drift and volatility and then to use a standard normal in conjunction with these values to generate a price process innovation. This process can be repeated many times to generate a posterior distribution for $\Delta C$. We opt for the former method, however, because it is common practice and allows us to use the information contained in historical data in cases where the true process generating returns is different from the one we have specified. In combining draws from the posterior of $\sigma^2$ and the historical distribution of $\Delta S$, we assume that these draws are independent.
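A sketch of this Monte Carlo procedure is given below (illustrative, not from the paper); it reuses the bs_delta_gamma helper sketched above, takes an array of posterior draws of $\sigma^2$ as input (for example, produced by the inverse-gamma sampler sketched further below), and bootstraps $\Delta S$ from the historical price changes.

import numpy as np

def bayesian_var_cvar(sigma2_draws, hist_dS, S, K, r, tau, alpha=0.95, seed=0):
    # Posterior distribution of the option P&L via approximation (7), combining
    # posterior draws of sigma^2 with resampled historical price changes.
    rng = np.random.default_rng(seed)
    dS = rng.choice(np.asarray(hist_dS), size=len(sigma2_draws), replace=True)
    delta, gamma = bs_delta_gamma(S, K, r, np.sqrt(np.asarray(sigma2_draws)), tau)
    dC = delta * dS + 0.5 * gamma * dS**2
    losses = -dC
    var = np.quantile(losses, alpha)        # e.g. 95% VaR of the option P&L
    cvar = losses[losses >= var].mean()     # CVaR: mean loss beyond the VaR
    return var, cvar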

As the data are in discrete time, we must formulate the likelihood function of $\mu$ and $\sigma^2$ as a function of the $N$ log returns $R_i$, where $i = 1, \ldots, N$. By the property of independent increments of the $N$ log returns over disjoint intervals, the likelihood can be obtained from

$$\int_0^T \frac{dS_u}{S_u} = \sum_{i=1}^{N} \int_{t_{i-1}}^{t_i} \frac{dS_u}{S_u} = \sum_{i=1}^{N} \log\left(\frac{S_{t_i}}{S_{t_{i-1}}}\right) + \frac{\sigma^2}{2}T$$

where $R_i = \log(S_{t_i}/S_{t_{i-1}})$. Moreover, we can express

$$\log\left(\frac{S_T}{S_0}\right) = \sum_{i=1}^{N} R_i$$

where $R_i \sim N[\mu - \sigma^2/2, \sigma^2]$ and $t_i - t_{i-1} = 1$ (day). Therefore, under a noninformative prior for both $\mu$ and $\sigma^2$, our posterior distribution function is proportional to [8]:

$$L(\mu, \sigma^2 \mid R_1, \ldots, R_N) \propto (\sigma^2)^{-N/2-1} \exp\left\{ -\frac{N\left(\mu - \left(\frac{\sigma^2}{2} + \bar{R}\right)\right)^2}{2\sigma^2} - \frac{\sum_{i=1}^{N}(R_i - \bar{R})^2}{2\sigma^2} \right\}$$

And after integrating out $\mu$ [3, 8], we obtain the following posterior distribution for $\sigma^2$:

$$L(\sigma^2 \mid R_1, \ldots, R_N) \propto (\sigma^2)^{-(N+1)/2} \exp\left\{ -\frac{(N-1)s^2}{2\sigma^2} \right\}$$


which is an inverse Gamma distribution $\mathrm{IG}\!\left(\frac{N+1}{2}, \frac{(N-1)s^2}{2}\right)$ as in Karolyi [3]. In the above, $\bar{R} \equiv \sum_{i=1}^{N} R_i / N$ and $s^2 \equiv \sum_{i=1}^{N} (R_i - \bar{R})^2 / (N-1)$.
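Sampling $\sigma^2$ from this marginal posterior is direct with scipy; the sketch below (not from the paper) matches the displayed density $(\sigma^2)^{-(N+1)/2}\exp\{-(N-1)s^2/(2\sigma^2)\}$ to scipy's invgamma(a, scale=b) density $\propto x^{-a-1}e^{-b/x}$, which gives a shape of $(N-1)/2$ and a scale of $(N-1)s^2/2$ (the scipy parametrization is an assumption worth verifying).

import numpy as np
from scipy.stats import invgamma

def posterior_sigma2_draws(log_returns, n_draws=10000, seed=0):
    # Draw sigma^2 from the marginal posterior above, given daily log returns.
    R = np.asarray(log_returns, dtype=float)
    N = len(R)
    s2 = R.var(ddof=1)                 # s^2 = sum (R_i - Rbar)^2 / (N - 1)
    a = (N - 1) / 2.0                  # matches the exponent -(N+1)/2 = -a - 1
    b = (N - 1) * s2 / 2.0
    return invgamma.rvs(a, scale=b, size=n_draws, random_state=seed)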

4.2. Simulated data

In what follows, we will present some straightforward simulation results that will enable us to compare the Bayesian VaR with its classical counterpart, as well as comparing the Bayesian VaR with the Bayesian CVaR. These simulations are performed according to the procedures stated above. There are two numerical experiments. In the first, we generate 100 samples each of size $N = 5$. The results of this experiment are shown in Figures 1 and 2. In the second, we generate 100 samples each of size $N = 1000$. The results of that experiment are shown in Figure 3.

[Figure 1. Ninety-five percent value at risk under a $N(0, 0.4^2)$ for small sample size $n=5$. Panels: true VaR, classic VaR, and Bayesian VaR; difference.]

[Figure 2. Ninety-five percent value at risk under a $N(0, 0.4^2)$ for big sample size $n=1000$. Panels: true VaR, classic VaR, and Bayesian VaR; difference.]


[Figure 3. Ninety-five percent value at risk and conditional value at risk under a $N(0, 0.4^2)$ for big sample size $n=1000$. Panels: true VaR, Bayesian VaR, and Bayesian CVaR; difference.]

[Figure 4. Posterior distribution of hedging ratio $\delta$.]

For each group of 100 samples, we plot the sample VaRs in relation to the true VaR, which is computed analytically. This allows us to illustrate a couple of simple conclusions. First, the smaller the sample size, the greater the average error between the true value and both the Bayesian and the classical VaR. Second, the Bayesian VaR tends to be more conservative than its classical counterpart, meaning that the 'worst-case scenario' loss for a given confidence level $\alpha$ is greater when computed using the Bayesian method. We also compare the Bayesian VaR with the Bayesian CVaR and find that the latter is always greater than the former. This is consistent with the axiomatic properties of risk measures when we introduce parameter uncertainty [11].

In particular, we observe from Figure 1 that under small sample sizes ($n=5$), the Bayesian VaR at the 95% confidence level is bigger in absolute value than the classical VaR computed through historical simulation, and their difference can range from as low as 0.5% to as high as 2%.


On the other hand, whenever we have a large sample ($n=1000$), both the classical and Bayesian VaR converge to the same theoretical value, as seen from Figure 3. Finally, Figure 4 depicts the posterior distribution for the hedging parameter $\delta$ used to compute the VaR of the option price. This is useful simply to illustrate that, when acknowledging that the parameters of an option pricing model are not known with certainty, the Greek parameters used for hedging also will have uncertainty, which must be taken into account.

5. CONCLUSION

When traders and market participants use pricing formulas for derivatives, the price they obtain is a function of the parameter values that they assume. However, it is reasonable to expect that these parameter values are not known with certainty. As a result, it is worthwhile to take this uncertainty about parameter values into account in option pricing. The natural way of doing this is through the use of Bayesian methods. In this paper, we present a framework for Bayesian inference that can be applied to some popular stochastic processes for the underlying in mathematical finance. Using directly the probability model for the stock price process, Girsanov's theorem helps us to derive the likelihood function, which is one of the key ingredients that enables us to calculate posterior probability distributions for the model parameters.

We illustrate how to apply our method in the case where the stochastic process for the underlying follows a GBM and the case where it follows an O–U process. In addition, we present an application for computing the VaR of a call option. Numerical simulations indicate that the Bayesian VaR is more conservative (higher expected losses) than its classical counterpart, which is consistent with the more general results of Wang et al. [11], who show that risk measures that incorporate parameter uncertainty tend to be more conservative as a rule.

APPENDIX A: POSTERIOR PROBABILITY DISTRIBUTIONS

A.1. Complementary calculations

Consider first the case where the price process is $S_t = e^{\nu t + \sigma W_t}$, with $W$ as above and $\nu = \mu - \sigma^2/2$. In this case, getting $(Z_t)^{-1}$ ready for posteriorization will be pretty much as in Section 3.1. To begin with, note that from $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ and Itô's formula, on the one hand, $\sigma\,dW_t = dS_t/S_t - \mu\,dt$ and, on the other hand,

$$d\left( \ln(S_t/S_0) + \frac{\sigma^2 t}{2} \right) = \frac{dS_t}{S_t}$$

Therefore,

$$\sigma W_t = \ln(S_t/S_0) - t\nu$$

With all this, just some arithmetic allows us to rearrange the exponent in

$$(Z_t)^{-1} = \exp\left( -\frac{\theta^*}{\sigma}\,\sigma W_t + \frac{(\theta^*)^2 t}{2} \right)$$


where $\theta^* = (r - \mu)/\sigma$, so that

$$(Z_t)^{-1} = \exp\left\{ \frac{t}{2\sigma^2}\left( r - \frac{\sigma^2}{2} - \frac{\ln(S_t/S_0)}{t} \right)^2 - \frac{t}{2\sigma^2}\left( \mu - \frac{\sigma^2}{2} - \frac{\ln(S_t/S_0)}{t} \right)^2 \right\}$$
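Written as code, the closed form above is a one-liner; the sketch below (illustrative, not from the paper) can be passed as the z_inv argument of a grid approximation like the one outlined at the end of Section 2.

import numpy as np

def z_inv_gbm(mu, sigma2, log_St_S0, t, r):
    # Closed-form (Z_t)^{-1} for the GBM case, as a function of (mu, sigma^2),
    # given the observed log(S_t/S_0) over a window of length t.
    a = log_St_S0 / t
    return np.exp(t / (2.0 * sigma2) * ((r - sigma2 / 2.0 - a) ** 2
                                        - (mu - sigma2 / 2.0 - a) ** 2))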

A.2. A very simple jump diffusion

Here, the goal is to show with an example a situation where the likelihood does not lead to a posterior distribution for $\theta$ that is easy to sample from. When the asset prices are driven by discontinuous factors, part of the routine is easy, but part is not. Assume, for example, that $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t + dJ(t)$, where the first two terms on the right-hand side are as in the previous example, whereas $J(t) = \sum_{n=1}^{N(t)} U_n$ is a compound Poisson process such that $N(t)$ is a Poisson process with intensity $\lambda$ and the i.i.d. sequence of jump sizes is independent of both $W$ and $N$; for the sake of simplicity, let us assume that the $U_n$ are bounded.

The price equation has the following solution:

$$S_t = e^{t(\mu - \sigma^2/2) + \sigma W_t} \prod_{n=1}^{N(t)} (1 + U_n) = e^{t(\mu - \sigma^2/2) + \sigma W_t + Y(t)}$$

where $Y_t = \sum_{n=1}^{N(t)} \ln(1 + U_n) \equiv \sum_{n=1}^{N(t)} \xi_n$. Clearly, we must assume that $(1 + U_n) > 0$ for the price process to be positive. The boundedness assumption on the jumps yields the existence of $\int (e^{\theta x} - 1)\,n(dx)$, with $n(dx) = dP(\xi_1 \le x)$. With this, clearly $E[e^{\theta(\sigma W_t + Y_t)}] = e^{tk(\theta)}$, with

$$k(\theta) = \frac{\theta^2\sigma^2}{2} + \lambda \int (e^{\theta x} - 1)\,n(dx)$$

This time, when we ask for the existence of a value $\theta^*$ such that

$$E\!\left[e^{\theta^*(\sigma W_t + Y_t) - tk(\theta^*)} S_t\right] = e^{tr}$$

we are led to the equation $k(\theta^* + 1) - k(\theta^*) = r - \nu$, with $\nu$ as above. This amounts to solving

$$\theta^* + \lambda \int (e^{x} - 1)\,e^{\theta^* x}\,n(dx) = \frac{r - \nu}{\sigma^2}$$

When $\theta^*$ ranges from $-\infty$ to $+\infty$, the left-hand side of the identity above ranges in an increasing way over the same range; thus, the equation has a solution. Hence, there is at least one risk-neutral measure for the given asset price. Furthermore, it is not hard to see using Itô's formula that this time we also have

$$\sigma W_t + Y_t = \ln(S_t/S_0) - t\nu$$


but the likelihood

$$(Z_t)^{-1} = e^{-\theta^*(\sigma W_t + Y_t) + tk(\theta^*)} = e^{-\theta^*(\ln(S_t/S_0) - t\nu) + tk(\theta^*)}$$

does not lead to a posterior distribution for $\theta$ that is easy to sample from, because of the non-conjugacy of $\theta$ with respect to the prior distribution for $\theta$.
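As a purely numerical illustration (not from the paper), the equation for $\theta^*$ above can be solved with a standard root finder once a concrete bounded jump-size law is fixed; the two-point law for $\xi_1$ used in the commented example is hypothetical.

import numpy as np
from scipy.optimize import brentq

def solve_theta_star(r, nu, sigma, lam, xs, ps):
    # Solve  theta + lam * sum_j p_j (e^{x_j} - 1) e^{theta x_j} = (r - nu) / sigma^2
    # for theta, where the x_j are the atoms of the (bounded) law n(dx) of xi_1
    # and the p_j are their probabilities.
    xs, ps = np.asarray(xs, dtype=float), np.asarray(ps, dtype=float)
    target = (r - nu) / sigma**2

    def f(theta):
        return theta + lam * np.sum(ps * (np.exp(xs) - 1.0) * np.exp(theta * xs)) - target

    lo, hi = -1.0, 1.0
    while f(lo) > 0:      # the left-hand side is increasing in theta,
        lo *= 2.0         # so widen the bracket until the sign changes
    while f(hi) < 0:
        hi *= 2.0
    return brentq(f, lo, hi)

# example: jumps of +/- 5% in the price, i.e. xi in {log(1.05), log(0.95)}
# theta_star = solve_theta_star(r=0.04, nu=0.05, sigma=0.2, lam=1.0,
#                               xs=[np.log(1.05), np.log(0.95)], ps=[0.5, 0.5])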

APPENDIX B: PROOF OF LEMMAS 1 AND 2

B.1. GBM model

In this subsection, we derive the full conditional posterior distributions for both $\mu$ and $\sigma^2$ in the GBM model. In their model, the log returns are normally distributed as

$$\frac{\log(S_t/S_0)}{t} \sim N\left( \mu - \frac{\sigma^2}{2},\ \frac{\sigma^2}{t} \right)$$

Once we observe the whole sample path $S_t$, we obtain the following function for $\mu$ and $\sigma^2$:

$$L\left(\mu, \sigma^2 \,\Big|\, \log\frac{S_t}{S_0}\right) = \frac{1}{\sqrt{2\pi\frac{\sigma^2}{t}}} \exp\left\{ -\frac{t}{2\sigma^2}\left( \frac{\log(S_t/S_0)}{t} - \left(\mu - \frac{\sigma^2}{2}\right) \right)^2 \right\} \qquad (8)$$

Proof for Lemma 1
Since the priors for both $\mu$ and $\sigma^2$ are flat priors, the joint posterior distribution is proportional to the likelihood function, which is equal to Equation (8). Keeping only the terms in $\sigma^2$ in Equation (8), we conclude easily that the conditional posterior distribution for $\sigma^2$ is a GIG($\lambda', \delta', \gamma'$) where

$$\lambda' = \frac{1}{2}, \qquad \delta'^2 = t\left( \mu - \frac{\log(S_t/S_0)}{t} \right)^2$$

and $\gamma'^2 = t/4$. The conditional posterior distribution for $\mu$ is a

$$N\left( \frac{\log(S_t/S_0)}{t} + \frac{\sigma^2}{2},\ \frac{\sigma^2}{t} \right) \qquad \square$$


Proof for Lemma 2
By Bayes' rule, the posterior $\pi\left(\sigma^2 \mid \log(S_t/S_0), \mu\right)$ for $\sigma^2$ given $\mu$ and the data is proportional to

$$\propto L\left(\sigma^2 \,\Big|\, \log\frac{S_t}{S_0}, \mu\right) \pi(\sigma^2) \propto (\sigma^2)^{1/2-1} \exp\left\{ -\frac{1}{2}\left( \frac{t\sigma^2}{4} + \frac{t\left(\mu - \frac{\log(S_t/S_0)}{t}\right)^2}{\sigma^2} \right) \right\} \pi(\sigma^2)$$

and the likelihood $L\left(\sigma^2 \mid \mu, \log(S_t/S_0)\right)$ times the GIG($\lambda, \delta, \gamma$) prior distribution $\pi(\sigma^2)$ yields

$$(\sigma^2)^{(\lambda - 1/2) - 1} \exp\left\{ -\frac{1}{2}\left( \sigma^2\left(\frac{t}{4} + \gamma^2\right) + \frac{t\left(\mu - \frac{\log(S_t/S_0)}{t}\right)^2 + \delta^2}{\sigma^2} \right) \right\}$$

which is proportional to a GIG($\lambda', \delta', \gamma'$), where

$$\lambda' = \lambda - \frac{1}{2}, \qquad \delta'^2 = t\left( \mu - \frac{\log(S_t/S_0)}{t} \right)^2 + \delta^2$$

and $\gamma'^2 = (t/4) + \gamma^2$. The full conditional posterior distribution for $\mu$ given $\sigma^2$ is given by $\pi(\mu \mid \sigma^2, \log(S_t/S_0))$ and is proportional to

$$
\begin{aligned}
&\propto \pi(\mu)\exp\left\{ -\frac{t}{2\sigma^2}\left( \frac{\log(S_t/S_0)}{t} - \left(\mu - \frac{\sigma^2}{2}\right) \right)^2 \right\} \\
&\propto \exp\left\{ -\frac{t}{2\sigma^2}\left( \frac{\log(S_t/S_0)}{t} - \left(\mu - \frac{\sigma^2}{2}\right) \right)^2 - \frac{1}{2s^2}(\mu - m)^2 \right\} \\
&\propto \exp\left\{ -\frac{t + \frac{\sigma^2}{s^2}}{2\sigma^2}\left( \mu^2 - 2\mu\,\frac{\frac{m\sigma^2}{s^2} + \log\frac{S_t}{S_0} + \frac{t\sigma^2}{2}}{t + \frac{\sigma^2}{s^2}} \right) \right\} \\
&\propto \exp\left\{ -\frac{t + \frac{\sigma^2}{s^2}}{2\sigma^2}\left( \mu - \frac{\frac{m\sigma^2}{s^2} + \log\frac{S_t}{S_0} + \frac{t\sigma^2}{2}}{t + \frac{\sigma^2}{s^2}} \right)^2 \right\}
\end{aligned}
$$

where $\pi(\mu)$ is a $N(m, s^2)$. We conclude, therefore, that the conditional posterior distribution for $\mu$ is distributed as

$$N\left( \frac{\frac{m\sigma^2}{s^2} + \log\frac{S_t}{S_0} + \frac{t\sigma^2}{2}}{t + \frac{\sigma^2}{s^2}},\ \frac{\sigma^2}{t + \frac{\sigma^2}{s^2}} \right)$$

which is the same conditional posterior distribution as in Polson and Roberts [2]. □

B.2. Mean reversion model (O–U)

In this subsection, we derive the full conditional posterior distribution for $\mu$ in the O–U model. From the Cameron–Martin–Girsanov theorem [12], the likelihood is equal to

$$
\begin{aligned}
(Z_t)^{-1} = {}& \exp\left\{ -\int_0^t \frac{\left(\mu^* - r - \kappa\left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right)\right)\left(\frac{dS_u}{S_u} - \left(\mu^* - \kappa\left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right)\right)du\right)}{\sigma^2} \right\} \\
&\times \exp\left\{ -\int_0^t \frac{\left(\mu^* - r - \kappa\left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right)\right)^2}{2\sigma^2}\,du \right\}
\end{aligned}
$$

where $\mu^* = \mu(1+\kappa)$.


Since the prior for $\mu$ is flat, the posterior distribution is proportional to the likelihood function. After long and tedious algebra, we obtain that the conditional posterior distribution for $\mu$ is

$$N\left( \frac{\displaystyle\int_0^t \frac{dS_u}{S_u} - \kappa\int_0^t \left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right) du}{(1+\kappa)t},\ \frac{\sigma^2}{(1+\kappa)t} \right)$$

In a similar way, the posterior for $\mu$ is found when completing squares under a normal prior distribution ($\pi(\mu) = N[m, s]$). The result is, therefore,

$$\pi\left(\mu \,\Big|\, \kappa, \sigma^2, \log\frac{S_t}{S_0}\right) = N\left( \frac{(1+\kappa)\left(\displaystyle\int_0^t \frac{dS_u}{S_u} - \kappa\int_0^t \left(\log\frac{S_u}{S_0} + \frac{\sigma^2 u}{2}\right) du\right) + \frac{m\sigma^2}{s^2}}{(1+\kappa)t},\ \frac{\sigma^2}{(1+\kappa)^2 t + \frac{\sigma^2}{s^2}} \right)$$

The same procedure is applied to $\kappa$ under a uniform and a normal prior distribution, together with the likelihood given by $(Z_t)^{-1}$.

ACKNOWLEDGEMENTS

The authors would like to thank the anonymous referees as well as German Molina and Robert L. Wolpert for their helpful conversations.

REFERENCES

1. Paulo R. Problems on the Bayesian/frequentist interface. Ph.D. Thesis, Duke University, ISDS, 2002.
2. Polson N, Roberts G. Bayes factors for discrete observations from diffusion processes. Biometrika 1994; 81:11–26.
3. Karolyi A. A Bayesian approach to modeling stock return volatility for option valuation. Journal of Financial and Quantitative Analysis 1993; 28:582.
4. Darsinos T, Satchell SE. Bayesian analysis of the Black & Scholes option price. Technical Report, Cambridge Working Papers in Economics, 2001.
5. Jacod J, Shiryaev A. Limit Theorems for Stochastic Processes. Springer: Berlin, 2003.
6. Silva RS, Lopes HS, Migon HS. The extended generalized inverse Gaussian for log-linear stochastic volatility models. Technical Report, 2006.
7. Berger JO. Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. Springer: Berlin, 1985.
8. Gelman A, Carlin JB, Stern H, Rubin D. Bayesian Data Analysis. Chapman & Hall/CRC: London, 2003.
9. Dowd K. Measuring Market Risk. Wiley: New York, 2005.
10. Bliss R, Panigirtzoglou N. Option-implied risk aversion estimates. The Journal of Finance 2004; LIX(1):412.
11. Wang SS, Young VR, Panjer HH. Axiomatic characterization of insurance prices. Insurance: Mathematics and Economics 1997; 21:173–183.
12. Oksendal B. Stochastic Differential Equations: An Introduction with Applications. Springer: Berlin, 2003.
