On Markov Modulated Mean-Reverting Price-Difference Models

ABSTRACT: In this paper we develop a stochastic model for price differences based on double Markov-modulated mean reversion. Unlike a price process, the basis process X can take positive or negative values. The model is based on an explicit discretisation of the corresponding continuous-time dynamics. The new feature of our model is that the mean-reverting level in the dynamics, as well as the noise coefficient, can change according to the states of finite-state Markov processes, which may represent the economy and some other unseen random phenomenon.


Introduction
The main contribution of this article is to further extend the primary ideas presented in Elliott C. et al. (2005).
The subject matter of our work is the dynamics that describe the difference between two prices, for example the prices of two different stocks. What we would like to do is construct dynamics which model price differences and, in addition, capture important but unseen random phenomena. To do this, we consider regime-switching mean reversion. Mean-reverting models are well known in quantitative finance and were introduced by Vasicek. The extension of Vasicek's ideas to Markov-modulated mean reversion has been investigated for interest rate models (see Elliott R. J. et al., 1999).
One common domain of application for price-difference models is the natural gas market. In the natural gas market the basis is the difference in the price of gas at two delivery points. The usual reference in the U.S.A. for a basis differential is NYMEX. For example, if the May Henry Hub price is $5.25 and the May NYMEX price is $5.45, then the basis differential for May NYMEX is $0.20 to Henry. The usual reference for Canada is the price at the AECO facility.
In this article we propose to model the basis as a mean-reverting diffusion X = {X_t, t ≥ 0}. Unlike a price process, the basis process X can take positive or negative values. The new feature in our model is that we suppose the mean-reverting level in our dynamics can change according to the state of the economy. The economy is modeled as a finite-state Markov chain and can perhaps be in two states ('good' and 'bad'), or possibly three states. Our continuous-time model is discretised and the results of Elliott R.J. et al. (2005) are adapted to obtain a recursive filter for the state of the economy given observations of X. In turn, this allows predictions to be made of the basis at the next time. If the observed basis is then higher or lower than the predicted value, it suggests one price is possibly higher than it should be and the other lower.
Consequently, a trading strategy can be implemented based on these predictions.

Stochastic dynamics
All models are, initially, on the probability space (Ω, F, P). Here L is the level to which the process reverts and λ > 0 is the rate parameter, that is, a parameter determining how fast the level L is attained by the process X. X has dynamics

X_t = L + (X_0 − L) e^{−λt} + σ ∫_0^t e^{−λ(t−s)} dW_s,  (1)

where W is a standard Wiener process and σ ∈ ℝ.
Remark 1. The dynamics at (1) exhibit the mean-reversion character of the model when written in stochastic differential equation form:

dX_t = λ(L − X_t) dt + σ dW_t.  (2)

To modulate the level and the noise coefficient we introduce two finite-state Markov chains, Z and Z̄, whose states are identified with standard unit vectors and whose dynamics are written in semimartingale form, Z_{k+1} = A Z_k + M_{k+1} and Z̄_{k+1} = Ā Z̄_k + M̄_{k+1}. Here M and M̄ are martingale increments. The scalar-valued Markov processes taking values in {L_1, …, L_n} and {σ_1, …, σ_m} are obtained by L_k = ⟨L, Z_k⟩ and σ_k = ⟨σ, Z̄_k⟩. Here ⟨·,·⟩ denotes an inner product and 1_A denotes an indicator function for the event A.
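For the constant-parameter form of the dynamics, the following standard Ornstein-Uhlenbeck facts make the mean-reversion character explicit (assuming λ > 0; this is a textbook computation, not taken from the paper):

```latex
\mathbb{E}[X_t] = L + (X_0 - L)\,e^{-\lambda t} \;\xrightarrow[t \to \infty]{}\; L,
\qquad
\operatorname{Var}(X_t) = \frac{\sigma^2}{2\lambda}\bigl(1 - e^{-2\lambda t}\bigr)
\;\xrightarrow[t \to \infty]{}\; \frac{\sigma^2}{2\lambda}.
```

Whatever the starting point X_0, the mean decays exponentially to the level L at rate λ, which is the sense in which λ governs how fast the level is attained.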
We also wish to impose that the two Markov chains Z and Z̄ are not independent; that is, information on the behaviour of one conveys some knowledge of the behaviour of the other. More precisely, we assume dynamics for the joint process Z ⊗ Z̄; again the corresponding increment is a martingale increment. The dynamics at (1) then take the form

dX_t = λ(⟨L, Z_t⟩ − X_t) dt + ⟨σ, Z̄_t⟩ dW_t.  (8)

Remark 2. We defined Z and Z̄ as inherently discrete-time. Here, we "read" Z and Z̄ as the outputs of a sample-and-hold circuit, that is, as càdlàg processes.
What we wish to do now is discretise the dynamics at (8) and then compute a corresponding filter and detector. We will use an Euler-Maruyama discretisation scheme to obtain discrete-time dynamics, although many other schemes could be used.
For all time discretisations we will consider a partition 0 = t_0 < t_1 < ⋯ < t_K = T of the interval [0, T]. This partition is strict and regular, so the increments Δ = t_{k+1} − t_k are identical for all indices k. Applying the Euler-Maruyama scheme to (8), we get

X_{k+1} = X_k + λ(⟨L, Z_k⟩ − X_k) Δ + ⟨σ, Z̄_k⟩ √Δ v_{k+1}.  (13)

Here the Gaussian process v = {v_k} is an independently and identically distributed N(0, 1) sequence. Our stochastic system, under the measure P, now consists of the chain dynamics for Z and Z̄ together with this scalar recursion.
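A minimal simulation sketch of the discretised dynamics, assuming for simplicity a single two-state chain modulating both the level and the noise coefficient; the transition matrix and all parameter values are illustrative, not values from the paper:

```python
import numpy as np

def simulate_basis(T=1.0, K=1000, lam=5.0, levels=(0.2, -0.1),
                   sigmas=(0.05, 0.15), A=((0.95, 0.10), (0.05, 0.90)),
                   x0=0.0, seed=0):
    """Euler-Maruyama simulation of a Markov-modulated OU 'basis' process.

    The chain state z in {0, 1} selects the mean-reverting level and the
    noise coefficient; A[i][j] = P(z_{k+1} = i | z_k = j), columns sum to 1.
    """
    rng = np.random.default_rng(seed)
    dt = T / K
    A = np.asarray(A)
    x = np.empty(K + 1)
    z = np.empty(K + 1, dtype=int)
    x[0], z[0] = x0, 0
    for k in range(K):
        # sample the next chain state from column z[k] of A
        z[k + 1] = rng.choice(2, p=A[:, z[k]])
        drift = lam * (levels[z[k]] - x[k]) * dt
        noise = sigmas[z[k]] * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + drift + noise
    return x, z

x, z = simulate_basis()
```

Note that, unlike a price simulation, no positivity constraint is imposed: the simulated basis freely changes sign.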

State estimation filters
The approach we take to compute our filters is the so-called reference probability method. This technique is widely used in electrical engineering (Elliott et al., 1995, and more recently Aggoun et al., 2004). Under a reference measure P†, the observation sequence is a sequence of independently and identically distributed Gaussian N(0, 1) random variables. The "real world" probability P is then defined in terms of P† through an appropriate Radon-Nikodym derivative. Lemma 1. Under P†, the observations form a sequence of independently and identically distributed random variables. Remark 1. The objective in estimation via reference probability is to choose a measure which facilitates and/or simplifies calculations. In filtering and prediction, we wish to evaluate conditional expectations. Under the measure P†, our dynamics have the simpler form in which the observations are iid. In what follows we shall use the following version of Bayes' rule.
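The conditional form of Bayes' rule used in this literature can be stated as follows, where Λ_k denotes the restriction to F_k of the Radon-Nikodym derivative of the real-world measure with respect to the reference measure, and E† denotes expectation under the reference measure (the notation here is a standard sketch, not copied from the paper):

```latex
\mathbb{E}\left[ H \mid \mathcal{F}_k \right]
  = \frac{\mathbb{E}^{\dagger}\!\left[ \Lambda_k H \mid \mathcal{F}_k \right]}
         {\mathbb{E}^{\dagger}\!\left[ \Lambda_k \mid \mathcal{F}_k \right]}.
```

All filtering computations can then be carried out under the reference measure, where the observations are iid, and translated back via this identity.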

Theorem 1 (Information State Recursion). Suppose the Markov chains Z and Z̄ are observed through the unit-delay discrete-time dynamics at (10). The information state for the corresponding filtering problem is computed recursively. The recursion given in Theorem 1 provides a scheme to estimate the conditional probabilities of events of the form {Z_k = e_i, Z̄_k = f_j}, given the information F_{k+1}. In practice, one would use the vector-valued information state to compute an estimate for the state. In general two approaches are adopted; one computes either a conditional mean or the so-called Maximum a Posteriori (MAP) estimate.
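A sketch of the kind of unnormalised information-state recursion Theorem 1 describes, shown for a single chain observed through Gaussian increments of the basis; the transition matrix, level vector and all numbers are illustrative assumptions, and the ordering of the predict/update steps is one common convention rather than the paper's exact recursion:

```python
import numpy as np

def filter_step(q, A, likelihood):
    """One step of the information-state recursion:
    q_{k+1} proportional to A @ diag(likelihood) @ q_k,
    normalised here for numerical stability."""
    q_new = A @ (likelihood * q)
    return q_new / q_new.sum()

def gaussian_likelihood(x_prev, x_next, levels, lam, sig, dt):
    """Likelihood of the observed basis increment under each chain state,
    using the Euler-discretised dynamics as the observation model."""
    mean = x_prev + lam * (levels - x_prev) * dt
    resid = (x_next - mean) / (sig * np.sqrt(dt))
    return np.exp(-0.5 * resid**2)

# illustrative two-state example
A = np.array([[0.95, 0.10], [0.05, 0.90]])   # columns sum to 1
levels = np.array([0.2, -0.1])
q = np.array([0.5, 0.5])                     # prior state probabilities
lik = gaussian_likelihood(0.0, 0.05, levels, lam=5.0, sig=0.1, dt=0.01)
q = filter_step(q, A, lik)
map_state = int(np.argmax(q))        # MAP estimate of the chain state
cond_mean_level = float(levels @ q)  # conditional-mean estimate of the level
```

The last two lines illustrate the two estimation approaches mentioned above: the MAP estimate picks the most probable state, while the conditional mean averages the levels with the filtered probabilities.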

Prediction/Forecasting
What we would like to do is predict the difference X in the next time period and, with this information, develop a trading strategy. Let us first compute the n-step predictor, where n ∈ ℕ ∪ {0}; the one-step prediction of the price difference then follows as a special case. Remark 2. Here the usual issue of MAP versus conditional-mean estimation is irrelevant, as the price difference is continuously valued.
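A sketch of how the one-step prediction of the basis could be formed from a filtered state estimate under the Euler-discretised dynamics; the function name and all numbers are illustrative assumptions:

```python
import numpy as np

def predict_basis(x_k, q, levels, lam, dt):
    """One-step conditional-mean prediction of the basis:
    E[X_{k+1} | F_k] = x_k + lam * (E[L_k | F_k] - x_k) * dt,
    where E[L_k | F_k] = levels @ q for filtered probabilities q."""
    level_hat = float(levels @ q)
    return x_k + lam * (level_hat - x_k) * dt

levels = np.array([0.2, -0.1])
q = np.array([0.8, 0.2])          # filtered state probabilities (illustrative)
x_pred = predict_basis(0.05, q, levels, lam=5.0, dt=0.01)

# trading signal: compare the realised basis with the prediction;
# a realised value above the prediction suggests the basis is 'rich'
signal = np.sign(0.09 - x_pred)
```

The sign comparison in the last line mirrors the strategy described in the introduction: an observed basis above (below) the predicted value suggests one leg is overpriced (underpriced) relative to the other.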
The proofs of Lemmas 1 and 2 are routine.
Marginal distributions for Z_k and Z̄_k are obtained by multiplying σ(Z_k ⊗ Z̄_k) on the right-hand side with the n-dimensional row vector (1, …, 1), or on the left-hand side with the corresponding column vector of ones.
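The marginalisation step can be sketched as follows, with the joint filtered probabilities arranged as a matrix; the 2×3 example values are illustrative:

```python
import numpy as np

# joint filtered probabilities p[i, j] = P(Z_k = e_i, Zbar_k = f_j | F_k)
joint = np.array([[0.10, 0.20, 0.05],
                  [0.30, 0.25, 0.10]])

# multiplying by a vector of ones sums out one of the chains
marg_Z = joint @ np.ones(3)      # marginal distribution of Z_k
marg_Zbar = np.ones(2) @ joint   # marginal distribution of the second chain
```

Each product simply sums the joint probabilities over the index of the chain being removed, so both marginals again sum to one.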