Filtering and M-ary Detection of a Markov Modulated Mean Reverting Model

Abstract (translated from the Arabic): This research develops a model containing two Markov-type chains that act on a model describing the movement of the price spread in the financial market. The two Markov chains describe unknown events which nonetheless affect prices in the financial market. Change-of-measure techniques are used in this research to estimate the recursive conditional distribution.

ABSTRACT: In an earlier paper we developed a stochastic model incorporating a double-Markov modulated mean-reversion model. The model is based on an explicit discretisation of the corresponding continuous-time dynamics. Here we discuss parameter estimation via the technique of M-ary detection.


Introduction
The model we developed in Malcolm et al. (2004) is a stochastic model incorporating a double-Markov modulated mean-reversion model. Unlike a price process, the basis process can take positive or negative values. This model is based on an explicit discretisation of the corresponding continuous-time dynamics. In that model we suppose that the mean-reverting level in our dynamics, as well as the noise coefficient, can change according to the states of some finite-state Markov processes, which could represent the economy and some other unseen random phenomenon. In this paper we wish to discuss M-ary detection for this model. The term M-ary detection is used in Electrical Engineering to describe sequential hypothesis testing for more than two candidate model hypotheses. Here we are interested in model-parameter hypotheses. In effect, our formulation is something like a discrete and finite version of the EM algorithm of Baum and Petrie (1966) and Dempster et al. (1977), where, rather than considering an uncountable collection of model-parameter sets in the space of all admissible models, we consider a finite collection in this space.
We assume that we have a list of $M$ candidate models, from which to choose, describing the model dynamics over time. These candidate models will be denoted by $\theta_i$, $i \in \{1, \dots, M\}$. Let $\vartheta$ be a simple random variable denoting a specific model, with states indexed by $i$. We assume that $\vartheta$ takes values in the canonical basis $\{e_1, \dots, e_M\}$ of $\mathbb{R}^M$. We suppose $\langle \vartheta, e_i \rangle$ is an indicator random variable such that $\langle \vartheta, e_i \rangle = 1$ if and only if hypothesis $\theta_i$ holds. Here $\langle \cdot, \cdot \rangle$ is the usual inner product. We shall be interested in computing the posterior probabilities $P(\vartheta = e_i \mid \mathcal{Y}_k)$, where $\mathcal{Y}_k$ denotes information contained in some observation process. It will be shown that this problem separates into a pure filtering component and a pure estimation component. In the context of M-ary detection, this is known as the Separation Theorem (Poor 1988). This paper is organized as follows. In §2 and §3 we recall the model dynamics as well as the construction of a new probability measure under which all processes are independent. In §4 the M-ary detection filters are derived. In §5 and §6 our results are adapted to continuous-time dynamics.
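As a toy numerical illustration of these posterior probabilities, the following Python sketch (with illustrative prior and likelihood values that are not from the paper) normalises prior-weighted likelihoods over a finite set of candidate models:

```python
import numpy as np

# Hypothetical sketch: posterior probabilities over a finite set of M
# candidate models, each identified with a canonical basis vector e_i.
# The per-model likelihood values below are purely illustrative.

def model_posteriors(prior, likelihoods):
    """Return P(theta = e_i | Y), proportional to prior_i * L_i(Y)."""
    unnorm = prior * likelihoods
    return unnorm / unnorm.sum()

M = 3
prior = np.full(M, 1.0 / M)              # uniform prior over hypotheses
likelihoods = np.array([0.2, 0.5, 0.3])  # assumed likelihoods of the data
post = model_posteriors(prior, likelihoods)
print(post)         # posterior mass on each hypothesis
print(post.sum())   # posteriors sum to one
```

With a uniform prior the posterior simply renormalises the likelihoods, so the second hypothesis receives the most mass here.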

Stochastic Dynamics
All models are, initially, on the probability space $(\Omega, \mathcal{F}, P)$. Here $\alpha$ is the rate parameter, that is, a parameter determining how fast the level $L$ is attained by the process $X$. $X$ has dynamics

$$X_t = X_0 + \int_0^t \alpha (L - X_u)\, du + \sigma W_t. \qquad (2.1)$$

Here $W$ is a standard Wiener process and $\sigma \in \mathbb{R}$.
Remark 1. The dynamics at (2.1) exhibit the mean-reversion character of the model when written in stochastic differential equation form:

$$dX_t = \alpha (L - X_t)\, dt + \sigma\, dW_t. \qquad (2.2)$$

Ignoring the noise, the drift $\alpha (L - X_t)$ is positive when $X_t < L$ and negative when $X_t > L$, so the right side of (2.2) is continually trying to reach the level $L$. Now suppose the parameters $L$ and $\sigma$ are stochastic and can switch between different levels $L^1, \dots, L^n$ and $\sigma^1, \dots, \sigma^m$ respectively. We assume here that these levels are determined by the states of two Markov chains $Z$ and $\zeta$ respectively.
Without loss of generality, we take the state spaces of our Markov chains $Z$ and $\zeta$ to be the canonical bases $\{e_1, \dots, e_n\} \subset \mathbb{R}^n$ and $\{f_1, \dots, f_m\} \subset \mathbb{R}^m$ respectively. What we also wish to impose is that the two Markov chains $Z$ and $\zeta$ be not independent; that is, information on the behavior of one conveys some knowledge of the behavior of the other. More precisely, we assume the dynamics

$$Z_{k+1} = \Pi Z_k + M_{k+1},$$

where $M$ is a martingale increment. The dynamics at (2.1) then take the form (2.8). Remark 2. We defined $Z$ and $\zeta$ as inherently discrete-time. Here, we "read" $Z$ and $\zeta$ as the output of a sample-and-hold circuit, that is, as càdlàg processes.
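To make the chain dynamics concrete, here is a minimal Python sketch (with an assumed 2-state transition matrix, not taken from the paper) of a Markov chain on the canonical basis, for which $E[Z_{k+1} \mid Z_k] = \Pi Z_k$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative chain: states are canonical basis vectors of R^n,
# and Pi is column-stochastic, so Pi @ z is the conditional law of Z_{k+1}.
n = 2
Pi = np.array([[0.9, 0.2],
               [0.1, 0.8]])  # each column sums to one

def step(z):
    probs = Pi @ z                # conditional distribution of the next state
    j = rng.choice(n, p=probs)    # sample the next state index
    e = np.zeros(n)
    e[j] = 1.0                    # encode it as a canonical basis vector
    return e

z = np.array([1.0, 0.0])          # start in state e_1
path = [z]
for _ in range(1000):
    z = step(z)
    path.append(z)
```

Encoding states as basis vectors makes conditional expectations linear in the state, which is what the martingale-increment representation above exploits.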
• What we wish to do now is discretise the dynamics at (2.8), and then compute a corresponding filter and detector.
• We will use an Euler-Maruyama discretisation scheme to obtain discrete-time dynamics, although many other schemes can be used; see, for example, Numerical Solution of Stochastic Differential Equations by Kloeden and Platen (1992).
For all time discretisations we consider a partition of a given time interval $[0, T]$ and write

$$0 = t_0 < t_1 < \cdots < t_K = T. \qquad (2.9)$$

This partition is strict, $t_0 < t_1 < \cdots$, and regular, so that $t_{k+1} - t_k = \Delta$ for every $k$. The Gaussian process $v$ is a sequence of independent and identically distributed $N(0,1)$ random variables. Our stochastic system now, under the measure $P$, has the form

$$X_{k+1} = X_k + \alpha \big( \langle L, Z_k \rangle - X_k \big) \Delta + \langle \sigma, \zeta_k \rangle \sqrt{\Delta}\, v_{k+1}, \qquad (2.11)$$

where $L = (L^1, \dots, L^n)'$ and $\sigma = (\sigma^1, \dots, \sigma^m)'$ collect the candidate levels.
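The Euler-Maruyama recursion above can be sketched in Python (all parameter values are assumed for illustration, and the two chains are frozen in fixed states for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the discretised mean-reverting dynamics (2.11) under assumed
# parameters: X_{k+1} = X_k + alpha*(<L,Z_k> - X_k)*Dt + <s,zeta_k>*sqrt(Dt)*v.
alpha, Dt = 2.0, 0.01
L = np.array([1.0, -1.0])      # reversion levels, one per state of Z
s = np.array([0.3, 0.6])       # noise levels, one per state of zeta
Z = np.array([1.0, 0.0])       # chain Z held in state e_1 for this sketch
zeta = np.array([0.0, 1.0])    # chain zeta held in state f_2

x = 0.0
xs = [x]
for _ in range(2000):
    v = rng.standard_normal()                  # i.i.d. N(0,1) driving noise
    x = x + alpha * (L @ Z - x) * Dt + (s @ zeta) * np.sqrt(Dt) * v
    xs.append(x)
```

After a burn-in, the simulated path fluctuates around the active level ⟨L, Z⟩ = 1.0, illustrating the mean-reversion character of (2.2).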

State Estimation Filters
The approach we take to compute our filters is the so-called reference probability method.This technique is widely used in Electrical Engineering, see Elliott et al. (1995) and more recently Aggoun and Elliott (2004).
We define a probability measure $P^\dagger$ on the measurable space $(\Omega, \mathcal{F})$, such that, under $P^\dagger$, the following two conditions hold.
1. The state processes $Z$ and $\zeta$ are Markov chains with initial distributions $p_0$ and $\rho_0$ respectively.
2. The observation process $X$ is a sequence of independent and identically distributed Gaussian random variables with zero mean and unit variance.
With $P^\dagger$ defined, we construct $P$ such that, under $P$, the real-world dynamics (2.11) hold. The "real world" probability $P$ is defined in terms of the reference probability measure $P^\dagger$ by setting $dP/dP^\dagger = \Lambda_k$ on the history up to time $k$. Lemma 1. Under $P$, the sequence $v$ is a sequence of independent and identically distributed $N(0,1)$ random variables, where $v_{k+1} = \big( X_{k+1} - X_k - \alpha(\langle L, Z_k \rangle - X_k)\Delta \big) / \big( \langle \sigma, \zeta_k \rangle \sqrt{\Delta} \big)$.
That is, under $P$, (3.4) holds. Lemma 2. Under the measure $P$, the process $Z$ remains a Markov process, with transition matrix $\Pi$ and initial distribution $p_0$. The proofs of Lemmas 1 and 2 are routine.
Remark 1. The objective in estimation via reference probability is to choose a measure $P^\dagger$ which facilitates and/or simplifies calculations. In filtering and prediction, we wish to evaluate conditional expectations.
Under the measure $P^\dagger$, our dynamics have the form (3.5). In what follows we shall use the following version of Bayes' rule:

$$E[H_k \mid \mathcal{Y}_k] = \frac{E^\dagger[\Lambda_k H_k \mid \mathcal{Y}_k]}{E^\dagger[\Lambda_k \mid \mathcal{Y}_k]}. \qquad (3.6)$$
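The effect of (3.6) can be demonstrated numerically: expectations under $P$ are recovered from samples drawn under the reference measure $P^\dagger$ by weighting with the density ratio $\Lambda$. In this Python sketch the drift value `mu` is an assumed illustration, not a quantity from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference-probability idea in miniature: under P-dagger, y ~ N(0,1);
# under P, y ~ N(mu,1). The Radon-Nikodym derivative is the ratio of
# Gaussian densities Lambda = phi(y - mu) / phi(y).

def phi(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

mu = 0.7                                # assumed illustrative drift
y = rng.standard_normal(200_000)        # i.i.d. N(0,1) samples under P-dagger
Lam = phi(y - mu) / phi(y)              # density ratio dP/dP-dagger

# Conditional Bayes rule (3.6), here with trivial conditioning:
est = (Lam * y).mean() / Lam.mean()     # estimate of E[Y] under P
print(est)                              # close to mu
```

Even though every sample was drawn under the reference measure, the weighted average recovers the mean under the "real world" measure, which is exactly the mechanism the filter recursions exploit.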

The following result is proven in Malcolm et al. (2008). The recursion (3.9), given in Theorem 1, provides a scheme to estimate the conditional probabilities of events of the form $\{Z_k = e_i\}$, given the information up to time $k+1$. In practice, one would use the vector-valued information state $q_{k+1}$ to compute an estimate for the state. In general two approaches are adopted: one computes either a conditional mean, (3.10), or a Maximum-a-Posteriori (MAP) estimate, (3.11); see Theorem 1.

M-ary Detection Filters
To denote a specific model hypothesis for the discrete-time dynamics given at (2.5), (2.7) and (2.10), we write (4.1).

Here $\theta_i$ denotes the parameter set of the $i$-th candidate model. Using the simple random variable $\vartheta$, as before, we are interested in computing the detector expectation

$$P(\vartheta = e_i \mid \mathcal{Y}_k). \qquad (4.2)$$

Here the sigma-algebra $\mathcal{Y}_k$ is taken as generated by a model with parameter set $\theta_i$, and similarly the Radon-Nikodym derivative $\Lambda_k$ is constructed according to $\theta_i$. Further, we make a clear notational distinction between the filter information state defined for a specific model $\theta_i$ and the corresponding un-normalised detector probability for model $\theta_i$.

Theorem 2 (M-ary Detection Filter). The M-ary detection filter for the model hypothesis $\theta_i$ is computed by a recursion analogous to (3.9), evaluated with the parameters $\theta_i$.
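To illustrate the spirit of Theorem 2 (the paper's exact recursion applies to the double-chain model; the sketch below uses assumed two-state hidden-Markov models), one runs a bank of filters, one per candidate model, and treats the total mass of each model's un-normalised information state as its detector statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two assumed candidate models: column-stochastic transition matrix and
# per-state observation means. Observations are state mean + N(0,1) noise.
def gauss(y, m):
    return np.exp(-0.5 * (y - m)**2) / np.sqrt(2 * np.pi)

models = [
    {"Pi": np.array([[0.95, 0.05], [0.05, 0.95]]), "mean": np.array([-1.0, 1.0])},
    {"Pi": np.array([[0.50, 0.50], [0.50, 0.50]]), "mean": np.array([-3.0, 3.0])},
]

# Simulate an observation record from model 0 (the "true" hypothesis).
true = models[0]
state, ys = 0, []
for _ in range(300):
    state = rng.choice(2, p=true["Pi"][:, state])
    ys.append(true["mean"][state] + rng.standard_normal())

# One un-normalised information state per candidate model, uniform priors.
q = [np.full(2, 0.5) for _ in models]
for y in ys:
    for i, m in enumerate(models):
        q[i] = gauss(y, m["mean"]) * (m["Pi"] @ q[i])
    c = sum(qi.sum() for qi in q)   # common rescale: avoids underflow,
    q = [qi / c for qi in q]        # preserves the ratios between models

detector = np.array([qi.sum() for qi in q])  # normalised model probabilities
print(detector.argmax())                     # index of preferred hypothesis
```

Because every model's state is rescaled by the same constant, the relative masses are untouched; this is the separation of filtering (each `q[i]` update) from detection (comparing the masses).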
M-ary Detection Filters for the Joint Process

The process Z takes values on a canonical basis of matrix-valued indicator functions, each of which jointly indicates a particular model hypothesis and a particular value taken by the state process.
The corresponding normalised detection probabilities are computed, for example, by dividing each un-normalised detector probability by the sum over all model hypotheses. Write (6.1) for the matrix-valued information state; the process defined by equation (6.1) satisfies the stated dynamics. The symbol $\odot$ in those dynamics denotes a point-wise matrix product, where, for two matrices $A$ and $B$ of the same dimensions, the point-wise product is $(A \odot B)_{ij} = A_{ij} B_{ij}$. Here $\langle \cdot, \cdot \rangle$ denotes an inner product and $\mathbf{1}_A$ denotes the indicator function of the event $A$.

Theorem 1 (Information State Recursion). Suppose the Markov chains $Z$ and $\zeta$ are observed through the unit-delay discrete-time dynamics at (2.10). The information state $q_{k+1}$ for the corresponding filtering problem is computed by the recursion (3.9). From the information state one computes either the conditional mean,

$$\hat{Z}_{k+1} = E[Z_{k+1} \mid \mathcal{Y}_{k+1}] = \frac{q_{k+1}}{\langle q_{k+1}, \mathbf{1} \rangle}, \qquad (3.10)$$

or the so-called Maximum-a-Posteriori (MAP) estimate, that is,

$$\hat{Z}^{\mathrm{MAP}}_{k+1} = \arg\max_i \, \langle q_{k+1}, e_i \rangle. \qquad (3.11)$$

Marginal distributions for the Markov chains are obtained by multiplying the matrix-valued information state on the right by the $n$-dimensional column vector $(1, \dots, 1)'$, or on the left by the $m$-dimensional row vector $(1, \dots, 1)$.
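The read-outs (3.10) and (3.11) can be sketched in Python for a generic two-state chain with assumed parameters (this is the standard normalised filter read-out, not the paper's exact double-chain recursion):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed two-state hidden Markov model: column-stochastic Pi and
# per-state observation means, observed in unit-variance Gaussian noise.
Pi = np.array([[0.9, 0.1],
               [0.1, 0.9]])
mean = np.array([-1.0, 1.0])

def b(y):  # per-state observation densities
    return np.exp(-0.5 * (y - mean)**2) / np.sqrt(2 * np.pi)

# Simulate a short observation record.
state, ys = 0, []
for _ in range(200):
    state = rng.choice(2, p=Pi[:, state])
    ys.append(mean[state] + rng.standard_normal())

q = np.array([0.5, 0.5])        # initial information state
for y in ys:
    q = b(y) * (Pi @ q)         # predict with Pi, correct with densities
    q = q / q.sum()             # normalise each step to avoid underflow

cond_mean = q                   # conditional distribution, as in (3.10)
map_state = int(q.argmax())     # MAP estimate, as in (3.11)
print(cond_mean, map_state)
```

The normalised vector `q` plays the role of $\hat{Z}_{k+1}$ in (3.10), while `argmax` extracts the MAP state of (3.11); both are cheap read-outs of the same recursion.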