21. Linear State Space Models#

“We may regard the present state of the universe as the effect of its past and the cause of its future” – Marquis de Laplace

In addition to what’s in Anaconda, this lecture will need the following libraries:

!pip install quantecon

21.1. Overview#

This lecture introduces the linear state space dynamic system.

The linear state space system is a generalization of the scalar AR(1) process we studied before.

This model is a workhorse that carries a powerful theory of prediction.

Its many applications include:

  • representing dynamics of higher-order linear systems

  • predicting the position of a system $j$ steps into the future

  • predicting a geometric sum of future values of a variable like

    • non-financial income

    • dividends on a stock

    • the money supply

    • a government deficit or surplus, etc.

  • key ingredient of useful models

    • Friedman’s permanent income model of consumption smoothing.

    • Barro’s model of smoothing total tax collections.

    • Rational expectations version of Cagan’s model of hyperinflation.

    • Sargent and Wallace’s “unpleasant monetarist arithmetic,” etc.

Let’s start with some imports:

import matplotlib.pyplot as plt
import numpy as np
from quantecon import LinearStateSpace
from scipy.stats import norm
import random

21.2. The Linear State Space Model#

The objects in play are:

  • An $n \times 1$ vector $x_t$ denoting the state at time $t = 0, 1, 2, \ldots$.

  • An IID sequence of $m \times 1$ random vectors $w_t \sim N(0, I)$.

  • A $k \times 1$ vector $y_t$ of observations at time $t = 0, 1, 2, \ldots$.

  • An $n \times n$ matrix $A$ called the transition matrix.

  • An $n \times m$ matrix $C$ called the volatility matrix.

  • A $k \times n$ matrix $G$ sometimes called the output matrix.

Here is the linear state-space system

$$
\begin{aligned}
x_{t+1} & = A x_t + C w_{t+1} \\
y_t & = G x_t \\
x_0 & \sim N(\mu_0, \Sigma_0)
\end{aligned}
\tag{21.1}
$$

21.2.1. Primitives#

The primitives of the model are

  1. the matrices $A, C, G$

  2. the shock distribution, which we have specialized to $N(0, I)$

  3. the distribution of the initial condition $x_0$, which we have set to $N(\mu_0, \Sigma_0)$

Given $A, C, G$ and draws of $x_0$ and $w_1, w_2, \ldots$, the model (21.1) pins down the values of the sequences $\{x_t\}$ and $\{y_t\}$.

Even without these draws, the primitives 1–3 pin down the probability distributions of $\{x_t\}$ and $\{y_t\}$.

Later we’ll see how to compute these distributions and their moments.

21.2.1.1. Martingale Difference Shocks#

We’ve made the common assumption that the shocks are independent standardized normal vectors.

But some of what we say will be valid under the assumption that $\{w_{t+1}\}$ is a martingale difference sequence.

A martingale difference sequence is a sequence that is zero mean when conditioned on past information.

In the present case, since $\{x_t\}$ is our state sequence, this means that it satisfies

$$
\mathbb E\,[w_{t+1} \mid x_t, x_{t-1}, \ldots] = 0
$$

This is a weaker condition than that $\{w_t\}$ is IID with $w_{t+1} \sim N(0, I)$.

21.2.2. Examples#

By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state space model.

The following examples help to highlight this point.

They also illustrate the wise dictum *finding the state is an art*.

21.2.2.1. Second-order Difference Equation#

Let $\{y_t\}$ be a deterministic sequence that satisfies

$$
y_{t+1} = \phi_0 + \phi_1 y_t + \phi_2 y_{t-1},
\qquad y_0, y_{-1} \text{ given}
\tag{21.2}
$$

To map (21.2) into our state space system (21.1), we set

$$
x_t =
\begin{bmatrix} 1 \\ y_t \\ y_{t-1} \end{bmatrix},
\quad
A =
\begin{bmatrix}
1 & 0 & 0 \\
\phi_0 & \phi_1 & \phi_2 \\
0 & 1 & 0
\end{bmatrix},
\quad
C =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\quad
G = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}
$$

You can confirm that under these definitions, (21.1) and (21.2) agree.
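One way to confirm this is numerically: iterate the scalar recursion (21.2) and the state-space map (21.1) side by side and check that the $y$ paths coincide (a sketch using the illustrative values $\phi_0 = 1.1$, $\phi_1 = 0.8$, $\phi_2 = -0.8$, $y_0 = y_{-1} = 1$).

```python
import numpy as np

ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8
A = np.array([[1,   0,   0],
              [ϕ_0, ϕ_1, ϕ_2],
              [0,   1,   0]])
G = np.array([0, 1, 0])

# Direct iteration of y_{t+1} = ϕ_0 + ϕ_1 y_t + ϕ_2 y_{t-1}
y_direct = [1.0, 1.0]                       # stores y_{-1}, y_0
for t in range(48):
    y_direct.append(ϕ_0 + ϕ_1 * y_direct[-1] + ϕ_2 * y_direct[-2])

# State-space iteration with x_0 = (1, y_0, y_{-1})
x = np.array([1.0, 1.0, 1.0])
y_ss = []
for t in range(49):
    y_ss.append(G @ x)                      # y_t = G x_t
    x = A @ x                               # x_{t+1} = A x_t  (C = 0 here)
```

The state-space output `y_ss` should match `y_direct[1:]`, i.e. $y_0, y_1, \ldots$.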

The next figure shows the dynamics of this process when $\phi_0 = 1.1$, $\phi_1 = 0.8$, $\phi_2 = -0.8$, $y_0 = y_{-1} = 1$.

def plot_lss(A,
         C,
         G,
         n=3,
         ts_length=50):

    ar = LinearStateSpace(A, C, G, mu_0=np.ones(n))
    x, y = ar.simulate(ts_length)

    fig, ax = plt.subplots()
    y = y.flatten()
    ax.plot(y, 'b-', lw=2, alpha=0.7)
    ax.grid()
    ax.set_xlabel('time', fontsize=12)
    ax.set_ylabel('$y_t$', fontsize=12)
    plt.show()
ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8

A = [[1,     0,     0  ],
     [ϕ_0,   ϕ_1,   ϕ_2],
     [0,     1,     0  ]]

C = np.zeros((3, 1))
G = [0, 1, 0]

plot_lss(A, C, G)

Later you’ll be asked to recreate this figure.

21.2.2.2. Univariate Autoregressive Processes#

We can use (21.1) to represent the model

$$
y_{t+1} = \phi_1 y_t + \phi_2 y_{t-1} + \phi_3 y_{t-2} + \phi_4 y_{t-3} + \sigma w_{t+1}
\tag{21.3}
$$

where $\{w_t\}$ is IID and standard normal.

To put this in the linear state space format we take $x_t = \begin{bmatrix} y_t & y_{t-1} & y_{t-2} & y_{t-3} \end{bmatrix}'$ and

$$
A =
\begin{bmatrix}
\phi_1 & \phi_2 & \phi_3 & \phi_4 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix},
\quad
C =
\begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix},
\quad
G = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}
$$

The matrix $A$ has the form of the companion matrix to the vector $\begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \end{bmatrix}$.

The next figure shows the dynamics of this process when

$$
\phi_1 = 0.5, \;
\phi_2 = -0.2, \;
\phi_3 = 0, \;
\phi_4 = 0.5, \;
\sigma = 0.2, \;
y_0 = y_{-1} = y_{-2} = y_{-3} = 1
$$
ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5
σ = 0.2

A_1 = [[ϕ_1,   ϕ_2,   ϕ_3,   ϕ_4],
       [1,     0,     0,     0  ],
       [0,     1,     0,     0  ],
       [0,     0,     1,     0  ]]

C_1 = [[σ],
       [0],
       [0],
       [0]]

G_1 = [1, 0, 0, 0]

plot_lss(A_1, C_1, G_1, n=4, ts_length=200)

21.2.2.3. Vector Autoregressions#

Now suppose that

  • $y_t$ is a $k \times 1$ vector

  • $\phi_j$ is a $k \times k$ matrix and

  • $w_t$ is $k \times 1$

Then (21.3) is termed a vector autoregression.

To map this into (21.1), we set

$$
x_t =
\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix},
\quad
A =
\begin{bmatrix}
\phi_1 & \phi_2 & \phi_3 & \phi_4 \\
I & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
0 & 0 & I & 0
\end{bmatrix},
\quad
C =
\begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix},
\quad
G = \begin{bmatrix} I & 0 & 0 & 0 \end{bmatrix}
$$

where $I$ is the $k \times k$ identity matrix and $\sigma$ is a $k \times k$ matrix.

21.2.2.4. Seasonals#

We can use (21.1) to represent

  1. the deterministic seasonal $y_t = y_{t-4}$

  2. the indeterministic seasonal $y_t = \phi_4 y_{t-4} + w_t$

In fact, both are special cases of (21.3).

With the deterministic seasonal, the transition matrix becomes

$$
A =
\begin{bmatrix}
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
$$

It is easy to check that $A^4 = I$, which implies that $x_t$ is strictly periodic with period 4:[^1]

$$
x_{t+4} = x_t
$$

Such an $x_t$ process can be used to model deterministic seasonals in quarterly time series.

The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.
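Both claims about the deterministic seasonal are easy to verify numerically; the following sketch checks that $A^4 = I$ and that all eigenvalues of $A$ have unit modulus (so no damping occurs).

```python
import numpy as np

# Transition matrix for the deterministic seasonal y_t = y_{t-4}
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

A4 = np.linalg.matrix_power(A, 4)   # should equal the identity
eigs = np.linalg.eigvals(A)         # fourth roots of unity
```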

21.2.3. Moving Average Representations#

A nonrecursive expression for $x_t$ as a function of $x_0, w_1, w_2, \ldots, w_t$ can be found by using (21.1) repeatedly to obtain

$$
\begin{aligned}
x_t & = A x_{t-1} + C w_t \\
& = A^2 x_{t-2} + A C w_{t-1} + C w_t \\
& \;\; \vdots \\
& = \sum_{j=0}^{t-1} A^j C w_{t-j} + A^t x_0
\end{aligned}
\tag{21.5}
$$

Representation (21.5) is a moving average representation.

It expresses $\{x_t\}$ as a linear function of

  1. current and past values of the process $\{w_t\}$ and

  2. the initial condition $x_0$

As an example of a moving average representation, let the model be

$$
A =
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix},
\qquad
C =
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
$$

You will be able to show that $A^t = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$ and $A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}'$.

Substituting into the moving average representation (21.5), we obtain

$$
x_{1t} = \sum_{j=0}^{t-1} w_{t-j} + \begin{bmatrix} 1 & t \end{bmatrix} x_0
$$

where $x_{1t}$ is the first entry of $x_t$.

The first term on the right is a cumulated sum of martingale differences and is therefore a martingale.

The second term is a translated linear function of time.

For this reason, $x_{1t}$ is called a martingale with drift.
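The claimed expressions for $A^t$ and $A^j C$ can be checked numerically; the following sketch does so for illustrative powers.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0],
              [0.0]])

t = 7
At = np.linalg.matrix_power(A, t)                  # claim: [[1, t], [0, 1]]
AjC = np.linalg.matrix_power(A, 3) @ C             # claim: [1, 0]' for any j
```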

21.3. Distributions and Moments#

21.3.1. Unconditional Moments#

Using (21.1), it’s easy to obtain expressions for the (unconditional) means of xt and yt.

We’ll explain what unconditional and conditional mean soon.

Letting μt:=E[xt] and using linearity of expectations, we find that

(21.6)#μt+1=Aμtwithμ0 given

Here μ0 is a primitive given in (21.1).

The variance-covariance matrix of xt is Σt:=E[(xtμt)(xtμt)].

Using xt+1μt+1=A(xtμt)+Cwt+1, we can determine this matrix recursively via

(21.7)#Σt+1=AΣtA+CCwithΣ0 given

As with μ0, the matrix Σ0 is a primitive given in (21.1).

As a matter of terminology, we will sometimes call

  • μt the unconditional mean of xt

  • Σt the unconditional variance-covariance matrix of xt

This is to distinguish μt and Σt from related objects that use conditioning information, to be defined below.

However, you should be aware that these “unconditional” moments do depend on the initial distribution N(μ0,Σ0).
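As a consistency check, the recursions (21.6) and (21.7) can be compared with the closed forms implied by the moving average representation (21.5), namely $\mu_t = A^t \mu_0$ and $\Sigma_t = A^t \Sigma_0 (A^t)' + \sum_{j=0}^{t-1} A^j C C' (A^j)'$. The primitives below are illustrative.

```python
import numpy as np

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
C = np.array([[0.3],
              [0.1]])
μ0 = np.array([1.0, 2.0])
Σ0 = np.eye(2)

T = 10
μ, Σ = μ0, Σ0
for t in range(T):
    μ = A @ μ                      # (21.6)
    Σ = A @ Σ @ A.T + C @ C.T      # (21.7)

# Closed forms from the moving average representation (21.5)
AT = np.linalg.matrix_power(A, T)
μ_cf = AT @ μ0
Σ_cf = AT @ Σ0 @ AT.T + sum(
    np.linalg.matrix_power(A, j) @ C @ C.T @ np.linalg.matrix_power(A, j).T
    for j in range(T))
```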

21.3.1.1. Moments of the Observables#

Using linearity of expectations again we have

$$
\mathbb E\,[y_t] = \mathbb E\,[G x_t] = G \mu_t
\tag{21.8}
$$

The variance-covariance matrix of $y_t$ is easily shown to be

$$
\operatorname{Var}[y_t] = \operatorname{Var}[G x_t] = G \Sigma_t G'
\tag{21.9}
$$

21.3.2. Distributions#

In general, knowing the mean and variance-covariance matrix of a random vector is not quite as good as knowing the full distribution.

However, there are some situations where these moments alone tell us all we need to know.

These are situations in which the mean vector and covariance matrix are all of the parameters that pin down the population distribution.

One such situation is when the vector in question is Gaussian (i.e., normally distributed).

This is the case here, given

  1. our Gaussian assumptions on the primitives

  2. the fact that normality is preserved under linear operations

In fact, it’s well-known that

(21.10)#uN(u¯,S)andv=a+BuvN(a+Bu¯,BSB)

In particular, given our Gaussian assumptions on the primitives and the linearity of (21.1) we can see immediately that both xt and yt are Gaussian for all t0 2.

Since xt is Gaussian, to find the distribution, all we need to do is find its mean and variance-covariance matrix.

But in fact we’ve already done this, in (21.6) and (21.7).

Letting μt and Σt be as defined by these equations, we have

(21.11)#xtN(μt,Σt)

By similar reasoning combined with (21.8) and (21.9),

(21.12)#ytN(Gμt,GΣtG)

21.3.3. Ensemble Interpretations#

How should we interpret the distributions defined by (21.11) and (21.12)?

Intuitively, the probabilities in a distribution correspond to relative frequencies in a large population drawn from that distribution.

Let's apply this idea to our setting, focusing on the distribution of $y_T$ for fixed $T$.

We can generate independent draws of $y_T$ by repeatedly simulating the evolution of the system up to time $T$, using an independent set of shocks each time.

The next figure shows 20 simulations, producing 20 time series for $\{y_t\}$, and hence 20 draws of $y_T$.

The system in question is the univariate autoregressive model (21.3).

The values of $y_T$ are represented by black dots in the left-hand figure

def cross_section_plot(A,
                   C,
                   G,
                   T=20,                 # Set the time
                   ymin=-0.8,
                   ymax=1.25,
                   sample_size = 20,     # 20 observations/simulations
                   n=4):                 # The number of dimensions for the initial x0

    ar = LinearStateSpace(A, C, G, mu_0=np.ones(n))

    fig, axes = plt.subplots(1, 2, figsize=(16, 5))

    for ax in axes:
        ax.grid(alpha=0.4)
        ax.set_ylim(ymin, ymax)

    ax = axes[0]
    ax.set_ylim(ymin, ymax)
    ax.set_ylabel('$y_t$', fontsize=12)
    ax.set_xlabel('time', fontsize=12)
    ax.vlines((T,), -1.5, 1.5)

    ax.set_xticks((T,))
    ax.set_xticklabels(('$T$',))

    sample = []
    for i in range(sample_size):
        rcolor = random.choice(('c', 'g', 'b', 'k'))
        x, y = ar.simulate(ts_length=T+15)
        y = y.flatten()
        ax.plot(y, color=rcolor, lw=1, alpha=0.5)
        ax.plot((T,), (y[T],), 'ko', alpha=0.5)
        sample.append(y[T])

    y = y.flatten()
    axes[1].set_ylim(ymin, ymax)
    axes[1].set_ylabel('$y_t$', fontsize=12)
    axes[1].set_xlabel('relative frequency', fontsize=12)
    axes[1].hist(sample, bins=16, density=True, orientation='horizontal', alpha=0.5)
    plt.show()
ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5
σ = 0.1

A_2 = [[ϕ_1, ϕ_2, ϕ_3, ϕ_4],
       [1,     0,     0,     0],
       [0,     1,     0,     0],
       [0,     0,     1,     0]]

C_2 = [[σ], [0], [0], [0]]

G_2 = [1, 0, 0, 0]

cross_section_plot(A_2, C_2, G_2)

In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our sample of 20 $y_T$'s.

Here is another figure, this time with 100 observations

t = 100
cross_section_plot(A_2, C_2, G_2, T=t)

Let’s now try with 500,000 observations, showing only the histogram (without rotation)

T = 100
ymin=-0.8
ymax=1.25
sample_size = 500_000

ar = LinearStateSpace(A_2, C_2, G_2, mu_0=np.ones(4))
fig, ax = plt.subplots()
x, y = ar.simulate(sample_size)
mu_x, mu_y, Sigma_x, Sigma_y, Sigma_yx = ar.stationary_distributions()
f_y = norm(loc=float(mu_y.item()), scale=float(np.sqrt(Sigma_y.item())))
y = y.flatten()
ygrid = np.linspace(ymin, ymax, 150)

ax.hist(y, bins=50, density=True, alpha=0.4)
ax.plot(ygrid, f_y.pdf(ygrid), 'k-', lw=2, alpha=0.8, label='true density')
ax.set_xlim(ymin, ymax)
ax.set_xlabel('$y_t$', fontsize=12)
ax.set_ylabel('relative frequency', fontsize=12)
ax.legend(fontsize=12)
plt.show()

The black line is the population density of $y_T$ calculated from (21.12).

The histogram and population distribution are close, as expected.

By looking at the figures and experimenting with parameters, you will gain a feel for how the population distribution depends on the model primitives listed above, as intermediated by the distribution’s parameters.

21.3.3.1. Ensemble Means#

In the preceding figure, we approximated the population distribution of $y_T$ by

  1. generating $I$ sample paths (i.e., time series) where $I$ is a large number

  2. recording each observation $y_T^i$

  3. histogramming this sample

Just as the histogram approximates the population distribution, the ensemble or cross-sectional average

$$
\bar y_T := \frac{1}{I} \sum_{i=1}^I y_T^i
$$

approximates the expectation $\mathbb E\,[y_T] = G \mu_T$ (as implied by the law of large numbers).

Here’s a simulation comparing the ensemble averages and population means at time points t=0,,50.

The parameters are the same as for the preceding figures, and the sample size is relatively small (I=20).

I = 20
T = 50
ymin = -0.5
ymax = 1.15

ar = LinearStateSpace(A_2, C_2, G_2, mu_0=np.ones(4))

fig, ax = plt.subplots()

ensemble_mean = np.zeros(T)
for i in range(I):
    x, y = ar.simulate(ts_length=T)
    y = y.flatten()
    ax.plot(y, 'c-', lw=0.8, alpha=0.5)
    ensemble_mean = ensemble_mean + y

ensemble_mean = ensemble_mean / I
ax.plot(ensemble_mean, color='b', lw=2, alpha=0.8, label='$\\bar y_t$')
m = ar.moment_sequence()

population_means = []
for t in range(T):
    μ_x, μ_y, Σ_x, Σ_y = next(m)
    population_means.append(float(μ_y.item()))

ax.plot(population_means, color='g', lw=2, alpha=0.8, label=r'$G\mu_t$')
ax.set_ylim(ymin, ymax)
ax.set_xlabel('time', fontsize=12)
ax.set_ylabel('$y_t$', fontsize=12)
ax.legend(ncol=2)
plt.show()

The ensemble mean for $x_t$ is

$$
\bar x_T := \frac{1}{I} \sum_{i=1}^I x_T^i \to \mu_T
\qquad (I \to \infty)
$$

The limit $\mu_T$ is a "long-run average".

(By long-run average we mean the average for an infinite ($I = \infty$) number of sample $x_T$'s.)

Another application of the law of large numbers assures us that

$$
\frac{1}{I} \sum_{i=1}^I (x_T^i - \bar x_T)(x_T^i - \bar x_T)' \to \Sigma_T
\qquad (I \to \infty)
$$

21.3.4. Joint Distributions#

In the preceding discussion, we looked at the distributions of $x_t$ and $y_t$ in isolation.

This gives us useful information but doesn't allow us to answer questions like

  • what's the probability that $x_t \geq 0$ for all $t$?

  • what's the probability that the process $\{y_t\}$ exceeds some value $a$ before falling below $b$?

  • etc., etc.

Such questions concern the joint distributions of these sequences.

To compute the joint distribution of $x_0, x_1, \ldots, x_T$, recall that joint and conditional densities are linked by the rule

$$
p(x, y) = p(y \mid x) \, p(x)
\qquad \text{(joint = conditional $\times$ marginal)}
$$

From this rule we get $p(x_0, x_1) = p(x_1 \mid x_0) \, p(x_0)$.

The Markov property $p(x_t \mid x_{t-1}, \ldots, x_0) = p(x_t \mid x_{t-1})$ and repeated applications of the preceding rule lead us to

$$
p(x_0, x_1, \ldots, x_T) = p(x_0) \prod_{t=0}^{T-1} p(x_{t+1} \mid x_t)
$$

The marginal $p(x_0)$ is just the primitive $N(\mu_0, \Sigma_0)$.

In view of (21.1), the conditional densities are

$$
p(x_{t+1} \mid x_t) = N(A x_t, C C')
$$

21.3.4.1. Autocovariance Functions#

An important object related to the joint distribution is the autocovariance function

$$
\Sigma_{t+j, t} := \mathbb E\,[(x_{t+j} - \mu_{t+j})(x_t - \mu_t)']
\tag{21.13}
$$

Elementary calculations show that

$$
\Sigma_{t+j, t} = A^j \Sigma_t
\tag{21.14}
$$

Notice that $\Sigma_{t+j, t}$ in general depends on both $j$, the gap between the two dates, and $t$, the earlier date.
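Here is a Monte Carlo sketch of (21.14) with $j = 1$ for a scalar system whose variance we set to the fixed point of (21.7), $\Sigma = c^2 / (1 - a^2)$, so that the one-period autocovariance should be approximately $a \Sigma$; the parameter values are illustrative.

```python
import numpy as np

# Scalar system x_{t+1} = a x_t + c w_{t+1}
a, c = 0.8, 0.3
Σ = c**2 / (1 - a**2)      # fixed point of (21.7): Σ = a²Σ + c²

rng = np.random.default_rng(42)
N = 200_000
x_t = rng.normal(0.0, np.sqrt(Σ), size=N)     # draws of x_t with variance Σ
x_tp1 = a * x_t + c * rng.standard_normal(N)  # one step of the system

sample_cov = np.mean(x_tp1 * x_t)   # estimate of Σ_{t+1, t}
theory_cov = a * Σ                  # (21.14) with j = 1
```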

21.4. Stationarity and Ergodicity#

Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space models.

Let’s start with the intuition.

21.4.1. Visualizing Stability#

Let’s look at some more time series from the same model that we analyzed above.

This picture shows cross-sectional distributions for $y_t$ at times $T, T', T''$

def cross_plot(A,
               C,
               G,
               steady_state=False,
               T0=10,
               T1=50,
               T2=75,
               T4=100):

    ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))

    if steady_state:
        # Start x_0 at the stationary distribution N(μ_∞, Σ_∞)
        μ_x, μ_y, Σ_x, Σ_y, Σ_yx = ar.stationary_distributions()
        ar = LinearStateSpace(A, C, G, mu_0=μ_x, Sigma_0=Σ_x)

    ymin, ymax = -0.6, 0.6
    fig, ax = plt.subplots()
    ax.grid(alpha=0.4)
    ax.set_ylim(ymin, ymax)
    ax.set_ylabel('$y_t$', fontsize=12)
    ax.set_xlabel('time', fontsize=12)

    ax.vlines((T0, T1, T2), -1.5, 1.5)
    ax.set_xticks((T0, T1, T2))
    ax.set_xticklabels(("$T$", "$T'$", "$T''$"), fontsize=12)

    for i in range(80):
        rcolor = random.choice(('c', 'g', 'b'))
        x, y = ar.simulate(ts_length=T4)
        y = y.flatten()
        ax.plot(y, color=rcolor, lw=0.8, alpha=0.5)
        ax.plot((T0, T1, T2), (y[T0], y[T1], y[T2]), 'ko', alpha=0.5)
    plt.show()
cross_plot(A_2, C_2, G_2)

Note how the time series "settle down" in the sense that the distributions at $T'$ and $T''$ are relatively similar to each other, but unlike the distribution at $T$.

Apparently, the distributions of $y_t$ converge to a fixed long-run distribution as $t \to \infty$.

When such a distribution exists it is called a stationary distribution.

21.4.2. Stationary Distributions#

In our setting, a distribution $\psi_\infty$ is said to be stationary for $x_t$ if

$$
x_t \sim \psi_\infty
\quad \text{and} \quad
x_{t+1} = A x_t + C w_{t+1}
\implies
x_{t+1} \sim \psi_\infty
$$

Since

  1. in the present case, all distributions are Gaussian

  2. a Gaussian distribution is pinned down by its mean and variance-covariance matrix

we can restate the definition as follows: $\psi_\infty$ is stationary for $x_t$ if

$$
\psi_\infty = N(\mu_\infty, \Sigma_\infty)
$$

where $\mu_\infty$ and $\Sigma_\infty$ are fixed points of (21.6) and (21.7) respectively.
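The fixed point $\Sigma_\infty$ can be computed either by iterating (21.7) to convergence or by calling SciPy's discrete Lyapunov solver; the two approaches should agree. A sketch with illustrative primitives:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.2],
              [0.0, 0.5]])
C = np.array([[0.3],
              [0.1]])

# Iterate Σ ↦ A Σ A' + C C' to convergence (spectral radius of A is < 1)
Σ = np.zeros((2, 2))
for _ in range(500):
    Σ = A @ Σ @ A.T + C @ C.T

# Direct solution of Σ = A Σ A' + C C'
Σ_lyap = solve_discrete_lyapunov(A, C @ C.T)
```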

21.4.3. Covariance Stationary Processes#

Let’s see what happens to the preceding figure if we start x0 at the stationary distribution.

cross_plot(A_2, C_2, G_2, steady_state='True')

Now the differences in the observed distributions at $T, T'$ and $T''$ come entirely from random fluctuations due to the finite sample size.

By

  • our choosing $x_0 \sim N(\mu_\infty, \Sigma_\infty)$

  • the definitions of $\mu_\infty$ and $\Sigma_\infty$ as fixed points of (21.6) and (21.7) respectively

we've ensured that

$$
\mu_t = \mu_\infty
\quad \text{and} \quad
\Sigma_t = \Sigma_\infty
\quad \text{for all } t
$$

Moreover, in view of (21.14), the autocovariance function takes the form $\Sigma_{t+j, t} = A^j \Sigma_\infty$, which depends on $j$ but not on $t$.

This motivates the following definition.

A process $\{x_t\}$ is said to be covariance stationary if

  • both $\mu_t$ and $\Sigma_t$ are constant in $t$

  • $\Sigma_{t+j, t}$ depends on the time gap $j$ but not on time $t$

In our setting, $\{x_t\}$ will be covariance stationary if $\mu_0, \Sigma_0, A, C$ assume values that imply that none of $\mu_t, \Sigma_t, \Sigma_{t+j, t}$ depends on $t$.

21.4.4. Conditions for Stationarity#

21.4.4.1. The Globally Stable Case#

The difference equation $\mu_{t+1} = A \mu_t$ is known to have unique fixed point $\mu_\infty = 0$ if all eigenvalues of $A$ have moduli strictly less than unity.

That is, if `np.all(np.abs(np.linalg.eigvals(A)) < 1)` evaluates to `True`.

The difference equation (21.7) also has a unique fixed point in this case, and, moreover,

$$
\mu_t \to \mu_\infty = 0
\quad \text{and} \quad
\Sigma_t \to \Sigma_\infty
\quad \text{as} \quad t \to \infty
$$

regardless of the initial conditions $\mu_0$ and $\Sigma_0$.

This is the globally stable case; see these notes for a more theoretical treatment.

However, global stability is more than we need for stationary solutions, and often more than we want.

To illustrate, consider our second order difference equation example.

Here the state is $x_t = \begin{bmatrix} 1 & y_t & y_{t-1} \end{bmatrix}'$.

Because of the constant first component in the state vector, we will never have $\mu_t \to 0$.

How can we find stationary solutions that respect a constant state component?

21.4.4.2. Processes with a Constant State Component#

To investigate such a process, suppose that $A$ and $C$ take the form

$$
A =
\begin{bmatrix}
A_1 & a \\
0 & 1
\end{bmatrix},
\qquad
C =
\begin{bmatrix}
C_1 \\
0
\end{bmatrix}
$$

where

  • $A_1$ is an $(n-1) \times (n-1)$ matrix

  • $a$ is an $(n-1) \times 1$ column vector

Let $x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}'$ where $x_{1t}$ is $(n-1) \times 1$.

It follows that

$$
x_{1, t+1} = A_1 x_{1t} + a + C_1 w_{t+1}
$$

Let $\mu_{1t} = \mathbb E\,[x_{1t}]$ and take expectations on both sides of this expression to get

$$
\mu_{1, t+1} = A_1 \mu_{1t} + a
\tag{21.15}
$$

Assume now that the moduli of the eigenvalues of $A_1$ are all strictly less than one.

Then (21.15) has a unique stationary solution, namely,

$$
\mu_{1\infty} = (I - A_1)^{-1} a
$$

The stationary value of $\mu_t$ itself is then $\mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}'$.
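The formula $\mu_{1\infty} = (I - A_1)^{-1} a$ can be checked on the second-order example, with the state reordered so the constant comes last: for $\phi_0 = 1.1$, $\phi_1 = 0.8$, $\phi_2 = -0.8$, the stationary mean of $y_t$ should be $\phi_0 / (1 - \phi_1 - \phi_2) = 1.1$.

```python
import numpy as np

# Second-order example with the state ordered as x_t = (y_t, y_{t-1}, 1)
ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8
A1 = np.array([[ϕ_1, ϕ_2],
               [1.0, 0.0]])
a = np.array([ϕ_0, 0.0])

# Stability of A1: all eigenvalue moduli strictly less than one
assert np.all(np.abs(np.linalg.eigvals(A1)) < 1)

μ1 = np.linalg.solve(np.eye(2) - A1, a)   # μ_{1∞} = (I - A1)^{-1} a
```

Both entries of `μ1` equal the stationary mean of $y_t$, since the state stacks $y_t$ and $y_{t-1}$.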

The stationary values of $\Sigma_t$ and $\Sigma_{t+j, t}$ satisfy

$$
\begin{aligned}
\Sigma_\infty & = A \Sigma_\infty A' + C C' \\
\Sigma_{t+j, t} & = A^j \Sigma_\infty
\end{aligned}
\tag{21.16}
$$

Notice that here $\Sigma_{t+j, t}$ depends on the time gap $j$ but not on calendar time $t$.

In conclusion, if

  • $x_0 \sim N(\mu_\infty, \Sigma_\infty)$ and

  • the moduli of the eigenvalues of $A_1$ are all strictly less than unity

then the $\{x_t\}$ process is covariance stationary, with constant state component.

Note

If the eigenvalues of $A_1$ are less than unity in modulus, then (a) starting from any initial value, the mean and variance-covariance matrix both converge to their stationary values; and (b) iterations on (21.7) converge to the fixed point of the discrete Lyapunov equation in the first line of (21.16).

21.4.5. Ergodicity#

Let’s suppose that we’re working with a covariance stationary process.

In this case, we know that the ensemble mean will converge to $\mu_\infty$ as the sample size $I$ approaches infinity.

21.4.5.1. Averages over Time#

Ensemble averages across simulations are interesting theoretically, but in real life, we usually observe only a single realization $\{x_t, y_t\}_{t=0}^T$.

So now let's take a single realization and form the time-series averages

$$
\bar x := \frac{1}{T} \sum_{t=1}^T x_t
\quad \text{and} \quad
\bar y := \frac{1}{T} \sum_{t=1}^T y_t
$$

Do these time series averages converge to something interpretable in terms of our basic state-space representation?

The answer depends on something called ergodicity.

Ergodicity is the property that time series and ensemble averages coincide.

More formally, ergodicity implies that time series sample averages converge to their expectation under the stationary distribution.

In particular,

  • $\frac{1}{T} \sum_{t=1}^T x_t \to \mu_\infty$

  • $\frac{1}{T} \sum_{t=1}^T (x_t - \bar x_T)(x_t - \bar x_T)' \to \Sigma_\infty$

  • $\frac{1}{T} \sum_{t=1}^T (x_{t+j} - \bar x_T)(x_t - \bar x_T)' \to A^j \Sigma_\infty$

In our linear Gaussian setting, any covariance stationary process is also ergodic.
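Here is a single-realization sketch for a scalar model $x_{t+1} = b + a x_t + c w_{t+1}$, whose stationary mean and variance are $b/(1-a)$ and $c^2/(1-a^2)$; the time averages of one long path should be close to these values (the parameter values are illustrative).

```python
import numpy as np

a, b, c = 0.8, 0.5, 0.3
μ_inf = b / (1 - a)          # stationary mean = 2.5
Σ_inf = c**2 / (1 - a**2)    # stationary variance = 0.25

rng = np.random.default_rng(1234)
T = 200_000
x = np.empty(T)
x[0] = μ_inf                 # start at the stationary mean
for t in range(T - 1):
    x[t + 1] = b + a * x[t] + c * rng.standard_normal()

time_avg = x.mean()          # should be close to μ_inf
time_var = x.var()           # should be close to Σ_inf
```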

21.5. Noisy Observations#

In some settings, the observation equation $y_t = G x_t$ is modified to include an error term.

Often this error term represents the idea that the true state can only be observed imperfectly.

To include an error term in the observation we introduce

  • An IID sequence of $\ell \times 1$ random vectors $v_t \sim N(0, I)$.

  • A $k \times \ell$ matrix $H$.

and extend the linear state-space system to

$$
\begin{aligned}
x_{t+1} & = A x_t + C w_{t+1} \\
y_t & = G x_t + H v_t \\
x_0 & \sim N(\mu_0, \Sigma_0)
\end{aligned}
\tag{21.17}
$$

The sequence $\{v_t\}$ is assumed to be independent of $\{w_t\}$.

The process $\{x_t\}$ is not modified by noise in the observation equation, and its moments, distributions and stability properties remain the same.

The unconditional moments of $y_t$ from (21.8) and (21.9) now become

$$
\mathbb E\,[y_t] = \mathbb E\,[G x_t + H v_t] = G \mu_t
\tag{21.18}
$$

The variance-covariance matrix of $y_t$ is easily shown to be

$$
\operatorname{Var}[y_t] = \operatorname{Var}[G x_t + H v_t] = G \Sigma_t G' + H H'
\tag{21.19}
$$

The distribution of $y_t$ is therefore

$$
y_t \sim N(G \mu_t, G \Sigma_t G' + H H')
$$
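A quick Monte Carlo sketch of (21.19) in the scalar case, drawing the state from an assumed marginal distribution; the values of $G$, $H$ and the state variance are illustrative.

```python
import numpy as np

# Scalar observation y_t = G x_t + H v_t ⇒ Var[y_t] = G Σ G' + H H'
G, H = 1.0, 0.2
Σ_x = 0.25                   # assumed variance of the scalar state x_t

rng = np.random.default_rng(7)
N = 200_000
x = rng.normal(0.0, np.sqrt(Σ_x), N)   # state draws
v = rng.standard_normal(N)             # observation noise
y = G * x + H * v

theory_var = G * Σ_x * G + H * H       # 0.25 + 0.04 = 0.29
```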

21.6. Prediction#

The theory of prediction for linear state space systems is elegant and simple.

21.6.1. Forecasting Formulas – Conditional Means#

The natural way to predict variables is to use conditional distributions.

For example, the optimal forecast of $x_{t+1}$ given information known at time $t$ is

$$
\mathbb E_t\,[x_{t+1}] := \mathbb E\,[x_{t+1} \mid x_t, x_{t-1}, \ldots, x_0] = A x_t
$$

The right-hand side follows from $x_{t+1} = A x_t + C w_{t+1}$ and the fact that $w_{t+1}$ is zero mean and independent of $x_t, x_{t-1}, \ldots, x_0$.

That $\mathbb E_t\,[x_{t+1}] = \mathbb E\,[x_{t+1} \mid x_t]$ is an implication of $\{x_t\}$ having the Markov property.

The one-step-ahead forecast error is

$$
x_{t+1} - \mathbb E_t\,[x_{t+1}] = C w_{t+1}
$$

The covariance matrix of the forecast error is

$$
\mathbb E\,[(x_{t+1} - \mathbb E_t\,[x_{t+1}])(x_{t+1} - \mathbb E_t\,[x_{t+1}])'] = C C'
$$

More generally, we'd like to compute the $j$-step-ahead forecasts $\mathbb E_t\,[x_{t+j}]$ and $\mathbb E_t\,[y_{t+j}]$.

With a bit of algebra, we obtain

$$
x_{t+j} = A^j x_t + A^{j-1} C w_{t+1} + A^{j-2} C w_{t+2} + \cdots + A^0 C w_{t+j}
$$

In view of the IID property, current and past state values provide no information about future values of the shock.

Hence $\mathbb E_t\,[w_{t+k}] = \mathbb E\,[w_{t+k}] = 0$.

It now follows from linearity of expectations that the $j$-step-ahead forecast of $x$ is

$$
\mathbb E_t\,[x_{t+j}] = A^j x_t
$$

The $j$-step-ahead forecast of $y$ is therefore

$$
\mathbb E_t\,[y_{t+j}] = \mathbb E_t\,[G x_{t+j} + H v_{t+j}] = G A^j x_t
$$
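These forecast formulas can be checked by Monte Carlo: average many simulated continuations from a fixed $x_t$ and compare the result with $A^j x_t$ (the primitives below are illustrative).

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
C = np.array([[0.2],
              [0.1]])
x_t = np.array([1.0, -0.5])
j = 5

forecast = np.linalg.matrix_power(A, j) @ x_t   # E_t[x_{t+j}] = A^j x_t

# Monte Carlo average of simulated continuations from x_t
rng = np.random.default_rng(0)
N = 50_000
paths = np.tile(x_t, (N, 1))                    # N copies of x_t, shape (N, 2)
for _ in range(j):
    paths = paths @ A.T + rng.standard_normal((N, 1)) @ C.T
mc_mean = paths.mean(axis=0)
```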

21.6.2. Covariance of Prediction Errors#

It is useful to obtain the covariance matrix of the vector of $j$-step-ahead prediction errors

$$
x_{t+j} - \mathbb E_t\,[x_{t+j}] = \sum_{s=0}^{j-1} A^s C w_{t-s+j}
\tag{21.20}
$$

Evidently,

$$
V_j := \mathbb E_t\,[(x_{t+j} - \mathbb E_t\,[x_{t+j}])(x_{t+j} - \mathbb E_t\,[x_{t+j}])'] = \sum_{k=0}^{j-1} A^k C C' (A^k)'
\tag{21.21}
$$

$V_j$ defined in (21.21) can be calculated recursively via $V_1 = C C'$ and

$$
V_j = C C' + A V_{j-1} A',
\qquad j \geq 2
\tag{21.22}
$$

$V_j$ is the conditional covariance matrix of the errors in forecasting $x_{t+j}$, conditioned on time $t$ information $x_t$.

Under particular conditions, $V_j$ converges to

$$
V_\infty = C C' + A V_\infty A'
\tag{21.23}
$$

Equation (21.23) is an example of a discrete Lyapunov equation in the covariance matrix $V_\infty$.

A sufficient condition for $V_j$ to converge is that the eigenvalues of $A$ be strictly less than one in modulus.

Weaker sufficient conditions for convergence associate eigenvalues equaling or exceeding one in modulus with elements of $C$ that equal 0.
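The recursion (21.22) is easy to check against the direct sum in (21.21); the matrices below are illustrative.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
C = np.array([[0.2],
              [0.1]])
CC = C @ C.T

j = 6
# Direct sum (21.21)
V_direct = sum(np.linalg.matrix_power(A, k) @ CC @ np.linalg.matrix_power(A, k).T
               for k in range(j))

# Recursion (21.22): V_1 = C C', then V_j = C C' + A V_{j-1} A'
V = CC
for _ in range(j - 1):
    V = CC + A @ V @ A.T
```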

21.7. Code#

Our preceding simulations and calculations are based on code in the file lss.py from the QuantEcon.py package.

The code implements a class for handling linear state space models (simulations, calculating moments, etc.).

One Python construct you might not be familiar with is the use of a generator function in the method moment_sequence().

Go back and read the relevant documentation if you’ve forgotten how generator functions work.

Examples of usage are given in the solutions to the exercises.

21.8. Exercises#

Exercise 21.1

In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system (21.1).

We want the following objects

  • Forecast of a geometric sum of future $x$'s, or $\mathbb E_t \left[ \sum_{j=0}^\infty \beta^j x_{t+j} \right]$.

  • Forecast of a geometric sum of future $y$'s, or $\mathbb E_t \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \right]$.

These objects are important components of some famous and interesting dynamic models.

For example,

  • if $\{y_t\}$ is a stream of dividends, then $\mathbb E \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \mid x_t \right]$ is a model of a stock price

  • if $\{y_t\}$ is the money supply, then $\mathbb E \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \mid x_t \right]$ is a model of the price level

Show that:

$$
\mathbb E_t \left[ \sum_{j=0}^\infty \beta^j x_{t+j} \right] = [I - \beta A]^{-1} x_t
$$

and

$$
\mathbb E_t \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \right] = G [I - \beta A]^{-1} x_t
$$

What must the modulus of every eigenvalue of $A$ be less than?
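As a sanity check before attempting the proof (and not a substitute for it), the first formula can be verified numerically by truncating the geometric sum, provided every eigenvalue $\lambda$ of $A$ satisfies $|\beta \lambda| < 1$; the primitives below are illustrative.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
x = np.array([1.0, 2.0])
β = 0.95

# Convergence condition for the geometric sum
assert np.all(np.abs(β * np.linalg.eigvals(A)) < 1)

truncated = sum(β**j * np.linalg.matrix_power(A, j) @ x for j in range(2000))
closed_form = np.linalg.solve(np.eye(2) - β * A, x)   # [I - βA]^{-1} x
```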


[^1]: The eigenvalues of $A$ are $(1, -1, i, -i)$.

[^2]: The correct way to argue this is by induction. Suppose that $x_t$ is Gaussian. Then (21.1) and (21.10) imply that $x_{t+1}$ is Gaussian. Since $x_0$ is assumed to be Gaussian, it follows that every $x_t$ is Gaussian. Evidently, this implies that each $y_t$ is Gaussian.