
linear model implementation

From: E. Joshua Rigler
Subject: linear model implementation
Date: 18 Aug 2002 18:20:34 -0600


I'm trying to implement various ARMA/OE-type linear models by
minimizing the prediction error with the leasqr.m function from the
octave-forge package.  When I use this nonlinear optimization function
(with the appropriate prediction equation below) to fit an ARX model, I
get results identical to a simple linear regression of the output on
past inputs and outputs for a 1-step-ahead prediction.  With more
complicated model structures, though, I run into problems: I don't know
how to code up the optimal predictor.

I am sure of this much...

         B(q)         1
yhat =   ---- u(k) + ---- v(k)
         A(q)        A(q)

...or, assuming strictly proper polynomials in q...

ARX model -->  yhat(k) = b1*u1 + b2*u2 + ... + bn*un
                       - a1*y1 - a2*y2 - ... - an*yn
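What I have working looks roughly like this (a sketch only; the orders
na = nb = 2, the function name, and the packing of theta are just for
illustration):

```octave
% One-step-ahead ARX predictor:
%   yhat(k) = b1*u(k-1) + ... + bn*u(k-n) - a1*y(k-1) - ... - an*y(k-n)
% theta = [a1; ...; a_na; b1; ...; b_nb]  (packing is my own choice)
function yhat = arx_predict (theta, u, y)
  na = 2;  nb = 2;                 % assumed model orders
  N = length (y);
  yhat = zeros (N, 1);
  for k = max (na, nb) + 1 : N
    yhat(k) = theta(na+1:na+nb)' * u(k-1:-1:k-nb) ...
            - theta(1:na)'       * y(k-1:-1:k-na);
  endfor
endfunction
```

Since yhat is linear in theta here, handing this to leasqr.m and doing
an ordinary linear regression give the same answer, which is exactly
what I observed above.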

For the OE (output error) type model, it's actually even easier, because
Octave's "filter" function already does this...

        B(q)
yhat =  ---- u(k) + v(k)
        F(q)

OE model -->  yhat(k) = b1*u1    + b2*u2    + ... + bn*un
                      - f1*yhat1 - f2*yhat2 - ... - fn*yhatn
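In other words (a sketch; b and f hold the B(q) and F(q) coefficients,
with B strictly proper and the leading 1 of F included):

```octave
% OE one-step predictor yhat = (B(q)/F(q)) u(k), computed recursively:
% b = [0 b1 b2 ...],  f = [1 f1 f2 ...]
b = [0 0.5];
f = [1 -0.9];
u = ones (20, 1);          % example input sequence
yhat = filter (b, f, u);   % filter handles the recursion on yhat
```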

But, despite having analytically calculated gradients for the various
filters, I have no idea how to implement something like an ARMA model...

        C(q)
yhat =  ---- v(k)
        A(q)

...or a Box-Jenkins model...

        B(q)         C(q)
yhat =  ---- u(k) +  ---- v(k)
        F(q)         D(q)

"v(k)" is an unknown white-noise sequence.  I've noticed a small
program in Octave called arma_rnd.m, which generates a pseudo-random
white-noise sequence of a specified variance and then uses it to
calculate a 1-step prediction, but I really don't think this (or
anything similar) is what I should use with leasqr.m.  For one, I have
no idea how to calculate the gradient of the variance.  Mostly, though,
it seems to me that the optimal predictor should somehow include the
previous process outputs, as the ARX predictor does.
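The closest I've come is to eliminate v(k) algebraically rather than
simulate it: for the ARMA model A(q)y = C(q)v the innovations are
v = (A/C)y, so the predictor is yhat = y - v = ((C - A)/C)y.  Because A
and C are both monic, C - A has a zero leading coefficient, so yhat
depends only on *past* outputs, just as in the ARX case.  A sketch
(function name is mine; a and c are the monic coefficient vectors):

```octave
% ARMA one-step predictor: yhat = ((C(q) - A(q)) / C(q)) y
% a = [1 a1 ... an],  c = [1 c1 ... cn]  (shorter one is zero-padded)
function yhat = arma_predict (a, c, y)
  n = max (length (a), length (c));
  a = [a(:)' zeros(1, n - length (a))];
  c = [c(:)' zeros(1, n - length (c))];
  yhat = filter (c - a, c, y);   % c - a starts with 0: past y only
endfunction
```

If this is right, the same trick should give the Box-Jenkins predictor
yhat = (D*B)/(C*F) u + ((C - D)/C) y, again built from filter calls.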

Please, if you have read this far and wish to enlighten me, don't
simply rewrite the transfer functions.  I can find those in any of
half a dozen books I have, and in the Matlab System Identification
Toolbox documentation.  What I need to understand is how the transfer
functions are coded up in a form that can be used with a non-linear
optimization function like leasqr.m.
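Concretely, what I'm picturing is wrapping such a predictor so that
leasqr.m can call it.  Something like the following (a rough sketch:
the orders, the packing of the parameter vector p, and the use of y as
both the "x" data and the target are my own guesses at the calling
convention):

```octave
% Hypothetical leasqr wrapper for an ARMA predictor.
% p = [a1 ... a_na, c1 ... c_nc];  x is the measured output sequence.
function yhat = arma_pred_wrap (x, p)
  na = 2;  nc = 2;                 % assumed model orders
  a = [1, p(1:na)(:)'];
  c = [1, p(na+1:na+nc)(:)'];
  yhat = filter (c - a, c, x);     % predictor uses only past x
endfunction

% then, roughly:
%   [yfit, pfit] = leasqr (y, y, p0, "arma_pred_wrap");
```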


