help-octave

Re: Constrained non linear regression using ML


From: Corrado
Subject: Re: Constrained non linear regression using ML
Date: Thu, 18 Mar 2010 17:21:40 +0000
User-agent: Thunderbird 2.0.0.23 (X11/20090817)

Dear Fredrik, Octavers,

I am first trying the maximum likelihood approach. The likelihood function, and hence the log-likelihood function, will depend on the pdf of the error e in the formula:

y = f(theta * x) + e

Now let's say that e is Gaussian distributed. Then, as we said, I can use LS, which coincides with ML in this case.
The residuals would then also be Gaussian distributed. Is that right?
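As a numerical sketch of that point (not from the thread; the choice f = exp, the true theta = 2, and the noise sigma = 0.1 are all hypothetical): with Gaussian e, minimizing the sum of squared errors is the same as maximizing the likelihood, and the residuals of the fit come out approximately Gaussian with the noise's standard deviation.

```python
import numpy as np

# Hypothetical setup: y = f(theta * x) + e with Gaussian e.
# f = exp and theta = 2.0 are illustrative choices only.
rng = np.random.default_rng(0)

theta_true = 2.0
sigma = 0.1
x = rng.uniform(0.0, 1.0, size=1000)
f = np.exp
y = f(theta_true * x) + rng.normal(0.0, sigma, size=x.size)

# Least-squares objective; with Gaussian e, minimizing this
# is equivalent to maximizing the likelihood in theta.
def sse(theta):
    return np.sum((y - f(theta * x)) ** 2)

# Crude 1-D grid search (a sketch; a real fit would use an optimizer).
grid = np.linspace(1.5, 2.5, 2001)
theta_hat = grid[np.argmin([sse(t) for t in grid])]

residuals = y - f(theta_hat * x)
print(theta_hat)          # should be close to 2.0
print(residuals.std())    # should be close to sigma = 0.1
```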

If e is distributed differently (for example beta, in the continuous case, or binomial, in the discrete case), then I am better off using maximum likelihood.

How should the residuals be distributed in this case? Should they not be distributed the same as e? In particular, how should they be distributed in the case of e being beta distributed?
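A sketch of the beta case (again hypothetical: f = exp, e ~ Beta(2, 5), and the shape parameters a, b are assumed known rather than estimated): fitting theta by maximizing the beta log-likelihood of the residuals, one can then check whether the residuals of the fit do look like draws from that same beta distribution.

```python
import math
import numpy as np

# Hypothetical setup: y = f(theta * x) + e with e ~ Beta(a, b).
rng = np.random.default_rng(1)

theta_true, a, b = 2.0, 2.0, 5.0
x = rng.uniform(0.0, 1.0, size=2000)
f = np.exp
y = f(theta_true * x) + rng.beta(a, b, size=x.size)

# log of the beta normalizing constant B(a, b)
log_beta_const = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def nll(theta):
    # Negative log-likelihood of theta (a, b treated as known).
    e = y - f(theta * x)
    if np.any(e <= 0.0) or np.any(e >= 1.0):
        return np.inf  # beta support is (0, 1)
    return -np.sum((a - 1) * np.log(e)
                   + (b - 1) * np.log1p(-e)
                   - log_beta_const)

# Grid search again (robust to the infinite values off-support).
grid = np.linspace(1.9, 2.1, 4001)
theta_hat = grid[np.argmin([nll(t) for t in grid])]

resid = y - f(theta_hat * x)
print(theta_hat)     # should be close to 2.0
print(resid.mean())  # should be close to the beta mean a/(a+b)
```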

Best,


Fredrik Lingvall wrote:
On 03/17/10 11:48, Corrado wrote:
Dear Fredrik, Jaroslav, Octavers,

Yes, of course it depends on the error. But you can still build a frequency distribution with the y_j. It was additional information on the problem, but probably not very useful, apologies.

First of all, the {p1,....,pn}, and hence the {k1,....,kn}, can have a few tens of elements (that is, n could be maybe 60 in the worst case). The problem is that for such a case we use millions of observations ;). In the case of the 40,000 observations it would be safe to suppose we would use a max of 20.

To me, 40,000 observations seems more than enough to estimate 60
parameters. But if you are not sure what model order to use for your
problem, then there are methods available for model selection
(comparison). What you do is essentially to integrate out all parameters
of your model to get the probability of that particular model (given
your data). You do this for all model orders, and you can then look at
the odds ratio, for example

p(M_1|y,I) / p(M_2|y,I)

for comparing models M_1 and M_2. If the odds ratio is significantly
larger than one, then you would prefer model M_1 over M_2.
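The integration over parameters can be hard in practice. As a rough numerical illustration only (not the exact computation described above), one common shortcut approximates each model's marginal likelihood via BIC, log p(y|M) ≈ -BIC/2, so with equal prior odds the log odds ratio is (BIC_2 - BIC_1)/2. The polynomial models, noise level, and sample size below are all hypothetical.

```python
import numpy as np

# Hypothetical data that truly follow the simpler model M_1 (linear).
rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, n)

def bic(degree):
    # Fit a polynomial of the given degree by least squares.
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    k = degree + 2  # polynomial coefficients + noise variance
    sigma2 = np.mean(resid ** 2)
    # Gaussian log-likelihood at the ML estimates
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + k * np.log(n)

# log p(M|y) ~ -BIC/2, so log odds = (BIC_2 - BIC_1)/2
# with M_1 = linear, M_2 = degree-5 polynomial.
log_odds = 0.5 * (bic(5) - bic(1))
odds = np.exp(log_odds)
print(odds)  # > 1: the simpler linear model M_1 is preferred
```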

I recommend this book:
http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521841504 but
also Larry Bretthorst's book (available for download here:
http://bayes.wustl.edu/glb/bib.html) and papers.

/Fredrik



--
Corrado Topi
PhD Researcher
Global Climate Change and Biodiversity
Area 18,Department of Biology
University of York, York, YO10 5YW, UK
Phone: + 44 (0) 1904 328645, E-mail: address@hidden


