help-octave

Re: Constrained non linear regression using ML


From: Corrado
Subject: Re: Constrained non linear regression using ML
Date: Tue, 23 Mar 2010 08:10:06 +0000
User-agent: Thunderbird 2.0.0.24 (X11/20100317)

Fredrik Lingvall wrote:
>> 3) the pdf of e is dependent on E(y)
>
> Note that y is your data and it is not distributed at all - it is the
> numbers that your data recording machine gave you. The error is just
> something you "add" because you don't have perfect knowledge of the
> physical process that you are studying. The better your knowledge is
> (better theory/better model), the smaller the error becomes.

I do not understand what you mean by y not being distributed at all.

What I mean is:

1) our observations are y = {y_1, y_2, ..., y_t}.

2) Each y_i can be considered as an extraction or realisation of a random variable Y.

3) This random variable Y has a distribution pdf(Y).

4) If I build the frequency distribution of y, then this frequency distribution may tell me something about the distribution of Y.

5) If I understand the process that generates the measurements y = {y_1, y_2, ..., y_t}, then I may infer something about the distribution of Y.
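Points 2-4 above can be sketched numerically (a Python sketch for illustration; the Gamma choice for Y is purely hypothetical): draw many realisations y_i of a random variable Y, build the frequency distribution of y, and check that it approximates pdf(Y).

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

# Hypothetical example: suppose Y ~ Gamma(shape=2, scale=1).
# The y_i are realisations (extractions) of Y (point 2).
shape, scale = 2.0, 1.0
y = rng.gamma(shape, scale, size=10000)

# Frequency distribution of y (point 4): a normalised histogram.
counts, edges = np.histogram(y, bins=50, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# True pdf of Y evaluated at the bin centres (point 3).
pdf = centres**(shape - 1) * np.exp(-centres / scale) / (gamma(shape) * scale**shape)

# With many realisations the frequency distribution tracks pdf(Y) closely.
max_abs_err = float(np.max(np.abs(counts - pdf)))
print(max_abs_err)
```

With t = 10000 observations the histogram and the pdf agree to within a few percent, which is exactly the sense in which the data tell you something about the distribution of Y.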

I thought this was the approach, for example, of

Ferrari, S.L.P., and Cribari-Neto, F. (2004). Beta Regression for Modeling Rates and Proportions. Journal of Applied Statistics, 31(7), 799-815.

Could you please explain to me where I am going wrong?

> No, I would not say that it is "an approximation by desperation". The
> Gaussian assumption is a very conservative and safe assumption.

OK.
If I understand correctly, the approach you are proposing is Bayesian,
assuming an exponential (positive) prior distribution for the theta.

I do not know the Bayesian approach, so thanks a lot for the books.
Unfortunately, we do not have the Gregory book in our library, but I
will download the other one. I am stuck at home with a bad back and the
connection is not great, so as soon as I am back .... :D

At the same time, I would like to initially use ML for comparison with
some previous work, before moving to Bayesian.

> The computational difference may not be as huge as you think. In ML you
> just try to maximize L(theta), and in MAP (Bayesian) you maximize
> p(theta|I)*L(theta), so you end up with an optimization problem in both
> cases. When you do ML you don't use all the info you have (bounds of the
> parameters, etc.). Using such information often improves your estimates
> significantly.
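Fredrik's point above can be sketched numerically (a Python sketch; the model, true value, and prior rate are all hypothetical): ML maximises L(theta), MAP maximises p(theta|I)*L(theta), and both are the same kind of optimisation problem. Here an exponential prior encodes the bound theta > 0.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: y_i = theta + e_i with Gaussian errors, true theta = 0.5.
theta_true = 0.5
y = theta_true + rng.normal(0.0, 1.0, size=20)

def neg_log_likelihood(theta):
    # Gaussian errors => -log L(theta) is half the sum of squared
    # residuals, up to an additive constant.
    return 0.5 * np.sum((y - theta) ** 2)

def neg_log_posterior(theta, rate=1.0):
    # Exponential prior p(theta|I) = rate * exp(-rate * theta), theta >= 0:
    # it encodes the bound theta > 0 that plain ML ignores.
    if theta < 0:
        return np.inf  # prior probability zero outside the support
    return neg_log_likelihood(theta) + rate * theta

# Both estimates come from the same kind of optimisation;
# a simple grid search suffices for this one-parameter sketch.
grid = np.linspace(-2.0, 3.0, 5001)
theta_ml = grid[np.argmin([neg_log_likelihood(t) for t in grid])]
theta_map = grid[np.argmin([neg_log_posterior(t) for t in grid])]

print(theta_ml, theta_map)  # MAP is shrunk slightly toward 0 and never negative
```

The extra work over ML is just the added prior term in the objective, which is why the computational difference is small; the payoff is that the MAP estimate automatically respects the parameter bound.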
I do not understand: What is the advantage of applying MAP if I still have to calculate L(theta),
which assumes I know about the distribution of the error?

I had the impression that MAP was not fully Bayesian,
and that a fully Bayesian approach would assume
only very vague prior knowledge of the distributions of the thetas.

Am I wrong?

Best,

--
Corrado Topi
PhD Researcher
Global Climate Change and Biodiversity
Area 18,Department of Biology
University of York, York, YO10 5YW, UK
Phone: + 44 (0) 1904 328645, E-mail: address@hidden


