help-octave

Re: Optimization in Octave


From: Urs Hackstein
Subject: Re: Optimization in Octave
Date: Wed, 12 Mar 2014 16:03:37 +0100

2014-03-12 13:36 GMT+01:00, Julien Bect <address@hidden>:
> On 12/03/2014 12:41, Urs Hackstein wrote:
>> First, the program runs through for some values of b, but not for
>> others. When it does run through, it seems to return the minimal value
>> of F, not the maximum. Changing F to
>> F = @(x) (1/(real(p(x(1),x(2),x(3),x(4),x(5)))));
>> gives better results, but still not the optimal values. For example,
>> opt(9.9454e+10) returns -2.5657e+11 + 9.9454e+10i, but
>> p(3,8,1.5,4,62) = -1.1198e+11 + 9.9454e+10i is significantly better.
>> Maybe -2.5657e+11 is a local maximum of the real part of p for the
>> fixed imaginary part 9.9454e+10, but how can we reach the absolute
>> maximum?
>
> I would suggest trying F -> -F instead of F -> 1/F to obtain a
> minimization problem.
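>
> In Octave this is just a sign change on the objective; a minimal
> sketch, assuming your p.m defines p on the path:
>
> F = @(x) (-real(p(x(1),x(2),x(3),x(4),x(5))));   # minimizing -Re(p) maximizes Re(p)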
>
> If that doesn't work better, I can see several directions for you to dig into:
>
> 1) Assumption: your problem does not really have local optima, but sqp
> stops prematurely.
>
> Then you might get better results by providing either
>
> - a better starting point,
> - or a better scaling of your function (e.g., try to rescale the inputs
> to [0; 1] or take the log),
> - or smaller tolerances for the stopping conditions (see the sketch below).
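>
> A minimal sketch of passing a tighter tolerance and a larger iteration
> limit to sqp; x0, F and the constraint g are placeholders here, and I
> am assuming your equality constraint is imag(p) = b as in your earlier
> message (the empty [] arguments skip the unused ones):
>
> g = @(x) (imag(p(x(1),x(2),x(3),x(4),x(5))) - b);  # equality constraint g(x) = 0
> [x, obj, info] = sqp (x0, F, g, [], [], [], 200, 1e-10);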
>
> 2) Assumption: your problem really has local optima.
>
> Then sqp, which is a local optimization method, is not appropriate for
> your problem unless you can provide a starting point that is close
> enough to the optimum. If you cannot provide such a starting point,
> then you have to use a method that can escape from local minima. You
> can find several such methods in the optimization package
>
> http://octave.sourceforge.net/optim/overview.html#Optimization (see,
> e.g., battery, sa_min or de_min)
>
> and also in NLopt
>
> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Global_optimization
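>
> For instance, a rough sketch of a de_min call from the optim package,
> ignoring the constraint for the moment (the control field names and the
> bounds are assumptions based on the package documentation, so check
> "help de_min" for your version):
>
> ctl.XVmin = [0 0 0 0 0];        # assumed lower bounds on x(1..5)
> ctl.XVmax = [10 10 10 10 100];  # assumed upper bounds on x(1..5)
> [x, obj_value] = de_min (F, ctl);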
>
> You have to know, however, that most of these algorithms cannot handle
> equality constraints directly (ISRES from NLopt seems to be the
> exception). You can turn your problem into an unconstrained problem by
> using the constraint as a penalty on your objective function, for
> instance:
>
> Rp = @(x) (real(p(x(1),x(2),x(3),x(4),x(5))));   # real part of p
> Ip = @(x) (imag(p(x(1),x(2),x(3),x(4),x(5))));   # imaginary part of p
> F  = @(x) (-Rp(x) + alpha * (Ip(x) - b)^2);      # penalized objective
>
> and then solve several times using increasing values of alpha to enforce
> the constraint asymptotically. In NLopt, this sort of thing can be done
> automatically using the "augmented Lagrangian" algorithm:
>
> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Augmented_Lagrangian_algorithm
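>
> A minimal sketch of that increasing-penalty loop, warm-starting each
> solve from the previous optimum (the alpha schedule and x0 are
> placeholders, not recommendations):
>
> x = x0;
> for alpha = 10 .^ (0:2:8)
>   F = @(x) (-Rp(x) + alpha * (Ip(x) - b)^2);
>   x = sqp (x, F);   # unconstrained solve; the penalty enforces Ip(x) = b
> endfor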
>
>
> It might be interesting for the people on this list if you could provide
> more details on your function p. Perhaps you could even send us the
> code? Can you obtain the gradients of Rp and Ip with respect to x? How
> long does it take to evaluate p for one value of x? How many evaluations
> of p do you expect to need in order to solve the optimization problem?
>
> @++
> Julien

Dear Julien,

thanks a lot for your detailed explanations. I have just started to work
through them. F -> -F does not work better. A better starting point
improves the result, but I don't know how to choose an appropriate
starting point. I assume that our problem has local optima, but I cannot
prove it. I am sending you the code of p as an attachment; p is a very
long expression built up from fractions and powers of x(1),...,x(5).

Best regards,

Urs Hackstein

Attachment: p.m
Description: Text Data

