RE: Simulated annealing with nonlinear constraints

From: heberf
Subject: RE: Simulated annealing with nonlinear constraints
Date: Sat, 9 Oct 1999 12:04:43 -0500 (CDT)

Quite correct.  In fact, if anyone is interested, this same PhD student has
written some MCMC Octave code (Metropolis-Hastings algorithms, etc.).  He
was my research assistant last year, so I taught him Octave and he has been
busily writing code ever since.


On Sat, 9 Oct 1999 address@hidden wrote:

> On 09-Oct-99 address@hidden wrote:
> > Someone asked whether you can do simulated annealing with nonlinear
> > inequality constraints.  The answer is yes and no.  With simulated
> > annealing you essentially sample randomly from a distribution on the
> > feasible set.  If the feasible set is not of the form (low <= x
> > <= high), where low, x, and high are vectors, then you won't easily
> > be able to work out the distribution to sample from.
> > 
> > However the beauty of simulated annealing is that you don't have to
> > have a smooth objective function.  So you can just add the nonlinear
> > constraint to the objective in the form of a large exterior penalty. 
> > By this I mean that you add (if you are minimizing, subtract if
> > maximizing) a large positive number to the objective for points which
> > violate your constraint.
> > This does waste some computer time, because you will occasionally
> > evaluate the objective function at infeasible points (for those
> > unfamiliar with optimization: the measure of efficiency of a given
> > algorithm is how few times the objective has to be evaluated before a
> > solution is obtained), but it's the best you can do.
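
Heber's exterior-penalty idea can be sketched roughly as follows.  This is
Python rather than Octave, purely for illustration; the objective, the
constraint, the penalty size, the Gaussian step, and the linear cooling
schedule are all invented for the example, not taken from the thread:

```python
import math
import random

def penalized(f, g, x, big=1e6):
    """Exterior penalty: add a large constant whenever g(x) > 0 (infeasible)."""
    return f(x) + (big if g(x) > 0 else 0.0)

def anneal(f, g, x0, steps=20000, t0=1.0, seed=1):
    """Minimize f subject to g(x) <= 0 by simulated annealing on the
    penalized objective.  Infeasible candidates are almost never accepted,
    because the penalty makes the acceptance probability exp(-big/t) tiny."""
    rng = random.Random(seed)
    x, fx = x0, penalized(f, g, x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9        # linear cooling toward zero
        y = x + rng.gauss(0.0, 0.5)              # random candidate move
        fy = penalized(f, g, y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy                        # accept the move
            if fx < fbest:
                best, fbest = x, fx              # remember the best point seen
    return best

# Example: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
# The constrained optimum is at the boundary x = 1.
xstar = anneal(lambda x: (x - 2.0) ** 2, lambda x: x - 1.0, 0.0)
```

Note that, as Heber says, the annealer still spends evaluations on
infeasible candidates; the penalty only prevents it from settling there.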
> As a further comment on the above: Simulated annealing is often
> implemented as an evolutionary variant of Markov Chain Monte Carlo
> (MCMC) in which the distribution being sampled from contains a
> parameter ("temperature") which is slowly forced towards zero so that
> the distribution being sampled from ultimately has all its probability
> at the maximum of the objective function.
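
To make the "temperature" idea concrete: if the tempered target is
p_T(x) proportional to exp(f(x)/T), then lowering T concentrates the
probability mass at the maximizer of f.  A tiny numerical illustration in
Python (the quadratic f, the grid, and the two temperatures are all chosen
arbitrarily for the example):

```python
import math

def tempered(f, xs, T):
    """Normalized weights of p_T(x) proportional to exp(f(x)/T) on a grid."""
    w = [math.exp(f(x) / T) for x in xs]
    z = sum(w)
    return [v / z for v in w]

f = lambda x: -(x - 0.3) ** 2          # maximum at x = 0.3
xs = [i / 10.0 for i in range(11)]     # grid 0.0, 0.1, ..., 1.0
hot = tempered(f, xs, 1.0)             # high temperature: mass spread out
cold = tempered(f, xs, 0.001)          # low temperature: mass piled at x = 0.3
```

At T = 1 the distribution is nearly flat over the grid; at T = 0.001
essentially all the probability sits on the grid point nearest the maximum,
which is exactly why forcing the temperature to zero turns sampling into
optimization.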
> MCMC is a two-stage process. With 'x' the current position, a candidate
> 'y' for the next position is sampled from a "proposal distribution"
> 'q(x,y)'. Then the candidate 'y' is accepted at random with probability
>   min[ 1, (q(y,x)*P(y))/(q(x,y)*P(x)) ]
> (with the obvious interpretation if the denominator in this expression
> is zero), where P(x) is the distribution you are trying to sample from.
> If 'y' is accepted, the next position is 'y'; otherwise it remains at 'x'.
> This process x -> y -> ... is a Markov Chain with equilibrium
> distribution P(x).
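
The two-stage process just described can be sketched as follows (Python for
illustration; the symmetric Gaussian proposal and the standard-normal target
are choices made for the example, not prescribed by the thread).  Because the
proposal is symmetric, q(y,x) = q(x,y) and the acceptance probability
reduces to min[1, P(y)/P(x)]:

```python
import math
import random

def metropolis_hastings(logp, x0, steps=50000, scale=1.0, seed=0):
    """One-dimensional Metropolis-Hastings with a symmetric Gaussian
    proposal q(x, .), so min[1, q(y,x)P(y)/(q(x,y)P(x))] simplifies to
    min[1, P(y)/P(x)].  logp is log P(x) up to an additive constant."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)                   # candidate from q(x, .)
        if math.log(rng.random()) < logp(y) - logp(x):  # accept w.p. min[1, P(y)/P(x)]
            x = y
        samples.append(x)                               # on rejection, stay at x
    return samples

# target P(x) proportional to exp(-x^2/2): the standard normal
s = metropolis_hastings(lambda x: -0.5 * x * x, 0.0)
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean ** 2
```

The chain's empirical mean and variance should approach 0 and 1, the moments
of the target, which is the equilibrium-distribution property Ted describes.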
> An alternative, therefore, to implementing the constraint by returning
> a large "penalty" in the objective function outside the constraint
> boundaries (as Heber suggests) is to incorporate the constraints
> into the proposal distribution q(x,y) so that q(x,y) = 0 for all
> 'y' outside the constraint boundaries. In this way, such a 'y' will
> never be a candidate and so will never appear as a sampled value from
> the distribution 'P'.
> Whether it is easier to do it this way depends, I suppose, on the
> constraints. Since MCMC is pretty robust where the choice of 'q(x,y)'
> is concerned (though there is a lot of scope for studying good
> choices of 'q(x,y)' in particular problems), a simple and
> effective suggestion might be to make 'q(x,y)', as a function of
> 'y', uniform over the admissible region and zero outside.
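
Ted's last suggestion can be sketched in the same style (Python again for
illustration; the one-dimensional box constraint and the target are invented
for the example).  With q(x, .) uniform over the admissible region, q(x,y) =
q(y,x) wherever both are nonzero, so the acceptance probability again
reduces to min[1, P(y)/P(x)], and every candidate is feasible by
construction:

```python
import math
import random

def constrained_mcmc(logp, low, high, steps=50000, seed=0):
    """Independence-style sampler: the proposal is uniform on [low, high]
    regardless of the current position, so no infeasible point is ever
    proposed, and the constant proposal density cancels in the ratio."""
    rng = random.Random(seed)
    x = 0.5 * (low + high)                              # feasible start
    samples = []
    for _ in range(steps):
        y = rng.uniform(low, high)                      # always feasible
        if math.log(rng.random()) < logp(y) - logp(x):  # min[1, P(y)/P(x)]
            x = y
        samples.append(x)
    return samples

# sample P(x) proportional to exp(-x^2/2), restricted to 0 <= x <= 1
s = constrained_mcmc(lambda x: -0.5 * x * x, 0.0, 1.0)
```

Unlike the penalty approach, no objective evaluations are spent on
infeasible points; the trade-off is that a uniform proposal can mix slowly
when the admissible region is large relative to where P concentrates.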
> I hope this helps.
> Best wishes,
> Ted.
> --------------------------------------------------------------------
> E-Mail: (Ted Harding) <address@hidden>
> Date: 09-Oct-99                                       Time: 11:00:14
> ------------------------------ XFMail ------------------------------
> ---------------------------------------------------------------------
> Octave is freely available under the terms of the GNU GPL.  To ensure
> that development continues, see
> Instructions for unsubscribing:
> ---------------------------------------------------------------------

