Re: Optimisation functions missing?
From: Daniel Heiserer
Subject: Re: Optimisation functions missing?
Date: Fri, 05 Mar 1999 09:13:59 +0100
heberf wrote:
>
> Ok, we need to get together and write this stuff. This is not hard.
> The traditional way to solve an LP problem is with the simplex method,
> and there is a similar thing for QP problems.
>
> I recommend we try something a little more modern. If someone would
> like to join me (or several someones) we could have something in a
> short amount of time. Let me give you a little introduction to modern
> optimization (I'm no expert, but I've read some recent stuff and taken
> a class or two a few years back). When octave first came out we talked
> about using NPSOL, but the license was a problem. The truth is that
> NPSOL uses a very old algorithm called "sequential quadratic
> programming". I'm thinking we can do better than this.
>
> The problem with free software for optimization is the constraints.
> Unconstrained optimization is easy, and there is lots of free code for
> doing that. Modern optimization techniques transform constrained
> problems into unconstrained problems. The way you do it is to add a
> "penalty function" which is close to zero on the inside of the
> feasible region and rises sharply near the edge. There is a parameter
> which controls how steeply the function rises at the edge and how
> close to zero it is in the middle, so that in the limit the function
> is equal to the true objective inside the feasible region and is
> infinity elsewhere. The first step is to solve the problem for a low
> value of this parameter. This will give you a solution which lies
> strictly in the interior of the feasible region. For this reason these
> methods are called "interior point methods". Now imagine solving the
> problem for all possible values of this parameter. This traces out a
> path of solutions called the "central path". The trick comes in
> deciding how close you need to get to solving these intermediate
> problems and how much to increase the parameter between each
> intermediate problem (long step vs. short step). In other words, you
> decide how close you are going to stay to the central path.
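To make that concrete, here is a rough Octave sketch of the scheme for
an LP, min c'*x subject to A*x >= b, using a logarithmic barrier. The
parameter described above shows up here as 1/mu, so increasing it means
shrinking mu. The Newton inner loop, the tolerances, the factor-of-ten
update, and the function name lp_barrier are all illustrative
assumptions, not a proposed interface; x0 must be strictly feasible.

## Log-barrier sketch for:  min c'*x  subject to  A*x >= b.
## f_mu(x) = c'*x - mu*sum(log(A*x - b)); its minimizers x(mu)
## trace out the central path as mu shrinks toward zero.
function x = lp_barrier (c, A, b, x0)
  x = x0;                      # must satisfy A*x0 - b > 0 (interior!)
  mu = 1;                      # barrier weight: crude solutions first
  while (mu > 1e-8)
    for k = 1:50               # Newton iteration on the current f_mu
      s = A*x - b;                           # slacks, kept > 0
      g = c - mu * (A' * (1 ./ s));          # gradient of f_mu
      if (norm (g) < 1e-6)     # "close enough" to the central path
        break;
      endif
      H = mu * (A' * diag (1 ./ s.^2) * A);  # Hessian of f_mu
      dx = -H \ g;
      t = 1;                   # damp the step to stay strictly feasible
      while (any (A*(x + t*dx) - b <= 0))
        t = t / 2;
      endwhile
      x = x + t*dx;
    endfor
    mu = mu / 10;              # a "long step" in the parameter
  endwhile
endfunction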
>
> The advantage of these methods is two-fold.
>
> 1. They are easy, since unconstrained problems are easy to code. LP
> and QP problems are particularly easy since the gradient of the
> objective + penalty is known in closed form (see the small check
> after this list).
>
> 2. They scale well, in the sense that as the number of choice
> variables and constraints increases (problem complexity) the number
> of function evaluations it takes to find a solution increases rather
> slowly (at a polynomial rather than exponential rate, as was the case
> for the classical methods).
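To illustrate point 1: for the log-barrier function in the sketch
above, the gradient is available in closed form as
c - mu*A'*(1./(A*x - b)), and a quick finite-difference check in
Octave confirms it (the toy data below are made up for illustration):

## Closed-form gradient of the barrier objective for a toy LP,
## checked against a central finite difference.
c = [1; 2];  A = [1 0; 0 1; -1 -1];  b = [0; 0; -4];  mu = 0.5;
x = [1; 1];                             # a strictly interior point
f = @(x) c'*x - mu * sum (log (A*x - b));
g = c - mu * (A' * (1 ./ (A*x - b)));   # analytic gradient
h = 1e-6;
g_fd = zeros (2, 1);
for i = 1:2
  e = zeros (2, 1);  e(i) = h;
  g_fd(i) = (f (x + e) - f (x - e)) / (2*h);
endfor
disp (max (abs (g - g_fd)))             # ~1e-9 or smaller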
>
> Point 2 brings up the issue of large problems. These methods will
> work well for large problems, but usually these problems involve
> sparse matrices. In order to have a first-rate package we will need
> this functionality eventually.
>
> Anyone want to help me?
>
Yes, I am interested in helping. I could add some non-gradient methods
like genetic algorithms and simulated annealing.
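Simulated annealing in particular only takes a few lines. A minimal
Octave sketch (the Gaussian proposal, the geometric cooling schedule,
all the constants, and the name sim_anneal are arbitrary choices made
for illustration):

## Minimal simulated annealing: minimize f by accepting uphill
## moves with probability exp(-df/T) while the temperature cools.
function [x, fx] = sim_anneal (f, x)
  fx = f (x);
  T = 1;                              # initial temperature
  while (T > 1e-4)
    xc = x + T * randn (size (x));    # random Gaussian proposal
    fc = f (xc);
    if (fc < fx || rand () < exp (-(fc - fx) / T))
      x = xc;  fx = fc;               # accept (always if downhill)
    endif
    T = 0.999 * T;                    # geometric cooling schedule
  endwhile
endfunction

For example, [x, fx] = sim_anneal (@(x) sum (x.^2), [3; -2]) should
land near the origin.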
For gradient stuff I have heard a lot about Fletcher-Reeves. Is that
still state of the art?
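For reference, Fletcher-Reeves is the classic nonlinear conjugate
gradient update. A rough Octave sketch (the Armijo backtracking line
search and the restart safeguard are simplistic stand-ins for what a
real implementation would use, and cg_fr is just an illustrative name):

## Fletcher-Reeves nonlinear conjugate gradient sketch.
## f is the objective, gradf its gradient, x the starting point.
function x = cg_fr (f, gradf, x)
  g = gradf (x);
  d = -g;                             # start with steepest descent
  for k = 1:200
    if (g' * d >= 0)
      d = -g;                         # restart if d is not descent
    endif
    t = 1;                            # Armijo backtracking
    while (f (x + t*d) > f (x) + 1e-4 * t * (g' * d))
      t = t / 2;
    endwhile
    x = x + t*d;
    gnew = gradf (x);
    if (norm (gnew) < 1e-8)
      break;
    endif
    beta = (gnew' * gnew) / (g' * g); # the Fletcher-Reeves formula
    d = -gnew + beta * d;
    g = gnew;
  endfor
endfunction

E.g. cg_fr (@(x) sum (x.^2), @(x) 2*x, [3; -2]) converges to the origin.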
One thing would be nice in any case: sparse matrices and very sparse
matrices. Perhaps we could handle both in one implementation?
Bye, Daniel
--
With kind regards
Daniel Heiserer
--------------------------------------------------------------
Dipl.-Phys. Daniel Heiserer, BMW AG, Knorrstrasse 147, 80788 Muenchen
Department EK-20
Tel.: 089-382-21187, Fax.: 089-382-42820
mailto:address@hidden