
undergraduate octave project ideas (fwd)


From: Dirk Eddelbuettel
Subject: undergraduate octave project ideas (fwd)
Date: Sat, 12 Jan 2002 10:44:05 -0600

Zach,

Good idea to ask here. We can surely use contributions. As a minor nit,
bug-octave is slightly off as far as lists go; help-octave is more topical,
so I have taken the liberty of redirecting this.

  "Zach" == Z Frazier <address@hidden> writes:
  Zach> I am a senior at the University of Washington. I am a double major in
  Zach> Math and the Numerical Algorithms track of our Computational Sciences
  Zach> major. I am looking for a good project to work on over the next couple
  Zach> of months, and was hoping I could also contribute to the development
  Zach> of Octave.
  Zach> 
  Zach> Are there any suggestions for projects?  I am looking for something
  Zach> that could be completed in a few months of light/moderate work,
  Zach> something that is covered in the undergraduate courses of numerical
  Zach> analysis (root finding, matrix computations, optimization, etc.).
  Zach> 
  Zach> I have considered working on an LP solver, but it looks like it is
  Zach> pretty much done, as far as the math goes.  I could work on Quadratic
  Zach> Programming, though I have very limited experience with it.  At the
  Zach> same time, there are really no *free* QP libraries that could be
  Zach> integrated into Octave (that I could find, at least), so it seems
  Zach> like a reasonable project.
  Zach> 
  Zach> Is there anything at this level that Octave is in need of?

IMHO nonlinear optimisation under constraints would be much appreciated in
the Free Software world. There is some code, but typically under restrictive
licenses.

One decent piece of free code is L-BFGS-B [nonlinear optim under bounds],
which is e.g. used in GNU R (cf. http://www.r-project.org). I attach below
the help page of optim(), a general-purpose routine which can employ several
underlying algorithms.  At a minimum, a port of optim() would be greatly
appreciated (and should be feasible given the GPL'ed sources of R).
Extending the algorithms is probably a grad school topic :)

Others might have different suggestions, and your supervisor might have yet
another one.

Cheers, Dirk



optim                  package:base                  R Documentation

General-purpose Optimization

Description:

     General-purpose optimization based on Nelder-Mead, quasi-Newton
     and conjugate-gradient algorithms. It includes an option for
     box-constrained optimization.

Usage:

     optim(par, fn, gr = NULL,
           method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"),
           lower = -Inf, upper = Inf,
           control = list(), hessian = FALSE, ...)

Arguments:

     par: Initial values for the parameters to be optimized over.

      fn: A function to be minimized (or maximized), with first
          argument the vector of parameters over which minimization is
          to take place. It should return a scalar result.

      gr: A function to return the gradient. Not needed for the
          `"Nelder-Mead"' and `"SANN"' method. If it is `NULL' and it
          is needed, a finite-difference approximation will be used. It
          is guaranteed that `gr' will be called immediately after a
          call to `fn' at the same parameter values.

  method: The method to be used. See Details.

lower, upper: Bounds on the variables for the `"L-BFGS-B"' method.

 control: A list of control parameters. See Details.

 hessian: Logical. Should a numerically differentiated Hessian matrix
          be returned?

     ...: Further arguments to be passed to `fn' and `gr'.

Details:

     By default this function performs minimization, but it will
     maximize if `control$fnscale' is negative.
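
     For instance (an illustrative call added by the editor, not part of
     the original help page), maximization can be requested by supplying
     a negative `fnscale':

          ## maximize -(x - 2)^2, which peaks at x = 2, via fnscale = -1
          optim(0, function(x) -(x - 2)^2, method = "BFGS",
                control = list(fnscale = -1))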

     The default method is an implementation of that of Nelder and Mead
     (1965), that uses only function values and is robust but
     relatively slow. It will work reasonably well for
     non-differentiable functions.

     Method `"BFGS"' is a quasi-Newton method (also known as a variable
     metric algorithm), specifically that published simultaneously in
     1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function
     values and gradients to build up a picture of the surface to be
     optimized.

     Method `"CG"' is a conjugate gradients method based on that by
     Fletcher and Reeves (1964) (but with the option of Polak-Ribiere
     or Beale-Sorenson updates).  Conjugate gradient methods will
     generally be more fragile than the BFGS method, but as they do not
     store a matrix they may be successful in much larger optimization
     problems.

     Method `"L-BFGS-B"' is that of Byrd et. al. (1994) which allows
     box constraints, that is each variable can be given a lower and/or
     upper bound. The initial value must satisfy the constraints. This
     uses a limited-memory modification of the BFGS quasi-Newton
     method. If non-trivial bounds are supplied, this method will be
     selected, with a warning.
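
     For illustration (a sketch added here, not part of the original
     page), supplying non-trivial bounds without naming a method makes
     `optim' switch to `"L-BFGS-B"' and emit a warning:

          ## the default method is "Nelder-Mead", but the bounds
          ## force "L-BFGS-B"; minimum is at c(0.1, 0.1)
          optim(c(0.5, 0.5), function(x) sum(x^2), lower = 0.1, upper = 1)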

     Nocedal and Wright (1999) is a comprehensive reference for the
     previous three methods.

     Method `"SANN"' is a variant of simulated annealing given in
     Belisle (1992). Simulated-annealing belongs to the class of
     stochastic global optimization methods. It uses only function
     values but is relatively slow. It will also work for
     non-differentiable functions. This implementation uses the
     Metropolis function for the acceptance probability. The next
     candidate point is generated from a Gaussian Markov kernel with
     scale proportional to the actual temperature. Temperatures are
     decreased according to the logarithmic cooling schedule as given
     in Belisle (1992, p. 890). Note that the `"SANN"' method depends
     critically on the settings of the control parameters.  It is not a
     general-purpose method but can be very useful in getting to a good
     value on a very rough surface.

     Function `fn' can return `NA' or `Inf' if the function cannot be
     evaluated at the supplied value, but the initial value must have a
     computable finite value of `fn'. (Except for method `"L-BFGS-B"'
     where the values should always be finite.)
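
     As a small sketch (illustrative only, not from the original page), a
     function that is undefined for non-positive arguments can signal this
     by returning `Inf':

          ## x - log(x) is undefined for x <= 0; return Inf there so the
          ## line search backs off; the minimum is at x = 1
          fpos <- function(x) if (x <= 0) Inf else x - log(x)
          optim(1.5, fpos, method = "BFGS")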

     `optim' can be used recursively, and for a single parameter as
     well as many.

     The `control' argument is a list that can supply any of the
     following components (a combined sketch follows the list):

     `trace' Integer. If positive, tracing information on the progress
          of the optimization is produced. Higher values may produce
          more tracing information: for method `"L-BFGS-B"' there are
          six levels of tracing.  (To understand exactly what these do
          see the source code: higher levels give more detail.)

     `fnscale' An overall scaling to be applied to the value of `fn'
          and `gr' during optimization. If negative, turns the problem
          into a maximization problem. Optimization is performed on
          `fn(par)/fnscale'.

     `parscale' A vector of scaling values for the parameters.
          Optimization is performed on `par/parscale' and these should
          be comparable in the sense that a unit change in any element
          produces about a unit change in the scaled value.

     `ndeps' A vector of step sizes for the finite-difference
          approximation to the gradient, on `par/parscale' scale.
          Defaults to `1e-3'.

     `maxit' The maximum number of iterations. Defaults to `100' for
          the derivative-based methods, and `500' for `"Nelder-Mead"'.
          For `"SANN"' `maxit' gives the total number of function
          evaluations. There is no other stopping criterion. Defaults
          to `10000'.

     `abstol' The absolute convergence tolerance. Only useful for
          non-negative functions, as a tolerance for reaching zero.

     `reltol' Relative convergence tolerance.  The algorithm stops if
          it is unable to reduce the value by a factor of `reltol *
          (abs(val) + reltol)' at a step.  Defaults to
          `sqrt(.Machine$double.eps)', typically about `1e-8'.

     `alpha', `beta', `gamma' Scaling parameters for the
          `"Nelder-Mead"' method. `alpha' is the reflection factor
          (default 1.0), `beta' the contraction factor (0.5) and
          `gamma' the expansion factor (2.0).

     `REPORT' The frequency of reports for the `"BFGS"' and
          `"L-BFGS-B"' methods if `control$trace' is positive. Defaults
          to every 10 iterations.

     `type' for the conjugate-gradients method. Takes value `1' for the
          Fletcher-Reeves update, `2' for Polak-Ribiere and `3' for
          Beale-Sorenson.

     `lmm' is an integer giving the number of BFGS updates retained in
          the `"L-BFGS-B"' method, It defaults to `5'.

     `factr' controls the convergence of the `"L-BFGS-B"' method.
          Convergence occurs when the reduction in the objective is
          within this factor of the machine tolerance. Default is
          `1e7', that is a tolerance of about `1e-8'.

     `pgtol' helps control the convergence of the `"L-BFGS-B"' method.
          It is a tolerance on the projected gradient in the current
          search direction. It defaults to zero, in which case the check
          is suppressed.

     `temp' controls the `"SANN"' method. It is the starting
          temperature for the cooling schedule. Defaults to `10'.

     `tmax' is the number of function evaluations at each temperature
          for the `"SANN"' method. Defaults to `10'.

Value:

     A list with components: 

     par: The best set of parameters found.

   value: The value of `fn' corresponding to `par'.

  counts: A two-element integer vector giving the number of calls to
          `fn' and `gr' respectively. This excludes those calls needed
          to compute the Hessian, if requested, and any calls to `fn'
          to compute a finite-difference approximation to the gradient.

convergence: An integer code. `0' indicates successful convergence.
          Error codes are

          `1' indicates that the iteration limit `maxit' had been
               reached.

          `10' indicates degeneracy of the Nelder-Mead simplex.

          `51' indicates a warning from the `"L-BFGS-B"' method; see
               component `message' for further details.

          `52' indicates an error from the `"L-BFGS-B"' method; see
               component `message' for further details.

 message: A character string giving any additional information returned
          by the optimizer, or `NULL'.

 hessian: Only if argument `hessian' is true. A symmetric matrix giving
          an estimate of the Hessian at the solution found. Note that
          this is the Hessian of the unconstrained problem even if the
          box constraints are active.

Note:

     The code for methods `"Nelder-Mead"', `"BFGS"' and `"CG"' was
     based originally on Pascal code in Nash (1990) that was translated
     by `p2c' and then hand-optimized.  Dr Nash has agreed that the
     code can be made freely available.

     The code for method `"L-BFGS-B"' is based on Fortran code by Zhu,
     Byrd, Lu-Chen and Nocedal obtained from Netlib (file
     `opt/lbfgs_bcm.shar': another version is in `toms/778').

     The code for method `"SANN"' was contributed by A. Trapletti.

References:

     Belisle, C. J. P. (1992) Convergence theorems for a class of
     simulated annealing algorithms on R^d. Journal of Applied
     Probability, 29, 885-895.

     Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C.  (1995) A limited
     memory algorithm for bound constrained optimization. SIAM J.
     Scientific Computing, 16, 1190-1208.

     Fletcher, R. and Reeves, C. M. (1964) Function minimization by
     conjugate gradients. Computer Journal 7, 148-154.

     Nash, J. C. (1990) Compact Numerical Methods for Computers. Linear
     Algebra and Function Minimisation. Adam Hilger.

     Nelder, J. A. and Mead, R. (1965) A simplex algorithm for function
     minimization. Computer Journal 7, 308-313.

     Nocedal, J. and Wright, S. J. (1999) Numerical Optimization.
     Springer.

See Also:

     `nlm', `optimize'

Examples:

     fr <- function(x) {   ## Rosenbrock Banana function
         x1 <- x[1]
         x2 <- x[2]
         100 * (x2 - x1 * x1)^2 + (1 - x1)^2
     }
     grr <- function(x) { ## Gradient of `fr'
         x1 <- x[1]
         x2 <- x[2]
         c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
            200 *      (x2 - x1 * x1))
     }
     optim(c(-1.2,1), fr)
     optim(c(-1.2,1), fr, grr, method = "BFGS")
     optim(c(-1.2,1), fr, NULL, method = "BFGS", hessian = TRUE)
     optim(c(-1.2,1), fr, grr, method = "CG")
     optim(c(-1.2,1), fr, grr, method = "CG", control=list(type=2))
     optim(c(-1.2,1), fr, grr, method = "L-BFGS-B")

     flb <- function(x)
         { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) }
     ## 25-dimensional box constrained
     optim(rep(3, 25), flb, NULL, "L-BFGS-B",
           lower=rep(2, 25), upper=rep(4, 25)) # par[24] is *not* at boundary

     ## "wild" function , global minimum at about -15.81515
     fw <- function (x)
         10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x+80
     plot(fw, -50, 50, n=1000, main = "optim() minimising `wild function'")

     res <- optim(50, fw, method="SANN",
                  control=list(maxit=20000, temp=20, parscale=20))
     res
     ## Now improve locally
     (r2 <- optim(res$par, fw, method="BFGS"))
     points(r2$par, r2$val, pch = 8, col = "red", cex = 2)



-- 
Good judgment comes from experience; experience comes from bad judgment. 
                                                            -- F. Brooks



-------------------------------------------------------------
Octave is freely available under the terms of the GNU GPL.

Octave's home on the web:  http://www.octave.org
How to fund new projects:  http://www.octave.org/funding.html
Subscription information:  http://www.octave.org/archive.html
-------------------------------------------------------------


