
Re: Built-in parallelization in optimization functions

From: Olaf Till
Subject: Re: Built-in parallelization in optimization functions
Date: Sat, 16 Nov 2013 22:23:02 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Nov 14, 2013 at 12:47:29PM +0100, Žiga Povalej wrote:
> Dear All,
> I am trying to solve a non-linear optimization problem in parallel. Does 
> there exist an optimization function with a built-in parallel option (similar 
> to 'fmincon' in Matlab)? I have checked the built-in optimization functions 
> and the optim package, but there seems to be no function with such an option.
> If there is no such optimization function, I am planning to implement it on 
> my own, since it would be very helpful to have a tool to speed up large, 
> time-consuming non-linear optimization problems.

Hi Žiga,

it seems I never put anything like that into the optim package, and I
don't think ready-made parallelism is available at the moment. It
should not be hard to implement, though (i.e. parallelism on a single
multi-processor machine).

One could change the default gradient/jacobian function of nonlin_min
or nonlin_residmin & co. and of the classical leasqr. If you can wait
a bit more than a week, I can do it myself (I can't do it during the
coming week). That would actually be better than doing it yourself,
since I'd want to review the code before inclusion anyway.

If you can't wait that long, you can provide your own gradient/jacobian
function to nonlin_min, nonlin_residmin & co., or leasqr, one that
computes the finite-difference gradient/jacobian in parallel (consider
using parcellfun from the general package).
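As a rough illustration of that approach, here is a minimal sketch of a
user-supplied jacobian function that evaluates the forward-difference
columns in parallel with parcellfun. The function and variable names
(parallel_jacobian, model, h) are my own, not anything shipped with
optim, and the step size and worker count are just plausible defaults:

```octave
function jac = parallel_jacobian (model, p, h = 1e-6)
  ## model: handle returning the residual vector for parameters p
  ## p:     column vector of current parameters
  ## h:     forward-difference step (assumed value, tune as needed)
  f0 = model (p);
  n = numel (p);
  ## each jacobian column is an independent model evaluation,
  ## so they can be distributed over the local processors
  cols = parcellfun (min (n, nproc ()), ...
                     @(i) (model (p + h * ((1:n)' == i)) - f0) / h, ...
                     num2cell (1:n), "UniformOutput", false);
  jac = cell2mat (cols);
endfunction
```

One could then pass it to nonlin_residmin via the "dfdp" option, e.g.
optim_problem = optimset ("dfdp", @(p) parallel_jacobian (@my_model, p));
where my_model is a placeholder for your residual function.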

Of course, providing an explicit gradient/jacobian function, where
possible, is better than parallel finite differences.


public key id EAFE0591, e.g. on x-hkp://

