
Re: [Help-glpk] Patches to improve pseudocost initialisation speed


From: Andrew Makhorin
Subject: Re: [Help-glpk] Patches to improve pseudocost initialisation speed
Date: Wed, 11 Jan 2017 22:20:16 +0300

Hi Chris,

> I'm not sure which patch confuses you, so I'll describe all of them,
> since they try to address three different issues.

Thank you for the clarification.

I think your design is too complicated. It seems to me that both
pseudo-cost initialization and strong branching could be implemented
as a special call to the dual simplex routine.

In the general case we need to implement the following procedure.

INPUT:

   the master LP, its basic optimal solution, and a factorization of
   the basis corresponding to that solution (all this can be passed
   through glp_prob);

   a subset { xj } of (structural) variables of the master LP and
   their new lower (xj >= new_lj) and upper (xj <= new_uj) bounds (it
   is assumed that the value of xj in the optimal solution to the
   master LP violates both new_lj and new_uj);

   the maximal number of dual simplex iterations to be applied to
   each xj in the subset.

PROCEDURE:

   for all xj in the subset do

      set LP := master LP

      set lj := new_lj

      solve LP (with dual simplex)

      set LP := master LP

      set uj := new_uj

      solve LP (with dual simplex)

   end

OUTPUT:

   for each xj, two optimal objective values, the first of which
   corresponds to the "up" branch, where lj was replaced with new_lj,
   and the second to the "down" branch, where uj was replaced with
   new_uj. Note that if some auxiliary LP was not solved to
   optimality, i.e. only a limited number of dual simplex iterations
   was performed, the objective value at the final point is
   suboptimal, i.e. it is a lower (minimization) or upper
   (maximization) bound on the optimal objective value.

The procedure above can be implemented *within* the dual simplex
driver and thus can use internal data structures of the simplex
routines (no API is needed).
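
Just for illustration (not the proposed implementation), the same
procedure emulated through the public API might look roughly as in
the sketch below. The helper name score_column is made up, it assumes
xj is a double-bounded column whose fractional value lies strictly
between its bounds, and each branch is re-solved via glp_simplex
rather than inside the driver:

   /* illustration only: API-level emulation of the procedure above;
      the real implementation would live inside the dual simplex
      driver and reuse its internal data structures */
   #include <math.h>
   #include <glpk.h>

   /* hypothetical helper: lp holds the master LP already solved to
      optimality; column j has fractional value x with lb < floor(x)
      and ceil(x) < ub, so GLP_DB bounds stay valid */
   static void score_column(glp_prob *lp, int j, int it_lim,
                            double *z_up, double *z_down)
   {
       double x  = glp_get_col_prim(lp, j);
       double lb = glp_get_col_lb(lp, j);
       double ub = glp_get_col_ub(lp, j);
       glp_smcp parm;
       glp_init_smcp(&parm);
       parm.msg_lev = GLP_MSG_OFF;
       parm.meth    = GLP_DUAL;  /* dual simplex from current basis */
       parm.it_lim  = it_lim;    /* a suboptimal objective is enough */
       /* "up" branch: lj := new_lj = ceil(xj) */
       glp_set_col_bnds(lp, j, GLP_DB, ceil(x), ub);
       glp_simplex(lp, &parm);
       *z_up = glp_get_obj_val(lp);
       /* "down" branch: restore lj and set uj := new_uj = floor(xj);
          note the warm-start basis is now the one left by the "up"
          solve, not the optimal basis of the master LP */
       glp_set_col_bnds(lp, j, GLP_DB, lb, floor(x));
       glp_simplex(lp, &parm);
       *z_down = glp_get_obj_val(lp);
       /* restore the original bounds of the master LP */
       glp_set_col_bnds(lp, j, GLP_DB, lb, ub);
   }

Calling this once per fractional structural variable would give the
pseudo-cost initialization data (infeasibility and status checks
omitted); the point of moving the loop inside the driver is precisely
to avoid the refactorizations that every such glp_simplex call
performs.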

To make the implementation efficient (this is the key point) we need
to avoid computing the factorization from scratch before we start
solving the next auxiliary LP, because the current factorization
corresponds to the basis of the previous auxiliary LP just solved.
Here the following approach can be used.

As you know, one of the simplest forms of the basis factorization is
the so-called "eta file" B = B0 * H1 * ... * Hk, where B is the
current basis matrix, B0 = L0 * U0 is the factorization of the basis
at some preceding simplex iteration, and H1, ..., Hk are eta factors.
Every time we change a column in B, a new factor H is added to the
eta file. Thus, if B0 corresponds to the optimal basis of the master
LP, it can be easily restored just by dropping H1, ..., Hk, which are
stored separately from B0 = L0 * U0. However, the eta-file
factorization has bad numerical properties, so it is not implemented
in glpk. The factorization based on the Forrest-Tomlin update,
B = L * U (currently used by default), doesn't fit, because the
transformations are applied directly to L and U (starting from
B0 = L0 * U0), so it cannot (efficiently) recover the factorization
of B0.

To resolve this issue, some time ago I implemented another basis
factorization based on the Schur complement (please see the comments
in src/bflib/scf.h), where B0 = L0 * U0 is not changed, so B = B0 can
be easily restored as in the case of the eta file. Unlike the eta
file, however, the Schur-complement factorization is numerically
stable (even much more stable than Forrest-Tomlin). Thus, for the
procedure above, we can compute the factorization B0 = L0 * U0 only
once, for the optimal basis of the master LP, and then restore it
every time before solving the next auxiliary LP. This approach,
however, limits the number of simplex iterations (say, to 100-200),
since we must stop as soon as a refactorization would be needed in
order to keep B0 = L0 * U0 intact. On the other hand, neither strong
branching nor pseudo-cost initialization requires optimal objective
values; suboptimal values are sufficient.
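
(As an aside, and not the mechanism described above: the
Schur-complement based factorization can already be selected through
the public API, roughly as sketched below; the function name
use_schur_factorization is just for this example, and the restore-B0
trick itself is internal and has no API.)

   #include <glpk.h>

   /* switch an existing problem object to the Schur-complement
      factorization (GLP_BF_BG = Bartels-Golub update, GLP_BF_GR =
      Givens rotation update) instead of the default Forrest-Tomlin */
   static void use_schur_factorization(glp_prob *lp)
   {
       glp_bfcp bfcp;
       glp_get_bfcp(lp, &bfcp);
       bfcp.type = GLP_BF_BG;
       glp_set_bfcp(lp, &bfcp);
       glp_factorize(lp);  /* rebuild the factorization of the
                              current basis with the new settings
                              (return code check omitted) */
   }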


Best regards,

Andrew Makhorin




