[Coin-ipopt] Re: Ipopt : Scaling factor

Andreas Waechter andreasw at watson.ibm.com
Sun Sep 25 16:23:43 EDT 2005


Hi Ullrich,

I'm copying my reply to the Ipopt mailing list - it would be nice if you
could send your questions directly to the mailing list next time...

> I'm using the FORTRAN implementation of Ipopt for solving small dynamic
> data reconciliation problems. I am a new user of this optimization
> algorithm.
>
> I need some advice on scaling my variables. Which of the following
> possibilities is best?
> • Should I scale my variables before giving them to Ipopt and unscale them
> when Ipopt returns them for the evaluation of the objective function,
> constraints…?
> • Should I use the Ipopt option “ISCALE (3)”?
> • Or can I use the option “ISCALE (5)” and only specify the scaling
> parameters for the variables, letting Ipopt choose the scaling parameters
> for the constraints? How can I do this?

Yes, scaling the optimization problem can make a big difference in terms
of how easy or difficult it is for the solver to find a solution.

In an earlier message to the mailing list I wrote down some general
thoughts on what I think a well-scaled problem is, see

http://list.coin-or.org/pipermail/coin-ipopt/2005-September/000343.html

Essentially, I believe that it makes sense to try to find scaling factors
for the variables and constraints, so that the sensitivities are on the
order of 1.

Among the alternatives that you describe, the first option (including the
scaling of the variables in your problem formulation) and the last option,
setting ISCALE to 5 (reading scaling parameters from a file), make the
algorithm see the same problem, so it is up to you to decide which is more
convenient for you.  However, if you are using AMPL, it is not clear in
which order the variables are given to Ipopt (and some might be eliminated
by AMPL's presolve), in which case the ISCALE=5 option is quite difficult
to use.  Also, if your problem formulation changes (e.g., the problem size
changes due to changes in the discretization), you would have to rewrite
the file every time.  But if you want to use the ISCALE=5 variant and only
rescale the variables, you can simply provide the scaling factor "1" for
the constraints, which leaves their scaling unchanged.

Option ISCALE=3 is more tricky; the idea is that the Harwell subroutine
MC19 (or whichever of the routines MC19, MC29, or MC30 is compiled into
your Ipopt library/executable) is used to try to find scaling values for
the variables and constraints so that the non-zero first partial
derivatives are of order 1.  But that is a heuristic that often doesn't do
a good job.  Since you probably have a good idea of how the scaling should
be chosen, I recommend that you scale your problem formulation directly.
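
As a sketch of what scaling the formulation directly could look like (this
is not Ipopt-specific code, and the magnitudes and model functions are made
up, just to show the bookkeeping):

  #include <cstddef>
  #include <vector>

  // made-up typical magnitudes for two variables and one constraint
  static const double x_scale[2] = { 1.0e5, 1.0e-3 };
  static const double g_scale[1] = { 1.0e3 };

  // original (unscaled) model functions - placeholders for this sketch
  double f_orig(const std::vector<double>& x) {
    return x[0] * x[1];
  }
  void g_orig(const std::vector<double>& x, std::vector<double>& g) {
    g[0] = x[0] * x[0] * x[1] - 1.0e3;
  }

  // what the optimizer sees: everything in scaled units of order 1
  double f_scaled(const std::vector<double>& xs) {
    std::vector<double> x(xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i)
      x[i] = x_scale[i] * xs[i];               // map scaled -> original variables
    return f_orig(x);                          // (the objective can be scaled, too)
  }
  void g_scaled(const std::vector<double>& xs, std::vector<double>& gs) {
    std::vector<double> x(xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i)
      x[i] = x_scale[i] * xs[i];
    std::vector<double> g(gs.size());
    g_orig(x, g);
    for (std::size_t j = 0; j < gs.size(); ++j)
      gs[j] = g[j] / g_scale[j];               // divide constraints by typical size
  }
  // The derivatives pick up the same factors by the chain rule,
  //   d g_scaled[j] / d xs[i] = (x_scale[i] / g_scale[j]) * d g[j] / d x[i],
  // and the bounds and starting point have to be divided by x_scale as well.

The same bookkeeping of course applies inside your Fortran evaluation
routines; the important point is that bounds, starting point, function
values and derivatives all live consistently in the scaled units.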

By the way, the new Ipopt version does not (yet?) have the option to
compute scaling factors automatically, or to read them from a file.

> I also tried the C++ version. But in my case it is very difficult to
> compute an analytic expression of the Hessian matrix. Should I try to
> estimate it by finite difference? Or does an option exist in the C++
> version to do it automatically? If not, how is it possible to use BFGS?
> How can I add BFGS? Or when do you think we will be able to have such an
> option in the C++ version?

We are planning to add limited-memory versions of BFGS and SR1 to the C++
version soon (I have to do that for an IBM project anyway), and I hope to
be able to do that before the year ends.
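
Just to illustrate the limited-memory idea (this is not the code that will
go into Ipopt, and an interior-point method approximates the Hessian of the
Lagrangian rather than its inverse, but the principle is the same): instead
of storing a dense matrix, one keeps the last m pairs s_k = x_{k+1} - x_k
and y_k = grad f(x_{k+1}) - grad f(x_k) and applies the approximation to a
vector on the fly, e.g. with the standard two-loop recursion:

  #include <cstddef>
  #include <deque>
  #include <vector>

  struct LbfgsPair {
    std::vector<double> s, y;   // s = step, y = gradient difference
    double rho;                 // rho = 1 / (y^T s)
  };

  static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += a[i] * b[i];
    return d;
  }

  // Returns H_k * q, where H_k is the limited-memory BFGS approximation of
  // the inverse Hessian built from the stored pairs (oldest first, newest last).
  std::vector<double> lbfgs_apply(const std::deque<LbfgsPair>& mem,
                                  std::vector<double> q) {
    const int m = static_cast<int>(mem.size());
    std::vector<double> alpha(m, 0.0);
    for (int i = m - 1; i >= 0; --i) {          // first loop: newest to oldest
      alpha[i] = mem[i].rho * dot(mem[i].s, q);
      for (std::size_t j = 0; j < q.size(); ++j) q[j] -= alpha[i] * mem[i].y[j];
    }
    double gamma = 1.0;                         // initial approximation H_0 = gamma * I
    if (m > 0)
      gamma = dot(mem[m-1].s, mem[m-1].y) / dot(mem[m-1].y, mem[m-1].y);
    for (std::size_t j = 0; j < q.size(); ++j) q[j] *= gamma;
    for (int i = 0; i < m; ++i) {               // second loop: oldest to newest
      double beta = mem[i].rho * dot(mem[i].y, q);
      for (std::size_t j = 0; j < q.size(); ++j) q[j] += (alpha[i] - beta) * mem[i].s[j];
    }
    return q;
  }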

Also, there are tools for automatic differentiation, such as ADOL-C, see

http://www.math.tu-dresden.de/~adol-c/

which can also compute second derivatives (and I believe also the non-zero
structure of the Hessian).  That might be an interesting alternative to
quasi-Newton methods.
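
From memory (please check the ADOL-C documentation - header names and
driver signatures may differ between versions), taping a function once and
asking for a dense Hessian looks roughly like this:

  #include <adolc/adolc.h>      // older releases use "adolc.h" instead

  int main() {
    const int n = 2;
    double x0[n] = { 1.0, 2.0 };

    // record the function evaluation on tape 1, using active adouble variables
    adouble ax[n], ay;
    double y;
    trace_on(1);
    for (int i = 0; i < n; ++i) ax[i] <<= x0[i];  // mark independent variables
    ay = ax[0] * ax[0] * ax[1] + sin(ax[1]);      // the function itself
    ay >>= y;                                     // mark the dependent variable
    trace_off();

    // ask the "hessian" driver for the second derivatives at x0
    double** H = myalloc2(n, n);
    hessian(1, n, x0, H);       // I believe only the lower triangle is filled
    myfree2(H);
    return 0;
  }

One would then call such a driver from the routine in which Ipopt asks for
the Hessian of the Lagrangian.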

I hope this helps,

Andreas




