[Coin-ipopt] Problem with constraint scaling

Andreas Waechter andreasw at watson.ibm.com
Thu May 17 17:36:25 EDT 2007


Hi Fabian,

> thank you very much for your answer. It is very helpful to get support! :)

Well, sorry, this time I didn't answer so quickly - crawling my way 
through emails...

>> My guess is that what matters is the size of the elements in the
>> gradient - those values should typically be on the order of 1 to 10.
>> Does your scaling of 1e9 do that?  (The VALUE of the function does not
>> matter that much)
> Thanks for the hint. My objective is about 2 magnitudes larger than its
> gradient. I scale now to make the objective gradient +/- 0.

It doesn't matter what the VALUE of the objective function really is 
(although it is better if it is not larger than about 1e10, to avoid 
rounding issues).  The objective gradient, however, should be on the 
order of 1-10, NOT ZERO!  (Or was that a typo?)
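To make this concrete, here is a small sketch (plain Python, not part of Ipopt; the function name is made up for illustration) of picking an objective scaling factor from the gradient at the starting point:

```python
# Sketch (NOT Ipopt code): choose obj_scaling_factor so that the scaled
# objective gradient at the starting point has entries of order 1-10.
def suggest_obj_scaling(grad_f0):
    """Map the largest gradient entry at the starting point to ~10."""
    gmax = max(abs(g) for g in grad_f0)
    if gmax == 0.0:
        return 1.0  # degenerate case: nothing to scale against
    return 10.0 / gmax

# Gradient entries around 1e9 call for a factor around 1e-8:
grad_f0 = [2.0e9, -5.0e8, 1.1e9]
s = suggest_obj_scaling(grad_f0)     # roughly 5e-9
scaled = [s * g for g in grad_f0]    # entries now of order 1-10
```

This mirrors what Ipopt's gradient-based scaling does in spirit, but the exact rule Ipopt applies internally differs.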

>> ... iterations or so.  I suggest you run this problem again, but set the
>> tolerance (including the "acceptable" tolerance) so that Ipopt
>> continues further.  One way to do that is to set acceptable_iter to a
>> large integer (100000).
> To be honest, the stopping criteria was max_iter. I set it now to
> 0.05 and the results are quite good!!

Again, is there a typo?  max_iter is an integer option, so 0.05 doesn't 
make sense...?

>> However, you might want to consider doing the problem scaling directly
>> in your problem formulation, instead of using obj_scaling_factor.
> I actually just "found" get_scaling_parameters() and removed my
> own constraint scaling (value, gradient, boundary), and observed
> different behaviour than with get_scaling_parameters(). How can this be?
> What is the difference from putting it into the problem formulation?
>
> Isn't it the case that IPOPT then does some scaling on its own?

I don't understand exactly what you mean.  This is what Ipopt does, 
depending on your choice of the option nlp_scaling_method:

1. none: Ipopt uses no internal scaling and ignores whatever you could
    give it with get_scaling_parameters
2. user-scaling: Ipopt calls your get_scaling_parameters implementation
    and uses the scaling values you provide
3. gradient-based: Ipopt computes some scaling factors on its own, based
    on the gradients at the initial point.  It ignores
    get_scaling_parameters.

In addition to this, Ipopt also uses the value of the obj_scaling_factor 
option, and it does this ON TOP OF the scaling factor for the objective 
function determined by nlp_scaling_method.
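As a hedged illustration of how these two options combine, an ipopt.opt options file might look like this (the factor 1e-8 is a made-up value for this example, not a recommendation):

```
# nlp_scaling_method picks where the base scaling comes from:
#   none | user-scaling | gradient-based
nlp_scaling_method user-scaling

# obj_scaling_factor is applied ON TOP OF the factor above
obj_scaling_factor 1e-8
```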

> The original IPOPT scaling is not "strong" enough for my values,
> do you mean that IPOPT then "fine-scaled" my scaling within the problem?

I hope that my above explanation clarifies this question.

>> on? What happens if you set nlp_scaling_method=none?)
> Very bad - 2 iterations and IPOPT is done because the constraint holds
> exactly and the objective is many magnitudes smaller.

Are you saying that in this case Ipopt terminates with SUCCESS, providing 
a feasible point, and a small value of the objective function?

Was it that the scale of your constraints is extremely small (with values 
and derivatives on the order of something like 1e-6), so that they are 
almost numerically satisfied for any values of your variables? 
In that case, your constraints are badly underscaled and should be 
multiplied by something like 1e6 (preferably in your model) so that 
they also have numerical significance...(?)
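A minimal sketch of that multiplication (plain Python, names made up for illustration): the key point is that the SAME factor must be applied to the constraint value, its Jacobian row, and both bounds, or the scaled problem is no longer equivalent.

```python
# Sketch (NOT Ipopt code): rescale one underscaled constraint
# g_lb <= g(x) <= g_ub by a single factor, here 1e6, so that its
# values and derivatives become numerically significant.
SCALE = 1.0e6

def scaled_constraint(g_val, jac_row, lb, ub, scale=SCALE):
    """Apply one factor consistently to value, Jacobian row, and bounds."""
    return (scale * g_val,
            [scale * j for j in jac_row],
            scale * lb,
            scale * ub)

# Constraint data of order 1e-6 becomes order 1 after scaling:
g, jac, lb, ub = scaled_constraint(3.0e-6, [1.0e-6, -2.0e-6], 0.0, 5.0e-6)
```

Doing this in the model itself (rather than via get_scaling_parameters) has the advantage that every consumer of the model sees well-scaled data.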

> I played with gradient-based scaling options but found no way that IPOPT
> can do reasonable scaling automatically. That is not bad as I now learned
> what to adjust - thanks for the help.

Glad to hear it is already working better.

Regards,

Andreas
