[Ipopt] large-scale quadratic optimization without constraints

Tran Minh Tuan tmtuan at laas.fr
Tue Mar 24 10:55:39 EDT 2009


Hi Andreas,

Thank you for your reply. In fact I used limited-memory as
hessian_approximation, because computing the Hessian by hand for
thousands of variables is hard.
I also felt that some wrong information was being passed somewhere. I set
the number of constraints m to zero, and the functions looked like this:

Bool eval_g(Index n, Number* x, Bool new_x,
             Index m, Number* g, UserDataPtr user_data)
{
   // m = 0: no constraint functions to evaluate
   return TRUE;
}

//---------------------------------------------------- Jacobian of constraint functions
Bool eval_jac_g(Index n, Number *x, Bool new_x,
                 Index m, Index nele_jac,
                 Index *iRow, Index *jCol, Number *values,
                 UserDataPtr user_data)
{
   // m = 0: no Jacobian structure or values to fill in
   return TRUE;
}

//---------------------------------------------------- Hessian of the Lagrangian
Bool eval_h(Index n, Number *x, Bool new_x, Number obj_factor,
             Index m, Number *lambda, Bool new_lambda,
             Index nele_hess, Index *iRow, Index *jCol,
             Number *values, UserDataPtr user_data)
{
   return FALSE; // use quasi-Newton approximation in options
}
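
(As a side note on your suggestion to pass the Hessian of the QP directly:
below is a rough sketch of what I understand eval_h would have to look like
for the constant Hessian of a quadratic objective. Here Q is a hypothetical
array holding my quadratic-term matrix, only the lower triangle is returned,
and nele_hess would have to be n*(n+1)/2 in CreateIpoptProblem; I would also
have to drop the limited-memory option. This is not my current code.)

//---------------------------------------------------- sketch: explicit constant Hessian
Bool eval_h_explicit(Index n, Number *x, Bool new_x, Number obj_factor,
                     Index m, Number *lambda, Bool new_lambda,
                     Index nele_hess, Index *iRow, Index *jCol,
                     Number *values, UserDataPtr user_data)
{
   Index idx = 0;
   Index i, j;
   if (values == NULL) {
      // first call: return the sparsity structure (lower triangle, 0-based indices)
      for (i = 0; i < n; i++)
         for (j = 0; j <= i; j++) {
            iRow[idx] = i;
            jCol[idx] = j;
            idx++;
         }
   }
   else {
      // later calls: return the values; with m = 0 there is no lambda contribution
      for (i = 0; i < n; i++)
         for (j = 0; j <= i; j++) {
            values[idx] = obj_factor * Q[i][j];   // Q is hypothetical problem data
            idx++;
         }
   }
   return TRUE;
}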

I also set:
      AddIpoptStrOption(nlp, "check_derivatives_for_naninf", "yes");

but nothing appeared in the output.
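
In case it helps, this is roughly how I create the problem and set the
options with the C interface (a sketch, not my exact code; n, x, x_L, x_U
and the eval_* callbacks come from my own code):

#include "IpStdCInterface.h"

// Sketch of my setup: m = 0, so there are no constraint bounds and
// nele_jac = 0; nele_hess = 0 because eval_h returns FALSE and the
// limited-memory approximation is used instead.
static Number solve_bound_constrained_qp(Index n, Number *x_L, Number *x_U, Number *x)
{
   Number obj_value = 0.0;

   IpoptProblem nlp = CreateIpoptProblem(n, x_L, x_U,
                                         0, NULL, NULL,   // m = 0: no constraints (a zero-length
                                                          // dummy array would also do for g_L/g_U)
                                         0, 0,            // nele_jac = 0, nele_hess = 0
                                         0,               // C-style (0-based) indexing
                                         &eval_f, &eval_g, &eval_grad_f,
                                         &eval_jac_g, &eval_h);

   AddIpoptStrOption(nlp, "hessian_approximation", "limited-memory");
   AddIpoptStrOption(nlp, "check_derivatives_for_naninf", "yes");

   // x holds the starting point on entry and the solution on return;
   // with m = 0 the g and multiplier outputs can be left as NULL.
   IpoptSolve(nlp, x, NULL, &obj_value, NULL, NULL, NULL, NULL);

   FreeIpoptProblem(nlp);
   return obj_value;
}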

Could you tell me how to set the parameters correctly? Or could you
point me to a simple example of this kind of optimization without
constraints?

Thanks again,

On 24 Mar 2009, at 3:33 PM, Andreas Waechter wrote:

> Hi,
>
> If you don't provide the Hessian for a nonlinear problem, you will  
> have to choose the quasi-Newton approximation option
>
> hessian_approximation limited-memory
>
> Maybe you are already doing this...?  What options are you setting?
>
> You say that you are solving a QP, so I assume that you already
> have the Hessian matrix somewhere.  Why do you not provide it to
> Ipopt? Convergence would probably be quite a bit faster, unless the
> Hessian is dense.
>
> The error you describe looks like there is some wrong information  
> provided to Ipopt (e.g., if NaN ends up in Hessian entries and you  
> didn't choose a Hessian approximation).  Did you run your code  
> through a memory checker (like valgrind)?
>
> Andreas
>
> On Tue, 24 Mar 2009, Tran Minh Tuan wrote:
>
>> Hi all,
>>
>> I am using Ipopt to solve a quadratic optimization problem without
>> constraints (but only bound constraints on variables).
>> In this case, the constraint number is set to zero, the gradient of
>> the objective function is computed but the Hessian is not.
>> So the result is like that all the time:
>>
>> ====
>> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
>>    0  2.0416444e+01 0.00e+00 5.95e+00   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
>>    1  1.6443076e+01 0.00e+00 1.21e+01  -6.2 5.95e+00  -4.0 1.00e+00 4.06e-01f  1
>> ERROR: Problem in step computation, but emergency mode cannot be activated.
>>
>> .....
>> Number of inequality constraint Jacobian evaluations = 0
>> Number of Lagrangian Hessian evaluations             = 0
>> Total CPU secs in IPOPT (w/o function evaluations)   =      0.009
>> Total CPU secs in NLP function evaluations           =      0.000
>>
>> EXIT: Error in step computation (regularization becomes too large?)!
>>
>>
>> Objective value
>> f(x*) = 1.644308e+01
>> ====
>>
>> I am wondering whether, in this kind of optimization, we MUST provide the
>> Hessian matrix, or is there something wrong somewhere?
>>
>> Your experience would help me much,
>> Thanks,
>>
