[Ipopt] Unconstrained optimization problems

Andreas Waechter andreasw at watson.ibm.com
Fri Nov 20 14:22:27 EST 2009


Drosos,

> I have two questions which I could not find answered by reading
> Ipopt's manual.
>
> 1) I didn't see in the manual how one can run
> a simulation in the unconstrained case. Forgive
> me if I skipped it. Is there a specific interface
> for that class of problems, or should we still use TNLP?
> If not, would setting m = 0, nnz_jac_g = 0, and
> nnz_h_lag = 0 and returning false from eval_g
> and eval_jac_g do the job?

Yes, if the problem is unconstrained, there are no constraints (m=0) and 
no nonzeros in the Jacobian (nnz_jac_g=0).  However, the objective 
function might still have nonzero entries in the Hessian, so nnz_h_lag 
depends on your objective function (please check the documentation for the 
definition of the Lagrangian Hessian).  I would return true from eval_g 
and eval_jac_g, since they can correctly compute the values of the (zero) 
constraints, but it probably doesn't matter.
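
For concreteness, here is a minimal sketch of the relevant TNLP 
overrides for an unconstrained problem; the class name 
MyUnconstrainedNLP and the two-variable objective with a dense 
Hessian are just illustrative assumptions:

  // Two variables, no constraints; nnz_h_lag counts the lower
  // triangle of the (here dense 2x2) Hessian of the objective.
  bool MyUnconstrainedNLP::get_nlp_info(Index& n, Index& m,
                                        Index& nnz_jac_g,
                                        Index& nnz_h_lag,
                                        IndexStyleEnum& index_style)
  {
    n = 2;               // number of variables
    m = 0;               // no constraints
    nnz_jac_g = 0;       // empty constraint Jacobian
    nnz_h_lag = 3;       // lower triangle of a dense 2x2 Hessian
    index_style = TNLP::C_STYLE;
    return true;
  }

  bool MyUnconstrainedNLP::eval_g(Index n, const Number* x, bool new_x,
                                  Index m, Number* g)
  {
    return true;         // m == 0, so there is nothing to compute
  }

  bool MyUnconstrainedNLP::eval_jac_g(Index n, const Number* x,
                                      bool new_x, Index m, Index nele_jac,
                                      Index* iRow, Index* jCol,
                                      Number* values)
  {
    return true;         // empty Jacobian, nothing to fill in
  }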

> 2) This is what I am doing at the moment for the
> unconstrained case in the simple hs071 example.
> However, I see that eval_f is called 6 times before eval_grad_f
> is called. When I change the initial guess and bounds
> the number of objective function calls is reduced.
> Is this expected or is it because I am doing something wrong?
> I am asking because in the unconstrained case I would expect
> something like:
>
> do until convergence
> {
>  eval_grad_f
>  eval_f
>  check_convergence
> }

This loop is incomplete, since it omits the line search: before a step is 
accepted, the objective may be evaluated at several trial points, which is 
exactly why you can see eval_f called more often than eval_grad_f.  Please 
see the Ipopt implementation paper for a description of the algorithm.
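
To illustrate the pattern (this is a self-contained toy with a 
backtracking Armijo line search, not Ipopt's actual filter line-search 
algorithm): each iteration evaluates the gradient once, but the inner 
backtracking loop may evaluate the objective several times.

  #include <cstdio>
  #include <cmath>

  // Toy objective f(x) = (x - 3)^2 and its gradient.
  static double f(double x)      { return (x - 3.0) * (x - 3.0); }
  static double grad_f(double x) { return 2.0 * (x - 3.0); }

  int main()
  {
    double x = 0.0;
    while (std::fabs(grad_f(x)) > 1e-8) {
      double g = grad_f(x);   // one gradient evaluation per iteration
      double d = -g;          // steepest-descent direction
      double alpha = 1.0;
      // Backtracking line search: each trial step costs one objective
      // evaluation, so f is evaluated more often than grad_f.
      while (f(x + alpha * d) > f(x) + 1e-4 * alpha * g * d)
        alpha *= 0.5;
      x += alpha * d;
    }
    std::printf("solution: x = %g\n", x);
    return 0;
  }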

Also, Ipopt needs a different number of iterations for different problems 
and different starting points (like any NLP solver).  So, if you change 
the starting point or the bounds, different behavior of the algorithm is 
to be expected.

Andreas


