[Ipopt] Modification of the example hs071_c: case without constraints

Tran Minh Tuan tmtuan at laas.fr
Wed Mar 25 09:44:18 EDT 2009


Hi Andreas,

Before, I used MUMPS with Ipopt-3.3.3 on Mac OS X and I got that result.

Now I use MA27 with Ipopt-3.5.0 on Linux Fedora and my program runs  
well. I also obtained the same result as yours for the hs071_c case  
without constraints.

Thanks for your suggestion, but I still have to resolve some problems  
on Mac :)

Minh

On 25 Mar 2009, at 12:01 AM, Andreas Waechter wrote:

> Hi Minh,
>
> I tried the modifications that you describe, used the option
>
> hessian_approximation limited-memory
>
> and the algorithm converged in 11 iterations to the solution you  
> describe. So it all looks fine.
>
> I assume there is something wrong in your local setup.  I suggest  
> you make sure that 'make test' works properly, and that the  
> examples as they are in the distribution work.
>
> If that still doesn't work, you can submit a ticket on the Ipopt  
> Trac page, including all relevant information: which operating system,  
> which compilers, the config.log file from the Ipopt subdirectory, some  
> code that replicates your issue, and the all.out output file  
> generated with
>
> file_print_level 7
> output_file all.out
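(For reference: Ipopt option names are lower-case, and options like these are typically placed in an ipopt.opt file in the solver's working directory, read automatically at startup. A minimal sketch:)

```text
# ipopt.opt -- read by Ipopt from the working directory at startup
file_print_level 7
output_file all.out
```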
>
> But I can't promise that by looking at this information I or  
> someone else can quickly help you.
>
> Andreas
>
>
> On Tue, 24 Mar 2009 tmtuan at laas.fr wrote:
>
>> Hi all again,
>>
>> Thinking about a simple example of unconstrained optimization  
>> with Ipopt, I modified the example hs071_c.c by setting m=0,  
>> nele_jac=0, and nele_hess=0. I also removed the part /* set the  
>> values of the constraint bounds */ and modified the constraint  
>> callbacks as follows:
>>
>> Bool eval_g(...)
>> {
>>  return TRUE;
>> }
>>
>> Bool eval_jac_g(...)
>> {
>>  return TRUE;
>> }
>>
>> Bool eval_h(...)
>> {
>>   return FALSE; // no exact Hessian; use the approximation set in the options
>> }
>>
>> In this case, the obvious optimum is x1 = x2 = x3 = x4 = 1 which  
>> gives 4
>> as optimal value for the objective function.
>>
>> However, I got this result:
>>
>> =====
>> ...
>> 533  4.0498668e+00 0.00e+00 5.13e+09  -1.7 1.39e+14 -20.0 2.81e-14 2.81e-16w  1
>> 534  4.0693547e+00 0.00e+00 2.04e+08  -1.7 4.51e+23 -20.0 1.68e-21 4.16e-23f 14
>> Warning: Cutting back alpha due to evaluation error
>> ...
>> Restoration phase is called at point that is almost feasible,
>>  with constraint violation 0.000000e+00. Abort.
>>
>> Number of Iterations....: 534
>>
>>                                   (scaled)                 (unscaled)
>> Objective...............:   4.0693546701813865e+00    4.0693546701813865e+00
>> Dual infeasibility......:   2.0382053748458204e+08    2.0382053748458204e+08
>> Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
>> Complementarity.........:   2.0581959617183902e+06    2.0581959617183902e+06
>> Overall NLP error.......:   7.9999998816327331e+02    2.0382053748458204e+08
>>
>>
>> Number of objective function evaluations             = 4138
>> Number of objective gradient evaluations             = 535
>> Number of equality constraint evaluations            = 0
>> Number of inequality constraint evaluations          = 0
>> Number of equality constraint Jacobian evaluations   = 0
>> Number of inequality constraint Jacobian evaluations = 0
>> Number of Lagrangian Hessian evaluations             = 0
>> Total CPU secs in IPOPT (w/o function evaluations)   =      0.968
>> Total CPU secs in NLP function evaluations           =      0.007
>>
>> EXIT: Restoration Failed!
>> =====
>>
>> After 534 iterations, the objective converged to about 4, but no explicit
>> optimal values x1..x4 were found.
>>
>> When I changed the initial point to x = (1, 2, 2, 1), I got this:
>>
>> =====
>>  84  5.3833428e+00 0.00e+00 4.25e+00  -1.3 6.72e+05 -20.0 2.22e-08 2.40e-08f  8
>>  85  4.9866527e+00 0.00e+00 8.89e-01  -1.3 9.29e-01 -20.0 1.00e+00 6.54e-02f  1
>> ERROR: Problem in step computation, but emergency mode cannot be activated.
>>
>> Number of Iterations....: 85
>>
>>                                   (scaled)                 (unscaled)
>> Objective...............:   4.9866526748311717e+00    4.9866526748311717e+00
>> Dual infeasibility......:   8.8935433599997316e-01    8.8935433599997316e-01
>> Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
>> Complementarity.........:   4.5557198993150250e-01    4.5557198993150250e-01
>> Overall NLP error.......:   8.8935433599997316e-01    8.8935433599997316e-01
>>
>>
>> Number of objective function evaluations             = 394
>> Number of objective gradient evaluations             = 86
>> Number of equality constraint evaluations            = 0
>> Number of inequality constraint evaluations          = 0
>> Number of equality constraint Jacobian evaluations   = 0
>> Number of inequality constraint Jacobian evaluations = 0
>> Number of Lagrangian Hessian evaluations             = 0
>> Total CPU secs in IPOPT (w/o function evaluations)   =      0.198
>> Total CPU secs in NLP function evaluations           =      0.001
>>
>> EXIT: Error in step computation (regularization becomes too large?)!
>> =====
>>
>> After 85 iterations, the program stopped at an objective value of about 5.
>>
>> So, could you please explain to me how to overcome this simple  
>> problem? How should one "design" an unconstrained optimization  
>> program that works?
>>
>> Thanks a lot,
>> Minh
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Ipopt mailing list
>> Ipopt at list.coin-or.org
>> http://list.coin-or.org/mailman/listinfo/ipopt
>>
