[ADOL-C] Again optimization using ipopt
Andrea Walther
andrea.walther at uni-paderborn.de
Thu Aug 25 09:46:06 EDT 2011
Hi,
>
> The needed files are here: http://codepad.org/UI0xBjUM
> http://codepad.org/6RirlnjY http://codepad.org/yxwpGmPm
Using these files I got the following output from Ipopt:
===================================================================================
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear
optimization.
Ipopt is released as open source code under the Eclipse Public License
(EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.9.3, running with linear solver ma27.
Starting derivative checker for first derivatives.
* jac_g [ 10, 2] = -7.0883496138621369e+04 v ~ -6.8156564172855970e+06 [ 9.896e-01]
* jac_g [ 11, 2] = -1.1325797422978580e+14 v ~ -1.7781953788100556e+14 [ 3.631e-01]
* jac_g [ 20, 19] = 1.0000000000000000e+00 v ~ 9.9892704893794593e-01 [ 1.073e-03]
* jac_g [ 22, 20] = 1.0000000000000000e+00 v ~ 1.0014528328256151e+00 [ 1.451e-03]
* jac_g [ 22, 21] = 1.0089848894572583e+01 v ~ 1.0090954957040028e+01 [ 1.096e-04]
* jac_g [ 24, 21] = 2.0000000000000000e+00 v ~ 1.9992654174662303e+00 [ 3.674e-04]
* jac_g [ 21, 27] = 1.0000000000000000e+00 v ~ 1.0045152424801469e+00 [ 4.495e-03]
* jac_g [ 21, 28] = 5.0449244472862915e+00 v ~ 5.0488452973638154e+00 [ 7.766e-04]
* jac_g [ 6, 35] = -1.0000000000000000e+00 v ~ -9.9413106066846668e-01 [ 5.869e-03]
* jac_g [ 20, 35] = -1.0000000000000000e+00 v ~ -1.0001928354286402e+00 [ 1.928e-04]
* jac_g [ 6, 36] = -8.8481721816691490e+00 v ~ -8.8455741151450766e+00 [ 2.937e-04]
* jac_g [ 7, 36] = -1.0000000000000000e+00 v ~ -9.9634213934326410e-01 [ 3.658e-03]
* jac_g [ 22, 36] = -1.0000000000000000e+00 v ~ -1.0008985820536753e+00 [ 8.978e-04]
* jac_g [ 6, 37] = -7.8290150956463791e+01 v ~ -7.8277498650931705e+01 [ 1.616e-04]
* jac_g [ 7, 37] = -1.7696344363338298e+01 v ~ -1.7704299125886404e+01 [ 4.493e-04]
* jac_g [ 11, 37] = 2.1999575568135035e-03 v ~ 2.3197456925951785e-03 [ 1.198e-04]
* jac_g [ 24, 37] = -2.0000000000000000e+00 v ~ -1.9996207870170439e+00 [ 1.896e-04]
* jac_g [ 11, 39] = 1.0334224950717197e+00 v ~ 1.0336708661232652e+00 [ 2.403e-04]
* jac_g [ 8, 43] = -1.0000000000000000e+00 v ~ -8.8394733632336475e-01 [ 1.161e-01]
* jac_g [ 10, 43] = -4.6895295850044880e-03 v ~ -4.1453015344754150e-03 [ 5.442e-04]
* jac_g [ 11, 43] = -7.4929528000541320e+06 v ~ -6.6233756705982545e+06 [ 1.313e-01]
* jac_g [ 21, 43] = -1.0000000000000000e+00 v ~ -1.0036485381171538e+00 [ 3.635e-03]
* jac_g [ 8, 44] = -8.8481721816691490e+00 v ~ -8.8455957176383428e+00 [ 2.913e-04]
* jac_g [ 9, 44] = -1.0000000000000000e+00 v ~ -9.8931004736744632e-01 [ 1.069e-02]
* jac_g [ 11, 44] = -6.6298936528688461e+07 v ~ -6.6279631423013784e+07 [ 2.913e-04]
* jac_g [ 23, 44] = -1.0000000000000000e+00 v ~ -1.0038587245346147e+00 [ 3.844e-03]
* jac_g [ 8, 45] = -7.8290150956463791e+01 v ~ -7.8259421567922075e+01 [ 3.927e-04]
* jac_g [ 9, 45] = -1.7696344363338298e+01 v ~ -1.7702630613190362e+01 [ 3.551e-04]
* jac_g [ 11, 45] = -5.8662440590700567e+08 v ~ -5.8639417315164995e+08 [ 3.926e-04]
* jac_g [ 10, 46] = -3.2485504883161909e+00 v ~ -3.2550068522349518e+00 [ 1.984e-03]
* jac_g [ 27, 46] = -6.0000000000000000e+00 v ~ -5.9984927739408924e+00 [ 2.513e-04]
* jac_g [ 10, 47] = -2.8743710574496390e+01 v ~ -2.9326953073826740e+01 [ 1.989e-02]
* jac_g [ 10, 48] = -2.5432902321419428e+02 v ~ -2.8060119938979096e+02 [ 9.363e-02]
* jac_g [ 10, 49] = -2.2503439235307897e+03 v ~ -7.7604348169291270e+03 [ 7.100e-01]
* jac_g [ 11, 49] = -3.5956249755456401e+12 v ~ -3.6005855319234385e+12 [ 1.378e-03]
* jac_g [ 10, 50] = -1.9911397963362237e+04 v ~ -1.8137935407645153e+05 [ 8.902e-01]
* jac_g [ 11, 50] = -3.1814708885976988e+13 v ~ -3.2166061710988578e+13 [ 1.092e-02]
Derivative checker detected 37 error(s).
======================================================================================
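Just for reference: output of this form appears when Ipopt's derivative checker is switched on, presumably via the derivative_test option in your driver. A minimal sketch of how this is typically done with the C++ interface is given below; the class MyADOLC_NLP and its header are only placeholders for whatever TNLP wrapper around the ADOL-C tapes your files use.

#include "IpIpoptApplication.hpp"
#include "MyADOLC_NLP.hpp"   // hypothetical: your TNLP that wraps the ADOL-C tapes

using namespace Ipopt;

int main()
{
  SmartPtr<TNLP> nlp = new MyADOLC_NLP();
  SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

  // Compare the AD first derivatives against finite differences at the
  // starting point and list all entries whose relative error exceeds the tolerance.
  app->Options()->SetStringValue("derivative_test", "first-order");
  app->Options()->SetNumericValue("derivative_test_tol", 1e-4);
  // Step size used for the finite-difference reference values.
  app->Options()->SetNumericValue("derivative_test_perturbation", 1e-8);

  if (app->Initialize() != Solve_Succeeded) return 1;
  return app->OptimizeTNLP(nlp) == Solve_Succeeded ? 0 : 1;
}
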
And the optimization seems to work:
=============================================================================================
....
Number of Iterations....: 46
(scaled) (unscaled)
Objective...............: 1.5000002944838144e+00 1.5000002944838144e+00
Dual infeasibility......: 1.3131589351800095e-06 1.3131589351800095e-06
Constraint violation....: 6.0512193158501759e-06 6.0512193158501759e-06
Complementarity.........: 1.1070224529562539e-07 1.1070224529562539e-07
Overall NLP error.......: 6.0512193158501759e-06 6.0512193158501759e-06
Number of objective function evaluations = 50
Number of objective gradient evaluations = 47
Number of equality constraint evaluations = 50
Number of inequality constraint evaluations = 50
Number of equality constraint Jacobian evaluations = 47
Number of inequality constraint Jacobian evaluations = 47
Number of Lagrangian Hessian evaluations = 0
Total CPU secs in IPOPT (w/o function evaluations) = 0.050
Total CPU secs in NLP function evaluations = 0.007
EXIT: Optimal Solution Found.
Solution of the optimization:
tf 0.5000 0.5000 0.5000
a 0 0.0000 7.0524 -1.4913
a 1 0.0000 38.1276 3.9746
a 2 0.0000 -50.6196 147.7050
a 3 -0.0000 -879.8557 -602.3558
a 4 121.3494 1860.5780 789.8496
a 5 -54.0932 -89.3045 -34.6974
a 6 699.5690 -1040.0536 -631.4350
a 7 -1250.8540 -283.5472 314.5087
a 8 -0.4000 159.4038 129.5313
a 9 0.0000 291.8155 -383.3504
a10 1369.6381 -592.9467 -330.4081
a11 -1330.3376 -482.1689 1512.4684
a12 -565.1536 525.0474 -256.4353
a13 378.2663 -94.2044 -768.5017
a14 522.5413 -202.8789 -421.0349
a15 -125.0485 771.3141 244.3483
======================================================================
Is this the function value that you expect?
And do you get the same behaviour?
If this is the case, then it is a little bit hard to judge the output.
Given that the derivatives are checked with finite differences, I found
the discrepancies not too bad. I do not know how Ipopt determines the
step size for the finite differences, but it can choose only one step
size for a whole column of the Jacobian, so some entries may be matched
almost perfectly while others would require a different step size.
Hence I would interpret the warnings of the derivative checker simply as
the usual discrepancy between the estimate (!) of the derivatives by
finite differences and the derivatives provided by AD.
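To make this point concrete, here is a small stand-alone sketch; the two test functions and the step sizes are chosen purely for illustration and have nothing to do with your constraints. The step size that gives an almost perfect match for one derivative is clearly not the best one for the other.

#include <cmath>
#include <cstdio>

// Relative error of a forward finite difference against a known derivative.
static double fd_rel_err(double (*f)(double), double dfdx, double x, double h)
{
  double fd = (f(x + h) - f(x)) / h;
  return std::fabs(fd - dfdx) / std::fabs(dfdx);
}

// Two toy "Jacobian entries": one mildly and one strongly nonlinear.
static double g_mild(double x)  { return std::sin(x); }          // derivative: cos(x)
static double g_stiff(double x) { return std::exp(200.0 * x); }  // derivative: 200 exp(200 x)

int main()
{
  const double x = 1.0;
  const double steps[] = {1e-6, 1e-8, 1e-10};
  for (double h : steps) {
    std::printf("h = %.0e   rel. err. sin: %.1e   rel. err. exp: %.1e\n", h,
                fd_rel_err(g_mild,  std::cos(x),                 x, h),
                fd_rel_err(g_stiff, 200.0 * std::exp(200.0 * x), x, h));
  }
  // The h that reproduces cos(x) almost exactly (around 1e-8) is too large
  // for the strongly nonlinear entry, and the h that suits the latter
  // (around 1e-10) already suffers from cancellation for sin(x). With one
  // perturbation per column, some entries therefore match the AD values
  // very well while others do not, even though the AD derivatives are exact.
  return 0;
}
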
There might be some systematic error in rows 10 and 11, but before
going more deeply into the code I would like to get some feedback from you.
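If it helps for the follow-up, one way to look at rows 10 and 11 in isolation would be to re-evaluate just these Jacobian entries with a central difference directly on the ADOL-C tape and compare them with the jacobian() driver. The sketch below is only an outline; the tape tag, the dimensions m and n, the point x and the index base of the flagged rows are all assumptions, not taken from your files.

#include <adolc/adolc.h>
#include <cmath>
#include <cstdio>

// Re-check a few suspicious Jacobian rows with a central difference,
// using the taped constraint evaluation itself as the reference function.
void recheck_rows(short tag, int m, int n, double* x)
{
  double** J  = myalloc2(m, n);      // ADOL-C helper for a dense m x n array
  double*  gp = myalloc1(m);
  double*  gm = myalloc1(m);

  jacobian(tag, m, n, x, J);         // AD Jacobian from the recorded tape

  const int rows[] = {10, 11};       // rows flagged by the checker (mind the index base)
  for (int j = 0; j < n; ++j) {
    double h  = 1e-6 * (1.0 + std::fabs(x[j]));  // per-column step size
    double xj = x[j];
    x[j] = xj + h;  function(tag, m, n, x, gp);  // taped constraint values
    x[j] = xj - h;  function(tag, m, n, x, gm);
    x[j] = xj;
    for (int i : rows) {
      double fd = (gp[i] - gm[i]) / (2.0 * h);
      std::printf("row %2d col %2d  AD % .6e  FD % .6e\n", i, j, J[i][j], fd);
    }
  }
  myfree2(J); myfree1(gp); myfree1(gm);
}
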
Best regards,
Andrea Walther
--
Prof. Dr. Andrea Walther
Lehrstuhl fuer Mathematik und ihre Anwendungen
Institut fuer Mathematik
Universitaet Paderborn
Warburger Str. 100
33098 Paderborn
Email: andrea.walther at uni-paderborn.de
Phone: ++49 5251 602721
Fax: ++49 5251 603728