[Ipopt] Lagrange multipliers for inequality (<=) constraints

Stefan Vigerske stefan at math.hu-berlin.de
Mon Nov 7 10:15:44 EST 2016


Hi,

yes, internally Ipopt reformulates your inequalities as equalities with 
slack variables, and ip_data is internal data, so it refers to that 
internal representation of your problem.
"lambda", however, refers to the original problem with inequality constraints.

Stefan

On 11/07/2016 04:10 PM, Chunhua Men wrote:
> Hi Stefan,
>
> Thanks for your response. I think I'm still missing something... I asked a
> question several months ago: all my constraints were inequalities (<=) and I
> wondered why I got negative "lambda" values in "finalize_solution". I was
> then told that IPOPT treats all constraints as equalities with slacks...
> However, in this email, I am told that ip_data refers to the equality
> constraints (+ slack variables), while "lambda" refers to my
> original inequalities (<=)?
>
> Thanks again, Chunhua
>
>
>
> On Mon, Nov 7, 2016 at 6:01 AM, Stefan Vigerske <stefan at math.hu-berlin.de>
> wrote:
>
>> Hi,
>>
>> you should read lambda directly.
>> ip_data should refer to the internal representation of your problem, where
>> inequality constraints have been reformulated to equality constraints (+
>> slack variables), so it might not be obvious what y_c() or y_d() mean.
>> You can have a look at the documentation of intermediate_callback to get
>> some idea of how to bring y_c() and y_d() into the TNLP space:
>> http://www.coin-or.org/Ipopt/documentation/node23.html#SECTION00053410000000000000
>>
>> Hope that helps,
>> Stefan
>>
>>
>> On 10/28/2016 12:44 AM, Chunhua Men wrote:
>>
>>> Hello,
>>>
>>> I have a convex model with some inequality (<=) nonlinear constraints. At
>>> the end of the optimization, I wanted to get the Lagrange multipliers for
>>> each constraint. I found that there were 2 ways within
>>> "finalize_solution(...)" to get them:
>>> 1) read "lambda" directly;
>>> 2) get them from ip_data->curr()->y_d(), which required a cast -
>>> what I did was "static_cast<const
>>> Ipopt::DenseVector*>(GetRawPtr(ip_data->curr()->y_c()))".
>>>
>>> However, these 2 methods did not give me exactly the same results. In
>>> my case, there were 6 constraints: 3 of the multipliers matched, and 3
>>> did not.
>>>
>>> Did I do anything wrong, and what is the best way to get the Lagrange
>>> multipliers? BTW, I am using "limited-memory" as
>>> "hessian_approximation",
>>> so I could not get the Lagrange multipliers from "eval_h".
>>>
>>> Thanks! Chunhua
>>>
>>>
>>>
>>> _______________________________________________
>>> Ipopt mailing list
>>> Ipopt at list.coin-or.org
>>> http://list.coin-or.org/mailman/listinfo/ipopt
>>>
>>>
>>
>> --
>> http://www.gams.com/~stefan
>>
>


-- 
http://www.gams.com/~stefan

