[Ipopt] Some suggestions to speed up code sought

Stefan Vigerske stefan at math.hu-berlin.de
Wed Nov 11 12:14:20 EST 2015


Hi,

On 11/11/2015 04:52 AM, jmogali at andrew.cmu.edu wrote:
> Hi,
>       @Stefan -: Thanks for letting me know about the ReOptimizeTNLP()
> option. Also, the warm start has significantly improved the timings.
>
> However, I observe that the Hessian is being recalculated when
> ReOptimizeTNLP() executes. I know that this has been included for
> generality. However, I would like to exploit the fact that, throughout the
> lifespan of my program, the Hessian I provide remains constant. Is there
> any way I can make IPOPT exploit this fact?

Even if you set the hessian_constant option mentioned below?
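
In case it is useful, here is a minimal sketch of how the options and the
re-optimization loop fit together through the C++ interface. MyTNLP and
num_resolves are placeholders for your own TNLP subclass and outer loop
count; adapt as needed:

    #include "IpIpoptApplication.hpp"
    using namespace Ipopt;

    int main()
    {
       SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
       // Tell Ipopt that the constraint Jacobians and the Hessian never
       // change, so their values are requested only once.
       app->Options()->SetStringValue("jac_c_constant",  "yes");
       app->Options()->SetStringValue("jac_d_constant",  "yes");
       app->Options()->SetStringValue("hessian_constant", "yes");
       app->Options()->SetStringValue("warm_start_init_point", "yes");
       app->Initialize();

       SmartPtr<TNLP> mynlp = new MyTNLP();   // placeholder for your TNLP subclass
       ApplicationReturnStatus status = app->OptimizeTNLP(mynlp);

       int num_resolves = 10;                 // however many outer iterations you run
       for (int k = 0; k < num_resolves; ++k) {
          // ... update the objective data held inside the TNLP object ...
          status = app->ReOptimizeTNLP(mynlp);  // reuses structure from the first solve
       }
       return (int) status;
    }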

> On a side note, the MA97 solver is being used to solve the KKT system, right?

Yes, I would think so.
To see what exactly is solved there, see (13) in the implementation 
paper (http://www.optimization-online.org/DB_HTML/2004/03/836.html).
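
From memory, the system factorized there is roughly the symmetric
"augmented" system (in LaTeX notation; please check the paper for the
exact definition of each symbol):

    \begin{pmatrix} W_k + \Sigma_k & \nabla c(x_k) \\
                    \nabla c(x_k)^T & 0 \end{pmatrix}
    \begin{pmatrix} d_x \\ d_\lambda \end{pmatrix}
    = - \begin{pmatrix} \nabla \varphi_\mu(x_k) + \nabla c(x_k)\,\lambda_k \\
                        c(x_k) \end{pmatrix}

where W_k is the Hessian of the Lagrangian, \Sigma_k the diagonal
primal-dual term coming from the bounds, and \nabla c(x_k) the constraint
Jacobian. Ipopt may add diagonal regularization before handing this matrix
to MA97; if it is still singular, you get errors like the -7 mentioned below.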

Stefan

>
> Thanks,
> Jayanth
>
>> Hi,
>>
>> if the structure of the Jacobian and Hessian does not change, have you
>> considered using ReOptimizeTNLP() instead of OptimizeTNLP()?
>> I haven't checked, but that will likely avoid the calls that query the
>> problem structure (n, m, sparsity pattern).
>>
>> There are also options to tell Ipopt that the Jacobian and the Hessian
>> are constant (jac_c_constant, jac_d_constant, hessian_constant):
>> http://www.coin-or.org/Ipopt/documentation/node43.html#SECTION000114070000000000000
>>
>> The TNLP is mostly implemented by you, while Ipopt only defines the
>> interface. From the Ipopt point of view, it should be safe to reuse the
>> TNLP object.
>>
>> MA97 error -7 means that the matrix passed to MA97 is singular. See
>> http://www.hsl.rl.ac.uk/specs/hsl_ma97.pdf for the full MA97
>> documentation.
>>
>> Stefan
>>
>> On 11/09/2015 03:35 AM, jmogali at andrew.cmu.edu wrote:
>>> Hi,
>>>        I would like some suggestions for speeding up my code. I run my
>>> code in a loop where, at each iteration, I call OptimizeTNLP(); however,
>>> I create a single TNLP object and call OptimizeTNLP() on it at each
>>> iteration. After every iteration, I change the objective function I am
>>> optimizing slightly.
>>>
>>> 1. The structure of the Hessian is constant across iterations. Is there
>>> a way to make IPOPT avoid calling eval_h with values==NULL at every
>>> iteration? On a similar note, my Jacobian is constant for all
>>> iterations; is there a way to make IPOPT avoid calling eval_jac_g and
>>> get_bounds_info at every iteration?
>>>
>>> Please note that in the above question, by iterations I am NOT referring
>>> to the internal iterations IPOPT performs within OptimizeTNLP().
>>>
>>> 2. Is it safe to reuse the TNLP object between iterations?
>>>
>>> 3. I sometimes get the message "Error return from ma97_factor. Error
>>> flag = -7"; does this mean the KKT matrix is singular?
>>>
>>> Thanks,
>>> Jayanth
>>>
>>
>>
>> --
>> http://www.gams.com/~stefan
>>
>
>


-- 
http://www.gams.com/~stefan

