[Ipopt] IPOPT Warmstart

"Michael Gißler" michael.gissler at gmx.net
Mon Dec 19 09:02:37 EST 2011


Hi Andreas,

thanks for answering so fast! :-)

Yes, that's exactly what I want to do, by saving the vectors:

* design variables x[n]
* slack variables s[m_ieq]
* lower & upper bound multipliers z_L[n] and z_U[n]
* Lagrange multipliers y_c[m_eq] and y_d[m_ieq]
* multipliers for inequality constraints v_L[m_ieq] and v_U[m_ieq]

and feeding them back in after each iteration as the starting point.
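For reference, the bookkeeping for these vectors could look like the following self-contained sketch (plain structs and std::vector, not the actual Ipopt API; the names simply mirror the list above):

```cpp
#include <vector>

// One complete interior-point iterate, using the names from the list above.
struct IpoptIterate {
    std::vector<double> x;        // design variables, size n
    std::vector<double> s;        // slack variables, size m_ieq
    std::vector<double> z_L, z_U; // lower/upper bound multipliers, size n each
    std::vector<double> y_c;      // equality-constraint multipliers, size m_eq
    std::vector<double> y_d;      // inequality-constraint multipliers, size m_ieq
    std::vector<double> v_L, v_U; // inequality (slack) bound multipliers, size m_ieq each
};

// Snapshot the current iterate so all solver objects can be destroyed;
// only this plain-data copy stays in memory between outer iterations.
inline IpoptIterate save_iterate(const IpoptIterate& current) {
    return current; // std::vector copies are deep
}
```

Restoring then just means handing these vectors back as the starting point of the next solver run.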

It's a little hard to explain why we can't run IPOPT the way it is normally used. The problem is not too little memory as such, but rather trouble with keeping the IPOPT objects in memory across all iterates.

The easiest way to integrate IPOPT would be a working warmstart (which I think would be a nice feature in general). So, in your estimation: is there a realistic chance for me to figure out all the information that has to be saved for a complete warmstart, or not?

If yes: how should I approach it?

I know these are difficult questions; I would just like to get an estimate from you.

If someone could give me a list of all the information that has to be stored between two iterations, I would give implementing such a complete warmstart option a try.
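For what it's worth, the warm-start options I am aware of could be collected in an `ipopt.opt` file roughly like this (a sketch based on my reading of the Ipopt options documentation; please double-check the names and values):

```
# Use the user-supplied x, z_L, z_U and lambda as the starting point
warm_start_init_point yes

# Keep the supplied point close to the bounds instead of pushing it
# far into the interior (useful when restarting near a solution)
warm_start_bound_push 1e-9
warm_start_mult_bound_push 1e-9
```

These only restore the iterate itself, not internal state such as the L-BFGS approximation or the line-search filter.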


Best regards,

Michael


-------- Original Message --------
> Date: Fri, 16 Dec 2011 13:38:46 -0600
> From: Andreas Waechter <awaechter.iems at gmail.com>
> To: ipopt at list.coin-or.org
> Subject: Re: [Ipopt] IPOPT Warmstart

> Hi Michael,
> 
> If I understand correctly, you want to restore the entire Ipopt state 
> after each iteration, but delete all Ipopt information in between.  This 
> will not work, because the state of Ipopt is defined by more than the 
> values of the iterates (such as your L-BFGS approximation, the filter in 
> the line search, the barrier parameter etc).  It would be very 
> complicated to figure out what all the relevant data is, store it, and 
> then restore it.
> 
> For most problems, the main memory requirements in Ipopt will be in the 
> linear solver.  If you are using MA27, you might be able to reduce 
> memory requirement by setting "ma27_liw_init_factor" and 
> "ma27_la_init_factor" to smaller values.  For Mumps, try a smaller value 
> for "mumps_mem_percent".  Or try a different linear solver, such as
> Pardiso.
> 
> Or simply buy more memory for your computer :)
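Collected into an `ipopt.opt` file, these suggestions would look roughly like this (a sketch; the factors and percentages are illustrative, and the solver-specific options only matter for the solver actually selected):

```
# MA27: shrink the initial integer/real workspace estimates
# (defaults are larger; smaller factors reduce up-front allocation)
ma27_liw_init_factor 2.0
ma27_la_init_factor 2.0

# MUMPS: allowed extra memory, in percent of the internal estimate
mumps_mem_percent 500

# Or switch the linear solver altogether
linear_solver pardiso
```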
> 
> Hope this helps,
> 
> Andreas
> 
> On 12/15/2011 04:53 AM, "Michael Gißler" wrote:
> > Hello everyone,
> >
> > For a student research project I am trying to integrate IPOPT into a
> > structural topology optimization software. For my needs I have to shut
> > down IPOPT after running one iteration to free memory. So what I am
> > trying to do is warmstart each iteration with the information stored
> > from the previous iteration.
> >
> > Until now I have a version which:
> > * shuts down after one iteration (max_iter=1)
> > * saves the design variable values
> > * runs again with the values from the former iteration
> > * furthermore: runs a loop in the main function of mynlp in which I
> > generate new objects for each iteration
> >
> > I know there is still a lot of information missing to get good results,
> > but before I go on I have some questions about that:
> >
> > In IPOPT there are two ways to restore a former iteration:
> >
> > One is to store the values of the TNLP problem formulation (x, z_L,
> > z_U, lambda), and one is to store the NLP information (x, s, y_c, y_d,
> > z_U, z_L, v_U, v_L).
> >
> > Furthermore, each TNLP problem definition internally gets "translated"
> > into an NLP formulation by the class TNLPAdapter. The TNLP formulation
> > is just the more common way to input a problem definition.
> >
> > Q1: Because of that, I assume I get better warmstart conditions by
> > saving the information of the NLP formulation (which IPOPT uses
> > internally)? Or is there no difference?
> >
> > There is a way to input the NLP information: setting
> > "warmstart_entire_iterate" to "yes" and using get_warmstart_iterate().
> >
> > Q2: Is there also a function to output this vector? (In my case after
> running one iteration.)
> >
> > Last but not least a question on another topic:
> >
> > Turning "warm_start_init_point" to "yes" should result in using initial
> > values for x, z_L, z_U and lambda. But in the first iteration init_z and
> > init_lambda are zero (false), so the assert commands in
> > get_starting_point() kill the program. By deleting the assert commands,
> > it seems the values I enter for these vectors really get used inside
> > (changing them results in different convergence).
> >
> > Q3: Can someone tell me what is happening inside there? I just know
> > from debugging that get_starting_point() gets called twice in one
> > iteration, and the second time it is called, init_z and init_lambda are
> > 1 (true).
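A defensive variant of the callback could replace the asserts with checks of the init flags, along these lines (a standalone sketch using plain types instead of Ipopt's Index/Number typedefs; the signature is paraphrased from TNLP::get_starting_point, and the stored vectors are hypothetical globals):

```cpp
#include <cstring>
#include <vector>

// Solution saved from the previous outer iteration (hypothetical globals).
static std::vector<double> prev_x      = {0.5, 0.5};
static std::vector<double> prev_zL     = {1.0, 1.0};
static std::vector<double> prev_zU     = {1.0, 1.0};
static std::vector<double> prev_lambda = {0.0};

// Mirrors TNLP::get_starting_point: fill only the arrays Ipopt asks for,
// instead of assert(init_z == false) / assert(init_lambda == false).
bool get_starting_point(int n, bool init_x, double* x,
                        bool init_z, double* z_L, double* z_U,
                        int m, bool init_lambda, double* lambda) {
    if (init_x)
        std::memcpy(x, prev_x.data(), n * sizeof(double));
    if (init_z) {  // only requested on some calls, per the observation above
        std::memcpy(z_L, prev_zL.data(), n * sizeof(double));
        std::memcpy(z_U, prev_zU.data(), n * sizeof(double));
    }
    if (init_lambda)
        std::memcpy(lambda, prev_lambda.data(), m * sizeof(double));
    return true;
}
```

This way the callback works regardless of which subset of initial values Ipopt requests in a given call.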
> >
> > After my work, the whole software should work like this:
> >
> > Loop:
> > * FE-problem -> solve to store gradients, objective function value and
> > so on (there is no second-order information! -> "hessian_approximation"
> > -> "limited-memory")
> > * Running IPOPT for one iteration
> >   ** Input the information of the FE Solver and IPOPT's former solution
> >   ** Output the information of IPOPT and store them (with new design
> variable values a new FE-problem is generated)
> > * Termination criteria are checked by the existing software (not by IPOPT)
> >
> >
> > Thanks in advance,
> >
> > Michael
> 
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt


