[Ipopt] IPOPT Questions on DOF and feasibility
Uwe Nowak
uwe.nowak at itwm.fraunhofer.de
Sat May 21 05:25:31 EDT 2011
Hi!
I am not an expert on IPOPT, but I have been using it for some time.
As far as I know, IPOPT has no option to stay feasible, even when
started from a feasible point. In principle, the filter allows the
iterates to become infeasible if the objective improves.
In my setting it helped to set theta_max_fact to a smaller value,
e.g. 10.
From the documentation:
> theta_max_fact 0 < ( 10000) < +inf
> Determines upper bound for constraint violation in the filter.
> The algorithmic parameter theta_max is determined as theta_max_fact times
> the maximum of 1 and the constraint violation at initial point. Any
> point with a constraint violation larger than theta_max is unacceptable
> to the filter (see Eqn. (21) in the implementation paper).
It still sometimes converges to infeasible points, but less often.
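For reference, this is how I set it in the ipopt.opt option file (the
exact value is problem-dependent; 10 is just what worked for me):

```
# ipopt.opt -- tighten the filter's upper bound on constraint violation
# (theta_max = theta_max_fact * max(1, initial constraint violation))
theta_max_fact 10
```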
Best Regards,
Uwe Nowak
On 20.05.2011 21:34, David Veerasingam wrote:
> Hello all,
>
> I have a few questions about IPOPT's operation.
>
> 1) IPOPT sometimes exits when the number of constraints exceeds the number of
> variables (it complains that there are too few degrees of freedom). This
> sometimes happens in the final steps of a shrinking horizon MPC application
> (where variables at each instance are successively fixed as one marches
> toward the endpoint). GRG solvers like CONOPT however continue to chug on,
> even when there are limited degrees of freedom. I understand IPOPT's
> behavior arises because it drops fixed variables from the problem. Is there
> a setting to prevent IPOPT from dropping the fixed variables when solving an
> NLP?
>
> When IPOPT exits on a "too few degrees of freedom" error, I do kind of think
> it's the correct response from a mathematical point of view (the problem is
> probably incorrectly modeled in some sense). But it would be great if it
> could be made to proceed gracefully nonetheless, especially in control
> applications.
>
> 2) IPOPT sometimes converges to an infeasible point after a mere change in
> the objective function. I'm having trouble understanding why this can happen.
> For instance, I solve an NLP once with a setpoint tracking objective, and it
> converges to an optimal solution. Using that solution as an initial guess
> (the default behavior in AMPL), I change the objective function to an
> economic objective function and re-solve (everything else stays exactly the
> same), and it converges to an infeasible point. If I start from a feasible
> point, doesn't the filter method ensure I don't lose feasibility?
>
> Extra info: my ipopt.opt file is as follows.
>
> linear_solver ma27
> max_iter 12000
> print_level 4
> tol 1e-8
> acceptable_tol 1e-6
> expect_infeasible_problem yes
> linear_system_scaling mc19
> linear_scaling_on_demand no
> warm_start_init_point yes
> mu_strategy adaptive
>
> I'd be grateful if someone could shed some light on these issues. Thank you
> so much.
>
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt
--
Uwe Nowak
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik
Department of Optimization
Fraunhofer-Platz 1
D-67663 Kaiserslautern
Phone: +49(0)631/31600-4458
E-Mail: uwe.nowak at itwm.fraunhofer.de
Internet: www.itwm.fraunhofer.de