# [Coin-ipopt] Re: IPOPT and optimal control problem

Andreas Waechter andreasw at watson.ibm.com
Fri Sep 5 17:56:55 EDT 2003

Dear Jens,

(I'm sending the message to the Ipopt mailing list, since this is the
kind of question that may be of interest to other users as well.)

You wrote:

> I have a few questions regarding IPOPT and its use as a solver for
> optimal control problems. In your thesis, as well as in the enclosed
> presentations given by Larry Biegler on p. 16 (Decomposition of
> Large-scale NLP), it is mentioned that the performance can be increased
> by specifying some variables as independent and the others as dependent.
> This is shown to increase the speed significantly.  How can this be
> implemented in the current version of IPOPT? I have seen in the
> DYNOPT.README file that the file constr.f does this, but for systems of
> DAEs solved by orthogonal collocation.
>
> Can it be done in some other way? For example, we can easily partition our
> optimal control problem according to the figure on p. 16, but how is that
> information provided to IPOPT?

The story with the dependent and independent variables is that this allows
one to compute the search directions for the nonlinear optimization
problem (which for an optimal control problem usually includes a
discretized version of the DAE system) in two steps (by solving two
linear systems) instead of one bigger one.  We call the first approach the
"reduced space approach" and the other one the "full-space approach".
You can find details on these approaches in Section 3.2 of my thesis.
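As a minimal sketch of the two approaches (in Python/NumPy, purely for illustration, not the Fortran of the IPOPT package), consider an equality-constrained quadratic model of the step computation: the full-space approach solves one (n+m)-dimensional KKT system, while the reduced-space approach partitions the constraint Jacobian into a square basis matrix C for the dependent variables and solves only an (n-m)-dimensional reduced system. The names C, N, and Z follow the thesis notation; all numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4                      # n variables, m equality constraints -> 2 degrees of freedom
W = rng.standard_normal((n, n))
W = W @ W.T + n * np.eye(n)      # symmetric positive definite model Hessian
A = rng.standard_normal((m, n))  # constraint Jacobian, assumed full row rank
g = rng.standard_normal(n)
c = rng.standard_normal(m)

# --- Full-space approach: one (n+m) x (n+m) KKT system ---
K = np.block([[W, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-g, -c]))
dx_full = sol[:n]

# --- Reduced-space approach: partition x = (dependent, independent) ---
C, N = A[:, :m], A[:, m:]                               # C: m x m basis matrix
Z = np.vstack([-np.linalg.solve(C, N), np.eye(n - m)])  # null-space basis, A @ Z = 0
v = np.concatenate([-np.linalg.solve(C, c),
                    np.zeros(n - m)])                   # particular solution, A @ v = -c
# The reduced system is only (n-m) x (n-m):
p = np.linalg.solve(Z.T @ W @ Z, -Z.T @ (g + W @ v))
dx_red = v + Z @ p

print(np.allclose(dx_full, dx_red))  # True: both yield the same search direction
```

Both routes produce the same step; the reduced-space route pays off when n-m is small and systems with C are cheap to solve.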

Which approach is more efficient depends on the particular properties of
the problem that you are solving, in particular on the number of degrees
of freedom (i.e. n-m, where n is the number of variables and m is the
number of equality constraints, assuming that, in the problem statement
that Ipopt accepts, all inequality constraints except for bounds have been
reformulated into equality constraints with slacks).
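As a concrete example of this counting (the problem sizes below are purely illustrative), a general inequality g(x) <= 0 becomes g(x) + s = 0 with a bounded slack s >= 0, so each such inequality adds one variable and turns into one equality:

```python
# Toy problem: x in R^3, subject to
#   h(x) = 0      (1 equality constraint)
#   g(x) <= 0     (1 general inequality)
#   x >= 0        (bounds -- these stay as bounds and do not count)
n_orig, m_eq, m_ineq = 3, 1, 1

# After the reformulation g(x) + s = 0, s >= 0:
n = n_orig + m_ineq   # each slack is an extra variable
m = m_eq + m_ineq     # each general inequality becomes an equality
dof = n - m           # degrees of freedom relevant for the reduced space
print(n, m, dof)      # 4 2 2
```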

The reduced space approach can be more efficient than the full space
approach if the number of degrees of freedom is small (e.g. in the optimal
control problems that we considered), and if one can solve the linear
system involving the basis matrix "C" (where C is defined in Eqn. (2.27)
in my thesis) efficiently.  For the dynamic optimization decomposition
(which is implemented in the IPOPT package as DYNOPT), we exploit the
almost block-diagonal structure of the basis matrix C; for this we
replace the "general-purpose" version of the file constr.f by an
appropriate one.

The variable partition for the reduced space approach can also be helpful
if no second derivatives are available, and one wants to use a
reduced-space quasi-Newton method in order to approximate the missing
information (IPOPT options IQUASI=-5,-4,-3,-2,-1,1,2,3,4,5) instead of a
limited memory BFGS approximation (IQUASI=-6,6) for the full space option.
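The attraction of the reduced-space quasi-Newton options is that the approximated matrix is only (n-m) x (n-m), so a dense update is affordable when the degrees of freedom are few. A generic BFGS update of such a reduced Hessian approximation (a textbook sketch, not IPOPT's actual implementation of the IQUASI options) looks like:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the (small) reduced Hessian approximation B.
    s: step in the independent variables, y: corresponding gradient difference.
    Assumes the curvature condition y @ s > 0 holds."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

dof = 2                              # only a dof x dof matrix is stored
B = np.eye(dof)
s = np.array([0.3, -0.1])
y = np.array([0.7, 0.2])             # y @ s = 0.19 > 0, so B stays positive definite
B = bfgs_update(B, s, y)

print(np.allclose(B @ s, y))         # True: the update enforces the secant condition
```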

Note that the input to the DYNOPT version is essentially only the DAE
system, and the DYNOPT package does the discretization for you (as you
said, using orthogonal collocation).  IPOPT itself is only the nonlinear
optimizer; if you want to solve an optimal control problem with Ipopt
without using orthogonal collocation, you will have to either pass the large
discretized ODE system as constraints directly to Ipopt (via eval_c and
eval_a), or add something to the DYNOPT part of the package.
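To illustrate what "passing the discretized ODE system as constraints" amounts to, here is a sketch in Python (eval_c and eval_a are the Fortran interface routines; the dynamics f, step size h, and horizon K below are invented for the example). An implicit-Euler discretization of dx/dt = f(x, u) yields one equality constraint per step, and an eval_c-style routine would return exactly these residuals:

```python
import numpy as np

# Discretize dx/dt = f(x, u) into equality constraints
#   c_k = x_{k+1} - x_k - h * f(x_{k+1}, u_k) = 0,   k = 0, ..., K-1

def f(x, u):
    return -2.0 * x + u              # toy scalar dynamics (illustrative only)

h, K = 0.1, 10
u = np.ones(K)                       # fixed controls for the illustration

# Simulate with implicit Euler so we have a feasible trajectory in hand:
x = np.empty(K + 1)
x[0] = 1.0
for k in range(K):
    # implicit step for this linear f: x_{k+1} = (x_k + h*u_k) / (1 + 2h)
    x[k + 1] = (x[k] + h * u[k]) / (1.0 + 2.0 * h)

# The constraint residuals an eval_c-style routine would return:
c = np.array([x[k + 1] - x[k] - h * f(x[k + 1], u[k]) for k in range(K)])
print(np.allclose(c, 0.0))           # True: the simulated trajectory is feasible
```

The Jacobian of these constraints with respect to all the x_k and u_k is what an eval_a-style routine would supply, and its near block-diagonal sparsity is precisely the structure discussed above.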

I'm not sure which option you are interested in.  To answer your
question on how to pass the variable partition information to Ipopt, let me
assume that you do not want to use the discretization implemented in
DYNOPT and instead provide your discretized version of the DAE system to
IPOPT as constraints.  In that case, the parameter ISELBAS (see the
README.IPOPT file) determines how the dependent and independent variables
are to be chosen.  In particular, you can list the numbers of the
(in)dependent variables in a file, or choose the first variables as the
(in)dependent ones.

I hope this helps.  Please let me know if this is not clear, or if I didn't