[Ipopt] Achieving a small step size in the optimization

lorejam at liralab.it
Tue Nov 6 07:48:41 EST 2012


Dear all,
I have been using Ipopt for some years to compute the inverse
kinematics of humanoid robots. Because the system is redundant, the
inversion is formulated as an optimization problem.

I am now dealing with a specific problem in which I need to directly
control the step size of the line search during the optimization
(i.e., alpha_primal).
In particular, considering the update of the solution at the i-th
iteration, x_{i+1} = x_i + d_x, where d_x is the search direction
scaled by alpha_primal, I need d_x to stay limited (and, typically,
quite small). That is what I need, even at the cost of reducing the
convergence speed.
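The closest I have gotten so far is monitoring the step from the TNLP
side via intermediate_callback. A minimal sketch (the MAX_STEP
threshold and the class name are placeholders of mine, and all the
other TNLP methods are omitted):

#include "IpTNLP.hpp"

using namespace Ipopt;

class MyIKProblem : public TNLP {
public:
  // ... get_nlp_info, get_bounds_info, eval_f, eval_grad_f, etc. ...

  // Called by Ipopt once per iteration; alpha_pr is the primal step
  // size and d_norm the norm of the primal search direction.
  virtual bool intermediate_callback(
      AlgorithmMode mode, Index iter, Number obj_value,
      Number inf_pr, Number inf_du, Number mu, Number d_norm,
      Number regularization_size, Number alpha_du, Number alpha_pr,
      Index ls_trials, const IpoptData* ip_data,
      IpoptCalculatedQuantities* ip_cq)
  {
    const Number MAX_STEP = 1e-2;  // placeholder threshold of mine
    // alpha_pr * d_norm bounds the size of the actual step in x.
    // Returning false makes Ipopt stop with USER_REQUESTED_STOP,
    // so I can at least detect (though not shrink) a large step.
    return alpha_pr * d_norm <= MAX_STEP;
  }
};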

Does anyone have an idea of how to implement this behavior?

I am using a precompiled version of Ipopt (v. 3.7) for Windows, so I
would like to achieve this WITHOUT changing the Ipopt source code,
but only by manipulating the parameters or the interface to my TNLP.
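This is how I currently pass options to the precompiled library
through IpoptApplication, without touching the source; the options
shown are just examples of what I already set, none of which seems to
cap alpha_primal directly:

#include "IpIpoptApplication.hpp"

using namespace Ipopt;

int RunIK(SmartPtr<TNLP> nlp)  // nlp is my TNLP subclass from above
{
  SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

  // Options are set purely through the interface, no recompilation:
  app->Options()->SetNumericValue("tol", 1e-6);
  app->Options()->SetIntegerValue("max_iter", 500);
  app->Options()->SetStringValue("mu_strategy", "monotone");

  ApplicationReturnStatus status = app->Initialize();
  if (status != Solve_Succeeded) return (int) status;

  status = app->OptimizeTNLP(nlp);
  return (int) status;
}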

In fact, I tried to compile Ipopt for Windows myself but ran into
several issues, so I would like to keep using the precompiled
version, if possible.

Any suggestion would be highly appreciated!
Thank you!

Best regards,
Lorenzo


