[Ipopt] Low # iters, ensuring that solution remains feasible

Ipopt User ipoptuser at gmail.com
Thu Nov 13 13:25:43 EST 2014


To see where those 15 seconds are spent, look at the wall time in the
advanced timing statistics: 14.946 of the 15.734 seconds go to function
evaluations. (Note that the CPU columns only count the Ipopt process
itself, so work done in forked worker processes shows up in the wall
times but not in the CPU totals.) This seems normal for the quasi-Newton
method if you have only a few constraints.

Why does the optimization terminate early?

> On Thu, Nov 13, 2014 at 12:45 PM, John Schulman
> <john.d.schulman at gmail.com> wrote:
>
>> I haven't had much luck yet setting the initial Lagrange multipliers.
>> One hack that helps a bit is to redefine the objective function so that
>> it has value +inf when the constraint violation is greater than some
>> threshold.
>> Of course, this severely impedes the progress of the optimizer and causes
>> it to emit a lot of warnings.
>> But maybe rescaling or reshaping the objective function is the way to go
>> here.
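>>
>> A minimal sketch of that hack (f, g, and delta stand for my objective,
>> constraint, and threshold; EPS is a placeholder tolerance):
>>
>>     import numpy as np
>>
>>     # Stand-ins for the real problem functions.
>>     f = lambda x: np.sum(x ** 2)
>>     g = lambda x: np.sum((x - 1.0) ** 2)
>>     delta, EPS = 0.1, 1e-4
>>
>>     def penalized_objective(x):
>>         # Return +inf once the violation passes the threshold; the
>>         # line search then cuts back the step as if the evaluation
>>         # had failed, emitting a warning each time.
>>         if g(x) - delta > EPS:
>>             return float("inf")
>>         return f(x)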
>>
>> Yup, the problem is 50000-dimensional, and there's no band-diagonal
>> structure or anything. I think the main bottleneck should be function and
>> gradient evaluations, not the linear algebra. I looked at the timing stats
>> and something is a bit off: the optimization takes about 15 seconds, but
>> the stats only account for about 1 second. Maybe that's because I'm using
>> fork-based parallelism (via Python multiprocessing) to compute the
>> gradients? Anyway, if it's indeed true that the linear algebra takes 0.5
>> seconds or so, then that's insignificant.
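>>
>> For reference, the parallel gradient evaluation is roughly shaped like
>> this (a sketch: chunk_grad stands in for my actual per-chunk code):
>>
>>     import numpy as np
>>     from multiprocessing import Pool
>>
>>     NUM_WORKERS = 8
>>
>>     def chunk_grad(args):
>>         # Stand-in for the real partial-gradient computation over
>>         # one slice of the 50000-dim vector.
>>         x, idx = args
>>         return np.zeros(len(idx))
>>
>>     pool = Pool(NUM_WORKERS)  # fork-based workers (POSIX)
>>
>>     def eval_grad_f(x):
>>         # Split the coordinates into chunks, evaluate each chunk's
>>         # partial gradient in a forked worker, then stitch together.
>>         chunks = np.array_split(np.arange(x.size), NUM_WORKERS)
>>         parts = pool.map(chunk_grad, [(x, idx) for idx in chunks])
>>         return np.concatenate(parts)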
>>
>>
>>
>> Number of objective function evaluations             = 70
>> Number of objective gradient evaluations             = 51
>> Number of equality constraint evaluations            = 0
>> Number of inequality constraint evaluations          = 70
>> Number of equality constraint Jacobian evaluations   = 0
>> Number of inequality constraint Jacobian evaluations = 51
>> Number of Lagrangian Hessian evaluations             = 0
>> Total CPU secs in IPOPT (w/o function evaluations)   =      0.591
>> Total CPU secs in NLP function evaluations           =      0.439
>>
>>
>> Timing Statistics:
>>
>> OverallAlgorithm....................:      1.030 (sys:      0.064 wall:     15.734)
>>  PrintProblemStatistics.............:      0.000 (sys:      0.000 wall:      0.000)
>>  InitializeIterates.................:      0.008 (sys:      0.001 wall:      0.222)
>>  UpdateHessian......................:      0.051 (sys:      0.002 wall:      0.055)
>>  OutputIteration....................:      0.001 (sys:      0.001 wall:      0.002)
>>  UpdateBarrierParameter.............:      0.353 (sys:      0.022 wall:      0.378)
>>  ComputeSearchDirection.............:      0.133 (sys:      0.005 wall:      0.137)
>>  ComputeAcceptableTrialPoint........:      0.176 (sys:      0.017 wall:      7.201)
>>  AcceptTrialPoint...................:      0.004 (sys:      0.000 wall:      0.004)
>>  CheckConvergence...................:      0.304 (sys:      0.015 wall:      7.735)
>> PDSystemSolverTotal.................:      0.436 (sys:      0.020 wall:      0.458)
>>  PDSystemSolverSolveOnce............:      0.403 (sys:      0.019 wall:      0.422)
>>  ComputeResiduals...................:      0.026 (sys:      0.001 wall:      0.027)
>>  StdAugSystemSolverMultiSolve.......:      0.187 (sys:      0.009 wall:      0.197)
>>  LinearSystemScaling................:      0.000 (sys:      0.000 wall:      0.000)
>>  LinearSystemSymbolicFactorization..:      0.000 (sys:      0.000 wall:      0.001)
>>  LinearSystemFactorization..........:      0.030 (sys:      0.000 wall:      0.031)
>>  LinearSystemBackSolve..............:      0.133 (sys:      0.000 wall:      0.133)
>>  LinearSystemStructureConverter.....:      0.000 (sys:      0.000 wall:      0.000)
>>   LinearSystemStructureConverterInit:      0.000 (sys:      0.000 wall:      0.000)
>> QualityFunctionSearch...............:      0.049 (sys:      0.005 wall:      0.054)
>> TryCorrector........................:      0.000 (sys:      0.000 wall:      0.000)
>> Task1...............................:      0.014 (sys:      0.002 wall:      0.017)
>> Task2...............................:      0.028 (sys:      0.000 wall:      0.028)
>> Task3...............................:      0.002 (sys:      0.000 wall:      0.003)
>> Task4...............................:      0.000 (sys:      0.000 wall:      0.000)
>> Task5...............................:      0.002 (sys:      0.000 wall:      0.003)
>> Function Evaluations................:      0.439 (sys:      0.031 wall:     14.946)
>>  Objective function.................:      0.069 (sys:      0.007 wall:      3.589)
>>  Objective function gradient........:      0.149 (sys:      0.008 wall:      3.946)
>>  Equality constraints...............:      0.000 (sys:      0.000 wall:      0.000)
>>  Inequality constraints.............:      0.077 (sys:      0.009 wall:      3.686)
>>  Equality constraint Jacobian.......:      0.000 (sys:      0.000 wall:      0.000)
>>  Inequality constraint Jacobian.....:      0.145 (sys:      0.007 wall:      3.725)
>>  Lagrangian Hessian.................:      0.000 (sys:      0.000 wall:      0.000)
>>
>> On Thu, Nov 13, 2014 at 12:02 AM, Tony Kelman <kelman at berkeley.edu>
>> wrote:
>>
>>> Yikes, you should have said you have a dense Hessian; that's bad news.
>>> You really have 50000 x 50000 all-to-all nonlinear interaction terms? I
>>> hope you're using OpenBLAS or MKL. Depending on the results of
>>> print_timing_statistics, it's also a good idea to experiment with as many
>>> different linear solvers as you can get.
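>>>
>>> (That would be the linear_solver option; depending on which libraries
>>> your Ipopt was built against, the candidates include for example:
>>>
>>>     linear_solver  ma27
>>>
>>> or ma57, ma86, ma97, mumps, pardiso.)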
>>>
>>>
>>>  *From:* John Schulman <john.d.schulman at gmail.com>
>>> *Sent:* Wednesday, November 12, 2014 10:47 PM
>>> *To:* Tony Kelman <kelman at berkeley.edu>
>>> *Cc:* ipopt at list.coin-or.org
>>> *Subject:* Re: [Ipopt] Low # iters, ensuring that solution remains
>>> feasible
>>>
>>>  Hi Tony,
>>>
>>> Funny to get a reply from someone so close by after launching a message
>>> out to the colossal internet.
>>>
>>> I wasn't warm-starting at all--that's a great suggestion. Hopefully the
>>> Lagrange multipliers won't change too much between iterations. Otherwise,
>>> there may be some problem-specific ways to guess a reasonable Lagrange
>>> multiplier here.
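>>>
>>> (For my own notes: per the Ipopt options reference, a dual warm start
>>> seems to come down to settings like these in ipopt.opt -- the push
>>> values are placeholders, not recommendations:
>>>
>>>     warm_start_init_point       yes
>>>     warm_start_bound_push       1e-9
>>>     warm_start_mult_bound_push  1e-9
>>>
>>> with the multiplier values themselves supplied through the interface's
>>> starting-point callback.)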
>>>
>>> Regarding speed & algorithm choice, BFGS seems like the right fit here,
>>> since in this problem it's not really practical to compute the Hessian
>>> (which is dense).
>>> But I'll be sure to check out print_timing_statistics -- I'm new to
>>> Ipopt and haven't found these handy tools yet.
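>>>
>>> (It appears to be just another option, e.g. in ipopt.opt:
>>>
>>>     print_timing_statistics  yes
>>> )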
>>>
>>> John
>>>
>>>
>>> On Wed, Nov 12, 2014 at 10:19 PM, Tony Kelman <kelman at berkeley.edu>
>>> wrote:
>>>
>>>>   Hi John, good to see groups next door also using Ipopt.
>>>>
>>>> Generally speaking, this is a pretty hard thing to do with an
>>>> interior-point method. Are you warm-starting the dual variables as well,
>>>> or just the primal? That may help, but it depends how closely related the
>>>> subproblems are. I'd also avoid quasi-Newton Hessian approximations if
>>>> you have a speed-critical application; you'll get better convergence in
>>>> most cases if you use a modeling tool that can provide exact Hessians.
>>>> Have you looked at the breakdown of computation time from
>>>> print_timing_statistics?
>>>>
>>>> -Tony
>>>>
>>>> On Wed, Nov 12, 2014 at 10:06 PM, John Schulman <
>>>> john.d.schulman at gmail.com> wrote:
>>>>
>>>>>  Short:
>>>>>
>>>>> I am calling Ipopt repeatedly to solve a series of subproblems.
>>>>> For each subproblem, Ipopt is initialized with a feasible solution,
>>>>> and max_iter is set to 50 or so.
>>>>> The optimization terminates early, and often this intermediate
>>>>> solution is wildly infeasible.
>>>>> I'm wondering if there are any settings that will ensure that the
>>>>> result is nearly feasible.
>>>>>
>>>>> Longer:
>>>>>
>>>>> I am using Ipopt to solve a series of subproblems of the form
>>>>> minimize f(x), subject to g(x) < delta.
>>>>> Here g is a distance function of sorts, measuring Distance(x_0, x),
>>>>> where x_0 is the initialization.
>>>>> So the initial point x_0 is feasible.
>>>>> x has dimension 50000 or so, so I am setting hessian_approximation to
>>>>> limited-memory.
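>>>>>
>>>>> A minimal sketch of one such subproblem in Python (via the cyipopt
>>>>> interface, which is an assumption; f, grad_f, x0, and delta are
>>>>> stand-ins, and g is a squared Euclidean distance purely for
>>>>> illustration):
>>>>>
>>>>>     import numpy as np
>>>>>     import cyipopt
>>>>>
>>>>>     x0 = np.zeros(50000)           # feasible initialization
>>>>>     delta = 0.1                    # trust-region-like radius
>>>>>     f = lambda x: np.sum(x ** 2)   # stand-in objective
>>>>>     grad_f = lambda x: 2.0 * x     # stand-in gradient
>>>>>
>>>>>     class Subproblem:
>>>>>         def objective(self, x):
>>>>>             return f(x)
>>>>>         def gradient(self, x):
>>>>>             return grad_f(x)
>>>>>         def constraints(self, x):
>>>>>             # g(x) = Distance(x_0, x), squared Euclidean here
>>>>>             return np.array([np.sum((x - x0) ** 2)])
>>>>>         def jacobian(self, x):
>>>>>             # dense 1 x n Jacobian of the single constraint
>>>>>             return 2.0 * (x - x0)
>>>>>
>>>>>     nlp = cyipopt.Problem(n=x0.size, m=1, problem_obj=Subproblem(),
>>>>>                           cl=np.array([-1e20]),  # -inf to Ipopt
>>>>>                           cu=np.array([delta]))
>>>>>     nlp.add_option("hessian_approximation", "limited-memory")
>>>>>     nlp.add_option("max_iter", 50)
>>>>>     x_opt, info = nlp.solve(x0)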
>>>>>
>>>>> I need to keep to a low number of iterations, say 50 or 100, so the
>>>>> overall computation time remains reasonable.
>>>>> It's not essential at all that the solution generated is optimal; I
>>>>> just want to improve the objective as much as possible while remaining
>>>>> feasible.
>>>>>
>>>>> I tried fiddling with the barrier parameters but didn't have any luck.
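>>>>>
>>>>> (By "fiddling" I mean options along these lines -- the values are
>>>>> placeholders, not what I actually ran:
>>>>>
>>>>>     mu_strategy  adaptive
>>>>>     mu_init      1e-2
>>>>>     mu_target    1e-4
>>>>> )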
>>>>> Any suggestions?
>>>>> Thanks in advance for your time.
>>>>>
>>>>> John
>>>>>