<div dir="ltr"><div><div>Hi Chinmay -<br><br></div>If your objective function is differentiable, then you can avoid implementing an analytical gradient by instead implementing a finite difference approximation. Since this requires at least one objective function evaluation per decision variable, it can be quite slow depending on the problem size, but it is a possible way forward. (I have done this when my objective function depended on the output of a third-party code.) You could also look into automatic differentiation tools that could determine the gradient of your objective for you. However, if your objective is non-differentiable within your feasible region, you will need to find another optimization algorithm.<br><br></div>- Seth<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 1, 2018 at 9:39 AM, Chinmay Rajhans Official <span dir="ltr"><<a href="mailto:rajhanschinmay2@gmail.com" target="_blank">rajhanschinmay2@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">To: all,<div><br></div><div>I am trying to solve a nonlinear optimization problem using the IPOpt solver inside MATLAB.</div><div><br></div><div>The objective function is very complicated.</div><div>Can supplying the objective gradient be made optional?</div><div>Or</div><div>can the code run without supplying the gradient of the objective function?</div><div><br></div><div><br></div><div>Thank you.</div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Chinmay</div><div><br></div><div><br></div></font></span></div>
<br>______________________________<wbr>_________________<br>
Ipopt mailing list<br>
<a href="mailto:Ipopt@list.coin-or.org">Ipopt@list.coin-or.org</a><br>
<a href="https://list.coin-or.org/mailman/listinfo/ipopt" rel="noreferrer" target="_blank">https://list.coin-or.org/<wbr>mailman/listinfo/ipopt</a><br>
<br></blockquote></div><br></div>
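As an illustration of the finite-difference workaround Seth describes, here is a minimal sketch of a forward-difference gradient. It is written in Python/NumPy purely for illustration (the question concerns MATLAB, and the helper name `fd_gradient` is hypothetical, not part of IPOPT's API). Note that it performs one extra objective evaluation per decision variable, which is the cost Seth warns about:

```python
import numpy as np

def fd_gradient(f, x, h=1e-8):
    """Forward-difference approximation of the gradient of f at x.

    Costs one extra call to f per decision variable, so it can be
    slow for large problems, as noted in the reply above.
    """
    x = np.asarray(x, dtype=float)
    f0 = f(x)                          # baseline objective value
    grad = np.empty_like(x)
    for i in range(x.size):
        step = h * max(1.0, abs(x[i]))  # scale the step to the variable's magnitude
        xp = x.copy()
        xp[i] += step
        grad[i] = (f(xp) - f0) / step
    return grad
```

For example, with f(x) = x0^2 + 3*x1, `fd_gradient(f, [1.0, 2.0])` should return values close to the analytical gradient (2, 3). A central difference (evaluating at x - step as well) is more accurate but doubles the evaluation count.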