<div dir="ltr"><div>The only one of those that's likely to be able to provide the inertia information needed by Ipopt's algorithm is SPRAL, given that it is by the same authors as the other HSL CPU solvers Ipopt can use. (SPQR might also work, but is likely to be more expensive.) I suspect you might end up limited by slow GPU-CPU communication, both for the values that change at every iteration (function values and derivatives) and for the control flow of the algorithm itself, which runs on the CPU, unless you were to rewrite all of Ipopt, as well as your function and derivative evaluations, in CUDA or OpenCL.<br><br></div>Jonathan Hogg is the person to ask whether they have hooked their sparse GPU solvers up to a full Ipopt-like optimization algorithm and collected performance numbers.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 19, 2016 at 11:08 AM, Stuart Rogers <span dir="ltr"><<a href="mailto:smr1@ualberta.ca" target="_blank">smr1@ualberta.ca</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span style="font-size:12.8px">To my knowledge, IPOPT only exploits direct sparse linear solvers that execute on the CPU (i.e. MA27, MA57, MA77, MA86, MA97, PARDISO, WSMP, and MUMPS). Except for MA27, all of these solvers have been parallelized (to varying degrees) to exploit multicore CPUs. Have there been any studies that investigate the performance improvements gained by executing a direct or iterative sparse linear solver on the GPU when using IPOPT to solve NLP problems?
</span><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">There are many iterative sparse linear solvers implemented to run on the GPU, such as those in the following libraries:<div>cuSPARSE (<a href="https://developer.nvidia.com/cusparse" target="_blank">https://developer.nvidia.com/cusparse</a>)</div><div>MAGMA (<a href="http://icl.cs.utk.edu/magma/" target="_blank">http://icl.cs.utk.edu/magma/</a>)</div><div>CULA Sparse (<a href="http://www.culatools.com/sparse/" target="_blank">http://www.culatools.com/sparse/</a>)</div><div>MATLAB's bicgstab and gmres (<a href="http://www.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html#buqplvc" target="_blank">http://www.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html#buqplvc</a>)</div><div>ViennaCL (<a href="http://viennacl.sourceforge.net/" target="_blank">http://viennacl.sourceforge.net/</a>)</div><div>clSPARSE (<a href="https://github.com/clMathLibraries/clSPARSE" target="_blank">https://github.com/clMathLibraries/clSPARSE</a>)</div><div><br></div><div>A few direct sparse linear solvers have been implemented to run on the GPU, such as those in the following libraries:</div><div>cuSOLVER (<a href="https://developer.nvidia.com/cusolver" target="_blank">https://developer.nvidia.com/cusolver</a>)</div><div>SPRAL (<a href="http://www.numerical.rl.ac.uk/spral/" target="_blank">http://www.numerical.rl.ac.uk/spral/</a>)</div><div>SPQR (<a href="http://faculty.cse.tamu.edu/davis/suitesparse.html" target="_blank">http://faculty.cse.tamu.edu/davis/suitesparse.html</a>)</div><div>SuperLU_DIST (<a href="http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist" target="_blank">http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist</a>)</div></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">ViennaCL and clSPARSE are implemented in OpenCL and so run on any OpenCL device (e.g. an NVIDIA GPU, an AMD GPU, or an Intel Xeon Phi coprocessor). All the other libraries listed above are implemented in CUDA and therefore run only on NVIDIA GPUs.</div></div>
<br>_______________________________________________<br>
Ipopt mailing list<br>
<a href="mailto:Ipopt@list.coin-or.org">Ipopt@list.coin-or.org</a><br>
<a href="http://list.coin-or.org/mailman/listinfo/ipopt" rel="noreferrer" target="_blank">http://list.coin-or.org/mailman/listinfo/ipopt</a><br>
<br></blockquote></div><br></div>
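<div>P.S. For readers unfamiliar with the inertia requirement mentioned above: Ipopt's interior-point algorithm needs to know the inertia of the symmetric indefinite KKT matrix at each iteration, i.e. its counts of positive, negative, and zero eigenvalues, which is why a factorization that reports the inertia (as the HSL solvers and SPRAL do) matters. A minimal pure-Python illustration of the concept on a 2x2 symmetric matrix (the function name and tolerance below are illustrative, not part of any Ipopt API):</div>

```python
import math

def inertia_2x2(a, b, c):
    """Inertia (n_pos, n_neg, n_zero) of the symmetric matrix [[a, b], [b, c]].

    The eigenvalues of a 2x2 symmetric matrix are
    ((a + c) +/- sqrt((a - c)**2 + 4*b**2)) / 2.
    """
    disc = math.sqrt((a - c) ** 2 + 4.0 * b ** 2)
    eigs = [((a + c) + disc) / 2.0, ((a + c) - disc) / 2.0]
    pos = sum(1 for e in eigs if e > 1e-12)    # count of positive eigenvalues
    neg = sum(1 for e in eigs if e < -1e-12)   # count of negative eigenvalues
    return pos, neg, 2 - pos - neg

# A saddle-point (KKT-like) matrix has mixed inertia: here one positive
# and one negative eigenvalue.
print(inertia_2x2(2.0, 1.0, -1.0))
```

For an NLP with n variables and m equality constraints, Ipopt expects the KKT matrix to have inertia (n, m, 0) at an acceptable step, and perturbs the matrix when the factorization reports otherwise.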
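<div>P.P.S. On how the solver is selected in practice: Ipopt chooses among its compiled-in solvers at run time through the <code>linear_solver</code> option, most simply via an <code>ipopt.opt</code> file in the working directory. A minimal sketch of generating such a file (it writes to a temporary directory so nothing is left behind; the choice of <code>ma97</code> is just an example and must match a solver your Ipopt binary was actually built with):</div>

```python
import os
import tempfile

# Ipopt reads an "ipopt.opt" file from the current working directory at
# startup; each line has the form "<option_name> <value>".  The
# "linear_solver" option selects among the solvers Ipopt was compiled
# with (ma27, ma57, ma77, ma86, ma97, pardiso, wsmp, mumps, ...).
options = {
    "linear_solver": "ma97",  # example only; must be available in your build
    "print_level": 5,
}

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "ipopt.opt")
    with open(path, "w") as f:
        for name, value in options.items():
            f.write(f"{name} {value}\n")
    with open(path) as f:
        contents = f.read()

print(contents)
```

Running an Ipopt-driven program from the directory containing the file picks up the options; the same settings can also be passed programmatically through whichever Ipopt interface (C++, C, Fortran, AMPL, etc.) you use.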