[Ipopt] Sparse Linear Solver on GPU for IPOPT?
Stuart Rogers
smr1 at ualberta.ca
Fri Aug 19 14:08:01 EDT 2016
To my knowledge, IPOPT only exploits direct sparse linear solvers that
execute on the CPU (i.e. MA27, MA57, MA77, MA86, MA97, PARDISO, WSMP, and
MUMPS). Except for MA27, all of these direct sparse linear solvers have been
parallelized (to varying degrees) to exploit multi-core CPUs. Have there
been any studies that investigate the performance improvement obtained by
executing a direct or iterative sparse linear solver on the GPU when using
IPOPT to solve NLPs?
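For context, the choice among those CPU solvers is made through Ipopt's
linear_solver option. A minimal C++ sketch (MA57 is only an illustrative
choice here; availability depends on how Ipopt was configured and built)
might look like:

  #include "IpIpoptApplication.hpp"

  using namespace Ipopt;

  int main()
  {
     SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

     // Select the sparse symmetric solver used for the KKT systems.
     // Other values include ma27, ma77, ma86, ma97, pardiso, wsmp, mumps.
     app->Options()->SetStringValue("linear_solver", "ma57");

     // The same setting in a plain-text ipopt.opt file would be:
     //   linear_solver ma57

     app->Initialize();
     // ... create a TNLP and call app->OptimizeTNLP(nlp) as usual ...
     return 0;
  }

A GPU-based solver would presumably have to be hooked in at this same level,
i.e. behind Ipopt's linear solver interface.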
There are many iterative sparse linear solvers implemented to run on the
GPU, such as those in the following libraries:
cuSPARSE (https://developer.nvidia.com/cusparse)
MAGMA (http://icl.cs.utk.edu/magma/)
CULA Sparse (http://www.culatools.com/sparse/)
MATLAB's bicgstab and gmres (http://www.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html#buqplvc)
ViennaCL (http://viennacl.sourceforge.net/)
clSPARSE (https://github.com/clMathLibraries/clSPARSE)
A few direct sparse linear solvers have been implemented to run on the GPU,
such as those in the following libraries:
cuSOLVER (https://developer.nvidia.com/cusolver)
SPRAL (http://www.numerical.rl.ac.uk/spral/)
SPQR (http://faculty.cse.tamu.edu/davis/suitesparse.html)
SuperLU_DIST (http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
ViennaCL and clSPARSE are implemented in OpenCL, so they run on any OpenCL
device (e.g. NVIDIA GPUs, AMD GPUs, and Intel Xeon Phi coprocessors). All the
other libraries listed above are implemented in CUDA and therefore run only
on NVIDIA GPUs.
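To give a sense of what calling one of these GPU libraries looks like, here
is a minimal ViennaCL sketch that assembles a tiny sparse system on the host,
copies it to the compute device, and solves it with restarted GMRES. The 3x3
matrix and the solver parameters are made up purely for illustration; the
linear systems arising inside IPOPT are of course much larger and symmetric
indefinite.

  #include <vector>
  #include <map>

  #include "viennacl/compressed_matrix.hpp"
  #include "viennacl/vector.hpp"
  #include "viennacl/linalg/gmres.hpp"

  int main()
  {
     const std::size_t n = 3;

     // Assemble a small sparse matrix on the host (row -> {column: value}).
     std::vector< std::map<unsigned int, double> > host_A(n);
     host_A[0][0] = 4.0; host_A[0][1] = 1.0;
     host_A[1][0] = 1.0; host_A[1][1] = 3.0;
     host_A[2][2] = 2.0;
     std::vector<double> host_b(n);
     host_b[0] = 1.0; host_b[1] = 2.0; host_b[2] = 3.0;

     // Copy the system to the compute device (GPU via OpenCL or CUDA backend).
     viennacl::compressed_matrix<double> A(n, n);
     viennacl::vector<double> b(n);
     viennacl::copy(host_A, A);
     viennacl::copy(host_b, b);

     // Solve A x = b with restarted GMRES (tolerance, max iterations, Krylov dim).
     viennacl::vector<double> x =
        viennacl::linalg::solve(A, b, viennacl::linalg::gmres_tag(1e-10, 100, 20));

     // Copy the solution back to the host.
     std::vector<double> host_x(n);
     viennacl::copy(x, host_x);
     return 0;
  }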