[CppAD] Multithreading

Jey Kottalam jey at kottalam.net
Mon Mar 19 18:39:24 EDT 2012


Dominik, could you provide a standalone testcase that would allow us to
reproduce the problem you're experiencing?

Brad,

> 3. I am not sure I understand the discussion below. Are you trying to use
> CppAD without calling parallel_setup? Or have you found a bug in CppAD?

I am calling parallel_setup, and I don't know whether everything is working
correctly or not. I'm using helgrind to check for threading errors, and it
does issue some warnings in CppAD::ADFun<double>::operator=, but helgrind
tends to be conservative in what it reports and could simply be
complaining about code that is actually thread safe.

I don't quite understand the motivation for the thread_alloc machinery.
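My best guess at the motivation can be sketched as a hypothetical per-thread free list (the names and layout below are mine, not CppAD's actual code): each thread caches the blocks it has freed so it can reuse them without taking a lock, and only the fallback to the shared heap needs synchronization. That would also explain why freeing memory from a thread other than the one that allocated it is problematic under such a scheme.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical sketch, not CppAD's implementation: a per-thread free
// list lets a thread reuse blocks it freed earlier without locking;
// only the slow path to the shared heap is synchronized.
struct Block { std::size_t size; };

thread_local std::vector<Block*> free_list;  // per-thread cache, no mutex
std::mutex global_mutex;                     // guards the slow path only

Block* allocate(std::size_t size) {
    if (!free_list.empty()) {                // fast path: lock-free reuse
        Block* b = free_list.back();
        free_list.pop_back();
        b->size = size;
        return b;
    }
    // slow path: fall back to the shared heap under a lock
    std::lock_guard<std::mutex> lock(global_mutex);
    return new Block{size};
}

void deallocate(Block* b) {
    // Returns the block to *this* thread's cache; freeing from a
    // different thread than the allocating one would break the scheme.
    free_list.push_back(b);
}
```

If something like this is the idea, then hold_memory presumably toggles whether freed blocks stay in the per-thread cache or go back to the system allocator.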

-Jey


On Mon, Mar 19, 2012 at 3:25 PM, <bradbell at seanet.com> wrote:

> 1. If you do not call hold_memory
>  http://www.coin-or.org/CppAD/Doc/ta_hold_memory.xml
> then CppAD will not hold onto memory for quick use by the same thread;
> i.e., the behavior will be very close to using the original system
> allocator.
>
> 2. It is still in the CppAD API requirements to call parallel_setup
>   http://www.coin-or.org/CppAD/Doc/ta_parallel_setup.xml
> before using CppAD with multiple threads.
>
> 3. I am not sure I understand the discussion below. Are you trying to use
> CppAD without calling parallel_setup ? Or have you found a bug in CppAD ?
>
>
> > Hi Dominik,
> >
> > Is this occurring because thread_alloc expects memory to be freed from
> > the same thread where it's allocated?
> >
> >> Or is it really true that independently allocating memory in
> >> one thread slows down everything even though no mutex mechanism is needed?
> >
> > This can happen when using a malloc that isn't designed for threaded
> > workloads, but it's easy to select an alternate malloc on modern
> > systems. For example, under Linux/x86 and Linux/x86-64 the options
> > include TCMalloc, jemalloc, TBBmalloc, and Hoard. In practice I would
> > recommend just using the system malloc until you've determined that
> > the malloc is actually a performance problem. The important thing,
> > however, is to use a *thread-safe* malloc; for example, under Linux
> > with GCC you should pass the "-pthread" option when both compiling and
> > linking. (It may not be enough to pass "-lpthread" when linking, since
> > "-pthread" enables other options too.)
> >
> > Attached is a rudimentary patch that switches thread_alloc to use
> > operator new and delete directly. You might want to give that a shot.
> >
> > However, even after using this patch and building with -DNDEBUG, I
> > still get some warnings from helgrind about data races in
> > CppAD::ADFun<double>::operator=, and I don't know whether they are
> > simply spurious warnings. I'm planning to switch to a multiprocess
> > model using mmap(MAP_SHARED) and fork() to avoid these issues.
> >
> > -Jey
> >
> > On Mon, Mar 19, 2012 at 9:00 AM, Dominik Skanda
> > <Dominik.Skanda at biologie.uni-freiburg.de> wrote:
> >> Concerning my last message, I have looked a little bit deeper in the
> >> code and found out that essentially the command
> >>
> >> pod_vector<Base> Partial;
> >> Partial.extend(total_num_var_ * p);
> >>
> >> in reverse.hpp causes the error.
> >> So, as far as I can tell, the vector Partial is only accessed within the
> >> function
> >>
> >> "VectorBase ADFun<Base>::Reverse(size_t p, const VectorBase &w)"
> >>
> >> and therefore should be thread safe (since it is not a global variable)! On
> >> the other hand, due to the new memory allocator, it is not. I think that
> >> the new memory allocator should only be used where global information is
> >> created. Or is it really true that independently allocating memory in
> >> one thread slows down everything even though no mutex mechanism is needed?
> >> This would make the applicability of the CppAD library much more
> >> flexible!
> >>
> >>
> >> _______________________________________________
> >> CppAD mailing list
> >> CppAD at list.coin-or.org
> >> http://list.coin-or.org/mailman/listinfo/cppad