[CppAD] cppad parallel setup
Brad Bell
bradbell at seanet.com
Sun Aug 30 10:39:56 EDT 2015
Here is a recent discussion that is appropriate for the CppAD mailing list:
Hi Brad,
Thanks for the further clarification. I think I understand the memory
model a little better now. For case 2, I’ll just protect Independent
with a "critical" block.
Feel free to forward this to the mailing list.
AJ
> On Aug 30, 2015, at 5:19 PM, Brad Bell <bradbell at uw.edu> wrote:
>
> On 08/30/2015 06:50 AM, AJ Bostian wrote:
>> Hi Brad,
>>
>> Thanks for the clarification.
>>
>> In my first use case, OpenMP will spawn different threads for
>> different iterations of the loop, so it seems that there will be no
>> issue with calling Independent/Dependent inside that loop. Is that
>> the right interpretation? In that case, does the parallel memory
>> initialization need to occur per-thread, or once for the whole
>> program? (Or does it even need to be explicitly initialized in that
>> manner?)
> The initialization needs to be done in single thread mode; see
> http://www.coin-or.org/CppAD/Doc/ta_parallel_setup.xml
>>
>> In my second use case, I guess I just need to make sure that the
>> threads synchronize before Independent/Dependent are called, and also
>> make sure that the root thread does the calling.
> A different recording is going on for each thread, so the calls to
> Independent must be done by the thread that will do the corresponding
> computation. You must make sure that the thread numbers are the same
> for the computations you want in the same recording. It might be
> simpler to start with the team of OpenMP threads example and see if
> you can make it do what you want.
>>
>> AJ
>>
>>> On Aug 30, 2015, at 4:39 PM, Brad Bell <bradbell at uw.edu> wrote:
>>>
>>> The restriction on parallel CppAD is that there can only be one
>>> currently active call to Independent per thread. An attempt to have
>>> multiple calls active at the same time will generate an error
>>> message. I will try to make this more explicit under the Parallel
>>> heading for Independent; see
>>> http://www.coin-or.org/CppAD/Doc/independent.xml#Parallel%20Mode
>>>
>>> For examples, see
>>> file:///home/bradbell/cppad.git/doc/multi_thread.htm
>>> file:///home/bradbell/cppad.git/doc/thread_test.cpp.htm
>>> and the OpenMP specific case
>>> file:///home/bradbell/cppad.git/doc/team_openmp.cpp.htm
>>>
>>>
>>> On 08/30/2015 04:38 AM, AJ Bostian wrote:
>>>> Hi Brad,
>>>>
>>>> I’ve been using a number of AD tools for a while now, and I find
>>>> myself needing one that works well in a parallel environment.
>>>> Scanning through github, I see that cppad has parallel support.
>>>> However, I have two use cases, and I want to verify that both are
>>>> feasible. Could you advise me?
>>>>
>>>> Each case involves a for-loop parallelized with OpenMP.
>>>>
>>>> Case 1: The calls to Independent, Dependent, and Forward/Reverse
>>>> are completely self-contained within the loop. (I.e., each
>>>> iteration of the loop deals with a completely different
>>>> mathematical function, and these functions do not need to survive
>>>> outside the loop.)
>>>>
>>>> Case 2: The calls to Independent, Dependent, and Forward/Reverse
>>>> are completely outside the loop. (I.e., each iteration of the loop
>>>> is helping to build up a larger mathematical function.)
>>>>
>>>> Can cppad handle both of these? If so, does memory initialization
>>>> occur in the same manner in each?
>>>>
>>>> Thanks for your help,
>>>> AJ
>>>>
>>>> —
>>>> Dr. AJ Bostian
>>>> University of Tampere
>>>> Tel: +358 50 318 7328
>>>> Email: aj at bostian.us.com
>>>> Web: aj.bostian.us.com <http://aj.bostian.us.com/>
>>>>
>>>>
>>>>
>>>
>>
>