[Coin-ipopt] Re: memory consumption
Frank J. Iannarilli
franki at aerodyne.com
Fri Jun 10 16:57:08 EDT 2005
Hi Andreas (and All),
Just to correct/refine a point below:
David Gay was indicating that separate invocations of AMPL and the
solver would not only reduce the total memory used (otherwise consumed by
both at once), but also allow AMPL to skip its #collect phase and
consequently avoid doubling its own memory usage.
Regards,
--On Friday, June 10, 2005 3:46 PM -0400 Andreas Waechter
<andreasw at watson.ibm.com> wrote:
> Hi Frank,
>
> Thanks for letting us know about David Gay's thoughts on the memory
> consumption in AMPL. (I'm sending this to the mailing list in case other
> users care/have ideas....)
>
> Just a few more comments:
>
> You probably know this already, but if not, here is the basic way that
> AMPL works:
>
> The AMPL interpreter (which is started when you type "ampl") is the
> program that can read and understand the model language. What usually
> happens when you type "solve" is that this interpreter writes an output
> file (with the extension .nl) and starts a new executable, which is the
> solver program (for Ipopt, it's called 'ipopt'). This solver executable
> gets the name of the .nl file as a command line argument, reads that
> file after startup, and then the optimization method can call certain
> functions in the "Ampl Solver Library" (ASL) to obtain function values,
> derivatives, etc.
>
> I guess when David Gay suggested first writing the .nl file (which you
> can do in the AMPL interpreter, for example, by issuing
>
> model mymodel.mod;
> data mydata.dat;
> write gmymodel;
>
> this will create an ASCII file called mymodel.nl), then quitting the
> AMPL interpreter and starting the solver executable separately (e.g.,
> with "ipopt mymodel.nl"), he was thinking about the overall memory
> consumption of the two (simultaneously running) processes, the AMPL
> interpreter and the optimization executable, on one's computer.
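> As a sketch, the whole two-step sequence described above (filenames are
> illustrative) would look like this in a shell session:

```shell
# Step 1: in the AMPL interpreter, load the model and data and write
# the .nl file. The leading "g" in "write gmymodel" selects the text
# (ASCII) output format, producing mymodel.nl.
ampl
ampl: model mymodel.mod;
ampl: data mydata.dat;
ampl: write gmymodel;
ampl: quit;

# Step 2: with the AMPL interpreter no longer running (so the two
# processes never hold memory simultaneously), start the solver
# executable on its own and point it at the .nl file.
ipopt mymodel.nl
```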
>
> However, the valgrind map of memory consumption that I sent you earlier
> was already created for the solver executable alone (the AMPL
> interpreter was not running at the same time). And there it seemed
> that, for your problem, about 60% of the memory was consumed by the ASL.
>
> I was not aware that the ASL might constitute such a large part of the
> memory consumption (assuming that I indeed interpreted valgrind's output
> correctly :) of the AMPL solver executable.
>
> If anyone else on the mailing list has comments/ideas it might be nice to
> hear about them...
>
> Everybody have a nice weekend,
>
> Andreas
>
>
>
> On Fri, 10 Jun 2005, Frank J. Iannarilli wrote:
>
>> --On Friday, June 10, 2005 10:28 AM -0400 Andreas Waechter
>> <andreasw at watson.ibm.com> wrote:
>>
>> > Hi Frank,
>> >
>> > I ran
>> >
>> > valgrind --tool=massif
>> >
>> > on one of your examples - the output is attached.
>> >
>> > It seems that indeed a lot of memory is allocated in the AMPL solver
>> > library (more than 50% of the total)...
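>> > For reference, this kind of measurement can be taken on the solver
>> > executable alone; a sketch of the invocation (filename illustrative):

```shell
# Run valgrind's massif heap profiler on the solver executable by
# itself, so the profile covers only the ipopt/ASL process and not
# the AMPL interpreter.
valgrind --tool=massif ipopt mymodel.nl
```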
>> >
>> > I don't know how to change that, but it might indicate that for
>> > large problems it could really pay off to code your NLP directly in
>> > Fortran/C/C++ instead of AMPL... I'm surprised...
>> >
>> > Well, I only had a very superficial look at the valgrind output, and
>> > I'm not so familiar with that output, so my first impression might be
>> > wrong...
>> >
>> > Cheers
>> >
>> > Andreas
>> >
>> > PS: So, I'm going to hide now for the rest of the day and try to
>> > finally read some overdue papers... :)
>>
>>
>> Hi Andreas,
>>
>> Hope you've successfully hidden yourself for a while, *before* you
>> respond :-)
>>
>> On the related AMPL side of the memory consumption issue, David Gay (of
>> AMPL) responded to my observation that, during AMPL's preparation of the
>> problem (i.e. the .nl file), its memory consumption *doubles* in the
>> "collect" phase, after the so-called genmod (model generation) phase. He
>> explains that AMPL makes copies of all its genmod structures prior to
>> the presolve phase, presumably for ease of bookkeeping. These
>> memory-doubling phases can be avoided by running AMPL w/o invoking the
>> solver, instead writing the .nl file for a separate solver invocation.
>> His point of view is:
>>
>> "Memory is cheap and getting cheaper; for big problems, machines
>> with more than 32 bits of address space are readily available.
>> We've long traded memory for speed and reliability, and I remain
>> convinced that this is the correct course."
>>
>>
>> I tend to agree, but perhaps my industry roost permits me to dismiss what
>> to me are negligible incremental memory costs for running large problems.
>> What is your sense of how other people feel about this trade? Do you
>> get the sense that most people won't object to ASL eating >60% of the
>> required memory on the IPOPT/ASL end? Is this issue worth communicating
>> to David and Bob Fourer?
>>
>>
>> Regards,
>>
>>
>>
>>
>> Frank J. Iannarilli, franki at aerodyne.com
>> Aerodyne Research, Inc., 45 Manning Rd., Billerica, MA 01821 USA
>> www.aerodyne.com/cosr/cosr.html
>>
>
>
Frank J. Iannarilli, franki at aerodyne.com
Aerodyne Research, Inc., 45 Manning Rd., Billerica, MA 01821 USA
www.aerodyne.com/cosr/cosr.html