[Coin-discuss] OSI layer cached data, interface guidelines

Lou Hafer lou at cs.sfu.ca
Mon Dec 17 13:01:54 EST 2001


Brady,

	I can remember being surprised by several of the bound conventions in
glpk. In this instance, I'd vote for the cached vector solution. Regenerating
the entire bounds vector for each query seems like an excessive amount of work.
The dylp interface keeps copies of row bounds, row sense/rhs/range,
objective, primal and dual solutions, reduced costs, row activity, and
row-major and column-major copies of the constraint matrix. (It would keep
column bounds too, but as it happens dylp's notion of column bounds matches
OSI's notion.) My guess is that this is overkill (particularly the two copies
of the constraint matrix). Cached copies are created on the first query for
the information and (typically) destroyed when changes to the problem make
them invalid. The copy of the objective function exists to make it easy to
deal with changing the problem sense from max to min.
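	A rough sketch of the cache-on-first-query pattern I have in mind
(class and method names are made up for illustration; this isn't the actual
OSI or dylp code):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical shim illustrating cached copies: built on the first query,
// invalidated when the problem changes. Not the real OSI interface.
class SolverShim {
public:
    explicit SolverShim(std::vector<double> rowUpper)
        : rowUpper_(std::move(rowUpper)) {}

    // First query regenerates the cached vector; later queries reuse it.
    const std::vector<double>& getRowUpper() const {
        if (!cacheValid_) {
            cachedRowUpper_ = rowUpper_;  // rebuild from solver internals
            cacheValid_ = true;
        }
        return cachedRowUpper_;
    }

    // Any modification to the problem invalidates the cached copy.
    void setRowUpper(std::size_t i, double v) {
        rowUpper_[i] = v;
        cacheValid_ = false;
    }

private:
    std::vector<double> rowUpper_;           // stands in for solver state
    mutable std::vector<double> cachedRowUpper_;
    mutable bool cacheValid_ = false;
};
```

The point is that the regeneration cost is paid once per sequence of
queries, rather than once per query.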

	(As an aside, I've finished the first implementation of the OSI layer
for dylp. I've shipped a copy up to IBM but haven't heard back from Laci.
Xmas vacation? Functionality (as much as there will be) is complete. The
major loose end is to throw errors at the proper moments.  In any event, if
you want a copy, you can pick it up from
  http://www.cs.sfu.ca/~lou/BonsaiG/OsiDylp.tar.gz
You'll need to type the final file name; the tarball doesn't show up on the
index page. I'd rather not advertise to the world at large just yet, but for
the COIN development community it might be useful.)

	The point about guidelines is well taken, and not really obvious
until you start trying to fit a new solver into the framework. Infinity is a
good case in point. Some solvers use a `finite' infinity, usually the max
floating point value. Others use IEEE infinity. There's a significant
difference in mathematical behaviour, and also in the level of system library
support. The OSI layer doesn't really care, but some care would be required
to write an application that would work with either flavour of infinity. The
question at the OSI layer would be `Should it specify a single flavour of
infinity?'
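	A small illustration of the mathematical difference (the names here
are mine, chosen for the example):

```cpp
#include <cassert>
#include <cfloat>
#include <cmath>

// With a `finite' infinity such as DBL_MAX, inf - inf is a well-defined 0.
// With IEEE infinity (HUGE_VAL), inf - inf is NaN, which then poisons every
// comparison it touches. Application code must be written for one or the
// other, or defend against both.
bool selfDifferenceIsZero(double inf) {
    return inf - inf == 0.0;
}
```

This is the sort of behavioural gap that makes it hard to write one
application that works with either flavour of infinity.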

	A sampling of other issues that occurred to me as I was working on
the dylp interface:

  * Solver parameters --- the current laissez-faire approach is implementable
    for any individual solver (by extending the parameter enum type). But it
    isn't going to make life easy for the user. Can we identify some
    reasonable common subset? Then, what to do about parameters that exploit
    the unique capabilities of individual solvers?
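    One possible shape for a common subset with an escape hatch for
    solver-specific settings; again, these names are hypothetical, not a
    proposal for the actual OSI enum:

```cpp
#include <cassert>
#include <map>
#include <string>

// A common parameter subset every solver must recognise, plus a name/value
// map for parameters unique to one solver. Illustrative only.
enum CommonIntParam { MaxNumIteration, MaxNumIterationHotStart,
                      LastCommonIntParam };

class ParamStore {
public:
    // Returns false for keys this solver doesn't recognise, so the client
    // can tell whether the setting took effect.
    bool setIntParam(CommonIntParam key, int value) {
        if (key >= LastCommonIntParam) return false;
        common_[key] = value;
        return true;
    }
    int getIntParam(CommonIntParam key) const { return common_.at(key); }

    // Solver-specific parameters fall back to a string-keyed map.
    void setSolverParam(const std::string& name, int value) {
        extra_[name] = value;
    }

private:
    std::map<CommonIntParam, int> common_;
    std::map<std::string, int> extra_;
};
```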

  * Errors, logging, & such-like. There's a CoinError class, and a little bit
    of guidance on its use, but more extensive guidelines might be useful.
    Dylp is what you might call a chatty solver --- it'll dump huge amounts
    of information if you ask it. (Sort of goes back to the first point about
    parameters.) Other solvers are inclined towards the strong, silent type.
    A set of OSI layer functions to control logging and output files might be
    useful.

    Alternative view: Perhaps I'm thinking too much from the stand-alone
    solver point of view? But even in this case, there needs to be some way
    to tie solver functionality to client (user) program functionality.
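    As a straw man, the tie between client and solver could be as simple as
    a log level the client sets through the OSI layer, which each interface
    maps onto its solver's own (chatty or silent) reporting. All names here
    are hypothetical:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Sketch of OSI-layer logging control: messages above the client's chosen
// verbosity are suppressed. Illustrative names, not an actual OSI API.
class LogControl {
public:
    void setLogLevel(int level) { level_ = level; }
    int logLevel() const { return level_; }

    // A solver reports at some level; the client's setting filters it.
    void message(int level, const std::string& text) {
        if (level <= level_) log_ << text << '\n';
    }
    std::string transcript() const { return log_.str(); }

private:
    int level_ = 1;           // default: terse
    std::ostringstream log_;  // stands in for an output file
};
```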

  * Semantics for cold, warm, and hot starts. For cold start, I guess there's
    not too much room for discussion. But for warm and hot starts, there's a
    lot of room. Just how much state is required? What's the solver's
    internal break point, in terms of type and amount of changes to the
    problem, for warm vs. hot start? One way to come at this might be to
    collect the information for the various solvers and see if we can
    identify common boundaries.

    One big question that came up with respect to dylp: A hot start for dylp
    means the solver assumes its internal data structures are valid and can
    simply resume pivoting (after diddling the structures to deal with
    allowable problem modifications). The conceptual breakpoint was to avoid
    reinitialising (and refactoring) the basis. There really isn't a `hot
    start' object, in any meaningful sense. It would be possible to create
    such a beast, but the cost (in space and time) would be considerable.

    Warm start, on the other hand, is a convenient and compact object. You
    could create a warm start in one solver instance, destroy the instance,
    create a new solver instance, read in the same problem, and the warm
    start object would work just fine. (But note the warm start object
    doesn't contain the constraint system.)
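    To make the `compact object' claim concrete, a warm start could be little
    more than a pair of basis status vectors; a hypothetical sketch (not
    dylp's actual structures):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A warm start as nothing but basis status per column and row. It carries
// no constraint system and no solver state, so it survives the solver
// instance that created it. Names are illustrative.
enum Status { Basic, AtLower, AtUpper };

struct WarmStart {
    std::vector<Status> colStatus;
    std::vector<Status> rowStatus;
};

WarmStart saveBasis(const std::vector<Status>& cols,
                    const std::vector<Status>& rows) {
    return WarmStart{cols, rows};
}

// A fresh solver instance can adopt the basis so long as the problem it
// reads in has matching dimensions.
bool compatible(const WarmStart& ws, std::size_t nCols, std::size_t nRows) {
    return ws.colStatus.size() == nCols && ws.rowStatus.size() == nRows;
}
```

Contrast this with a hot start, where the state that matters (the factored
basis, the solver's working arrays) is exactly what's expensive to copy out.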

  * What is tested by commonUnitTest? It's easy enough to figure out by
    reading through the code, but some documentation would be helpful for
    people trying to make sense of it all for the first time.


							Lou


