[Csdp] Question regarding linear germs

Ronald Bruck rbruck at gmail.com
Sat Sep 3 22:36:52 EDT 2011


My question doesn't fit into the framework of general sdp problems, but of a rather specialized application.

Example:  the system matrix (let us call it W, and assume it's d x d) is PSD and is therefore the Gramian of points in
R^d,  W = (v_i . v_j), 1 <= i, j <= d.  (Do a Cholesky decomposition W = L L^T, set the v_i = the rows of L, et voilà!)  This is actually my main interest in SDP, since I am optimizing geometric problems, where the objects of interest are the points in R^d.
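To make the Cholesky trick concrete, here is a minimal numerical sketch (toy data, NumPy assumed, none of it from my actual code): factoring a PSD Gram matrix yields rows whose pairwise inner products reproduce it, so the points are recovered, though only up to an orthogonal transformation.

```python
import numpy as np

# Toy illustration: start from known points, form the Gram matrix
# W = (v_i . v_j), then factor it back with Cholesky.
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 4))   # 4 points in R^4 (rows of V)
W = V @ V.T                       # Gram matrix, PSD by construction

L = np.linalg.cholesky(W)         # W = L L^T
# The rows of L are points with the SAME Gram matrix as the v_i,
# hence equal to them up to an orthogonal transformation.
print(np.allclose(W, L @ L.T))    # True
```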

Often, some of these points are KNOWN; how to enter them?  For example, I have developed a code to intersect a family of closed balls

       ||x - ci|| <= ri,

and also COMPLEMENTS of balls,

    ||x - ci|| >= ri,

(NOT usually thought of as a convex problem!!) where the centers and radii are known.  The optimization problem is either to find the CLOSEST point in the intersection to a given point, or a FARTHEST point.  (Same problem, as far as SDP is concerned, except for a switch of sign on the objective function.)

I thought of two approaches.  The first: treat the c_i as UNKNOWNS, but recapture them (up to an orthogonal transformation of R^d) by including as constraints

       ci.cj = (known value for the known points).

But this adds QUADRATICALLY to the number of constraints.  The other approach, which I haven't seen used before, is to treat the standard orthonormal basis of R^d, the e_i, as unknowns.  They're unknown, but we recapture them by adding d(d+1)/2 constraints

      ei.ej = delta_{ij}   (Kronecker delta).

This is quadratic in the dimension, but that's often very small (2 or 3, commonly) in my problems.  We then append ONE MORE unknown x and set up the constraints for whatever the problem is in terms of x.x and the x.ei.  And there's the linear term I'm interested in:  x.ei is the i-th coordinate of x, which is usually impossible to access.
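A minimal sketch of what those d(d+1)/2 orthonormality constraints look like as trace constraints <A, W> = b on the system matrix (the matrix layout and NumPy are my assumptions for illustration, not anything specific to CSDP's input format):

```python
import numpy as np

# Hypothetical sketch: the constraints e_i . e_j = delta_ij as
# trace constraints  trace(A W) = b  on the (d+1) x (d+1) system
# matrix W (rows/columns: e_1 .. e_d, then the extra unknown x).
d = 3
n = d + 1  # e_1 .. e_d plus the appended unknown x

constraints = []
for i in range(d):
    for j in range(i, d):
        A = np.zeros((n, n))
        if i == j:
            A[i, i] = 1.0            # picks out e_i . e_i
        else:
            A[i, j] = A[j, i] = 0.5  # symmetrized: picks out e_i . e_j
        b = 1.0 if i == j else 0.0   # Kronecker delta
        constraints.append((A, b))

print(len(constraints))              # 6 constraints for d = 3
```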

For example, for the intersection of balls, the constraint ||x-c|| <= r is rephrased as  ||x - sum(c_i e_i)||^2 <= r^2, or

            x.x - 2 sum(c_i x.e_i) <= r^2 - sum_i c_i^2,

where the RHS is known and all the terms on the left are elements of the system matrix (the coefficients ci are KNOWN, remember).
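A quick numerical sanity check of that algebra, with made-up values for x, c and r:

```python
import numpy as np

# Check (toy values): ||x - c||^2 <= r^2 is the same inequality as
#   x.x - 2 sum_i c_i (x.e_i) <= r^2 - sum_i c_i^2,
# since with e_i the standard basis, x.e_i is the i-th coordinate of x.
x = np.array([0.3, -1.2, 0.5])
c = np.array([1.0, 0.0, -0.5])
r = 2.0

lhs_original  = np.dot(x - c, x - c)
lhs_rewritten = np.dot(x, x) - 2 * np.dot(c, x)  # here x.e_i = x[i]
rhs_rewritten = r**2 - np.dot(c, c)

# Both inequalities differ by the same constant on each side:
print(np.isclose(lhs_original - r**2, lhs_rewritten - rhs_rewritten))  # True
```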

Now I'm trying to solve the generalization of the (very old!) problem of finding a circle (or sphere, etc.) of minimal radius which contains each of a given family of spheres.  For points (spheres of radius 0) Welzl has an elegant randomized solution which runs in expected O(n) time and is VERY intuitive (blow up a balloon, then gradually shrink it); I don't think quadratic programming has much to say about it.  For REAL spheres (not points), quadratic programming can certainly solve it (it is a convex problem), because the condition

    B(c_i, r_i) \subset B(x, rho)

is equivalent to

    ||x - c_i|| + r_i <= rho,

which quadraticizes to

(*)    ||x-c_i||^2 <= (rho - r_i)^2 for all i,    and rho >= max_i r_i (to keep rho from being negative).

We minimize rho over these constraints.
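Again a toy check (made-up data) that the linear containment test and the quadraticized form (*) agree once rho >= r_i:

```python
import numpy as np

# Toy instance: for rho >= r_i, the containment test
#   ||x - c_i|| + r_i <= rho
# matches the quadraticized form
#   ||x - c_i||^2 <= (rho - r_i)^2.
x = np.array([0.0, 0.0])
c = np.array([1.0, 1.0])
r_i, rho = 0.5, 2.5

dist = np.linalg.norm(x - c)
linear_form    = dist + r_i <= rho
quadratic_form = dist**2 <= (rho - r_i)**2 and rho >= r_i

print(linear_form == quadratic_form)  # True
```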

But I'm having difficulties turning this into an SDP problem.  My first thought was to treat the e_i as unknowns again, which handles the c_i; introduce an unknown x to minimize over; but how to handle rho?  There's a quadratic term in rho, but there are also linear terms.  I thought of introducing a second VECTOR, call it "r", and stipulate that r.e_i = r.e_j for all i, j (that only involves d-1 constraints).  The r_i*rho term which appears in (*) can then be taken to be r_i * (r.e1), HOPING that the vector r doesn't interact with x.

The disadvantage of these methods is that we increase the dimension of the problem:  one for x, and one for r.  The system matrix looks like

         e1    e2   ...    ed     x     r
------------------------------------------------
e1  |     1     0   ...     0   x_1   r_1
e2  |     0     1   ...     0   x_2   r_2
... |   ...   ...   ...   ...   ...   ...
ed  |     0     0   ...     1   x_d   r_d
x   |   x_1   x_2   ...   x_d   x.x   x.r
r   |   r_1   r_2   ...   r_d   r.x   r.r
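For what it's worth, here is a toy construction of that system matrix from concrete vectors (values made up), just to confirm the block pattern:

```python
import numpy as np

# Build the (d+2) x (d+2) system matrix as a Gram matrix of the
# rows e_1 .. e_d, x, r (toy values, r chosen with equal coordinates
# so that r.e_1 = ... = r.e_d holds by construction).
d = 3
E = np.eye(d)                    # rows are e_1 .. e_d
x = np.array([0.3, -1.2, 0.5])
r = np.full(d, 0.7)

P = np.vstack([E, x, r])         # stack all d + 2 vectors as rows
W = P @ P.T                      # the system (Gram) matrix

print(np.allclose(W[:d, :d], np.eye(d)))  # e_i . e_j = delta_ij
print(np.allclose(W[d, :d], x))           # x . e_i = x_i
print(np.allclose(W[d+1, :d], r))         # r . e_i = r_i, all equal
```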


As part of the constraints we ensure that r_1 = r_2 = ... = r_d.  But the x.x, x.r and r.r terms are
problematic, and in fact we DON'T get the optimal solution, but one too large.  (And that puzzles me.)

This trick works well in the intersection-of-balls problem: perfectly in the convex case (no complements of balls involved); and even when complements of balls are included and the solution strays into that extra dimension, it does so only very rarely (probably because the solution sets are 'thin', i.e. nowhere-dense, though there's more to it than that).  A small perturbation of the centers of the balls fixes the problem, thanks to that thinness.

Has anyone seen techniques like these used elsewhere?  And can anyone phrase the smallest-enclosing-sphere problem in terms of SDP?
If you want to see my code, I've made it available on my website:  go to <http://imperator.usc.edu/~bruck/sdp>.

Best wishes,

--Ron Bruck

 
