# [Ipopt] IPOpt for l1 optimization?

Thomas Vacek vacek008 at umn.edu
Thu Apr 12 16:22:19 EDT 2012

```
Both codes seem similar from a user's perspective, and I think either
will work.  For YALL1, it looks like you'll need to reformulate the
problem using the standard trick of separating some variables into
positive and negative parts so that the entire optimization variable
satisfies the nonnegativity condition, perhaps like this:
min ||y||_1 s.t. y = Cx, z = -Ax, z >=0

becomes

min || [y+, y-, x+, x-, z] ||_w,1
s.t.
[ I  -I   C  -C  0 ]
[ 0   0  -A   A  I ]  [y+, y-, x+, x-, z]' = [0, 0]'
[y+, y-, x+, x-, z] >= 0

where weight w is 1 for indices of y+ and y-, and zero elsewhere.

For SPGL1, you could implement your own gauge function
k(x) = { ||Cx||_1 if Ax <= 0, infinity otherwise. }

The dimension of this problem would be a few times smaller than the YALL1
formulation above, but perhaps there are other tricks for YALL1.

To keep this list-related, if you do any of these, please compare the
result with IPOPT and let us know...

Tom

On 04/12/2012 02:12 PM, Paul van Hoven wrote:
> Thank you, Thomas. I found this here (MATLAB only):
>
> http://yall1.blogs.rice.edu/
>
>
>
> On 12 April 2012 16:15, Thomas Vacek <vacek008 at umn.edu> wrote:
>> Perhaps such as http://www.cs.ubc.ca/labs/scl/spgl1/
>>
>>
>>
>> On 04/12/2012 02:44 AM, Jonathan Hogg wrote:
>>> It may also be worth looking at some of the algorithms used in compressed
>>> sensing. I'm told these algorithms don't have any strong complexity results
>>> (many are just as poor in the worst case), but they do exhibit good practical
>>> performance on many L1 optimization problems.
>>>
>>> Jonathan.
>>>
>>> On 11/04/12 18:51, Paul van Hoven wrote:
>>>> Thank you for the answer Peter. Can you recommend some sources on this
>>>> topic of transformation?
>>>>
>>>> On 11 April 2012 18:33, Peter Carbonetto <pcarbo at uchicago.edu> wrote:
>>>>> Is there an absolute value in that objective function you are
>>>>> minimizing? If so, then the answer is no, because the objective is
>>>>> non-smooth (its derivative is undefined wherever the argument of an
>>>>> absolute value is zero). But you can convert this to an equivalent
>>>>> smooth optimization problem with additional inequality constraints.
>>>>> There is quite a bit of literature on this topic.
>>>>>
>>>>> Peter Carbonetto, Ph.D.
>>>>> Postdoctoral Fellow
>>>>> Dept. of Human Genetics
>>>>> University of Chicago
>>>>>
>>>>>
>>>>> On Wed, 11 Apr 2012, Paul van Hoven wrote:
>>>>>
>>>>>> I've got the following problem:
>>>>>>
>>>>>> min_x sum_{i=1}^N |<x,c_i>|
>>>>>> s.t. Ax < 0
>>>>>>
>>>>>> <x,c_i> denotes the standard scalar product of x and c_i.
>>>>>>
>>>>>> Is this a problem that can be solved appropriately with IPOpt?
>>>>>> _______________________________________________
>>>>>> Ipopt mailing list
>>>>>> Ipopt at list.coin-or.org
>>>>>> http://list.coin-or.org/mailman/listinfo/ipopt
>>>>>>

```
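Tom's split-variable LP can be sanity-checked numerically. Below is a minimal sketch using NumPy and SciPy's `linprog` (an assumption on my part; the thread itself discusses MATLAB tools), with the signs arranged so that y = Cx and z = -Ax as in the first formulation. Since x = 0 is trivially optimal for the non-strict constraint Ax <= 0, the sketch adds a purely illustrative pin x[0] = 1 that is not part of the original problem:

```python
# Toy check of the split-variable reformulation, solved with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, p = 3, 4, 5                        # rows of A, dim of x, rows of C
A = rng.standard_normal((m, n))
A[:, 0] = -np.abs(A[:, 0]) - 0.1         # ensures x = e_1 is strictly feasible
C = rng.standard_normal((p, n))

# Stack the variable as v = [y+, y-, x+, x-, z] >= 0.
# Weight w: 1 on y+ and y-, 0 elsewhere, so the cost equals ||y||_1.
cost = np.concatenate([np.ones(2 * p), np.zeros(2 * n + m)])

# Equality rows:  (y+ - y-) - C(x+ - x-) = 0   <=>  y = Cx
#                 A(x+ - x-) + z         = 0   <=>  z = -Ax
A_eq = np.block([
    [np.eye(p), -np.eye(p), -C, C, np.zeros((p, m))],
    [np.zeros((m, 2 * p)), A, -A, np.eye(m)],
])
b_eq = np.zeros(p + m)

# Illustrative pin x[0] = 1 (not in the thread): x+[0] - x-[0] = 1.
pin = np.zeros(2 * p + 2 * n + m)
pin[2 * p], pin[2 * p + n] = 1.0, -1.0
A_eq = np.vstack([A_eq, pin])
b_eq = np.append(b_eq, 1.0)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x[2 * p:2 * p + n] - res.x[2 * p + n:2 * p + 2 * n]
print(res.status, abs(res.fun - np.linalg.norm(C @ x, 1)) < 1e-6)
```

At any optimal point min(y+_i, y-_i) = 0 (otherwise shrinking both would reduce the cost), so the weighted objective equals ||y||_1 = ||Cx||_1; the final print compares the LP objective against ||Cx||_1 recovered from the split variables.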

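
Peter's point about converting the non-smooth objective into a smooth problem is the standard epigraph trick: introduce t in R^N with t_i >= <x,c_i> and t_i >= -<x,c_i>, and minimize sum_i t_i subject to Ax <= 0 (the non-strict constraint that Tom's reformulation also uses). The transformed problem is linear, hence smooth, so a solver like IPOPT applies directly. A sketch, solved here with SciPy's `linprog` as a stand-in for IPOPT and with the same illustrative pin x[0] = 1 (both assumptions, not from the thread):

```python
# Epigraph reformulation: min sum(t)  s.t.  t >= Cx, t >= -Cx, Ax <= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, p = 3, 4, 5                        # rows of A, dim of x, number of c_i
A = rng.standard_normal((m, n))
A[:, 0] = -np.abs(A[:, 0]) - 0.1         # ensures x = e_1 is strictly feasible
C = rng.standard_normal((p, n))          # rows of C are the c_i

# Variable v = [x, t]; minimize sum(t).
cost = np.concatenate([np.zeros(n), np.ones(p)])

# t >= Cx and t >= -Cx together encode t_i >= |<x, c_i>|; plus Ax <= 0.
A_ub = np.block([
    [C, -np.eye(p)],
    [-C, -np.eye(p)],
    [A, np.zeros((m, p))],
])
b_ub = np.zeros(2 * p + m)

# Illustrative pin x[0] = 1 (not in the thread) to rule out x = 0.
A_eq = np.zeros((1, n + p))
A_eq[0, 0] = 1.0
b_eq = np.array([1.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(None, None))       # x and t are free
x, t = res.x[:n], res.x[n:]
print(res.status, abs(res.fun - np.linalg.norm(C @ x, 1)) < 1e-6)
```

At the optimum each t_i is pressed down onto max((Cx)_i, -(Cx)_i) = |<x,c_i>|, so the objective value equals the original sum of absolute values.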