[CHiPPS] Questions About Blis And example of Alps

Ted Ralphs ted at lehigh.edu
Fri Jan 27 18:25:44 EST 2012


Just to follow up on this, Abc is not very well maintained and is
there as an example of how to use Alps. I would not be surprised if it
has bugs.

As for why Blis solves the knapsack problem you referred to in one
node, whereas the knapsack application took many nodes, this is not
surprising. The knapsack application uses a very naive bounding
procedure and has no sophisticated methodology, whereas Blis uses
cutting planes and other sophisticated procedures to do the bounding.
It would not be unusual for a knapsack problem to take many nodes with
the naive approach, but only one with the more sophisticated approach.
Hope this helps clarify things.
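
As a toy illustration of the point above (this is not Alps or Blis code, just a hypothetical sketch): a naive bound for a knapsack node is the LP-relaxation (Dantzig) bound, which can be far above the integer optimum and so forces branching, while a single cover cut can close the gap at the root.

```python
# Toy knapsack: max 10*x1 + 10*x2  s.t.  6*x1 + 6*x2 <= 10,  x binary.
# Integer optimum is 10 (only one item fits).

def dantzig_lp_bound(values, weights, capacity):
    """Greedy fractional (Dantzig) bound for the knapsack LP relaxation:
    take items in decreasing value/weight ratio, splitting the last one."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    bound, remaining = 0.0, capacity
    for v, w in items:
        take = min(1.0, remaining / w)  # fractional amount allowed in the LP
        bound += v * take
        remaining -= w * take
        if remaining <= 0:
            break
    return bound

values, weights, capacity = [10, 10], [6, 6], 10

# Naive LP bound: 10 + 10*(4/6) = 16.67, well above the true optimum of 10,
# so a solver relying on it must branch to prove optimality.
naive = dantzig_lp_bound(values, weights, capacity)

# Cover cut: items {1, 2} together exceed the capacity (6 + 6 > 10),
# so x1 + x2 <= 1 is valid. Under that cut the LP optimum is just the
# best single item, which matches the integer optimum -- the root node
# already proves optimality and no branching is needed.
with_cut = max(values)

print(naive)     # 16.666...
print(with_cut)  # 10
```

The same effect, on a larger scale, is why a cutting-plane-based solver like Blis can finish a knapsack instance at the root node while a naive bounding procedure explores hundreds of nodes.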

Cheers,

Ted

2011/12/4 Yan Xu <Yan.Xu at sas.com>:
> If you have the TotalView debugger, you can try running it under the
> debugger. It is hard to tell whether it is a system error or a bug in ALPS.
>
>
>
> From: chipps-bounces at list.coin-or.org
> [mailto:chipps-bounces at list.coin-or.org] On Behalf Of ???
> Sent: November 29, 2011 8:47 AM
> To: chipps at list.coin-or.org
> Cc: geogrid at gmail.com
> Subject: [CHiPPS] Questions About Blis And example of Alps
>
>
>
> Dear Sir,
>
> I am now trying to use Alps to solve IP problems.
>
> I have two questions.
>
> First, I solved the knap3(Alps/Alps/Abc/data/knap3.mps) problem using Abc in
> parallel.
>
> It generates a good log file: I can see the work each process has done,
> and that a few hundred nodes are searched.
>
> But when I use Blis to solve the same knap3 problem, the output shows that
> only one node was searched and the depth of the search tree is zero.
>
> Why is this?
>
> Second, I used Abc to solve a MIP problem (MPS file: 72 MB, 75861 rows,
> 115173 columns, 1417123 elements) in parallel on 3 machines (2 and 4 were
> also tried).
>
> But the program fails after roughly 2000 seconds with the output:
>
> Fatal error in MPI_Test: Other MPI error, error stack:
>
> 0: MPI_Test(153)..................: MPI_Test(request=0x7fffe1cdbdb4,
> flag=0x7fffe1cdbdb0, status=0x7fffe1cdbd50) failed
>
> 0: MPIDI_CH3I_Progress(150).......:
>
> 0: MPID_nem_mpich2_test_recv(800).:
>
> 0: MPID_nem_tcp_connpoll(1720)....:
>
> 0: state_commrdy_handler(1556)....:
>
> 0: MPID_nem_tcp_recv_handler(1446): socket closed
>
> rank 0 in job 5  vm3-1_33710   caused collective abort of all ranks
>
>   exit status of rank 0: return code 1
>
> Could you tell me why?
>
>
>
> Thanks a lot!
>
>
> _______________________________________________
> CHiPPS mailing list
> CHiPPS at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/chipps
>



-- 
Dr. Ted Ralphs
Associate Professor, Lehigh University
(610) 628-1280
ted 'at' lehigh 'dot' edu
coral.ie.lehigh.edu/~ted


