RE: [Beowulf] many cores and ib

2008-05-06 Thread Mark Hahn
messages into one network message. For application cases, sometimes it helps with performance and sometimes it does not; OSU has shown both. When would a program deliberately send such messages? Isn't it something that the program should avoid in the first place? Does the MPI optimization app
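For readers following along, a minimal sketch in C of the traffic pattern under discussion: many tiny back-to-back sends to the same peer, the kind of stream an MPI/NIC stack could coalesce into a single network message. The message count, tag, and peer ranks below are illustrative, not taken from the thread:

    /* Many small back-to-back sends to one peer; a coalescing MPI/NIC
       stack could batch these into fewer wire-level messages.
       Count, tag, and ranks are illustrative only. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, i, payload = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            for (i = 0; i < 1000; i++)   /* 1000 four-byte messages */
                MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            for (i = 0; i < 1000; i++)
                MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }

Whether a pattern like this is a coalescing win or an application bug to aggregate by hand is exactly the question raised above.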

RE: [Beowulf] many cores and ib

2008-05-06 Thread Gilad Shainer
Patrick Geoffray wrote: > > It is the same benchmark that QLogic were and are using for MPI message rate, and I guess you know that better than me, don't you? > > I want to make sure that when one does a comparison, he/she will be using the same benchmark/output to compare. > It i
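For readers following the benchmark dispute, a rough C sketch of what an MPI message-rate microbenchmark does: post a window of non-blocking small sends, wait, repeat, and divide messages by elapsed time. This is not the OSU code itself; WINDOW, ITERS, and the 8-byte payload are arbitrary choices here:

    /* Rough message-rate loop in the spirit of a message-rate
       microbenchmark; NOT the OSU benchmark itself. WINDOW, ITERS,
       and the payload size are arbitrary. */
    #include <mpi.h>
    #include <stdio.h>

    #define WINDOW 64
    #define ITERS  10000

    int main(int argc, char **argv)
    {
        char buf[WINDOW][8];          /* one small buffer per request */
        MPI_Request req[WINDOW];
        int rank, i, j;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                for (j = 0; j < WINDOW; j++)
                    MPI_Isend(buf[j], 8, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &req[j]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            } else if (rank == 1) {
                for (j = 0; j < WINDOW; j++)
                    MPI_Irecv(buf[j], 8, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &req[j]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            }
        }
        t1 = MPI_Wtime();
        if (rank == 0)
            printf("%.0f messages/sec\n",
                   (double)ITERS * WINDOW / (t1 - t0));
        MPI_Finalize();
        return 0;
    }

Different window sizes and payloads can swing the reported messages-per-second figure considerably, which is why the thread keeps insisting on comparing the same benchmark and the same output.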

Re: [Beowulf] Purdue Supercomputer

2008-05-06 Thread Jim Lux
At 03:20 PM 5/6/2008, Mark Hahn wrote: We have built out a beefy install infrastructure to support a lot of simultaneous installs... I'm curious to hear about the infrastructure. btw: http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=207501882 Interesting... 1000 computers, ass

Re: [Beowulf] many cores and ib

2008-05-06 Thread akshay gulati
How can I start with Beowulf and HPC? Any ideas? On Tue, May 6, 2008 at 2:02 AM, Greg Lindahl <[EMAIL PROTECTED]> wrote: > On Mon, May 05, 2008 at 10:01:42AM -0700, Gilad Shainer wrote: > > According to OSU benchmarks, InfiniHost III Ex provides >20M MPI messages per second. > And we should all

Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-05-06 Thread Ricardo Reis
On Sun, 4 May 2008, Mikhail Kuzminsky wrote: "Next generation Tesla", but I don't know when. Or use AMD FireStream 9170 instead :-) I've read somewhere that double precision performance from AMD wasn't very good and their programming model goes more towards assembly... Besides, AMD/ATI stil
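For context on the double-precision question, a minimal CUDA sketch of a DP kernel. This assumes a device of compute capability 1.3 or later (nvcc -arch=sm_13), which is exactly what the first-generation Tesla parts discussed here lacked; the array size is illustrative:

    /* Minimal double-precision CUDA kernel (a DAXPY); requires a GPU
       with compute capability >= 1.3. Array size is illustrative. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void daxpy(int n, double a, const double *x, double *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];   /* executes in hardware DP units */
    }

    int main(void)
    {
        const int n = 1 << 20;
        double *x, *y;
        cudaMalloc((void **)&x, n * sizeof(double));
        cudaMalloc((void **)&y, n * sizeof(double));
        daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
        cudaDeviceSynchronize();
        printf("done\n");
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

On earlier parts the same kernel would either fail to compile for the target architecture or silently demote doubles to floats, which is what the "where's my double floating point?" subject line is about.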

Re: [Beowulf] Purdue Supercomputer

2008-05-06 Thread Matt Lawrence
That's going to be a nifty stunt, proving that such an install not only can be done, it has been done. Wish I was involved. Other suggestions: make sure you have somebody responsible for the power on site; blowing breakers and not being able to fix the problems would be embarrassing. Also, ma

Re: [Beowulf] Purdue Supercomputer

2008-05-06 Thread Mark Hahn
We have built out a beefy install infrastructure to support a lot of simultaneous installs... I'm curious to hear about the infrastructure. btw: http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=207501882

Re: [Beowulf] many cores and ib

2008-05-06 Thread Patrick Geoffray
Gilad Shainer wrote: It is the same benchmark that QLogic were and are using for MPI message rate, and I guess you know that better than me, don't you? I want to make sure that when one does a comparison, he/she will be using the same benchmark/output to compare. It is not the benchmark, it's the