Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Patrick Geoffray
Hi Eric, Eric W. Biederman wrote: On the other hand it is my distinct impression that the reason there is no opportunity cost from polling is that the applications have not been tuned as well as they could be. In all other domains of programming synchronous receives are seriously looked down upon.

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
Stuart Midgley <[EMAIL PROTECTED]> writes: >> >> It does apply, however, many parallel algorithms used today are >> naturally blocking. Why? Well, complicating your algorithm to overlap >> communication and computation rarely gives a benefit in practice. So >> anyone who's tried has likely become

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
"Vincent Diepeveen" <[EMAIL PROTECTED]> writes: > You're assuming that you run 1 thread at a 2 to 4 core node or so? Not at all. I am assuming 1 thread per core, is typical. So if you have 4 cores and 4 threads, when one of them is asleep. Most likely when the interrupt arrives your previous c

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
Greg Lindahl <[EMAIL PROTECTED]> writes: > On Thu, Sep 07, 2006 at 01:15:01PM -0600, Eric W. Biederman wrote: > >> I agree. Taking an interrupt per message is clearly a loss. > > Ah. So we're mostly in violent agreement! That is always nice :) >> Polling is a reasonable approach for the short d

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Vincent Diepeveen
You're assuming that you run 1 thread at a 2 to 4 core node or so? This is not the reality. See the supercomputer report for Europe. Across machines in production, more than 50% of the CPUs are in use on average, and that goes up to 70% at any given hour, 24 hours a day, 365 days a ye

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
"Vincent Diepeveen" <[EMAIL PROTECTED]> writes: > How about the latency to wake up that thread again. runqueue latency in linux > is > 10+ ms? That assumes you have a 100Mhz clock (default is currently 250Mhz) and you have something else running. If you have something else running yielding is e

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Stuart Midgley
It does apply, however, many parallel algorithms used today are naturally blocking. Why? Well, complicating your algorithm to overlap communication and computation rarely gives a benefit in practice. So anyone who's tried has likely become discouraged, and most people haven't even tried. -- g
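The overlap being debated here can be shown in toy form: start the transfer, do interior work that needs no remote data, and only wait when you are about to touch the received buffer. A minimal Python sketch of that shape, with a background thread standing in for a nonblocking MPI_Isend/MPI_Irecv pair (`transfer` and `overlapped_step` are illustrative names, not real MPI calls):

```python
import threading
import time

def transfer(buf, done):
    # Simulated communication: in a real MPI code this would be a
    # nonblocking MPI_Irecv; here we just sleep and fill the buffer.
    time.sleep(0.05)
    buf.extend(range(4))
    done.set()

def overlapped_step():
    buf, done = [], threading.Event()
    t = threading.Thread(target=transfer, args=(buf, done))
    t.start()                                   # start the "nonblocking" transfer
    partial = sum(i * i for i in range(1000))   # interior work needing no remote data
    done.wait()                                 # the equivalent of MPI_Wait
    t.join()
    return partial + sum(buf)                   # only now touch the received data

print(overlapped_step())
```

In a real code the same shape is MPI_Irecv, compute, MPI_Wait; the point of the thread's skepticism is that the restructuring only pays off when transfer time is comparable to the interior compute time.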

Re: [Beowulf] detection and diagnosis of PCI bus saturation

2006-09-08 Thread Roy L Butler
Hernando, H.Vidal, Jr. wrote: Hello. Does anybody here have any thoughts, experiences, or references to diagnostic methods when looking into potential PCI bus saturation? If one is moving data across the bus, either under CPU copies or under DMA/bus master, are there canonical methods to detec

[Beowulf] Gigabit PCI-E NIC

2006-09-08 Thread Marcelino Mata
I have searched the archive and several sites but I have not come across a definitive answer. We are going to build a new cluster out of standard HP desktop hardware. The cluster will serve a dual purpose: a compute cluster 80% of the time, with the remaining time used as PC training compute

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Greg Lindahl
On Thu, Sep 07, 2006 at 01:15:01PM -0600, Eric W. Biederman wrote: > I agree. Taking an interrupt per message is clearly a loss. Ah. So we're mostly in violent agreement! > Polling is a reasonable approach for the short durations, say > <= 1 millisecond, but it is really weird to explain that yo
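The trade-off in this exchange, busy-polling versus blocking and taking a scheduler wakeup, can be felt with a toy measurement. This sketch uses Python threads rather than a real NIC, so the absolute numbers mean little, but it shows the two strategies the thread is comparing (`wait_poll`, `wait_block`, and `measure` are made-up names for illustration):

```python
import threading
import time

def wait_poll(event):
    # Busy-wait (polling): burns a core, but notices the event quickly.
    t0 = time.perf_counter()
    while not event.is_set():
        pass
    return time.perf_counter() - t0

def wait_block(event):
    # Blocking wait: frees the core, but the wakeup goes through the scheduler.
    t0 = time.perf_counter()
    event.wait()
    return time.perf_counter() - t0

def measure(waiter, delay=0.01):
    # Arm a timer that fires after `delay`, then time how long the chosen
    # wait strategy takes to observe it.
    ev = threading.Event()
    threading.Timer(delay, ev.set).start()
    return waiter(ev)

print("poll:  %.4f s" % measure(wait_poll))
print("block: %.4f s" % measure(wait_block))
```

The opportunity cost the thread argues about is exactly the core that `wait_poll` burns: polling only makes sense while the expected wait is shorter than the cost of giving the core up and getting it back.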

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Vincent Diepeveen
- Original Message - From: "Eric W. Biederman" <[EMAIL PROTECTED]> To: "Ashley Pittman" <[EMAIL PROTECTED]> Cc: "'Bogdan Costescu'" <[EMAIL PROTECTED]>; "'Beowulf List'" ; "Daniel Kidger" <[EMAIL PROTECTED]>; "'Mark Hahn'" <[EMAIL PROTECTED]> Sent: Thursday, September 07, 2006 7:54 P

RE: [Beowulf] Create cluster : questions

2006-09-08 Thread Bernard Li
For those who are interested in this discussion, I'd like to point you to the openSUSE Build Service (in alpha stage): http://build.opensuse.org They have a web GUI/CLI for performing automatic builds of packages and they not only support SUSE Linux but also other distributions like Fedora, Mandr

[Beowulf] Killing many user jobs on many compute nodes

2006-09-08 Thread Daniel.G.Roberts
Hello All Anyone have a method/script that they would be willing to pass along that could be used to terminate user processes out on the compute nodes that were never properly cleaned up? Thanks! Dan ___ Beowulf mailing list, Beowulf@b
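A common lightweight answer to Dan's question is to fan a pkill out over the nodes with ssh (or pdsh). A hedged Python sketch that just builds the per-node command strings; the node and user names are made up for illustration, and in practice you would exclude system accounts and users whose jobs are still legitimately running:

```python
import shlex

def kill_commands(nodes, users, signal="TERM"):
    """Build one ssh command per node that kills all processes owned
    by the given users (hypothetical helper, not a real tool)."""
    cmds = []
    for node in nodes:
        # pkill -u matches by process owner; -<SIG> selects the signal.
        remote = "pkill -%s -u %s" % (signal, ",".join(users))
        cmds.append("ssh %s %s" % (node, shlex.quote(remote)))
    return cmds

for cmd in kill_commands(["node01", "node02"], ["alice", "bob"]):
    print(cmd)
```

Running the generated commands (first with -TERM, then -KILL for stragglers) is the usual two-pass cleanup; a batch-system epilogue script doing the same thing per job avoids the problem recurring.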

Re: [Beowulf] Create cluster : questions

2006-09-08 Thread Maxence Dunnewind
2006/9/7, Ed Hill <[EMAIL PROTECTED]>: On Mon, 4 Sep 2006 21:23:03 +0200 "Maxence Dunnewind" <[EMAIL PROTECTED]> wrote: > Hi. > I'm a user of the Ubuntu Linux OS, and also a packager for this OS. > As you may know, packaging can take a lot of time, mainly > during the building process. > I would create

Re: [Beowulf] detection and diagnosis of PCI bus saturation

2006-09-08 Thread Jim Lux
At 09:18 AM 9/8/2006, H.Vidal, Jr. wrote: Hello. Does anybody here have any thoughts, experiences, or references to diagnostic methods when looking into potential PCI bus saturation? If one is moving data across the bus, either under CPU copies or under DMA/bus master, are there canonical metho

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-08 Thread stephen mulcahy
Hi, Thanks to everyone who responded to my queries. I've tried to summarise the responses below for others' reference. Hope this is useful. For BIOS memory settings, you may want to disable "Node Memory Interleave". It may decrease memory bandwidth and noticeably increase memory latency (this is supp

[Beowulf] Slackware Package for openmpi 1.1.1 and mpich2 1.0.4p1

2006-09-08 Thread Marcelo Souza
In case it interests anyone, I've made Slackware packages (i486) for openmpi 1.1.1 and mpich2 1.0.4p1 TGZ http://www.cebacad.net/slackware/openmpi-1.1.1-i486-1goa.tgz http://www.cebacad.net/slackware/mpich2-1.0.4p1-i486-1goa.tgz signed with my pgp key http://www.cebacad.net/slackware/openmpi-1.1.1-i486

RE: [Beowulf] NCSU and FORTRAN

2006-09-08 Thread Ivan Silvestre Paganini Marin
On Thu, 2006-09-07 at 16:55 -0400, Mark Hahn wrote: > > I've had grad students and profs in the past get good results using > > Matlab, intel and the intel MKL. > it's worth making explicit again: grad students and profs > are not eligible for the "non-commercial" free Intel license. Agai

Re: [Beowulf] Create cluster : questions

2006-09-08 Thread Rayson Ho
What you are looking for is not an HPC cluster, but rather a compute/compile farm... Did you look at SGE (GridEngine) before? SGE has 2 parallel make implementations: SGE's own qmake and distmake: http://distmake.sourceforge.net/pmwiki/pmwiki.php?n=Main.SGEIntegration For other similar projects

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
Ashley Pittman <[EMAIL PROTECTED]> writes: > I think Daniel was talking about supercomputer networks and not > ethernet, on the first QsNet2 machine I have to hand latency without > interrupts is 2.72uSec, using interrupts it is 7.20uSec. One > fundamental difference between these two measurement

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-08 Thread Eric W. Biederman
Greg Lindahl <[EMAIL PROTECTED]> writes: > On Wed, Sep 06, 2006 at 11:10:14AM -0600, Eric W. Biederman wrote: > >> There is fundamentally more work to do when you take an interrupt because >> you need to take a context switch. But cost of a context switch is in >> the order of microseconds, so wh

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-08 Thread Eric W. Biederman
stephen mulcahy <[EMAIL PROTECTED]> writes: > Hi Bruce, > > Do you have any idea what the performance impact from enabling scrubbing > is on your systems? did you do any before/after benchmarking? I don't know what the performance impact of scrubbing is for Bruce. I did some looking a long time

Re: [Beowulf] Create cluster : questions

2006-09-08 Thread Maxence Dunnewind
2006/9/7, Michael Will <[EMAIL PROTECTED]>: This would be if you had a dedicated beowulf-style cluster rather than a cycle-scavenging style ([EMAIL PROTECTED]). It would be a very ambitious project to do package compilation that way. Michael I'm ambitious ;) but I have to start looking for a de

[Beowulf] Looking for model numbers of a 10/100/1000 switch that has external phy's

2006-09-08 Thread Bill Ansell
Hi all, Might sound like an odd request, but I am looking for a reasonably priced (I will even buy used if I know model numbers) 10/100/1000 switch that uses external PHYs. I need to tap off the RD valid lines to monitor received packets on an oscilloscope (timing, jitter, etc.). I have a cheap

[Beowulf] detection and diagnosis of PCI bus saturation

2006-09-08 Thread H.Vidal, Jr.
Hello. Does anybody here have any thoughts, experiences, or references to diagnostic methods when looking into potential PCI bus saturation? If one is moving data across the bus, either under CPU copies or under DMA/bus master, are there canonical methods to detect, perhaps in bridge chip regist
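One crude first check for the saturation H.Vidal asks about is to time a large transfer yourself and compare the achieved rate against the bus's theoretical peak (about 133 MB/s for classic 32-bit/33 MHz PCI). The sketch below only times an in-memory copy in Python, so it measures memory rather than PCI bandwidth; for the real bus you would time your actual DMA or device I/O path the same way (`copy_bandwidth` is an illustrative name, not a standard tool):

```python
import time

def copy_bandwidth(nbytes=64 * 1024 * 1024):
    """Time one pass over nbytes of data and return the achieved rate
    in MB/s. Comparing this against a bus's theoretical peak is a
    rough sanity check, not a substitute for bridge-chip counters."""
    src = bytearray(nbytes)
    t0 = time.perf_counter()
    dst = bytes(src)                 # one full copy of the buffer
    dt = time.perf_counter() - t0
    assert len(dst) == nbytes        # keep the copy from being optimized away
    return nbytes / dt / 1e6         # MB/s

print("%.0f MB/s" % copy_bandwidth())
```

If the achieved rate of a real device transfer sits persistently near the bus's peak while latency-sensitive traffic suffers, saturation is a plausible diagnosis; bridge-chip performance registers, where available, give the authoritative answer.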

RE: [Beowulf] NCSU and FORTRAN

2006-09-08 Thread Mark Hahn
I've had grad students and profs in the past get good results using Matlab, intel and the intel MKL. it's worth making explicit again: grad students and profs are not eligible for the "non-commercial" free Intel license. Again, what? The intel fortran free non-commercial unsupported CANNOT be