Re: [Beowulf] onboard Gb lan: any opinion, suggestion or,impression?

2006-11-14 Thread SIM DOG
I agree, the current generation boards all seem to have decent GigE. Nvidia runs its own NForce. Broadcom and Marvell seem popular too. Intel mobos run Intel (strangely enough), as do Supermicro. I think Tyan uses Intel for some and 3Com for others. My only word of caution is if you go bargain bin

[Beowulf] Re: Sun Project "blackbox"

2006-10-19 Thread SIM DOG
G'day all I remember hearing about the Australian army developing something similar with DEC (who?) back in the early 1980's. Idea was it had to be robust enough for a sled drop out of the back of a Herc! Not sure how they handled the online storage. I doubt if your average 80's HDD could cope wit

Re: [Beowulf] Re: gamers: evil or just useless? ;)

2006-08-09 Thread SIM DOG
G'day Kyle and all > So how is having a PPU any different from dual- or quad-core? Or do > the advantages lie in its specialized physics-handling abilities > [programming, instructions]? You're right about application of a PPU in a Beowulf. I just happen to run galaxy dynamics on my Beowulf. Mo

[Beowulf] While the knives are out... Wulf Keepers

2006-08-09 Thread SIM DOG
Mark started it so while we're asking loaded questions... =) I recently visited a large educational institution (that shall remain nameless) that hosts an excellent, world class, science research team. They also have a reasonably large Beowulf environment (over 100 dual nodes). Now maybe it was j

[Beowulf] Re: gamers: evil or just useless? ;)

2006-08-09 Thread SIM DOG
Evil *and* useless... ;) If I see one more nice looking bit of kit (CPU, mobo etc) tested under some gaming benchmark I'll spit! OK, there's a couple of Linux hardware sites around and some of the bigger sites occasionally run a Linux test but it's slim pickings. Slashdot used to be worth reading

[Beowulf] MS HPC... Oh dear...

2006-06-11 Thread SIM DOG
G'day all Sorry if this turns out to be a dupe post but MS has just released their HPC clustering kit. http://www.microsoft.com/windowsserver2003/ccs/overview.mspx While I've tried to approach this with an open mind... it didn't last long. I'll refer anyone to ClusterMonkey's article about wh

[Beowulf] Re: noob understanding

2006-05-20 Thread SIM DOG
G'day all A couple of points to add (being a 3DCG *and* Cluster monkey ;) Sorry RGB, but you shouldn't mention 'production quality' and POVray in the same sentence. While POVray is great for what it is (and what it costs!), it falls a long way short of what's expected for production. Ummm... by

[Beowulf] Re: coprocessor to do "physics calculations"

2006-05-05 Thread SIM DOG
Further to the discussion, AnandTech has a review of an ASUS card sporting this beastie... (US$300) http://www.anandtech.com/video/showdoc.aspx?i=2751 I can vaguely remember seeing some mention of AGEIA publishing the API. Just Newtonian gravity calcs would be just fine by me... then if only I

[Beowulf] Re: ClusterMonkey/Doug E. got slashdotted

2006-04-25 Thread SIM DOG
Like most things, Slashdot does not seem to be what it used to be. Whatever that was. -- Doug Slashdot used to be a source of distilled "interesting things" for me. These days it seems more for kiddies to complain about the latest troubles in WoW, mindless M$ bashing and 'if Linux is to

[Beowulf] More multiple things per node

2006-01-30 Thread SIM DOG
Great Minds I understand about multiple NICs per node (done that). I've got SMP nodes, how do I "bond" a NIC to a CPU in MPI 1.2x? Cheers Steve

[Beowulf] using two separate networks for different data streams

2006-01-26 Thread SIM DOG
G'day Ricardo Are you using MPI (1.2x)? If so, check out my Tips page: http://members.iinet.net.au/~steve_heaton/lss/login_fr.html Under Multiple NICs. In short you give each secondary interface its own hostname then modify the mpirun.args. Of course, your machines list also reflects this :)
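The multiple-NIC approach described above can be sketched roughly as follows. All hostnames, addresses, and file names here are made up for illustration; the general idea for MPICH 1.2.x is to give each secondary interface its own hostname, point the machines file at those names, and launch over that network.

```shell
# Hypothetical sketch of the MPICH 1.2.x multiple-NIC setup described above.
# Hostnames and addresses are invented for this example.

# /etc/hosts on every node: give the second interface its own name, e.g.
#   192.168.1.10   node01        # eth0, general-purpose network
#   192.168.2.10   node01-mpi    # eth1, dedicated MPI network
#   192.168.1.11   node02
#   192.168.2.11   node02-mpi

# Machines file listing the MPI-network names, so MPI traffic
# goes out the second interface:
cat > machines.mpi <<'EOF'
node01-mpi
node02-mpi
EOF

# Launch using that machines file (hypothetical application name):
mpirun -machinefile machines.mpi -np 2 ./my_mpi_app
```

The same idea generalizes: each distinct data stream (MPI, NFS, management) gets its own interface and hostname, and each service is configured to resolve the name bound to the network you want it to use.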