I agree, the current-generation boards all seem to have decent GigE.
Nvidia runs its own nForce. Broadcom and Marvell seem popular too.
Intel mobos run Intel NICs (strangely enough), as does Supermicro. I
think Tyan uses Intel for some and 3Com for others.
My only word of caution is if you go bargain bin
G'day all
I remember hearing about the Australian army developing something
similar with DEC (who?) back in the early 1980s. The idea was it had to
be robust enough for a sled drop out of the back of a Herc!
Not sure how they handled the online storage. I doubt your average
'80s HDD could cope with
G'day Kyle and all
> So how is having a PPU any different from dual- or quad-core? Or do
> the advantages lie in its specialized physics-handling abilities
> [programming, instructions]?
You're right about the application of a PPU in a Beowulf. I just happen to
run galaxy dynamics on my Beowulf. Mo
Mark started it so while we're asking loaded questions... =)
I recently visited a large educational institution (that shall remain
nameless) that hosts an excellent, world class, science research team.
They also have a reasonably large Beowulf environment (over 100 dual nodes).
Now maybe it was j
Evil *and* useless... ;)
If I see one more nice-looking bit of kit (CPU, mobo etc.) tested under
some gaming benchmark I'll spit! OK, there are a couple of Linux hardware
sites around and some of the bigger sites occasionally run a Linux test
but it's slim pickings.
Slashdot used to be worth reading
G'day all
Sorry if this turns out to be a dupe post, but MS has just released their
HPC clustering kit.
http://www.microsoft.com/windowsserver2003/ccs/overview.mspx
While I've tried to approach this with an open mind... it didn't last
long. I'll refer anyone to ClusterMonkey's article about wh
G'day all
A couple of points to add (being a 3DCG *and* Cluster monkey ;)
Sorry RGB, but you shouldn't mention 'production quality' and POVray in
the same sentence. While POVray is great for what it is (and what it
costs!), it falls a long way short of what's expected for production.
Ummm... by
Further to the discussion, AnandTech has a review of an ASUS card
sporting this beastie... (US$300)
http://www.anandtech.com/video/showdoc.aspx?i=2751
I can vaguely remember seeing some mention of AGEIA publishing the API.
Newtonian gravity calcs alone would be just fine by me... then if only I
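
(By way of illustration, the Newtonian gravity calc in question is the
classic O(N^2) direct-sum loop. A minimal sketch in C follows; the
identifiers, the softening term and the unit choices are illustrative
assumptions, not something from the thread:)

    #include <math.h>
    #include <stddef.h>

    /* One body's position or acceleration; illustrative layout only. */
    typedef struct { double x, y, z; } vec3;

    /* Direct-sum Newtonian accelerations:
     *   a_i = G * sum_{j != i} m_j * r_ij / |r_ij|^3
     * eps2 is a small softening term so close encounters don't blow up. */
    static void gravity_accel(size_t n, const vec3 *pos, const double *mass,
                              vec3 *acc, double G, double eps2)
    {
        for (size_t i = 0; i < n; i++) {
            double ax = 0.0, ay = 0.0, az = 0.0;
            for (size_t j = 0; j < n; j++) {
                if (j == i) continue;
                double dx = pos[j].x - pos[i].x;
                double dy = pos[j].y - pos[i].y;
                double dz = pos[j].z - pos[i].z;
                double r2 = dx * dx + dy * dy + dz * dz + eps2;
                double inv_r3 = 1.0 / (r2 * sqrt(r2));
                ax += G * mass[j] * dx * inv_r3;
                ay += G * mass[j] * dy * inv_r3;
                az += G * mass[j] * dz * inv_r3;
            }
            acc[i].x = ax; acc[i].y = ay; acc[i].z = az;
        }
    }

Each body's sum is independent of the others, which is why this kind of
workload parallelises so naturally across cluster nodes or, presumably,
a PPU's pipelines.
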
Like most things, Slashdot does not seem to be what it used to be.
Whatever that was.
--
Doug
Slashdot used to be a source of distilled "interesting things" for me.
These days it seems to be more kiddies complaining about the latest
troubles in WoW, mindless M$ bashing, and 'if Linux is to
Great Minds
I understand about multiple NICs per node (done that). I've got SMP
nodes; how do I "bond" a NIC to a CPU in MPI 1.2x?
Cheers
Steve
G'day Ricardo
Are you using MPI (1.2x)? If so, check out my Tips page:
http://members.iinet.net.au/~steve_heaton/lss/login_fr.html
Under Multiple NICs.
In short, you give each secondary interface its own hostname, then
modify the mpirun.args file. Of course, your machines list also reflects
this :)
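
(A rough illustration of that recipe; the hostnames and addresses below
are made-up examples, not taken from the Tips page:)

    # /etc/hosts on each node: one extra name per secondary interface
    192.168.1.10   node01        # eth0, the "normal" hostname
    192.168.2.10   node01-eth1   # eth1, secondary NIC
    192.168.1.11   node02
    192.168.2.11   node02-eth1

    # machines file handed to mpirun: list the interface MPI should use
    node01-eth1
    node02-eth1

mpirun should then resolve each entry to the secondary NIC's address,
so the MPI traffic rides on that interface.
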