On Thu, 18 Sep 2008, Mark Hahn wrote:
>> * Cluster people with significant constraints on space, power, or AC.
> just space, really.  blade systems used to be almost unique in offering
> high-efficiency power solutions, but I think most or all of that has
> become available in the commodity market now.  (that is, 80-90%
> efficient psu's in normal 1U servers).
> and remember, less space and approximately the same power means higher
> heat-density.  I've never seen a lot of fully populated blade enclosures
> in one spot (which is kinda the point), though it should be doable with
> rack-back heat exchangers.
> actually, are there any blade systems that skip air-cooling entirely?
> that would actually make sense - if you're going to go for bespoke power
> because of potentially greater efficiency, bespoke cooling makes sense
> for the same reason.
>> * businesses that want a turnkey system, typically for HA
>> applications, that is compact and "easy" to support.
> that part never made sense to me.  I'm skeptical that the management
> interface for blade systems is better than plain old IPMI.  prettier,
> perhaps.
Agreed and agreed, but there it is. If nothing else, a small system
that hides its "rackiness" LOOKS easier to manage than a rack of 1U or
3U boxes. And I admit I don't know the MTBF numbers and couldn't tell
you if they are more reliable or less expensive to manage. However,
they never quite go away, so somebody keeps buying them... and I doubt
that a lot are bought by clusterheads. ;-)
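On the "plain old IPMI" point, it is already trivially scriptable across
a whole rack.  A minimal sketch, assuming ipmitool is installed and with
made-up hostnames and credentials:

    # Minimal sketch: poll power state and temperatures on every node via IPMI.
    # Hostnames and credentials below are hypothetical; ipmitool must be on PATH.
    import subprocess

    nodes = ["node%02d" % n for n in range(1, 17)]   # node01 .. node16 (made up)

    for node in nodes:
        base = ["ipmitool", "-I", "lanplus", "-H", node + "-ipmi",
                "-U", "admin", "-P", "secret"]
        power = subprocess.run(base + ["chassis", "power", "status"],
                               capture_output=True, text=True)
        temps = subprocess.run(base + ["sdr", "type", "Temperature"],
                               capture_output=True, text=True)
        print(node, power.stdout.strip())
        print(temps.stdout)

Not pretty, but it works on commodity 1U boxes without buying a chassis.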
And that is fair enough, actually. Some places one literally has a
closet to put one's cluster in, and if one populates that closet with a
blade box it had better be a closet which just happens to have a huge
jet of cold air going through it...
> http://www.cray.com/Products/CX1/Product/Specifications.aspx
> claims 1600W, 92% efficient.  their pages don't give much info on the
> engineering of the blades, though.  given that you have to add ipmi
> as an option card, it looks pretty close to commodity parts to me.
1600W for eight 8-core, 3 GHz, 2 GB/core RAM blades at full computational
load?
I don't believe it. I don't think my laptop averages that little (25W)
per core, sustained, using clock shifting and sitting around idle a lot.
It is sitting on my lap at the moment and is burning my hands on the
wrist-rests and my legs through my pants just a little, all the time, at
800 MHz idle except for typing.
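The arithmetic behind that disbelief, assuming the chassis really holds
eight dual-socket quad-core blades:

    # 1600 W spread over a full chassis, assuming 8 blades x 8 cores each.
    chassis_watts   = 1600
    blades          = 8
    cores_per_blade = 8
    watts_per_core  = chassis_watts / (blades * cores_per_blade)
    print(watts_per_core)   # 25.0 W/core -- and that would have to cover RAM,
                            # disk, fans and PSU losses too, not just the CPUs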
Somebody just mailed me specs offline that suggested 375W/card, which is
at least not completely unreasonable (although I'd want to see
Kill-a-Watt validated wall-power draw, not a number that might be some
sort of "average" or might refer to the idle power of their slowest-clock
system).  My KaW shows anywhere from 20-40% power variability from idle
to load in many systems, and the power drawn by the CPU, or by a minimally
configured motherboard, isn't the same as that drawn by a full system,
including heat losses in the power supply etc.
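Run the same sanity check against the 375W/card figure, remembering that
a Kill-a-Watt measures at the wall so PSU losses are included (numbers
below are the mailed figure plus Cray's claimed efficiency, nothing more):

    # Assumed figures: 375 W per blade (as mailed), 92% efficient supply.
    blade_watts    = 375
    psu_efficiency = 0.92

    print(8 * blade_watts)               # 3000 W: eight loaded blades, far over 1600 W
    print(4 * blade_watts)               # 1500 W: four blades roughly match 1600 W
    print(round(1600 * psu_efficiency))  # ~1472 W delivered to the blades, if
                                         # 1600 W is really the wall draw
    # and a 20-40% idle-to-load swing means a spec-sheet "average" hides a lot.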
1600W sounds not unreasonable for a >>4<< blade system, and is still a
wee bit warm for MY office, but one might be able to plug it into a 20A
circuit and not instantly blow the breaker. And with 3 blades, or 2, it
would still have 16-24 cores -- a very respectable total. But then one
could match it with a couple or three towers, which would also have
about the same footprint (except for height) and would almost certainly
cost only half as much.
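The breaker arithmetic, assuming an ordinary North American 120V/20A
circuit and the usual 80% continuous-load derating (neither number comes
from Cray):

    # Assumes a 120 V / 20 A branch circuit and 80% continuous-load limit.
    volts, breaker_amps = 120, 20
    continuous_limit_w  = volts * breaker_amps * 0.8    # 1920 W sustained
    chassis_amps        = 1600.0 / volts                # 13.3 A at the claimed 1600 W
    print(continuous_limit_w, round(chassis_amps, 1))   # 1920.0 13.3 -- it fits,
                                                        # with ~300 W to spare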
rgb
--
Robert G. Brown Phone(cell): 1-919-280-8443
Duke University Physics Dept, Box 90305
Durham, N.C. 27708-0305
Web: http://www.phy.duke.edu/~rgb
Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977