> It's my experience that when building large clusters, issues of space, power, and cooling are often harder and more time-consuming to resolve than actually getting the cluster itself purchased, commissioned, and operating.

That is somewhat perplexing, since the space/power/cooling issues aren't really _that_ complicated. I think it's one of those areas where too much choice leads to harder decisions. Perhaps it also reflects the fact that we're still not really comfortable with the state of affairs - for instance, vendors still advocate blade servers, which if fully populated are basically uncoolable (~24 kW/rack!).
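As a rough sketch of why (the ~2.5 square meters of floor per rack, counting aisle share, is my assumption, not a quoted figure):

    # back-of-envelope: heat density of a fully populated blade rack
    rack_kw = 24.0            # ~24 kW/rack, from above
    floor_m2_per_rack = 2.5   # assumed rack footprint plus aisle share
    print(f"{rack_kw / floor_m2_per_rack:.1f} kW/m^2")  # -> 9.6 kW/m^2

That's far beyond what a typical machine room's cooling is designed to handle.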

> For example, I've recently taken up a new position in Hannover, Germany, where as part of my start-up package the MPG is building a cluster room (450 square meters of floor space, 500 kW cooling, 800 kW UPS, with the option to double cooling/power in four years).

Those numbers seem strange to me - unless I've botched a conversion, they work out to about 1.1 kW/m², while the cluster I sit next to is about 4.7 kW/m². (Such a large UPS seems strange too - did they choose it based on poor-quality line power? We have none of our compute hardware on UPS and don't have problems, since modern power supplies seem to ride out the typical 1-second glitch without much trouble...)
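For concreteness, the arithmetic as a trivial Python snippet (Hannover numbers from the quoted post; the 4.7 kW/m² figure is our own room):

    # sanity-check the conversion: quoted Hannover room vs. our room
    hannover_kw, hannover_m2 = 500.0, 450.0
    print(f"Hannover: {hannover_kw / hannover_m2:.2f} kW/m^2")  # -> 1.11
    print("ours:     4.70 kW/m^2")  # roughly 4x the density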

> [...] end of this year. So total design and construction time is 2.3 years.

That's a bit extreme, I think. Our room was a bare-slab renovation and took a bit over a year; another one of our sites was built from scratch and took about 1.5 years.

> [...] construction time will be about 0.5 years. The cost of the cluster room is about equal to the cost of the initial cluster that will go into it. [...]

That seems strange. I'm pretty sure the cost ratio we see is more like 4:1 (cluster cost to room cost) for the from-scratch site, and closer to 10:1 for renovations.
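To make the ratios concrete (the $2M cluster figure below is purely illustrative, not from either post):

    # room cost implied by each ratio, for an illustrative $2M cluster
    cluster_cost = 2_000_000   # assumption, purely illustrative
    for label, ratio in [("quoted (1:1)", 1.0),
                         ("from-scratch (4:1)", 4.0),
                         ("renovation (10:1)", 10.0)]:
        print(f"{label}: room ~ ${cluster_cost / ratio:,.0f}")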

regards, mark hahn.
