or a ballpark amount. the 5-10 kW one vendor specified seems way too low for a rack of high-density HPC nodes running at or near 100% utilization.

most vendors are contaminated by "enterprise" thinking.
they pull stunts like singing the praises of their blade solution's density, then admitting you can't actually fill a rack with them ;)

to one digit of resolution, 300W/node is reasonable - canonical 2-socket node, no/few disks, 1G/core, etc.
obviously, adding two 300W GPUs changes things a bit, but most
other add-ons won't make a huge difference, since big disks are in the 10W range and dimms something like 3W/8GB. (OK, disk power could make a ~30% difference, but choice among CPU models could do that, too...)
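the arithmetic above can be sketched as a quick back-of-envelope calculator - a hypothetical helper, using only the component figures from this post (300W base node, 300W GPUs, ~10W disks, ~3W per 8GB of DIMM):

```python
def node_watts(base=300, gpus=0, gpu_w=300, disks=0, disk_w=10,
               dimm_gb=0, dimm_w_per_8gb=3):
    """Rough per-node power: canonical 2-socket base plus add-ons."""
    return (base
            + gpus * gpu_w
            + disks * disk_w
            + dimm_gb // 8 * dimm_w_per_8gb)

print(node_watts())                           # plain 2-socket node: 300
print(node_watts(gpus=2))                     # plus two 300W GPUs: 900
print(node_watts(disks=8, dimm_gb=64))        # 300 + 80 + 24 = 404
```

note how the disk-heavy config lands ~30% over the base node, while the GPUs triple it - which is why GPUs are the only add-on that really changes the rack budget.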

I wouldn't configure a DC space with <15kW/rack, and would
prefer to design at the 2-node-per-U level (say, just under 30kW/rack).
(the former is easily doable with completely conventional chilled-air
designs, but the latter probably needs water, or at least
a much more careful, customized approach with air...)
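to see where "just under 30kW" comes from, here is the rack-level arithmetic - assuming a standard 42U rack (my assumption; the post only gives nodes-per-U and watts-per-node):

```python
U_PER_RACK = 42          # assumed standard rack height
NODE_W = 300             # canonical 2-socket node, from above

one_per_u = U_PER_RACK * 1 * NODE_W / 1000   # kW at 1 node/U
two_per_u = U_PER_RACK * 2 * NODE_W / 1000   # kW at 2 nodes/U

print(one_per_u)   # 12.6 kW -- the 15kW/rack floor covers this with headroom
print(two_per_u)   # 25.2 kW -- "just under 30kW/rack"
```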

regards, mark hahn.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
