>> http://www.datacenterknowledge.com/archives/2013/11/13/3m-immersion-cooling/
tanks seem like more of a pain than figuring out a way to use node-level
heatpipes and rack-level water loops. plumes of bubbles would seem fairly
problematic, no?

> However, (here begins a very possibly insane set of ideas) what if
> instead of reaching for low density and really efficient cooling, you
> went the other way and spread things out and tried not to actively chill
> air at all?

the big cloud vendors have some sites like this.

> Sure, your network latency will shoot up, but for many
> applications (data centers in particular) this may not matter at all.

I don't think latency is much of an issue. remember ~1 ns/ft (roughly),
and even in HPC, people don't seem to care much about multiples of 100 ns.
(if they did, more clusters would be 3d...)

> Are there places in the world so arid and stable in temperature that you
> could effectively run a data center or compute farm outside (or "almost

aridity isn't the main thing, since the hardware is warming the air and
thus, RH-wise, drying it. it's really a question of whether your heat
extraction system has a low enough thermal resistance to bring your
systems down to an acceptable operating temperature. you can lower your
thermal resistance by exchanging with more air (faster flow, bigger
heatsink surface area), or by changing your working fluid (water, Novec,
R-22, etc). if your ambient is always <30C and your chips are OK up to,
say, 60C, you have a 30C delta; for reasonable airspeeds and heatsink
sizes, you're limited to about 100W. for passive cooling, you'll need to
make your dissipation lower, or the heatsink area bigger, or the
delta-T greater.

regards, mark hahn.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
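fwiw, the ~100W figure above falls out of the basic relation P = delta-T /
R_th. here's a quick sketch of that arithmetic; the ~0.3 C/W
junction-to-ambient thermal resistance is an assumed, ballpark number for a
modest air-cooled heatsink, not from any particular datasheet:

```python
# Back-of-the-envelope check of the ~100 W passive-cooling limit.
# Assumption: R_th ~ 0.3 C/W is a rough figure for a modest air heatsink.

def max_dissipation_w(t_ambient_c: float, t_chip_max_c: float,
                      r_th_c_per_w: float) -> float:
    """Max power (W) a cooler can dump: available delta-T over thermal resistance."""
    delta_t = t_chip_max_c - t_ambient_c
    return delta_t / r_th_c_per_w

# 30C ambient, chips OK to 60C -> 30C of headroom
p = max_dissipation_w(t_ambient_c=30.0, t_chip_max_c=60.0, r_th_c_per_w=0.3)
print(f"{p:.0f} W")  # prints "100 W"
```

lowering R_th (bigger fins, faster air, better working fluid) or widening
the delta-T scales that limit linearly, which is the whole game.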