Thanks for the offer. This is an academic exercise for now. Our budgets are committed through 2026 for Frontier. 😄
On Tue, Mar 10, 2020 at 4:11 PM Jeff Johnson <jeff.john...@aeoncomputing.com> wrote:

> Scott,
>
> They are about to release an 85 kW version of the rack, same dimensions.
> Let me know if you want me to connect you with their founder/inventor.
>
> --Jeff
>
> On Tue, Mar 10, 2020 at 1:08 PM Scott Atchley <e.scott.atch...@gmail.com> wrote:
>
>> Hi Jeff,
>>
>> Interesting, I have not seen this yet.
>>
>> Looking at their 52 kW rack's dimensions, it works out to 3.7 kW/ft^2
>> for the enclosure if we do not count the row pitch. If we add 4-5 feet
>> for row pitch, then it drops to 2.2-2.4 kW/ft^2. Assuming Summit's IBM
>> AC922 nodes fit and again a row pitch of 4-5 feet, the performance per
>> area would be 31-34 TF/ft^2. Both the performance per area and the
>> power per area are close to Summit's. Their PUE (1.15-1.2) is higher
>> than we get on Summit (1.05 for 9 months and 1.1-1.2 for 3 months). It
>> is very interesting for data centers that have widely varying loads on
>> adjacent cabinets.
>>
>> Scott
>>
>> On Tue, Mar 10, 2020 at 3:47 PM Jeff Johnson <jeff.john...@aeoncomputing.com> wrote:
>>
>>> Scott,
>>>
>>> It's not immersion, but it's a different approach to conventional
>>> rack cooling. It's really cool (literally and figuratively). They're
>>> based here in San Diego.
>>>
>>> https://ddcontrol.com/
>>>
>>> --Jeff
>>>
>>> On Tue, Mar 10, 2020 at 12:37 PM Scott Atchley <e.scott.atch...@gmail.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> I am wondering whether immersion cooling makes sense. We are most
>>>> limited by datacenter floor space. We can manage to bring in more
>>>> power (up to 40 MW for Frontier) and install more cooling towers
>>>> (ditto), but we cannot simply add datacenter space. We have asked to
>>>> build a new building and the answer has consistently been "No."
>>>>
>>>> Summit is mostly water cooled. Each node has cold plates on the CPUs
>>>> and GPUs. Fans cool the memory and power supplies, and their exhaust
>>>> is captured by rear-door heat exchangers. It occupies roughly 5,600
>>>> ft^2. With 200 PF of performance and 14 MW of power, that is 36
>>>> TF/ft^2 and 2.5 kW/ft^2.
>>>>
>>>> I am wondering what the comparable performance and power per square
>>>> foot are for the densest, deployed (not theoretical) immersion-cooled
>>>> systems. Any ideas?
>>>>
>>>> To make the exercise even more fun, what is the weight per square
>>>> foot for immersion systems? Our data centers have a limit of 250 or
>>>> 500 pounds per square foot. I expect immersion systems to need
>>>> higher loadings than that.
>>>>
>>>> Thanks,
>>>>
>>>> Scott
>>>
>>> --
>>> Jeff Johnson
>>> Co-Founder
>>> Aeon Computing
>>>
>>> jeff.john...@aeoncomputing.com
>>> www.aeoncomputing.com
>>> t: 858-412-3810 x1001 f: 858-412-3845
>>> m: 619-204-9061
>>>
>>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>>
>>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
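The areal-density figures traded in the thread are simple divisions, and the Summit numbers quoted above (200 PF, 14 MW, ~5,600 ft^2) are enough to reproduce them. A minimal sketch, using only values stated in the thread:

```python
# Reproduce the Summit areal-density figures quoted in the thread.
# All inputs come from Scott's message; nothing else is assumed.

def areal_density(value, area_ft2):
    """Return a quantity normalized per square foot of floor space."""
    return value / area_ft2

summit_area_ft2 = 5600.0    # approximate floor space occupied by Summit
summit_power_kw = 14000.0   # 14 MW total power
summit_perf_tf = 200000.0   # 200 PF peak performance

power_density = areal_density(summit_power_kw, summit_area_ft2)
perf_density = areal_density(summit_perf_tf, summit_area_ft2)

print(f"Summit power density: {power_density:.1f} kW/ft^2")        # 2.5 kW/ft^2
print(f"Summit performance density: {perf_density:.0f} TF/ft^2")   # ~36 TF/ft^2
```

The same function, applied to the 52 kW rack with its footprint inflated by a 4-5 ft row pitch, gives the 2.2-2.4 kW/ft^2 range cited above; the exact rack dimensions are not given in the thread, so that step is left out here.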