Hi Mark,

Mark Hahn wrote:
> the premise of this approach is that whoever is using the node doesn't
> mind the overhead of external accesses. do you have a sense (or even
> measurements) on how bad this loss is (cpu, cache, memory, interconnect
> overheads)? if you follow the reasoning that current machines are
> pretty 'fat' wrt IB bandwidth and cpu power, there's still a question
> of who does the work of raid/fec - ideally, it would be on the client
> side to minimize the imposed jitter.

As always: It depends. All our nodes run on a single GigE link, but
their computations are mostly non-MPI and even local to a single core,
i.e. bandwidth should not be a problem. Of course you add more heat to
the system, e.g. 1000 extra disks might draw around 10 kW sustained,
but OTOH you gain a lot, provided you can use these extra disks
efficiently.

I need to look into PVFS; if it provides a uniform namespace (and maybe
some kind of automatic file replication), that would already be
perfect. But I need to read up on it first.

Cheers

Carsten

--
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31
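PS: a quick back-of-envelope in Python for the numbers above; the
per-disk wattage and per-disk streaming rate are assumptions on my
part, not measurements:

    # Back-of-envelope for the figures in the mail. Assumed values:
    # ~10 W sustained per commodity drive, ~60 MB/s sustained
    # streaming per drive, ~90% usable bandwidth on a GigE link.

    N_DISKS = 1000            # extra disks across the cluster
    WATTS_PER_DISK = 10.0     # assumed sustained draw per drive

    power_kw = N_DISKS * WATTS_PER_DISK / 1000.0
    print(f"{N_DISKS} disks * {WATTS_PER_DISK:.0f} W = "
          f"{power_kw:.0f} kW sustained")

    GIGE_USABLE_MB_S = 1000 / 8 * 0.9   # ~112 MB/s usable per GigE link
    DISK_STREAM_MB_S = 60.0             # assumed per-disk streaming rate

    disks_to_saturate = GIGE_USABLE_MB_S / DISK_STREAM_MB_S
    print(f"a single GigE link (~{GIGE_USABLE_MB_S:.0f} MB/s) is "
          f"saturated by ~{disks_to_saturate:.1f} disks streaming "
          f"to remote clients")

So as long as jobs stay local to their core, the single GigE link is
only a concern when a node starts serving its disks to many remote
clients at once.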