On Tue, 28 Feb 2006 22:38:31 +0000 (WET), Ricardo Reis wrote:

> Thank you all for your replies.
>
> 1. The system will be used for CFD-intensive calculation, using
> commercial and in-house codes, MPI flavor;

You want smaller systems then.

> 2. The cluster I thought to build initially would be:
>    * 8 nodes (including master), with dual motherboards (2 Opteron
>      CPUs, single core);
>    * 16 Opterons at 2.4 GHz;
>    * 4 GB per node (32 GB total);
>    * 1 80 GB disk (SATA II) per node for system and scratch space;
>    * 2 80 GB disks (SATA II) for system on master, in RAID 1;
>    * 3 500 GB disks (SATA II) for storage, home;
>    * 2 Gigabit switches, one for MPI, another for system and NFS;
>    * the motherboard is the Tyan S2882G3NR-D;

Not the best choice of MB.  It uses Broadcom NICs, and we have seen
higher-than-we-like failure rates with Tyan MBs at our customers'
sites.

> 3. I thought that the latency in this VX50 would be far less than
> in the Gigabit network;

Possibly, but at a much higher cost.  If latency is your issue, go
with InfiniPath or InfiniBand (for the moment).  I have been hearing
interesting things about 10 GbE, but haven't had a chance to look
into it in great depth yet.  The easiest way to see what a fabric
actually delivers to your codes is a small MPI ping-pong, as sketched
below.
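Something like this quick sketch gives a first-order one-way latency
number (illustrative only, not a tuned benchmark; the 8-byte message
and 1000 iterations are arbitrary picks on my part):

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Run with exactly 2 ranks across two nodes, e.g.:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000;
    char buf[8] = {0};             /* small message => latency-bound */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)   /* half the round-trip time = one-way latency */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Run it once over each fabric you are considering (GigE, InfiniPath,
InfiniBand) and compare the numbers; for latency-sensitive CFD codes
the gap is usually dramatic.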
> 4. The solution for cluster vs. VX50 is around 3500 euro less for
> the VX50;

Interesting.  You could get a bunch of single-CPU boards, load them
with dual-core units, and come in at a lower price point.

> 5. I thought also that the requirements in HVAC would be less for
> the VX50;

Fewer power supplies, more fans, more noise, single point of failure
(the last one is bad).

> 6. I'm aware and thinking that this technology is new and can be a
> single point of failure, regarding the cluster option;

Yes.

> 7. Why are 2 single cores better than a dual core?  Because of
> sharing resources?

Actually, for CFD it depends upon the code and the memory access
patterns.  If you fill up the memory channel with one core, the
second core will have to wait to access the memory.  You can see the
effect on any box with a simple streaming test, as sketched below.
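A rough (and deliberately naive) way to check; the 128 MB arrays and
the triad-style kernel here are arbitrary choices of mine, just big
enough to defeat the caches:

/* Crude streaming-bandwidth sketch (illustrative only, not STREAM).
 * Run one copy pinned to one core, then two copies pinned to both
 * cores of a socket; if the aggregate MB/s barely rises, a single
 * core was already saturating the memory channel. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (16L * 1024 * 1024)   /* 16M doubles = 128 MB per array */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double secs, mbytes;
    clock_t t0;
    long i;
    int rep;

    if (!a || !b) { perror("malloc"); return 1; }
    for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    t0 = clock();
    for (rep = 0; rep < REPS; rep++)
        for (i = 0; i < N; i++)
            a[i] = a[i] + 3.0 * b[i];      /* 2 reads + 1 write */
    secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* 3 doubles of memory traffic per element per rep */
    mbytes = (double)REPS * 3.0 * N * sizeof(double) / 1e6;
    printf("approx bandwidth: %.0f MB/s (checksum %g)\n",
           mbytes / secs, a[0]);           /* checksum defeats DCE */

    free(a);
    free(b);
    return 0;
}

On a dual-core Opteron, compare one instance against two concurrent
instances (taskset -c 0 and taskset -c 1); the gap, or lack of one,
tells you how much headroom the memory channel really has for that
second core.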
> thanks for your knowledge sharing,
>
> Ricardo Reis
>
> "Non Serviam"
>
> n.p.: http://radio.ist.utl.pt
> n.r.: http://atumtenorio.blogspot.com
> <- Send with Pine Linux/Unix/Win/Mac OS ->

--
Scalable Informatics LLC
http://www.scalableinformatics.com
phone: +1 734 786 8423

_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf