Re: [Beowulf] SGI and Sun: In Memoriam

2009-04-02 Thread Michael Brown
With all due respect, rgb, you're somewhat out of date with respect to (Open)Solaris. I don't think anyone would argue that Solaris x86 wasn't a mess prior to Solaris 10. However, it took a massive step forward in Solaris 10 once Sun really started pumping out Opteron servers. Additionally, it's

Re: [Beowulf] itanium vs. x86-64

2009-02-12 Thread Michael Brown
I've got a zx2000 (1.5 GHz/6 MB Madison processor, 2 GB PC2100 RAM, general system details at http://www.openpa.net/systems/hp_zx2000.html) that I use for testing and benchmarking. Obviously there's some difference in performance characteristics between this machine and a gazillion-processor Altix

Re: [Beowulf] Not sure if people have seen it yet, but 2TB disks from Western Digital appear to be in the wild ...

2009-01-29 Thread Michael Brown
Bruno Coutinho wrote: [...] The interesting thing is that this presumably-5400rpm drive outperforms its 7200rpm counterparts. And it's green too. :) It isn't a 5400rpm drive, it is multispeed, with speeds between 5400 and 7200rpm. Actually, it almost certainly is 5400 RPM. WD's website originally

Re: [Beowulf] SSD prices

2008-12-12 Thread Michael Brown
Greg Lindahl wrote: I was recently surprised to learn that SSD prices are down in the $2-$3 per GB range. I did a survey of one brand (OCZ) at NexTag and it was: 256 GB = $700, 128 GB = $300, 64 GB = $180, 32 GB = $70. Alas, these drives have lousy random write performance. As in 4 IOp
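(As a quick back-of-the-envelope check, the figures quoted above do land in the claimed $2-$3/GB band; the prices are from the post, and the awk one-liner is just an illustrative sketch:)

    awk 'BEGIN {
        # price / capacity, using the figures quoted above
        printf "256 GB: $%.2f/GB\n", 700/256   # ~$2.73/GB
        printf "128 GB: $%.2f/GB\n", 300/128   # ~$2.34/GB
        printf " 64 GB: $%.2f/GB\n", 180/64    # ~$2.81/GB
        printf " 32 GB: $%.2f/GB\n",  70/32    # ~$2.19/GB
    }'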

Re: [Beowulf] Multicore Is Bad News For Supercomputers

2008-12-08 Thread Michael Brown
Mark Hahn wrote: (Well, duh). yeah - the point seems to be that we (still) need to scale memory along with core count. not just memory bandwidth but also concurrency (number of banks), though "ieee spectrum online for tech insiders" doesn't get into that kind of depth :( I think this needs t

[Beowulf] QsNet-1 parts, last call

2008-11-25 Thread Michael Brown
6-64, and IA64 supported, versions 2.6.18 and earlier IIRC) but it's not all that difficult to get set up. There's binaries for RHEL, and I got it to build with a bit of coercion on Debian Etch (4.0) IA-64. Sorry about the spam, Michael Brown

Re: [Beowulf] What class of PDEs/numerical schemes suitable for GPU clusters

2008-11-22 Thread Michael Brown
Jeff Layton wrote: offhand, I'd guess that adaptive grids will be substantially harder to run efficiently on a GPU than a uniform grid. One key thing is that unstructured grid codes don't work as well. The problem is the indirect addressing. Bingo. GPUs are still GPUs, and are still heavily o

Re: [Beowulf] zfs tuning for HPC/cluster workloads?

2008-07-07 Thread Michael Brown
y to crash or lock up machines under certain workloads.

Re: [Beowulf] Re: "hobbyists"

2008-06-22 Thread Michael Brown
have to make sure that difficulty in obtaining the key file makes up for the easier breaking of the password.

Re: [Beowulf] A couple of interesting comments

2008-06-10 Thread Michael Brown
Perry E. Metzger wrote: Anyone have any cool tricks for how to consistently set the BIOS on large numbers of boxes without requiring steps that humans can screw up easily? Get a USB stick that boots into Linux. Set up one machine the way you want, then boot it up using the USB stick. Do: dd if=
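(A minimal sketch of how that dd-based cloning might continue, assuming the machines are identical models and expose their CMOS settings through the Linux nvram driver; the device and file names here are illustrative, not from the original post:)

    # On the hand-configured "golden" machine, booted from the USB stick:
    modprobe nvram                          # expose BIOS CMOS as /dev/nvram
    dd if=/dev/nvram of=/mnt/usb/cmos.img   # save the settings to the stick

    # On each target machine, booted from the same stick:
    modprobe nvram
    dd if=/mnt/usb/cmos.img of=/dev/nvram   # write the saved settings back

This only works across machines with the same mainboard and BIOS revision, since CMOS layouts are vendor-specific.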

[Beowulf] QsNet (one) gear, anyone interested?

2008-04-11 Thread Michael Brown
Hello all, I've recently ended up with a complete QsNet interconnect setup - a fully loaded QM-S128 with 126 QM-400 cards and associated cables. Now, I'm obviously not going to be using all of this. I don't have 16 computers to cluster, let alone 126, and the 700W power consumption of the swit