Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread John Hearns
2009/10/5 Mark Hahn : >> the other nodes are to be diskless.. I have separated these partitions: >>  /swap /boot /  /var and /home. Is this ok? > > I don't believe there is much value in separating partitions like this. > for instance, a swap partition has no advantage over a swap file, > and the l

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread John Hearns
2009/10/5 Nifty Tom Mitchell : > I like Ubuntu because it facilitates my WiFi and graphics support better than > some others.  It is also quite current in image tools so it is the distro > I connect my camera to more often than my other systems.  I can also I gotta put in a reply here for SuSE. T

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Mark Hahn
the other nodes are to be diskless.. I have separated these partitions: /swap /boot / /var and /home. Is this ok? I don't believe there is much value in separating partitions like this. for instance, a swap partition has no advantage over a swap file, and the latter is generally more convenien
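Mark's point that a swap file is more convenient than a swap partition can be sketched concretely. This is a generic illustration, not a recipe from the thread; the path and size are placeholder assumptions:

```shell
# Create a 2 GiB swap file (size and path are arbitrary examples)
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile    # swap files must not be readable by other users
mkswap /swapfile       # write the swap signature
swapon /swapfile       # enable it immediately
# To persist across reboots, add a line like this to /etc/fstab:
#   /swapfile  none  swap  sw  0 0
```

Unlike a partition, the file can later be resized or removed (`swapoff /swapfile; rm /swapfile`) without repartitioning.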

Re: [Beowulf] XEON power variations

2009-10-04 Thread Joe Landman
Tom Rockwell wrote: Hi, Intel assigns the same power consumption to different clockspeeds of L, E, X series XEON. All L series have the same rating, all E series etc. Not quite. They provide the maximum power consumption/dissipation, and quite possibly bin these numbers over a range of p

Re: [Beowulf] XEON power variations

2009-10-04 Thread Tiago Marques
Hi Tom, On Tue, Sep 15, 2009 at 8:45 PM, Tom Rockwell wrote: > Hi, > > Intel assigns the same power consumption to different clockspeeds of L, E, X > series XEON.  All L series have the same rating, all E series etc.  So, > taking their numbers, the fastest of each type will always have the best

Re: [Beowulf] ATX on switch

2009-10-04 Thread Michael Di Domenico
You can get a full complement of switches and LEDs for under 10 bucks. Just search Froogle for "atx power switch". I've purchased the StarTech kits in the past. On Sun, Oct 4, 2009 at 12:44 PM, Tomislav Maric wrote: > John Hearns wrote: >> 2009/10/4 Tomislav Maric : >>> @John Hearns >>> Thank y

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Nifty Tom Mitchell
On Sun, Oct 04, 2009 at 01:08:27PM +0200, Tomislav Maric wrote: > Mark Hahn wrote: > >> I've seen Centos mentioned a lot in connection to HPC, am I making a > >> mistake with Ubuntu?? > > > > distros differ mainly in their desktop decoration. for actually > > getting cluster-type work done, the

Re: [Beowulf] Re: RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Mark Hahn wrote: >> disk is the /home with the results. What file system should I use for >> it, ext3? > > it doesn't matter much. ext3 is a reasonable, conservative choice; > ext4 is the modern upgrade, though considered too-new-to-be-stable by some. > xfs is preferred as a matter of taste by oth
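Whichever filesystem is chosen, formatting the /home volume is a one-liner. A hedged sketch, assuming the data lives on an md RAID device named /dev/md0 (the device name is an assumption, not from the thread):

```shell
# Format the /home volume as ext3, the conservative choice discussed above
mkfs.ext3 -L home /dev/md0
# On a pure data volume, shrink the root-reserved blocks from 5% to 1%
tune2fs -m 1 /dev/md0
```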

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Mark Hahn wrote: I've seen Centos mentioned a lot in connection to HPC, am I making a mistake with Ubuntu?? >>> distros differ mainly in their desktop decoration. for actually >>> getting cluster-type work done, the distro is as close to irrelevant >>> as imaginable. a matter of taste,

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Mark Hahn wrote: So, maybe the bold question to ask would be: what would be the best RAID config for 3 HDDS and a max 6 node HPC cluster? Should I just use RAID 1 >>> do you mean for each node? >> No, the nodes are diskless. I plan to scale the cluster and 1TB of >> storage is quite enoug

Re: [Beowulf] Re: RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Hi Jörg, thanks for the info. I'm converging to a solution regarding RAID now. Thank you a lot for the link; I'll be needing it. :) Well, I don't need too much scratch space; the important part of the disk is the /home with the results. What file system should I use for it, ext3? Best regards,

Re: [Beowulf] ATX on switch

2009-10-04 Thread Tomislav Maric
John Hearns wrote: > 2009/10/4 Tomislav Maric : >> @John Hearns >> Thank you! I've been looking around and I knew there must be some kind of >> power supply for multiple motherboards. That's exactly what I'll need >> when the time comes for scaling. > > Tomislav, this is not a true power supply for

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
John Hearns wrote: > 2009/10/4 Tomislav Maric : >> J >> Yes, definitely, I'm removing the results after postprocessing, and I'm >> the only user. :) > > Aha. I see. Just wait for those other users to come along. Will > they remove the files immediately after postprocessing? Hmmm??? My > advice

Re: [Beowulf] ATX on switch

2009-10-04 Thread Dmitri Chubarov
Hello, Tomislav, Anthony is right in pointing out that "On after AC loss" or a similar feature would help on multiple occasions of sudden loss of power; however, a switch would probably be handy in your setup. Basically any microswitch from an electronics shop would do. It might be more difficult to

[Beowulf] Re: RAID for home beowulf

2009-10-04 Thread Jörg Saßmannshausen
Hi Tomislav I agree with what Skylar wrote. However, ask yourself what are you going to do with the cluster? For example, I am doing quite a lot of molecular modelling, which requires plenty of RAM and also scratch space. So for the machines at the old University, I set up /boot and / as RAID1
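A RAID1 layout for /boot and / like the one Jörg describes can be sketched with mdadm; the partition and device names below are illustrative assumptions only:

```shell
# Mirror two small partitions into a RAID1 device for /boot
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Mirror two larger partitions into a RAID1 device for /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Record the arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

RAID1 keeps the system bootable if either disk dies, at the cost of half the raw capacity.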

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
John Hearns wrote: > 2009/10/4 Tomislav Maric : >> >> No, the nodes are diskless. I plan to scale the cluster and 1TB of >> storage is quite enough, even if I use 6 nodes, or 2x6 nodes. That's >> actually what I know from my small experience in running CFD codes on a 96 >> core cluster. > 1 Tbyte?

Re: [Beowulf] ATX on switch

2009-10-04 Thread Tomislav Maric
@John Hearns Thank you! I've been looking around and I knew there must be some kind of power supply for multiple motherboards. That's exactly what I'll need when the time comes for scaling. @Tony Travis Thanks, I've sawed off a switch from an old box. :) It's doing the job so far. There were no fla

Re: [Beowulf] ATX on switch

2009-10-04 Thread Tony Travis
Tomislav Maric wrote: Hi Dmitri and Tony, thank you both very much for your answers. I'm on my way to rip out a switch from an old computer case so I can start the master node for the first time (hopefully without calling the firemen and an ambulance :) ). I'll set up the BIOS as you've told me:

Re: [Beowulf] ATX on switch

2009-10-04 Thread Tomislav Maric
Hi Dmitri and Tony, thank you both very much for your answers. I'm on my way to rip out a switch from an old computer case so I can start the master node for the first time (hopefully without calling the firemen and an ambulance :) ). I'll set up the BIOS as you've told me: "ON after AC loss". Doe

Re: [Beowulf] ATX on switch

2009-10-04 Thread Tony Travis
Tomislav Maric wrote: Hi again, where can I get an "on/off" and "reset" switch for an ATX motherboard without buying and ripping apart a case? Should I make one? I'm planning on having up to 12 mobos: should I use software for powering them off and resetting them (i.e. over LAN), or make a bunch o

[Beowulf] ATX on switch

2009-10-04 Thread Tomislav Maric
Hi again, where can I get an "on/off" and "reset" switch for an ATX motherboard without buying and ripping apart a case? Should I make one? I'm planning on having up to 12 mobos: should I use software for powering them off and resetting them (i.e. over LAN), or make a bunch of switches and place the
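For the "software over LAN" option, one common approach (suggested here as a sketch, not prescribed in the thread) is Wake-on-LAN for power-on plus SSH for shutdown; the MAC address and hostname are placeholders:

```shell
# Power a node on via Wake-on-LAN (NIC and BIOS must have WoL enabled)
etherwake 00:11:22:33:44:55
# Power a node off remotely instead of walking over to a switch
ssh root@node01 poweroff
```

This scales to 12 boards with a small list of MAC addresses and a for-loop, with no extra wiring.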

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Mark Hahn wrote: >> I've seen Centos mentioned a lot in connection to HPC, am I making a >> mistake with Ubuntu?? > > distros differ mainly in their desktop decoration. for actually > getting cluster-type work done, the distro is as close to irrelevant > as imaginable. a matter of taste, really

Re: [Beowulf] RAID for home beowulf

2009-10-04 Thread Tomislav Maric
Mark Hahn wrote: >> So, maybe the bold question to ask would be: what would be the best RAID >> config for 3 HDDS and a max 6 node HPC cluster? Should I just use RAID 1 > > do you mean for each node? No, the nodes are diskless. I plan to scale the cluster and 1TB of storage is quite enough, even
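One way to frame the 3-disk question: RAID1 mirrors two disks (one disk of usable space, third disk as spare or separate), while RAID5 across all three yields two disks of usable space and still survives a single-disk failure. A hedged mdadm sketch, with assumed device names:

```shell
# RAID5 over three disks: ~2/3 of raw capacity usable, tolerates one failure
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Watch the initial resync before putting data on it
cat /proc/mdstat
```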