___
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
___
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
@Greg Kurtzer & Jess Cannata
Thank you both very much for the information and advice! I'm digging
through it. :)
Best regards,
Tomislav
Hi,
I'm browsing the web and there are multiple options to choose from among
prepared programs that set up diskless nodes. Still, for the first two of
them I would like to learn how it's done. Can someone tell me what I need
to do, or point me to a "manual" tutorial for setting up diskless nodes?
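For reference, a hand-rolled diskless setup usually combines DHCP, TFTP, and an NFS root. The sketch below is a rough outline only; all package names, paths, and addresses are hypothetical examples, not taken from this thread:

```shell
# Sketch of a PXE + NFS-root diskless boot, configured on the head node.
# Assumes Debian/Ubuntu-style packages; every name/IP below is an example.

# 1. DHCP points PXE clients at the TFTP server (/etc/dhcp/dhcpd.conf):
#      subnet 192.168.1.0 netmask 255.255.255.0 {
#        range 192.168.1.100 192.168.1.150;
#        next-server 192.168.1.1;        # TFTP server address
#        filename "pxelinux.0";          # PXE boot loader to fetch
#      }

# 2. TFTP serves the boot loader, kernel, and initramfs:
sudo apt-get install tftpd-hpa syslinux
sudo cp /usr/lib/syslinux/pxelinux.0 /srv/tftp/

# 3. NFS exports a root filesystem for the nodes (/etc/exports):
#      /srv/nfsroot 192.168.1.0/24(rw,no_root_squash,async)

# 4. The PXE config (pxelinux.cfg/default) tells the kernel to mount
#    that export as its root:
#      append root=/dev/nfs nfsroot=192.168.1.1:/srv/nfsroot ip=dhcp rw
```

The prepared tools mentioned in the thread automate essentially these four pieces.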
Mark Hahn wrote:
>> the other nodes are to be diskless.. I have separated these partitions:
>> /swap /boot / /var and /home. Is this ok?
>
> I don't believe there is much value in separating partitions like this.
> for instance, a swap partition has no advantage over a swap file,
> and the latter
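As a concrete illustration of the swap-file point above, one can be created and enabled in a few commands (run as root; the path and size are examples):

```shell
# Create and enable a 2 GiB swap file; no dedicated partition needed.
fallocate -l 2G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile          # swap files must not be world-readable
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```

Unlike a partition, the file can later be resized or removed without repartitioning.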
> StarTech kits in the past.
>
> On Sun, Oct 4, 2009 at 12:44 PM, Tomislav Maric
> wrote:
>> John Hearns wrote:
>>> 2009/10/4 Tomislav Maric :
>>>> @John Hearns
>>>> Thank you! I've been looking around and I knew there must be some kind
Nifty Tom Mitchell wrote:
> On Sun, Oct 04, 2009 at 01:08:27PM +0200, Tomislav Maric wrote:
>> Mark Hahn wrote:
>>>> I've seen Centos mentioned a lot in connection to HPC, am I making a
>>>> mistake with Ubuntu??
>>> distros differ mainly in their
Mark Hahn wrote:
>> disk is the /home with the results. What file system should I use for
>> it, ext3?
>
> it doesn't matter much. ext3 is a reasonable, conservative choice;
> ext4 is the modern upgrade, though considered too-new-to-be-stable by some.
> xfs is preferred as a matter of taste by others.
Mark Hahn wrote:
>>>> I've seen Centos mentioned a lot in connection to HPC, am I making a
>>>> mistake with Ubuntu??
>>> distros differ mainly in their desktop decoration. for actually
>>> getting cluster-type work done, the distro is as close to irrelevant
>>> as imaginable. a matter of taste,
Mark Hahn wrote:
>>>> So, maybe the bold question to ask would be: what would be the best RAID
>>>> config for 3 HDDs and a max 6-node HPC cluster? Should I just use RAID 1
>>> do you mean for each node?
>> No, the nodes are diskless. I plan to scale the cluster and 1TB of
>> storage is quite enough
>
> Am Samstag 03 Oktober 2009 schrieb beowulf-requ...@beowulf.org:
>> Message: 2
>> Date: Sat, 03 Oct 2009 19:41:28 +0200
>> From: Tomislav Maric
>> Subject: [Beowulf] RAID for home beowulf
>> To: beowulf@beowulf.org
>> Message-ID: <4ac78cc8.506
John Hearns wrote:
> 2009/10/4 Tomislav Maric :
>> @John Hearns
>> Thank you! I've been looking around and I knew there must be some kind of
>> power supply for multiple motherboards. That's exactly what I'll need
>> when the time comes for scaling.
>
John Hearns wrote:
> 2009/10/4 Tomislav Maric :
>> J
>> Yes, definitely, I'm removing the results after postprocessing, and I'm
>> the only user. :)
>
> Aha, I see. Just wait for those other users to come along. Will
> they remove the files imme
John Hearns wrote:
> 2009/10/4 Tomislav Maric :
>>
>> No, the nodes are diskless. I plan to scale the cluster and 1TB of
>> storage is quite enough, even if I use 6 nodes, or 2x6 nodes. That's
>> actually what I know from my small experience in running CFD codes o
@John Hearns
Thank you! I've been looking around and I knew there must be some kind of
power supply for multiple motherboards. That's exactly what I'll need
when the time comes for scaling.
@Tony Travis
Thanks, I've sawed off a switch from an old box. :) It's doing the job
so far. There were no fla
Hi Dmitri and Tony,
thank you both very much for your answers. I'm on my way to rip out a
switch from an old computer case so I can start the master node for the
first time (hopefully without calling the firemen and an ambulance :) ).
I'll set up the BIOS as you've told me: "ON after AC loss". Doe
Hi again,
where can I get an "on/off" and "reset" switch for ATX motherboard
without buying and ripping apart a case?
Should I make one? I'm planning on having up to 12 mobos: should I use
software for powering them off and resetting them (e.g. over LAN), or
make a bunch of switches and place the
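On the power-control question: wake-on-LAN plus SSH can replace physical switches entirely. A minimal sketch, assuming the common `wakeonlan`/`etherwake` tools are installed (they are not mentioned in the thread) and that WOL is enabled in each board's BIOS; the MAC address and hostname are examples:

```shell
# Power a node on remotely by sending a WOL "magic packet" to its NIC.
# MAC address is an example; WOL must be enabled in BIOS and on the NIC.
wakeonlan 00:11:22:33:44:55       # from the wakeonlan package
# or: etherwake 00:11:22:33:44:55

# Powering off and resetting can then be done over SSH instead of a switch:
ssh node01 'sudo poweroff'
ssh node01 'sudo reboot'
```

A hard reset of a truly hung node still needs a physical switch (or IPMI-class hardware), so one momentary push-button wired to each board's front-panel header is a common compromise.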
Mark Hahn wrote:
>> I've seen Centos mentioned a lot in connection to HPC, am I making a
>> mistake with Ubuntu??
>
> distros differ mainly in their desktop decoration. for actually
> getting cluster-type work done, the distro is as close to irrelevant
> as imaginable. a matter of taste, really
Mark Hahn wrote:
>> So, maybe the bold question to ask would be: what would be the best RAID
>> config for 3 HDDs and a max 6-node HPC cluster? Should I just use RAID 1
>
> do you mean for each node?
No, the nodes are diskless. I plan to scale the cluster and 1TB of
storage is quite enough, even
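For the 3-disk question above, a Linux software RAID5 array for /home would be created with mdadm roughly like this (run as root; device names, mount point, and filesystem are examples, and the command destroys data on those partitions):

```shell
# Build a 3-member software RAID5 array and put /home on it.
# Device names are examples -- adjust to the actual partitions.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0                        # conservative choice, per the thread
mkdir -p /home && mount /dev/md0 /home
mdadm --detail --scan >> /etc/mdadm.conf  # record the array so it assembles at boot
```

With three 500 GB members this yields about 1 TB usable (one disk's worth goes to parity), matching the capacity discussed here.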
Tony Travis wrote:
> Tomislav Maric wrote:
>> [...]
>> I've seen Centos mentioned a lot in connection to HPC, am I making a
>> mistake with Ubuntu??
>
> Hello, Tomislav.
>
> [Just let me put my flame-proof trousers on...]
>
> I know a lot of HPC peo
Lux, Jim (337C) wrote:
>
>
> On 10/3/09 1:48 PM, "Tomislav Maric" wrote:
>
>> First of all: thank you very much for the advice, Skylar. :)
>>
>> So, all I need to do is to create the same partitions on three disks and
>> set up a RAID 5 on /
Mark Hahn wrote:
>> It depends on your workload. RAID5 is good for large sequential writes,
>> but sucks at small sequential writes because for every write it has to
>> do a read to compare parity.
>
> well, it's bad at small random writes. small _sequential_ writes would
> be able to avoid read
Tony Travis wrote:
> Tomislav Maric wrote:
>> Hi everyone,
>>
>> I've finally gathered all the hardware I need for my home beowulf. I'm
>> thinking of setting up RAID 5 for the /home partition (that's where my
>> simulation data will be and RAID 1
Skylar Thompson wrote:
> Tomislav Maric wrote:
>> First of all: thank you very much for the advice, Skylar. :)
>>
>> So, all I need to do is to create the same partitions on three disks and
>> set up a RAID 5 on /home since I'll be doing CFD simulations (long
>
Ubuntu, and after I install it, I'll try to configure the RAID
manually. How do I make sure that the boot loader is on all disks? I
mean, isn't RAID going to make the OS look at the /boot partition that's
spread over 3 HDDs as a single mount point?
Best regards,
Tomislav
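On the boot-loader question above: with /boot on software RAID1, the usual approach is to install the boot loader into the MBR of every member disk, so the machine can boot from whichever drive the BIOS picks even after a failure. A sketch with GRUB legacy (standard in 2009-era distros); device names are examples:

```shell
# Install GRUB into the MBR of each RAID1 member disk.
# /boot itself lives on the mirrored array (e.g. /dev/md0), so every
# disk carries both a boot loader and an identical copy of the kernel.
for disk in /dev/sda /dev/sdb /dev/sdc; do
  grub-install "$disk"
done
```

The RAID layer does present the mirrored /boot as a single mount point, but the MBR sits outside any partition, which is why the install step has to be repeated per physical disk.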
Skylar Th
Hi everyone,
I've finally gathered all the hardware I need for my home beowulf. I'm
thinking of setting up RAID 5 for the /home partition (that's where my
simulation data will be and RAID 1 for the system / partitions without
the /boot.
1) Does this sound reasonable?
2) I want to put the /home at
Jérôme wrote:
> Hi,
>
> ParaView comes with its own VTK sources. You can find in the source tree
> : ./Paraview3/VTK
> The VTK binaries will be put in the ParaView binary tree : ./ParaViewBin/bin
>
> Obviously, the paths depend on how you run the build, and on your CMake settings
>
> Hope that helps
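Following the layout Jérôme describes, an out-of-source ParaView build would look roughly like this (a sketch only; directory names match his example paths, and it assumes cmake and the usual build dependencies are installed):

```shell
# Out-of-source ParaView build; VTK sources ship inside the ParaView tree
# (./ParaView3/VTK), so no separate VTK checkout is needed.
mkdir ParaViewBin && cd ParaViewBin
cmake ../ParaView3        # configure against the bundled VTK
make -j4                  # build; VTK libraries and binaries land under ./bin
```

Pointing OpenFOAM's paraFoam wrapper at the resulting ./ParaViewBin/bin then avoids the cmake-not-found problem mentioned earlier in the thread.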
Jon Tegner wrote:
>
> using overlapping grids. Complete with gridgenerator and a bunch of
> solvers. Excellent software!
>
> /jon
>
I'll certainly take a look at it, but from what I've read on the main
page, it uses structured and curvilinear grids, while OF also supports
polyhedral finite v
Thank you for the detailed answer, it's really educational. I've been
reading Atom's description today on Tom's Hardware, and it's described
there in the same way you have described it.
I was thinking about using RAID because I was worried about backup space
and speed for data transfer on the hard
I've sent this mail as a direct reply to Mr. John Hearns, so I'm sending
it again to the list - my apologies to Mr. Hearns.
Hearns, John wrote:
> >
> > I would guess you are looking at using OpenFOAM for the CFD
> > solver?
Of course, what else is there? :))
> > One thing to look at
Jon Forrest wrote:
> My suggestion to you is to forget about buying anything new
> right now. Instead, find some cheap used P4 PCs with at least
> 1GB of RAM. In the US such things can easily be found for ~$150.
I've thought about collecting old PCs from my friends, but they are all
different, an
Dmitry Zaletnev wrote:
> Dear Sir/Madam!
> A few days ago I tried to compile OpenFOAM. I didn't succeed in compiling
> paraFoam, nor did I succeed in using ParaView 3.4,
> downloaded from the official site, to view VTK-format data from OpenFOAM.
> The compiler refused to find cmake. Earlier I
Hello everyone,
it's the noob again. Actually, since my brother Mario joined my quest,
we're in plural now.
After reading loads of info from the net, we kind of figure that we
could read on for the rest of our lives, and still won't learn "enough".
Time to do some work.
Question: the cheapest th
Thank you for the advice.
I've built OpenFOAM with gcc many times, so this is no longer a problem.
It was at first, because OF came with precompiled gcc binaries and the
config scripts searched for those binaries. If for some reason the
binaries didn't work, a beginner would have to change th
Thank you all very much for the advice, knowing that I can find help and
advice here is really great, thank you! I apologize in advance for many
newbish questions, and the delay in answering, I've had to switch to
Thunderbird because Evolution gave me problems and gmx.com app didn't
have a switch f