hmmm... 200 nodes writing to the same file. That is a hard problem.
In all my testing of global FS's I haven't found one that is capable
of doing this while delivering good performance. One might think
that MPI-IO would deliver performance while writing to the same file
(on something like
Hi,
On Thursday 28 September 2006 17:57, [EMAIL PROTECTED] wrote:
> very fine-grained control over memory timings - from just choosing the
> specific SPD profile to completely overriding the SPD. Either one of: a)
> downclocking the DIMMs to 183MHz; or b) overriding the CAS to 3.0 gave us a
> rock
At 02:36 PM 9/29/2006, Vincent Diepeveen wrote:
In reality that isn't the case.
People just want to see whose car is better, not whether they can
outrun the car.
Additionally, they want to analyze their own running (the most important
feature) in the most objective manner possible, or they want t
Hi,
On Wednesday 27 September 2006 17:19, Constantin Charissis wrote:
> One BIOS option can cause severe problems : PowerNow should be set to
> disabled.
> But this is the default in latest BIOS.
just had a look: PowerNow is disabled here.
> Maybe you have a batch of NoName RAM ?
No, as mentio
- Original Message
> From: Stuart Midgley <[EMAIL PROTECTED]>
> To: Beowulf List
> Sent: Thursday, September 28, 2006 7:41:17 PM
> Subject: Re: [Beowulf] commercial clusters
>
> actually, their IO requirements are not that great. Certainly the
> systems that I run have far greater io
Hi Mark,
We are going through a similar experience at one of our customer sites.
They are trying to run Intel MPI on more than 1,000 nodes. Are you
experiencing problems starting the MPD ring? We noticed it takes a
really long time especially when the node count is large. It also just
doesn'
> - Original Message
> From: Stu Midgley <[EMAIL PROTECTED]>
> To: Beowulf List
> Sent: Friday, September 29, 2006 10:58:58 AM
> Subject: Re: [Beowulf] commercial clusters
>
> ?? NFS? And you claim performance is a problem? I'm not surprised.
>
> Of the machines I use and manage, I hap
"Maxence Dunnewind" <[EMAIL PROTECTED]> writes:
> Hi,
> I'm working on a way to split the Debian/Ubuntu packaging process
> across the internet. I'm looking for the best solution; can a Beowulf do it?
Package building is essentially an embarrassingly parallel problem.
You communicate at startup
Hi,
thank you very much for your reply!
On Thursday 28 September 2006 16:17, you wrote:
> > * Dual AMD Opteron DP270 (2.0 GHz)
>
> which rev?
How can I figure out? Is
# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 33
model name
Vincent Diepeveen wrote:
> The basic flaw I see in some assumptions taken here is that you as a
> human want to do a running contest with a car.
>
> In reality that isn't the case.
>
> People just want to see whose car is better, not whether they can outrun
> the car.
Your analogy is poor. Any
Hi,
thanks for your reply!
On Thursday 28 September 2006 16:02, you wrote:
> I bet if you decode the MCE it will say uncorrectable ECC memory error.
You'd win that bet.
> memtest86 doesn't see correctable memory errors.
As far as I can remember, memtest86 includes tests that also detect
correc
On Fri, 29 Sep 2006, Vincent Diepeveen wrote:
The basic flaw I see in some assumptions taken here is that you as a human
want to do a running contest with a car.
In reality that isn't the case.
People just want to see whose car is better, not whether they can outrun the
car.
No, no, no. M
Actually, if you are going to ask for a top-100 list of "want to haves",
of course a new car is at #1 on my personal list (though it is questionable
whether I will ever earn the 50k euro that a reasonable car costs here, so
probably this will top my list for another 40 years),
then yo
- Original Message -
From: "Geoff Jacobs" <[EMAIL PROTECTED]>
To: "Vincent Diepeveen" <[EMAIL PROTECTED]>
Cc: "Robert G. Brown" <[EMAIL PROTECTED]>; "Jim Lux"
<[EMAIL PROTECTED]>; ; "Angel Dimitrov"
<[EMAIL PROTECTED]>
Sent: Friday, September 29, 2006 9:19 PM
Subject: Re: [Beowulf] c
Jim Lux wrote:
> Having spent a fair amount of time working in a notoriously shallow
> industry where people are more than willing to spend much more than they
> have to "look like a winner", I'd be willing to bet a substantial sum of
> money that chess playing ability isn't even in the top 100 lis
On Fri, 29 Sep 2006, Vincent Diepeveen wrote:
Actually, with respect to chess, it is not overblown;
there are a billion people on this planet who can play chess, and the average
IQ of a chess player
is significantly higher than the worldwide average (you can ask yourself
whether this is caused by
ch
- Original Message -
From: "Robert G. Brown" <[EMAIL PROTECTED]>
To: "Vincent Diepeveen" <[EMAIL PROTECTED]>
Cc: "Geoff Jacobs" <[EMAIL PROTECTED]>; "Jim Lux"
<[EMAIL PROTECTED]>; ; "Angel Dimitrov"
<[EMAIL PROTECTED]>
Sent: Friday, September 29, 2006 9:34 PM
Subject: Re: [Beowulf] c
Hi,
I'm working on a way to split the Debian/Ubuntu packaging process across the
internet. I'm looking for the best solution; can a Beowulf do it?
Thx
-- Maxence DUNNEWIND
http://www.sos-sts.info <=== Student mutual aid
http://www.ubuntu-fr.org <=== The best Linux distribution :)
Contact : [EMAIL PR
You would use one thread per core (or even less) and use e.g. MPI to do the
communication. Any MPI transfer between cores on the same processor will
probably automatically be done using shared memory or some memory mapping
operation. Coding this explicitly (your 'mixed-mode' model) would gain yo
very well taken. There are an enormous number of people who could use "big
computation" if it were "easy to use" and "cheap enough". $10K is a
maybe. To me, if Dell started selling a $10k Windows-cluster-in-a-box
that was really at the Windows-drooler level, it would be a huge shame.
vast amoun
> Or are you going to claim that universities and governments buy a 12288
> processor system because they NEED it?
>
> Come on, they spend that 50 million euro because they CAN afford spending
> 50 million euro, and then have a look at what they can get and whether
> they can end up high in
>
Actually, with respect to chess, it is not overblown;
there are a billion people on this planet who can play chess, and the average
IQ of a chess player
is significantly higher than the worldwide average (you can ask yourself
whether this is caused by
chess, or whether it is the people who choose to pl
The Orion computer falls into a totally different line of computing.
It falls in the region of users who are on this list.
The Orion computer is interesting for those who want 192GB of RAM,
and it runs Linux. So for embarrassingly parallel number-crunching
software it is interesting.
No average Joe knows the
At 06:56 AM 9/29/2006, Geoff Jacobs wrote:
Vincent Diepeveen wrote:
> If it was possible to build your own cluster in easy manner and then run
> for
> example a chessprogram at it in a user friendly way,
> there would be 100k+ clusters right now of 64 cpu's and more.
I think you maybe overestima
At 08:32 AM 9/29/2006, Vincent Diepeveen wrote:
The above is the biggest problem. That's why you need good software that
'hides' the supercomputer. Basically they want to start your program from
their Windows PC (see attachment), click somewhere, and then run on a big
supercomputer (for the aver
Which drivers can get 300 km/h out of their car on the road?
Yet I want a car that can do 300 km/h too.
My average speed in the Netherlands on the highways where there is no control
is about 160-170 (you lose your license when driving 170+) when trying to get
to an appointment in time.
In Germany
?? NFS? And you claim performance is a problem? I'm not surprised.
Of the machines I use and manage, I happily see 450MB/s (168 cpu
machine) and 2.5GB/s (1936 cpu machine)...
I too work for one of those Houston companies and in our cluster performance
absolutely does count. The way
Vincent Diepeveen wrote:
> If it was possible to build your own cluster in easy manner and then run
> for
> example a chessprogram at it in a user friendly way,
> there would be 100k+ clusters right now of 64 cpu's and more.
I think you maybe overestimate the number of chess players who can
challe
- Original Message -
From: "Robert G. Brown" <[EMAIL PROTECTED]>
To: "Angel Dimitrov" <[EMAIL PROTECTED]>
Cc:
Sent: Friday, September 29, 2006 2:03 PM
Subject: Re: [Beowulf] commercial clusters
On Tue, 26 Sep 2006, Angel Dimitrov wrote:
Hello,
I have some experience of running o
We are going through a similar experience at one of our customer sites.
They are trying to run Intel MPI on more than 1,000 nodes. Are you
experiencing problems starting the MPD ring? We noticed it takes a
really long time especially when the node count is large. It also just
doesn't work somet
Jakob Oestergaard wrote:
> On Thu, Sep 28, 2006 at 10:03:42AM -0700, Michael Will wrote:
>> That's weird. On my Scyld cluster it worked fine once I had created
>> /tmp// on all compute nodes before running the job.
>>
>
> Nope, it's not weird, it's just lucky :)
>
> Writing to memory you
I've solved the issue. It was a network problem.
And yes, I know about the file descriptor problem.
Mark, my question was a bit misleading. :-) I asked a very simplistic and broad
question to see if I missed anything before I started troubleshooting other
parts of the cluster. And yes I know
On Tue, 26 Sep 2006, Angel Dimitrov wrote:
Hello,
I have some experience of running of numerical weather models on
clusters.
Are there many clients for processor time? As I saw, the biggest
supercomputers in the world are very busy! I'm wondering if it's
worthwhile to set up a commercial cluster
Does anyone have any experience running intel mpi over 1000 nodes and do you
have any tips to speed up task execution? Any tips to solve this issue?
it's not uncommon for someone to write naive select() code that fails
when the number of open file descriptors hits 1024... yes, even in
the int
On Thu, Sep 28, 2006 at 10:03:42AM -0700, Michael Will wrote:
> That's weird. On my Scyld cluster it worked fine once I had created
> /tmp// on all compute nodes before running the job.
>
Nope, it's not weird, it's just lucky :)
Writing to memory you aren't meant to write to can lead to a
A buddy of mine has a cluster that is over 1000 (2000) nodes.
I've compiled a simple hello-world app to test it out.
I am using Intel MPI 2.0 and running over Ethernet, so I'm trying both the
ssm (since the nodes are SMP machines) and sock devices.
I'm doing the following: mpdboot -n 1500 --