Jim Lux wrote:
> Heh, heh, heh..
> I have a box of Artisoft 2Mbps NICs out in the garage.
> Or, maybe, some of those NE1000 coax adapters. I have lots of old coax,
> a bag full of connectors, a crimper, and I'm not afraid to use them.
>
> Hey, it's only to boot.
Save the connectors for important
- Original Message -
From: "Mark Hahn" <[EMAIL PROTECTED]>
To: "Vincent Diepeveen" <[EMAIL PROTECTED]>
Cc:
Sent: Thursday, June 29, 2006 2:44 AM
Subject: Re: [Beowulf] Three notes from ISC 2006
a huge L3 cache (which most specfp software is somehow) that it destroyed
spec is, after all, CPU2000, and things have changed quite a lot in 6+ years.
At 06:50 PM 6/28/2006, Mark Hahn wrote:
> btw, does 'boot over network' mean I need a 16-port hub for 100 Mbit, and
> connect all the machines, besides the Quadrics network, also to 100 Mbit?
sure, you need some sort of ethernet. doesn't have to be a 16-port switch
(please don't say that you ac
> http://krone.physik.unizh.ch/%7Estadel/zBox/
> The chimney design is interesting.
it's a very neat machine. but it's not really that
much different from a traditional hot/cold aisle design.
not to _ignore_ convection, but consider that the successor zbox2 uses
a more traditional "wall of comp
> If I boot a machine without a hard drive, basically the machine says: "F you,
> error! Press enter to reboot"
>
> OK, let's please start there. What do I do after getting that message?
_before_ getting that message, look in the bios. there should be some
mention of net-boot and/or PXE. assuming
> a huge L3 cache (which most specfp software is somehow) that it destroyed
spec is, after all, CPU2000, and things have changed quite a lot in 6+ years.
Itaniums were one of the earliest "breaks" of spec cpu2000 codes,
(besides sun's spec-special compiler). if you take it2 results
and remove t
G'day all
A hopefully useful reference on another "no case" cluster.
The zBox previously mentioned on this list might be of interest:
http://krone.physik.unizh.ch/%7Estadel/zBox/
The chimney design is interesting.
Cheers
Stevo
> Date: Wed, 28 Jun 2006 16:49:01 -0700
> From: Jim Lux <[EMAIL
- Original Message -
From: "Brian Dobbins" <[EMAIL PROTECTED]>
To: "Vincent Diepeveen" <[EMAIL PROTECTED]>
Cc: "pauln" <[EMAIL PROTECTED]>; "Eray Ozkural" <[EMAIL PROTECTED]>;
Sent: Saturday, June 03, 2006 11:04 AM
Subject: Re: [Beowulf] Building my own highend cluster
Hi Vincent
At 11:51 AM 6/28/2006, Andrew Piskorski wrote:
On Sun, Jun 25, 2006 at 03:52:27PM -0700, Robert Fogt wrote:
> Hello beowulf mailing list,
>
> How much harm does removing the computer case do? I know computer cases are
> designed for the air flow, and without them I was wondering if there will
> Do you have a FAQ which describes how to boot nodes diskless?
PXE basically means "DHCP then TFTP". what you get via TFTP is
much like booting from disk: first you get a bootsector (pxelinux),
which then fetches a config file (again, via TFTP), which tells it
how to boot. that may be to boot from th
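To make the "DHCP then TFTP" sequence concrete, here is a minimal sketch of the
two server-side pieces involved. The addresses, file names, and kernel/initrd
paths are illustrative assumptions, not anyone's actual setup:

  # /etc/dhcpd.conf (ISC dhcpd): point PXE clients at the TFTP server
  subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.116;
      next-server 192.168.1.1;       # head node running the TFTP server
      filename "pxelinux.0";         # the bootsector fetched over TFTP
  }

  # /tftpboot/pxelinux.cfg/default: the config file pxelinux then fetches,
  # telling the node which kernel and initrd (here a ramdisk root) to boot
  DEFAULT diskless
  LABEL diskless
      KERNEL vmlinuz
      APPEND initrd=initrd.img root=/dev/ram0 rw

With something like that in place, a node with PXE enabled in the BIOS gets an
address from DHCP, pulls pxelinux.0 and its config over TFTP, and boots with no
disk at all.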
At 11:40 AM 6/28/2006, David Mathog wrote:
> How much harm does removing the computer case do?
For airflow it's hard to generalize - it might be better or worse than
the case which you removed. Some cases do a great job of moving air
where it needs to, and others just suck.
Almost always, ai
The team HAD such a system nearly a month ago, at the world computer chess
championship 2006.
It clocked 3 GHz and had 2 sockets (4 cores in total).
3 GHz is 25% more than the 2.4 GHz dual-core Opterons I've got here,
and they get 20% higher IPC.
So effectively it's 50% faster (1.25 x 1.20 = 1.5).
Now we already knew at some
Vincent Diepeveen wrote:
Woodcrest totally destroys everything in terms of raw CPU performance.
Not only does it clock nearly 25% higher; according to the junior team, who
used such a system from HP (their usual sponsor) at the world championship
2006, it was giving 20% higher IPC for their program too.
Woodcrest totally destroys everything in terms of raw CPU performance.
Not only does it clock nearly 25% higher; according to the junior team, who
used such a system from HP (their usual sponsor) at the world championship
2006, it was giving 20% higher IPC for their program too.
That's 50% faster than
Hi Christian,
Christian Bell wrote:
I agree with you that the inverse of message rate, or the small
message gap in logP-derived models is a more useful way to view the
metric. How much more important it is than latency depends on what
the relative difference is between your gap and your late
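For a concrete number (mine, not Christian's): the gap is simply the inverse of
the message rate, so a NIC quoted at 3 million messages per second has a
per-message gap of about 1 / 3,000,000 s = 0.33 microseconds, and the question
is how that 0.33 us compares with the end-to-end latency of the interconnect.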
On Wed, 2006-06-28 at 13:41, Erik Paulson wrote:
> On Wed, Jun 28, 2006 at 04:25:40PM -0400, Patrick Geoffray wrote:
> >
> > I just hope this will be picked up by an academic that can convince
> > vendors to donate. Tax break is usually a good incentive for that :-)
> >
>
> How much care should
This isn't very good but it gives an overview of what I'm doing -
it's not exact and needs updating. If people are interested in more
detail I'll gladly update the page. It also doesn't explain how to use
cfengine to configure the ramdisk.
.. my apologies in advance:
http://www.psc.edu/~pauln/
On Wed, 28 Jun 2006, Patrick Geoffray wrote:
High message rate is good, but the question is how much is enough? At 3
million packets per second, that's 0.3 us per message, all of which is
used by the communication library. Can you name real-world applications
that need to send messages every
On Wed, Jun 28, 2006 at 04:25:40PM -0400, Patrick Geoffray wrote:
>
> I just hope this will be picked up by an academic that can convince
> vendors to donate. Tax break is usually a good incentive for that :-)
>
How much care should be given to the selection of the nodes? Performance
is a funct
Hi Paul,
Do you have a FAQ which describes how to boot nodes diskless?
In the coming months I'm about to build a 16-node cluster (as soon as I've
got budget for more nodes), and the only achievement in my life so far in
the Beowulf area was getting a 2-node cluster to work *with* disks.
That's too expensive how
Kevin Ball wrote:
I have two large concerns.
One is that finding a software stack that works with the latest
interconnect products may or may not correlate well with what end users
are interested in. For some protocols (particularly MPI) this doesn't
I would only care for MPI, at least at
Patrick,
Maybe I'm just too dense to understand. But, you've basically labelled
Greg's post as spam. You've called their metric nonsense. You've
criticized the published number they used that came from your company's
product. For what?
You've provided no new data. No reference to new data.
Mike,
Mike Davis wrote:
Maybe I'm just too dense to understand. But, you've basically labelled
Greg's post as spam.
Yes, I did. Telling me about a new white paper, and about something that
I cannot know but really should, does fit my definition of spam. It was
borderline, I recognized that,
On Wed, 28 Jun 2006, Patrick Geoffray wrote:
> High message rate is good, but the question is how much is enough? At 3
> million packets per second, that's 0.3 us per message, all of which is
> used by the communication library. Can you name real-world applications
> that need to send message
This isn't really a distribution-related comment but in light of Vincent's
points I think it's appropriate. We're running diskless nodes from a single
generic root fs ramdisk which is dynamically configured at boot by a
cfengine script. Other filesystems (e.g. /usr) are mounted over NFS.
I've fo
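As an illustration of that layout (my sketch, with a made-up server name and
export path, not Paul's actual config), the fstab on a diskless node keeps the
ramdisk root and adds a read-only NFS mount for /usr, something like:

  # /etc/fstab fragment on a diskless node: /usr shared read-only over NFS
  headnode:/export/usr   /usr    nfs    ro,nolock,hard,intr   0 0
  proc                   /proc   proc   defaults              0 0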
Let me kick off with a few points; most likely many will enhance these with
more points:
a) having the driver for the network card in question compiled into the
kernel. This is by far the hardest part.
b) pdsh installed on all machines, and naming of machines in a logical manner
(see the sketch below)
c) diskless operatio
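A minimal example for point (b); the node names are my own assumption, chosen
to show why a logical naming scheme pays off with pdsh's hostlist syntax:

  # run the same command on node01 through node16 in parallel
  pdsh -w node[01-16] uptime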
Hi Joachim,
Joachim Worringen wrote:
An offer for "getting a secret white paper on request" is marketing, you
are right. But at least the SPEC number was technical content - and we
don't want to analyse every posting sentence-by-sentence, do we?
The SPEC stuff was actually fine. I didn't regi
Patrick,
Thank you for the rapid and thoughtful response,
On Wed, 2006-06-28 at 11:23, Patrick Geoffray wrote:
> Hi Kevin,
>
> Kevin Ball wrote:
> > Patrick,
> >
> >>
> >> From your flawed white papers, you compared your own results against
> >> numbers picked from the web, using older int
On Sun, Jun 25, 2006 at 03:52:27PM -0700, Robert Fogt wrote:
> Hello beowulf mailing list,
>
> How much harm does removing the computer case do? I know computer cases are
> designed for the air flow, and without them I was wondering if there will be
In theory, computer cases are designed for g
> How much harm does removing the computer case do?
For airflow it's hard to generalize - it might be better or worse than
the case which you removed. Some cases do a great job of moving air
where it needs to, and others just suck.
In terms of safety it doesn't seem like a very good idea.
In
> How much harm does removing the computer case do?
in principle, none at all.
> I know computer cases are
> designed for the air flow,
hah!
depends on the kind of case. it might be neat to run racks of 1u's
without case - essentially bladizing them. the density would be high
enough to avoi
Hi Kevin,
Kevin Ball wrote:
Patrick,
From your flawed white papers, you compared your own results against
numbers picked from the web, using older interconnect with unknown
software versions.
I have spent many hours searching to try to find application results
with newer Myrinet and Me
> - *If* you feel you need to use such a new metric for whatever reason, you
> should at least publish the benchmark that is used to gather these numbers to
> allow others to do comparative measurements. This goes to Greg.
This has been done. You can find the benchmark used for message rate
me
I would like to make a small survey here to get
a rough idea of every essential detail in a cluster
distro, because I am thinking of writing some add-on
for our linux distribution to this end.
Best,
--
Eray Ozkural (exa), PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://www.cs
Hello beowulf mailing list,
How much harm does removing the computer case do? I know computer cases are
designed for the air flow, and without them I was wondering if there will be
heating problems. My air conditioner will be enough for the amount of heat
generated, but will I need circulation
Having subscribed to this board for quite some time (a time when I was
in a Beowulf admin class with Jeff Layton, followed Greg's work on the
Legion project up the road in Charlottesville, bought our first cluster
from Doug Eadline and Paralogic, and ran into Robert Brown at Linux
Expo), I do n
Patrick Geoffray wrote:
Greg Lindahl wrote:
On Wed, Jun 28, 2006 at 07:28:53AM -0400, Patrick Geoffray wrote:
I have kept it quiet even when you were saying things driven by
marketing rather than technical considerations (the packet per
second nonsense),
Patrick, that "packet per second non
Patrick,
>
> From your flawed white papers, you compared your own results against
> numbers picked from the web, using older interconnect with unknown
> software versions.
I have spent many hours searching to try to find application results
with newer Myrinet and Mellanox interconnects. I
Vincent Diepeveen wrote:
"Microsoft is usually at the extreme of the marketing spectrum"
Is this your company's official statement about Microsoft?
I am not an officer of the company that employs me, thus I have no
official voice. I cannot sign contracts and my expression has no legal
bindin
Vincent Diepeveen wrote:
That third remark is not good marketing at all.
Because if there was really something interesting to report,
then it would already have been reported by the *official* marketing
department.
No. Marketing effort implies coordination, that's why most announcements
are emb
If the 'vendor' answer is given by someone who is 'not speaking for his
corporation', then his answer is of course completely useless with respect
to projections about the future.
Vincent
- Original Message -
From: "Patrick Geoffray" <[EMAIL PROTECTED]>
To: "Chris Dagdigian" <[EMAIL PROTECTE
"Microsoft is usually at the extreme of the marketing spectrum"
Is this your company's official statement about Microsoft?
Vincent
- speaking for DiepSoft
- Original Message -
From: "Patrick Geoffray" <[EMAIL PROTECTED]>
To: "Chris Dagdigian" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday,
That third remark is not good marketing at all.
Because if there was really something interesting to report,
then it would already have been reported by the *official* marketing
department.
Good news travels fast, and usually companies don't care much for NDAs there
and tell the good news to their bigges
Joe Landman wrote:
Greg Lindahl wrote:
On Wed, Jun 28, 2006 at 08:28:06AM -0400, Mark Hahn wrote:
the "I know something that I can't tell" bit was childish though ;)
Indeed, it was. I plead jet-lag.
No. It was good marketing. Anyone on the list not at least a little
curious what it is th
Chris,
Chris Dagdigian wrote:
In short, this was appropriate (and interesting). We've all seen vendor
spam and disguised marketing and this does not rise anywhere close to
that level.
I disagree on the level. I use the rule that a vendor should never
initiate a thread, only answer someone el
Greg Lindahl wrote:
On Wed, Jun 28, 2006 at 07:28:53AM -0400, Patrick Geoffray wrote:
I have kept it quiet even when you were saying things driven by
marketing rather than technical considerations (the packet per
second nonsense),
Patrick, that "packet per second nonsense" is the technical r
Greg Lindahl wrote:
> On Wed, Jun 28, 2006 at 08:28:06AM -0400, Mark Hahn wrote:
>
>> the "I know something that I can't tell" bit was childish though ;)
>
> Indeed, it was. I plead jet-lag.
No. It was good marketing. Anyone on the list not at least a little
curious what it is that Greg can'
On Wed, Jun 28, 2006 at 08:28:06AM -0400, Mark Hahn wrote:
> the "I know something that I can't tell" bit was childish though ;)
Indeed, it was. I plead jet-lag.
-- greg
> My $.02 of course
me too. the "I know something that I can't tell" bit was childish though ;)
On Wed, Jun 28, 2006 at 07:28:53AM -0400, Patrick Geoffray wrote:
> I have kept it quiet even when you were saying things driven by
> marketing rather than technical considerations (the packet per
> second nonsense),
Patrick, that "packet per second nonsense" is the technical reason our
intercon
... it was a short note written by a list regular with a good signal
to noise ratio. The whitepaper contents sound on-topic for many
people on this list and the "email me for a copy" is exactly what I
see myself and many other employed-by-industry types do when we want
to share something
Greg Lindahl wrote:
Second, we have a new whitepaper about performance of the Intel
Woodcrest CPU and InfiniPath interconnect on real applications, email
me for a copy.
Third, MH MHH MH. (That's the sound I make when I
can't tell you something.)
Since when is Beowulf a plac
First off, HP has posted the highest SPECfp2000 peak result for an x86
cpu, 3048. First over 3000, too. It uses a combination of the Intel
and PathScale compilers.
http://www.spec.org/cpu2000/results/res2006q2/cpu2000-20060612-06162.asc
Interestingly, this number is better than all currently publi
Good morning,
Has anyone tried using Oracle's Cluster File System 2 (OCFS2) for
HPC clusters? The webpage says it's POSIX-compliant, so I'm curious
whether it could be used for clusters and what kind of performance
it would get.
TIA!
Jeff