will be a short greeting by the sponsors about 10pm
Try to be there by then.
Again:
Monday, November 16, 2009, 9-11pm (Immediately after the SC09 Opening
Gala)
The Game, at the Rose Quarter (Close to the Convention Center)
--
Donald Becker bec...@scyld.com
Penguin
't mean "the only way to do it".
There is much more I could write about benefits, trade-offs, and
implementation details. Is there a specific area that you wanted to know
about?
, or not put
into production because of internal politics. In the mid-1990s SGI was a
company that didn't realize that a team that could deliver working systems
was rare and precious.
A few people here will remember the disaster of SGI trying to run Cray.
The culture clash couldn't have bee
least one intermediate stage, say 12V. And
if we are close, we might as well regulate it well and use it for the disk
drives. Hmmm, aren't we back at pretty much the standard design?
ction lines that can't fit a longer
burn-in into the process. A production line with pre-imaged OS
installations pretty much cannot do a full burn-in.
established to the user-level daemon, we optimize by
having some of the communication handled by a kernel module that shares
the socket. The optimization isn't needed for 'only' hundreds of nodes
and processes, or if you are willing to dedicate most of a very powerful
head node
ince the Linux 2.2 kernel and over multiple
architectures. We've been producing supported commercial releases
since 2000, longer than anyone else in the business.
sponsor) http://xandmarketing.com
NVIDIA        http://nvidia.com
Terascala http://terascala.com/
Clustermonkey http://clustermonkey.net/
ClusterCorp http://clustercorp.com/
Penguin/Scyld (organizing sponsor)  http://penguincomputing.com
XAND Marketing (organizing sponsor) http://xandmarketing.com
NVIDIA        http://nvidia.com
Terascala http://www.terascala.com/
Panasas http://www.panasas.com/
Clustermonkey http://www.clustermo
n the middle of the party.
- have the opportunity for technical, hands-on demos at the Bash
- will have their logos on the beowulf.org BeoBash 2008 web pages
and on the 2008 yearbook page
On Fri, 31 Oct 2008, Chris Samuel wrote:
> Just a couple of weeks to go and I was wondering if
> there were plans to do another Beowulf bash at SC this
> year in Austin ?
Ooops, I missed sending out the preliminary announcement to the list.
Look for it in my next message.
-
On Wed, 1 Oct 2008, Bogdan Costescu wrote:
> On Tue, 30 Sep 2008, Donald Becker wrote:
> > Ahhh, your first flawed assumption.
> > You believe that the OS needs to be statically provisioned to the nodes.
> > That is incorrect.
> Well, you also make the flawed assumption
. Or just skip ahead and read about Purdue PAPERS -- figure
out why it was very appealing but failed.
Meanwhile, I'll try to find out where I can plug a serial cable into a
modern server...
with a SCSI layer, can
take many seconds. Once you detect a disk it takes a bunch of slow seeks
to read the partition table and mount a modern file system (not EXT2).
So trimming the system initialization time further isn't a priority until
after the file system and IB init times are
k jobs), rather than re-think the need for running them at
all. IIRC, it was companies such as Octiga Bay that actually implemented
global-clock gang scheduling of system daemons, again with a network that
implemented global synchronization operations.
On Fri, 26 Sep 2008, Robert G. Brown wrote:
> On Fri, 26 Sep 2008, Donald Becker wrote:
>
> > But that rule doesn't continue when we move to higher core counts. We
> > still want a little observability, but a number for each of a zillion
> > cores is useless
won't obviously break when we have 64 cores in each of 4
sockets. But it really just shifts the re-design work to the tools and
applications.
On Wed, 24 Sep 2008, Robert G. Brown wrote:
> On Tue, 23 Sep 2008, Donald Becker wrote:
>
> >> XML is (IMO) good, not bad.
> >
> > I have so much to write on this topic, I'll take the first pot shot at RGB
> > ;-)
> >
> > XML is evil. Well,
of them skyrockets and it becomes easier
> >>> which is to stick with the status quo :(
That's true for the "strip down a distribution" approach. You have to
repeat the work for each new release. And test it, since an
updated release might change to rely up
traffic, the whole cluster-wide app stops.
Next posting: how the app itself can be the cause of slow-downs, and why
cluster-specific name services and library/executable memory "wire-downs"
solve those problems.
t or
broadcast) to a list of (presumably unicast) destinations. This adds a
trivial amount of packet construction overhead and code complexity, while
giving us many of the advantages of multicast. We can have multiple head
nodes that understand the clust
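As a rough illustration of that fan-out idea, here is a minimal Python sketch (not Scyld's actual code; the port number and host list are invented for the example) that resends a single UDP payload to a list of unicast destinations, one small send per node:

```python
import socket

def fanout_send(payload, destinations, port=5000):
    """Send one UDP payload to each (unicast) destination host."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    try:
        for host in destinations:
            sock.sendto(payload, (host, port))  # one small unicast per node
            sent += 1
    finally:
        sock.close()
    return sent
```

The per-packet construction cost is a few bytes of header per destination, which is the "trivial amount of overhead" mentioned above.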
ed. Using multicast was an
excellent solution in the days of Ethernet coax and repeaters. When using
a repeater all machines see all traffic anyway, and older NICs treated all
packets individually. Today clusters use switched Ethernet or specialized
interconnects (Infiniband, Myri
ipped integrated PVFS starting with our second release, including
funding the PVFS guys to make it easy to configure. Over the years we
have included a few others, but preconfigured support for advanced
distributed and cluster file systems hasn't justified the effort and cost.
We now
eds older values (very few do) it can pick a
logging, summarization and coalescing approach of its own.
We made many other innovative architectural decisions when designing the
system, such as publishing the stats as a read-only shared memory version.
But these are less
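A minimal sketch of the read-only shared-memory idea, assuming a simple fixed-format record (the field layout here is invented, not Scyld's): the writer overwrites a file in place, and readers map it with read-only access so they can never corrupt the published stats:

```python
import mmap
import struct

# Hypothetical record layout: sample counter, load average, free memory (KB)
RECORD = struct.Struct("<QdQ")

def create(path):
    """Writer side: create the stats file at its fixed size."""
    with open(path, "wb") as f:
        f.write(b"\0" * RECORD.size)

def publish(path, counter, load, free_kb):
    """Writer side: overwrite the single record in place."""
    with open(path, "r+b") as f:
        f.write(RECORD.pack(counter, load, free_kb))

def read_stats(path):
    """Reader side: map the file read-only; readers cannot modify it."""
    with open(path, "rb") as f:
        m = mmap.mmap(f.fileno(), RECORD.size, access=mmap.ACCESS_READ)
        try:
            return RECORD.unpack(m[:RECORD.size])
        finally:
            m.close()
```

A fixed-size, overwrite-in-place record keeps the reader's cost constant no matter how often stats are updated.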
orts take a long time to tune for NVM,
and almost all end up using NVM as a stylized message passing mechanism.
likely
to get your "always moderate" box checked.
--
Donald Becker [EMAIL PROTECTED]
Penguin Computing / Scyld Software
www.penguincomputing.com / www.scyld.com
Annapolis MD and San Francisco CA
s and door prizes at 7pm
Third Street Blues
125 W 3rd St, Reno, NV 89501
775-323-5005
The Third Street Blues is behind the El Dorado Casino parking lot, with
on-street parking also available.
Sponsors [[ Include logos ]]
Penguin Computing / Scyld Software
AMD
Intel
Nvidia
Terascala
--
Don
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription
bring a sign, there will be some ambient light.)
- are part of the brief greeting in the middle of the party.
- have the opportunity for technical, hands-on demos at the bash
- have their logos on the beowulf.org site through the end of SC07,
and on the 2007 yearbook page after the eve
new machines. Most
new motherboards have devices that you'll need a 2.6 kernel for. And
not an old kernel, a pretty recent one.
Speaker: Donald Becker, CTO of Scyld Software and Penguin Computing
Host: Michael Fitzmaurice
I'll be talking about name services and process creation mechanisms for
clusters.
Baltimore Washington Beowulf User Group Meeting
Date: 9 Oct 2007 at 2:30 pm - 5:00pm.
Location: Georgetown University at Whitehaven Street
3300 Whitehaven Street, Washington DC 20007
Speaker: Donald Becker, CTO of Scyld Software and Penguin Computing
Host: Michael Fitzmaurice
sign for that
long-running job that stays at the peak power draw and thermal state of
the cluster.
Just a reminder that there is no west coast BayBUG meeting this month.
Stay tuned (or volunteer!) for the fall speaker schedule...
Please join moderator and Beowulf cluster co-inventor Donald Becker on
Tuesday, April 17 for the next Bay Area Beowulf Users Group (BayBUG):
Bay Area Beowulf User Group (BayBUG)
April 17, 2007
2:30 - 5:00 p.m.
AMD headquarters Common Building,
Room C-6/7/8
991 Stewart Drive, Sunnyvale
There
php
As an incentive, legitimate surveys (no automated submissions please!)
completed by April 20 will be entered into a drawing for the gadget du
jour -- a video iPod.
Good luck
: 27 Feb 2007 at 2:30 pm - 5:00pm.
Location: Georgetown University at Whitehaven Street
3300 Whitehaven Street, Washington DC 20007
Speaker: Donald Becker, CTO of Scyld Software / Penguin Computing
Host: Michael Fitzmaurice
Here is the announcement from Mike:
February 27th 2:30 to 5:00 PM
performance models,
including the model commonly known as Gustafson's Law or Scaled
Speedup. John received his B.S. degree from Caltech and his M.S. and
Ph.D. degrees from Iowa State University, all in Applied Mathematics.
al-in
information on the day of the meeting.
I'll see you in two weeks!
--
Donald Becker [EMAIL PROTECTED]
Scyld Software Scyld Beowulf cluster systems
914 Bay Ridge Road, Suite 220 www.scyld.com
Annapolis MD 21403 41
ineer from Cluster Resources Inc [black bold]
Join moderator and Beowulf cluster co-inventor Donald Becker for food and
drinks and to learn from and network with other Linux HPC professionals."
vice has its own server IP address and port.
We could point them to replicated masters or other file servers. They
just all point to the single master to keep things simple. For
reliability we can have cold, warm or hot spare masters. But again,
it's less complex to adminis
On Thu, 14 Dec 2006, Simon Kelley wrote:
> Donald Becker wrote:
> > I should repeat this: forking a dozen processes sounds like a good idea.
> > Thinking about forking a thousand (we plan every element to scale to "at
> > least 1000") makes "1" seem l
On Wed, 13 Dec 2006, Simon Kelley wrote:
> Donald Becker wrote:
> > On Tue, 12 Dec 2006, Simon Kelley wrote:
> >> Joe Landman wrote:
> >>>>> I would hazard that any DHCP/PXE type install server would struggle
> >>>>> with 2000 requests (yes-
ch the kernel. And you
want the logging to happen as the very first thing after booting the
kernel. (Boot kernel, load network driver, DHCP for loghost, dump kernel
message, only then activate additional hardware and do other risky
things.)
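The boot order described above (get the network up, learn a loghost, ship the kernel messages, and only then touch risky hardware) might look like this in outline. This is a sketch only: the BSD-syslog framing is standard, but the function names and the loghost parameter are hypothetical, and a real early-boot environment would get the loghost address from DHCP:

```python
import socket

KERN_INFO = 6  # PRI = facility 0 (kern) * 8 + severity 6 (informational)

def syslog_packet(msg, pri=KERN_INFO, tag="kernel"):
    """Frame one line as a minimal BSD-syslog payload: <PRI>tag: msg."""
    return ("<%d>%s: %s" % (pri, tag, msg)).encode()

def ship_dmesg(lines, loghost, port=514):
    """Send each kernel message line to the loghost over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for line in lines:
            sock.sendto(syslog_packet(line), (loghost, port))
    finally:
        sock.close()
```

UDP is deliberate here: there is no connection state to lose if the node dies mid-boot, which is exactly when you want these messages most.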
a large number of clients is no problem.
>
>
> Cheers,
>
> Simon.
ot in advance,
>
> Sincerally yours,
>
> Jones
-
more always better?"
I'm pretty certain that we can find speakers for that.
BTW, We are still accepting sponsors. (More than "accepting", how 'bout
"would be really happy to have a few more".) Contact "suzie" at beowulf
dot org if you or someo
ce Suite 900 Herndon, Virginia 20170
Webcast info at http://www.bwbug.org
the BayBUG please RSVP to [EMAIL PROTECTED] so that we have an
approximate count.
--
Bay Area Beowulf User Group (BayBUG)
2:30 - 5:00 p.m.
1st Meeting -- October 17, 2006
AMD headquarters Common Building,
Room C-6/7/8
991 Stewart Drive, Sunnyvale
Join moderator and Beowulf cluster co-inve
first meeting, please spread this announcement as
widely as possible!
--
Bay Area Beowulf User Group (BayBUG)
2:30 - 5:00 p.m.
1st Meeting -- October 17, 2006
AMD headquarters Common Building,
Room C-6/7/8
991 Stewart Drive, Sunnyvale
Join moderator and Beowulf cluster co-inventor Donald Beck
h was held at the Mchoney's Restaurant, Pittsburgh. It was
sponsored by Penguin Computing, AMD and Scyld Software and featured a
10th birthday party cake.
SC2005
Seattle, Gordon Biersch Brewery, sponsored by Penguin Computing, Scyld
Software, AMD & HPCwire.
From: Michael Fitzmaurice <[EMAIL PROTECTED]>
Date:
August 8, 2006
Time:
2:30 PM to 5:00 PM
Location:
iLab 460 Springpark Place Suite 900 Herndon, Virginia 20170
Speakers:
Ragnar Kjorstad, Lead Engineer from Scali
Abstract:
Scali will be presenting a technical session on
Penguin Computing invites you to join our next Webinar on Linux Cluster
Architecture.
Pauline Nist and I (Donald Becker) will explain how Linux clustering can
cost-effectively optimize usage levels while simplifying cluster
operation. Scyld's cluster architecture eliminates complex
This is a reminder that the BWBUG meeting will be today at 2:45 at the
Radio Free Asia location. The talk by Crosswalk on their new iGrid
solution for very fast data I/O will be well worth your time.
If you would like to participate in the Web Cast please send an email
to our host, Michael Fitzm
hnology introductions (this one is just
"scheduled progress") and thus will have a smaller booth than we have had
in the past.
ntly has overhead, and it's still pretty noticeable
with the current implementations.
or.
The bottom line is that we are considering a message board format to
replace the mailing list. It would require logins to post, and
retroactive moderation to delete advertising and trolls.
Any opinions?
Kernel being used on the slaves turns out to be exploitable,
> update the master, reboot the nodes. Problem solved.
The goal of clustering is to create a unified system out of independent
pieces -- creating the illusion of a single system. The more you have to
administer machines
ems to work. (Philosophy: You mount file
systems as needed for application data, not for the underlying system.)
But to be fast, efficient and effective the ramdisk can't just be a
stripped-down full distribution. You need a small 'init' system and
a dy
? Or up but not responding right now?
I've seen systems that use NSCD, the Name Service Caching Daemon.
It's another "it seems to work for me, at least today" solution. Like
most caching systems, it reduces traffic in the common case. But it
doesn't
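To make that caching trade-off concrete, here is a toy TTL cache in Python (purely illustrative, not how NSCD is implemented): repeat lookups are cheap, but a stale answer keeps being served until the TTL expires, which is exactly the window where a down node can still be cached as up.

```python
import time

class TTLCache:
    """Cache lookup results, serving them until a time-to-live expires."""

    def __init__(self, lookup, ttl=5.0):
        self.lookup = lookup      # the real (expensive) name lookup
        self.ttl = ttl            # seconds a cached answer stays valid
        self._data = {}           # name -> (value, timestamp)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        hit = self._data.get(name)
        if hit and now - hit[1] < self.ttl:
            return hit[0]             # cached (possibly stale!) answer
        value = self.lookup(name)     # miss or expired: ask the source
        self._data[name] = (value, now)
        return value
```

The common case gets faster, but nothing in the cache knows whether the cached node is actually responding right now.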