[Beowulf] Final Announcement: 11th Annual Beowulf Bash 9pm Nov 16 2009

2009-11-16 Thread Donald Becker
will be a short greeting by the sponsors about 10pm. Try to be there by then. Again: Monday, November 16, 2009, 9-11pm (immediately after the SC09 Opening Gala), The Game, at the Rose Quarter (close to the Convention Center) -- Donald Becker bec...@scyld.com Penguin

Re: [Beowulf] BProc

2009-07-31 Thread Donald Becker
't mean "the only way to do it". There is much more I could write about benefits, trade-offs, and implementation details. Is there a specific area that you wanted to know about? -- Donald Becker bec...@scyld.com Penguin Computing / Scyld Software

RE: [Beowulf] Rackable / SGI

2009-04-02 Thread Donald Becker
, or not put into production because of internal politics. In the mid-1990s SGI was a company that didn't realize that a team that could deliver working systems was rare and precious. A few people here will remember that disaster of SGI trying to run Cray. The culture clash couldn't have bee

RE: [Beowulf] Interesting google server design

2009-04-02 Thread Donald Becker
least one intermediate stage, say 12V. And if we are close, we might as well regulate it well and use it for the disk drives. Hmmm, aren't we back at pretty much the standard design? -- Donald Becker bec...@scyld

Re: [Beowulf] GPU diagnostics?

2009-03-30 Thread Donald Becker
ction lines that can't fit a longer burn-in into the process. A production line with pre-imaged OS installations pretty much cannot do a full burn-in. -- Donald Becker bec...@scyld.com Penguin Computing / Scyld Software www.penguincomputing.com www.scyl

Re: [Beowulf] cli alternative to cluster top?

2008-11-30 Thread Donald Becker
established to the user-level daemon, we optimize by having some of the communication handled by a kernel module that shares the socket. The optimization isn't needed for 'only' hundreds of nodes and processes, or if you are willing to dedicate most of a very powerful head node

Re: [Beowulf] cli alternative to cluster top?

2008-11-30 Thread Donald Becker
ince the Linux 2.2 kernel and over multiple architectures. We've been producing supported commercial releases since 2000, longer than anyone else in the business. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com

[Beowulf] Final Announcement: 10th Annual Beowulf Bash 9pm Nov 17 2008

2008-11-17 Thread Donald Becker
sponsor) http://xandmarketing.com NVIDIA http://nvidia.com Terascala http://terascala.com/ Clustermonkey http://clustermonkey.net/ ClusterCorp http://clustercorp.com/ -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyl

[Beowulf] 10th Annual Beowulf Bash: Austin TX Nov 17 2008 9pm

2008-11-14 Thread Donald Becker
guin/Scyld (organizing sponsor) http://penguincomputing.com XAND Marketing (organizing sponsor) http://xandmarketing.com NVIDIA http://nvidia.com Terascala http://www.terascala.com/ Panasas http://www.panasas.com/ Clustermonkey http://www.clustermo

[Beowulf] 10th Annual Beowulf Bash: Announcement and sponsorship opportunity

2008-10-31 Thread Donald Becker
n the middle of the party. - have the opportunity for technical, hands-on demos at the Bash - will have their logos on the beowulf.org BeoBash 2008 web pages and on the 2008 yearbook page -- Donald Becker Never send mail to [EMAIL PROTECTED] Penguin Computing / Scyl

Re: [Beowulf] Beowulf bash at SC'08 ?

2008-10-31 Thread Donald Becker
On Fri, 31 Oct 2008, Chris Samuel wrote: > Just a couple of weeks to go and I was wondering if > there were plans to do another Beowulf bash at SC this > year in Austin ? Ooopsss, I missed sending out the preliminary announcement to the list. Look for it in my next message. -

Re: [Beowulf] Compute Node OS on Local Disk vs. Ram Disk

2008-10-01 Thread Donald Becker
On Wed, 1 Oct 2008, Bogdan Costescu wrote: > On Tue, 30 Sep 2008, Donald Becker wrote: > > Ahhh, your first flawed assumption. > > You believe that the OS needs to be statically provisioned to the nodes. > > That is incorrect. > Well, you also make the flawed assumption

Re: [Beowulf] precise synchronization of system clocks

2008-09-30 Thread Donald Becker
. Or just skip ahead and read about Purdue PAPERS -- figure out why it was very appealing but failed. Meanwhile, I'll try to find out where I can plug a serial cable into a modern server... -- Donald Becker [EMAIL PROTECTED] Penguin Computi

Re: [Beowulf] Compute Node OS on Local Disk vs. Ram Disk

2008-09-30 Thread Donald Becker
with a SCSI layer, can take many seconds. Once you detect a disk it takes a bunch of slow seeks to read the partition table and mount a modern file system (not EXT2). So trimming the system initialization time further isn't a priority until after the file system and IB init times are

Re: [Beowulf] precise synchronization of system clocks

2008-09-30 Thread Donald Becker
k jobs), rather than re-think the need for running them at all. IIRC, it was companies such as Octiga Bay that actually implemented global-clock gang scheduling of system daemons, again with a network that implemented global synchronization operations. -- Donald Becker

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-28 Thread Donald Becker
On Fri, 26 Sep 2008, Robert G. Brown wrote: > On Fri, 26 Sep 2008, Donald Becker wrote: > > > But that rule doesn't continue when we move to higher core counts. We > > still want a little observability, but a number for each of a zillion > > cores is useless

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-26 Thread Donald Becker
won't obviously break when we have 64 cores in each of 4 sockets. But it really just shifts the re-design work to the tools and applications. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-24 Thread Donald Becker
On Wed, 24 Sep 2008, Robert G. Brown wrote: > On Tue, 23 Sep 2008, Donald Becker wrote: > > >> XML is (IMO) good, not bad. > > > > I have so much to write on this topic, I'll take the first pot shot at RGB > > ;-) > > > > XML is evil. Well,

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-24 Thread Donald Becker
of them skyrockets and it becomes easier > >>> which is to stick with the status quo :( That's true for the "strip down a distribution" approach. You have to repeat the work for each new release. And test it, since an updated release might change to rely up

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread Donald Becker
traffic, the whole cluster-wide app stops. Next posting: how the app itself can be the cause of slow-downs, and why cluster-specific nameservices and library/executable memory "wire-downs" solve problems. -- Donald Becker [EMAIL PROTECTED] Penguin Computing /

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread Donald Becker
t or broadcast) to a list of (presumably unicast) destinations. This adds a trivial amount of packet construction overhead and code complexity, while giving us many of the advantage of multicast. We can have multiple head nodes that understand the clust
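The unicast-replication idea sketched in this snippet — sending one packet to an explicit list of unicast destinations instead of relying on multicast — can be illustrated with a minimal UDP example. This is a hedged sketch, not Scyld's actual code: the function name `replicate` and the UDP transport are assumptions.

```python
import socket

def replicate(payload: bytes, destinations):
    """Emulate a broadcast/multicast send on a switched network by
    replicating one payload to an explicit list of unicast
    (host, port) destinations. The added cost is a trivial send
    loop, and the destination list can include multiple head nodes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for dest in destinations:
            sock.sendto(payload, dest)
    finally:
        sock.close()
```

Because the sender controls the list, the same mechanism works over interconnects with no multicast support at all.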

Re[2]: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-19 Thread Donald Becker
ed. Using multicast was an excellent solution in the days of Ethernet coax and repeaters. When using a repeater all machines see all traffic anyway, and older NICs treated all packets individually. Today clusters use switched Ethernet or specialized interconnects (Infiniband, Myri

Re: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-09 Thread Donald Becker
ipped integrated PVFS starting with our second release, including funding the PVFS guys to make it easy to configure. Over the years we have included a few others, but preconfigured support for advanced distributed and cluster file systems hasn't justified the effort and cost. We now

Re: [Beowulf] Performance metrics & reporting

2008-04-08 Thread Donald Becker
eds older values (very few do) it can pick a logging, summarization and coalescing approach of its own. We made many other innovative architectural decisions when designing the system, such as publishing the stats as a read-only shared memory version. But these are less
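The read-only shared-memory publication mentioned here can be sketched with a file-backed mapping. This is an illustration only: the field names, layout, and function names below are invented, not the actual Scyld stats format.

```python
import mmap
import struct

# Hypothetical fixed-layout snapshot: jobs_run, packets, load_average.
STATS_FMT = "=QQd"

def publish_stats(path, jobs, packets, load):
    """Writer side: dump the latest stats snapshot into the backing file."""
    with open(path, "wb") as f:
        f.write(struct.pack(STATS_FMT, jobs, packets, load))

def read_stats(path):
    """Reader side: map the file read-only, so consumers can poll the
    current values cheaply and cannot corrupt the published stats."""
    size = struct.calcsize(STATS_FMT)
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ)
        try:
            return struct.unpack(STATS_FMT, mm[:size])
        finally:
            mm.close()
```

The design point is that readers get a consistent, write-protected view without issuing a request to the stats daemon for every lookup.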

Re: [Beowulf] CSharifi Next generation of HPC

2007-12-04 Thread Donald Becker
orts take a long time to tune for NVM, and almost all end up using NVM as a stylized message passing mechanism. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD and San Francisc

Re: [Beowulf] Question about list

2007-11-21 Thread Donald Becker
likely to get your "always moderate" box checked. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD and San Francisco CA

[Beowulf] Beowulf Bash in Reno tomorrow (Tuesday) night

2007-11-12 Thread Donald Becker
s and door prizes at 7pm. Third Street Blues, 125 W 3rd St, Reno, NV 89501, 775-323-5005. The Third Street Blues is behind the El Dorado Casino parking lot, with on-street parking also available. Sponsors [[ Include logos ]]: Penguin Computing / Scyld Software, AMD, Intel, Nvidia, Terascala -- Don

[Beowulf] Corrected Beowulf Bash announcement

2007-10-30 Thread Donald Becker
-- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD and San Francisco CA ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription

[Beowulf] 9th Annual Beowulf Bash: Announcement and final sponsorship opportunity

2007-10-30 Thread Donald Becker
bring a sign, there will be some ambient light.) - are part of the brief greeting in the middle of the party. - have the opportunity for technical, hands-on demos at the bash - have their logos on the beowulf.org site through the end of SC07, and on the 2007 yearbook page after the eve

Re: [Beowulf] Old versions of Linux

2007-10-28 Thread Donald Becker
new machines. Most new motherboards have devices that you'll need a 2.6 kernel for. And not an old kernel, a pretty recent one. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com

[Beowulf] Reminder BWBUG meeting today, Tuesday Oct 9

2007-10-09 Thread Donald Becker
Speaker: Donald Becker, CTO of Scyld Software and Penguin Computing Host: Michael Fitzmaurice I'll be talking about name services and process creation mechanisms for clusters. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Sof

[Beowulf] BWBUG / Beowulf Users Group meeting Oct 9 2007 at Georgetown

2007-10-05 Thread Donald Becker
Baltimore Washington Beowulf User Group Meeting Date: 9 Oct 2007, 2:30 - 5:00 pm. Location: Georgetown University at Whitehaven Street 3300 Whitehaven Street, Washington DC 20007 Speaker: Donald Becker, CTO of Scyld Software and Penguin Computing Host: Michael Fitzmaurice

[Beowulf] 9th Annual Beowulf Bash: Announcement and sponsorship opportunity

2007-10-05 Thread Donald Becker
the end of SC07, and on the 2007 yearbook page after the event. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD and San Francisco CA

Re: [Beowulf] power usage, Intel 5160 vs. AMD 2216

2007-07-12 Thread Donald Becker
sign for that long-running job that stays at the peak power draw and thermal state of the cluster. -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD and San Francisco CA

[Beowulf] Reminder -- No BayBUG meeting this month

2007-06-19 Thread Donald Becker
Just a reminder that there is no west coast BayBUG meeting this month. Stay tuned (or volunteer!) for the fall speaker schedule... -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincomputing.com www.scyld.com Annapolis MD

[Beowulf] BayBUG meeting tomorrow April 17, 2007 in Sunnyvale

2007-04-16 Thread Donald Becker
Please join moderator and Beowulf cluster co-inventor Donald Becker on Tuesday, April 17 for the next Bay Area Beowulf Users Group (BayBUG): Bay Area Beowulf User Group (BayBUG) April 17, 2007 2:30 - 5:00 p.m. AMD headquarters Common Building, Room C-6/7/8 991 Stewart Drive, Sunnyvale There

[Beowulf] Survey on beowulf.org -- and a drawing for a video iPod

2007-04-10 Thread Donald Becker
php As an incentive, legitimate surveys (no automated submissions please!) completed by April 20 will be entered into a drawing for the gadget du jour -- a video iPod. Good luck -- Donald Becker [EMAIL PROTECTED] Penguin Computing / Scyld Software www.penguincom

[Beowulf] Rescheduled BWBUG meeting tomorrow, Feb 27 2007

2007-02-26 Thread Donald Becker
: 27 Feb 2007, 2:30 - 5:00 pm. Location: Georgetown University at Whitehaven Street 3300 Whitehaven Street, Washington DC 20007 Speaker: Donald Becker, CTO of Scyld Software / Penguin Computing Host: Michael Fitzmaurice Here is the announcement from Mike: February 27th 2:30 to 5:00 PM

[Beowulf] BayBUG meeting today, February 20 2007 in Sunnyvale CA

2007-02-20 Thread Donald Becker
performance models, including the model commonly known as Gustafson's Law or Scaled Speedup. John received his B.S. degree from Caltech and his M.S. and Ph.D. degrees from Iowa State University, all in Applied Mathematics. -- Donald Becker [EMAIL P

[Beowulf] BWBUG: Weather DELAY -- Today's BWBUG meeting rescheduled to Feb 27th at Georgetown university

2007-02-13 Thread Donald Becker
al-in information on the day of the meeting. I'll see you in two weeks! -- Donald Becker [EMAIL PROTECTED] Scyld Software Scyld Beowulf cluster systems 914 Bay Ridge Road, Suite 220 www.scyld.com Annapolis MD 21403 41

[Beowulf] BayBUG 2007 meeting plans

2007-01-15 Thread Donald Becker
ineer from Cluster Resources Inc [black bold] Join moderator and Beowulf cluster co-inventor Donald Becker for food and drinks and to learn from and network with other Linux HPC professionals." -- Donald Becker [EMAIL PROTECTED] Scyld Software

Re: [Beowulf] SATA II - PXE+NFS - diskless compute nodes

2006-12-14 Thread Donald Becker
vice has its own server IP address and port. We could point them to replicated masters or other file servers. They just all point to the single master to keep things simple. For reliability we can have cold, warm or hot spare masters. But again, it's less complex to adminis

Re: [Beowulf] SATA II - PXE+NFS - diskless compute nodes

2006-12-14 Thread Donald Becker
On Thu, 14 Dec 2006, Simon Kelley wrote: > Donald Becker wrote: > > It should repeat this: forking a dozen processes sounds like a good idea. > > Thinking about forking a thousand (we plan every element to scale to "at > > least 1000") makes "1" seem l

Re: [Beowulf] SATA II - PXE+NFS - diskless compute nodes

2006-12-13 Thread Donald Becker
On Wed, 13 Dec 2006, Simon Kelley wrote: > Donald Becker wrote: > > On Tue, 12 Dec 2006, Simon Kelley wrote: > >> Joe Landman wrote: > >>>>> I would hazard that any DHCP/PXE type install server would struggle > >>>>> with 2000 requests (yes-

Re: [Beowulf] SATA II - PXE+NFS - diskless compute nodes

2006-12-12 Thread Donald Becker
ch the kernel. And you want the logging to happen as the very first thing after booting the kernel. (Boot kernel, load network driver, DHCP for loghost, dump kernel message, only then activate additional hardware and do other risky things.) -- Donald Becker [EMA

RE: [Beowulf] SATA II - PXE+NFS - diskless compute nodes

2006-12-12 Thread Donald Becker
a large number of clients is no problem. > Cheers, > Simon.

Re: [Beowulf] onboard Gb lan: any opinion, sugestion or impression?

2006-11-14 Thread Donald Becker
ot in advance, > Sincerely yours, > Jones -

Re: [Beowulf] SC'06 Beowulf Bash ?

2006-11-01 Thread Donald Becker
more always better?" I'm pretty certain that we can find speakers for that. BTW, We are still accepting sponsors. (More than "accepting", how 'bout "would be really happy to have a few more".) Contact "suzie" at beowulf dot org if you or someo

[Beowulf] Sorry, no webcast for the BayBUG meeting today. BWBUG webcast as usual

2006-10-17 Thread Donald Becker
ce Suite 900 Herndon, Virginia 20170 Webcast info at http://www.bwbug.org -- Donald Becker [EMAIL PROTECTED] Scyld Software Scyld Beowulf cluster systems 914 Bay Ridge Road, Suite 220 www.scyld.com Annapolis MD 21403

[Beowulf] Reminder: Today is "Beowulf User Group Tuesday!"

2006-10-17 Thread Donald Becker
the BayBUG please RSVP to [EMAIL PROTECTED] so that we have an approximate count. -- Bay Area Beowulf User Group (BayBUG) 2:30 - 5:00 p.m. 1st Meeting -- October 17, 2006 AMD headquarters Common Building, Room C-6/7/8 991 Stewart Drive, Sunnyvale Join moderator and Beowulf cluster co-inve

[Beowulf] Silicon Valley / S-F Bay Beowulf User Group (BayBUG) first meeting

2006-10-10 Thread Donald Becker
first meeting, please spread this announcement as widely as possible! -- Bay Area Beowulf User Group (BayBUG) 2:30 - 5:00 p.m. 1st Meeting -- October 17, 2006 AMD headquarters Common Building, Room C-6/7/8 991 Stewart Drive, Sunnyvale Join moderator and Beowulf cluster co-inventor Donald Beck

[Beowulf] 8th Annual Beowulf Bash sponsorship opportunity

2006-10-10 Thread Donald Becker
h was held at Mchoney's Restaurant, Pittsburgh. It was sponsored by Penguin Computing, AMD and Scyld Software and featured a 10th birthday party cake. SC2005 Seattle, Gordon Biersch Brewery, sponsored by Penguin Computing, Scyld Software, AMD & HPCwire.

[Beowulf] BWBUG: Meeting Aug 8 2006 on 'Scali Cluster Management' at the iLab

2006-08-07 Thread Donald Becker
From: Michael Fitzmaurice <[EMAIL PROTECTED]> Date: August 8, 2006 Time: 2:30 PM to 5:00 PM Location: iLab 460 Springpark Place Suite 900 Herndon, Virginia 20170 Speakers: Ragnar Kjorstad, Lead Engineer from Scali Abstract: Scali will be presenting a technical session on

[Beowulf] Webinar on Linux cluster architecture -- live today, June 20

2006-06-20 Thread Donald Becker
Penguin Computing invites you to join our next Webinar on Linux Cluster Architecture. Pauline Nist and I (Donald Becker) will explain how Linux clustering can cost-effectively optimize usage levels while simplifying cluster operation. Scyld's cluster architecture eliminates complex

[Beowulf] BWBUG: Meeting / web cast today at 2:45 at Radio Free Asia

2006-05-09 Thread Donald Becker
This is a reminder that the BWBUG meeting will be today at 2:45 at the Radio Free Asia location. The talk by Crosswalk on their new iGrid solution for very fast data I/O will be well worth your time. If you would like to participate in the Web Cast please send an email to our host, Michael Fitzm

Re: [Beowulf] anyone coming to LinuxWorld Boston?

2006-03-22 Thread Donald Becker
hnology introductions (this one is just "scheduled progress") and thus will have a smaller booth than we have had in the past. -- Donald Becker [EMAIL PROTECTED] Scyld Software Scyld Beowulf cluster systems 914 Bay Ridge Road, Su

Re: [Beowulf] Re: Cluster newbie, power recommendations

2006-03-21 Thread Donald Becker
ntly has overhead, and it's still pretty noticeable with the current implementations. -- Donald Becker [EMAIL PROTECTED] Scyld Software Scyld Beowulf cluster systems 914 Bay Ridge Road, Suite 220

[Beowulf] Apologies for the spam/virus yesterday

2006-02-08 Thread Donald Becker
or. The bottom line is that we are considering a message board format to replace the mailing list. It would require logins to post, and retroactive moderation to delete advertising and trolls. Any opinions? -- Donald Becker [EMAIL PROTECTED] Scyld Software

Re: [Beowulf] distributions

2006-02-08 Thread Donald Becker
Kernel being used on the slaves turns out to be exploitable, > update the master, reboot the nodes. Problem solved. The goal of clustering is to create a unified system out of independent pieces -- creating the illusion of a single system. The more you have to administer machines

Re: [Beowulf] distributions

2006-02-06 Thread Donald Becker
ems to work. (Philosophy: You mount file systems as needed for application data, not for the underlying system.) But to be fast, efficient and effective the ramdisk can't just be a stripped-down full distribution. You need a small 'init' system and a dy

Re: [Beowulf] Fwd: NIS limitations question

2006-02-06 Thread Donald Becker
? Or up but not responding right now? I've seen systems that use NSCD, the Name Service Caching Daemon. It's another "it seems to work for me, at least today" solution. Like most caching systems, it reduces traffic in the common case. But it doesn't
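The stale-entry behavior this snippet attributes to NSCD-style caches can be shown with a minimal time-to-live cache. This is a hedged sketch: the class name `TTLCache` and its API are invented for illustration and have nothing to do with NSCD's actual implementation.

```python
import time

class TTLCache:
    """Minimal NSCD-style lookup cache: serves hits from memory until an
    entry's time-to-live expires. It cuts traffic in the common case, but
    a host that changed state keeps resolving to the stale cached answer
    until the entry ages out."""

    def __init__(self, ttl_seconds, lookup, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.lookup = lookup      # the real (slow) name-service query
        self.clock = clock        # injectable clock, handy for testing
        self._entries = {}        # name -> (value, expiry_time)

    def get(self, name):
        entry = self._entries.get(name)
        now = self.clock()
        if entry is not None and entry[1] > now:
            return entry[0]       # cache hit: no real query generated
        value = self.lookup(name) # cache miss or expired: one real query
        self._entries[name] = (value, now + self.ttl)
        return value
```

A cluster-specific name service can avoid this trade-off by pushing invalidations instead of waiting for TTLs to expire, at the cost of extra protocol machinery.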