Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Robert G. Brown
On Tue, 15 Aug 2006, Mike Davis wrote: Mark Hahn wrote: huh? what value does big-A have to add here? the correct queueing system is the one that is cheap, low-maintenance, efficient, easy to use, etc. those are things that users and sysadmins know, not behind-desk-sitters... Difference of

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Joe Landman
Mike Davis wrote: > I'm not 100% sure about that Mark. I care about big-A administration. I > care about showing departments what resources are actually available. I > care about what is the most efficient use of limited University > resources. When I meet with researchers they often say that the

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mike Davis
Mark Hahn wrote: huh? what value does big-A have to add here? the correct queueing system is the one that is cheap, low-maintenance, efficient, easy to use, etc. those are things that users and sysadmins know, not behind-desk-sitters... Difference of definition here. I believe that Big-A ad

Re: [Beowulf] DC Power Dist. Yields 20%

2006-08-15 Thread Geoff Jacobs
Jim Lux wrote: > At 08:41 AM 8/11/2006, Geoff Jacobs wrote: > > Sure you would.. Actually, these days, you might use an IGBT, depending > on the switch rate.. Well, it was just a guess based on the bare pcb of PicoPSU supplies. > Switching PSUs all rely on rectifying the AC supply to generate a

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mark Hahn
I'm not 100% sure about that Mark. I care about big-A administration. I care about showing departments what resources are actually available. I care about what is the most efficient use of limited University resources. When I meet with researchers they often say that they had no idea that there

Re: [Beowulf] DC Power Dist. Yields 20%

2006-08-15 Thread Jim Lux
At 02:00 PM 8/15/2006, Geoff Jacobs wrote: Jim Lux wrote: > At 08:41 AM 8/11/2006, Geoff Jacobs wrote: > > Sure you would.. Actually, these days, you might use an IGBT, depending > on the switch rate.. > > Switching PSUs all rely on rectifying the AC supply to generate a DC bus > voltage, that i

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mike Davis
I'm not 100% sure about that Mark. I care about big-A administration. I care about showing departments what resources are actually available. I care about what is the most efficient use of limited University resources. When I meet with researchers they often say that they had no idea that there

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mike Davis
I've sent Bill a link to this Educause group. http://www.educause.edu/content.asp?page_id=6673&bhcp=1 It is one of the IT management lists and its mission covers many of the ideas that we are discussing. Personally, I'm OK no matter what the ultimate decision is but some readers may not be

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Joe Landman
Mark Hahn wrote: > IMO, centralization breeds contempt ;) [virtual coffee splatters real screen] [] > that said, it's entirely possible to sustain a "rolling cluster": start > with one generation, and incrementally move it forward. this is easiest > if you have standard parts (plain old

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mark Hahn
beowulf traffic itself is "noise"? If you are thinking of a "list for university deans" or members of research support offices or departmental ... administerable and accountable should they get audited) -- then yeah, I think a new list or other venue would be very useful. yes. the overlap is

Re: [Beowulf] DC Power Dist. Yields 20%

2006-08-15 Thread Geoff Jacobs
Jim Lux wrote: > At 08:41 AM 8/11/2006, Geoff Jacobs wrote: > > Sure you would.. Actually, these days, you might use an IGBT, depending > on the switch rate.. > > Switching PSUs all rely on rectifying the AC supply to generate a DC bus > voltage, that is then converted to the DC voltage you want

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Robert G. Brown
On Tue, 15 Aug 2006, Joe Landman wrote: Now that RGB has written a thesis on this ... :) No, the thesis went to Bill offline. You guys just got the synopsis...;-) And I'm too tired/busy from my "vacating" to do a proper job, sorry... Work starts in earnest next week, and a whole lot of it st

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Mark Hahn
- integration of a cluster into a larger University IT infrastructure (storage, authentication, policies, et al.) just say no. we consciously avoid taking any integration steps that would involve the host institution trusting us or vice versa. well, not quite - we managed to live for ~5 yea

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Joe Landman
Now that RGB has written a thesis on this ... :) Robert G. Brown wrote: > On Tue, 15 Aug 2006, Bill Rankin wrote: > >>> Are there problems so specific to the higher education realm >>> that you think they'd benefit from their own forum? >> >> Not so much that would benefit from their own forum,

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Robert G. Brown
On Tue, 15 Aug 2006, Bill Rankin wrote: Are there problems so specific to the higher education realm that you think they'd benefit from their own forum? Not so much that would benefit from their own forum, but there are a lot of issues that are not directly related to the construction and te

Re: [Beowulf] [OT] HPC and University IT - forum/mailing list?

2006-08-15 Thread Bill Rankin
On Aug 14, 2006, at 12:28 AM, Brian Dobbins wrote: Echoing Mark's reply, it seems to me that a lot of the volume IS from people who are in an academic environment and wrestle with the issues therein. My two cents would be that I think the wide variety of experience and ideas here only helps whe

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Mikhail Kuzminsky
The best PathScale Fortran STREAM results for Opteron, at least for 1 core, about a year ago were obtained with the compilation flags -CG:use_prefetchnta -LNO:prefetch_ahead=4 -O3 -mp. Maybe there is some sense in playing with these parameters for Woodcrest also? Yours Mikhail Kuzminsky Zelinsky Inst

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Joe Landman
Peter Kjellstrom wrote: [EMAIL PROTECTED] streamd]# hostname ; date ; for i in 1 2 3 4 5 ; do export OMP_NUM_THREADS=$i ; ./streamd | egrep "Total memory re|Number of Th|Function |Copy:|Scale:|Add:|Triad:"; done tbox3 Fri Aug 11 17:59:22 CEST 2006 Total memory required = 457.8 MB. Number of Th

RE: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Kozin, I \(Igor\)
Interesting... Given that Add and Triad are virtually the same it's surprising that Copy and Scale are so different. IMHO Scale should be more like Copy. Compiler effect? > here you go (dell 2950 with 8 modules and streams compiled with icc-9.1 -O3: > > [EMAIL PROTECTED] streamd]# hostname ; dat

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Peter Kjellstrom
On Tuesday 15 August 2006 17:25, Richard Walsh wrote: > Mark Hahn wrote: > >>> Good point which makes perfect sense to me. > >>> Given that the theoretical maximum is actually 21.3 GB/s > >>> the real maximum Triad number must be 21.3/3 = 7.1 GB/s. > > > > I don't get this - triad does two reads an

RE: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Kozin, I \(Igor\)
Oops. I misinterpreted what Keith was saying. I thought he was trying to justify the "one third" empirical rule by referring that every read needs to check against the other socket thus effectively reducing the bandwidth by 3. It's obvious that the "three quarters" rule which worked perfectly for

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Richard Walsh
Mark Hahn wrote: Good point which makes perfect sense to me. Given that the theoretical maximum is actually 21.3 GB/s the real maximum Triad number must be 21.3/3 = 7.1 GB/s. I don't get this - triad does two reads and one write. if you don't use store-through ('nt' versions of mov), then the w

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Mark Hahn
Good point which makes perfect sense to me. Given that the theoretical maximum is actually 21.3 GB/s the real maximum Triad number must be 21.3/3 = 7.1 GB/s. I don't get this - triad does two reads and one write. if you don't use store-through ('nt' versions of mov), then the write also implies

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Bill Broadley
On Tue, Aug 15, 2006 at 12:29:02PM +0100, Kozin, I (Igor) wrote: > > Good point which makes perfect sense to me. > Given that the theoretical maximum is actually 21.3 GB/s > the real maximum Triad number must be 21.3/3 = 7.1 GB/s. > And that's the best number I've heard of. Then how do you explai

Re: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Joe Landman
Kozin, I (Igor) wrote: [...] > Here is a pointer to some measured latencies > http://www.anandtech.com/IT/showdoc.aspx?i=2772&p=4 Hmmm. The text was littered with the fluff marketing bits, about how this is devastating to Opteron and all that. Sounded quite a bit like it was written by Intel

RE: [Beowulf] Woodcrest Memory bandwidth

2006-08-15 Thread Kozin, I \(Igor\)
Good point which makes perfect sense to me. Given that the theoretical maximum is actually 21.3 GB/s the real maximum Triad number must be 21.3/3 = 7.1 GB/s. And that's the best number I've heard of. Here is a pointer to some measured latencies http://www.anandtech.com/IT/showdoc.aspx?i=2772&p=4