Re: [Beowulf] quad-core SPECfp2006: where are 4 FP results/cycle?

2007-10-12 Thread richard.walsh
-- Original message -- From: "Mikhail Kuzminsky" <[EMAIL PROTECTED]> > But if I compare SPECfp2006 results with an x86-64 microarchitecture that produces 2*64-bit FP results per cycle - the previous Opteron generation - I see a strange (IMHO) result. So, for Opteron SE/

Re: [Beowulf] quad-core SPECfp2006: where are 4 FP results/cycle?

2007-10-12 Thread Mark Hahn
This means that 2 additional FP results per cycle in the microarchitecture give only about a 7% performance increase :-( The 4 flops/cycle figure is really for Linpack-like code: it assumes you are executing packed-double SIMD. The question is - should we wait for better results from the new incoming opt
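For reference, the 4 flops/cycle peak comes from issuing one packed-double SIMD add and one packed-double SIMD multiply per cycle, each producing two 64-bit results. A minimal sketch in C using SSE2 intrinsics (the function name, array arguments, and alignment assumptions are illustrative, not from the thread):

    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* y[i] = a*x[i] + y[i], two doubles per iteration. On hardware that
       can retire one packed add and one packed multiply per cycle this
       can approach 4 FP results/cycle. Assumes n is even and x, y are
       16-byte aligned (illustrative assumptions). */
    void daxpy_sse2(int n, double a, const double *x, double *y)
    {
        __m128d va = _mm_set1_pd(a);
        for (int i = 0; i < n; i += 2) {
            __m128d vx = _mm_load_pd(&x[i]);          /* packed 2x64-bit load */
            __m128d vy = _mm_load_pd(&y[i]);
            vy = _mm_add_pd(_mm_mul_pd(va, vx), vy);  /* mulpd + addpd */
            _mm_store_pd(&y[i], vy);
        }
    }

Real codes rarely sustain this; anything less regular than Linpack-style loops leaves the packed units partly idle, which is consistent with the small SPECfp2006 gain noted above.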

Re: [Beowulf] Re: Memory limit enforcement

2007-10-12 Thread Andrew Shewmaker
On 10/10/07, David Mathog <[EMAIL PROTECTED]> wrote: > David Kewley <[EMAIL PROTECTED]> wrote: > > But the kernel doesn't really enforce anything useful. > I agree, the kernel should be able to enforce these sorts of limits on all processes of a user at once. > Write Linus or whichever ke
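The limits the kernel does enforce are the per-process rlimits. A minimal sketch of capping one process's address space with setrlimit (the 1 GB value is illustrative) shows why this doesn't solve the per-user problem discussed above:

    #include <sys/resource.h>

    /* Cap this process's virtual address space at 1 GB (illustrative
       value). The limit is strictly per process: a user running N
       processes can still consume N GB in total, which is exactly the
       gap being complained about here. */
    int cap_address_space(void)
    {
        struct rlimit rl;
        rl.rlim_cur = 1UL << 30;   /* soft limit: 1 GB */
        rl.rlim_max = 1UL << 30;   /* hard limit: 1 GB */
        return setrlimit(RLIMIT_AS, &rl);
    }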

[Beowulf] quad-core SPECfp2006: where are 4 FP results/cycle?

2007-10-12 Thread Mikhail Kuzminsky
I found the 1st AMD quad-core (Opteron 2347/1.9 GHz) SPECfp2006 results (at www.spec.org), obtained by IBM: 11.2/10.7 for peak/base values. I'll speak about 1 core only, i.e. results with Autoparallel=NO. Let me look at another x86-64 microarchitecture with the same 4*64-bit FP results per cycle, i.e. Intel
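For scale, the nominal per-core peak implied by those figures (a back-of-the-envelope estimate, not from the post):

    peak = clock rate * FP results/cycle
         = 1.9 GHz * 4
         = 7.6 GFLOP/s per core (double precision)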

Re: [Beowulf] Channel bonding, again

2007-10-12 Thread Kilian CAVALOTTI
Hi all, On Friday 12 October 2007 04:57:50 am Henning Fehrmann wrote: > On the other hand, one needs to trunk (HP calls it trunking) the ports on the switches. We tried it on HP and Cisco switches. > The switch collects the packets of the trunked ports and redistributes them according to a l
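On the Linux side, the matching configuration for switch trunking is 802.3ad (LACP) bonding, where xmit_hash_policy selects the hash the driver uses to spread flows across the trunked ports; the switch applies its own, independently configured hash on the return path. A minimal sketch for /etc/modprobe.conf, assuming a kernel whose bonding driver supports xmit_hash_policy (interface names and policy choice illustrative):

    alias bond0 bonding
    options bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

Note that a pure layer2 (MAC-based) hash sends all traffic between the same two hosts down a single link, so per-pair bandwidth never exceeds one port; layer3+4 hashing spreads individual connections, at the cost of possible out-of-order delivery.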

Re: [Beowulf] Channel bonding, again

2007-10-12 Thread Henning Fehrmann
Hello Greg, On Thu, Oct 11, 2007 at 03:13:16PM -0700, Greg Lindahl wrote: > I'm thinking about using "balance-alb" channel bonding on a medium-to-large Linux cluster; does anyone have experience with this? It seems that it might generate a lot of ARP replies if a switch fails. We did som
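For reference, balance-alb needs no switch-side trunking: it balances transmit load per slave and balances receive load by rewriting ARP replies so that different peers send to different slave MACs, which is where the ARP churn on a link or switch failure comes from. A minimal setup sketch (addresses and interface names illustrative, not from the thread):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bonding mode=balance-alb miimon=100

    # bring up the bond and enslave two NICs
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1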