Re: [Beowulf] Multiple IB networks in one cluster

2014-02-06 Thread Greg Lindahl
I'm saying that's what the definition of a Clos network is, and that's the only situation in which it's "non-blocking". People buy clusters with all kinds of network configurations. On Thu, Feb 06, 2014 at 09:51:10PM -0600, Alan Louis Scheinine wrote: > When I wrote "the number of nonblocking con

Re: [Beowulf] Multiple IB networks in one cluster

2014-02-06 Thread Alan Louis Scheinine
When I wrote "the number of nonblocking connections is typically much less than the number of nodes" I had in mind the telephone network (in the age of copper wires). Are you sure that "1/2 the nodes can make a single call to the other 1/2 of the nodes" is typical of a computer interconnect? I t

Re: [Beowulf] Problems with Dell M620 and CPU power throttling

2014-02-06 Thread Bill Wichser
On 2/6/2014 9:30 AM, Aaron Knister wrote: Bill Wichser princeton.edu> writes: We have tested using c1 instead of c0 but no difference. We don't use logical processors at all. When the problem happens, it doesn't matter what you set the cores for C1/C0, they never get up to speed again witho

Re: [Beowulf] cloudy HPC?

2014-02-06 Thread Christopher Samuel
On 31/01/14 07:57, Mark Hahn wrote: > For instance, I've heard some complaints about doing MPI on virtualized > interconnect as being slow. but VM infrastructure > like KVM can give device ownership to the guest, so IB access *could* be > bare-metal.

Re: [Beowulf] Problems with Dell M620 and CPU power throttling

2014-02-06 Thread Aaron Knister
Bill Wichser princeton.edu> writes: > > We have tested using c1 instead of c0 but no difference. We don't use > logical processors at all. When the problem happens, it doesn't matter > what you set the cores for C1/C0, they never get up to speed again > without a power cycle/reseat. We be
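The symptom in this thread (cores stuck far below their rated clock until a power cycle) can be spotted by comparing each core's current frequency against its maximum. A minimal sketch, assuming Linux's cpufreq sysfs values have already been sampled into dictionaries; the function name and 50% threshold are illustrative choices, not from the thread:

```python
def find_throttled_cores(cur_khz, max_khz, threshold=0.5):
    """Flag cores whose current frequency is stuck well below maximum.

    cur_khz: {core_id: current frequency in kHz}, e.g. sampled from
      /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq on Linux.
    max_khz: corresponding scaling_max_freq values, same keys.
    Returns the sorted list of core ids running below threshold * max.
    """
    return sorted(core for core, freq in cur_khz.items()
                  if freq < threshold * max_khz[core])
```

Sampling the values twice, a few seconds apart under load, helps distinguish a core merely idling in a deep C-state from one that is genuinely pinned low by power throttling.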

Re: [Beowulf] Multiple IB networks in one cluster

2014-02-06 Thread Lux, Jim (337C)
On 2/5/14 11:49 PM, "Greg Lindahl" wrote: >In the usual Clos network, 1/2 of the nodes can make a single call to >the other 1/2 of the nodes. That's what's non-blocking. Nothing else >is. Running any real code, every node talks to more than one other >node, and the network is not non-blocking.
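The definition Lindahl is invoking comes from Clos's classic analysis of 3-stage switching networks, where "non-blocking" is a property of the topology's parameters, not of real traffic patterns. A minimal sketch of the textbook conditions, assuming the standard Clos(m, n, r) parameterization (n inputs per ingress switch, m middle-stage switches); the function name is ours:

```python
def clos_blocking_class(m, n):
    """Classify a 3-stage Clos(m, n, r) network by Clos's conditions.

    m: number of middle-stage switches
    n: number of inputs per ingress-stage switch
    Strictly non-blocking when m >= 2n - 1 (any new call can always
    be routed without disturbing existing calls); rearrangeably
    non-blocking when m >= n (routable if existing calls may be
    rerouted); otherwise some permutations cannot be realized at all.
    """
    if m >= 2 * n - 1:
        return "strictly non-blocking"
    if m >= n:
        return "rearrangeably non-blocking"
    return "blocking"
```

This is the sense in which "1/2 the nodes calling the other 1/2" is the non-blocking guarantee: it covers one call per endpoint, and says nothing about the many-to-many traffic of real applications.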

[Beowulf] the Register on IBM/Lenovo

2014-02-06 Thread Hearns, John
http://www.theregister.co.uk/2014/01/31/hpc_implications_lenovo_ibm_system_x/ Well worth looking at the comments also. Dr John Hearns | CFD Hardware Specialist | McLaren Racing Limited McLaren Technology Centre, Chertsey Road, Woking, Surrey GU21 4YH, UK T: +44 (0) 1483 262000 D: +44 (0) 148

Re: [Beowulf] Multiple IB networks in one cluster

2014-02-06 Thread John Hearns
A good article on Clos Networks. http://m.networkworld.com/community/blog/clos-networks-%E2%80%93-what%E2%80%99s-old-new-again What goes around comes around!