On the QDR IB system I have been testing on, the printf from each process is not line buffered.
setlinebuf(stdout)?
maybe also due to a change in MPI flavor?
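FWIW, here is a minimal sketch (not the original benchmark, just an illustration) of forcing line-buffered output in each rank, using the portable setvbuf() equivalent of setlinebuf():

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Under mpirun, stdout is usually a pipe and therefore fully buffered;
     * switch it to line buffering (portable form of setlinebuf(stdout)). */
    setvbuf(stdout, NULL, _IOLBF, 0);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hypothetical performance figure, flushed at the newline. */
    printf("rank %d: bandwidth 123.4 MB/s\n", rank);

    MPI_Finalize();
    return 0;
}

Calling fflush(stdout) after each printf would achieve the same effect if changing the buffering mode is not an option.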
Call for Participation
OnTheMove (OTM) 2009 is a federated event comprising 4 conferences and 10
workshops, co-located in the week of Nov. 1 to 6, in the Tivoli Marina
Conference Center and Hotel overlooking the pleasant fishing and yacht harbour
of Vilamoura, in the Portuguese Algarve.
All wo
Mark Hahn wrote:
IPMI gets hung sometimes (like Gerry says in his reply). I guess I can
just attribute that to bad firmware coding in the BMC.
I think it's better to think of it as a piece of hw (the nic)
trying to be managed by two different OSs: host and BMC.
it's surprising that it works at all, since there's no real
standard for sharing the hardware.
2009/10/26 Joe Landman
> Just curious -- how large and how big are the deltas in the
> hierarchy?
At the start of the year we were seeing an average delta of about 10GB/day,
currently we are seeing an average delta of 70GB/day. There are still a
number of unknowns, but we are expecting that
On Mon, Oct 26, 2009 at 4:51 PM, John Hearns wrote:
> 2009/10/26 John Hearns :
>> 2009/10/26 Rahul Nabar :
>>>
>>> True. I just thought that if my BMC is running a webserver it cannot
>>> be all that stripped down. Maybe I am wrong and it is possible to
>>> write a compact webserver.
>
> The address of the world's first webserver lives on at http://info.cern
Folks,
I have a benchmark code that uses printf on each MPI process to print
performance figures.
On the system that the original author developed on, the output from
each process must have been line buffered.
On the QDR IB system I have been testing on, the printf from each process is
not line buffered.
Thanks, but I am not entirely clear on why the interrupts flow to
both the mlx-core driver and eth-mlx4-0.
This is what my /proc/interrupts table looks like. Interrupts go to
CPU0 for mlx4_core and CPU6 for eth-mlx4-0:
4319:          0          0          0          0          0          0
Robert Kubrick wrote:
> I noticed my machine has 16 drivers in the /proc/interrupts table
> marked as eth-mlx4-0 to 15, in addition to the usual mlx-async and
> mlx-core drivers.
> The server runs Linux Suse RT, has an infiniband interface, OFED 1.1
> drivers, and 16 Xeon MP cores, so I'm assuming
I assume these are MSI-X interrupts of the one Mellanox driver instance.
This feature allows interrupts to be spread more or less evenly across CPUs, in
conjunction with multiple send/recv queues.
Each PCI device has a single driver (unless we talk about virtualized I/O,
which does not apply here). B
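As an aside, the placement of those vectors can be changed by writing a CPU mask to /proc/irq/<n>/smp_affinity. A rough sketch follows; the IRQ number and CPU are just placeholders taken from the table quoted above, it needs root, and irqbalance may rewrite the mask afterwards:

#include <stdio.h>

int main(void)
{
    /* Placeholder values: IRQ 4319 is the vector quoted above, CPU6 the
     * core it was observed on. */
    int irq = 4319;
    unsigned mask = 1u << 6;            /* one bit per CPU, here CPU6 */

    char path[64];
    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "%x\n", mask);           /* smp_affinity takes a hex mask */
    return fclose(f) ? 1 : 0;
}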
Hi Tomislav,
thanks for the reply, and thanks to ed in 92626, too, for the URLs
provided.
So, Tomislav, I got a cluster running. I don't know if it was in the
Beowulf default configurations, but I got it running with several
howtos that I found on the web.
I got it running with LAM-MPI and PV
another one.
http://www.cacr.caltech.edu/beowulf/tutorial/tutorial.html
On Wed, Oct 21, 2009 at 11:27 AM, Tony Miranda wrote:
> Hi everyone,
>
> could anyone help me by explaining how to build a Beowulf cluster?
> A web site, a list of parameters, anything updated? Because I only found in
> the inter
Here's something.
http://www.mcsr.olemiss.edu/bookshelf/articles/how_to_build_a_cluster.html
On Wed, Oct 21, 2009 at 11:27 AM, Tony Miranda wrote:
> Hi everyone,
>
> could anyone help me by explaining how to build a Beowulf cluster?
> A web site, a list of parameters, anything updated? Because I o
2009/10/26 John Hearns :
> 2009/10/26 Rahul Nabar :
>>
>> True. I just thought that if my BMC is running a webserver it cannot
>> be all that stripped down. Maybe I am wrong and it is possible to
>> write a compact webserver.
The address of the world's first webserver lives on at http://info.cern
2009/10/26 Rahul Nabar :
>
> True. I just thought that if my BMC is running a webserver it cannot
> be all that stripped down. Maybe I am wrong and it is possible to
> write a compact webserver.
Google for a Perl one-liner webserver.
Heck, your mobile phone is more powerful than the mainframes of
y
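It really is possible to write a compact webserver. Purely as an illustration of the scale involved (no request parsing, no error handling, an arbitrary port, and nothing to do with any real BMC firmware), a toy HTTP responder in C fits in a few dozen lines:

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                 /* arbitrary port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) || listen(s, 8))
        return 1;

    const char *reply =
        "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello from a tiny webserver\n";
    for (;;) {
        int c = accept(s, NULL, NULL);
        if (c < 0)
            continue;
        char buf[1024];
        if (read(c, buf, sizeof buf) > 0)        /* ignore the request itself */
            write(c, reply, strlen(reply));
        close(c);
    }
}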
On Mon, Oct 26, 2009 at 1:53 PM, David N. Lombard wrote:
> On Mon, Oct 26, 2009 at 11:18:33AM -0700, Hearns, John wrote:
>
> Well, "running Linux" is very different from "running a full blown Linux".
> For
> example, I have a kernel, initrd, dhcp, shell, ability to mount file systems,
> kexec,
Mark Hahn wrote:
the BMCs were Motorola single board computers running Linux.
So ssh and http access were already there with whichever Linux distro they
ran (you could look around in /proc for instance)
Wow! I didn't realize that the BMC was again running a full blown
Linux distro!
sigh. th
On Mon, Oct 26, 2009 at 11:18:33AM -0700, Hearns, John wrote:
>
> Wow! I didn't realize that the BMC was again running a full blown Linux
> distro!
Well, "running Linux" is very different from "running a full blown Linux". For
example, I have a kernel, initrd, dhcp, shell, ability to mount file
the BMCs were Motorola single board computers running Linux.
So ssh and http access were already there with whichever Linux distro they
ran (you could look around in /proc for instance)
Wow! I didn't realize that the BMC was again running a full blown Linux distro!
sigh. the simplest unix dis
Wow! I didn't realize that the BMC was again running a full blown Linux
distro!
Only on those Sun servers - a friend who used to work for Sun showed me
this.
Those Sun BMCs did more than act as IPMI controllers (i.e. you could have
virtual CD drives etc.,
though of course other IPMI type controller
On Mon, Oct 26, 2009 at 12:03 PM, Hearns, John wrote:
> I don't think that a standard is actually needed... My naive
> understanding is that the NIC firmware does packet inspection
>
> As I just said, I thought there was a bridge chip before the NIC,
> and I agree there is packet filtering.
> Look
On Mon, Oct 26, 2009 at 12:01 PM, Hearns, John wrote:
>
>
>
> On the original Opteron sun servers
> which had BMCs and the two Ethernet interfaces which you could
> daisy-chain,
> the BMCs were Motorola single board computers running Linux.
> So ssh and http access were already there with whicheve
I don't think that a standard is actually needed... My naive
understanding is that the NIC firmware does packet inspection
As I just said, I thought there was a bridge chip before the NIC,
and I agree there is packet filtering.
Look up my tortuous examination of what happens when you run a lot of
On Mon, Oct 26, 2009 at 11:34 AM, Bogdan Costescu wrote:
> On Mon, Oct 26, 2009 at 4:50 PM, Rahul Nabar wrote:
> The BMC is a CPU running some firmware. It's a low power one though,
> as it doesn't usually have to do too many things and it should not
> consume significant power while the main s
On Oct 26, 2009, at 10:55 AM, beowulf-requ...@beowulf.org wrote:
Message: 6
Date: Mon, 26 Oct 2009 10:50:26 -0500
From: Rahul Nabar
Subject: Re: [Beowulf] any creative ways to crash Linux?: does a
shared NIC IPMI always remain responsive?
To: Bogdan Costescu
Cc: Beowulf Mailing L
On Mon, Oct 26, 2009 at 5:23 PM, Mark Hahn wrote:
> it's surprising that it works at all, since there's no real
> standard for sharing the hardware.
I don't think that a standard is actually needed... My naive
understanding is that the NIC firmware does packet inspection (no need
for deep packet
On Mon, Oct 26, 2009 at 4:50 PM, Rahul Nabar wrote:
> I see. So I assume the BMC's network stack is something that's
> hardware or firmware implemented.
The BMC is a CPU running some firmware. It's a low power one though,
as it doesn't usually have to do too many things and it should not
consume
On Mon, Oct 26, 2009 at 11:34 AM, Bogdan Costescu wrote:
> On Mon, Oct 26, 2009 at 4:50 PM, Rahul Nabar wrote:
> The BMC is a CPU running some firmware. It's a low power one though,
> as it doesn't usually have to do too many things and it should not
> consume significant power while the main sy
So unless your application sits in the on-core cache, I am wondering where
the real benefit is going to be (ignoring the fact that the processor is still PCI-e
"serve Web data" seems to be the target, as mentioned in the release.
that seems pretty fair, since webservers tend to have pretty smal
IPMI gets hung sometimes (like Gerry says in his reply). I guess I can
just attribute that to bad firmware coding in the BMC.
I think it's better to think of it as a piece of hw (the nic)
trying to be managed by two different OSs: host and BMC.
it's surprising that it works at all, since there's
On Mon, Oct 26, 2009 at 8:11 AM, Bogdan Costescu wrote:
> On Sat, Oct 24, 2009 at 11:13 PM, Rahul Nabar wrote:
>> What surprised me was that even if I take down my eth interface with an
>> ifdown the IPMI still works. How does it do that?
>
> The IPMI traffic is IP (UDP) based and by inspecting t
Anyone ever played with the current generation of chip? What I saw from
the website
for the current generation was:
- No Fortran
- No Floating point
- In its fastest configuration, a 2-socket Nehalem has about the same
memory bandwidth
So unless your application sits in the on-core cache, I
On Sat, Oct 24, 2009 at 11:13 PM, Rahul Nabar wrote:
> What surprised me was that even if I take down my eth interface with an
> ifdown the IPMI still works. How does it do that?
The IPMI traffic is IP (UDP) based and by inspecting the IP header one
can tell the difference between packets with the
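To make the idea concrete: IPMI-over-LAN (RMCP) uses UDP destination port 623, so a filter in the spirit of what the firmware might do only has to look at a few fixed header offsets. The sketch below is purely illustrative, not anything from an actual NIC, and real firmware would also match the BMC's MAC/IP, handle VLAN tags, ARP, and so on:

#include <stddef.h>
#include <stdint.h>

#define RMCP_PORT 623u                  /* standard IPMI-over-LAN (RMCP) port */

/* Return 1 if an incoming Ethernet frame looks like RMCP/IPMI traffic that
 * should be diverted to the BMC, 0 if it should go to the host OS. */
static int frame_is_for_bmc(const uint8_t *frame, size_t len)
{
    if (len < 14 + 20 + 8)
        return 0;                       /* too short for Eth + IPv4 + UDP */
    if (frame[12] != 0x08 || frame[13] != 0x00)
        return 0;                       /* EtherType is not IPv4 */

    const uint8_t *ip = frame + 14;
    size_t ihl = (size_t)(ip[0] & 0x0f) * 4;
    if (ihl < 20 || len < 14 + ihl + 8)
        return 0;                       /* malformed or truncated header */
    if (ip[9] != 17)
        return 0;                       /* IP protocol is not UDP */

    const uint8_t *udp = ip + ihl;
    uint16_t dport = (uint16_t)((udp[2] << 8) | udp[3]);
    return dport == RMCP_PORT;
}

int main(void)
{
    /* Hand-built minimal IPv4/UDP frame with destination port 623. */
    uint8_t frame[14 + 20 + 8] = {0};
    frame[12] = 0x08; frame[13] = 0x00;     /* EtherType IPv4 */
    frame[14] = 0x45;                       /* version 4, IHL = 20 bytes */
    frame[14 + 9] = 17;                     /* protocol UDP */
    frame[34 + 2] = 623 >> 8;               /* UDP destination port 623 */
    frame[34 + 3] = 623 & 0xff;
    return frame_is_for_bmc(frame, sizeof frame) ? 0 : 1;
}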
http://www.goodgearguide.com.au/article/323692
Tilera targets Intel, AMD with 100-core processor
Tilera hopes its new chips either replace or work alongside chips from Intel
and AMD
Agam Shah (IDG News Service) 26/10/2009 15:07:00
Tilera on Monday announced new general-