On Thu, 22 Feb 2007, Craig Tierney wrote:
> I didn't think it was that cheap. I would prefer Layer 3 if
> this was going into a rack of a multi-rack system, but the
> price is right.
Thanks Craig!
--
Christopher Samuel - (03)9925 4751 - VPAC Deputy Systems Manager
Victorian Partnership for Advanced Computing
On Wed, Feb 21, 2007 at 11:06:50PM -0500, Patrick Geoffray wrote:
> Time for dreaming about an MPI ABI :-)
Ssh! It isn't April Fools yet!
-- greg
On Thu, 22 Feb 2007, Patrick Geoffray wrote:
> Hi Chris,
G'day Patrick!
> Chris Samuel wrote:
> > We occasionally get users who manage to use up all the DMA memory that is
> > addressable by the Myrinet card through the Power5 hypervisor.
>
> The IOMMU limit set by the hypervisor varies depending on the machine,
> the hypervisor version and the phase of the moon.
Hi Chris,
Chris Samuel wrote:
We occasionally get users who manage to use up all the DMA memory that is
addressable by the Myrinet card through the Power5 hypervisor.
The IOMMU limit set by the hypervisor varies depending on the machine,
the hypervisor version and the phase of the moon. Somet
On Thu, 22 Feb 2007, Scott Atchley wrote:
Hello Scott!
> Isn't this in hex? If so, it would be 4096 bytes. I do not use GM
> much and I do not know what this is. I just loaded GM on one node and
> with no GM processes running except the mapper, I have a similar
> entry (at a different address)
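For reference, a quick sanity check of that hex reading (the actual value is cut
off in the snippet above, so 0x1000 is an assumption), in Python:

  # assumed value: the entry reads 1000 and is interpreted as hexadecimal
  print(int("1000", 16))   # 4096 bytes
  print(hex(4096))         # 0x1000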
On Thu, 22 Feb 2007, Chris Samuel wrote:
> Through various firmware and driver tweaks (thanks to both IBM and Myrinet)
> we've gotten that limit up to almost 1GB and then we use an undocumented
> environment variable (GMPI_MAX_LOCKED_MBYTE) to say only use 248MB of that
> per process (as we've got
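As a rough sketch of how such a per-process cap might be applied at job launch
(the GMPI_MAX_LOCKED_MBYTE name and the 248MB figure come from the message
above; the launcher command, flags and application name below are purely
illustrative), assuming mpirun propagates the environment to the ranks:

  import os
  import subprocess

  # Cap GM's locked/DMA memory per process before the MPI job starts,
  # so every rank inherits the limit from its environment.
  env = dict(os.environ)
  env["GMPI_MAX_LOCKED_MBYTE"] = "248"   # value quoted in the thread

  # Illustrative launch line; a real site would use its own mpirun
  # arguments, machinefile and batch system integration.
  subprocess.run(["mpirun", "-np", "4", "./my_mpi_app"], env=env, check=True)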
On Feb 21, 2007, at 7:45 PM, Chris Samuel wrote:
Hi folks,
We've got an IBM Power5 cluster running SLES9 and using the GM drivers.
We occasionally get users who manage to use up all the DMA memory that is
addressable by the Myrinet card through the Power5 hypervisor.
Through various firmware and driver tweaks (thanks to both IBM and Myrinet)
weakly correlated with failure. However, of all the disks that failed, less
than half (around 45%) had ANY of the "strong" signals and another 25% had
some of the "weak" signals. This means that over a third of disks that
failed gave no appreciable warning. Therefore even combining the variables
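A quick back-of-the-envelope on those figures (treating the roughly 45%
"strong" and 25% "weak" groups as non-overlapping, which is an assumption):

  # If ~45% of failed disks showed a strong SMART signal and a further
  # ~25% showed only weak signals, at most ~70% showed anything at all,
  # leaving roughly 30% (on the order of a third) with no warning.
  strong, weak = 0.45, 0.25
  no_warning = 1.0 - (strong + weak)
  print(no_warning)   # 0.3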
Justin,
Yes, I came across your previous post further down the intertwined
thread. One other thing that would have been interesting to see then is to have
monitored _all_ of the system's "health" monitors, such as voltage and
power supply fan speed. There may be some other correlations
Hi folks,
We've got an IBM Power5 cluster running SLES9 and using the GM drivers.
We occasionally get users who manage to use up all the DMA memory that is
addressable by the Myrinet card through the Power5 hypervisor.
Through various firmware and driver tweaks (thanks to both IBM and Myrinet)
Chris Samuel wrote:
On Thu, 22 Feb 2007, Thomas H Dr Pierce wrote:
I have been using the MYRICOM 10Gb card in my NFS server (head node) for
the Beowulf cluster. And it works well. I have an inexpensive 3Com switch
(3870) with 48 1Gb ports that has a 10Gb port in it and I connect the NFS
server to that port.
How did they look for predictive models on the SMART data? It sounds
like they did a fairly linear data decomposition, looking for first
order correlations. Did they try to e.g. build a neural network on it,
or use fully multivariate methods (ordinary stats can handle it up to
5-10 variables).
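To make the "fully multivariate" suggestion concrete, here is a minimal sketch
of fitting a logistic regression across several SMART attributes at once with
scikit-learn. The file name and column names are hypothetical; a real analysis
would need the actual per-drive SMART history rather than a single snapshot:

  import pandas as pd
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Hypothetical table: one row per drive, a few SMART counters plus a
  # 0/1 label recording whether the drive subsequently failed.
  df = pd.read_csv("smart_snapshot.csv")
  features = ["realloc_sectors", "scan_errors", "offline_realloc",
              "probational_count", "temperature"]
  X, y = df[features], df["failed"]

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.3, random_state=0)

  model = LogisticRegression(max_iter=1000)
  model.fit(X_train, y_train)

  # The coefficients show how the variables combine, rather than looking
  # at each SMART counter in isolation.
  print(dict(zip(features, model.coef_[0])))
  print("held-out accuracy:", model.score(X_test, y_test))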
On Thu, 22 Feb 2007, Thomas H Dr Pierce wrote:
> I have been using the MYRICOM 10Gb card in my NFS server (head node) for
> the Beowulf cluster. And it works well. I have an inexpensive 3Com switch
> (3870) with 48 1Gb ports that has a 10Gb port in it and I connect the NFS
> server to that port. The switch does have small fans in it.
Dear Mark and the List,
The head node is about a terabyte of RAID10, with home directories and
application directories NFS mounted to the cluster. I am still tuning
NFS (16 daemons now) and, of course, the head node has a 1Gb link to my
intranet for remote cluster access.
The 10Gb link to th
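For what it's worth, on a Linux NFS server the current nfsd thread count can be
read (and changed) through /proc/fs/nfsd/threads, which is handy while
experimenting with the daemon count mentioned above. A small sketch, assuming
the nfsd filesystem is mounted in the usual place and the script runs with
sufficient privileges:

  # Show how many nfsd threads are currently running.
  with open("/proc/fs/nfsd/threads") as f:
      print("nfsd threads:", f.read().strip())

  # Writing a number to the same file changes the count on the fly
  # (rpc.nfsd uses the same interface); left commented out here.
  # with open("/proc/fs/nfsd/threads", "w") as f:
  #     f.write("16\n")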
Hello,
I have been using the MYRICOM 10Gb card in my NFS server (head node) for
the Beowulf cluster. And it works well. I have an inexpensive 3Com switch
(3870) with 48 1Gb ports that has a 10Gb port in it and I connect the
NFS server to that port. The switch does have small fans in it.
I
[snip]
> >
> > Dangling meat in front of the bears, eh? Well...
>
> Hey Justin. Are you going to stay in NC and move to the new facility as
> they build it?
>
> Let me add one general question to David's.
>
> How did they look for predictive models on the SMART data? It sounds
> like they did a fairly linear data decomposition, looking for first
> order correlations.
Mark Hahn wrote:
Not sure the vapor pressure of the perfluoroethers that they use as
lubricants
varies that much over the operating temperature regime of a disk
drive.
on the other hand, do these lubricants tend to get sticky or something
at lowish temperatures? the google results showed s
I have been using the MYRICOM 10Gb card in my NFS server (head node) for
the Beowulf cluster. And it works well. I have an inexpensive 3Com switch
(3870) with 48 1Gb ports that has a 10Gb port in it and I connect the
NFS server to that port. The switch does have small fans in it.
that sounds