On 11/29/06, John Hearns <[EMAIL PROTECTED]> wrote:
Bill Broadley wrote:
> Does anyone know if IPMI can "share" a GigE port? Or does it usually
> require a dedicated port on the host?

Yes, definitely.
The Supermicro IPMI cards can share the eth0 port with the main gigabit
channel. The cards are powered up and on the network whenever power is
applied.
Michael Huntingdon wrote:
So if it's possible to extend the number of systems per system
administrator, how about extending the number of systems per cabinet,
and the number of cabinets per system administrator? I don't mean to
minimize the job responsibilities of system managers. Quite the contrary.
When comparing cluster offerings, it seems reasonable that the
additional $85-$100 would be factored into any system/cluster purchase,
for at least power up/down and reset. This is astonishing, or is there
something I'm missing in this thread? The technology mentioned isn't
really earth-shattering.
The Broadcom 5721 NIC comes with IPMI capability. We use Dell PE850s, which
are equipped with dual onboard Broadcom 5721 NICs. One of the NICs has IPMI
built in. You set up the IPMI in the BIOS (IP address, username, password) and
then you can access it over the same CAT5 cable. The regular ethernet and the
IPMI management traffic share that one port.
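If you want to script that kind of access from a head node, something
like the sketch below can work. To be clear, ipmitool, the BMC
addresses, and the credentials here are my own placeholders rather than
anything taken from the PE850 setup above; it's just a minimal
illustration of driving IPMI-over-LAN power control once the BIOS
configuration is done.

#!/usr/bin/env python3
# Rough sketch: ask each node's BMC for its chassis power state using
# ipmitool over the LAN interface. Assumes ipmitool is installed on the
# head node and each BMC has an IP/user/password set in the BIOS.
import subprocess

BMC_ADDRESSES = ["10.0.1.%d" % i for i in range(1, 6)]  # made-up addresses
USER, PASSWORD = "admin", "secret"                      # placeholders

def chassis_power(bmc_ip, action="status"):
    # action can be "status", "on", "off", "cycle", or "reset"
    cmd = ["ipmitool", "-I", "lan", "-H", bmc_ip,
           "-U", USER, "-P", PASSWORD, "chassis", "power", action]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return (result.stdout or result.stderr).strip()

for ip in BMC_ADDRESSES:
    print(ip, "->", chassis_power(ip))

Swapping "status" for "on", "off", "cycle", or "reset" gives the remote
power up/down and reset functionality being discussed in this thread.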
I've heard of folks successfully getting the Tyan card to work:
http://www.tyan.com/products/html/m3291.html
At least the power up/down and reset functionality. I've googled
prices in the $85-$100 range. Not cheap, but not bad for what looks
like a relatively low-volume product.
I suspect ...
I have recently completed a number of performance tests on a Beowulf
cluster, using up to 48 dual-core P4D nodes connected by an Extreme
Networks Gigabit edge switch. The tests consist of single- and multi-node
application benchmarks, including DLPOLY, GROMACS, and VASP, as well as
specific tests of ...
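For what it's worth, the figures usually quoted from multi-node
application benchmarks like these are speedup and parallel efficiency
relative to the single-node time. The timings below are invented
placeholders rather than results from this cluster; the snippet only
shows the arithmetic.

# Speedup and parallel efficiency from wall-clock times.
# times maps node count -> wall-clock seconds; the numbers are made up.
times = {1: 3600.0, 2: 1900.0, 4: 1050.0, 8: 640.0}

t1 = times[1]
for n in sorted(times):
    speedup = t1 / times[n]      # S(n) = T(1) / T(n)
    efficiency = speedup / n     # E(n) = S(n) / n
    print("%2d nodes: speedup %.2f, efficiency %.0f%%"
          % (n, speedup, efficiency * 100))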
Dear Ruhollah Moussavi B.,
For our numerical simulations (CFD) and FEM computations, having a low
budget, around $2, I have chosen a Beowulf cluster of 5 nodes, each
with 2 dual-core AMD Opteron processors and 4 GB of RAM.
Tyan Thunder K8HR or K8SRE motherboards are very good choices for ...