> no, I really meant to put one admin server (1u is fine) in each rack.
> I'd already have a Gb switch and possibly a high-speed interconnect
> leaf in the rack if possible. a modular approach like this cuts
> down on cabling and out-of-rack traffic.
No place to put it in the rack. These are bl
> I personally like the idea of putting one admin server in each rack.
>they don't have to be fancy servers, by any means.
*LOLOL* At first I was guilty of the one thing I am always getting on the
other guys for: thinking too literally. I was going to say there is no room in
the rack. Of course
On the centennial of Adm. Hopper's birth:
She said, in respect of the building of bigger computers: "In pioneer
days they used oxen for heavy pulling, and when one ox couldn't budge
a log, they didn't try to grow a larger ox. We shouldn't be trying
for bigger computers, but for more systems of computers."
> I would hazard that any DHCP/PXE type install server would struggle with
> 2000 requests
A single server (implying 1 Gb NIC?) might have trouble with the tftp part,
but I don't see why you couldn't scale up by splitting the tftp part
off to multiple servers. I'd expect a single DHCP (no TFTP) wou
> particular lightweight
> compute node model, (PXE booting into RAM) and so does not run into the
> typical
> nfs-root scalability issues.
I'm not sure I know what those would be. Do you mean that the kernel code
for nfs-root has inappropriate timeouts or lacks effective retries?
At what node cou
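One way to keep 2000 near-simultaneous DHCP/PXE requests from hitting the install server at once is to stagger the power-on. A minimal sketch, assuming IPMI-capable BMCs; the node/BMC hostnames are hypothetical, and the ipmitool command is echoed rather than executed (drop the `echo` for real use):

```shell
# Bring nodes up at N-second intervals so the DHCP/PXE server
# never sees the whole cluster at once.
stagger_power_on() {
    first=$1; last=$2; interval=$3
    for i in $(seq -f "%04g" "$first" "$last"); do
        # dry run: prints the command; remove 'echo' to actually power on
        echo ipmitool -H "node${i}-bmc" -U admin chassis power on
        sleep "$interval"
    done
}

# e.g. stagger_power_on 1 2000 2   # 2000 nodes, 2 seconds apart
```

At 2-second intervals a 2000-node cluster takes a bit over an hour to come up, so in practice you would tune the interval to whatever burst rate the install server actually sustains.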
Buccaneer for Hire. wrote:
>> I would hazard that any DHCP/PXE type install server would struggle
>> with 2000 requests (yes- you arrange the power switching and/or
>> reboots to stagger at N second intervals).
fwiw: we use dnsmasq to serve dhcp and handle pxe booting. It does a
marvelous job
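For reference, a minimal dnsmasq configuration covering DHCP, PXE chain-loading, and the built-in TFTP server might look like this (the interface, address range, and paths are placeholders, not from the thread):

```
# /etc/dnsmasq.conf -- DHCP + PXE + TFTP in one daemon
interface=eth1
dhcp-range=192.168.1.100,192.168.1.250,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot
```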
Eric Shook wrote:
Not to diverge this conversation, but has anyone had any experience
using this PXE boot / NFS model with a RHEL variant? I have been
wanting to do an NFS-root or ramdisk model for some time, but our
software stack requires a RHEL base, so Scyld and Perceus most likely
will not work (although I am
Thank you for writing...
> With 2000+ nodes you should definitely look at remote power control, and
> remote serial console access.
Have it already in place with remote monitoring as well.
> Also you might think of separate install servers for each (say) 500
> machines. Mirror them up to each
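The per-500-node install servers suggested above have to be kept in sync with the master. A sketch using rsync; the server names and tree path are made up, and the command is echoed as a dry run (remove the `echo` to actually sync):

```shell
# Push the master install tree (TFTP images, kickstart files, etc.)
# out to each secondary install server.
mirror_install_tree() {
    src=$1; shift
    for srv in "$@"; do
        # dry run: prints each rsync invocation
        echo rsync -a --delete "${src}/" "${srv}:${src}/"
    done
}

# e.g. mirror_install_tree /var/lib/tftpboot install2 install3 install4
```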
Buccaneer for Hire. wrote:
[snip]
> I agree with what Joe says about a few hundred nodes being the time you
> would start to look closer at this approach.
I have started to explore the possibility of using this technology because I
would really like to see us with the ability to change OSs and OS Personalities
as neede
Joe Landman wrote:
Guy Coates wrote:
> At what node count does the nfs-root model start to break down? Does anyone
> have any rough numbers with the number of clients you can support with a
> generic linux NFS server vs a dedicated NAS filer?
If you use warewulf or the new perceus variant, it creates a ra
> We configure clusters for our customers with Scyld Beowulf, which does
> not nfs-mount root but rather just nfs-mounts the home directories
> because of its particular lightweight compute node model (PXE booting
> into RAM), and so does not run into the typical nfs-root scalability
> issues.
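In that model the compute node's root filesystem lives entirely in RAM and only user data crosses the wire. A hypothetical compute-node fstab line illustrating the idea (the server name is a placeholder, not from the thread):

```
# /etc/fstab on a RAM-rooted compute node: only /home comes over NFS
master:/home  /home  nfs  rw,hard,intr  0 0
```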