Rahul Nabar wrote:
> I always took it as natural to keep all compute nodes on a private
> switch and assign them local IP addresses. This was almost axiomatic
> for an HPC application in my mind. This way I can channel all traffic
> to the world and logins through a select login node, then firewall
> the login nodes carefully.
>
> Just today, though, on a new project the admin said he always keeps
> his compute nodes with public IPs and runs individual firewalls on
> them.
>
> This seemed just so wrong to me in so many ways, but I was curious
> whether there are legitimate reasons why people might do this. Just curious.

I do everything I can to keep cluster nodes on a private network, with only the head node visible on the public network. One exception I've had to make is when storage is on a separate network: NAT doesn't play well with CIFS/NFS, so it's just easier to give the nodes fully routable IP addresses.
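For what it's worth, here's a minimal sketch of that head-node setup, assuming the private cluster subnet is 10.1.0.0/24 on eth1 and the public side is eth0 (the interface names and addresses are just placeholders, not from any particular cluster):

  # Let the head node forward traffic for the compute nodes
  sysctl -w net.ipv4.ip_forward=1

  # Masquerade (NAT) the private cluster subnet out the public interface
  iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth0 -j MASQUERADE

  # Forward outbound cluster traffic, and let replies back in
  iptables -A FORWARD -i eth1 -o eth0 -s 10.1.0.0/24 -j ACCEPT
  iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

  # Firewall the public side of the head/login node: SSH in, established replies, drop the rest
  iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -i eth0 -j DROP

It's exactly that MASQUERADE step that gets awkward with CIFS/NFS traffic to an external storage network, which is why I end up giving those nodes routable addresses instead.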
-- 
-- Skylar Thompson (sky...@cs.earlham.edu)
-- http://www.cs.earlham.edu/~skylar/