Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread John Hearns
Brian, I managed several clusters which used Star-CCM+ when I was at McLaren F1. Can you tell us a little about what Pepsico are using CFD for? It sounds interesting! Yes, the normal method is to NAT through the cluster head node. Some notes though: a) the next step is then to have a license 'se
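The "NAT through the cluster head node" setup mentioned above can be sketched roughly as follows. This is a minimal illustration, not a vetted firewall policy; the interface names (`eth0` external, `eth1` cluster-facing) and the private subnet `10.0.0.0/24` are assumptions to be replaced with your own:

```shell
# On the head node: enable IPv4 forwarding (persist via /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the private cluster network out of the external
# interface, so compute nodes can reach the license server.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# Allow forwarding out, and replies back in.
iptables -A FORWARD -i eth1 -o eth0 -s 10.0.0.0/24 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Each compute node then uses the head node's private-side address as its default gateway. For FlexLM specifically it usually helps to pin the vendor daemon to a fixed port in the license file, so both the lmgrd port and the vendor daemon port can be opened through any firewall between cluster and license server.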

Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread Skylar Thompson
Back when we had software requiring MathLM/FlexLM, we just used NAT to get the cluster nodes talking to the licensing server. We also had a consumable in Grid Engine so that people could keep their jobs queued if there were no licenses available. Skylar On Fri, Apr 14, 2017 at 12:23 PM, Mahmood S
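The Grid Engine consumable Skylar describes can be set up roughly like this. The complex name `ccmpp` and the count of 16 are illustrative; the count should match what the license server actually serves, and ideally be kept in sync with a load sensor:

```shell
# 1. Define a consumable complex (qconf -mc opens the complex list in an
#    editor); add a line of the form:
#      ccmpp  ccmpp  INT  <=  YES  YES  0  0

# 2. Attach the total license count to the global host:
qconf -mattr exechost complex_values ccmpp=16 global

# 3. Jobs request licenses and remain queued when none are free:
qsub -l ccmpp=1 run_starccm.sh
```

Because the scheduler only decrements the counter for jobs it started, interactive or off-cluster license checkouts are invisible to it unless a load sensor reports the real free count.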

Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread Mahmood Sayed
We've used both NAT and fully routable private networks up to 1000s of nodes. NAT was a little more secure for our needs. > On Apr 14, 2017, at 2:41 PM, Richter, Brian J {BIS} > wrote: > > Thanks a lot, Ed. I will be going the NAT route! > > Brian J. Richter > Global R&D Senior Analyst • In
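The fully routable alternative Mahmood mentions amounts to telling the license server (or the site router) how to reach the private cluster network via the head node, instead of hiding the nodes behind NAT. A one-line sketch, assuming the private network is `10.0.0.0/24` and the head node's external address is `192.168.1.10` (both illustrative):

```shell
# On the license server or the site router: route the cluster's private
# network via the head node, which must have IP forwarding enabled.
ip route add 10.0.0.0/24 via 192.168.1.10
```

With routing in place the license server sees each node's real address, which makes per-host debugging and license logs clearer, at the cost of exposing the private network.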

Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread Richter, Brian J {BIS}
Thanks a lot, Ed. I will be going the NAT route! Brian J. Richter Global R&D Senior Analyst • Information Technology 617 W Main St, Barrington, IL 60010 Office: 847-304-2356 • Mobile: 847-305-6306 brian.j.rich...@pepsico.com From: Swindelles, Ed [mailto:ed.swi

Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread Ryan Novosielski
That’s what we do, yes — NAT via the master node or similar appropriate node in your setup. > On Apr 14, 2017, at 2:34 PM, Richter, Brian J {BIS} > wrote: > > Hi All, > > We just built our first HPC and I have what seems like a rather dumb question > regarding best practices and compute node

[Beowulf] HPC and Licensed Software

2017-04-14 Thread Richter, Brian J {BIS}
Hi All, We just built our first HPC and I have what seems like a rather dumb question regarding best practices and compute nodes. Our HPC setup is currently small, we have a HeadNode which runs MOAB/Torque and we have 4 compute nodes connected with IB. The compute nodes are on their own private