Re: [Beowulf] 40kw racks?

2019-10-21 Thread Mahmood Sayed
I’ve done 35 kW racks using Motivair rear door heat exchangers, designed by Cray for Duke University. 4x 50 amp PDUs.

> On Oct 21, 2019, at 11:50 AM, Michael Di Domenico wrote:
>
> Has anyone on the list built 40kw racks? I'm particularly interested
> in what parts you used, rack, pdu, rea
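As a sanity check on those numbers, here is a minimal sketch of the rack-power arithmetic. The post does not state the supply voltage or phase, so this assumes 208 V three-phase PDUs and the common 80% continuous-load derating; both are assumptions, not facts from the thread.

```python
import math

def pdu_capacity_kw(amps, volts=208.0, three_phase=True, derate=0.8):
    """Usable power for one PDU.

    Three-phase power is V * A * sqrt(3); the 0.8 factor is the usual
    derating for continuous loads. Voltage/phase here are assumptions.
    """
    power_w = amps * volts * (math.sqrt(3) if three_phase else 1.0)
    return power_w * derate / 1000.0

per_pdu = pdu_capacity_kw(50)   # one 50 A PDU under the assumptions above
rack_total = 4 * per_pdu        # four PDUs per rack, as in the post
print(f"per PDU: {per_pdu:.1f} kW, rack capacity: {rack_total:.1f} kW")
```

Under these assumptions, four 50 A PDUs give roughly 57 kW of derated capacity, which leaves comfortable headroom over a 35 kW design load.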

Re: [Beowulf] HPC jobs query - from the other side of the fence

2019-05-21 Thread Mahmood Sayed
The problem is worldwide. I have two HPC sysadmin positions open in RTP, NC, USA and I'm having a hard time getting viable candidates in.

> On May 21, 2019, at 11:33 AM, Mark Lundie wrote:
>
> Hi Gerald,
>
> It's worth looking at STFC: There are often HPC sysadmin positions
> available at

[Beowulf] HPC Systems Engineer Positions

2018-06-01 Thread Mahmood Sayed
the area (or willing to move to the Oak Ridge, Tennessee area), please send me your info ASAP. Thanks!

*Mahmood Sayed*
Specialist, High Performance Computing, Federal Services
430 Davis Drive, Suite 270 | Morrisville, NC 27560
Cell

Re: [Beowulf] HPC and Licensed Software

2017-04-14 Thread Mahmood Sayed
We've used both NAT and fully routable private networks up to 1000s of nodes. NAT was a little more secure for our needs.

> On Apr 14, 2017, at 2:41 PM, Richter, Brian J {BIS} wrote:
>
> Thanks a lot, Ed. I will be going the NAT route!
>
> Brian J. Richter
> Global R&D Senior Analyst • In
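For anyone going the NAT route on the head node, this sketch generates the standard netfilter masquerading rules. The subnet and interface names are hypothetical examples, not from the thread; the iptables/sysctl invocations themselves are the stock Linux commands for masquerading a private network.

```python
def nat_rules(private_cidr, wan_iface):
    """Commands to masquerade a cluster's private subnet behind the
    head node's external interface (private_cidr and wan_iface are
    example values, not from the original post)."""
    return [
        # Allow the kernel to forward packets between interfaces.
        "sysctl -w net.ipv4.ip_forward=1",
        # Rewrite outbound traffic from the private subnet.
        f"iptables -t nat -A POSTROUTING -s {private_cidr} -o {wan_iface} -j MASQUERADE",
        # Permit outbound forwarding and the matching return traffic.
        f"iptables -A FORWARD -s {private_cidr} -o {wan_iface} -j ACCEPT",
        f"iptables -A FORWARD -d {private_cidr} -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]

for cmd in nat_rules("10.10.0.0/16", "eth0"):
    print(cmd)
```

The security upside mentioned in the post follows from this layout: compute nodes have no routable address, so nothing outside can initiate a connection to them.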

Re: [Beowulf] Cluster diagramming tools, rack, cabling, etc?

2016-04-25 Thread Mahmood Sayed
I'm not sure if this is exactly what you're looking for, but I've used RackTables in the past: http://racktables.org/

On Mon, Apr 25, 2016 at 7:23 AM, Andrew Latham wrote:
> Jeff, something along the lines of
> https://en.wikipedia.org/wiki/DOT_(graph_description_language) for
> diagrams that c

Re: [Beowulf] frequency scaling

2015-09-21 Thread Mahmood Sayed
Nuke it from the BIOS. It's the only way to be sure.

Mahmood Sayed
HPC Admin, NIEHS

> On Sep 21, 2015, at 8:27 AM, Michael Di Domenico wrote:
>
> What steps are generally taken to remove frequency scaling from a box?
> I'm curious if there's something ab

Re: [Beowulf] Scheduler question -- non-uniform memory allocation to MPI

2015-07-30 Thread Mahmood Sayed
Most of my WRF users are running their jobs up at NCAR for that reason alone. It's terribly inefficient and complicated to get set up correctly. Let the WRF pros deal with it...

Mahmood Sayed
HPC Admin
US National Institute for Environmental Health Sciences

On Thu, Jul 30, 2015 at

Re: [Beowulf] President Obama signs executive order for Exascale computer

2015-07-30 Thread Mahmood Sayed
This sounds like really good news for the HPC community!

Mahmood Sayed
HPC Admin
US National Institute for Environmental Health Sciences

On Thu, Jul 30, 2015 at 2:28 PM, Prentice Bisbal <prentice.bis...@rutgers.edu> wrote:
> Seriously? I'm going to be the first guy to post this?

Re: [Beowulf] disabling swap on cluster nodes?

2015-02-06 Thread Mahmood Sayed
Prentice,

We regularly configure our compute nodes without any swap partition. There have been no adverse effects on the systems' performance under load. We're running clusters with everything from RHEL5/RHEL6 and the FOSS variants thereof to several LTS versions of Ubuntu. RAM per node ranges
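A swapless node image mostly comes down to leaving the swap entry out of `/etc/fstab` (plus a one-time `swapoff -a` on live nodes). This is a minimal sketch of the fstab filtering step for image prep; the sample entries are hypothetical, not from any system in the thread.

```python
def drop_swap_entries(fstab_text):
    """Return fstab content with swap entries removed.

    The third whitespace-separated field of an fstab line is the
    filesystem type; comment lines are left untouched.
    """
    kept = []
    for line in fstab_text.splitlines():
        fields = line.split()
        is_comment = line.lstrip().startswith("#")
        if len(fields) >= 3 and not is_comment and fields[2] == "swap":
            continue  # drop the swap entry
        kept.append(line)
    return "\n".join(kept)

# Hypothetical sample fstab for demonstration.
sample = """/dev/sda1 / ext4 defaults 0 1
/dev/sda2 swap swap defaults 0 0
proc /proc proc defaults 0 0"""

print(drop_swap_entries(sample))
```

With no swap configured, the kernel OOM killer terminates a runaway job outright instead of letting the node thrash, which is usually the behavior you want on compute nodes.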