I’ve done 35 kW racks using Motivair rear-door heat exchangers. Designed by Cray
for Duke University. 4x 50 amp PDUs.
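(For anyone sizing something similar, the back-of-the-envelope math, assuming
208 V three-phase PDUs and the usual 80% continuous-load derate, neither of
which is stated above:)

import math  # rough rack power budget; voltage and derate are assumptions

volts, amps, derate = 208, 50, 0.8
kw_per_pdu = volts * amps * math.sqrt(3) * derate / 1000
print(f"per PDU: {kw_per_pdu:.1f} kW, 4x: {4 * kw_per_pdu:.1f} kW")
# -> roughly 14.4 kW per PDU, ~57.6 kW total: comfortable headroom over 35 kW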
> On Oct 21, 2019, at 11:50 AM, Michael Di Domenico
> wrote:
>
> Has anyone on the list built 40 kW racks? I'm particularly interested
> in what parts you used, rack, pdu, rea
The problem is worldwide. I have two HPC sysadmin positions open in RTP, NC,
USA, and I'm having a hard time finding viable candidates.
> On May 21, 2019, at 11:33 AM, Mark Lundie
> wrote:
>
> Hi Gerald,
>
> It's worth looking at STFC: There are often HPC sysadmin positions
> available at
If you're in the area (or willing to move to the Oak Ridge, Tennessee
area), please send me your info ASAP.
Thanks!
*Mahmood Sayed*
Specialist, High Performance Computing, Federal Services
430 Davis Drive, Suite 270 | Morrisville, NC 27560
We've used both NAT and fully routable private networks on clusters of up to
thousands of nodes. NAT was a little more secure for our needs.
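(For the archive, the head-node side of the NAT setup boils down to two
steps; a minimal sketch, assuming eth0 faces the outside network and the
cluster lives on 10.1.0.0/16, both of which are illustrative names:)

import subprocess  # sketch: enable forwarding, then masquerade cluster traffic

with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
    f.write("1")  # turn on IPv4 forwarding on the head node

subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
                "-s", "10.1.0.0/16", "-o", "eth0", "-j", "MASQUERADE"],
               check=True)  # rewrite outbound cluster traffic to eth0's address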
> On Apr 14, 2017, at 2:41 PM, Richter, Brian J {BIS}
> wrote:
>
> Thanks a lot, Ed. I will be going the NAT route!
>
> Brian J. Richter
> Global R&D Senior Analyst • In
I'm not sure if this is exactly what you're looking for, but I've used
RackTables in the past.
http://racktables.org/
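For the DOT suggestion Andrew makes below, the trick is just to generate
the graph text from whatever inventory you already have; a toy sketch
(rack and node names are made up):

racks = {"rack01": ["node001", "node002"], "rack02": ["node003"]}

lines = ["graph cluster {", "  rankdir=LR;"]
for rack, nodes in racks.items():
    for node in nodes:
        lines.append(f'  "{rack}" -- "{node}";')
lines.append("}")
print("\n".join(lines))  # render with: dot -Tpng -o racks.png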
On Mon, Apr 25, 2016 at 7:23 AM, Andrew Latham wrote:
> Jeff, something along the lines of
> https://en.wikipedia.org/wiki/DOT_(graph_description_language) for
> diagrams that c
Nuke it from the BIOS. It's the only way to be sure.
Mahmood Sayed
HPC Admin, NIEHS
> On Sep 21, 2015, at 8:27 AM, Michael Di Domenico
> wrote:
>
> What steps are generally taken to remove frequency scaling from a box?
> I'm curious if there's something ab
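(Addendum: if there's no BIOS toggle for it, you can at least pin the
governor from the OS; a sketch, assuming a Linux box with the cpufreq
sysfs interface and root access:)

import glob  # pin every core's cpufreq governor to "performance"

for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
    with open(path, "w") as f:
        f.write("performance")  # stops the governor from downclocking cores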
Most of my WRF users are running their jobs up at NCAR for that reason
alone. It's terribly inefficient and complicated to get set up
correctly. Let the WRF pros deal with it...
Mahmood Sayed
HPC Admin
US National Institute for Environmental Health Sciences
On Thu, Jul 30, 2015 at
This sounds like really good news for the HPC community!
Mahmood Sayed
HPC Admin
US National Institute for Environmental Health Sciences
On Thu, Jul 30, 2015 at 2:28 PM, Prentice Bisbal <
prentice.bis...@rutgers.edu> wrote:
> Seriously? I'm going to be the first guy to post this?
Prentice,
We regularly configure our compute nodes without any swap partition. There have
been no adverse effects on the systems' performance under load. We're running
clusters with everything from RHEL5/RHEL6 and the FOSS variants thereof to
several LTS versions of Ubuntu. RAM per node ranges
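(A quick sanity check that a node really is swapless, for anyone
replicating this; /proc/swaps carries only its header line when no swap
is active:)

with open("/proc/swaps") as f:  # header line plus one line per swap device
    lines = f.readlines()

if len(lines) <= 1:
    print("no active swap")
else:
    print("active swap devices:\n" + "".join(lines[1:]))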