Yes. Oh, I’m sure a modern multicore box could be purchased that would handle my 
specific workflow from back then – and the codes are easy to run anywhere. 
Fortran, my friends, Fortran: it compiles everywhere and uses BLAS and LAPACK.

But it was sort of an example of where deskside HPC (defined as “need more than 
you can buy in a single box”) might be useful.
Because, after all, one of the things I was modeling was an array of 288 
antennas spanning 1.6 km out at the Owens Valley Radio Observatory Long 
Wavelength Array – specifically, the interactions between adjacent antennas.
But, of course, it would be cool to model all 288 (now more than 300, I think; 
they added some), and since the modeling is a combination of O(N^2) work 
(building the interaction matrices) and O(N^3) work (solving the linear 
equations), I can consume as much horsepower as one might have.  And yes, you’d 
really want to do this as a multi-grid sort of model, because the interaction 
between two antennas 1 km apart is pretty small.
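
For the curious, here is the shape of that computation as a Fortran sketch.
The coupling() kernel below is a made-up placeholder – a real NEC-style
method-of-moments code evaluates the EM integral-equation kernel, with many
unknowns per antenna rather than one:

    ! Sketch: the two expensive steps of a method-of-moments antenna model
    program mom_sketch
      implicit none
      integer, parameter :: n = 288        ! one unknown per antenna, for brevity
      complex*16 :: z(n,n), v(n)           ! interaction matrix, excitation
      integer :: ipiv(n), info, i, j

      ! O(N^2): fill the dense interaction matrix
      do j = 1, n
         do i = 1, n
            z(i,j) = coupling(i, j)
         end do
         v(j) = (1.0d0, 0.0d0)             ! unit excitation, say
      end do

      ! O(N^3): LU-factor and solve Z x = V with LAPACK
      call zgesv(n, 1, z, n, ipiv, v, n, info)
      if (info /= 0) stop 'zgesv failed'
      print *, 'current on element 1:', v(1)

    contains
      complex*16 function coupling(i, j)
        integer, intent(in) :: i, j
        ! placeholder: coupling falls off with separation
        coupling = dcmplx(1.0d0 / dble(abs(i - j) + 1), 0.0d0)
      end function coupling
    end program mom_sketch

Builds with "gfortran mom_sketch.f90 -llapack -lblas".  The doubly nested fill
loop is the O(N^2) part, ZGESV is the O(N^3) part, and a threaded BLAS/LAPACK
will soak up every core you give it.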

http://www.tauceti.caltech.edu/LWA/


From: Beowulf <beowulf-boun...@beowulf.org> on behalf of "Michael H. Frese" 
<michael.fr...@numerex-llc.com>
Date: Wednesday, August 25, 2021 at 2:01 PM
To: "beowulf@beowulf.org" <beowulf@beowulf.org>
Subject: [EXTERNAL] Re: [Beowulf] Deskside clusters


Jim,

I'm reluctant to expose my ignorance, but I think I have some experience to 
share.  And I don't believe this is a cluster issue.

My company -- and I do mean my company -- did 2-D multiphysics MHD simulations 
up until about 4 years ago using our proprietary multiblock domain 
decomposition code.  The diffusive magnetic field solver used all the vector 
operators in EM, so we share that much of the problems you want to run.

At the end, we did our computations on 4 workstations connected by ancient 
single-channel InfiniBand -- approximately 10 Gb/s, but with ~1 microsecond 
latency, which was critical for the small messages we were sending.

About 10 years ago Mellanox was dumping those, so we bought 24 cards and two 
switches, one 8-port and one 16-port.  They cost us a little more than Gigabit 
Ethernet cards, but had 30-40 times lower latency.

So we built two clusters in 2012.  Both used hand-built deskside workstation 
boxes, which we set up on commercial retail-store chrome racks -- NOT 1U or 2U.

As time went by, we upgraded to dual-core, then quad-core, then 8-core CPUs, 
with motherboards and memory to match.  We used CentOS 5 for the InfiniBand 
until it no longer supported the motherboards we needed, and then we went to 
CentOS 7.  We never paid for RHEL.

By the last version, we found we could often run problems on a single 8-core 
machine.

We stuck with AMD chips because they were faster for the money than Intel's.  
The last CPUs we bought were capable of hyperthreading, so we could run 16 jobs 
on each 8-core box.

I bet one deskside workstation running an AMD Ryzen 9 5900X 12-core CPU would 
be able to do the job you want, quietly, at low power, and hence with a low 
cooling requirement.  Your problems seem to be embarrassingly parallel, but if 
you need communication between processes you could use in-memory communication, 
which is LOTS faster than any InfiniBand.  And CentOS is fully equipped to get 
you an MPI for that.
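
To put rough numbers on both points -- the microsecond latency above and 
in-memory message passing here -- a minimal ping-pong sketch, assuming any MPI 
stack from the CentOS repos (Open MPI or MPICH).  Launched with both ranks on 
one box, the messages move through shared memory and never touch a NIC:

    ! Time a small-message MPI ping-pong between ranks 0 and 1
    program pingpong
      use mpi
      implicit none
      integer, parameter :: nreps = 10000
      integer :: rank, ierr, i, stat(MPI_STATUS_SIZE)
      double precision :: buf(8), t0, t1   ! a 64-byte message, i.e. a small one

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      buf = 0.0d0
      call MPI_Barrier(MPI_COMM_WORLD, ierr)
      t0 = MPI_Wtime()
      do i = 1, nreps
         if (rank == 0) then
            call MPI_Send(buf, 8, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
            call MPI_Recv(buf, 8, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, stat, ierr)
         else if (rank == 1) then
            call MPI_Recv(buf, 8, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, stat, ierr)
            call MPI_Send(buf, 8, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, ierr)
         end if
      end do
      t1 = MPI_Wtime()

      if (rank == 0) print *, 'one-way latency, usec:', &
           (t1 - t0) / (2.0d0 * nreps) * 1.0d6
      call MPI_Finalize(ierr)
    end program pingpong

Build with mpif90 and run "mpirun -np 2 ./pingpong" on one node; shared memory 
typically comes in well under a microsecond, versus tens of microseconds over 
Gigabit Ethernet.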

Python is available.  I don't know about your NEC or plotting software, but 
NEC source could be built on your new workstation if necessary.

You need a friendly sysadmin and programmer to set it up, get it going, and get 
you around the list of approved workstations.

Hope this isn't too far from your requirements.  Good luck!

Mike



