Re: [Beowulf] malloc on filesystem

2025-02-05 Thread John Hearns
Is this any use https://en.wikipedia.org/wiki/Zram On Wed, 5 Feb 2025 at 15:50, Michael DiDomenico wrote: > this might sound like a bit of an oddity, but does anyone know if > there's a library out there that will let me override malloc calls to > memory and direct them to a filesystem instead?
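A minimal sketch of the zram approach suggested above — a compressed RAM-backed swap device so overflowing allocations spill into compressed memory rather than a real disk (the 16G size and priority are illustrative; the sysfs interface is the standard zram one):

    modprobe zram num_devices=1
    echo 16G > /sys/block/zram0/disksize   # uncompressed capacity of the device
    mkswap /dev/zram0
    swapon -p 10 /dev/zram0                # high priority so it fills before any disk swap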

Re: [Beowulf] immersion

2024-04-08 Thread John Hearns
HPC Sysadmins will have to gain other skills https://youtu.be/Jf8Sheh4MD4?si=La0KfEF6OGPRKA2- On Sun, 7 Apr 2024 at 23:07, Scott Atchley wrote: > On Sun, Mar 24, 2024 at 2:38 PM Michael DiDomenico > wrote: > >> i'm curious if others think DLC might hit a power limit sooner or later, >> like Air

Re: [Beowulf] [External] position adverts?

2024-02-23 Thread John Hearns
There is a Jobs channel on hpc.social Just saying On Fri, Feb 23, 2024, 2:09 PM Michael DiDomenico wrote: > Maybe we should come up with some kind of standard/wording/what-have-you to > post such. I have some open positions as well. might liven the list up a > little too... :) > > On Thu, Feb 22,

Re: [Beowulf] And wearing another hat ...

2023-11-13 Thread John Hearns
https://sealandgov.org/ Move to Sealand. It is a WW2 gun platform in the English Channel. I believe the servers are down in the legs. On Mon, 13 Nov 2023, 17:07 Joshua Mora, wrote: > Some folks trying to bypass legally government restrictions. > > Is land on the Moon or Mars on sale for locatin

Re: [Beowulf] ib neighbor

2023-09-20 Thread John Hearns
netloc is the tool you want to use. Look in the latest hwloc documentation On Wed, 20 Sep 2023, 13:55 John Hearns, wrote: > I did manage to get the graphical netloc utility working once. Part of the > hwloc/openmpi project. > > It produces a very pretty image of IB topology. I think

Re: [Beowulf] ib neighbor

2023-09-20 Thread John Hearns
Does ibnetdiscover not help you? On Tue, 19 Sep 2023, 19:03 Michael DiDomenico, wrote: > does anyone know if there's a simple command to pull the neighbor of > an ib port? for instance, this horrible shell command line > > # for x in `ibstat | awk -F \' '/^CA/{print $2}'`; do iblinkinfo -C
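For reference, the two standard OFED tools this thread converges on — a sketch, with the HCA name mlx5_0 as an illustrative placeholder:

    ibnetdiscover            # walks the fabric and prints every node with its links
    iblinkinfo -C mlx5_0     # per-port link state; the remote node and port appear on each line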

Re: [Beowulf] ib neighbor

2023-09-20 Thread John Hearns
I did manage to get the graphical netloc utility working once. Part of the hwloc/openmpi project. It produces a very pretty image of IB topology. I think if you zoom in you can get neighbours. A few years since I used it. On Tue, 19 Sep 2023, 19:03 Michael DiDomenico, wrote: > does anyone know i

Re: [Beowulf] NFS alternative for 200 core compute (beowulf) cluster

2023-08-10 Thread John Hearns
I would look at BeeGFS here On Thu, 10 Aug 2023, 20:19 leo camilo, wrote: > Hi everyone, > > I was hoping I would seek some sage advice from you guys. > > At my department we have built this small prototyping cluster with 5 > compute nodes, 1 name node and 1 file server. > > Up until now, the nam

Re: [Beowulf] Your thoughts on the latest RHEL drama?

2023-08-09 Thread John Hanks
On Mon, Jun 26, 2023 at 12:27 PM Prentice Bisbal via Beowulf < beowulf@beowulf.org> wrote: > This is Red Hat biting the hands that feed them. > And that is the perfect summary of the situation. More and more I view "EL" as a standard, previously created/defined by Redhat but due to the behavior yo

Re: [Beowulf] interconnect wars... again...

2023-07-31 Thread John Hearns
lowly with the CPU. If I can get a > GPU in the mix, doing that to speed things up. > Again.. I would like to apologize for being quiet for so long. I'll try > to toss an "ack" in there from my phone if nothing else. > > > ./Andrew Falgout > KG5GRX > > > O

Re: [Beowulf] interconnect wars... again...

2023-07-31 Thread John Hearns
A quick ack would be nice. On Fri, 28 Jul 2023, 06:38 John Hearns, wrote: > Andrew, the answer is very much yes. I guess you are looking at the > interface of 'traditional' HPC which uses workload schedulers and > Kubernetes style clusters which use containers. > Firstly

Re: [Beowulf] interconnect wars... again...

2023-07-27 Thread John Hearns
Andrew, the answer is very much yes. I guess you are looking at the interface of 'traditional' HPC, which uses workload schedulers, and Kubernetes-style clusters, which use containers. Firstly I would ask if you are coming from the point of view of someone who wants to build a cluster in your home or

Re: [Beowulf] interconnect wars... again...

2023-07-26 Thread John Hearns
All the cool kids are on hpc.social I am on the Slack there. I would encourage everyone to come over On Wed, 26 Jul 2023, 14:39 Michael DiDomenico, wrote: > just a mailing list as far as i know. it used to get a lot more > traffic, but seems to have simmered down quite a bit > > On Tue, Jul 25

Re: [Beowulf] Your thoughts on the latest RHEL drama?

2023-06-27 Thread John Hearns
Rugged individualist? I like that... Me puts on plaid shirt and goes to wrestle with some bears... > Maybe it is time for an HPC Linux distro, this is where Good move. I would say a lightweight distro that does not do much and is rebooted every time a job finishes. Wonder what security types wou

Re: [Beowulf] Your thoughts on the latest RHEL drama?

2023-06-26 Thread John Hearns
There is a good discussion on this topic over on the Slack channel at hpc.social I would urge anyone on this list to join up there - you will find a home. hpcsocial.slack.com On Mon, 26 Jun 2023 at 19:27, Prentice Bisbal via Beowulf < beowulf@beowulf.org> wrote: > Beowulfers, > > By now, most of

Re: [Beowulf] [External] Re: old sm/sgi bios

2023-03-23 Thread John Hearns
That Supermicro board sounds like one of the boards from an ICE cluster, right? I know Joe flagged up the BIOS - thinking out loud is it not possible to copy the BIOS from another, working, board of the same model? Regarding SGI workstations when I worked in post production at Framestore we had lo

Re: [Beowulf] HPCG benchmark, again

2022-03-20 Thread John Hearns
Jörg, I would have a look at the Archer/UK-HPC benchmarks https://github.com/hpc-uk/archer-benchmarks They have Castep and CP2K in the applications benchmarks which will be relevant to you. Also thank you for looking for advice here! As someone who has worked for several cluster vendors, please c

Re: [Beowulf] AMD Accelerated Data Center Keynote

2021-11-09 Thread John Hearns
All good Jim. However to be allowed to benchmark these systems you must pronounce the CPU as "Milawn" As I said elsewhere, they are getting pretty far north now. Is the plan to cross the Alps? On Tue, 9 Nov 2021 at 09:23, Jim Cownie wrote: > @Prentice: > > Certainly looking forward to running so

[Beowulf] Infiniband Fabric test tool

2021-10-22 Thread John Hearns
I recently saw a presentation which referenced a framework to test out Infiniband (or maybe in general MPI) fabrics. This was a Github repository. It ran a series of inter-node tests and analysed the results. It seemed similar in operation to Linktest https://www.fz-juelich.de/ias/jsc/EN/Expertise

Re: [Beowulf] Infiniband for MPI computations setup guide

2021-10-20 Thread John Hearns
As Paul says - start a subnet manager. I guess you are using the distro supplied IB stack? Run the following commands: sminfo ibdiagnet these will check out your subnet manager and your fabric On Wed, 20 Oct 2021 at 17:21, Paul Edmon via Beowulf wrote: > Oh you will also need a IB subnet manag
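A sketch of the checks suggested here (output locations vary by OFED release):

    sminfo       # prints the LID, GUID, state and priority of the active subnet manager
    ibdiagnet    # sweeps the whole fabric; reports typically land under /var/tmp/ibdiag*
    # if sminfo finds no subnet manager, start one, e.g.:
    systemctl start opensm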

Re: [Beowulf] [EXTERNAL] server lift

2021-10-20 Thread John Hearns
The engine hoist is just superb! The right tool for the job. Thinking about this, old style factories had overhead cranes. At Glasgow University we had a cyclotron, and I am told one of the professors took a great joy in driving the crane. The Tate Modern art gallery has a huge overhead crane, kept

Re: [Beowulf] Data Destruction

2021-09-30 Thread John Hearns
I once had an RMA case for a failed tape with Spectralogic. To prove it was destroyed and not re-used I asked the workshop guys to put it through a bandsaw, then sent off the pictures. On Wed, 29 Sept 2021 at 16:47, Ellis Wilson wrote: > On 9/29/21 11:41 AM, Jörg Saßmannshausen wrote: > > If you

Re: [Beowulf] Rant on why HPC isn't as easy as I'd like it to be.

2021-09-21 Thread John Hearns
Some points well made here. I have seen in the past job scripts passed on from graduate student to graduate student - the case I am thinking on was an Abaqus script for 8 core systems, being run on a new 32 core system. Why WOULD a graduate student question a script given to them - which works. The

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-09-21 Thread John Hearns
Over on the Julia discussion list there are often topics on performance or varying performance - these often turn out to be due to the BLAS libraries in use, and how they are being used. I believe that there is a project for pure Julia BLAS. On Mon, 20 Sept 2021 at 18:41, Lux, Jim (US 7140) via Beo

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-09-21 Thread John Hearns
Yes, but which foot? You have enough space for two toes from each foot for a taste, and you then need some logic to decide which one to use. On Mon, 20 Sept 2021 at 21:59, Prentice Bisbal via Beowulf < beowulf@beowulf.org> wrote: > On 9/20/21 6:35 AM, Jim Cownie wrote: > > >> Eadline's Law : Cach

Re: [Beowulf] Rant on why HPC isn't as easy as I'd like it to be.

2021-09-20 Thread Peter St. John
My dream is to use some sort of optimization software (I would try Genetic Programing say) with a heterogeneous cluster (of mixed fat and light nodes, even different network topologies in sub-clusters) to determine the optimal configuration and optimal running parameters in an application domain fo

Re: [Beowulf] [beowulf] nfs vs parallel filesystems

2021-09-20 Thread John Hearns
This talk by Keith Manthey is well worth listening to. Vendor neutral as I recall, so don't worry about a sales message being pushed HPC Storage 101 in this series https://www.dellhpc.org/eventsarchive.html On Sat, 18 Sept 2021 at 18:21, Lohit Valleru via Beowulf < beowulf@beowulf.org> wrote: >

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-09-19 Thread John Hearns
Eadline's Law : Cache is only good the second time. On Fri, 17 Sep 2021, 21:25 Douglas Eadline, wrote: > --snip-- > > > > Where I disagree with you is (3). Whether or not cache size is important > > depends on the size of the job. If your iterating through data-parallel > > loops over a large da

Re: [Beowulf] [beowulf] nfs vs parallel filesystems

2021-09-19 Thread John Hearns
Lohit, good morning. I work for Dell in the EMEA HPC team. You make some interesting observations. Please ping me offline regarding Isilon. Regarding NFS we have a brand new Ready Architecture which uses PowerEdge servers and ME series storage (*). It gets some pretty decent performance and I woul

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-08-25 Thread John Hearns
If anyone works with Dell kit I am happy to discuss thermal profiles and power capping. But definitely off list. On Wed, 25 Aug 2021 at 07:16, Tony Brian Albers wrote: > I have a Precision 5820 in my office. It's only got one CPU(14 physical > cores), but it's more quiet than my HP SFF desktop

Re: [Beowulf] List archives

2021-08-18 Thread John Hearns
I plead an advanced case of not keeping up with technology. I note this is for Ryzen - anyone care to comment on Rome/Milan? On Wed, 18 Aug 2021 at 08:56, Jim Cownie wrote: > John may have been looking for Doug’s tweet and just confused the delivery > medium... > > https:/

[Beowulf] List archives

2021-08-16 Thread John Hearns
The Beowulf list archives seem to end in July 2021. I was looking for Doug Eadline's post on limiting AMD power and the results on performance. John H ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change

Re: [Beowulf] [External] Just a quick heads up: new Beowulf coming ...

2021-06-22 Thread Peter St. John
how confusing, I thought you were talking about *trees* https://en.wikipedia.org/wiki/Tree_(data_structure) :-) On Tue, Jun 22, 2021 at 12:02 PM Prentice Bisbal via Beowulf < beowulf@beowulf.org> wrote: > I doubt it will impact us much. At worst, one or two people might find > this list and pos

Re: [Beowulf] AMD and AVX512 [EXT]

2021-06-19 Thread John Hearns
That is a very interesting point! I never thought of that. Also mobile drives ARM development - yes I know the CPUs in Isambard and Fugaku will not be seen in your mobile phone but the ecosystem is propped up by having a diverse market and also the power saving priorities of mobile will influence H

Re: [Beowulf] AMD and AVX512

2021-06-19 Thread John Hearns
Regarding benchmarking real world codes on AMD , every year Martyn Guest presents a comprehensive set of benchmark studies to the UK Computing Insights Conference. I suggest a Sunday afternoon with the beverage of your choice is a good time to settle down and take time to read these or watch the pr

Re: [Beowulf] head node abuse

2021-03-26 Thread John Hearns
https://bofhcam.org/co-larters/lart-reference/index.html On Fri, 26 Mar 2021 at 13:57, Michael Di Domenico wrote: > does anyone have a recipe for limiting the damage people can do on > login nodes on rhel7. i want to limit the allocatable cpu/mem per > user to some low value
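One common answer on RHEL 7 is systemd user slices, since pam_systemd already places each login under user-<uid>.slice. A minimal sketch, with illustrative UID and limits:

    # cap a single user's share of the login node (persists for the slice)
    systemctl set-property user-1000.slice CPUQuota=200% MemoryLimit=4G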

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread John Hearns
Referring to lambda functions, I think I flagged up that AWS now supports containers up to 10GB in size for the lambda payload https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/ which makes a Julia language lambda possible https://www.youtube.com/watch?v=6DvpneWRb_w On

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread John Hearns
In the seminar the graph of sequencing effort for Sanger/ rest of UK/ worldwide is very impressive. On Thu, 4 Feb 2021 at 10:21, Tim Cutts wrote: > > > > On 3 Feb 2021, at 18:23, Jörg Saßmannshausen < > sassy-w...@sassy.formativ.net> wrote: > > > > Hi John, &

[Beowulf] Project Heron at the Sanger Institute

2021-02-03 Thread John Hearns
https://edition.cnn.com/2021/02/03/europe/tracing-uk-variant-origins-gbr-intl/index.html Dressed in white lab coats and surgical masks, staff here scurry from machine to machine -- robots and giant computers that are so heavy, they're placed on solid steel plates to support their weight. Heavy met

Re: [Beowulf] The xeon phi

2020-12-31 Thread John Hearns
Stupid question from me - does OneAPI handle Xeon Phi? (a) I should read the manual (b) it is a discontinued product - why would they put any effort into it On Thu, 31 Dec 2020 at 05:52, Jonathan Engwall < engwalljonathanther...@gmail.com> wrote: > Hello Beowulf, > Both the Xeon Phi and Tesla

Re: [Beowulf] RIP CentOS 8

2020-12-12 Thread John Hearns
Great vision Doug. May I also promote EESSI https://www.eessi-hpc.org/ (the European part may magically be transformed into something else soon) On Fri, 11 Dec 2020 at 18:57, Douglas Eadline wrote: > > > Some thoughts on this issue and future HPC > > First, in general it is a poor move by CentOS

Re: [Beowulf] [External] RIP CentOS 8

2020-12-09 Thread John Hearns
A quick reminder that there are specific Red Hat SKUs for cluster head nodes and a cheaper one for cluster nodes. The announcement regarding CentOS Stream said that there would be a new offer. On Wed, 9 Dec 2020 at 11:59, Peter Kjellström wrote: > On Tue, 8 Dec 2020 18:13:46 + > Ryan Novosiel

Re: [Beowulf] [External] RIP CentOS 8

2020-12-09 Thread John Hearns
Jorg, a big seismic processing company I worked with did indeed use Debian. The answer though is that industrial customers use commercial software packages which are licensed and they want support from the software vendors. If you check the OSes which are supported then you find Redhat and SuSE. T

Re: [Beowulf] [EXTERNAL] Lambda and Alexa [EXT]

2020-12-03 Thread John Hearns
Reviving this topic slightly, these were flagged up on the Julia forum https://github.com/aws/aws-lambda-runtime-interface-emulator The Lambda Runtime Interface Emulator is a proxy for Lambda’s Runtime and Extensions APIs, which allows customers to locally test their Lambda function packaged as a

Re: [Beowulf] Automatically replication of directories among nodes

2020-11-27 Thread John Hearns
James, that is cool! A thought I have had - for HA setups DRBD can be used for the shared files which the nodes need to keep updated. Has anyone tried Syncthing for this purpose? I suppose there is only one way to find out! On Fri, 27 Nov 2020 at 01:06, James Braid wrote: > On Wed, 25 Nov 2020, 0

Re: [Beowulf] RoCE vs. InfiniBand

2020-11-26 Thread John Hearns
Jorg, I think I might know where the Lustre storage is ! It is possible to install storage routers, so you could route between ethernet and infiniband. It is also worth saying that Mellanox have Metro Infiniband switches - though I do not think they go as far as the west of London! Seriously thoug

Re: [Beowulf] Lambda and Alexa [EXT]

2020-11-25 Thread John Hearns
h can be fun if your skill > needs to fetch data from some other source (in my case a rather sluggish > data service in Azure run by my local council), and there’s no clean way to > handle the event if you hit the 8 second limit, the function just gets > terminated and Alexa returns a rath

Re: [Beowulf] Clustering vs Hadoop/spark [EXT]

2020-11-25 Thread John Hearns
Or to put it simply: "Alexa - sequence my genome" On Wed, 25 Nov 2020 at 09:45, John Hearns wrote: > Tim, that is really smart. Over on the Julia discourse forum I have blue > skyed about using Lambdas to run Julia functions (it is an inherently > functional language) (*) &g

Re: [Beowulf] Clustering vs Hadoop/spark [EXT]

2020-11-25 Thread John Hearns
Tim, that is really smart. Over on the Julia discourse forum I have blue skyed about using Lambdas to run Julia functions (it is an inherently functional language) (*) Blue skying further, for exascale compute needs can we think of 'Science as a Service'? As in your example the scientist thinks abo

Re: [Beowulf] Best case performance of HPL on EPYC 7742 processor ...

2020-10-26 Thread John Hearns
This article might be interesting here: https://www.dell.com/support/article/en-uk/sln319015/amd-rome-is-it-for-real-architecture-and-initial-hpc-performance?lang=en And Hello Joshua. Long time no see. On Sun, 25 Oct 2020 at 23:11, Joshua Mora wrote: > Reach out AMD, > they have specific instr

Re: [Beowulf] ***UNCHECKED*** Re: Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-20 Thread John Hearns
> Most compilers had extensions from the IV/66 (or 77) – quoted strings, for instance, instead of Hollerith constants, and free form input. Some allowed array index origins other than 1 I can now date exactly when the rot set in. Hollerith constants are good enough for anyone. It's a gosh darned

Re: [Beowulf] [EXTERNAL] Re: ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-20 Thread John Hearns
arily a people problem, not a computer >> problem. Their existing work and data flows are already parallelized in >> some sense, and if they need to do it faster, they just add processors or >> storage as needed.

Re: [Beowulf] ***UNCHECKED*** Re: Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread John Hearns
> I have a let it "mellow a bit" approach to shiny new software. Software as malt whisky... I like it. Which reminds me to ask re LECBIG plans? On Mon, 19 Oct 2020 at 15:28, Douglas Eadline wrote: > --snip-- > > > Unfortunately the presumption seems to be that the old is deficient > > because

Re: [Beowulf] ***UNCHECKED*** Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread John Hearns
see no rush to replace English, French, … which are all older > than any of our programming languages, and which adapt, as do our > programming languages). > > On 19 Oct 2020, at 09:48, John Hearns wrote: > > Jim you make good points here. I guess my replies are: > > Modern Fortran

[Beowulf] ***UNCHECKED*** Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread John Hearns
> Mob: +44 780 637 7146 > > > On 15 Oct 2020, at 12:07, Oddo Da wrote: > > > > On Thu, Oct 15, 2020 at 1:11 AM John Hearns wrote: > > This has been a great discussion. Please keep it going. > > > > I am all out of ammo ;). In all seriousness, it is not eas

Re: [Beowulf] Julia on POWER9?

2020-10-16 Thread John Hearns
Hello Prentice. I think you need to come over to the Julia Discourse https://discourse.julialang.org/t/knet-on-powerpc64le-platform/48149 On Thu, 15 Oct 2020 at 22:09, Joe Landman wrote: > Cool (shiny!) > On 10/15/20 5:02 PM, Prentice Bisbal via Beowulf wrote: > > So while you've all been discu

Re: [Beowulf] [External] Spark, Julia, OpenMPI etc. - all in one place

2020-10-15 Thread Peter St. John
The idea of a "Chinese Github" interested me so I googled. Some sort of blog at Techcrunch https://techcrunch.com/2020/08/21/china-is-building-its-github-alternative-gitee/ describes Gitee, which I briefly looked at, https://gitee.com/?from=blog Several languages, prominently Java, Javascript,

Re: [Beowulf] [EXTERNAL] Re: ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-14 Thread John Hearns
This has been a great discussion. Please keep it going. To the points on technical debt, may I also add re-validation? Let's say you have a weather model which your institute has been running for 20 years. If you decide to start again from fresh with code in a new language you are going to have to

[Beowulf] ***UNCHECKED*** HPE locfg.pl Update_Firmware.xml

2020-08-10 Thread John McCulloch
Are any of you guys familiar with locfg.pl for managing ProLiant Gen10 iLO / BMC? I'm trying the Update_Firmware.xml without success. John McCulloch | PCPC Direct, Ltd. | desk 713-344-0923 ___ Beowulf mailing list, Beowulf@beowulf.org spon

Re: [Beowulf] experience with HPC running on OpenStack

2020-06-30 Thread John Hearns
Jorg, I would back up what Matt Wallis says. What benefits would Openstack bring you ? Do you need to set up a flexible infrastructure where clusters can be created on demand for specific projects? Regarding Infiniband the concept is SR-IOV. This article is worth reading: https://docs.openstack.or
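The gist of SR-IOV for InfiniBand under OpenStack is that the HCA is split into virtual functions that guests attach to directly. A hedged sketch of creating VFs (device name and sysfs path vary by driver, and the HCA firmware must have SR-IOV enabled):

    echo 8 > /sys/class/infiniband/mlx5_0/device/sriov_numvfs
    lspci | grep -i mellanox    # the new virtual functions appear as extra PCI functions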

Re: [Beowulf] experience with HPC running on OpenStack

2020-06-30 Thread John Hearns
The video is here. From 04:00 onwards https://fosdem.org/2020/schedule/event/magic_castle/ "OK your cluster will be available in about 20 minutes" On Tue, 30 Jun 2020 at 14:27, INKozin wrote: > And that's how you deploy an HPC cluster! > > On Tue, 30 Jun 2020 at 1

Re: [Beowulf] experience with HPC running on OpenStack

2020-06-30 Thread John Hearns
I saw Magic Castle being demonstrated live at FOSDEM this year. It is more a Terraform/ansible setup for configuring clusters on demand. The person demonstrating it called a Google Home assistant with a voice command and asked it to build and deploy a cluster - which it did! On Tue, 30 Jun 2020 at

Re: [Beowulf] NFS over IPoIB

2020-06-12 Thread John McCulloch
. It is my understanding that setting MTU to 9000 is recommended but that seems to be applicable for 10GbE. Regards, John McCulloch | PCPC Direct, Ltd. From: Alex Chekholko Sent: Friday, June 12, 2020 3:53 PM To: John McCulloch Cc: beowulf@beowulf.org Subject: Re
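For IPoIB specifically, the 9000-byte figure is an Ethernet jumbo-frame convention; IPoIB connected mode allows a far larger MTU. A sketch, with ib0 as an illustrative interface name:

    echo connected > /sys/class/net/ib0/mode   # default is datagram mode, MTU capped at 2044
    ip link set ib0 mtu 65520                  # connected-mode maximum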

[Beowulf] NFS over IPoIB

2020-06-12 Thread John McCulloch
-for-Tuning-and-Management Cheers, John McCulloch | PCPC Direct, Ltd. ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman

Re: [Beowulf] Neocortex unreal supercomputer

2020-06-12 Thread Peter St. John
ural networks. Here is a second article. > > https://insidehpc.com/2020/06/ai-supercomputer-at-psc-to-combine-cerebras-wafer-scale-chips-and-hpe-superdome-flex/ > > On Fri, Jun 12, 2020, 1:19 AM John Hearns wrote: > >> Will it dream of electric sheep when they turn out the

Re: [Beowulf] Neocortex unreal supercomputer

2020-06-12 Thread John Hearns
Will it dream of electric sheep when they turn out the lights and let it sleep? https://www.lanl.gov/discover/news-release-archive/2020/June/0608-artificial-brains.php On Fri, 12 Jun 2020 at 01:16, Jonathan Engwall < engwalljonathanther...@gmail.com> wrote: > This machine is planned, or possibl

Re: [Beowulf] [External] Re: Intel Cluster Checker

2020-04-30 Thread John Hearns
Thanks Chris. I worked in one place which was setting up Reframe. It looked to be complicated to get running. Has this changed? On Thu, 30 Apr 2020 at 20:09, Chris Samuel wrote: > On 4/30/20 6:54 am, John Hearns wrote: > > > That is a four letter abbreviation... > > A

Re: [Beowulf] [External] Re: Intel Cluster Checker

2020-04-30 Thread John Hearns
s normally the Intel C Compiler, or > C/C++ compiler suite (since you invoke the C compiler as “icc”). :-) > > On 30 Apr 2020, at 08:37, John Hearns wrote: > > Thanks Prentice. Iw as discussing this only to days ago... > I used the older version of ICC when working at XMA int the U

Re: [Beowulf] Intel Cluster Checker

2020-04-30 Thread John Hearns
Thanks Prentice. I was discussing this only two days ago... I used the older version of ICC when working at XMA in the UK. When the version changed I found it a lot more difficult to implement. I looked two days ago and the project seems to be revived, and incorporated into oneAPI Is anyone usi

Re: [Beowulf] HPC for community college?

2020-02-21 Thread John Hearns via Beowulf
Thinking about the applications to be run at a community college, the concept of a local weather forecast has been running around in my head lately. The concept would be to install and run WRF, perhaps overnight, and produce a weather forecast in the morning. I suppose this hinges on WRF having a s

Re: [Beowulf] First cluster in 20 years - questions about today

2020-02-07 Thread Peter St. John
When I was young I sent my paper to a Professor I knew in the field, and he submitted it to the journal. If I wanted to do that now I would attend a relevant graduate seminar at nearest big city research university (which I do sometimes anyway). My institution was expected to pay "page charges" bu

Re: [Beowulf] HPC demo

2020-01-21 Thread John McCulloch
Thank you for taking the time to write that up, Benson, we’ll take a look. John McCulloch | PCPC Direct, Ltd. | desk 713-344-0923 From: Benson Muite Sent: Monday, January 20, 2020 10:02 PM To: Scott Atchley ; John McCulloch Cc: beowulf@beowulf.org Subject: Re: [Beowulf] HPC demo a) For

Re: [Beowulf] HPC demo

2020-01-14 Thread John McCulloch
Hey Scott, I think I saw an exhibit like what you’re describing at the AMSE when I was on a project in Oak Ridge. Was that it? John McCulloch | PCPC Direct, Ltd. | desk 713-344-0923 From: Scott Atchley Sent: Tuesday, January 14, 2020 7:19 AM To: John McCulloch Cc: beowulf@beowulf.org Subject

Re: [Beowulf] HPC demo

2020-01-14 Thread John McCulloch
stuff. John McCulloch | PCPC Direct, Ltd. | desk 713-344-0923 From: Renfro, Michael Sent: Monday, January 13, 2020 7:16 PM To: John McCulloch Cc: beowulf@beowulf.org Subject: Re: [Beowulf] HPC demo The homepage for your company specifically advertises HPC services and expertise. Which upper

[Beowulf] HPC demo

2020-01-13 Thread John McCulloch
Mellanox EDR interconnect. Any suggestions would be appreciated. Respectfully, John McCulloch | PCPC Direct, Ltd. ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit

Re: [Beowulf] 40kw racks?

2019-10-21 Thread John
Lenovo claims to be able to do up to 40kw racks with their Neptune cooling system. https://www.lenovo.com/us/en/data-center/Lenovo-Neptune/p/neptune On Mon, Oct 21, 2019 at 9:50 AM Michael Di Domenico wrote: > Has anyone on the list built 40kw racks? I'm particularly interested > in what parts

[Beowulf] Deadline Extended: Call for Papers: HPCSYSPROS Workshop @ SC19 Friday, November 22nd

2019-08-24 Thread John
te: http://hpcsyspros.org Updated CFP: http://sighpc-syspros.org/workshops/2019/HPCSYSPROS -CfP-2019.pdf Submission Site: https://submissions.supercomputing.org/?page=Submit&id=SC19WorkshopHPCSYSPROS19FirstSubmission&site=sc19 -- John Blaas HPCSYSPROS

Re: [Beowulf] Build Recommendations - Private Cluster

2019-08-20 Thread John Hearns via Beowulf
A Transputer cluster? Squ! I know John Taylor (formerly Meiko/Quadrics) very well. Perhaps send me a picture off-list please? On Wed, 21 Aug 2019 at 06:55, Richard Edwards wrote: > Hi John > > No doom and gloom. > > It's in a purpose built workshop/computer room t

Re: [Beowulf] Build Recommendations - Private Cluster

2019-08-20 Thread John Hearns via Beowulf
Add up the power consumption for each of those servers. If you plan on installing this in a domestic house or indeed in a normal office environment you probably won't have enough amperage in the circuit you intend to power it from. Sorry to be all doom and gloom. Also this setup will make a great de
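A rough worked example of that arithmetic, with illustrative figures — eight 350 W servers on a UK 230 V domestic circuit:

    echo $(( 8 * 350 ))        # 2800 W total draw
    echo '2800 / 230' | bc -l  # ~12.2 A, uncomfortably close to a 13 A fused circuit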

[Beowulf] Cray Shasta Software

2019-08-17 Thread John Hearns via Beowulf
https://www.scientific-computing.com/news/cray-announces-shasta-software Joe Landman, would you care to tell us more? The integration of Kubernetes and batch system sounds interesting. ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Co

Re: [Beowulf] Lustre on google cloud

2019-07-31 Thread John Hearns via Beowulf
The RadioFreeHPC crew are listening to this thread I think! A very relevant podcast https://insidehpc.com/2019/07/podcast-is-cloud-too-expensive-for-hpc/ Re Capital One, here is an article from the Register. I think this is going off topic. https://www.theregister.co.uk/2019/07/30/capital_one_hac

Re: [Beowulf] Lustre on google cloud

2019-07-26 Thread John Hearns via Beowulf
) Terabyte scale data movement into or out of the cloud is not scary in 2019. You can move data into and out of the cloud at basically the line rate of your internet connection as long as you take a little care in selecting and tuning your firewalls and inline security devices. Pushing 1TB/day etc

[Beowulf] Reminder: Call for Papers: HPCSYSPROS Workshop @ SC19 Friday, November 22nd

2019-07-23 Thread John
te: http://hpcsyspros.org Updated CFP: http://sighpc-syspros.org/workshops/2019/HPCSYSPROS -CfP-2019.pdf Submission Site: https://submissions.supercomputing.org/?page=Submit&id=SC19WorkshopHPCSYSPROS19FirstSubmission&site=sc19 -- John Blaas HPCSYSPROS

Re: [Beowulf] flatpack

2019-07-23 Thread John Hearns via Beowulf
Having just spouted on about snaps/flatpak I saw on the roadmap for AWS Firecracker that snap support is to be included. Sorry that I am conflating snap and flatpak. On Tue, 23 Jul 2019 at 07:06, John Hearns wrote: > Having used Snaps on Ubuntu - which seems to be their preferred method

Re: [Beowulf] flatpack

2019-07-22 Thread John Hearns via Beowulf
Having used Snaps on Ubuntu - which seems to be their preferred method of distributing some applications, I have a slightly different take on the containerisation angle and would de-emphasise that. My take is that snaps/flatpak attack the "my distro ships with gcc version 4.1 but I need gcc version

[Beowulf] Differentiable Programming with Julia

2019-07-18 Thread John Hearns via Beowulf
Forgiveness is sought for my ongoing Julia fandom. We have seen a lot of articles recently on industry websites about how machine learning workloads are being brought onto traditional HPC platforms. This paper on how Julia is bringing them together is I think significant https://arxiv.org/p

Re: [Beowulf] help for metadata-intensive jobs (imagenet)

2019-06-28 Thread John Hearns via Beowulf
Igor, if there are any papers published on what you are doing with these images I would be very interested. I went to the new London HPC and AI Meetup on Thursday, one talk was by Odin Vision which was excellent. Recommend the new Meetup to anyone in the area. Next meeting 21st August. And a plug

Re: [Beowulf] Rsync - checksums

2019-06-17 Thread pellman . john
I know that at one point, some Intel chips had instruction extensions available to speed up SHA checksums by computing them directly in hardware. Might be worth looking into: https://software.intel.com/en-us/articles/intel-sha-extensions More recently, Intel has been promoting QuickAssist/QAT, wh
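A quick way to check for those extensions and gauge the win — a sketch; openssl picks up SHA-NI automatically when the CPU advertises it:

    grep -m1 -o sha_ni /proc/cpuinfo   # prints "sha_ni" if the extensions are present
    openssl speed sha256               # hashing throughput with whatever acceleration is available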

[Beowulf] Call For Papers: HPCSYSPROS Workshop @ SC19 Friday, November 22nd

2019-06-17 Thread John
Website: http://hpcsyspros.org Updated CFP: HPCSYSPROS CFP 2019 Submission Site: HPCSYSPROS 2019 Submission -- John Blaas HPCSYSPROS19 Program Chair ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription

Re: [Beowulf] Rsync - checksums

2019-06-17 Thread John Hearns via Beowulf
Probably best asking this question over on the GPFS mailing list. A bit of Googling reminded me of https://www.arcastream.com/ They are active in the UK Academic community, not sure about your neck of the woods. Give them a shout though and ask for Steve Mackie. http://arcastream.com/what-we-do/

Re: [Beowulf] A careful exploit?

2019-06-13 Thread John Hearns via Beowulf
Regarding serial ports - if you have IPMI then of course you have a virtual serial port. I learned something new about serial ports and IPMI Serial Over LAN recently. First of all you have to use the kernel command line options console=tty0 console=ttyS1,115200 This is well known. In the bad old d
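A sketch of the full Serial Over LAN setup (the serial port and speed vary by board, and the BMC address and user are placeholders):

    # in /etc/default/grub, then regenerate the config:
    #   GRUB_CMDLINE_LINUX="... console=tty0 console=ttyS1,115200"
    grub2-mkconfig -o /boot/grub2/grub.cfg
    # attach to the console from another machine over IPMI:
    ipmitool -I lanplus -H <bmc-address> -U <user> sol activate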

Re: [Beowulf] Containers in HPC

2019-05-23 Thread pellman . john
While not technically containers in the purest sense, Kata Containers also have the goal of producing a more secure containerization technology along with the advantage of major industry backing (Intel, Google, MS, AWS, etc). On Thu, May 23, 2019 at 10:13 AM Loncaric,

Re: [Beowulf] Frontier Announcement

2019-05-09 Thread John Hearns via Beowulf
Gerald that is an excellent history. One small thing though: "Of course the ML came along" What came first - the chicken or the egg? Perhaps the Nvidia ecosystem made the ML revolution possible. You could run ML models on a cheap workstation or a laptop with an Nvidia GPU. Indeed I am sitting next

Re: [Beowulf] Frontier Announcement

2019-05-08 Thread John Hearns via Beowulf
Seriously? Wha.. what? Someone needs to get help. And it wasn't me. I am a member of the People's Front of Julia. (contrived Python reference intentional) On Wed, 8 May 2019 at 22:57, Jeffrey Layton wrote: > I wrote some OpenACC articles for HPC Admin Magazine. A number of > pro-OpenMP people a

Re: [Beowulf] Frontier Announcement

2019-05-08 Thread John Hearns via Beowulf
I disagree. IT is a cyclical industry. Back in the bad old days codes were written to run on IBM mainframes. Which used the EBCDIC character set. There were Little Endian and Big Endian machines. VAX machines had a rich set of file IO patterns. I really don't think you could read data written on an I

Re: [Beowulf] How to debug error with Open MPI 3 / Mellanox / Red Hat?

2019-05-02 Thread John Hearns via Beowulf
2019 at 17:18, John Hearns wrote: > Chris, I have to say this. I have worked for smaller companies, and have > worked for cluster integrators. > For big University sized and national labs the procurement exercise will > end up with a well defined support arrangement. > > I

Re: [Beowulf] How to debug error with Open MPI 3 / Mellanox / Red Hat?

2019-05-02 Thread John Hearns via Beowulf
Chris, I have to say this. I have worked for smaller companies, and have worked for cluster integrators. For big University sized and national labs the procurement exercise will end up with a well defined support arrangement. I have seen, in one company I worked at, an HPC system arrive which I w

Re: [Beowulf] How to debug error with Open MPI 3 / Mellanox / Red Hat?

2019-05-02 Thread John Hearns via Beowulf
://www.brightcomputing.com/ Bright will certainly give you excellent support. On Thu, 2 May 2019 at 17:02, John Hearns wrote: > You ask some damned good questions there. > I will try to answer them from the point of view of someone who has worked > as an HPC systems integrator and

Re: [Beowulf] How to debug error with Open MPI 3 / Mellanox / Red Hat?

2019-05-02 Thread John Hearns via Beowulf
You ask some damned good questions there. I will try to answer them from the point of view of someone who has worked as an HPC systems integrator and supported HPC systems, both for systems integrators and within companies. We will start with HP. Did you buy those systems direct from HP as servers

Re: [Beowulf] How to debug error with Open MPI 3 / Mellanox / Red Hat?

2019-05-01 Thread John Hearns via Beowulf
On the RHEL 6.9 servers run ibstatus and also sminfo On Wed, 1 May 2019 at 16:23, John Hearns wrote: > link_layer: Ethernet > > E…. > > On Wed, 1 May 2019 at 16:18, Faraz Hussain wrote: >> >> Quoting John Hearns : >> >> > Wh
