We went straight to ESSL. It also provides FFTs and selected LAPACK routines, some with GPU support (https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_sm/1/872/ENUS5765-L61/index.html&lang=en&request_locale=en).
I also try to push people to use MKL on Intel, as it has multi-code-path execution (we have a mix of architectures in our default batch partition).

On Tue, Apr 16, 2019 at 1:59 PM Prentice Bisbal <pbis...@pppl.gov> wrote:

> Thanks for the info. Did you try building/using any of the open-source
> math libraries for Power9, like OpenBLAS, or did you just use ESSL for
> everything?
>
> Prentice
>
> On 4/16/19 1:12 PM, Fulcomer, Samuel wrote:
>> We had an AC921 and an AC922 for a while as loaners.
>>
>> We had no problems with Slurm. Getting POWERAI running correctly (bugs
>> since fixed in newer releases) and apps properly built and linked to
>> ESSL was the long march.
>>
>> regards,
>> s
>>
>> On Tue, Apr 16, 2019 at 12:59 PM Prentice Bisbal <pbis...@pppl.gov> wrote:
>>> Sergi,
>>>
>>> I'm working with Bill on this project. Is all the hardware
>>> identification/mapping and task affinity working as expected/desired
>>> with the Power9? I assume your answer implies "yes", but I just want
>>> to make sure.
>>>
>>> Prentice
>>>
>>> On 4/16/19 10:37 AM, Sergi More wrote:
>>>> Hi,
>>>>
>>>> We have a Power9 cluster (AC922) working without problems. Now with
>>>> 18.08, but we ran 17.11 as well. No extra steps/problems found
>>>> during installation because of Power9.
>>>>
>>>> Thank you,
>>>> Sergi.
>>>>
>>>> On 16/04/2019 16:05, Bill Wichser wrote:
>>>>> Does anyone on this list run Slurm on the Sierra-like machines from
>>>>> IBM? I believe they are the AC922 nodes. We are looking to purchase
>>>>> a small cluster of these nodes but have concerns about the
>>>>> scheduler.
>>>>>
>>>>> Just looking for a nod that, yes, it works fine, as well as any
>>>>> issues seen during deployment. Danny says he has heard of no
>>>>> problems, but that doesn't mean the folks in the trenches haven't
>>>>> seen issues!
>>>>>
>>>>> Thanks,
>>>>> Bill