On 5/7/19 1:59 PM, Prentice Bisbal via Beowulf wrote:
> I think it is interesting that they are using AMD for
> both the CPUs and GPUs
I agree. That means a LOT of codes will have to be ported from CUDA to
whatever AMD uses. I know AMD announced their HIP interface to convert
CUDA code into something that will run on AMD processors, but I don't
know how well that works in practice.
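For context, much of a HIP port is mechanical renaming: AMD's hipify tools rewrite CUDA runtime calls to their HIP equivalents (e.g. cudaMalloc becomes hipMalloc). A toy Python sketch of that kind of source-to-source substitution — illustrative only; the real hipify-perl covers the whole runtime API and many edge cases:

```python
# A small subset of the mechanical CUDA -> HIP renames hipify applies.
# (Illustrative only; the real tool handles far more of the API.)
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Naively translate CUDA runtime names to HIP equivalents."""
    # Replace longer names first so cudaMemcpyHostToDevice is not
    # clobbered by the shorter cudaMemcpy substitution.
    for cuda_name, hip_name in sorted(CUDA_TO_HIP.items(),
                                      key=lambda kv: -len(kv[0])):
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = """#include <cuda_runtime.h>
float *d_x;
cudaMalloc(&d_x, n * sizeof(float));
cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
cudaDeviceSynchronize();
cudaFree(d_x);
"""

print(hipify(cuda_snippet))
```

The harder porting problems are the parts a rename cannot fix: inline PTX, warp-size assumptions, and library dependencies like cuBLAS or cuDNN.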
It's very interesting to see the design choices! At least in the materials
characterization space most of the code is based on CUDA, so this system
will be useless for that community except for the CPUs. Almost none of the
software development is done on AMD hardware, so there are no
processor-specific optimisations either.
Hi Prentice,
that looks interesting, and I hope it means I will finally get the neutron
structure which was measured there last year! :-)
On a more serious note: I think it is interesting that they are using AMD for
both the CPUs and GPUs. What they want to build sounds very fast, at least.
Hello, it looks like AWS wants to catch up with Azure in the HPC context. I
have found some benchmarks describing good scaling on OpenFOAM etc., but no
raw performance metrics. Has anyone tried it yet? What's the MPI latency
like?
The bandwidth should be close to 100 Gbps.
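A back-of-envelope model shows why the latency question matters even on a ~100 Gbps link: one-way transfer time is roughly latency + size/bandwidth, so small MPI messages are latency-bound no matter how fat the pipe is. The latency figures below are illustrative assumptions, not measurements of any cloud fabric:

```python
# Back-of-envelope transfer-time model: t = latency + size / bandwidth.
# Latency values here are illustrative guesses, not measured numbers.
def transfer_time_us(msg_bytes, latency_us, bandwidth_gbps=100.0):
    """Estimated one-way time in microseconds for one message."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # bytes per microsecond
    return latency_us + msg_bytes / bytes_per_us

for size in (8, 4096, 1 << 20):     # 8 B, 4 KiB, 1 MiB
    for lat in (1.0, 15.0):         # e.g. HPC fabric vs. TCP-ish latency
        t = transfer_time_us(size, lat)
        print(f"{size:>8} B, {lat:>4.0f} us latency -> {t:8.1f} us")
```

For an 8-byte message the wire time is well under a microsecond, so the assumed latency dominates entirely; only around the 1 MiB mark does the 100 Gbps bandwidth start to matter.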
ORNL's Frontier System has been announced:
https://www.hpcwire.com/2019/05/07/cray-amd-exascale-frontier-at-oak-ridge/
--
Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov
___
Beowulf mailing list, Beowulf@