I guess we have all seen this: https://access.redhat.com/articles/3307751
If not, 'HPC Workloads' (*) such as HPL are 2-5% affected. However, as someone who recently installed a lot of NVMe drives for a fast filesystem, the 8-19% performance hit on random IO to NVMe drives is not pleasing. A rough way to check those numbers on your own drives is sketched at the end of this mail.

(*) Quotes are deliberate. We all know that the best benchmarks are your own applications.

On 6 January 2018 at 12:05, John Hearns <hear...@googlemail.com> wrote:
> Disabling branch prediction - that in itself will have an effect on
> performance.
>
> One thing I read about the hardware is that the table which holds the
> branch predictions is shared between processes running on the same CPU core.
> That is part of the attack process - the malicious process has knowledge
> of what the 'sharing' process will branch to.
>
> I float the following idea - perhaps this reinforces good practice for
> running HPC codes, meaning cpusets and process pinning,
> which we already do for reasons of performance and for better resource
> allocation.
> I expose my ignorance here, and wonder if we will see more containerised
> workloads, which are strictly contained within their own memory space.
> I then answer myself by saying I am talking nonsense, because the kernel
> routines need to be run somewhere, and this exploit is all about being able
> to probe areas of memory which you should not be able to access, by
> speculatively running some instructions and capturing what effect they have.
> And "their own memory space" is within virtual memory.
>
> On 6 January 2018 at 02:26, Christopher Samuel <ch...@csamuel.org> wrote:
>
>> On 06/01/18 12:00, Gerald Henriksen wrote:
>>
>>> For anyone interested this is AMD's response:
>>>
>>> https://www.amd.com/en/corporate/speculative-execution
>>
>> Cool, so variant 1 is likely the one that SuSE has firmware for to
>> disable branch prediction on Epyc.
>>
>> cheers,
>> Chris
>> --
>> Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
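On John's point about cpusets and pinning: the scheduler or cgroups can do it from outside the application, but even inside a code it is only a few lines. A minimal sketch in Python on Linux (the core list is made up - match it to whatever cpuset your batch system hands you):

import os
import multiprocessing as mp

def worker(core):
    # Pin the calling process to a single core; this is the same pinning
    # cpusets / numactl give you from outside the application.
    os.sched_setaffinity(0, {core})
    # ... run the real compute kernel here ...
    return sorted(os.sched_getaffinity(0))

if __name__ == "__main__":
    cores = [0, 1, 2, 3]   # made-up core list; use the cores in your cpuset
    with mp.Pool(len(cores)) as pool:
        print(pool.map(worker, cores))

As John says, this does nothing about the kernel side of the exploit - it is just the pinning we already do for performance and resource allocation.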
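And rather than taking the Red Hat table at face value, the random IO hit is easy enough to sanity-check on your own drives before and after patching. This is an untested sketch, assuming Linux and Python 3.7+ (for os.preadv); the path is made up, and the test file needs to exist already and be much larger than the block size. Normally I would reach for fio, but this keeps it self-contained:

import mmap
import os
import random
import time

PATH = "/mnt/nvme/testfile"   # made-up path: point at a large file on the NVMe filesystem
BLOCK = 4096                  # read size; keep it a multiple of the device's logical block size
COUNT = 100000                # number of random reads

# O_DIRECT bypasses the page cache so we measure the drive, not RAM.
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
size = os.fstat(fd).st_size
buf = mmap.mmap(-1, BLOCK)    # anonymous mmap gives the page-aligned buffer O_DIRECT needs

offsets = [random.randrange(size // BLOCK) * BLOCK for _ in range(COUNT)]

start = time.perf_counter()
for off in offsets:
    os.preadv(fd, [buf], off)
elapsed = time.perf_counter() - start
os.close(fd)

print("%d random %d-byte reads: %.0f IOPS" % (COUNT, BLOCK, COUNT / elapsed))

A single-threaded, queue-depth-1 loop will not reproduce the article's exact percentages, but the relative before/after difference on the same node is the number that matters.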