If you look at the TOP500 list, 50% of the machines use InfiniBand and 37%
Gigabit Ethernet. InfiniBand has a latency of 3–5 microseconds, which is
much lower than Ethernet's 20–80 microseconds. But Ethernet can be a
cost-effective solution with reasonably good performance, provided that you
configure the machines appropriately to minimize the software latency, for
instance by using a RoCE (RDMA over Converged Ethernet) driver.
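
A simple way to check the latency you actually get is a point-to-point
benchmark such as osu_latency from the OSU Micro-Benchmarks suite, run
between two nodes (Open MPI syntax shown; the host names are placeholders):

  # Launch one process on each of two nodes and measure
  # small-message MPI latency between them.
  mpirun -np 2 -host node01,node02 ./osu_latency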

Jose


> El 31 mar 2025, a las 10:40, Klaij, Christiaan via petsc-users 
> <petsc-users@mcs.anl.gov> escribió:
> 
> The FAQ about the kind of parallel computers or clusters needed
> to use PETSc states:
> 
> "any ethernet (even 10 GigE) simply cannot provide the needed performance."
> 
> Does this statement still hold now that 100 GigE is common?
> 
> The broader question is that we are buying a new cluster to run
> our in-house CFD solver ReFRESCO. Typical production runs involve
> meshes up to a few hundred million cells with half a dozen to a
> dozen equations. Most of the time is spent in KSPSolve. What kind
> of interconnect should we consider?
> 
> Chris
> dr. ir. Christiaan Klaij  |  Senior Researcher  |  Research & Development
> T +31 317 49 33 44  |  c.kl...@marin.nl  |  http://www.marin.nl

