Lai Dragonfly wrote:
Hi,
There is a rank-367 system on the Top500 list which achieves 79.97% efficiency.
I'm using InfiniBand to connect 2 nodes. Each node has 8 GB of memory and
dual AMD Opteron 250 CPUs.
I just tried the Intel compiler + GotoBLAS, but it only got 60% efficiency,
which is a long way from my target.
Maybe I need to try the PathScale or PGI compiler.
Any suggestions?
Thanks a lot.
It isn't the compiler. Use gcc and Goto and you will get the same
result. The biggest determinant of HPL performance is the
configuration of HPL.dat. Also, run HPL on a single node to make
sure you are getting around 90% efficiency, then run across the IB
and see how things change.
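To check that single-node number: efficiency is just measured Rmax over theoretical Rpeak, and for a dual Opteron 250 node Rpeak is 2 cores x 2.4 GHz x 2 double-precision flops/cycle = 9.6 GFlop/s (the clock speed and flops-per-cycle figures are my assumptions about the 250). A back-of-envelope sketch:

```python
# Rough HPL efficiency sanity check (a sketch; assumes Opteron 250 at
# 2.4 GHz doing 2 double-precision flops/cycle, 2 cores per node).
CORES_PER_NODE = 2
CLOCK_GHZ = 2.4
FLOPS_PER_CYCLE = 2

def rpeak_gflops(nodes):
    """Theoretical peak in GFlop/s for a cluster of identical nodes."""
    return nodes * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE

def efficiency(measured_gflops, nodes):
    """Fraction of theoretical peak actually achieved by HPL."""
    return measured_gflops / rpeak_gflops(nodes)

# Two nodes give Rpeak = 19.2 GFlop/s, so a measured 11.52 GFlop/s
# would be the 60% efficiency reported above.
print(rpeak_gflops(2))                 # 19.2
print(round(efficiency(11.52, 2), 2))  # 0.6
```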
Craig
PN
2006/9/22, Craig Tierney <[EMAIL PROTECTED]>:
Lai Dragonfly wrote:
> Dear all,
>
> I'm doing the HPL benchmark on several nodes of an AMD Opteron
> platform, but may expand to hundreds of nodes later.
> I hope to get around 80% efficiency.
What is your interconnect? 80% may be a bit high. You should look
at the Top500 list to see what efficiencies other systems similar
to yours achieve.
> Does anyone have good suggestions for a compiler + math library
> combination on AMD platforms (compiler flags appreciated)?
> Thanks a lot.
>
Compiler and flags don't matter. HPL spends about 99.999% of its
time in BLAS. Use GotoBLAS, it is very fast.
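For what it's worth, pointing HPL at GotoBLAS is just a matter of the linear-algebra variables in your Make.<arch> file; the install path below is a placeholder for wherever you built the library:

```
# Linear-algebra section of an HPL Make.<arch> file.
# LAdir is a placeholder path; adjust to your GotoBLAS build directory.
LAdir        = $(HOME)/GotoBLAS
LAinc        =
LAlib        = $(LAdir)/libgoto.a -lpthread
```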
What matters most are the settings in the HPL.dat file. Use Google to
find some good settings, or someone else on the list may be able to
provide one. I don't have access to one right now.
Craig
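For illustration, a starting point for the 2-node, 4-process Opteron system described in this thread might look like the file below. All values are assumptions to tune, not known-good settings: N = 40000 targets roughly 80% of the 16 GB aggregate memory (N ~ sqrt(0.8 * 16e9 / 8)), NB = 232 is a commonly cited block size for GotoBLAS on Opteron, and P x Q = 2 x 2 maps one MPI process per core.

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
40000        Ns
1            # of NBs
232          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
2            Qs
16.0         threshold
1            # of panel fact
1            PFACTs (0=left, 1=Crout, 2=Right)
1            # of recursive stopping criterium
4            NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
1            RFACTs (0=left, 1=Crout, 2=Right)
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
1            DEPTHs (>=0)
2            SWAP (0=bin-exch,1=long,2=mix)
64           swapping threshold
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
1            Equilibration (0=no,1=yes)
8            memory alignment in double (> 0)
```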
> PN
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf