Re: [Beowulf] HPCG benchmark, again

2022-03-19 Thread Richard Walsh
domain specific application knowledge for general > performance measurement. > >> On 3/19/22 3:58 AM, Richard Walsh wrote: >> J, >> Trying to add a bit to the preceding useful answers … >> In my experience running these codes on very large systems for acceptances, >

Re: [Beowulf] HPCG benchmark, again

2022-03-18 Thread Richard Walsh
J, Trying to add a bit to the preceding useful answers … In my experience running these codes on very large systems for acceptances, to get optimal (HPCG or HPL) performance on GPUs (MI200 or A100) you need to obtain the optimized versions from the vendors which include scripts with ENV vari
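(A minimal sketch of the per-rank GPU pinning those vendor wrapper scripts typically do, assuming Open MPI, which exports OMPI_COMM_WORLD_LOCAL_RANK to each rank; the ./xhpl name is a placeholder, and AMD stacks use ROCR_VISIBLE_DEVICES rather than CUDA_VISIBLE_DEVICES:

    #!/bin/bash
    # run_one_gpu.sh -- pin each local MPI rank to its own GPU, then exec the benchmark
    export CUDA_VISIBLE_DEVICES=$OMPI_COMM_WORLD_LOCAL_RANK
    export OMP_NUM_THREADS=8    # CPU threads per rank; tune to cores-per-GPU
    exec ./xhpl "$@"

launched as mpirun -np <gpus> ./run_one_gpu.sh; the vendor versions layer tuned problem geometry, clocks, and affinity settings on top of this.)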

[Beowulf] Job opportunity in HPC at HPE ...

2021-11-17 Thread Richard Walsh
All, I hope I am not violating protocol here, but I thought there might be interest in this reference to a job opening in HPE's performance engineering group. >===< HPC Application Performance Engineer-Specialist available at HPE. US Citizenship is required

Re: [Beowulf] Best case performance of HPL on EPYC 7742 processor ...

2020-08-19 Thread Richard Walsh
com> wrote: > Hi Richard, > > On Fri, Aug 14, 2020 at 2:30 PM Richard Walsh wrote: > > What have people achieved on this SKU on a single-node using the stock > > HPL 2.3 source... ?? > > I got findings similar to yours, about 75-80% of peak, albeit using a >

[Beowulf] Best case performance of HPL on EPYC 7742 processor ...

2020-08-14 Thread Richard Walsh
instructions from AMD for using BLIS and GCC for the build does not get me there. Thanks, Richard Walsh ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit https://beowulf.org

Re: [Beowulf] ***UNCHECKED*** HPCG

2020-08-06 Thread Richard Walsh
Prentice wrote: > When I compare my HPL results to my HPCG results, I'm getting HPCG > results that are 0.3 - 0.5% of HPL. On the HPCG Top500 list, most > systems are getting 2-3% of HPL, so I'm off by an order of magnitude. Of course HPCG is a bandwidth-limited application so it will never come
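(A back-of-the-envelope roofline estimate, with illustrative numbers not taken from the thread, shows why low single digits is the expected ceiling: HPCG's sparse kernels run at an arithmetic intensity of roughly 0.25 flop/byte, so

    attainable flop rate ~= memory bandwidth x arithmetic intensity
                         ~= 200 GB/s x 0.25 flop/byte = 50 GF/s per node

and 50 GF/s against a 3 TF/s peak is about 1.7%, squarely in the 2-3%-of-HPL band on the list; a 0.3-0.5% result is an order of magnitude below even the bandwidth bound.)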

Re: [Beowulf] [External] Re: LINPACK for GPUs

2019-08-15 Thread Richard Walsh
> Matt Wallis > ma...@madmonks.org > > > > >> On 15 Aug 2019, at 07:42, Richard Walsh wrote: >> >> >> You have to talk to the right people at NVIDIA ... benchmarking group. >> >> The version I am using from 2018 is: >> >> xhpl_cuda9.2.88

Re: [Beowulf] [External] Re: LINPACK for GPUs

2019-08-14 Thread Richard Walsh
You have to talk to the right people at NVIDIA ... benchmarking group. The version I am using from 2018 is: xhpl_cuda9.2.88_mkl_2018_ompi_3.1.0_gcc485_sm35_sm60_sm70_5_18_18 but there must be something more current than that now. This one works out through the V100 as the name implies. rbw

Re: [Beowulf] Frontier Announcement

2019-05-09 Thread Richard Walsh
mail threatening to kill me and my family >>> if I didn't stop writing about OpenACC. Given your pro-OpenMP, anti-OpenACC >>> stance, using the same tone as the threatening email, I wondered if that >>> email came from you. >>> >>> >>> >

Re: [Beowulf] Frontier Announcement

2019-05-08 Thread Richard Walsh
Nazis. Were you the one that > threatened me and my family because I wrote about OpenACC? > > > >> On Wed, May 8, 2019, 15:48 Richard Walsh wrote: >> >> Jeffry/All, >> >> Yes ... but given the choice of using OpenACC or OpenMP (if you are not >

Re: [Beowulf] Frontier Announcement

2019-05-08 Thread Richard Walsh
t gcc supports both NV and AMD GPUs with OpenACC. That's one > of the lead compilers listed on the Frontier specs. > > Jeff > > >> On Wed, May 8, 2019 at 3:29 PM Richard Walsh wrote: >> >> All, >> >> Cray has deprecated support for Ope

Re: [Beowulf] Frontier Announcement

2019-05-08 Thread Richard Walsh
All, Cray has deprecated support for OpenACC in light of the OpenMP 4.5 and 5.0 standards, and their target and data directives. NVIDIA’s PGI Compiler group will keep OpenACC going for a while, but on AMD devices ... maybe not. That Cray will support only OpenMP on Frontier seems to be a l

Re: [Beowulf] [EXTERNAL] Re: Frontier Announcement

2019-05-08 Thread Richard Walsh
All, I think the comparison with RoadRunner is off. Any application that already has a CUDA version can be largely converted to run on AMD GPUs with a perl script with some minor adjustments. Those without GPU implementations will have to be converted (many are already having this done under E

Re: [Beowulf] [upgrade strategy] Intel CPU design bug & security flaw - kernel fix imposes performance penalty

2018-01-07 Thread Richard Walsh
there would be “only” a system throughput performance degradation. Is it clear that this is measurably worse ... ?? Richard Walsh Thrashing River Computing Sent from my iPhone > On Jan 7, 2018, at 3:29 PM, Christopher Samuel wrote: > >> On 07/01/18 23:22, Jörg Saßmanns

Re: [Beowulf] Intel kills Knights Hill, Xeon Phi line "being revised"

2017-11-19 Thread Richard Walsh
just buy some more as the price drops, but if you did not buy in gen 1 then maybe you are not so disappointed at the change of plans ... and maybe it is time to merge many-core and multi-core anyway. Richard Walsh Thrashing River Computing Sent from my iPhone > On Nov 19, 2017, at 5:20

Re: [Beowulf] Intel Phi musings

2013-02-25 Thread Richard Walsh
e > same performance (both highly optimised finite difference codes). > > > > > -- > Dr Stuart Midgley > sdm...@sdm900.com > > > > > On 15/02/2013, at 4:53 AM, Richard Walsh wrote: > > > > > Hey Stuart, > > > > Thanks much for the d

Re: [Beowulf] Intel Phi musings

2013-02-25 Thread Richard Walsh
ever expend the energy to port all > our codes. Purchasing hundreds of them gives you a lot of impetus to port > your codes quickly :) > > > -- > Dr Stuart Midgley > sdm...@sdm900.com > > > > > On 13/02/2013, at 12:38 AM, Richard Walsh wrote: > > > >

Re: [Beowulf] Intel Phi musings

2013-02-25 Thread Richard Walsh
are on separate cards in separate slots can I assume that I am limited to MPI parallel implementations when using both. Maybe that is more than a few questions ... ;-) ... Regards, Richard Walsh Thrashing River Consulting On Tue, Feb 12, 2013 at 10:46 AM, Dr Stuart Midgley wrote: > It

Re: [Beowulf] Intel Phi musings

2013-02-25 Thread Richard Walsh
ld be interesting. Thanks, Richard Walsh Thrashing River Consulting On Tue, Feb 12, 2013 at 10:02 AM, Dr Stuart Midgley wrote: > I've started a blog to document the process I'm going through to get our > Phi's going. > > http://phi-musings.blogspot.com.au > > I

Fwd: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard . walsh
- Forwarded Message - From: "richard walsh" To: "Craig Tierney" Sent: Thursday, April 8, 2010 5:19:14 PM GMT -05:00 US/Canada Eastern Subject: Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ... On Thursday, April 8, 2010 2:42:49 PM

Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard . walsh
On Thursday, April 8, 2010 2:14:11 PM Greg Lindahl wrote: >> What are the approaches and experiences of people interconnecting >> clusters of more than 128 compute nodes with QDR InfiniBand technology? >> Are people directly connecting to chassis-sized switches? Using multi-tiered >> approac

[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

2010-04-08 Thread richard . walsh
All, What are the approaches and experiences of people interconnecting clusters of more than 128 compute nodes with QDR InfiniBand technology? Are people directly connecting to chassis-sized switches? Using multi-tiered approaches which combine 36-port leaf switches? What are your experience
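(For calibration, the standard two-tier fat-tree arithmetic with 36-port QDR switches, a textbook figure rather than anything from this thread: run each leaf with 18 ports down and 18 up, and a fully non-blocking two-tier fabric tops out at

    36 leaves x 18 nodes/leaf = 648 nodes

so anything from 128 up to 648 nodes fits in two tiers of 36-port switches; the chassis-sized director switches package essentially the same topology internally.)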

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread richard . walsh
On Monday, March 15, 2010 1:27:23 PM GMT Patrick Geoffray wrote: >I meant to respond to this, but got busy. You don't consider the protocol >efficiency, and this is a major issue on PCIe. Yes, I forgot that there is more to the protocol than the 8B/10B encoding, but I am glad to get your
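(To put rough numbers on the protocol cost, standard figures rather than Patrick's: 8B/10B leaves 80% of the signaling rate, and PCIe then spends on the order of 20-24 bytes of TLP/DLLP framing per payload, so for a Gen2 x8 link moving 256-byte payloads

    40 Gb/s raw x 0.8 = 32 Gb/s = 4 GB/s after encoding
    4 GB/s x 256/(256+24) ~= 3.7 GB/s effective

before flow-control credits and read-completion overheads push the practical number lower still.)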

Re: [Beowulf] error while using mpirun

2010-03-12 Thread richard . walsh
Akshar bhosale wrote: >When I do: > >/usr/local/mpich-1.2.6/bin/mpicc -o test test.c, I get test; but when I do >/usr/local/mpich-1.2.6/bin/mpirun -np 4 test, I get > >p0_31341: p4_error: Path to program is invalid while starting >/home/npsf/last with rsh on dragon: -1 >p4_error: latest ms
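(The usual culprit with the old p4 device is the bare program name: "test" is a shell builtin and not a path rsh can resolve on the remote nodes. A sketch of the fix, assuming the binary sits in a directory visible from every node:

    /usr/local/mpich-1.2.6/bin/mpicc -o mytest test.c
    /usr/local/mpich-1.2.6/bin/mpirun -np 4 ./mytest    # or give an absolute path

renaming away from "test" also avoids colliding with /usr/bin/test.)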

Re: [Beowulf] assigning cores to queues with torque

2010-03-08 Thread richard . walsh
Micha Feigin wrote: >The problem: > >I want to allow gpu related jobs to run only on the gpu >equipped nodes (i.e more jobs than GPUs will be queued), >I want to run other jobs on all nodes with either: > > 1. a priority to use the gpu equipped nodes last > 2. or better, use only two out of
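(One common way to express the hard version of this in Torque, sketched with made-up property and queue names: tag nodes with properties in the nodes file and steer each queue via resources_default.neednodes:

    # $TORQUE_HOME/server_priv/nodes
    node01 np=8 gpu
    node02 np=8 gpu
    node03 np=8 nogpu

    # qmgr
    set queue gpuq  resources_default.neednodes = gpu
    set queue batch resources_default.neednodes = nogpu

The softer "use the GPU nodes last" priority is a scheduler-side knob, node allocation policy in Maui/Moab, rather than something pbs_server expresses.)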

[Beowulf] Configuring PBS for a mixed CPU-GPU and QDR-DDR cluster ...

2010-03-05 Thread richard . walsh
All, I am augmenting a DDR switched SGI ICE system with one that is largely network-separate (a few 4x DDR links connect them) and QDR switched. The QDR "half" also includes GPUs (one per socket). Has anyone configured PBS to manage these kinds of natural divisions as a single cluster? Some p

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-28 Thread richard . walsh
All, In case anyone else has trouble keeping the numbers straight between IB (SDR, DDR, QDR, EDR) and PCI-Express (1.0, 2.0, 3.0), here are a couple of tables in Excel I just worked up to help me remember. If anyone finds errors in them please let me know so that I can fix them. Regards,
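(For anyone reading the archive without the attachment, the headline figures such tables cover, taken from the standard specs:

    IB 4x link    signaling    data (8B/10B)    usable
    SDR           10 Gb/s       8 Gb/s          1 GB/s
    DDR           20 Gb/s      16 Gb/s          2 GB/s
    QDR           40 Gb/s      32 Gb/s          4 GB/s

    PCIe per lane: 1.0 = 250 MB/s, 2.0 = 500 MB/s, 3.0 ~= 1 GB/s (128b/130b encoding)

EDR moves to 64b/66b encoding, so the 8B/10B column no longer applies there.)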

Re: [Beowulf] Q: IB message rate & large core counts (per node) ?

2010-02-26 Thread richard . walsh
Larry Stewart wrote: >Designing the communications network for this worst-case pattern has a >number of benefits: > > * it makes the machine less sensitive to the actual communications pattern > * it makes performance less variable run-to-run, when the job controller > chooses different s

Re: [Beowulf] Q: IB message rate & large core counts (per node) ?

2010-02-26 Thread richard . walsh
Mark Hahn wrote: >> Doesn't this assume worst case all-to-all type communication >> patterns. > >I'm assuming random point-to-point communication, actually. A sub-case of all-to-all (possibly all-to-all). So you are assuming random point-to-point is a common pattern in HPC ... mmm ... I

Re: [Beowulf] Q: IB message rate & large core counts (per node) ?

2010-02-25 Thread richard . walsh
Mark Hahn wrote: >Regardless of how tight the seastar per-hop latency is, >IB has 2.5x the per-hop fanout (2 or 3 outgoing 9.6 GB/s links >versus 18 outgoing 4 GB/s links). higher radix means an >advantage that increases with size. Doesn't this assume worst case all-to-all type communication

Re: [Beowulf] Re: GPU Beowulf Clusters

2010-02-01 Thread richard . walsh
Jon Forrest wrote: >On 2/1/2010 7:24 AM, richard.wa...@comcast.net wrote: > >> Coming in on this late, but to reduce this work load there is PGI's version >> 10.0 compiler suite which supports accelerator compiler directives. This >> will reduce the coding effort, but probably suffer from t

Re: [Beowulf] Re: GPU Beowulf Clusters

2010-02-01 Thread richard . walsh
David Mathog wrote: >Jon Forrest wrote: > >> Are there any other issues I'm leaving out? > >Yes, the time and expense of rewriting your code from a CPU model to a >GPU model, and the learning curve for picking up this new skill. (Unless >you are lucky and somebody has already ported t

Re: [Beowulf] HPC/mpi courses

2010-01-16 Thread richard . walsh
e-by-side course I teach on CAF and UPC (for your use only). I also have an MPI intro course, but cannot send that (copyrighted). Finally, I recommend looking at the US National Lab sites (LLNL in particular) which have excellent OpenMP and MPI tutorials. Regards, Richard Walsh

Re: [Beowulf] Performance tuning for Jumbo Frames

2009-12-12 Thread richard . walsh
>On Dec 12, 2009 Håkon Bugge wrote: > >On Dec 12, 2009, at 7:59 , Rahul Nabar wrote: > >> I have seen a considerable performance boost for my codes by using >> Jumbo Frames. But are there any systematic tools or strategies to >> select the optimum MTU size? I have it set as 9000. (Of course,
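(A quick end-to-end sanity check for any candidate MTU is a don't-fragment ping at the matching payload size, 28 bytes below the MTU for the IP and ICMP headers:

    ip link set eth0 mtu 9000            # or: ifconfig eth0 mtu 9000
    ping -M do -s 8972 remote-host       # fails if any hop cannot carry 9000

and then measuring the actual application, or netperf/iperf, at each MTU that survives, rather than trusting a formula.)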

[Beowulf] Dual head or service node related question ...

2009-12-03 Thread richard . walsh
All, In the typical cluster, with a single head node, that node provides login services, batch job submission services, and often supports a shared file space mounted via NFS from the head node to the compute nodes. This approach works reasonably well for not-too-large cluster systems.

Re: [Beowulf] Fortran Array size question

2009-11-03 Thread richard . walsh
>- Original Message - >From: "Prentice Bisbal" >To: "Beowulf Mailing List" >Sent: Tuesday, November 3, 2009 1:24:00 PM GMT -06:00 US/Canada Central >Subject: Re: [Beowulf] Fortran Array size question > >Greg Lindahl wrote: >> On Tue, Nov 03, 2009 at 01:17:02PM -0500, Prentice Bisb

[Beowulf] Re: Wake on LAN supported on both built-in interfaces ... ??

2009-09-05 Thread richard . walsh
>- Original Message - >From: "David Mathog" >Subject: Wake on LAN supported on both built-in interfaces ... ?? > >richard.wa...@comcast.net wrote: > >> I have a head node that I am trying to get WOL set up on. >> >> It is a SuperMicro motherboard (X8DTi-F) with two built >> in inte

Re: [Beowulf] Re: amd 3 and 6 core processors

2009-08-20 Thread richard . walsh
>- Original Message - >From: "David Mathog" >To: beowulf@beowulf.org >Sent: Thursday, August 20, 2009 2:33:38 PM GMT -06:00 US/Canada Central >Subject: [Beowulf] Re: amd 3 and 6 core processors > >Jonathan Aquilina wrote: > >> a friend of mine told me that the amd tri cores we

Re: [Beowulf] METIS Partitioning within program

2009-08-15 Thread richard . walsh
Amjad, Have you thought of using the system call: "system(const char *string);" Type "man system" for a description. You can pass any string to the shell to be run with this call. For instance: system("date > date.out"); would instruct the shell to place the current date and time i
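(A minimal compilable version of that pattern; in Fortran the same thing is reachable through the compiler's SYSTEM extension or Fortran 2008's EXECUTE_COMMAND_LINE:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* hand a command line to the shell and check its return status */
        int rc = system("date > date.out");
        if (rc != 0)
            fprintf(stderr, "command failed, status %d\n", rc);
        return 0;
    }

The point for the METIS case is that a program can shell out to the standalone partitioning executables mid-run instead of linking the library.)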

[Beowulf] Wake on LAN supported on both built-in interfaces ... ??

2009-08-13 Thread richard . walsh
All, I have a head node that I am trying to get WOL set up on. It is a SuperMicro motherboard (X8DTi-F) with two built in interfaces (eth0, eth1). I am told by SuperMicro support that both interfaces support WOL fully, but when I probe them with ethtool only eth0 indicates that it support
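(For reference, the ethtool invocations in question; the flags are standard, and whether eth1 honors them is exactly the open question:

    ethtool eth1             # look for "Supports Wake-on: g" and "Wake-on: g"
    ethtool -s eth1 wol g    # enable magic-packet wake, if supported

where g means magic packet and d means disabled; if eth1 never lists g under Supports Wake-on, the board wiring or BIOS, rather than the driver, is the usual suspect.)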

[Beowulf] Xeon Nehalem 5500 series (socket 1366) DP motherboard recommendations/experiences ...

2009-06-29 Thread richard . walsh
All, I am putting together a bill of materials for a small cluster based on the Xeon Nehalem 5500 series. What dual-socket motherboards (ATX and ATX-extended) are people happy with? Which ones should I avoid? Thanks much, Richard Walsh Thrashing River Computing

Re: [Beowulf] Cluster Networking

2009-06-26 Thread richard . walsh
- Original Message - From: "Jeff Layton" To: "Greg Lindahl" >> >> CFD codes come in many shapes and sizes, so generalizing about them is >> not a good idea. Really. >> >I definitely agree. That's why I said "usually" :) Sources of unpredictable (non-pipeline-able) latency in

Re: Re[2]: [Beowulf] recommendations for cluster upgrades

2009-05-13 Thread richard . walsh
>- Original Message - >From: "Rahul Nabar" >To: "Jan Heichler" >Cc: "Beowulf Mailing List" , "Mark Hahn" > >Sent: Wednesday, May 13, 2009 10:21:06 AM GMT -06:00 US/Canada Central >Subject: Re: Re[2]: [Beowulf] recommendations for cluster upgrades > >On Wed, May 13, 2009 at 1:55 A

Re: [Beowulf] recommendations for cluster upgrades

2009-05-13 Thread richard . walsh
>- Original Message - >From: "Mark Hahn" >To: "richard walsh" >Cc: "Beowulf Mailing List" >Sent: Wednesday, May 13, 2009 12:35:47 AM GMT -06:00 US/Canada Central >Subject: Re: [Beowulf] recommendations for cluster upgrades >

Re: [Beowulf] recommendations for cluster upgrades

2009-05-12 Thread richard . walsh
>- Original Message - >From: "Rahul Nabar" >To: "Beowulf Mailing List" >Sent: Tuesday, May 12, 2009 2:19:33 PM GMT -06:00 US/Canada Central >Subject: [Beowulf] recommendations for cluster upgrades > >I'm currently shopping around for a cluster-expansion and was shopping >for optio

Re: [Beowulf] evaluating FLOPS capacity of our cluster

2009-05-11 Thread richard . walsh
>- Original Message - >From: "Greg Lindahl" > >On Mon, May 11, 2009 at 02:30:31PM -0400, Mark Hahn wrote: > >> 80 is fairly high, and generally requires a high-bw, low-lat net. >> gigabit, for instance, is normally noticeably lower, often not much >> better than 50%. but yes,

Re: Re[2]: [Beowulf] Nehalem memory configs

2009-04-11 Thread richard . walsh
>- Original Message - >From: "Jan Heichler" >To: "richard walsh" >Cc: beowulf@beowulf.org >Sent: Saturday, April 11, 2009 3:56:10 AM GMT -05:00 US/Canada Eastern >Subject: Re[2]: [Beowulf] Nehalem memory configs > >Hello Richard,

Re: [Beowulf] Nehalem memory configs

2009-04-09 Thread richard . walsh
> Kilian CAVALOTTI wrote: >> On Wednesday 08 April 2009 19:48:30 Joe Landman wrote: >>> As an FYI, Beowulf veteran Jeff Layton wrote up a nice article on >>> memory >>> configuration issues for Nehalem (I had seen some discussion on this >>> previously). >>> >>> Link is here: >>> http://

Re: [Beowulf] Moores Law is dying

2009-04-08 Thread richard . walsh
- Original Message - From: "Ken Schuster" To: beowulf@beowulf.org Sent: Wednesday, April 8, 2009 2:29:17 PM GMT -05:00 US/Canada Eastern Subject: [Beowulf] Moores Law is dying >An IBM researcher says Moore's Law is running out of gas. IBM Fellow Carl >Anderson, who >oversees phy

[Beowulf] What is the status of "remote prefetching" in QPI?

2009-02-20 Thread richard . walsh
All, In early descriptions of QPI, this capability (a remote QPI agent's [on say an FPGA or GPU accelerator] ability to stimulate a QPI processor to prefetch data to its cache, avoiding memory), was listed as a possible feature of QPI.  This has obvious potential benefits in globalizin

Re: [Beowulf] itanium vs. x86-64

2009-02-14 Thread richard . walsh
>- Original Message - >From: "Michael Brown" >To: "Beowulf Mailing List" >Sent: Tuesday, February 10, 2009 3:52:04 PM GMT -05:00 US/Canada Eastern >Subject: Re: [Beowulf] itanium vs. x86-64 > >node. Hopefully, with the Nehalem and Tukwila sharing the same socket we >might be ab

Re: [Beowulf] RE: Capitalization Rates - How often should you replace a cluster? (resent - 1st sending wasn't posted ).

2009-01-20 Thread richard . walsh
- Original Message - From: "Greg Lindahl" >Hey! In year 4 it's about the same to keep the old cluster, or throw >it out and buy a new one. (If I want to spend the same $$, I can buy a >cluster 1/4 the size, but same performance, as the original one.) > >And in year 5, it's a big

Re: [Beowulf] Nehalem and Shanghai code performance for our rzf example

2009-01-20 Thread richard . walsh
- Original Message - From: "Bill Broadley" b...@cse.ucdavis.edu >If gallium arsenide or some other material gave us 10x the clock rate per >watt, but 1/2 the transistors, would it really matter? Seemed like even Intel >is begrudgingly admitting it's the memory bus, and finally the

Re: [Beowulf] Is this the J. Dongarra of Beowulf fame?

2008-12-24 Thread richard . walsh
All, Actually the name is Sicilian ... although Jack is from Chicago as I recall. rbw - Original Message - From: "Robert G. Brown" To: "James P Lux" Cc: "Beowulf Mailing List" Sent: Wednesday, December 24, 2008 4:56:52 PM GMT -05:00 US/Canada Eastern Subject: Re: [Beow

Re: [Beowulf] Not all cores are created equal

2008-12-23 Thread richard . walsh
John/All, It is a useful reminder I guess, but I have to assume that this is something that folks on this list are familiar with, No?  State is more complicated for parallel work and less deterministic.  As these qualities accumulate in the processing of any workload a stochastic outc

Re: [Beowulf] Multicore Is Bad News For Supercomputers

2008-12-05 Thread richard . walsh
All, Yes, the stacked DRAM stuff is interesting. Anyone visit the siXis booth at SC08? They are stacking DRAM and FPGA dies directly onto SiCBs (Silicon Circuit Boards). This allows for dramatically more IOs per chip and finer traces throughout the board which is small, but made ent

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-24 Thread richard . walsh
catamount on its initial XT3 offerings also. Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 ___ Beowulf mailing li

Re: NDAs Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-06-17 Thread richard . walsh
next year? Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 ___ Beowulf mailing list, Beowulf@beowulf.org To chang

Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-06-17 Thread richard . walsh
d IEEE 754 compatibility like single precision did/does... not that this is important to all applications. If you "know" otherwise and can point me to supporting documentation, I would be interested. rbw -- "Making predictions is hard, especially about the future."

Re: [Beowulf] Roadrunner picture

2008-06-12 Thread richard . walsh
? rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620 -- Original message -- From: "Peter St. John" <[EMAI

[Beowulf] Is PowerXCell eDP fully IEEE 754 compliant ... ?? ... the old Cell is/was ...

2008-06-12 Thread richard . walsh
of the answer to this question clarify? Best Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620_

Re: [Beowulf] Capacity / Capability Computing

2008-05-20 Thread richard . walsh
-- Original message -- From: Lawrence Stewart <[EMAIL PROTECTED]> > > On May 20, 2008, at 8:39 AM, andrew holway wrote: > > > Okay, for those unwilling to leap the mental chasm :) > > > > Would anyone care to give me what they believe to be the definition of > > Ca

Re: [Beowulf] How Can Microsoft's HPC Server Succeed?

2008-04-03 Thread richard . walsh
-- Original message -- From: Jon Forrest <[EMAIL PROTECTED]> > That said, I just don't see how Microsoft's HPC server > can succeed. Yes, it seems impossible, but then what is success here? In the most limited sense this is a perimeter defending move by Microsoft. N

Re: [Beowulf] SMPs + One processor machines = Heterogeneous Cluster

2008-04-02 Thread richard . walsh
the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620--- Begin Message --- ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (digest mode o

Re: [Beowulf] Opinions of Hyper-threading?

2008-02-27 Thread richard . walsh
tions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (digest mode or unsubscribe)

RE: [Beowulf] Cheap SDR IB

2008-01-31 Thread richard . walsh
per node price differences? Then we can roughly determine the cost benefit relationship. Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-

Re: [Beowulf] how green is that?!?

2007-12-20 Thread richard . walsh
it is possible for custom machines to regain traction in this space, I think SiCortex's systems are the best bet. Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605

Re: [Beowulf] Stream numbers for SiCortex's MIPS based SOC ...

2007-12-17 Thread richard . walsh
especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620--- Begin Message --- ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (

[Beowulf] Stream numbers for SiCortex's MIPS based SOC ...

2007-12-17 Thread richard . walsh
how it looks compared to Opteron, etc. It is supposed to be a balanced design, but it seems there are few measured results available to validate this. As always your thoughts are appreciated ... Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr

Re: [Beowulf] multi-threading vs. MPI

2007-12-08 Thread richard . walsh
e references. The []s are light-weight symbols that remind the programmer of the overhead implicit in making remote references, but the work of actually making them efficient is left up to the compiler. rbw -- "Making predictions is hard, especially about the future." Niels Bohr --
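(A minimal coarray illustration of those brackets, standard Fortran 2008 syntax rather than anything from the thread:

    program caf_demo
      real :: x[*]                          ! one copy of x per image
      if (this_image() == 1) x[2] = 42.0    ! [] marks the (possibly remote) reference
      sync all
      if (this_image() == 2) print *, x     ! local access needs no brackets
    end program caf_demo

Local accesses stay bracket-free, so the potentially expensive remote references are visible at a glance.)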

Re: [Beowulf] multi-threading vs. MPI

2007-12-07 Thread richard . walsh
converted. Cheers, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620 ___ Beowulf mailing list, Beowulf@beowul

Re: [Beowulf] multi-threading vs. MPI

2007-12-07 Thread richard . walsh
ed to be included in the Fortran 2008 standard. rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 --- Begin Message --- __

Re: [Beowulf] Using Autoparallel compilers or Multi-Threaded libraries with MPI

2007-12-03 Thread richard . walsh
-- Original message -- From: Greg Lindahl <[EMAIL PROTECTED]> > On Mon, Dec 03, 2007 at 09:47:41PM +, [EMAIL PROTECTED] wrote: > > > I think that the number of real-world apps in this class is perhaps > > not large, or there would be more hybrid code. > > Ah, bu

Re: [Beowulf] Using Autoparallel compilers or Multi-Threaded libraries with MPI

2007-12-03 Thread richard . walsh
this? ;-) rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___ Beowulf mailing list, Be

Re: [Beowulf] Harpertown Numbers

2007-11-09 Thread richard . walsh
any changes to that. Do you compare to Barcelona in fact or indirectly? Clocks speeds equal? Bios upgrades? Maybe I will see you at SC07. Thanks, rbw PS How is it working with Maria M. ... ;-) ...?? -- "Making predictions is hard, especially about the future." Niels Bohr -- Ric

Re: [Beowulf] Harpertown Numbers

2007-11-09 Thread richard . walsh
s I recall the Tri-Labs win was for Dual Socket Barcelonas and their Intel blade solution was Clovertown. Can you clarify without violating an embargo ... ;-) ... Cheers, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing Ri

Re: [Beowulf] Tilera to Introduce 64-Core Processor

2007-10-18 Thread richard . walsh
is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620--- Begin Message --- ___ Beowulf mailing list, Beowulf@beowulf.org To change your su

Re: [Beowulf] Tilera to Introduce 64-Core Processor

2007-10-15 Thread richard . walsh
hile back. It should be in the archives. rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___

Re: [Beowulf] Tilera to Introduce 64-Core Processor

2007-10-15 Thread richard . walsh
core" future. rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___ Beowulf mailing list, Beowulf@beowulf

Re: [Beowulf] quad-core SPECfp2006: where are 4 FPresults/cycle ?

2007-10-13 Thread richard . walsh
-- Original message -- From: "Mikhail Kuzminsky" <[EMAIL PROTECTED]> > In message from [EMAIL PROTECTED] (Fri, 12 Oct 2007 20:50:08 > +): > >Mikhail, > >I am not sure I fully understand what you are presenting here, but I > >might say that yes, at the FPU unit le

Re: [Beowulf] quad-core SPECfp2006: where are 4 FPresults/cycle ?

2007-10-12 Thread richard . walsh
even more radically that the concept of identity is fundamentally flawed ... ;-) ... Regards, rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620_

Re: [Beowulf] using extend-reach IB?

2007-10-11 Thread richard . walsh
-- Original message -- From: Mark Hahn <[EMAIL PROTECTED]> > > I can say a few words about optical active cable (OAC) choices. The > > current in production choice is from Intel, their Connects Cable. This is > > are they shipping? I checked their website a couple wee

Re: [Beowulf] using extend-reach IB?

2007-10-10 Thread richard . walsh
Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___ Beowulf mailing list, Beowulf@beowulf.org To change your

Re: [Beowulf] Barcelona vs. Woodcrest, computational chemistry research

2007-09-26 Thread richard . walsh
ng systems and workloads to >>benchmark<<. Hope this is useful ... rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___

Re: [Beowulf] Barcelona vs. Woodcrest, computational chemistry research

2007-09-26 Thread richard . walsh
avor larger caches will like Harpertown. If the Barcelona makes it to 2.6 or 2.8 GHz by the end of the year, then it will compete. It will win even at lower clocks on very bandwidth-intensive apps. Regards, rbw -- "Making predictions is hard, especially about the future."

Re: [Beowulf] Re: overclocking with liquids

2007-09-21 Thread richard . walsh
you 2 maybe 3 times the heat transfer ability without being a mess. Kind of like global warming ... ;-) ... but inside your computer. My friends at 3M must have thought about this ... maybe, I'll ask. rbw -- "Making predictions is hard, especially about the future." Niels Bohr -

Re: [Beowulf] Measuring port to port latency

2007-09-20 Thread richard . walsh
-na/eng/cluster/clustertoolkit/219848.htm Enjoy! rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620 -- Original message --

Re: [Beowulf] Barcelona numbers

2007-09-10 Thread richard . walsh
case first byte latencies for a single-core run referring to cc-NUMA-local memory on the Barcelona to roughly (5-10%) equal those of dual-core socket 1207 and/or socket 940 ... this is what I was thinking initially, but perhaps Bill's result and the fact that there is an L3 cache to consider change

Re: [Beowulf] Barcelona numbers

2007-09-10 Thread richard . walsh
-- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620 ___ Beowulf mailing list, Beowulf@beowulf.org To chang

Re: [Beowulf] Intel Quad-Core or AMD Opteron

2007-08-24 Thread richard . walsh
g the in order execution). rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___ Beowulf

Re: [Beowulf] Intel Quad-Core or AMD Opteron

2007-08-23 Thread richard . walsh
nding Greg's mind reading point). rbw -- "Making predictions is hard, especially about the future." Niels Bohr -- Richard Walsh Thrashing River Consulting-- 5605 Alameda St. Shoreview, MN 55126 Phone #: 612-382-4620___ Beow

Re: [Beowulf] 64-core processor...

2007-08-21 Thread richard . walsh
-- Original message -- From: Mark Hahn <[EMAIL PROTECTED]> > > I googled on "+raw +multicore +mesh", and found, among other things, > a book chapter that went into some detail on the "Raw" prototype: > > http://www.springerlink.com/index/g3u708645278lv32.pdf > > (ac

Re: [Beowulf] RE: programming multicore clusters

2007-06-16 Thread richard . walsh
AF do not yet equal the performance of well-written MPI code. Although it would seem that much MPI code is not that "well-written". As to how parallel programming will evolve in this context I think that my signature quote below is relevant. Regards, rbw --

Re: [Beowulf] Win64 Clusters!!!!!!!!!!!!

2007-04-11 Thread Richard Walsh
Peter St. John wrote: > On 4/11/07, *Geoff Jacobs* <[EMAIL PROTECTED] > > wrote: > > Jon Forrest wrote: > > The examples you give are very good reasons why there is > > a clear need for more than 32-bits of address space for > > data. Again, I agree complet

Re: [Beowulf] Performance characterising a HPC application

2007-04-04 Thread Richard Walsh
Ashley Pittman wrote: > Patrick Geoffray wrote: > >> I would bet that UPC could more efficiently leverage a strided or vector >> communication primitive instead of message aggregation. I don't know if >> GasNet provides one, I know ARMCI does. >> > > GasNet does however get extra credit for

Re: [Beowulf] OT? GPU accelerators for finite difference time domain

2007-04-02 Thread Richard Walsh
Gerry Creager wrote: > > Richard Walsh wrote: >> Mark Hahn wrote: >>>> The next gen of hardware will support native double precision (AFAIK). >>> my point is that there's native and there's native. if the HW supports >>> doubles, but they

Re: [Beowulf] OT? GPU accelerators for finite difference time domain

2007-04-02 Thread Richard Walsh
Mark Hahn wrote: >> The next gen of hardware will support native double precision (AFAIK). > > my point is that there's native and there's native. if the HW supports > doubles, but they take 8x as long, then there's still a huge reason to > make sure the program uses only low-precision. and 8x (W

Re: [Beowulf] OT? GPU accelerators for finite difference time domain

2007-04-02 Thread Richard Walsh
Mark Hahn wrote: >> If you want to use GPUs for computations, I suggest that you take a >> look at CUDA >> (http://www.nvidia.com/cuda). The SDK is available for free and it is >> using a C like syntax (so you don't need to write shader and be >> familiar with OpenGL or DX9 ). > there's ATI/AMD's

Re: [Beowulf] Performance characterising a HPC application

2007-03-29 Thread Richard Walsh
Hey Patrick, Patrick Geoffray wrote: > Message aggregation would be much more beneficial in the context of > UPC, where the compiler will likely generates many small grain > communications to the same remote process. However, as Greg pointed > out, MPI applications often already aggregate messages

[Beowulf] Overcoming processor/accelerator multiplicity and heterogeniety with virtual machines ... ??

2007-03-28 Thread Richard Walsh
All, The niceties of commodity, ILP and clock-driven performance improvements have given way to the complexity of multi-core (with two flavors, hetero and homo) and connected accelerators/processors (GPUs, FPGAs, HPC Specific Accelerators). And while the hardware mentioned is various, a large p
