-- Original message --
From: [EMAIL PROTECTED]
> should be easier on this processor with its lower clock, but it does have 2
> cores. I am
Ugh! How did I type that ... that should read "6 cores"
Sorry,
rbw
--
"Making predictions is hard, especially about the future."
All,
Has anyone seen STREAM numbers for one and/or more cores from SiCortex, say a
SiCortex Catapult System? The chip has two memory controllers, and I have heard
it provides:
"more than 10 Terabytes of bandwidth"
in the largest configuration, but I have not seen any measured memory bandwidth
numbers f
G.M.Sigut wrote:
> On Sat, 2007-12-15 at 12:01 -0800, [EMAIL PROTECTED] wrote:
> ...
>>2. Re: large array to run (Geoff Jacobs)
> ...
>> Message: 2
>> Date: Fri, 14 Dec 2007 21:46:42 -0600
>> From: Geoff Jacobs <[EMAIL PROTECTED]>
>> Subject: Re: [Beowulf] large array to run
>> To: Craig Tierney
A bit of self promotion.
This Wednesday, December 19, 2007, there will be a free webinar
sponsored by IBM, Cisco, and Intel that will discuss
Multi-core in HPC. This will be a panel discussion
so you can *ask* questions!
The webinar will be live: 1:00 PM Eastern | 10:00 AM Pacific | 5:00 PM GMT
Dear Donald (Shillady), James (Cownie), Alan (Scheinine), Toon (Moene), Geoff (Jacobs),
Thank you very much for your feedback.
This array is meant to be used to build
an FDTD radio environment at the master (rank 0 in MPI_COMM_WORLD),
and part of it is planned to be sent to each slave.
That's why this particular
Blast, forgot the subject...
On Sat, 2007-12-15 at 12:01 -0800, [EMAIL PROTECTED] wrote:
...
>2. Re: large array to run (Geoff Jacobs)
...
> Message: 2
> Date: Fri, 14 Dec 2007 21:46:42 -0600
> From: Geoff Jacobs <[EMAIL PROTECTED]>
> Subject: Re: [Beowulf] large array to run
> To: Craig Tierney <[EMAIL PROTECTED]>
> Cc: b
[EMAIL PROTECTED] wrote on 12/13/2007 06:50:28 PM:
> This reminds me of a similar issue I had. What approaches do you
> take for large dense matrix multiplication in MPI, when the matrices
> are too large to fit into cluster memory? If I hack up something to
> cache intermediate results to disk