On 09/16/2012 02:52 PM, Jeffrey Rossiter wrote:
> The intention is for the system to be used for scientific computation.
That doesn't narrow it down much.
> I am trying to decide on a Linux distribution to use.
I suggest doing it yourself based on whatever popular Linux distro you
have experience with.
List member Jack Dongarra makes the news on Slashdot, with a comparison of
iPad 2 and Cray-2 speeds.
http://apple.slashdot.org/story/12/09/17/203232/apple-ipad-2-as-fast-as-the-cray-2-supercomputer
Referencing a paper with slides at
http://web.eecs.utk.edu/~luszczek/pubs/hpec2012_elb.pdf
On 09/16/2012 05:52 PM, Jeffrey Rossiter wrote:
> Hello everyone!
>
> I am getting started on a cluster building project at my university. We
> just replaced all of our lab machines so I am going to be using the old
> machines to rebuild our cluster. The intention is for the system to be
> used for
On Mon, 17 Sep 2012, Gus Correa wrote:
> Tangentially related to the recent discussion
> of server/data center cooling with Helium [and Helium depletion]:
Thanks. Actually, very interesting. Lower drag (lower viscosity) makes
sense. Helium should also be a substantially better conductor of heat.
What distro are you most familiar with? (and which supports your future
applications)
Does it have a convenient way to do a mass update?
Jim Lux
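Jim's mass-update question is the kind of thing a short script answers. Here is a minimal sketch, assuming passwordless ssh to every node, an RPM-based distro, and made-up host names (`node01`...) -- substitute your own node list and your distro's update command:

```python
# Hypothetical mass-update sketch: run the same package-update command on
# every compute node over ssh. Host names and the update command are
# assumptions -- swap in apt-get, zypper, etc. for your distro.
import subprocess

NODES = ["node01", "node02", "node03"]   # assumed host names
UPDATE_CMD = "yum -y update"             # assumed RPM-based distro

def mass_update(nodes, cmd, dry_run=True):
    """Build the ssh command lines; execute them only when dry_run is False."""
    lines = [f"ssh {n} {cmd}" for n in nodes]
    if not dry_run:
        for line in lines:
            subprocess.run(line.split(), check=True)
    return lines

if __name__ == "__main__":
    for line in mass_update(NODES, UPDATE_CMD):
        print(line)
```

The dry-run default just prints the commands, so you can sanity-check the node list before letting it loose on the cluster.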
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On
Behalf Of Jeffrey Rossiter
Sent: Sunday, September 16, 2012 2:52 PM
To: be
Tangentially related to the recent discussion
of server/data center cooling with Helium [and Helium depletion]:
https://www.computerworld.com/s/article/9231220/Helium_filled_WD_drives_promise_huge_boost_in_capacity?taxonomyId=19&pageNumber=1
http://arstechnica.com/information-technology/2012/09/h
I'll be there as well. Some northeast HPC people may not
know that it is free to walk around the show (which is a
rather cozy venue). You have to pay to see the talks.
Oh and of course if you are interested in HFT ...
--
Doug
> Hi folks:
>
>I realized I had not mentioned this here, but I'll
You may want to look at Warewulf. Jeff Layton recently
wrote a great four-part series in HPC Admin about it.
Here is the first article:
http://hpc.admin-magazine.com/Articles/Warewulf-Cluster-Manager-Master-and-Compute-Nodes
You can also take a look at this link on Cluster Monkey,
which has links t
Hi Jeff:
In our experience, the answer should come from your users. They likely
have preferred software they do not even realize ties them to a platform.
Many scientists start out with packages that run on Windows; for them,
building a Linux cluster would NOT help much.
Even for peopl
On 9/16/12 8:39 PM, "Ellis H. Wilson III" wrote:
>
>On 09/16/2012 07:16 PM, Joe Landman wrote:
>>> > In addition: could we, a company of 15 people, pay for the continued
>>> > development and support of OpenIndiana?
>> Yes, if this is what you wished to use. Or you could do it yourself.
>
>I th
On Sun, Sep 16, 2012 at 02:44:15PM -0700, Greg Lindahl wrote:
> On Sun, Sep 16, 2012 at 10:43:57PM +0200, Andrew Holway wrote:
>
> > case in point: We have based a reasonable chunk of our backend
> > infrastructure on openindiana. http://lwn.net/Articles/514046/. What
> > do we do now?
>
> Choose
Hi Jeffrey,
Maybe you want to give some more details to this list. Most importantly:
WHICH software needs to work well on it, how many machines are we
talking about, and how important are I/O concerns?
(Some software needs X bytes per second of I/O for every X gflops.)
In general the crunching
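That rule of thumb (roughly X bytes per second of I/O for every X gflops) can be turned into a quick sizing calculation. A back-of-the-envelope sketch; the node count, per-node gflops, and bytes-per-flop ratio are all illustrative assumptions, not measurements:

```python
# Back-of-the-envelope I/O sizing, following the rule of thumb above:
# sustained I/O bandwidth scales linearly with compute rate. The ratio
# is workload-dependent, so it is a parameter here, not a constant.

def required_io_bandwidth(gflops, bytes_per_flop=1.0):
    """I/O bandwidth in GB/s implied by a compute rate in gflops."""
    return gflops * bytes_per_flop

if __name__ == "__main__":
    # e.g. 16 old lab machines at an assumed 10 gflops each
    nodes, gflops_per_node = 16, 10.0
    total = nodes * gflops_per_node
    print(f"{total:.0f} gflops -> "
          f"{required_io_bandwidth(total):.0f} GB/s sustained I/O")
```

Even if the real ratio for your codes is a tenth of this, it makes clear why the I/O question has to be answered before picking hardware.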
http://www.theregister.co.uk/2012/09/13/codethink_basertock_slab_arm_system/
Codethink jumps into the ARM server fray with Baserock Slab
A Marvell-ous cluster in a box
By Timothy Prickett Morgan
Posted in Servers, 13th September 2012 23:51 GMT
The crafty engineers
Let me email you a latency test that uses all cores at the same time.
All those claims based on using one core are not so relevant for HPC;
if one core were enough, we wouldn't need multicore CPUs.
In a perfect world you'd be right; regrettably, that's not how software
usually works. It usually hits 100 other problems
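A minimal sketch of the kind of test described above: time a memory-latency-bound pointer chase once in a single process, then on every core at once. The buffer size and step count are illustrative, and Python's overhead means this shows the contention effect only qualitatively:

```python
# Sketch of an all-cores latency test: chase pointers through a shuffled
# permutation (latency-bound, cache-unfriendly for large n), solo and then
# with one worker per core. Sizes are illustrative, not a real benchmark.
import multiprocessing as mp
import random
import time

def pointer_chase(n=1 << 16, steps=200_000):
    """Follow a random permutation for `steps` hops; return elapsed seconds."""
    perm = list(range(n))
    random.shuffle(perm)
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = perm[i]
    return time.perf_counter() - t0

def run_parallel(workers):
    """Run the chase on `workers` processes at once; return all timings."""
    with mp.Pool(workers) as pool:
        return pool.starmap(pointer_chase, [()] * workers)

if __name__ == "__main__":
    solo = pointer_chase()
    busy = max(run_parallel(mp.cpu_count()))
    print(f"1 core: {solo:.3f}s   all cores (slowest): {buso if False else busy:.3f}s")
```

If memory latency under full load matched the single-core number, the two timings would be close; on real machines the loaded figure is noticeably worse, which is the point being made.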