"Robert G. Brown" <[EMAIL PROTECTED]> writes:

> On Fri, 13 Apr 2007, [EMAIL PROTECTED] wrote:
>
>> Perhaps I'm not thinking as broadly as Rich. But I see a web-based
>> solution as a better idea than asking (or forcing) ISVs to put some
>> new code in their applications to run on a cluster (BTW - in my
>> experience some customers use the GUIs that come with ISV codes and
>> some don't).
>
> I agree, in detail -- I just think that what you are describing is
> something that has been implemented many times over in grid-land,
> specifically (to my knowledge) in ATLAS (the DOE/HEP project, not the
> linear algebra library).

ATLAS is a *user* of various grid infrastructures; they don't build
their own grid. In Europe they run on top of LCG and Nordugrid, on
your side of the pond they run on top of OSG and others. (Well,
basically. It's complicated.) The various grid infrastructures are
federated into WLCG, the Worldwide LHC Computing Grid, which serves
the different LHC experiments (not just ATLAS).

> The thing that makes these clusters into a grid is that one agency pays
> for them all

Nope. There is a plethora of funding agencies involved. The thing that
makes the clusters a grid is that the various cluster owners agree,
for various reasons, to make resources available to ATLAS (and
others), and that these insanely distributed resources need to be
collected into a single, somewhat manageable entity.

So we have a) different software stacks to implement grid
infrastructures, and b) the ATLAS software stack that needs to be
installed on the clusters to run ATLAS jobs. Those are different
things.

Some of the grid flavours are horrible to install, some are easy. Some
make far-reaching assumptions about your underlying OS, others are more
or less distribution agnostic.

The ATLAS software has historically been a beast to install on
anything but Scientific Linux CERN edition, but things have actually
improved hugely over the last year or two. The PACMAN installation
usually works even on other distributions, and there are third-party
RPMs available.

Currently, the focus is on building a system that can handle the
multi-petabyte-per-year data streams from the LHC when it goes online;
providing nice user interfaces for individual researchers who want to
do science with the data comes later.

-- 
Leif Nixon                       -            Systems expert
------------------------------------------------------------
National Supercomputer Centre    -      Linkoping University
------------------------------------------------------------
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf