XCPU is one successor:

     http://xcpu.org
or
     http://xcpu.sourceforge.net/

and on portability, Ionkov & Mirtchovski say: 

      Anything with a socket :)
      http://mirtchovski.com/p9/xcpu-talk.pdf

XCPU presentation from a Plan 9 workshop:

     http://lsub.org/iwp9/cready/xcpu-madrid.pdf

Here is the abstract of the paper that introduced it:

XCPU: a new, 9p-based, process management system for clusters and grids
Minnich, R. and Mirtchovski, A.
Los Alamos National Laboratory, NM

This paper appears in: 2006 IEEE International Conference on Cluster Computing
Publication Date: 25-28 Sept. 2006
On page(s): 1-10
Location: Barcelona, Spain
ISSN: 1552-5244
ISBN: 1-4244-0327-8
INSPEC Accession Number: 9464866
Digital Object Identifier: 10.1109/CLUSTR.2006.311843
Current Version Published: 2007-02-20 

Abstract
Xcpu is a new process management system that is equally at home on clusters and 
grids. Xcpu provides a process execution service visible to client nodes as a 
9p server. It can be presented to users as a file system if that functionality 
is desired. The Xcpu service builds on our earlier work with the Bproc system. 
Xcpu differs from traditional remote execution services in several key ways, 
one of the most important being its use of a push rather than a pull model, in 
which the binaries are pushed to the nodes by the job starter, rather than 
pulled from a remote file system such as NFS. Bproc used a proprietary 
protocol; a process migration model; and a set of kernel modifications to 
achieve its goals. In contrast, Xcpu uses a well-understood protocol, namely 
9p; uses a non-migration model for moving the process to the remote node; and 
uses totally standard kernels on various operating systems such as Plan 9 and 
Linux to start, and MacOS and others in development. In this paper, we 
describe our clustering model; how Bproc implements it and how Xcpu 
implements a similar, but not identical model. We describe in some detail the 
structure of the various Xcpu components. Finally, we close with a discussion 
of Xcpu performance, as measured on several clusters at LANL, including the 
1024-node Pink cluster, and the 256-node Blue Steel InfiniBand cluster.

Excerpted from:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=4100349&isnumber=4100333
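
To make the push model concrete, here is a rough Python sketch of what a job 
starter might do against an XCPU-style per-node filesystem exported over 9p. 
The mount path and the file names (exec, argv, ctl) are illustrative 
assumptions drawn from the general description above, not the project's exact 
interface:

#!/usr/bin/env python
# Illustrative sketch only: the mount layout and the file names
# (exec, argv, ctl) are assumptions, not XCPU's exact interface.
import shutil
import sys
from pathlib import Path

def push_and_run(node_mount, binary, args):
    """Push a binary into one node's session directory and start it.

    node_mount is assumed to be a 9p mount of that node's xcpu service,
    e.g. /mnt/xcpu/node042/1 (hypothetical path).
    """
    session = Path(node_mount)
    # Push the binary to the node instead of pulling it over NFS.
    shutil.copy(binary, session / "exec")
    # Hand over the argument vector, one entry per line (assumed encoding),
    # with the program name as argv[0].
    (session / "argv").write_text("\n".join(args) + "\n")
    # Ask the node to start the pushed binary (assumed control command).
    (session / "ctl").write_text("exec\n")

if __name__ == "__main__":
    # Usage: push_run.py /mnt/xcpu/nodeNN/1 ./a.out arg1 arg2 ...
    push_and_run(sys.argv[1], sys.argv[2], sys.argv[2:])

The point is that everything goes through ordinary file operations on the 
mounted 9p service, which is why anything with a socket and a 9p client can 
act as a front end.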

====================================

      Jul 31, 2009 05:19:36 AM, Marian Marinov wrote:

      Hello list,
      do you know if this project is still alive ? Or replaced/renamed ?

      http://bproc.sourceforge.net/
