Re: [Beowulf] hardware RAID versus mdadm versus LVM-striping

2010-03-12 Thread Geoff Jacobs
Tony Travis wrote: > Rahul Nabar wrote: >> If I have an option between doing hardware RAID versus having software >> RAID via mdadm, is there a clear winner in terms of performance? Or is >> the answer only resolvable by actual testing? I have a fairly fast >> machine (Nehalem 2.26 GHz, 8 cores) and 4 …
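
For readers weighing the two approaches, a minimal sketch of setting up each (the device names /dev/sd[b-e], volume names, and stripe size are illustrative assumptions, not details from the thread):

    # Software RAID-0 across four disks with mdadm
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # The same striping expressed through LVM: 4 stripes, 256 KB stripe size
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    lvcreate -i 4 -I 256 -l 100%FREE -n lv0 vg0

Either result can then be formatted and benchmarked (e.g. with bonnie++) to answer the "only resolvable by actual testing" question for a given workload.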

Re: [Beowulf] error while using mpirun

2010-03-12 Thread richard . walsh
Akshar Bhosale wrote: > When I do /usr/local/mpich-1.2.6/bin/mpicc -o test test.c, I get test; but when I do > /usr/local/mpich-1.2.6/bin/mpirun -np 4 test, I get: > > p0_31341: p4_error: Path to program is invalid while starting > /home/npsf/last with rsh on dragon: -1 > p4_error: latest ms…
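
A common cause of this p4_error (a general note, not from the truncated reply): the MPICH-1 p4 device starts the binary on each process via rsh, so the program name must resolve to a valid path everywhere, and a bare "test" can also collide with the shell built-in of the same name. A sketch of the usual workaround, assuming a hypothetical source file hello.c:

    # compile, then launch with an explicit path so p4/rsh can locate the binary
    /usr/local/mpich-1.2.6/bin/mpicc -o hello hello.c
    /usr/local/mpich-1.2.6/bin/mpirun -np 4 ./hello   # or an absolute path on a shared filesystem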

Re: [Beowulf] Cluster of Linux and Windows

2010-03-12 Thread Geoff Jacobs
Leonardo Machado Moreira wrote: > Basically, is a cluster implementation just based on these two pieces: > MPI on the server and SSH on the clients? Technically you don't need a server as long as all your clients have a copy of your application and are able to talk to each other. File servers an…
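
A minimal sketch of that bare-bones arrangement (hostnames, paths, and the machinefile name are hypothetical, not from the thread):

    # machines.txt: one reachable host per line, each holding a copy of the app
    node1
    node2

    # launch across them; mpirun starts the remote processes over ssh/rsh
    mpirun -np 2 -machinefile machines.txt /shared/path/myapp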

Re: [Beowulf] Cluster of Linux and Windows

2010-03-12 Thread Geoff Jacobs
Mark Hahn wrote: >> I am used to working with Arch Linux. What do you think about it? > > The distro is basically irrelevant. Clustering is just a matter of your > apps, middleware like MPI (may or may not be provided by the cluster), > probably a shared filesystem, a working kernel, network stack, …

[Beowulf] error while using mpirun

2010-03-12 Thread akshar bhosale
I have installed mpich 1.2.6 on my desktop (core 2 duo). My test file is:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("my second program rank is %d \n", rank);
        MPI_Finalize();
        return 0;
    }

[Beowulf] [hpc-announce] P2S2-2010 submission still open: deadline extended to 3/17/2010

2010-03-12 Thread Yong Chen
[Apologies if you got multiple copies of this email. If you'd like to opt out of these announcements, information on how to unsubscribe is available at the bottom of this email.] Dear Colleague: We would like to inform you that the paper submission deadline of the Third International Workshop o…

Re: [Beowulf] assigning cores to queues with torque

2010-03-12 Thread Micha Feigin
On Mon, 8 Mar 2010 10:39:08 -0500, Glen Beane wrote: > On 3/8/10 10:14 AM, "Micha Feigin" wrote: > I have a small local cluster in our lab that I'm trying to set up with minimum > hassle to support both CPU and GPU processing, where only some of the nodes have > a GPU, and those have …
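
One common way to express this in Torque (a sketch; the hostnames, core counts, and property name are assumptions, not the poster's actual configuration) is to tag the GPU-equipped nodes with a node property and have jobs request it:

    # $TORQUE_HOME/server_priv/nodes
    node01 np=8 gpu
    node02 np=8 gpu
    node03 np=8
    node04 np=8

    # a GPU job then requests nodes carrying the property:
    qsub -l nodes=1:ppn=4:gpu gpu_job.sh

CPU-only jobs simply omit the property and can land on any node; queues or ACLs can be layered on top if stricter separation is needed.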

Re: [Beowulf] how large can we go with 1GB Ethernet? / Re: how large of an installation have people used NFS, with?

2010-03-12 Thread Mark Hahn
It looks like Allied Telesis makes chassis switches now too, as does Fortinet (I don't think Henning named them); they took over the Woven Systems stuff after the latter went under. Interesting - I wondered what had happened to Woven. But Woven reminds me of Gnodal, which also aims to prod…