On 03/27/2014 08:40 AM, Perry Cate wrote:
> I am trying to benchmark my Rocks Linux cluster with Linpack. I followed
> this guide:
> http://frankinformation.wordpress.com/2013/09/09/installing-linpack-benchmark-hpl-on-a-rocks-cluster/
>
> And I have HPL successfully installed on my frontend node: when the
> frontend is the only computer in the machines file, the program runs
> fine. When I try to run it on some compute nodes by adding them to the
> machines file, mpirun reports that it "was unable to launch the
> specified application as it could not find an executable"
>
> I'm sure I'm probably doing something wrong here, but does this mean
> that I have to manually install HPL on all of the compute nodes, or am
> I just not starting it correctly?
>
> I appreciate your help...
>
>

On 03/27/2014 06:52 PM, Adam DeConinck wrote:

Hi Perry,

When you run an MPI program on a cluster, the executable usually needs
to be visible in the same location on all the nodes. This might mean
installing the application on each node, but for many clusters it just
means putting the executable on a network filesystem which is visible to
all the nodes.

So for example if /home is an NFS filesystem mounted on all the nodes,
you should be able to run a binary under /home across the nodes with MPI.
I don't have any experience with ROCKS so I don't know if it sets up an NFS
filesystem by default.
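
For example, something like this should work (placeholder path and
process count; I'm assuming OpenMPI's mpirun, a machinefile named
"machines", and HPL's usual binary name, xhpl):

# HPL reads its HPL.dat input from the working directory,
# so cd there first, then launch the binary from the shared /home
cd /home/your_username/hpl
mpirun -np 8 -machinefile machines /home/your_username/hpl/xhpl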

If you don't want a shared filesystem, you can install the app separately
on each node instead.

Adam

Hi Perry,

Adding to Adam's comments:

On Rocks clusters, the standard NFS directory shared across the nodes,
meant for installing software applications for general use,
is /share/apps.
You could install HPL in, say, /share/apps/hpl,
especially if it is meant for general use.
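
For example (paths hypothetical; the bin/<arch> directory name depends
on the Make.<arch> file you built HPL with):

# copy the HPL build from your home directory to the shared apps area,
# which Rocks exports over NFS to all compute nodes
mkdir -p /share/apps/hpl
cp -r ~/hpl-2.1/bin/Linux_PII_CBLAS /share/apps/hpl/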

The home directories are also NFS shares, so an alternative could be
/home/your_username/hpl, if it is only for your own use.
This seems to be what the guide you followed recommends.

I'd suggest that you first try to run some very simple MPI program,
to test MPI functionality, before you go for HPL.
The OpenMPI distribution has a great trio for these tests:
hello_c.c, ring_c.c, and connectivity_c.c in the "examples" directory,
if you downloaded the source tarball. MPICH has cpi.c,
which will also work.
You can even run "hostname" on all nodes at once
under mpiexec for a quick test, as in the sketch below.
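
For instance, something along these lines (I'm assuming OpenMPI's
compiler wrapper and launcher, and a "machines" file listing your nodes):

# compile one of the OpenMPI example programs and run it on the nodes;
# keep the binary somewhere NFS-shared, such as your home directory
mpicc examples/hello_c.c -o hello_c
mpirun -np 4 -machinefile machines ./hello_c

# or skip compiling altogether and just launch hostname everywhere
mpirun -np 4 -machinefile machines hostname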

Also, if the executable (say, hpl.x)
is installed on an NFS share, as it seems to be,
make sure the directory is properly mounted on all nodes.
For instance, run "(hostname; ls -l /home/your_name/hpl/hpl.x)"
with tentakel, pdsh, or even with mpiexec using all nodes,
and check the output, as sketched below.
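
With pdsh or mpiexec that might look like this (node names and paths
are placeholders; Rocks compute nodes are usually named compute-X-Y):

# run the check on four compute nodes with pdsh
pdsh -w compute-0-[0-3] 'hostname; ls -l /home/your_name/hpl/hpl.x'

# or with mpiexec, one process per slot in the machines file
mpiexec -np 4 -machinefile machines sh -c \
    '(hostname; ls -l /home/your_name/hpl/hpl.x)'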

I'm not sure how Rocks handles your PATH and LD_LIBRARY_PATH,
but if you have trouble with them
you may need to add something like this to your ~/.bashrc file:

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH

Likewise, you may need to add something similar to your PATH to point to
wherever HPL is, or use the executable's full path name on the
mpiexec command line.
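
For example (paths hypothetical; HPL's default build puts the xhpl
binary and the HPL.dat input file together under bin/<arch>):

# cd to the build directory first, since HPL reads HPL.dat from the
# working directory, then launch xhpl by its full path
cd /share/apps/hpl/bin/Linux_PII_CBLAS
mpiexec -np 8 -machinefile machines \
    /share/apps/hpl/bin/Linux_PII_CBLAS/xhpl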

I hope this helps,
Gus Correa

