2009/2/17 Robert G. Brown:
>>
> Right now the short answer for a vanilla starter cluster is: Connect
> your systems via a switched network on a private IP subnet, setup a
> shared common account space and remote mounted (e.g. NFS) working
> directories from a shared server.
Just to amplify on wh
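For reference, once the private subnet, shared accounts and NFS-mounted working
directories are in place, a common first sanity check is a minimal MPI "hello
world". The sketch below assumes OpenMPI (or any other MPI implementation) is
installed on every node and that a hostfile listing the eight machines exists;
"hosts" and the binary name are only example names.

/* hello_mpi.c - check that every node can take part in an MPI job.
 * Build: mpicc -o hello_mpi hello_mpi.c
 * Run:   mpirun --hostfile hosts -np 8 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* node this process runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

If every node's hostname shows up in the output, the basic plumbing (remote
login, accounts, shared directories) is working.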
On Mon, Feb 16, 2009 at 05:06:17PM +0530, Indrajit Deb wrote:
>
>Hello,
>I want to set up Beowulf on my eight-node cluster to run simulations. Debian
>4.0 is installed on each node. I am not familiar with Beowulf. If anyone can
>help me set up Beowulf from beginning to end, I will be grateful.
On Mon, 16 Feb 2009, Indrajit Deb wrote:
Hello,
I want to set up Beowulf on my eight-node cluster to run simulations. Debian 4.0
is installed on each node. I am not familiar with Beowulf. If anyone can help me
set up Beowulf from beginning to end, I will be grateful.
Thanks.
Can you give us more
On Sat, Feb 14, 2009 at 6:43 PM, David Mathog wrote:
> Tiago Marques
>
>
> > I've been trying to get the best performance on a small cluster we have
> > here at the University of Aveiro, Portugal, but I've not been able to get
> > most software to scale to more than one node.
>
> > The prob
On Mon, 2009-02-16 at 08:55 +0100, Carsten Aulbert wrote:
> Hi all,
>
> Sorry in advance for the vague subject and also the vague email; I'm
> trying my best to summarize the problem:
>
> On our large cluster we sometimes encounter the problem that our main
> scheduling processes are often in state D and eventually become incapable
> of pushing work to the cluster.
On Sat, Feb 14, 2009 at 1:49 PM, John Hearns wrote:
2009/2/13 Tiago Marques:
> >
> > As for software, I'm using Gentoo Linux, ICC/IFC/GotoBLAS, tried scalapack
> > with no benefit, OpenMPI and Torque, running in x86-64 mode.
> >
>
> Pallas/PMB, or there are test programs included with OpenMPI.
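Pallas/PMB (later distributed as the Intel MPI Benchmarks) and the test
programs shipped with OpenMPI measure this far more thoroughly, but for
illustration the core ping-pong pattern they rely on looks roughly like the
sketch below; the hostnames, message size and repetition count are example
values only.

/* pingpong.c - rough sketch of the ping-pong pattern used by MPI benchmarks.
 * Run with exactly two ranks, one per node, e.g.:
 *   mpirun -np 2 --host node1,node2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define REPS      1000
#define MSG_BYTES 1024

int main(int argc, char *argv[])
{
    char buf[MSG_BYTES];
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip for %d bytes: %.2f us\n",
               MSG_BYTES, (t1 - t0) / REPS * 1e6);

    MPI_Finalize();
    return 0;
}

A large per-message round-trip time on the interconnect is one common reason
tightly coupled codes stop scaling beyond a single node.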
Hello,
I want to set up Beowulf on my eight-node cluster to run simulations. Debian 4.0
is installed on each node. I am not familiar with Beowulf. If anyone can help me
set up Beowulf from beginning to end, I will be grateful.
Thanks.
Indrajit Deb
Research Scholar.
Dept. of Biophysics, Molecular Bio
On Sat, Feb 14, 2009 at 10:13 AM, Nicholas M Glykos wrote:
>
> Hi Tiago,
>
> > I tried with Gromacs ...
>
>
> Concerning your MD tests, would it be worthwhile to also check NAMD's
> parallel efficiency? Based on past experience, I would suggest that you
> try both the UDP- & TCP-based versions.
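For the comparison itself, parallel efficiency is usually taken as speedup
divided by the number of nodes. A tiny sketch of the arithmetic (the timings
below are placeholders, not measurements):

/* efficiency.c - speedup S(N) = T(1)/T(N), efficiency E(N) = S(N)/N */
#include <stdio.h>

int main(void)
{
    double t1 = 1000.0;   /* placeholder: wall-clock seconds on 1 node  */
    double tn = 150.0;    /* placeholder: wall-clock seconds on N nodes */
    int    n  = 8;        /* placeholder: number of nodes               */

    double speedup    = t1 / tn;
    double efficiency = speedup / n;

    printf("speedup %.2fx, parallel efficiency %.0f%%\n",
           speedup, efficiency * 100.0);
    return 0;
}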
Hi Joe.
Could you please run some dd tests that either read or write through NFS with
lots of small chunks (i.e. a high request rate rather than a high throughput
rate), to find out how that correlates with the higher latency w.r.t. InfiniBand?
Thanks,
Joshua
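A rough C analogue of the small-chunk test described above, for when dd is not
convenient: it issues many small synchronous writes to a path on the NFS mount
and reports the request rate. The path, chunk size and request count are
example values only.

/* nfs_small_io.c - issue many small synchronous writes and report req/s.
 * Build: cc -o nfs_small_io nfs_small_io.c   (add -lrt on older glibc)
 * Run:   ./nfs_small_io /path/on/nfs/mount/testfile
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define CHUNK 4096          /* small chunk: 4 KiB per request */
#define NREQ  100000L       /* number of write requests       */

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <path-on-nfs-mount>\n", argv[0]);
        return 1;
    }

    char buf[CHUNK];
    memset(buf, 'x', sizeof(buf));

    /* O_SYNC so every write must reach the server, not just the page cache */
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < NREQ; i++) {
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld requests of %d bytes in %.2f s -> %.0f req/s\n",
           NREQ, CHUNK, secs, NREQ / secs);
    return 0;
}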
Hi Joe,
(keeping all lists cross-posted; please shout briefly at me if I cross the
line into being rude):
Joe Landman wrote:
>
> Are you using a "standard" cluster scheduler (SGE, PBS, ...) or a
> locally written one?
>
We use Condor (http://www.cs.wisc.edu/condor/).
>
> Hmmm... Th
Carsten Aulbert wrote:
Hi all,
Sorry in advance for the vague subject and also the vague email; I'm
trying my best to summarize the problem:
On our large cluster we sometimes encounter the problem that our main
scheduling processes are often in state D and eventually become incapable
of pushing work to the cluster.
Hi Trond,
Trond Myklebust wrote:
> 2.6.27.7 has a known NFS client performance bug due to a change in the
> authentication code. The fix was merged in 2.6.27.9: see the commit
> http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.27.y.git&a=commitdiff&h=a0f04d0096bd7edb543576c55f7a0993628
Hi all,
Sorry in advance for the vague subject and also the vague email; I'm
trying my best to summarize the problem:
On our large cluster we sometimes encounter the problem that our main
scheduling processes are often in state D and eventually become incapable
of pushing work to the cluster.
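For anyone chasing this kind of hang, the sketch below lists the processes
currently stuck in state D (uninterruptible sleep, usually blocked on I/O) by
scanning /proc/<pid>/status; ps and the kernel's SysRq "w" dump give the same
information, this is only an illustration.

/* dstate.c - list processes in state D by scanning /proc/<pid>/status.
 * Build: cc -o dstate dstate.c
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    if (!proc) { perror("/proc"); return 1; }

    struct dirent *de;
    while ((de = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                       /* only numeric (PID) entries */

        char path[288], name[256] = "?", state[64] = "?";
        snprintf(path, sizeof(path), "/proc/%s/status", de->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;                       /* process may have exited */

        char line[256];
        while (fgets(line, sizeof(line), f)) {
            sscanf(line, "Name:\t%255[^\n]", name);
            sscanf(line, "State:\t%63[^\n]", state);
        }
        fclose(f);

        if (state[0] == 'D')                /* e.g. "D (disk sleep)" */
            printf("%s  %s  %s\n", de->d_name, state, name);
    }
    closedir(proc);
    return 0;
}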