Re: [Beowulf] Replacement for C3 suite from ORNL

2015-02-28 Thread Ashley Pittman
to be available here: >> http://www.csm.ornl.gov/torc/C3/ > We've been using pdsh: > https://code.google.com/p/pdsh/ I’m a long-time user of pdsh although more recently I’ve been looking at clush, which has a lot more options and a python interface should you need it.

Re: [Beowulf] RFC: Policy on CFP's being sent to the Beowulf list

2014-03-19 Thread Ashley Pittman
the hpc-announce list to be lower than that of the Beowulf list - and hpc-announce gets a surprisingly large number of postings. Given that it’s easy enough for people to subscribe to hpc-announce and anything posted would be unlikely to provoke (on-topic) discussion on the Beowulf list I don

Re: [Beowulf] Cloud / HPC

2013-04-15 Thread Ashley Pittman
existing pretty strong connotations is hurting > more than helping at this point. I've taken to saying I work in "Computing" as a distinct field from "IT". The difference being that Computing is about using computers for calculations/analytics rather than as a

Re: [Beowulf] Supercomputers face growing resilience problems

2012-11-23 Thread Ashley Pittman
d be, at least in part this is due to a "not invented here" attitude from both sides but also commercial pressures keep a lot of the work and algorithms secret. Just look at the number of people from HPC who have signed on with Amazon/Google and th

Re: [Beowulf] Re: Interesting

2010-10-29 Thread Ashley Pittman
course NASA are famous for this but at the end of the day it's something that we've probably all done, I don't own a DVD player any more and neglected to back up all my DVDs before it broke. With audio tapes and vinyl I'm not so bad, the challenging one for me would be all the

Re: [Beowulf] diskless cluster questions

2010-07-07 Thread Ashley Pittman
ient or you may choose to have an entire fs tree for each client. Another option might be to use fuse although I don't have much experience of that myself, it's basically the same but each client would have a copy-on-write version of /var and /etc to allow them to write to files in t

Re: [Beowulf] pdsh question

2010-05-11 Thread Ashley Pittman
earlier on today I ran "pdsh -w [0-25] -R exec tune2fs -O extents /dev/mapper/ost_%h" to re-tune all the devices in a lustre filesystem. Ashley. -- Ashley Pittman, Bath, UK. Padb - A parallel job inspection tool for cluster computing http://padb.pittman.org.uk
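A rough sketch of what pdsh's "-R exec" mode does with the %h escape: the command is run locally once per target host, with %h replaced by each host name. The host names below are hypothetical, and the real tune2fs call is replaced by echo so the sketch is safe to run anywhere:

```python
import subprocess

# Hypothetical host list; pdsh would take these from -w.
hosts = [f"oss{i}" for i in range(3)]

# The command template; %h is pdsh's per-host substitution escape.
# echo stands in for the real tune2fs so nothing is actually re-tuned.
template = ["echo", "tune2fs", "-O", "extents", "/dev/mapper/ost_%h"]

for host in hosts:
    # Expand %h in every argument, then run the command for this host.
    cmd = [arg.replace("%h", host) for arg in template]
    subprocess.run(cmd, check=True)
```

With real pdsh the fanout is parallel rather than this serial loop, but the %h expansion works the same way.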

Re: [Beowulf] confidential data on public HPC cluster

2010-03-01 Thread Ashley Pittman
cluster or traveling over the wire somewhere to get there. Ashley.

Re: [Beowulf] clustering using xen virtualized machines

2010-01-29 Thread Ashley Pittman
On 26 Jan 2010, at 19:37, Paul Van Allsburg wrote: > Ashley Pittman wrote: >> On 25 Jan 2010, at 15:28, Jonathan Aquilina wrote: >>> has anyone tried clustering using xen based vm's. what is everyones take on that? its something that popped into my

Re: [Beowulf] clustering using xen virtualized machines

2010-01-25 Thread Ashley Pittman
are perspective it's very similar to running real hardware. For my needs (development) it's perfectly adequate, I've not benchmarked it against running the same code on the raw hardware though. Ashley,

Re: [Beowulf] Re: cluster fails to boot with managed switch, but 5-port switch works OK

2009-12-03 Thread Ashley Pittman
net driver. Or the new distro you are trying enumerates the ethernet devices differently and it's trying to load the getfile from a different unconnected ethernet port. That's fairly common as well. It could even be worse than that, in that the enumeration could be non-deterministic to

Re: [Beowulf] mpirun and line buffering

2009-10-27 Thread Ashley Pittman
y with the same settings will produce good output on some clusters and bad on others. The only resource manager which seems to reliably not mess up output is orte, that and RMS of course. I believe most people take the route of getting rank[0] to do all the printing. Ashley,

Re: [Beowulf] Virtualization in head node ?

2009-09-16 Thread Ashley Pittman
nce. I'm sure a case could be made for running ten Login instances here but I'm not sure of the benefits myself. Ashley Pittman.

Re: [Beowulf] filesystem metadata mining tools

2009-09-13 Thread Ashley Pittman
"noatime" these days? Ashley.

Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread Ashley Pittman
of programmers' time and is also likely to make the application run slower. Yours, Ashley Pittman.

Re: [Beowulf] dedupe filesystem

2009-06-28 Thread Ashley Pittman
On Thu, 2009-06-25 at 13:09 -0500, Rahul Nabar wrote: > On Tue, Jun 2, 2009 at 12:39 PM, Ashley Pittman wrote: > Fdupes scans the filesystem looking for files where the size matches, if it does it md5's them checking for matches and if that

Re: [Beowulf] HPC fault tolerance using virtualization

2009-06-16 Thread Ashley Pittman
care about underlying performance, the traditional HPC crowd who, let's be honest, are the ones with the money and the talent anyway. It's as though HPC has gone mainstream, or is infiltrating it, whilst at the same time mainstream computing is jumping into the cloud. All of a sudden HPC doesn

Re: [Beowulf] dedupe filesystem

2009-06-02 Thread Ashley Pittman
duplicate files. There is another test it could do after checking the sizes and before the full md5: it could compare the first, say, Kb, which should mean it would run quicker in cases where there are lots of files which match in size but not content, but anyway I digress. Ashley Pittman.
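The size-then-checksum approach described in these two posts can be sketched in a few lines of Python. This is an illustration of the idea, not fdupes itself:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths):
    """Group candidate files by size first, then confirm duplicates with
    an MD5 of the contents -- the same two-stage test fdupes uses."""
    by_size = defaultdict(list)
    for path in paths:
        by_size[os.path.getsize(path)].append(path)

    duplicates = []
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue  # a unique size cannot have a duplicate
        by_hash = defaultdict(list)
        for path in same_size:
            with open(path, "rb") as f:
                by_hash[hashlib.md5(f.read()).hexdigest()].append(path)
        duplicates.extend(group for group in by_hash.values() if len(group) > 1)
    return duplicates
```

The refinement suggested above, comparing just the first Kb or so before hashing, would slot in as one more grouping pass between the size and md5 stages.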

Re: [Beowulf] More casualties in the HPC landscape

2009-05-28 Thread Ashley Pittman
I can't speak for SiCortex but the Quadrics news as reported on The Register is spot on, I'm surprised however that it's taken so long for the news to filter through into the mainstream. Ironically enough SiCortex were one of the first people I sent my CV to :( Ashley,

Re: [Beowulf] Should I go for diskless or not?

2009-05-15 Thread Ashley Pittman
you know the application doesn't fit in memory and can allocate some extra nodes to host the swapped memory, preferably swapping over the network to RAM on a remote machine. This doubles the nodes required to run your job however and makes scheduling it with normal jobs impossible. Ashley Pittman.

Re: [Beowulf] evaluating FLOPS capacity of our cluster

2009-05-11 Thread Ashley Pittman
with that number, > until somebody in the OpenMPI list told me that > "anything below 85%" needs improvement. :( At 24 nodes that's probably a reasonable statement. Ashley,

Re: [Beowulf] 1 multicore machine cluster

2009-04-29 Thread Ashley Pittman
six or seven years old, before then a CPU was just a CPU and you would refer to "an N CPU cluster". All in all it can be confusing, particularly when dealing with specifications or software which is more than one generation of hardware old. Ashley, [1] Of course I'm actually referri

Re: [Beowulf] programming guidence request

2009-01-23 Thread Ashley Pittman
ve? What features do you want? print and gdb are the most common but others are available if you have specific requirements that these don't meet. Ashley,

Re: [Beowulf] Re: RRDtools graphs of temp from IPMI

2008-11-11 Thread Ashley Pittman
bound and sometimes getting a huge > load average from failed ipmitool instances hanging around. Even when it does work running "ipmitool sensor" in-band can often take 30 seconds to complete which isn't great for performance. Ashley,

Re: XML alternatives [was Re: [Beowulf] What services do you run on your cluster nodes?]

2008-09-26 Thread Ashley Pittman
nds but is horribly complex and is incredibly difficult to get 100%. Perhaps we could talk off-list and you can tell me what I've been doing wrong? Ashley.

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-25 Thread Ashley Pittman
en you'll see the effects at 32 nodes. Ashley Pittman.

Re: [Beowulf] shmem

2008-09-24 Thread Ashley Pittman
re-use a large part of it if you were so inclined. > Separately, does anyone here happen to know whether shmem applications > care about independent progress? That is, if rank A is puting and geting to rank B, and rank B is off in application code, do the puts and gets

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread Ashley Pittman
of metrics so that it only has to startup and send data once rather than N times? Ashley.

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread Ashley Pittman
CPU and some kernel versions are pretty bad, one version of Red Hat was effectively unusable on clusters because of kscand. Ashley Pittman.

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread Ashley Pittman
On Mon, 2008-09-22 at 15:44 -0400, Eric Thibodeau wrote: > Ashley Pittman wrote: > > On Mon, 2008-09-22 at 14:56 -0400, Eric Thibodeau wrote: > > If it were up to me I'd turn *everything* possible off except sshd and > > ntp. The problem however is the maintenanc

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-22 Thread Ashley Pittman
erpdfs/pap301.pdf Also look at "whatelse" from http://www.c3.lanl.gov/pal/software.shtml Ashley Pittman.

Re: [Beowulf] Re: GPU boards and cluster servers.

2008-09-08 Thread Ashley Pittman
but the > guilty shall remain nameless. You don't have to buy Dell hardware direct from Dell, there are plenty of people who will sell you Dell nodes with value-add hardware and software. Ashley,

RE: [Beowulf] Infiniband modular switches

2008-08-26 Thread Ashley Pittman
much higher bandwidth figure, this would have completely defeated the point of the benchmark in the first place however, which was to show that adaptive routing is necessary for consistent network performance. Ashley.

Re: [Beowulf] Can one Infiniband net support MPI and a parallel filesystem?

2008-08-13 Thread Ashley Pittman
l number of processes in a big job sharing a node with a resource hogging job and slow down the entire big job however I've never seen this happening in the wild. > Also, a poorly behaved program can cause the other codes on > that node to crash (

Re: [Beowulf] copying big files

2008-08-08 Thread Ashley Pittman
a version that copies a single file, slightly harder to do multiple files but still not rocket science. Basically efficient broadcast isn't as easy to make generic as it seems, why waste time even trying when you can get MPI to do all the tricky bits like work out topology/starting daemons/sec

Re: [Beowulf] Infiniband modular switches

2008-07-28 Thread Ashley Pittman
within the same job which effectively prevents there being a single optimum "site" algorithm. AlltoAll *is* the hardest MPI function to implement well and in my view it makes a good benchmark not just of the network but also of the MPI stack itself, there is a good chance tha

Re: [Beowulf] Update on mpi problem

2008-07-10 Thread Ashley Pittman
that openmpi is being MPI compliant in both cases. Ashley Pittman.

Re: [Beowulf] An annoying MPI problem

2008-07-09 Thread Ashley Pittman
'll point out if you are doing anything silly with MPI calls, there is enough flexibility in the standard that you can do something completely illegal but have it work in 90% of cases, marmot should pick up on these. http://www.hlrs.de/organization/amt/projects/marmot/ We could take this off-l

Re: [Beowulf] mdns

2008-07-07 Thread Ashley Pittman
ive directory offers. Unfortunately with Multicast I think network bottlenecks are a fact of life and on a network with static hardware configuration it really is better to have a static software configuration as well. What problem are you trying to solve? Ashley Pittman.

Re: [Beowulf] mdns

2008-07-07 Thread Ashley Pittman
e most part constant. In addition it used to be the case there were performance issues associated with using zeroconf on large networks and the last thing you want in a cluster is additional network traffic clogging up the system. Ashley Pittman.

Re: [Beowulf] Re: "hobbyists"

2008-06-24 Thread Ashley Pittman
are thin on the ground which is ironic as HPC is probably the one industry where Linux has the biggest market share. Ashley Pittman.

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-24 Thread Ashley Pittman
ication pattern of the application, at least in the case when you aren't using all CPUs per node. Your MPI should pick sensible defaults for you. Ashley Pittman.

Re: [Beowulf] security for small, personal clusters

2008-06-24 Thread Ashley Pittman
oing searches tends to give a few "head in the sand" sites but predominantly they seem to be oriented for the security professional. It's no different than securing a standard desktop machine or your laptop. Disable as many

Re: Re[2]: [Beowulf] MVAPICH2 and osu_latency

2008-06-13 Thread Ashley Pittman
be the same every run, this probably won't change anything as all numactl can do is stabilise the results towards the bottom of the range observed without it. Ashley Pittman.

Re: [Beowulf] A couple of interesting comments

2008-06-09 Thread Ashley Pittman
problem with this is if the head node PXE boots on the customer's network and gets automatically re-installed as a windows workstation everybody gets egg on their face. Yes even "modern" BIOSes are bad but localboot first is a sensible default. Ashley Pittman.

Re: [Beowulf] Supercomputing Companies in the United Kingdom

2008-05-20 Thread Ashley Pittman
ing. If you have a specific question I'm sure either of us could help. >Pure software HPC is another matter. Are you looking for companies > with products, or specific markets? Or companies that develop HPC code > for customers? I could name several companies in a few d

Re: [Beowulf] MPICH vs. OpenMPI

2008-04-24 Thread Ashley Pittman
ve some codes which work better with one and some which work better with the other. Ashley,

Re: [Beowulf] Opinions of Hyper-threading?

2008-02-27 Thread Ashley Pittman
ood enough to do the tests I'd have liked to have done the window was closed and hardware technology had moved on. Ashley.

Re: [Beowulf] VMC - Virtual Machine Console

2008-01-16 Thread Ashley Pittman
we are still a long way from it being widely used. Ashley,

Re: [Beowulf] ever heard of ScaleMP?

2007-12-11 Thread Ashley Pittman
roduct called f1200 so I assume they are related somehow. The talk is on-line although I'll admit it was about half way through before I understood what they were talking about. http://www.cse.scitech.ac.uk/disco/mew18/Presentations/Day2/7th_Session/RobinHarker.pdf Ashley,

Re: [Beowulf] Really efficient MPIs??

2007-11-29 Thread Ashley Pittman
g within a node consumes CPU cycles and if your code is overlapping comms and compute to such an extent that latency is not a large factor, handing the comms off to the nic to be handled asynchronously without CPU intervention can improve performance. Ashley,

Re: [Beowulf] Network Filesystems performance

2007-11-18 Thread Ashley Pittman
xperience of in-house and then modify that distro to meet your needs rather than the other way around. Ashley,

Re: [Beowulf] IEEE 1588 (PTP) - a better cluster clock?

2007-07-24 Thread Ashley Pittman
On Tue, 2007-07-24 at 14:37 +, Patrick Ohly wrote: > On Tue, 2007-07-24 at 15:15 +0100, Ashley Pittman wrote: > [examples for the need of a more accurate clock] > > But none of the ones you list are more than vaguely related to HPC. > [...] > > The only thing I've fou

Re: [Beowulf] IEEE 1588 (PTP) - a better cluster clock?

2007-07-24 Thread Ashley Pittman
'd use it if it was packaged with the distro and just worked out of the box (without using multicast preferably) but its absence isn't enough to cause me problems. Ashley.

Re: [Beowulf] Cluster Diagram of 500 PC

2007-07-16 Thread Ashley Pittman
n the kernel, 5.x versions of score used a kernel patch, 6.x versions of score patch the network card drivers themselves as this is an easier approach to manage. Using different kernels isn't hard but it does require patching code. Ashley,

Re: [Beowulf] MPI_reduce roundoff question.

2007-07-12 Thread Ashley Pittman
st and I've seen procurement contracts which state that any results obtained by the computer must be 100% repeatable, which in effect means the answer to your question is no, they cannot differ. Ashley,
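The underlying issue is that floating-point addition is not associative, so a reduction that combines partial sums in a different order (as MPI_Reduce may, depending on process count and algorithm) can give a different answer. A minimal illustration in Python:

```python
import math

vals = [1e16, 1.0, -1e16, 1.0]

# Left to right, as a single rank summing sequentially might do:
# 1e16 + 1.0 rounds back to 1e16, so one of the 1.0s survives.
sequential = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Pairwise, as a tree-based reduction across ranks might combine them:
# both 1.0s are absorbed into the large terms and lost.
pairwise = (vals[0] + vals[1]) + (vals[2] + vals[3])

print(sequential)       # 1.0
print(pairwise)         # 0.0
print(math.fsum(vals))  # 2.0, the exactly rounded sum
```

Same inputs, different association, different result, which is exactly why a contract demanding bit-for-bit repeatability pins down the reduction order.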

Re: [Beowulf] backtraces

2007-06-12 Thread Ashley Pittman
is jobs tend to be 32-128p, and run for a week, so it's not ideal to run them under the debugger. It really shouldn't be that difficult, on a Quadrics cluster at least you can use the command "padb -x -r " from anywhere in the clu

Re: [Beowulf] 1.2 us IB latency?

2007-04-25 Thread Ashley Pittman
On Wed, 2007-04-25 at 07:35 -0700, Christian Bell wrote: > On Wed, 25 Apr 2007, Ashley Pittman wrote: > > > You'd have thought that to be the case but PIO bandwidth is not a patch > > on DMA bandwidth. On alphas you used to get a performance improvement > > by evi

Re: [Beowulf] 1.2 us IB latency?

2007-04-25 Thread Ashley Pittman
On Wed, 2007-04-25 at 11:31 +0200, Håkon Bugge wrote: > At 17:55 24.04.2007, Ashley Pittman wrote: > >That would explain why qlogic use PIO for up to 64k messages and we > >switch to DMA at only a few hundred. For small messages you could best > >describe what we use as

Re: [Beowulf] 1.2 us IB latency?

2007-04-24 Thread Ashley Pittman
which seems a little fast to me. Regardless of how they have done it 1.2 is impressive, what would make me even more impressed is if it was quoted as 1.20 which would, as far as I'm aware, mean that they had the lowest latency of anybody. Ashley,

Re: [Beowulf] debugging

2007-04-13 Thread Ashley Pittman
On Fri, 2007-04-13 at 01:14 +0900, Naoya Maruyama wrote: > On 4/12/07, Ashley Pittman <[EMAIL PROTECTED]> wrote: > > My advice would be first and foremost to look at the core file, I assume > > your program is receiving a SEGV and exiting? core files can be > > problem

Re: [Beowulf] debugging

2007-04-12 Thread Ashley Pittman
uld be to set MALLOC_CHECK_=2 to enable integrity checking in the libc malloc implementation and if using ia64 download and compile gdb from source otherwise you might find it's not all that accurate at times. TotalView and DDT are both great if you have a licence for either
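MALLOC_CHECK_ is read by glibc's malloc at startup; a value of 2 makes the process abort as soon as heap corruption is detected rather than limping on. A sketch of applying it to a single run without polluting your shell; the target program here is a stand-in that just echoes the setting back, not a real application:

```python
import os
import subprocess
import sys

# Build an environment with glibc heap checking enabled; value 2 means
# abort immediately when malloc/free detect corruption.
env = dict(os.environ, MALLOC_CHECK_="2")

# Launch the target under that environment.  A real debugging session
# would start the suspect application here instead.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MALLOC_CHECK_'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # 2
```

The same effect from a shell is just `MALLOC_CHECK_=2 ./myapp` for one invocation.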

Re: [Beowulf] Re: Performance characterising a HPC application

2007-04-04 Thread Ashley Pittman
80 9770.92 10413.19 10318.85
> 1048576 40 18792.78 20762.40 20533.59
> 2097152 20 33849.45 42141.25 41535.32
> 4194304 10 65966.61 81472.99 79850.54
Why is 512 so much quicker than 8? The reduce figures showed the same issue and it's something I'd want to look at. Ashley,

Re: [Beowulf] Performance characterising a HPC application

2007-04-04 Thread Ashley Pittman
Christian Bell wrote: > On Wed, 04 Apr 2007, Ashley Pittman wrote: >> GasNet does however get extra credit for having a asynchronous >> collective, namely barrier. Unfortunately when you read the spec it's >> actually a special case asynchronous reduce which is almost

Re: [Beowulf] Performance characterising a HPC application

2007-04-04 Thread Ashley Pittman
Richard Walsh wrote: > Ashley Pittman wrote: >> Patrick Geoffray wrote: >> >>> I would bet that UPC could more efficiently leverage a strided or vector >>> communication primitive instead of message aggregation. I don't know if >>> GasNet provides

Re: [Beowulf] Performance characterising a HPC application

2007-04-04 Thread Ashley Pittman
ective, namely barrier. Unfortunately when you read the spec it's actually a special case asynchronous reduce which is almost impossible to optimise anything like as well as barrier, which is a shame. Ashley,

Re: [Beowulf] Win64 Clusters!!!!!!!!!!!!

2007-04-04 Thread Ashley Pittman
systems they sell, buy a Quadrics system from HP and you get HP-MPI by default. I don't think it's because their MPI is necessarily any better than our MPI on the particular platform, it's just that it's the HP-MPI and it's a HP platform. Ashley,

Re: [Beowulf] Performance characterising a HPC application

2007-03-23 Thread Ashley Pittman
It's probably a good thing to benchmark to get an idea of the capability of a given network. Ashley,

Re: [Beowulf] A start in Parallel Programming?

2007-03-23 Thread Ashley Pittman
C. I was the last year of intake to be taught Ada when I started in 1996, there was a "C and Unix" module in the second year which I'm fairly sure was compulsory. These days I program mostly in C although I can at least read Fortran. They switched from A

Re: [Beowulf] NSLU2 as part of a low end cluster

2007-03-23 Thread Ashley Pittman
eresting toy cluster, the constraints would certainly sharpen the mind. Ashley,

Re: [Beowulf] What is a "proper" machine count for a cluster

2007-03-15 Thread Ashley Pittman
w you to experiment with most things. Of course you are only two nodes away from having 16 cpus and 16 is the next step up the list, what was your budget again... Of course when you start doing real things rather than just interesting things it really does depend, not least on what the real thi

RE: [Beowulf] Re: SGI to offer Windos on clusters ---> Skew/Jitter paper

2007-01-22 Thread Ashley Pittman
s not a new idea, IIRC PSC were doing this six or seven years ago, I'd be interested to see if hyperthreading helps the situation, it's almost always turned off on any cluster over 32 CPUs but it might be advantageous to enable it and use something like cpuset

RE: [Beowulf] Re: SGI to offer Windos on clusters ---> Skew/Jitter paper

2007-01-18 Thread ashley
> And got it. The title is: > >"The Case of the Missing Supercomputing Performance" I wondered if you were talking about that paper but it's from lanl not sandia, it should be essential reading for everyone working with

Re: [Beowulf] SGI to offer Windows on clusters

2007-01-18 Thread Ashley Pittman
On Thu, 2007-01-18 at 08:17 -0600, Richard Walsh wrote: > Ashley Pittman wrote: > > On Wed, 2007-01-17 at 08:50 +0100, Mikael Fredriksson wrote > >> Yes, it is. And more so if this cluster/LAN can also utilize som type > >> of "MOSIX" system. This will su

Re: [Beowulf] SGI to offer Windows on clusters

2007-01-18 Thread Ashley Pittman
their parts the parts are mostly that of a bog standard Linux distribution. I suppose it could be true that changing the OS to Windows would make them less specialised however that probably says more about Windows than it does about "hard-core"

RE: [Beowulf] 'liquid cooled' racks

2006-12-06 Thread Ashley Pittman
't imagine hot liquid would be any different however. Ashley,

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-07 Thread Ashley Pittman
out interrupts is 2.72uSec, using interrupts it is 7.20uSec. One fundamental difference between these two measurements is that when you use interrupts the kernel has to get involved, without interrupts it doesn't, so you don't just have the interrupt but also an extra syscall. I know of p

Re: [Beowulf] MS HPC... Oh dear...

2006-06-14 Thread Ashley Pittman
mpich2 header file) anybody can create a DLL which is binary compatible and the app won't notice. For this to work of course Microsoft would probably need to release their source changes to mpich2 under the BSD Licence. Intel use a dynamic layer underneath or inside MPI which allow

Re: [Beowulf] MS HPC... Oh dear...

2006-06-12 Thread Ashley Pittman
ll create/mandate the model by > supplying the MPI. Some standards are defined after the fact. > > I am not advocating mimicing the Microsoft ABI. I am advocating getting > a single MPI ABI per ISA ABI. The question of course is, which one. Aren't these two statements contradictory

Re: [Beowulf] MS HPC... Oh dear...

2006-06-12 Thread Ashley Pittman
On Mon, 2006-06-12 at 11:18 -0400, Joe Landman wrote: > > Ashley Pittman wrote: > > >> More to the point, this dynamic binding allows you to write to the API, > >> present a consistent ABI, and handle the hardware details elsewhere in a > >> driver which can

Re: [Beowulf] MS HPC... Oh dear...

2006-06-12 Thread Ashley Pittman
On Mon, 2006-06-12 at 10:49 -0400, Joe Landman wrote: > > Ashley Pittman wrote: > > On Mon, 2006-06-12 at 00:02 -0400, Joe Landman wrote: > > > >> What Microsoft will do is to take away as much of this as they can. I > >> haven't seen it yet, but I beli

Re: [Beowulf] MS HPC... Oh dear...

2006-06-12 Thread Ashley Pittman
s at runtime, and just have it work. This is a nice idea. Perhaps I've missed something here, what do windows DLLs provide that a linux .so doesn't? Ashley,