them interactive access
to those compute nodes - e.g. for profiling, debugging or computational
steering.
Daniel
Daniel Kidger
Bull Information Systems, UK
On 25 July 2013 06:08, Christopher Samuel wrote:
> On 25/07/13 14:40, Mark
On 19 September 2012 15:09, Chris Dagdigian wrote:
> Daniel Kidger wrote:
>
>> The technology was also known as MIC : pronounced 'Mick' or 'Mike'
>> depending on who you spoke to.
>> That was confusing - so with Phi it is now unambiguous, right? er
All,
Having seen the pictures of Simon's Raspberry Pi cluster at Southampton,
one thing that strikes me is how ugly all the cables make it look.
So how can this be improved - indeed, is 'cable-free' possible?
- The network could be WiFi, using micro adapters.
- USB power could at least be daisy-chained
Eugen,
I am certainly interested!
I touched on the Gromacs port to ClearSpeed when I worked there - I then
went on to write the port of AMBER to CS
plus I have a pair of RPis that I tinker with.
Simon Cox at Southampton did a publicity stunt recently - building a 64-node
RPi cluster with his youn
John,
Remember that Knights Corner is an instance (cf. Ivy Bridge) whereas Phi is
a product line (cf. Xeon).
In the same way, Kepler is an instance of Nvidia's Tesla line.
The technology was also known as MIC: pronounced 'Mick' or 'Mike'
depending on who you spoke to.
That was confusing - so with PH
Mikhail,
I still think that there could be a NUMA issue here.
With no NUMA binding:
- the one-process case can migrate between cores across the two sockets - if
its memory is on the first socket, then it will run a little slower when
scheduled on the second socket.
- with two processes on a node, th
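To make the binding concrete, here is a minimal sketch (in C, under stated
assumptions) of pinning a process to the cores of one socket so that its
first-touch allocations stay on that socket's memory. The assumption that
cores 0-7 belong to socket 0 is mine; the real mapping should be read from
the machine's topology. From the shell, numactl --cpunodebind=0 --membind=0
does the equivalent without touching the code.

/* Hedged sketch: bind the calling process to cores 0-7, assumed here to be
 * socket 0, so that memory allocated afterwards stays NUMA-local.
 * The real core-to-socket mapping should come from hwloc or
 * /sys/devices/system/cpu/cpuN/topology/physical_package_id. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 8; cpu++)   /* assumption: cores 0-7 = socket 0 */
        CPU_SET(cpu, &mask);

    /* pid 0 means "the calling process". */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    printf("pinned to socket 0; first-touch allocations will now be local\n");
    return EXIT_SUCCESS;
}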
--
Bull, Architect of an Open World TM
Dr. Daniel Kidger, HPC Technical Consultant
daniel.kid...@bull.co.uk
+44 (0) 7966822177
king acks )
Daniel
--
Bull, Architect of an Open World TM
Dr. Daniel Kidger, HPC Technical Consultant
daniel.kid...@bull.co.uk
+44 (0) 7966822177
not line buffered, and so the output is largely unreadable as all the
lines have been jumbled together.
Is this a known issue with a workaround? Or does everyone else only
print from just one MPI process?
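One common workaround (a minimal sketch, assuming a C code on MPI) is to
force line buffering in every rank and tag each line with the rank, so that
even interleaved output stays readable; gathering everything to rank 0
before printing is the more robust alternative.

/* Hedged sketch: make per-rank stdout line buffered and tag each line with
 * the rank, so lines from different processes do not get jumbled mid-line. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Flush at every newline instead of only when the buffer fills. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    printf("[rank %d/%d] hello from this process\n", rank, size);

    MPI_Finalize();
    return 0;
}

This only keeps individual lines intact; it imposes no ordering between
ranks, and some MPI launchers can also tag or redirect per-rank output for
you.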
Daniel
--
Bull, Architect of an Open World TM
Dr. Daniel Kidger, HPC Technical Consultant
>Rich Sudlow wrote:
>> In the past we've used cyclades console servers for serial
>> interfaces into our cluster nodes.
>>
>> We're replacing 360 nodes which couldn't do SOL with 360
>> which could.
>>
>> Now that we can do SOL, is it better to use that instead of the
>> Cyclades?
>>
>> Thou
Actually I had never heard of Astroglide until yesterday - I guess it is
only sold in the USA.
I saw it mentioned on Wikipedia when I searched for 'lubricant' (!)
But you have got to admit, if NASA invented something for easing
sticking rails in clusters, Astroglide would be a great product name.
>
> >> Dear Beowulfers
> >>
> >> A mundane question:
> >>
> >> What is the right lubricant for computer rack sliding rails?
> >> Silicone, paraffin, graphite, WD-40, machine oil, grease, other?
To avoid such lock-ups, we use Crisco
Tsubame isn't just about delivering flops to production codes.
It is trying to spearhead coprocessing.
The people there have been porting codes to ClearSpeed / GPUs for a
while - and hence have been publishing their experiences.
Daniel
On Fri, 2008-12-12 at 12:01 +, Loic Tortay wrote:
> Florent C
causing the Fluorinert to catch fire?
Daniel
Dr. Daniel Kidger, Technical Consultant, ClearSpeed Technology plc, Bristol,
UK
E: [EMAIL PROTECTED]
T: +44 117 317 2030
M: +44 7738 458742
"Write a wise saying and your name will live forever." - Anonymous.
any online resources for this?
Daniel
Dr. Daniel Kidger, Technical Consultant, ClearSpeed Technology plc, Bristol,
UK
E: [EMAIL PROTECTED]
T: +44 117 317 2030
M: +44 7738 458742
"Write a wise saying and your name will live forever." - Anonymous.
Bogdan,
Parallel applications with lots of MPI traffic should run fine on a cluster
with large jiffies - just as long as the interconnect you use doesn't need to
take any interrupts. (Interrupts add hugely to the latency figure.)
Daniel
platforms, ACML on AMD, or there are generic ones like the excellent
libgoto or ATLAS.
b) an MPI library such as LAM, MPICH or similar
If you tell us the locations of these two, many people on this list would be
able to mail you a suitable Makefile.
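As a rough illustration of what such a Makefile has to drive, here is a
minimal test program that needs both libraries; the link line in the comment
is an assumption (the -L path and -l name depend entirely on which BLAS and
MPI are installed) and is exactly the detail the Makefile would encode.

/* Minimal sketch: a program that needs both MPI and a BLAS library.
 * An assumed link line might look like
 *   mpicc test_blas.c -o test_blas -L/path/to/blas/lib -lblas
 * where the -L path and -l name depend on the installed BLAS
 * (MKL, ACML, libgoto, ATLAS, ...). */
#include <mpi.h>
#include <stdio.h>

/* Reference BLAS Fortran symbol: double-precision dot product. */
extern double ddot_(const int *n, const double *x, const int *incx,
                    const double *y, const int *incy);

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 3, inc = 1;
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    printf("rank %d: ddot = %f (expect 32.0)\n",
           rank, ddot_(&n, x, &inc, y, &inc));

    MPI_Finalize();
    return 0;
}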
Daniel
Dr. Daniel Kidger, Technical Consultant
s met this same issue?
2. Which is the latest version of ITC/Vampir that can be used on Opteron?
3. Is there a workaround (cf. the patch to use the Intel compiler on Opteron)?
Daniel
Dr. Daniel Kidger, Technical Consultant, Clearspeed plc, Bristol UK
E: [EMAIL PROTECTED]
T: +44 117 317 2030
M
Geoff Jacobs wrote:
>Jim Lux wrote:
>> Heh, heh, heh..
>> I have a box of Artisoft 2Mbps NICs out in the garage.
>> Or, maybe, some of those NE1000 coax adapters. I have lots of old coax,
>> a bag full of connectors, a crimper, and I'm not afraid to use them.
>>
>> Hey, it's only to boot.
>Save
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1545894&isnumber=32989
.. not sure about any issue with beta radiation and clusters - most nodes are
in metal boxes in metal racks?
Daniel
Dr. Daniel Kidger, Technical Consultant, Clearspeed plc, Bristol UK
E: [EMAIL PROTECTED]
T: