Re: [Beowulf] Intel Phi musings

2013-03-01 Thread Igor Kozin
Vectorization is crucial for getting performance out of Phi. The Intel SPMD Program Compiler runs on Phi (http://ispc.github.com/), but it's probably not an officially supported product. On 28 February 2013 03:30, "C. Bergström" wrote: > Unless I missed something I only see this being wrapped around som

[Beowulf] follow up on SSD caching

2013-02-18 Thread Igor Kozin
Intel Introduces Cache Acceleration Software http://newsroom.intel.com/community/intel_newsroom/blog/2013/02/12/introducing-intel-cache-acceleration-software-for-use-with-intel-ssd-data-center-family It would be interesting to know if anyone had a success with it especially in the context of para

Re: [Beowulf] Maker2 genomic software license experience?

2012-11-09 Thread Igor Kozin
You nailed it! And not just the code - new codes appear all the time: bowtie, bwa, soap2, soap3, bowtie2, snap ... On 9 November 2012 05:32, James Lowey wrote: > Bingo, the code changes so fast that parallelism is best left to the > scheduler, for now... > > > James Lowey > Director, NCS > TGen __

Re: [Beowulf] Processors that can do 20+ GFLOPS/Watt

2012-10-05 Thread Igor Kozin
help the winner win http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone On 4 October 2012 21:05, Eugen Leitl wrote: > > http://www.streamcomputing.eu/blog/2012-08-27/processors-that-can-do-20-gflops-watt/ > > Processors that can do 20+ GFLOPS/Watt > > by Vincent H

Re: [Beowulf] Southampton's RPi cluster is cool but too many cables?

2012-09-25 Thread Igor Kozin
Hi Ellis, if we are to believe the video on the page John pointed to, then the membrane indeed processes both DNA strands. You will probably want to have a second read anyway in order to improve reliability. We can only guess what their signal-to-noise ratio is. Igor > I do wonder however if the

Re: [Beowulf] Southampton's RPi cluster is cool but too many cables?

2012-09-25 Thread Igor Kozin
"this thing" does only ~ 1/20 of the genome. you have to pay quite a bit more for your full genome which makes it comparable (price-wise) with other technologies. hopefully in a few years time it'll get cheaper. stored as characters (1 byte per char) the genome is ~ 3 GB. you could use two bits to

Re: [Beowulf] value of parallel programming experience (was: Checkpointing using flash)

2012-09-25 Thread Igor Kozin
It is not so much about parallel programming experience but about a scientific software development career path. Quite often parallel skills are needed anyway. A former colleague and a good friend of mine explains it quite nicely here: http://software.ac.uk/blog/2012-04-23-work-scientific-software-e

Re: [Beowulf] General thoughts on Xeon 56xx versus E5 series?

2012-09-14 Thread Igor Kozin
If memory bandwidth is your concern, then there are models which boost it quite significantly, e.g. http://ark.intel.com/products/64584/Intel-Xeon-Processor-E5-2660-20M-Cache-2_20-GHz-8_00-GTs-Intel-QPI Probably very few codes are going to benefit from AVX without extra effort, but bandwidth is a clear win

Re: [Beowulf] 10GbE topologies for small-ish clusters?

2011-10-12 Thread Igor Kozin
Gnodal was probably the first to announce a 1U 72-port switch http://www.gnodal.com/docs/Gnodal%20GS7200%20datasheet.pdf Other vendors either have announced or will probably be announcing dense packaging too. On 12 October 2011 15:52, Chris Dagdigian wrote: > > First time I'm seriously pondering

Re: [Beowulf] China Wrests Supercomputer Title From U.S.

2010-10-28 Thread Igor Kozin
> http://www.hpcwire.com/blogs/New-China-GPGPU-Super-Outruns-Jaguar-105987389.html I have been wondering what use, if any, Tianhe-1 made of the Radeon HD 4870 X2. I had a card like that and it died three times during its one-year warranty. Needless to add, it perished shortly after the warranty ran out. Perh

[Beowulf] bandwidth to GPU

2010-05-21 Thread Igor Kozin
Hello everyone, I'm quite curious about the bandwidth to GPUs people are getting, especially with NVIDIA C1060 or Fermi on Intel hosts with two 5520 chipsets. Using bandwidthTest from the CUDA SDK and averaging the results over all cores and GPUs (we have an S1070), I'm getting with memory=pageable 3672 MB
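The averaging step described here can be sketched as follows (a trivial illustration; the sample numbers are placeholders, not the actual measurements from this post):

```python
# Average host-to-device bandwidth over all (core, GPU) combinations,
# in MB/s as CUDA's bandwidthTest reports them.

def mean_bandwidth(samples_mb_s):
    """Arithmetic mean of per-run bandwidth figures (MB/s)."""
    return sum(samples_mb_s) / len(samples_mb_s)

# Hypothetical per-GPU results for pageable vs pinned host memory;
# pinned (page-locked) memory usually measures noticeably higher
# because the driver can DMA directly without a staging copy.
pageable = [3672.0, 3650.0, 3691.0, 3668.0]
pinned = [5732.0, 5704.0, 5751.0, 5720.0]
print(f"pageable: {mean_bandwidth(pageable):.0f} MB/s")
print(f"pinned:   {mean_bandwidth(pinned):.0f} MB/s")
```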

Re: [Beowulf] Nvidia FERMI/gt300 GPU

2009-10-02 Thread Igor Kozin
> Not only CUDA and OpenCL, but also DirectX, DirectCompute, C++, and > Fortran. From a programmer's point of view, it could be a major > improvement, and the only thing which still kept people from using > GPUs to run their code. All of the above except C++ can be used on the current hardware. CUDA

[Beowulf] seminars and webcasting on 7th July

2009-07-03 Thread Igor Kozin
Dear All, As you probably already know there will be three extremely interesting seminars held on the 7th July at Daresbury - Jack Dongarra, U of Tennessee "Multicore and Hybrid Computing for Dense Linear Algebra Computations" - Benoit Raoult, CAPS "HMPP: Leverage Computing Power Simply by Directiv

[Beowulf] Re: 10 GbE

2009-02-11 Thread Igor Kozin
An answer to my own question on Arista 7148SX latency, or rather an upper estimate. There is a Mellanox white paper http://www.mellanox.com/pdf/whitepapers/wp_mellanox_en_Arista.pdf which reports the "TCP latency using standard test suite" (netperf?) as 7.36 us (Mellanox ConnectX EN -> Arista 7124S
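The "upper estimate" reasoning can be made explicit: the NIC and TCP-stack contributions cannot be negative, so the end-to-end figure bounds the switch's own latency from above (the 7.36 us number is from the cited white paper; the decomposition is illustrative):

```python
# End-to-end TCP half-round-trip latency through the switch, in
# microseconds, as reported in the Mellanox/Arista white paper.
end_to_end_us = 7.36

# The path is NIC -> switch -> NIC plus the TCP stack on both hosts.
# All of those contributions are >= 0, so subtracting a conservative
# (zero) estimate for them leaves an upper bound on the switch alone.
nic_and_stack_us = 0.0  # unknown, but certainly non-negative
switch_upper_bound_us = end_to_end_us - nic_and_stack_us
print(f"switch latency <= {switch_upper_bound_us:.2f} us")
```

A measured back-to-back (no switch) NIC-to-NIC latency would tighten the bound by replacing the zero estimate.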

[Beowulf] 10 GbE

2009-02-11 Thread Igor Kozin
Hello everyone, we are embarking on an evaluation of 10 GbE for HPC and I was wondering if anyone has already had experience with the Arista 7148SX 48-port switch and/or NetXen cards. General pros and cons would be greatly appreciated, and in particular - Switch latency (btw, the data sheet says x86 insi

Re: [Beowulf] NAMD/CUDA scaling: QDR Infiniband sufficient?

2009-02-09 Thread Igor Kozin
are the slides of this presentation available? 2009/2/9 Dow Hurst DPHURST > Has anyone tested scaling of NAMD/CUDA over QLogic or ConnectX QDR > interconnects for a large number of IB cards and GPUs? I've listened to > John Stone's presentation on VMD and NAMD CUDA acceleration. The consensus

Re: Re: [Beowulf] IBM Sequoia

2009-02-08 Thread Igor Kozin
2009/2/4 Eloi Gaudry > Kilian CAVALOTTI wrote: > >> Hi John, >> >> On Tuesday 03 February 2009 18:25:04 John Hearns wrote: >> >> >>> http://www.theregister.co.uk/2009/02/03/llnl_buys_ibm_supers/ >>> >>> I make this 400 cores per 1U rack unit. How is the counting being done >>> here? >>> >>> >> >>

[Beowulf] Re: Brice Goglin's seminar on OpenMX - 13/01/2008 at STFC Daresbury Lab

2009-01-15 Thread Igor Kozin
The presentation is now available on-line at http://www.cse.scitech.ac.uk/disco/presentations/BriceGoglin.shtml > * * CSE SEMINAR * * > > You are invited to attend a CSED Seminar taking place on Tuesday 13 January > 2009 in CR1, A Block at 2pm. The Seminar will be given by Brice Goglin from > IN

[Beowulf] Brice Goglin's seminar on OpenMX - 13/01/2008 at STFC Daresbury Lab

2009-01-08 Thread Igor Kozin
I do not know how big the UK audience of this list is, but hopefully a few of you may find it very interesting. * * CSE SEMINAR * * You are invited to attend a CSED Seminar taking place on Tuesday 13 January 2009 in CR1, A Block at 2pm. The Seminar will be given by Brice Goglin from INRIA, France.

Re: [Beowulf] Inside Tsubame - the Nvidia GPU supercomputer

2008-12-12 Thread Igor Kozin
23.55 Mflops/W according to Green500 estimates (#488 in their list) 2008/12/12 Vincent Diepeveen > > On Dec 12, 2008, at 8:56 AM, Eugen Leitl wrote: > > >> http://www.goodgearguide.com.au/article/270416/inside_tsubame_- >> _nvidia_gpu_supercomputer?fp=&fpid=&pf=1 >> >> Inside Tsubame - the Nvidi
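The Green500 metric quoted here is just sustained Rmax divided by total power; a sketch of the unit conversion (the figures in the usage line are illustrative, not the actual Tsubame submission):

```python
def mflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Green500-style efficiency: MFLOP/s of sustained Rmax per watt.

    1 TFLOP/s = 1e6 MFLOP/s and 1 kW = 1e3 W, so the ratio simplifies
    to rmax_tflops * 1000 / power_kw.
    """
    return (rmax_tflops * 1e6) / (power_kw * 1e3)

# Illustrative: a 77.5 TFLOP/s machine drawing 3.29 MW comes out
# near the efficiency figure quoted above.
print(f"{mflops_per_watt(77.5, 3290.0):.2f} MFLOP/s per watt")
```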

Re: [Beowulf] Nehalem Xeons

2008-10-14 Thread Igor Kozin
> > Did you really hold the Nehalem Xeon chips in your hands? They probably > require new motherboards with a new chipset. > Absolutely. > It would be nice to hear from you some numbers concerning Harpertown vs > Nehalem performance > Those who know will not be able to tell you because of the NDA. i

Re: [Beowulf] FY;) the Helmer cluster

2008-10-06 Thread Igor Kozin
LOL, the guy surely is very ambitious - 4 PFLOPS for $0.9M http://helmer3.sfe.se/ 2008/10/5 Eugen Leitl <[EMAIL PROTECTED]> > > A Linux cluster built in a Helmer IKEA cabinet: > > http://helmer.sfe.se/ > ___ Beowulf mailing list, Beowulf@beowulf.org T