Heya guys,
Well, my Beowulf is still building with 6 nodes and a master server containing a 
total of 56 2.10GHz AMD 2373EE cores (dual socket, quad core), and I'm working 
this up to around 12 nodes plus a master, for a total of 218.4GHz of aggregate 
CPU.
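For anyone checking my sums, the core and clock figures work out; here's a quick sanity check (just a minimal Python sketch, nothing cluster-specific):

```python
# Quick sanity check of the core/clock figures quoted above.
cores_per_machine = 2 * 4   # dual socket, quad-core AMD 2373EE
clock_ghz = 2.10

current_cores = (6 + 1) * cores_per_machine    # 6 nodes plus the master
planned_cores = (12 + 1) * cores_per_machine   # 12 nodes plus the master

print(current_cores)                           # 56 cores today
print(round(planned_cores * clock_ghz, 1))     # 218.4 GHz aggregate
```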
I've run into a right issue. I wanted to add some GPU hardware, just as a 
little boost, using an old GTX570 in the spare PCI-E x16 slot within each node; 
sadly this has given me some serious heartburn. I already knew the card itself 
would not even entertain being seated inside a 1U node, but the use of PCI-E 
ribbon extenders might have solved that issue.
Even more sadly, haha, even with the motherboard supporting the PCI-E 3.0 
standard while the GTX570 is happy with PCI-E 2.0 (and they're mostly backwards 
compatible), it would not even get past the BIOS; in fact I reckon it was 
halting even prior to that, no matter how much juice I fed the card or which 
BIOS settings I tried.
So setting all of this aside, I took some old skills and knowledge from my 
Bitcoin mining days, purchased an ASRock H81 Pro board (6 PCI-E slots, see), 
and am steadily collecting GTX570 cards online from CEX stores around the 
country; I'm building a single GPU node instead.
Now I know what you're going to say: why use these old, boring GTX570 cards 
with only 480 CUDA cores? Firstly, each card is only £35 GBP, with a two-year 
warranty to boot. So for £210 I can get myself 2880 CUDA cores, and it will 
give me time to save up to upgrade these to GTX1080 cards as the price falls 
over the coming year.
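The GPU sums work out the same way (again just a quick Python sanity check; the 480 CUDA cores per GTX570 figure is from the spec sheet):

```python
# Sanity check of the GTX570 shopping maths above.
cards = 6
cuda_cores_per_card = 480   # GF110-based GTX570
price_per_card_gbp = 35

print(cards * cuda_cores_per_card)   # 2880 CUDA cores
print(cards * price_per_card_gbp)    # 210 (GBP total)
```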
It will also help me get used to managing a multi-card GPU cluster node. 
Albeit second-hand parts, but you have to start somewhere, don't you? It also 
puts a bit more life into some older, cheap, used cards!
By all means I would love to deploy a 4U node decked out with Xeon Phi cards, 
in fact a complete rack full of them, but the cheapest Knights Landing card I 
can get my hands on is coming in at over £500 GBP, and I reckon these 6 GPU 
cards in total will be much faster and also easier to work with. It's nice 
that the Xeon Phi cards have an inbuilt Linux OS subsystem, but it's a bit of 
a bummer sometimes, and to really eke out the extra horsepower you need some 
serious code tailoring to get you there.
I find that in itself to be the major drawback for folks buying into the Xeon 
Phi family for coprocessing or offloading needs.
I do hope someone comes along with an idea I had many years ago: using SoC 
technology. I even have a 20x20mm ARM big.LITTLE SoC sat on my desk with 64 
inbuilt ALU/crypto/GPU cores, and I think an array of these on a single PCI-E 
card would really be a game changer in the coprocessing market.
(Anyone want to invest in my PCB? {shameless plug})
Anyway, my home lab Beowulf cluster experience and experiments are doing well 
regardless of the hiccups and the time wasted on some areas. It's all going 
nicely indeed :)

*I wonder if everyone would like to post links to the software they use with 
regard to Beowulf and clustering, so that I may catalogue all these links and 
put them in an HTML document on my company server; it might help us all out 
now or in the future :D

> Kind regards,
> Darren Wise Esq, 
> B.Sc, HND, GNVQ, City & Guilds.
> 
> Managing Director (MD)
> Art Director (AD)
> Chief Architect/Analyst (CA/A)
> Chief Technical Officer (CTO)
> 
> www.wisecorp.co.uk
> www.wisecorp.co.uk/babywise
> www.darrenwise.co.uk
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
