One of us. One of us.
On Sat, 23 Feb 2019 at 15:41, Will Dennis wrote:
> Hi folks,
>
>
>
> I thought I’d give a brief introduction, and see if this list is a good
> fit for my questions that I have about my HPC-“ish” infrastructure...
>
>
>
> I am a ~30yr sysadmin (“jack-of-all-trades” type), co
Hi Jim,
There is a group at JPL doing Kubernetes. It might be interesting to ask
them if you can execute Jobs on their clusters.
Cheers,
Andrew
On 27 July 2018 at 20:47, Lux, Jim (337K) wrote:
> I’ve just started using Jupyter to organize my Pythonic ramblings..
>
>
>
> What would be kind of
On 1 May 2018 at 22:57, Robert Taylor wrote:
> Hi Beowulfers.
> Does anyone have any experience with Bright Cluster Manager?
>
I used to work for ClusterVision, from which Bright Cluster Manager was
born. Although my experience is now quite some years out of date, I would
still recommend it, mainly
I put €10 on the nose for a faulty power supply.
On 10 August 2017 at 19:45, Gus Correa wrote:
> + Leftover processes from previous jobs hogging resources.
> That's relatively common.
> That can trigger swapping, the ultimate performance killer.
> "top" or "htop" on the node should show somethin
Broadcom's technology portfolio comes from a bunch of acquisitions over the
years, so everything tends to be highly siloed. If you can work out which
company the tech came from, you can sometimes track someone down with
some LinkedIn sleuthing.
Of course they may also just be sweating the assets and
> I don't have info for that exact version at my fingertips, but did an
> OpenFOAM 3.0 build last week. It took ~3 hours on a 24-core Xeon Broadwell
> server.
>
I propose we start a book on when it will finish. Who will give me odds on
30 days for the build to complete?
On 22 February 2015 at 19:09, Jeffrey Layton wrote:
> Dell has been doing some really good things around NFS and IB using IPoIB.
>
IPoIB is just vanilla NFS and will work as normal, but with lots of lovely
bandwidth. You still get the TCP overhead, so latency and therefore random
IO performance is
It seems that in environments where you don't care about security, Docker is
a great enabler: scientists can make any kind of mess in a sandbox-type
environment and no one cares, because you're not on a public-facing
network. There are however difficulties in using Docker with MPI so
its pr
This is the problem that I think everyone using Docker now is looking to
> solve. How can you distribute an app in a reasonable manner and remove all
> of the silliness you don't need in the app distribution that the base OS
> can solve.
>
It seems to encourage users to "do whatever they want in
*yawn*
On 19 August 2014 at 18:16, Kilian Cavalotti
wrote:
> Hi all,
>
> On Tue, Aug 19, 2014 at 7:10 AM, Douglas Eadline wrote:
>> I ran across this interesting paper by IBM:
>> An Updated Performance Comparison of Virtual Machines and Linux Containers
>
> It's an interesting paper, but I kin
>
> Regarding ZFS: is that available for Linux now? I lost track a bit here.
>
Yes.
http://zfsonlinux.org/
I would say it's ready for production now. Intel are about to start
supporting it under Lustre in the next couple of months, and they are
typically careful about such things.
Cheers,
Andrew
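For anyone who wants to kick the tyres, a minimal sketch of bringing up a
pool with ZFS on Linux (pool, dataset and device names below are just
placeholders for whatever spare disks you have):

  # create a mirrored pool from two spare disks
  zpool create tank mirror /dev/sdb /dev/sdc
  # a test dataset with lz4 compression, plus a snapshot to roll back to
  zfs create tank/test
  zfs set compression=lz4 tank/test
  zfs snapshot tank/test@clean
  zpool status tank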
Hello,
How would you get IP packets into your Docker containers from a Mellanox
InfiniBand network?
I am assuming EoIB with ConnectX-3 is the answer here?
Ta,
Andrew
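The only crude fallback I can think of, short of EoIB, is IPoIB on the host
plus host networking in Docker, so the container simply sees the host's ib0
address. A rough sketch only; the addresses and image are made-up examples:

  # host side: bring up IPoIB on the ConnectX-3 port
  modprobe ib_ipoib
  ip addr add 10.10.0.5/24 dev ib0
  ip link set ib0 up
  # container side: share the host's network namespace so ib0 is visible
  docker run --rm --net=host busybox ip addr show ib0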
Hello,
I work for a company that is selling a line of Illumos/ZFS/NFS-based
NAS appliances. I am trying to work out whether these kinds of systems are
interesting for HPC. Would you pay a premium for a system with a
nice API and GUI, or would you just download OpenIndiana and do it yourself?
>
> Yes.. not everything has to be nuts and bolts.. Sticky tape and Velcro.
>
In the UK, we call this "Blue Peter Engineering".
Jim Lux
>
>
> -Original Message-
> From: Beowulf [mailto:beowulf-boun...@beowulf.org] On Behalf Of Eugen
> Leitl
> Sent: 2014-Jun-03 8:27 AM
> To: beowulf@beowu
Hi,
This cluster is now a little bit ancient. I have a feeling that, for the
price of upgrading your network to InfiniBand (around $1 for QDR), you
could buy a single dual-socket server that will be more powerful. The PCIe
bus on those systems is PCIe x8 Gen1, which would halve the speed anywa
Important paragraph:
"Some larger players in the HPC arena have begun to provide rich support
for high-performance parallel file systems as a complete alternative to
HDFS. IBM's GPFS file system has a file placement optimization (FPO)
capability that allows GPFS to act as a drop-in replacement fo
software is a very difficult thing. Maybe
Gluster has fewer web hits because there are very few people
complaining about it. Maybe Lustre has 1,480,000 pages of excellent
documentation!
On 30 April 2014 17:52, Prentice Bisbal wrote:
> On 04/30/2014 12:15 PM, Andrew Holway wrote:
>>
>> On 3
On 30 April 2014 15:05, Prentice Bisbal wrote:
>
> On 04/30/2014 08:34 AM, Chris Samuel wrote:
>>
>> So Red Hat now has Glusterfs and Ceph. Interesting..
>>
>> http://www.redhat.com/inktank/
>>
>> cheers,
>> Chris
>
>
> Gluster never seemed to gain the traction of Lustre or GPFS, though.
In the
Hi Jörg,
Typically we need to look at the amount of performance per unit
of power that computers give us in order to get an objective analysis.
Let's assume that all computer cores consume 20W of power and cost
£200. Models from 10 years ago give us 20 GFLOPS. Let's assume that this
performance
> I’m TH and am interested with this
> http://www.beowulf.org/pipermail/beowulf/2005-January/011626.html. I’m
> currently looking at a solution to launch an object detection app on Host,
> with the GUI running on Host and the compute nodes doing all the video
> processing and analytics part. I see
> There is no specific application. This is for a university-wide cluster that
> will be used by many different researchers in many different fields using
> many different applications.
Unless you have a specific application that requires it, I would be
fairly confident in saying that a secondary I
On 30 January 2014 16:33, Prentice Bisbal wrote:
> Beowulfers,
>
> I was talking to a colleague the other day about cluster architecture and
> big data, and this colleague was thinking that it would be good to have two
> separate FDR IB clusters within a single cluster: one for message-passing,
>
Hello,
I am looking at Lustre 2.4 currently and have it working in a test
environment (actually with minimal shouting and grinding of teeth).
Taking a holistic approach: what does ZFS mean for HPC? I am excited
about on-the-fly data compression and snapshotting; however, for most
scientific datas I
It's been awfully quiet around here since Vincent Diepeveen was kicked
On 25 November 2013 20:25, Prentice Bisbal wrote:
> On 11/22/2013 02:41 PM, Ellis H. Wilson III wrote:
>> On 11/22/13 16:15, Joe Landman wrote:
>>> On 11/22/2013 02:00 PM, Ellis H. Wilson III wrote:
I think "no support
http://www.lexology.com/library/detail.aspx?g=4472d242-ff83-430a-8df4-6be5d63422ca
There is a legal mechanism to deal with this which protects both the
copyright holder and the host. Please can we talk about supercomputers
again because this "issue" is _EXTREMELY_ boring.
One of the lurking issue
+1 for Option 1.
On 22 November 2013 20:42, Joe Landman wrote:
> Folks:
>
>We are seeing a return to the posting of multiple full articles
> again. We've asked several times that this not occur. It appears to be
> a strong consensus from many I spoke with at SC13 this year, that there
> is
Hello,
Anyone using ZFS in production? Stories? Challenges? Caveats?
I've been spending a lot of time with ZFS on FreeBSD and have found it
thoroughly awesome.
Thanks,
Andrew
> enough. In no way is that a free
> market, and the anti-competitive mechanism is obvious.
/me starts a compiler company.
> Saltstack and even Python Fabric are great tools for managing large
> numbers of systems. I have not used them with a Beowulf system yet but
> the threading and logging for "push" configurations is the best of
> both worlds. One great aspect of Python Fabric is the local command
> execution so th
Hiho,
We need 3 demo machines to demo our stuff at HostingCon in a few
weeks. Anyone got any idea where we can get them from?
We will need 16 GB RAM, a single CPU and some disks. Server chassis.
ta
Andrew
http://www.hybridcluster.com/
Considering the quality and durability of modern computer components,
anyone using AC chillers to cool their DC could be considered somewhat
moronic.
[When will | is it required for] computer manufacturers and DCs be
forced to comply with similarly stringent emissions regulations applied
to the
There is going to be a paradigm shift or some new kind of disruptive
technology is going to pop up before 'exascale' happens.
Quantum or some shizzle. It will do to clusters what the cluster did
to IBM and Cray.
On 16 May 2013 17:01, Eugen Leitl wrote:
> On Thu, May 16, 2013 at 10:28:12AM -0400,
Perhaps you could inform us of exactly what kind of discourse is acceptable.
On 12 May 2013 18:38, "C. Bergström" wrote:
> Can you people please stop the noise, take it offlist, change the
> subject.. and or add OT in the subject or something..
>
>
> Thanks
> _
On 1 May 2013 04:32, Caio Freitas de Oliveira wrote:
> Hi all,
>
> I'm completely new to all this beowulf thing. I've just now connected my
> 2 PCs using Heartbeat + Pacemaker, but I don't have a clue of how to use
> them as a single computer.
Single computer - maybe google "Single System Image".
Use a real operating system.
On 28 April 2013 09:36, Jörg Saßmannshausen wrote:
> Hi Josh,
>
> interesting. However, I am not using XEN on that machine at all and I don't
> have the XEN kernel installed. Thus, that is not the problem.
>
> All the best from a sunny London
>
> Jörg
>
>
> On Sunday
> The net of all this is that (and I'll bet you if you read all 21 of the
> references, you'll find this).. Disk drive life time is very hard to
> predict.
I dunno about that; the error bars are not that big. Given a big enough
sample size I think you could predict failure rates with some accuracy
Did anyone post this yet? I think this is one of the definitive
works on disk failure.
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//archive/disk_failures.pdf
On 19 April 2013 17:56, Joe Landman wrote:
> On 4/19/2013 11:47 AM, mathog wrote:
>>> My
the hosts and a specialised BIOS
and/or InfiniBand firmware to make it work.
ta
Andrew
On 20 March 2013 19:33, Jonathan Aquilina wrote:
> Yes that’s what I am curious about.
>
> ** **
>
> *From:* Andrew Holway [mailto:andrew.hol...@gmail.com]
> *Sent:* Wednesday, Marc
Do you mean a single system image?
http://en.wikipedia.org/wiki/Single_system_image
On 20 March 2013 19:16, Jonathan Aquilina wrote:
> Combining each servers resources into one massive server.
>
> -Original Message-
> From: Reuti [mailto:re...@staff.uni-marburg.de]
> Sent: Wednesday, Ma
Hello all,
I am giving a talk on Beowulf clustering to a local LUG and was wondering
if you had some interesting themes that I could talk about.
ta for now.
Andrew
> You seem to have no idea what determines prices when you buy in a lot versus
> just 1.
Yes indeed. I am obviously clueless in the matter.
>>> So any SSD solution that's *not* used for latency sensitive workloads, it
>>> needs thousands of dollars worth of SSD's.
>>> In such case plain old harddrive technology that's at buy in price right
>>> now $35 for a 2 TB disk
>>> (if you buy in a lot, that's the actual buy in price for big sh
>> Find me an application that needs big bandwidth and doesn't need massive
>> storage.
Databases. Lots of databases.
Lustre is now implementing ZFS, which I think has the most advanced SSD
caching stuff available.
If you have a Google around for "roadrunner Lustre ZFS" you might find
something juicy.
Ta,
Andrew
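On plain ZFS the SSD caching bit looks roughly like this. A sketch only; the
pool name and the by-id device paths are made up for illustration:

  # read cache (L2ARC) on one SSD, separate intent log (SLOG) on another
  zpool add tank cache /dev/disk/by-id/ata-SSD-CACHE
  zpool add tank log /dev/disk/by-id/ata-SSD-SLOG
  # watch how much traffic the cache and log devices actually take
  zpool iostat -v tank 5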
On 6 Feb 2013 at 21:36, Prentice Bisbal wrote:
> Beowulfers,
>
> I've been reading a lot about
> I would think faster memory would be the only thing that could be done
> about it,
Indeed, but I imagine that the law of diminishing returns is going
to kick in here hard and fast.
As it's a single thread, I doubt that faster memory is going to help you much.
It's going to suck whatever you do.
On 9 Jan 2013 at 17:29, Jörg Saßmannshausen wrote:
> Dear all,
>
> many thanks for the quick reply and all the suggestions.
>
> The code we want to use is that one here:
>
> htt
> So if I am using a single socket motherboard, would that not be faster or does
> a single CPU not cope with that amount of memory?
I am not aware of a single-socket motherboard that can cope with 500GB of
RAM. Two-socket motherboards support about 256GB (128GB per processor)
or so at the moment and q
Dear Listeroons,
I have been doing some testing with KVM and Virtuozzo (container-based
virtualisation) and various storage devices, and have some results I would
like some help analyzing.
I have a nice big ZFS box from Oracle (yes, evil, but Solaris NFS is
amazing). I have 10G and IB connecting t
http://ceph.com/docs/master/rados/
Ceph looks to be a fully distributed object store.
2012/11/28 Jonathan Aquilina
>
>
> On Wed, Nov 28, 2012 at 9:21 AM, Andrew Holway wrote:
>
>> http://ceph.com/
>
>
>
> What sets ceph apart from something like CIFS or
http://ceph.com/
ads of 64MB at a time. If you do not have petabytes of
>> big data i would assume doing reads of 64MB at a time
>> at your laptop isn't gonna make things better :)
>>
>> >
>> > On Tue, Nov 27, 2012 at 9:21 AM, Andrew Holway
>> > wrote:
>> >
2012/11/27 Jonathan Aquilina
> Interesting indeed. Does LVM span across multiple storage servers?
There is Clustered LVM but I don't think this is what you're looking for. CLVM
allows you to have a shared storage target such as an iSCSI box and give
one LV to one box and another LV to another box
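Roughly like this, assuming clvmd and the cluster stack are already running;
the VG, LV and device names are placeholders:

  # both boxes see the same iSCSI LUN, e.g. /dev/sdb
  pvcreate /dev/sdb
  vgcreate --clustered y vg_shared /dev/sdb
  # carve one LV per box out of the shared VG
  lvcreate -L 500G -n lv_box1 vg_shared
  lvcreate -L 500G -n lv_box2 vg_shared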
machines and mac machines and if
> someone wants windows machines?
>
> On Tue, Nov 27, 2012 at 9:21 AM, Andrew Holway wrote:
>
>> not as efficient as gluster I would venture.
>>
>>
>> 2012/11/27 Jonathan Aquilina
>>
>>> Hey guys I was looking at th
Not as efficient as Gluster, I would venture.
2012/11/27 Jonathan Aquilina
> Hey guys I was looking at the hadoop page and it got me wondering. is it
> possible to cluster together storage servers? If so how efficient would a
> cluster of them be?
>
> --
> Jonathan Aquilina
>
> _
Intel never wanted a monopoly.
2012/11/20 mathog
> Should Intel become the sole supplier of x86 chips we can expect
> technological stagnation at ever increasing prices in both the x86
> desktop and laptop markets. At that point ARM will likely become the
> chip du jour, since there is still com
> Again, knowing what you need will help us a lot here.
>
Its primary function will be as an Origin for a content delivery network.
As far as I understand, it will act in a similar way to a DNS root. Changes
will be pushed back to the origin and it will hold the master copy, but it
won't actually do
Hello,
I've been asked to look at how we would provide a PB (+50%/year) of storage
for objects between 0.5 and 10MB per file.
It will need some kind of RESTful interface (I'm only just understanding what
this means, but it seems to me mostly "is there an HTTP server in front of it").
Gluster seems to do th
OR
put a USB thumb drive in each node. It would be somewhat simpler to set
up :)
2012/11/12 Andrew Holway
>
>
>
> 2012/11/12 Vincent Diepeveen
>
>> Problem is not the infiniband NIC's.
>
>
> Yes it is. You have to flash the device firmware in order
2012/11/12 Vincent Diepeveen
> Problem is not the infiniband NIC's.
Yes it is. You have to flash the device firmware in order for the BIOS
to recognize it as a bootable device.
Yes, you're a bit screwed for booting over IB, but booting over 1GE is
perfectly acceptable. 1GE bit rate is not 1000 mil
> It's all a bit academic now (ahem) as the MPI component is a Perl
> program, and Perl isn't supported on BlueGene/Q. :-(
Huh? Perl MPI?
An interpreted language? A high-performance message passing interface?
Confused.
2012/10/25 Sabuj Pattanayek :
> link?
I should have explained, I'm not allowed to publish it.
I was planning on emailing the link to those interested.
>
> Thanks,
> Sabuj
>
> On Thu, Oct 25, 2012 at 9:32 AM, Andrew Holway
> wrote:
>> Hello.
>>
>> W
Hello.
Would anyone like to volunteer to critique an engineer's report on
storage benchmarks that I've just completed?
Thanks,
Andrew
> Well, there was the propeller-top beanie and hazing when I first arrived
> at graduate school, the secret physics handshake, the decoder ring, and
> the wierd robes they made us wear in quantum mechanics "to keep us safe
> from virtual photons", but by in large, no. I mean, except for The
> Ritu
o solve.
Would it be possible for a friendly professor to supervise me in this
kind of circumstance?
If this is possible, what kind of costs would I incur?
Thanks,
Andrew
Linkedin http://www.linkedin.com/in/andrewholway
References - https://th.physik.uni-frankfurt.de/~holway/
Computer Sci
> bitter? sure. to me Canadian HPC is on the verge of extinction,
> partly because of this issue.
Is Canadian HPC a distinct entity from US HPC?
For instance, although chock full of HPC and computational science,
Ireland does not have enough HPC to support its own industry. They
seem to buy from
Using rear-door chillers which were evaporatively cooled, we were
removing 25kW per rack. This was 72 modern Supermicro nodes per rack
at full power.
It was cheap as chips and very effective, but we did just exhaust the
heat into the atmosphere.
2012/9/28 Mark Hahn :
> I have a modest proposal:
2012/9/28 Mark Hahn :
> in the spirit of Friday, here's another, even less realistic idea:
> let's slide 1U nodes into a rack on their sides, and forget the
> silly, fussy, vendor-specific, not-that-cheap rail mechanism entirely.
That sounds almost as good as submerging your servers in oil.
> Not sure about there, yet price of colocation and hosting servers in
> Europe has nothing to do with energy price.
*cough* *splutter*
oh really?
This is probably wildly inaccurate and out of date but might be a good
place to start :)
http://en.wikipedia.org/wiki/Quantum_chemistry_computer_programs
Let the benchmarks begin!!!
2012/9/26 Mark Hahn :
> forwarding by request:
>
> From: Mikhail Kuzminsky
>
> Do somebody know any modern refere
2012/9/24 Justin YUAN SHI :
> I think the Redundant Memory paper was really mis-configured. It uses
> a storage solution, trying to solve a volatile memory problem but
> insisting on eliminating volatility. It looks very much messed up.
http://thebrainhouse.ch/gse/silvio/74.GSE/Silvio's%20Corner%20
> Basically the transition nearly killed the project/community. Lots of
> folks grew tired of this and moved on. We are one such ... I don't have
> time for political battles between two nearly identical projects that
> should merge. This "sectarianism" is deadly to open source projects ...
> fo
> Haha, I doubt it -- probably the opposite in terms of development cost.
> Which is why I question the original statement on the grounds that
> "cost" isn't well defined. Maybe the costs just performance-wise, but
> that's not even clear to me when we consider things at huge scales.
40 years a
> Of course the physical modelers won't bat an eyelash,
> but the common programmer who still tries to figure out
> this multithreading thing will be out to lunch.
Whenever you push a problem from hardware to software you
exponentially increase the cost of solving that problem.
2012/9/21 David N. Lombard :
> Our primary approach today is recovery-base resilience, a.k.a.,
> checkpoint-restart (C/R). I'm not convinced we can continue to rely on that
> at exascale.
- Snapshotting seems to be an ugly and inelegant way of solving the
problem. For me it is especially laughable
> To be exact, the OSI layers 1-4 can defend packet data losses and
> corruptions against transient hardware and network failures. Layers
> 5-7 provides no protection. MPI sits on top of layer 7. And it assumes
> that every transmission must be successful (this is why we have to use
> checkpoint in
BMW make their 'flagship' new Mini in the UK near Oxford, and also own
Rolls-Royce (cars).
2012/9/21 Vincent Diepeveen :
> A NATO bunker doesn't even have enough power to run 0.01% of the
> crunching power of BMW,
> which is of course a lot larger, as far as generic crunching
> hardware, than what
> which of course in combination with getting rid of nuclear reactors
That won't last long.
Hi,
Are you sure that you replicated your hostfile to all of your nodes?
Please can I see the output of your hosts file?
Thanks,
Andrew
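For reference, the sort of thing I'd expect to see replicated on the master
and every slave; the addresses below are just an example, not your real
setup, and the hostfile format shown is Open MPI style:

  # /etc/hosts - identical on m0 and all slaves
  192.168.1.10  m0
  192.168.1.11  s1
  192.168.1.12  s2
  192.168.1.13  s3

  # MPI hostfile
  m0 slots=4
  s1 slots=4
  s2 slots=4
  s3 slots=4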
2012/9/20 Antti Korhonen :
> Hi Vincent
>
> Master works with all slaves.
> M0+S1 works, M0+S2 works, M0+S3 works.
> All nodes work fine as single nodes.
>
>
> I just saw anecdotes and opinions.
Ditto
> Or, he could *PAY* someone to develop it for him. Last I saw, you can
> download the source code to OI, and go to town. There is a full build
> environment, and the stack is self hosting.
Are you making the case that I or my company could get into the
operating system development game?
> So .
> Choose more carefully next time? Just like you have to do a little due
> diligence before deciding if a commercial vendor is a good bet, you
> should also evaluate open source projects to see if they're a good
> bet.
I think at the time there was no other choice for ZFS/NFS. The
Oracle e
>> This is also demonstrably false. Just because cluster vendor A is
>> using a completely open source stack does not mean that you have any
>> less risk then Cluster Vendor B with their proprietary closed source
>> stack.
>
> Risk is a function of your control over the stack against small or large
> With regards to risk perception, I am still blown away at some of the
> conversations I have with prospective customers who, still to this day,
> insist that "larger company == less risk". This is demonstrably false.
>
> A company with open products (real open products), open software stacks,
>
> Ok. Let me be clear, 10M as a file size is not useful as a test. The
> numbers are, for lack of any better way to describe this, meaningless.
I am specifically testing the performance of NFSoRDMA vs NFSoTCP. I do not
want my load to go to disk.
What I'm really curious about is why each proces
>
> Ah, this looks great! You added 50% IOPs with the doubling of procs,
> and I would bet you could squeeze a little more out by going to 24 or 32
> procs.
Children see throughput for 1 random writers= 44703.30 KB/sec
Children see throughput for 2 random writers= 802
> I assume so, but just to be clear you witnessed this behavior even with
> the -I (directio) parameter?
Yes.
for i in 1 2 4 8 16; do /cm/shared/apps/iozone/current/sbin/iozone -I -l $i -u $i -r 16k -s 10M -F file1..file16 ; done > output &
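For context, the two mounts being compared would look roughly like this; the
server name and mount points are placeholders rather than the real setup:

  mount -t nfs -o vers=3,proto=tcp nfsbox:/export /mnt/nfs-tcp
  mount -t nfs -o vers=3,proto=rdma,port=20049 nfsbox:/export /mnt/nfs-rdma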
>
>>> Can anyone tell me what might be the bottleneck
he individual machines to see where
> things are getting lost. One you have the traces, post em, and see if
> people can help.
https://th.physik.uni-frankfurt.de/~holway/strace-trunkd.log
https://th.physik.uni-frankfurt.de/~holway/strace.log
I have chopped out all the data from the strace in
> I've found point-and-click works until you want to change something to
> suit your environment. Then you have to start customizing things, and
> that can get messy.
There is actually quite a powerful shell as well. I don't actually use the GUI
that much, apart from as a quick reference.
Rather than
or more machines from Oracle and the dude is benchmarking
> MySQL...
No, this dude is benchmarking filesystems as he clearly stated in my
original post.
Vincent, I think we've all got something to bring to this conversation,
but I think that from now on, the thing you should bring is silence.
>
> Any general thought out there on Xeon 56xx versus E5 performance wise?
We were comparing the performance of an older AMD with a newer AMD a while
ago and found that the newer AMD's floating-point performance was
significantly reduced.
It turned out that the kernel did not yet support AVX or FMA4
Hi,
I am a bit confused.
I have 4 top-notch dual-socket machines with 128GB RAM each. I also have a
Nexenta box which is my NFS/ZFS server. Everything is connected together
with QDR InfiniBand.
I want to use this setup for MySQL databases, so I am testing for 16K
random and stride performan
> I've done 4 nodes a bunch of times, and that seems a bit too trivial.
> Heck, there's a lot of people who have 4 computers in their office,
> forming a defacto heterogenous cluster.
I have a four-node HP DL380 G8 cluster that I'm using to benchmark storage
devices at the moment. I'm using a com
e during surges. That will have a dramatic impact
> on the crunching power they need and with those
> bandwidths it's 100% HPC what they do back in their analysis department.
>
> Now how they do their calculations, there is a as many solutions to
> that as there are traders, so t
Sorry. It was a general reply. I was being lazy.
2012/9/10 Chris Samuel :
> On Monday 10 September 2012 20:08:53 hol...@th.physik.uni-frankfurt.de
> wrote:
>
>> So please cease debasing these discussions with 'Iceland will never
> have datacenters because you can't use it for HFT". Thou speaks fro
> That's very interesting! Where do you find out information on the banks'
> setups?
> The few times I have interviewed in the City they wouldn't let me see into
> the server rooms.
I just know a bit about RBS's setup as I was interested in working for
them a little while back. I've recently learn
You make about as much sense as a Japanese VCR instruction manual!
2012/9/10 Vincent Diepeveen :
> You are rather naive to believe that public released numbers, other
> than total CPU's sold, will give you ANY data on
> what holds the biggest secrets of society. Secrecy is a higher
> priority here
> In light of our recent discussions on oil and water based cooling, it
> strikes me that the air con setups I see there
> Are quite conventional - chiller farms on the roofs (as far as I know the
> 'pointy hat' at the top of Canary Wharf is
> full of air con gear).
Give it time. We have only jut
> A single tad larger trader typically has a 3000 machines and they're
> all equipped with a highspeed network.
Europe installed over 2 million new servers in 2011, according to Gartner.
The Royal Bank of Scotland, one of the most massive banks on the
planet, operates something less than 10k s
http://www.google.com/about/datacenters/inside/index.html
High Frequency Trading is actually a tiny amount of the worldwide
computer power, far far less than HPC. This kit tends to be located
very close to the exchange anyway, and is generally uninteresting in
terms of green IT.
I recently met w