Re: [Beowulf] Rant on why HPC isn't as easy as I'd like it to be. [EXT]

2021-09-21 Thread Tim Cutts
Yes. Is it cost effective? Yes. Sold! The one thing it did need was knowledgeable cloud architects, but the cloud providers can and do help with that. Tim -- Tim Cutts Head of Scientific Computing Wellcome Sanger Institute On 21 Sep 2021, at 12:24, John Hearns <hear...@gmail.com>

Re: [Beowulf] AMD and AVX512 [EXT]

2021-06-19 Thread Tim Cutts
fundamental driver was to increase phone battery life. Tim -- Tim Cutts Head of Scientific Computing Wellcome Sanger Institute On 19 Jun 2021, at 16:49, Gerald Henriksen <ghenr...@gmail.com> wrote: I suspect that is marketing speak, which roughly translates to not that no one has

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread Tim Cutts
Indeed. We’re actively looking at these as well. GPUs can be used for a lot of this stuff (e.g. the Parabricks product on nVidia GPUs) and they’re an inherent part of some sequencers. The base caller on the Oxford Nanopore system is an ML algorithm, and ONT worked with nVidia on that, I belie

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread Tim Cutts
On 4 Feb 2021, at 10:40, Jonathan Aquilina <jaquil...@eagleeyet.net> wrote: Maybe SETI@home wasn't the right project to mention, just remembered there is another project but not in genomics on that distributed platform called Folding@home. Right, protein dynamics simulations like that

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread Tim Cutts
hought of doing something like SETI@home and those projects to get idle compute power to help churn through the massive amounts of data? Regards, Jonathan From: Tim Cutts <t...@sanger.ac.uk> Sent: 04 February 2021 11:26 To: Jonathan Aquilina m

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread Tim Cutts
On 4 Feb 2021, at 10:14, Jonathan Aquilina via Beowulf <beowulf@beowulf.org> wrote: I am curious though to chunk out such large data is something like hadoop/HBase and the like of those platforms, are those what's being used? It’s a combination of our home-grown sequencing pipeline whi

Re: [Beowulf] Project Heron at the Sanger Institute [EXT]

2021-02-04 Thread Tim Cutts
> On 3 Feb 2021, at 18:23, Jörg Saßmannshausen wrote: > > Hi John, > > interesting stuff and good reading. > > For the IT interests on here: these sequencing machines are chucking out large > amounts of data per day. The project I am involved in can chew out 400 GB or so on raw data

Re: [Beowulf] apline linux [EXT]

2021-02-01 Thread Tim Cutts
source usage for what you want to crunch? Regards, Jonathan From: Tim Cutts <t...@sanger.ac.uk> Sent: 01 February 2021 15:43 To: Jonathan Aquilina <jaquil...@eagleeyet.net> Cc: Beowulf <beowulf@beowulf.org> Subject: Re: [Beowulf] apline linux [EXT] Th

Re: [Beowulf] apline linux [EXT]

2021-02-01 Thread Tim Cutts
The only place I regularly encounter it is in containers, so people who are running singularity might find their users are using alpine in a way they can’t easily see. Presumably the libm point Janne made applies in the container context as well, since the application will presumably be using the
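
A quick way to see what a container image is actually built on — a minimal sketch, assuming Singularity and a hypothetical image file myapp.sif:

    # /etc/os-release inside the image reveals the base distribution
    singularity exec myapp.sif cat /etc/os-release
    # ID=alpine here means musl libc (and its libm) is what the application links against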

Re: [Beowulf] [External] RIP CentOS 8 [EXT]

2020-12-08 Thread Tim Cutts
I don’t know how often we ever actually used Red Hat support for RHEL itself. Very rarely, I suspect. Even before they hiked the price on us, I expect we effectively paid them several thousand dollars per support call. Some of the other products, like RH OpenStack Platform, yes, but not for th

Re: [Beowulf] [External] RIP CentOS 8 [EXT]

2020-12-08 Thread Tim Cutts
ny of these: Debian, Ubuntu, RHEL, CentOS, SUSE, etc. I got the least pain using Debian, while SUSE was the hardest, though RHEL was right behind it. On 12/8/20 4:55 PM, Tim Cutts wrote: We did use Debian at Sanger for several years. The main reason for switching away from it (I’m talking abou

Re: [Beowulf] [External] RIP CentOS 8 [EXT]

2020-12-08 Thread Tim Cutts
On 8 Dec 2020, at 21:52, Ryan Novosielski <novos...@rutgers.edu> wrote: It’s pretty common that if something supports only one distribution, it’s RedHat-based. That’s also true of hardware vendors. True, officially, but often not unofficially. Again, back around 2008 I found it hilario

Re: [Beowulf] [External] RIP CentOS 8 [EXT]

2020-12-08 Thread Tim Cutts
We did use Debian at Sanger for several years. The main reason for switching away from it (I’m talking about 2008 here) was a desire to have a common OS across desktops and servers. Debian’s extremely purist stance on open source device drivers made it a pain on desktops and laptops, because i

Re: [Beowulf] RIP CentOS 8 [EXT]

2020-12-08 Thread Tim Cutts
We’re moving wholesale back to Ubuntu; we didn’t use CentOS anyway very much, but Red Hat increased our licensing costs 10x which put them out of the picture. Tim > On 8 Dec 2020, at 16:37, Ryan Novosielski wrote: > > The first comment on the blog does not equivocate. :-D > > -- > #BlackLives

Re: [Beowulf] [EXTERNAL] Lambda and Alexa [EXT]

2020-11-25 Thread Tim Cutts
I think the 8 second limit is probably arbitrary. Lambda’s normal limit is 5 minutes. I presume Amazon did some UX work, and basically asked “what’s the maximum length of time your average user is willing to wait for an answer before they consider it a bad experience”, and came up with 8 secon
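
For context, that function-level timeout is just a configurable property; a minimal sketch with the AWS CLI, function name hypothetical (the Alexa service still enforces its own response deadline on top):

    # Raise the Lambda timeout to the 5-minute figure mentioned above
    aws lambda update-function-configuration \
        --function-name my-skill-handler \
        --timeout 300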

Re: [Beowulf] [External] Re: Clustering vs Hadoop/spark [EXT]

2020-11-25 Thread Tim Cutts
being called or do you mean the memory is always allocated regardless of the object being in scope and that part of the programme being used or not? Regards, Jonathan From: Beowulf <beowulf-boun...@beowulf.org> On Behalf Of Tim Cutts Sent: 25 November 2020 12:40 To: Prentice Bisbal

Re: [Beowulf] [External] Re: Clustering vs Hadoop/spark [EXT]

2020-11-25 Thread Tim Cutts
Except of course, you do really. Java applications can end up with huge memory leaks because the programmers really need to understand the mechanism by which objects get moved from Eden and Survivor space into Tenured space. Tenured space never decreases, so every object which ends up there is allo
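
The generational behaviour described here can be watched live with the JDK's jstat tool; a minimal sketch, process name hypothetical:

    # Sample GC occupancy every second: S0/S1 = survivor spaces, E = Eden,
    # O = old (tenured) generation, each as a percentage of capacity.
    # A leak shows up as O climbing and never falling back after full GCs.
    jstat -gcutil $(pgrep -f MyLeakyApp) 1000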

Re: [Beowulf] Lambda and Alexa [EXT]

2020-11-25 Thread Tim Cutts
Indeed, my main personal experience with Lambda so far has been in writing an Alexa skill in my spare time. It’s been quite fun, and very instructive in the benefits and pitfalls of lambda. My main take-homes so far: 1. I love the fact that there’s basically no code at all other than that req

Re: [Beowulf] Clustering vs Hadoop/spark [EXT]

2020-11-25 Thread Tim Cutts
On 24 Nov 2020, at 18:31, Alex Chekholko via Beowulf <beowulf@beowulf.org> wrote: If you can run your task on just one computer, you should always do that rather than having to build a cluster of some kind and all the associated headaches. If you take on the cloud message, that of cou

Re: [Beowulf] experience with HPC running on OpenStack [EXT]

2020-07-09 Thread Tim Cutts
I think that’s certainly true. As with all things, it depends on your workloads. The vast majority of our genomics codes are either single threaded, or multi-threaded on a single node. There's relatively little MPI, and we maintain a "parallel" queue on bare metal for precisely that set of wo

Re: [Beowulf] experience with HPC running on OpenStack [EXT]

2020-07-01 Thread Tim Cutts
Here, we deploy some clusters on OpenStack, and some traditionally as bare metal. Our largest cluster is actually a mixture of both, so we can dynamically expand it from the OpenStack service when needed. Our aim eventually is to use OpenStack as a common deployment layer, even for the bare m

Re: [Beowulf] Interactive vs batch, and schedulers [EXT]

2020-01-17 Thread Tim Cutts
Indeed, and you can quite easily get into a “boulders and sand” scheduling problem; if you allow the small interactive jobs (the sand) free access to everything, the scheduler tends to find them easy to schedule, partially fills nodes with them, and then finds it can’t find contiguous resources

Re: [Beowulf] [EXTERNAL] Re: HPE completes Cray acquisition [EXT]

2019-09-27 Thread Tim Cutts
They use that same wordage when advertising Aruba wireless kit. It’s clearly “the way things are branded” in HPE these days. Tim On 27 Sep 2019, at 15:40, Lux, Jim (US 337K) via Beowulf <beowulf@beowulf.org> wrote: “A HPE company” seems sort of bloodless and corporate. -- The We

Re: [Beowulf] Lustre on google cloud [EXT]

2019-07-26 Thread Tim Cutts
I try to avoid the phrase “cloud bursting” now, for precisely this reason. Many of my users have heard the phrase, and think it means they’ll be able to instantly start work in the cloud, just because the local cluster is busy. On the compute side, yes, it’s pretty quick but as you say, gettin

Re: [Beowulf] Swimming in oil..

2019-02-11 Thread Tim Cutts
The Dead Sea is so buoyant it makes swimming rather difficult. Floating on your back is about all that's possible. I learned the hard way not to attempt swimming on your front. Your backside pops out of the water and the resulting rotation shoves your face into it. And that salinity *hurts*. T

Re: [Beowulf] No HTTPS for the mailman interface

2018-12-02 Thread Tim Cutts
I think that's a good idea, although it's only a partial solution. Mailman sends password reminders unencrypted anyway, and presumably therefore doesn't store the passwords as hashes or whatever. Tim Sent from my iPhone > On 3 Dec 2018, at 6:44 am, "jaquil...@eagleeyet.net" wrote: > > Hi

Re: [Beowulf] HPC Workflows

2018-12-02 Thread Tim Cutts
Ho ho. Yes, there is rarely anything completely new. Old ideas get dusted off, polished up, and packaged slightly differently. At the end of the day, a Dockerfile is just a script to build your environment, but it has the advantage now of doing it in a reasonably standard way, rather than wha

Re: [Beowulf] New tools from FB and Uber

2018-10-30 Thread Tim Cutts
I vaguely remember hearing about Btrfs from someone at Oracle, it seems the main developer has moved around a bit since! Tim On 30 Oct 2018, at 17:27, Lachlan Musicman <data...@gmail.com> wrote: I always find it weird when companies I don't like release new toolsets that make me begrud

Re: [Beowulf] OT, X11 editor which works well for very remote systems

2018-06-07 Thread Tim Cutts
That’s only the case for Windows servers, isn’t it? UNIX machines can run arbitrary numbers of VNC servers from user land, if I remember correctly, although that’s not a VNC console of course. For Windows servers I’d use RDP anyway. I have to admit, all of these solutions are ropey to some ext
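
For anyone who hasn't done it, the userland pattern is only a couple of commands (using the common vncserver wrapper shipped with TigerVNC and friends):

    vncserver :1 -geometry 1280x800   # start a private desktop as display :1, no root required
    # connect a viewer to host:5901, and when finished:
    vncserver -kill :1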

Re: [Beowulf] OT, X11 editor which works well for very remote systems?

2018-06-07 Thread Tim Cutts
On 7 Jun 2018, at 03:14, James Cuff <jc...@nextplatform.com> wrote: I miss SGI jot. It had this super strange GL offload to the client that I’ve never seen since. http://rainbow.ldeo.columbia.edu/documentation/sgi-faq/apps/6.html You’re a very bad man, Cuff. Jot, and everything els

Re: [Beowulf] Storage Best Practices

2018-02-19 Thread Tim Cutts
No policy here. People can keep stuff as long as they like. I don’t agree with that lack of policy, but that’s where we are. We did propose a 90 day limit, about 10 years ago. It lasted about, er, 90 days, before faculty started screaming. ☺ Tim On 19/02/2018, 15:31, "Beowulf on behalf of
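
Had the policy survived, enforcement is typically a one-line cron job; a sketch only, assuming a hypothetical /scratch tree and access-time expiry (on noatime mounts, -mtime is the fallback):

    # List files untouched for 90+ days; swap -print for -delete to enforce
    find /scratch -type f -atime +90 -print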

Re: [Beowulf] [upgrade strategy] Intel CPU design bug & security flaw - kernel fix imposes performance penalty

2018-01-07 Thread Tim Cutts
It seems fairly clear to me that any processor which performs speculative execution will be vulnerable to timing attacks of this nature. I was pointed to a very much simplified but very clear explanation of this in a blog post by Eben Upton (of Raspberry Pi fame): https://www.raspberrypi.org/bl

Re: [Beowulf] nVidia revealed as evil

2018-01-03 Thread Tim Cutts
I am henceforth renaming my datacentre the “magical informatics cupboard” Tim On 03/01/2018, 15:58, "Beowulf on behalf of Lawrence Stewart" wrote: https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ Of course you cannot use our less expensive hardware for whatever you wan

Re: [Beowulf] Thoughts on git?

2017-12-19 Thread Tim Cutts
I agree it is quite intimidating, but the basic version control features are pretty simple; if you don’t want to branch/merge, you don’t have to. Neither do you have to do all the git pull/push to another git instance somewhere else. You can do basic version control on an entire directory with j
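
That whole-directory workflow is only a handful of commands; a minimal sketch, directory name hypothetical:

    cd myproject
    git init                                   # create a local repository in .git/
    git add . && git commit -m "initial snapshot"
    # later, with no remote, no branches, no merges:
    git status                                 # see what changed
    git commit -am "checkpoint tracked files"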

Re: [Beowulf] slow jobs when run through queue

2017-12-06 Thread Tim Cutts
Of course, if you charge for your cluster time, that hurts them in the wallet, since they pay for all the allocated unused time. If you don’t charge (which is the case for us) it’s hard to incentivise them not to do this. Shame works, a bit. We publish cluster analytics showing CPU efficiency

Re: [Beowulf] cluster deployment and config management

2017-09-05 Thread Tim Cutts
Sanger made a similar separation with our FAI-based Ubuntu deployments. The FAI part of the installation was kept as minimal as possible, with the task purely of partitioning and formatting the hard disk of the machine, determining the appropriate network card configuration, and unpacking a min

Re: [Beowulf] GPFS and failed metadata NSD

2017-05-25 Thread Tim Cutts
> On 25 May 2017, at 16:40, Prentice Bisbal wrote: > > > On 05/21/2017 09:32 PM, Joe Landman wrote: >> Third is "RAID is not a backup". > > If I had a penny for every time I've had to explain this, including to other > system admins! > > Also, people also don't seem to understand that you ne

Re: [Beowulf] GPFS and failed metadata NSD

2017-05-01 Thread Tim Cutts
Oh ... The birthday paradox as applied to disk failures. That had never occurred to me before, but it should have done! Tim Sent from my iPhone > On 29 Apr 2017, at 3:36 pm, Peter St. John wrote: > > just a friendly reminder that while the probability of a particular > coincidence might be v
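
The arithmetic is easy to sanity-check; a sketch computing the chance that at least two of 20 independent single-day failure events land on the same day of the year:

    awk 'BEGIN { p = 1
                 for (i = 0; i < 20; i++) p *= (365 - i) / 365     # all 20 days distinct
                 printf "P(two failures share a day) = %.2f\n", 1 - p }'   # prints ~0.41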

Re: [Beowulf] Troubleshooting NFS stale file handles

2017-04-20 Thread Tim Cutts
I've seen, in the past, problems with fragmented packets being misinterpreted, resulting in stale NFS symptoms. In that case it was an Intel STL motherboard (we're talking 20 years ago here), which shared a NIC for management as well as the main interface. The fragmented packets got inappropria

Re: [Beowulf] HPC cloud bursting providers?

2017-02-21 Thread Tim Cutts
We don't actively use it, but we have demonstrated we can extend our LSF clusters into AWS. I'm sure it could be done just as easily to other public cloud platforms. It's a more practical prospect for us than for many traditional HPC sites, because most of our workloads are embarrassingly para

Re: [Beowulf] Suggestions to what DFS to use

2017-02-15 Thread Tim Cutts
In my limited knowledge, that's the primary advantage of GPFS: it isn't just a DFS, and fits into a much larger ecosystem of other features like HSM and so on, which is something the other DFS alternatives don't tend to do quite so neatly. Normally I'm wary of proprietary filesystems, ha

Re: [Beowulf] Does anyone here mix CISC and RISC within their clusters.

2016-10-26 Thread Tim Cutts
Years ago our cluster was mixed architecture. At one point it contained four architectures: x86, x86_64, Itanium and Alpha. It worked fine for us, but our workload is largely embarrassingly parallel and we certainly never tried running MPI jobs across architectures. Tim -- The Wellcome Tru

Re: [Beowulf] Generation of strings MPI fashion..

2016-10-09 Thread Tim Cutts
Sent from my iPhone > On 9 Oct 2016, at 11:05 pm, John Hearns wrote: > > This sounds interesting. > > I would question this step though, as it seems intrinsically a bottleneck: >> Removes duplicate lines: >> $ sort filename.txt | uniq > > First off - do you need a sorted list? If not do not p
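
The two alternatives being hinted at, for the record:

    sort -u filename.txt             # one process instead of sort | uniq, output sorted
    awk '!seen[$0]++' filename.txt   # no sort at all: drops duplicates, keeps original order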

Re: [Beowulf] more automatic building

2016-09-28 Thread Tim Cutts
Any number of approaches will work. When I used to do this years ago (I've long since passed on the technical side) I'd PXE boot, partition the hard disk and set up a provisioning network and base OS install using the Debian FAI (Fully Automated Install) system, and then use cfengine to configu

Re: [Beowulf] Underwater data centers -- the future?

2016-09-09 Thread Tim Cutts
1. We already do this for nuclear and other power stations. I remember visiting Fawley, an old oil fired power station in the UK, back in about 1982. Its cooling water was taken from and returned to the Solent. The increased temperature of the water made it ideal for growing oysters, and a co

Re: [Beowulf] Recent HPE support missive

2016-08-24 Thread Tim Cutts
s for HPE software (what ... Ibrix or similar?) Services are typically outsourced managed services. > Peter > > On Tue, Aug 23, 2016 at 6:47 AM, Tim Cutts <t...@sanger.ac.uk> wrote: > > Really not very impressed with HPE's missive yesterday changing the >

[Beowulf] Recent HPE support missive

2016-08-23 Thread Tim Cutts
Really not very impressed with HPE's missive yesterday changing the software support contracts so that now it's going to be impossible to reinstate a software support contract if you let it lapse for more than 12 months. The existing system was expensive but reasonable (you have to pay to reinst

Re: [Beowulf] NFS HPC survey

2016-07-15 Thread Tim Cutts
The appliance/linux server question should be checkboxes rather than radio buttons. We use both! Similarly for your "other networked filesystems" question – people may be using more than one. -- Head of Scientific Computing Wellcome Trust Sanger Institute On 15/07/2016, 01:52, "Beowulf on be

Re: [Beowulf] package management on Debian/Ubuntu (wasn't cluster os)

2016-05-20 Thread Tim Cutts
ich has to work across multiple Linux versions, and therefore sits in our /software NFS filesystem). We do this for our centrally supported R distributions, for example. Regards, Tim From: Jonathan Aquilina <jaquil...@eagleeyet.net> Date: Friday, 20 May 2016 at 11:34 To: Ti

Re: [Beowulf] cluster os

2016-05-20 Thread Tim Cutts
In practice, at Sanger we haven't made very heavy use of the ability of Debian to upgrade from release to release. We use FAI to install the boxes, so frankly it's faster and less hassle to just reinstall them from scratch when the next release comes out. For some complicated bespoke systems,

Re: [Beowulf] cluster os

2016-05-19 Thread Tim Cutts
On 19/05/2016, 20:51, "Beowulf on behalf of Andrew M.A. Cater" wrote: >On Thu, May 19, 2016 at 06:20:37AM +0200, Jonathan Aquilina wrote: >> Good Morning, >> >> I am just wondering what distribution of choice would one use for their >> cluster? Would one go for a source based distro like gento

Re: [Beowulf] urgent: cost of fire suppression?

2016-04-19 Thread Tim Cutts
On 19/04/2016, 18:32, "Beowulf on behalf of Per Jessen" wrote: >William Johnson wrote: > >> Hello, >> >> I can't speak to the cost in dollars, but you may want to define your >> goal in fire suppression. >> Whether you are trying to just save the building or also have hopes >> for data recovery

Re: [Beowulf] Slide on big data

2014-02-19 Thread Tim Cutts
issues notwithstanding). So perhaps whether your questioning of the data is hypothesis driven in the traditional sense is the criterion. Tim -- Dr Tim Cutts Acting Head of Scientific Computing Wellcome Trust Sanger Institute -- The Wellcome Trust Sanger Institute is operated by Genome

Re: [Beowulf] Docker in HPC

2013-11-27 Thread Tim Cutts
er to that, so using this sort of approach opens up a lot of possibilities in the future. Regards, Tim -- Dr Tim Cutts Acting Head of Scientific Computing Wellcome Trust Sanger Institute On 27 Nov 2013, at 12:18, John Hearns wrote: > > > On 27 November 2013 12:01, Peter Clapham

Re: [Beowulf] SC13 wrapup, please post your own

2013-11-26 Thread Tim Cutts
never they claim they can offer great ROI and you ask them to actually do so. Usually their crude spreadsheets end up telling you you're going to spend $millions more using their solution. Regards, Tim -- Dr Tim Cutts Acting Head of Scientific Computing Wellcome Trust Sanger Institute On 26

Re: [Beowulf] SC13 wrapup, please post your own

2013-11-26 Thread Tim Cutts
On 25 Nov 2013, at 23:03, Prentice Bisbal wrote: > 4. I went to a BoF on ROI on HPC investment. All the presentations in > the BoF frustrated me. Not because they were poorly done, but because > they tried to measure the value of a cluster by number of papers > published that used that HPC res

Re: [Beowulf] SC13 wrapup, please post your own

2013-11-24 Thread Tim Cutts
On 24 Nov 2013, at 01:32, Lawrence Stewart wrote: > Water Cooling > > There was a lot of water cooling equipment on the floor. I liked the Staubli > booth for sheer mechanical pron. They make the drip-free connectors. If you went to HP-CAST as well, and got the tour of NREL's new data centre

Re: [Beowulf] BeoBash/SC13

2013-11-11 Thread Tim Cutts
Four of us, including me, from Sanger are going to SC13. Tim -- Dr Tim Cutts Acting Head of Scientific Computing Wellcome Trust Sanger Institute On 10 Nov 2013, at 22:25, Christopher Samuel wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > On 10/11/13 03:15, Joe L

Re: [Beowulf] monitoring...

2013-08-29 Thread Tim Cutts
On 29 Aug 2013, at 20:38, Raphael Verdugo P. wrote: > Hi, > > I need help . Ganglia or Nagios to monitoring activity in cluster?. > Both. They have different if overlapping purposes. Ganglia is very nice for historical load metric graphs. Nagios is rather better at actually alerting

Re: [Beowulf] Definition of HPC

2013-04-19 Thread Tim Cutts
On 18 Apr 2013, at 19:45, Adam DeConinck wrote: > Tying in another recent discussion on the list, "root access" is > actually one of the places I've seen some success using Cloud for HPC. > It costs more, it's virtualized, and you usually can't get > HPC-specialized hardware, so it's obviously

Re: [Beowulf] Definition of HPC

2013-04-15 Thread Tim Cutts
On 15 Apr 2013, at 20:59, Joe Landman wrote: > On 04/15/2013 02:46 PM, James Cuff wrote: > > [...] > >> So here it is, a new name for what we do >> >> H{P,T}(R,A,T)C > > And I thought my regex skills dropped by half when I went all PHB-like :D > > For those unaware: PHB == Pointy Haire

Re: [Beowulf] Configuration management tools/strategy

2013-01-07 Thread Tim Cutts
On 6 Jan 2013, at 18:55, Skylar Thompson wrote: > CFengine probably isn't a bad choice - going with something that's > well-tested and -used is helpful because it's a lot easier to get > recipes for what you need to do. We use cfengine2 and cfengine3 here; still in the middle of migrating from

Re: [Beowulf] how cluster's storage can be flexible/expandable?

2012-11-12 Thread Tim Cutts
On 12 Nov 2012, at 03:50, Duke Nguyen wrote: > On 11/9/12 7:26 PM, Bogdan Costescu wrote: >> On Fri, Nov 9, 2012 at 7:19 AM, Christopher Samuel >> wrote: >>> So JBODs with LVM on top and XFS on top of that could be resized on >>> the fly. You can do the same with ext[34] as well (from memory).
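
A sketch of that on-the-fly growth, with hypothetical names (volume group vg0, logical volume data, XFS mounted at /data):

    pvcreate /dev/sdq               # initialise a newly added JBOD disk
    vgextend vg0 /dev/sdq           # add it to the volume group
    lvextend -L +2T /dev/vg0/data   # grow the logical volume
    xfs_growfs /data                # grow XFS while mounted; ext3/4 would use resize2fs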

Re: [Beowulf] Maker2 genomic software license experience?

2012-11-08 Thread Tim Cutts
On 8 Nov 2012, at 13:52, Skylar Thompson wrote: > I guess if your development time is sufficiently shorter than the > equivalent compiled code, it could make sense. This is true, and a lot of what these guys are writing is pipeline glue joining other bits of software together, for which script

Re: [Beowulf] Maker2 genomic software license experience?

2012-11-08 Thread Tim Cutts
On 8 Nov 2012, at 10:10, Andrew Holway wrote: > > It's all a bit academic now (ahem) as the MPI component is a Perl > program, and Perl isn't supported on BlueGene/Q. :-( > > huh? perl mpi? > > Interpreted language? High performance message passing interface? > > confused. Welcome to the wo

Re: [Beowulf] Degree

2012-10-24 Thread Tim Cutts
On 24 Oct 2012, at 16:09, Peter Clapham wrote: > On 24/10/2012 15:46, Vincent Diepeveen wrote: >> On Oct 24, 2012, at 3:19 PM, Hearns, John wrote: >> >>> . >>> >>> Thing is, I need some kind of degree in this stuff to do the kind of >>> work I really want to do. Especially in Germany, organisa

Re: [Beowulf] Southampton's RPi cluster is cool but too many cables?

2012-09-25 Thread Dr Tim Cutts
On 25 Sep 2012, at 18:01, Jesse Becker wrote: > The .2bit FASTA[1] format specifically compresses the ACGT data into > 2 bits (T:00, C:01, A:10, G:11), plus some header/metada information. > Other formats such as 'VCF' specifically store variants against known > references[2]. The current *hum

Re: [Beowulf] IBM's Watson on Jeopardy tonight

2011-02-16 Thread Tim Cutts
On 16 Feb 2011, at 15:20, Lux, Jim (337C) wrote: > Aside from how brains are "programmed" (a fascinating question) > > One big difference between biological systems and non-bio is the handling of > errors and faults. Biological systems tend to assume that (many) failures > occur and use a str

Re: [Beowulf] first cluster

2010-07-19 Thread Tim Cutts
On 16 Jul 2010, at 6:11 pm, Douglas Guptill wrote: > On Fri, Jul 16, 2010 at 12:51:49PM -0400, Steve Crusan wrote: >> We use a PAM module (pam_torque) to stop this behavior. Basically, if >> your job isn't currently running on a node, you cannot SSH into a node. >> >> >> http://www.rpmfind.

Re: [Beowulf] Re: looking for good distributed shell program

2010-05-12 Thread Tim Cutts
On 11 May 2010, at 11:53 pm, Dave Love wrote: > Tim Cutts writes: > >>> We use Dancer's shell "dsh": >>> >>> http://www.netfort.gr.jp/~dancer/software/dsh.html.en >> >> Second that recommendation - we use that one too. It's pr

Re: [Beowulf] looking for good distributed shell program

2010-05-11 Thread Tim Cutts
On 11 May 2010, at 3:00 pm, Tony Travis wrote: > On 11/05/10 14:46, Joe Landman wrote: >> Prentice Bisbal wrote: >>> That's the 3rd or 4th vote for pdsh. I guess I better take a good look >>> at at. >> >> Allow me to 5th pdsh. We don't install clusters without it. > > Hello, Joe and Prentice.
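
Typical pdsh usage, hostnames hypothetical:

    pdsh -w 'node[01-64]' uptime                 # run in parallel, output prefixed per host
    pdsh -w 'node[01-64]' uname -r | dshbak -c   # dshbak -c folds identical output together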

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-20 Thread Tim Cutts
On 19 Feb 2010, at 11:56 pm, Mark Hahn wrote: however, I'm not actually claiming iSCSI is prevalent. the protocol is relatively heavy-weight, and it's really only providing SAN access, not shared, file-level access, which is ultimately what most want... iSCSI seems fairly common in the virtu

Re: [Beowulf] clustering using xen virtualized machines

2010-01-28 Thread Tim Cutts
On 28 Jan 2010, at 4:23 pm, Gavin Burris wrote: Sorry, I'm not drinking the virtualization/cloud koolaid. I'd love to have everything abstracted and easy to manage, but I find standardizing on an OS or two and keeping things as stock as possible is easier, and cheaper to manage at this poin

Re: [Beowulf] clustering using xen virtualized machines

2010-01-28 Thread Tim Cutts
On 28 Jan 2010, at 3:10 pm, Mark Hahn wrote: I don't buy the argument that the winning case is packaging up a VM with all your software. If you really are unable to build the required software stack for a given cluster and its OS, I think using something you're right, but only for narrow

Re: [Beowulf] clustering using xen virtualized machines

2010-01-26 Thread Tim Cutts
On 26 Jan 2010, at 1:24 pm, Tim Cutts wrote: 2) Raw device maps (where you pass a LUN straight through to a single virtual machine, rather than carving the disk out of a datastore) reduce contention and increase performance somewhat, at the cost of using up device minor numbers on ESX

Re: [Beowulf] clustering using xen virtualized machines

2010-01-26 Thread Tim Cutts
On 26 Jan 2010, at 12:00 pm, Jonathan Aquilina wrote: does anyone have any benchmarks for I/O in a virtualized cluster? I don't have formal benchmarks, but I can tell you what I see on my VMware virtual machines in general: Network I/O is reasonably fast - there's some additional latency,
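
For anyone wanting numbers rather than impressions, the same fio run inside the guest and on the host gives a quick like-for-like comparison; parameters here are illustrative only:

    # 4 KiB random reads against a 1 GiB test file, direct I/O to bypass the page cache
    fio --name=randread --rw=randread --bs=4k --size=1g \
        --direct=1 --numjobs=4 --runtime=60 --time_based --group_reporting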

Re: [Beowulf] Ahoy shipmates

2009-10-13 Thread Tim Cutts
On 12 Oct 2009, at 9:33 pm, Marian Marinov wrote: I don't know about the wave power but the cooling power of the ocean or sea water is pretty good idea to look at. Isn't sea water fairly corrosive? You get severe electrolytic corrosion problems on boats, hence the big lump of zinc on a y

Re: [Beowulf] RAID for home beowulf

2009-10-05 Thread Tim Cutts
On 5 Oct 2009, at 11:02 am, Tomislav Maric wrote: OK, I guess then Ubuntu will suffice for a 12 node Cluster. :) Anyway, I'll try it and see. Thanks! We run Debian on our clusters, so you're definitely not the only person using a Debian-based distro for your cluster. Debian does have a ki

Re: [Beowulf] recommendation on crash cart for a cluster room: full cluster KVM is not an option I suppose?

2009-09-30 Thread Tim Cutts
On 30 Sep 2009, at 2:23 pm, Rahul Nabar wrote: I like the shared socket approach. Building a separate IPMI network seems a lot of extra wiring to me. Admittedly the IPMI switches can be configured to be dirt cheap but it still feels like building a extra tiny road for one car a day when a huge
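
Whichever wiring wins, the day-to-day interface is the same ipmitool invocation; BMC hostname and credentials hypothetical:

    ipmitool -I lanplus -H node01-bmc -U admin -P secret chassis power status
    ipmitool -I lanplus -H node01-bmc -U admin -P secret chassis power cycle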

Re: [Beowulf] Re: What is best OS for GRID or clusters?

2009-09-28 Thread Tim Cutts
On 28 Sep 2009, at 4:28 pm, Stuart Barkley wrote: EROS (Extremely Reliable Operating System) http://www.eros-os.org/eros.html Looks like abandonware to me. There appears to have been no activity for well over 5 years. If I don't see any activity on a status page for over 2 years then

Re: [Beowulf] Virtualization in head node ?

2009-09-18 Thread Tim Cutts
On 18 Sep 2009, at 1:15 pm, Robert G. Brown wrote: On Thu, 17 Sep 2009, Gerry Creager wrote: I was a dyed-in-the-wool vmware user until quite recently, too, but the pain of keeping it running on "current" distros (read: Fedora) finally forced me to look elsewhere. I think you'll be plea

Re: [Beowulf] Virtualization in head node ?

2009-09-16 Thread Tim Cutts
On 15 Sep 2009, at 11:55 pm, Dmitry Zaletnev wrote: When you install CentOS 5.3, you get Xen virtual machine for free, with a nice interface, and in it, modes with internal network and NAT to outside world work simultaneously, which is not the case of Sun xVM VirtualBox. Never used VMWare beca

Re: RS: [Beowulf] Virtualization in head node ?

2009-09-16 Thread Tim Cutts
On 16 Sep 2009, at 8:23 am, Alan Ward wrote: I have been working quite a lot with VBox, mostly for server stuff. I agree it can be quite impressive, and has some nice features (e.g. do not stop a machine, sleep it - and wake up pretty fast). On the other hand, we found that anything that

Re: [Beowulf] Cluster install and admin approach (newbie question)

2009-08-30 Thread Tim Cutts
On 28 Aug 2009, at 11:37 am, madskad...@gmail.com wrote: That issue I see from another point of view: finally I will learn something really new. Yes, I will lose time but I hope that in the end all players will win: me because I got money and know-how and the cluster users because we doubled

Re: [Beowulf] confused about high values of "used" memory under "top" even without running jobs

2009-08-12 Thread Tim Cutts
On 12 Aug 2009, at 8:07 pm, Mark Hahn wrote: also, I often do this: awk '{print $3*$4,$0}' /proc/slabinfo|sort -rn|head to get a quick snapshot of kinds of memory use. That's a little gem! Tim -- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registe
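
A commented restatement of that one-liner, field meanings per the slabinfo 2.x layout:

    # $3 = num_objs, $4 = objsize in bytes, so $3*$4 approximates total bytes per slab cache;
    # the two header lines score 0 and sink to the bottom of the reverse-numeric sort
    awk '{print $3*$4, $0}' /proc/slabinfo | sort -rn | head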

Re: [Beowulf] The True Cost of HPC Cluster Ownership

2009-08-11 Thread Tim Cutts
On 11 Aug 2009, at 3:38 pm, Daniel Pfenniger wrote: Douglas Eadline wrote: All, I posted this on ClusterMonkey the other week. It is actually derived from a white paper I wrote for SiCortex. I'm sure those on this list have some experience/opinions with these issues (and other cluster issues!)

Re: [Beowulf] Small form computers as cluster nodes - any comments about the Shuttle brand ?

2009-08-09 Thread Tim Cutts
If space is a constraint, but up-front cost less so, you might want to consider a small blade chassis; something like an HP c-3000, which can take 8 blades. Especially if all you want is a GigE interconnect, which will fit in the same box. Potentially that will get you 64 cores in 6U, a

Re: [Beowulf] nvidia card id?

2009-07-14 Thread Tim Cutts
On 13 Jul 2009, at 7:49 pm, Mark Hahn wrote: Does anyone know off hand if there is a way to pull the exact card information from an nvidia GPU inside a linux server from linux itself? well, there's lspci. is that what you meant? it's usually a bit fuzzy how to match the pci-level id (vend
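
The usual pairing is the PCI-level view plus the driver's own tool for the exact model, the latter only if the NVIDIA driver is loaded:

    lspci -nn | grep -i nvidia                            # vendor:device IDs, e.g. [10de:...]
    nvidia-smi --query-gpu=name,pci.bus_id --format=csv   # marketing name per GPU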

Re: [Beowulf] Erlang Usage

2009-06-24 Thread Tim Cutts
On 24 Jun 2009, at 8:22 am, Marian Marinov wrote: Hello, I'm currently learning Erlang and I'm curious have any of you guys have ever used Erlang on their clusters? Have anyone experimented in doing any academic work with it? Only indirectly - my only encounter with it is people using cou

Re: [Beowulf] Oracle buys Sun

2009-04-20 Thread Tim Cutts
On 20 Apr 2009, at 12:42 pm, Chris Dagdigian wrote: It's official: http://www.sun.com/third-party/global/oracle/index.jsp So, what's going to happen to Lustre now, I wonder? Tim -- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England

Re: [Beowulf] Re: Rackable / SGI

2009-04-05 Thread Tim Cutts
On 5 Apr 2009, at 11:24 pm, Greg Lindahl wrote: On Sun, Apr 05, 2009 at 10:00:44PM +0100, Tim Cutts wrote: It still is. Our machines run it once a day. I'm curious, do you also run it at boot? The one thing my usual scheme lacks is an easy boot-time "should I rejoin the h

Re: [Beowulf] Rackable / SGI

2009-04-05 Thread Tim Cutts
pport, you know :( Tim Cutts and the Sanger Inst. aren't enough to convince senior management that Debian is workable, even though HP and IBM will both support it. Hehe. I'd be very worried if my reputation *were* considered big enough to swing that sort of decision. :-) But you kn

Re: [Beowulf] Re: Rackable / SGI

2009-04-05 Thread Tim Cutts
On 5 Apr 2009, at 4:00 pm, Jason Riedy wrote: A similar situation exists in the node management space, where existing solutions like CFengine were pretty much ignored by HPC people. Ha! Cfengine was pretty much ignored by *everyone*, including its author for quite some time. Promising (pun

Re: [Beowulf] Re: Rackable / SGI

2009-04-05 Thread Tim Cutts
On 4 Apr 2009, at 10:29 pm, Jason Riedy wrote: And Joe Landman writes: Good performance: -- GlusterFS PVFS2 I don't suppose you've experimented with Ceph or POHMELFS? I just attempted to build Lustre support for experimenting and remembered why I avoid it. And Lustre alread

Re: [Beowulf] Rackable / SGI

2009-04-03 Thread Tim Cutts
On 3 Apr 2009, at 11:11 pm, Mark Hahn wrote: involved with Linux, and open source things such as XFS we would not have the enterprise-level features that we see now. unclear in several ways. for instance, linux has hotplug cpu and memory support, but I really think this is dubious, since t

Re: [Beowulf] Rackable / SGI

2009-04-03 Thread Tim Cutts
On 3 Apr 2009, at 3:17 pm, John Hearns wrote: 2009/4/3 Robert G. Brown : There are similar questions associated with IBM. Sun provides support for some major tools used in Linux these days, notably open office but also SGE. Don't forget MySQL Who knows what the result of the catfight

Re: [Beowulf] SGI and Sun: In Memoriam

2009-04-03 Thread Tim Cutts
On 3 Apr 2009, at 12:14 am, Lux, James P wrote: But at least the assembler is still source code compatible with your code for an 8080. Is it really? Do mov a,m or dad d exist on X86? I always got the impression that there wasn't real compatibility between the 8080 and 8086, just that

Re: [Beowulf] SGI and Sun: In Memoriam

2009-04-03 Thread Tim Cutts
On 2 Apr 2009, at 10:22 pm, Michael Brown wrote: On the other side, there's Sun's official "OpenSolaris" distribution, which is confusingly named the same as the OpenSolaris project, which is somehow related to Solaris 11, and then there's Solaris Express, which doesn't exist any more ...

Re: [Beowulf] Wired article about Go machine

2009-03-26 Thread Tim Cutts
On 26 Mar 2009, at 3:54 pm, Joshua Baker-LePain wrote: Note that Leif mentioned medical equipment with embedded Windows systems. And he's right -- you're not allowed to touch the software build on those without getting the new build approved by the FDA (at least, not if you want to use sai

Re: [Beowulf] Wired article about Go machine

2009-03-26 Thread Tim Cutts
On 26 Mar 2009, at 2:42 pm, Robert G. Brown wrote: Um, I don't believe that this is the case, and I say this as a semi- pro consultant in health care. I don't know about hospital software, but it's certainly the case for some DNA sequencer instruments. Our ABI 3700 capillary sequencers
