Yes. Is it cost effective? Yes. Sold! The one thing it did
need was knowledgeable cloud architects, but the cloud providers can and do
help with that.
Tim
--
Tim Cutts
Head of Scientific Computing
Wellcome Sanger Institute
On 21 Sep 2021, at 12:24, John Hearns
<hear...@gmail.com> wrote:
fundamental driver was to increase
phone battery life.
Tim
--
Tim Cutts
Head of Scientific Computing
Wellcome Sanger Institute
On 19 Jun 2021, at 16:49, Gerald Henriksen
<ghenr...@gmail.com> wrote:
I suspect that is marketing speak, which roughly translates to not
that no one has
Indeed. We’re actively looking at these as well. GPUs can be used for a lot
of this stuff (e.g. the Parabricks product on nVidia GPUs) and they’re an
inherent part of some sequencers. The base caller on the Oxford Nanopore
system is an ML algorithm, and ONT worked with nVidia on that, I believe.
On 4 Feb 2021, at 10:40, Jonathan Aquilina
<jaquil...@eagleeyet.net> wrote:
Maybe SETI@home wasn't the right project to mention, just remembered there is
another project but not in genomics on that distributed platform called
Folding@home.
Right, protein dynamics simulations like that
thought of doing something like SETI@home and those projects to
get idle compute power to help churn through the massive amounts of data?
Regards,
Jonathan
From: Tim Cutts <t...@sanger.ac.uk>
Sent: 04 February 2021 11:26
To: Jonathan Aquilina
On 4 Feb 2021, at 10:14, Jonathan Aquilina via Beowulf
<beowulf@beowulf.org> wrote:
I am curious though: to chunk out such large data, is it something like
Hadoop/HBase and platforms of that kind that are being used?
It’s a combination of our home-grown sequencing pipeline whi
> On 3 Feb 2021, at 18:23, Jörg Saßmannshausen
> wrote:
>
> Hi John,
>
> interesting stuff and good reading.
>
> For the IT interests on here: these sequencing machines are chucking out large
> amounts of data per day. The project I am involved in can chew out 400 GB or
> so of raw data
source usage for what you want to crunch?
Regards,
Jonathan
From: Tim Cutts <t...@sanger.ac.uk>
Sent: 01 February 2021 15:43
To: Jonathan Aquilina <jaquil...@eagleeyet.net>
Cc: Beowulf <beowulf@beowulf.org>
Subject: Re: [Beowulf] apline linux [EXT]
Th
The only place I regularly encounter it is in containers, so people who are
running singularity might find their users are using alpine in a way they can’t
easily see. Presumably the libm point Janne made applies in the container
context as well, since the application will presumably be using the
I don’t know how often we ever actually used Red Hat support for RHEL itself.
Very rarely, I suspect. Even before they hiked the price on us, I expect we
effectively paid them several thousand dollars per support call.
Some of the other products, like RH OpenStack Platform, yes, but not for th
ny of these: Debian, Ubuntu, RHEL, CentOS, SUSE,
etc. I got the least pain using Debian, while SUSE was the hardest, though
RHEL was right behind it.
On 12/8/20 4:55 PM, Tim Cutts wrote:
We did use Debian at Sanger for several years. The main reason for switching
away from it (I’m talking abou
On 8 Dec 2020, at 21:52, Ryan Novosielski
<novos...@rutgers.edu> wrote:
It’s pretty common that if something supports only one distribution, it’s
RedHat-based. That’s also true of hardware vendors.
True officially, but often not unofficially. Again, back around 2008 I found it
hilario
We did use Debian at Sanger for several years. The main reason for switching
away from it (I’m talking about 2008 here) was a desire to have a common OS
across desktops and servers. Debian’s extremely purist stance on open source
device drivers made it a pain on desktops and laptops, because i
We’re moving wholesale back to Ubuntu; we didn’t use CentOS anyway very much,
but Red Hat increased our licensing costs 10x which put them out of the picture.
Tim
> On 8 Dec 2020, at 16:37, Ryan Novosielski wrote:
>
> The first comment on the blog does not equivocate. :-D
>
> --
> #BlackLives
I think the 8 second limit is probably arbitrary. Lambda’s normal limit is 5
minutes. I presume Amazon did some UX work, and basically asked “what’s the
maximum length of time your average user is willing to wait for an answer
before they consider it a bad experience”, and came up with 8 seconds.
being called or do you mean
the memory is always allocated regardless of the object being in scope and that
part of the programme being used or not?
Regards,
Jonathan
From: Beowulf <beowulf-boun...@beowulf.org>
On Behalf Of Tim Cutts
Sent: 25 November 2020 12:40
To: Prentice Bisbal
Except of course, you do really. Java applications can end up with huge memory
leaks because the programmers really need to understand the mechanism by which
objects get moved from Eden and Survivor space into Tenured space.
Tenured space never decreases, so every object which ends up there is allo
Indeed, my main personal experience with Lambda so far has been in writing an
Alexa skill in my spare time. It’s been quite fun, and very instructive in the
benefits and pitfalls of lambda.
My main take-homes so far:
1. I love the fact that there’s basically no code at all other than that
req
On 24 Nov 2020, at 18:31, Alex Chekholko via Beowulf
<beowulf@beowulf.org> wrote:
If you can run your task on just one computer, you should always do that rather
than having to build a cluster of some kind and all the associated headaches.
If you take on the cloud message, that of cou
I think that’s certainly true. As with all things, it depends on your
workloads. The vast majority of our genomics codes are either single threaded,
or multi-threaded on a single node. There's relatively little MPI, and we
maintain a "parallel" queue on bare metal for precisely that set of workloads.
Here, we deploy some clusters on OpenStack, and some traditionally as bare
metal. Our largest cluster is actually a mixture of both, so we can
dynamically expand it from the OpenStack service when needed.
Our aim eventually is to use OpenStack as a common deployment layer, even for
the bare metal
Indeed, and you can quite easily get into a “boulders and sand” scheduling
problem; if you allow the small interactive jobs (the sand) free access to
everything, the scheduler tends to find them easy to schedule, partially fills
nodes with them, and then finds it can’t find contiguous resources
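To make that fragmentation concrete, here is a toy sketch in Python (hypothetical node and job sizes, not any particular scheduler): single-core "sand" jobs are spread greedily across nodes, after which plenty of cores are free in total but no single node has enough left for an eight-core "boulder".

# Toy illustration of the "boulders and sand" problem (hypothetical sizes).
NODES = 4            # number of nodes in the cluster
CORES_PER_NODE = 16
SAND_JOBS = 40       # single-core jobs
BOULDER_CORES = 8    # cores a parallel job needs on ONE node

free = [CORES_PER_NODE] * NODES

# Greedy placement: each single-core job goes to the node with most free cores.
for _ in range(SAND_JOBS):
    emptiest = max(range(NODES), key=lambda n: free[n])
    free[emptiest] -= 1

print("free cores per node:", free)                      # [6, 6, 6, 6]
print("total free cores:   ", sum(free))                 # 24
print("8-core job fits on one node:",
      any(f >= BOULDER_CORES for f in free))              # False

In this toy run 24 cores sit idle yet the eight-core job still cannot start, which is roughly why sites end up keeping separate queues or whole reserved nodes for the bigger jobs, as mentioned above.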
They use that same wordage when advertising Aruba wireless kit. It’s clearly
“the way things are branded” in HPE these days.
Tim
On 27 Sep 2019, at 15:40, Lux, Jim (US 337K) via Beowulf
<beowulf@beowulf.org> wrote:
“A HPE company” seems sort of bloodless and corporate.
I try to avoid the phrase “cloud bursting” now, for precisely this reason.
Many of my users have heard the phrase, and think it means they’ll be able to
instantly start work in the cloud, just because the local cluster is busy. On
the compute side, yes, it’s pretty quick but as you say, gettin
The Dead Sea is so buoyant it makes swimming rather difficult. Floating on your
back is about all that's possible. I learned the hard way not to attempt
swimming on your front. Your backside pops out of the water and the resulting
rotation shoves your face into it.
And that salinity *hurts*.
Tim
I think that's a good idea, although it's only a partial solution. Mailman
sends password reminders unencrypted anyway, and presumably therefore doesn't
store the passwords as hashes or whatever.
Tim
Sent from my iPhone
> On 3 Dec 2018, at 6:44 am, "jaquil...@eagleeyet.net"
> wrote:
>
> Hi
Ho ho. Yes, there is rarely anything completely new. Old ideas get dusted
off, polished up, and packaged slightly differently. At the end of the day, a
Dockerfile is just a script to build your environment, but it has the advantage
now of doing it in a reasonably standard way, rather than wha
I vaguely remember hearing about Btrfs from someone at Oracle, it seems the
main developer has moved around a bit since!
Tim
On 30 Oct 2018, at 17:27, Lachlan Musicman
<data...@gmail.com> wrote:
I always find it weird when companies I don't like release new toolsets that
make me begrud
That’s only the case for Windows servers, isn’t it? UNIX machines can run
arbitrary numbers of VNC servers from user land, if I remember correctly,
although that’s not a VNC console of course.
For Windows servers I’d use RDP anyway.
I have to admit, all of these solutions are ropey to some ext
On 7 Jun 2018, at 03:14, James Cuff
<jc...@nextplatform.com> wrote:
I miss SGI jot. It had this super strange GL offload to the client that I’ve
never seen since.
http://rainbow.ldeo.columbia.edu/documentation/sgi-faq/apps/6.html
You’re a very bad man, Cuff. Jot, and everything els
No policy here. People can keep stuff as long as they like. I don’t agree
with that lack of policy, but that’s where we are.
We did propose a 90 day limit, about 10 years ago. It lasted about, er, 90
days, before faculty started screaming. ☺
Tim
On 19/02/2018, 15:31, "Beowulf on behalf of
It seems fairly clear to me that any processor which performs speculative
execution will be vulnerable to timing attacks of this nature.
I was pointed to a very much simplified but very clear explanation of this in a
blog post by Eben Upton (of Raspberry Pi fame):
https://www.raspberrypi.org/bl
I am henceforth renaming my datacentre the “magical informatics cupboard”
Tim
On 03/01/2018, 15:58, "Beowulf on behalf of Lawrence Stewart"
wrote:
https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
Of course you cannot use our less expensive hardware for whatever you wan
I agree it is quite intimidating, but the basic version control features are
pretty simple; if you don’t want to branch/merge, you don’t have to. Neither do
you have to do all the git pull/push to another git instance somewhere else.
You can do basic version control on an entire directory with j
Of course, if you charge for your cluster time, that hurts them in the wallet,
since they pay for all the allocated unused time. If you don’t charge (which
is the case for us) it’s hard to incentivise them not to do this. Shame works,
a bit. We publish cluster analytics showing CPU efficiency
Sanger made a similar separation with our FAI-based Ubuntu deployments. The
FAI part of the installation was kept as minimal as possible, with the task
purely of partitioning and formatting the hard disk of the machine, determining
the appropriate network card configuration, and unpacking a min
> On 25 May 2017, at 16:40, Prentice Bisbal wrote:
>
>
> On 05/21/2017 09:32 PM, Joe Landman wrote:
>> Third is "RAID is not a backup".
>
> If I had a penny for every time I've had to explain this, including to other
> system admins!
>
> Also, people also don't seem to understand that you ne
Oh ... The birthday paradox as applied to disk failures. That had never
occurred to me before, but it should have done!
Tim
Sent from my iPhone
> On 29 Apr 2017, at 3:36 pm, Peter St. John wrote:
>
> just a friendly reminder that while the probability of a particular
> coincidence might be v
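To put rough numbers on the birthday effect for drives, here is a minimal sketch in Python (illustrative failure counts, not any real fleet): if a number of drive failures are scattered uniformly over a year, what is the chance that at least two land on the same day?

# Birthday paradox applied to disk failures (illustrative numbers only).
def same_day_prob(failures: int, days: int = 365) -> float:
    """Probability that at least two of `failures` independent,
    uniformly distributed failures fall on the same day."""
    all_distinct = 1.0
    for i in range(failures):
        all_distinct *= (days - i) / days
    return 1.0 - all_distinct

for n in (5, 10, 23, 40):
    print(f"{n:>2} failures/year -> P(same-day pair) = {same_day_prob(n):.2f}")
# 23 failures in a year already gives roughly a 50% chance of two drives
# dying on the same day, which is why simultaneous failures surprise people.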
I've seen, in the past, problems with fragmented packets being misinterpreted,
resulting in stale NFS symptoms. In that case it was an Intel STL motherboard
(we're talking 20 years ago here), which shared a NIC for management as well as
the main interface. The fragmented packets got inappropria
We don't actively use it, but we have demonstrated we can extend our LSF
clusters into AWS. I'm sure it could be done just as easily to other public
cloud platforms.
It's a more practical prospect for us than for many traditional HPC sites,
because most of our workloads are embarrassingly para
In my limited knowledge, that's the primary advantage of GPFS, in that it isn't
just a DFS, and fits into a much larger ecosystem of other features like HSM
and so on, which is something the other DFS alternatives don't tend to do quite
so neatly. Normally I'm wary of proprietary filesystems, ha
Years ago our cluster was mixed architecture. At one point it contained four
architectures: x86, x86_64, Itanium and Alpha. It worked fine for us, but our
workload is largely embarrassingly parallel and we certainly never tried
running MPI jobs across architectures.
Tim
Sent from my iPhone
> On 9 Oct 2016, at 11:05 pm, John Hearns wrote:
>
> This sounds interesting.
>
> I would question this step though, as it seems intrinsically a bottleneck:
>> Removes duplicate lines:
>> $ sort filename.txt | uniq
>
> First off - do you need a sorted list? If not do not p
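The point John seems to be making is that a sort is only needed if you actually want sorted output; de-duplication on its own can be done in a single streaming pass. A minimal sketch in Python (the output file name is hypothetical), keeping the first occurrence of each line and preserving input order:

# Remove duplicate lines without sorting: one pass, original order kept.
seen = set()
with open("filename.txt") as src, open("filename.dedup.txt", "w") as dst:
    for line in src:
        if line not in seen:
            seen.add(line)
            dst.write(line)

The trade-off is memory: one set entry per distinct line, versus sort's ability to spill to disk.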
Any number of approaches will work. When I used to do this years ago (I've
long since passed on the technical side) I'd PXE boot, partition the hard disk
and set up a provisioning network and base OS install using the Debian FAI
(Fully Automated Install) system, and then use cfengine to configu
1. We already do this for nuclear and other power stations. I remember
visiting Fawley, an old oil fired power station in the UK, back in about 1982.
Its cooling water was taken from and returned to the Solent. The increased
temperature of the water made it ideal for growing oysters, and a co
s for HPE software (what ... Ibrix or similar?) Services are
typically outsourced managed services.
> Peter
>
> On Tue, Aug 23, 2016 at 6:47 AM, Tim Cutts <t...@sanger.ac.uk> wrote:
>
> Really not very impressed with HPE's missive yesterday changing the
>
Really not very impressed with HPE's missive yesterday changing the software
support contracts so that now it's going to be impossible to reinstate a
software support contract if you let it lapse for more than 12 months.
The existing system was expensive but reasonable (you have to pay to reinst
The appliance/linux server question should be checkboxes rather than radio
buttons. We use both!
Similarly for your "other networked filesystems" question – people may be using
more than one.
--
Head of Scientific Computing
Wellcome Trust Sanger Institute
On 15/07/2016, 01:52, "Beowulf on be
ich has to work across multiple Linux versions, and therefore
sits in our /software NFS filesystem). We do this for our centrally supported
R distributions, for example.
Regards,
Tim
From: Jonathan Aquilina
<jaquil...@eagleeyet.net>
Date: Friday, 20 May 2016 at 11:34
To: Ti
In practice, at Sanger we haven't made very heavy use of the ability of Debian
to upgrade from release to release. We use FAI to install the boxes, so
frankly it's faster and less hassle to just reinstall them from scratch when
the next release comes out.
For some complicated bespoke systems,
On 19/05/2016, 20:51, "Beowulf on behalf of Andrew M.A. Cater"
wrote:
>On Thu, May 19, 2016 at 06:20:37AM +0200, Jonathan Aquilina wrote:
>> Good Morning,
>>
>> I am just wondering what distribution of choice would one use for their
>> cluster? Would one go for a source based distro like gento
On 19/04/2016, 18:32, "Beowulf on behalf of Per Jessen"
wrote:
>William Johnson wrote:
>
>> Hello,
>>
>> I can't speak to the cost in dollars, but you may want to define your
>> goal in fire suppression.
>> Whether you are trying to just save the building or also have hopes
>> for data recovery
issues notwithstanding).
So perhaps whether your questioning of the data is hypothesis driven in the
traditional sense is the criterion.
Tim
--
Dr Tim Cutts
Acting Head of Scientific Computing
Wellcome Trust Sanger Institute
er to that, so using this
sort of approach opens up a lot of possibilities in the future.
Regards,
Tim
--
Dr Tim Cutts
Acting Head of Scientific Computing
Wellcome Trust Sanger Institute
On 27 Nov 2013, at 12:18, John Hearns wrote:
>
>
> On 27 November 2013 12:01, Peter Clapham
never they claim they can offer
great ROI and you ask them to actually do so. Usually their crude spreadsheets
end up telling you you're going to spend $millions more using their solution.
Regards,
Tim
--
Dr Tim Cutts
Acting Head of Scientific Computing
Wellcome Trust Sanger Institute
On 26
On 25 Nov 2013, at 23:03, Prentice Bisbal wrote:
> 4. I went to a BoF on ROI on HPC investment. All the presentations in
> the BoF frustrated me. Not because they were poorly done, but because
> they tried to measure the value of a cluster by number of papers
> published that used that HPC res
On 24 Nov 2013, at 01:32, Lawrence Stewart wrote:
> Water Cooling
>
> There was a lot of water cooling equipment on the floor. I liked the Staubli
> booth for sheer mechanical pron. They make the drip-free connectors.
If you went to HP-CAST as well, and got the tour of NREL's new data centre
Four of us, including me, from Sanger are going to SC13.
Tim
--
Dr Tim Cutts
Acting Head of Scientific Computing
Wellcome Trust Sanger Institute
On 10 Nov 2013, at 22:25, Christopher Samuel wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 10/11/13 03:15, Joe L
On 29 Aug 2013, at 20:38, Raphael Verdugo P. wrote:
> Hi,
>
> I need help. Ganglia or Nagios for monitoring activity in a cluster?
>
Both. They have different if overlapping purposes. Ganglia is very nice for
historical load metric graphs. Nagios is rather better at actually alerting
On 18 Apr 2013, at 19:45, Adam DeConinck wrote:
> Tying in another recent discussion on the list, "root access" is
> actually one of the places I've seen some success using Cloud for HPC.
> It costs more, it's virtualized, and you usually can't get
> HPC-specialized hardware, so it's obviously
On 15 Apr 2013, at 20:59, Joe Landman wrote:
> On 04/15/2013 02:46 PM, James Cuff wrote:
>
> [...]
>
>> So here it is, a new name for what we do
>>
>> H{P,T}(R,A,T)C
>
> And I thought my regex skills dropped by half when I went all PHB-like :D
>
> For those unaware: PHB == Pointy Haired Boss
On 6 Jan 2013, at 18:55, Skylar Thompson wrote:
> CFengine probably isn't a bad choice - going with something that's
> well-tested and -used is helpful because it's a lot easier to get
> recipes for what you need to do.
We use cfengine2 and cfengine3 here; still in the middle of migrating from
On 12 Nov 2012, at 03:50, Duke Nguyen wrote:
> On 11/9/12 7:26 PM, Bogdan Costescu wrote:
>> On Fri, Nov 9, 2012 at 7:19 AM, Christopher Samuel
>> wrote:
>>> So JBODs with LVM on top and XFS on top of that could be resized on
>>> the fly. You can do the same with ext[34] as well (from memory).
On 8 Nov 2012, at 13:52, Skylar Thompson wrote:
> I guess if your development time is sufficiently shorter than the
> equivalent compiled code, it could make sense.
This is true, and a lot of what these guys are writing is pipeline glue joining
other bits of software together, for which script
On 8 Nov 2012, at 10:10, Andrew Holway wrote:
>
> It's all a bit academic now (ahem) as the MPI component is a Perl
> program, and Perl isn't supported on BlueGene/Q. :-(
>
> huh? perl mpi?
>
> Interpreted language? High performance message passing interface?
>
> confused.
Welcome to the wo
On 24 Oct 2012, at 16:09, Peter Clapham wrote:
> On 24/10/2012 15:46, Vincent Diepeveen wrote:
>> On Oct 24, 2012, at 3:19 PM, Hearns, John wrote:
>>
>>> .
>>>
>>> Thing is, I need some kind of degree in this stuff to do the kind of
>>> work I really want to do. Especially in Germany, organisa
On 25 Sep 2012, at 18:01, Jesse Becker wrote:
> The .2bit FASTA[1] format specifically compresses the ACGT data into
> 2 bits (T:00, C:01, A:10, G:11), plus some header/metadata information.
> Other formats such as 'VCF' specifically store variants against known
> references[2]. The current *hum
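As a rough illustration of the packing Jesse describes, here is a toy encoder in Python using the T:00, C:01, A:10, G:11 mapping quoted above (the real .2bit format also carries headers, masks and N-block records, which this ignores):

# Toy 2-bit packer/unpacker for A/C/G/T, four bases per byte.
CODE = {"T": 0b00, "C": 0b01, "A": 0b10, "G": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))   # pad a final partial byte with zeros
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])      # caller supplies the true length

seq = "ACGTTGCA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(len(seq), "bases ->", len(packed), "bytes")   # 8 bases -> 2 bytes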
On 16 Feb 2011, at 15:20, Lux, Jim (337C) wrote:
> Aside from how brains are "programmed" (a fascinating question)
>
> One big difference between biological systems and non-bio is the handling of
> errors and faults. Biological systems tend to assume that (many) failures
> occur and use a str
On 16 Jul 2010, at 6:11 pm, Douglas Guptill wrote:
> On Fri, Jul 16, 2010 at 12:51:49PM -0400, Steve Crusan wrote:
>> We use a PAM module (pam_torque) to stop this behavior. Basically, if
>> your job isn't currently running on a node, you cannot SSH into a node.
>>
>>
>> http://www.rpmfind.
On 11 May 2010, at 11:53 pm, Dave Love wrote:
> Tim Cutts writes:
>
>>> We use Dancer's shell "dsh":
>>>
>>> http://www.netfort.gr.jp/~dancer/software/dsh.html.en
>>
>> Second that recommendation - we use that one too. It's pr
On 11 May 2010, at 3:00 pm, Tony Travis wrote:
> On 11/05/10 14:46, Joe Landman wrote:
>> Prentice Bisbal wrote:
>>> That's the 3rd or 4th vote for pdsh. I guess I better take a good look
>>> at it.
>>
>> Allow me to 5th pdsh. We don't install clusters without it.
>
> Hello, Joe and Prentice.
On 19 Feb 2010, at 11:56 pm, Mark Hahn wrote:
however, I'm not actually claiming iSCSI is prevalent. the protocol
is relatively heavy-weight, and it's really only providing SAN access,
not shared, file-level access, which is ultimately what most want...
iSCSI seems fairly common in the virtu
On 28 Jan 2010, at 4:23 pm, Gavin Burris wrote:
Sorry, I'm not drinking the virtualization/cloud koolaid. I'd love to
have everything abstracted and easy to manage, but I find
standardizing
on an OS or two and keeping things as stock as possible is easier, and
cheaper to manage at this poin
On 28 Jan 2010, at 3:10 pm, Mark Hahn wrote:
I don't buy the argument that the winning case is packaging up a VM
with
all your software. If you really are unable to build the required
software stack for a given cluster and its OS, I think using
something
you're right, but only for narrow
On 26 Jan 2010, at 1:24 pm, Tim Cutts wrote:
2) Raw device maps (where you pass a LUN straight through to a
single virtual machine, rather than carving the disk out of a
datastore) reduce contention and increase performance somewhat, at
the cost of using up device minor numbers on ESX
On 26 Jan 2010, at 12:00 pm, Jonathan Aquilina wrote:
does anyone have any benchmarks for I/O in a virtualized cluster?
I don't have formal benchmarks, but I can tell you what I see on my
VMware virtual machines in general:
Network I/O is reasonably fast - there's some additional latency,
On 12 Oct 2009, at 9:33 pm, Marian Marinov wrote:
I don't know about the wave power but the cooling power of the ocean
or sea
water is pretty good idea to look at.
Isn't sea water fairly corrosive? You get severe electrolytic
corrosion problems on boats, hence the big lump of zinc on a y
On 5 Oct 2009, at 11:02 am, Tomislav Maric wrote:
OK, I guess then Ubuntu will suffice for a 12 node Cluster. :) Anyway,
I'll try it and see. Thanks!
We run Debian on our clusters, so you're definitely not the only
person using a Debian-based distro for your cluster.
Debian does have a ki
On 30 Sep 2009, at 2:23 pm, Rahul Nabar wrote:
I like the shared socket approach. Building a separate IPMI network
seems a lot of extra wiring to me. Admittedly the IPMI switches can be
configured to be dirt cheap but it still feels like building a extra
tiny road for one car a day when a huge
On 28 Sep 2009, at 4:28 pm, Stuart Barkley wrote:
EROS (Extremely Reliable Operating System)
http://www.eros-os.org/eros.html
Looks like abandonware to me. There appears to have been no activity
for well over 5 years. If I don't see any activity on a status page
for over 2 years then
On 18 Sep 2009, at 1:15 pm, Robert G. Brown wrote:
On Thu, 17 Sep 2009, Gerry Creager wrote:
I was a dyed-in-the-wool vmware user until quite recently, too,
but the pain of keeping it running on "current" distros (read:
Fedora) finally forced me to look elsewhere. I think you'll be
plea
On 15 Sep 2009, at 11:55 pm, Dmitry Zaletnev wrote:
When you install CentOS 5.3, you get a Xen virtual machine for free, with
a nice interface, and in it, modes with internal network and NAT to the
outside world work simultaneously, which is not the case with Sun xVM
VirtualBox. Never used VMWare beca
On 16 Sep 2009, at 8:23 am, Alan Ward wrote:
I have been working quite a lot with VBox, mostly for server stuff.
I agree it can be quite impressive, and has some nice features (e.g.
do not stop a machine, sleep it - and wake up pretty fast).
On the other hand, we found that anything that
On 28 Aug 2009, at 11:37 am, madskad...@gmail.com wrote:
That issue I see from another point of view: finally I will learn
something really new. Yes, I will lose time, but I hope that in the
end all players will win: me because I got money and know-how, and the
cluster users because we doubled
On 12 Aug 2009, at 8:07 pm, Mark Hahn wrote:
also, I often do this:
awk '{print $3*$4,$0}' /proc/slabinfo|sort -rn|head
to get a quick snapshot of kinds of memory use.
That's a little gem!
Tim
On 11 Aug 2009, at 3:38 pm, Daniel Pfenniger wrote:
Douglas Eadline wrote:
All,
I posted this on ClusterMonkey the other week.
It is actually derived from a white paper I wrote for
SiCortex. I'm sure those on this list have some
experience/opinions with these issues (and other
cluster issues!)
If space is a constraint, but up-front cost less so, you might want to
consider a small blade chassis; something like an HP c-3000, which can
take 8 blades. Especially if all you want is a GigE interconnect,
which will fit in the same box. Potentially that will get you 64
cores in 6U, a
On 13 Jul 2009, at 7:49 pm, Mark Hahn wrote:
Does anyone know off hand if there is a way to pull the exact card
information from an nvidia GPU inside a linux server from linux
itself?
well, there's lspci. is that what you meant? it's usually a bit
fuzzy how to match the pci-level id (vend
On 24 Jun 2009, at 8:22 am, Marian Marinov wrote:
Hello,
I'm currently learning Erlang and I'm curious whether any of you guys
have ever
used Erlang on your clusters?
Has anyone experimented with doing any academic work with it?
Only indirectly - my only encounter with it is people using cou
On 20 Apr 2009, at 12:42 pm, Chris Dagdigian wrote:
It's official:
http://www.sun.com/third-party/global/oracle/index.jsp
So, what's going to happen to Lustre now, I wonder?
Tim
On 5 Apr 2009, at 11:24 pm, Greg Lindahl wrote:
On Sun, Apr 05, 2009 at 10:00:44PM +0100, Tim Cutts wrote:
It still is. Our machines run it once a day.
I'm curious, do you also run it at boot? The one thing my usual scheme
lacks is an easy boot-time "should I rejoin the h
pport, you know :(
Tim Cutts and the Sanger Inst. aren't enough to convince senior
management
that Debian is workable, even though HP and IBM will both support it.
Hehe. I'd be very worried if my reputation *were* considered big
enough to swing that sort of decision. :-)
But you kn
On 5 Apr 2009, at 4:00 pm, Jason Riedy wrote:
A similar situation exists in the node management space, where
existing solutions like CFengine were pretty much ignored by HPC
people.
Ha! Cfengine was pretty much ignored by *everyone*, including
its author for quite some time. Promising (pun
On 4 Apr 2009, at 10:29 pm, Jason Riedy wrote:
And Joe Landman writes:
Good performance:
--
GlusterFS
PVFS2
I don't suppose you've experimented with Ceph or POHMELFS? I just
attempted to build Lustre support for experimenting and remembered
why I avoid it.
And Lustre alread
On 3 Apr 2009, at 11:11 pm, Mark Hahn wrote:
involved with Linux, and open source things such as XFS we would not
have the enterprise-level features that we see now.
unclear in several ways. for instance, linux has hotplug cpu
and memory support, but I really think this is dubious, since
t
On 3 Apr 2009, at 3:17 pm, John Hearns wrote:
2009/4/3 Robert G. Brown :
There are similar questions associated with IBM. Sun provides
support
for some major tools used in Linux these days, notably open office
but
also SGE.
Don't forget MySQL
Who knows what the result of the catfight
On 3 Apr 2009, at 12:14 am, Lux, James P wrote:
But at least the assembler is still source code compatible with your
code for an 8080.
Is it really? Do
mov a,m
or
dad d
exist on X86? I always got the impression that there wasn't real
compatibility between the 8080 and 8086, just that
On 2 Apr 2009, at 10:22 pm, Michael Brown wrote:
On the other side, there's Sun's official "OpenSolaris"
distribution, which is confusingly named the same as the OpenSolaris
project, which is somehow related to Solaris 11, and then there's
Solaris Express, which doesn't exist any more ...
On 26 Mar 2009, at 3:54 pm, Joshua Baker-LePain wrote:
Note that Leif mentioned medical equipment with embedded Windows
systems. And he's right -- you're not allowed to touch the software
build on those without getting the new build approved by the FDA (at
least, not if you want to use sai
On 26 Mar 2009, at 2:42 pm, Robert G. Brown wrote:
Um, I don't believe that this is the case, and I say this as a semi-
pro
consultant in health care.
I don't know about hospital software, but it's certainly the case for
some DNA sequencer instruments. Our ABI 3700 capillary sequencers