I am curious though: to chunk out such large data sets, are platforms like
Hadoop/HBase and the like what is actually being used?
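
For illustration, a minimal sketch of the kind of chunking I mean: splitting a
large raw-data file into fixed-size blocks so workers can process them in
parallel, roughly what HDFS does with its 128 MiB default block size. The file
names and chunk size here are assumptions, not anything from the Sanger setup:

    # Hypothetical sketch: split one large raw-data file into
    # fixed-size chunks for distributed processing (HDFS-style).
    CHUNK = 128 * 1024 * 1024  # 128 MiB, the classic HDFS block size

    def chunks(path, size=CHUNK):
        """Yield successive fixed-size byte blocks from the file."""
        with open(path, "rb") as f:
            while True:
                block = f.read(size)
                if not block:
                    break
                yield block

    # Write each block out as its own chunk file (names are made up).
    for i, block in enumerate(chunks("run_2021-02-03.raw")):
        with open(f"chunk_{i:05d}.bin", "wb") as out:
            out.write(block)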

Regards,
Jonathan
________________________________
From: Beowulf <beowulf-boun...@beowulf.org> on behalf of Jörg Saßmannshausen 
<sassy-w...@sassy.formativ.net>
Sent: 03 February 2021 19:23
To: beowulf@beowulf.org <beowulf@beowulf.org>
Subject: Re: [Beowulf] Project Heron at the Sanger Institute

Hi John,

interesting stuff and good reading.

For the IT interests on here: these sequencing machines are churning out large
amounts of data per day. The project I am involved in can produce 400 GB or so
of raw data per day, and that is a small machine. The raw data then needs to be
processed before you can actually analyse it, so there is quite some data
movement involved here.
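
As a back-of-envelope figure, 400 GB/day works out to roughly 4.6 MB/s if it
were spread evenly over the day; in practice the data tends to land in bursts
when a run finishes, so the peak load on storage and network is considerably
higher. A quick check of the sustained rate (only the 400 GB figure comes from
above, the rest is arithmetic):

    # Back-of-envelope: 400 GB/day of raw data as a sustained rate.
    gb_per_day = 400
    bytes_per_day = gb_per_day * 1e9
    seconds_per_day = 24 * 60 * 60          # 86400 s
    rate_mb_s = bytes_per_day / seconds_per_day / 1e6
    print(f"{rate_mb_s:.1f} MB/s sustained")  # ~4.6 MB/s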

All the best

Jörg

Am Mittwoch, 3. Februar 2021, 14:06:36 GMT schrieb John Hearns:
> https://edition.cnn.com/2021/02/03/europe/tracing-uk-variant-origins-gbr-int
> l/index.html
>
> Dressed in white lab coats and surgical masks, staff here scurry from
> machine to machine -- robots and giant computers that are so heavy, they're
> placed on solid steel plates to support their weight.
> Heavy metal!



_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf