BeeGFS sounds interesting. Is it possible to say something general about
how it compares to Lustre regarding performance?
/jon
On 02/13/2017 05:54 PM, John Hanks wrote:
We've had pretty good luck with BeeGFS lately, running on vanilla
SuperMicro hardware with ZFS as the underlying filesystem. It works
well for the cheap end of the hardware spectrum, and BeeGFS is free
and pretty amazing. It has held up to abuse under a very mixed and
heavy workload, and we can stream large sequential data into it fast
enough to saturate a QDR IB link, all without any in-depth tuning.
While we don't have redundancy (other than raidz3), BeeGFS can be set
up with redundancy between metadata servers and mirroring between
storage targets. http://www.beegfs.com/content/
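For what it's worth, the mirroring is only a few beegfs-ctl calls. A
rough sketch from memory (flag names are from the v6 docs and the
mount point is made up; check the manual for your version before
copying):

  # pair storage targets, then metadata nodes, into buddy groups
  beegfs-ctl --addmirrorgroup --automatic --nodetype=storage
  beegfs-ctl --addmirrorgroup --automatic --nodetype=meta
  # activate metadata mirroring (needs a metadata service restart)
  beegfs-ctl --mirrormd
  # mirror new files created under a chosen directory
  beegfs-ctl --setpattern --pattern=buddymirror /mnt/beegfs/mirrored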
jbh
On Mon, Feb 13, 2017 at 7:40 PM Alex Chekholko
<alex.chekho...@gmail.com> wrote:
If you have a preference for Free Software, GlusterFS would work,
unless you have many millions of small files. It would also depend
on your available hardware, as there is not a 1-to-1 correspondence
between a typical GPFS setup and a typical GlusterFS setup. But at
least it is free and easy to try out. The mailing list is active,
the software is now mature (I last used GlusterFS a few years ago),
and you can buy support from Red Hat if you like.
Take a look at the RH whitepapers about typical GlusterFS
architecture.
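It really is quick to try: a minimal replicated volume is just a few
commands. A sketch with made-up hostnames and brick paths (assumes
glusterfs-server is installed and running on three nodes):

  # from server1, once the daemons are up
  gluster peer probe server2
  gluster peer probe server3
  gluster volume create gv0 replica 3 \
      server1:/bricks/b0 server2:/bricks/b0 server3:/bricks/b0
  gluster volume start gv0
  # POSIX access from any client via the FUSE mount
  mount -t glusterfs server1:/gv0 /mnt/gv0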
CephFS, on the other hand, is not yet mature enough, IMHO.
On Mon, Feb 13, 2017 at 8:31 AM Justin Y. Shi <s...@temple.edu> wrote:
Maybe you would consider Scality (http://www.scality.com/) for
your growth concerns. If you need speed, DDN is faster for rapid
data ingestion and extreme HPC data needs.
Justin
On Mon, Feb 13, 2017 at 4:32 AM, Tony Brian Albers <t...@kb.dk> wrote:
On 2017-02-13 09:36, Benson Muite wrote:
> Hi,
>
> Do you have any performance requirements?
>
> Benson
>
> On 02/13/2017 09:55 AM, Tony Brian Albers wrote:
>> Hi guys,
>>
>> So, we're running a small (as in a small number of nodes (10), not
>> storage (170 TB)) hadoop cluster here. Right now we're on IBM
>> Spectrum Scale (GPFS), which works fine and has POSIX support. On
>> top of GPFS we have a GPFS transparency connector so that HDFS
>> uses GPFS.
>>
>> Now, if I'd like to replace GPFS with something else, what should
>> I use? It needs to be a fault-tolerant DFS with POSIX support (so
>> that users can move data to and from it with standard tools).
>>
>> I've looked at MooseFS, which seems to be able to do the trick,
>> but are there any others that might do?
>>
>> TIA
>>
>
Well, we're not going to be doing a huge amount of I/O, so
performance requirements are not high. But ingest needs to be really
fast; we're talking tens of terabytes here.
/tony
--
Best regards,
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C,
Denmark.
Tel: +45 2566 2383 / +45 8946 2316
--
‘[A] talent for following the ways of yesterday, is not sufficient to
improve the world of today.’
- King Wu-Ling, ruler of the Zhao state in northern China, 307 BC
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf