On 07/25/2018 04:36 PM, Prentice Bisbal wrote:

Paging Dr. Joe Landman, paging Dr. Landman...


My response was

"I'd seen/helped build/benchmarked some very nice/fast CephFS based storage systems in $dayjob-1.  While it is a neat system, if you are focused on availability, scalability, and performance, its pretty hard to beat BeeGFS.  We'd ($dayjob-1) deployed several very large/fast file systems with it on our spinning rust, SSD, and NVMe units."

at the bottom of the post.

Yes, BeeGFS compares very favorably to Lustre along the performance, management, and resiliency dimensions.  Distributed replicated metadata and data are possible, atop ZFS, XFS, etc.  We sustained > 40 GB/s in a single rack of spinning disk in 2014 at a customer site using it, with no SSD/cache involved, and 56 Gb InfiniBand throughout.  The customer wanted to see us sustain 46+ GB/s writes, and we did.
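As a rough sanity check on that number (round figures of my own here, not the exact customer bill of materials): at roughly 150 MB/s sustained per nearline drive and roughly 6 GB/s of usable payload per 56 Gb FDR link, the raw bandwidth in one dense rack works out:

# Back-of-envelope check of the ~40 GB/s per-rack figure above.
# Per-drive and per-link rates are assumed round numbers, not the real config.
TARGET_GBPS = 40.0      # sustained write target (GB/s)
DRIVE_MBPS  = 150.0     # assumed sustained sequential rate per 7.2k drive (MB/s)
LINK_GBPS   = 6.0       # assumed usable payload per 56 Gb FDR IB link (GB/s)

drives = TARGET_GBPS * 1000 / DRIVE_MBPS    # ~267 drives
links  = TARGET_GBPS / LINK_GBPS            # ~7 server links

print(f"~{drives:.0f} spinning drives, ~{links:.0f} FDR links minimum")

A few hundred nearline drives plus a handful of FDR-connected storage servers fit comfortably in one rack, so the arithmetic isn't exotic; the hard part is a file system efficient enough to actually deliver it end to end.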

These are some of our other results with it:

https://scalability.org/2014/05/massive-unapologetic-firepower-2tb-write-in-73-seconds/

https://scalability.org/2014/10/massive-unapologetic-firepower-part-2-the-dashboard/
(That was my first effort with Grafana; look at the writes ... the vertical scale is in 10k MB/s, i.e. 10 GB/s, increments.)

W.r.t. BeeGFS: it's very easy to install, and you can set it up trivially on extra hardware to see it in action.  It won't be as fast as my old stuff, but that's the price people pay for not buying the good stuff when it was available.
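If you want to kick the tires, a single spare node with a couple of scratch directories is enough.  Something like the sketch below covers it -- setup-script paths, flags, and service names are from memory of the quick start guide, so verify them against the docs for the BeeGFS version you actually install:

#!/usr/bin/env python3
# Minimal single-node BeeGFS tryout -- a sketch, not a supported installer.
# Assumes the beegfs-mgmtd/meta/storage/helperd/client packages are already
# installed; paths and flags should be checked against the current docs.
import subprocess

MGMT_HOST = "localhost"   # everything on one box, just to see it work

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Initialize each service's data directory.
run(["/opt/beegfs/sbin/beegfs-setup-mgmtd", "-p", "/data/beegfs/mgmtd"])
run(["/opt/beegfs/sbin/beegfs-setup-meta", "-p", "/data/beegfs/meta",
     "-s", "1", "-m", MGMT_HOST])
run(["/opt/beegfs/sbin/beegfs-setup-storage", "-p", "/data/beegfs/storage",
     "-s", "1", "-i", "101", "-m", MGMT_HOST])
run(["/opt/beegfs/sbin/beegfs-setup-client", "-m", MGMT_HOST])

# Bring the daemons up; the client mounts at /mnt/beegfs by default.
for svc in ("beegfs-mgmtd", "beegfs-meta", "beegfs-storage",
            "beegfs-helperd", "beegfs-client"):
    run(["systemctl", "start", svc])

Once the client mount shows up, point a couple of dd or IOR streams at it and you'll get a feel for it, even on modest hardware.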

Prentice
On 07/24/2018 10:19 PM, James Burton wrote:
Does anyone have any experience with how BeeGFS compares to Lustre? We're looking at both of those for our next generation HPC storage system.

Is CephFS a valid option for HPC now? Last time I played with CephFS it wasn't ready for prime time, but that was a few years ago.

On Tue, Jul 24, 2018 at 10:58 AM, Joe Landman <joe.land...@gmail.com> wrote:



    On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:

        Forgive me for saying this, but the philosophy for
        software-defined storage such as Ceph and Gluster is that
        forklift-style upgrades should not be necessary.
        When a storage server is to be retired, the data is copied
        onto the new server, then the old one is taken out of service.
        Well, copied is not the correct word, as there are
        erasure-coded copies of the data. Rebalanced is probably a
        better word.


    This ^^

    I'd seen/helped build/benchmarked some very nice/fast CephFS-based
    storage systems in $dayjob-1.  While it is a neat system, if you are
    focused on availability, scalability, and performance, it's pretty
    hard to beat BeeGFS.  We'd ($dayjob-1) deployed several very
    large/fast file systems with it on our spinning rust, SSD, and NVMe
    units.


-- Joe Landman
    e: joe.land...@gmail.com
    t: @hpcjoe
    w: https://scalability.org
    g: https://github.com/joelandman
    l: https://www.linkedin.com/in/joelandman






--
James Burton
OS and Storage Architect
Advanced Computing Infrastructure
Clemson University Computing and Information Technology
340 Computer Court
Anderson, SC 29625
(864) 656-9047






--
Joe Landman
e: joe.land...@gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
