Another one of us! I got a rush just then when I saw the figure of 5-30 servers per collection and then looked at my home-built rig.

https://wisecorp.co.uk/images/1.jpg

https://wisecorp.co.uk/images/2.jpg

https://wisecorp.co.uk/images/3.jpg

Yes, you belong here! Sorry, I'm rammed with work at the moment, but you are very welcome. Over the period of my membership I've been helped many times, and I've even been offered an open invite to visit a proper professional HPC data centre, an offer I hope to take up soon if it still stands.
You'll meet some very knowledgeable folks here. Welcome any time!

Kind regards,

Darren Wise
wisecorp.co.uk

P.S.: If you spot any drink or drugs in the images, I never posted them... right?? (But I do want them back.) I'm only kidding! Honestly!

On 25/02/2019 14:38, Andrew Holway wrote:
One of us. One of us.

On Sat, 23 Feb 2019 at 15:41, Will Dennis <wden...@nec-labs.com> wrote:

    Hi folks,

    I thought I’d give a brief introduction, and see if this list is a
    good fit for the questions I have about my HPC-“ish”
    infrastructure...

    I am a ~30-year sysadmin (the “jack-of-all-trades” type), completely
    self-taught (my B.A. is in English; that’s why I’m a sysadmin :-P),
    and I have ended up working at an industrial research lab for a
    large multinational IT company (http://www.nec-labs.com). Our lab
    has many research groups (as detailed on the aforementioned
    website), and a few of them are now using “HPC” technologies like
    Slurm; I’ve become the lead admin for these groups. Having no prior
    background in this realm, I’m learning as fast as I can go :)

    Our “clusters” are collections of 5-30 servers, each collection
    bought over a number of years and therefore heterogeneous in
    hardware. All nodes run a locally-installed OS (i.e. not the
    traditional head node with PXE-booted diskless minions), which I
    keep as carefully controlled as I can via standardized OS installs
    from Cobbler templates, followed by configuration management (we
    use Ansible.) Networking is basic 10GbE between nodes (we do have
    InfiniBand available on one cluster, but it has fallen into disuse
    since the project that required it ended.) Storage is one or more
    traditional NFS servers (some using ZFS, some not.) Within the
    past few years we have adopted the Slurm workload manager as the
    job-scheduling system on top of these collections, and we are now
    up to three different Slurm clusters, with, I believe, a fourth on
    the way.
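
    (For a concrete sense of scale, a slurm.conf for one of these
    collections looks roughly like the sketch below; the node names,
    core counts, and memory figures here are made up for illustration
    rather than copied from our actual config.)

        # hypothetical example of one heterogeneous collection
        ClusterName=group1
        SlurmctldHost=headnode01
        # two hardware generations sharing one partition
        NodeName=node[01-10] CPUs=16 RealMemory=64000 State=UNKNOWN
        NodeName=node[11-15] CPUs=32 RealMemory=128000 State=UNKNOWN
        PartitionName=batch Nodes=node[01-15] Default=YES MaxTime=INFINITE State=UP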

    My first question for this list is basically “do I belong here?” I
    feel there are a lot of HPC concepts it would be good for me to
    learn so that I can improve the various research groups’ computing
    environments, but I’m not sure whether this list is aimed at much
    larger “true HPC” environments or would be a good fit for an “HPC
    n00b” like me...

    Thanks for reading, and let me know your opinions :)

    Best,

    Will



_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
