Morning
I strongly suggest you get Mellanox to come in and help with the initial
config. Their technical teams are great and they know what they are doing.
We run OpenMPI with UCX on top of a Mellanox Multi-Host Ethernet network.
The setup required a few parameters on each switch and away we went.
A
On 9/29/21 5:51 PM, Jörg Saßmannshausen wrote:
Hi Ellis,
interesting concept. I did not know about the Lustre fscrypt feature, but then I am
not much of an in-detail expert on PFS.
Just to make sure I get the concept right: basically, Lustre provides
projects which are themselves encrypted, similar to the encrypted
containers I mentioned befo
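As a rough illustration of what such an encrypted project can look like in practice:
Lustre's client-side encryption builds on the Linux fscrypt machinery, and encrypted
directories can be managed with the fscrypt userspace tool. This is only a minimal
sketch; the mount point, project path, and protector choice below are assumptions,
not a description of anyone's actual setup.

    # Hypothetical sketch: managing an encrypted Lustre project directory with
    # the fscrypt userspace tool (Lustre client-side encryption builds on fscrypt).
    # Paths and the passphrase protector are assumptions.
    import subprocess

    MOUNT = "/mnt/lustre"               # assumed Lustre client mount point
    PROJECT = MOUNT + "/projects/p001"  # assumed per-project directory

    def run(cmd):
        subprocess.run(cmd, check=True)  # raise if the command fails

    # One-time: enable fscrypt metadata on the filesystem, then encrypt the
    # project directory with a custom passphrase protector (prompts interactively).
    run(["fscrypt", "setup", MOUNT])
    run(["fscrypt", "encrypt", PROJECT, "--source=custom_passphrase"])

    # At "destruction" time: lock the directory and drop its keys from the
    # kernel; without the passphrase the ciphertext on the servers is unreadable.
    run(["fscrypt", "lock", PROJECT])
    run(["fscrypt", "purge", MOUNT, "--force"])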
Dear all,
back at a previous workplace the shredder did come to our site and, like Jim
said: loud, and not much to look at other than a heap of shredded metal and
plastic at the other end.
There is an active volcano around right now:
https://www.spiegel.de/wissenschaft/natur/vulkanausbruch-auf-la-pa
Hello everyone,
I am now facing a similar dilemma about what to choose for a new cluster:
RoCE vs InfiniBand.
My experience with RoCE when I tried it recently was that it was not easy
to set up. It required me to configure QoS for a lossless fabric and PFC for
flow control. On top of that, it re
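For what it's worth, the host side of that lossless-fabric work can be scripted;
the sketch below shows the general shape using the mlnx_qos tool that ships with
Mellanox/NVIDIA OFED, but the interface name and the choice of priority 3 are
assumptions, and the switch-side QoS/PFC configuration still has to match.

    # Rough sketch of per-host RoCE prep: trust DSCP markings and enable PFC on
    # a single priority. Interface name and priority choice are assumptions;
    # flag details can vary between OFED releases.
    import subprocess

    IFACE = "eth1"  # assumed RoCE-capable port on a ConnectX NIC

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["mlnx_qos", "-i", IFACE, "--trust", "dscp"])           # classify traffic by DSCP
    run(["mlnx_qos", "-i", IFACE, "--pfc", "0,0,0,1,0,0,0,0"])  # PFC on priority 3 only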
There are special purpose drive shredders - they'll even come out to your
facility with such a device mounted on a truck.
It's not as exciting as you might think. Throw stuff in, makes horrible noise,
done.
For entertainment, the truck-sized wood chipper used when clearing old orchards
is much
On 9/29/21 11:41 AM, Jörg Saßmannshausen wrote:
If you still need more, don't store the data at all but print it out on paper
and destroy it by means of incineration. :D
I have heard stories from past colleagues of one large US Lab putting
their HDDs through wood chippers with magnets on the c
Apologies in advance for the top-post -- too many interleaved streams
here to sanely bottom-post appropriately.
SED drives, which carry a reasonably small mark-up for both HDDs and SSDs,
provide full-drive or per-band solutions to "wipe" the drive by revving
the key associated with the band or d
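To make the key-revving concrete, one hedged example: on an Opal self-encrypting
drive, reverting the TPer with the open-source sedutil-cli tool destroys the data
encryption key, which amounts to a cryptographic erase of the whole drive. The
device path and password below are placeholders, per-band erases on Enterprise-SSC
drives use different commands, and exact flag spellings vary by sedutil version.

    # Illustrative only: whole-drive cryptographic erase on an Opal SED by
    # reverting the TPer, which destroys the data encryption key. Device and
    # SID password are placeholders; do not run against a drive you care about.
    import subprocess

    DEVICE = "/dev/sdX"          # placeholder device
    SID_PASSWORD = "owner-pass"  # placeholder drive-owner credential

    subprocess.run(["sedutil-cli", "--revertTPer", SID_PASSWORD, DEVICE], check=True)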
Dear all,
interesting discussion and very timely for me as well as we are currently
setting up a new HPC facility, using OpenStack throughout so we can build a
Data Safe Haven with it as well.
The question about data security came up too in various conversations, both
internal and with industri
In this case, we've successfully pushed back with the granting agency (US NIH,
generally, for us) that it's just not feasible to guarantee that the data
are truly gone on a production parallel filesystem. The data are encrypted
at rest (including offsite backups), which has been sufficient for our
I guess the question is, for a parallel filesystem, how do you make sure
you have zeroed out the file without borking the whole filesystem, since
the data are spread over a RAID set and could be spread over multiple hosts.
-Paul Edmon-
On 9/29/2021 10:32 AM, Scott Atchley wrote:
For our users that have
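To illustrate why Paul's question is hard, here is a naive single-file
"zero out, then delete" sketch. Even when it completes, a striped, RAID-backed
parallel filesystem can leave old stripes, parity, journal copies, or remapped
blocks untouched, so this is not a NIST 800-88 purge; the path is a placeholder.

    # Naive overwrite-in-place followed by unlink. On a parallel filesystem this
    # only rewrites the current logical extents; it does not guarantee every
    # physical copy of the old data is gone. Path is a placeholder.
    import os

    def zero_and_unlink(path, chunk=1 << 20):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                n = min(chunk, size - written)
                f.write(b"\x00" * n)      # overwrite with zeros, same length
                written += n
            f.flush()
            os.fsync(f.fileno())          # push the zeros out to the servers
        os.unlink(path)

    zero_and_unlink("/lustre/project/sensitive.dat")  # placeholder path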
Yeah, that's what we were surmising. But paranoia and compliance being
what they are, we were curious what others were doing.
-Paul Edmon-
On 9/29/2021 10:32 AM, Renfro, Michael wrote:
I have to wonder if the intent of the DUA is to keep physical media
from winding up in the wrong hands. If so,
We have one storage system (DDN/GPFS) that is required to be
NIST-compliant, and we bought self-encrypting drives for it. The up-charge
for SED drives has diminished significantly over the past few years so that
might be easier than doing it in software and then having to verify/certify
that the so
I have to wonder if the intent of the DUA is to keep physical media from
winding up in the wrong hands. If so, if the servers hosting the parallel
filesystem (or a normal single file server) are physically secured in a data
center, and the drives are destroyed on decommissioning, that might satis
For our users that have sensitive data, we keep it encrypted at rest and in
motion.
For HDD-based systems, you can perform a secure erase per NIST standards.
For SSD-based systems, the extra writes from the secure erase will
contribute to the wear on the drives and possibly their eventual wear
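For a single, directly attached HDD, the "secure erase per NIST standards" step
often comes down to an ATA Secure Erase; the sketch below shows the usual hdparm
sequence with a placeholder device and a throwaway password, and it leaves out the
frozen-state handling, SAS/NVMe variants, and post-erase verification a real
procedure would need.

    # Sketch of an ATA Secure Erase with hdparm. Device and temporary password
    # are placeholders; never point this at a disk that is in use.
    import subprocess

    DEVICE = "/dev/sdX"    # placeholder
    TMP_PASS = "erase-me"  # throwaway ATA security password

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Setting a user password enables the ATA security feature set, after which
    # the SECURITY ERASE UNIT command can be issued.
    run(["hdparm", "--user-master", "u", "--security-set-pass", TMP_PASS, DEVICE])
    run(["hdparm", "--user-master", "u", "--security-erase", TMP_PASS, DEVICE])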
The former. We are curious how to selectively delete data from a
parallel filesystem. For example, we commonly use Lustre, Ceph, and
Isilon in our environment. That said, if other types allow for easier
destruction of selected data, we would be interested in hearing about it.
-Paul Edmon-
On
Are you asking about selectively deleting data from a parallel file system
(PFS) or destroying drives after removal from the system either due to
failure or system decommissioning?
For the latter, DOE does not allow us to send any non-volatile media
offsite once it has had user data on it. When we
Occasionally we get DUA (Data Use Agreement) requests for sensitive
data that require data destruction (e.g. NIST 800-88). We've been
struggling with how to handle this in an era of distributed filesystems
and disks. We were curious how other people handle requests like this?
What types of f