Yep, this could be considered a form of COTS high-volume data transfer ;-)

From https://aws.amazon.com/snowmobile/faqs/ (the very last item):

"Q: How much does a Snowmobile job cost?

"Snowmobile provides a practical solution to exabyte-scale data migration and is significantly faster and cheaper than any network-based solutions, which can take decades and millions of dollars of investment in networking and logistics. Snowmobile jobs cost $0.005/GB/month based on the amount of provisioned Snowmobile storage capacity and the end to end duration of the job, which starts when a Snowmobile departs an AWS data center for delivery to the time when data ingestion into AWS is complete. Please see AWS Snowmobile pricing or contact AWS Sales for an evaluation."

So it seems a fully loaded Snowmobile, 100 PB at $0.005/GB/month, would cost 
$524,288.00/month!
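
As a sanity check, that figure works out if "PB" is read in binary units 
(100 x 2^20 GiB); with decimal petabytes (100e6 GB) it would be an even 
$500,000/month. A minimal sketch of the arithmetic in Python:

    # Back-of-the-envelope check of the Snowmobile monthly cost.
    # Assumes "100 PB" means binary units, i.e. 100 * 2**20 GiB.
    capacity_gb = 100 * 2**20          # 104,857,600 GiB
    rate = 0.005                       # $/GB/month, per the AWS FAQ
    print(f"${capacity_gb * rate:,.2f}/month")   # -> $524,288.00/month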

Cheers,
Fred.

On 26/07/18 21:49, Lux, Jim (337K) wrote:
So this is the modern equivalent of "nothing beats the bandwidth of a station wagon 
full of mag tapes".
It *is* a clever idea - I'm sure all the big cloud providers have figured out how to do a 
"data center in a shipping container", and that's basically what this is.

I wonder what it costs (yeah, I know I can "Contact Sales to order an AWS 
Snowmobile"... but...)


Jim Lux
(818)354-2075 (office)
(818)395-2714 (cell)

-----Original Message-----
From: Beowulf [mailto:beowulf-boun...@beowulf.org] On Behalf Of Fred Youhanaie
Sent: Tuesday, July 24, 2018 11:21 AM
To: beowulf@beowulf.org
Subject: Re: [Beowulf] Lustre Upgrades

Nah, that ain't large scale ;-) If you want large scale, have a look at 
Snowmobile:

        https://aws.amazon.com/snowmobile/

They drive a 45-foot truck to your data centre, fill it up with your data bits, 
then drive it back to their data centre :-()

Cheers,
Fred

On 24/07/18 19:04, Jonathan Engwall wrote:
Snowball is AWS's very-large-scale data transfer service.


On July 24, 2018, at 8:35 AM, Joe Landman <joe.land...@gmail.com> wrote:



On 07/24/2018 11:06 AM, John Hearns via Beowulf wrote:
Joe, sorry to split the thread here. I like BeeGFS and have set it up.
I have now worked for two companies that have sites around the world,
each site being an independent research unit, while the HPC facilities
are at headquarters.
The sites want to be able to drop files onto local storage and have
them magically appear on HPC storage, and the same with the results
going back the other way.

One company did this well with GPFS and AFM volumes.
For my current company I looked at Gluster, but Gluster geo-replication
is one-way only.
What do you know of the BeeGFS mirroring? Will it work over long
distances? (Note to me - find out yourself, you lazy besom)

This isn't the use case for most/all cluster file systems. This is
where distributed object systems and buckets rule.

Take your file, dump it into an S3-like bucket on one end, and pull it
out of the S3-like bucket on the other. If you don't want to use get/put
operations, then use s3fs/s3ql. You can back this with replicating
erasure-coded (EC) minio stores (they take a few minutes to set up ...
compare that to the others).
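
For illustration, a minimal put/get sketch in Python with boto3 against
an S3-compatible endpoint such as a local minio server (the endpoint,
bucket name, and credentials below are placeholders, not real values):

    # Put a file into an S3-compatible bucket at one site, pull it out
    # at the other. Endpoint and credentials are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.example:9000",  # placeholder minio endpoint
        aws_access_key_id="ACCESS_KEY",            # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # Site A: drop the file into the shared bucket.
    s3.upload_file("results.dat", "shared-bucket", "results.dat")

    # Site B (HQ): pull it back out onto HPC storage.
    s3.download_file("shared-bucket", "results.dat", "/hpc/incoming/results.dat")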

The downside is that minio had a limit of about 16 TiB last I
checked. If you need more, replace minio with another system
(Igneous, Ceph, etc.). Ping me offline if you want to talk more.

[...]

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
