On Feb 13, 2013, at 4:21 PM, Prentice Bisbal wrote:
> On 02/12/2013 09:31 PM, Ellis H. Wilson III wrote:
>> On 02/12/2013 05:41 PM, Prentice Bisbal wrote:
>>> On 02/12/2013 12:28 AM, Ellis H. Wilson III wrote:
Yes, this diverges from the OP's question, but few took that seriously
anyhow."
On 02/12/2013 09:31 PM, Ellis H. Wilson III wrote:
> On 02/12/2013 05:41 PM, Prentice Bisbal wrote:
>> On 02/12/2013 12:28 AM, Ellis H. Wilson III wrote:
>>> Yes, this diverges from the OP's question, but few took that seriously
>>> anyhow."
>>>
>> Hey! What the hell is that supposed to mean?
> Hah! Sorry, reading this a second time I understand how
On 02/12/2013 05:41 PM, Prentice Bisbal wrote:
>
> On 02/12/2013 12:28 AM, Ellis H. Wilson III wrote:
>>
>> Yes, this diverges from the OP's question, but few took that seriously
>> anyhow."
>>
> Hey! What the hell is that supposed to mean?
Hah! Sorry, reading this a second time I understand how
On 02/12/2013 12:28 AM, Ellis H. Wilson III wrote:
>
> Yes, this diverges from the OP's question, but few took that seriously
> anyhow."
>
Hey! What the hell is that supposed to mean?
--
Prentice (aka 'The Op')
On Sun, Feb 10, 2013 at 07:19:44PM +0000, Andrew Holway wrote:
> >> Find me an application that needs big bandwidth and doesn't need massive
> >> storage.
>
> Databases. Lots of databases.
blekko's search engine. We own 1/2 petabyte of 160 gig Intel X-25Ms,
are bandwidth-limited, and we wouldn't
Vincent, seer of seers, prognosticator of prognosticators, Grendel of
Grendels, answer me this (ironically, this will get us back to the OP's
question, in a form at least, which would be just swell):
"You are charged with creating the most efficient and cost-effective
cache layer possible using
this is getting absurd. I think we all know the relative prices
and performances of off-the-shelf disks/ssd/ram. each have peculiarities
that make their use somewhat complex.
- with disks, you have to think about seek time, since it can range from
zero to ~15ms. for some workloads, a saving gr
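A back-of-envelope model of that seek-time point, sketched in Python; the
numbers are illustrative assumptions, not figures from the thread:

    # Effective HDD throughput for random reads of a given size.
    # Assumed figures: 8 ms average seek + rotational delay, 150 MB/s
    # sequential transfer rate. Each random request pays one seek,
    # then streams its data.
    SEEK_S = 0.008
    SEQ_BPS = 150e6

    def effective_bps(request_bytes):
        return request_bytes / (SEEK_S + request_bytes / SEQ_BPS)

    for size in (4 * 1024, 64 * 1024, 1024 * 1024, 64 * 1024 * 1024):
        print(f"{size:>10} B reads -> {effective_bps(size) / 1e6:7.1f} MB/s")

At 4KB requests the drive delivers well under 1 MB/s; at 64MB requests it
approaches its sequential rate. That is the disk-vs-SSD argument in miniature.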
Jim Lux
-----Original Message-----
From: Vincent Diepeveen [mailto:d...@xs4all.nl]
Sent: Monday, February 11, 2013 4:32 PM
To: Lux, Jim (337C)
Cc: beowulf@beowulf.org List
Subject: Re: [Beowulf] SSD caching for parallel filesystems
>
> I was responding to your question asking for an e
s for $9100...
This is embedded hardware, again not some HPC-type workload.
Not a good example, therefore.
>
> 256GB is probably comparable to a 400 foot magazine at the highest
> resolution.
>
>
> Jim Lux
>
>
> -----Original Message-----
> From: beowulf-boun...@beowulf.org
On Feb 12, 2013, at 12:39 AM, Joe Landman wrote:
>
> If you know what you are doing, 3GB/s is so ... several years ago.
> c.f.
> http://scalability.org/?p=3157
>
If you scroll back, Joe,
the 3GB/s figure is what they posted they were happy to reach with an SSD.
That's why I kept using it.
>
-----Original Message-----
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On
Behalf Of Vincent Diepeveen
Sent: Monday, February 11, 2013 3:30 PM
To: beowulf@beowulf.org List
Subject: Re: [Beowulf] SSD caching for parallel filesystems
Ah you're doing the simulations a lot slower :)
On Feb 12, 2013, at 12:39 AM, Joe Landman wrote:
[snip]
>
> $10 - $20 /TB? For desktop drives, certainly, you can get the slow
> (5400RPM) cheap ones for about $40/TB or so (+/- some).
Even for consumers, the 7200 RPM RAID ones delivering 200MB/s are just
8% more expensive
than the cheap ones.
On 02/11/2013 06:26 PM, Vincent Diepeveen wrote:
> You make a joke out of it,
>
> Yet SSD's you never buy in huge quantities, whereas any self
The great thing about gross overarching generalizations is that they
tend to be incorrect. They are sort of a recursive joke.
First, define "huge".
Ah you're doing the simulations a lot slower :)
No big deal. You get what you pay for :)
You're speaking about 1 box here or so?
What size SSD array is in that box, what bandwidth does it
deliver to your CPU's, and what price did you buy it for, Jim?
Then we can compare it with a harddrive r
You make a joke out of it,
yet SSD's you never buy in huge quantities, whereas any self-respecting
HPC organisation or company buys in massive harddrive storage,
so they *can* get every single harddrive for the prices quoted,
which is between $10 and $20 a terabyte right
>> In any event, your original statement used to be wholly correct.
>> It has
>> changed to a certain degree to "SSDs are about IOPs," which isn't
>> quite the same thing. However, more pointedly, with modern HDDs
>> barely approaching 200MB/s and SSD solutions approaching 2-4GB/s,
>> this is a
> The buy-in price for a company here was already $35 a year ago
> for 2TB disks.
> They basically have 2 shops selling well in Netherlands and Germany.
> I'm not sure they like to get mentioned here, but knowing you know
> some German,
> you should have no problems figuring out that shop name
On Feb 11, 2013, at 7:29 PM, Lux, Jim (337C) wrote:
>> In any event, your original statement used to be wholly correct.
>> It has
>> changed to a certain degree to "SSDs are about IOPs," which isn't
>> quite the same thing. However, more pointedly, with modern HDDs
>> barely approaching 200MB/s
> In any event, your original statement used to be wholly correct.
> It has
> changed to a certain degree to "SSDs are about IOPs," which isn't
> quite the same thing. However, more pointedly, with modern HDDs
> barely approaching 200MB/s and SSD solutions approaching 2-4GB/s, this
> is an i
Andrew,
Companies can deliver things to other companies dirt cheap. Companies have
basically zero rights compared to citizens.
For example, if you buy product A, then as a consumer you have the right to
return it within a given period of time.
A company doesn't have that right, unless agreed contractually.
So th
> You seem to have no idea what determines prices when you buy in a lot versus
> just 1.
Yes indeed. I am obviously clueless in the matter.
On Feb 10, 2013, at 11:39 PM, Andrew Holway wrote:
So any SSD solution that's *not* used for latency-sensitive
workloads needs thousands of dollars worth of SSD's.
In such a case, plain old harddrive technology is at a buy-in
price right now of $35 for a 2 TB disk
(if
>>> So any SSD solution that's *not* used for latency-sensitive workloads
>>> needs thousands of dollars worth of SSD's.
>>> In such a case, plain old harddrive technology is at a buy-in price right
>>> now of $35 for a 2 TB disk
>>> (if you buy in a lot, that's the actual buy-in price for big sh
On Feb 10, 2013, at 3:06 PM, Ellis H. Wilson III wrote:
> On 02/10/13 08:40, Vincent Diepeveen wrote:
>>
>> On Feb 10, 2013, at 2:09 PM, Ellis H. Wilson III wrote:
>>
>>> On 02/10/13 04:41, Vincent Diepeveen wrote:
SSD's are not about bandwidth, they're about latency.
>>>
>>> This is a bit aggressive of a vantage point -- let's tone it back:
>> Find me an application that needs big bandwidth and doesn't need massive
>> storage.
Databases. Lots of databases.
On 02/10/13 08:40, Vincent Diepeveen wrote:
>
> On Feb 10, 2013, at 2:09 PM, Ellis H. Wilson III wrote:
>
>> On 02/10/13 04:41, Vincent Diepeveen wrote:
>>> SSD's are not about bandwidth, they're about latency.
>>
>> This is a bit aggressive of a vantage point -- let's tone it back:
>> "SSD's aren'
On 02/09/13 16:32, Bill Broadley wrote:
> On 02/09/2013 01:22 PM, Vincent Diepeveen wrote:
>> SATA is a very bad protocol for SSD's.
>>
>> SSD's allow perfectly parallel stores and writes; SATA doesn't.
>> So SATA really limits the SSD's true performance.
>
> SSDs and controllers often support NCQ which allows multiple outstanding requests.
On 10/02/13 08:32, Bill Broadley wrote:
> SSDs and controllers often support NCQ which allows multiple
> outstanding requests.
That's only useful if the SATA spec lists the command as being
queueable, and until SATA 3.1 TRIM wasn't one of those (and so
executing it hit performance, and not execut
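Whether a given disk even looks like an SSD to the kernel, and what queue
depth it negotiated, can be checked from sysfs on Linux. A minimal Python
sketch; the device name "sda" is a placeholder:

    # rotational == 0 marks a non-rotating device (SSD);
    # queue_depth > 1 means NCQ is actually in use for the device.
    def show(dev="sda"):
        for label, path in (
            ("rotational", f"/sys/block/{dev}/queue/rotational"),
            ("queue_depth", f"/sys/block/{dev}/device/queue_depth"),
        ):
            try:
                with open(path) as f:
                    print(f"{dev} {label}: {f.read().strip()}")
            except OSError:
                print(f"{dev} {label}: not available")

    show()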
On Feb 10, 2013, at 2:09 PM, Ellis H. Wilson III wrote:
> On 02/10/13 04:41, Vincent Diepeveen wrote:
>> SSD's are not about bandwidth, they're about latency.
>
> This is a bit aggressive of a vantage point -- let's tone it back:
> "SSD's aren't always the cheapest way to achieve bandwidth, but
On 02/10/13 04:41, Vincent Diepeveen wrote:
> SSD's are not about bandwidth, they're about latency.
This is a bit aggressive of a vantage point -- let's tone it back:
"SSD's aren't always the cheapest way to achieve bandwidth, but they are
critical for latency-sensitive applications that are too
SSD's are not about bandwidth, they're about latency.
With a raid array of cheapo disks we can also get 3GB/s bandwidth,
more than most 2-socket nodes can effectively handle.
Only theoretically will a higher bandwidth be possible (benchmarks, huh).
However, getting 20 bytes from an SSD is in the fe
On 02/09/2013 01:22 PM, Vincent Diepeveen wrote:
> SATA is a very bad protocol for SSD's.
>
> SSD's allow perfectly parallel stores and writes; SATA doesn't.
> So SATA really limits the SSD's true performance.
SSDs and controllers often support NCQ which allows multiple outstanding
requests. Not
On Feb 9, 2013, at 4:16 PM, Ellis H. Wilson III wrote:
> On 02/09/13 13:16, Mark Hahn wrote:
>>> They buy a controller design from one place (some
>>> make this component), SSD packages from someplace else, some channel
>>> controllers, etc, etc, and strap it all together. Which is totally
>>
>>
On 02/09/13 13:16, Mark Hahn wrote:
>> They buy a controller design from one place (some
>> make this component), SSD packages from someplace else, some channel
>> controllers, etc, etc, and strap it all together. Which is totally
>
> well, I only pay attention to the SATA SSD market, but the medi
>>> solid devices as well (also expensive however). I think Micron also has
>>> a native PCIe device in the wild now, the P320h? Anybody know of other,
>>> native PCIe devices?
>>
>> I'm not even sure what "native" PCIe flash would look like. Do you mean
>> that the driver and/or filesystem woul
On 02/08/13 17:57, Mark Hahn wrote:
>> solid devices as well (also expensive however). I think Micron also has
>> a native PCIe device in the wild now, the P320h? Anybody know of other,
>> native PCIe devices?
>
> I'm not even sure what "native" PCIe flash would look like. Do you mean
> that the
> solid devices as well (also expensive however). I think Micron also has
> a native PCIe device in the wild now, the P320h? Anybody know of other,
> native PCIe devices?
I'm not even sure what "native" PCIe flash would look like. Do you mean
that the driver and/or filesystem would have to do t
On 02/08/2013 11:20 AM, Brock Palen wrote:
> To add another side note, in the process of interviewing the Gluster team for
> my podcast (www.rce-cast.com) he mentioned writing a plugin that would first
> write data locally on the host, and Gluster would then take it to the real disk
> in the back
On 02/08/2013 11:29 AM, Jonathan Aquilina wrote:
> Brock, the PCIe SSD's from OCZ, the enterprise ones, seem to have insane
> performance.
These are exactly the SSD's I'm referring to that suffer from SATAe
internal interface overheads. I would caution everyone to make sure you
try before you buy
On 02/08/2013 11:50 AM, Jonathan Aquilina wrote:
> What would be the best way to benchmark these devices, be it the SATA
> SSD's or PCIe. I have a SATA-based SSD in my netbook and it's quite zippy.
Joe should have some nice suggestions here, but ultimately the best way
to benchmark it for the OP's case
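For a quick client-side sanity check, a minimal latency sampler in Python. A
sketch only: the path is a placeholder, and without O_DIRECT the page cache
will flatter repeated reads, so point it at a file much larger than RAM:

    import os, random, time

    def read_latencies(path, reads=1000, block=4096):
        # Time random single-block reads and report median and p99 latency.
        fd = os.open(path, os.O_RDONLY)
        size = os.fstat(fd).st_size
        samples = []
        for _ in range(reads):
            offset = random.randrange(0, size - block)
            t0 = time.perf_counter()
            os.pread(fd, block, offset)
            samples.append(time.perf_counter() - t0)
        os.close(fd)
        samples.sort()
        return samples[len(samples) // 2], samples[int(len(samples) * 0.99)]

    med, p99 = read_latencies("/path/to/big/file")
    print(f"median {med * 1e6:.0f} us, p99 {p99 * 1e6:.0f} us")

Tools like fio are the serious option for the OP's parallel-filesystem case;
this just shows the shape of the measurement.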
What would be the best way to benchmark these devices, be it the SATA SSD's
or PCIe. I have a SATA-based SSD in my netbook and it's quite zippy.
On Fri, Feb 8, 2013 at 5:32 PM, Ellis H. Wilson III wrote:
> On 02/08/2013 11:29 AM, Jonathan Aquilina wrote:
>
>> Brock, the PCIe SSD's from OCZ, the ent
Brock, the PCIe SSD's from OCZ, the enterprise ones, seem to have insane
performance.
On Fri, Feb 8, 2013 at 5:20 PM, Brock Palen wrote:
> To add another side note, in the process of interviewing the Gluster team
> for my podcast (www.rce-cast.com) he mentioned writing a plugin that
> would first
To add another side note, in the process of interviewing the Gluster team for
my podcast (www.rce-cast.com) he mentioned writing a plugin that would first
write data locally on the host, and Gluster would then take it to the real disk
in the background. There were constraints to doing this. I as
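The shape of that plugin idea, as a Python sketch. This is not Gluster's
actual plugin API; both paths are illustrative placeholders:

    import pathlib, shutil, threading

    FAST = pathlib.Path("/local/ssd/stage")    # assumed local SSD staging area
    SLOW = pathlib.Path("/mnt/parallel-fs")    # assumed backing filesystem

    def staged_write(name: str, data: bytes):
        """Commit to the fast local tier, migrate to the real disk later."""
        FAST.mkdir(parents=True, exist_ok=True)
        staged = FAST / name
        staged.write_bytes(data)               # fast local commit
        threading.Thread(target=_migrate, args=(staged,), daemon=True).start()

    def _migrate(staged: pathlib.Path):
        shutil.copy2(staged, SLOW / staged.name)  # slow copy in the background
        staged.unlink()                           # reclaim staging space

One obvious constraint of this approach: until the background copy completes,
the data exists only on one host, so a node failure loses it.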
On 02/06/2013 04:36 PM, Prentice Bisbal wrote:
> Beowulfers,
>
> I've been reading a lot about using SSD devices to act as caches for
> traditional spinning disks or filesystems over a network (SAN, iSCSI,
> SAS, etc.). For example, Fusion-io had directCache, which works with
> any block-based sto
On 02/07/2013 01:20 PM, Prentice Bisbal wrote:
> I'm looking for something that can cache a parallel filesystem from
> the client side. From what I've read of CacheCade, it only works with
> local disks directly attached to the PERC controller. -- Prentice
>
On 02/06/2013 06:24 PM, Brendan Moloney wrote:
> You could use something like CacheFS for client-side caching, but this
> will always be a read-only cache.
That might not be a problem. Fusion-io's directCache is a write-through
cache, so all writes go to disk anyway. I think several other pro
On Thu, Feb 7, 2013 at 12:24 AM, Brendan Moloney wrote:
> The newer versions of CacheCade can do write caching, which could be pretty
> useful for speeding up RAID6 on the servers. Pretty sure there are other
> options to do write caching on the server (like Bcache).
Or BTIER, which looks like a r
You could use something like CacheFS for client-side caching, but this will
always be a read-only cache.
The newer versions of CacheCade can do write caching, which could be pretty
useful for speeding up RAID6 on the servers. Pretty sure there are other
options to do write caching on the server (like Bcache).
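The write-through/write-back distinction in miniature, as a Python sketch
(illustrative only, not how directCache or CacheCade is implemented):

    class WriteThroughCache:
        """Reads may hit the fast tier; every write is acknowledged only
        after the slow tier has it, so writes see no speedup."""

        def __init__(self, fast: dict, slow: dict):
            self.fast, self.slow = fast, slow

        def read(self, key):
            if key in self.fast:        # hit: served from the fast tier
                return self.fast[key]
            value = self.slow[key]      # miss: fetch, then populate
            self.fast[key] = value
            return value

        def write(self, key, value):
            self.slow[key] = value      # backing store first (write-through)
            self.fast[key] = value      # then the cache

A write-back cache would acknowledge after updating only the fast tier and
flush later, which is what makes it both faster and riskier.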
On 02/06/2013 05:07 PM, Sabuj Pattanayek wrote:
> On Wed, Feb 6, 2013 at 3:46 PM, Brock Palen wrote:
>> I have been thinking about this. DDN's SFX looks like it might be able to
>> do this at the block level. I am trying to get them to think that they
>> should do this.
> Yes, supposedly it'll just be a firmware upgrade and addition of SSD's into
On Wed, Feb 6, 2013 at 3:46 PM, Brock Palen wrote:
> I have been thinking about this. DDN's SFX looks like it might be able to do
> this at the block level. I am trying to get them to think that they should
> do this.
Yes, supposedly it'll just be a firmware upgrade and addition of SSD's
into
I have been thinking about this. DDN's SFX looks like it might be able to do
this at the block level. I am trying to get them to think that they should do
this.
I wonder how much one should have. A quick scan of my robinhood database shows
my 640TB lustre filesystem (50% full) has 56TB of data i
Lustre is now implementing ZFS support, which I think has the most advanced
SSD caching stuff available.
If you have a google around for "roadrunner Lustre ZFS" you might find
something juicy.
Ta,
Andrew
On 6 Feb 2013 at 21:36, Prentice Bisbal wrote:
> Beowulfers,
>
> I've been reading a lot about
Beowulfers,
I've been reading a lot about using SSD devices to act as caches for
traditional spinning disks or filesystems over a network (SAN, iSCSI,
SAS, etc.). For example, Fusion-io had directCache, which works with
any block-based storage device (local or remote) and Dell is selling
LSI's CacheCade