Arcady Genkin put forth on 7/12/2010 10:49 PM:
> After dealing with all the idiosyncrasies of iSCSI and software RAID
> under Linux I am a bit skeptical whether what we are building is going
> to actually be better than a black-box fiber-attached RAID solution,
> but it surely is cheaper and more...
On Mon, Jul 12, 2010 at 22:28, Stan Hoeppner wrote:
> I'm curious as to why you're (apparently) wasting 2/3 of your storage for
> redundancy. Have you considered a straight RAID 10 across those 30
> disks/LUNs?
This is a very good question. And the answer is: because Linux's MD
does not implement...
On 07/12/2010 06:26 PM, Stan Hoeppner wrote:
> Now, you can argue what RAID 10 is from now until you are blue in the face,
> and the list is tired of hearing it. But that won't change the industry
> definition of RAID 10. It's been well documented for over 15 years and won't
> be changing any time soon.
On Mon, Jul 12, 2010 at 20:06, Stan Hoeppner wrote:
> I had the same reaction Mike. Turns out mdadm actually performs RAID 1E with
> 3 disks when you specify RAID 10. I'm not sure what, if any, benefit RAID 1E
> yields here--almost nobody uses it.
The people who are surprised to see us do RAID...
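For reference, a minimal sketch of the RAID 1E-like arrangement Stan describes: asking MD for level 10 on three disks with the default near-2 layout staggers two copies of each chunk across the three drives. The device names below are hypothetical.

  # 3-disk MD "RAID 10", near layout, 2 copies per chunk (RAID 1E-like)
  mdadm --create /dev/md0 --level=10 --layout=n2 \
        --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

Arcady's arrays use --layout=n3 instead, which keeps 3 copies and so turns each 3-disk set into an effective 3-way mirror.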
Arcady Genkin put forth on 7/12/2010 12:45 PM:
> I just tried to use LVM for striping the RAID1 triplets together
> (instead of MD). Using the following three commands to create the
> logical volume, I get 550 MB/s sequential read speed, which is quite
> a bit faster than before, but is still 10% slower than what plain MD
> RAID0 stripe can do with the same disks...
Aaron Toponce put forth on 7/12/2010 6:56 PM:
> The argument is not whether Linux software RAID 10 is standard or not,
> but the requirement of the number of disks that Linux software RAID
> supports. In this case, it supports 2+ disks, regardless what its
> "effectiveness" is.
Yes, it is the argument...
Roger Leigh put forth on 7/12/2010 5:45 PM:
> Have a closer look at lvcreate(8). The last arguments are:
>
> [-Z|--zero y|n] VolumeGroupName [PhysicalVolumePath[:PE[-PE]]...]
Good catch. As I said I've never used it before, so I wasn't exactly sure how
it all fits. Seemed logical that...
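As a small sketch of what those trailing lvcreate arguments allow, with hypothetical VG and PV names: you can restrict an LV to particular physical volumes, or even to a physical-extent range on one of them.

  # Allocate 100 extents for the LV, but only from extents 0-99 of /dev/md0
  lvcreate -l 100 -n lvtest vg0 /dev/md0:0-99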
Mike Bird put forth on 7/12/2010 4:00 PM:
> On Mon July 12 2010 12:45:57 Arcady Genkin wrote:
>> Creating the ten 3-way RAID1 triplets - for N in 0 through 9:
>> mdadm --create /dev/mdN -v --raid-devices=3 --level=raid10 \
>> --layout=n3 --metadata=0 --bitmap=internal --bitmap-chunk=2048 \
>> --chunk=1024 /dev/sdX /dev/sdY /dev/sdZ
On 7/12/2010 5:52 PM, Stan Hoeppner wrote:
> Aaron Toponce put forth on 7/12/2010 5:16 PM:
>> On 7/12/2010 4:13 PM, Stan Hoeppner wrote:
>>> Is that a typo, or are you turning those 3 disk mdadm sets into RAID10 as
>>> shown above, instead of the 3-way mirror sets you stated previously? RAID 10
>>> requires a minimum of 4 disks, you have 3. Something isn't right here...
Aaron Toponce put forth on 7/12/2010 5:16 PM:
> On 7/12/2010 4:13 PM, Stan Hoeppner wrote:
>> Is that a typo, or are you turning those 3 disk mdadm sets into RAID10 as
>> shown above, instead of the 3-way mirror sets you stated previously? RAID 10
>> requires a minimum of 4 disks, you have 3. Something isn't right here...
On Mon July 12 2010 15:16:47 Aaron Toponce wrote:
> Incorrect. The Linux RAID implementation can do level 10 across 3 disks.
> In fact, it can even do it across 2 disks.
>
> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
Thanks, I learned something new today.
Now I guess that...
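A quick illustration of the 2-disk case from that link, with hypothetical device names: with the near-2 layout the result behaves like a plain mirror, while the far-2 layout trades that arrangement for better sequential reads.

  # Two-disk MD "RAID 10", near layout: effectively a RAID 1 pair
  mdadm --create /dev/md0 --level=10 --layout=n2 \
        --raid-devices=2 /dev/sdb /dev/sdc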
On Mon, Jul 12, 2010 at 05:13:16PM -0500, Stan Hoeppner wrote:
> Arcady Genkin put forth on 7/12/2010 11:52 AM:
> > On Mon, Jul 12, 2010 at 02:05, Stan Hoeppner wrote:
> >
> >> lvcreate -i 10 -I [stripe_size] -l 102389 vg0
> >>
> >> I believe you're losing 10x performance because you have a 10 "disk" mdadm
> >> stripe but you didn't inform lvcreate about this fact.
On 7/12/2010 4:13 PM, Stan Hoeppner wrote:
> Is that a typo, or are you turning those 3 disk mdadm sets into RAID10 as
> shown above, instead of the 3-way mirror sets you stated previously? RAID 10
> requires a minimum of 4 disks, you have 3. Something isn't right here...
Incorrect. The Linux RAID implementation can do level 10 across 3 disks.
In fact, it can even do it across 2 disks.
Arcady Genkin put forth on 7/12/2010 11:52 AM:
> On Mon, Jul 12, 2010 at 02:05, Stan Hoeppner wrote:
>
>> lvcreate -i 10 -I [stripe_size] -l 102389 vg0
>>
>> I believe you're losing 10x performance because you have a 10 "disk" mdadm
>> stripe but you didn't inform lvcreate about this fact.
>
> Hi, Stan:
> I believe that the -i and -I options are for using...
On Mon July 12 2010 12:45:57 Arcady Genkin wrote:
> Creating the ten 3-way RAID1 triplets - for N in 0 through 9:
> mdadm --create /dev/mdN -v --raid-devices=3 --level=raid10 \
> --layout=n3 --metadata=0 --bitmap=internal --bitmap-chunk=2048 \
> --chunk=1024 /dev/sdX /dev/sdY /dev/sdZ
RAID 10 with...
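For readers reconstructing the setup, the quoted command expands to a loop along these lines. The 30-disk naming and the array variable are assumptions; Arcady's actual device names weren't posted.

  # Hypothetical mapping of the 30 iSCSI disks to ten 3-way sets
  DISKS=(/dev/sd{b..z} /dev/sda{a..e})          # 30 devices, names assumed
  for N in $(seq 0 9); do
      mdadm --create /dev/md$N -v --raid-devices=3 --level=raid10 \
            --layout=n3 --metadata=0 --bitmap=internal \
            --bitmap-chunk=2048 --chunk=1024 \
            "${DISKS[@]:$((3*N)):3}"            # three disks per set
  done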
On 7/12/2010 1:45 PM, Arcady Genkin wrote:
> Creating the ten 3-way RAID1 triplets - for N in 0 through 9:
> mdadm --create /dev/mdN -v --raid-devices=3 --level=raid10 \
> --layout=n3 --metadata=0 --bitmap=internal --bitmap-chunk=2048 \
> --chunk=1024 /dev/sdX /dev/sdY /dev/sdZ
>
> Then the big...
On Mon, Jul 12, 2010 at 14:54, Aaron Toponce wrote:
> Can you provide the commands from start to finish when building the volume?
>
> fdisk ...
> mdadm ...
> pvcreate ...
> vgcreate ...
> lvcreate ...
Hi, Aaron, I already provided all of the above commands in earlier
messages (except for fdisk, since...)
On 7/12/2010 11:45 AM, Arcady Genkin wrote:
> I would still like to know why LVM on top of RAID0 performs so poorly
> in our case.
Can you provide the commands from start to finish when building the volume?
fdisk ...
mdadm ...
pvcreate ...
vgcreate ...
lvcreate ...
etc.
My experience has been that...
I just tried to use LVM for striping the RAID1 triplets together
(instead of MD). Using the following three commands to create the
logical volume, I get 550 MB/s sequential read speed, which is quite
a bit faster than before, but is still 10% slower than what plain MD
RAID0 stripe can do with the same disks...
On Mon, Jul 12, 2010 at 02:05, Stan Hoeppner wrote:
> lvcreate -i 10 -I [stripe_size] -l 102389 vg0
>
> I believe you're losing 10x performance because you have a 10 "disk" mdadm
> stripe but you didn't inform lvcreate about this fact.
Hi, Stan:
I believe that the -i and -I options are for using...
Arcady Genkin put forth on 7/11/2010 10:46 PM:
> lvcreate -l 102389 vg0
Should be:
lvcreate -i 10 -I [stripe_size] -l 102389 vg0
I believe you're losing 10x performance because you have a 10 "disk" mdadm
stripe but you didn't inform lvcreate about this fact. Delete the vg, and
then recreate the...
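Filling in the placeholder, purely as a sketch: the stripe size below is an assumption chosen to match the 1024 KiB mdadm chunk used elsewhere in the thread, and the LV name is made up.

  # -i 10: stripe across all ten PVs; -I 1024: 1024 KiB stripe size
  lvcreate -i 10 -I 1024 -l 102389 -n lv0 vg0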
I'm seeing a 10-fold performance hit when using an LVM2 logical volume
that sits on top of a RAID0 stripe. Using dd to read directly from
the stripe (i.e. a large sequential read) I get speeds over 600 MB/s.
Reading from the logical volume using the same method only gives
around 57 MB/s. I am new to...