hi,
On Monday, 21.02.2011, at 17:16 -0600, Stan Hoeppner wrote:
> I'm guessing your setup is different than this or you wouldn't be
> asking about RAID. Could you please describe your storage back end?
4 x LSI 630J storage with 12 x SAS HDD connected to a SAS switch. From
the SAS switch one [...]
Denny Schierz put forth on 2/21/2011 4:20 AM:
> hi Stan,
>
> On Sunday, 20.02.2011, at 20:13 -0600, Stan Hoeppner wrote:
>> It's not clear to me at this point if you need real-time
>> file/filesystem sharing or simply manual failover from a dead host to
>> a backup server.
>
> then it's my fault :-)
hi Stan,
On Sunday, 20.02.2011, at 20:13 -0600, Stan Hoeppner wrote:
> It's not clear to me at this point if you need real-time
> file/filesystem sharing or simply manual failover from a dead host to
> a backup server.
then it's my fault :-)
I want failover (the second option, in your words). If nod [...]
Denny Schierz put forth on 2/20/2011 11:56 AM:
> hi,
>
> On Friday, 18.02.2011, at 20:37 +0800, Justin Jereza wrote:
>
>> I'd consider running clvm + gfs2 instead. That way, both nodes can
>> stay up and connected to the same filesystem at the same time. The
>> only decision left would be which node to use. [...]
hi,
On Friday, 18.02.2011, at 20:37 +0800, Justin Jereza wrote:
> I'd consider running clvm + gfs2 instead. That way, both nodes can
> stay up and connected to the same filesystem at the same time. The
> only decision left would be which node to use. OTOH, you can have an
> HA configuration as [...]
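
Roughly, the clvm + gfs2 route Justin is suggesting looks like the sketch
below. This assumes cman/corosync and the DLM are already running on both
nodes; the volume group name (vg0), LV name (shared), cluster name
(cluster1), member devices, mount point and init script name are only
placeholders, not anything from this thread:

  # in /etc/lvm/lvm.conf on both nodes: locking_type = 3 (clustered locking)
  # then, from one node, create a clustered VG and an LV on the shared disks:
  vgcreate -cy vg0 /dev/sdb /dev/sdc
  lvcreate -n shared -L 500G vg0

  # one GFS2 journal per node that will mount it (2 here);
  # -t must be <cluster name>:<fs name> matching cluster.conf
  mkfs.gfs2 -p lock_dlm -t cluster1:shared -j 2 /dev/vg0/shared

  # on both nodes: start the cluster LVM daemon, then mount the filesystem
  /etc/init.d/clvm start
  mount -t gfs2 /dev/vg0/shared /srv/shared

Both nodes can then read and write the same filesystem concurrently, which
is exactly what plain LVM plus a local filesystem on shared disks cannot do.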
Justin Jereza put forth on 2/18/2011 6:37 AM:
>> we have two nodes connected to one big SAS storage (LSI 630j Jbod) with
>> SAS HBAs and they can see all disks at the same time.
>> Now we want to build a failover construct for lvm with ISCSI:
>>
>> LSI Jbod -> node* | raid | lvm | ISCSI -> Global IP ->> Client
> we have two nodes connected to one big SAS storage (LSI 630j Jbod) with
> SAS HBAs and they can see all disks at the same time.
> Now we want to build a failover construct for lvm with ISCSI:
>
> LSI Jbod -> node* | raid | lvm | ISCSI -> Global IP ->> Client
>
> If the primary node fails, start raid on [...]
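
Under that construct, a minimal sketch of the manual takeover on the
standby node might look like the following; the md/VG names (md0, vg0),
the member devices, the service IP and the Debian iscsitarget init script
are assumptions for illustration, not what Denny actually runs:

  # assemble the software RAID from the shared JBOD disks (devices assumed)
  mdadm --assemble /dev/md0 /dev/sd[b-m]

  # activate the LVM volume group on top of the array
  vgchange -ay vg0

  # bring up the iSCSI target and take over the global service IP
  /etc/init.d/iscsitarget start
  ip addr add 192.0.2.10/24 dev eth0
  arping -U -I eth0 -c 3 192.0.2.10   # gratuitous ARP so clients learn the new owner

The risky part is making sure the failed node is really down (fenced)
before the standby assembles the array: if both nodes activate the same
md/LVM stack at once, the data will be corrupted.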