Hi,

We've used iSCSI SANs with 6x1TB 15k SAS drives in RAID10 in production
environments, and they work very well for both reads and writes. We
also have Fibre Channel environments, which are faster, as you would
expect. They're also a lot more expensive.

The performance bottleneck will have more to do with virtualization
than with the iSCSI-based storage. If you run a physical server against
iSCSI with decent disks, you should get good results.
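
If you want to sanity-check a candidate volume before committing to it,
a crude sequential-write test along these lines is usually enough to
expose the kind of gap Thijs saw with NFS. This is only a sketch (the
default path is a placeholder; pass the mount point you actually want
to test), and a proper tool like fio or bonnie++ will give you much
more reliable numbers:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Crude sequential-write throughput check for a candidate index volume.
public class DiskThroughputCheck {
    public static void main(String[] args) throws IOException {
        // Pass the mount point to test, e.g. /mnt/iscsi (placeholder)
        File target = new File(args.length > 0 ? args[0] : "/tmp",
                "throughput.tmp");
        byte[] block = new byte[1024 * 1024]; // 1MB per write
        int totalMb = 1024;                   // 1GB total
        long start = System.nanoTime();
        FileOutputStream out = new FileOutputStream(target);
        try {
            for (int i = 0; i < totalMb; i++) {
                out.write(block);
            }
            // Flush to the device so the page cache doesn't flatter the result
            out.getFD().sync();
        } finally {
            out.close();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d MB in %.1f s = %.1f MB/s%n",
                totalMb, seconds, totalMb / seconds);
        target.delete();
    }
}

Run it once against local disk and once against the iSCSI (or NFS)
mount; if the numbers are in the same ballpark, the storage probably
isn't your bottleneck.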

Peter

On Thu, Oct 7, 2010 at 12:45 PM, Shawn Heisey <elyog...@elyograg.org> wrote:
>  On 10/6/2010 7:23 AM, Thijs wrote:
>>
>> Hi.
>>
>> Our hardware department is planning on moving some stuff to new machines
>> (at our request).
>> They are suggesting using virtualization (some Cisco solution) on those
>> machines and having the 'disk' connected via iSCSI.
>>
>> Does anybody have experience running a SOLR index on an iSCSI drive?
>> We have already tried NFS, but that slowed the indexing process down too
>> much, about 12 times slower, so NFS is a no-go. I could have known that,
>> as avoiding NFS is recommended in a lot of places. But I can't find any
>> info about iSCSI.
>>
>> Does anybody have experience running a SOLR index in a virtualized
>> environment? Is it resilient enough to keep working when the virtual
>> machine is transferred to a different hardware node?
>>
>> thanks
>
> I've not actually used it myself, but I would not expect it to cause you any
> issues.  It should be similar to Fibre Channel.  Usually Fibre Channel is
> faster, unless you REALLY spend some money and get 10Gb/s Ethernet hardware.
>  If we assume that you'll have a fairly standard gigabit setup with only one
> port on your server, you should see potential speeds near one gigabit.  This
> is faster than the sustained rate of most single hard drives.  I was just
> reading that Seagate's 15K 600GB SAS drive sustains 171MB/s, which works
> out to about 1.37Gb/s, so that one drive could overwhelm a single
> gigabit iSCSI port.
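>
> To spell out the unit conversion (drive throughput is quoted in bytes,
> link speeds in bits):
>
>   171 MB/s x 8 bits/byte = 1368 Mb/s, or about 1.37 Gb/s
>   1 Gb/s link = at most 125 MB/s, and less after TCP/iSCSI overhead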
>
> With something like iSCSI or Fibre Channel, you have extra points of failure,
> because you normally don't want to implement them without dedicated
> switching hardware.  The solution there is redundancy, which of course
> drives the cost up even higher.  You also usually get higher speeds because
> of load balancing across those multiple links.
>
> Shawn