Hi Adi,

RAID 10 is a good fit for both indexing and querying, since it stripes across
mirror sets. However, as with RAID 1, you lose half of your raw disk space.
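As a back-of-the-envelope check, here is a small Python sketch (mine, illustrative only, assuming identical disks) of the usable-capacity trade-off between the RAID levels under discussion:

```python
# Rough usable-capacity comparison for common RAID levels.
# Assumes identical disks; purely illustrative, not a sizing recommendation.

def usable_capacity(level, disks, disk_tb):
    """Return usable TB for an array of `disks` drives of `disk_tb` TB each."""
    if level == "raid0":              # striping only, no redundancy
        return disks * disk_tb
    if level in ("raid1", "raid10"):  # mirroring: half the raw space
        return disks * disk_tb / 2
    if level == "raid5":              # one disk's worth of parity
        return (disks - 1) * disk_tb
    if level == "raid6":              # two disks' worth of parity
        return (disks - 2) * disk_tb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid0", "raid10", "raid5", "raid6"):
    print(level, usable_capacity(level, disks=8, disk_tb=4))
```

So with 8 x 4 TB drives, RAID 0 gives 32 TB usable but no redundancy, while RAID 10 gives 16 TB with both redundancy and good random-write performance.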

Here is a mail thread of mine that discusses RAID levels specifically for
Solr:
https://lists.apache.org/thread.html/462d7467b2f2d064223eb46763a6a6e606ac670fe7f7b40858d97c0d@1366325333@%3Csolr-user.lucene.apache.org%3E

Kind Regards,
Furkan KAMACI

On Mon, Jul 29, 2019 at 10:25 PM Kaminski, Adi <adi.kamin...@verint.com>
wrote:

> Hi,
> We are about to size a large environment of 7 nodes/servers with a
> replication factor of 2 in a SolrCloud cluster (using Solr 7.6).
>
> The system uses a parent-child (nested documents) schema and will have
> about 40M parent docs with 50-80 child docs each (2-3.2B Solr docs in total).
>
> We have a use case that will require updating parent document fields,
> triggered by an application flow (via re-indexing or an atomic/partial update
> approach, which will probably require upgrading to Solr 8.1.1, as it supports
> this feature and contains fixes in the nested-document handling area).
>
> Since these updates might be quite heavy in terms of IOPS, we would
> like to make sure the IO hardware and RAID configuration are optimized
> (an r/w ratio of 50% read and 50% write, to allow balanced search and
> update flows).
>
> Can someone share a RAID level configuration from a similar
> scale/use-case/deployment?
> (I assume RAID 5 and 6 are not an option due to the heavy impact of
> parity/dual parity on write operations, which leaves RAID 0, 1 or 10.)
>
> Thanks in advance,
> Adi
>
> Sent from Workspace ONE Boxer
>
>
