On 1/7/2015 3:29 PM, Nishanth S wrote:
> I am working on coming up with a Solr architecture layout for my use
> case. We are a very write-heavy application with no downtime tolerance
> and have low SLAs on reads compared with writes. I am looking at
> around 12K tps with an average indexed document size in the range of
> 6 kB. I would like to go with 3 replicas for that extra fault
> tolerance, and am trying to identify the number of shards. The
> machines are monstrous, with around 100 GB of RAM and more than 24
> cores each. Is there a way to arrive at the desired number of shards
> in this case? Any pointers would be helpful.

This is one of those questions that's nearly impossible to answer
without field trials that put a production load on a production index.
Minor changes to either the config or the schema can have a major
impact on the query load Solr will support.

https://lucidworks.com/blog/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
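
For illustration, a field trial can start as simply as replaying
representative queries against a production-sized index and measuring
what one node sustains.  A minimal sketch in Python, assuming a node at
localhost:8983, a collection named "mycollection", and a query file --
all of those names are hypothetical:

import time
import requests  # third-party HTTP client: pip install requests

# Hypothetical node and collection -- substitute your own.
SOLR_URL = "http://localhost:8983/solr/mycollection/select"

with open("sample_queries.txt") as f:   # one q= value per line
    queries = [line.strip() for line in f if line.strip()]

start = time.time()
for q in queries:
    # /select with a q parameter is the standard Solr search endpoint.
    resp = requests.get(SOLR_URL, params={"q": q, "rows": 10})
    resp.raise_for_status()
elapsed = time.time() - start
print("%d queries in %.1f s = %.0f QPS (single client)"
      % (len(queries), elapsed, len(queries) / elapsed))

A single-threaded client badly understates what a node can handle, so a
real trial would run many concurrent clients with real query text.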

A query load of 12000 queries per second is VERY high.  That is likely
to require a **LOT** of hardware, because you're going to need a lot of
replicas.  Because each server will be handling many simultaneous
queries, the best results will come from having only one replica (Solr
core) per server.
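
For what it's worth, here's a minimal sketch of creating a collection
laid out that way through the Collections API, using maxShardsPerNode
to pin one core per server.  The shard count, host, collection name,
and config name are all hypothetical:

import requests  # pip install requests

# CREATE with 2 shards x 3 replicas = 6 cores total, and
# maxShardsPerNode=1 forces one core per Solr node, so this
# particular layout needs 6 servers.
params = {
    "action": "CREATE",
    "name": "mycollection",
    "numShards": 2,
    "replicationFactor": 3,
    "maxShardsPerNode": 1,
    "collection.configName": "myconf",  # configset already in ZooKeeper
}
resp = requests.get("http://solr1:8983/solr/admin/collections",
                    params=params)
resp.raise_for_status()
print(resp.text)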

Generally you'll get better results for a high query load if you don't
shard your index, but depending on your document count, you might need
to shard.  You haven't said how many documents you have.

The key to excellent performance with Solr is to make sure that the
system never hits the disk to read index data -- for 12000 queries per
second, the index must be fully cached in RAM.  If Solr must go to the
actual disk, query performance will drop significantly.
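
A back-of-the-envelope sketch of that constraint, in Python -- every
number except the 100 GB of RAM is an assumption here, the document
count is made up, and only measuring your real on-disk index size
(usually smaller than the 6 kB source documents) will tell you yours:

import math

num_docs = 200_000_000            # hypothetical document count
index_bytes_per_doc = 3 * 1024    # assumed on-disk index bytes per doc

index_size_gb = num_docs * index_bytes_per_doc / 1024.0**3

ram_per_node_gb = 100.0   # from the original question
solr_heap_gb = 16.0       # assumed JVM heap; the rest is OS page cache
page_cache_gb = ram_per_node_gb - solr_heap_gb

# Each node must be able to cache its entire shard in the page cache.
shards_needed = max(1, math.ceil(index_size_gb / page_cache_gb))
print("index size:        %.0f GB" % index_size_gb)
print("cache per node:    %.0f GB" % page_cache_gb)
print("shards to fit RAM: %d" % shards_needed)

With those made-up numbers the index is about 572 GB, each node has
about 84 GB of cache, and you'd need at least 7 shards just to keep
everything in RAM -- times 3 for your replicas.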

Thanks,
Shawn
