I’d say 100M docs/shard applies only to the smallest-document use cases, such 
as plain log entries with just a timestamp, an id, and a short message.

At the other extreme, with large full-text documents, 10M docs/shard is closer 
to the practical maximum.
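
If you do outgrow a single shard, SolrCloud can split one in place with the 
Collections API SPLITSHARD action. A minimal sketch, assuming a node at 
localhost:8983 and a collection named "mycollection" (both hypothetical here):

    # Split shard1 into two subshards (shard1_0 and shard1_1);
    # the parent shard becomes inactive once the split completes.
    curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1'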

How many documents do you have in your collection?
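
A quick way to check, sketched against the same hypothetical collection name 
and endpoint:

    # rows=0 skips fetching documents; the numFound field in the
    # response is the total document count for the collection
    curl 'http://localhost:8983/solr/mycollection/select?q=*:*&rows=0'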

        Erik Hatcher
        Senior Solutions Architect
        Lucidworks.com



> On Jun 4, 2018, at 6:36 PM, Oakley, Craig (NIH/NLM/NCBI) [C] 
> <craig.oak...@nih.gov> wrote:
> 
> I have a sharding question.
> 
> We have a collection (one shard, two replicas, currently running Solr 6.6) 
> which sometimes becomes unresponsive on the non-leader node. It is 214 
> gigabytes, and we were wondering whether there is a rule of thumb for how 
> large to allow a core to grow before sharding. I have a reference in my notes 
> from the 2015 Solr conference in Austin: "baseline no more than 100 million 
> docs/shard" and "ideal shard-to-memory ratio: if at all possible the index 
> should fit into RAM, but other than that it gets really specific really 
> fast"; but that was several versions ago, so I wanted to ask whether these 
> suggestions have been revised.
> 
> Thanks
