Hi,
To get a more complete picture, can you tell us how many shards/replicas you
have per collection? Also, what is the index size on disk? Did you check GC?
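If you haven't already, a quick way to gather both is something like this (a
sketch assuming the stock Solr setup on port 8983 - adjust host, port, and
paths to your environment):

  # Shard/replica layout for all collections (Collections API)
  curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"

  # GC logging flags for Java 8; Solr 6's bin/solr script usually enables
  # these already via GC_LOG_OPTS, so check for solr_gc.log in your logs dir
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/solr/logs/solr_gc.log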

BTW, a 32GB heap prevents the JVM from using compressed oops (ordinary object
pointers), so you actually end up with less usable memory than with a 31GB heap.
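You can verify whether compressed oops are in effect on your JVM like this (a
quick check, not Solr-specific):

  java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops   # false
  java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops   # true

The exact cutoff is slightly below 32GB and can vary by JVM version, so staying
at 31GB or less is the safe choice.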

Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 27 Feb 2018, at 11:36, 苗海泉 <mseaspr...@gmail.com> wrote:
> 
> I encountered a serious problem while using Solr. We run Solr 6.0, our
> daily data volume is about 500 billion documents, we create a new
> collection every hour, there are more than a thousand collections online,
> and we have 49 Solr nodes. With fewer than 800 collections, indexing is
> still very fast, but at around 1100 collections, Solr indexing speed drops
> sharply: a program that originally ran at about 2-3 million TPS drops to
> only a few hundred or even tens of TPS. Has anyone encountered a similar
> situation? We have not found a good way to track down this issue. By the
> way, we assigned 32GB of memory to each Solr node. We checked memory, CPU,
> disk IO, and network IO usage, and all of them look normal. If anyone has
> run into a similar problem, please let us know the solution. Thank you
> very much.
