Thank you for your reply.
Each collection has 25 shards with one replica, and each Solr node holds about 5TB of index on disk.
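
A quick way to double-check that per-node index size on disk (the path below assumes the default /var/solr/data layout; adjust it to your install):

  # sum up the index size of every core on one node (path is an assumption)
  du -sh /var/solr/data/*/data/index | sort -h
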
GC has been checked, and the settings were modified as follows:
SOLR_JAVA_MEM="-Xms32768m -Xmx32768m "
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=250 \
-XX:InitiatingHeapOccupancyPercent=75 \
-XX:+UseLargePages \
-XX:+AggressiveOpts"

2018-02-27 19:27 GMT+08:00 Emir Arnautović <emir.arnauto...@sematext.com>:

> Hi,
> To get a more complete picture, can you tell us how many shards/replicas you
> have per collection? Also, what is the index size on disk? Did you check GC?
>
> BTW, using a 32GB heap prevents the JVM from using compressed oops, so you
> end up with less usable memory than with a 31GB heap.
>
> Thanks,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 27 Feb 2018, at 11:36, 苗海泉 <mseaspr...@gmail.com> wrote:
> >
> > I have run into a fairly serious problem while using Solr. We are on Solr
> > 6.0; our daily data volume is about 500 billion documents, we create a new
> > collection every hour, there are more than a thousand collections online,
> > and the cluster has 49 Solr nodes. With fewer than 800 collections,
> > indexing is still very fast, but once the number of collections reaches
> > about 1100, Solr indexing throughput drops sharply: a program that
> > originally indexed at about 2-3 million TPS falls to only a few hundred or
> > even a few tens of TPS. Has anyone run into a similar situation, and is
> > there a good way to track this issue down? By the way, each Solr node is
> > assigned 32GB of memory. We checked memory, CPU, disk I/O, and network I/O
> > usage and everything looks normal. If anyone has encountered a similar
> > problem, please share the solution. Thank you very much.
>
>


-- 
==============================
联创科技
知行如一 (knowing and doing as one)
==============================
