Hi Shawn

   Actually, there are three Solr instances (the top three PIDs in the
screenshot are those instances), and their data directories are 851G, 592G,
and 49G respectively, with more data being added as time goes on. I suspect
few SolrCloud deployments are as large as this one, and it is now one of
the most important core services in my company.
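   (Rough arithmetic, using the sizes above together with Shawn's estimate
below of roughly 100GB of RAM left for the OS page cache: 851G + 592G + 49G
comes to about 1.49TB of index data on this one machine, so well under 10%
of the index can be cached in memory at any one time.)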
   Just as you suggest, the growing data size is forcing us to divide our
SolrCloud service into smaller application clusters, and we have already
split our collections into smaller shards. I know some abnormal behaviour
is bound to appear on a service like this as time goes on; however, the
high sys CPU with no known cause is a nightmare for us right now, so I am
looking for help from the community.
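   For anyone who wants to reproduce what I am seeing, here is a minimal
sketch (it assumes the psutil Python package is installed, and the PIDs
are only placeholders for the three Solr processes) that samples user
versus system CPU time for each Solr process every five seconds:

import time
import psutil  # assumption: installed separately, e.g. via pip

# Placeholder PIDs -- replace with the three Solr PIDs shown in top.
SOLR_PIDS = [12345, 12346, 12347]

procs = {pid: psutil.Process(pid) for pid in SOLR_PIDS}
prev = {pid: p.cpu_times() for pid, p in procs.items()}

while True:
    time.sleep(5)
    for pid, p in procs.items():
        cur = p.cpu_times()
        user = cur.user - prev[pid].user
        kernel = cur.system - prev[pid].system
        print("PID %d: user %.1fs, system %.1fs over the last 5s"
              % (pid, user, kernel))
        prev[pid] = cur

If the system time dominates for one particular instance, that would at
least narrow down which instance is triggering the kernel work.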
   Have you had a similar experience, and if so, how did you solve the
problem?

Best Regards

2016-03-17 14:16 GMT+08:00 Shawn Heisey <apa...@elyograg.org>:

> On 3/16/2016 8:27 PM, YouPeng Yang wrote:
> > Hi Shawn
> >    Here is my top screenshot:
> >
> >    https://www.dropbox.com/s/jaw10mkmipz943y/topscreen.jpg?dl=0
> >
> >    It is captured when my system is normal, and I have reduced the memory
> > size from the original 64GB down to 48GB.
>
> It looks like you have at least two Solr instances on this machine, one
> of which has over 600GB of index data, and the other has over 500GB of
> data.  There may be as many as ten Solr instances, but I cannot tell for
> sure what those Java processes are.
>
> If my guess is correct, this means that there's over a terabyte of index
> data, but you only have about 100GB of RAM available to cache it.  I
> don't think this is enough RAM for good performance, even if the disks
> are SSD.  You'll either need a lot more memory in each machine, or more
> machines.  The data may need to be divided into more shards.
>
> I am not seeing any evidence here of high CPU.  The system only shows
> about 12 percent total CPU usage, and very little of it is system (kernel).
>
> Thanks,
> Shawn
>
>
