On 8/18/2017 10:37 AM, Joe Obernberger wrote:
> Indexing about 15 million documents per day across 100 shards on 45
> servers.  Up until about 350 million documents, each of the solr
> instances was taking up about 1 core (100% CPU).  Recently, they all
> jumped to 700%.  Is this normal?  Anything that I can check for?
> 
> I don't see anything unusual in the solr logs.  Sample from the GC logs:

A sample from the GC logs won't reveal much.  We would need the entire
GC log.  To share something that large, you need a file sharing site,
something like Dropbox.  With the full log, we can analyze it for
indications of GC problems.
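
If GC logging isn't already enabled on your servers, JVM options like
these (for the Java 8 most Solr installs run) will produce a complete,
rotating log.  The log path here is only an example -- point it wherever
your Solr logs live:

  -Xloggc:/var/solr/logs/solr_gc.log
  -XX:+PrintGCDetails
  -XX:+PrintGCDateStamps
  -XX:+PrintGCApplicationStoppedTime
  -XX:+UseGCLogFileRotation
  -XX:NumberOfGCLogFiles=9
  -XX:GCLogFileSize=20M

Recent Solr versions set something very similar out of the box and
write the result to server/logs/solr_gc.log.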

Many things can cause a sudden massive increase in CPU usage.  In this
case, the most likely explanation is that indexing 15 million documents
per day has grown the index to the point where each server now needs
more resources than it has available.

The resource most commonly in short supply is unallocated system
memory, which the operating system uses to cache the index.  Another
possibility is that the heap requirement has outgrown the configured
max heap size, which is something the full GC log would show us.
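
As a quick sanity check, you can compare the on-disk index size on one
server against the memory the OS has left over for caching.  The index
path below is a guess -- substitute your actual Solr home:

  # Total size of the index data on this server
  du -sh /var/solr/data/*/data/index

  # Memory left over for the OS disk cache -- look at the buff/cache
  # and available columns, if your version of free shows them
  free -g

If the index is much larger than the leftover memory, that is exactly
the situation described on the wiki page below.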

These problems are discussed here:

https://wiki.apache.org/solr/SolrPerformanceProblems

Another useful piece of information: run the "top" utility on the
command line, press shift-M to sort by memory, and take a screenshot of
that display.  You would need a file-sharing website to share the image.
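
If a screenshot is inconvenient, a reasonably recent top can write a
single text snapshot instead.  The -o sort option is an assumption
about your procps version, so fall back to shift-M if it isn't there:

  top -b -n 1 -o %MEM > top-by-memory.txt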

Thanks,
Shawn
