Vijay Kokatnur [kokatnur.vi...@gmail.com] wrote:
> For the Solr Cloud setup, we are running a cron job with the following command
> to clear out the inactive memory.  It is working as expected.  Even though
> the index size of Cloud is 146GB, the used memory is always below 55GB.
> Our response times are better and no errors/exceptions are thrown. (This
> command causes issues in the 2-shard setup)

> echo 3 > /proc/sys/vm/drop_caches
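
For reference, a cron entry along these lines is what the poster seems to describe (the hourly schedule is an assumption; the post does not say how often the job runs). Running sync first is commonly recommended so dirty pages are flushed before the caches are dropped; writing 3 only discards clean pagecache plus dentries and inodes, so no data is lost either way:

```
# Hypothetical root crontab entry -- the schedule is an assumption
0 * * * *  sync && echo 3 > /proc/sys/vm/drop_caches
```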

As Shawn points out, this is under normal circumstances a very bad idea, but...

> Has anyone faced this issue before?

We did have some problems on a 256GB machine churning terabytes of data through 
40 concurrent Tika processes and into Solr. After some days, performance got 
really bad. When we ran top, we noticed that most of the time was spent in the 
kernel (the 'sy' figure on the '%Cpu(s):' line). The drop_caches trick worked 
for us too. Our systems guys explained that it was caused by virtual memory 
space fragmentation: the OS had to spend a lot of resources just on memory 
bookkeeping.

Try keeping an eye on the fraction of processing power spent in the kernel from 
the time you clear the cache until performance gets bad again. If it rises 
drastically, you might have the same problem.
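
If you want to watch this without staring at top, a minimal sketch (Linux only, assuming the standard /proc/stat layout where field 4 of the aggregate "cpu" line is system time in jiffies) is:

```shell
#!/bin/sh
# Sample the aggregate "cpu" line of /proc/stat twice, one second
# apart, and report the share of CPU time spent in the kernel.
# Field 4 is system ('sy') time; summing fields 2..NF gives total time.
snap() { awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i; print $4, t}' /proc/stat; }

set -- $(snap); s1=$1; t1=$2
sleep 1
set -- $(snap); s2=$1; t2=$2

# Percentage of the interval spent in kernel mode
pct=$(( (s2 - s1) * 100 / (t2 - t1) ))
echo "kernel (sy) share over 1s: ${pct}%"
```

Run it from cron or a loop and log the output; a steadily climbing figure after each cache drop would match the symptom described above.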

- Toke Eskildsen
