On 11/27/2012 10:36 AM, Jack Krupansky wrote:
> Okay, if performance isn't the reason for the optimize, what is the
> reason that you are using?
>
> 8GB for Java heap seems low for a 22GB index. How much Java heap is
> available when the app is running?
>
> Are these three separate Solr instances/JVMs on the same machine?
> How many cores for the machine?

First, thank you for taking the time to look into how things are going
for me. I really appreciate it.
I am optimizing purely to eliminate deleted documents. I will admit
that when we first got going on Solr 1.4.0, performance was a small
concern, but even way back then, rumblings on the mailing list said
"don't optimize for performance reasons."
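For anyone following along, removing deleted documents doesn't strictly require a full optimize. A sketch of both approaches against a Solr 3.x update handler (the host, port, and core name "core0" here are illustrative assumptions, not my actual setup):

```shell
# Full optimize: merges the index down to one segment and rewrites it all.
# This is what purges every deleted document, at the cost of heavy I/O.
curl 'http://localhost:8983/solr/core0/update?optimize=true'

# Lighter alternative: expungeDeletes on a commit only merges the
# segments that actually contain deleted documents.
curl 'http://localhost:8983/solr/core0/update' \
  -H 'Content-Type: text/xml' \
  --data-binary '<commit expungeDeletes="true"/>'
```

The expungeDeletes route can still trigger large merges when the deletions are spread across many segments, so it isn't free either.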
Each server runs one Solr JVM (the Jetty 6 bundled with Solr 3.5) with
an 8GB heap, and each index shard lives in its own Solr core. Each
server has 64GB of RAM and two quad-core CPUs, for a total of eight CPU
cores. Two servers make up a complete index chain: one server has three
of the 22GB (cold) shards plus the 800MB (hot) shard, and the other has
the remaining three 22GB shards.
Top output for the Solr process, and the on-disk size of the index data
(du reports KB, so 71606072 KB is roughly 68GB):

PID   USER    PR NI VIRT  RES SHR  S %CPU %MEM TIME+   COMMAND
17823 ncindex 20 0  80.8g 17g 9.4g S 2.0  28.6 4548:18 java

ncindex@idxa1 ~ $ du -s /index/solr/data/
71606072 /index/solr/data/
One core's entry from /proc/cpuinfo:

processor  : 7
vendor_id  : GenuineIntel
cpu family : 6
model      : 23
model name : Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
stepping   : 6
cpu MHz    : 2826.535
cache size : 6144 KB
To see whether my heap is too small, I connected jconsole remotely to a
3.5.0 server via JMX. The numbers look OK to me; I'm including a link to
a jconsole screenshot. I could probably drop the heap lower, but that
might cause issues with DIH full imports, which we do occasionally when
there are major changes to the database.
Jconsole screenshot:
https://dl.dropbox.com/u/97770508/solr-jconsole.png
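For reference, remote JMX access like this is typically enabled with JVM flags along these lines. A sketch only: the port number and the disabled authentication/SSL are illustrative assumptions, not my actual configuration, and auth should only be disabled on a trusted network:

```shell
# Hedged sketch: starting Solr's bundled Jetty with an 8GB heap and
# remote JMX enabled so jconsole can connect from another machine.
java -Xms8g -Xmx8g \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=18983 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar start.jar
```

With those flags, jconsole connects to host:18983 and shows heap usage, GC activity, and thread counts for the Solr process.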
Thanks,
Shawn