I don't know of a way to tell Solr to load all the indexes into
memory, but if you were to simply read all the index files at the OS
level, that would do it. Under a Unix OS, "cat * > /dev/null" run in
each index directory would work. Under Windows, I can't think of a way
to do it off the top of my head, but if you had Cygwin installed, you
could use the Unix method. That's not really necessary, however. Just
the act of running queries against the index will load the relevant
bits into the OS disk cache, making subsequent queries go to RAM
instead of disk.
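If you did want to warm the disk cache by hand, a quick script like
the one below would read every index file once. This is just a sketch;
the /var/solr/data path and the core directory layout are placeholders
for wherever your cores actually live on disk.

    #!/bin/sh
    # Read every file in each core's index directory so the OS page
    # cache holds it in RAM. Adjust SOLR_DATA to your actual path.
    SOLR_DATA=/var/solr/data
    for dir in "$SOLR_DATA"/*/data/index; do
      cat "$dir"/* > /dev/null
    done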
With 10 cores at up to 1.5GB each, your total index is about 15GB, a
little bigger than one of my static indexes. Performance might be
reasonable with 8GB of total RAM, if the machine is running Linux/Unix
and doing nothing but Solr, but it would be better with 12-16GB. It
would also be important to set up the Solr caches properly. Here's mine:
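<!-- filterCache: caches the document sets produced by fq filter
     queries; autowarmCount repopulates the most recently used
     entries after each commit. -->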
<filterCache
class="solr.FastLRUCache"
size="256"
initialSize="64"
autowarmCount="32"
cleanupThread="true"
/>
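<!-- queryResultCache: caches ordered lists of document IDs for
     recent queries and sort orders. -->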
<queryResultCache
class="solr.FastLRUCache"
size="1024"
initialSize="256"
autowarmCount="64"
cleanupThread="true"
/>
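<!-- documentCache: caches stored fields; it has no autowarmCount
     because internal document IDs change between searchers, so
     this cache cannot be autowarmed. -->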
<documentCache
class="solr.FastLRUCache"
size="16384"
initialSize="4096"
cleanupThread="true"
/>
The status page is a CGI script I wrote that queries a couple of Solr
status pages on all my VMs. It's heavily tied into the central
configuration used by my Solr build system, so it's not directly usable
by the masses.
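If you want to build something similar, the per-core admin pages are a
reasonable starting point. This is just a sketch; the host, port, and
core name are placeholders for your own install:

    # Cache hit ratios, searcher details, and handler stats for a core.
    curl "http://localhost:8983/solr/core0/admin/stats.jsp"
    # Index-level details such as numDocs, via the Luke request handler.
    curl "http://localhost:8983/solr/core0/admin/luke?numTerms=0"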
Thanks,
Shawn
On 7/17/2010 10:36 AM, marship wrote:
Hi, Shawn.
My indexes are smaller than yours. I only store "id" + "type" in my
indexes, so each "core" index is about 1-1.5GB on disk.
I don't have as many servers/VPSes as you have. In my opinion, my
problem is not CPU. If possible, I would prefer to add more memory so
the indexes fit in my server's RAM. At least memory is cheaper. And I
see that a lot of my CPU time is wasted because no program can fully
use it.
Is there a way to tell Solr to load all indexes into memory, like the
memory directory (RAMDirectory) in Lucene? That would be blazing fast.
Btw, how did you get that status page?