Linux will cache the open index files in RAM (in the filesystem cache)
after their first read, which makes the ramdisk generally useless.
Unless you're processing other files on the box with a combined size
greater than your total unused RAM (and thus need to micro-manage what
stays in RAM), I wouldn't recommend using a ramdisk - it's just more to
manage.  If you reboot the box and run a few searches, those first few
will likely be slower until all the index files are cached in memory.
After that point, the performance should be comparable because all
files are read out of RAM from then on.
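
If you want to see the page-cache effect for yourself, here's a rough
Python sketch (the index file path is just a placeholder - point it at
one of your own segment files) that times a cold read against a warm
read of the same file:

    #!/usr/bin/env python
    # Illustration of the page-cache effect described above: the first
    # read of a large file comes from disk, repeat reads come from RAM.
    import time

    INDEX_FILE = "/var/solr/data/index/_0.fdt"  # placeholder path

    def timed_read(path):
        start = time.time()
        with open(path, "rb") as f:
            while f.read(1024 * 1024):  # read in 1 MB chunks
                pass
        return time.time() - start

    cold = timed_read(INDEX_FILE)  # likely hits disk right after a reboot
    warm = timed_read(INDEX_FILE)  # served from the filesystem cache
    print("cold read: %.3fs, warm read: %.3fs" % (cold, warm))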

If Solr caches are enabled and your queries are repetitive, then that
could also be contributing to the speed of the repeated queries.  Note
that the above advice assumes your total unused RAM (not allocated to
the JVM or any other processes) is greater than the size of your
Lucene index files, which should be a safe assumption considering
you're trying to put the whole index in a ramdisk.
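
A quick way to sanity-check that assumption (again just a sketch, with
a placeholder index path, and treating "Cached" from /proc/meminfo as a
rough stand-in for reclaimable memory) is to compare unused memory
against the on-disk size of the index directory:

    #!/usr/bin/env python
    # Compare free + cached memory against the size of the index on disk.
    import os

    INDEX_DIR = "/var/solr/data/index"  # placeholder path

    def meminfo_kb(keys):
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                name, rest = line.split(":", 1)
                if name in keys:
                    values[name] = int(rest.strip().split()[0])  # kB
        return values

    def index_size_bytes(path):
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    mem = meminfo_kb({"MemFree", "Cached"})
    available = (mem.get("MemFree", 0) + mem.get("Cached", 0)) * 1024
    index = index_size_bytes(INDEX_DIR)
    print("free + cached memory: %.1f GB" % (available / 1e9))
    print("index size:           %.1f GB" % (index / 1e9))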

-Trey


On Thu, Jun 2, 2011 at 7:15 PM, Erick Erickson <erickerick...@gmail.com> wrote:
> What I expect is happening is that the Solr caches are effectively making the
> two tests identical, using memory to hold the vital parts of the index in both
> cases (after disk warming on the instance using the local disk).  I suspect if
> you measured the first few queries (assuming no auto-warming) you'd see the
> local-disk version be slower.
>
> Were you running these tests out of curiosity, or is running from /dev/shm
> something you're considering for production?
>
> Best
> Erick
>
> On Thu, Jun 2, 2011 at 5:47 PM, Parker Johnson <parker_john...@gap.com> wrote:
>>
>> Hey everyone.
>>
>> Been doing some load testing over the past few days.  I've been throwing a
>> good bit of load at an instance of Solr and have been measuring response
>> time.  We're running a variety of different keyword searches to keep
>> Solr's cache on its toes.
>>
>> I'm running the same load-testing scenario twice: once with the indexes
>> residing in /dev/shm and once with them on local disk.  The indexes are
>> about 4.5GB in size.
>>
>> On both tests the response times are the same.  I wasn't expecting that.
>> I do see the Java heap size grow when indexes are served from disk (which
>> is expected).  When the indexes are served out of /dev/shm, the Java heap
>> stays small.
>>
>> So in general, is this consistent behavior?  I don't really see the
>> advantage of serving indexes from /dev/shm.  When the indexes are being
>> served out of the ramdisk, is the Linux kernel or the memory mapper doing
>> something tricky behind the scenes to use the ramdisk in lieu of the Java heap?
>>
>> For what it is worth, we are running x86_64 RHEL 5.4 on a 12-core 2.27GHz
>> Xeon system with 48GB of RAM.
>>
>> Thoughts?
>>
>> -Park
>>
>>
>>
>
