What I expect is happening is that the Solr caches are effectively making the
two tests identical, with memory holding the vital parts of the index in both
cases (once the instance using the local disk has warmed up). I suspect that if
you measured the first few queries (assuming no auto-warming) you'd see the
local-disk version come out slower.
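
If you want to see that directly, a quick timing script along the lines of
the sketch below should show it. The URL, core name, and query terms are
placeholders for whatever your setup actually uses; run it once right after a
restart, with auto-warming disabled and the OS page cache dropped
(echo 3 > /proc/sys/vm/drop_caches), so the first pass really is cold.

import json
import time
import urllib.parse
import urllib.request

# Placeholder endpoint; point this at your own host, port, and core.
SOLR_URL = "http://localhost:8983/solr/collection1/select"

# A handful of distinct keyword queries so we are not just hitting the
# same queryResultCache entry over and over.
QUERIES = ["shoes", "denim jacket", "summer dress", "khaki pants", "scarf"]

def timed_query(q):
    """Run one query and return (elapsed seconds, numFound)."""
    params = urllib.parse.urlencode({"q": q, "rows": 10, "wt": "json"})
    start = time.time()
    with urllib.request.urlopen(SOLR_URL + "?" + params) as resp:
        body = json.load(resp)
    return time.time() - start, body["response"]["numFound"]

if __name__ == "__main__":
    # The first ("cold") pass is the interesting one: that is where the
    # local-disk instance should lag the /dev/shm instance.
    for label in ("cold", "warm"):
        for q in QUERIES:
            elapsed, hits = timed_query(q)
            print("%-4s %-15s %7.1f ms  %d hits" % (label, q, elapsed * 1000, hits))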

Were you running these tests out of curiosity, or is running from /dev/shm
something you're considering for production?

Best
Erick

On Thu, Jun 2, 2011 at 5:47 PM, Parker Johnson <parker_john...@gap.com> wrote:
>
> Hey everyone.
>
> Been doing some load testing over the past few days. I've been throwing a
> good bit of load at an instance of Solr and have been measuring response
> time.  We're running a variety of different keyword searches to keep
> Solr's cache on its toes.
>
> I'm running the exact same load-testing scenario twice: once with the indexes
> residing in /dev/shm and once with them on local disk.  The indexes are about
> 4.5GB in size.
>
> In both tests the response times are the same.  I wasn't expecting that.
> I do see the Java heap size grow when the indexes are served from disk (which
> is expected).  When the indexes are served out of /dev/shm, the Java heap
> stays small.
>
> So in general, is this consistent behavior?  I don't really see the
> advantage of serving indexes from /dev/shm.  When the indexes are being
> served out of a ramdisk, is the Linux kernel or the memory mapper doing
> something tricky behind the scenes to use the ramdisk in lieu of the Java heap?
>
> For what it is worth, we are running x86_64 RHEL 5.4 on a 12-core 2.27GHz Xeon
> system with 48GB of RAM.
>
> Thoughts?
>
> -Park
>
>
>
