I'm going to defer to the folks who actually know the guts here.
If you've turned down the number of entries in your Solr caches,
you're pretty much left with Lucene's internal caching, which is a mystery...
Best
Erick
On Mon, Dec 5, 2011 at 9:23 AM, Jeff Crump wrote:
Yes, and without doing much in the way of queries, either. Basically, our
test data has large numbers of distinct terms, each of which can itself be
large. Heap usage climbs in a straight line: 75 percent of the heap is
consumed by byte[] allocations at the leaves of an object graph like [...]
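For anyone wanting to check the same thing, a class histogram of live
objects will show the byte[] growth. A rough sketch of the commands,
assuming a HotSpot JDK with the standard tools on the PATH and with
<solr-pid> as a placeholder for the Solr process id:

  # Histogram of live objects by class; byte[] shows up as "[B"
  jmap -histo:live <solr-pid> | head -n 20

  # Full heap dump for offline analysis (e.g. in Eclipse MAT)
  jmap -dump:live,format=b,file=solr-heap.hprof <solr-pid>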
There's no good way to say to Solr "use only this
much memory for searching". You can certainly
limit the size somewhat by configuring your caches
to be small. But if you're sorting, then Lucene will
use up some cache space of its own, etc.
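For example, shrinking the query-time caches in the <query> section of
solrconfig.xml looks something like this (a sketch only; the sizes are
illustrative starting points, not recommendations):

  <filterCache class="solr.FastLRUCache" size="64" initialSize="64" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="64" initialSize="64" autowarmCount="0"/>
  <documentCache class="solr.LRUCache" size="64" initialSize="64"/>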
Are you actually running into problems?
Best
Erick
On Fri, Dec 2, 2011, Jeff Crump wrote:
Can anyone advise techniques for limiting the size of the RAM buffers to
begin with? As the index grows, I shouldn't have to keep increasing the
heap. We have a high-ingest, low-query-rate environment, and I'm not as
concerned with the query-time caches as I am with the segment core
readers/S[...]
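(To be concrete, the buffer I mean is the one solrconfig.xml controls.
A sketch with an illustrative value; depending on the Solr version this
sits under <indexDefaults>/<mainIndex> or under <indexConfig>:)

  <indexDefaults>
    <!-- flush in-memory index data once it reaches roughly this size -->
    <ramBufferSizeMB>32</ramBufferSizeMB>
  </indexDefaults>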
How much memory do you actually allocate to the JVM?
http://wiki.apache.org/solr/SolrPerformanceFactors#Memory_allocated_to_the_Java_VM
You need to increase the -Xmx value; otherwise your large RAM buffers
won't fit in the Java heap.
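Something like this when starting Solr (a sketch; the 2g figure is made
up for illustration, and start.jar assumes the stock Solr example layout):

  java -Xms2g -Xmx2g -jar start.jar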
sivaprasad wrote:
Hi,
I am getting the following error during [...]