Hi,
Because you went over the 31-32 GB heap threshold you lost the benefit of
compressed pointers, so even though you gave the JVM more memory the GC may
have had to work harder. This is a relatively well-educated guess, which you
can confirm if you run tests and look at GC counts, GC times, and JVM heap
memory pool usage.
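For reference, here is a minimal sketch (not from the original thread) of how
to read those numbers from inside the JVM using the standard
java.lang.management beans; nothing in it is Solr-specific, and the class name
is just a placeholder:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class GcSnapshot {
        public static void main(String[] args) {
            // Cumulative GC counts and times (ms) since JVM start, per collector.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC %-25s count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            // Current usage of each memory pool (eden, survivor, old gen, ...);
            // max is reported as -1 when the pool has no defined limit.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("Pool %-30s used=%dMB max=%dMB%n",
                        pool.getName(), u.getUsed() >> 20, u.getMax() >> 20);
            }
        }
    }

Comparing that output between a sub-31 GB heap and the larger heap should show
whether the GC really did work harder, and running java with
-XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode should also report
whether compressed oops are in use for the chosen -Xmx.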
On 4/17/2015 8:14 PM, Kamal Kishore Aggarwal wrote:
Hi,
As per this article, a Linux machine should preferably have RAM equal to about
1.5 times the index size. So, to verify this, I tried testing Solr performance
with different amounts of RAM allocated, keeping the rest of the configuration
(i.e. Solid State Drives, 8-core processor, 64-bit) the same in both cases.
Hi,
This may be irrelevant, but your machine configuration reminded me of some
reading I did a while back on memory vs. SSD.
Do a search on "solr ssd" and you should find some meaningful posts,
like this one: https://sbdevel.wordpress.com/2013/06/06/memory-is-overrated/
Regards
Puneet
On 18 Apr 2015, Kamal Kishore Aggarwal wrote:
Hi,
As per this article, a Linux machine should preferably have RAM equal to about
1.5 times the index size. So, to verify this, I tried testing Solr performance
with different amounts of RAM allocated, keeping the rest of the configuration
(i.e. Solid State Drives, 8-core processor, 64-bit) the same in both cases.
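As a rough illustration of that 1.5x rule of thumb (the index size below is
made up, and the factor is only the article's heuristic, not a guarantee), the
arithmetic is simply:

    public class RamRuleOfThumb {
        public static void main(String[] args) {
            double indexSizeGb = 40.0;   // hypothetical on-disk index size
            double factor = 1.5;         // heuristic from the article discussed above
            double suggestedRamGb = indexSizeGb * factor;
            // The headroom beyond the JVM heap is what the OS can use as page
            // cache for the index files.
            System.out.printf("Index %.0f GB -> suggested machine RAM ~%.0f GB%n",
                    indexSizeGb, suggestedRamGb);
        }
    }

Treat that number as a starting point for the kind of tests described above
rather than a hard requirement.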
: is there a way (or formula) to determine the required amount of RAM,
: e.g. by number of documents and document size?
There are a lot of factors that come into play ... the number of documents and
the size of documents aren't nearly as significant as the number of unique
indexed terms.
: with 4.000.00
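If you want to see those unique-term counts for your own index, here is a
rough sketch against the Lucene API; exact class names differ between Lucene
versions (older releases use MultiFields.getTerms instead of
MultiTerms.getTerms), and the index path and field names below are
placeholders. Solr's Luke request handler (/admin/luke) exposes similar
per-field term statistics over HTTP:

    import java.nio.file.Paths;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.MultiTerms;
    import org.apache.lucene.index.Terms;
    import org.apache.lucene.store.FSDirectory;

    public class UniqueTermCounts {
        public static void main(String[] args) throws Exception {
            // Point this at a Solr core's data/index directory (placeholder path).
            try (DirectoryReader reader = DirectoryReader.open(
                    FSDirectory.open(Paths.get("/var/solr/data/core1/data/index")))) {
                for (String field : new String[] {"id", "text"}) {  // example field names
                    Terms terms = MultiTerms.getTerms(reader, field);
                    // Terms.size() returns -1 when the codec does not store an exact count.
                    long unique = (terms == null) ? 0 : terms.size();
                    System.out.println(field + ": unique terms = " + unique);
                }
            }
        }
    }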
Hi all,
is there a way (or formula) to determine the required amount of RAM,
e.g. by number of documents and document size?
I need to index about 15.000.000 documents; each document is 1 to 3 KB
in size, and only the id of the document will be stored.
I've just implemented a testcase on one of o