What do you mean by JVM level? Run Solr on different ports on the same
machine? If you have a 32-core box, would you run 2, 3, or 4 JVMs?
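(Purely as an illustration, assuming the 4.x start.jar layout; the ports,
heap sizes and home directories below are made up:

  java -server -Xmx8g -Djetty.port=8983 -Dsolr.solr.home=/data/solr1 -jar start.jar
  java -server -Xmx8g -Djetty.port=8984 -Dsolr.solr.home=/data/solr2 -jar start.jar

i.e. two smaller heaps instead of one big one.)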
On Sun, Dec 4, 2016 at 8:46 PM, Jeff Wartes wrote:
>
> Here’s an earlier post where I mentioned some GC investigation tools:
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3c8f8fa32d-ec0e-4352-86f7-4b2d8a906...@whitepages.com%3E
Here’s an earlier post where I mentioned some GC investigation tools:
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3c8f8fa32d-ec0e-4352-86f7-4b2d8a906...@whitepages.com%3E
In my experience, there are many aspects of the Solr/Lucene memory allocation
model that scale wi
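To get a GC log to feed such tools, on the Java 7/8 JVMs that Solr 4.x
usually runs on, flags along these lines work (the log path is just an
example):

  java -verbose:gc -Xloggc:/var/log/solr/gc.log \
       -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
       -XX:+PrintGCApplicationStoppedTime \
       ... -jar start.jar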
On 12/3/2016 9:46 PM, S G wrote:
> The symptom we see is that the java clients querying Solr see response
> times in 10s of seconds (not milliseconds).
> Some numbers for the Solr Cloud:
>
> *Overall infrastructure:*
> - Only one collection
> - 16 VMs used
> - 8 shards (1 leader and 1 replica per shard)
That is a huge heap.
Once you have enough heap memory to hold a Java program’s working set,
more memory doesn’t make it faster. It just makes the GC take longer.
If you have GC monitoring, look at how much memory is in use after a full GC.
Add the space for new generation (eden, whatever), then a
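A quick way to watch that number, assuming the JDK tools are installed
(the pid and the 5-second sampling interval are placeholders):

  jstat -gcutil <solr-pid> 5000

The O column is old-generation occupancy; its value right after the FGC
(full GC count) column increments is roughly the working set.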
Thank you Eric.
Our Solr version is 4.10 and we are not doing any sorting or faceting.
I am trying to find some ways of investigating this problem.
Hence I am asking a few more questions, to see what steps are normally
taken in such situations.
(I did search a few of them on the Internet but could not
All of this is consistent with not having a properly
tuned Solr instance wrt # documents, usage
pattern, memory allocated to the JVM, GC
settings and the like.
Your leader issues can be explained by long
GC pauses too. Zookeeper periodically pings
each replica it knows about and if the response
times out, that replica can be marked as down.
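If long pauses are tripping the session timeout, one stopgap while you
tune GC is to raise it; in Solr 4.x it is commonly wired through solr.xml
as a system property (30 seconds here is only an example):

  java -DzkClientTimeout=30000 ... -jar start.jar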
The symptom we see is that the java clients querying Solr see response
times in 10s of seconds (not milliseconds).
And in Tomcat's gc.log file (where Solr is running), we see very bad GC
pauses: threads paused for roughly 0.5 seconds out of every second.
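For what it's worth, with -XX:+PrintGCApplicationStoppedTime enabled, the
per-pause totals can be pulled straight from that log:

  grep "Total time for which application threads were stopped" gc.log | tail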
Some numbers for the Solr Cloud:
What tool is that? I would like to run those stats on my Solr instance.
Bill Bell
Sent from mobile
> On Dec 2, 2016, at 4:49 PM, Shawn Heisey wrote:
>
>> On 12/2/2016 12:01 PM, S G wrote:
>> This post shows some stats on Solr which indicate that there might be a
>> memory leak in there.
>>
>> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
Hi,
All your stats show is that Solr needs a lot of memory. There is no
direct mapping from number of documents and queries to memory requirements,
as that article assumes. Different Solr projects can have extremely,
extremely different requirements. If you want to understand your memory
usage better,
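a heap histogram is one place to start, assuming the JDK tools are on the
box (note that :live forces a full GC first; the pid is a placeholder):

  jmap -histo:live <solr-pid> | head -20

That shows which classes dominate the live heap.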
On 12/2/2016 12:01 PM, S G wrote:
> This post shows some stats on Solr which indicate that there might be a
> memory leak in there.
>
> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
>
> Can someone please help to debug this?
> It might be a very good step in making Solr
Are you sure it's an actual leak, not just memory pinned by caches?
Related: https://issues.apache.org/jira/browse/SOLR-9810
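One quick way to check what the caches are holding, via the 4.x MBeans
endpoint (host and core name are placeholders):

  curl "http://localhost:8983/solr/collection1/admin/mbeans?stats=true&cat=CACHE&wt=json"

The stats include size, hits and evictions for each cache.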
On Fri, Dec 2, 2016 at 2:01 PM, S G wrote:
> Hi,
>
> This post shows some stats on Solr which indicate that there might be a
> memory leak in there.
>
> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
We’ve been running Solr 4.10.4 in prod for a couple of years. There aren’t
any obvious memory leaks in it. It stays up for months.
Objects ejected from the cache will almost always be tenured, so that tends
to cause full GCs.
If there are very few repeats in your query load, you’ll see a lot of
cache evictions.
I don't have a filter cache: I have completely disabled it, since I am not
using filter queries.
On Wed, 2015-04-08 at 14:00 -0700, pras.venkatesh wrote:
> 1. 8 nodes, 4 shards (2 nodes per shard)
> 2. each node having about 55 GB of data; in total there are 450 million
> documents in the collection, so the document size is not huge,
So ~112M docs/shard (450M docs / 4 shards).
> 3. The schema has 42 fields, it gets
Hi, one of the problems is now alleviated.
The number of lines with "can't identify protocol" in the "lsof" output is
now much reduced. Earlier it kept increasing up to "ulimit -n", causing the
"Too many open files" error, but now it stays at a much smaller
number. This happened after I changed max
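For anyone reproducing the check, commands along these lines work (the
pid is a placeholder):

  ulimit -n
  lsof -p <solr-pid> | grep -c "can't identify protocol"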
Hi Chris,
Thanks for your reply and sorry for the delay. Please find my replies
below in the mail.
On Sat, Dec 3, 2011 at 5:56 AM, Chris Hostetter wrote:
>
> : Till 3 days ago, we were running a Solr 3.4 instance with the following
> : java command line options
> : java -server -Xms2048m -Xmx4096m -Dsolr.solr.home=etc -jar start.jar
: Till 3 days ago, we were running a Solr 3.4 instance with the following
: java command line options
: java -server -Xms2048m -Xmx4096m -Dsolr.solr.home=etc -jar start.jar
:
: Then we increased the memory with the following options and restarted
: the server
: java -server -Xms4096m -Xmx10g -Dsolr.solr.home=etc -jar start.jar
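As an aside, on JVMs of that era a large heap was usually paired with the
CMS collector to keep pauses down; this is a sketch only, with placeholder
values, not a recommendation from this thread:

  java -server -Xms10g -Xmx10g \
       -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
       -XX:CMSInitiatingOccupancyFraction=75 \
       -XX:+UseCMSInitiatingOccupancyOnly \
       -Dsolr.solr.home=etc -jar start.jar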