Walter’s comment (that I’ve seen too BTW) is something
to pursue if (and only if) you have proof that Solr is spinning
up thousands of threads. Do you have any proof of that?
Having several hundred threads running is quite common BTW.
Attach jconsole or take a thread dump and it’ll be obvious.
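A minimal sketch of doing that from the command line (assumes a JDK with jps/jstack on the PATH; <solr-pid> is a placeholder):

    jps -l | grep start.jar                  # find the Solr JVM's pid
    jstack <solr-pid> > /tmp/solr-threads.txt
    grep -c '^"' /tmp/solr-threads.txt       # each thread in the dump starts with a quoted name

A count in the low hundreds is normal; thousands would point at the load-spike scenario Walter describes.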
Hi,
If we reduce the number of threads, is it going to help?
Is there any other way to debug this?
On Mon, 3 Feb, 2020, 2:52 AM Walter Underwood,
wrote:
> The only time I’ve ever had an OOM is when Solr gets a huge load
> spike and fires up 2000 threads. Then it runs out of space for stacks.
The only time I’ve ever had an OOM is when Solr gets a huge load
spike and fires up 2000 threads. Then it runs out of space for stacks.
I’ve never run anything other than an 8GB heap, starting with Solr 1.3
at Netflix.
Agreed about filter cache, though I’d expect heavy use of that to most
often b
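For context on the numbers: thread stacks live outside the Java heap, and at the 64-bit Linux default of roughly -Xss1m, 2000 threads means roughly 2 GB of native memory for stacks alone. If a thread dump really does show thousands of threads, the knob to look at is Jetty's request thread pool; in recent Solr versions it is exposed in server/etc/jetty.xml as the solr.jetty.threads.max property (names can differ by version, so check your own jetty.xml). A sketch of capping it from solr.in.sh:

    SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.threads.max=2000"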
Mostly I was reacting to the statement that the number
of docs increased by over 4x and then there were
memory problems.
Hmmm, that said, what does “heap space is getting full”
mean anyway? If you’re hitting OOMs, that’s one thing. If
you’re measuring the amount of heap consumed and
noticing tha
We CANNOT diagnose anything until you tell us the error message!
Erick, I strongly disagree that more heap is needed for bigger indexes.
Except for faceting, Lucene was designed to stream index data and
work regardless of the size of the index. Indexing is in RAM buffer
sized chunks, so large upda
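For reference, that RAM buffer is the ramBufferSizeMB setting in solrconfig.xml; an illustrative (default-ish) value:

    <!-- solrconfig.xml: buffered index updates are flushed to disk once this many MB are used -->
    <ramBufferSizeMB>100</ramBufferSizeMB>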
We have allocated 16 GB of heap space out of 24 GB.
There are 3 Solr cores here; for one core, when the number of documents
increases to around 4.5 lakh (450,000), this scenario happens.
On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson,
wrote:
> Allocate more heap and possibly add more R
Allocate more heap and possibly add more RAM.
What are your expectations? You can't continue to
add documents to your Solr instance without regard to
how much heap you’ve allocated. You’ve put over 4x
the number of docs on the node. There’s no magic here.
You can’t continue to add docs to a Solr
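For a standard install, the heap Erick is talking about is set in solr.in.sh; the values below are only illustrative, and whatever RAM is not given to the heap is left for the OS page cache:

    # solr.in.sh
    SOLR_HEAP="8g"
    # or set min/max explicitly:
    # SOLR_JAVA_MEM="-Xms8g -Xmx8g"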
What can we do in this scenario, as the Solr master node is going down and
the indexing is failing?
Please provide some workaround for this issue.
On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood,
wrote:
> What message do you get about the heap space.
>
> It is completely normal for Java to use al
What message do you get about the heap space?
It is completely normal for Java to use all of the heap before running a major GC.
That is how the JVM works.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo wrote:
>
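One way to see how much of the heap is live data rather than not-yet-collected garbage is to watch the collector with jstat (sketch; <solr-pid> is a placeholder):

    jstat -gcutil <solr-pid> 5000     # sample every 5 seconds
    # the O column (old-generation occupancy, %) right after a full GC
    # approximates the real live-data footprint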
Please reply, anyone.
On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo,
wrote:
> This is happening when the no of indexed document count is increasing.
> With 1 million docs it's working fine but when it's crossing 4.5
> million it's heap space is getting full.
>
>
> On Wed, 22 Jan, 2020, 7:05 PM M
This is happening when the number of indexed documents is increasing.
With 1 million docs it's working fine, but when it crosses 4.5 million
the heap space is getting full.
On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney,
wrote:
> Rajdeep, you say that "suddenly" heap space is getting full
Rajdeep, you say that "suddenly" heap space is getting full ... does
this mean that some variant of this configuration was working for you
at some point, or just that the failure happens quickly?
If heap space and faceting are indeed the bottleneck, you might make
sure that you have docValues enabled
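A sketch of what that looks like in the schema (the field name here is made up; changing docValues requires reindexing that field):

    <!-- managed-schema / schema.xml: faceting on a docValues field avoids building
         the uninverted structure on the Java heap -->
    <field name="brand" type="string" indexed="true" stored="true" docValues="true"/>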
On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> I had a similar issue with a large number of facets. There is no way
> (at least that I know of) you can get an acceptable response time from a
> search engine with a high number of facets.
Just for the record, it is doable under specific circumstances
The problem is happening for one index; for the other two indexes both
indexing and search are working fine.
But for this one index, after indexing completes, the heap space is getting
full and Solr is not responding at all.
Index sizes are almost the same, around
Anything else regarding GC tuning?
On Mon, 20 Jan, 2020, 8:08 AM Rajdeep Sahoo,
wrote:
> Initially we were getting the warning message as ulimit is low i.e. 1024
> so we changed it to 65000
> Using ulimit -u 65000.
>
> Then the error was failed to reserve shared memory error =1
> Because of th
Initially we were getting a warning message that ulimit is low (1024),
so we changed it to 65000
using ulimit -u 65000.
Then the error was "Failed to reserve shared memory (error = 1)".
Because of this we removed
-XX:+UseLargePages
Now the console log is showing
Could not find or load main class
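For what it's worth, JVM flags are case-sensitive and must not contain spaces; a malformed token like "-xx : +uselargepages" gets passed through as stray arguments, which commonly ends in exactly that "Could not find or load main class" error. A sketch of the corrected pieces (the solr user name and paths are assumptions):

    # solr.in.sh -- correct spelling, and only if the OS actually has huge pages configured
    SOLR_OPTS="$SOLR_OPTS -XX:+UseLargePages"

    # /etc/security/limits.conf -- make the raised limits persistent for the solr user
    solr  soft  nproc   65000
    solr  hard  nproc   65000
    solr  soft  nofile  65000
    solr  hard  nofile  65000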
I had a similar issue with a large number of facets. There is no way (at
least that I know of) you can get an acceptable response time from a search
engine with a high number of facets.
The way we solved the issue was to cache a shallow facet data structure in
the web services. Facet structures are refreshed
What message do you get that means the heap space is full?
Java will always use all of the heap, either as live data or not-yet-collected
garbage.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo wrote:
>
> H
Hi,
Currently no requests or indexing are happening.
It's just starting up,
and during that time the heap is getting full.
Index size is approx 1 GB.
On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood,
wrote:
> A new garbage collector won’t fix it, but it might help a bit.
>
> Requesting 200 fac
A new garbage collector won’t fix it, but it might help a bit.
Requesting 200 facet fields and having 50-60 of them with results is a huge
amount of work for Solr. A typical faceting implementation might have three to
five facets. Your requests will be at least 10X to 20X slower.
Check the CPU
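To make the arithmetic concrete: each facet.field parameter adds per-field work (and per-field heap structures) on every request, so a request naming 200 fields does this 200 times over. A typical three-facet request, with made-up core and field names:

    curl 'http://localhost:8983/solr/<core>/select?q=*:*&rows=0&facet=true&facet.field=brand&facet.field=color&facet.field=size'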
Hi,
Still facing the same issue...
Anything else that we need to check?
On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood,
wrote:
> With Java 1.8, I would use the G1 garbage collector. We’ve been running
> that combination in prod for three years with no problems.
>
> SOLR_HEAP=8g
> # Use G1 GC -
With Java 1.8, I would use the G1 garbage collector. We’ve been running that
combination in prod for three years with no problems.
SOLR_HEAP=8g
# Use G1 GC -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G
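For reference, the G1 block on that wiki page runs along these lines; treat the values as illustrative rather than a recommendation, and drop -XX:+UseLargePages if the OS has no huge pages configured (see the "failed to reserve shared memory" warning earlier in this thread):

    GC_TUNE=" \
      -XX:+UseG1GC \
      -XX:+ParallelRefProcEnabled \
      -XX:G1HeapRegionSize=8m \
      -XX:MaxGCPauseMillis=200 \
      -XX:+UseLargePages \
      -XX:+AggressiveOpts \
    "
    # -XX:+AggressiveOpts is deprecated in newer JDKs and can simply be omitted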
Please reply, anyone.
On Sun, 19 Jan, 2020, 10:55 PM Rajdeep Sahoo,
wrote:
> We are using Solr 7.7. RAM size is 24 GB and allocated heap is 12 GB. We
> have completed indexing; after starting the server, suddenly heap space is
> getting full.
> Added GC params, still not working, and JDK version