Susheel, my inference was based on the QTime value from the Solr log, not
the application log. Before the CPU spike, the query times gave no
indication that queries were slow or in the process of slowing down. Once GC
suddenly triggers high CPU usage, query execution slows down or chokes.
It may happen that you never find the query or its query time logged for
the requests which caused the OOM, because your app never got a chance to log
how much time it took...
So if you have proper exception handling in your client code, you may see the
exception being logged but not the query time for such queries.
I usually log queries that take more than 1 sec. Based on the logs, I haven't
seen anything alarming or any surge in slow queries, especially around the
time when the CPU spike happened.
I don't necessarily have the data for deep paging, but the usage of the sort
parameter (date in our case) has ...
Hi Shamik,
funny enough, we had a similar issue with our old legacy application
that still used plain Lucene code in a JBoss container.
Same here: there were no specific queries or updates causing it; the
performance just broke completely without any unusual usage. GC was rising
up to 99% or so. ...
It does not have to be query load - it can be a single heavy query that causes
high memory consumption (heavy faceting, deep paging, …) and after that GC
jumps in. Maybe you could start with the logs and see if there are queries with
a large QTime.
Emir
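A quick way to do what Emir suggests (just a sketch, assuming the default
solr.log location and the standard request log line that ends in
QTime=<millis>; the threshold is arbitrary):

  # show requests that took 10 seconds or more (QTime with 5+ digits)
  grep -E 'QTime=[0-9]{5,}' /var/solr/logs/solr.log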
> On 22 Sep 2017, at 12:00, shamik wrote:
All the tuning and scaling down of memory seemed to keep things stable for a
couple of days, but then it came down due to a huge spike in CPU usage,
contributed by G1 Old Generation GC. I'm really puzzled why the instances are
suddenly behaving like this. It's not that a sudden surge of load contributed
to this.
+1. Asking for way more than you need may result in OOM. rows and facet.limit
should be set carefully.
On Tue, Sep 19, 2017 at 1:23 PM, Toke Eskildsen wrote:
shamik wrote:
> I've facet.limit=-1 configured for few search types, but facet.mincount is
> always set as 1. Didn't know that's detrimental to doc values.
It is if you have a lot (1000+) of unique values in your facet field,
especially when you have more than 1 shard. Only ask for the number you need.
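For illustration, a request that only asks for what it needs could look like
the sketch below (collection, field and limits are made up, not from this
thread):

  curl 'http://localhost:8983/solr/mycollection/select' \
    -d 'q=foo' \
    -d 'rows=10' \
    -d 'facet=true' \
    -d 'facet.field=category' \
    -d 'facet.limit=100' \
    -d 'facet.mincount=1'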
Emir, after digging deeper into the logs (using New Relic / Solr admin) during
the outage, it looks like a combination of query load and the indexing process
triggered it. Based on the earlier pattern, memory would tend to increase at a
steady pace, but then surge all of a sudden, triggering OOM. After I ...
With frequent commits, autowarming isn’t very useful. Even with a daily bulk
update, I use explicit warming queries.
For our textbooks collection, I configure the twenty top queries and the twenty
most common words in the index. Neither list changes much. If we used facets,
I’d warm those, too.
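As a sketch of what such explicit warming queries look like inside the <query>
section of solrconfig.xml (the terms are placeholders; Walter's actual lists
aren't in the thread):

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">calculus</str><str name="rows">10</str></lst>
      <lst><str name="q">algebra</str><str name="rows">10</str></lst>
    </arr>
  </listener>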
Hi Shamik,
Can you tell us a bit more about how you use Solr before it OOMs? Do you
observe some heavy indexing, or does it happen during higher query load? Does
memory increase slowly or jump suddenly? Do you have any monitoring tool to see
if you can correlate some metric with the memory increase?
Thanks, the change seemed to have addressed the memory issue (so far), but now
GC has choked the CPUs instead. CPU utilization across the cluster clocked in
close to 400%, literally stalling everything. On a first look, the G1 Old
Generation collector looks to be the culprit ...
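One way to confirm that is to look at the GC logs themselves; a minimal sketch
for solr.in.sh, assuming Java 8 and that your Solr version exposes the
GC_LOG_OPTS hook (on Java 9+ the equivalent would be -Xlog:gc*):

  GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime"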
On Mon, 2017-09-18 at 20:47 -0700, shamik wrote:
> I did bring down the heap size to 8gb, changed to G1 and reduced the
> cache params. The memory so far has been holding up but will wait for
> a while before passing on a judgment.
Sounds reasonable.
> autowarmCount="0"/>
[...]
A suggester rebuild will mmap the entire index, so you'll need free memory for
that, depending on your index size.
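If the suggester does not need to be rebuilt on every commit or startup, that
can be turned off; a sketch (the component name, field and lookup
implementation are made up, not from this thread):

  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">title</str>
      <!-- avoid automatic rebuilds, which mmap the whole index -->
      <str name="buildOnStartup">false</str>
      <str name="buildOnCommit">false</str>
    </lst>
  </searchComponent>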
On 19 September 2017 at 13:47, shamik wrote:
I agree, I should have made it clear in my initial post. The reason I thought
it's a little trivial is that the newly introduced collection has only a few
hundred documents and is not being used in search yet. Neither is it being
indexed at a regular interval. The cache parameters are kept to a minimum as
well ...
Shamik:
bq: The part I'm trying to understand is whether the memory footprint
is higher for 6.6...
bq: it has two collections, one being introduced with 6.6 upgrade
If I'm reading this right, you added another collection to the system
as part of the upgrade. Of course it will take more memory.
Very nice article - thank you! Is there a similar article available
when the index is on HDFS? Sorry to hijack! I'm very interested in how
we can improve cache/general performance when running with HDFS.
-Joe
On 9/18/2017 11:35 AM, Erick Erickson wrote:
Walter, thanks again. Here's some information on the index and search
features.
The index size is close to 25 GB, with 20 million documents. It has two
collections, one being introduced with the 6.6 upgrade. The primary collection
carries the bulk of the index, the newly formed one being aimed at getting ...
Thanks for your suggestion, I'm going to tune it and bring it down. It just
happened to carry over from the 5.5 settings. Based on Walter's suggestion, I'm
going to reduce the heap size and see if it addresses the problem.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
This is suspicious too. Each entry is up to about
maxDoc/8 bytes + (string size of fq clause) long
and you can have up to 20,000 of them. An autowarm count of 512 is
almost never a good thing.
Walter's comments about your memory are spot on of course, see:
http://blog.thetaphi.de/2012/07/use-lu
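To put rough numbers on that (my arithmetic, using the ~20 million docs
mentioned elsewhere in the thread): 20,000,000 / 8 ≈ 2.5 MB per filterCache
entry, so 20,000 entries could approach 50 GB in the worst case, and
autowarming 512 of them re-executes 512 fq clauses on every new searcher. A
trimmed-down sketch (the sizes are illustrative, not a recommendation):

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>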
29G on a 30G machine is still a bad config. That leaves no space for the OS,
file buffers, or any other processes.
Try with 8G.
Also, give us some information about the number of docs, size of the indexes,
and the kinds of search features you are using.
wunder
Walter Underwood
wun...@wunderwoo
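As a rough budget on a 30 GB box, using the ~25 GB index mentioned elsewhere in
the thread (illustrative numbers, assuming little else runs on the host):

  30 GB RAM total
  -  8 GB  Solr heap (-Xms8g -Xmx8g)
  - ~1 GB  OS and other processes
  = ~21 GB left for the page cache, enough to keep most of the 25 GB index
    memory-mapped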
Apologies, 290 GB was a typo on my end, it should read 29 GB instead. I started
with my 5.5 configuration of limiting the RAM to 15 GB, but it started going
down once it reached the 15 GB ceiling. I tried bumping it up to 29 GB since
memory seemed to stabilize at 22 GB after running for a few hours, of course ...
You are running with a 290 GB heap on a 30 GB machine. That is the worst
Java config I have ever seen.
Use this:
SOLR_JAVA_MEM="-Xms8g -Xmx8g"
That starts with an 8 GB heap and stays there.
Also, you might think about simplifying the GC configuration. Or, if you are on
a recent release ...
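A minimal sketch of what that could look like in solr.in.sh, assuming a Solr
6.x start script where GC settings go through GC_TUNE (the pause target is
illustrative, not a recommendation):

  SOLR_JAVA_MEM="-Xms8g -Xmx8g"
  GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"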