Hi to all,
we moved Solr with the patched Lucene FieldCache into our production
environment. During tests we noticed random ConcurrentModificationExceptions
when calling the getCacheEntries method, due to this bug:
https://issues.apache.org/jira/browse/LUCENE-2273
We applied that patch as well, and added an abst
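For reference, a minimal sketch of how the cache contents can be inspected
through that method, using the stock Lucene 2.9 FieldCache API (the patched
build may expose this slightly differently):

  import org.apache.lucene.search.FieldCache;
  import org.apache.lucene.search.FieldCache.CacheEntry;

  public class FieldCacheInspector {
    // Prints one line per cache entry, so oversized or duplicated entries
    // (e.g. one per dynamic sort field) become visible.
    public static void dump() {
      CacheEntry[] entries = FieldCache.DEFAULT.getCacheEntries();
      for (CacheEntry entry : entries) {
        System.out.println(entry.getFieldName() + " -> "
            + entry.getValue().getClass().getSimpleName());
      }
    }
  }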
The fields I'm sorting on are dynamic, so one query sorts on
erick_time_1 and erick_timeA_1, another sorts on erick_time_2, and so
on. What we see in the heap is a lot of arrays, most of them filled
with 0s, maybe because these timestamp fields are not present in all
the documents.
By the w
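That would be consistent with how the FieldCache is populated: for a
long-valued sort field it allocates one long per document in the whole
index, so documents that lack the field simply stay at 0. A minimal sketch
against the Lucene 2.9 API (the field name is only an example):

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.search.FieldCache;

  public class SparseLongFieldDemo {
    // The cached array is sized to maxDoc(), regardless of how many
    // documents actually contain the field; missing docs read as 0.
    public static void show(IndexReader reader) throws Exception {
      long[] values = FieldCache.DEFAULT.getLongs(reader, "erick_time_1");
      System.out.println("cached longs: " + values.length
          + ", maxDoc: " + reader.maxDoc());
    }
  }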
Hmmm, I'm missing something here then. Sorting over 15 fields of type long
shouldn't use much memory, even if all the values are unique. When you say
"12-15 dynamic fields", are you talking about 12-15 fields per query out of
XXX total fields? And is XXX large? At a guess, how many different fields
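For a rough sense of scale, a back-of-the-envelope estimate, assuming the
long-valued FieldCache representation (8 bytes per document per cached
field) and the 1,691,145-document index mentioned in this thread:

  public class SortMemoryEstimate {
    public static void main(String[] args) {
      long maxDoc = 1691145L;  // index size reported in this thread
      int cachedFields = 15;   // distinct long sort fields, per the question above
      long bytes = maxDoc * 8L * cachedFields;  // one long per doc per field
      System.out.printf("~%d MB for %d cached long fields%n",
          bytes / (1024 * 1024), cachedFields);
    }
  }

That alone is manageable; the risk is that every new dynamic field name gets
its own maxDoc-sized array for as long as the reader is open, so usage keeps
growing with the number of distinct field names seen, which would fit OOMs
that only show up after a few hours.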
Hi Erick,
the index is quite small (1,691,145 docs) but sorting is heavy and
often on unique timestamp fields.
OOMs occur after somewhere between three and four hours, depending
also on which parts of the application users browse.
We use SolrJ to make the queries, so we did not use Readers obje
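For illustration, the kind of query involved looks roughly like this with
the standard SolrJ 1.4 API (the URL and the dynamic field name here are
placeholders, not the real ones):

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class DynamicSortQuery {
    public static void main(String[] args) throws Exception {
      // Plain HTTP client; no IndexReader is handled on the client side.
      CommonsHttpSolrServer server =
          new CommonsHttpSolrServer("http://localhost:8983/solr");
      SolrQuery query = new SolrQuery("*:*");
      // Each session may sort on a different dynamic timestamp field, so the
      // server-side FieldCache ends up with one entry per distinct field name.
      query.addSortField("erick_time_1", SolrQuery.ORDER.desc);
      query.setRows(20);
      QueryResponse response = server.query(query);
      System.out.println("found: " + response.getResults().getNumFound());
    }
  }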
H.. A couple of details I'm wondering about. How many
documents are we talking about in your index? Do you get
OOMs when you start fresh or does it take a while?
You've done some good investigations, so it seems like there
could well be something else going on here than just "the usual
suspect
First of all, thanks for your answers.
Those OOMEs are pretty nasty for our production environment.
I didn't try the sort-by-function solution, as it is a Solr 1.5
feature and we prefer to stay on the stable 1.4 release.
I made a temporary patch that looks like it is working fine.
I patched the lucene-
No, this is basic to how Lucene works. You will need larger EC2 instances.
On Mon, Jun 21, 2010 at 2:08 AM, Matteo Fiandesio
wrote:
> Will compiling Solr with Lucene 2.9.3 instead of 2.9.1 solve this issue?
> Regards,
> Matteo
>
> On 19 June 2010 02:28, Lance Norskog wrote:
>> The Lucene implementation of sorting creates an array of four-byte
>> ints for every document in the index, and another array of the unique
>> values in the field.
Will compiling Solr with Lucene 2.9.3 instead of 2.9.1 solve this issue?
Regards,
Matteo
On 19 June 2010 02:28, Lance Norskog wrote:
> The Lucene implementation of sorting creates an array of four-byte
> ints for every document in the index, and another array of the unique
> values in the field.
The Lucene implementation of sorting creates an array of four-byte
ints for every document in the index, and another array of the unique
values in the field.
If the timestamps are 'date' or 'tdate' in the schema, they do not
need the second array.
You can also sort by a field's value with a function query.
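Concretely, for a string-based sort field that pair of arrays is the
FieldCache's StringIndex; a minimal sketch against the Lucene 2.9 API (the
field name is illustrative):

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.search.FieldCache;

  public class StringSortCacheDemo {
    public static void show(IndexReader reader) throws Exception {
      FieldCache.StringIndex index =
          FieldCache.DEFAULT.getStringIndex(reader, "some_timestamp_field");
      // order: one int per document (the per-document array above),
      // lookup: one entry per unique term in the field (the second array).
      System.out.println("order length:  " + index.order.length);
      System.out.println("lookup length: " + index.lookup.length);
    }
  }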
Hello,
we are experiencing OOM exceptions in our single-core Solr instance
(on a (huge) Amazon EC2 machine).
We investigated a lot in the mailing list and through jmap/jhat dump
analysis, and the problem resides in the Lucene FieldCache, which
fills the heap and blows up the server.
Our index is qui