On Mon, 2018-09-17 at 17:52 +0200, Vincenzo D'Amore wrote:
> org.apache.solr.common.SolrException: Error while processing facet
> fields:
> java.lang.OutOfMemoryError: Java heap space
>
> Here the complete stacktrace:
> https://gist.github.com/freedev/a14aa9e6ae33fc3ddb2f02d602b34e2b
>
> I suppos
On 9/17/2018 9:52 AM, Vincenzo D'Amore wrote:
> recently I had a few Java OOMs in my Solr 4.8.1 instance.
> Here is the configuration I have.
The only part of your commandline options that matters for OOM is the
max heap, which is 16GB for your server. Note that you should set the min
heap and max heap to the same value.
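For reference, a hedged sketch of what that typically looks like (the 16g value simply mirrors the max heap mentioned above, not a recommendation):

    # raw JVM options: same value for min and max heap
    -Xms16g -Xmx16g
    # or, on newer Solr versions started via bin/solr, in solr.in.sh:
    SOLR_HEAP="16g"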
We put nginx servers in front of our three standalone Solr servers and our
three-node Galera cluster; it works very well, and the amount of control it
gives you is really helpful.
On Tue, Dec 19, 2017 at 10:58 AM, Walter Underwood wrote:
> On Dec 19, 2017, at 7:38 AM, Toke Eskildsen wrote:
>
> Let's say we change Solr, so that it does not re-issue queries that
> caused nodes to fail. Unfortunately that does not solve your problem as
> the user will do what users do on an internal server error: Press
> reload.
That would work, be
On Mon, 2017-12-18 at 15:56 -0500, Susheel Kumar wrote:
> Technically I agree with you, Shawn, on fixing the OOME cause. In fact it
> is not an issue any more, but I was testing for HA when planning for any
> failures.
> At the same time it's hard to convince business folks that HA wouldn't be
> there in case of OOME.
Hi Susheel,
If a single query can cause a node to fail, and if a retry causes replicas to
be affected (still to be confirmed), then preventing retry logic on the Solr
side can only partially solve the issue - retry logic can exist on the client
side and it will result in the replicas’ OOM. Again, not sure if Solr
On Mon, Dec 18, 2017 at 12:57 PM Susheel Kumar wrote:
Technically I agree with you, Shawn, on fixing the OOME cause. In fact it is
not an issue any more, but I was testing for HA when planning for any failures.
At the same time it's hard to convince business folks that HA wouldn't be
there in case of OOME.
I think the best option is to enable timeAllowed for now.
T
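For anyone following along, a hedged illustration of enabling timeAllowed; it takes milliseconds and can be passed per request or set as a handler default (the 5000 ms value and the /select handler are only examples):

    # per request
    /select?q=*:*&timeAllowed=5000
    # or as a default in solrconfig.xml on the request handler:
    <lst name="defaults">
      <int name="timeAllowed">5000</int>
    </lst>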
On 12/18/2017 9:01 AM, Susheel Kumar wrote:
> Any thoughts on how one can provide HA in these situations.
As I have said already a couple of times today on other threads, there
are *exactly* two ways to deal with OOME. No other solution is possible.
1) Configure the system to allow the process t
Shawn/Emir - it's the Java heap space issue. I can see in GCViewer a sudden
jump in heap utilization, and finally Full GC lines and the OOM killer script
killing Solr.
What I wonder is: if there is a retry from the coordinating node which is
causing this OOM query to spread to the next set of replicas, then how can we
Ah, I misunderstood your use case - it is not the node that receives the query
that OOMs, but the nodes included in the distributed query that OOM. I would
also say that this is expected, because queries to particular shards fail and
the coordinating node retries using other replicas, causing all
On 12/18/2017 7:36 AM, Susheel Kumar wrote:
Yes, Emir. If I repeat the query, it will spread to other nodes, but that's
not the case. This is my test env and I am deliberately executing the
query with a very high offset and wildcard to cause OOM, but executing it only
one time.
So it shouldn't spread to other replica sets, and at the end of my te
Hi Susheel,
The fact that only the node that received the query OOMs tells us that it is
about merging results from all shards and providing the final result. It is
expected that repeating the same query on some other node will result in
similar behaviour - it just means that Solr does not have enough memory t
On 10/16/2017 5:38 PM, Randy Fradin wrote:
Each shard has around 4.2 million documents which are around 40GB on disk.
Two nodes have 3 shard replicas each and the third has 2 shard replicas.
The text of the exception is: java.lang.OutOfMemoryError: Java heap space
And the heap dump is a full 24GB, indicating the full heap space was being
used.
On 10/16/2017 3:19 PM, Randy Fradin wrote:
> We are seeing a lot of full GC events and eventual OOM errors in Solr
> during indexing. This is Solr 6.5.1 running in cloud mode with a 24G heap.
> At these times indexing is the only activity taking place. The collection
> has 4 shards and 2 replicas a
When you restart, there are a bunch of threads that start up that can
chew up stack space.
If the message says something about "unable to create new native thread"
then it's not raw heap memory but the stack space.
Doesn't really sound like this is your error, but thought I'd mention it.
On Wed, Mar 1, 201
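For anyone who does hit that variant ("unable to create new native thread"), a hedged look at the usual knobs; the values here are examples only:

    ulimit -u     # OS limit on processes/threads for the user running Solr
    -Xss256k      # smaller per-thread stack leaves room for more native threads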
Thanks Shawn, of course it must be the -Xmx. It is interesting that we do not
see the OOM until restarting.
On March 1, 2017 8:18:11 PM EST, Shawn Heisey wrote:
>On 2/27/2017 4:57 PM, Rick Leir wrote:
>> We get an OOM after stopping then starting Solr (with a tiny index).
>Is there something I
On 2/27/2017 4:57 PM, Rick Leir wrote:
> We get an OOM after stopping then starting Solr (with a tiny index). Is
> there something I could check quickly before I break out the Eclipse
> debugger? Maybe Marple could tell me about problems in the index?
There are exactly two ways of dealing with
How much memory are you giving the JVM anyway?
On Mon, Feb 27, 2017 at 5:02 PM, Alexandre Rafalovitch wrote:
Marple will (probably) tell you whether you are indexing Chinese as
English text. Unlikely it would help with OOM (though I would be happy
to know if it did).
Are you actually getting the error as you are starting? Repeatedly?
What does the stack trace look like?
Regards,
Alex.
http://www.s
Thanks, Shawn, for looking into this. Your assumption is right: the end of the
graph is the OOM. I am trying to collect all the queries & ingestion numbers
around 9:12, but one more observation and a question from today:
I observed that 2-3 VMs out of 12 show high usage of heap even though
heavy ingestion
On 11/8/2016 12:49 PM, Susheel Kumar wrote:
> Ran into OOM Error again right after two weeks. Below is the GC log
> viewer graph. The first time we run into this was after 3 months and
> then second time in two weeks. After first incident reduced the cache
> size and increase heap from 8 to 10G. In
Hello,
Ran into the OOM error again right after two weeks. Below is the GC log viewer
graph. The first time we ran into this was after 3 months, and then the second
time in two weeks. After the first incident we reduced the cache size and
increased the heap from 8 to 10G. Interestingly, query and ingestion load is li
Hi Toke,
I think your guess is right. We have ingestion running in batches. We
have 6 shards & 6 replicas on 12 VMs, with around 40+ million docs on each
shard.
Thanks everyone for the suggestions/pointers.
Thanks,
Susheel
On Wed, Oct 26, 2016 at 1:52 AM, Toke Eskildsen
wrote:
> On Tue, 201
On Wed, Oct 26, 2016 at 4:53 AM, Shawn Heisey wrote:
> On 10/25/2016 8:03 PM, Susheel Kumar wrote:
>> Agree, Pushkar. I had docValues for sorting / faceting fields from
>> begining (since I setup Solr 6.0). So good on that side. I am going to
>> analyze the queries to find any potential issue. T
On Tue, 2016-10-25 at 15:04 -0400, Susheel Kumar wrote:
> Thanks, Toke. Analyzing GC logs helped to determine that it was a
> sudden
> death.
> The peaks in last 20 mins... See http://tinypic.com/r/n2zonb/9
Peaks yes, but there is a pattern of
1) Stable memory use
2) Temporary doubling of
On 10/25/2016 8:03 PM, Susheel Kumar wrote:
> Agree, Pushkar. I had docValues for sorting / faceting fields from
> begining (since I setup Solr 6.0). So good on that side. I am going to
> analyze the queries to find any potential issue. Two questions which I am
> puzzling with
>
> a) Should the b
Off the top of my head:
> a) Should the below JVM parameter be included for Prod to get heap dump
Makes sense. It may produce quite a large dump file, but then this is
an extraordinary situation so that's probably OK.
> b) Currently OOM script just kills the Solr instance. Shouldn't it be
> enhanced t
Agree, Pushkar. I had docValues for the sorting / faceting fields from the
beginning (since I set up Solr 6.0), so good on that side. I am going to
analyze the queries to find any potential issue. Two questions I am
puzzling over:
a) Should the below JVM parameter be included for Prod to get a heap dump
You should look into using docValues. docValues are stored off heap and
hence you would be better off than just bumping up the heap.
Don't enable docValues on existing fields unless you plan to reindex data
from scratch.
On Oct 25, 2016 3:04 PM, "Susheel Kumar" wrote:
> Thanks, Toke. Analyzin
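As a hedged illustration of that advice, a new sort/facet field would be declared with docValues enabled from the start; the field name and type below are placeholders and the exact type names depend on your schema version:

    <field name="created_ts" type="tlong" indexed="true" stored="false" docValues="true"/>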
Thanks, Toke. Analyzing GC logs helped to determine that it was a sudden
death. The peaks in last 20 mins... See http://tinypic.com/r/n2zonb/9
Will look into the queries more closely and also adjust the cache sizing.
Thanks,
Susheel
On Tue, Oct 25, 2016 at 3:37 AM, Toke Eskildsen
wrote:
I would also note that 8GB is cutting it close for a Java 8 JVM with
Solr. We use 12GB and have had issues with 8GB. But your mileage may vary.
On Tue, Oct 25, 2016 at 1:37 AM, Toke Eskildsen
wrote:
> On Mon, 2016-10-24 at 18:27 -0400, Susheel Kumar wrote:
> > I am seeing OOM script killed so
On Mon, 2016-10-24 at 18:27 -0400, Susheel Kumar wrote:
> I am seeing OOM script killed solr (solr 6.0.0) on couple of our VM's
> today. So far our solr cluster has been running fine but suddenly
> today many of the VM's Solr instance got killed.
As you have the GC-logs, you should be able to dete
Thanks, Pushkar. Solr was already killed by the OOM script, so I believe we
can't get a heap dump.
Hi Shawn, I used the Solr service scripts to launch Solr, and it looks like
bin/solr doesn't include the below JVM parameter by default:
"-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/the/dump"
On 10/24/2016 4:27 PM, Susheel Kumar wrote:
> I am seeing OOM script killed solr (solr 6.0.0) on couple of our VM's
> today. So far our solr cluster has been running fine but suddenly today
> many of the VM's Solr instance got killed. I had 8G of heap allocated on 64
> GB machines with 20+ GB of in
Did you look into the heap dump?
On Mon, Oct 24, 2016 at 6:27 PM, Susheel Kumar
wrote:
> Hello,
>
> I am seeing OOM script killed solr (solr 6.0.0) on couple of our VM's
> today. So far our solr cluster has been running fine but suddenly today
> many of the VM's Solr instance got killed. I had
On 5/5/2016 11:42 PM, Bastien Latard - MDPI AG wrote:
> So if I run the two following requests, it will only store once 7.5Mo,
> right?
> - select?q=*:*&fq=bPublic:true&rows=10
> - select?q=field:my_search&fq=bPublic:true&rows=10
That is correct.
Thanks,
Shawn
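For context, the 7.5 Mo figure follows from the filterCache storing one bit per document for each unique fq, no matter how many queries reuse that filter:

    60,000,000 docs x 1 bit = 60,000,000 / 8 bytes = 7,500,000 bytes ≈ 7.5 MB per cached filter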
Thank you Shawn!
So if I run the two following requests, it will only store once 7.5Mo,
right?
- select?q=*:*&fq=bPublic:true&rows=10
- select?q=field:my_search&fq=bPublic:true&rows=10
kr,
Bast
On 04/05/2016 16:22, Shawn Heisey wrote:
On 5/3/2016 11:58 PM, Bastien Latard - MDPI AG wrote:
T
: You could, but before that I'd try to see what's using your memory and see
: if you can decrease that. Maybe identify why you are running OOM now and
: not with your previous Solr version (assuming you weren't, and that you are
: running with the same JVM settings). A bigger heap usually means m
On 5/3/2016 11:58 PM, Bastien Latard - MDPI AG wrote:
> Thank you for your email.
> You said "have big caches or request big pages (e.g. 100k docs)"...
> Does a fq cache all the potential results, or only the ones the query
> returns?
> e.g.: select?q=*:*&fq=bPublic:true&rows=10
>
> => with this qu
Hi Tomás,
Thank you for your email.
You said "have big caches or request big pages (e.g. 100k docs)"...
Does a fq cache all the potential results, or only the ones the query
returns?
e.g.: select?q=*:*&fq=bPublic:true&rows=10
=> with this query, if I have 60 million public documents, would
You could use some memory analyzer tool (e.g. jmap); that could give you a
hint. But if you are migrating, I'd start by seeing if you changed something
from the previous version, including JVM settings and schema/solrconfig.
If nothing is different, I'd try to identify which feature is consuming
more me
Hi Tomás,
Thanks for your answer.
How could I see what's using memory?
I tried to add "-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/solr/logs/OOM_Heap_dump/"
...but this doesn't seem to be really helpful...
Kind regards,
Bastien
On 02/05/2016 22:55, Tomás Fernández Löbbe wrote:
You
You could, but before that I'd try to see what's using your memory and see
if you can decrease that. Maybe identify why you are running OOM now and
not with your previous Solr version (assuming you weren't, and that you are
running with the same JVM settings). A bigger heap usually means more work
How big is your request size from client to server?
I ran into OOM problems too. For me the reason was that I was sending big
requests (1+ docs) at too fast a pace.
So I put a throttle on the client to control the throughput of the requests
it sends to the server, and that got rid of the OOM e
I made two tests, one with MaxRamBuffer=128 and the second with
MaxRamBuffer=256.
In both I got OOM.
I also made two tests on autocommit:
one with a commit every 5 min, and the second with a commit every 100,000 docs
(softcommit disabled).
In both I got OOM.
merge policy - Tiered (max segment size o
Enable heap dump on OOME, and build the histogram with jhat.
Did you try reducing maxRamBuffer or max buffered docs, or enabling
autocommit?
On Tue, Jun 24, 2014 at 7:43 PM, adfel70 wrote:
> Hi,
>
> I am getting OOM during indexing 400 million docs (nested 7-20 children).
> The memory usage gets h
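For anyone wanting to follow the jhat suggestion above, a hedged sketch using the standard JDK tools; the pid and file names are placeholders:

    jmap -dump:format=b,file=/tmp/solr-heap.hprof <solr-pid>   # capture a binary heap dump
    jmap -histo <solr-pid>                                     # or just a class histogram
    jhat /tmp/solr-heap.hprof                                  # browse the dump on http://localhost:7000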
Please file a JIRA issue so that we can address this.
- Mark
On Jul 2, 2013, at 6:20 AM, Daniel Collins wrote:
> On looking at the code in SolrDispatchFilter, is this intentional or not?
> I think I remember Mark Miller mentioning that in an OOM case, the best
> course of action is basically to
On looking at the code in SolrDispatchFilter, is this intentional or not?
I think I remember Mark Miller mentioning that in an OOM case, the best
course of action is basically to kill the process; there is very little
Solr can do once it has run out of memory. Yet it seems that Solr catches
the O
Thanks for the feedback Daniel ... For now, I've opted to just kill
the JVM with System.exit(1) in the SolrDispatchFilter code and will
restart it with a Linux supervisor. Not elegant but the alternative of
having a zombie Solr instance walking around my cluster is much worse
;-) Will try to dig in
Ooh, I guess Jetty is trapping that java.lang.OutOfMemoryError, and
throwing it/packaging it as a java.lang.RuntimeException. The -XX option
assumes that the application doesn't handle the Errors and so they would
reach the JVM and thus invoke the handler.
Since Jetty has an exception handler that
A little more to this ...
Just on the chance this was a weird Jetty issue or something, I tried with
the latest Jetty 9 and the problem still occurs :-(
This is on Java 7 on debian:
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
Java HotSpot(TM) 64-Bit Server VM (build 23
I am running the sun version:
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
I get multiple Out of memory exceptions looking at my application and the
solr logs, but my script doesn't get called the first
: Usually any good piece of java code refrains from capturing Throwable
: so that Errors will bubble up unlike exceptions. Having said that,
Even if some piece of code catches an OutOfMemoryError, the JVM should
have already called the "-XX:OnOutOfMemoryError" hook - although from what
I can te
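For reference, the hook being discussed is normally wired up as a JVM option like the one below; the script path is illustrative (later Solr versions ship a similar oom_solr.sh for this purpose), and the JVM expands %p to the process id:

    -XX:OnOutOfMemoryError="/path/to/oom_killer.sh %p"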
Usually any good piece of Java code refrains from capturing Throwable,
so that Errors will bubble up, unlike exceptions. Having said that,
perhaps someone on the list can help if you share which particular
Solr version you are using where you suspect that the Error is being
eaten up.
On Fri, Sep 16
Hi Erick,
Thanks for the reply. It is very useful for me.
For point 1: I do need 10 cores and the number will keep increasing in the
future. I have documents that belong to different workspaces, so
1 workspace = 1 core; I can't go with one core. Currently I have 10 cores,
but in the future the count may
Multiple webapps will not help you; they're still limited by the same
underlying memory. In fact, it'll make matters worse since they won't share
resources.
So questions become:
1> Why do you have 10 cores? Putting 10 cores on the same machine
doesn't really do much. It can make lots of sense to put 10 cores o
Reducing the number of cache entries is definitely going to reduce heap usage.
Can you run those xlsx files separately through Tika and see if you are getting
the OOM issue?
On Mon, Sep 12, 2011 at 3:09 PM, abhijit bashetti wrote:
> I am facing the OOM issue.
>
> OTHER than increasing the RAM, can we change some other par
Are you using Tika to do the extraction of content?
You might be getting OOM because of a huge xlsx file.
Try having more RAM and you might not get the issue.
On Mon, Sep 12, 2011 at 12:44 PM, abhijit bashetti <
abhijitbashe...@gmail.com> wrote:
> Hi,
>
> I am getting the OOM error.
>
> I am wor
Send the GC log and force a dump if you can when it happens.
Bill Bell
Sent from mobile
On Aug 16, 2011, at 5:27 AM, Pranav Prakash wrote:
>>
>>
>> AFAIK, solr 1.4 is on Lucene 2.9.1 so this patch is already applied to
>> the version you are using.
>> maybe you can provide the stacktrace and more
>
>
> AFAIK, solr 1.4 is on Lucene 2.9.1 so this patch is already applied to
> the version you are using.
> maybe you can provide the stacktrace and more deatails about your
> problem and report back?
>
Unfortunately, I have only this much information with me. However, the
following are my specifications:
hey,
On Tue, Aug 16, 2011 at 9:34 AM, Pranav Prakash wrote:
> Hi,
>
> This might probably have been discussed long time back, but I got this error
> recently in one of my production slaves.
>
> SEVERE: java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the
> Sun VM Bug described in htt
2011/7/5 Chengyang :
> Is there any memory leak when I updating the index at the master node?
> Here is the stack trace.
>
> o.a.solr.servlet.SolrDispatchFilter - java.lang.OutOfMemoryError: Java heap
> space
You don't need a memory leak to get an OOM error in Java. It might just
happen that the a
: Subject: OOM on uninvert field request
method=fc
-Yonik
http://www.lucidimagination.com
On Tue, Jun 29, 2010 at 7:32 PM, Robert Petersen wrote:
> Hello I am trying to find the right max and min settings for Java 1.6 on 20GB
> index with 8 million docs, running 1.6_018 JVM with solr 1.4, and am
> currently have java set to an even 4GB (export JAVA_OPTS="-Xmx4096m
> -Xms4096m") for
e is only 8 million) so I don't want to do *that* too
often as the slaves will be quite stale by the time it's done! :)
Thanks for the help!
Yes, it is better to use ints for ids than strings. Also, the Trie int
fields have a compressed format that may cut the storage needs even
more. 8m * 4 = 32mb, times "a few hundred", we'll say 300, is 900mb of
IDs. I don't know how these fields are stored, but if they are
separate objects we've bl
Hi to all,
we moved Solr with the patched Lucene FieldCache into the production
environment.
During tests we noticed random ConcurrentModificationExceptions when calling
the getCacheEntries method, due to this bug:
https://issues.apache.org/jira/browse/LUCENE-2273
We applied that patch as well, and added an abst
The fields I'm sorting on are dynamic, so one query sorts on
erick_time_1, erick_timeA_1 and another sorts on erick_time_2 and so
on. What we see in the heap are a lot of arrays, most of them filled
with 0s, maybe due to the fact that these timestamp fields are not
present in all the documents.
By the w
Hmmm, I'm missing something here then. Sorting over 15 fields of type long
shouldn't use much memory, even if all the values are unique. When you say
"12-15 dynamic fields", are you talking about 12-15 fields per query out of
XXX total fields? And is XXX large? At a guess, how many different fields
Hi Erick,
the index is quite small (1691145 docs) but sorting is massive and
often on unique timestamp fields.
OOMs occur after somewhere between three and four hours, depending as well
on whether users browse a certain part of the application.
We use SolrJ to make the queries, so we did not use Reader obje
H.. A couple of details I'm wondering about. How many
documents are we talking about in your index? Do you get
OOMs when you start fresh or does it take a while?
You've done some good investigations, so it seems like there
could well be something else going on here than just "the usual
suspect
First of all, thanks for your answers.
Those OOMEs are pretty nasty for our production environment.
I didn't try the solution of ordering by function, as it is a Solr 1.5
feature and we prefer to use the stable version 1.4.
I made a temporary patch that looks like it is working fine.
I patched the lucene-
No, this is basic to how Lucene works. You will need larger EC2 instances.
On Mon, Jun 21, 2010 at 2:08 AM, Matteo Fiandesio
wrote:
> Compiling solr with lucene 2.9.3 instead of 2.9.1 will solve this issue?
> Regards,
> Matteo
>
> On 19 June 2010 02:28, Lance Norskog wrote:
>> The Lucene impleme
Will compiling Solr with Lucene 2.9.3 instead of 2.9.1 solve this issue?
Regards,
Matteo
On 19 June 2010 02:28, Lance Norskog wrote:
> The Lucene implementation of sorting creates an array of four-byte
> ints for every document in the index, and another array of the unique
> values in the field.
The Lucene implementation of sorting creates an array of four-byte
ints for every document in the index, and another array of the unique
values in the field.
If the timestamps are 'date' or 'tdate' in the schema, they do not
need the second array.
You can also sort by a field's value with a function que
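To put rough numbers on that for the index described earlier (about 1.7 million documents), a back-of-the-envelope estimate under the per-field cost described just above:

    1,691,145 docs x 4 bytes ≈ 6.5 MB per sorted field (the int array)
    + the array of unique values for non-date fields
    x one FieldCache entry per distinct dynamic *_time_N field

so many unique dynamic sort fields multiply that cost and keep accumulating until the heap is exhausted.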
Hello,
we are experiencing OOM exceptions in our single-core Solr instance
(on a (huge) Amazon EC2 machine).
We investigated a lot in the mailing list and through jmap/jhat dump
analysis, and the problem resides in the Lucene FieldCache, which fills
the heap and blows up the server.
Our index is qui
On Fri, Sep 25, 2009 at 8:20 AM, Phillip Farber wrote:
> Can I expect the index to be left in a usable state after an out of memory
> error during a merge, or is it most likely to be corrupt?
It should be in the state it was after the last successful commit.
-Yonik
http://www.lucidimagination.co
On Mon, Mar 30, 2009 at 12:53 PM, vivek sar wrote:
> I'm indexing total of 9 fields, with 5 having norms turned on.
So that's 500MB for norms alone, plus memory for Lucene's term index
(every 128th term by default). Solr also opens a new
IndexReader/Searcher before closing the old one, so there
Thanks Otis and Mike.
I'm indexing total of 9 fields, with 5 having norms turned on. I think
I may not need them and will try using omitNorms for them.
How do I make use of the RAM buffer in Solr? I couldn't find anything on
this on the wiki - any pointers?
Thanks,
-vivek
On Sat, Mar 28, 2009 at 1:09
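In case it helps, a hedged sketch of where that setting lives: in solrconfig.xml, under <indexDefaults> in Solr of that era (<indexConfig> in later versions); 100 is only an example value:

    <ramBufferSizeMB>100</ramBufferSizeMB>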
Still, 1024M ought to be enough to load one field's norms (how many
fields have norms?). If you do things requiring FieldCache that'll
also consume RAM.
It's also possible you're hitting this bug (false OOME) in Sun's JRE:
http://issues.apache.org/jira/browse/LUCENE-1566
Feel free to go vote
That's a tiny heap. Part of it is used for indexing, too. And the fact that
your heap is so small shows you are not really making use of that nice
ramBufferSizeMB setting. :)
Also, use omitNorms="true" for fields that don't need norms (if their types
don't already do that).
Otis
--
Sematext
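As a hedged illustration (the field name and type are placeholders), that just means adding the attribute to indexed fields whose scores don't need length normalization or index-time boosts; note the caveat above that some field types already omit norms on their own:

    <field name="body_text" type="text" indexed="true" stored="false" omitNorms="true"/>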
Using embedded is always more error prone...you're probably forgetting
to close some resource.
Make sure to close all SolrQueryRequest objects.
Start with a memory profiler or heap dump to try and figure out what's
taking up all the memory.
-Yonik
On Tue, Dec 2, 2008 at 1:05 PM, Sunil <[EMAIL PRO
Sorry for that. I didn't realise my mail had finally arrived. Sorry!!!
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: OOM on Solr Sort
Date: Tue, 22 Jul 2008 18:33:43 +
Hi,
We are developing a product in an agile manner and the current
implementation has data of a size ju
On 7/25/07, Luis Neves <[EMAIL PROTECTED]> wrote:
Yonik Seeley wrote:
> On 7/25/07, Luis Neves <[EMAIL PROTECTED]> wrote:
>> This turn out to be a bad idea ... for some reason using the
>> BoostQuery instead
>> of the BoostFunction slows the search to a crawl.
>
> Dismax throws bq in with the mai
Yonik Seeley wrote:
On 7/25/07, Luis Neves <[EMAIL PROTECTED]> wrote:
This turn out to be a bad idea ... for some reason using the
BoostQuery instead
of the BoostFunction slows the search to a crawl.
Dismax throws bq in with the main query, so it can't really be cached
separately, so iteratin
On 7/25/07, Luis Neves <[EMAIL PROTECTED]> wrote:
Luis Neves wrote:
> The objective is to boost the documents by "freshness" ... this is
> probably the cause of the memory abuse since all the "EntryDate" values
> are unique.
> I will try to use something like:
> EntryDate:[* TO NOW/DAY-3MONTH]^
Luis Neves wrote:
The objective is to boost the documents by "freshness" ... this is
probably the cause of the memory abuse since all the "EntryDate" values
are unique.
I will try to use something like:
EntryDate:[* TO NOW/DAY-3MONTH]^1.5
This turned out to be a bad idea ... for some reason u
Yonik Seeley wrote:
On 7/25/07, Luis Neves <[EMAIL PROTECTED]> wrote:
We are having some issues with one of our Solr instances when
autowarming is
enabled. The index has about 2.2M documents and 2GB of size, so it's not
particularly big. Solr runs with "-Xmx1024M -Xms1024M".
"Big" is relative