Computational Linguist & Java Developer
> From: Guilherme Viteri
> Sent: 10 June 2020 16:57
> To: solr-user@lucene.apache.org
> Subject: [EXTERNAL] - SolR OOM error due to query injection
>
> Hi,
>
> Environment: SolR 6.6.2, with org.apache.solr.solr-core:6.1.0. This setup
> has been running for at least 4 years without an OutOfMemory error. (It is
> never too late for an OOM…)
>
> This week, our search tool was attacked via 'SQL injection'-like requests,
> and that led to an OOM. These requests …
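A minimal sketch of one common mitigation for this kind of injection: escaping the raw user string with SolrJ's ClientUtils before it is concatenated into the query. The collection name, field, and hostile input below are illustrative, not from the thread.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.util.ClientUtils;

public class SafeSearch {
    public static void main(String[] args) throws Exception {
        String userInput = "term) OR (*:* AND";  // hostile input from a request parameter
        // Escape Lucene query syntax so the input is treated as literal text,
        // not as operators that can expand into an arbitrarily expensive query.
        String safe = ClientUtils.escapeQueryChars(userInput);
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("name:" + safe);
            q.setRows(10);  // also cap the number of rows a caller can request
            System.out.println(solr.query(q).getResults().getNumFound());
        }
    }
}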
Hello,

Ran into the OOM error again right after two weeks. Below is the GC log
viewer graph. The first time we ran into this was after 3 months, and the
second time was two weeks later. After the first incident we reduced the
cache size and increased the heap from 8 to 10G. Interestingly, query and
ingestion load is …
Hi Toke,

I think your guess is right. We have ingestion running in batches. We have
6 shards & 6 replicas on 12 VMs, with around 40+ million docs on each shard.

Thanks everyone for the suggestions/pointers.

Thanks,
Susheel

On Wed, Oct 26, 2016 at 1:52 AM, Toke Eskildsen wrote: …
On Tue, 2016-10-25 at 15:04 -0400, Susheel Kumar wrote:
> Thanks, Toke. Analyzing GC logs helped to determine that it was a sudden
> death.
> The peaks in last 20 mins... See http://tinypic.com/r/n2zonb/9

Peaks yes, but there is a pattern of
1) stable memory use,
2) a temporary doubling of …
On 10/25/2016 8:03 PM, Susheel Kumar wrote:
Off the top of my head:

a) Should the below JVM parameter be included for Prod to get a heap dump?

Makes sense. It may produce quite a large dump file, but then this is an
extraordinary situation, so that's probably OK.

b) Currently the OOM script just kills the Solr instance. Shouldn't it be
enhanced to …
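For reference, the heap dump that (a) asks about can also be triggered on demand from a running JVM. A minimal sketch using the HotSpot diagnostic MXBean; the dump path is illustrative, and the bean is specific to HotSpot-based JVMs (Oracle/OpenJDK).

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects, which forces a full GC first
        diag.dumpHeap("/tmp/solr-heap.hprof", true);
    }
}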
Agree, Pushkar. I had docValues for sorting / faceting fields from the
beginning (since I set up Solr 6.0). So good on that side. I am going to
analyze the queries to find any potential issue. Two questions which I am
puzzling with:

a) Should the below JVM parameter be included for Prod to get a heap dump …
You should look into using docValues. docValues are stored off heap, and
hence you would be better off than just bumping up the heap.

Don't enable docValues on existing fields unless you plan to reindex the
data from scratch.
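A minimal sketch of adding a new docValues-enabled field through the SolrJ Schema API. The field name, field type, and collection URL are illustrative; as noted above, an existing field would need a full reindex instead.

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;

public class AddDocValuesField {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            Map<String, Object> field = new LinkedHashMap<>();
            field.put("name", "price_sort");  // illustrative field name
            field.put("type", "tlong");       // type name depends on the schema version
            field.put("stored", false);
            field.put("docValues", true);     // sort/facet structures kept off heap
            new SchemaRequest.AddField(field).process(solr);
        }
    }
}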
On Oct 25, 2016 3:04 PM, "Susheel Kumar" wrote:
Thanks, Toke. Analyzing GC logs helped to determine that it was a sudden
death. The peaks in the last 20 mins... See http://tinypic.com/r/n2zonb/9

Will look into the queries more closely and also adjust the cache sizing.

Thanks,
Susheel

On Tue, Oct 25, 2016 at 3:37 AM, Toke Eskildsen wrote: …
I would also say that 8GB is cutting it close for a Java 8 JVM with Solr. We
use 12GB and have had issues with 8GB. But your mileage may vary.

On Tue, Oct 25, 2016 at 1:37 AM, Toke Eskildsen wrote: …
On Mon, 2016-10-24 at 18:27 -0400, Susheel Kumar wrote:
> I am seeing the OOM script kill Solr (Solr 6.0.0) on a couple of our VMs
> today. So far our Solr cluster has been running fine, but suddenly today
> many of the VMs' Solr instances got killed.

As you have the GC logs, you should be able to determine …
Thanks, Pushkar. Solr was already killed by the OOM script, so I believe we
can't get a heap dump.

Hi Shawn, I used the Solr service scripts to launch Solr, and it looks like
bin/solr doesn't include the below JVM parameter by default:

"-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/the/dump"
Did you look into the heap dump?

On Mon, Oct 24, 2016 at 6:27 PM, Susheel Kumar wrote: …
Hello,

I am seeing the OOM script kill Solr (Solr 6.0.0) on a couple of our VMs
today. So far our Solr cluster has been running fine, but suddenly today
many of the VMs' Solr instances got killed. I had 8G of heap allocated on
64 GB machines, with 20+ GB of index size on each shard.

What could be look…
… this fixed the problem. We're not sure exactly what the problem was, but
that's all it took.

Hope this helps someone!
Michael

View this message in context:
http://lucene.472066.n3.nabble.com/Non-Heap-OOM-Error-with-Small-Index-Size-tp4141175p4141509.html
While running a Solr-based web application on Tomcat 6, we have been
repeatedly running into out-of-memory issues. However, these OOM errors are
not related to the Java heap. A snapshot of our Solr dashboard just before
the OOM error reported:

Physical memory: 7.13/7.29 GB
JVM-Memory: 57.90 MB
Hi Shawn,

Thanks for the advice :). The JVM heap usage on the indexer machine has
been consistently about 95% (both total and old gen) for the past 3 days.
It might have nothing to do with Solr 3.6 vs Solr 4.2, because the Solr 3.6
indexer gets restarted once every 2-3 days.

Will investigate why …
Hi Shawn,

This is our own implementation of a data source (canonical name
com.flipkart.w3.solr.MultiSPCMSProductsDataSource), which pulls the data
from our downstream service, and it doesn't cache data in RAM. It fetches
the data in batches of 200 and iterates over it when DIH asks for it. I
will che…
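MultiSPCMSProductsDataSource is internal to Flipkart, but a minimal sketch of the same pattern, a DIH DataSource that streams rows in fixed-size batches instead of holding the full result in RAM, might look like this. The fetchBatch service call is hypothetical.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Properties;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.DataSource;

// Streams rows to DIH in batches of 200 without caching the full result in RAM.
public class BatchingDataSource extends DataSource<Iterator<Map<String, Object>>> {
    private static final int BATCH_SIZE = 200;

    @Override
    public void init(Context context, Properties initProps) { /* connect to the service */ }

    @Override
    public Iterator<Map<String, Object>> getData(final String query) {
        return new Iterator<Map<String, Object>>() {
            private List<Map<String, Object>> batch = new ArrayList<>();
            private int pos = 0;
            private int offset = 0;
            private boolean exhausted = false;

            @Override
            public boolean hasNext() {
                if (pos < batch.size()) return true;
                if (exhausted) return false;
                batch = fetchBatch(query, offset, BATCH_SIZE); // hypothetical downstream call
                offset += batch.size();
                pos = 0;
                exhausted = batch.size() < BATCH_SIZE;
                return !batch.isEmpty();
            }

            @Override
            public Map<String, Object> next() {
                if (!hasNext()) throw new NoSuchElementException();
                return batch.get(pos++);
            }
        };
    }

    @Override
    public void close() { /* release the service connection */ }

    // Hypothetical; replace with the real downstream service client.
    private List<Map<String, Object>> fetchBatch(String query, int offset, int size) {
        return new ArrayList<>();
    }
}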
On 5/21/2013 5:14 PM, Umesh Prasad wrote:
We have sufficient RAM on the machine (64 GB) and we have given the JVM 32
GB of memory. The machine runs indexing primarily. The JVM doesn't run out
of memory; it is the particular IndexWriter/SolrCore which does. Maybe we
have specified too low a memory for the IndexWriter …
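In Solr, that memory is the ramBufferSizeMB setting in solrconfig.xml; a minimal sketch of the underlying Lucene knob it maps to, shown directly against the Lucene API (the index path and buffer size are illustrative):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class WriterBuffer {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        // Flush in-memory segments once they reach 256 MB instead of Solr's
        // 100 MB default; the buffer must still fit in the heap alongside
        // everything else, so sizing it too high invites exactly this OOM.
        cfg.setRAMBufferSizeMB(256.0);
        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("/tmp/test-index")), cfg)) {
            writer.commit();
        }
    }
}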
Try again on a machine with more memory. Or did you do that already?

-- Jack Krupansky

-----Original Message-----
From: Umesh Prasad
Sent: Tuesday, May 21, 2013 1:57 AM
To: solr-user@lucene.apache.org
Subject: Hard Commit giving OOM Error on Index Writer in Solr 4.2.1
Hi,

Maybe you can share more info, such as your java command line or jstat
output from right before the OOM ...

Otis
Solr & ElasticSearch Support
http://sematext.com/

On May 21, 2013 1:58 AM, "Umesh Prasad" wrote: …
Hi All,

I am hitting an OOM error while trying to do a hard commit on one of the
cores.

The transaction log dir is empty, and DIH shows indexing going on for 13
hrs:

*Indexing since 13h 22m 22s*
Requests: 5,211,392 (108/s), Fetched: 1,902,792 (40/s), Skipped: 106,853,
Processed: 1,016,696 (21/s) …
Sent: Wednesday, May 16, 2012 7:50 AM
To: solr-user@lucene.apache.org
Subject: PermGen OOM Error
… so you have to increase the memory available to the JVM. What servlet
container are you using?

SH

On 05/16/2012 01:50 PM, richard.pog...@holidaylettings.co.uk wrote: …
When running Solr we are experiencing PermGen OOM exceptions; the problem
gets worse and worse the more documents are added and committed. Stopping
the Java process does not seem to free the memory.

Has anyone experienced issues like this?

Kind regards,
Richard
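A minimal sketch for watching the PermGen pool from inside the JVM, to confirm it really is the permanent generation (class metadata, interned strings) rather than the heap that fills up. Pool names vary by collector, and this assumes a pre-Java-8 HotSpot; Java 8+ replaced PermGen with Metaspace.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PermGenWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Pre-Java-8 HotSpot names the pool "PS Perm Gen", "CMS Perm Gen", etc.
            if (pool.getName().contains("Perm")) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%s: used=%dMB max=%dMB%n",
                        pool.getName(), u.getUsed() >> 20, u.getMax() >> 20);
            }
        }
    }
}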
On Fri, Sep 25, 2009 at 8:20 AM, Phillip Farber wrote:
> Can I expect the index to be left in a usable state after an out of memory
> error during a merge, or is it most likely to be corrupt?

It should be in the state it was after the last successful commit.

-Yonik
http://www.lucidimagination.com
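One way to verify that, sketched with Lucene's CheckIndex against the index directory. The path is illustrative; run it offline, ideally on a copy, since it opens the index directly and takes its lock.

import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.FSDirectory;

public class VerifyIndex {
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("/var/solr/data/core1/index"));
             CheckIndex checker = new CheckIndex(dir)) {
            CheckIndex.Status status = checker.checkIndex();
            // clean == true means every segment read back without errors
            System.out.println(status.clean
                    ? "Index is clean"
                    : "Bad segments: " + status.numBadSegments);
        }
    }
}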
Can I expect the index to be left in a usable state after an out of memory
error during a merge, or is it most likely to be corrupt? I'd really hate to
have to start this index build again from square one.

Thanks,
Phil

---
Exception in thread "http-8080-Processor2505"
java.lang. …