Hi,
the stack trace points to Tika, which is likely in the process of
extracting indexable plain text from some document.
Tika's job is one of the dirtiest you can think of in the whole indexing
business. We throw all kinds of more or less
documented/broken/misguided/ill-designed/cruft/truncated documents at it.
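For illustration, here is a minimal sketch (not from this thread) of capping
how much text Tika may return, so a single pathological document cannot
request an enormous char array; the 10M-character limit is an arbitrary
example value:

    // Sketch: bound the size of extracted text per document.
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.tika.Tika;

    public class BoundedExtract {
        public static void main(String[] args) throws Exception {
            Tika tika = new Tika();
            // Cap extracted text at ~10M chars (example value, tune for your heap).
            tika.setMaxStringLength(10 * 1024 * 1024);
            try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
                System.out.println(tika.parseToString(in));
            }
        }
    }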
Using Tika to extract content from documents is something I don't have
experience with, but it looks like your issue is in that process. If you're
able to reproduce this issue near the same place every time, maybe you've got
a document that has a lot of nested fields in it or otherwise causes the
extraction to blow up.
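If you can identify the suspect file, one way to test it outside Solr is to
run Tika's standalone app against it (the jar name and file name are
placeholders for whatever you have):

    java -Xmx512m -jar tika-app.jar --text suspect-document.doc

If that alone blows up a small heap, the document, not Solr, is the problem.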
Please find below the entire stack trace:
ERROR - 2014-07-25 13:14:22.202; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Requested
array size exceeds VM limit
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:790)
        at o
You might consider looking at your internal Solr cache configuration
(solrconfig.xml). These caches occupy heap space, and from my understanding do
not overflow to disk. So if there is not enough heap memory to support the
caches, an OOM error will be thrown.
I also believe these caches live i
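For reference, those caches are sized in solrconfig.xml; an illustrative
(not tuned) configuration looks like this:

    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>

Each entry held by these caches lives on the heap until it is evicted.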
Would you include the entire stack trace for your OOM message? Are you seeing
this on the client or server side?
Thanks,
Greg
On Jul 25, 2014, at 10:21 AM, Ameya Aware wrote:
> Hi,
>
> I am in the process of indexing a lot of documents, but after around 9
> documents I am getting the error below:
>
On Thu, 2013-08-01 at 15:24 +0200, Grzegorz Sobczyk wrote:
> Today I found this exception in the Solr logs: java.lang.OutOfMemoryError:
> Requested array size exceeds VM limit.
> At that time memory usage was ~200 MB / Xmx3g.
[...]
> Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
Well, this is probably not a rogue query. You can test this, of course,
by replaying all your queries on a test system. My guess is that it's
just too much stuff on too small a box.
Or you could have poorly configured Solr parameters. I've seen, for
instance, the filterCache sized at 1M entries, which runs the risk of
eating a huge amount of memory.
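A rough back-of-the-envelope calculation shows why (the index size here is
a made-up example): each filterCache entry can be a bitset of up to
maxDoc/8 bytes, so

    maxDoc                         = 10,000,000 docs (hypothetical)
    bytes per cached filter        ~ 10,000,000 / 8 = 1.25 MB
    worst case at size = 1,000,000 ~ 1.25 MB * 1,000,000 = 1.25 TB

which no 3 GB heap can possibly hold.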
Solr parameters listed in the dashboard:
-DzkHost=localhost:2181,172.27.5.121:2181,172.27.5.122:2181
-XX:+UseConcMarkSweepGC
-Xmx3072m
-Djava.awt.headless=true
Memory usage in the last days: http://i42.tinypic.com/29z5rew.png
It's a production system and there are too many requests to detect which
query is buggy.
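Since the buggy query can't be isolated up front, one option (these are
standard HotSpot flags, not specific to Solr; the dump path is a
placeholder) is to capture a heap dump when the OOM happens and inspect it
afterwards, e.g. with Eclipse MAT:

    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/path/to/dumps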
What are the memory parameters you start Solr with? The Solr admin page
will tell you how much memory the JVM has.
Also, cut/paste the queries you're running when you see this.
Best
Erick
On Thu, Aug 1, 2013 at 9:50 AM, Grzegorz Sobczyk wrote:
> After the node starts, I have only a few requests in the log:
After the node starts, I have only a few requests in the log:
https://gist.github.com/gsobczyk/6131503#file-solr-oom-log
This error occurred multiple times.
On 1 August 2013 15:33, Rafał Kuć wrote:
> Hello!
>
> The exception you've shown tells you that Solr tried to allocate an
> array that exceeded heap size.
Hello!
The exception you've shown tells you that Solr tried to allocate an
array that exceeded heap size. Do you use some custom sorts? Did you
send large bulks during the time that the exception occurred?
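If large bulks turn out to be the trigger, a minimal SolrJ sketch of
splitting one huge add into fixed-size batches might look like this (the
core URL and batch size of 1000 are placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchedIndexer {
        // Send documents in small batches instead of one huge request.
        public static void indexInBatches(List<SolrInputDocument> allDocs) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
            List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
            for (SolrInputDocument doc : allDocs) {
                batch.add(doc);
                if (batch.size() == 1000) {
                    server.add(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                server.add(batch);
            }
            server.commit();
            server.shutdown();
        }
    }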
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - ElasticSearch
Hi Ali,
You are getting an error because of the number of rows you are trying to
fetch. Solr will keep all the results in its queue before submitting them.
The solution is to page through your results (but be careful about deep
paging).
_Stephane
On July 13, 2013 at 2:31:57 AM, Ali, Saqib (docbook@
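A minimal SolrJ paging sketch of what Stephane describes (page size and core
URL are placeholders; note that very large start offsets get expensive,
which is the deep-paging caveat above):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class PagedFetch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(100);  // fetch 100 rows per request, never all at once
            long fetched = 0;
            long total;
            int got;
            do {
                q.setStart((int) fetched);
                QueryResponse rsp = server.query(q);
                total = rsp.getResults().getNumFound();
                got = rsp.getResults().size();
                fetched += got;
                // process rsp.getResults() here
            } while (got > 0 && fetched < total);
            server.shutdown();
        }
    }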