Hi,

I agree with Erick, it would be a good thing to have more details about
your configuration and collection.

Your maximum heap size is 24 GB (-Xmx). How much RAM does each server have?

By « 4 shard Solr cluster », do you mean 4 Solr server nodes or a
collection with 4 shards?

So, how many nodes are in the cluster?
How many shards and replicas does the collection have?
How many items are in the collection?
What is the size of the index?
How is the collection updated (frequency, how many items per day, what is
your hard commit strategy)?
How are the caches configured in solrconfig.xml?
Can you provide all the other JVM parameters?
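In case it helps, the cache section of solrconfig.xml typically looks
something like this (the values below are illustrative, not a
recommendation):

```xml
<!-- Inside the <query> section of solrconfig.xml -->
<query>
  <!-- Caches document sets matching filter queries (fq) -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <!-- Caches ordered lists of document ids for query results -->
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <!-- Caches stored fields of documents -->
  <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
</query>
```

Oversized caches or large autowarmCount values can add noticeable heap
pressure, which is why these settings are worth sharing.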

Regards

Dominique

2014-12-23 17:50 GMT+01:00 Erick Erickson <erickerick...@gmail.com>:

> Second most important part of your message:
> "When executing a huge query with many wildcards inside it the server"
>
> This is usually an anti-pattern. The very first thing
> I'd be doing is trying not to do this. See ngrams for infix
> queries, or shingles, or ReversedWildcardFilterFactory, or.....
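> As a minimal sketch of the n-gram idea (hypothetical Python, not
> Solr's actual implementation): index every substring of each term up
> front, so an infix lookup becomes an exact dictionary hit instead of
> a wildcard scan over the whole term dictionary.

```python
def ngrams(term, min_n=3, max_n=5):
    """Generate all n-grams of a term, like Solr's NGramFilterFactory."""
    grams = set()
    for n in range(min_n, max_n + 1):
        for i in range(len(term) - n + 1):
            grams.add(term[i:i + n])
    return grams

# Build an inverted index from n-gram -> document ids at index time.
docs = {1: "elasticsearch", 2: "solrcloud", 3: "research"}
index = {}
for doc_id, term in docs.items():
    for gram in ngrams(term):
        index.setdefault(gram, set()).add(doc_id)

# An infix search like *arc* is now a single dictionary lookup,
# not a scan over every unique term in the index.
print(sorted(index.get("arc", set())))  # -> [1, 3]
```

> The trade-off is a larger index in exchange for cheap infix matching;
> Solr's NGramFilterFactory applies the same idea in the analysis chain.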
>
> And if your corpus is very large with many unique terms it's even
> worse, but you haven't really told us about that yet.
>
> Best,
> Erick
>
> On Tue, Dec 23, 2014 at 8:30 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> > On 12/23/2014 4:34 AM, Modassar Ather wrote:
> >> Hi,
> >>
> >> I have a setup of a 4 shard Solr cluster with embedded zookeeper on one
> >> of them. The zkClient timeout is set to 30 seconds, -Xms is 20g and
> >> -Xmx is 24g.
> >> When executing a huge query with many wildcards in it, the server
> >> crashes and becomes non-responsive. Even the dashboard does not respond
> >> and shows a connection lost error. This requires me to restart the
> >> servers.
> >
> > Here's the important part of your message:
> >
> > *Caused by: java.lang.OutOfMemoryError: Java heap space*
> >
> >
> > Your heap is not big enough for what Solr has been asked to do.  You
> > need to either increase your heap size or change your configuration so
> > that it uses less memory.
> >
> > http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
> >
> > Most programs have pretty much undefined behavior when an OOME occurs.
> > Lucene's IndexWriter has been hardened so that it tries extremely hard
> > to avoid index corruption when OOME strikes, and I believe that works
> > well enough that we can call it nearly bulletproof ... but the rest of
> > Lucene and Solr will make no guarantees.
> >
> > It's very difficult to have definable program behavior when an OOME
> > happens, because you simply cannot know the precise point during program
> > execution where it will happen, or what isn't going to work because Java
> > did not have memory space to create an object.  Going unresponsive is
> > not surprising.
> >
> > If you can solve your heap problem, note that you may run into other
> > performance issues discussed on the wiki page that I linked.
> >
> > Thanks,
> > Shawn
> >
>
