And you didn't say how much RAM each server has?

2014-12-24 8:17 GMT+01:00 Dominique Bejean <dominique.bej...@eolya.fr>:

> Modassar,
>
> How many items are in the collection?
> I mean, how many documents per collection? 1 million, 10 million, …?
>
> How are the caches configured in solrconfig.xml?
> What is the size attribute value for each cache?
>
> Can you provide a sample query?
> Does it fail immediately after SolrCloud startup, or only after several hours?
>
> Dominique
>
> 2014-12-24 6:20 GMT+01:00 Modassar Ather <modather1...@gmail.com>:
>
>> Thanks for your suggestions.
>>
>> I will look into the link provided.
>> http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>>
>> This is usually an anti-pattern. The very first thing
>> I'd be doing is trying to not do this. See ngrams for infix
>> queries, or shingles, or ReversedWildcardFilterFactory, or.....
>>
>> We cannot avoid multiple wildcards since that is our users' requirement.
>> We try to discourage it, but the users insist on firing such queries. Also,
>> ngrams etc. could be tried, but our index is already huge and ngrams may
>> add a lot more to it. We are OK with such queries failing as long as other
>> queries are not affected.
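For reference, the ReversedWildcardFilterFactory approach mentioned above is an index-time filter that lets leading-wildcard queries run as fast prefix queries. A minimal schema.xml sketch (the field type name and parameter values here are illustrative, not taken from this thread) might look like:

```xml
<!-- Illustrative field type: tokens are also indexed reversed, so a query
     like *tion can be rewritten into a prefix query on the reversed term.
     Note the trade-off discussed above: it grows the index, because the
     reversed tokens are indexed in addition to the originals. -->
<fieldType name="text_rev" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```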
>>
>>
>> Please find the details below.
>>
>> So, how many nodes are in the cluster?
>> There are 4 nodes in total in the cluster.
>>
>> How many shards and replicas for the collection?
>> There are 4 shards and no replicas for any of them.
>>
>> How many items are in the collection?
>> If I understand the question correctly, there are two collections on each
>> node, and their sizes on each node are approximately 190GB and 130GB.
>>
>> What is the size of the index?
>> There are two collections on each node, and their sizes on each node are
>> approximately 190GB and 130GB.
>>
>> How is the collection updated (frequency, how many items per day, what is
>> your hard commit strategy)?
>> It is an optimized, read-only index. There are no intermediate updates.
>>
>> How are the caches configured in solrconfig.xml?
>> The filter cache, query result cache, and document cache are enabled.
>> Auto-warming is also done.
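For context, those caches are declared in solrconfig.xml. A hedged sketch (the size and autowarm values below are illustrative; the poster's actual values were not shared in this thread) looks like:

```xml
<!-- Illustrative values only; the actual configuration was not posted.
     On a large index, each filterCache entry can cost up to maxDoc/8 bytes
     (one bit per document), so big sizes translate directly into heap use. -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```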
>>
>> Can you provide all other JVM parameters?
>> -Xms20g -Xmx24g -XX:+UseConcMarkSweepGC
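As an aside, with a heap this large it usually helps to pin -Xms to -Xmx and turn on GC logging, so that long CMS pauses (which can blow past the 30s zkClientTimeout mentioned below) show up in a log. A hypothetical solr.in.sh fragment, with flag choices that are illustrative rather than the poster's actual settings:

```shell
# Hypothetical solr.in.sh fragment (illustrative flags, not from this thread):
# match -Xms to -Xmx to avoid heap resizing, and log GC activity so that
# pauses longer than the ZooKeeper client timeout can be spotted afterwards.
SOLR_JAVA_MEM="-Xms24g -Xmx24g"
GC_TUNE="-XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -Xloggc:/var/log/solr/gc.log"
```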
>>
>> Thanks again,
>> Modassar
>>
>> On Wed, Dec 24, 2014 at 2:29 AM, Dominique Bejean <dominique.bej...@eolya.fr> wrote:
>>
>> > Hi,
>> >
>> > I agree with Erick; it would be a good thing to have more details about
>> > your configuration and collection.
>> >
>> > Your heap size is 32GB. How much RAM is on each server?
>> >
>> > By "4 shard Solr cluster", do you mean 4 Solr server nodes, or a
>> > collection with 4 shards?
>> >
>> > So, how many nodes are in the cluster?
>> > How many shards and replicas for the collection?
>> > How many items are in the collection?
>> > What is the size of the index?
>> > How is the collection updated (frequency, how many items per day, what
>> > is your hard commit strategy)?
>> > How are the caches configured in solrconfig.xml?
>> > Can you provide all other JVM parameters?
>> >
>> > Regards
>> >
>> > Dominique
>> >
>> > 2014-12-23 17:50 GMT+01:00 Erick Erickson <erickerick...@gmail.com>:
>> >
>> > > Second most important part of your message:
>> > > "When executing a huge query with many wildcards inside it the server"
>> > >
>> > > This is usually an anti-pattern. The very first thing
>> > > I'd be doing is trying to not do this. See ngrams for infix
>> > > queries, or shingles, or ReversedWildcardFilterFactory, or.....
>> > >
>> > > And if your corpus is very large with many unique terms it's even
>> > > worse, but you haven't really told us about that yet.
>> > >
>> > > Best,
>> > > Erick
>> > >
>> > > On Tue, Dec 23, 2014 at 8:30 AM, Shawn Heisey <apa...@elyograg.org> wrote:
>> > > > On 12/23/2014 4:34 AM, Modassar Ather wrote:
>> > > >> Hi,
>> > > >>
>> > > >> I have a setup of a 4 shard Solr cluster with embedded ZooKeeper on
>> > > >> one of them. The zkClient timeout is set to 30 seconds, -Xms is 20g
>> > > >> and -Xmx is 24g.
>> > > >> When executing a huge query with many wildcards inside it, the
>> > > >> server crashes and becomes non-responsive. Even the dashboard does
>> > > >> not respond and shows a connection lost error. This requires me to
>> > > >> restart the servers.
>> > > >
>> > > > Here's the important part of your message:
>> > > >
>> > > > *Caused by: java.lang.OutOfMemoryError: Java heap space*
>> > > >
>> > > >
>> > > > Your heap is not big enough for what Solr has been asked to do.  You
>> > > > need to either increase your heap size or change your configuration
>> > > > so that it uses less memory.
>> > > >
>> > > > http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
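To make the heap-pressure point concrete, here is a rough back-of-envelope calculation of what a single filterCache can cost at this index scale. The document count and cache size below are assumed for illustration; they were not reported in this thread.

```python
# Rough worst-case heap estimate for Solr's filterCache.
# Each cached filter can be a bitset of maxDoc bits, i.e. maxDoc/8 bytes.
# All numbers here are illustrative assumptions, not measurements.

def filter_cache_bytes(max_doc: int, cache_size: int) -> int:
    """Worst-case bytes used by a full filterCache of bitset entries."""
    bytes_per_entry = max_doc // 8  # one bit per document in the shard
    return bytes_per_entry * cache_size

# e.g. an assumed 200 million docs in a shard and a filterCache size of 512:
cost = filter_cache_bytes(200_000_000, 512)
print(cost / (1024 ** 3))  # ≈ 11.9 GiB of heap in the worst case
```

At that scale a single cache can consume roughly half of a 24g heap by itself, which is why cache sizes matter as much as -Xmx.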
>> > > >
>> > > > Most programs have pretty much undefined behavior when an OOME
>> > > > occurs. Lucene's IndexWriter has been hardened so that it tries
>> > > > extremely hard to avoid index corruption when OOME strikes, and I
>> > > > believe that works well enough that we can call it nearly
>> > > > bulletproof ... but the rest of Lucene and Solr will make no
>> > > > guarantees.
>> > > >
>> > > > It's very difficult to have definable program behavior when an OOME
>> > > > happens, because you simply cannot know the precise point during
>> > > > program execution where it will happen, or what isn't going to work
>> > > > because Java did not have memory space to create an object.  Going
>> > > > unresponsive is not surprising.
>> > > >
>> > > > If you can solve your heap problem, note that you may run into other
>> > > > performance issues discussed on the wiki page that I linked.
>> > > >
>> > > > Thanks,
>> > > > Shawn
>> > > >
>> > >
>> >
>>
>>
>
>
