Thanks! We enlarged the max heap size and it looks ok so far.
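(For anyone finding this thread later: the heap is typically raised via the JVM options passed to Tomcat, e.g. in bin/setenv.sh on Unix or the service settings on Windows. A sketch; the values are illustrative, not recommendations:)

```shell
# Tomcat JVM options sketch (e.g. bin/setenv.sh); -Xms/-Xmx values
# here are illustrative placeholders, pick them for your own machine.
export JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx12g"
```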

On Fri, Apr 9, 2010 at 4:23 AM, Lance Norskog <goks...@gmail.com> wrote:

> Since the facet "cache" is hard-allocated and has no eviction policy,
> you could do a facet query on each core as part of the warm-up. This
> way, the facets will not fail.  At that point, you can tune the Solr
> cache sizes.
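(A warm-up facet query of this kind is usually registered as a searcher listener in solrconfig.xml. A minimal sketch; the field names below are placeholders for your real facet fields:)

```xml
<!-- solrconfig.xml: run a faceting query whenever a searcher opens, so
     the facet structures are populated before user queries arrive.
     facet_field1/facet_field2 are placeholder names. -->
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="rows">0</str>
      <str name="facet">true</str>
      <str name="facet.field">facet_field1</str>
      <str name="facet.field">facet_field2</str>
    </lst>
  </arr>
</listener>
```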
>
> Solr caches documents, searches, and filter queries. Filter queries
> are cached as sets of documents, stored as a bitmap with one bit per
> document (I think). Searches are cached as a sorted list of document
> numbers (they preserve the relevance order, whereas filters don't
> care about order). The document cache holds all of the fields of a
> document.
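(To put rough numbers on this for the index discussed later in the thread, ~90M documents per core, here is a back-of-envelope sketch. These are loose lower-bound estimates, not exact Solr internals:)

```java
// Rough per-entry memory estimates for Solr caches on a ~90M-doc core.
// Lower bounds only: object headers and the faceting (UnInvertedField)
// structures are ignored, since they are much harder to estimate.
public class CacheMath {
    // A cached filter is (roughly) a bitmap with one bit per document.
    static long filterEntryBytes(long maxDoc) {
        return maxDoc / 8;
    }

    // Sorting on an int field keeps one 4-byte value per document in
    // the Lucene FieldCache for as long as the searcher stays open.
    static long intSortFieldBytes(long maxDoc) {
        return maxDoc * 4L;
    }

    public static void main(String[] args) {
        long maxDoc = 90000000L;
        System.out.printf("filter entry:   ~%d MB%n",
                filterEntryBytes(maxDoc) >> 20);   // ~10 MB each
        System.out.printf("int sort field: ~%d MB%n",
                intSortFieldBytes(maxDoc) >> 20);  // ~343 MB each
    }
}
```

So five filterCache entries cost on the order of 50 MB, but ten FieldCache entries over 90M docs can reach gigabytes, which fits the symptoms reported in this thread.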
>
> On Thu, Apr 8, 2010 at 3:46 AM, Victoria Kagansky
> <victoria.kagan...@gmail.com> wrote:
> > I noticed now that the OutOfMemoryError occurs on faceted queries.
> > Queries without facets do return successfully.
> >
> > There are two log types upon the exception. The queries causing them
> > differ only in the q parameter; the faceting and sorting parameters
> > are the same. I guess this has something to do with the result set
> > size influencing the faceting mechanism.
> >
> > 1) Apr 8, 2010 9:18:21 AM org.apache.solr.common.SolrException log
> > SEVERE: java.lang.OutOfMemoryError: Java heap space
> >
> > 2) Apr 8, 2010 9:18:13 AM org.apache.solr.common.SolrException log
> > SEVERE: java.lang.OutOfMemoryError: Java heap space
> >    at org.apache.solr.request.UnInvertedField.uninvert(UnInvertedField.java:191)
> >    at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:178)
> >    at org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:839)
> >    at org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:250)
> >    at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:283)
> >    at org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:166)
> >    at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:72)
> >    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
> >    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> >    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
> >    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
> >    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
> >    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> >    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> >    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> >    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> >    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
> >    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> >    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
> >    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> >    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> >    at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859)
> >    at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:574)
> >    at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1527)
> >    at java.lang.Thread.run(Thread.java:619)
> >
> >
> >
> >
> > On Thu, Apr 8, 2010 at 9:51 AM, Victoria Kagansky <
> > victoria.kagan...@gmail.com> wrote:
> >
> >> The queries do require sorting (on int fields) and faceting. They
> >> should fetch the first 200 docs.
> >> The current problematic core has 10 entries in fieldCache and 5
> >> entries in filterCache. The other caches are empty. Is there any way
> >> to know how much memory a specific cache takes?
> >>
> >> The problem is that one core behaves well, while the other one throws
> >> OutOfMemory exceptions right from the restart. This behavior is
> >> consistent if I switch the order in which the cores are initialized.
> >> It feels like the core initialized second has no memory resources
> >> assigned.
> >>
> >>
> >>
> >> On Thu, Apr 8, 2010 at 4:26 AM, Lance Norskog <goks...@gmail.com>
> wrote:
> >>
> >>> Sorting takes memory. What data types are the fields sorted on? If
> >>> they're strings, that could be a space-eater. If they are ints or
> >>> dates, not a problem.
> >>>
> >>> Do the queries pull all of the documents found? Or do they just fetch
> >>> the, for example, first 10 documents?
> >>>
> >>> What are the cache statistics like? Can they be shrunk? The stats are
> >>> shown on the Statistics page off of the main solr/admin page.
> >>>
> >>> Facets come from something called the Lucene FieldCache, which is not
> >>> controlled by Solr. It has no eviction policy: when you do a facet
> >>> request, the memory used to load up the facets for a particular field
> >>> will not be evicted. So if you have lots and lots of facets, this
> >>> could be a problem.
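(If the Solr caches do need shrinking, their sizes live in solrconfig.xml. A sketch with illustrative values only, not recommendations; shrink the counts when individual entries are large:)

```xml
<!-- solrconfig.xml: cache sizing sketch. The numbers below are
     placeholders to show the knobs, not tuned values. -->
<filterCache      class="solr.LRUCache" size="256" initialSize="64" autowarmCount="32"/>
<queryResultCache class="solr.LRUCache" size="256" initialSize="64" autowarmCount="32"/>
<documentCache    class="solr.LRUCache" size="256" initialSize="64"/>
```

Note this only bounds the Solr-level caches; the Lucene FieldCache described above is not affected by these settings.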
> >>>
> >>> On Wed, Apr 7, 2010 at 3:45 PM, Victoria Kagansky
> >>> <victoria.kagan...@gmail.com> wrote:
> >>> >  Hi,
> >>> > We are using Solr 1.4 running 2 cores, each containing ~90M
> >>> > documents. Each core's index size on disk is ~120 G.
> >>> > The machine is a 64-bit quad-core with 64 G RAM running Windows
> >>> > Server 2008. Max heap size is set to 9 G for the Tomcat process.
> >>> > Default caches are used.
> >>> >
> >>> > Our queries are complex and involve 8 facet fields (3 of them
> >>> > boolean) and sorting on up to 2 fields in addition to the Solr
> >>> > score.
> >>> > I noticed a new behavior that didn't happen before: the first core
> >>> > queried after startup answers all the queries, even the ones
> >>> > bringing back tens of millions of documents, while the other core
> >>> > (the one queried second) causes OutOfMemory exceptions for any
> >>> > query, even the "smallest" one. The heap is shown as not fully
> >>> > used (5-6 G out of 9). This is very strange because until recently
> >>> > both cores were working well, handling the heaviest queries, while
> >>> > heap usage was at 8 G.
> >>> >
> >>> > Any ideas?
> >>> >
> >>> > Thanks
> >>> >
> >>>
> >>>
> >>>
> >>> --
> >>> Lance Norskog
> >>> goks...@gmail.com
> >>>
> >>
> >>
> >
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>
