Hi all,

I've been struggling with this problem for over a month now, and
although memory issues have been discussed often, I haven't been able
to find a solution that fits my case.

The index is only 1.5 GB, but memory use quickly fills the 1 GB heap
maximum on a 2 GB machine. Things still work fine at that point, until
auto-warming starts. Switching auto-warming off altogether is
unattractive, as it leads to response times of up to 30 s. When
auto-warming starts, I get this error:

> SEVERE: Error during auto-warming of key:org.apache.solr.search.QueryResultKey@e0b93139:
> java.lang.OutOfMemoryError: Java heap space
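
For completeness, the 1 GB heap maximum is just the usual JVM setting;
Solr is started with something along the lines of (the exact flags and
start script on my box may differ slightly):

java -Xmx1024m -jar start.jar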

Now, when I reduce the cache sizes (to a fraction of the default
settings) and the number of warming searchers (to 2), memory use is not
reduced and the problem stays; only deactivating auto-warming helps.
When I set the heap size limit higher (and go into swap space), all the
extra memory seems to be used up right away, independently of
auto-warming.
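
For reference, the reduced settings in solrconfig.xml look roughly like
this (numbers from memory, so treat them as approximate):

<filterCache class="solr.LRUCache" size="64" initialSize="64" autowarmCount="32"/>
<queryResultCache class="solr.LRUCache" size="64" initialSize="64" autowarmCount="32"/>
<documentCache class="solr.LRUCache" size="64" initialSize="64" autowarmCount="0"/>
<maxWarmingSearchers>2</maxWarmingSearchers>

i.e. a small fraction of the sizes that ship with the example config.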

This all seems to be closely connected to sorting by a numerical field,
since switching the sort off makes memory use a lot friendlier.

Is it normal to need that much memory for such a small index?

I suspect the problem is in Lucene; would it be better to post on their
list?

Does anyone know a better way of getting the sorting done?

Thanks in advance for your help,

Chris


This is the field setup in schema.xml:

<field name="id" type="long" stored="true" required="true"
multiValued="false" />
<field name="user-id" type="long" stored="true" required="true"
multiValued="false" />
<field name="text" type="text" indexed="true" multiValued="false" />
<field name="created" type="slong" indexed="true" multiValued="false" />

And this is a sample query:

select/?q=solr&start=0&rows=20&sort=created+desc
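
For comparison, this is the variant without the sort, which is the one
that behaves much better memory-wise:

select/?q=solr&start=0&rows=20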

