I'm not sure how you're running Solr, but generally if you're using the
Java command line to launch Jetty, you do something like this:

java -Xmx512m -jar start.jar

That would give you a half-gigabyte heap.
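Judging by the org.apache.catalina frames in your stack trace, though, it looks like you may actually be running Solr under Tomcat rather than Jetty. If that's the case, one common approach is to put the -Xmx setting into CATALINA_OPTS via a setenv script next to Tomcat's startup scripts. Treat this as a sketch and adjust the path and heap size for your install:

# $CATALINA_BASE/bin/setenv.sh (setenv.bat on Windows), sourced by catalina.sh at startup
# Append a 512 MB max heap to whatever CATALINA_OPTS already contains
export CATALINA_OPTS="$CATALINA_OPTS -Xmx512m"

Then restart Tomcat so the new setting takes effect.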

Are you running this on a machine with low overall RAM, or low available
RAM?
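If you're not sure, on Linux something like the following shows the machine's memory and the max heap the JVM would pick by default (assuming a HotSpot JVM; PrintFlagsFinal is a standard diagnostic flag):

free -m                                                     # total / used / free RAM in MB
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize    # default max heap the JVM would use

That should tell you whether there's actually half a gigabyte free to give to Solr.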

Michael Della Bitta
Applications Developer
o: +1 646 532 3062 | c: +1 917 477 7906

appinions inc.
“The Science of Influence Marketing”
18 East 41st Street
New York, NY 10017

t: @appinions <https://twitter.com/Appinions> | g+: plus.google.com/appinions <https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts>
w: appinions.com <http://www.appinions.com/>


On Tue, Dec 10, 2013 at 9:37 AM, sweety <sweetyshind...@yahoo.com> wrote:

> I just indexed 10 docs totaling 15 MB. For some queries it works fine, but
> for other queries I get this error:
> <response>
> <lst name="error">
> <str name="msg">java.lang.OutOfMemoryError: Java heap space</str>
> <str name="trace">
> java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
>     at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:651)
>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:364)
>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
>     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
>     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
>     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
>     at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:928)
>     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
>     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
>     at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
>     at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:539)
>     at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:298)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> </str>
> <int name="code">500</int>
> </lst>
> </response>
>
> I have indexed them directly into Solr.
> My schema.xml is:
> <fields>
>   <field name="doc_id" type="uuid" indexed="true" stored="true" default="NEW" multiValued="false"/>
>   <field name="id" type="integer" indexed="true" stored="true" required="true" multiValued="false"/>
>   <field name="contents" type="text" indexed="true" stored="true" multiValued="false"/>
>   <field name="author" type="title_text" indexed="true" stored="true" multiValued="true"/>
>   <field name="title" type="title_text" indexed="true" stored="true"/>
>   <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
>   <copyField source="id" dest="text"/>
>   <dynamicField name="ignored_*" type="text" indexed="false" stored="false" multiValued="true"/>
>   <field name="spelltext" type="spell" indexed="true" stored="false" multiValued="true"/>
>   <copyField source="contents" dest="spelltext"/>
> </fields>
>
> I don't understand why I get this error for such a small number of docs.
> I haven't studied Solr performance tuning in much detail.
> How do I increase the heap size? I still need to index a lot more data.
> Thanks in advance.
