Are you doing heavy writes at the time?

How many concurrent reads are happening?

What version of Solr are you using?

What is the field definition for the double field? Does it have docValues enabled?
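
For reference, the export handler requires docValues on every field it returns
or sorts on. A docValues-enabled tdouble definition in schema.xml looks roughly
like this (the field name "price" is just an illustration):

    <!-- trie-based double type, as shipped in the stock Solr 5.x/6.x schemas -->
    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>

    <!-- "price" is a placeholder name; docValues="true" is what /export needs -->
    <field name="price" type="tdouble" indexed="true" stored="false" docValues="true"/>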

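Also, so we're looking at the same thing: a single-field export like the one
you describe would normally be issued along these lines (collection and field
names are placeholders):

    # /export needs q, an fl of docValues fields, and an explicit sort
    curl "http://localhost:8983/solr/collection1/export?q=*:*&fl=price&sort=price+asc"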



Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Nov 3, 2016 at 12:56 AM, Ray Niu <newry1...@gmail.com> wrote:

> Hello:
>    We are using the export handler in Solr Cloud to get some data. We only
> request one field, whose type is tdouble. It worked well at the beginning,
> but recently we have seen high CPU usage on all of the Solr Cloud nodes.
> We took some thread dumps and found the following information:
>
>    java.lang.Thread.State: RUNNABLE
>         at java.lang.Thread.isAlive(Native Method)
>         at org.apache.lucene.util.CloseableThreadLocal.purge(CloseableThreadLocal.java:115)
>         - locked <0x00000006e24d86a8> (a java.util.WeakHashMap)
>         at org.apache.lucene.util.CloseableThreadLocal.maybePurge(CloseableThreadLocal.java:105)
>         at org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:88)
>         at org.apache.lucene.index.CodecReader.getNumericDocValues(CodecReader.java:143)
>         at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
>         at org.apache.lucene.uninverting.UninvertingReader.getNumericDocValues(UninvertingReader.java:239)
>         at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
>
> Is this a known issue with the export handler? Since we only fetch up to
> 5000 documents, it should not be a data volume issue.
>
> Can anyone help on that? Thanks a lot.
>
