We’ve run into this fatal problem with 6.6 in prod. It gets overloaded, makes
4000 threads, runs out of memory, and dies.
Not an acceptable design. Excess load MUST be rejected, otherwise the system
goes into a stable congested state.
I was working with John Nagle when he figured this out in the
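For anyone hitting the same pattern in their own request-handling code, a
minimal sketch of the reject-instead-of-spawn approach with plain
java.util.concurrent (the pool and queue sizes here are made-up placeholders,
not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BoundedPoolSketch {
    // At most 64 threads and 1000 queued tasks. When both are full,
    // AbortPolicy makes execute() throw instead of growing the thread
    // count, so excess load is rejected rather than crashing the JVM.
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            16, 64, 60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1000),
            new ThreadPoolExecutor.AbortPolicy());

    public static void main(String[] args) {
        try {
            POOL.execute(() -> System.out.println("handled"));
        } catch (RejectedExecutionException e) {
            // Shed the load here (e.g. answer 503) instead of dying.
        }
        POOL.shutdown();
    }
}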
My experience with "OutOfMemoryError: unable to create new native thread"
is as follows: it occurs in environments where devs refuse to use thread
pools in favor of good old new Thread().
Then it turns rather interesting: if there is plenty of heap, GC doesn't
sweep Thread instances. Since they are native i
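A minimal sketch of the thread-pool alternative (the size is a placeholder):
one shared, fixed-size pool keeps the native thread count flat no matter how
many tasks arrive. Note that newFixedThreadPool queues unboundedly, so heap
can still grow under sustained overload; the bounded-queue sketch above
avoids that as well.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SharedPoolSketch {
    // One shared pool instead of new Thread() per task: at most 32
    // native threads ever exist, because the 32 workers are reused
    // rather than a fresh native stack being allocated per task.
    static final ExecutorService POOL = Executors.newFixedThreadPool(32);

    static void submitWork(Runnable task) {
        POOL.submit(task); // queues when all 32 workers are busy
    }
}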
On 12/9/2019 2:23 PM, Joe Obernberger wrote:
Getting this error on some of the nodes in a solr cloud during heavy
indexing:
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
Java was not able to start a new thread. Most likely this is caused by
the operating system
That's great.
But I also wanted to know why the document in question was scored lower in
the original query. Anyway, glad that the issue is resolved. :)
On Tue, 10 Dec 2019 at 00:38, rhys J wrote:
> On Mon, Dec 9, 2019 at 12:06 AM Paras Lehana
> wrote:
>
> > Hi Rhys,
> >
> > Use Solr Query Debugger
Getting this error on some of the nodes in a solr cloud during heavy
indexing:
null:org.apache.solr.common.SolrException: Server error writing document id
COLLECT20005437492077_activemq:queue:PAXTwitterExtractionQueue to the index
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(D
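One client-side way to keep heavy indexing from flooding the nodes is to cap
the indexer's own concurrency, for example with SolrJ's
ConcurrentUpdateSolrClient. A sketch under assumed URL, collection name, and
queue/thread sizes:

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

class ThrottledIndexer {
    public static void main(String[] args) throws Exception {
        // 4 sender threads and a bounded internal queue, instead of one
        // uncoordinated request per document from many client threads.
        try (ConcurrentUpdateSolrClient client =
                 new ConcurrentUpdateSolrClient.Builder(
                         "http://localhost:8983/solr/mycollection")
                     .withQueueSize(100)
                     .withThreadCount(4)
                     .build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "example-1");
            client.add(doc);
            client.blockUntilFinished(); // drain the queue before close
        }
    }
}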
On Mon, Dec 9, 2019 at 12:06 AM Paras Lehana
wrote:
> Hi Rhys,
>
> Use Solr Query Debugger
> <
> https://chrome.google.com/webstore/detail/solr-query-debugger/gmpkeiamnmccifccnbfljffkcnacmmdl?hl=en
> >
> Chrome
> Extension to see what's making up the score for both of them. I guess
> fieldNorm sh
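The same breakdown is also available without the extension by adding
debugQuery=true to the request and reading the explain section. A hedged
SolrJ sketch (URL, collection, and field names are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

class ScoreExplain {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("title:foo");
            q.setFields("id", "score");
            q.set("debugQuery", "true"); // per-document score explanations
            QueryResponse rsp = client.query(q);
            // id -> textual breakdown (tf, idf, fieldNorm, boosts, ...)
            rsp.getExplainMap().forEach((id, explain) ->
                    System.out.println(id + " => " + explain));
        }
    }
}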
In case anyone is interested, I made the memory changes as well as two GC
changes:
-XX:ParallelGCThreads: 8 -> 20
-XX:ConcGCThreads: 4 -> 5
old:
https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB
now:
https://gceasy.io/diamondgc-re
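For reference, those flags as the JVM takes them (values from this thread,
not a general recommendation):

-XX:ParallelGCThreads=20
-XX:ConcGCThreads=5

In a standard install they would typically be added to the GC_TUNE variable
in bin/solr.in.sh so they survive restarts.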
Hi there,
on Stack Overflow I got the advice to delete the _nest_path_ field. Without it I
can use the parent filter without getting the "Parent filter should not be sent
when the schema is nested" error message. For example:
q={!parent which=doc_type:parent}&fl=id,[child parentFilter=doc_type:
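For reference, if I'm reading the 8.x behavior right, the alternative to
deleting _nest_path_ is to keep it and drop the parentFilter local param,
since the transformer infers the hierarchy from _nest_path_ itself, along the
lines of q={!parent which=doc_type:parent}&fl=id,[child] (optionally with a
childFilter). Deleting the field, as above, keeps the old explicit-filter
style working instead.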
Note that that article is from 2011. That was in the Solr 3x days when many,
many, many things were different. There was no SolrCloud for instance. Plus
Tom’s problem space is indexing _books_. Whole, complete, books. Which is,
actually, not “normal” indexing at all as most Solr indexes are much
Thanks. I observe we too often write in that way and leave it up to the
reader to assume we don’t intentionally add bugs :-)
On Mon, Dec 9, 2019 at 5:45 AM Colvin Cowie
wrote:
> Oh, just looking at the way the announcement reads on
> http://lucene.apache.org/solr/news.html :
> Solr 8.3.1 Releas
I checked the frange on another field and not on query($bq). For some reason,
on the schema field it filters in the right way and I get all the relevant
values. But when I do the filter on the returned score from query($bq), the
upper and the lower bounds behave differently, and in addition I didn't get
the documents
Cheers
On Mon, 9 Dec 2019 at 11:19, Ishan Chattopadhyaya
wrote:
> Thanks, I'll fix.
>
> On Mon, Dec 9, 2019 at 4:15 PM Colvin Cowie
> wrote:
> >
> > Oh, just looking at the way the announcement reads on
> > http://lucene.apache.org/solr/news.html :
> > Solr 8.3.1 Release Highlights:
> >
> >
I am using Solr 7.6.0, and when I try to check incl and incu I get the same
result for true or false.
-----Original Message-----
From: Paras Lehana [mailto:paras.leh...@indiamart.com]
Sent: Monday, December 09, 2019 1:31 PM
To: solr-user@lucene.apache.org
Subject: Re: Edismax bq(boost query) with fi
I don't know if this "inclusive" parameter works, though I know that incl is
for including the lower bound and incu for including the upper bound.
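For reference, both incl and incu default to true, so an exclusive range
would be written along the lines of
fq={!frange l=0 u=5 incl=false incu=false}query($bq); as far as I can tell
there is no inclusive=true parameter in the frange docs.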
On Mon, 9 Dec 2019 at 16:49, Raboah, Avi wrote:
> Thanks for your fast response!
>
> Without the frange I get all the documents with the score field from 1.0
>
Thanks, I'll fix.
On Mon, Dec 9, 2019 at 4:15 PM Colvin Cowie wrote:
>
> Oh, just looking at the way the announcement reads on
> http://lucene.apache.org/solr/news.html :
> Solr 8.3.1 Release Highlights:
>
> - JavaBinCodec has concurrent modification of CharArr resulting in
> corrupt internode updates
Thanks for your fast response!
Without the frange I get all the documents with the score field from 1.0
(default score) to max score after boosting.
When I add the frange, for example
bq=text:"Phrase"^3&defType=edismax&fl=*,score&fq={!frange l=0 u=3
inclusive=true}query($bq)&q=*:*&rows=2000
I
Oh, just looking at the way the announcement reads on
http://lucene.apache.org/solr/news.html :
Solr 8.3.1 Release Highlights:
- JavaBinCodec has concurrent modification of CharArr resulting in
corrupt internode updates
That kind of sounds like the corrupt internode updates is something tha
Hi Anuj,
Glad that it worked. I ask everyone for a schema screenshot because I'm
mostly sure it's the schema not being reloaded or something.
> However, I changed pint to plong because it was taking an awful lot of
> time to index.
Strange! Why do you think that this is the case?
On Mon, 9 Dec 2019 a
I was just going to suggest frange to you, but you're already using it.
Please post the whole query. Have you confirmed that, by removing the
frange, you are able to see the documents with score=1.0?
On Mon, 9 Dec 2019 at 14:21, Raboah, Avi wrote:
> That's right,
>
> I check something like this fq={!fr
That's right,
I checked something like this: fq={!frange l=0 u=5}query($bq)
It partially works, but it doesn't return the documents with score = 1.0.
Do you know why?
Thanks.
-----Original Message-----
From: Paras Lehana [mailto:paras.leh...@indiamart.com]
Sent: Monday, December 09, 2019 7:08 AM
Thanks Paras, that was very helpful. I restarted Solr, and posting_id now
shows pint; earlier it was showing string.
However, I changed pint to plong because it was taking an awful lot of time
to index.
Thanks again,
Regards,
Anuj
On Mon, 9 Dec 2019 at 11:32, Paras Lehana
wrote:
> Hi Anuj,
>
>