How far would you take that? Say you had 100 terms joined by AND
(ridiculous, I know, just sayin'). Then you'd chew up 100 entries in
the filterCache.
On Fri, Sep 1, 2017 at 4:24 PM, Walter Underwood wrote:
> Hmm. Solr really should convert an fq of “a AND b” to separate “a” and “b” fq
> filters
Hmm. Solr really should convert an fq of “a AND b” to separate “a” and “b” fq
filters. That should be a simple special-case rewrite. It might take less time
to implement than explaining it to everyone.
Well, I guess then we’d have to explain how it wasn’t really necessary to send
separate fq parameters.
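For anyone following along, the filterCache difference boils down to how the
clauses are sent (the field names here are made up for illustration):
  fq=inStock:true AND category:books    -> one cache entry keyed on the whole expression
  fq=inStock:true&fq=category:books     -> two entries, each reusable on its own in later requests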
All of the low-level SSL code used by Solr comes from the JVM. Double-check
which version of Java you are using and make sure it's consistent on all of
your servers -- if you disable SSL on the affected server, you can use the
Solr Admin UI to be 100% certain of exactly which version of Java is in use.
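For example, assuming a default install listening on port 8983, something like
this shows the exact JVM each node is running:
  java -version
  curl "http://localhost:8983/solr/admin/info/system?wt=json"
The second command returns a jvm section with the version and vendor.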
Has anyone else besides Shawn and me been able to reproduce this problem? Shawn
contacted Oracle off-list but that was useless at best (attach JConsole, watch heap, etc.).
Is this a real problem, or just a bad reporting issue of the JVM and Linux?
Thanks,
Markus
-----Original message-----
> From: Markus Jelsm
Sorry, I am not using Tomcat. This is a fresh build of Solr.
Sent from my iPhone
> On Sep 1, 2017, at 3:33 PM, Rick Leir wrote:
>
> Kent,
> Did you say you are using Tomcat? Solr does not use Tomcat by default, so you
> will need to tell us more about your configuration.
>
> But first, think of what you might have changed just before it stopped working.
Shawn:
See: https://issues.apache.org/jira/browse/SOLR-7219
Try fq=filter(foo) filter(bar) filter(baz)
Patches to docs welcome ;)
On Fri, Sep 1, 2017 at 1:50 PM, Shawn Heisey wrote:
> On 9/1/2017 8:13 AM, Alexandre Rafalovitch wrote:
>> You can OR cacheable filter queries in the latest Solr
Shawn, you are welcome:
http://lucene.apache.org/solr/guide/6_6/the-standard-query-parser.html
Support for a special filter(…) syntax to...
https://issues.apache.org/jira/browse/SOLR-7219
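For example (field names are hypothetical), OR-ing two cached filters looks like:
  fq=filter(inStock:true) OR filter(category:books)
Each filter(...) clause gets its own filterCache entry, while the fq as a whole
still behaves as a single non-scoring filter.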
On Fri, Sep 1, 2017 at 11:50 PM, Shawn Heisey wrote:
> On 9/1/2017 8:13 AM, Alexandre Rafalovitch wrote:
On 9/1/2017 8:13 AM, Alexandre Rafalovitch wrote:
> You can OR cacheable filter queries in the latest Solr. There is a special
> (filter) syntax for that.
This is actually possible? If so, I didn't see anything come across the
dev list about it.
I opened an issue for it, didn't know anything had come of it.
Kent,
Did you say you are using Tomcat? Solr does not use Tomcat by default, so you
will need to tell us more about your configuration.
But first, think of what you might have changed just before it stopped working.
Cheers -- Rick
On September 1, 2017 11:55:47 AM EDT, "Younge, Kent A - Norman,
Hi, ZooKeeper 3.4.10 and Solr 6.6. We plan to use only one node for now, so we
are currently testing with this setup.
Have you faced this before, or do you know if it is a Solr bug?
Thanks, Mikhail
From: Susheel Kumar
To: solr-user@lucene.apache.org; Mikhail Ibraheem
Sent: Friday, 1 September 2017,
I'm not sure if this forum is a good place for my question, but I want to try.
Maybe somebody can help me.
I have a web application based on Blacklight for working with Solr (I also use
the rsolr Ruby gem for the Solr connection). My task is to remove Blacklight
from my application. In the last two weeks
Hello,
I am getting an error ERR_SSL_VERSION_OR_CIPHER_MISMATCH on one of my Solr
servers. The details show that it's an Unsupported protocol: The client and
server don't support a common SSL protocol version or cipher suite. I have
changed my browser settings and nothing seems to work. I
You don't have to stop Solr to run the merge index tool. I would,
however, stop _indexing_ to that Solr instance.
And you probably have to reload the core (or restart the collection)
afterwards to pick up the merged documents.
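If it helps, a typical invocation looks roughly like this (paths and jar
versions are placeholders):
  java -cp lucene-core-6.6.0.jar:lucene-misc-6.6.0.jar \
    org.apache.lucene.misc.IndexMergeTool /path/to/mergedIndex /path/to/index1 /path/to/index2
and the reload afterwards can be done through the Core Admin API, e.g.
  curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=core1"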
Best,
Erick
On Fri, Sep 1, 2017 at 6:46 AM, Zheng Lin Edwin Yeo wrote:
You can OR cacheable filter queries in the latest Solr. There is a special
(filter) syntax for that.
Regards,
Alex
On 31 Aug. 2017 2:11 pm, "Josh Lincoln" wrote:
As I understand it, using a different fq for each clause makes the
resultant caches more likely to be used in future requests.
Fo
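A small illustration of that reuse (field names hypothetical): if one request
sends fq=category:books&fq=inStock:true and a later request sends
fq=category:music&fq=inStock:true, the inStock:true entry from the first
request is served straight from the filterCache the second time.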
On Fri, Sep 1, 2017 at 9:17 AM, Ere Maijala wrote:
> I spoke a bit too soon. Now I see why I didn't see any improvement from
> facet.method=uif before: its performance seems to depend heavily on how many
> facets are returned. With an index of 6 million records and the facet having
> 1960 buckets:
Hi,
Just to check, are we able to run the IndexMergeTool on the index that is
still running on Solr? Or do we have to stop Solr first before running the
IndexMergeTool?
Regards,
Edwin
On 26 August 2017 at 23:41, Zheng Lin Edwin Yeo
wrote:
> Thanks for pointing out the mistake. The script can r
I spoke a bit too soon. Now I see why I didn't see any improvement from
facet.method=uif before: its performance seems to depend heavily on how
many facets are returned. With an index of 6 million records and the
facet having 1960 buckets:
facet.limit=20 takes 4ms
facet.limit=200 takes ~100ms
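For reference, the kind of request being timed would look something like this
(the field name is hypothetical):
  q=*:*&facet=true&facet.field=format&facet.method=uif&facet.limit=20
with facet.limit varied between 20 and 200 as above.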
Which Solr and ZooKeeper versions do you have? And why do you have just one
ZooKeeper node? Usually you have three or so to maintain a quorum.
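For reference, a minimal three-node ensemble in zoo.cfg looks roughly like this
(hostnames are placeholders):
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888
With three servers, any two form a quorum, so the ensemble stays available if a
single node fails; a one-node setup has no such tolerance.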
Thnx
On Fri, Sep 1, 2017 at 7:24 AM, Mikhail Ibraheem <
arsenal2...@yahoo.com.invalid> wrote:
>
> Any help please? From: Mikhail Ibraheem
> To: Solr-user
>
Any help please?
From: Mikhail Ibraheem
To: Solr-user
Sent: Wednesday, 30 August 2017, 18:36
Subject: Overseer task timeout
Hi, We have one ZooKeeper node and one Solr node. Sometimes when trying to create
or delete a collection there is "SEVERE:
null:org.apache.solr.common.SolrException
Yonik, thanks for the hint with the uif facet method.
(btw: why isn't it part of the official documentation? - at least I
haven't found it)
For our use case it means:
Time for facet processing is exactly the same as it is with version 4.
But this works only for indexes 'without' docValues.
I te
I can confirm that we're seeing the same issue as Günter. For a
collection of 57 million bibliographic records, Solr 4.10.2 (without
docValues) can consistently return a facet in about 20ms, while Solr
6.6.0 with docValues takes around 2600ms. I've tested some versions
between those two too, bu
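For anyone trying to reproduce the comparison, the relevant difference is just
the docValues flag on the facet field in the schema (the field name is
hypothetical):
  <field name="author_facet" type="string" indexed="true" stored="false" docValues="true"/>
versus the same field declared with docValues="false" (the 4.10.2-style setup).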
I am (seldom) seeing NPEs at line 610 of HttpSolrClient:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at http://xxx.xxx.x.xxx:8983/solr/core1:
java.lang.NullPointerException
at
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(Http
Hello Edwin, the JVM heap was around 50% the whole time. The VM, which only
ran Solr, was doing fine, with normal CPU and memory usage. I doubt all three
ZooKeeper nodes were bad or too heavily loaded; all other Solr clusters could
talk to them, and so did Flume and YARN and HDFS and everything else.
T
Thank you all for the reply. I have updated the Solr client list.
Regards
Ganesh
On 31-08-2017 00:37, Leonardo Perez Pulido wrote:
Hi,
Apart from taking a look at the Solr wiki, I think one of the main reasons
why these APIs are all outdated is that Solr itself provides the 'API' to
many d