Oops, sorry! Don't know how I missed that.
Have you tested whether it makes any difference to put the sfield
parameter inside the fq, like in the example
(https://lucene.apache.org/solr/guide/8_1/spatial-search.html#geofilt)?
We actually put pt and d in there too, e.g.
{!geofilt sfield=location_g
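Spelled out fully, with all three parameters as local params of geofilt (the field name matches the thread; the coordinates and distance here are made-up placeholders):

```
fq={!geofilt sfield=location_g pt=52.37,4.89 d=5}
```

When sfield, pt, and d are all given as local params like this, they don't need to be passed as separate top-level request parameters.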
Hi,
I am using Solr 7.7.1 in SolrCloud mode.
I’m getting a document I shouldn’t when searching with a TextField.
It looks like autoGeneratePhraseQueries is not working as it should,
but I have no idea what is causing it.
The schema definition I use is as follows.
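(The schema itself was cut off in the digest. For reference, autoGeneratePhraseQueries has to be set on the field type to take effect; a typical definition might look like the following hypothetical example, with placeholder names and analyzers:)

```xml
<fieldType name="text_general" class="solr.TextField"
           positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```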
Hi, community.
I am using Solr 7.7.2 in SolrCloud mode.
I am looking for a feature to re-balance replicas among node
groups.
For example,
Initial state)
Node0 : shard0, shard1
Node1 : shard1, shard2
Node2 : shard2, shard3
Node3 : shard3, shard0
After re-balancing replicas between the two groups)
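(As far as I know, Solr 7.x has no single "re-balance across node groups" command, but something similar can be approximated manually with the Collections API, e.g. MOVEREPLICA. The collection, shard, and node names below are placeholders:)

```
http://localhost:8983/solr/admin/collections?action=MOVEREPLICA
    &collection=mycollection
    &shard=shard0
    &sourceNode=node3:8983_solr
    &targetNode=node4:8983_solr
```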
Please look at the below test, which exercises the CDCR OPS API. It has the
"BadApple" annotation (meaning the test fails intermittently):
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java#L73
This is also because sometimes the
Btw, the code is copied from the Solr 7.6 source code.
Thanks,
Rajeswari
On 7/24/19, 4:12 PM, "Natarajan, Rajeswari"
wrote:
Thanks Shawn for the reply. I am not saying it is a bug. I just would like to
know how to get the "lastTimestamp" by invoking CloudSolrClient reliably.
Regards
Thanks Shawn for the reply. I am not saying it is a bug. I just would like to
know how to get the "lastTimestamp" by invoking CloudSolrClient reliably.
Regards,
Rajeswari
On 7/24/19, 3:14 PM, "Shawn Heisey" wrote:
On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
> Hi,
>
> Wit
On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
Hi,
With the below API, the QueryResponse sometimes has the "lastTimestamp" and
sometimes not.
protected static QueryResponse getCdcrQueue(CloudSolrClient client) throws
SolrServerException, IOException {
ModifiableSolrParams params = ne
Hi,
With the below API, the QueryResponse sometimes has the "lastTimestamp" and
sometimes not.
protected static QueryResponse getCdcrQueue(CloudSolrClient client) throws
SolrServerException, IOException {
ModifiableSolrParams params = new ModifiableSolrParams();
params.set(CommonParams
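(The snippet is cut off in the digest. Reconstructed as a sketch, the helper presumably looks roughly like the following; the exact CdcrParams action constant is an assumption on my part, not confirmed by the thread:)

```java
protected static QueryResponse getCdcrQueue(CloudSolrClient client) throws
    SolrServerException, IOException {
  ModifiableSolrParams params = new ModifiableSolrParams();
  // Route the request to the CDCR handler and ask for queue statistics
  params.set(CommonParams.QT, "/cdcr");
  params.set(CommonParams.ACTION, CdcrParams.CdcrAction.QUEUES.toString());
  return client.query(params);
}
```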
Thank you Shawn. Appreciate your response.
On Mon, Jul 22, 2019 at 4:58 PM Shawn Heisey wrote:
> On 7/22/2019 12:55 PM, Siva Tallapaneni wrote:
> > When I try to create a collection I'm running into issues. Following is
> the
> > exception I'm seeing
>
>
>
> > HTTP/2 + SSL is not supported in Jav
Hi Furkan KAMACI,
Thanks for your thoughts on maxAnalyzedChars.
So, how can we tell whether it matched or not? Is there any way to get such
data from an extra payload in the response from Solr?
Thanks and regards
Govind
On Wed, Jul 24, 2019 at 8:43 PM Furkan KAMACI
wrote:
> Hi Govind,
>
> Using *hl
On 7/24/2019 11:31 AM, Fiz Ahmed wrote:
We are using Apache Solr 6.6 stand-alone currently in a number
of locations. Most indexes are holding 250,000 to 400,000 documents. Our data
comes from MS-SQL. We’re using a front-end JavaScript solution to
communicate with Solr to perform querie
On 7/24/2019 12:08 PM, Prince Manohar wrote:
If ZooKeeper is wiped of its data, it looks like Solr also
deletes all the indexes.
I wanted to know if it is normal that, if the configsets related to a
collection are not inside ZooKeeper, then its Solr indexes are deleted
from the file sy
Maybe it would be good to state your configuration: how many machines, memory,
heap, CPU, OS. What performance do you have, what performance do you expect,
and which queries have the most performance problems?
Sometimes rendering in the UI can also be a performance bottleneck.
> Am 24.07.2019 um
My question is related to Apache Solr 8.
If ZooKeeper is wiped of its data, it looks like Solr also
deletes all the indexes.
I wanted to know if it is normal that, if the configsets related to a
collection are not inside ZooKeeper, then its Solr indexes are deleted
from the file system?
Pretty sure you are running into
https://issues.apache.org/jira/browse/SOLR-8213
Always looking for patches to help improve things :)
Kevin Risden
On Wed, Jul 24, 2019 at 4:50 AM Suril Shah wrote:
> Hi,
> I am using Solr Version 7.6.0 where Basic Authentication is enabled. I am
> trying to us
Hi SOLR Experts,
We are using Apache Solr 6.6 stand-alone currently in a number
of locations. Most indexes are holding 250,000 to 400,000 documents. Our data
comes from MS-SQL. We’re using a front-end JavaScript solution to
communicate with Solr to perform queries.
- Solr Performanc
Hi Doss,
What was the existing value, and what happens after you do the atomic update?
Kind Regards,
Furkan KAMACI
On Wed, Jul 24, 2019 at 2:47 PM Doss wrote:
> Hi,
>
> I have a multiValued field of type String.
>
> multiValued="true"/>
>
> I want to keep this list unique, so I am using atomic updates
Hi Govind,
Using *hl.tag.pre* and *hl.tag.post* may help you. However, you should keep
in mind that even if such a term exists in the desired field, the highlighter
can use a fallback field due to the *hl.maxAnalyzedChars* parameter.
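A request along these lines combines those parameters (the field name and values are placeholders, not from the thread):

```
q=content:example
&hl=true
&hl.fl=content
&hl.tag.pre=<mark>
&hl.tag.post=</mark>
&hl.maxAnalyzedChars=500000
```

Raising hl.maxAnalyzedChars makes the highlighter look further into large field values before it gives up on the requested field.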
Kind Regards,
Furkan KAMACI
On Wed, Jul 24, 2019 at 8:24 AM govind nitk wrote:
>
Hello, I didn't see an existing issue for this in Jira:
The system property *waitForZk* was added in
https://issues.apache.org/jira/browse/SOLR-5129 and is supposed to increase
the timeout for an initial connection to ZooKeeper at startup. From
solr.in.sh:
*# By default Solr will try to connect t
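With SOLR-5129 in place, the timeout can be raised by passing the system property at startup, e.g. in solr.in.sh (the 60-second value below is just an example):

```
SOLR_OPTS="$SOLR_OPTS -DwaitForZk=60"
```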
My example query has d=1 as the first parameter, so none of the results should
be coming back, but they are, which makes it seem like it's not doing any
geofiltering for some reason.
On 7/24/19, 2:06 AM, "Ere Maijala" wrote:
I think you might be missing the d parameter in geofilt. I'm not
Where did you read anything about a 2G heap being “in the danger zone”? I
routinely see heap sizes in the 16G range and greater. The default 512M is
actually _much_ lower than it probably should be, see:
https://issues.apache.org/jira/browse/SOLR-13446
The “danger” if you allocate too much memo
Hi,
I have a multiValued field of type String.
I want to keep this list unique, so I am using atomic updates with
"add-distinct"
{"docid":123456,"namelist":{"add-distinct":["Adam","Jane"]}}
but this is not maintaining the expected uniqueness. Am I doing something
wrong? Please guide me.
Than
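For what it's worth, a full add-distinct request would be sent like the sketch below (assuming docid is the uniqueKey and a collection named mycollection; both are assumptions):

```
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update?commit=true' \
  --data-binary '[{"docid":123456,"namelist":{"add-distinct":["Adam","Jane"]}}]'
```

Note that add-distinct only exists in Solr 7.3 and later; on older versions it is silently treated as an unknown atomic operation.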
Hi,
I am using Solr Version 7.6.0 where Basic Authentication is enabled. I am
trying to use Parallel SQL to run some SQL queries.
This is the code snippet that I am using to connect to Solr and run some
SQL queries on it. This works when authentication is not enabled on the
Solr cluster.
publi
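(The code was cut off in the digest. For reference, the usual SolrJ JDBC connection pattern looks roughly like the sketch below, with placeholder ZooKeeper host and collection names; the catch behind SOLR-8213 is that this driver path did not forward Basic Auth credentials:)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrSqlExample {
  public static void main(String[] args) throws Exception {
    // Solr's JDBC driver connects via ZooKeeper and a collection name
    String url = "jdbc:solr://zkhost:2181?collection=mycollection";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT id FROM mycollection LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("id"));
      }
    }
  }
}
```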
I think you can safely increase the heap size to 1 GB or whatever you need.
Be aware though:
Solr's performance depends heavily on file system caches, which are not on the
heap! So you need more memory freely available than what you configure as heap.
How much more depends on your index size.
Another opt
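Concretely, the heap can be raised in solr.in.sh (1 GB here is just the value suggested above):

```
SOLR_HEAP="1g"
```

Whatever RAM is not given to the heap stays available to the OS page cache, which is what Solr relies on for fast index access.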
I am using Solr version 6.6.0 and the heap size is set to 512 MB, which I
believe is the default. We have almost 10 million documents in the index, we
perform frequent updates (we are doing a soft commit on every update; the heap
issue was seen with and without soft commits) to the index, and obviousl