We run a 3-node ZK cluster, but I'm not concerned about 2 nodes failing at
the same time. Our chaos process only kills approximately one node per
hour, and our cloud service provider automatically spins up another ZK node
when one goes down. All 3 ZK nodes are back up within 2 minutes, talking to e
Where does ZooKeeper store the collection info on its local filesystem?
Thank you
Hi
Can the timezone of the NOW parameter in the |deleteByQuery| of the
DocExpirationUpdateProcessorFactory be changed to my timezone?
I am in SG and using Solr 6.5.1.
The timestamps of the entries in solr.log are in my timezone, but the
NOW parameter of the |deleteByQuery| is a different tim
Hi,
For fieldTypes which specify only an ‘index’-time analyzer, what analyzer will
be used during query time?
The example below specifies only an index-time analyzer, so what will be used
during query time?
https://opengrok.ariba.com/source/s?path=solr/&project=arches_rel>.TextField
How many ZooKeeper nodes are in your ensemble? You need five nodes to
handle two failures.
Are your Solr instances started with a zkHost that lists all five ZooKeeper
nodes?
What version of ZooKeeper?
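On the client side the same idea applies: list every ZooKeeper node, and remember that ZooKeeper needs a majority up, so a 3-node ensemble survives one failure and a 5-node ensemble survives two. A rough SolrJ 7-style sketch (untested; the host names are placeholders):

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class ZkAwareClient {
    public static CloudSolrClient build() {
        // Placeholder host names; list every member of the ensemble so the
        // client can still reach a quorum when individual ZK nodes are down.
        List<String> zkHosts = Arrays.asList(
                "zk1.example.com:2181",
                "zk2.example.com:2181",
                "zk3.example.com:2181",
                "zk4.example.com:2181",
                "zk5.example.com:2181");
        return new CloudSolrClient.Builder(zkHosts, Optional.empty()).build();
    }
}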
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Aug
Hi all,
My team is attempting to spin up a SolrCloud cluster with an external
ZooKeeper ensemble. We're trying to engineer our solution to be HA and
fault-tolerant such that we can lose either 1 Solr instance or 1 ZooKeeper
node and not take downtime. We use chaos engineering to randomly kill instances
Hi David,
Your observations seem correct. If all fields produce the same tokens, then
Solr goes for a “term-centric” query, but if different fields produce different
tokens, then it uses a field-centric query. Here is a blog post that explains it
from the multi-word synonyms perspective:
https://opensourc
Shawn,
You are correct. I created another setup, this time with 1 node, 1 shard, and
2 replicas, and the join worked!
Running with the example SolrCloud setup doesn't work for join queries.
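For anyone following along, a single-collection join through SolrJ looks roughly like the untested sketch below (the collection and field names are made up). As I understand it, the standard {!join} parser needs the documents on both sides to live in the same core, which would explain why the one-shard setup works.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JoinQueryExample {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Join within the same collection: return docs whose "parent_id"
            // points at a doc matching the inner query. Field names here are
            // hypothetical placeholders.
            SolrQuery query = new SolrQuery("{!join from=id to=parent_id}type:parent");
            QueryResponse response = client.query("techproducts", query);
            System.out.println("Matches: " + response.getResults().getNumFound());
        }
    }
}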
Thanks.
-S
-Original Message-
From: Steve Pruitt
Sent: Thursday, August 30, 2018 12:25 PM
To: so
Gosh, really? This is not mentioned anywhere in the documentation that I can
find. There are node-to-hardware considerations if you are joining across
different Collections.
But, the same Collection? Tell me this is not so.
-S
-Original Message-
From: Shawn Heisey
Sent: Thursday, August
Hello,
we have a Solr setup with a pair of replicated Solr servers and a
three-node ZooKeeper front end. This configuration is replicated in several
environments. In one environment, we frequently receive the following
ZooKeeper-related warning against each of the three webapps that have so
On 8/30/2018 9:49 AM, Steve Pruitt wrote:
If you mean another Solr server running, then no.
I mean multiple Solr processes.
The cloud example (started with bin/solr -e cloud) starts two Solr
instances if you give it the defaults. They are both running on the
same machine, but if par
Or do the spellcheck results give an indication that "11000.35" has an exact
match?
-Original Message-
From: Clemens Wyss DEV
Sent: Thursday, August 30, 2018 6:01 PM
To: 'solr-user@lucene.apache.org'
Subject: Solr suggestions: why are exact matches omitted
Given the followin
Given the following configuration:
...
suggest_word_fuzzy
org.apache.solr.spelling.suggest.Suggester
org.apache.solr.spelling.suggest.fst.FuzzyLookupFactory
true
_my_suggest_word
2
If you mean another Solr server running, then no.
-Original Message-
From: Shawn Heisey
Sent: Thursday, August 30, 2018 11:31 AM
To: solr-user@lucene.apache.org
Subject: Re: [EXTERNAL] - Re: join works with a core, doesn't work with a
collection
On 8/30/2018 9:17 AM, Steve Pr
On 8/30/2018 9:17 AM, Steve Pruitt wrote:
Single server. Localhost. I am using the simple setup and took all the
defaults.
Is there more than one Solr instance on that server? SolrCloud considers
multiple instances to be completely separate, even if they're actually
on the same hardware.
Single server. Localhost. I am using the simple setup and took all the
defaults.
-Original Message-
From: Shawn Heisey
Sent: Thursday, August 30, 2018 11:14 AM
To: solr-user@lucene.apache.org
Subject: [EXTERNAL] - Re: join works with a core, doesn't work with a collection
On 8/30/2
On 8/30/2018 9:00 AM, Steve Pruitt wrote:
Is there something different I need to do for a query with a join for a
Collection? Singular Collection, not across Collections.
Initially, I used a Core for simple development. One of my queries uses a
join. It works fine.
I know very little abou
On 8/30/2018 3:14 AM, Salvo Bonanno wrote:
The Solr version in both environments is 7.4.0.
It looks like there was a problem using the intPointField type for a key
field in my schema; I've changed the type to string and now everything
works.
Seeing that problem in 7.4.0 definitely sounds like you've
On 8/30/2018 2:13 AM, Gembali Satish kumar wrote:
SolrClient client = new HttpSolrClient.Builder(
    SolrUtil.getSolrURL(tsConfigUtil.getClusterAdvertisedAddress(),
        aInCollectionName)).build();
After my job search is done, I am closing my client:
client.close();
but from the UI I am getting more reques
Is there something different I need to do for a query with a join for a
Collection? Singular Collection, not across Collections.
Initially, I used a Core for simple development. One of my queries uses a
join. It works fine.
I created a Collection with the same schema. Indexed the same docum
Thank you Shalin. I'll try creating a policy with practically zero effect
for now.
On Wed, Aug 29, 2018 at 11:31 PM Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> There is a bad oversight on our part which causes preferences to not be
> used for placing replicas unless a cluster policy
Hi everyone,
I am doing some tests to understand how the split-on-whitespace (sow)
parameter works with the eDisMax query parser. I understand the behaviour,
but I am not sure why it works like that.
When sow=true, it works as it did with previous Solr versions.
When sow=false, the behaviour changes
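The kind of test I am running looks roughly like the untested SolrJ sketch below (the collection and field names are placeholders); it just compares the parsed query for both settings via debugQuery, which is where the term-centric vs. field-centric difference shows up.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SowComparison {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            for (String sow : new String[] {"true", "false"}) {
                // Hypothetical multi-term query and fields.
                SolrQuery query = new SolrQuery("red leather sofa");
                query.set("defType", "edismax");
                query.set("qf", "title description");
                query.set("sow", sow);
                query.set("debugQuery", "true");
                Object parsed = client.query("techproducts", query)
                        .getDebugMap().get("parsedquery");
                System.out.println("sow=" + sow + " -> " + parsed);
            }
        }
    }
}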
As Jan pointed out, unless your client sends Solr some instructions for
what to do with those documents specifically, Solr doesn't do anything.
In your example, Nutch crawls 30 documents at first, and 30 documents are
sent to Solr and added to the index. On the next crawl, it finds 27 documents,
and 2
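In other words, the crawler has to tell Solr explicitly which documents to drop. A rough SolrJ sketch (untested; the collection and field names are placeholders):

import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class StaleDocumentCleanup {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Hypothetical ids of pages the latest crawl no longer found.
            List<String> goneIds = Arrays.asList(
                    "http://abc.com/old-page-1",
                    "http://abc.com/old-page-2",
                    "http://abc.com/old-page-3");
            client.deleteById("nutch", goneIds);

            // Or delete everything for the domain that the current crawl did not
            // touch, assuming each document carries a hypothetical "crawl_id" field.
            client.deleteByQuery("nutch", "domain:abc.com AND -crawl_id:2018-08-30");

            client.commit("nutch");
        }
    }
}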
Thanks for the update.
I'm using Nutch 1.14, Solr 6.6.3 and ZooKeeper 3.4.12. We are running two
Solr servers configured as SolrCloud. Please let me know if anything is missing.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi
Please give us more context. You can start by telling us which crawler you
are using and more about your architecture.
It is NOT Solr's responsibility to add/delete documents on its own. It is the
client (crawler) that has to know when a document is stale or gone from the
source, and then
The Solr version in both environments is 7.4.0.
It looks like there was a problem using the intPointField type for a key
field in my schema; I've changed the type to string and now everything
works.
Thanks everyone for the replies.
On Wed, Aug 29, 2018 at 9:39 PM Shawn Heisey wrote:
>
> On 8/29/2018 1
You should create a single HttpSolrClient and re-use it for all requests. It
is thread safe and creates an HTTP connection pool internally (well, Apache
HttpClient does).
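Something along these lines, i.e. one client shared by all threads (untested sketch; the base URL and collection name are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrClientHolder {

    // One client for the whole application; HttpSolrClient is thread safe
    // and pools HTTP connections internally via Apache HttpClient.
    private static final SolrClient CLIENT =
            new HttpSolrClient.Builder("http://localhost:8983/solr").build();

    public static QueryResponse search(String collection, String queryString) throws Exception {
        SolrQuery query = new SolrQuery(queryString);
        return CLIENT.query(collection, query);
    }

    // Close once, at application shutdown, not after every request.
    public static void shutdown() throws Exception {
        CLIENT.close();
    }
}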
On Thu, Aug 30, 2018 at 2:28 PM Gembali Satish kumar <
gembalisatishku...@gmail.com> wrote:
> Hi Team,
>
> Need some help on Clie
Hello All,
I would like to know how Solr will handle stale pages. For example, there
are 30 documents indexed for the domain abc.com, and in the second collection I
have only 27 documents for the same abc.com domain that need to be
indexed in Solr.
So how will Solr handle the old pages alr
Hi Team,
I need some help with client connection object pooling.
I am using the SolrJ API to connect to Solr.
The snippet below is what I use to create the client object:
SolrClient client = new HttpSolrClient.Builder(
    SolrUtil.getSolrURL(tsConfigUtil.getClusterAdvertisedAddress(),
        aInCollectionName)).bu