The query can be as simple as *:* and I still get the error
http://localhost:8983/solr/sial-catalog-material_shard1_replica1/select?q=*%3A*&fl=id%2C+sap_material_number%2C+material%2Cname&wt=json&indent=true&group=true&group.field=sap_material_number&group.limit=-1
Removing the group.limit and the
I have an application that we wrote to support solr cloud collections. It
is a rest service that uses solrj. I am in the process of upgrading it to
use Solr 6.1.
The application builds with maven.
I get the following error:
Caused by: java.lang.NoClassDefFoundError:
org/apache/http/impl/client/Clo
I tried adding the solr-core dependency but that caused the app to fail to
deploy entirely.
On Wed, Jun 22, 2016 at 2:36 PM, Webster Homer wrote:
> I have an application that we wrote to support solr cloud collections. It
> is a rest service that uses solrj. I am in the process of upgrading it to
> use Solr 6.1.
Never mind, this was a dependency issue between Solr and JAX-RS. Managing
the dependency at version 4.4.1 fixed it.
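For anyone who hits the same thing: the missing class is from Apache
HttpComponents, and the pin looks roughly like this in the pom (a sketch;
the coordinates are the standard HttpComponents ones, and 4.4.1 is the
version that fixed it for us):

<dependencyManagement>
  <dependencies>
    <!-- pin httpclient to the version SolrJ 6.1 was built against -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.4.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>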
On Wed, Jun 22, 2016 at 2:44 PM, Webster Homer wrote:
> I tried adding the solr-core dependency but that caused the app to fail to
> deploy entirely.
>
> On Wed, Jun 22, 201
go on.
>
> Best,
> Erick
>
>
>
>
> On Fri, Jun 2, 2017 at 9:08 AM, Webster Homer wrote:
> > In the documentation for Solr cdcr there is an example of a source
> > configuration that uses properties:
> >
> > ${TargetZk}
> > ${So
the collections-api has a call to retrieve the async status.
All this does is tell you failure or success, and a message:
found [9cfb05af-b778-416e-b3c6-8b8e62345f4e] in failed tasks
It would be nice if instead it retrieved the information written to the
failed tasks queue!
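For reference, the call in question is the Collections API REQUESTSTATUS
action (host and request id are placeholders):
http://<host>:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=<async-id>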
So I used the zkcli.sh c
brary.
On Mon, Jun 5, 2017 at 1:52 PM, Webster Homer wrote:
> the collections-api has a call to retrieve the async status.
>
> All this does is tell you failure or success, and a message:
> found [9cfb05af-b778-416e-b3c6-8b8e62345f4e] in failed tasks
>
> It would be nice if instead
We recently had a deployment where we had a ton of errors when calling Solr
and the collections API.
We saw nodes going into recovery mode and generally things were hosed.
Restarting solr didn't help, but restarting zookeeper did.
In our environment Zookeeper and Solr are on google cloud servers, o
We have a solr cloud collection that gets a full update every morning, via
cdcr replication. We see that the target tlogs do not seem to get truncated
or deleted as described here
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
I checked we have
ave autoSoftCommit set. That seems to
work as the data is searchable.
Does the commit write a message to the log? How can you tell when a commit
occurs? As stated above I believe that autoCommit is broken
2017-06-20 9:22 GMT-05:00 Webster Homer:
> We have a solr cloud collection that get
doesn't matter).
>
> Full background here:
> https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
>
> Not entirely sure how that interacts with CDCR mind you..
>
> Best,
> Erick
>
> 2017-06-20 9:48 GMT-07:00
's
> the case tlogs _are_ being rolled over, just not very often. Why is a
> mystery of course.
>
> So what happens if you issue manual commits on both source and target?
>
> It's unlikely that autocommit is totally broken or we'd have heard
> howls almost immediately
On Tue, Jun 27, 2017 at 11:32 AM, Webster Homer wrote:
> Commits were definitely not happening. We ran out of filesystem space. The
> admins deleted old tlogs and restarted. The collection in question was
> missing a lot of data. We reloaded it, and then we saw some commits. In
> Solrcloud th
does have the solr.CdcrUpdateLog set:
<updateLog class="solr.CdcrUpdateLog">
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
On Tue, Jun 27, 2017 at 3:11 PM, Webster Homer wrote:
> It appears right how that we are not seeing an issue with the target
> collections, we definitely see a problem with the source collection.
> numRec
Sometimes there are subdirectories of tlog files; for example, this is a
directory name: tlog.20170624124859032. Why do these come into existence? The
sum of the file sizes in the folders seems close to the value returned by
the CDCR action=QUEUES
On Tue, Jun 27, 2017 at 4:05 PM, Webster Homer wrote:
e author wasn't certain what to do
in this situation
On Wed, Jun 28, 2017 at 3:14 PM, Webster Homer wrote:
> Sometimes there are subdirectories of tlog files; for example, this is a
> directory name: tlog.20170624124859032. Why do these come into existence?
> The sum of the file sizes in
We've been using cdcr for a while now. It seems to be pretty fragile.
Currently we're seeing tons of errors like this:
2017-07-04 14:41:27.015 ERROR
(cdcr-bootstrap-status-51-thread-1-processing-n:dfw-pauth-msc02:8983_solr)
[ ] o.a.s.h.CdcrReplicatorManager Exception during bootstrap status reques
d for
/collections/sial-catalog-product/state.json
So is Zookeeper hosed? How do I tell?
On Tue, Jul 4, 2017 at 3:27 PM, Webster Homer wrote:
> We've been using cdcr for a while now. It seems to be pretty fragile.
>
> Currently we're seeing tons of errors like this:
> 20
restarting the zookeeper on the source cloud seems to have helped
On Tue, Jul 4, 2017 at 3:42 PM, Webster Homer wrote:
> Another strange error message I'm seeing
> 2017-07-04 18:59:40.585 WARN (cdcr-replicator-110-thread-
> 4-processing-n:dfw-pauth-msc02:8983_solr) [ ] o.a.s.h.
I've seen this a number of times. We do cdcr replication to a cloud, and
only the shard leader gets data.
CDCR source has 2 nodes and we replicate to 2 clouds, each of which has 4
nodes
Both source and targets have 2 shards
We frequently end up with collections where the target shard leader has
d
lica1",
"baseUrl": "http://uc1f-ecom-msc02:8983/solr";,
"nodeName": "uc1f-ecom-msc02:8983_solr",
"state": "active",
"leader": false,
"index":
{
"numDocs": 0,
"maxDocs": 0,
"deletedDocs"
Looking at the overseer API call as documented in the Solr Collections API
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-OVERSEERSTATUS:OverseerStatusandStatistics
The information returned looks like it could be useful in diagnosing
problems with Solrcloud.
It wo
We have buffers disabled as described in the CDCR documentation. We also
have autoCommit set for hard commits, but with openSearcher=false. We also
have autoSoftCommit set.
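For reference, the relevant solrconfig.xml section looks roughly like this
(a sketch; the 15 second hard / 2 second soft intervals match what we run,
exact values vary by environment):

<autoCommit>
  <!-- hard commit: flushes and rolls tlogs, but opens no new searcher -->
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit: makes newly indexed docs searchable -->
  <maxTime>2000</maxTime>
</autoSoftCommit>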
On Tue, Jul 11, 2017 at 5:00 PM, Xie, Sean wrote:
> Please see my previous thread. I have to disable buffer on source cluster
>
I have several fieldtypes that use the WordDelimiterFilterFactory.
We have a fieldtype for CAS numbers, which look like 1234-12-1: numbers
separated by hyphens. Users often leave out the hyphens and either use
spaces or just string the numbers together.
The WDF seemed like a great solution especia
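Roughly what we have now (a sketch; the fieldtype name is illustrative, but
KeywordTokenizer plus WDF is the actual combination, per the thread below):

<fieldType name="cas_number" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- split on the hyphens, keep the original, and also index
         the digit groups catenated together -->
    <filter class="solr.WordDelimiterFilterFactory"
            generateNumberParts="1"
            catenateNumbers="1"
            preserveOriginal="1"/>
  </analyzer>
</fieldType>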
fieldtype
On Wed, Jul 26, 2017 at 12:56 PM, Saurabh Sethi wrote:
> 1. What tokenizer are you using?
> 2. Do you have preserveOriginal="1" flag set in your filter?
> 3. Which version of solr are you using?
>
> On Wed, Jul 26, 2017 at 10:48 AM, Webster H
s most likely the cause.
> Also, based on your query, you might want to set preserveOriginal=1
>
> You can take one filter out at a time and see which one is altering the
> query.
>
> On Wed, Jul 26, 2017 at 11:13 AM, Webster Homer wrote:
>
> > 1. KeywordTokenizer
ly as the data is after it has been indexed" If the fieldtype
removes hyphens then you must enter the wildcard query without hyphens.
On Thu, Jul 27, 2017 at 8:35 AM, Shawn Heisey wrote:
> On 7/26/2017 12:33 PM, Webster Homer wrote:
> > checked the Pattern Replace it's OK. Can
text and your indexed
> tokens have -)?
>
> On Thu, Jul 27, 2017 at 12:03 PM, Webster Homer wrote:
>
> > Shawn,
> > Thank you for that. I didn't know about that feature of the WDF. It
> doesn't
> > help my situation but it's great to know about.
We have a Solrcloud environment that has 4 solr nodes and a 3 node
Zookeeper ensemble. All of the collections are configured to have 2 shards
with 2 replicas. In this environment we have 14 different collections. Some
of these collections are hardly touched others have a fairly heavy search
and u
he GC logs and seen
> if you have large stop-the-world GC pauses?
>
> In short, what you've described should be easily handled. My guess is
> GC pauses, I/O contention and/or flaky networks
>
> Best,
> Erick
>
> On Tue, Aug 8, 2017 at 11:35 AM, Webster Homer wrote:
Our most common use for solr is searching for products, not text search. My
company is in the process of migrating away from an Endeca search engine;
the goal, to keep the business happy, is to make sure that search results
from the different engines are fairly similar. One area that we have found
th
i.apache.org/solr/SchemaXml#Similarity
>
> Cheers,
> Peter Lancaster.
>
>
> -Original Message-
> From: Webster Homer [mailto:webster.ho...@sial.com]
> Sent: 08 August 2017 20:39
> To: solr-user@lucene.apache.org
> Subject: Solr 6 and IDF
>
> Our most c
It appears that all I need to do is create a class that
extends BM25Similarity, and have the new class return 1 as the idf. Is that
correct?
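If so, the remaining piece should just be registering it in the schema,
something like this (a sketch; the class name here is hypothetical):

<!-- schema.xml: global similarity; the subclass overrides idf() to return 1 -->
<similarity class="com.example.ConstantIdfBM25Similarity"/>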
On Tue, Aug 8, 2017 at 3:15 PM, Webster Homer wrote:
> I do want to use BM25, just disable IDF
>
> On Tue, Aug 8, 2017 at 2:58 PM, Peter
ct: Re: Solr 6 and IDF
> >
> > It appears that all I need to do is create a class that
> > extends BM25Similarity, and have the new class return 1 as the idf. Is
> that
> > correct?
> >
> > On Tue, Aug 8, 2017 at 3:15 PM, Webster Homer wrote:
We occasionally (frequently for some searches) see this exception. All
replicas appear up and stable when viewed from the Solr Admin Console.
The problem seems to be intermittent, but too frequent by far!
It's happening more for our Indian developers than for the US devs. We're
hitting the same sol
I have a need to override the default behavior of the BM25Similarity class.
It was trivial to create the class. My problem is that I cannot load it, at
least via the blob api as described here:
https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
I set enable.run
The blob store api is indeed severely limited (near useless) by this:
https://issues.apache.org/jira/browse/SOLR-9175
On Thu, Aug 10, 2017 at 4:08 PM, Webster Homer wrote:
> I have a need to override the default behavior of the BM25Similarty class.
> It was trivial to create the cla
is seems like a bug in solr to have it behave like this!
We are running Solr 6.2.0 with our production systems in Google Cloud. We
use cdcr to replicate from our on-prem systems to the Google Cloud.
On Wed, Jul 12, 2017 at 9:19 AM, Webster Homer wrote:
> We have buffers disabled as des
What field types are you using for your dates?
Have a look at:
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates
On Thu, Aug 17, 2017 at 10:08 AM, Nawab Zada Asad Iqbal wrote:
> Hi Krishna
>
> I haven't used date range queries myself. But if Solr only supports a
> particular da
Is it possible to facet on a payload field type?
We are moving from Endeca to Solr. We have a number of Endeca facets where
we have hacked in multilanguage support. The multiple languages are really
just for displaying the value of a term; internally, the value used to
search is in English. The pro
Certainly more than a byte of information. The most common example is to
have payloads encode floats. So if there is a limit, it's more likely to be
64 bits.
On Wed, Aug 23, 2017 at 2:10 PM, Markus Jelsma wrote:
> Technically they could, faceting is possible on TextField, but it would
> be useless
The payload idea was from my boss, it's similar to how they did this in
Endeca.
I'm not sure I follow your idea about "mapping internal value to translated
value". Would you care to elaborate?
My alternate idea is to have sets of facet fields for different languages,
then let our service layer dete
The issue is, that we lack translations for much of our attribute data. We
do have English versions. The idea is to use the English values for the
faceted values and for the filters, but be able to retrieve different
language versions of the term to the caller.
If we have a facet on color if the va
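Concretely, the facet-fields-per-language alternative would look something
like this in the schema (a sketch; field names are made up, and only the
English field would be used for filtering):

<!-- one stored facet value per language; search and filtering always
     go against the English field -->
<field name="color_facet_en" type="string" indexed="true"  stored="true" multiValued="true"/>
<field name="color_facet_es" type="string" indexed="false" stored="true" multiValued="true"/>
<field name="color_facet_fr" type="string" indexed="false" stored="true" multiValued="true"/>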
idea of encoding these translations as payloads
> wouldn't make sense -- because payloads exist per *occurrence* of the term
> -- ie: it wouldn't make sense to put "es=rojo;fr=rouge" in the payload of
> a term "red" when indexing a document, because you want those
I am using Solr 6.2.0 configured as a solr cloud with 2 shards and 4
replicas (total of 4 nodes).
If I run the query multiple times I see three different top scoring
results.
No data load is running; all data has been committed.
I get these three different hits with their scores:
copperiinitrat
ll
> include deleted documents... it's just a side effect of the index
> structure that we don't (and can't easily) update statistics when a
> document is marked as deleted.
>
> -Yonik
>
>
> > Erick
> >
> > On Wed, Sep 6, 2017 at 7:48 PM, Yonik Seeley wro
ever. use a second element like id to your
> ranking perhaps.
>
>
>
>
> On Thu, Sep 7, 2017 at 10:54 AM, Webster Homer wrote:
>
> > I am not concerned about deleted documents. I am concerned that the same
> > search gives different results after each search
solr cloud instances. This
was not the first time. This particular situation was on a development
system
On Thu, Sep 7, 2017 at 10:04 AM, Webster Homer wrote:
> the scores are not the same
> Doc
> 305340 432.44238
>
> On Thu, Sep 7, 2017 at 10:02 AM, David Hastings <
> hasting
We have several solr clouds, a couple of them have only 1 replica per
shard. We have never observed the problem when we have a single replica
only when there are multiple replicas per shard.
On Thu, Sep 7, 2017 at 10:08 AM, Webster Homer wrote:
> the scores are not the same
> Doc
&g
on which replica serves the query, the
> order of docs may be somewhat different if the scores are close.
>
> optimizing squeezes all the deleted documents out of all the replicas
> so the scores become identical.
>
> This doesn't happen, of course, if you have only one replica.
updated constantly, and would not
lend themselves to being optimized. What is the best approach for these?
Thanks
On Fri, Sep 8, 2017 at 9:47 AM, Shawn Heisey wrote:
> On 9/7/2017 8:54 AM, Webster Homer wrote:
> > I am not concerned about deleted documents. I am concerned that the same
&
Is it possible to use the streaming API to stream documents from a
collection and load them into a new collection? I was thinking that this
would be a great way to get a random sample of data from our main
collections to developer machines. Making it a random sample would be
useful as well. This lo
I am trying to create a filter that normalizes an input token, but also
splits it into multiple pieces. Sort of like what the WordDelimiterFilter
does.
It's meant to take a molecular formula like C2H6O and normalize it to C2H6O1
That part works. However I was also going to have it put out the ind
e:
> pattern="([A-Z][a-z]?\d+)" preserveOriginal="true" />
>
> This will capture all atom counts as separate tokens.
>
> HTH,
> Emir
>
> > On 26 Sep 2017, at 23:14, Webster Homer wrote:
> >
> > I am trying to create a filter that normalizes an input toke
over thinking it.
>
> Mind to share?
>
> -Stefan
>
> On Sep 27, 2017 4:34 PM, "Webster Homer" wrote:
>
> > There is a need for a special filter since the input has to be
> normalized.
> > That is the main requirement, splitting into pieces is op
Check that you have autoCommit enabled in the target collection's
solrconfig.
Try sending a commit to the target collection. If you don't have autoCommit
enabled then the data could be replicating but not committed, and so not
searchable.
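An explicit commit is just an update request; for example, POST this body
to http://<target-host>:8983/solr/<target-collection>/update (host and
collection are placeholders):

<commit waitSearcher="true"/>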
On Thu, Sep 28, 2017 at 1:57 AM, Jiani Yang wrote:
> Hi,
>
> Recently I am tryi
Are there diagnostic tools to test that nodes of a cloud are communicating
correctly with each other and their Zookeeper?
We sometimes see strange behavior of our nodes and it would be nice to have
a tool that could detect network issues, especially intermittent ones.
A colleague of mine was testing how solrcloud replica recovery works. We
have had a lot of issues with replicas going into recovery mode, replicas
down and in recovery failed states. So to test, he deleted a healthy
replica in one of our development environments. First the delete operation
timed out, but the r
We have begun to see errors around too many open files on one of our
solrcloud nodes. One replica tries to open >8000 files. This replica tries
to start up and then fails; the open files limit is exceeded on startup as
it tries to recover.
Our solrclouds have 12 distinct collections. I would think tha
h case you might want to split that
> shard up and move the sub-shards to some other machine.
>
> Best,
> Erick
>
> On Thu, Oct 5, 2017 at 10:02 AM, Webster Homer wrote:
> > We have begun to see errors around too many open files on one of our
> > solrcloud node
8,000 files seems very odd though. Is it
> a massive index? The default max segment size is 5G, so you could have
> a gazillion small segments in which case you might want to split that
> shard up and move the sub-shards to some other machine.
>
> Best,
> Erick
>
> On Thu, Oc
Interestingly, many of these tlog files (5428 out of 8007) have 0
length!? What would cause that? As I stated this is a cdcr target
collection.
On Thu, Oct 5, 2017 at 1:19 PM, Webster Homer wrote:
> I wouldn't call it massive. The index is ~9 million documents. So not too
&g
:{"/cdcr":{
"name":"/cdcr",
"class":"solr.CdcrRequestHandler",
"buffer":{"defaultState":"disabled"}}
These are all in our QA environment
On Thu, Oct 5, 2017 at 2:43 PM, Erick Erickson wrote:
> OK,
It seems that there was a networking error just prior to the creation of
the 0 length files:
The files from Sep 27 are all written at 17:56.
There was minor packet loss (1 out of 10 packets per 60 second interval)
just prior to that time.
On Thu, Oct 5, 2017 at 3:11 PM, Webster Homer wrote:
look from ZK perspective after deleting that replica?
>
> Thanks,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 5 Oct 2017, at 16:17, Webster Homer
dn't be necessary, just in case.
>
> Best,
> Erick
>
> On Fri, Oct 6, 2017 at 10:34 AM, Webster Homer wrote:
> > The replica was deleted using the deleteReplica collections API call. The
> > call timed out, but eventually completed. However something still he
We are using Solr 6.2.0 in solrcloud mode
I have a QA solrcloud that has multiple collections. All collections have 2
shards each with two replicas.
I have several replicas where the numDocs in the same shard do not match.
In two collections with three different shards I have one replica with dat
data loads
to the out of whack collections?
On Fri, Oct 6, 2017 at 2:04 PM, Webster Homer wrote:
> We are using Solr 6.2.0 in solrcloud mode
>
> I have a QA solrcloud that has multiple collections. All collections have
> 2 shards each with two replicas.
>
> I have several
I have an application which currently uses a boolean query. The query could
have a large number of boolean terms. I know that the TermsQuery doesn't
have the same limitations as the boolean query. However I need to maintain
the order of the original terms.
The query terms from the boolean query ar
he code that TermsQueryParser uses bypasses scoring on the
> theory that these very large OR clauses are usually useless for
> scoring, your application is an outlier. But you knew that already ;)
>
>
> Best,
> Erick
>
> On Wed, Oct 18, 2017 at 9:42 AM, Webster Homer wrote:
I have a Replica marked as down in Production, but the diagnostics as to
why it's down are useless. All we see is a NullPointerException
I see this error message in the log:
2017-10-30 14:17:39.008 ERROR (qtp472654579-39773) [ ] o.a.s.s.HttpSolrCall
null:org.apache.solr.common.SolrException: SolrC
I have a potential use case for solr searching via streaming expressions.
I am currently using solr 6.2.0, but we will soon be upgrading to the 7.1.0
version.
I started testing out searching using streaming expressions.
1. If I use an alias instead of a collection name it fails. I see that
there i
ly only supports sorting by fields.
>
> You can sort by score using the default /select handler.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Oct 31, 2017 at 1:50 PM, Webster Homer wrote:
>
> > I have a potential use case for solr searching vi
g to use
> streaming for
> a use-case it was not designed for.
>
> You have two choices here.
> > use streaming as it was intended,
> > use cursorMark for processing in batches.
>
> Best,
> Erick
>
> On Wed, Nov 1, 2017 at 8:33 AM, Webster Homer wrote:
>
I'm using Solr 6.2.0. I am trying to understand how the streaming api works.
In 6.2 simple expressions seem to behave well. I am having a problem making
the joins work. I don't see errors, but I don't see data either.
Using the Solr Admin Console for testing, this query works:
search(test-catalog
ge clusters.
> You won't be able to get good performance with large joins on a single node
> or even a small cluster.
>
> So you'll really need to think about how the joins are designed and whether
> they fit your use case.
>
> Joel Bernstein
> http://joelsolr.bl
My company uses Dynatrace for most everything in production. They have a
plugin for Solr that works with 6.*
On Thu, Nov 2, 2017 at 4:05 PM, Emir Arnautović <emir.arnauto...@sematext.com> wrote:
> Hi Robi,
> Did you try Sematext’s SPM? It provides host, JVM and Solr metrics and
> more. We use it
We're in the process of upgrading to Solr 7.1. I noticed that the 7.1 Admin
Dashboard in the Console no longer displays the Args section showing all
the startup parameters. Instead I just see "Arg" with nothing next to it.
This has been quite useful as we don't have access to the startup scripts
in
> Webster:
>
> What browser and version of browser?
>
> On Tue, Nov 14, 2017 at 7:49 AM, Shawn Heisey wrote:
> > On 11/14/2017 8:33 AM, Webster Homer wrote:
> >> We're in the process of upgrading to Solr 7.1. I noticed that the 7.1
> Admin
> >> Das
Prior to upgrading to 7.1 from 6.2 this innerJoin streaming expression
returned in ~10 seconds when run from the Solr Admin Console:
innerJoin(
search(sial-catalog-material,
q="*:*",fl="id_record_spec,display_material_number,display_package_size,display_material_number,display_material_qty,displa
For what it's worth, all of our solr installations install solr as a service
On Tue, Nov 14, 2017 at 12:43 PM, Webster Homer wrote:
> I am using chrome Version 62.0.3202.94 (Official Build) (64-bit)
>
> I only see a little icon and the word "Args" with nothing displayed
I @ jquery-2.1.3.min.js:27
On Tue, Nov 14, 2017 at 7:12 PM, Rick Leir wrote:
> Homer
> In chrome, right-click and choose 'inspect' at the bottom. Now go to the
> network tab then reload the page. Are you seeing errors? Tell!
> Thanks
> Rick
>
> On November 14, 2017
http://localhost:8983/solr/admin/info/system and parse the JSON response
> in
> various ways. The Args section comes from the "jvm.jmx.commandLineArgs"
> section of that. Somewhere maybe that data is being requested twice and
> making a duplicate set of data for the UI to parse?
hese twice.
We were migrating from solr 6.2.0 if that makes any difference
On Wed, Nov 15, 2017 at 12:55 PM, Shawn Heisey wrote:
> On 11/15/2017 8:40 AM, Webster Homer wrote:
> > I do see errors in both Consoles. I see more errors on the ones that
> don't
> > display Arg
I am converting a schema from 6 to 7 and in the process I removed the Trie
field types and replaced them with Point field types.
My schema also had fields defined as "int" and "long". These seem to have
been removed as well, but I don't remember seeing that documented.
In my original schema the _
Oh sorry missed that they were defined as trie fields. For some reason I
thought that they were Java classes
On Thu, Nov 16, 2017 at 4:23 PM, Webster Homer wrote:
> I am converting a schema from 6 to 7 and in the process I removed the Trie
> field types and replaced them with Point field
I am developing an application that uses cursorMark deep paging. It's a
java client using the solrj client.
Currently the client is created with Solr 6.2 solrj jars, but the test
server is a solr 7.1 server
I am getting this error:
Error from server at http://XX:8983/solr/sial-catalog-product: Cu
As I suspected this was a bug in my code. We use KIE Drools to configure
our queries, and there was a conflict between two rules.
On Mon, Nov 20, 2017 at 4:09 PM, Webster Homer wrote:
> I am developing an application that uses cursorMark deep paging. It's a
> java client using s
We also have the same configurations used in different environments. We
upload the configset to zookeeper and use the Config API to overlay
environment specific settings in the solrconfig.xml. We have avoided having
collections share the same configsets, basically for this reason.
If CDCR supporte
While setting up cdcr on a server I noticed that there were a lot of
messages being written to the solr.log. All INFO.
2016-12-02 20:32:59.096 INFO
(cdcr-replicator-100-thread-2-processing-n:stlpj1scld.sial.com:8983_solr
x:sial-catalog-product_shard1_replica1 s:shard1 c:sial-catalog-product
r:core
We would like to load a collection and have it replicate out to multiple
clusters. For example we want a US cluster to be able to replicate to
Europe and Asia.
I tried to create two source cdcrRequestHandlers
/cdcr01 and /cdcr02 each differing by their target zookeepers
When the target handlers w
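If I am reading the CDCR config format right, the way to declare multiple
targets is repeated replica sections under a single /cdcr handler rather
than separate handlers. A sketch with hypothetical zkHost and collection
names:

<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <!-- one replica section per target cluster -->
  <lst name="replica">
    <str name="zkHost">europe-zk:2181</str>
    <str name="source">my-collection</str>
    <str name="target">my-collection</str>
  </lst>
  <lst name="replica">
    <str name="zkHost">asia-zk:2181</str>
    <str name="source">my-collection</str>
    <str name="target">my-collection</str>
  </lst>
</requestHandler>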
I have been testing and setting up CDCR replication between Solrcloud
instances.
We are currently using Solr 6.2
We have a lot of collections and a number of environments for testing and
deployment. It seemed that using properties in the cdcrRequestHandler would
help a lot. Since we have a naming
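The shape of what we want is roughly this (a sketch; ${TargetZk} is from
the documentation's example, and the other property names are hypothetical,
resolved per environment from core or collection properties):

<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">${TargetZk}</str>
    <str name="source">${SourceCollection}</str>
    <str name="target">${TargetCollection}</str>
  </lst>
</requestHandler>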
We are using Solr Cloud 6.2
We have been noticing an issue where the index in a core shows as current =
false
We have autocommit set for 15 seconds, and soft commit at 2 seconds
This seems to cause two replicas to return different hits depending upon
which one is queried.
What would lead to the
tically
> occur. Updates should be fine.
>
> BTW, I've seen continuous monitoring of this done by automated
> scripts. The key is to get the shard URL and ping that with
> &distrib=false. It'll look something like
> http://host:port/solr/collection_shard1_replica1...
-consistent. That is, all the replicas for shardN on the source
> cluster show the same documents (M). All the replicas for shardN on
> the target cluster show the same number of docs (N). I'm not as
> concerned if M != N at this point. Note I'm looking at the number of
>
le there was
>> active indexing.
>>
>> How do you fix this problem when you see it? If it goes away by itself
>> that would gives at least a start on where to look. If you have to
>> manually intervene it would be good to know what you do.
>>
>> Th
31 PM, Webster Homer wrote:
> Looking through our replicas I noticed that in one of our shards (each
> shard has 2 replicas)
> 1 replica shows:
> "replicas": [
>
> {
> "name": "core_node1",
> "core": "sial-catalog-material_shard
While testing CDCR I found that it is writing tons of log messages per
second. Example:
2016-12-21 23:24:41.652 INFO (qtp110456297-13) [c:sial-catalog-material
s:shard1 r:core_node1 x:sial-catalog-material_shard1_replica1]
o.a.s.c.S.Request [sial-catalog-material_shard1_replica1] webapp=/solr
pat
The logs filled up the file system and caused CDCR to fail due to a
corrupted Tlog file.
On Thu, Dec 22, 2016 at 9:10 AM, Webster Homer wrote:
> While testing CDCR I found that it is writing tons of log messages per
> second. Example:
> 2016-12-21 23:24:41.652 INFO (qtp110456297-13
That doesn't address the CDCR logging verbosity, but it might get you by.
>
> You can also change the logging at the class level by appropriately
> editing the
> log4j properties file. Again perhaps not the best solution but one
> that's immediately
> available.
>
> Be