We have a SolrCloud set up with 2 nodes and 1 ZooKeeper, running Solr 7.7.2.
This cloud is used for development purposes. Collections are sharded across the
2 nodes.
Recently we noticed that one of the main collections we use had both replicas
running on the same node. Normally we don't see coll
Never mind, I figured out my problem.
-Original Message-
From: Webster Homer
Sent: Thursday, August 27, 2020 10:29 AM
To: solr-user@lucene.apache.org
Subject: Odd Solr zkcli script behavior
I am using solr 7.7.2 solr cloud
We version our collection and config set names with dates. I have two
collections sial-catalog-product-20200711 and sial-catalog-product-20200808. A
developer uploaded a configuration file to the 20200711 version that was not
checked into our source control, and
I forgot to mention, the fields being used in the function query are indexed
fields. They are mostly text fields that cannot have DocValues
-Original Message-
From: Webster Homer
Sent: Thursday, July 23, 2020 2:07 PM
To: solr-user@lucene.apache.org
Subject: RE: How to measure search
called for the docs
returned in the packet, i.e. the “rows” parameter.
Best,
Erick
> On Jul 23, 2020, at 11:49 AM, Webster Homer
> wrote:
>
> I'm trying to determine the overhead of adding some pseudo fields to one of
> our standard searches. The pseudo fields are simply
I'm trying to determine the overhead of adding some pseudo fields to one of our
standard searches. The pseudo fields are simply function queries to report if
certain fields matched the query or not. I had thought that I could run the
search without the change and then re-run the searches with th
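The pseudo fields described above can be expressed with Solr's `exists()` and `query()` function queries in the `fl` parameter. A minimal sketch of building such a request follows; the field name `name_exact` and the base params are hypothetical, not from the original message, and Python is used only for illustration:

```python
# Sketch: add "did this field match?" pseudo fields to a Solr request via
# function queries. exists(query(...)) evaluates to true for docs that
# match the embedded query. Field names here are hypothetical.
def with_match_flags(base_params, fields, user_query):
    params = dict(base_params)
    fl = [params.get("fl", "*")]
    for f in fields:
        # per-field match flag as a pseudo field
        fl.append(f"matched_{f}:exists(query({{!v='{f}:({user_query})'}}))")
    params["fl"] = ",".join(fl)
    return params

params = with_match_flags({"q": "aspirin", "fl": "id,score"}, ["name_exact"], "aspirin")
print(params["fl"])
```

Whether the extra function queries add measurable overhead is exactly what the original post was trying to measure; the debug timings mentioned in the reply are one way to compare runs.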
My company is very interested in using Learning To Rank in our product search.
The problem we face is that our product search groups its results and that does
not work with LTR.
https://issues.apache.org/jira/browse/SOLR-8776
Is there any traction to getting the SOLR-8776 patch into the main bra
Markus,
Thanks for the reference, but that doesn't answer my question. If - is a
special character, it's not consistently special. In my example "3-DIMETHYL"
behaves quite differently than ")-PYRIMIDINE". If I escape the closing
parenthesis the following minus no longer behaves specially. The
Recently we found strange behavior in a query. We use eDismax as the query
parser.
This is the query term:
1,3-DIMETHYL-5-(3-PHENYL-ALLYLIDENE)-PYRIMIDINE-2,4,6-TRIONE
It should hit one document in our index. It does not. However, if you use the
Dismax query parser it does match the record.
Th
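For reference, the Lucene/edismax query parsers treat characters such as `-`, `(`, and `)` as operators. A small helper that backslash-escapes them (a sketch, not from the original thread) looks like:

```python
import re

# Sketch: backslash-escape the characters the Lucene query parsers treat
# as operators, so a literal chemical name such as
# 1,3-DIMETHYL-5-(3-PHENYL-ALLYLIDENE)-PYRIMIDINE-2,4,6-TRIONE
# is searched as plain text rather than parsed as syntax.
SPECIAL = re.compile(r'([+\-!(){}\[\]^"~*?:\\/]|&&|\|\|)')

def escape_term(term: str) -> str:
    return SPECIAL.sub(r"\\\1", term)

print(escape_term("(3-PHENYL)"))
```

This does not by itself explain why `3-DIMETHYL` and `)-PYRIMIDINE` behave differently under eDismax, but escaping the whole term removes the parser's special handling from the equation.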
separately in the timings section of debug output…
Best,
Erick
> On May 28, 2020, at 4:52 PM, Webster Homer
> wrote:
>
> My concern was that I thought that explain is resource heavy, and was only
> used for debugging queries.
>
> -Original Message-
> Fro
functionality is slower than Endeca's?
Or harder to understand/interpret?
If the latter, I might recommend http://splainer.io as one solution
On Thu, May 21, 2020 at 4:52 PM Webster Homer <
webster.ho...@milliporesigma.com> wrote:
> My company is working on a new website. The old/curren
this
functionality is an expensive debug option. Is there another way to get this
information from a query?
Webster Homer
This message and any attachment are confidential and may be privileged or
otherwise protected from disclosure. If you are not the intended recipient, you
must not copy this
My company has several Solrcloud environments. In our most active cloud we are
seeing outages that are related to GC pauses. We have about 10 collections of
which 4 get a lot of traffic. The solrcloud consists of 4 nodes with 6
processors and 11Gb heap size (25Gb physical memory).
I notice that
Hi,
My company is looking at using the Learning to Rank. However, our main searches
do grouping. There is an old Jira from 2016 about how these don't work together.
https://issues.apache.org/jira/browse/SOLR-8776
It doesn't look like this has moved much since then. When will we be able to
re-rank
I was just looking at the Schema Browser for one of our collections. It's
pretty handy. I was thinking that it would be useful to create a tool that
would create a report about which fields were indexed, had docValues, were
multivalued, etc.
Has someone built such a tool? I want it to aid in est
We are looking at upgrading our Solrcloud instances from 7.2 to the most recent
version of Solr, 8.4.1 at this time. The last time we upgraded a major solr
release we were able to upgrade the index files to the newer version, this
prevented us from having an outage. Subsequently we've reindexed a
t;>> How long delay do you see? Is it only for query panel or for the UI in
>>> general?
>>> A query for *:* is not necessarily a simple query, it depends on how many
>>> and large fields you have etc. Try a query with fl=id or fl=title and see
>>> if th
>>>>> What version of Solr?
>>>>>
>>>>>
>>>>>
>>>>> Joel Bernstein
>>>>> http://joelsolr.blogspot.com/
>>>>>
>>>>>
>>>>> On Tue, Dec 10, 2019 at 5:58 PM Arnold
It seems like the Solr Admin console has become slow when you use it on the
chrome browser. If I go to the query tab and execute a query, even the default
*:* after that the browser window becomes very slow.
I'm using chrome Version 78.0.3904.108 (Official Build) (64-bit) on Windows
The work aro
s, tfloat:* does
Hi Webster,
> The fq facet_melting_point:*
"Point" numeric fields don't support that syntax currently, and the way to
retrieve "docs with any value in field foo" is "foo:[* TO *]". See
https://issues.apache.org/jira/browse/SOLR-11746
O
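The working existence filter for Point-typed fields mentioned above can be sketched as a tiny helper (Python for illustration only):

```python
# Sketch: for Point numeric fields, "field:*" does not match documents
# that have a value; an open range query does. This returns the fq form
# that works for both Trie and Point field types.
def has_value_fq(field: str) -> str:
    return f"{field}:[* TO *]"

print(has_value_fq("facet_melting_point"))
```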
The fq facet_melting_point:*
returns 0 rows. However, the field clearly has data in it; why doesn't this
query return the rows that have data?
I am trying to update our solr schemas to use the point fields instead of the
trie fields.
We have a number of pfloat fields. These fields are indexed and
I couldn't get values for index_date either, even though every record has it set
with the default of NOW.
So what am I doing wrong with the point fields? I expected to be able to do
just about everything with the point fields I could do with the deprecated trie
fields.
Regards,
Webster H
Tlogs will accumulate if you have buffers "enabled". Make sure that you
explicitly disable buffering from the cdcr endpoint
https://lucene.apache.org/solr/guide/7_7/cdcr-api.html#disablebuffer
Make sure that they're disabled on both the source and targets
I believe that sometimes buffers get enab
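The disablebuffer call is just an HTTP request against each collection's cdcr handler, per the reference-guide link above. A sketch of building the URL (host and collection names are hypothetical):

```python
# Sketch: build the CDCR DISABLEBUFFER request URL for one collection.
# This must be issued against both the source and target clusters, as
# described above. Host and collection names are made up.
def cdcr_disablebuffer_url(base: str, collection: str) -> str:
    return f"{base}/solr/{collection}/cdcr?action=DISABLEBUFFER"

url = cdcr_disablebuffer_url("http://source-host:8983", "sial-catalog-product")
print(url)
```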
-
From: Mikhail Khludnev
Sent: Friday, October 04, 2019 2:28 PM
To: solr-user
Subject: Re: json.facet throws ClassCastException
Hello, Webster.
Have you managed to capture stacktrace?
On Fri, Oct 4, 2019 at 8:24 PM Webster Homer <
webster.ho...@milliporesigma.com> wrote:
> I'm t
I'm trying to understand what is wrong with my query or collection.
I have a functioning solr schema and collection. I'm running Solr 7.2
When I run with a facet.field it works, but if I change it to use a json.facet
it throws a class cast exception.
json.facet=prod:{type:terms,field:product,mi
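One way to rule out malformed json.facet syntax is to serialize the facet spec from a data structure rather than hand-writing it; whether that relates to the ClassCastException above is a separate question. A sketch (the mincount/limit values are made up):

```python
import json

# Sketch: build the json.facet parameter value from a dict so the JSON is
# guaranteed well-formed. Field name "product" is from the message; the
# mincount and limit values are hypothetical.
facet = {"prod": {"type": "terms", "field": "product", "mincount": 1, "limit": 10}}
param = json.dumps(facet)
print(param)
```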
the text (line breaks, tabs
> etc.).
>
>> Am 27.09.2019 um 16:42 schrieb Webster Homer
>> :
>>
>> I forgot to mention that I'm using Solr 7.2. I also found that if
>> instead of \p{L} I use the long form \p{Letter} then when I reload
>> the collection a
-----Original Message-----
From: Webster Homer
Sent: Friday, September 27, 2019 9:09 AM
To: solr-user@lucene.apache.org
Subject: Strange regex behavior in solr.PatternReplaceCharFilterFactory
I am developing a new version of a fieldtype that we’ve been using for several
years. This fieldtype is to be used as a part of an autocomplete code. The
original version handled standard ascii characters well, but I wanted it to be
able to handle any Unicode letter, not just A-Za-z but Greek an
-Original Message-
From: Webster Homer
Sent: Monday, September 09, 2019 4:17 PM
To: solr-user@lucene.apache.org
Subject: CDCR tlog corruption leads to infinite loop
We are running Solr 7.2.0
Our configuration has several collections that are loaded into a solr cloud
which is set to replicate using CDCR to 3 different solrclouds. All of our
target collections have 2 shards with two replicas per shard. Our source
collection has 2 shards, and 1 replica per sh
unexpected with 0
relevancy.
It does appear that bq does what I want, but the behavior of boost seems like a
bug. We use boost elsewhere and it works as we want, that use case does not
involve using the query function though.
-Original Message-
From: Webster Homer
Sent: Thursday, April
Hi,
I am trying to understand how the boost (and bq) parameters are supposed to
work.
My application searches our product schema and returns the best matches. To
enable an exactish match on product name we created fields that are minimally
tokenized (keyword tokenizer/lowercase). Now I want the
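For illustration, the two boosting styles differ roughly as follows with edismax: bq adds an additive boost query to the score, while boost multiplies the score by a function. The field name `name_exact` is a hypothetical stand-in for the minimally tokenized field described above:

```python
# Sketch: additive (bq) vs multiplicative (boost) boosting with edismax.
# Field names and boost values are hypothetical.
additive = {
    "defType": "edismax",
    "q": "acetone",
    "qf": "name description",
    "bq": "name_exact:acetone^100",  # boost query: added to the score on match
}
multiplicative = {
    "defType": "edismax",
    "q": "acetone",
    "qf": "name description",
    # boost function: scales the whole score when the exact field matches
    "boost": "if(exists(query({!v='name_exact:acetone'})),10,1)",
}
print(additive["bq"], multiplicative["boost"])
```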
I am using the CloudSolrClient Solrj api for querying solr cloud collections.
For the most part it works well. However we recently experienced a series of
outages where our production cloud became unavailable. All the nodes were down.
That's a separate topic... The client application tried to la
We have a number of collections in a Solrcloud.
The cloud has 2 shards each with 2 replicas, 4 nodes. On one of the nodes I am
seeing a lot of errors in the log like this:
2019-02-04 20:27:11.831 ERROR (qtp1595212853-88527) [c:sial-catalog-product
s:shard1 r:core_node4 x:sial-catalog-product_sha
need to narrow the focus to a
smaller number of records, and I'm not certain how to do that efficiently. Are
there debug parameters that could help?
-Original Message-
From: Webster Homer
Sent: Thursday, December 20, 2018 3:45 PM
To: solr-user@lucene.apache.org
Subject: Query
We are experiencing almost nightly solr crashes due to Japanese queries. I’ve
been able to determine that one of our field types seems to be a culprit. When
I run a much reduced version of the query against our DEV solrcloud I see the
memory usage jump from less than a gb to 5gb using only a sin
Recently we had a few Japanese queries that killed our production Solrcloud
instance. Our schemas support multiple languages, with language specific search
fields.
This query and similar ones caused OOM errors in Solr:
モノクローナル抗ニコチン性アセチルコリンレセプター(α7サブユニット)抗体 マウス宿主抗体
The query doesn’t match anyth
Is there a way to get an approximate measure of the memory used by an indexed
field(s). I’m looking into a problem with one of our Solr indexes. I have a
Japanese query that causes the replicas to run out of memory when processing a
query.
Also, is there a way to change or disable the timeout in
We are using Solr 7.2. We have two solrclouds that are hosted on Google clouds.
These are targets for an on Prem solr cloud where we run our ETL loads and
have CDCR replicate it to the Google clouds. This mostly works pretty well.
However, networks can fail. When the network has a brief outage
My company is planning on upgrading our stack to use Java 11. What version of
Solr is planned to be supported on Java 11?
We won't be doing this immediately, as several of our key components have not
yet been ported to 11, but we want to plan for it.
Thanks,
Webster
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, November 06, 2018 12:36 PM
To: solr-user
Subject: Re: Negative CDCR Queue Size?
What version of Solr? CDCR has changed quite a bit in the 7x code line so it's
important to know the version.
On Tue, Nov 6, 2018 at 10:32 AM Webster Homer
wrote:
Several times I have noticed that the CDCR action=QUEUES will return a negative
queueSize. When this happens we seem to be missing data in the target
collection. How can this happen? What does a negative Queue size mean? The
timestamp is an empty string.
We have two targets for a source. One lo
The KeywordRepeat and RemoveDuplicates were added to support better wildcard
matching. Removing the duplicates just removes those terms that weren't
stemmed.
This seems like a subtle bug to me
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: Tuesday, Oc
I noticed that sometimes query matches seem to get counted twice when they are
scored. This will happen if the fieldtype is being stemmed, and there is a
matching synonym.
It seems that the score for the field is 2X higher than it should be. We see
this only when there is a matching synonym that
I have a fairly complex query which I'm trying to debug. The query works as
long as I don't try to return the field: [explain style=nl]
Of course this is the data I'm really interested in. When I run it, the console
acts busy, but then the screen clears displaying no data. I suspect a timeout
in
This morning I was told that there was something screwy with one of our
collections.
This collection has 2 shards and 2 replicas per shard. Each replica has a
different value for numDocs!
Datacenter #1
shard1_replica1  1513053
shard1_replica2  1512653
shard2_replica1  1512296
shard2_replica2
use the Solrcloud Collections API
https://lucene.apache.org/solr/guide/7_3/collections-api.html#list
On Tue, Jul 17, 2018 at 12:12 PM, Kudrettin Güleryüz
wrote:
> Hi,
>
> What is the suggested way to get list of collections from a solr Cloud with
> a ZKhost?
>
> Thank you
>
--
This message a
I have a fairly large existing code base for querying Solr. It is
architected so that common code calls Solr and returns a solrj QueryResponse
object.
I'm currently using Solr 7.2 the code interacts with solr using the Solrj
client api
I have a need that would be very easily met by using the json.f
Recently I encountered some problems with CDCR after we experienced network
problems, I thought I'd share.
I'm using Solr 7.2.0
We have 3 solr cloud instances where we update one cloud and use cdcr to
forward updates to the two solrclouds that are hosted in a cloud.
Usually this works pretty well
We do group queries with Solrcloud all the time. You must set up your
collection so that all values for the field you are grouping on are in the
same shard.
This can easily be done with the composite router. Basically you do this by
creating a unique field that contains the field to group on, with
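With the compositeId router, co-location is achieved by prefixing each document id with the grouping value and `!`. A sketch of building such ids (the group value and id here are hypothetical):

```python
# Sketch: compositeId routing key. All documents whose id shares the same
# "groupvalue!" prefix hash to the same shard, which is what grouped
# queries need. Names are hypothetical.
def route_id(group_value: str, doc_id: str) -> str:
    return f"{group_value}!{doc_id}"

print(route_id("PROD123", "sku-0001"))
```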
I was looking at SOLR-12057
According to the comment on the ticket, CDCR can not work when a collection
has PULL Replicas. That seems like a MAJOR limitation to CDCR and PULL
Replicas. Is this likely to be addressed in the future?
CDCR currently is broken for TLOG replicas too.
https://issues.apa
added to CDCR.
>
> But I don't recall your other e-mails mention CDCR so I mention this
> on the off chance...
>
> Best,
> Erick
>
> On Mon, Apr 2, 2018 at 10:35 AM, Webster Homer
> wrote:
> > Over the weekend one of our Dev solrcloud ran out of dis
Over the weekend one of our Dev solrcloud ran out of disk space. Examining
the problem we found one collection that had 2 months of uncommitted tlog
files. Unfortunately the solr logs rolled over and so I cannot see the
commit behavior during the last time data was loaded to it.
The solrconfig.xml
This Zookeeper ensemble doesn't look right.
>
> ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
> 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
Shouldn't the zookeeper ensemble be specified as:
zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181
You should put the
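A quick guard against the malformed connect string above is to normalize every host to carry an explicit port; a sketch:

```python
# Sketch: give every host in a ZooKeeper connect string an explicit port.
# Without this, "zk0,zk1,zk3:2181" only attaches the port to the last host.
def normalize_zk(hosts: str, default_port: int = 2181) -> str:
    out = []
    for h in hosts.split(","):
        out.append(h if ":" in h else f"{h}:{default_port}")
    return ",".join(out)

print(normalize_zk("zk0-esohad,zk1-esohad,zk3-esohad:2181"))
```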
rytime. We use Solr as a search engine,
we almost always want to retrieve results in order of relevancy.
I think that we will phase out the use of NRT replicas in favor of TLOG
replicas
On Fri, Mar 23, 2018 at 7:04 PM, Shawn Heisey wrote:
> On 3/23/2018 3:47 PM, Webster Homer wrote:
> &g
Mar 23, 2018 at 6:44 PM, Shawn Heisey wrote:
> On 3/23/2018 3:24 PM, Webster Homer wrote:
> > I see this in the output:
> > Lexical error at line 1, column 1759. Encountered: after :
> > "/select?defType=edismax&start=0&rows=25&...
> > It has basically
Just FYI I had a project recently where I tried to use cursorMark in
Solrcloud and solr 7.2.0 and it was very unreliable. It couldn't even
return consistent numberFound values. I posted about it in this forum.
Using the start and rows arguments in SolrQuery did work reliably so I
abandoned cursorMa
I am working on a program to play back queries from a log file. It seemed
straight forward. The log has the solr query written to it. via the
SolrQuery.toString method. The SolrQuery class has a constructor which
takes a string. It does instantiate a SolrQuery object, however when I try
to actuall
> > Commits worked fine after that. I don't know what caused the commits to
> > stop, and why re-booting (and not just restarting Solr) caused them to
> work
> > fine.
> >
> > Wondering if you ever found a solution to your situation?
> >
> >
> >
You probably want to call solr.FlattenGraphFilterFactory after the call
to WordDelimiterGraphFilterFactory. I put it at the end
Also there is an issue calling more than one graph filter in an analysis
chain so you may need to remove one of them. I think that there is a Jira
about that
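The ordering described above might look like this in a schema. This is a sketch only; the field type name, tokenizer, and filter parameters are hypothetical, and only the placement of FlattenGraphFilterFactory (index-time chain, after the graph filter) reflects the advice:

```xml
<!-- Sketch: flatten the token graph at index time only, after the graph
     filter. Type name and parameters are hypothetical. -->
<fieldType name="text_wdgf" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.FlattenGraphFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```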
Personal
seems that this is a bug in Solr
https://issues.apache.org/jira/browse/SOLR-12057
Hopefully it can be addressed soon!
On Mon, Mar 5, 2018 at 4:14 PM, Webster Homer
wrote:
> I noticed that the cdcr action=queues returns different results for the
> target clouds. One target says th
m:2181/solr",
[
"b2b-catalog-material-180124T",
[
"queueSize",
0,
"lastTimestamp",
"2018-02-28T18:34:39.704Z"
]
],
"yyy-mzk01.sial.com:2181,yyy-mzk02.sial.com:2181,
yyy-mzk03.sial.com:2181/solr",
[
"b2b-catalog-material-180124T",
[
"qu
=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT
no errors
autoCommit is set to 6. I tried sending a commit explicitly; no
difference. CDCR is uploading data, but no new data appears in the
collection.
On Fri, Mar 2, 2018 at 1:39 PM, Webster Homer
wrote:
> We have been having strange behavior w
We have been having strange behavior with CDCR on Solr 7.2.0.
We have a number of replicas which have identical schemas. We found that
TLOG replicas give much more consistent search results.
We created a collection using TLOG replicas in our QA clouds.
We have a locally hosted solrcloud with 2 no
> collections do not include the "myParent1".
> This makes the names of my collections more confusing because you can't
> tell what application they belong to. It wasn’t a problem until we had 2
> collections for one of the apps.
>
>
>
>
> -Original M
NRT
replicas. So using TLOG replicas still looks like the best work around for
the NRT issue
On Fri, Mar 2, 2018 at 10:44 AM, Shawn Heisey wrote:
> On 3/2/2018 9:28 AM, Webster Homer wrote:
>
>> I've never disabled this before. I edited the solrconfig.xml setting the
>> size
indexed? The documentation on this
feature is skimpy.
Is there a way to see if it's enabled in the Admin Console?
On Tue, Feb 27, 2018 at 9:31 AM, Webster Homer
wrote:
> Emir,
>
> Using tlog replica types addresses my immediate problem.
>
> The secondary issue is that all of our sear
Your problem seems a lot like an issue I see with Near Real Time (NRT)
replicas. I posted about it in this forum. I was told that a possible
solution was to use the Global Stats feature. I am looking at testing that
now.
Have you tried using Tlog replicas? That fixed my issues with relevancy
diffe
; Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 26 Feb 2018, at 21:03, Webster Homer wrote:
> >
> > Erick,
> >
> > No we didn't loo
> Best,
> Erick
>
> On Mon, Feb 26, 2018 at 11:00 AM, Webster Homer
> wrote:
> > Thanks Shawn, I had settled on this as a solution.
> >
> > All our use cases for Solr is to return results in order of relevancy to
> > the query, so having a deterministic sort would
20 PM, Shawn Heisey wrote:
> On 2/26/2018 10:26 AM, Webster Homer wrote:
> > We need the results by relevancy so the application sorts the results by
> > score desc, and the unique id ascending as the tie breaker
>
> This is the reason for the discrepancy, and why the different
I have an application which implements several different searches against a
solrcloud collection.
We are using Solr 7.2 and Solr 6.1
The collection b2b-catalog-material is created with the default Near Real
Time (NRT) replicas. The collection has 2 shards each with 2 replicas.
The application lau
1. When you say you "issue a commit" I'm
> assuming
> it's via collection/update?commit=true or some such which issues a
> hard
> commit with openSearcher=true. And it's on a _collection_ basis, right?
>
> Sorry I can't be more help
> Erick
>
Yesterday I restarted a development solrcloud. After the cloud restarted 2
collections failed to come back.
I see this in the log:
2018-02-16 15:31:16.684 ERROR
(coreLoadExecutor-6-thread-1-processing-n:ae1c-ecomdev-msc02:8983_solr) [
] o.a.s.c.CachingDirectoryFactory Error closing
directory:org.
We ran the org.apache.lucene.index.IndexUpgrader as part of upgrading from
6.1 to 7.2.0
After the upgrade, one of our collections threw a NullPointerException on a
query of *:*
We didn't observe errors in the logs. All of our other collections appear
to be fine.
Re-indexing the collection seems
mit
We had a lot of issues with missing commits when we didn't set
solr.autoCommit.maxTime
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:6}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:5000}</maxTime>
</autoSoftCommit>
On Fri, Feb 9, 2018 at 3:49 PM, Shawn Heisey wrote:
> On 2/9/2018 9:2
appen.
On Fri, Feb 9, 2018 at 10:25 AM, Webster Homer
wrote:
> We do have autoSoftCommit set to 3 seconds. It is NOT the visibility of
> the records that is my primary concern. I am concerned about is the
> accumulation of uncommitted tlog files and the larger number of deleted
> do
, 2018 at 10:08 AM, Shawn Heisey wrote:
> On 2/9/2018 8:44 AM, Webster Homer wrote:
>
>> I look at the latest timestamp on a record in the collection and see that
>> it is over 24 hours old.
>>
>> I send a commit to the collection, and then see that the core is now
I have observed this behavior with several versions of solr (4.10, 6.1, and
now 7.2)
I look in the admin console and look at a core and see that it is not
"current"
I also notice that there are lots of segments etc...
I look at the latest timestamp on a record in the collection and see that
it is
I noticed that in some of the current example schemas that are shipped with
Solr, there is a fieldtype, text_en_splitting, that feeds the output
of SynonymGraphFilterFactory into WordDelimiterGraphFilterFactory. So if
this isn't supported, the example should probably be updated or removed.
On Mon,
the delete. Otherwise it
doesn't seem to work very well.
On Fri, Jan 26, 2018 at 1:29 PM, Webster Homer
wrote:
> We have just upgraded our QA solr clouds to 7.2.0
> We have 3 solr clouds. collections in the first cloud replicate to the
> other 2
>
> For existing collections
We have just upgraded our QA solr clouds to 7.2.0
We have 3 solr clouds. collections in the first cloud replicate to the
other 2
For existing collections which we upgraded in place using the lucene index
upgrade tool seem to behave correctly data written to collections in the
first environment rep
While upgrading our QA solr 6.1 solrclouds to Solr 7.2.0 I discovered that
some of our index folders for a replica had directory names like
index.20170830071504690
These replicas also had a file index.properties which indicates which index
directory is current.
We don't see this configuration in
I don't like that this behavior is not documented.
It appears from this that aliases are recursive (sort of) and that isn't
documented.
On Wed, Jan 24, 2018 at 6:38 AM, alessandro.benedetti
wrote:
> b2b-catalog-material-etl -> b2b-catalog-material
> b2b-catalog-material -> b2b-catalog-material-1
to use the alias
> when they were all re-written, then delete old_collection.
>
> So it is convenient I think. We haven't moved forward on SOLR-11488
> yet. SOLR-11218 beefed up some testing also so we don't inadvertently
> break things.
>
> Best,
> Erick
>
>
>
It seems like a useful feature, especially for migrating from standalone to
solrcloud, at least if the precedence of alias to collection is defined and
enforced.
On Fri, Jan 19, 2018 at 5:01 PM, Shawn Heisey wrote:
> On 1/19/2018 3:53 PM, Webster Homer wrote:
>
>> I created the a
ng2...@gmail.com> wrote:
> Why would you create an alias with an existing collection name?
>
> Sent from my iPhone
>
> > On Jan 19, 2018, at 14:14, Webster Homer wrote:
> >
> > I just discovered some odd behavior with aliases.
> >
> > We are in the process of
db order isn't generally defined, unless you are using an explicit "order
by" on your select. Default behavior would vary by database type and even
release of the database. You can index the fields that you would "order by"
in the db, and sort on those fields in solr
On Thu, Jan 18, 2018 at 10:17
I just discovered some odd behavior with aliases.
We are in the process of converting over to use aliases in solrcloud. We
have a number of collections that applications have referenced since we used
standalone solr. So we created alias names to
match the name that the java app
o
get this working and I found that using the normal start/rows iteration
seems to work, if less efficiently.
On Tue, Jan 16, 2018 at 4:15 PM, Webster Homer
wrote:
> sorry solr_returned is the total count of the documents retrieved from the
> queryResponse. So if I ask for 200 rows at at time
PM, Shawn Heisey wrote:
> On 1/15/2018 12:52 PM, Webster Homer wrote:
>
>> When I don't have score in the sort, the solr_returned and count are the
>> same
>>
>
> I don't know what "solr_returned" means. I haven't encountered that
> befo
When I don't have score in the sort, the solr_returned and count are the
same
On Mon, Jan 15, 2018 at 1:50 PM, Webster Homer
wrote:
> The problem is that the cursor mark query returns different numbers of
> documents each time it is called when the collection has multiple replicas
nt result sets.
On Mon, Jan 15, 2018 at 1:28 PM, Shawn Heisey wrote:
> On 1/15/2018 11:56 AM, Webster Homer wrote:
>
>> I have noticed strange behavior using cursorMark for deep paging in an
>> application. We use solrcloud for searching. We have several clouds for
>>
I have noticed strange behavior using cursorMark for deep paging in an
application. We use solrcloud for searching. We have several clouds for
development. For our development systems we have two different clouds. One
cloud has 2 shards with 1 replica per shard. All or our other clouds are
set up w
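The start/rows iteration that the related threads fell back on, as an alternative to cursorMark, can be sketched as follows; `search()` is a stand-in for the actual Solr call and is simulated here:

```python
# Sketch: plain start/rows deep paging. search(start, rows) stands in for
# a Solr query; here it is simulated against an in-memory list. The loop
# stops when a page comes back short.
def fetch_all(search, rows=200):
    docs, start = [], 0
    while True:
        page = search(start=start, rows=rows)
        docs.extend(page)
        if len(page) < rows:
            return docs
        start += rows

corpus = [{"id": i} for i in range(450)]
fake_search = lambda start, rows: corpus[start:start + rows]
print(len(fetch_all(fake_search)))  # 450
```

Note this only behaves deterministically when the sort is stable and the index is not changing underneath the iteration, which is the same caveat that bit cursorMark across replicas.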
We also have the same configurations used in different environments. We
upload the configset to zookeeper and use the Config API to overlay
environment specific settings in the solrconfig.xml. We have avoided having
collections share the same configsets, basically for this reason.
If CDCR supporte
As I suspected this was a bug in my code. We use KIE Drools to configure
our queries, and there was a conflict between two rules.
On Mon, Nov 20, 2017 at 4:09 PM, Webster Homer
wrote:
> I am developing an application that uses cursorMark deep paging. It's a
> java client using s
I am developing an application that uses cursorMark deep paging. It's a
java client using solrj client.
Currently the client is created with Solr 6.2 solrj jars, but the test
server is a solr 7.1 server
I am getting this error:
Error from server at http://XX:8983/solr/sial-catalog-product: Cu
Oh sorry missed that they were defined as trie fields. For some reason I
thought that they were Java classes
On Thu, Nov 16, 2017 at 4:23 PM, Webster Homer
wrote:
> I am converting a schema from 6 to 7 and in the process I removed the Trie
> field types and replaced them with Point field
I am converting a schema from 6 to 7 and in the process I removed the Trie
field types and replaced them with Point field types.
My schema also had fields defined as "int" and "long". These seem to have
been removed as well, but I don't remember seeing that documented.
In my original schema the _
these twice.
We were migrating from solr 6.2.0 if that makes any difference
On Wed, Nov 15, 2017 at 12:55 PM, Shawn Heisey wrote:
> On 11/15/2017 8:40 AM, Webster Homer wrote:
> > I do see errors in both Consoles. I see more errors on the ones that
> don't
> > display Arg