This is very odd for at least two reasons:
1> TLOG replicas shouldn’t optimize on the follower. They should optimize on
the leader then replicate the entire index to the follower.
2> As of Solr 7.5, optimize should not merge down to a single segment _unless_
the resulting segment is < 5G. See LUCENE-7976.
Might be https://issues.apache.org/jira/browse/SOLR-13248
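For anyone reproducing this: since LUCENE-7976 (Solr 7.5+), optimize stops at the 5GB max merged segment size unless you explicitly ask for fewer segments. A minimal sketch; the collection name "mycoll" and host/port are assumptions, adjust to your setup:

```shell
# "mycoll" is a hypothetical collection name; adjust host/port as needed.
# Default optimize now respects the 5GB max merged segment size:
curl "http://localhost:8983/solr/mycoll/update?optimize=true"
# Forcing a single segment regardless of size (the pre-7.5 behavior):
curl "http://localhost:8983/solr/mycoll/update?optimize=true&maxSegments=1"
```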
From the upgrade notes for 7.7:
SOLR-13248: The default replica placement strategy used in Solr has been
reverted to the 'legacy' policy used by Solr 7.4 and previous versions. This
is due to multiple bugs in the autoscaling based replica placement…
Sounds like SOLR-13248.
You should be able to cure this by setting the cluster property
useLegacyReplicaAssignment, something like:
curl -X POST -H 'Content-type:application/json' --data-binary '
{
  "set-obj-property": {
    "defaults": {
      "cluster": {
        "useLegacyReplicaAssignment": true
      }
    }
  }
}' http://localhost:8983/api/cluster
Hi community,
I have Solr 7.6 running on three nodes with about 400 collections, each with
one shard and 3 replicas. I want the replicas spread across all 3 nodes so
that every collection has one replica on each node.
I create collections via SolrJ code.
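As a point of comparison with the SolrJ call, the same create request via the Collections API; the collection name and counts here are illustrative. With maxShardsPerNode=1 each node can host at most one replica of the collection, which forces the one-replica-per-node layout you want:

```shell
# Illustrative names/values; maxShardsPerNode=1 caps each node at one replica
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1&replicationFactor=3&maxShardsPerNode=1"
```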
Hi,
Recently I encountered a strange issue with optimize in Solr 7.6. The cloud
is created with 4 shards and 2 TLOG replicas per shard. After a batch index
update I issue an optimize command to a randomly picked replica in the
cloud. After a while, when I check, all the non-leader TLOG replicas
fi…
(1) No, and Shawn’s comments are well taken.
(2) bq. is the number of segments would drastically increase
Not true. First of all, TieredMergePolicy will take care of merging “like-sized”
segments for you. You’ll have the same number (or close) no matter how
short the autocommit interval. Second…
On 3/8/2019 10:44 AM, Rahul Goswami wrote:
1) Is there currently a configuration setting in Solr that will trigger the
first option you mentioned? That is, to not serve any searches until the
tlogs are played. If not, since instances shutting down abruptly is not very
uncommon, would a JIRA to implement…
Erick,
Thanks for the detailed response. I have two follow-up questions:
1) Is there currently a configuration setting in Solr that will trigger the
first option you mentioned? That is, to not serve any searches until the
tlogs are played. If not, since instances shutting down abruptly is not very
uncommon, would a JIRA to implement…
Yes, you’ll get stale values. There’s no way I know of to change that; it’s a
fundamental result of Lucene’s design.
There’s a “segments_n” file that contains pointers to the current valid
segments. When a commit happens, segments are closed and the very last
operation is to update that file.
I…
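The two-phase nature of this can be sketched with plain files. This is a loose analogy (assumption: file names and contents here are made up, not Lucene's real on-disk format): new segment files are written first, and only the final atomic rename of the pointer file makes them visible to readers.

```shell
# Loose analogy to Lucene's commit protocol, not its actual file format.
mkdir -p index
echo "docs A" > index/_0.cfs                # existing segment
echo "docs B" > index/_1.cfs                # new segment written first
echo "_0.cfs _1.cfs" > index/segments.tmp   # new pointer prepared to the side
mv index/segments.tmp index/segments_2      # atomic rename publishes the commit
cat index/segments_2                        # readers now see both segments
```

A reader that opened the index before the rename keeps using the old pointer file's segment list, which is why searches can return stale values until a new searcher opens.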
Hi,
Yes, I verified the schema and settings; they appear to be good. I am using
Solr 7.4.
I see that Solr creates a schema on its own when you don't supply one.
Finally, I modified the schema generated by Solr as needed with the Schema
API (HTTP).
Thanks, Ahemad
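For reference, modifying a generated schema over HTTP looks something like this; the field name, type, and collection name are made-up examples:

```shell
# Hypothetical field and collection name; run against your own collection
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field": {"name": "price", "type": "pdouble", "stored": true}
}' http://localhost:8983/solr/mycollection/schema
```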
What I am observing is that Solr is fully started up even before it has
finished playing the tlog. In the logs I see that a searcher is registered
first and the "Log replay finished" message appears later. During that time,
if I search, I do get stale values. Below are the log lines that I captured:
WARN …
Right, IntRangeField (and others) is in Lucene but has not yet been implemented
by Solr.
> On Mar 7, 2019, at 6:56 PM, Ryan Yacyshyn wrote:
>
> And it's exactly what I'm looking for but unfortunately adding a fieldType
> with this class in Solr didn't work for me.
Good morning guys, I have a question about SolrJ and JSON facets.
I'm using Solr 7.7.1 and I'm sending a request like this:
json.facet={x:'max(iterationTimestamp)'}
where "iterationTimestamp" is a solr.DatePointField. The JSON response
correctly includes what I'm expecting:
"facets": { …
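For anyone else trying this, the equivalent raw HTTP request looks something like the following; the collection name "mycoll" is an assumption:

```shell
# "mycoll" is a hypothetical collection name; rows=0 suppresses documents
curl "http://localhost:8983/solr/mycoll/query" \
  -d 'q=*:*&rows=0&json.facet={x:"max(iterationTimestamp)"}'
```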