We monitor the response time (pingdom) of the page that uses these
boosting parameters. Since the addition of these boosting parameters and
an additional field to search on (which I will start a separate thread about
on the mailing list), the page average response time has increased by 1-2
seconds.
Before worrying about it too much, exactly _how_ much has
the performance changed?
I’ve just been in too many situations where there’s
no objective measure of performance before and after, just
someone saying “it seems slower” and had those performance
changes disappear when a rigorous test is done.
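In that spirit, a minimal sketch (plain Python; the timing values and the request function are placeholders, not a real measurement of this page) of getting objective before/after numbers is to collect a batch of response times and compare percentiles rather than impressions or plain averages:

```python
import math
import time

def percentile(samples, p):
    """Nearest-rank p-th percentile (p in 0..100) of a list of timings."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[k]

def time_calls(fn, n=50):
    """Call fn() n times and return per-call elapsed seconds."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings

# Synthetic timings for illustration; replace fn with a real page fetch
# and run the same batch before and after the boosting change.
timings = [0.8, 0.85, 0.9, 0.9, 0.95, 1.0, 1.05, 1.1, 2.5, 3.0]
print("p50:", percentile(timings, 50))  # 0.95
print("p95:", percentile(timings, 95))  # 3.0
```

Comparing p50 and p95 separately also tells you whether the slowdown hits every request or only the tail.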
Hi Derek,
Ah, then my reply was completely off :)
I don’t really see a better way. Maybe other than changing termfreq to field,
if the numeric field has docValues? That may be faster, but I don’t know for
sure.
Best regards,
Radu
--
Sematext Cloud - Full Stack Observability - https://sematext.
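To make the suggestion concrete, here is a hedged sketch of the two boost variants (the field name `popularity` and the query are hypothetical): `termfreq()` walks term postings for the raw value, while `field()` reads docValues for a numeric field, which, as noted above, may be faster but is not guaranteed to be.

```python
from urllib.parse import urlencode

# Hypothetical field "popularity"; both variants boost by its value.
# termfreq(field, term) returns the term frequency from postings;
# field(name) returns the numeric docValues value directly.
bf_termfreq = {"q": "laptop", "defType": "edismax",
               "bf": "termfreq(popularity,1)"}
bf_field = {"q": "laptop", "defType": "edismax",
            "bf": "field(popularity)"}

print(urlencode(bf_termfreq))
print(urlencode(bf_field))
```

Benchmarking both forms against the same query set is the only way to know which is cheaper for a given index.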
Hi Radu
Apologies for not making myself clear.
I would like to know if there is a simpler or more efficient way to craft
the boosting parameters based on the requirements.
For example, I am using 'if', 'map' and 'termfreq' functions in the bf
parameters.
Is there a more efficient or simple
Hi Derek,
It’s hard to tell whether your boosts can be made better without knowing your
data and what users expect of it. Which is a problem in itself.
I would suggest gathering judgements, like if a user queries for X, what doc
IDs do you expect to get back?
Once you have enough of these judgements
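A tiny sketch of how such judgements can be turned into a number (the doc IDs and query are hypothetical; this is plain precision@k, one of several metrics you could pick):

```python
def precision_at_k(returned_ids, relevant_ids, k=10):
    """Fraction of the top-k returned doc IDs that are judged relevant."""
    top = returned_ids[:k]
    if not top:
        return 0.0
    return sum(1 for d in top if d in relevant_ids) / len(top)

# Hypothetical judgements: for query "X", these are the doc IDs a user
# would expect back; results holds what the engine actually returned.
judgements = {"X": {"doc1", "doc4", "doc7"}}
results = {"X": ["doc1", "doc2", "doc4", "doc9", "doc7"]}

for query, expected in judgements.items():
    print(query, precision_at_k(results[query], expected, k=5))  # X 0.6
```

Running the same judgement set before and after a boosting change gives you an objective relevance comparison to weigh against the latency cost.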
Hi
I have added the following boosting requirements to the search query of
a page. Feedback from the monitoring team is that the overall response time of
the page has increased since then.
I am trying to find out if the added boosting parameters (below) could
have contributed to the increase.
The bo
What Walter said. Although with Solr 7.6, unless you specify maxSegments
explicitly,
you won’t create segments over the default 5G maximum.
And if you have in the past specified maxSegments so you have segments over 5G,
optimize (again without specifying maxSegments) will do a “singleton merge
From that short description, you should not be running optimize at all.
Just stop doing it. It doesn’t make that big a difference.
It may take your indexes a few weeks to get back to a normal state after the
forced merges.
wunder
Walter Underwood
wun...@wunderwood.org
http
It Depends (tm).
As of Solr 7.5, optimize is different. See:
https://lucidworks.com/post/solr-and-optimizing-your-index-take-ii/
So, assuming you have _not_ specified maxSegments=1, any very large
segment (near 5G) that has _zero_ deleted documents won’t be merged.
So there are two scenarios
For a full forced merge (mistakenly named “optimize”), the worst case disk space
is 3X the size of the index. It is common to need 2X the size of the index.
When I worked on Ultraseek Server 20+ years ago, it had the same merge behavior.
I implemented a disk space check that would refuse to merge
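The Ultraseek check itself isn't public, but the idea is easy to sketch (hypothetical function names; the 3x factor comes from the worst case mentioned above, 2x being common):

```python
import shutil

def safe_to_merge(index_size_bytes, free_bytes, factor=3):
    """Refuse a forced merge unless free space covers the worst case.

    Worst case for a forced merge is about 3x the index size on disk;
    needing 2x is common.
    """
    return free_bytes >= factor * index_size_bytes

def check_index_dir(index_dir, index_size_bytes):
    # shutil.disk_usage reports (total, used, free) for the filesystem.
    free = shutil.disk_usage(index_dir).free
    return safe_to_merge(index_size_bytes, free)

# A 400 GB index with 900 GB free fails the 3x worst-case check:
print(safe_to_merge(400 * 2**30, 900 * 2**30))   # False
print(safe_to_merge(400 * 2**30, 1300 * 2**30))  # True
```

Gating the optimize call on a check like this avoids the half-finished-merge state described in the reply above.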
I can't give you a 100% certain answer, but I've experienced this, and what
"seemed" to happen to me was that the optimize would start, which can
drive the size up threefold, and if you run out of disk space in the process
the optimize will quit, since it can't optimize, and leave the live in
When the optimize command is issued, the expectation after the completion of
the optimization process is that the index size either decreases or at most
remains the same. In a Solr 7.6 cluster with 50-plus shards, when the optimize
command is issued,
some of the shard's transient or older segment files ar
> ask me to remove the stopwords. If I modify the "managed-schema" file and
> > remove the stopwords file, is it possible to re-index the database without
> > having to reload all the material, but keeping the documents already
> present?
> >
> > Thank you
Massimiliano Randazzo
>
> On Wed, 26 Feb 2020 at 13:26, Paras Lehana <
> paras.leh...@indiamart.com> wrote:
>
> > Hi Massimiliano,
> >
> > Is it still necessary to run the Optimize command from my application
> when
> > > I have fi
On Wed, 26 Feb 2020 at 13:26, Paras Lehana <
paras.leh...@indiamart.com> wrote:
> Hi Massimiliano,
>
> Is it still necessary to run the Optimize command from my application when
> > I have finished indexing?
>
>
> I guess you can stop worrying about
Hi Massimiliano,
Is it still necessary to run the Optimize command from my application when
> I have finished indexing?
I guess you can stop worrying about optimizations and let Solr handle that
implicitly. There's nothing so bad about having more segments.
On Wed, 26 Feb 2020
e I noticed a difference in the
> "Overview" page in solr 6.4 it was affected Optimized and Current and
> allowed me to launch Optimized if necessary, in version 8.41 Optimized is
> no longer present I hypothesized that this activity is done with the commit
> or through some ope
e to launch Optimized if necessary, in version 8.41 Optimized is
no longer present. I hypothesized that this activity is done with the commit
or through some operation in the background; if this were so, is it still
necessary to run the Optimize command from my application when I have
finished indexing
Try changing commit to
optimize
Also, if it does not work, try removing the polling interval configuration
from the slaves.
What you are seeing is expected behaviour for Solr; nothing is unusual.
Try out the changes and hopefully it will work fine.
On Sun, Sep 1, 2019 at 7:52 AM Monil Parikh
-instead-of-optimize
Thanks in advance!
Correct, do not optimize.
“Optimize” was a bad choice for this action. It is a forced merge.
With master/slave, it means the slaves must always copy the entire
400 GB index. Without optimize, they would only need to copy the
changed segments.
Solr automatically merges segments for you.
wunder
why you are
asking. It's important to remember that people on the list don't know
anything about your system unless you tell them. For example, one reason
version matters is that optimize is sometimes useful, but in some older
versions of Solr it can also cause issues (depending on
Is a 400GB index good?
Should we shard it?
When should we start caring about index size?
On Tue, Jun 4, 2019 at 3:04 PM Midas A wrote:
> So we should not optimize our index ?
>
> On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen wrote:
>
>> On Tue, 2019-06-04 at 11:48 +0
So we should not optimize our index ?
On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen wrote:
> On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
> > Index size is 400GB. we used master slave architecture .
> >
> > commit is taking time while not able to perform optimize .
On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
> Index size is 400GB. we used master slave architecture .
>
> commit is taking time while not able to perform optimize .
Why do you want to optimize in the first place? What are you hoping to
achieve?
There should be an error message
Hi,
Index size is 400GB. We use a master/slave architecture.
Commit is taking time, and we are not able to perform optimize.
What should I do?
Thanks Erick ! Great details as always :)
> On Mar 13, 2019, at 8:48 AM, Erick Erickson wrote:
>
> Wei:
>
> Right. You should count on the _entire_ index being replicated from the
> leader, but only after the optimize is done. Pre 7.5, this would be a single
> segment,
Wei:
Right. You should count on the _entire_ index being replicated from the leader,
but only after the optimize is done. Pre 7.5, this would be a single segment,
7.5+ it would be a bunch of 5G files unless you specified that the optimize
create some number of segments.
But unless you
1> h
Hi Erick
A related question:
Is optimize then ill advised for bulk indexer post solr 7.5 ?
>> Especially in a situation where an index is being modified over many days ?
Thanks
Aroop
> On Mar 12, 2019, at 9:30 PM, Wei wrote:
>
> Thanks Erick, it's very helpful. So for
Thanks Erick, it's very helpful. So for bulk indexing in a Tlog or
Tlog/Pull cloud, when we optimize at the end of updates, segments on the
leader replica will change rapidly and the follower replicas will be
continuously pulling from the leader, effectively downloading the whole
index
wrote:
>
>> Thanks Erick.
>>
>> 1> TLOG replicas shouldn’t optimize on the follower. They should optimize
>> on the leader then replicate the entire index to the follower.
>>
>> Does that mean the follower will ignore the optimize request? Or shall I
wrote:
> Thanks Erick.
>
> 1> TLOG replicas shouldn’t optimize on the follower. They should optimize
> on the leader then replicate the entire index to the follower.
>
> Does that mean the follower will ignore the optimize request? Or shall I
> send the optimize request only to one
Thanks Erick.
1> TLOG replicas shouldn’t optimize on the follower. They should optimize
on the leader then replicate the entire index to the follower.
Does that mean the follower will ignore the optimize request? Or shall I
send the optimize request only to one of the leaders?
2> As of So
This is very odd for at least two reasons:
1> TLOG replicas shouldn’t optimize on the follower. They should optimize on
the leader then replicate the entire index to the follower.
2> As of Solr 7.5, optimize should not optimize to a single segment _unless_
that segment is < 5G. See LU
Hi,
Recently I encountered a strange issue with optimize in Solr 7.6. The cloud
is created with 4 shards with 2 Tlog replicas per shard. After batch index
update I issue an optimize command to a randomly picked replica in the
cloud. After a while when I check, all the non-leader Tlog replicas
can afford the time etc. to do it every time.
>
> See:
> https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
> It's not as bad, but still expensive in Solr 7.5 and later:
> https://lucidworks.com/2018/06/20/solr-and-optimizing-your-index-take-ii/
you can _measure_ a significant improvement after the op
2> you can afford the time etc. to do it every time.
See:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
It's not as bad, but still expensive in Solr 7.5 and later:
https://lucidworks.com/2018/06/20/solr
Should we consider defaulting optimize to false in the DIH UI?
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> 16. jan. 2019 kl. 14:23 skrev Jeremy Smith :
>
> How are you calling the dataimport? As I understand it, optimize defaults to
> true, s
How are you calling the dataimport? As I understand it, optimize defaults to
true, so unless you explicitly set it to false, the optimize will occur after
the import.
From: talhanather
Sent: Wednesday, January 16, 2019 7:57:29 AM
To: solr-user
Hi Erick,
PFB the solrconfig.xml; it does not have the optimize flag set to true.
Then how is optimization continuously occurring for me?
uuid
db-data-config.xml
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
1> Not sure. You can get stats after the fact if that would help.
2, 3, 4> Well, optimize is a configuration parameter in DIH
that defaults to true so set it false and
you'll get rid of the optimize. See:
https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-wi
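Since DIH's `optimize` parameter defaults to true, a sketch of an import call that skips the post-import forced merge looks like this (host and core names are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical host and core names. DIH's "optimize" parameter defaults
# to true, so pass optimize=false explicitly to skip the forced merge
# that would otherwise run after the import.
params = {"command": "full-import", "clean": "false", "optimize": "false"}
url = "http://localhost:8983/solr/mycore/dataimport?" + urlencode(params)
print(url)
```

The same parameter can be set in the DIH request handler defaults in solrconfig.xml instead of on every request.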
indexed without clicking on delta
import.
"2018-12-19 16:13:21.927 WARN (qtp736709391-15) [ x:solrprod]
o.a.s.u.DirectUpdateHandler2 Starting optimize... Reading and rewriting the
entire index! Use with care."
I have mentioned my queries below, Kindly suggest.
1. Without clicking on de
rried about? And if you can't execute DIH from
the admin UI, how are you executing it? What is your DIH config? Does
whatever starts DIH specify it should optimize?
Best,
Erick
On Wed, Dec 26, 2018 at 6:16 AM Edward Ribeiro wrote:
>
> Optimize is an expensive operation. It will cost y
Optimize is an expensive operation. It will cost you 2x disk space, plus
CPU and RAM. It is usually advisable not to optimize unless you really need
to, and do not optimize frequently. Whether this can impact the server and
search depends on the index size and hardware specification.
See more
Solr
Admin, But the new/updated data's are getting indexed automatically.
When I verified the logs, I could see that the below warning messages are
occurring recursively.
"2018-12-19 16:13:21.927 WARN (qtp736709391-15) [ x:solrprod]
o.a.s.u.DirectUpdateHandler2 Starting optimize
's what I would expect.
>
> If you have to explicitly include parameters like "wait" or
> "waitSearcher" to make it block until the optimize is done, then in
> my mind, that's a bug. That should be the default setting. In the
> 7.5 reference guide, I onl
Here's the scoop on optimize:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
Note the link to how Solr 7.5 is different.
Best,
Erick
On Thu, Nov 29, 2018 at 3:53 PM Shawn Heisey wrote:
>
> On 11/29/2018 4:41 PM, Christopher Schultz wrote:
o make it block until the optimize is done, then in my
mind, that's a bug. That should be the default setting. In the 7.5
reference guide, I only see "waitSearcher", and it says the default is true.
Thanks,
Shawn
>> stream.body=' '
>>
>> The request returns status code 200 shortly, but when looking at
>> the solr instance I noticed that actual optimization has not
>> completed yet as there are more than 1 segments. Is the optimize
>> command async? What is the bes
zation has not completed yet as there
are more than 1 segments. Is the optimize command async? What is the best
approach to validate that optimize is truly completed?
I do not know how that request can return a 200 before the optimize job
completes. The "wait" parameters (one of which C
returns status code 200 shortly, but when looking at
> the solr instance I noticed that actual optimization has not
> completed yet as there are more than 1 segments. Is the optimize
> command async? What is the best approach to validate that optimize
> is truly completed?
Try this inst
Why do you think you need to optimize? Most configurations don’t need that.
And no, there is no synchronous optimize request.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 28, 2018, at 6:50 PM, Zheng Lin Edwin Yeo wrote:
>
> Hi,
shortly, but when looking at the solr
> instance I noticed that actual optimization has not completed yet as there
> are more than 1 segments. Is the optimize command async? What is the best
> approach to validate that optimize is truly completed?
>
>
> Thanks,
>
> Wei
>
there
are more than 1 segments. Is the optimize command async? What is the best
approach to validate that optimize is truly completed?
Thanks,
Wei
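One hedged approach to the question above is to poll until the segment count reaches the target. This sketch injects the status callable so nothing here assumes a particular Solr API; against a live Solr it could read the segment count from the Luke handler (`/admin/luke`), though the exact response field varies by version and should be checked:

```python
import time

def wait_for_segments(get_segment_count, target=1, timeout_s=3600, poll_s=5.0):
    """Poll until the index has at most `target` segments, or time out.

    get_segment_count is any callable returning the current segment
    count, e.g. one that queries a status endpoint of your choosing.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_segment_count() <= target:
            return True
        time.sleep(poll_s)
    return False

# Demo with a stub whose segment count drops as the merge proceeds:
counts = iter([7, 4, 2, 1])
print(wait_for_segments(lambda: next(counts), target=1, poll_s=0))  # True
```

Polling an observable property of the index sidesteps the question of whether the optimize request itself blocks.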
least it better not be.
As far as your index growing after optimize, that's the little
"gotcha" with optimize, see:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
y installed Solr 7.1 and configured it to work with Dovecot for
full-text searching. It works great but after about 2 days of indexing, I've
pressed the 'Optimize' button. At that point it had collected about 17 million
documents and it was taking up about 60-70GB of space.
It comple
On 4/23/2018 11:13 AM, Scott M. wrote:
I recently installed Solr 7.1 and configured it to work with Dovecot for
full-text searching. It works great but after about 2 days of indexing, I've
pressed the 'Optimize' button. At that point it had collected about 17 million
docum
No, it's not "optimizing on its own". At least it better not be.
As far as your index growing after optimize, that's the little
"gotcha" with optimize, see:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
This is being a
I recently installed Solr 7.1 and configured it to work with Dovecot for
full-text searching. It works great but after about 2 days of indexing, I've
pressed the 'Optimize' button. At that point it had collected about 17 million
documents and it was taking up about 60-70GB
Hi,
I have NumDocs: 17051329, deleted docs:2, Segment Count :21
Then after hitting optimize
NumDocs:7260056, deleted docs:0, Segment Count: 3.
Why were docs deleted without being marked for deletion?
Thanks,
Aashish
Ok sorry
Full import was in process so num docs changed.
Please ignore
Thanks
Aashish
On Jan 21, 2018 10:46 PM, "Aashish Agarwal" wrote:
> Hi,
>
> I have NumDocs: 17051329, deleted docs:2, Segment Count :21
> Then after hitting optimize
> NumDocs:7260056, deleted
The IndexUpgrader is a
fairly simple piece of code. It runs forceMerge (optimize) on the
index, creating a single new segment from the entire existing index.
That ties into this thread's initial subject and LUCENE-7976. I wonder
if perhaps the upgrade merge policy should be changed so that it just
rewrites all existing segments instead of fully merging them.
Thanks,
Shawn
15, at 6:01 PM, CrazyDiamond wrote:
>>>>
>>>> my index is updating frequently and i need to remove unused documents from
>>>> index after update/reindex.
>>>> Optimization is very expensive, so what should I do?
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context:
>>>> http://lucene.472066.n3.nabble.com/is-there-a-way-to-remove-deleted-documents-from-index-without-optimize-tp4230691.html
>>>> Sent from the Solr - User mailing list archive at Nabble.com.
>>>
>>
When? When you optimize? During queries? If the latter, I doubt you'll fix
it with optimization.
On Jul 31, 2017 1:19 AM, "marotosg" wrote:
> Basically an issue with loadbalancer timeout.
>
>
>
> --
> View this message in context: http://lucene.472066.n3.
>
Basically an issue with loadbalancer timeout.
--
View this message in context:
http://lucene.472066.n3.nabble.com/HTTP-ERROR-504-Optimize-tp4345815p4348330.html
Sent from the Solr - User mailing list archive at Nabble.com.
> From:Walter Underwood
> Sent: Tuesday 25th July 2017 22:39
> To: solr-user@lucene.apache.org
> Subject: Re: Optimize stalls at the same point
>
> I’ve never been fond of elaborate GC settings. I prefer to set a few things
> then let it run. I know someone wh
Thanks a lot for the responses. After the optimize is complete and I have
some time to experiment, I'll throw some of these settings in place.
On Tue, Jul 25, 2017 at 4:39 PM, Walter Underwood
wrote:
> I’ve never been fond of elaborate GC settings. I prefer to set a few
> things then let
can have much more index data in mapped
> memory.
>
> Regards,
> Markus
>
> -Original message-
>> From:David Hastings
>> Sent: Tuesday 25th July 2017 22:15
>> To: solr-user@lucene.apache.org
>> Subject: Re: Optimize stalls at the same point
>>
>>
y 2017 22:15
> To: solr-user@lucene.apache.org
> Subject: Re: Optimize stalls at the same point
>
> it turned out that i think it was a large GC operation, as it has since
> resumed optimizing. current java options are as follows for the indexing
> server (they are different fo
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
>
> > On Jul 25, 2017, at 12:03 PM, David Hastings <
> hastings.recurs...@gmail.com> wrote:
> >
> > I am trying to optimize a rather large index (417gb) because its sitting
> at
> > 28% deleti
://observer.wunderwood.org/ (my blog)
> On Jul 25, 2017, at 12:03 PM, David Hastings
> wrote:
>
> I am trying to optimize a rather large index (417gb) because its sitting at
> 28% deletions. However when optimizing, it stops at exactly 492.24 GB
> every time. When I restart solr it w
I am trying to optimize a rather large index (417gb) because it's sitting at
28% deletions. However when optimizing, it stops at exactly 492.24 GB
every time. When I restart solr it will fall back down to 417 gb, and
again, if i send an optimize command, the exact same 492.24 GB and it stops
Optimize can take a long time.
Why are you doing an optimize? It doesn’t really optimize the index, it only
forces merges and deletions. Solr does that automatically in the background.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jul 13, 2
ucene.472066.n3.nabble.com/HTTP-ERROR-504-Optimize-tp4345815.html
Sent from the Solr - User mailing list archive at Nabble.com.
ion documents over a week
>> (OCR takes long) we normally end up with about 60-70 segments with this
>> configuration.
>>
>>> On 3 Mar 2017, at 02:42, Alexandre Rafalovitch wrote:
>>>
>>> What do you have for merge configuration in solrconfig.
about 60-70 segments with this
configuration.
On 3 Mar 2017, at 02:42, Alexandre Rafalovitch wrote:
What do you have for merge configuration in solrconfig.xml? You should
be able to tune it to - approximately - whatever you want without
doing the grand optimize:
https://cwiki.apache.org
end up with about 60-70 segments with this
configuration.
> On 3 Mar 2017, at 02:42, Alexandre Rafalovitch wrote:
>
> What do you have for merge configuration in solrconfig.xml? You should
> be able to tune it to - approximately - whatever you want without
> doing the grand op
What do you have for merge configuration in solrconfig.xml? You should
be able to tune it to - approximately - whatever you want without
doing the grand optimize:
https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig#IndexConfiginSolrConfig-MergingIndexSegments
Regards
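As a rough illustration of what that tuning looks like, here is a sketch of a TieredMergePolicy configuration in solrconfig.xml (the values are illustrative only, not recommendations):

```xml
<!-- Illustrative values only; tune for your index and hardware. -->
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <!-- How many same-size segments are allowed per tier before merging -->
    <int name="segmentsPerTier">10</int>
    <!-- Maximum number of segments merged at once -->
    <int name="maxMergeAtOnce">10</int>
    <!-- Cap on merged segment size, in MB (5 GB is the familiar default cap) -->
    <double name="maxMergedSegmentMB">5120</double>
  </mergePolicyFactory>
</indexConfig>
```

Lowering segmentsPerTier produces fewer, larger segments at the cost of more merge I/O during indexing.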
On 2 Mar 2017, at 7:42 pm, Michael Joyner wrote:
>
> You can solve the disk space and time issues by specifying multiple segments
> to optimize down to instead of a single segment.
>
> When we reindex we have to optimize or we end up with hundreds of segments
> and very horrible
I typically end up with about 60-70 segments after indexing. What configuration
do you use to bring it down to 16?
> On 2 Mar 2017, at 7:42 pm, Michael Joyner wrote:
>
> You can solve the disk space and time issues by specifying multiple segments
> to optimize down to instead
You can solve the disk space and time issues by specifying multiple
segments to optimize down to instead of a single segment.
When we reindex we have to optimize or we end up with hundreds of
segments and very horrible performance.
We optimize down to like 16 segments or so and it doesn'
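A sketch of issuing that kind of bounded optimize over HTTP (host and collection names are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical host/collection. maxSegments tells the forced merge to
# stop at N segments instead of collapsing to one, which bounds both the
# temporary disk space and the time the merge takes.
params = {"optimize": "true", "maxSegments": 16, "waitSearcher": "true"}
url = "http://localhost:8983/solr/mycoll/update?" + urlencode(params)
print(url)
```

Stopping at N segments trades a little query-time overhead for a much cheaper merge, which is exactly the compromise described above.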
't really affect my *current* program much.
For a future version of the program, I have a question: If I have a
SolrJ optimize running in a background thread, can I call close() on
SolrClient and HttpClient objects (and remove all references to them)
while that's happening and have all t
Heisey wrote:
> I have this code in my SolrJ program:
>
> LOG.info("{}: background optimizing", logPrefix);
> myOptimizeSolrClient.optimize(myName, false, false);
> elapsedMillis = (System.nanoTime() - startNanos) / 1000000;
> LOG.info("{}: Background optimize
On 11/8/2016 3:55 PM, Shawn Heisey wrote:
> I am not in a position to try this in 6.x versions. Is there anyone
> out there who does have a 6.x index they can try it on, see if it's
> still a problem?
I upgraded a dev version of the program to SolrJ 6.2.1 (newest currently
available via ivy), the
I have this code in my SolrJ program:
LOG.info("{}: background optimizing", logPrefix);
myOptimizeSolrClient.optimize(myName, false, false);
elapsedMillis = (System.nanoTime() - startNanos) / 1000000;
LOG.info("{}: Background optimize completed, elapsed={}", logP
-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, August 26, 2016 4:46 PM
To: solr-user
Subject: Re: solcloud; collection reload, core Statistics 'optimize now'
First of all, please have them pretty much ignore the cores admin page.
That's mostly
tells me what hitting Reload for a
> given collection actually does, whether it is safe to do at any time and/or
> under what circumstances it should/shouldn't be used?
>
>
>
> Also, poking around the UI I noticed that if you select a core, on the
> Overview page there is a Statis
't be used?
Also, poking around the UI I noticed that if you select a core, on the Overview
page there is a Statistics panel and in it a button entitled 'optimize now'.
Again I'd like to understand what this does, when it should/shouldn't be used
and whether optimisin
erflow is that optimizing is now
> essentially deprecated and lucene (We're on Solr 5.5.2) will now keep the
> amount of segments at a reasonable level and that the performance impact of
> having deletedDocs is now much less.
Optimize is certainly not deprecated.
The operation was re
Did you change the merge settings and max segments? If you did, try going back
to the defaults.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Aug 8, 2016, at 8:56 AM, Erick Erickson wrote:
>
> Callum:
>
> re: the optimize failin
Callum:
re: the optimize failing: Perhaps it's just timing out?
That is, the command succeeds fine (which you
are reporting), but it's taking long enough that the
request times out so the client you're using reports an error.
Just a guess...
My personal feeling is that (of co
Yeah, I figured that was too many deleted docs. It could just be that our max
segments is set too high though.
The reason I asked is because our optimize requests have started failing.
Or at least, they are appearing to fail because the optimize request returns
a non-200. The optimize seems to go
tely a benefit.
In cases where there are a lot of deleted documents, scoring can be
affected by the presence of the deleted documents, and the drop in index
size after an optimize can result in a large performance boost. For the
general case where there are not many deletes, there *is* a performance
b
We have a cronjob that runs every week at a quiet time to run the
optimize command on our Solr collections. Even when it's quiet it's still an
extremely heavy operation.
One of the things I keep seeing on stackoverflow is that optimizing is now
essentially deprecated and lucene (We're on Solr 5.5.2
replicateAfter" directive
> is "commit" or "optimize", a replication is triggered whenever a segments
> merge occurs. Is that right?
> Or is it triggered only when a full index merge occurs, which could happen
> after a commit as well (other than after an optimization
Thanks for your answer Shawn,
If I understood you correctly, you are saying that regardless of whether the
"replicateAfter" directive is "commit" or "optimize", a replication is
triggered whenever a segments merge occurs. Is that right?
Or is it triggered only when a full index merge occur
On 7/22/2016 4:02 AM, Alessandro Bon wrote:
> Issue: Full index replicas occur sometimes on master startup and after
> commits, despite only the optimize
> directive is specified. In the case of replica on commit, it occurs
> only for sufficiently big commits. Replica correctly sta