Hi experts:
After I sent out my previous email, I issued a commit on that replica core and
observed the same "ClosedChannelException"; please refer to the
"issuing core commit" section below.
Then I issued a core reload, and I see the timestamp of the latest tlog file
chan
Hi experts:
Need some help and suggestion about an issue I am facing
Solr info:
- Solr 8.7
- Solr cloud with tlog replica; replica size is 3 for my Solr collection
Issue:
- before issuing the collection reload, I observed that a new tlog file is created
after every commit; and those tlog files are
them. Obviously if I (soft-)commit after each
document is added or removed the serializable consistency would
guarantee that I can see all documents that I might want to change.
However this is not desirable in terms of performance.
I've come up with a potential solution: If I can track document
up
Hi All,
For further investigation, I have raised a JIRA ticket.
https://issues.apache.org/jira/browse/SOLR-15045
In case, anyone has any information to share, feel free to mention it here.
Regards,
Raj
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi All,
As I mentioned in my previous post, reloading/refreshing of the external
file consumes most of the time during a commit operation.
In order to nullify the impact of the external files, I deleted them
from all the shards and issued a commit through the curl command. Commit
Hi All,
While we investigate this issue further, can anyone please share what other
ways we can issue a commit, or point me to existing documents that have a
relevant example?
Regards,
Raj
host name
shard1_0=>solr_199
shard1_1=>solr_200
shard2_0=>solr_254
shard2_1=>solr_132
shard3_0=>solr_133
shard3_1=>solr_198
*The request rate on the system is currently zero, with only hourly indexing
running on it.*
We are using curl command to issue commit.
/curl
"
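The curl command above is cut off; for reference, a sketch of the common forms of an explicit commit request. The host and collection name are hypothetical placeholders (substitute your own), and the commands are only echoed here rather than executed, so nothing is sent anywhere:

```shell
# Hypothetical host and collection -- substitute your own before running.
SOLR_URL="http://localhost:8983/solr/mycollection"

# Explicit hard commit: flushes segments and (by default) opens a new searcher.
echo "curl '${SOLR_URL}/update?commit=true'"

# Soft commit: makes documents visible without flushing segments to disk.
echo "curl '${SOLR_URL}/update?softCommit=true'"

# Hard commit that returns without waiting for the new searcher to register.
echo "curl '${SOLR_URL}/update?commit=true&waitSearcher=false'"
```

Drop the `echo` wrappers to actually issue the requests.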
Is your “id” field your uniqueKey, and is it tokenized? It shouldn’t be;
use something like “string” or keywordTokenizer. Definitely do NOT use, say,
text_general.
It’s very unlikely that records are not being flushed on commit, I’m 99.99%
certain that’s a red herring and that this is a problem in
v.vnc.de (167907516612608),
[email protected] (1679075166126080001),
[email protected] (1679075166126080002),
[email protected] (1679075166127128576)],commit=} 0
8
Selecting records looks good:
{
&q
Hi,
We're seeing strange behaviour when records have been committed. It doesn't
happen all the time but enough that the index is very inconsistent.
What happens:
1. We commit a doc to Solr,
2. The doc shows in the search results,
3. Later (may be immediate, may take minutes, may
have to replicate the data to slave immediately.
Regards,
Tushar
On Thu, 3 Sep 2020 at 16:17, Emir Arnautović
wrote:
> Hi Tushar,
> Replication is file based process and hard commit is when segment is
> flushed to disk. It is not common that you use soft commits on master. The
> only us
Hi Tushar,
Replication is a file-based process, and a hard commit is when the segment is
flushed to disk. It is not common to use soft commits on the master. The only
use case that I can think of is when you read your index as part of the indexing
process, but even that is bad practice and should be
Hi,
I want to ask whether soft commit works with replication.
One of our use cases deals with indexing data every second on a master
server, which then has to replicate to the slaves. So if we use soft commit,
does the data replicate to the slave server immediately, or only after the
hard commit
There are a bunch of variables. If there are too many merge threads going on,
for instance, then
the commit will block until one of the merge threads finishes. It could
well be that the one you identify as “slow” is coincidentally after the hard
commit, which are
could accumulate for 10 minutes
Thanks for your quick reply.
Commit is not called from the client side.
We do not use any cache. Here is my solrconfig.xml:
https://drive.google.com/file/d/1LwA1d4OiMhQQv806tR0HbZoEjA8IyfdR/view
We set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100 because we
want a quick view after
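For reference, the -Dsolr.autoSoftCommit.maxTime=100 property is typically consumed by an updateHandler block in solrconfig.xml like the following sketch. The 15000 ms hard-commit default is an assumption here; only the 100 ms soft-commit value comes from the SOLR_OPTS line above:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <!-- flush segments to disk but keep serving the old view -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <!-- -Dsolr.autoSoftCommit.maxTime=100 makes this a 100 ms soft commit -->
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
  </autoSoftCommit>
</updateHandler>
```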
It depends on how the commit is called. You have openSearcher=true, which means
the call
won’t return until all your autowarming is done. This _looks_ like it might be
a commit
called from a client, which you should not do.
It’s also suspicious that these are soft commits 1 second apart. The
I am using Solr 6.1.0. We have 2 shards and each has one replica.
When I checked the shard1 log, I found that the commit process was going too
slow for some collection.
Slow commit:
2020-08-25 09:08:10.328 INFO (commitScheduler-124-thread-1) [c:forms s:shard1
r:core_node1 x:forms
On 7/15/2020 11:39 PM, Natarajan, Rajeswari wrote:
Resending this again as I still could not make this work. So would like to know
if this is even possible to have
both solr.CdcrUpdateProcessorFactory and
solr.IgnoreCommitOptimizeUpdateProcessorFactory in solrconfig.xml and get both
functiona
Hi,
I would like to have these two processors (cdcr and ignorecommit) in
solrconfig.xml.
But cdcr fails with the below error, with either the cdcr-processor-chain or the
ignore-commit-from-client chain:
version conflict for 60d35f0850afac66 expected=1671629672447737856
actual=-1, retry=0 commError
Hi Erick,
Thanks. We do have NRT requirement in our application that updates be
immediately visible. We do have constant updates. The push is for even
faster visibility but we are holding off at 2 secs soft-commit for now.
What I am not able to understand is that as per query debugging, the
Oh dear. Your autowarming is almost, but not quite totally, useless given
your 2 second soft commit interval. See:
https://lucidworks.com/post/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
So autowarming is probably not a cure, when you originally said “commit” I was
autowarm time is 25 for
query-result cache and 1724 for filter cache
- We have hard-commit every 5 mins with opensearcher=false and
soft-commit every 2 secs.
- facet are a mix of pivot facets,range facets and facet queries
- when the same facets criteria bring a smaller result set
field listed in the field cache, but don't
see any dynamic fields listed.
2) Autowarm count is at 32 for both, and autowarm time is 25 for the
query-result cache and 1724 for the filter cache.
3) Can you elaborate on what you mean here? We have hard-commit every 5 mins
with openSearcher=false and soft-commit ev
OK, sounds like docValues is set.
Sure, in solrconfig.xml, there are two sections “firstSearcher” and
“newSearcher”.
These are queries (or lists of queries) that are fired as part of autowarming
when Solr is first started (firstSearcher) or when a commit happens that opens
a new searcher
I’d double check <1> first.
>
> Best,
> Erick
>
> > On Mar 30, 2020, at 12:20 PM, sujatha arun wrote:
> >
> > A facet heavy query which uses docValue fields for faceting returns
> about
> > 5k results executes between 10ms to 5 secs and the 5 secs time seems to
> > coincide with after a hard commit.
> >
> > Does that have any relation? Why the fluctuation in execution time?
> >
> > Thanks,
> > Revas
>
>
A facet heavy query which uses docValue fields for faceting returns about
5k results executes between 10ms to 5 secs and the 5 secs time seems to
coincide with after a hard commit.
Does that have any relation? Why the fluctuation in execution time?
Thanks,
Revas
Hi,
This is above result is what I want to be able to commit but when I run the
> same command with commit=true it will not work like below.
> curl
> 'http://54.146.2.60:8983/solr/eatzcollection/update/json?commit=true' -d
> '[{"id":"location
Hi,
This is above result is what I want to be able to commit but when I run the
> same command with commit=true it will not work like below.
> curl
> 'http://54.146.2.60:8983/solr/eatzcollection/update/json?commit=true' -d
> '[{"id":"location
Hi Chriss,
thanks for opening the ticket. I have found some possibly related issues:
Open:
https://issues.apache.org/jira/browse/SOLR-3888 - "need beter handling of
external add/commit requests during tlog recovery"
-- I agree something better should be
done here, and have filed SOLR-14262 for subsequent discussion...
https://issues.apache.org/jira/browse/SOLR-14262
I believe the reason the local commit is ignored during replay is to
ensure a consistent view of the index -- if the tlog being
replayed contains C
So I am trying to do a partial update to a document in Solr, but it will not
commit!
So this is the original doc I am trying to update with 11 votes.
{
"doc":
{
"id":"location_23_deal_51",
"deal_id":"deal_51",
"deal":&qu
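A common cause of the symptom described above (a partial document losing its other fields) is sending the partial doc as a regular update, which replaces the whole document. A sketch of Solr's atomic-update syntax instead; the collection URL is taken from the post, but the "votes" field name is an assumption, and the command is echoed rather than executed:

```shell
# Collection URL from the post; "votes" is a hypothetical field name.
SOLR_URL="http://54.146.2.60:8983/solr/eatzcollection"

# Atomic update: "inc" adds to a numeric field ("set" replaces one field),
# leaving every other stored field of the document intact.
PAYLOAD='[{"id":"location_23_deal_51","votes":{"inc":1}}]'

echo "curl '${SOLR_URL}/update/json?commit=true' -H 'Content-Type: application/json' -d '${PAYLOAD}'"
```

Note that atomic updates require the document's other fields to be stored (or docValues) so Solr can rebuild the document behind the scenes.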
until a new searcher is opened
and registered as the main query searcher, making the changes visible."
https://lucene.apache.org/solr/guide/7_7/uploading-data-with-index-handlers.html#xml-update-commands
The issue we have is the "silent" part. If upon receiving a commit request
should be after
hard commits with waitSearcher=true return successfully from all replicas. Is
that correct?
The client that indexes new documents performs a hard commit with
waitSearcher=true and after that was successful, we expect the documents to
be visible on all Replicas.
This seems to work as
/get?id=foo to check the "current" data in the document is
more appropriate than /select?q=id:foo
Some more info here...
https://lucene.apache.org/solr/guide/8_4/solrcloud-resilience.html
https://lucene.apache.org/solr/guide/8_4/realtime-get.html
A few other things that jumped out a
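The /get-versus-/select distinction above can be made concrete with a minimal sketch (hypothetical host and collection; commands echoed rather than executed):

```shell
SOLR_URL="http://localhost:8983/solr/mycollection"  # hypothetical

# Real-Time Get consults the update log, so it returns the latest version
# of the document even before a commit has opened a new searcher:
echo "curl '${SOLR_URL}/get?id=foo'"

# A normal query only sees what the currently registered searcher sees,
# i.e. the index state as of the last (soft or hard) commit:
echo "curl '${SOLR_URL}/select?q=id:foo'"
```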
Hi All,
In our Solr Cloud cluster (8.4.1), sometimes committed documents are not
visible to subsequent requests sent after an apparently successful
commit (waitFlush=true, waitSearcher=true). This behaviour does not happen
if all nodes are stable, but will happen eventually if we kill off random
I wouldn’t remove the entire directory, but yeah, after a commit you should be
fine to remove all of the files/directories _under_ tlog.
> On Dec 20, 2019, at 5:35 PM, alwaysbluesky wrote:
>
> Using solr 7.7.2.
>
> Our CDCR is broken for some reason as I posted the other
&
first. Otherwise, disk space will become full.
Is it safe to manually delete by using "rm -rf ./tlog" after commit with
/solr/collectionname/update?commit=true (simply doing commit was not able to
clean tlog because of CDCR malfunction)?
Solr has nothing to do
with these settings.
Here’s more than you want to know about commits:
https://lucidworks.com/post/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Also, you mentioned Sitecore, which uses Solr. Perhaps this more a question for
Sitecore?
Best,
Erick
>
Thanks for the feedback
Is there a config setting that can be used for explicit commit? I was thinking
the should be handling this already?
In our issue, the changes will only be reflected back to sitecore once we
manually indexed the item with changes
Regards
David Villacorta
Hi David,
Index will get updated (hard commit is happening every 15s) but changes will
not be visible until you explicitly commit or you reload core. Note that Solr
restart reloads cores.
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting
Just want to confirm, given the following config settings at solrconfig.xml:
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
Solr index will not be updated unless created item in Sitecore is manually
indexed, right?
Regards
David Villacorta
N
Try changing commit to
optimize.
Also, if it does not work, try removing the polling interval configuration
from the slaves.
What you are seeing is expected behaviour for Solr and nothing is unusual.
Try out the changes; I hope it will work fine.
On Sun, Sep 1, 2019 at 7:52 AM Monil Parikh
Hello Solr Users,
I am trying to get a Master-Repeater-Slave config to work; I am facing a
replication-related issue on luceneMatchVersion 7.7.1.
Posted on Stack Overflow with all the details:
https://stackoverflow.com/questions/57741934/solr-repeaters-slaves-replicating-are-every-commit-on-master
Hi,
I am running Solr 7.5.0 with a transient cache size of 20. I am bulk-indexing
multiple cores. I have autocommit set up, and I see a few of these errors while
indexing:
auto commit error...:org.apache.solr.common.SolrException: openNewSearcher
called on closed core
I see log messages about
On 7/3/2019 1:36 AM, Avi Steiner wrote:
We had some cases with customers (Solr 5.3.1, one search node, one shard) with
huge tlog files (more than 1 GB).
With 30 seconds on the autoCommit, that should not be happening.
When a hard commit fires, the current tlog is closed and a new one
starts
the core is replaying the tlog, it
means that the last hard commit didn’t happen and the docs indexed since the
last _successful_ commit wouldn’t be found. IOW, you’d think you lost documents.
4. Not really. The commit/tlog management has been pretty static.
BTW, are you by any chance using CDCR
Thanks for your reply, Erick.
1. Unfortunately, we got those incidents after a long time, and the relevant log
files had already rolled, so I couldn't find commit failure messages; but
since I found OOM messages in other logs, I can guess that was the root cause.
2. Just to be sure I
Let’s take this a piece at a time.
1. commit failures are very rare, in fact the only time I’ve seen them is when
running out of disk space, OOMs, pulling the plug, etc. Look in your log files,
is there any evidence of same?
2. OOM messages. To support Real Time Get, internal structures are
180
${solr.data.dir:}
I don't have enough logs so I don't know if commit failed or not. I just
remember there were OOM messages.
As you may know, during restart, Solr tries to replay from tl
lr runs on a dedicated VM with eight cores and 64 GB RAM (16G heap),
>> which is common scenario with our software and the index holds about 20
>> million documents. Queries are as fast as expected.
>>
>> This is Solr 7.5.0, stand-alone, auto hard-commit set to 60 sec
scenario with our software and the index holds about 20
million documents. Queries are as fast as expected.
This is Solr 7.5.0, stand-alone, auto hard-commit set to 60 seconds, no
explicit soft-commits, but documents added with commitWithin=5000 or 1000
depending on the use case. No warm-up queries
bq. after getting new segments from the leader, the follower replica will
still apply the hard/soft commit?
As was described in one of the videos below, a follower TLOG replica looks for
the max docid in received new segments
and purges its transaction log of older records. Then it starts a new
Hi Tomás,
No, I am not seeing reloads. I am trying to understand the interactions
between hard commit, soft commit, transaction log update with a TLOG
cluster for both leader and follower replicas. For example, after getting
new segments from the leader the follower replica will still apply the
On 13/12/18 16:48, Mikhail Khludnev wrote:
solr.log.9:2018-12-13 09:35:31.921 INFO
(recoveryExecutor-9-thread-1-processing-x:COSBIBioIndex) [
x:COSBIBioIndex] o.a.s.u.DirectUpdateHandler2 start
commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
so, recovery has been
> Do you have any idea of why this happens?
One just commits every time, or sends/appends the commitWithin param.
Can you grep the logs for occurrences of the word 'commit'? It's also possible
to increase log verbosity for LogUpdateProcessor.
On Thu, Dec 13, 2018 at 10:15 AM Danilo Tomasoni
have a single machine where I just index data; no concurrent querying
is happening, that's why I don't care about visibility but just about
speed/no crashes.
I'm planning to make a single hard commit at the end (roughly once every
500,000 docs),
copy the final index to a clone m
system requirements while indexing?
What about transaction logs? Can they be disabled?
When Solr crashes I always reimport from scratch, because I don't expect
that the documents accepted by Solr between the last hard commit and the
crash will be saved anywhere.
But this article
What about autoSoftCommit ?
On Wed, Dec 12, 2018 at 3:24 PM Danilo Tomasoni wrote:
> Hello, I'm experiencing oom while indexing a big amount of documents.
>
> The main idea to avoid OOM is to avoid commit (just one big commit at
> the end).
>
> Is this a correct idea
Hello, I'm experiencing OOM while indexing a large number of documents.
The main idea to avoid OOM is to avoid commits (just one big commit at
the end).
Is this a correct idea?
How can I disable autocommit?
I've set
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
in solrconfi
I think this is a good point. The tricky part is that if TLOG replicas
don't replicate often, their transaction logs will get too big too, so you
want the replication interval of TLOG replicas to be tied to the
auto(hard)Commit interval (by default at least). If you are using them for
search
bq. but not every poll attempt they fetch new segment from the leader
Ah, right. Ignore my comment. A commit will only occur on the followers
when there are new segments to pull down, so you're right: roughly
every second poll would find things to bring down and open a
new sea
Hi Vadim,
There is no commit on TLOG/PULL follower replicas, only on the leader.
Followers fetch the segments and **reload the core** every 150 seconds (if
there were new segments, I suppose). Yeah, followers don't pay the CPU
price of indexing, but there are still cache invalid
If the hard commit max time is 300 sec, then a commit happens every 300 sec on
the tlog leader, and new segments pop up on the leader every 300 sec during
indexing. The polling interval on the other replicas is 150 sec, but they don't
fetch a new segment from the leader on every poll attempt, afaiu. Erick, do you mean
Not quite, 60. The polling interval is half the commit interval
This has always bothered me a little bit, I wonder at the utility of a
config param. We already have old-style replication with a
configurable polling interval. Under very heavy indexing loads, it
seems to me that either the
018 12:42 AM
> To: [email protected]
> Subject: Re: Soft commit and new replica types
>
> Some insights in the new replica types below:
>
> On Sat, December 8, 2018 08:42, Vadim Ivanov <
> [email protected] wrote:
>
> >
> > From
periodically poll the
leader for changes in index segments' files and download those segments
from the leader. If hard commit max time is defined in solrconfig.xml the
polling interval of each replica will be half that value. Or else if the
soft commit max time is defined then the replicas wil
Before 7.x, all replicas in SolrCloud were of the NRT type,
and the following rules applied:
https://stackoverflow.com/questions/45998804/when-should-we-apply-hard-commit-and-soft-commit-in-solr
and
https://lucene.apache.org/solr/guide/7_5/updatehandlers-in-solrconfig.html#commit-and-softcommit
But
nd I don't have an issue with that, as it's normal
behaviour. But most of the time it just triggers a
full-copy without any details in the log.
And recently on one of the nodes I enabled soft-commit on the master nodes and
monitored the corresponding slave node; what I observed is
gments on
the master have changed, they'll all need to be copied during
replication so if that's the case that's entirely normal.
And you shouldn't need to commit on the slaves, that should happen as
part of replication.
Best,
Erick
On Fri, Jun 15, 2018 at 3:25 AM, Adarsh H
Hi All,
Currently I am using Solr 5.2.1 on a Linux machine. I have a cluster of 5 nodes
with a master and slave configuration, which gives 5 master nodes and 5 slave
nodes. We have enabled only hard commit on the master nodes and both soft & hard
commit on the slave nodes, since the search will happe
In the below-mentioned git commit, I see SolrCloudClient has been changed to
generate Solr core URLs differently than before.
In the previous version, Solr URLs were computed using "url =
coreNodeProps.getCoreUrl()".
This concatenated "base_url" + "core" name from t
On 5/14/2018 11:29 AM, LOPEZ-CORTES Mariano-ext wrote:
> After having injecting 200 documents in our Solr server, the commit
> operation at the end of the process (using ConcurrentUpdateSolrClient) take
> 10 minutes. It's too slow?
There is a wiki page discussing slow c
Hi
After injecting 200 documents into our Solr server, the commit
operation at the end of the process (using ConcurrentUpdateSolrClient) takes 10
minutes. Is that too slow?
Our auto-commit policy is the following:
> The solrconfig.xml has both autoCommit and autoSoftCommit enabled.
>
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:6}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
Over the weekend one of our Dev SolrCloud clusters ran out of disk space.
Examining the problem, we found one collection that had 2 months of uncommitted
tlog files. Unfortunately the Solr logs rolled over, and so I cannot see the
commit behavior during the last time data was loaded to it.
The
It's been a while since I had time to look further into this. I'll have to
go back through logs, which I need to get retrieved by an admin.
On Fri, Mar 23, 2018 at 8:45 AM, Amrit Sarkar
wrote:
> Elaino,
>
> When you say commits not working, the solr logs not printing &quo
Elaino,
When you say commits are not working, are the Solr logs not printing "commit"
messages? Or are documents not appearing when you search?
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linke
On 3/2/2018 10:39 AM, Christopher Schultz wrote:
> The problem is that I'm updating the index after my SQL UPDATE(s) have
> run, but before my SQL COMMIT occurs. I have had a problem where the SQL
> fails and rolls-back, but the solrClient is not rolled-back.
>
> I'm a li
nd I'd like to know what the "right"
solution is.
The problem is that I'm updating the index after my SQL UPDATE(s) have
run, but before my SQL COMMIT occurs. I have had a problem where the SQL
fails and rolls-back, but the solrClient is not rolled-back.
I'm a little w
I meant to get back to this sooner.
When I say I issued a commit I do issue it as collection/update?commit=true
The soft commit interval is set to 3000, but I don't have a problem with
soft commits ( I think). I was responding
I am concerned that some hard commits don't seem to hap
bq: But if 3 seconds is aggressive what would be a good value for soft commit?
The usual answer is "as long as you can stand". All top-level caches are
invalidated, autowarming is done etc. on each soft commit. That can be a lot of
work and if your users are comfortable with docs not
ent environment, where we do not use CDCR. However,
I'm pretty sure that I've seen situations in production where commits were
also long overdue.
The "autoSoftcommit" was a typo. The soft commit logic seems to be fine; I
don't see an issue with data visibility. But if 3 seco