It's for both. I am facing a problem, and I want to get to the root of it
by understanding what happens when we issue an update.

The problem I am facing is that sometimes, old transaction logs are not
getting deleted for one or two replicas in my SolrCloud setup, no matter
how many times I do a hard commit. They just keep piling up (I have seen up
to 30GB). So I am issuing a hard commit and then deleting them manually,
and I want to make sure this doesn't cause any data loss. My hard commit
interval is set (based on our indexing rate) so that the tlog should never
grow beyond 500MB-600MB.
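
For reference, the commit policy I am describing is the <autoCommit> block in
solrconfig.xml; something along these lines (the numbers below are placeholders
for illustration, not my actual values):

    <autoCommit>
      <!-- hard commit at most every 60 seconds... -->
      <maxTime>60000</maxTime>
      <!-- ...or after 50,000 uncommitted documents, whichever comes first -->
      <maxDocs>50000</maxDocs>
      <!-- keep the current searcher; soft commits handle visibility -->
      <openSearcher>false</openSearcher>
    </autoCommit>

My understanding is that each hard commit should close the current tlog and
start a new one, which is why I expect individual tlogs to stay small.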

What might be the reason that very old transaction logs don't get deleted?
They are only rolled over on a hard commit. This happens very randomly,
but once it happens for a replica, it keeps happening for that same
replica again and again. The other replicas' transaction logs get deleted
fine on hard commit.

Another question: what role does the tlog play in the case of atomic
updates? Is it scanned when I do an atomic update? If my tlog grows very
large, will that affect indexing performance with atomic updates?
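
For context, by atomic updates I mean partial updates using the update
modifiers, e.g. in XML form (the document id and field names below are made
up for illustration):

    <add>
      <doc>
        <field name="id">doc123</field>
        <!-- "set" replaces the field's value -->
        <field name="title" update="set">New title</field>
        <!-- "inc" increments a numeric field -->
        <field name="views" update="inc">1</field>
      </doc>
    </add>

My understanding is that Solr has to look up the existing document to rebuild
it before re-indexing, so I would like to know whether the tlog is consulted
during that lookup, and whether a very large tlog would slow it down.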



