It can fail because the tlog may contain a partial record - that is why it
is logged at WARN level rather than ERROR. A replay failure does not
necessarily indicate a problem.
- Mark
On 12/29/2013 09:04 AM, YouPeng Yang wrote:
> Hi Mark Miller
>
>How can a log replay fail .
> And I can not figure out the re
Hi Mark Miller
How can a log replay fail?
And I cannot figure out the reason for the exception. There seems to be no
BigDecimal type field in my schema.
Please give some suggestions.
The exception:
133462 [recoveryExecutor-48-thread-1] WARN org.apache.solr.update.UpdateLog – Starting
Hi Mark
Yes, I have auto commit on, just as in the other cores. All of my cores
have the same configuration: maxTime is 60s, maxDocs is 10. No new searcher
is opened.
In contrast with the others, only this core is abnormal. It cannot even
replay the log, because of the exception.
Do you have auto commit (not softAutoCommit) on? At what value? Are you
ever opening a new searcher?
- Mark
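For readers following along, the settings Mark is asking about live in the
<updateHandler> section of solrconfig.xml. A minimal sketch, with
illustrative values matching the 60s / 10-doc figures mentioned in the
thread (not a recommendation):

```xml
<!-- solrconfig.xml sketch: hard commit vs. soft commit (illustrative values) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit at most every 60s -->
    <maxDocs>10</maxDocs>              <!-- or after 10 uncommitted docs -->
    <openSearcher>false</openSearcher> <!-- commit without opening a new searcher -->
  </autoCommit>
  <!-- soft commits only control visibility; they do not roll the tlog -->
  <autoSoftCommit>
    <maxTime>-1</maxTime>
  </autoSoftCommit>
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
```

Only hard commits (the autoCommit block) close and roll the transaction
log, which is why Mark distinguishes it from softAutoCommit here.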
On 12/27/2013 05:17 AM, YouPeng Yang wrote:
> Hi
> There is a failed core in my solrcloud cluster(solr 4.6 with hdfs 2.2)
> when I start my solrcloud . I noticed that there are lots of t
Hi Eric,
Sorry for replying late.
The tlog files have stayed there for one week with no sign of decrease.
Most of them are 3~5 MB, about 40 MB in total.
I have read the article you pointed to many times, but it is not working.
Every time I reindex, Solr generates many tlog files, no matter how many
hard commits I do.
What is your commit strategy? A hard commit
(openSearcher=true or false doesn't matter)
should close the current tlog file, open
a new one and delete old ones. That said, enough
tlog files will be kept around to hold at
least 100 documents. So if you're committing
too often (say after every document), you can end up
with many small tlog files hanging around.
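The roughly-100-document retention window described above is a property of
Solr's UpdateLog. In later Solr releases it is tunable via the updateLog
config; whether these knobs are available in 4.6 should be verified against
that version's documentation, so treat this as an assumption-laden sketch:

```xml
<!-- solrconfig.xml sketch: tlog retention knobs.
     Parameter names are from later Solr releases; availability in 4.6
     should be checked before relying on them. -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numRecordsToKeep">100</int> <!-- keep at least this many docs across tlogs -->
  <int name="maxNumLogsToKeep">10</int>  <!-- cap on the number of retained tlog files -->
</updateLog>
```

With defaults like these, committing after every document still leaves a
tail of small tlog files on disk, which matches the behavior reported
earlier in the thread.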