As for how it replicated, let's assume a two-replica shard (leader +
follower). If the follower ever went into full recovery it would use
old-style replication to copy down the entire index, corrupted index
and all, from the leader. The follower can go into "full recovery" for
a number of reasons, from being shut down for a while while indexing
continues on the leader, to communication burps.
The disk corruption is, of course, a red flag and likely the root cause.
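One option in that situation, assuming you can first establish which copy of the index is actually healthy, is to drop the damaged replica and re-add it through the Collections API rather than letting it re-sync the bad index. Roughly (collection, shard and replica names below are placeholders):

http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycoll&shard=shard1&replica=core_node2
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1

The fresh replica does a full copy from the current leader, so this only helps once the leader's copy is known to be good; running CheckIndex against it is the usual way to verify that.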
Hi,
We've just been working with a client who had a corruption issue with
their SolrCloud install. They're running Solr 5.3.1, with a collection
spread across 12 shards. Each shard has a single replica.
They were seeing "Index Corruption" errors when running certain queries.
We investigated,
Hi,
We have a requirement to pre-encrypt an index we are building before it
hits disk. We are doing this by using a wrapper around MMapDirectory that
wraps the input/output streams (I know the general recommendation is to
encrypt the filesystem instead, but this option was explicitly rejected by
ou
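For anyone attempting something similar: the awkward part of wrapping a Directory this way is the read path, because Lucene seeks inside index files, so the cipher has to support random access. Below is a minimal, self-contained sketch of the usual building block, AES/CTR keyed to the file offset; this is not the poster's code, and the class and method names are made up for illustration.

import java.nio.ByteBuffer;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch only: AES/CTR lets an encrypting Directory wrapper decrypt at an
// arbitrary file offset, which IndexInput.seek() requires. Key and nonce
// handling here is purely illustrative.
public class CtrAtOffset {

    private static final int BLOCK = 16; // AES block size in bytes

    // Build the CTR counter block for the AES block containing byte `offset`.
    static IvParameterSpec ivForOffset(byte[] nonce8, long offset) {
        ByteBuffer iv = ByteBuffer.allocate(BLOCK);
        iv.put(nonce8);              // 8-byte per-file nonce
        iv.putLong(offset / BLOCK);  // 8-byte big-endian block counter
        return new IvParameterSpec(iv.array());
    }

    // Decrypt `len` bytes of ciphertext that were stored at file offset `offset`.
    static byte[] decryptAt(byte[] key, byte[] nonce8, long offset,
                            byte[] cipherText, int len) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                    new SecretKeySpec(key, "AES"),
                    ivForOffset(nonce8, offset));
        // Discard the keystream bytes that precede `offset` within its block.
        cipher.update(new byte[(int) (offset % BLOCK)]);
        return cipher.doFinal(cipherText, 0, len);
    }
}

Encryption works the same way (CTR is symmetric), and a real wrapper would apply this inside the createOutput/openInput overrides of a FilterDirectory subclass.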
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
> at
> org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:383)
> at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1777)
>
> --
> View this message in co
(CheckIndex.java:1777)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Fixing-corrupted-index-tp4126644p4126837.html
Sent from the Solr - User mailing list archive at Nabble.com.
seems the index is
> completely broken, as it complains "ERROR: java.lang.Exception: there is
> no valid Lucene index in this directory."
>
> Sounds like I am out of luck, is it so?
>
> --
> View this message in context:
> http://lucene.472
http://lucene.472066.n3.nabble.com/Fixing-corrupted-index-tp4126644p4126830.html
Sent from the Solr - User mailing list archive at Nabble.com.
egements file in directory".
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Fixing-corrupted-index-tp4126644p4126687.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Dmitry
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
Hi
Thanks.
But I am already using CheckIndex and the error is given by the CheckIndex
utility: it could not even continue after reporting "could not read any
segments file in directory".
--
View this message in context:
http://lucene.472066.n3.nabble.com/Fixing-corru
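A note for anyone hitting this: "could not read any segments file in directory" means the segments_N file itself is unreadable, and that is the one thing CheckIndex cannot work around; its -fix option only drops individual broken segments that a readable segments file still references. For reference, a minimal sketch of running the checker programmatically (Lucene 5+ style API; the path is a placeholder):

import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Sketch: run CheckIndex from code instead of the command line.
// Rough CLI equivalent: java -cp lucene-core-<ver>.jar org.apache.lucene.index.CheckIndex <indexDir> [-fix]
public class CheckIndexSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("/path/to/index")); // placeholder
        CheckIndex checker = new CheckIndex(dir);
        CheckIndex.Status status = checker.checkIndex();
        System.out.println(status.clean ? "index is clean" : "index has problems");
        dir.close();
    }
}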
ooks similar is segments.gen.
> However, the index segment files including .si, tip, doc, fdx etc still
> exist.
>
> Is there any way to fix this as it took me 2 weeks to build this index...
>
> Many many thanks for your kind advice!
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Fixing-corrupted-index-tp4126644.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Dmitry
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
still
exist.
Is there any way to fix this as it took me 2 weeks to build this index...
Many many thanks for your kind advice!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Fixing-corrupted-index-tp4126644.html
Sent from the Solr - User mailing list archive at Nabble.com.
rapper.handle(HandlerWrapper.java:152)\n\tat
> org.mortbay.jetty.Server.handle(Server.java:326)\n\tat
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)\n\tat
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:926)\n\tat
> org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)\n\tat
> org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)\n\tat
> org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)\n\tat
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)\n\tat
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)\n","code":500}}
>
>
> Thanks in advance, regards
> Victor
--
View this message in context:
http://lucene.472066.n3.nabble.com/corrupted-index-in-slave-tp4054769p4054772.html
Sent from the Solr - User mailing list archive at Nabble.com.
Handler.headerComplete(HttpConnection.java:926)\n\tat
org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)\n\tat
org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)\n\tat
org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)\n\tat
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)\n\tat
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)\n","code":500}}
Thanks in advance, regards
Victor
--
View this message in context:
http://lucene.472066.n3.nabble.com/corrupted-index-in-slave-tp4054769.html
Sent from the Solr - User mailing list archive at Nabble.com.
: First, I tried the scripts provided in the Solr distribution without success
...
: And that's true : there is no /opt/apache-solr-1.4.1/src/bin/scripts-util
: but a /opt/apache-solr-1.4.1/src/scripts/scripts-util
: Is this normal to distribute the scripts with a bad path?
it looks like
Hi everyone,
We are using Solr 1.4.1 in my company and we need to do some backups of the
indexes.
After some googling, I'm quite confused about the different ways of backing
up the index.
First, I tried the scripts provided in the Solr distribution, without success:
I untarred the apache-solr-1
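In case it is useful to others: besides the shell scripts, Solr 1.4's ReplicationHandler can trigger a backup over HTTP, assuming the handler is enabled in solrconfig.xml (host, port and core path below are placeholders):

http://localhost:8983/solr/replication?command=backup

This writes a snapshot.<timestamp> directory under the core's data directory, taken against the latest commit, which can then be archived like any other set of files.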
> Sent: Thu, January 7, 2010 3:08:55 PM
> Subject: Corrupted Index
>
> Hi all,
>
> Our application uses solrj to communicate with our solr servers. We started a
> fresh index yesterday after upping the maxFieldLength setting in solrconfig.
> Our
> task indexes cont
Yes, that would be helpful to include, sorry, the official 1.4.
-----Original Message-----
From: Ryan McKinley [mailto:ryan...@gmail.com]
Sent: Thursday, January 07, 2010 2:15 PM
To: solr-user@lucene.apache.org
Subject: Re: Corrupted Index
what version of solr are you running?
On Jan 7, 2010
what version of solr are you running?
On Jan 7, 2010, at 3:08 PM, Jake Brownell wrote:
Hi all,
Our application uses solrj to communicate with our solr servers. We
started a fresh index yesterday after upping the maxFieldLength
setting in solrconfig. Our task indexes content in batches and
Hi all,
Our application uses solrj to communicate with our solr servers. We started a
fresh index yesterday after upping the maxFieldLength setting in solrconfig.
Our task indexes content in batches and all appeared to be well until noonish
today, when after 40k docs, I started seeing errors. I