On 7/27/2018 11:02 AM, cyndefromva wrote:
> I'm just curious why are there still so many 503 errors being generated
> (Error - Rsolr::Error::Http - 503 Service Unavailable - retrying ...)
>
> Is it related to all the "Error opening new searcher. exceeded limit of
> maxWarmingSearchers=2, try again later" errors?
bq: Error opening new searcher. exceeded limit of maxWarmingSearchers=2
did you make sure that your indexing client isn't issuing commits all
the time? The other possible culprit (although I'd be very surprised)
is if you have your filterCache and queryResultCache autowarm settings
set extremely high.
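If client-side commits are the problem, the usual fix is to let Solr manage
commits itself. A minimal sketch of the relevant solrconfig.xml settings; the
intervals are illustrative placeholders, not values recommended in this thread:

<!-- Hard commit: flush to disk regularly, but don't open a new searcher. -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- Soft commit: makes new documents visible; each one opens a searcher,
     so keep the interval long enough for warming to finish. -->
<autoSoftCommit>
  <maxTime>15000</maxTime>
</autoSoftCommit>

With this in place the client just sends documents and never calls commit, so
warming searchers can't pile up past maxWarmingSearchers.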
That makes sense, the ulimit was too small and I've updated it.
I'm just curious why are there still so many 503 errors being generated
(Error - Rsolr::Error::Http - 503 Service Unavailable - retrying ...)
Is it related to all the "Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later" errors?
Hi,
You have to increase the open file limit for your Solr user - you can
check it with ulimit -n. It will typically show 1024.
To increase it, you have to raise the system limit in
/etc/security/limits.conf.
Add the following lines:
* hard nofile 102400
* soft nofile 102400
root hard nofile 102400 ...
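To verify the new limit actually reaches Solr, a quick check from the shell;
the user name "solr" and the pid placeholder are assumptions about your setup:

# limit as seen by a fresh shell for the solr user
su - solr -c 'ulimit -n'
# limit the running Solr JVM actually got (replace <solr-pid>)
grep 'open files' /proc/<solr-pid>/limits

Note that limits.conf only applies to new sessions, so restart Solr after
editing it.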
I have a Rails 5 application that uses Solr to index and search our site. The
sunspot gem is used to integrate Ruby and Solr. It's a relatively small
site (no more than 100,000 records) and has moderate usage (except for the
googlebot).
Until recently we regularly received 503 errors; reloading the page ...
On 7/26/2018 1:32 PM, cyndefromva wrote:
At the point it starts failing I see a java exception: "java.io.IOException:
Too many open files" in the solr log file and a SolrException (Error opening
new searcher) is returned to the user.
The operating system where Solr is running needs its open file limit
increased ...
>
>
>
> -Original message-
>> From:cyndefromva
>> Sent: Thursday 26th July 2018 22:18
>> To: solr-user@lucene.apache.org
>> Subject: Recent configuration change to our site causes frequent index
>> corruption
>>
>> I have a Rails 5 application that ...
... have to!
Regards,
Markus
-Original message-
> From:cyndefromva
> Sent: Thursday 26th July 2018 22:18
> To: solr-user@lucene.apache.org
> Subject: Recent configuration change to our site causes frequent index
> corruption
>
> I have a Rails 5 application that uses Solr to ...
I have a Rails 5 application that uses Solr to index and search our site. The
sunspot gem is used to integrate Ruby and Solr. It's a relatively small
site (no more than 100,000 records) and has moderate usage (except for the
googlebot).
Until recently we regularly received 503 errors; reloading the page ...
Another sanity check. With deletion, the only option would be to reindex those
documents. Could someone please let me know if I am missing anything or if I
am on track here. Thanks.
While trying to upgrade a 100G index from Solr 4 to 5, CheckIndex (actually the
index updater) indicated that the index was corrupted. Hence, I ran CheckIndex
to fix the index, which showed a broken segment warning and then deleted those
documents. I then ran the index updater on the fixed index, which upgraded fine ...
are you certain the schema is the same on both master and slave? I find
that the schema file doesn't always go with the replication and if a field
is different on the slave it will cause problems

On Wed, Mar 15, 2017 at 12:08 PM, Santosh Sidnal wrote:

> Hi all,
>
> I am facing issues of index corruption at regular intervals of time on
> the live server where I pull index data from one master server.
>
> Can anyone please give us some pointers why we are facing this issue at
> regular intervals of time?
are you certain the schema is the same on both master and slave? I find
that the schema file doesn't always go with the replication and if a field
is different on the slave it will cause problems

On Wed, Mar 15, 2017 at 12:08 PM, Santosh Sidnal wrote:

> Hi all,
>
> I am facing issues ...
Hi all,
I am facing issues of index corruption at regular intervals of time on the live
server where I pull index data from one master server.
Can anyone please give us some pointers why we are facing this issue at regular
intervals of time?
I am aware of how we can correct a corrupted index, but I am ...
Hi Guys,
Can someone please help out here to pinpoint the issue?
Thanks & Regards,
Puneet
On Mon, Apr 6, 2015 at 1:27 PM, Puneet Jain wrote:
> Hi Guys,
>
> I have been using 4.2.0 for more than a year and since last October 2014 have
> been facing an index corruption issue. However, now ...
Hi Guys,
I have been using 4.2.0 for more than a year and since last October 2014 have
been facing an index corruption issue. However, now it is happening every day
and we have to build a fresh index as a temporary fix. Please find the logs
below, where I can see an error while replicating data from master to slave ...
Ahhh, ok. When you reloaded the cores, did you do it core-by-core?
Yes, but maybe we reloaded the wrong core or something like that. We
also noticed that the startTime doesn't update in the admin-ui while
switching between cores (you have to reload the page). We still use
4.8.1, so maybe it is ...
Ahhh, ok. When you reloaded the cores, did you do it core-by-core?
I can see how something could get dropped in that case.
However, if you used the Collections API and two cores mysteriously
failed to reload that would be a bug. Assuming the replicas in question
were up and running at the time you ...
Hi,
this _sounds_ like you somehow don't have indexed="true" set for the
field in question.
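For reference, a minimal sketch of what that looks like in schema.xml; the
field name here is hypothetical:

<!-- The field must be indexed="true" to be searchable and filterable;
     stored="true" only controls whether the raw value is returned. -->
<field name="my_filter_field" type="string" indexed="true" stored="true"/>

A stored-but-not-indexed field shows up in returned documents yet never
matches an fq, which is exactly the symptom described below.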
We investigated a lot more. The CheckIndex tool didn't find any error.
We now think the following happened:
- We changed the schema two months ago: we changed a field to
indexed="true". We reloaded the cores ...
bq: You say in our case some docs didn't make it to the node, but
that's not really true: the docs can be found on the corrupted nodes
when I search on ID. The docs are also complete. The problem is that
the docs do not appear when I filter on certain fields

this _sounds_ like you somehow don't have indexed="true" set for the
field in question ...
On 3/5/2015 3:13 PM, Martin de Vries wrote:
> I understand there is not a "master" in SolrCloud. In our case we use
> haproxy as a load balancer for every request. So when indexing every
> document will be sent to a different solr server, immediately after
> each other. Maybe SolrCloud is not able ...
If you google "replication can cause index corruption" there are two JIRA
issues that are the most likely cause of corruption in a SolrCloud env.
- Mark
> On Mar 5, 2015, at 2:20 PM, Garth Grimm wrote:
>
> For updates, the document will always get routed to the leader of the ...
Subject: Re: Solrcloud Index corruption
Hi Erick,
Thank you for your detailed reply.
You say in our case some docs didn't make it to the node, but that's not really
true: the docs can be found on the corrupted nodes when I search on ID. The
docs are also complete. The problem is that the ...
Hi Erick,
Thank you for your detailed reply.
You say in our case some docs didn't make it to the node, but that's
not really true: the docs can be found on the corrupted nodes when I
search on ID. The docs are also complete. The problem is that the docs
do not appear when I filter on certain fields ...
Wait up. There's no "master" index in SolrCloud. Raw documents are
forwarded to each replica, indexed and put in the local tlog. If a
replica falls too far out of sync (say you take it offline), then the
entire index _can_ be replicated from the leader and, if the leader's
index was incomplete, the ...
Hi Andrew,
Even our master index is corrupt, so I'm afraid this won't help in our
case.
Martin
Andrew Butkus wrote on 05.03.2015 16:45:
Force a fetchindex on slave from master command:
http://slave_host:port/solr/replication?command=fetchindex - from
http://wiki.apache.org/solr/SolrRepli
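From the shell, forcing that fetch looks roughly like this; host, port, and
core name are placeholders for your own:

# ask the slave core to pull the current index from its configured master
curl "http://slave_host:8983/solr/mycore/replication?command=fetchindex"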
Subject: Re: Solrcloud Index corruption
We had a similar issue; when this happened we did a fetchindex on each core
that was out of sync to put them back right again.
Sent from my iPhone
> On 5 Mar 2015, at 14:40, Martin de Vries wrote:
>
> Hi,
>
> We have index corruption on some cores on our Solrcloud running version 4.8.1 ...
We had a similar issue; when this happened we did a fetchindex on each core
that was out of sync to put them back right again.
Sent from my iPhone
> On 5 Mar 2015, at 14:40, Martin de Vries wrote:
>
> Hi,
>
> We have index corruption on some cores on our Solrcloud running version
> 4.8.1 ...
Hi,
We have index corruption on some cores on our Solrcloud running version
4.8.1. The index is corrupt on several servers. (For example: when we do
an fq search we get results on some servers; on other servers we don't,
while the stored document contains the field on all servers.) ...
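A quick way to reproduce that comparison is to query each replica directly and
compare counts; the hosts, core, and field names here are placeholders:

# same filtered query against two servers; numFound should match but doesn't
curl "http://server1:8983/solr/mycore/select?q=*:*&fq=my_filter_field:value&rows=0"
curl "http://server2:8983/solr/mycore/select?q=*:*&fq=my_filter_field:value&rows=0"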
Hi,
It sounds like Solr simply could not index some docs. The index is not
corrupt; it's just that indexing was failing while the disk was full. You'll
need to re-send/re-add/re-index the missing docs (or simply all of them if
you don't know which ones are missing).
Otis
Hi All,
I use Solr 4.4.0 in a master-slave configuration. Last week, the master
server ran out of disk (logs got too big too quickly due to a bug in our
system). Because of this, we weren't able to add new docs to an index. The
first thing I did was to delete a few old log files to free up disk space ...
Hi again,
a follow-up on this: I ended up fixing it by uploading a new version of
clusterstate.json to Zookeeper with the missing hash ranges set (they were
easily deducible since they were sorted by shard name).
I still don't know what the correct solution to handle index corruption ...
Hi,
I have a Solr cloud set up with 12 shards with 2 replicas each, divided over 6
servers (each server hosting 4 cores). Solr version is 4.3.1.
Due to memory errors on one machine, 3 of its 4 indexes became corrupted. I
unloaded the cores, repaired the indexes with the Lucene CheckIndex tool, and ...
> Hi,
>
> We are frequently getting issues of index corruption on the cloud; this did
> not happen in our master slave setup with solr 3.6. I have tried to check
> the logs, but don't see an exact reason.
>
> I have run the index checker and it recovers, but I am not able to ...
Hi,
We are frequently getting issues of index corruption on the cloud; this did not
happen in our master slave setup with solr 3.6. I have tried to check the
logs, but don't see an exact reason.
I have run the index checker and it recovers, but I am not able to understand
why ...
I am using 3.5.
- Original Message -
From: Lance Norskog [mailto:goks...@gmail.com]
Sent: Monday, May 14, 2012 11:08 AM
To: solr-user@lucene.apache.org
Subject: Re: Index Corruption
"Index corruption" usually means data structure problems. There is a
Luce
"Index corruption" usually means data structure problems. There is a
Lucene program 'org.apache.lucene.index.CheckIndex' in the lucene core
jar. If there is a problem with the data structures, this program will
find it:
java -cp lucene-core-XX.jar org.apache.lucene.index.CheckIndex /path/to/index
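For completeness, a sketch of both modes; the jar version and index path are
placeholders for your own, and -fix is destructive (it drops every document in
an unreadable segment), so back up the index first:

# read-only: report broken segments without touching the index
java -cp lucene-core-4.8.1.jar org.apache.lucene.index.CheckIndex /var/solr/data/index
# repair: rewrite the index, removing documents in broken segments
java -cp lucene-core-4.8.1.jar org.apache.lucene.index.CheckIndex /var/solr/data/index -fix

Documents dropped this way are gone and have to be re-indexed from the source,
which matches the "sanity check" advice earlier in the thread.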
... why the same happened in terms of corruption still bothers me.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Index-Corruption-tp3983579.html
Sent from the Solr - User mailing list archive at Nabble.com.
... to experience any index corruption anymore, so it
is safe to use Java 7u1 with Lucene Core and Solr.
On the same day, Oracle released Java 6u29 [2] fixing the same problems
occurring with Java 6, if the JVM switches -XX:+AggressiveOpts or
-XX:+OptimizeStringConcat were used. Of course, you should ...
... Solr users
> with the default configuration will have Java crashing with SIGSEGV as soon
> as they start to index documents, as one affected part is the well-known
> Porter stemmer (see LUCENE-3335 [4]). Other loops in Lucene may be
> miscompiled, too, leading to index corruption (especially on Lucene trunk ...
... with the default configuration will have Java crashing with SIGSEGV as soon
as they start to index documents, as one affected part is the well-known
Porter stemmer (see LUCENE-3335 [4]). Other loops in Lucene may be
miscompiled, too, leading to index corruption (especially on Lucene trunk
with pulsing codec; ...
If you are using SimpleFSDirectory (either explicitly or via
FSDirectory.open on Windows) with Solr/Lucene trunk or 3.x branch since July
30,
you might have index corruption and you should svn up and rebuild.
More details available here:
https://issues.apache.org/jira/browse/LUCENE-2637
This issue:
https://issues.apache.org/jira/browse/LUCENE-2574
which was committed 3 days ago (Friday Jul 30) can cause index corruption.
I just committed a fix for the corruption, but if you've been using
Solr/Lucene trunk or 3x branch updated after the first commit on
Friday, and you ...
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
> at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
> at java.lang.Thread.run(Thread.java:619)
>
> catalina.2010-07-15.log:SEVERE: java.io.FileNotFoundException:
> /solr_data/data/index/segments_jes (No such file or directory)
> catalina.2010-07-15.log:SEVERE: java.io.FileNotFoundException:
> /solr_data/data/index/segments_jes (No such file or directory)
> catalina.2010-07-15.log:SEVERE: java.io.FileNotFoundException:
> /solr_data/data/index/segments_jes (No such file or directory)
> catalina.2010-07-15.log:SEVERE: java.io.FileNotFoundException:
> /solr_data/data/index/segments_jes (No such file or directory)
> catalina.2010-07-15.log:SEVERE: java.io.FileNotFoundException:
> /solr_data/data/index/segments_jes (No such file or directory)
>
... I have suggested
> that we use a single index in order to keep things simple, but there
> are suggestions to split our documents amongst different indexes.
>
> The primary motivation for this split is a worry about potential
> index corruption. I.e., if we only have one index ...
... a single index in order to keep things simple, but there
are suggestions to split our documents amongst different indexes.
The primary motivation for this split is a worry about potential
index corruption. I.e., if we only have one index and it becomes
corrupt, what do we do? I never considered this ...
... suggestions to split our
documents amongst different indexes.
The primary motivation for this split is a worry about potential index
corruption. I.e., if we only have one index and it becomes corrupt, what do we do?
I never considered this to be an issue since we would have backups etc., but I
think ...
We have an indexing script which has been running for a couple of weeks
now without problems. It indexes documents and then periodically commits
(which is a tad redundant I suppose), both via the HTTP interface.
All documents are indexed to a master and a slave rsyncs them off using
the standard ...
Can you post the full logs leading up to the corruption (including
full stack traces)? Ie, after reboot when the permissions problem
started.
I'm very surprised this led to index corruption. If Lucene is unable
to write any of the files for a new commit, that commit aborts and
those partial ...
We rebooted a machine, and the permissions on the external drive where
the index was stored had changed. We didn't realize it immediately,
because searches were working and updates were not throwing errors
back to the client.
These ended up in catalina.out:
Apr 22, 2009 11:57:12 PM org.apache.sol...
Hello all!
I had a problem this week, and I'd like to share it with you all.
My WebLogic server that generates my index throws its logs into a shared
storage. During my indexing process (Solr+Lucene), this shared storage
became 100% full, and everything collapsed (all servers that use this
shared storage ...
On 30-Aug-07, at 12:09 PM, Lance Norskog wrote:
Is there an app that walks a Lucene index and checks for corruption?
How would we know if our index had become corrupted?
Try asking on [EMAIL PROTECTED]
-Mike
Is there an app that walks a Lucene index and checks for corruption?
How would we know if our index had become corrupted?
Thanks,
Lance