Hi,
I sometimes see FileNotFoundExceptions in my log during the recovery of a
core. Does anyone know the reason for this? As I understand Solr, this
should not happen.
Sorry for cross-posting to the wrong user group (java-user).
Markus
2015-08-04 15:06:07,646|INFO|mpKPXpbUwp|org.apache.solr.u
View this message in context:
http://lucene.472066.n3.nabble.com/FileNotFoundException-Error-closing-IndexWriter-Error-opening-new-searcher-tp4157800p4162177.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
in the last few days we have had some trouble with one of our clusters (5
machines, each running 4.7.2 inside a Jetty container, no replication, Java
1.7.21). Twice we had trouble restarting one server (the same machine)
because of a FileNotFoundException.
1. First time: Stopping Solr while
--
View this message in context:
http://lucene.472066.n3.nabble.com/FileNotFoundException-tp4093416p4093462.html
oBody(StandardDirectoryReader.java:56)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
    at org.apache.lucene.index.DirectoryReader
    at org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:169)
    ... 18 more
--
View this message in context:
http://lucene.472066.n3.nabble.com/FileNotFoundException-tp4093416.html
On 7/2/2013 9:39 AM, Murthy Perla wrote:
>I am a newbie to Solr. I accidentally deleted indexed
> files (manually, using rm -rf) from the Solr index folder on the server.
> Since then, whenever I start the server it fails to start with an FNF
> exception. How can this be fixed quickly?
I believ
Hi All,
I am a newbie to Solr. I accidentally deleted indexed files (manually,
using rm -rf) from the Solr index folder on the server. Since then,
whenever I start the server it fails to start with an FNF exception. How
can this be fixed quickly?
I'd appreciate it if anyone can suggest a quick fix.
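With Solr stopped, the usual quick fix is to remove the broken index directory entirely so Solr recreates an empty index on restart, and then reindex from the source data. A minimal sketch of that cleanup step — the `data/index` layout is an assumption from the default single-core setup, and the code is demonstrated against a temp directory rather than a live core:

```java
// A minimal recovery sketch, not an official Solr tool: with Solr stopped,
// delete the broken index directory so Solr recreates an empty index on
// restart. The <core>/data/index layout is the default-config assumption.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class ClearBrokenIndex {

    // Recursively delete the index directory (children before parents).
    static void clearIndexDir(Path indexDir) throws IOException {
        if (!Files.exists(indexDir)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(indexDir)) {
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        // Temp directory stands in for <solr.home>/<core>/data/index.
        Path index = Files.createTempDirectory("core")
                          .resolve("data").resolve("index");
        Files.createDirectories(index);
        Files.createFile(index.resolve("segments_1")); // leftover segment file
        clearIndexDir(index);
        System.out.println(Files.exists(index)); // false: Solr rebuilds it
    }
}
```

After restart, the core starts with an empty index, so the data still has to be reindexed from its original source.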
his core
> to refresh the changes.
>
> Now I sometimes get my exception =(
> Does anybody have an idea?
>
> Here is a part of my solrconfig.xml (updater AND searcher):
> ->
>
>
>true
>128
>2
>
>
>single
>1000
>1
> false
>
>true
>false
>
>
>
> 1
>
> 0
>
>
>
>
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/FileNotFoundException-during-commit-concurrences-process-tp3991384.html
Regards,
Karthik
In my older version of Solr this was possible, but it seems it is not
possible in this new one =(
--
View this message in context:
http://lucene.472066.n3.nabble.com/FileNotFoundException-during-commit-concurrences-process-tp3991384p3991388.html
=(
Does anybody have an idea?
Here is a part of my solrconfig.xml (updater AND searcher):
->
true
128
2
single
1000
1
false
true
false
1
0
--
View this message in context:
http://l
Thanks for the info, Peter. I think I ran into the same issue some time ago
and could not find out why the backup stopped and also got deleted by Solr.
I decided to stop running updates to Solr while a backup is in progress and
wrote my own backup handler that simply copies the index file
Informational
Hi,
This information is for anyone who might be running into problems when
performing explicit periodic backups of Solr indexes. I encountered this
problem, and hopefully this might be useful to others.
A related Jira issue is: SOLR-1475.
The issue is: When you execute a 'command=b
Can we confirm that the user does not have multiple DIH instances
configured? Any request for an import while another import is in progress
is rejected.
On Sat, Feb 13, 2010 at 11:40 AM, Chris Hostetter
wrote:
>
> : concurrent imports are not allowed in DIH, unless u setup multiple DIH
> instances
>
> Right, bu
: concurrent imports are not allowed in DIH, unless u setup multiple DIH
instances
Right, but that's not the issue -- the question is whether attempting
to do so might be causing index corruption (either because of a bug or
because of some possibly really odd config we currently know nothing abo
concurrent imports are not allowed in DIH, unless u setup multiple DIH instances
On Sat, Feb 13, 2010 at 7:05 AM, Chris Hostetter
wrote:
>
> : I have noticed that when I run concurrent full-imports using DIH in Solr
> : 1.4, the index ends up getting corrupted. I see the following in the log
>
>
: I have noticed that when I run concurrent full-imports using DIH in Solr
: 1.4, the index ends up getting corrupted. I see the following in the log
I'm fairly confident that concurrent imports won't work -- but it
shouldn't corrupt your index -- even if the DIH didn't actively check for
this
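The "reject while an import is in progress" behavior discussed above can be sketched with a simple compare-and-set guard. The class and method names here are illustrative, not actual DataImportHandler internals:

```java
// Sketch of the "reject concurrent imports" pattern: only one import may
// run at a time; a second request arriving mid-import is turned away.
import java.util.concurrent.atomic.AtomicBoolean;

public class ImportGuard {
    private final AtomicBoolean importing = new AtomicBoolean(false);

    // Runs the task unless an import is already in progress; returns
    // whether the task actually ran.
    boolean tryImport(Runnable importTask) {
        if (!importing.compareAndSet(false, true)) {
            return false; // a full-import is already running: reject
        }
        try {
            importTask.run();
            return true;
        } finally {
            importing.set(false); // always release the guard
        }
    }

    public static void main(String[] args) {
        ImportGuard guard = new ImportGuard();
        boolean[] inner = new boolean[1];
        // An import issued while another is still running is rejected:
        boolean outer = guard.tryImport(
                () -> inner[0] = guard.tryImport(() -> {}));
        System.out.println(outer + " " + inner[0]); // prints: true false
    }
}
```

Rejecting the second request this way keeps two imports from ever writing to the same index concurrently, which is exactly why a rejected-but-attempted concurrent import should not be able to corrupt anything.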
: Could this be because the concurrent full-imports are stepping on each
> other's toes? It seems like one full-import request ends up deleting
> another's segment files.
>
> Is there a way to avoid this? Perhaps a config option? I would like to
> retain the flexibility to issue concurrent full-import requests.
I found some documentation on this issue at:
http://old.nabble.com/FileNotFoundException-on-index-td25717530.html
But I looked at:
http://old.nabble.com/dataimpor
On Tue, Sep 29, 2009 at 3:19 AM, Mark Miller wrote:
> Looks like a bug to me. I don't see the commit point being reserved in
> the backup code, which means it's likely to be removed before it's done
> being copied. You've got to reserve it using the delete policy to keep it
> around for the full backup duration
Looks like a bug to me. I don't see the commit point being reserved in
the backup code, which means it's likely to be removed before it's done
being copied. You've got to reserve it using the delete policy to keep it
around for the full backup duration. I'd file a JIRA issue.
--
- Mark
http://www.lucidimagina
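The reservation Mark describes is what Lucene's SnapshotDeletionPolicy provides: taking a snapshot pins a commit point so the deletion policy cannot delete its files until the snapshot is released after the copy finishes. A stdlib-only sketch of the same reserve/copy/release idea — the class and method names below are illustrative stand-ins, not Lucene's actual API:

```java
// Sketch of reserving a commit point for the duration of a backup.
// Commits are modeled as strings; the deletion policy normally keeps
// only the latest commit, except for commits reserved by a backup.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CommitReservation {
    private final Set<String> reserved = new HashSet<>();
    private final List<String> commits = new ArrayList<>();

    // Mimics a keep-only-last-commit policy, honoring reservations.
    void onCommit(String commit) {
        commits.add(commit);
        commits.removeIf(c -> !c.equals(commit) && !reserved.contains(c));
    }

    // Pin the latest commit so a backup can safely copy its files.
    String snapshot() {
        String latest = commits.get(commits.size() - 1);
        reserved.add(latest);
        return latest;
    }

    // Backup finished: allow the commit to be deleted again.
    void release(String commit) {
        reserved.remove(commit);
    }

    public static void main(String[] args) {
        CommitReservation policy = new CommitReservation();
        policy.onCommit("segments_1");
        String snap = policy.snapshot();    // backup starts copying
        policy.onCommit("segments_2");      // new commit during the backup
        System.out.println(policy.commits); // segments_1 survives: reserved
        policy.release(snap);               // backup done
        policy.onCommit("segments_3");
        System.out.println(policy.commits); // old commits now cleaned up
    }
}
```

Without the reservation step, the second commit would delete segments_1 mid-copy, producing exactly the FileNotFoundException-cancels-backup behavior reported in this thread.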
Thanks to Noble Paul, I think I now understand the Java replication
handler's backup feature. It seems to work as expected on a toy index.
When trying it out on a copy of my production index (300 GB-ish),
though, I'm getting FileNotFoundExceptions. These cancel the backup
and delete the snapshot.
I think you may be right; I've opened SOLR-830.
: We may have identified the root cause but wanted to run it by the community.
: We figure there is a bug in the snappuller shell script, line 181:
-Hoss
p|grep -v temp|sort -r|head -1"`
This has fixed our local issue; we can submit a patch, but wanted a quick
sanity check because I'm surprised it's not much more commonly seen.
Jim
--
View this message in context:
http://www.nabble.com/FileNotFoundException-on-slave-after-replication---