But we have, at the same time, some batches indexing non-stop the core we
just unloaded, and it happens quite frequently that we get an error at
this point: the copy cannot be done, and I guess it is because of a
write.lock file created by a Solr index writer in the index directory.

Is it possible, when unloading the core, to stop / kill the index writer?
@Shawn Heisey Yeah, deleting the "write.lock" files manually is OK in the end.
@Walter Underwood Do you have any recent performance evaluations of Solr on
HDFS vs. a local filesystem?
Shawn Heisey wrote on Tue, Aug 28, 2018 at 4:10 AM:
> On 8/26/2018 7:47 PM, zhenyuan wei wrote:
> > I found an exception when opening a core on HDFS:

If Solr is killed rather than shut down gracefully, the lockfile will get left
behind and you may have difficulty starting Solr back up on ANY kind of
filesystem until you delete the file in each core's data directory. The
filename defaults to "write.lock" if you don't change it.
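For reference, the lock implementation is chosen in solrconfig.xml; a minimal
sketch (the value shown is the usual default, and "hdfs" is what you would
pair with HdfsDirectoryFactory):

  <indexConfig>
    <!-- native file locks on local filesystems; set solr.lock.type=hdfs for HDFS -->
    <lockType>${solr.lock.type:native}</lockType>
  </indexConfig>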
Thanks,
Shawn
> > Index dir '..._shard56_replica_n110' is already locked. The most likely
> > cause is another Solr server (or another Solr core in this server) also
> > configured to use this directory; other possible causes may be specific
> > to lockType: hdfs
> > at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
> > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:955)
> > ... 9 more
> >
> > In fact, printing out the HDFS-API-level exception stack, it reports:
> >
> > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
> > /solr/collection002/core_node17/data/index/write.lock for client
> > 192.168.0.12 already exists
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem...
Or only catch the specific exception and only swallow that? But yeah,
this is something that should change as I see this "in the field" and
a more specific error message would short-circuit a lot of unnecessary
pain.
see: LUCENE-7959
Erick
On Wed, Sep 6, 2017 at 5:49 AM, Shawn Heisey wrote:
On 9/4/2017 5:53 PM, Erick Erickson wrote:
> Gah, thanks for letting us know. I can't tell you how often
> permissions issues have tripped me up. You're right, it does seem like
> there could be a better error message though.
I see this code in NativeFSLockFactory, code that completely ignores any
exception raised while trying to obtain the lock:
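An illustrative sketch of the pattern being discussed (not the actual Lucene
source; see LUCENE-7959 for the real change):

  import java.io.IOException;
  import java.nio.channels.FileChannel;
  import java.nio.channels.FileLock;
  import java.nio.channels.OverlappingFileLockException;
  import java.nio.file.Path;
  import java.nio.file.StandardOpenOption;

  final class LockSketch {
      // Tries to take a native lock on lockFile. The catch block below is the
      // problem: the root cause (permissions, read-only fs, ...) is dropped,
      // so callers only ever see a generic "Lock obtain timed out" message.
      static FileLock obtain(Path lockFile) throws IOException {
          FileChannel channel = FileChannel.open(lockFile,
                  StandardOpenOption.CREATE, StandardOpenOption.WRITE);
          FileLock lock = null;
          try {
              lock = channel.tryLock();  // null if another process holds it
          } catch (IOException | OverlappingFileLockException ignored) {
              // swallowed -- surfacing this cause is what LUCENE-7959 asks for
          }
          if (lock == null) {
              channel.close();
              throw new IOException("Lock obtain timed out: " + lockFile);
          }
          return lock;
      }
  }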
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Saturday, 26 August 2017 9:15 a.m.
To: solr-user
Subject: Re: write.lock file appears and solr wont open

Odd. The way core discovery works, it starts at SOLR_HOME and recursively
descends the directories looking for core.properties files; the index data
lives next to the core.properties file in the absence of any dataDir overrides.
So how the write.lock file is getting preserved across Solr restarts
is a mystery to me. Doing a "kill -9" is one way to make that happen
if it is done at just the wrong time, but that's unlikely in what
you're describing.
SOLR_HOME is /var/www/solr/data
The zip was actually the entire data directory which also included configsets.
And yes, core.properties is in /var/www/solr/data/prindex (it just has a single
line, name=prindex, in it). No other cores are present.
The data directory should have been unzipped before the Solr service was started.
It's certainly possible to move a core like this. You say you moved
the core. Did you move the core.properties file as well? And did it
point to the _same_ directory as the original (dataDir property)? The
whole purpose of write.lock is to keep two cores from being able to
update the same index at the same time.
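For illustration, a core.properties that pins the data directory explicitly
might look like this (values are made up):

  # core.properties -- values here are illustrative
  name=prindex
  dataDir=/var/www/solr/data/prindex/data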
I am slowly moving 6.5.1 from development to production. After installing Solr
on the final test machine, I tried to supply a core by zipping up the data
directory on development and unzipping it on test.
When I go to the admin UI I get an error (screenshot attached):
write.lock is obviously causing a problem.
Hi Mark,
let's summarise a little bit:
First of all, you are using the IndexBasedSpellChecker, which is what is
usually called "based on the sidecar index".
Basically you are building a mini Lucene index to be used with the
spellcheck component.
It behaves as a classic Lucene index, so it needs its own directory (and it
takes a write.lock there when it is built).
OK, I gave each of these spellcheckIndexDir tokens a distinct location --
distinct from each other and from the main index. This has resolved the
write.lock problem when I attempt a spellcheck.build! Thanks for the help!
I looked in the new spellcheckIndexDir location and the directory is
populated.
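For anyone who hits this later, the relevant solrconfig.xml fragment looks
roughly like this (field names and paths here are illustrative, not taken from
the original post):

  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="classname">solr.IndexBasedSpellChecker</str>
      <str name="field">spell</str>
      <!-- each spellchecker gets its own directory, away from the main index -->
      <str name="spellcheckIndexDir">./spellchecker_default</str>
    </lst>
    <lst name="spellchecker">
      <str name="name">file</str>
      <str name="classname">solr.FileBasedSpellChecker</str>
      <str name="sourceLocation">spellings.txt</str>
      <str name="spellcheckIndexDir">./spellchecker_file</str>
    </lst>
  </searchComponent>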
Mikhail,
Yes, both the Index-based and File-based spell checkers reference the
same index location. My understanding was that they were supposed to. I
didn't realize this setting was for writing indexes; rather, I thought it was
for reading the main index. So, I need to make 3 separate locations for
them.
On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers wrote:
> Greetings,
>
> Whenever I try to build my spellcheck index
> (params.set("spellcheck.build", true); or put a check in the
> spellcheck.build box in the web interface) I get the following stacktrace.
> Removing the write.lock file does no good. The message comes right back
> anyway. I read in a post that increasing writeLockTimeout would help. It
> did not help for me even increasing it to 20,000 msec. If I don't build,
> then my resultset count is ...
Haven't seen this particular problem before, but it sounds like it could be a
problem with permissions or data size limits - it may be worth looking into.
The "write.lock" file is used when an index is being modified - it is how
Lucene handles concurrent attempts to modify the index.
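For reference, the writeLockTimeout mentioned above is a solrconfig.xml
setting; an illustrative fragment (the 20,000 ms value is the one tried in the
quoted message):

  <indexConfig>
    <!-- how long an IndexWriter waits to acquire write.lock before failing -->
    <writeLockTimeout>20000</writeLockTimeout>
  </indexConfig>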
I looked for messages on the following error but don't see anything in Nabble.
Does anyone know what this error means and how to correct it?
SEVERE: java.lang.IllegalArgumentException:
/var/apache/my-solr-slave/solr/coreA/data/index/write.lock does not exist
I also occasionally see errors ...
> ... (SolrResourceLoader.java:651)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:851)
> ... 27 more
>
> Debugging the Solr code, I found out that the original exception comes from the
> IndexWriter construction inside AnalyzingInfixSuggester.java (more
> specifically org.apache.lucene.store.Lock:89). The exception is "Lock obtain
> timed out: NativeFSLock@$indexPath/write.lock" but seems to be hidden by the
> RuntimeException thrown by BlendedInfixLookupFactory ...
I’m pretty sure the default config will unlock on startup.
- Mark
http://about.me/markrmiller
On Feb 28, 2014, at 3:50 AM, Chen Lion wrote:
> Dear all,
> I have a problem I can't understand.
>
> I use Solr 4.6.1 with 2 nodes, one leader and one follower, and both have the
> write.lock file.
> I thought I could not create an index while the write.lock file exists,
> right? But I could. Why?
>
> Jiahui Chen
We do not need to define schema/config because the conf folder is not inside each
collection.
1/ Indexing works OK but write.lock is not removed (we use
"/update?commit=true..")
2/ Shutdown Tomcat: I saw write.lock is gone
3/ Restart Tomcat: indexed data was created at the instanceDir/data level, with
some warning messages. It seems that in solr.xml, dataDir is not defined?
Thanks very much, Lisheng
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Thursday, May 30, 2013 11:35 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed
: I recently upgraded solr from 3.6.1 to 4.3, it works well, but I noticed that
: after finishing
: indexing
:
: write.lock
:
: is NOT removed. Later if I index again it still works OK. Only after I
: shutdown Tomcat
: then write.lock is removed. This behavior caused some problem like I could
: not use ...
-Original Message-
From: bbarani [mailto:bbar...@gmail.com]
Sent: Thursday, May 30, 2013 9:45 AM
To: solr-user@lucene.apache.org
Subject: Re: solr 4.3: write.lock is not removed
How are you indexing the documents? Are you using an indexing program?
The post below discusses the same issue:
http://lucene.472066.n3.nabble.com/removing-write-lock-file-in-solr-after-indexing-td3699356.html
Hi,
I recently upgraded Solr from 3.6.1 to 4.3. It works well, but I noticed that
after finishing indexing, write.lock is NOT removed. Later if I index again it
still works OK. Only after I shut down Tomcat is write.lock removed. This
behavior caused some problems, like I could not use ...
Cool. I can use that setting while testing then set it back when I'm just
running Lucene. Many thanks folks!
Regards,
Tim
You can use 'none' for the lock type in solrconfig.xml.
You risk corruption if two IW's try to modify the index at once though.
- Mark
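In solrconfig.xml that would look something like the fragment below ('single'
is another option, which only guards against writers within the same JVM):

  <indexConfig>
    <!-- 'none' disables index locking entirely; use with care -->
    <lockType>none</lockType>
  </indexConfig>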
On Feb 1, 2013, at 6:56 PM, dm_tim wrote:
> Well that makes sense. The problem is that I am working in both Solr and
> Lucene directly. I have some indexes that work great in Solr, and now I want
> to do the same thing in Java using the Lucene libs, so I'm writing to the
> same index dir. I do my testing by creating an index in Solr, looking at it,
> and ...
On Fri, Feb 1, 2013 at 5:41 PM, dm_tim wrote:
> I've been using Solr 4.1.0 for a little while now and I just noticed that
> when I index any core I have the write.lock file doesn't go away until I
> stop the server where solr is running.
Sounds like it's working as it should.
Howdy,
I've been using Solr 4.1.0 for a little while now and I just noticed that
when I index any core I have the write.lock file doesn't go away until I
stop the server where solr is running. The data I'm indexing is fairly small
(16k rows in a db) so it shouldn't take much time.
I'm running Solr 3.4. The past 2 months I've been getting a lot of
write.lock errors. I switched to the "simple" lockType (and made it
clear the lock on restart), but my index is still locking up a few
times a week.
I can't seem to determine what is causing the locks -- does anyone
have any ideas?
Hi Erick,
I was able to resolve the issue with the 'write.lock' files.
Using container.remove("core1") or container.shutdown() takes care of
removing the 'write.lock' files.
-Shyam
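A minimal sketch of that teardown (4.x-era embedded API; the core name "core1"
is from this thread, the path is made up, and exact constructors vary a bit by
version):

  import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
  import org.apache.solr.core.CoreContainer;

  public class EmbeddedIndexer {
      public static void main(String[] args) throws Exception {
          // Load the container that owns the cores (path is illustrative).
          CoreContainer container = new CoreContainer("/path/to/solr/home");
          container.load();
          EmbeddedSolrServer server = new EmbeddedSolrServer(container, "core1");
          try {
              // ... server.add(doc); server.commit(); ...
          } finally {
              // Shutting the container down closes every core, which closes
              // its IndexWriter and releases write.lock.
              container.shutdown();
          }
      }
  }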
Here is how I got SolrJ to delete the write.lock file. I switched to the
CoreContainer's remove() method. So the new code is:
...
SolrCore curCore = container.remove("core1");
curCore.close();
Now my understanding of why it is working: based on the Solr source code, the
issue had to do with reference counting. container.getCore() increments the
core's reference count, so a later close() only decrements it again, while
container.remove() detaches the core so that close() really closes it.
SolrCore curCore = container.getCore("core1");
curCore.close();
I have also seen that the EmbeddedSolrServer process is not terminating after
completion of the indexing process; can this be a reason? But even after manual
termination of the process, the 'write.lock' file stays in the index directory.
On Mon, Jan 30, 2012 at 2:42 AM, Shyam Bhaskaran wrote:
> Hi,
>
> We are using Solr 4.0, and after indexing it is observed every time that the
> write.lock remains without getting cleared, and for the next indexing we have
> to delete the file to get the indexing process running.
>
> We use SolrServer for our indexing, and I do not see any methods to close or
> release ...
... docs = new ArrayList();
docs.add( doc1 );
solr.commit();
SolrCore curCore = container.getCore("core1");
curCore.close();
I thought for sure that by calling close() I would also be releasing all
associated resources, including the lock on the core -- that is, that
I would be getting rid of the write.lock file.
I am using ...
--- On Tue, 9/14/10, Bharat Jain wrote:
> From: Bharat Jain
> Subject: Re: org.apache.lucene.store.LockObtainFailedException: Lock obtain
> timed out : SingleInstanceLock: write.lock
> To: solr-user@lucene.apache.org
> happen. Can you guys please validate the issue?
> Thanks a lot in advance.
>
> SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
> out
> : SingleInstanceLock: write.lock
> at org.apache.lucene.store.Lock.obtain(Lock.java:85)
> at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:938)
> at ...
I guess we must ignore the .lock file if it is returned in the list
of files.
You can raise an issue and we can fix it.
--Noble
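Illustratively, the fix would just drop lock files from the list of index
files offered to slaves -- a sketch, not the actual patch:

  import java.util.ArrayList;
  import java.util.List;

  final class ReplicableFiles {
      // Return the index file names minus lock files (e.g. write.lock),
      // so they are never shipped to slaves during replication.
      static List<String> withoutLockFiles(List<String> indexFiles) {
          List<String> result = new ArrayList<String>();
          for (String name : indexFiles) {
              if (!name.endsWith(".lock")) {
                  result.add(name);
              }
          }
          return result;
      }
  }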
On Fri, May 15, 2009 at 12:38 AM, Bryan Talbot wrote:
> When using solr 1.4 replication, I see that the lucene write.lock file is
> being replicated to slaves. I'm importing data from a db every 5 minutes
> using cron to trigger a DIH delta-import. Replication polls every 60
> seconds and the master is configured to take a snapshot ...
> Sent: March 31, 2009 5:31:42 PM
> Subject: SingleInstanceLock: write.lock
>
> Hi,
>
> I am new to Solr and am having an issue with the following:
> SingleInstanceLock: write.lock. We have solr 1.3 running under tomcat
> 1.6.0_11. We have an index of users that are online at any given time
> (usually around 4000 users). The records from solr are deleted and
> repopulated at ...
Hi Mike,
Thanks a lot. Where would this lock information go, and also,
how do I set the lock timeout?
Kasi
-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 20, 2007 12:19 PM
To: solr-user@lucene.apache.org
Subject: Re: clearing solr write.lock
On 20-Dec-07, at 11:24 AM, Kasi Sankaralingam wrote:
I am running into a problem where previous residual lock files are left in the
solr data directory after a failed process. Can we programmatically/efficiently
remove this .lock file? Also, has anyone externalized the handling of lock
files (meaning keeping the lock file, for example, in a database)?
Any ...
... lucene to run w/ a large index ...
Bingo, I'm an idiot - or rather, I now know *why* I'm an idiot. :)
I'll give it a go.
Also, this is likely to be the cause of my write.lock problems - the
Too many files exception just occurred and the write.lock file gets
left around (should ...)
I've done a bit of poking on the server and ulimit doesn't seem to be
the problem:
e2wiki:~$ ulimit
unlimited
e2wiki:~$ cat /proc/sys/fs/file-max
170355
try: ulimit -n
ulimit on its own is something else. On my machine I get:
[EMAIL PROTECTED]:~$ ulimit
unlimited
[EMAIL PROTECTED]:~$ cat /
You should not need to upgrade to fix the write.lock and Too Many
Open Files problems. Try increasing ulimit or using a compound file
before upgrading.
We're quite a way off of real production; it's just internal use at
the moment (on the real product server, but we're a small ...)
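For the compound-file option, the solrconfig.xml setting of that era looked
roughly like this (it goes in the index settings section, e.g. <mainIndex>):

  <mainIndex>
    <!-- pack index files into a single .cfs file, using far fewer file handles -->
    <useCompoundFile>true</useCompoundFile>
  </mainIndex>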
> http://wiki.apache.org/lucene-java/LuceneFAQ#head-48921635adf2c968f7936dc07d51dfb40d638b82
> ulimit -n .
Yeah, I'm aware of ulimit; I'm just keen to identify what's causing
it to happen before starting to increase limits. Given the write.lock
errors as well, I'm particularly suspicious of it. That said, most likely
it happens whenever a search and a write are happening at the same time.
The other problem is that after some time we get a "Too Many Open Files"
error when autocommit fires.
Have you checked your ulimit settings?
http://wiki.apache.org/lucene-java/LuceneFAQ#head-48921635adf2c968f7936dc07d51dfb40d638b82
ulimit -n .
As Mike mentioned, you may also want to use 'useCompoundFile'.
On Sep 10, 2007, at 5:00 PM, Mike Klaas wrote:
On 10-Sep-07, at 1:50 PM, Adrian Sutton wrote:
We use DirectSolrConnection via JNI in a couple of client apps
that sometimes have 100s of thousands of new docs as fast as Solr
will have them. It would crash relentlessly if I didn't force all
calls to update or query to be on the same thread ...
On 9/10/07, Adrian Sutton <[EMAIL PROTECTED]> wrote:
> Can Solr as a web app handle multiple updates at
> once or does it synchronize to avoid it?
Yep... things aren't synchronized at the top level and are designed to
be thread-safe.
-Yonik
We use DirectSolrConnection via JNI in a couple of client apps that
sometimes have 100s of thousands of new docs as fast as Solr will
have them. It would crash relentlessly if I didn't force all calls
to update or query to be on the same thread using objc's
@synchronized and a message queue
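In Java, the equivalent of that workaround would be funneling all updates
through a single-threaded executor -- an illustrative sketch (how you invoke
DirectSolrConnection inside the task is up to you; nothing here is quoted from
the thread):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  class SerializedUpdates {
      // One thread performs every update, so writers never race for write.lock.
      private final ExecutorService updateThread = Executors.newSingleThreadExecutor();

      void submitUpdate(Runnable update) {
          updateThread.submit(update);  // e.g. a task that calls the connection's update method
      }

      void shutdown() {
          updateThread.shutdown();
      }
  }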
On Sep 10, 2007, at 1:33 AM, Adrian Sutton wrote:
After a while we start getting exceptions thrown because of a
timeout in acquiring write.lock. It's quite possible that this
occurs whenever two updates are attempted at the same time - is
DirectSolrConnection intended to be thread safe?
The other problem is that after some time we get a "Too Many ...