On Mon, Jan 3, 2011 at 5:22 AM, Markus Jelsma <markus.jel...@openindex.io> wrote:

> I'm seeing this issue as well on 1.4.1, where all slaves use simple as the
> locking mechanism. For some unknown reason, slaves either don't remove old
> index.DATE directories or old index files in the index directory. Only the
> second slave has the correct index size.
>
> master
> 4.8G    index
> 4.8G    total
>
> slave 1
> 9.7G    index
> 4.0K    index.20110103022003
> 4.0K    index.20110103125106
> 4.0K    replication.properties
> 9.7G    total
>
> slave 2
> 4.8G    index
> 4.0K    index.20110103115106
> 4.0K    replication.properties
> 4.8G    total
>
> slave 3
> 4.9G    index
> 9.7G    index.20101230101714
> 4.0K    index.properties
> 4.0K    replication.properties
> 15G     total
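>
> Slave 3 does have an index.properties; as far as I understand the
> replication code, it names the index directory actually in use (an
> assumption on my part, I haven't verified it). From the data dir,
> something like this should show which directories are stale:
>
> # index.properties holds the name of the live index directory, e.g.
> # index=index.20101230101714
> cat index.properties
> # any index.DATE directory not named there is presumably a leftover
> ls -d index*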
>
> I've read and searched and tried everything I could find, but I cannot
> track down the cause of the problem, nor do I know how to reproduce it. It
> smells like it has something to do with restarting the servlet container.
>
> Anyone with clues?
>
>
> On Sunday 19 December 2010 02:01:40 Lance Norskog wrote:
> > This could be a quirk of the native locking feature. What's the file
> > system? Can you fsck it?
> >
> > If this error keeps happening, please file an issue; it should not happen.
> > Include the text above and also your solrconfigs if you can.
> >
> > One thing you could try is to change from the native locking policy to
> > the simple locking policy - but only on the child.
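> >
> > In the slave's solrconfig.xml that is the lockType setting; a minimal
> > sketch (merge it into your existing mainIndex section rather than
> > pasting it verbatim):
> >
> > <!-- slave solrconfig.xml: switch from native to simple locking -->
> > <mainIndex>
> >   <lockType>simple</lockType>
> > </mainIndex>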
> >
> > On Sat, Dec 18, 2010 at 4:44 PM, feedly team <feedly...@gmail.com> wrote:
> > > I have set up index replication (triggered on optimize). The problem I
> > > am having is the old index files are not being deleted on the slave.
> > > After each replication, I can see the old files still hanging around
> > > as well as the files that have just been pulled. This causes the data
> > > directory size to increase by the index size every replication until
> > > the disk fills up.
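> > >
> > > The handler config is essentially the stock example, something along
> > > these lines (from memory; masterUrl and pollInterval are illustrative):
> > >
> > > <!-- master solrconfig.xml -->
> > > <requestHandler name="/replication" class="solr.ReplicationHandler">
> > >   <lst name="master">
> > >     <!-- replicate only after an optimize finishes -->
> > >     <str name="replicateAfter">optimize</str>
> > >   </lst>
> > > </requestHandler>
> > >
> > > <!-- slave solrconfig.xml -->
> > > <requestHandler name="/replication" class="solr.ReplicationHandler">
> > >   <lst name="slave">
> > >     <str name="masterUrl">http://master:8983/solr/replication</str>
> > >     <str name="pollInterval">00:10:00</str>
> > >   </lst>
> > > </requestHandler>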
> > >
> > > Checking the logs, I see the following error:
> > >
> > > SEVERE: SnapPull failed
> > > org.apache.solr.common.SolrException: Index fetch failed :
> > >         at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:329)
> > >         at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:265)
> > >         at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> > >         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> > >         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > >         at java.lang.Thread.run(Thread.java:619)
> > > Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/solrhome/data/index/lucene-cdaa80c0fefe1a7dfc7aab89298c614c-write.lock
> > >         at org.apache.lucene.store.Lock.obtain(Lock.java:84)
> > >         at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1065)
> > >         at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:954)
> > >         at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:192)
> > >         at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:99)
> > >         at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:173)
> > >         at org.apache.solr.update.DirectUpdateHandler2.forceOpenWriter(DirectUpdateHandler2.java:376)
> > >         at org.apache.solr.handler.SnapPuller.doCommit(SnapPuller.java:471)
> > >         at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:319)
> > >         ... 11 more
> > >
> > > lsof reveals that the file is still opened from the java process.
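> > >
> > > For example (path taken from the stack trace above; output omitted):
> > >
> > > # check who holds the zero-byte write.lock; it is the Solr JVM itself
> > > lsof /var/solrhome/data/index/lucene-cdaa80c0fefe1a7dfc7aab89298c614c-write.lock
> > > # I would only remove the lock file after stopping the servlet
> > > # container, never while the JVM still holds it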
> > >
> > > I am running 4.0 rev 993367 with the SOLR-1316 patch. Otherwise, the
> > > setup is pretty vanilla. The OS is Linux, the indexes are on local
> > > directories, write permissions look OK, and there is nothing unusual in
> > > the config (default deletion policy, etc.). Contents of the index data dir:
> > >
> > > master:
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 14 01:06 _1lg.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 14 01:07 _1lg.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 14 01:07 _1lg.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 474M Dec 14 01:12 _1lg.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 14 01:12 _1lg.tii
> > > -rw-rw-r-- 1 feeddo feeddo 144M Dec 14 01:12 _1lg.prx
> > > -rw-rw-r-- 1 feeddo feeddo 277M Dec 14 01:12 _1lg.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 14 01:12 segments_1ji
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 14 01:12 _1lg.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 01:11 _24e.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 01:12 _24e.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 01:12 _24e.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 483M Dec 18 01:23 _24e.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 01:23 _24e.tii
> > > -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 01:23 _24e.prx
> > > -rw-rw-r-- 1 feeddo feeddo 283M Dec 18 01:23 _24e.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 18 01:24 segments_1xz
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 18 01:24 _24e.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 13:15 _25z.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 13:16 _25z.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 13:16 _25z.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 484M Dec 18 13:35 _25z.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 13:35 _25z.tii
> > > -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 13:35 _25z.prx
> > > -rw-rw-r-- 1 feeddo feeddo 284M Dec 18 13:35 _25z.frq
> > > -rw-rw-r-- 1 feeddo feeddo   20 Dec 18 13:35 segments.gen
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 18 13:35 segments_1y1
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 18 13:35 _25z.nrm
> > >
> > > slave:
> > > -rw-rw-r-- 1 feeddo feeddo   20 Dec 13 17:54 segments.gen
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 15 01:07 _1mk.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 15 01:08 _1mk.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 15 01:08 _1mk.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 476M Dec 15 01:18 _1mk.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 15 01:18 _1mk.tii
> > > -rw-rw-r-- 1 feeddo feeddo 144M Dec 15 01:18 _1mk.prx
> > > -rw-rw-r-- 1 feeddo feeddo 278M Dec 15 01:18 _1mk.frq
> > > -rw-rw-r-- 1 feeddo feeddo  312 Dec 15 01:18 segments_1kj
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 15 01:18 _1mk.nrm
> > > -rw-rw-r-- 1 feeddo feeddo    0 Dec 15 01:19 lucene-cdaa80c0fefe1a7dfc7aab89298c614c-write.lock
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 15 13:14 _1qu.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 15 13:16 _1qu.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 15 13:16 _1qu.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 477M Dec 15 13:28 _1qu.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 15 13:28 _1qu.tii
> > > -rw-rw-r-- 1 feeddo feeddo 144M Dec 15 13:28 _1qu.prx
> > > -rw-rw-r-- 1 feeddo feeddo 278M Dec 15 13:28 _1qu.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 15 13:28 segments_1oe
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 15 13:28 _1qu.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 17 01:12 _222.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 17 01:15 _222.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 17 01:15 _222.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 481M Dec 17 01:36 _222.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 17 01:36 _222.tii
> > > -rw-rw-r-- 1 feeddo feeddo 145M Dec 17 01:36 _222.prx
> > > -rw-rw-r-- 1 feeddo feeddo 281M Dec 17 01:36 _222.frq
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 17 01:36 _222.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 17 01:36 segments_1xv
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 17 13:10 _233.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 17 13:13 _233.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 17 13:13 _233.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 482M Dec 17 13:31 _233.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 17 13:31 _233.tii
> > > -rw-rw-r-- 1 feeddo feeddo 146M Dec 17 13:31 _233.prx
> > > -rw-rw-r-- 1 feeddo feeddo 282M Dec 17 13:31 _233.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 17 13:31 segments_1xx
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 17 13:31 _233.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 01:11 _24e.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 01:12 _24e.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 01:12 _24e.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 483M Dec 18 01:23 _24e.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 01:23 _24e.tii
> > > -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 01:23 _24e.prx
> > > -rw-rw-r-- 1 feeddo feeddo 283M Dec 18 01:23 _24e.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 18 01:24 segments_1xz
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 18 01:24 _24e.nrm
> > > -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 13:15 _25z.fnm
> > > -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 13:16 _25z.fdx
> > > -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 13:16 _25z.fdt
> > > -rw-rw-r-- 1 feeddo feeddo 484M Dec 18 13:35 _25z.tis
> > > -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 13:35 _25z.tii
> > > -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 13:35 _25z.prx
> > > -rw-rw-r-- 1 feeddo feeddo 284M Dec 18 13:35 _25z.frq
> > > -rw-rw-r-- 1 feeddo feeddo  311 Dec 18 13:35 segments_1y1
> > > -rw-rw-r-- 1 feeddo feeddo  23M Dec 18 13:35 _25z.nrm
> > >
> > >
> > > Any pointers on how to proceed? Thanks.
>
> --
> Markus Jelsma - CTO - Openindex
> http://www.linkedin.com/in/markus17
> 050-8536620 / 06-50258350
>
