Re: why is Solr so much slower than Lucene?

2010-10-21 Thread kafka0102
I found the cause of the problem: it's the DocSetCollector. My filter query result's 
size is about 300, so DocSetCollector.getDocSet() returns an OpenBitSet, and the 
300 OpenBitSet.fastSet(doc) ops are too slow. So I used SolrIndexSearcher's 
TopFieldDocs search(Query query, Filter filter, int n, Sort sort) instead, 
and performance is back to normal.




At 2010-10-20 19:21:27,kafka0102  wrote:

>For Solr's SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd), I find 
>it's too slow. My index's size is about 500M, and the record count is 3984274. My 
>query is like q=xx&fq=fid:1&fq=atm:[int_time1 TO int_time2].
>fid's type has precisionStep="0" omitNorms="true" positionIncrementGap="0", and 
>atm's type has precisionStep="8" omitNorms="true" positionIncrementGap="0".
>For the test, I disabled Solr's caches in the config and used plain Lucene code 
>like the following:
>
> private void test2(final ResponseBuilder rb) {
>   try {
>     final SolrQueryRequest req = rb.req;
>     final SolrIndexSearcher searcher = req.getSearcher();
>     final SolrIndexSearcher.QueryCommand cmd = rb.getQueryCommand();
>     final ExecuteTimeStatics timeStatics = ExecuteTimeStatics.getExecuteTimeStatics();
>     final ExecuteTimeUnit staticUnit = timeStatics.addExecuteTimeUnit("test2");
>     staticUnit.start();
>     // fold the main query and all fq clauses into one BooleanQuery,
>     // bypassing Solr's DocSet collection entirely
>     final List<Query> filters = cmd.getFilterList();
>     final BooleanQuery booleanQuery = new BooleanQuery();
>     for (final Query q : filters) {
>       booleanQuery.add(new BooleanClause(q, Occur.MUST));
>     }
>     booleanQuery.add(new BooleanClause(cmd.getQuery(), Occur.MUST));
>     logger.info("q:" + filters);
>     final Sort sort = cmd.getSort();
>     // plain Lucene top-20 search, no filter
>     final TopFieldDocs docs = searcher.search(booleanQuery, null, 20, sort);
>     final StringBuilder sb = new StringBuilder();
>     for (final ScoreDoc doc : docs.scoreDocs) {
>       sb.append(doc.doc).append(',');
>     }
>     logger.info("hits:" + docs.totalHits + ",result:" + sb);
>     staticUnit.end();
>   } catch (final Exception e) {
>     throw new RuntimeException(e);
>   }
> }
>
>For the test, I first called the code above and then Solr's search(...). The 
>result: Lucene takes about 20ms, Solr about 70ms.
>I'm confused.
>I also wrote another version using a filter (below), but the range query's result 
>count is not correct (a possible cause is sketched after the code).
>Does anybody know the reasons?
>
> private void test1(final ResponseBuilder rb) {
>   try {
>     final SolrQueryRequest req = rb.req;
>     final SolrIndexSearcher searcher = req.getSearcher();
>     final SolrIndexSearcher.QueryCommand cmd = rb.getQueryCommand();
>     final ExecuteTimeStatics timeStatics = ExecuteTimeStatics.getExecuteTimeStatics();
>     final ExecuteTimeUnit staticUnit = timeStatics.addExecuteTimeUnit("test1");
>     staticUnit.start();
>     // turn each fq into a clause of a BooleanFilter
>     // (setFilter is a private helper, not shown here)
>     final List<Query> filters = cmd.getFilterList();
>     final BooleanFilter booleanFilter = new BooleanFilter();
>     for (final Query q : filters) {
>       setFilter(booleanFilter, q);
>     }
>     final Sort sort = cmd.getSort();
>     // same top-20 search, but with the fqs applied as a Lucene Filter
>     final TopFieldDocs docs = searcher.search(cmd.getQuery(), booleanFilter, 20, sort);
>     logger.info("hits:" + docs.totalHits);
>     staticUnit.end();
>   } catch (final Exception e) {
>     throw new RuntimeException(e);
>   }
> }
>
>
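One possible cause of the wrong range counts in test1, since setFilter isn't shown: 
if it rebuilds each fq as a term-based filter by hand, the trie-encoded terms of the 
numeric atm field won't match. A sketch that instead wraps each parsed Query unchanged:

import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanFilter;
import org.apache.lucene.search.FilterClause;
import org.apache.lucene.search.QueryWrapperFilter;

// Hypothetical replacement for the setFilter(...) loop: wrapping the
// parsed Query (e.g. a NumericRangeQuery) in a QueryWrapperFilter
// preserves its trie semantics instead of re-deriving a term filter.
final BooleanFilter booleanFilter = new BooleanFilter();
for (final Query q : cmd.getFilterList()) {
  booleanFilter.add(new FilterClause(new QueryWrapperFilter(q), Occur.MUST));
}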


Re: why is Solr so much slower than Lucene?

2010-10-22 Thread kafka0102

Thanks a lot.
I got it.

On 2010-10-21 22:36, Yonik Seeley wrote:

2010/10/21 kafka0102:

I found the cause of the problem: it's the DocSetCollector. My filter query result's 
size is about 300, so DocSetCollector.getDocSet() returns an OpenBitSet, and the 
300 OpenBitSet.fastSet(doc) ops are too slow.


As I said in my other response to you, that's a perfect reason why you
want Solr to cache that for you (unless the filter will be different
each time).

-Yonik
http://www.lucidimagination.com
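The cache in question is Solr's filterCache, configured in solrconfig.xml; a typical 
entry (the sizes are illustrative) looks like this:

<!-- solrconfig.xml: cached fq results are stored as DocSets, so a
     repeated filter skips the per-document collection step entirely -->
<filterCache
    class="solr.LRUCache"
    size="512"
    initialSize="512"
    autowarmCount="256"/>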





problem of Solr replication's speed

2010-10-31 Thread kafka0102
It takes about one hour to replicate a 6G index for Solr in my environment, but my 
network can transfer files at about 10-20MB/s using scp. So Solr's HTTP replication 
seems too slow. Is this normal, or am I doing something wrong?


Re: Re: problem of Solr replication's speed

2010-11-01 Thread kafka0102
I hacked SnapPuller to log the cost, and the log looks like this:
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 980
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 5
[2010-11-01 17:21:21][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979


It's saying that about every third read of 1MB costs roughly 1000ms, while the 
others take a few ms. I use Jetty as the server and embed Solr in my app. I'm 
confused. What have I done wrong?
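The instrumentation was roughly the following (a sketch, not SnapPuller's actual 
code; the stream handling and the logging call are stand-ins):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Time one packet read of the replication stream and log its cost,
// mirroring the "readFully<size> cost <ms>" lines above.
static void timedRead(final InputStream in, final int packetSize) throws IOException {
  final DataInputStream dis = new DataInputStream(in);
  final byte[] buf = new byte[packetSize]; // ReplicationHandler.PACKET_SZ is 1MB by default
  final long start = System.currentTimeMillis();
  dis.readFully(buf); // blocks until a full packet has arrived
  final long cost = System.currentTimeMillis() - start;
  System.out.println("readFully" + packetSize + " cost " + cost);
}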


At 2010-11-01 10:12:38,"Lance Norskog"  wrote:

>If you are copying from an indexer while you are indexing new content,
>this would cause contention for the disk head. Does indexing slow down
>during this period?
>
>Lance
>
>2010/10/31 Peter Karich:
>>  we have an identical-sized index and it takes ~5 minutes
>>
>>
>>> It takes about one hour to replicate a 6G index for Solr in my environment,
>>> but my network can transfer files at about 10-20MB/s using scp. So Solr's
>>> HTTP replication seems too slow. Is this normal, or am I doing something wrong?
>>>
>>
>>
>
>
>
>-- 
>Lance Norskog
>goks...@gmail.com


Re: Re: Re: problem of Solr replication's speed

2010-11-01 Thread kafka0102
I suspected my app had some sleeping op every 1s, so 
I changed ReplicationHandler.PACKET_SZ to 1024 * 1024 * 10; // 10MB

and the log result looks like this:
[2010-11-01 17:49:29][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3184
[2010-11-01 17:49:32][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3426
[2010-11-01 17:49:36][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3359
[2010-11-01 17:49:39][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3166
[2010-11-01 17:49:42][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3513
[2010-11-01 17:49:46][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3140
[2010-11-01 17:49:50][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3471

That means it's still as slow as before. What's wrong with my environment?

At 2010-11-01 17:30:32,kafka0102  wrote:
I hacked SnapPuller to log the cost, and the log looks like this:
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 980
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
[2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 5
[2010-11-01 17:21:21][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979


It's saying that about every third read of 1MB costs roughly 1000ms, while the 
others take a few ms. I use Jetty as the server and embed Solr in my app. I'm 
confused. What have I done wrong?


At 2010-11-01 10:12:38,"Lance Norskog"  wrote:

>If you are copying from an indexer while you are indexing new content,
>this would cause contention for the disk head. Does indexing slow down
>during this period?
>
>Lance
>
>2010/10/31 Peter Karich:
>>  we have an identical-sized index and it takes ~5 minutes
>>
>>
>>> It takes about one hour to replicate a 6G index for Solr in my environment,
>>> but my network can transfer files at about 10-20MB/s using scp. So Solr's
>>> HTTP replication seems too slow. Is this normal, or am I doing something wrong?
>>>
>>
>>
>
>
>
>-- 
>Lance Norskog
>goks...@gmail.com




Re: Re: Re: Re: problem of Solr replication's speed

2010-11-04 Thread kafka0102
After some torment, 
I found the reason for Solr replication's slow speed. It's not Solr's problem, it's 
Jetty's. I used to embed Jetty 7 in my app. But when I noticed that Solr's demo uses 
Jetty 6, I switched my app to Jetty 6 and was happy to see fast transfer speeds.
Actually, I also tried running Solr's demo under Jetty 7 with the default conf, and 
the replication speed was slow there too.

I don't know why the default Jetty 7 server is so slow. I want to find the reason; 
maybe I'll ask the Jetty mailing list or continue reading the code.
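For comparison, embedding Solr under Jetty 6 looks roughly like this (a sketch 
against the org.mortbay Jetty 6 API; the port and war path are illustrative):

import org.mortbay.jetty.Server;
import org.mortbay.jetty.webapp.WebAppContext;

public class EmbeddedSolr {
  public static void main(final String[] args) throws Exception {
    final Server server = new Server(8983);          // Jetty 6, as shipped with the Solr demo
    final WebAppContext solr = new WebAppContext();
    solr.setContextPath("/solr");
    solr.setWar("apache-solr-1.4.1.war");            // path to the Solr war, illustrative
    server.setHandler(solr);
    server.start();
    server.join();
  }
}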




At 2010-11-02 07:28:54,"Lance Norskog"  wrote:

>This is the time to replicate and open the new index, right? Opening a
>new index can take a lot of time. How many autowarmers and queries are
>there in the caches? Opening a new index re-runs all of the queries in
>all of the caches.
>
>2010/11/1 kafka0102 :
>> I suspected my app had some sleeping op every 1s, so
>> I changed ReplicationHandler.PACKET_SZ to 1024 * 1024 * 10; // 10MB
>>
>> and the log result looks like this:
>> [2010-11-01 17:49:29][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3184
>> [2010-11-01 17:49:32][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3426
>> [2010-11-01 17:49:36][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3359
>> [2010-11-01 17:49:39][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3166
>> [2010-11-01 17:49:42][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3513
>> [2010-11-01 17:49:46][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3140
>> [2010-11-01 17:49:50][INFO][pool-6-thread-1][SnapPuller.java(1038)]readFully10485760 cost 3471
>>
>> That means it's still as slow as before. What's wrong with my environment?
>>
>> At 2010-11-01 17:30:32,kafka0102  wrote:
>> I hacked SnapPuller to log the cost, and the log looks like this:
>> [2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979
>> [2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
>> [2010-11-01 17:21:19][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
>> [2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 980
>> [2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 4
>> [2010-11-01 17:21:20][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 5
>> [2010-11-01 17:21:21][INFO][pool-6-thread-1][SnapPuller.java(1037)]readFully1048576 cost 979
>>
>>
>> It's saying that about every third read of 1MB costs roughly 1000ms, while the
>> others take a few ms. I use Jetty as the server and embed Solr in my app. I'm
>> confused. What have I done wrong?
>>
>>
>> At 2010-11-01 10:12:38,"Lance Norskog"  wrote:
>>
>>>If you are copying from an indexer while you are indexing new content,
>>>this would cause contention for the disk head. Does indexing slow down
>>>during this period?
>>>
>>>Lance
>>>
>>>2010/10/31 Peter Karich:
>>>>  we have an identical-sized index and it takes ~5 minutes
>>>>
>>>>
>>>>> It takes about one hour to replicate a 6G index for Solr in my environment,
>>>>> but my network can transfer files at about 10-20MB/s using scp. So Solr's
>>>>> HTTP replication seems too slow. Is this normal, or am I doing something wrong?
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>--
>>>Lance Norskog
>>>goks...@gmail.com
>>
>>
>>
>
>
>
>-- 
>Lance Norskog
>goks...@gmail.com


Re: Re: Updating Solr index - DIH delta vs. task queues

2010-11-17 Thread kafka0102
Does anyone care about this?
I use a task queue for now.
I think DIH delta cannot handle changed data very well. For the source DB, it needs 
more than a last_index_time column: if a row is deleted, DIH delta cannot know about 
it, so it also needs a boolean column marking whether the row is deleted. However, 
that greatly changes the original table's schema. Following Sphinx's practice, you 
can instead create a new table with last_index_time and isdeleted columns to store 
changed rows, and have DIH delta fetch from that table. And I think the new table is 
essentially a task queue.
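A sketch of that changelog approach in DIH terms; deltaQuery, deltaImportQuery, and 
deletedPkQuery are standard DIH attributes, while the post/post_changes tables and 
their column names are made up for illustration:

<!-- data-config.xml: post_changes plays the task-queue role, recording
     both updates and deletes with a timestamp -->
<entity name="post" pk="id"
    query="SELECT id, title, body FROM post"
    deltaQuery="SELECT id FROM post_changes
                WHERE isdeleted = 0
                  AND changed_at &gt; '${dataimporter.last_index_time}'"
    deltaImportQuery="SELECT id, title, body FROM post
                      WHERE id = '${dataimporter.delta.id}'"
    deletedPkQuery="SELECT id FROM post_changes
                    WHERE isdeleted = 1
                      AND changed_at &gt; '${dataimporter.last_index_time}'"/>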





At 2010-11-05 01:52:20,"Ezequiel Calderara"  wrote:

>I'm in the same scenario, so this answer would be helpful too.
>I'm adding:
>
>3) Web Service - request a web service for all the new data that has been
>updated (can this be done?)
>On Thu, Nov 4, 2010 at 2:38 PM, Andy  wrote:
>
>> Hi,
>> I have data stored in a database that is being updated constantly. I need
>> to find a way to update the Solr index as data in the database is updated.
>> There seem to be two main schools of thought on this:
>> 1) DIH delta - query the database for all records that have a timestamp
>> later than the last_index_time, and import those records for indexing to Solr.
>> 2) Task queue - every time a record is updated in the database, throw a
>> task on a queue to index that record to Solr.
>> Just want to know what the pros and cons of each approach are and what your
>> experience is. For someone starting new, what'd be your recommendation?
>> Thanks,
>> Andy
>>
>>
>>
>
>
>
>
>-- 
>__
>Ezequiel.
>
>Http://www.ironicnet.com


how about another SolrIndexSearcher.numDocs method?

2010-11-18 Thread kafka0102
In my app, I want to get the number of matching docs for some queries. I see 
SolrIndexSearcher has two methods:
public int numDocs(Query a, DocSet b)
public int numDocs(Query a, Query b)

But these don't fit my case. From the search params I get q and fq, and q's results 
are not in the filterCache, but the methods above both use the filterCache. So I 
think a method like:
public int numDocs(Query q, List<Query> fqs)  (q not using the filterCache, fqs 
using it)
would be fine.
And I cannot extend SolrIndexSearcher because of SolrCore. What should I do to 
solve the problem?
Thanks.



Re: how about another SolrIndexSearcher.numDocs method?

2010-11-19 Thread kafka0102
The numDocs methods seem to exist just for the filterCache. So I just need to use 
search(QueryResult qr, QueryCommand cmd) with QueryCommand.len=0? I will 
try it.
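A minimal sketch of that idea, assuming the Solr 1.4 QueryCommand/QueryResult API 
used elsewhere in this thread; the helper name countDocs is made up:

import java.util.List;
import org.apache.lucene.search.Query;
import org.apache.solr.search.SolrIndexSearcher;

// Count matches for q restricted by fqs without caching q itself:
// with len=0 Solr computes only the total hit count, while the fq
// entries still go through the filterCache as usual.
static int countDocs(final SolrIndexSearcher searcher, final Query q,
    final List<Query> fqs) throws Exception {
  final SolrIndexSearcher.QueryCommand cmd = new SolrIndexSearcher.QueryCommand();
  cmd.setQuery(q);
  cmd.setFilterList(fqs);
  cmd.setLen(0); // no stored results needed, just the count
  final SolrIndexSearcher.QueryResult qr = new SolrIndexSearcher.QueryResult();
  searcher.search(qr, cmd);
  return qr.getDocList().matches(); // total number of matching docs
}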

At 2010-11-19 15:49:31,kafka0102  wrote:
In my app, I want to get the number of matching docs for some queries. I see 
SolrIndexSearcher has two methods:
public int numDocs(Query a, DocSet b)
public int numDocs(Query a, Query b)

But these don't fit my case. From the search params I get q and fq, and q's results 
are not in the filterCache, but the methods above both use the filterCache. So I 
think a method like:
public int numDocs(Query q, List<Query> fqs)  (q not using the filterCache, fqs 
using it)
would be fine.
And I cannot extend SolrIndexSearcher because of SolrCore. What should I do to 
solve the problem?
Thanks.





SnapPuller error : Unable to move index file

2010-11-22 Thread kafka0102
My replication got an error like:
Unable to move index file from:
/home/data/tuba/search-index/eshequn.post.db_post/index.20101122034500/_21.frq
to:
/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000/_21.frq

I looked at the log and found that the last slave replication commit before the 
error is:
[2010-11-22 15:10:18][INFO][pool-6-thread-1][SolrDeletionPolicy.java(114)]SolrDeletionPolicy.onInit: commits:num=4

commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_3,version=1290358965331,generation=3,filenames=[_21.fdt,
 _21.frq, _21.prx, _21.tii, _21.nrm, _21.fdx, _21.tis, segments_3, _21.fnm]

commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_kq,version=1290358966074,generation=746,filenames=[_21.frq,
 _21.prx, _q8.frq, _21.tii, _q8.prx, _q8.tii, _q8.fdt, _21.nrm, _q8.fnm, 
_21.tis, _21.fdt, _q8.nrm, _q8.fdx, segments_kq, _q8.tis, _21.fdx, _21_1r.del, 
_21.fnm]

commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_ky,version=1290358966082,generation=754,filenames=[_21.frq,
 _qg.fnm, _qe.tis, _21.tii, _qe.nrm, _qg.nrm, _qg.fdt, _21_1u.del, _qd.tii, 
_qd.nrm, _qg.tii, _21.tis, _21.fdt, _qe.fdx, _qe.prx, _qf.tii, _21.fdx, 
_qf.nrm, segments_ky, _qf.fdt, _qe.fdt, _qd.fdt, _qf.tis, _21.prx, _qd_2.del, 
_qd.fnm, _qd.fdx, _qf.fdx, _qe.frq, _qd.prx, _21.nrm, _qd.frq, _qg.prx, 
_qg.tis, _qf.frq, _qd.tis, _qf.prx, _qe.tii, _qf.fnm, _qg.fdx, _qe.fnm, 
_qg.frq, _21.fnm]

commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_l3,version=1290358966087,generation=759,filenames=[_21.frq,
 _21.prx, _21.tii, _qn.fnm, _qn.fdt, _21_1u.del, _qn.fdx, _21.nrm, _qn.nrm, 
_qn.frq, _21.tis, _qn.prx, _21.fdt, segments_l3, _qn.tis, _qn.tii, _21.fdx, 
_21.fnm]

When the error happened, the dir index.20101122031000 had already been deleted. 
Does SolrDeletionPolicy delete the whole index dir, not just files? The problem has 
happened several times. Does anyone know the reason?



Re: Re: SnapPuller error : Unable to move index file

2010-11-22 Thread kafka0102
Sorry for my unclear question.
My Solr version is 1.4.1, and I may have hit a Solr bug.
In my case, my slave's live index directory is index.20101122031000. It was 
generated at 2010-11-22 03:10:00 for some reason (it's not important).
At 2010-11-22 15:10:00, the slave ran a replication. I found this function in 
SnapPuller:

  private File createTempindexDir(final SolrCore core) {
    final String tmpIdxDirName = "index."
        + new SimpleDateFormat(SnapShooter.DATE_FMT).format(new Date());
    final File tmpIdxDir = new File(core.getDataDir(), tmpIdxDirName);
    tmpIdxDir.mkdirs();
    return tmpIdxDir;
  }

and SnapShooter.DATE_FMT = "yyyyMMddhhmmss" - note the 12-hour hh.
So in this replication, tmpIndexDir and indexDir are both "index.20101122031000", 
and at the end of the replication delTree(tmpIndexDir) deletes the live index dir.

So SnapShooter.DATE_FMT = "yyyyMMddHHmmss" should be fine.
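To make the hh/HH collision concrete, a quick illustration:

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class DateFmtDemo {
  public static void main(final String[] args) {
    final Calendar cal = Calendar.getInstance();
    cal.set(2010, Calendar.NOVEMBER, 22, 15, 10, 0); // 3:10 PM

    // hh is the 12-hour clock: 15:10:00 formats as "...031000",
    // colliding with the directory created at 3:10 AM.
    System.out.println(new SimpleDateFormat("yyyyMMddhhmmss").format(cal.getTime())); // 20101122031000

    // HH is the 24-hour clock and keeps the names unique.
    System.out.println(new SimpleDateFormat("yyyyMMddHHmmss").format(cal.getTime())); // 20101122151000
  }
}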






At 2010-11-22 21:13:41,"Erick Erickson"  wrote:

>what op system are you on? what version of Solr? what filesystem?
>
>It's really hard to help without more information, you might want to review:
>http://wiki.apache.org/solr/UsingMailingLists
>
>Best
>Erick
>
>2010/11/22 kafka0102 
>
>> My replication got an error like:
>> Unable to move index file from:
>> /home/data/tuba/search-index/eshequn.post.db_post/index.20101122034500/_21.frq
>> to:
>> /home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000/_21.frq
>>
>> I looked at log and found the last slave replication commit before the
>> error is :
>> [2010-11-22
>> 15:10:18][INFO][pool-6-thread-1][SolrDeletionPolicy.java(114)]SolrDeletionPolicy.onInit:
>> commits:num=4
>>
>>  
>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_3,version=1290358965331,generation=3,filenames=[_21.fdt,
>> _21.frq, _21.prx, _21.tii, _21.nrm, _21.fdx, _21.tis, segments_3, _21.fnm]
>>
>>  
>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_kq,version=1290358966074,generation=746,filenames=[_21.frq,
>> _21.prx, _q8.frq, _21.tii, _q8.prx, _q8.tii, _q8.fdt, _21.nrm, _q8.fnm,
>> _21.tis, _21.fdt, _q8.nrm, _q8.fdx, segments_kq, _q8.tis, _21.fdx,
>> _21_1r.del, _21.fnm]
>>
>>  
>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_ky,version=1290358966082,generation=754,filenames=[_21.frq,
>> _qg.fnm, _qe.tis, _21.tii, _qe.nrm, _qg.nrm, _qg.fdt, _21_1u.del, _qd.tii,
>> _qd.nrm, _qg.tii, _21.tis, _21.fdt, _qe.fdx, _qe.prx, _qf.tii, _21.fdx,
>> _qf.nrm, segments_ky, _qf.fdt, _qe.fdt, _qd.fdt, _qf.tis, _21.prx,
>> _qd_2.del, _qd.fnm, _qd.fdx, _qf.fdx, _qe.frq, _qd.prx, _21.nrm, _qd.frq,
>> _qg.prx, _qg.tis, _qf.frq, _qd.tis, _qf.prx, _qe.tii, _qf.fnm, _qg.fdx,
>> _qe.fnm, _qg.frq, _21.fnm]
>>
>>  
>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_l3,version=1290358966087,generation=759,filenames=[_21.frq,
>> _21.prx, _21.tii, _qn.fnm, _qn.fdt, _21_1u.del, _qn.fdx, _21.nrm, _qn.nrm,
>> _qn.frq, _21.tis, _qn.prx, _21.fdt, segments_l3, _qn.tis, _qn.tii, _21.fdx,
>> _21.fnm]
>>
>> When the error happened, the dir index.20101122031000 had already been deleted.
>> Does SolrDeletionPolicy delete the whole index dir, not just files? The problem
>> has happened several times. Does anyone know the reason?
>>
>>


Re: Re: Re: SnapPuller error : Unable to move index file

2010-11-22 Thread kafka0102
Does anyone care about the bug?




At 2010-11-22 22:28:39,kafka0102  wrote:

>Sorry for my unclear question.
>My Solr version is 1.4.1, and I may have hit a Solr bug.
>In my case, my slave's live index directory is index.20101122031000. It was
>generated at 2010-11-22 03:10:00 for some reason (it's not important).
>At 2010-11-22 15:10:00, the slave ran a replication. I found this function in
>SnapPuller:
>
>  private File createTempindexDir(final SolrCore core) {
>    final String tmpIdxDirName = "index."
>        + new SimpleDateFormat(SnapShooter.DATE_FMT).format(new Date());
>    final File tmpIdxDir = new File(core.getDataDir(), tmpIdxDirName);
>    tmpIdxDir.mkdirs();
>    return tmpIdxDir;
>  }
>
>and SnapShooter.DATE_FMT = "yyyyMMddhhmmss" - note the 12-hour hh.
>So in this replication, tmpIndexDir and indexDir are both "index.20101122031000",
>and at the end of the replication delTree(tmpIndexDir) deletes the live index dir.
>
>So SnapShooter.DATE_FMT = "yyyyMMddHHmmss" should be fine.
>
>
>
>
>
>
>At 2010-11-22 21:13:41,"Erick Erickson"  wrote:
>
>>what op system are you on? what version of Solr? what filesystem?
>>
>>It's really hard to help without more information, you might want to review:
>>http://wiki.apache.org/solr/UsingMailingLists
>>
>>Best
>>Erick
>>
>>2010/11/22 kafka0102 
>>
>>> My replication got an error like:
>>> Unable to move index file from:
>>> /home/data/tuba/search-index/eshequn.post.db_post/index.20101122034500/_21.frq
>>> to:
>>> /home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000/_21.frq
>>>
>>> I looked at log and found the last slave replication commit before the
>>> error is :
>>> [2010-11-22
>>> 15:10:18][INFO][pool-6-thread-1][SolrDeletionPolicy.java(114)]SolrDeletionPolicy.onInit:
>>> commits:num=4
>>>
>>>  
>>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_3,version=1290358965331,generation=3,filenames=[_21.fdt,
>>> _21.frq, _21.prx, _21.tii, _21.nrm, _21.fdx, _21.tis, segments_3, _21.fnm]
>>>
>>>  
>>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_kq,version=1290358966074,generation=746,filenames=[_21.frq,
>>> _21.prx, _q8.frq, _21.tii, _q8.prx, _q8.tii, _q8.fdt, _21.nrm, _q8.fnm,
>>> _21.tis, _21.fdt, _q8.nrm, _q8.fdx, segments_kq, _q8.tis, _21.fdx,
>>> _21_1r.del, _21.fnm]
>>>
>>>  
>>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_ky,version=1290358966082,generation=754,filenames=[_21.frq,
>>> _qg.fnm, _qe.tis, _21.tii, _qe.nrm, _qg.nrm, _qg.fdt, _21_1u.del, _qd.tii,
>>> _qd.nrm, _qg.tii, _21.tis, _21.fdt, _qe.fdx, _qe.prx, _qf.tii, _21.fdx,
>>> _qf.nrm, segments_ky, _qf.fdt, _qe.fdt, _qd.fdt, _qf.tis, _21.prx,
>>> _qd_2.del, _qd.fnm, _qd.fdx, _qf.fdx, _qe.frq, _qd.prx, _21.nrm, _qd.frq,
>>> _qg.prx, _qg.tis, _qf.frq, _qd.tis, _qf.prx, _qe.tii, _qf.fnm, _qg.fdx,
>>> _qe.fnm, _qg.frq, _21.fnm]
>>>
>>>  
>>> commit{dir=/home/data/tuba/search-index/eshequn.post.db_post/index.20101122031000,segFN=segments_l3,version=1290358966087,generation=759,filenames=[_21.frq,
>>> _21.prx, _21.tii, _qn.fnm, _qn.fdt, _21_1u.del, _qn.fdx, _21.nrm, _qn.nrm,
>>> _qn.frq, _21.tis, _qn.prx, _21.fdt, segments_l3, _qn.tis, _qn.tii, _21.fdx,
>>> _21.fnm]
>>>
>>> When the error happened, the dir index.20101122031000 had already been deleted.
>>> Does SolrDeletionPolicy delete the whole index dir, not just files? The problem
>>> has happened several times. Does anyone know the reason?
>>>
>>>