: Thanks Chris. So, assuming that we rebuild the index, delete the old data and
: then execute a commit, will the snap scripts take care of reconciling all the
: data? Internally, is there an update timestamp notion used to figure out which
: unique id records have changed and then synchronize?
> ...even if someone else issues a commit while you are "rebuilding", your
> index will always look consistent.
>
> But as i said: the master/slave model will work perfectly for what you
> want as well -- and the snap* scripts will take care of loading it up on
> your slave.
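For anyone following along, a minimal sketch of the master/slave flow described
above, assuming the standard Solr collection distribution scripts are installed
under a hypothetical $SOLR_HOME/bin and that the master host, rsync port and
data directory have already been configured for them:

# Master: take a snapshot of the index after each rebuild/commit
# (snapshooter can also be hooked in as a postCommit listener in solrconfig.xml).
$SOLR_HOME/bin/snapshooter

# Slave, typically run from cron every few minutes:
$SOLR_HOME/bin/snappuller      # rsync the newest snapshot from the master
$SOLR_HOME/bin/snapinstaller   # install it and commit, so Solr opens a new searcher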
On Thu, 2006-12-21 at 12:23 -0800, escher2k wrote:
> Hi,
> We currently use Lucene to index user data every couple of hours - the
> index is completely rebuilt,
> the old index is archived and the new one copied over to the directory.
> Example -
>
> /bin/cp ${LOG_FILE} ${CRON_ROOT}/index/help/
: Thanks. The problem is, it is not easy to do an incremental update on the
: data set. In which case, I guess the index needs to be created in a different
: path and we need to move files around. However, since the documents are added
: over HTTP, how does one even create the index in a different path?
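One possible way to do that (a sketch only, not something from this thread):
run the rebuild against a second Solr instance with its own data directory and
only move the finished index into the live one afterwards. The port and file
names below are placeholders:

# Post the documents over HTTP to the rebuild instance instead of the live one
# (assumes a second Solr instance is already running on port 8984 with an empty index).
curl http://localhost:8984/solr/update --data-binary @docs.xml -H 'Content-type:text/xml'
curl http://localhost:8984/solr/update --data-binary '<commit/>' -H 'Content-type:text/xml'

# When the rebuild is finished, copy/rsync that instance's index directory under
# the live Solr (or let the snap* scripts distribute it) and commit there.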
> ...automatically. You will need to issue a <commit/> to solr
> to get it to read the new index (open a new searcher), and new caches
> will be associated with that new searcher.
>
> -Yonik
>
>
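For example, the commit can be sent to the update handler over HTTP; the URL
below assumes the default example port:

# Makes Solr open a new searcher over the new index files; the old caches are
# dropped and fresh ones are attached to the new searcher.
curl http://localhost:8983/solr/update --data-binary '<commit/>' -H 'Content-type:text/xml'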
On 12/21/06, escher2k <[EMAIL PROTECTED]> wrote:
Hi,
We currently use Lucene to index user data every couple of hours - the
index is completely rebuilt,
the old index is archived and the new one copied over to the directory.
Example -
/bin/cp ${LOG_FILE} ${CRON_ROOT}/index/help/
/bin/rm -r
...to speed up the retrieval, is there a way to invalidate some/all caches when
this is done?
Thanks.
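Putting the two answers together, a rough sketch of the swap-then-commit flow
(paths are placeholders modelled on the ${CRON_ROOT} example above; the commit
at the end is what actually invalidates the caches):

NEW_INDEX=/path/to/freshly/built/index          # placeholder
LIVE_INDEX=${CRON_ROOT}/index/help
ARCHIVE=${CRON_ROOT}/index/archive-$(date +%Y%m%d%H%M)

/bin/mv ${LIVE_INDEX} ${ARCHIVE}                # archive the old index
/bin/cp -r ${NEW_INDEX} ${LIVE_INDEX}           # copy the rebuilt index into place
# The copy alone does not touch the caches; the commit tells Solr to open a new
# searcher on the new files and discard the old caches.
curl http://localhost:8983/solr/update --data-binary '<commit/>' -H 'Content-type:text/xml'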