Thanks Chris. So, assuming that we rebuild the index, delete the old data and
then execute a commit,
will the snap scripts take care of reconciling all the data? Internally, is
there a notion of an update timestamp
used to figure out which unique-id records have changed, and then synchronize
them by ex
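For context, a common pattern for this kind of delta sync is to keep a
watermark of the last sync time and select only records modified since then.
This is an assumption on my part about one way to do it, not a description of
what the snap scripts actually do internally (as I understand it, they
distribute whole index snapshots rather than diffing records). A minimal
sketch, with made-up file names:

```shell
#!/bin/sh
# Hypothetical watermark-based delta selection (file names are made up).
# records.txt holds "uniqueid last_modified_epoch", one record per line;
# last_sync holds the epoch of the previous successful sync.

cat > records.txt <<'EOF'
doc1 1000
doc2 2000
doc3 3000
EOF
echo 1500 > last_sync

# Select only the unique ids whose last_modified is newer than the watermark.
SINCE=$(cat last_sync)
awk -v since="$SINCE" '$2 > since { print $1 }' records.txt > changed_ids

cat changed_ids        # doc2 and doc3 would be re-indexed
date +%s > last_sync   # advance the watermark for the next run
```

The changed ids would then be re-fetched and re-posted to the indexer. Note
that a timestamp alone cannot detect deletions; records removed upstream still
have to be reconciled separately.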
On Thu, 2006-12-21 at 12:23 -0800, escher2k wrote:
> Hi,
> We currently use Lucene to index user data every couple of hours - the
> index is completely rebuilt,
> the old index is archived and the new one copied over to the directory.
> Example -
>
> /bin/cp ${LOG_FILE} ${CRON_ROOT}/index/hel
Thanks. The problem is that it is not easy to do an incremental update on the
data set.
In that case, I guess the index needs to be created in a different path and
we need to move
files around. However, since the documents are added over HTTP, how does one
even create
the index in a different path on
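On the different-path question, one approach is to build the new index in a
sibling directory and repoint a symlink at it once the build completes, so
searchers always see either the old index or the new one, never a half-copied
mix. This is a sketch under assumed paths, not a prescription:

```shell
#!/bin/sh
# Hypothetical directory layout; real paths would come from the cron setup.
INDEX_ROOT=./index_root
mkdir -p "$INDEX_ROOT/index.new"
echo segments > "$INDEX_ROOT/index.new/segments_1"  # stand-in for index files

# Build into index.new, then repoint the "current" symlink at it.
# Creating a temporary symlink and mv-ing it over keeps the switch a single
# rename, so readers never observe a missing or partial "current".
ln -s index.new "$INDEX_ROOT/current.tmp"
mv "$INDEX_ROOT/current.tmp" "$INDEX_ROOT/current"

readlink "$INDEX_ROOT/current"
```

This only covers the filesystem swap. Since the documents go in over HTTP,
the process doing the indexing would also have to be pointed at the new
directory (for example, a second indexing instance writing there), which is an
assumption about the setup rather than something the snippet solves.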
On 12/21/06, escher2k <[EMAIL PROTECTED]> wrote:
Hi,
We currently use Lucene to index user data every couple of hours - the
index is completely rebuilt,
the old index is archived and the new one copied over to the directory.
Example -
/bin/cp ${LOG_FILE} ${CRON_ROOT}/index/help/
/bin/rm -r