Hi all

New to Solr/Lucene. Our current search is done with Verity and we are
looking to move towards open-source products.

Our first application would have fewer than 500,000 documents indexed at the
outset. Additions/updates to the index would come in at 2,000-3,000 per
minute. We currently update our search indexes nightly, so anything more
frequent would be a plus.

We have 30 logical application server instances running on 5 physical
servers. We have a NAS device for sharing data, so I'm wondering if
leveraging that would make sense, as opposed to pushing index updates
around.

Anyway, the idea is 2-n search servers all pointing at the same index data on
the NAS, with no updates allowed on those servers, and a single update server
where all updates would occur. Would it make sense to rely on an autocommit
setting on the update server to keep the indexes current?
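
I was picturing something like this in the update server's solrconfig.xml
(the numbers are placeholders I made up, not recommendations):

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <!-- commit after this many queued docs... -->
      <maxDocs>10000</maxDocs>
      <!-- ...or after this many milliseconds, whichever comes first -->
      <maxTime>60000</maxTime>
    </autoCommit>
  </updateHandler>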

With this architecture, what is the impact on searches while the update
server is committing?

Also, I have a single point of failure at the update server. I can offset
that by using a queuing mechanism in the app layer to guarantee that the
updates get into the index. Any other ways to avoid this single point of
failure?
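
The queuing piece I have in mind is nothing fancy, roughly along these lines
(class and method names are made up, and the actual POST to the update server
is left out):

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  public class UpdateForwarder implements Runnable {
      private final BlockingQueue<String> pending = new LinkedBlockingQueue<String>();

      // Called by the app whenever a document changes; the update message
      // just gets queued so the caller never blocks on the update server.
      public void enqueue(String updateXml) throws InterruptedException {
          pending.put(updateXml);
      }

      // A single worker thread drains the queue and forwards each update to
      // the one update server, re-queuing on failure so nothing is lost.
      public void run() {
          while (!Thread.currentThread().isInterrupted()) {
              String updateXml = null;
              try {
                  updateXml = pending.take();
                  postToUpdateServer(updateXml);
              } catch (InterruptedException ie) {
                  Thread.currentThread().interrupt();
              } catch (Exception e) {
                  if (updateXml != null) {
                      // retry later instead of dropping it (a real version would back off)
                      pending.offer(updateXml);
                  }
              }
          }
      }

      private void postToUpdateServer(String updateXml) {
          // placeholder: HTTP POST of the update XML to the update server's /update handler
      }
  }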

Or, am I just batty and need to go back to the drawing board?

Thanks all

Todd
