We tried this architecture for our initial rollout of Solr/Lucene to
our production application.  We ran into a problem with it, which may
or may not apply to you.  Our production software servers are all
monitored for uptime by a daemon that pings them periodically and
restarts them if a response is not received within a configurable
period of time.

We found that under some orderings of restarts, the Lucene appservers
would not come up correctly.  I don't recall the exact details, and I
don't think it ever corrupted the index.  As I recall, we had to
restart the servers in a particular order to avoid freezes on the
read-only ones, and of course the automated monitors, being separate
for each server, could not coordinate that.

YMMV of course, but this would be something to test thoroughly in a
shared-index situation.  We moved a while ago to each server (even on
the same machine) having its own index files, and to using the
snapshot shooter/puller scripts (snapshooter/snappuller) for
replication.
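
Roughly, the setup looks like this (this is from memory, so treat the
exact paths and parameters as assumptions rather than a working
config): the master runs snapshooter after each commit via a
postCommit listener, and each slave pulls and installs snapshots from
cron, with the master host/port coming from the scripts' own
configuration (conf/scripts.conf).

    <!-- master solrconfig.xml: take an index snapshot after every commit -->
    <listener event="postCommit" class="solr.RunExecutableListener">
      <str name="exe">snapshooter</str>
      <str name="dir">solr/bin</str>
      <bool name="wait">true</bool>
    </listener>

    # slave crontab: pull the latest snapshot and install it every 5 minutes
    */5 * * * * solr/bin/snappuller && solr/bin/snapinstaller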

Rachel

On 2/26/08, Matthew Runo <[EMAIL PROTECTED]> wrote:
> We're about to do the same thing here, but have not tried it yet. We
>  currently run Solr with replication across several servers. So long as
>  only one server is doing updates to the index, I think it should work
>  fine.
>
>
>  Thanks!
>
>
>  Matthew Runo
>  Software Developer
>  Zappos.com
>  702.943.7833
>
>
>  On Feb 26, 2008, at 7:51 AM, Evgeniy Strokin wrote:
>
>  > I know there have been discussions on this subject before, but I
>  > want to ask again in case somebody could share more information.
>  > We are planning to have several separate servers for our search
>  > engine. One of them will be an index/search server, and all the
>  > others will be search-only.
>  > We want to use a SAN (BTW: should we consider something else?) and
>  > give all servers access to it, so all servers would use the same
>  > index files, without any replication.
>  > Is this a good practice? Has anybody done the same? Any problems
>  > noticed? Any suggestions, even about different configurations, are
>  > highly appreciated.
>  >
>  > Thanks,
>  > Gene
>
>
