That's true about the commit issue. With that in mind, it might be better to use replication - just keep an eye on it to make sure it's working, as replication on my 1.2 install (3 servers) stops every once in a blue moon.

Thanks!

Matthew Runo
Software Developer
Zappos.com
702.943.7833

On Feb 26, 2008, at 10:53 AM, Walter Underwood wrote:

SAN is not NFS. I would expect SAN to be fast.

wunder

On 2/26/08 10:47 AM, "Jae Joo" <[EMAIL PROTECTED]> wrote:


In my environment, there is NO big difference between local disk and a SAN-based
file system.
A slight slowdown, but not a problem (1 or 2%).
I have 4 sets of Solr indices, each more than 10G, across 3 servers.
I don't think sharing a SINGLE index is a good way to go - disk is pretty cheap,
and we can add more disk to the SAN pretty easily.
I have another server, called the "Master", with a local-disk-based Solr
index, which is used to update the index.
Sometimes an update fails because of an accident or a timeout, and I
need to fix things manually.
If you have only one index, there is a risk of messing up that index.

Thanks,

Jae


-----Original Message-----
From: Walter Underwood [mailto:[EMAIL PROTECTED]
Sent: Tue 2/26/2008 1:27 PM
To: solr-user@lucene.apache.org
Subject: Re: Shared index base

I saw a 100X slowdown running with indexes on NFS.

I don't understand going through a lot of effort with unsupported
configurations just to share an index. Local disk is cheap, the
snapshot stuff works well, and local discs avoid a single point
of failure.

The testing time to make a shared index work with each new
release of Solr is almost certainly more expensive than buying
local disc.

The single point of failure is a real issue. I've seen two discs
fail on one RAID. When that happens, you've lost all of your
search for hours or days.

Finally, how do you tell Solr that the index has changed and
it needs a new Searcher? Normally, that is a commit, but you
don't want to commit from a read-only Solr.
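For what it's worth, the "new Searcher" signal is just a commit message POSTed to the update handler. A minimal sketch - the host, port, and URL path are assumptions based on the example server layout, not a recommendation to commit against a read-only instance:

```shell
# Sketch: ask a Solr instance to reopen its Searcher by POSTing <commit/>
# to its update handler. SOLR_URL is an assumption for this example.
SOLR_URL="${SOLR_URL:-http://localhost:8983/solr}"

# The <commit/> message makes Solr open a new Searcher on the changed index.
curl -s "$SOLR_URL/update" -H 'Content-Type: text/xml' --data-binary '<commit/>'
```

Which is exactly the problem Walter describes: the searchers have no clean way to learn the shared index changed without someone issuing this against them.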

wunder

On 2/26/08 10:17 AM, "Matthew Runo" <[EMAIL PROTECTED]> wrote:

I hope so. I've found that every once in a while, Solr 1.2 replication will die because of a temp-index... file that seems to gum it up. Removing
that file on all the servers fixes the issue, though.
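That manual cleanup can be scripted. A rough sketch, assuming the leftovers match temp-index* at the top of the index directory - the path is hypothetical, so point it at your own layout and eyeball the output before trusting it in cron:

```shell
# Sketch: remove leftover temp-index* files that stall 1.2 replication.
# INDEX_DIR is an assumption; point it at your actual index directory.
INDEX_DIR="${INDEX_DIR:-/var/solr/data/index}"

# Print each stale file before deleting it so the cleanup is auditable.
find "$INDEX_DIR" -maxdepth 1 -name 'temp-index*' -print -delete
```

Run on each server (or via ssh in a loop) when replication wedges.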

We'd like to be able to point all the servers at an NFS location for
their index files, and use a single server to update it.

Thanks!

Matthew Runo
Software Developer
Zappos.com
702.943.7833

On Feb 26, 2008, at 9:39 AM, Alok Dhir wrote:

Are you saying all the servers will use the same 'data' dir?  Is
that a supported config?

On Feb 26, 2008, at 12:29 PM, Matthew Runo wrote:

We're about to do the same thing here, but have not tried yet. We
currently run Solr with replication across several servers. So long as only one server is doing updates to the index, I think it should
work fine.


Thanks!

Matthew Runo
Software Developer
Zappos.com
702.943.7833

On Feb 26, 2008, at 7:51 AM, Evgeniy Strokin wrote:

I know there have been discussions on this subject, but I want to
ask again in case somebody could share more information.
We are planning to have several separate servers for our search
engine. One of them will be an index/search server, and all the others
will be search-only.
We want to use a SAN (BTW: should we consider something else?) and
give all servers access to it. So all servers would use the
same index base - the same files, without any replication.
Is this good practice? Has somebody done the same? Any problems
noticed? Any suggestions, even about different configurations,
are highly appreciated.

Thanks,
Gene
