This could be useful where disk space is at a premium, although the reason I
wanted to try it is running multiple Solr instances on one server, all reading
one index on the SSD. Doing the same thing over NFS still leaves a single
point of failure on one of the most fragile parts of a server, the disk, on
one machine. If the NFS master's index gets corrupted, all clients are dead,
whereas with replication the slaves each have their own copy of the index.

> On May 26, 2017, at 5:37 PM, Florian Gleixner <f...@redflo.de> wrote:
> 
> 
> Just tested: if file metadata (last change time, access permissions ...)
> on NFS storage change, then all NFS clients invalidate the memory cache
> of the file completely.
> So, if your index does not get changed, caching is good on readonly
> slaves - the NFS client queries only file metadata sometimes.
> But if your index changes, all affected files have to be read again from
> NFS. You can try this by "touching" the files.
> 
> fincore from linux ftools can be used to view the file caching status.
> 
> "touching" a file on a local mount does not invalidate the memory cache.
> The kernel knows that no file data have been changed.
> 
> 
>> On 26.05.2017 19:53, Robert Haschart wrote:
>> 
>> The individual servers cannot do a merge on their own, since they mount
>> the NAS read-only.   Nothing they can do will affect the index.  I
>> believe this allows each machine to cache much of the index in memory,
>> with no fear that their cache will be made invalid by one of the others.
>> 
>> -Bob Haschart
>> University of Virginia Library
>> 
> 
> 
> 