This could be useful where space is expensive, although the reason I
wanted to try it is to have multiple Solr instances on one server reading one
index on the SSD. The NFS use case still leads to a single point of
failure on one of the most fragile parts of a server, the disk
Just tested: if file metadata (last change time, access permissions, ...)
on NFS storage changes, then all NFS clients invalidate their memory cache
of the file completely.
So, if your index does not get changed, caching works well on read-only
slaves - the NFS client only queries the file metadata occasionally.
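If you want to watch how often that metadata actually changes on a mounted
index, a tiny polling script is enough. A rough Python sketch (the index path
is just an example, not anyone's real layout):

import os
import time

# Hypothetical path to the shared index on the NFS mount
INDEX_DIR = "/mnt/nfs/solr/core/data/index"

def snapshot(path):
    # Record (mtime, size) for every index file as seen by this NFS client
    stats = {}
    for name in os.listdir(path):
        st = os.stat(os.path.join(path, name))
        stats[name] = (st.st_mtime, st.st_size)
    return stats

previous = snapshot(INDEX_DIR)
while True:
    time.sleep(60)  # poll once a minute
    current = snapshot(INDEX_DIR)
    changed = [name for name in current if current[name] != previous.get(name)]
    if changed:
        print("metadata changed for:", sorted(changed))
    previous = current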
When the indexing Solr instance finishes, it fast-copies the newly built
core to a new directory on the network storage, and then issues the
CREATE, SWAP and UNLOAD requests (rough sketch of the copy step below).
Just before starting this message, I needed to update some records and
re-deploy to production, the process took less time the
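On the copy step itself, it can be as simple as something like this rough
sketch (the paths and the timestamped directory name are my assumptions, not
necessarily how Robert lays it out):

import shutil
import time

# Assumed layout: freshly built index on the indexing box, shared NAS mount
NEW_CORE_DATA = "/indexer/solr/core/data"
SHARED_STORAGE = "/mnt/nas/solr"

# Copy the finished core into a new, uniquely named directory so the
# read-only servers can be pointed at it without touching the live index
target = f"{SHARED_STORAGE}/core_{time.strftime('%Y%m%d_%H%M%S')}"
shutil.copytree(NEW_CORE_DATA, target)
print("new index copied to", target)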
so are "core" and "corebak" pointing to the same datadir or do you have the
indexing solr instance keep writing to a new directory?
On Fri, May 26, 2017 at 1:53 PM, Robert Haschart wrote:
> The process we use to signal the read-only servers is to submit a CREATE
> request pointing to the newly
The process we use to signal the read-only servers is to submit a
CREATE request pointing to the newly created index, with a name like
corebak, then do a SWAP request between core and corebak, then submit
an UNLOAD request for corebak, which is now pointing at the previous
version.
The
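For anyone who wants the concrete calls, here is a rough sketch of that
sequence against the CoreAdmin API from Python; the host name, core names and
directory paths are made up for illustration:

import requests

READ_ONLY_SOLR = "http://solr1:8983/solr"        # repeat for each read-only server
NEW_INDEX_DIR = "/mnt/nas/solr/core_20170526"    # directory the indexer just copied

def core_admin(params):
    # All three steps go through the CoreAdmin handler on the read-only server
    r = requests.get(f"{READ_ONLY_SOLR}/admin/cores",
                     params={**params, "wt": "json"})
    r.raise_for_status()
    return r.json()

# 1. CREATE a core named corebak that points at the newly copied index
core_admin({"action": "CREATE", "name": "corebak",
            "instanceDir": "core", "dataDir": NEW_INDEX_DIR})

# 2. SWAP it with the live core, so queries move to the new index
core_admin({"action": "SWAP", "core": "core", "other": "corebak"})

# 3. UNLOAD corebak, which after the swap points at the previous version
core_admin({"action": "UNLOAD", "core": "corebak"})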
Pretty sure that master/slave was in Solr 1.2. That was very nearly ten years
ago.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On May 26, 2017, at 9:52 AM, David Hastings
> wrote:
>
> Im curious about this. when you say "and signal the three So
I'm curious about this. When you say "and signal the three Solr servers
when the updated index is available," how does it send the signal? I.e.,
what command, just a reload? Also, what prevents them from doing a merge on
their own? Thanks
On Fri, May 26, 2017 at 12:09 PM, Robert Haschart
wrote:
Bob:
I'd guess you had to fiddle with lock factories and the like, although
you say that master/slave wasn't even available when you put this
system together so I don't even remember what was available "way back
when" ;).
If it ain't broke, don't fix it applies. That said, if I were redoing
the s
We have been running this exact scenario for several years. We have three
Solr servers sitting behind a load balancer, with all three accessing
the same Solr index stored on read-only network-addressable storage. A
fourth machine is used to update the index (typically daily) and signal
the three Solr servers when the updated index is available.
On 5/19/2017 8:33 AM, Ravi Kumar Taminidi wrote:
> Hello. Scenario: Currently we have 2 Solr servers running on 2 different
> servers (Linux). Is there any way we can have the core located on a NAS
> or network shared drive so that both Solrs use the same index?
>
> Let me know if any perfo
I agree completely, it was just something I've always wanted to try doing.
If my indexes were smaller I'd just fire up a bunch of slaves on a single
machine and nginx them out, but even 2 TB SSDs are somewhat expensive and
there aren't always enough ports on the servers to keep adding more.
On Fri,
One problem here is how to open new searchers on the r/o core.
Consider the autocommit setting. The cycle is:
> when the first doc comes in, start your timer
> x milliseconds later, do a commit and (perhaps) open a new searcher.
But the core referencing the index in R/O mode doesn't have any updates
coming in, so that cycle never fires.
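One way to force the r/o cores to open a new searcher once the shared index
has changed is an explicit CoreAdmin RELOAD (the CREATE/SWAP/UNLOAD sequence
Robert describes above is another). A minimal sketch, with made-up host and
core names:

import requests

# The read-only Solr instances that all point at the shared index
READ_ONLY_SERVERS = ["http://solr1:8983/solr",
                     "http://solr2:8983/solr",
                     "http://solr3:8983/solr"]

def reload_core(base_url, core="core"):
    # CoreAdmin RELOAD opens a new searcher against whatever is now on disk
    r = requests.get(f"{base_url}/admin/cores",
                     params={"action": "RELOAD", "core": core, "wt": "json"})
    r.raise_for_status()

for server in READ_ONLY_SERVERS:
    reload_core(server)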
My thought would be that the machine would need only the same amount of RAM
minus the heap size of the second instance of Solr, since it will be file-caching
the index into memory only once, as it's the same files, just read
by both Solr instances. My Solr slaves have about 150 GB each.
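As a back-of-the-envelope check of that reasoning (the index and heap sizes
below are made-up numbers; the key point is that the OS page cache holds only
one copy of the index, however many instances read it):

# Rough memory estimate for two Solr instances sharing one on-disk index.
# All figures in GB and purely illustrative.
index_size = 150        # size of the shared index files
heap_per_instance = 16
instances = 2

# The page cache only holds one copy of the index files; each JVM still
# needs its own heap.
single_box_estimate = index_size + instances * heap_per_instance

# Two separate boxes would each cache the index independently.
two_box_estimate = instances * (index_size + heap_per_instance)

print(f"one box, two instances: ~{single_box_estimate} GB")
print(f"two separate boxes:     ~{two_box_estimate} GB total")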
On Fri, Ma
> multiple solr instances on one machine performs better than multiple
Does the machine have enough RAM to support all the instances? Again, time for
an experiment!
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
ecurs...@gmail.com]
Sent: Friday, May 19, 2017 1:33 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr in NAS or Network Shared Drive
The reason I want to try it is that replication is not possible on the
single machine, as the index size is around 350 GB plus another 400 GB, and I don't
h
On 19.05.2017 16:33, Ravi Kumar Taminidi wrote:
> Hello. Scenario: Currently we have 2 Solr servers running on 2 different
> servers (Linux). Is there any way we can have the core located on a NAS
> or network shared drive so that both Solrs use the same index?
>
> Let me know if any perfo
y ACID database, so you can look at best practices for integrating
> these products with Netapp or EMC Celera for more ideas.
>
> -Original Message-
> From: Rick Leir [mailto:rl...@leirtech.com]
> Sent: Friday, May 19, 2017 12:40 PM
> To: solr-user@lucene.apache.org
> S
From: Rick Leir [mailto:rl...@leirtech.com]
Sent: Friday, May 19, 2017 12:40 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr in NAS or Network Shared Drive
For an experiment, mount the NAS filesystem ro (readonly). Is there any way to
tell Solr not to bother with a lockfile? And what happens if an update or add
gets requested by mistake, does it take down Solr?
Why not do all of this the simple way, and just replicate?
On May 19, 2017 10:41:19 AM EDT
I've always wanted to experiment with this, but you have to be very careful
that only one of the cores, or neither, can do ANY writes. Also, if you have
a suggester index you need to make sure that each core builds its own
independently (see the sketch below). In any case, from everything I've read the general answer is
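On the suggester point: if each read-only instance keeps its own suggester
index locally, the build has to be triggered on every instance. A rough
sketch, assuming a SuggestComponent exposed at /suggest with a dictionary
named mySuggester (all names here are made up):

import requests

# Each read-only instance builds and keeps its own suggester index,
# so the build request goes to every one of them.
READ_ONLY_SERVERS = ["http://solr1:8983/solr",
                     "http://solr2:8983/solr",
                     "http://solr3:8983/solr"]

for base in READ_ONLY_SERVERS:
    r = requests.get(f"{base}/core/suggest",
                     params={"suggest": "true",
                             "suggest.build": "true",
                             "suggest.dictionary": "mySuggester",
                             "wt": "json"})
    r.raise_for_status()
    print(base, "suggester rebuilt")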
Hello. Scenario: Currently we have 2 Solr servers running on 2 different
servers (Linux). Is there any way we can have the core located on a NAS or
network shared drive so that both Solrs use the same index?
Let me know if there are any performance issues; our index size is approx. 1 GB.
Thanks
Ravi