I'll give that a shot, thanks!
On Wed, Jul 3, 2013 at 12:28 PM, Shawn Heisey wrote:
> On 7/3/2013 9:29 AM, Neal Ensor wrote:
>
>> Posted the solr config up as http://apaste.info/4eKC (hope that works).
>> Note that this is largely a hold-over from upgrades of previous solr
>> versions, there may be lots of cruft left over. If it's advisable to do
>> so, I would certainly be open to starting from scratch with a 4.3+ example
>> configuration.
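In a stock 4.x example configuration, the slave side of replication reduces to a single ReplicationHandler entry in solrconfig.xml. A minimal sketch for comparison (the masterUrl host, core name, and pollInterval below are placeholders, not values from the posted config):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- URL of the master core this slave pulls index changes from (placeholder) -->
    <str name="masterUrl">http://master-host:8983/solr/corename</str>
    <!-- how often the slave polls the master for a newer index version -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>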
On 7/1/2013 1:07 PM, Neal Ensor wrote:
is it conceivable that there's too much traffic, causing Solr to stall
re-opening the searcher (thus releasing to the new index)? I'm grasping at
straws, and this is beginning to bug me a lot. The traffic logs wouldn't
seem to support this (apart from periodic health-check pings, the load is
distributed
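One place heavy traffic can interact with opening a new searcher is autowarming: newSearcher event listeners in the <query> section of solrconfig.xml replay warming queries, and caches with a non-zero autowarmCount are repopulated, before the new searcher is put into service. A generic sketch of such a listener (the query is a placeholder):

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- placeholder warming query; slow queries here stretch out the searcher swap -->
    <lst><str name="q">*:*</str></lst>
  </arr>
</listener>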
Odd - looks like it's stuck waiting to be notified that a new searcher is ready.
- Mark
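The settings that usually matter for how long Solr waits on a new searcher live in the <query> section of solrconfig.xml. A minimal sketch with illustrative values (not taken from the posted config):

<query>
  <!-- limit on searchers warming at the same time; commits that would exceed it fail -->
  <maxWarmingSearchers>2</maxWarmingSearchers>
  <!-- if false, requests block until a warmed searcher is registered rather than use a cold one -->
  <useColdSearcher>false</useColdSearcher>
  <!-- autowarmCount controls how many entries are regenerated for each new searcher -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
</query>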
On Jun 27, 2013, at 8:58 AM, Neal Ensor wrote:
Okay, I have done this (updated to 4.3.1 across master and four slaves; one
of these is my own PC for experiments, it is not being accessed by clients).
Just had a minor replication this morning, and all three slaves are "stuck"
again. Replication supposedly started at 8:40, ended 30 seconds late
A bunch of replication-related issues were fixed in 4.2.1, so you're
better off upgrading to 4.2.1 or later (4.3.1 is the latest release).
On Mon, Jun 24, 2013 at 6:55 PM, Neal Ensor wrote:
As a bit of background, we run a setup (coming from 3.6.1 to 4.2 relatively
recently) with a single master receiving updates with three slaves pulling
changes in. Our index is around 5 million documents, around 26GB in size
total.
The situation I'm seeing is this: occasionally we update the master
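For a master with polling slaves like this, the master side of the ReplicationHandler in solrconfig.xml is what publishes each new index version. A generic 4.x sketch (the confFiles list is a placeholder, not the files actually used here):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- make a new index version visible to slaves after every commit and at startup -->
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <!-- config files shipped to slaves alongside the index (placeholder list) -->
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>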