This is expected behaviour. The shardHandlerFactory element is configured
in solr.xml, not solrconfig.xml. See:
https://lucene.apache.org/solr/guide/7_4/format-of-solr-xml.html
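For reference, a minimal solr.xml sketch (element and class names per the linked guide; the timeout values are only illustrative):

```
<solr>
  <!-- shardHandlerFactory is a top-level element of solr.xml,
       not part of solrconfig.xml -->
  <shardHandlerFactory name="shardHandlerFactory"
                       class="HttpShardHandlerFactory">
    <int name="socketTimeout">600000</int>
    <int name="connTimeout">60000</int>
  </shardHandlerFactory>
</solr>
```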
On Tue, 11 Sep 2018 at 11:55, Ash Ramesh wrote:
> Hi,
>
> I tried setting up a bespoke ShardHandlerFactory configura
> Shared file system is the best way to keep it consistent, but it comes with
> its drawbacks. You can always back up locally and asynchronously sync to
> shared FS too.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
> On May 30, 2018, 5:16 PM -0400, Greg Roodt wrote:
It's going to take a bit of coordination to get all nodes to mount a shared
volume when we take a backup and then unmount when done.
Any idea what happens if a node joins or leaves during a backup?
On Thu, 31 May 2018 at 06:14, Shawn Heisey wrote:
> On 5/29/2018 3:01 PM, Greg Roodt wrote:
Hi
What is the best way to perform a backup of a Solr Cloud cluster? Is there
a way to backup only the leader? From my tests with the collections admin
BACKUP command, all nodes in the cluster need to have access to a shared
filesystem. Surely that isn't necessary if you are backing up the leader?
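For reference, the Collections API BACKUP call takes a location parameter, and
that location has to be visible to every node, e.g. (collection name and path
are hypothetical):

```
http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=mycollection&location=/mnt/shared/backups
```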
We've been running 7.2.1 at work for a while now and it's been working very
well. We saw some performance improvements, but I would attribute most of
that to the newer instance types we used in the upgrade. Didn't see any
major performance regressions for our workload.
A couple of things to think about:
A single shard is much simpler conceptually and also cheaper to query. I
would say that even your 1.2M collection can be a single shard. I'm running
a single shard setup 4X that size. You can still have replicas of this
shard for redundancy / availability purposes.
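For illustration, such a single-shard collection with replicas would be
created along these lines (name and replica count are hypothetical):

```
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&replicationFactor=3
```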
I'm not an expert, but I think o
Sent from my iPhone
>
> > On Mar 7, 2018, at 8:18 PM, Greg Roodt wrote:
> >
> > Hi
> >
> > I am running a cluster of TLOG and PULL replicas. When I call the
> > DELETEREPLICA API to remove a replica, the replica is removed; however, a
> > new NRT replica pops up in a down state in the cluster.
> >
> > Any ideas why?
> >
> > Greg
>
Hi
I am running a cluster of TLOG and PULL replicas. When I call the
DELETEREPLICA API to remove a replica, the replica is removed; however, a
new NRT replica pops up in a down state in the cluster.
Any ideas why?
Greg
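For context, a sketch of the kind of call involved (collection, shard, and
replica names are hypothetical):

```
http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node5
```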
Thanks so much again Tomas! You've answered my questions and I clearly
understand now. Great work!
On 13 February 2018 at 09:13, Tomas Fernandez Lobbe wrote:
>
>
> > On Feb 12, 2018, at 12:06 PM, Greg Roodt wrote:
> >
> > Thanks Ere. I've taken a look at the
> I'm
> considering taking a stab at implementing it.
>
> --Ere
>
>
> Greg Roodt kirjoitti 12.2.2018 klo 6.55:
>
>> Thank you both for your very detailed answers.
>>
>> This is great to know. I knew that SolrJ had the cluster aware knowledge
>> (via zookeeper), but
> If you have multiple shards, then you need to use
> the “shards” parameter and tell Solr exactly which nodes you want to hit
> for each shard (the “shards” approach can also be done in the single shard
> case, although you would be adding an extra hop I believe)
>
> Tomás
> Sent from
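To make the "shards" approach concrete, a hypothetical request pinning the
query to particular replicas might look like:

```
http://host1:8983/solr/mycollection/select?q=*:*&shards=host1:8983/solr/mycollection,host2:8983/solr/mycollection
```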
Hi
I have a question around how queries are routed and load-balanced in a
cluster of mixed TLOG and PULL replicas.
I thought that I might have to put a load-balancer in front of the PULL
replicas and direct queries at them manually as nodes are added and removed
as PULL replicas. However, it seem
Here is the newly created Jira ticket:
https://issues.apache.org/jira/browse/SOLR-11921
On 27 January 2018 at 08:19, Greg Roodt wrote:
> Ok, thanks for the clarification. I'll open a Jira issue.
>
>
>
> On Fri, 26 Jan 2018 at 01:21, Yonik Seeley wrote:
>
>>
serialize/deserialize sort values.
> We can either fix the issue, or at a minimum provide a better error
> message if cursorMark is limited to sorting on "normal" fields only.
>
> -Yonik
>
>
> On Wed, Jan 24, 2018 at 3:19 PM, Greg Roodt wrote:
> > Given the
Given the technical nature of this problem, do you think I should try
raising this on the developer group or filing a bug?
On 24 January 2018 at 12:36, Greg Roodt wrote:
> Hi
>
> I'm trying to use the Query Elevation Component in conjunction with
> CursorMark pagination. I
Hi
I'm trying to use the Query Elevation Component in conjunction with
CursorMark pagination. It doesn't seem to work. I get an exception. Are
these components meant to work together?
This works:
enableElevation=true&forceElevation=true&elevateIds=MAAMNqFV1dg
This fails:
cursorMark=*&enableElev
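For comparison, a plain cursorMark request works when the sort includes the
uniqueKey field as a tie-breaker (assuming id is the uniqueKey here):

```
q=*:*&sort=score desc,id asc&rows=10&cursorMark=*
```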
> There are bunches of samples in ZkControllerTest
>
> But yeah, it requires that you know your hostname and port, and the
> context is "solr".....
>
> On Tue, Dec 19, 2017 at 8:04 PM, Greg Roodt wrote:
> > Ok, thanks. I'll take a look into using the ADDREPLICA API.
> though.
>
> Finally, you'll want to add a "node" parameter to ensure your replica is
> placed on the exact node you want, see the livenodes znode for the
> format...
>
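A sketch of such a pinned ADDREPLICA call (collection name and address are
hypothetical; the node value follows the host:port_solr format found under
live_nodes):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=10.0.0.5:8983_solr
```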
> On Dec 19, 2017 16:06, "Greg Roodt" wrote:
>
> > Thanks for the reply. So it soun
*shard1*
That seems mostly equivalent to writing that core.properties file that I am
using in 6.2.
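For reference, the kind of 6.x core.properties file being described, with all
names hypothetical:

```
# core discovery properties placing this core as a replica of shard1
name=mycollection_shard1_replica1
collection=mycollection
shard=shard1
```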
On 20 December 2017 at 09:34, Shawn Heisey wrote:
> On 12/19/2017 3:06 PM, Greg Roodt wrote:
> > Thanks for your reply Erick.
> >
> > This is what I'm doing at the moment
> you're just getting lucky. The legacyCloud=true default is _probably_ adding
> the replica with a new URL and thus making it distinct.
>
> Please detail exactly what you do when you add a new node.
>
> Best,
> Erick
>
> On Mon, Dec 18, 2017 at 9:03 PM, Greg Roodt wrote:
Hi
Background:
* I am looking to upgrade from Solr 6.1 to Solr 7.1.
* Currently the system is run in cloud mode with a single collection and
single shard per node.
* Currently when a new node is added to the cluster, it becomes a replica
and copies the data / core "automagically".
Question:
Is it
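That "automagic" behaviour is governed by the legacyCloud cluster property
Erick mentions above; it can be set explicitly through the CLUSTERPROP API,
e.g.:

```
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=false
```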