SolrCloud rolling backup

2020-04-05 Thread Karthik K G
Hi Team,

Currently there is no functionality to delete a SolrCloud backup.
Taking a backup already creates a directory named after the backup in the shared location.
We should have an API to delete a SolrCloud backup, as this would let the
community implement rolling backups.
If such an API already exists, it is not mentioned in the documentation:
https://lucene.apache.org/solr/guide/6_6/collections-api.html#CollectionsAPI-backup

The documentation is also not clear about what a shared location is. Here,
"shared location" should mean a filesystem location that is mounted at the same
path on all nodes hosting the collection.
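
For context, here is roughly how we take a backup today (the host, collection
name, backup name, and mount path below are made-up examples; the mount is
visible at the same path on every node):

  curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly-2020-04-05&collection=mycollection&location=/mnt/solr-backups"

  # There is no Collections API call for cleanup, so today an old backup can
  # only be removed by deleting its directory on the shared mount by hand:
  rm -rf /mnt/solr-backups/nightly-2020-03-29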

Please let me know if I can go ahead and create a JIRA issue to track SolrCloud
backup deletion.

Thanks,
Karthik


Re: Required operator (+) is being ignored when using default conjunction operator AND

2020-04-05 Thread Eran Buchnick
Hoss, thanks a lot for the response.
OK, so it seems like I got into the "uncanny valley" of the search
operators :/
I read your attached blog post (and more), but the penny still hasn't dropped
about what causes the operator clash when the default operator is AND.
I read that when q.op=AND, OR changes the Occur of the left clause (if it is not
MUST_NOT) and of the right clause to SHOULD. Does that mean the "order of
operations" here gives the infix operator the mandate to override the prefix
operator?
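
For example, if I understand the parsing correctly (the terms are made up):

  q.op=AND   q=apple pie       ->  +apple +pie   (the implicit Occur of each clause is MUST)
  q.op=OR    q=+apple pie      ->  +apple pie    (the + is honored, pie stays SHOULD)
  q.op=AND   q=+apple OR pie   ->  apple pie     (OR rewrites both Occurs to SHOULD,
                                                   so the + on apple is effectively ignored)
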
A little background: I am trying to implement a Google-like search service and
want to support the required and prohibited operators while still keeping
intersection (AND) as the default operator. How can I achieve this given this
limitation?


On Wed, Apr 1, 2020, 20:08 Chris Hostetter  wrote:

>
> : Using solr 8.3.0 it seems like required operator isn't functioning
> properly
> : when default conjunction operator is AND.
>
> You're mixing the "prefix operators" with the "infix operators" which is
> always a recipe for disaster.
>
> The use of q.op=AND vs q.op=OR in these examples only
> complicates the issue because q.op isn't really overriding any sort of
> implicit
> "infix operator" when clauses exist w/o an infix operator between them, it
> is overriding the implicit MUST/SHOULD/MUST_NOT given to each clause as
> parsed ... but in general setting q.op=AND really only makes sense when
> you expect/intend to only be using "infix operators"
>
> This write-up I did several years ago is still very accurate -- the bottom
> line is you REALLY don't want to mix infix and prefix operators.
>
> https://lucidworks.com/post/why-not-and-or-and-not/
>
> ...because the results of mixing them really only "make sense" given the
> context that the parser goes left to right (ie: no precedence) and has
> no explicit "prefix" operator syntax for "SHOULD"
>
>
> -Hoss
> http://www.lucidworks.com/
>


All shards placed on the same node

2020-04-05 Thread Kudrettin Güleryüz
Hi,

Running 7.3.1 on an 8-node SolrCloud cluster. Why would Solr create all 6 shards
on the same node? I don't want to restrict Solr to creating up to x shards per
node, but creating all shards on the same node doesn't look right to me.

Will Solr use all space on one node before using another one? Here is my
autoscaling configuration:

{
  "cluster-preferences":[
{
  "minimize":"cores",
  "precision":10},
{
  "precision":100,
  "maximize":"freedisk"},
{
  "minimize":"sysLoadAvg",
  "precision":3}],
  "cluster-policy":[{
  "freedisk":"<10",
  "replica":"0",
  "strict":"true"}],
  "triggers":{".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":120,
  "actions":[
{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}],
  "enabled":true}},
  "listeners":{".auto_add_replicas.system":{
  "trigger":".auto_add_replicas",
  "afterAction":[],
  "stage":[
"STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
  "beforeAction":[]}},
  "properties":{}}


Re: All shards placed on the same node

2020-04-05 Thread Sandeep Dharembra
Hey,

Please change the precision of the cluster preference for cores to 1 instead of
10 and give it a try.

With the current settings, two nodes are not treated as different until they
differ by 10 cores.
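
Something like this, as a sketch based on your posted config (only the cores
precision changes, everything else stays the same):

  "cluster-preferences":[
    {"minimize":"cores", "precision":1},
    {"precision":100, "maximize":"freedisk"},
    {"minimize":"sysLoadAvg", "precision":3}]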

Thanks,


On Mon, Apr 6, 2020, 2:09 AM Kudrettin Güleryüz  wrote:

> Hi,
>
> Running 7.3.1 on an 8-node SolrCloud cluster. Why would Solr create all 6 shards
> on the same node? I don't want to restrict Solr to creating up to x shards per
> node, but creating all shards on the same node doesn't look right to me.
>
> Will Solr use all space on one node before using another one? Here is my
> autoscaling configuration:
>
> {
>   "cluster-preferences":[
> {
>   "minimize":"cores",
>   "precision":10},
> {
>   "precision":100,
>   "maximize":"freedisk"},
> {
>   "minimize":"sysLoadAvg",
>   "precision":3}],
>   "cluster-policy":[{
>   "freedisk":"<10",
>   "replica":"0",
>   "strict":"true"}],
>   "triggers":{".auto_add_replicas":{
>   "name":".auto_add_replicas",
>   "event":"nodeLost",
>   "waitFor":120,
>   "actions":[
> {
>   "name":"auto_add_replicas_plan",
>   "class":"solr.AutoAddReplicasPlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}],
>   "enabled":true}},
>   "listeners":{".auto_add_replicas.system":{
>   "trigger":".auto_add_replicas",
>   "afterAction":[],
>   "stage":[
> "STARTED",
> "ABORTED",
> "SUCCEEDED",
> "FAILED",
> "BEFORE_ACTION",
> "AFTER_ACTION",
> "IGNORED"],
>   "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
>   "beforeAction":[]}},
>   "properties":{}}
>


If the leader dies, will the data be lost?

2020-04-05 Thread Taisuke Miyazaki
Hi,
Using Solr 7.5.0 in SolrCloud mode, and the replica type is TLOG.

If a leader dies, how are the re-election of the leader and the synchronization
of the replicas done?

My understanding is:
Leader dies → a TLOG replica tries to become leader → it replays tlog entries
not yet reflected in its index → it becomes leader.
Is this correct, to begin with?

Also, when another leader is elected, could there be tlog entries that only the
old leader has? (I'm worried about data being lost if the tlogs aren't
synchronized.)