Solr Query

2011-03-15 Thread Vishal Patel
I am a bit new to Solr.

I am running the below query in the admin query interface:

+RetailPriceCodeID:1 +MSRP:[16001.00 TO 32000.00]

I think it should return only results with RetailPriceCodeID = 1 and MSRP
between 16001 and 32000.

But it returns all results with RetailPriceCodeID = 1 and doesn't consider the
second clause at all.

Am I doing something wrong here? Please help.
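For reference, a common cause of this symptom: if MSRP is defined as a string or text field in schema.xml, the endpoints of [16001.00 TO 32000.00] are compared lexically rather than numerically, so the range clause can match almost everything. A numeric definition, sketched against a Solr 1.4/3.x-era schema.xml with the field name taken from the question, would look like:

<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
<field name="MSRP" type="tfloat" indexed="true" stored="true"/>

A full re-index is needed after changing the field type; the original query should then work unchanged.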


SolrCore Initialization Failures in Solr 8.0.0

2019-03-26 Thread vishal patel

My previous Solr version was 6.1.0 with ZooKeeper 3.4.6. Now I am upgrading to
Solr 8.0.0 and ZooKeeper 3.4.13.
In Solr 6.1.0 my collection (product) folder is server\solr\product, containing:
conf
  schema.xml
  solrconfig.xml
core.properties

core.properties contains:
name=product
shard=shard1
collection=product

In Solr 8.0.0 I changed only solrconfig.xml and kept everything else the same.
I set up 3 ZooKeeper nodes and one shard. First I start all 3 ZooKeeper nodes and
then start Solr, and the below ERROR comes:


2019-03-26 13:06:49.367 ERROR 
(coreLoadExecutor-13-thread-1-processing-n:192.168.100.145:7991_solr) 
[c:product s:shard1  x:product] o.a.s.c.ZkController
org.apache.solr.common.SolrException: Could not find collection : product
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) 
~[solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.core.CoreContainer.repairCoreProperty(CoreContainer.java:1854) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1790) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) 
[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182)
 [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) 
[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer$$Lambda$259/523051393.call(Unknown 
Source) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:06]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 [metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$45/898628429.run(Unknown
 Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
2019-03-26 13:06:49.382 ERROR 
(coreContainerWorkExecutor-2-thread-1-processing-n:192.168.100.145:7991_solr) [ 
  ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup
org.apache.solr.common.SolrException: Unable to create core [product]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1210)
 ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer$$Lambda$259/523051393.call(Unknown 
Source) ~[?:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$45/898628429.run(Unknown
 Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
Caused by: org.apache.solr.common.SolrException:
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1760) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182)
 ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:06]
... 9 more
Caused by: org.apache.solr.common.SolrExc
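The exception above says ZooKeeper's cluster state has no entry for the product collection. One quick check, sketched here with the ZooKeeper ensemble used later in this thread, is to dump what ZooKeeper actually holds:

zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd list

If /collections/product is missing from the output, the collection was never registered in ZooKeeper, which is what the replies below address.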

Re: SolrCore Initialization Failures in Solr 8.0.0

2019-03-26 Thread vishal patel
Solr 6.1.0 folder structure:

F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr\
---   product
      ---   conf
            ---   schema.xml
            ---   solrconfig.xml
      ---   core.properties
---   solr.xml
---   zoo.cfg

Note: core.properties contains
name=product
shard=shard1
collection=product

zkcli bootstrap command:

zkcli.bat -cmd bootstrap -solrhome 
F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z 
192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183

Solr start command :

solr start -p 7992

*

Now I am upgrading to Solr 8.0.0 and have made a folder structure like:

F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
---   product
      ---   data
      ---   core.properties
---   configsets
      ---   product
            ---   conf
                  ---   schema.xml
                  ---   solrconfig.xml
---   solr.xml
---   zoo.cfg

Note: core.properties contains
collection.configName=product
name=product
shard=shard1
collection=product
coreNodeName=core_node2

upconfig command :

zkcli.bat -zkhost 
192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig 
-confdir 
F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf 
-confname product

Solr start command :

solr start -p 7992

It works if I make the folder structure like this.

Why can I not configure the same folder structure when upgrading to Solr 8.0.0?
Is it necessary to make a configset? We came up successfully without making a
configset in Solr 6.1.0.

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Tuesday, March 26, 2019 8:16 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrCore Initialization Failures in Solr 8.0.0

How did you create your “product” collection? It looks like you have the config 
resident on your local disk and _not_ on ZooKeeper.

Your configset has to be in ZooKeeper when you create your collection of 
course. Do not try to individually edit the core.properties files, that’ll be 
very difficult to do correctly.

And you’ll have to completely re-index anyway since Lucene 8.x will not open an 
index created with 6.x, so why not just start completely anew?

Best,
Erick
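Concretely, the flow Erick describes looks roughly like this, reusing the poster's ZooKeeper ensemble and configset path; the CREATE call mirrors the Collections API request used elsewhere in these threads, and the port is illustrative:

zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

curl "http://localhost:7992/solr/admin/collections?action=CREATE&name=product&numShards=1&replicationFactor=1&collection.configName=product"

Creating the collection through the API (or the admin UI) is what writes the collection's state into ZooKeeper and generates core.properties, including coreNodeName, on disk.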



coreNodeName core_node2 does not exist in shard shard1, ignore the exception if the replica was deleted Solr 8.0.0

2019-03-27 Thread vishal patel
The first time, I successfully created the product collection using the admin GUI
in Solr 8.0.0.
After some changes in schema.xml, I removed the version-2 folder from ZooKeeper
and ran upconfig again using the below command:

zkcli.bat -zkhost 
192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig 
-confdir 
F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf 
-confname product

Solr starts, but I cannot find the product collection, and I get the below error:

2019-03-27 11:48:17.276 ERROR 
(coreContainerWorkExecutor-2-thread-1-processing-n:192.168.100.222:7992_solr) [ 
  ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName 
core_node2 does not exist in shard shard1, ignore the exception if the replica 
was deleted
at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1830) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182)
 ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer$$Lambda$258/1816397102.call(Unknown 
Source) ~[?:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$46/1324551716.run(Unknown
 Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]

Is it necessary to remove ZooKeeper's version-2 folder whenever there is a change
in schema.xml? Is there any way to get the coreNodeName (core_node2) assigned
automatically on upconfig and Solr start?
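On the second question: coreNodeName is assigned by Solr when a replica is created through the Collections API; upconfig only uploads configuration files and never writes cluster state. So instead of hand-editing core.properties, the usual recovery after losing ZooKeeper state is to re-create the collection and let Solr pick the names. A sketch, with host and port illustrative:

curl "http://192.168.100.222:7992/solr/admin/collections?action=DELETE&name=product"
curl "http://192.168.100.222:7992/solr/admin/collections?action=CREATE&name=product&numShards=1&replicationFactor=1&collection.configName=product"

And ZooKeeper's version-2 data directory does not need to be removed for schema changes; re-running upconfig and reloading the collection is enough, as discussed later in these threads.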



Solr 8.0.0 coreNodeName

2019-03-27 Thread vishal patel

Hi

I am upgrading to Solr 8.0.0 from 6.1.0. Previously I did not add a coreNodeName
in core.properties, and that worked fine for me. But when I start Solr 8.0.0 with
the same core.properties, it gives this ERROR:

2019-03-25 09:01:18.704 ERROR 
(coreLoadExecutor-13-thread-1-processing-n:192.168.100.145:7991_solr) 
[c:product s:shard1  x:product] o.a.s.c.ZkController
org.apache.solr.common.SolrException: Could not find collection : product
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) 
~[solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.core.CoreContainer.repairCoreProperty(CoreContainer.java:1854) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1790) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) 
[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182)
 [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) 
[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer$$Lambda$244/470132045.call(Unknown Source) 
[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 [metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_45]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$56/2019826979.run(Unknown
 Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
2019-03-25 09:01:18.720 ERROR 
(coreContainerWorkExecutor-2-thread-1-processing-n:192.168.100.145:7991_solr) [ 
  ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup
org.apache.solr.common.SolrException: Unable to create core [product]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1210)
 ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer$$Lambda$244/470132045.call(Unknown Source) 
~[?:?]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
 ~[metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_45]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi 
- 2019-03-08 12:06:10]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$56/2019826979.run(Unknown
 Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - 
jimczi - 2019-03-08 12:06:10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
Caused by: org.apache.solr.common.SolrException:
at 
org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1760) 
~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 
2019-03-08 12:06:06]
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182)
 ~[solr-core-8.0.0

Upgrade Solr 8.0.0 issue

2019-03-28 Thread vishal patel
Hi

I am upgrading Solr 6.1.0 to 8.0.0. In Solr 6.1.0 my folder structure is as below:
---product
   ---conf
      ---schema.xml
      ---solrconfig.xml
   ---core.properties
---solr.xml

core.properties contains
name=product
shard=shard1
collection=product

zkcli bootstrap command:
zkcli.bat -cmd bootstrap -solrhome 
F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z 
192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183

Note: In Solr 6.1.0 I did not create the product collection; I just copied it
from Solr 5.2.0 and changed solrconfig.xml.

When I start Solr 6.1.0, the product collection is created automatically and I
can also access it in the admin GUI.
But in Solr 8.0.0, the collection is not created automatically and an error comes.
I used the same core.properties.
Why is it not working?

Regards,
Vishal
Sent from Outlook


Re: SolrCore Initialization Failures in Solr 8.0.0

2019-03-28 Thread vishal patel
Is it necessary to create the collection again in Solr 8.0.0 if it was already
created in Solr 6.1.0?

When I upgraded to Solr 6.1.0 from 5.2.0, I just copied the collection and
changed solrconfig.xml as per Solr 6.1.0.


I bootstrapped the config using the below command:

zkcli.bat -cmd bootstrap -solrhome 
F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z

192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183


After Solr started, the collection came up automatically, without changing
core.properties or making a configset.

Were there any major changes after Solr 6.1.0?



Get Outlook for Android<https://aka.ms/ghei36>


From: Erick Erickson 
Sent: Wednesday, March 27, 2019 9:15:50 PM
To: vishal patel
Subject: Re: SolrCore Initialization Failures in Solr 8.0.0

There is no need whatsoever to make a folder structure. Solr will create the 
proper local file system _for_ you when you create a collection when using 
SolrCloud. Please just don’t do this. The config files will live on ZooKeeper, 
not locally.

"bin/solr zk upconfig ..." will move the files from anywhere in your system to
ZooKeeper. They will NOT be present locally to each replica. You're making
things far too complicated and probably shooting yourself in the foot.

Try the simple way (sketched in commands after this message):
1> start Solr after a fresh install
2> use the bin/solr zk upconfig command to put any custom configset up on ZooKeeper
3> use the admin UI to create your collection

Best,
Erick
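In command form, Erick's three steps map roughly to the following; the port, path, and ZooKeeper ensemble are the ones from this thread, and on Windows the script is bin\solr.cmd:

solr start -c -p 7992 -z 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183
solr zk upconfig -n product -d F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -z 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183

and then create the collection from the admin UI (Collections > Add Collection), selecting the uploaded "product" configset.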




Re: SolrCore Initialization Failures in Solr 8.0.0

2019-03-28 Thread vishal patel
I know that the collection needs to be reindexed when upgrading from 6 to 8.

Is it necessary to create the collection using the admin GUI in Solr 8.0.0? Can I
copy the collection folder (excluding data) from Solr 6.1.0 and upconfig?

My collection folder looks like:
---   product
      ---   conf
            ---   schema.xml
            ---   solrconfig.xml
      ---   core.properties

core.properties contains:
name=product
shard=shard1
collection=product



Get Outlook for Android<https://aka.ms/ghei36>




From: Erick Erickson 
Sent: Thursday, March 28, 2019 8:56:45 PM
To: vishal patel
Subject: Re: SolrCore Initialization Failures in Solr 8.0.0

You might as well just start over. Solr 8 will not read an index that’s ever 
been touched by Solr 6. Actually, it’s Lucene that won’t open the index. So you 
have to re-index from scratch into a new collection.

Solr 5->Solr 6 did not have this restriction so what you did worked. But 
there’s no point in trying for this upgrade.

Best,
Erick
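If the source documents cannot be re-fed from the original system and every field in the 6.1.0 index is stored, a cursorMark copy job is one way to fill the fresh 8.0.0 collection. This is only a sketch under those assumptions: the URLs and collection names are illustrative, and fields populated by copyField should be skipped the same way _version_ is here.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class CopyCollection {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient from = new HttpSolrClient.Builder("http://old-host:7992/solr/product").build();
         HttpSolrClient to = new HttpSolrClient.Builder("http://new-host:7992/solr/product").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(500);
      q.setSort("id", SolrQuery.ORDER.asc);          // cursorMark requires a sort on the uniqueKey
      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = from.query(q);
        for (SolrDocument d : rsp.getResults()) {
          SolrInputDocument in = new SolrInputDocument();
          d.forEach((name, value) -> {
            if (!"_version_".equals(name)) in.addField(name, value);  // drop Solr-internal fields
          });
          to.add(in);
        }
        String next = rsp.getNextCursorMark();
        if (next.equals(cursor)) break;              // cursor stopped advancing: all pages read
        cursor = next;
      }
      to.commit();
    }
  }
}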


Re: Upgrade Solr 8.0.0 issue

2019-03-28 Thread vishal patel
I will re-index cleanly after the Solr 8.0.0 upgrade. Is it necessary to change
core.properties?
In Solr 6.1.0 I wrote only name, shard, and collection in core.properties; I did
not write coreNodeName or collection.configName. To start Solr, I would first
delete the zoo_data version-2 folder, run the upconfig command, and then start
Solr, and it worked successfully.

In Solr 6.1.0 I never mentioned coreNodeName, but when I did upconfig it was
entered automatically in ZooKeeper, and after Solr started it worked fine.

Why is it not working in Solr 8.0.0? Is it necessary to create the collection
again using the admin GUI, or to change core.properties, for Solr 8.0.0?

Note: I don't want to copy the data from Solr 6.1.0 to Solr 8.0.0; I will
re-index after the upgrade. I only want to keep the same schema.xml from
Solr 6.1.0.

Sent from Outlook<http://aka.ms/weboutlook>

From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 8:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Hi Vishal,

There could be a problem with your index if you upgrade directly from Solr
6.1.0 to Solr 8.0.0, which is two major versions, as Solr only supports
upgrading by one major version.

Regards,
Edwin



Re: Upgrade Solr 8.0.0 issue

2019-03-28 Thread vishal patel
OK, got your point.

I will create the collection again, but sometimes there will be changes in
solrconfig.xml or schema.xml; what should I do then?
In my opinion: upconfig again, start Solr, and reload the collection. Is that right?
By mistake I removed the zoo_data version-2 folder from ZooKeeper, then ran
upconfig and started Solr.
When I did the upconfig and started Solr, the error about coreNodeName came. So I
created the collection again, but my index data folder was overwritten. Is it
necessary to change the index data directory?
My existing data directory:
F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
---   product_shard1_replica_n1
      ---   data
      ---   core.properties

Note: I don't want to re-index after a change in solrconfig.xml.

Sent from Outlook<http://aka.ms/weboutlook>

From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 11:00 AM
To: vishal patel
Cc: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Usually I will create the collection again, since I will re-index after the
upgrade. If I create the collection again, a new core.properties will be created.

If you plan to use the same schema.xml, you have to check whether any classes
have become deprecated, as some old classes usually get deprecated in the new
version.

Regards,
Edwin



Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread vishal patel
But if the zoo_data version-2 folder is deleted by mistake, should I upconfig and
create the collection again?

Sent from Outlook<http://aka.ms/weboutlook>

From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 1:07 PM
To: vishal patel
Cc: solr-user@lucene.apache.org
Subject: Re: Upgrade Solr 8.0.0 issue

Yes, if you have changes to solrconfig.xml or schema.xml, just upconfig and
reload the collection. It is not necessary to restart Solr.

It's not necessary to change the index directory, but you can change it in
core.properties if you want to store the index elsewhere (e.g., on another drive).

Regards,
Edwin
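As commands, reusing the upconfig call already shown in this thread plus the Collections API RELOAD action (the port is the one used elsewhere in these threads):

zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

curl "http://localhost:7992/solr/admin/collections?action=RELOAD&name=product"

RELOAD re-reads the configset from ZooKeeper on every replica without restarting the nodes.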




Re: Upgrade Solr 8.0.0 issue

2019-03-29 Thread vishal patel

If I delete the product_shard1_replica_n1 folder, then what about the data
folder, since it is inside this folder? Does it also need to be deleted? Do I
need to back up the data folder or change the data directory?

Sent from Outlook<http://aka.ms/weboutlook>

From: Zheng Lin Edwin Yeo 
Sent: Friday, March 29, 2019 1:39 PM
To: vishal patel
Subject: Re: Upgrade Solr 8.0.0 issue

Yes.
Also delete the product_shard1_replica_n1 folder under server\solr, so that you
can start everything fresh, from creating the collection onward.

Regards,
Edwin



Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-08 Thread vishal patel
Hi

I have configured 2 shards and 3 ZooKeeper nodes. When I index documents into a
collection, my CPU usage maxes out.
I have attached a thread dump.
Are any changes needed in solrconfig.xml?

Sent from Outlook


Re: Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-08 Thread vishal patel
I have created two Solr shards with 3 ZooKeeper nodes. First I do the upconfig to
ZooKeeper, then start both Solr instances on different ports, then create the
"actionscomments" collection using an API call.

When I index one document into actionscomments, my CPU utilization goes high.

Note:
upconfig command:  zkcli.bat -zkhost 192.168.100.145:3181,192.168.100.145:3182,192.168.100.145:3183 -cmd upconfig -confdir E:/SolrCloud-8-0-0/solr1/server/solr/configsets/actionscomments/conf -confname actionscomments  [run from E:\SolrCloud-8-0-0\solr1\server\scripts\cloud-scripts]
Solr start commands:  solr start -p 7991 and solr start -p 7992  [from E:\SolrCloud-8-0-0\solr1\bin and E:\SolrCloud-8-0-0\solr2\bin]
Create the collection:
http://192.168.102.150:7991/solr/admin/collections?_=1554285992377&action=CREATE&autoAddReplicas=false&collection.configName=actionscomments&maxShardsPerNode=1&name=actionscomments&numShards=2&replicationFactor=1&router.name=compositeId&wt=json
Operating system: Windows Server 2008 R2 Standard

When I index a document, CPU goes high, and in the thread dump I noticed
commitScheduler-25-thread-2, commitScheduler-48-thread-2, and
commitScheduler-21-thread-2. After some time they are automatically removed and
CPU goes down.

In the log file I cannot find any error. I indexed the document using
AsiteSolrCloudManager.

I have attached my solrconfig.xml and schema.xml, and also the thread dump taken
from the Solr admin GUI.

Sent from Outlook<http://aka.ms/weboutlook>

From: Jörn Franke 
Sent: Monday, April 8, 2019 4:16 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.0.0 - CPU usage 100% when indexed documents

Can you please describe your scenario in detail?

What does your load process look like (custom module? how many threads?)?

How many files do you try to index? What is their format?
What does your Solr config look like?

How many cores do you have? What else is installed on the Solr server?

Which operating system?

What do the log files from Solr and ZooKeeper tell you?

What does the schema look like?


[Attachments: schema.xml and solrconfig.xml for the Solr 8.0.0 setup, plus the
thread dump. The XML markup was stripped by the mail archive, leaving only
scattered element values (for example the uniqueKey "id" and luceneMatchVersion
8.0.0), so the files are not reproducible here.]
Re: Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-09 Thread vishal patel
Hi,

After your suggestion I changed the code:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

String SOLR_URL = "http://localhost:7991/solr/actionscomments";
SolrClient solrClient = new HttpSolrClient.Builder(SOLR_URL).build();
SolrInputDocument document = new SolrInputDocument();
document.addField("id", "ACTC6401895");
solrClient.add(document);   // index one document
solrClient.commit();        // explicit hard commit
solrClient.close();

My CPU usage still went high. My CPU has 4 cores, and no other application is
running on the machine.

After a lot of trying, I found the issue below.
Before, in solrconfig.xml (6.1.0):

<autoCommit>
  <maxTime>60</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

After the below change in solrconfig.xml (8.0.0):

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

Actually, I am upgrading Solr 6.1.0 to 8.0.0. In 6.1.0 it works fine with
autoCommit maxTime 60, but in 8.0.0 CPU usage goes high [a commitScheduler
thread runs for a long time].

Please give me more details on why this happens in Solr 8.0.0. Is there some
mistake of mine? I attached solrconfig.xml in the previous mail, so please
verify it.


Sent from Outlook<http://aka.ms/weboutlook>

From: Shawn Heisey 
Sent: Tuesday, April 9, 2019 1:38 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.0.0 - CPU usage 100% when indexed documents

On 4/8/2019 11:00 PM, vishal patel wrote:
> Sorry my mistake there is no class of that.
>
> I have add the data using below code.
> CloudSolrServer cloudServer = new CloudSolrServer(zkHost);
> cloudServer.setDefaultCollection("actionscomments");
> cloudServer.setParallelUpdates(true);
> List docs = new ArrayList<>();
> SolrInputDocument solrDocument = new SolrInputDocument();
> solrDocument.addField("id", "123");
> docs.add(solrDocument);
> cloudServer.add(docs, 1000);

Side note:  This code is not using SolrJ 8.0.0.  CloudSolrServer was
deprecated in version 5.0.0 and completely removed in version 6.0.0.
I'm surprised this code even works at all with Solr 8.0.0 -- you need to
upgrade to SolrJ 8 and use CloudSolrClient.

How long does the system remain at 100 percent CPU when you index that
single document that only has one field?  If it's longer than a very
small fraction of a second, then my guess is that it's cache warming
queries using the CPU, not the indexing itself.

How many CPU cores are at 100 percent?  Is it just one, or multiple?  It
would be odd for it to be multiple, unless there is other activity going
on at the same time.

Thanks,
Shawn
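For reference, a SolrJ 8 version of that snippet using CloudSolrClient would look roughly like this; the ZooKeeper ensemble and collection name are the poster's, and 1000 is the same commitWithin value used above:

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

List<String> zkHosts = Arrays.asList("192.168.100.145:3181", "192.168.100.145:3182", "192.168.100.145:3183");
try (CloudSolrClient client = new CloudSolrClient.Builder(zkHosts, Optional.empty()).build()) {
  client.setDefaultCollection("actionscomments");
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "123");
  client.add(doc, 1000);   // commitWithin 1000 ms, instead of an explicit commit per document
}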


CPU usage goes high when indexing one document in Solr 8.0.0

2019-04-09 Thread vishal patel
I am upgrading Solr 6.1.0 to 8.0.0. When I indexed only one document in Solr
8.0.0, my CPU usage stayed high for some time even though indexing was done.
I noticed that my autoCommit maxTime was 60 in the solrconfig.xml of Solr 6.1.0,
and I used the same value in the solrconfig.xml of Solr 8.0.0.
After I replaced it with ${solr.autoCommit.maxTime:15000}, taken from the default
collection, it works fine and CPU usage does not stay high as long.

I have attached my solrconfig.xml from 6.1.0 and from 8.0.0. I have made some
changes in solrconfig.xml, so please tell me whether any further changes are
needed.

Sent from Outlook

[Attachments: solrconfig.xml from 6.1.0 and solrconfig.xml from 8.0.0. The XML
markup was stripped by the mail archive, leaving only scattered element values
(luceneMatchVersion 6.1.0 and 8.0.0, and the autoCommit settings discussed
above), so the files are not reproducible here.]

Shard and replica went down in Solr 6.1.0

2019-04-10 Thread vishal patel
I have 2 shards and 2 replicas on Solr 6.1.0. One shard and one replica went
down, and I got the below ERROR:

2019-04-08 12:54:01.469 INFO  (commitScheduler-131-thread-1) [c:products 
s:shard1 r:core_node1 x:product1] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@24b9127f[product1] main]
2019-04-08 12:54:01.468 INFO  (commitScheduler-110-thread-1) [c:product2 
s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy 
SolrDeletionPolicy.onCommit: commits: num=2
commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he5,generation=22541}
commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he6,generation=22542}
2019-04-08 12:54:01.556 INFO  (commitScheduler-110-thread-1) [c:product2 
s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy newest commit 
generation = 22542
2019-04-08 12:54:01.465 WARN (commitScheduler-136-thread-1) [c:product3 
s:shard1 r:core_node1 x:product3] o.a.s.c.SolrCore [product3] PERFORMANCE 
WARNING: Overlapping onDeckSearchers=2

2019-04-08 12:54:01.534 ERROR 
(updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
 x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
[c:product3 s:shard1 r:core_node1 x:product3] o.a.s.u.StreamingSolrClients error
org.apache.solr.common.SolrException: Service Unavailable

request: 
http://10.101.111.80:8983/solr/product3/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F10.102.119.85%3A8983%2Fsolr%2Fproduct3%2F&wt=javabin&version=2
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:320)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:185)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3/30175207.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Note: product1, product2, and product3 are my collections.

In my solrconfig.xml

<autoCommit>
  <maxTime>60</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<maxWarmingSearchers>2</maxWarmingSearchers>

Many documents were committed at that time, and I found many 
commitScheduler threads in the log.
Is it possible that Solr went down due to the warning PERFORMANCE WARNING: 
Overlapping onDeckSearchers=2?
Do I need to update my autoCommit or maxWarmingSearchers?

Sent from Outlook


Re: Solr 8.0.0 - CPU usage 100% when indexed documents

2019-04-10 Thread vishal patel
Thanks for your reply.

All 4 CPU cores went high for 12 to 15 seconds. We use Java 8.

I got your point. We will wait for Solr 8.1 rather than upgrading to OpenJDK 11.

Sent from Outlook<http://aka.ms/weboutlook>

From: Shawn Heisey 
Sent: Wednesday, April 10, 2019 9:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.0.0 - CPU usage 100% when indexed documents

On 4/9/2019 10:53 PM, vishal patel wrote:
> Still my CPU usage went high; my CPU has 4 cores and no other application is 
> running on my machine.

I was asking how many CPUs went to 100 percent, not how many CPUs you
have.  And I also asked how long CPU usage remains at 100 percent after
indexing a single document.

What Java version are you running?  We do have a possible bug that could
be affecting you.

https://issues.apache.org/jira/browse/SOLR-13349

If this is the problem you're experiencing, the solution would be to
either upgrade to Java 11 or wait for Solr 8.1 to be released.

Note that Oracle requires payment if you use their Java 11 in
production.  You're likely going to want to use OpenJDK.

Thanks,
Shawn


Re: CPU usage goes high when indexing one document in Solr 8.0.0

2019-04-10 Thread vishal patel
Thanks for your reply.

Yes, it is the same problem as SOLR-13349.

We will wait for Solr 8.1.

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Wednesday, April 10, 2019 8:41 PM
To: solr-user@lucene.apache.org
Subject: Re: CPU usage goes high when indexing one document in Solr 8.0.0

Possibly SOLR-13349?

> On Apr 9, 2019, at 11:41 PM, vishal patel  
> wrote:
>
> I am upgrading Solr 6.1.0 to 8.0.0. When I indexed only one document in Solr 
> 8.0.0, my CPU usage stayed high for some time even though indexing was done.
> I noticed that my autoCommit maxTime was 60 in the solrconfig.xml of Solr 
> 6.1.0, and I used the same value in the solrconfig.xml of Solr 8.0.0.
> After I replaced it with ${solr.autoCommit.maxTime:15000}, taken from the default 
> collection, it works fine and CPU usage does not stay high for long.
>
> I have attached my solrconfig.xml of 6.1.0 and 8.0.0. I have updated some 
> changes in solrconfig.xml, so please tell me whether any changes are 
> needed in it.
>
> Sent from Outlook
> 



Solr New version 8.1

2019-04-11 Thread vishal patel
Hi

Does anyone know the tentative release date of stable Solr 8.1?


Sent from Outlook


Re: Shard and replica went down in Solr 6.1.0

2019-04-11 Thread vishal patel
especially when openSearcher is false.

Not sure what’s really generating that error, take a look at all your other 
Solr logs to see if there’s a cause.

Best,
Erick


> On Apr 10, 2019, at 5:21 AM, vishal patel  
> wrote:
>
> I have 2 shards and 2 replicas in Solr 6.1.0. One shard and one replica went 
> down and I got the ERROR below.
>
> 2019-04-08 12:54:01.469 INFO  (commitScheduler-131-thread-1) [c:products 
> s:shard1 r:core_node1 x:product1] o.a.s.s.SolrIndexSearcher Opening 
> [Searcher@24b9127f[product1] main]
> 2019-04-08 12:54:01.468 INFO  (commitScheduler-110-thread-1) [c:product2 
> s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy 
> SolrDeletionPolicy.onCommit: commits: num=2
> commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he5,generation=22541}
> commit{dir=G:\SolrCloud\solr1\server\solr\product2\data\index.20180412060518798,segFN=segments_he6,generation=22542}
> 2019-04-08 12:54:01.556 INFO  (commitScheduler-110-thread-1) [c:product2 
> s:shard1 r:core_node1 x:product2] o.a.s.c.SolrDeletionPolicy newest commit 
> generation = 22542
> 2019-04-08 12:54:01.465 WARN (commitScheduler-136-thread-1) [c:product3 
> s:shard1 r:core_node1 x:product3] o.a.s.c.SolrCore [product3] PERFORMANCE 
> WARNING: Overlapping onDeckSearchers=2
>
> 2019-04-08 12:54:01.534 ERROR 
> (updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
>  x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
> [c:product3 s:shard1 r:core_node1 x:product3] o.a.s.u.StreamingSolrClients 
> error
> org.apache.solr.common.SolrException: Service Unavailable
>
> request: 
> http://10.101.111.80:8983/solr/product3/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F10.102.119.85%3A8983%2Fsolr%2Fproduct3%2F&wt=javabin&version=2
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:320)
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:185)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3/30175207.run(Unknown
>  Source)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> Note: product1, product2, and product3 are my collections.
>
> In my solrconfig.xml
> <autoCommit>
>   <maxTime>60</maxTime>
>   <maxDocs>2</maxDocs>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
> </autoSoftCommit>
>
> <maxWarmingSearchers>2</maxWarmingSearchers>
>
> Many documents were committed at that time, and I found many 
> commitScheduler threads in the log.
> Is it possible that Solr went down due to the warning PERFORMANCE WARNING: 
> Overlapping onDeckSearchers=2?
> Do I need to update my autoCommit or maxWarmingSearchers?
>
> Sent from Outlook<http://aka.ms/weboutlook>



Re: Shard and replica went down in Solr 6.1.0

2019-04-11 Thread vishal patel
Actually, in our application bulky documents need to be indexed, and at the same 
time we want to see those documents. So in production we keep autoCommit at 10 
minutes and autoSoftCommit at 1 second.
Is that OK?

Get Outlook for Android<https://aka.ms/ghei36>




From: Erick Erickson 
Sent: Thursday, April 11, 2019 10:23:00 PM
To: vishal patel
Subject: Re: Shard and replica went down in Solr 6.1.0

We’re not quite on the same page.

These config options will _not_ open any new searchers. Period. They’re not the 
source of your max warming searchers error. Therefore _somebody_ is issuing 
commits. You need to find out who and stop them.

Then change your settings in solrconfig to
1> remove maxDoc from autoCommit. It’s probably leading to useless work
2> I’d shorten my maxTime in autoCommit to, say, 1 minute. This isn’t very 
important, but 10 minutes is unnecessary
3> Change your autoSoftCommit to as long as your application can tolerate, say 
10 minutes if possible.
4> Find out who is issuing commits and stop them.

With these settings, unless you have outrageous “autowarmCount” settings in 
solrconfig.xml for the caches, you should not see any overlapping on deck 
searchers. I usually start with autowarmCount settings in the 10-16 range.
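
For reference, a cache entry along those lines might look like this in
solrconfig.xml (sizes are illustrative, not taken from your config):

<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="16"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="16"/>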

Best,
Erick

> On Apr 11, 2019, at 5:40 AM, vishal patel  
> wrote:
>
>
> Thanks Erick,
> I got your point. As per you, Solr will not go down due to the “performance 
> warning” and there is no need to change the maxDocs value. You talked about 
> the number of searchers, but in solrconfig.xml there is only 
> <maxWarmingSearchers>2</maxWarmingSearchers>.
>
> In production, we have 27 collections, 2 shards and 2 replicas, and 3 ZooKeepers, 
> and more than 3 documents are indexed within 10 minutes.
> How can I know how many searchers are open at a specific time?
> As per my understanding, a Solr searcher opens on a soft or hard commit. Am 
> I right? And my commit times are below
> <autoCommit>
>   <maxTime>60</maxTime>
>   <maxDocs>2</maxDocs>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
> </autoSoftCommit>
>
> I do not write any code for opening the Solr index searcher and cannot find 
> any error related to that.
>
> All my collections have the same configuration, meaning the same hard and soft 
> commit times for all 27 collections. Is it any issue if two or more 
> collections come up for a hard commit at the same time?
>
> Below, again send more and accurate log details.
>
> Shard-1 Log ::
> --
> 2019-04-08 12:54:01.395 INFO  (commitScheduler-102-thread-1) [c:collection1 
> s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2019-04-08 12:54:01.395 INFO  (commitScheduler-118-thread-1) [c:collection2 
> s:shard1 r:core_node1 x:collection2] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2019-04-08 12:54:01.395 INFO  (commitScheduler-110-thread-1) [c:collection3 
> s:shard1 r:core_node1 x:collection3] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2019-04-08 12:54:01.394 INFO  (commitScheduler-109-thread-1) [c:collection4 
> s:shard1 r:core_node1 x:collection4] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2019-04-08 12:54:01.394 INFO  (commitScheduler-99-thread-1) [c:collection5 
> s:shard1 r:core_node1 x:collection5] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
>
> 2019-04-08 12:54:01.405 ERROR 
> (updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product
>  x:product r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product) 
> [c:product s:shard1 r:core_node1 x:product] o.a.s.u.StreamingSolrClients error
> org.apache.solr.common.SolrException: Service Unavailable
>
>
>
> request: 
> http://10.101.111.80:8983/solr/product/update?update.distrib=FROMLEADER&distrib.from=http%3A%2F%2F10.102.119.85%3A8983%2Fsolr%2Fproduct%2F&wt=javabin&version=2
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:320)
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:185)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3/30175207.run(Unknown
>

Re: Shard and replica went down in Solr 6.1.0

2019-04-11 Thread vishal patel
Thanks for your reply.

Actually, all caches are removed in my solrconfig.xml, so the autowarm 
count does not matter for us.

And I read the link you gave about the hard commit and soft commit concepts.
My production scenario is index-heavy and query-heavy [Near Real Time],
so we set the hard commit maxTime to 10 minutes and the soft commit to -1 for each 
collection. [27 collections in production]
Is that OK for production? Please give some input.

And how can we tell from the log for which reason the shard is going down?

And please tell me the meaning of the below error, because I cannot find this 
type of error on any web page.

Shard-1-replica
-
2019-04-08 12:54:01.294 ERROR (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.u.p.DistributedUpdateProcessor Request says it is 
coming from leader, but we are the leader: 
update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2
2019-04-08 12:54:01.294 INFO  (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.u.p.LogUpdateProcessorFactory [product]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2}{}
 0 0
2019-04-08 12:54:01.295 ERROR (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Request says it is coming from leader, 
but we are the leader
at

Shard-1
--
2019-04-08 12:54:01.534 ERROR 
(updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
 x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
[c:product3 s:shard1 r:core_node1 x:product3] o.a.s.u.StreamingSolrClients error
org.apache.solr.common.SolrException: Service Unavailable


First we got the error in the replica, then in the shard.

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Friday, April 12, 2019 12:04 AM
To: vishal patel
Subject: Re: Shard and replica went down in Solr 6.1.0

Well, it leads to “too many on deck searchers”, obviously.

Here’s most of what you want to know about commits and the like: 
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

I always push back on any setting that opens a new searcher every second. You 
might as well set all your autowarm counts in solrconfig.xml to zero, 
autowarming isn’t doing you any good. You’re also creating a lot of garbage 
churn etc.

Set your soft commit to as long as you can, if you can set expectations like 
“it may take up to a minute before your changes are visible”.

> On Apr 11, 2019, at 10:09 AM, vishal patel  
> wrote:
>
> Actually, in our application bulky documents need to be indexed, and at the same 
> time we want to see those documents. So in production we keep autoCommit at 10 
> minutes and autoSoftCommit at 1 second.
> Is that OK?
>
> Get Outlook for Android
>
>
>
> From: Erick Erickson 
> Sent: Thursday, April 11, 2019 10:23:00 PM
> To: vishal patel
> Subject: Re: Shard and replica went down in Solr 6.1.0
>
> We’re not quite on the same page.
>
> These config options will _not_ open any new searchers. Period. They’re not 
> the source of your max warming searchers error. Therefore _somebody_ is 
> issuing commits. You need to find out who and stop them.
>
> Then change your settings in solrconfig to
> 1> remove maxDoc from autoCommit. It’s probably leading to useless work
> 2> I’d shorten my maxTime in autoCommit to, say, 1 minute. This isn’t very 
> important, but 10 minutes is unnecessary
> 3> Change your autoSoftCommit to as long as your application can tolerate, 
> say 10 minutes if possible.
> 4> Find out who is issuing commits and stop them.
>
> With these settings, unless you have outrageous “autowarmCount” settings in 
> solrconfig.xml for the caches, you should not see any overlapping on deck 
> searchers. I usually start with autowarmCount settings in the 10-16 range.
>
> Best,
> Erick
>
> > On Apr 11, 2019, at 5:40 AM, vishal patel  
> > wrote:
> >
> >
> > Thanks Erick,
> > I got your point. As per you, Solr will not go down due to the “performance 
> > warning” and there is no need to change the maxDocs value. You talked about 
> > the number of searchers, but in solrconfig.xml there is only 
> > <maxWarmingSearchers>2</maxWarmingSearchers>.
> >
> > In production, we have 27 collections, 2 shards and 2 replicas, and 3 ZooKeepers, 
> > and more than 3 documents are indexed within 10 minutes.
> > How can I know how many searchers are open at a specific time?
> > As per my understanding, a Solr searcher opens on a soft or hard commit. 
> > Am I right? And my commit times are below
> > <autoCommit>
> >   <maxTime>60</maxTime>
> >   <maxDocs>2</maxDocs>
> >

Re: Shard and replica went down in Solr 6.1.0

2019-04-13 Thread vishal patel
e3-n_01
2019-04-08 12:52:22.384 INFO  
(zkCallback-4-thread-202-processing-n:10.101.111.80:8983_solr) [c:product 
s:shard1 r:core_node3 x:product] o.a.s.c.ShardLeaderElectionContext I am the 
new leader: http://10.101.111.80:8983/solr/product/ shard1

2019-04-08 12:54:01.294 ERROR (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.u.p.DistributedUpdateProcessor Request says it is 
coming from leader, but we are the leader: 
update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2
2019-04-08 12:54:01.294 INFO  (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.u.p.LogUpdateProcessorFactory [product]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2}{}
 0 0
2019-04-08 12:54:01.295 ERROR (qtp1239731077-1022464) [c:product s:shard1 
r:core_node3 x:product] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Request says it is coming from leader, 
but we are the leader
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doDefensiveChecks(DistributedUpdateProcessor.java:621)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:392)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:320)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:679)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)


First, for which reason does the replica become the leader? And if the replica 
becomes the leader, then why, after 2 minutes, do documents come to the new leader 
as if it were still a replica? We index documents via ZooKeeper instead of 
directly on the shard.

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Friday, April 12, 2019 8:52 PM
To: vishal patel
Subject: Re: Shard and replica went down in Solr 6.1.0

What was unclear about the statement “These config options will _not_ open any 
new searchers. Period.”?

You _cannot_ be opening new searchers automatically, some client somewhere 
_must_ be issuing a commit. And you shouldn’t do that. Until you find the 
external source that’s issuing the commit, you’ll make no progress on this.

Is this an explicit enough recommendation?

1> find the source of the commits and stop doing that.
2> change your hard commit interval to 60 seconds. Keep openSearcher set to 
false.
3> change your soft commit to 60 seconds (this will imply that you can’t search 
docs for a minute); see the sketch after this list.
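
In solrconfig.xml, 2> and 3> would look roughly like this (a sketch using the
values above; not pulled from your actual config):

<autoCommit>
  <maxTime>60000</maxTime>           <!-- 60 seconds -->
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>60000</maxTime>           <!-- 60 seconds; docs become searchable about once a minute -->
</autoSoftCommit>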

This may or may not be the root of the “Request says it is coming from the 
leader” message. But it’s certainly a contributing factor. We can identify this 
positively as a bad configuration so let’s eliminate it and _then_ look for 
more issues.

Best,
Erick

> On Apr 11, 2019, at 10:36 PM, vishal patel  
> wrote:
>
> Thanks for your reply.
>
> Actually, all caches are removed in my solrconfig.xml, so the autowarm 
> count does not matter for us.
>
> And I read the link you gave about the hard commit and soft commit concepts.
> My production scenario is index-heavy and query-heavy [Near Real Time],
> so we set the hard commit maxTime to 10 minutes and the soft commit to -1 for each 
> collection. [27 collections in production]
> Is that OK for production? Please give some input.
>
> And how can we tell from the log for which reason the shard is going down?
>
> And please tell me the meaning of the below error, because I cannot find this 
> type of error on any web page.
>
> Shard-1-replica
> -
> 2019-04-08 12:54:01.294 ERROR (qtp1239731077-1022464) [c:product s:shard1 
> r:core_node3 x:product] o.a.s.u.p.DistributedUpdateProcessor Request says it 
> is coming from leader, but we are the leader: 
> update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2
> 2019-04-08 12:54:01.294 INFO  (qtp1239731077-1022464) [c:product s:shard1 
> r:core_node3 x:product] o.a.s.u.p.LogUpdateProcessorFactory [product]  
> webapp=/solr path=/update 
> params={update.distrib=FROMLEADER&distrib.from=http://10.102.119.85:8983/solr/product/&wt=javabin&version=2}{}
>  0 0
> 2019-04-08 12:54:01.295 ERROR (qtp1239731077-1022464) [c:product s:shard1 
> r:core_node3 x:product] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Request says it is coming from leader, 
> but we are the leader
> at
>
> Shard-1
> --
> 2019-04-08 12:54:01.534 ERROR 
> (updateExecutor-2-thread-36358-processing-http:10.101.111.80:8983//solr//product3
>  x:product3 r:core_node1 n:10.102.119.85:8983_solr s:shard1 c:product3) 
> [c:prod

Replica is going into recovery in Solr 6.1.0

2020-02-12 Thread vishal patel
I am using Solr 6.1.0, Java 8, and G1 GC in production. We have 2 
shards and each shard has 1 replica. Suddenly one replica goes into 
recovery mode and requests become slow in our production.
I have analyzed that the max minor GC pause time was 1 min 6 sec 800 ms at that 
time, and there were also multiple minor GC pauses.

My logs :
https://drive.google.com/file/d/158z3nzLsnHGouyRnXgfzCjwD4iadgKSp/view?usp=sharing
https://drive.google.com/file/d/1E4jyffvIWVJB7EeEMXBXyqaK2ZfAA8kk/view?usp=sharing

I do not know why the long GC pause times happened. On our platform heavy searching 
and indexing are performed.
Do long GC pause times happen due to searching or indexing?
If the GC pause time is long, then why does the replica go into recovery? Can we set 
the waiting time of an update request?
What is the minimum GC pause time for going into recovery mode?

Is this useful for my problem? : https://issues.apache.org/jira/browse/SOLR-9310

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>


Re: Replica is going into recovery in Solr 6.1.0

2020-02-12 Thread vishal patel
Is there anyone looking at this?

Sent from Outlook<http://aka.ms/weboutlook>

From: vishal patel 
Sent: Wednesday, February 12, 2020 3:45 PM
To: solr-user@lucene.apache.org 
Subject: Replica is going into recovery in Solr 6.1.0

I am using Solr 6.1.0, Java 8, and G1 GC in production. We have 2 
shards and each shard has 1 replica. Suddenly one replica goes into 
recovery mode and requests become slow in our production.
I have analyzed that the max minor GC pause time was 1 min 6 sec 800 ms at that 
time, and there were also multiple minor GC pauses.

My logs :
https://drive.google.com/file/d/158z3nzLsnHGouyRnXgfzCjwD4iadgKSp/view?usp=sharing
https://drive.google.com/file/d/1E4jyffvIWVJB7EeEMXBXyqaK2ZfAA8kk/view?usp=sharing

I do not know why the long GC pause times happened. On our platform heavy searching 
and indexing are performed.
Do long GC pause times happen due to searching or indexing?
If the GC pause time is long, then why does the replica go into recovery? Can we set 
the waiting time of an update request?
What is the minimum GC pause time for going into recovery mode?

Is this useful for my problem? : https://issues.apache.org/jira/browse/SOLR-9310

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>


Re: Replica is going into recovery in Solr 6.1.0

2020-02-12 Thread vishal patel
My configuration:

-XX:+AggressiveOpts -XX:ConcGCThreads=12 -XX:G1HeapRegionSize=33554432 
-XX:G1ReservePercent=20 -XX:InitialHeapSize=68719476736 
-XX:InitiatingHeapOccupancyPercent=10 -XX:+ManagementServer 
-XX:MaxHeapSize=68719476736 -XX:ParallelGCThreads=36 
-XX:+ParallelRefProcEnabled -XX:PrintFLSStatistics=1 -XX:+PrintGC 
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution 
-XX:ThreadStackSize=256 -XX:+UseG1GC -XX:-UseLargePages 
-XX:-UseLargePagesIndividualAllocation -XX:+UseStringDeduplication

Sent from Outlook<http://aka.ms/weboutlook>

From: Rajdeep Sahoo 
Sent: Thursday, February 13, 2020 10:03 AM
To: solr-user@lucene.apache.org 
Subject: Re: Replica is going into recovery in Solr 6.1.0

What is your memory configuration?

On Thu, 13 Feb, 2020, 9:46 AM vishal patel, 
wrote:

> Is there anyone looking at this?
>
> Sent from Outlook<http://aka.ms/weboutlook>
> ________
> From: vishal patel 
> Sent: Wednesday, February 12, 2020 3:45 PM
> To: solr-user@lucene.apache.org 
> Subject: Replica is going into recovery in Solr 6.1.0
>
> I am using Solr 6.1.0, Java 8, and G1 GC in production. We
> have 2 shards and each shard has 1 replica. Suddenly one replica goes
> into recovery mode and requests become slow in our production.
> I have analyzed that the max minor GC pause time was 1 min 6 sec 800 ms at
> that time, and there were also multiple minor GC pauses.
>
> My logs :
>
> https://drive.google.com/file/d/158z3nzLsnHGouyRnXgfzCjwD4iadgKSp/view?usp=sharing
>
> https://drive.google.com/file/d/1E4jyffvIWVJB7EeEMXBXyqaK2ZfAA8kk/view?usp=sharing
>
> I do not know why the long GC pause times happened. On our platform heavy
> searching and indexing are performed.
> Do long GC pause times happen due to searching or indexing?
> If the GC pause time is long, then why does the replica go into recovery? Can we
> set the waiting time of an update request?
> What is the minimum GC pause time for going into recovery mode?
>
> Is this useful for my problem? :
> https://issues.apache.org/jira/browse/SOLR-9310
>
> Regards,
> Vishal Patel
>
> Sent from Outlook<http://aka.ms/weboutlook>
>


Re: Replica is going into recovery in Solr 6.1.0

2020-02-12 Thread vishal patel
What GC are you using? -- G1GC

Sent from Outlook<http://aka.ms/weboutlook>


From: Walter Underwood 
Sent: Thursday, February 13, 2020 11:09 AM
To: solr-user@lucene.apache.org 
Subject: Re: Replica is going into recovery in Solr 6.1.0

Your JVM had very bad GC trouble. The 5 second GCs were enough to cause 
problems. The one minute GC is really, really bad. I’m not surprised the 
replica went down.

Look at the graphs for memory usage in the new and old spaces. It looks like it 
ran out. Maybe the heap is too small, but it might be something else.

What GC are you using?

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 12, 2020, at 8:16 PM, vishal patel  
> wrote:
>
> Is there anyone looking at this?
>
> Sent from Outlook<http://aka.ms/weboutlook>
> ____
> From: vishal patel 
> Sent: Wednesday, February 12, 2020 3:45 PM
> To: solr-user@lucene.apache.org 
> Subject: Replica is going into recovery in Solr 6.1.0
>
> I am using Solr 6.1.0, Java 8, and G1 GC in production. We have 
> 2 shards and each shard has 1 replica. Suddenly one replica goes into 
> recovery mode and requests become slow in our production.
> I have analyzed that the max minor GC pause time was 1 min 6 sec 800 ms at that 
> time, and there were also multiple minor GC pauses.
>
> My logs :
> https://drive.google.com/file/d/158z3nzLsnHGouyRnXgfzCjwD4iadgKSp/view?usp=sharing
> https://drive.google.com/file/d/1E4jyffvIWVJB7EeEMXBXyqaK2ZfAA8kk/view?usp=sharing
>
> I do not know why the long GC pause times happened. On our platform heavy 
> searching and indexing are performed.
> Do long GC pause times happen due to searching or indexing?
> If the GC pause time is long, then why does the replica go into recovery? Can we 
> set the waiting time of an update request?
> What is the minimum GC pause time for going into recovery mode?
>
> Is this useful for my problem? : https://issues.apache.org/jira/browse/SOLR-9310
>
> Regards,
> Vishal Patel
>
> Sent from Outlook<http://aka.ms/weboutlook>



Re: Replica is going into recovery in Solr 6.1.0

2020-02-13 Thread vishal patel
The total memory of the server is 256 GB, and the below applications run on this server:
Application1          50 GB
Application2          30 GB
Application3           8 GB
Application4           2 GB
Solr shard1           64 GB
Solr shard2 replica   64 GB

Note: Solr shard2 and the shard1 replica run on another server. Normally 35 to 
40 GB of memory is in constant use in one Solr instance, so we keep the 64 GB. We are 
using NRT.

How big are your indexes on disk? - [shard1: 115 GB, shard2 replica: 96 GB] 
[shard1 replica: 114 GB, shard2: 100 GB]
How many docs per replica?        - approx. 30959714 docs
How many replicas per host?       - One server has one shard and one 
replica.
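
(Adding those up as a sanity check from the figures above: 50 + 30 + 8 + 2 + 64 +
64 = 218 GB allocated, leaving roughly 38 GB of the 256 GB for the OS page cache,
against about 115 + 96 = 211 GB of index on that machine.)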

Regards,
Vishal

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Friday, February 14, 2020 4:00 AM
To: solr-user@lucene.apache.org 
Subject: Re: Replica is going into recovery in Solr 6.1.0

What Walter said. Also, you _must_ leave quite a bit of free RAM for the OS due 
to Lucene using MMapDirectory space, see:

https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

Basically until you can get your GC pauses under control, you’ll have an 
unstable collection.

How big are your indexes on disk? How many docs per replica? How many replicas 
per host?

Best,
Erick

> On Feb 13, 2020, at 5:16 PM, Walter Underwood  wrote:
>
> You have a 64GB heap. That is extremely unusual. You can only do that if the 
> instance has 80 GB or more of RAM. If you don’t have enough RAM, the JVM will 
> start using swap space and cause extremely long GC pauses.
>
> How much RAM do you have?
>
> How did you choose these GC settings?
>
> We have been using these settings with Java 8 in prod for three years with no 
> GC problems.
>
> SOLR_HEAP=8g
> # Use G1 GC  -- wunder 2017-01-23
> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> GC_TUNE=" \
> -XX:+UseG1GC \
> -XX:+ParallelRefProcEnabled \
> -XX:G1HeapRegionSize=8m \
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> “
>
> If you don’t have a very, very good reason for your GC settings, use these 
> instead.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>> On Feb 12, 2020, at 10:47 PM, vishal patel  
>> wrote:
>>
>> My configuration:
>>
>> -XX:+AggressiveOpts -XX:ConcGCThreads=12 -XX:G1HeapRegionSize=33554432 
>> -XX:G1ReservePercent=20 -XX:InitialHeapSize=68719476736 
>> -XX:InitiatingHeapOccupancyPercent=10 -XX:+ManagementServer 
>> -XX:MaxHeapSize=68719476736 -XX:ParallelGCThreads=36 
>> -XX:+ParallelRefProcEnabled -XX:PrintFLSStatistics=1 -XX:+PrintGC 
>> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps 
>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC 
>> -XX:+PrintTenuringDistribution -XX:ThreadStackSize=256 -XX:+UseG1GC 
>> -XX:-UseLargePages -XX:-UseLargePagesIndividualAllocation 
>> -XX:+UseStringDeduplication
>>
>> Sent from Outlook<http://aka.ms/weboutlook>
>> 
>> From: Rajdeep Sahoo 
>> Sent: Thursday, February 13, 2020 10:03 AM
>> To: solr-user@lucene.apache.org 
>> Subject: Re: Replica is going into recovery in Solr 6.1.0
>>
>> What is your memory configuration
>>
>> On Thu, 13 Feb, 2020, 9:46 AM vishal patel, 
>> wrote:
>>
>>> Is there anyone looking at this?
>>>
>>> Sent from Outlook<http://aka.ms/weboutlook>
>>> 
>>> From: vishal patel 
>>> Sent: Wednesday, February 12, 2020 3:45 PM
>>> To: solr-user@lucene.apache.org 
>>> Subject: Replica is going into recovery in Solr 6.1.0
>>>
>>> I am using Solr 6.1.0, Java 8, and G1 GC in production. We
>>> have 2 shards and each shard has 1 replica. Suddenly one replica goes
>>> into recovery mode and requests become slow in our production.
>>> I have analyzed that the max minor GC pause time was 1 min 6 sec 800 ms at
>>> that time, and there were also multiple minor GC pauses.
>>>
>>> My logs :
>>>
>>> https://drive.google.com/file/d/158z3nzLsnHGouyRnXgfzCjwD4iadgKSp/view?usp=sharing
>>>
>>> https://drive.google.com/file/d/1E4jyffvIWVJB7EeEMXBXyqaK2ZfAA8kk/view?usp=sharing
>>>
>>> I do not know why the long GC pause times happened. On our platform heavy
>>> searching and indexing are performed.
>>> Do long GC pause times happen due to searching or indexing?
>>> If the GC pause time is long, then why does the replica go into recovery? Can
>>> we set the waiting time of an update request?
>>> What is the minimum GC pause time for going into recovery mode?
>>>
>>> Is this useful for my problem? :
>>> https://issues.apache.org/jira/browse/SOLR-9310
>>>
>>> Regards,
>>> Vishal Patel
>>>
>>> Sent from Outlook<http://aka.ms/weboutlook>
>>>
>



SolrException in Solr 6.1.0

2020-03-05 Thread vishal patel
 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:282)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:214)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:169)
... 48 more
Caused by: java.nio.file.FileSystemException: 
E:\SolrCloud\solr1\server\solr\workflows\data\index\_8suj.fdx: Insufficient 
system resources exist to complete the requested service.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at 
java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:408)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:404)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
at 
org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:44)
at 
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:108)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
at 
org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
at 
org.apache.lucene.index.DefaultIndexingChain.initStoredFieldsWriter(DefaultIndexingChain.java:83)
at 
org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:331)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:368)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:232)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:449)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1492)
... 51 more

Can you suggest what the cause is? And how can we resolve it?

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>


Fw: SolrException in Solr 6.1.0

2020-03-08 Thread vishal patel
Is anyone looking at my issue?

Sent from Outlook<http://aka.ms/weboutlook>


From: vishal patel 
Sent: Friday, March 6, 2020 12:02 PM
To: solr-user@lucene.apache.org
Subject: SolrException in Solr 6.1.0

I got the below ERROR in the Solr 6.1.0 log:

2020-03-05 16:54:09.508 ERROR (qtp1239731077-468949) [c:workflows s:shard1 
r:core_node1 x:workflows] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Exception writing document id 
WF204878828_42970103 to the index; possible analysis error.
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:181)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:68)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:939)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1094)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:720)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:97)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:274)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:239)
at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:157)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)
at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is 
clo

JSP support not configured in Solr 8.3.0

2020-03-11 Thread vishal patel

I put the JSP at \server\solr-webapp\webapp\test.jsp. When I hit it in the browser 
using http://172.178.170.175:7999/solr/test.jsp, I got "HTTP ERROR 500 Problem 
accessing /solr/mem.jsp. Reason: JSP support not configured".

Is any jar required in \server\lib\? Or is any configuration needed for that?

Regards,
Vishal

Sent from Outlook


Query is taking a time in Solr 6.1.0

2020-03-13 Thread vishal patel
A query is taking a long time in Solr 6.1.0:

2020-03-12 11:05:36.752 INFO  (qtp1239731077-2513155) [c:documents s:shard1 
r:core_node1 x:documents] o.a.s.c.S.Request [documents]  webapp=/solr 
path=/select 
params={df=summary&distrib=false&fl=id&shards.purpose=4&start=0&fsv=true&sort=doc_ref+asc,id+desc&fq=&shard.url=s3.test.com:8983/solr/documents|s3r1.test.com:8983/solr/documents&rows=250&version=2&q=(doc_ref:((*n205*)+))+AND+(title:((*Distribution\+Board\+Schedule*)+))+AND+project_id:(2104616)+AND+is_active:true+AND+((isLatest:(true)+AND+isFolderActive:true+AND+isXref:false+AND+-document_type_id:(3+7)+AND+((is_public:true+OR+distribution_list:7249777+OR+folderadmin_list:7249777+OR+author_user_id:7249777)+AND+(((allowedUsers:(7249777)+OR+allowedRoles:(6368666)+OR+combinationUsers:(7249777))+AND+-blockedUsers:(7249777))+OR+(defaultAccess:(true)+AND+-blockedUsers:(7249777)+AND+-blockedRoles:(6368666)+OR+(isLatestRevPrivate:(true)+AND+allowedUsersForPvtRev:(7249777)+AND+-folderadmin_list:(7249777)))&shards.tolerant=true&NOW=1584011129462&isShard=true&wt=javabin}
 hits=0 status=0 QTime=7276.

Is there any way to reduce the query execution time (7276 ms)?


Re: Query is taking a time in Solr 6.1.0

2020-03-15 Thread vishal patel
How can I do the tokenizing differently?

Sent from Outlook<http://aka.ms/weboutlook>

From: Erik Hatcher 
Sent: Friday, March 13, 2020 6:20 PM
To: solr-user@lucene.apache.org 
Subject: Re: Query is taking a time in Solr 6.1.0

Looks like you have two, maybe three, wildcard/prefix clauses in there.  
Consider tokenizing differently so you can optimize the queries to not need 
wildcards - that's my first observation and suggestion.
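
For example, a fieldType along these lines (a rough sketch — the field name, gram
sizes, and copyField are assumptions, not taken from your schema) would let a
search for n205 match as a plain term instead of *n205*:

<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index every 3-10 character substring of each token -->
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="10"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="doc_ref_ngram" type="text_ngram" indexed="true" stored="false"/>
<copyField source="doc_ref" dest="doc_ref_ngram"/>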

Erik

> On Mar 13, 2020, at 05:56, vishal patel  wrote:
>
> A query is taking a long time in Solr 6.1.0:
>
> 2020-03-12 11:05:36.752 INFO  (qtp1239731077-2513155) [c:documents s:shard1 
> r:core_node1 x:documents] o.a.s.c.S.Request [documents]  webapp=/solr 
> path=/select 
> params={df=summary&distrib=false&fl=id&shards.purpose=4&start=0&fsv=true&sort=doc_ref+asc,id+desc&fq=&shard.url=s3.test.com:8983/solr/documents|s3r1.test.com:8983/solr/documents&rows=250&version=2&q=(doc_ref:((*n205*)+))+AND+(title:((*Distribution\+Board\+Schedule*)+))+AND+project_id:(2104616)+AND+is_active:true+AND+((isLatest:(true)+AND+isFolderActive:true+AND+isXref:false+AND+-document_type_id:(3+7)+AND+((is_public:true+OR+distribution_list:7249777+OR+folderadmin_list:7249777+OR+author_user_id:7249777)+AND+(((allowedUsers:(7249777)+OR+allowedRoles:(6368666)+OR+combinationUsers:(7249777))+AND+-blockedUsers:(7249777))+OR+(defaultAccess:(true)+AND+-blockedUsers:(7249777)+AND+-blockedRoles:(6368666)+OR+(isLatestRevPrivate:(true)+AND+allowedUsersForPvtRev:(7249777)+AND+-folderadmin_list:(7249777)))&shards.tolerant=true&NOW=1584011129462&isShard=true&wt=javabin}
>  hits=0 status=0 QTime=7276.
>
> Is there any way to reduce the query execution time(7276 Milli)?


Solr query Slow in Solr 6.1.0

2020-03-19 Thread vishal patel
I am using Solr 6.1.0. We have 2 shards and each has one replica. Our index 
size is very large.
I found that the position of a field in the query impacts performance.

If I make the below query, I get a slow response:

(doc_ref:((*KON\-N2*) )) AND (title:((*cdrl*) )) AND project_id:(2104616) AND 
is_active:true AND ((isLatest:(true) AND isFolderActive:true AND isXref:false 
AND -document_type_id:(3 7) AND ((is_public:true OR distribution_list:1 OR 
folderadmin_list:1 OR author_user_id:1) AND (((allowedUsers:(1) OR 
allowedRoles:( 6440215 6368478) OR combinationUsers:(1)) AND 
-blockedUsers:(1)) OR (defaultAccess:(true) AND -blockedUsers:(1) AND 
-blockedRoles:( 6440215 6368478) OR (isLatestRevPrivate:(true) AND 
allowedUsersForPvtRev:(1) AND -folderadmin_list:(1)))

If I move the (doc_ref:((*KON\-N2*) )) AND (title:((*cdrl*) )) part to the end, I 
get a faster response compared to the above:

project_id:(2104616) AND is_active:true AND ((isLatest:(true) AND 
isFolderActive:true AND isXref:false AND -document_type_id:(3 7) AND 
((is_public:true OR distribution_list:1 OR folderadmin_list:1 OR 
author_user_id:1) AND (((allowedUsers:(1) OR allowedRoles:( 6440215 
6368478) OR combinationUsers:(1)) AND -blockedUsers:(1)) OR 
(defaultAccess:(true) AND -blockedUsers:(1) AND -blockedRoles:( 6440215 
6368478) OR (isLatestRevPrivate:(true) AND allowedUsersForPvtRev:(1) 
AND -folderadmin_list:(1))) AND (doc_ref:((*KON\-N2*) )) AND 
(title:((*cdrl*) ))

Is that possible? How does Solr execute this query? Does the field sequence matter 
for performance?
I want to know the step-by-step Solr query execution, like a database query plan, 
so that I can arrange the fields for better performance.
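
For illustration, one restructuring to consider (a sketch, not something tried in
this thread): move the clauses that are constant across requests into fq filter
queries, which Solr caches independently of the scoring query, and keep only the
wildcard clauses in q:

q=(doc_ref:((*KON\-N2*) )) AND (title:((*cdrl*) ))
&fq=project_id:(2104616) AND is_active:true
&fq=isLatest:(true) AND isFolderActive:true AND isXref:false AND -document_type_id:(3 7)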

Regards,
Vishal

Sent from Outlook


Core name mismatch in Solr admin panel 8.3

2020-03-19 Thread vishal patel
I am upgrading Solr 6.1 to 8.3. I am creating a collection using the below API:
http://10.31.32.29:8983/solr/admin/collections?_=1578288589068&action=CREATE&autoAddReplicas=false&collection.configName=catalogue&maxShardsPerNode=2&name=catalogue&numShards=2&replicationFactor=1&router.name=compositeId&wt=json&createNodeSet=10.31.32.29:8983_solr,15.21.12.21:8983_solr&createNodeSet.shuffle=false&property.name=catalogue

My solr\catalogue_shard1_replica_n1\core.properties is below:

numShards=2
collection.configName=catalogue
name=catalogue
replicaType=NRT
shard=shard1
collection=catalogue
coreNodeName=core_node3


I found the core name shown as catalogue_shard1_replica_n1 when I looked in the Admin panel. 
Why does the core name mismatch between core.properties and the Admin panel?
I got the same name after the rename API call: 
10.31.32.29:8983/solr/admin/cores?action=RENAME&core=catalogue_shard1_replica_n1&other=catalogue

Actually, I want the core and the collection to have the same name. Can I do that at 
collection creation time?

Sent from Outlook


Re: Core name mismatch in Solr admin panel 8.3

2020-03-20 Thread vishal patel
I want to change the core name, not coreNodeName. Actually, I want to check the 
status of the core using 
http://10.31.32.29:8983/solr/admin/cores?action=STATUS&core=catalogue.
It is good for us if the core name and the collection name are the same. For this I 
give the core name with &property.name=catalogue at collection creation time. It is 
updated in core.properties as name=catalogue. But when I hit the URL 
http://10.31.32.29:8983/solr/admin/cores?action=STATUS&core=catalogue 
I cannot get the result.

After I renamed it using 
10.31.32.29:8983/solr/admin/cores?action=RENAME&core=catalogue_shard1_replica_n1&other=catalogue 
I got the proper result.
[After the rename there are no changes in core.properties, but in the Admin panel I 
can see the core name as catalogue.]

Regards,
Vishal


Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Friday, March 20, 2020 6:12 PM
To: solr-user@lucene.apache.org 
Subject: Re: Core name mismatch in Solr admin panel 8.3

coreNodeName is the name of the znode that contains the data for the replica, 
including the replica’s name. So the znode core_node3 contains a property 
“core” with a value of catalogue_shard1_replica_n1.

I’ve always found this a bit confusing, but that’s been true since very early 
days. Note that coreNodeName is invariant, whereas the core’s name can change. 
So for a replica to find its associated znode to know its role, an invariant 
is much better, thus it’s stored in core.properties.

The only real change from 6.x is that coreNodeName is now required in 
core.properties, so you noticed it.

I’d strongly recommend you do not try to change this behavior.

Best,
Erick

> On Mar 20, 2020, at 1:35 AM, vishal patel  
> wrote:
>
> I am upgrading Solr 6.1 to 8.3. I am creating a collection using the below API:
> http://10.31.32.29:8983/solr/admin/collections?_=1578288589068&action=CREATE&autoAddReplicas=false&collection.configName=catalogue&maxShardsPerNode=2&name=catalogue&numShards=2&replicationFactor=1&router.name=compositeId&wt=json&createNodeSet=10.31.32.29:8983_solr,15.21.12.21:8983_solr&createNodeSet.shuffle=false&property.name=catalogue
>
> My solr\catalogue_shard1_replica_n1\core.properties below:
>
> numShards=2
> collection.configName=catalogue
> name=catalogue
> replicaType=NRT
> shard=shard1
> collection=catalogue
> coreNodeName=core_node3
>
>
> I found the core name shown as catalogue_shard1_replica_n1 when I looked in the Admin 
> panel. Why does the core name mismatch between core.properties and the Admin panel?
> I got the same name after the rename API call: 
> 10.31.32.29:8983/solr/admin/cores?action=RENAME&core=catalogue_shard1_replica_n1&other=catalogue
>
> Actually, I want the core and the collection to have the same name. Can I do that 
> at collection creation time?
>
> Sent from Outlook<http://aka.ms/weboutlook>



Slow Query in Solr 8.3.0

2020-05-13 Thread vishal patel
I am upgrading Solr 6.1.0 to Solr 8.3.0.

I have created one forms collection with 2 shards in Solr 8.3.0. My schema file 
for the forms collection is the same as in Solr 6.1.0. The Solr config file is also the same.

I am executing the below URL:
http://193.268.300.145:8983/solr/forms/select?q=(+(doctype:Apps AND 
((allowed_roles:(2229130)) AND ((is_draft:true AND ((distribution_list:24 OR 
draft_form_all_org_allowed_roles:(2229130)) OR 
(draft_form_own_org_allowed_roles:(2229130) AND msg_distribution_org_list:13))) 
OR (is_draft:false AND is_public:true AND (is_controller_based:false OR 
msg_type_id:(1 3))) OR ((allowed_users:24) OR (is_draft:false AND 
(is_public:false OR is_controller_based:true) AND ((distribution_list:24 OR 
private_form_all_org_allowed_roles:(2229130)) OR 
(private_form_own_org_allowed_roles:(2229130) AND 
msg_distribution_org_list:13)) AND appType:2 AND is_formtype_active:true 
-status_id:(23) AND (is_draft:false OR msg_type_id:1) AND 
instance_group_id:(2289710) AND project_id:(2079453) AND locationId:(9696 
9694))) AND +msg_id:(10519539^3835 10519540^3834 10523575^3833 10523576^3832 
10523578^3831 10525740^3830 10527812^3829 10528779^3828 10528780^3827 
10530141^3826 10530142^3825 10530143^3824 10530147^3823 10525725^3822 
10525716^3821 10526659^3820 10526661^3819 10529460^3818 10529461^3817 
10530338^3816 10531331^3815 10521069^3814 10514233^3813 10514235^3812 
10514236^3811 10514818^3810 10518287^3809 10518289^3808 10518292^3807 
10518291^3806 10514823^3805 3117146^3804 3120673^3803 10116612^3802 
10117480^3801 10117641^3800 10117810^3799 10119703^3798 10128983^3797 
10229892^3796 10232225^3795 10233021^3794 10237712^3793 10237744^3792 
10239494^3791 10239499^3790 10239500^3789 10243233^3788 10243234^3787 
10305946^3786 10305977^3785 10305982^3784 10306994^3783 10306997^3782 
10306999^3781 10308101^3780 10308772^3779 10308804^3778 10309685^3777 
10309820^3776 10309821^3775 10310633^3774 10310634^3773 10311207^3772 
10311210^3771 10352946^3770 10352947^3769 10353164^3768 10353171^3767 
10353176^3766 10353956^3765 10354791^3764 10354792^3763 10354794^3762 
10354798^3761 10355333^3760 10355353^3759 10355406^3758 10355995^3757 
10356008^3756 10358933^3755 10358935^3754 10359420^3753 10359426^3752 
10421223^3751 10421224^3750 10421934^3749 10422864^3748 10422865^3747 
10426444^3746 10426446^3745 10428470^3744 10430357^3743 10430366^3742 
10431990^3741 10490422^3740 10490430^3739 10490742^3738 10490745^3737 
10491552^3736 10492344^3735 10492964^3734 10493965^3733 10494657^3732 
10494660^3731 3121708^3730 3122606^3729 3124424^3728 3125051^3727 3125782^3726 
3125793^3725 3127499^3724 3127600^3723 3127615^3722 3129535^3721 3131364^3720 
3131377^3719 3132062^3718 3133668^3717 3134414^3716 10131445^3715 10133209^3714 
10135640^3713 10136424^3712 10137129^3711 10137168^3710 10244270^3709 
10244324^3708 10244326^3707 10248136^3706 10248137^3705 10248138^3704 
10258595^3703 10259267^3702 10259966^3701 10259967^3700 10260700^3699 
10260701^3698 10262790^3697 10264386^3696 10264536^3695 10264961^3694 
10265098^3693 10265099^3692 10311754^3691 10312638^3690 10312639^3689 
10312640^3688 10313909^3687 10313910^3686 10314024^3685 10314659^3684 
10314691^3683 10314696^3682 10315395^3681 10315426^3680 10359451^3679 
10359835^3678 10361077^3677 10361085^3676 10361277^3675 10361289^3674 
10361824^3673 10362431^3672 10362434^3671 10363618^3670 10365316^3669 
10365322^3668 10365327^3667 10433969^3666 10435946^3665 10435963^3664 
10437695^3663 10437697^3662 10437698^3661 10437703^3660 10438761^3659 
10438763^3658 10439721^3657 10439728^3656 10496118^3655 10496281^3654 
10496289^3653 10496294^3652 10496296^3651 10496570^3650 10496582^3649 
10496626^3648 10497518^3647 10497522^3646 10497530^3645 10498717^3644 
10498722^3643 10499254^3642 10499256^3641 10500374^3640 10500382^3639 
10507062^3638 10507061^3637 3134424^3636 3135192^3635 3135284^3634 3135293^3633 
3139529^3632 3140767^3631 3141525^3630 3141681^3629 3141690^3628 3142537^3627 
3143664^3626 3144581^3625 3145417^3624 3145862^3623 3146382^3622 3147405^3621 
10299450^3620 10299963^3619 10341581^3618 10343393^3617 10344195^3616 
10345911^3615 10345920^3614 10345922^3613 10391985^3612 10391986^3611 
10392673^3610 10392686^3609 10393480^3608 10393483^3607 10394874^3606 
10395508^3605 10395512^3604 10473776^3603 10475382^3602 10476355^3601 
10477840^3600 10477844^3599 10478889^3598 10468310^3597 10468537^3596 
10469272^3595 10469273^3594 10470201^3593 10473703^3592 10085757^3591 
10086389^3590 10086431^3589 10087349^3588 10094016^3587 10094061^3586 
10094765^3585 10097194^3584 10098862^3583 10207359^3582 10207393^3581 
10207861^3580 10208045^3579 10208075^3578 10208077^3577 10212480^3576 
10213423^3575 10215714^3574 10221327^3573 10295572^3572 10298688^3571 
10299961^3570 10341080^3569 10343392^3568 10343395^3567 10344197^3566 
10345925^3565 10392685^3564 10393280^3563 10393486^3562 10394208^3561 
10394223^3560 10394883^3559 10395496^3558 10396041^3557 10475372^3556 
10475379^3

Re: Slow Query in Solr 8.3.0

2020-05-13 Thread vishal patel
Thanks for the reply.

Yes, the query is large, but our functionality is like this. And query length does 
not matter, because the same thing works fine in Solr 6.1.0.

Multi-valued return fields are not the issue in my case. If I pass a single return 
field (fl=id), it also takes time (34 seconds). But if I remove the group-related 
parameters (group=true&group.field=form_id&group.sort=msg_creation_date 
desc&group.main=true), it takes only 200 milliseconds.

Is there any change or issue with grouping in Solr 8.3.0? Because the same thing 
works fine in Solr 6.1.0.

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>

From: Houston Putman 
Sent: Wednesday, May 13, 2020 9:00 PM
To: solr-user@lucene.apache.org 
Subject: Re: Slow Query in Solr 8.3.0

Hey Vishal,

That's quite a large query. But I think the problem might be completely
unrelated. Are any of the return fields multi-valued? There was a major bug
(SOLR-14013 <https://issues.apache.org/jira/browse/SOLR-14013>) in
returning multi-valued fields that caused trivial queries to take around 30
seconds or more. You should be able to fix this by upgrading to 8.4, which
has the fix included, if you are in fact using multivalued fields.
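For anyone checking their schema for this, the fields SOLR-14013 concerns are 
the ones declared multi-valued and returned via fl, e.g. a definition along 
these lines (the field name here is illustrative only):

<field name="some_field" type="string" indexed="true" stored="true" multiValued="true"/>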

- Houston Putman

On Wed, May 13, 2020 at 7:02 AM vishal patel 
wrote:

> I am upgrading Solr 6.1.0 to Solr 8.3.0.
>
> I have created 2 shards and one form collection in Solr 8.3.0. My schema
> file of form collection is same as Solr 6.1.0. Also Solr config file is
> same.
>
> I am executing below URL
> http://193.268.300.145:8983/solr/forms/select?q=(+(doctype:Apps AND
> ((allowed_roles:(2229130)) AND ((is_draft:true AND ((distribution_list:24
> OR draft_form_all_org_allowed_roles:(2229130)) OR
> (draft_form_own_org_allowed_roles:(2229130) AND
> msg_distribution_org_list:13))) OR (is_draft:false AND is_public:true AND
> (is_controller_based:false OR msg_type_id:(1 3))) OR ((allowed_users:24) OR
> (is_draft:false AND (is_public:false OR is_controller_based:true) AND
> ((distribution_list:24 OR private_form_all_org_allowed_roles:(2229130)) OR
> (private_form_own_org_allowed_roles:(2229130) AND
> msg_distribution_org_list:13)) AND appType:2 AND
> is_formtype_active:true -status_id:(23) AND (is_draft:false OR
> msg_type_id:1) AND instance_group_id:(2289710) AND project_id:(2079453) AND
> locationId:(9696 9694))) AND +msg_id:(10519539^3835 10519540^3834
> 10523575^3833 10523576^3832 10523578^3831 10525740^3830 10527812^3829
> 10528779^3828 10528780^3827 10530141^3826 10530142^3825 10530143^3824
> 10530147^3823 10525725^3822 10525716^3821 10526659^3820 10526661^3819
> 10529460^3818 10529461^3817 10530338^3816 10531331^3815 10521069^3814
> 10514233^3813 10514235^3812 10514236^3811 10514818^3810 10518287^3809
> 10518289^3808 10518292^3807 10518291^3806 10514823^3805 3117146^3804
> 3120673^3803 10116612^3802 10117480^3801 10117641^3800 10117810^3799
> 10119703^3798 10128983^3797 10229892^3796 10232225^3795 10233021^3794
> 10237712^3793 10237744^3792 10239494^3791 10239499^3790 10239500^3789
> 10243233^3788 10243234^3787 10305946^3786 10305977^3785 10305982^3784
> 10306994^3783 10306997^3782 10306999^3781 10308101^3780 10308772^3779
> 10308804^3778 10309685^3777 10309820^3776 10309821^3775 10310633^3774
> 10310634^3773 10311207^3772 10311210^3771 10352946^3770 10352947^3769
> 10353164^3768 10353171^3767 10353176^3766 10353956^3765 10354791^3764
> 10354792^3763 10354794^3762 10354798^3761 10355333^3760 10355353^3759
> 10355406^3758 10355995^3757 10356008^3756 10358933^3755 10358935^3754
> 10359420^3753 10359426^3752 10421223^3751 10421224^3750 10421934^3749
> 10422864^3748 10422865^3747 10426444^3746 10426446^3745 10428470^3744
> 10430357^3743 10430366^3742 10431990^3741 10490422^3740 10490430^3739
> 10490742^3738 10490745^3737 10491552^3736 10492344^3735 10492964^3734
> 10493965^3733 10494657^3732 10494660^3731 3121708^3730 3122606^3729
> 3124424^3728 3125051^3727 3125782^3726 3125793^3725 3127499^3724
> 3127600^3723 3127615^3722 3129535^3721 3131364^3720 3131377^3719
> 3132062^3718 3133668^3717 3134414^3716 10131445^3715 10133209^3714
> 10135640^3713 10136424^3712 10137129^3711 10137168^3710 10244270^3709
> 10244324^3708 10244326^3707 10248136^3706 10248137^3705 10248138^3704
> 10258595^3703 10259267^3702 10259966^3701 10259967^3700 10260700^3699
> 10260701^3698 10262790^3697 10264386^3696 10264536^3695 10264961^3694
> 10265098^3693 10265099^3692 10311754^3691 10312638^3690 10312639^3689
> 10312640^3688 10313909^3687 10313910^3686 10314024^3685 10314659^3684
> 10314691^3683 10314696^3682 10315395^3681 10315426^3680 10359451^3679
> 10359835^3678 10361077^3677 10361085^3676 10361277^3675 10361289^3674
> 10361824^3673 10362431^3672 10362434^3671 10363618^3670 10365316^3669
> 10365322^3668 10365327^3667 10433

Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-14 Thread vishal patel
I am upgrading from Solr 6.1.0 to Solr 8.3.0 or Solr 8.5.1.

I see a performance issue during query execution in Solr 8.3.0 and Solr 8.5.1 
when one field in the query has a large number of values and a grouping field 
is applied.

My Solr URL : 
https://drive.google.com/file/d/1UqFE8I6M451Z1wWAu5_C1dzqYEOGjuH2/view
My Solr config and schema : 
https://drive.google.com/drive/folders/1pJBxL0OOwAJSEC5uK_87ikaHEVGdDEEn

It takes 34 seconds in Solr 8.3.0 or Solr 8.5.1. The same URL takes 1.5 seconds 
in Solr 6.1.0.

Are there any changes or known issues related to grouping in Solr 8.3.0 or 8.5.1?


Regards,
Vishal Patel



Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-15 Thread vishal patel
I have the query debug output for both versions, so it should be helpful.

Solr 6.1 query debug URL
https://drive.google.com/file/d/1ixqpgAXsVLDZA-aUobJLrMOOefZX2NL1/view
Solr 8.3.1 query debug URL
https://drive.google.com/file/d/1MOKVE-iPZFuzRnDZhY9V6OsAKFT38U5r/view

I indexed the same data in both versions.

I found score=1.0 in the results from Solr 8.3.0 and score=0.016147947 in the 
results from Solr 6.1.0. Does the score have any impact on query execution time? 
Why is score=1.0 in the results from Solr 8.3.0?

Regards,
Vishal Patel

From: vishal patel 
Sent: Thursday, May 14, 2020 7:39 PM
To: solr-user@lucene.apache.org 
Subject: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

I am upgrading from Solr 6.1.0 to Solr 8.3.0 or Solr 8.5.1.

I see a performance issue during query execution in Solr 8.3.0 and Solr 8.5.1 
when one field in the query has a large number of values and a grouping field 
is applied.

My Solr URL : 
https://drive.google.com/file/d/1UqFE8I6M451Z1wWAu5_C1dzqYEOGjuH2/view
My Solr config and schema : 
https://drive.google.com/drive/folders/1pJBxL0OOwAJSEC5uK_87ikaHEVGdDEEn

It takes 34 seconds in Solr 8.3.0 or Solr 8.5.1. The same URL takes 1.5 seconds 
in Solr 6.1.0.

Are there any changes or known issues related to grouping in Solr 8.3.0 or 8.5.1?


Regards,
Vishal Patel



Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-16 Thread vishal patel
Is anyone looking into my issue? Please help me.

Sent from Outlook<http://aka.ms/weboutlook>

From: vishal patel 
Sent: Friday, May 15, 2020 3:06 PM
To: solr-user@lucene.apache.org 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

I have the query debug output for both versions, so it should be helpful.

Solr 6.1 query debug URL
https://drive.google.com/file/d/1ixqpgAXsVLDZA-aUobJLrMOOefZX2NL1/view
Solr 8.3.1 query debug URL
https://drive.google.com/file/d/1MOKVE-iPZFuzRnDZhY9V6OsAKFT38U5r/view

I indexed the same data in both versions.

I found score=1.0 in the results from Solr 8.3.0 and score=0.016147947 in the 
results from Solr 6.1.0. Does the score have any impact on query execution time? 
Why is score=1.0 in the results from Solr 8.3.0?

Regards,
Vishal Patel

From: vishal patel 
Sent: Thursday, May 14, 2020 7:39 PM
To: solr-user@lucene.apache.org 
Subject: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

I am upgrading from Solr 6.1.0 to Solr 8.3.0 or Solr 8.5.1.

I see a performance issue during query execution in Solr 8.3.0 and Solr 8.5.1 
when one field in the query has a large number of values and a grouping field 
is applied.

My Solr URL : 
https://drive.google.com/file/d/1UqFE8I6M451Z1wWAu5_C1dzqYEOGjuH2/view
My Solr config and schema : 
https://drive.google.com/drive/folders/1pJBxL0OOwAJSEC5uK_87ikaHEVGdDEEn

It takes 34 seconds in Solr 8.3.0 or Solr 8.5.1. The same URL takes 1.5 seconds 
in Solr 6.1.0.

Are there any changes or known issues related to grouping in Solr 8.3.0 or 8.5.1?


Regards,
Vishal Patel



Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-16 Thread vishal patel
Thanks for the reply.

I have taken a thread dump at the time of query execution. I do not know the 
relevant thread name, so I am sending all of the threads. I have also sent the 
logs so you can get an idea.

Thread Dump All Stack Trace:
https://drive.google.com/file/d/1N4rVXJoaAwNvPIY2aw57gKA9mb4vRTMR/view
Solr 8.3 shard 1 log:
https://drive.google.com/file/d/1h5d_eZfQvYET7JKzbNKZwhZ_RmaX7hWf/view
Solr 8.3 shard 2 log:
https://drive.google.com/file/d/19CRflzQ7n5BZBNaaC7EFszgzKKlPfIVl/view

I have some questions regarding the thread dump:
- How can I find my thread name in the thread dump? Can I get it from the log?
- When should I take a thread dump: during query execution or after it?

Note: I took a thread name from the log and checked it in thread dumps taken 
both during and after query execution. The thread's stack trace was different 
each time.

If anything else is required, let me know and I will send it.

Regards,
Vishal Patel

From: Mikhail Khludnev 
Sent: Saturday, May 16, 2020 2:23 PM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

Can you check Thread Dump in Solr Admin while Solr 8.3 crunches query for
34 seconds? Please share the deepest thread stack. This might give a clue
what's going on there.

On Sat, May 16, 2020 at 11:46 AM vishal patel 
wrote:

> Any one is looking my issue? Please help me.
>
> Sent from Outlook<http://aka.ms/weboutlook>
> ________
> From: vishal patel 
> Sent: Friday, May 15, 2020 3:06 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
>
> I have result of query debug for both version so It will helpful.
>
> Solr 6.1 query debug URL
> https://drive.google.com/file/d/1ixqpgAXsVLDZA-aUobJLrMOOefZX2NL1/view
> Solr 8.3.1 query debug URL
> https://drive.google.com/file/d/1MOKVE-iPZFuzRnDZhY9V6OsAKFT38U5r/view
>
> I indexed same data in both version.
>
> I found score=1.0 in result of Solr 8.3.0 and score=0.016147947 in result
> of Solr 8.6.1. Is there any impact of score in query execution? why is
> score=1.0 in result of Solr 8.3.0?
>
> Regards,
> Vishal Patel
> 
> From: vishal patel 
> Sent: Thursday, May 14, 2020 7:39 PM
> To: solr-user@lucene.apache.org 
> Subject: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
>
> I am upgrading Solr 6.1.0 to Solr 8.3.0 or Solr 8.5.1.
>
> I get performance issue for query execution in Solr 8.3.0 or Solr 8.5.1
> when values of one field is large in query and group field is apply.
>
> My Solr URL :
> https://drive.google.com/file/d/1UqFE8I6M451Z1wWAu5_C1dzqYEOGjuH2/view
> My Solr config and schema :
> https://drive.google.com/drive/folders/1pJBxL0OOwAJSEC5uK_87ikaHEVGdDEEn<
> https://drive.google.com/drive/folders/1pJBxL0OOwAJSEC5uK_87ikaHEVGdDEEn>
>
> It takes 34 seconds in Solr 8.3.0 or Solr 8.5.1. Same URL takes 1.5
> seconds in Solr 6.1.0.
>
> Is there any changes or issue related to grouping in Solr 8.3.0 or 8.5.1?
>
>
> Regards,
> Vishal Patel
>
>

--
Sincerely yours
Mikhail Khludnev


Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-16 Thread vishal patel
Thanks for the reply.

I know the query field has a large number of values, but the same query works 
fine in Solr 6.1.0 and executes within 300 milliseconds. schema.xml and 
solrconfig.xml are the same. Why does execution take so much longer in Solr 8.3.0?

Did anything change in Solr 8.3.0?

Regards,
Vishal Patel

From: Mikhail Khludnev 
Sent: Saturday, May 16, 2020 6:55 PM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

It seems this thread is doing heavy work; mind the bottom line of the stack.

202.8013ms
124.8008ms
qtp153245266-156 (156)
org.apache.lucene.search.similarities.BM25Similarity$BM25Scorer.<init>(BM25Similarity.java:219)
org.apache.lucene.search.similarities.BM25Similarity.scorer(BM25Similarity.java:192)
org.apache.lucene.search.similarities.PerFieldSimilarityWrapper.scorer(PerFieldSimilarityWrapper.java:47)
org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:74)
org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:205)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:63)
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:231)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.TopFieldCollector.populateScores(TopFieldCollector.java:531)
org.apache.solr.search.grouping.distributed.command.TopGroupsFieldCommand.postCollect(TopGroupsFieldCommand.java:178)
org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:168)
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1403)
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:387)
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)


It seems like it ranks groups by query score, which is a doubtful thing to do.

From the log, here is how to recognize the query running 25 seconds: "QTime=25063".

The query itself, q=+msg_id:(10519539+10519540+10523575+10523576+ ..., is not 
what search engines are made for. They are built for short queries.

You may:

1. leverage the {!terms} query parser, which might handle such a long terms 
list more efficiently;

2. make sure you don't enable unnecessary grouping features, e.g. the group 
ranking in the stack above makes no sense for this kind of query.

It's also worth revamping the overall approach in favor of a query-time 
{!join} or an index-time join; see {!parent}/nested docs.
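For illustration, a minimal sketch of the {!terms} rewrite (field name and ids 
taken from the query in this thread; note that {!terms} takes a comma-separated 
list and builds a constant-score match, so the per-id ^boosts are lost):

q={!terms f=msg_id}10519539,10519540,10523575,10523576
&group=true&group.field=form_id

Because every id scores the same, the expensive per-term scoring work visible 
in the stack above is skipped.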



On Sat, May 16, 2020 at 1:46 PM vishal patel 
wrote:

> Thanks for reply.
>
> I have taken a thread dump at the time of query execution. I do not know
> the thread name so send the All threads. I have also send the logs so you
> can get idea.
>
> Thread Dump All Stack Trace:
> https://drive.google.com/file/d/1N4rVXJoaAwNvPIY2aw57gKA9mb4vRTMR/view
> Solr 8.3 shard 1 log:
> https://drive.google.com/file/d/1h5d_eZfQvYET7JKzbNKZwhZ_RmaX7hWf/view
> Solr 8.3 shard 2 log:
> https://drive.google.com/file/d/19CRflzQ7n5BZBNaaC7EFszgzKKlPfIVl/view
>
> I have some questions regarding the thread dump
> - How can I know the my thread name from thread dump? can I get from the
> log?
> - When do I take a thread dump? on query execution or after query
> execution?
>
> Note: I got a thread name from log and checked in thread dump on query
> execution time and after query executed. Both time thread stack trace got
> different.
>
> If any other things are required then let me know I will send.
>
> Regards,
> Vishal Patel
> 
> From: Mikhail Khludnev 
> Sent: Saturday, May 16, 2020 2:23 PM
> To: solr-user 
> Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
>
> Can you check Thread Dump in Solr Admin while Solr 8.3 crunches query for
> 34 seconds? Please share the deepest thread stack. This might give a clue
> what's going on there.
>
> On Sat, May 16, 2020 at 11:46 AM vishal patel <
> vishalpatel200...@outlook.com>
> wrote:
>
> > Any one is looking my issue? Please help me.
> >
> > Sent from Outlook<http://aka.ms/weboutlook>
> > 
> > From: vishal patel 
> > Sent: Friday, May 15, 2020 3:06 PM
> > To: solr-user@lucene.apache.org 
> > Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
> >
> > I have result of query debug for both version so It will helpful.
> >
> > Solr 6.1 query debug URL
> > https://drive.google.com/file/d/1ixqpgAXsVLDZA-aUobJLrMOOefZX2NL1/view
> > Solr 8.3.1 query debug URL
> > https://drive.google.com/file/d/1MOKVE-iPZFuzR

Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-16 Thread vishal patel
Solr 6.1.0 : 1881

Here are my thread dump stack trace and log for Solr 6.1.0; I hope they are 
helpful.
My threads: qtp557041912-245356 and qtp557041912-245342.
https://drive.google.com/file/d/1owtotYEnJacMiEZyuGLk3AHQ9kQG5rww/view?usp=sharing

Regards
Vishal Patel



From: vishal patel 
Sent: Sunday, May 17, 2020 11:04 AM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

Thanks for the reply.

I know the query field has a large number of values, but the same query works 
fine in Solr 6.1.0 and executes within 300 milliseconds. schema.xml and 
solrconfig.xml are the same. Why does execution take so much longer in Solr 8.3.0?

Did anything change in Solr 8.3.0?

Regards,
Vishal Patel

From: Mikhail Khludnev 
Sent: Saturday, May 16, 2020 6:55 PM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

It seems this thread is doing heavy work; mind the bottom line of the stack.

202.8013ms
124.8008ms
qtp153245266-156 (156)
org.apache.lucene.search.similarities.BM25Similarity$BM25Scorer.<init>(BM25Similarity.java:219)
org.apache.lucene.search.similarities.BM25Similarity.scorer(BM25Similarity.java:192)
org.apache.lucene.search.similarities.PerFieldSimilarityWrapper.scorer(PerFieldSimilarityWrapper.java:47)
org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:74)
org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:205)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:63)
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:231)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.TopFieldCollector.populateScores(TopFieldCollector.java:531)
org.apache.solr.search.grouping.distributed.command.TopGroupsFieldCommand.postCollect(TopGroupsFieldCommand.java:178)
org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:168)
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1403)
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:387)
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)


It seems like it ranks groups by query score, which is a doubtful thing to do.

From the log, here is how to recognize the query running 25 seconds: "QTime=25063".

The query itself, q=+msg_id:(10519539+10519540+10523575+10523576+ ..., is not 
what search engines are made for. They are built for short queries.

You may:

1. leverage the {!terms} query parser, which might handle such a long terms 
list more efficiently;

2. make sure you don't enable unnecessary grouping features, e.g. the group 
ranking in the stack above makes no sense for this kind of query.

It's also worth revamping the overall approach in favor of a query-time 
{!join} or an index-time join; see {!parent}/nested docs.



On Sat, May 16, 2020 at 1:46 PM vishal patel 
wrote:

> Thanks for reply.
>
> I have taken a thread dump at the time of query execution. I do not know
> the thread name so send the All threads. I have also send the logs so you
> can get idea.
>
> Thread Dump All Stack Trace:
> https://drive.google.com/file/d/1N4rVXJoaAwNvPIY2aw57gKA9mb4vRTMR/view
> Solr 8.3 shard 1 log:
> https://drive.google.com/file/d/1h5d_eZfQvYET7JKzbNKZwhZ_RmaX7hWf/view
> Solr 8.3 shard 2 log:
> https://drive.google.com/file/d/19CRflzQ7n5BZBNaaC7EFszgzKKlPfIVl/view
>
> I have some questions regarding the thread dump
> - How can I know the my thread name from thread dump? can I get from the
> log?
> - When do I take a thread dump? on query execution or after query
> execution?
>
> Note: I got a thread name from log and checked in thread dump on query
> execution time and after query executed. Both time thread stack trace got
> different.
>
> If any other things are required then let me know I will send.
>
> Regards,
> Vishal Patel
> 
> From: Mikhail Khludnev 
> Sent: Saturday, May 16, 2020 2:23 PM
> To: solr-user 
> Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
>
> Can you check Thread Dump in Solr Admin while Solr 8.3 crunches query for
> 34 seconds? Please share the deepest thread stack. This might give a clue
> what's going on there.
>
> On Sat, May 16, 2020 at 11:46 AM vishal patel <
> vishalpatel200...@outlook.com>
> wrote:
>
> > Any one is looking my issue? Please help me.
> >
> > Sent from Outlook<http://aka.ms/weboutlook>
> > 
> > From: vishal patel 
> > Sent: Friday, May 

Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

2020-05-18 Thread vishal patel
Is anyone looking into my issue? Because of this issue I cannot upgrade to Solr 8.3.0.
regards,
Vishal Patel

From: vishal patel 
Sent: Sunday, May 17, 2020 11:49 AM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

Solr 6.1.0 : 1881

Here are my thread dump stack trace and log for Solr 6.1.0; I hope they are 
helpful.
My threads: qtp557041912-245356 and qtp557041912-245342.
https://drive.google.com/file/d/1owtotYEnJacMiEZyuGLk3AHQ9kQG5rww/view?usp=sharing

Regards
Vishal Patel



From: vishal patel 
Sent: Sunday, May 17, 2020 11:04 AM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

Thanks for the reply.

I know the query field has a large number of values, but the same query works 
fine in Solr 6.1.0 and executes within 300 milliseconds. schema.xml and 
solrconfig.xml are the same. Why does execution take so much longer in Solr 8.3.0?

Did anything change in Solr 8.3.0?

Regards,
Vishal Patel

From: Mikhail Khludnev 
Sent: Saturday, May 16, 2020 6:55 PM
To: solr-user 
Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1

It seems this thread is doing heavy work; mind the bottom line of the stack.

202.8013ms
124.8008ms
qtp153245266-156 (156)
org.apache.lucene.search.similarities.BM25Similarity$BM25Scorer.<init>(BM25Similarity.java:219)
org.apache.lucene.search.similarities.BM25Similarity.scorer(BM25Similarity.java:192)
org.apache.lucene.search.similarities.PerFieldSimilarityWrapper.scorer(PerFieldSimilarityWrapper.java:47)
org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:74)
org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:205)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:63)
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:231)
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:726)
org.apache.lucene.search.TopFieldCollector.populateScores(TopFieldCollector.java:531)
org.apache.solr.search.grouping.distributed.command.TopGroupsFieldCommand.postCollect(TopGroupsFieldCommand.java:178)
org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:168)
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1403)
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:387)
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)


It seems like it ranks groups by query score, which is a doubtful thing to do.

From the log, here is how to recognize the query running 25 seconds: "QTime=25063".

The query itself, q=+msg_id:(10519539+10519540+10523575+10523576+ ..., is not 
what search engines are made for. They are built for short queries.

You may:

1. leverage the {!terms} query parser, which might handle such a long terms 
list more efficiently;

2. make sure you don't enable unnecessary grouping features, e.g. the group 
ranking in the stack above makes no sense for this kind of query.

It's also worth revamping the overall approach in favor of a query-time 
{!join} or an index-time join; see {!parent}/nested docs.



On Sat, May 16, 2020 at 1:46 PM vishal patel 
wrote:

> Thanks for reply.
>
> I have taken a thread dump at the time of query execution. I do not know
> the thread name so send the All threads. I have also send the logs so you
> can get idea.
>
> Thread Dump All Stack Trace:
> https://drive.google.com/file/d/1N4rVXJoaAwNvPIY2aw57gKA9mb4vRTMR/view
> Solr 8.3 shard 1 log:
> https://drive.google.com/file/d/1h5d_eZfQvYET7JKzbNKZwhZ_RmaX7hWf/view
> Solr 8.3 shard 2 log:
> https://drive.google.com/file/d/19CRflzQ7n5BZBNaaC7EFszgzKKlPfIVl/view
>
> I have some questions regarding the thread dump
> - How can I know the my thread name from thread dump? can I get from the
> log?
> - When do I take a thread dump? on query execution or after query
> execution?
>
> Note: I got a thread name from log and checked in thread dump on query
> execution time and after query executed. Both time thread stack trace got
> different.
>
> If any other things are required then let me know I will send.
>
> Regards,
> Vishal Patel
> 
> From: Mikhail Khludnev 
> Sent: Saturday, May 16, 2020 2:23 PM
> To: solr-user 
> Subject: Re: Performance issue in Query execution in Solr 8.3.0 and 8.5.1
>
> Can you check Thread Dump in Solr Admin while Solr 8.3 crunches query for
> 34 seconds? Please share the deepest thread stack. This might give a clue
> what's going on there.
>
> On Sat, May 16, 2020 at 11:46 AM vishal pate

Large query size in Solr 8.3.0

2020-05-19 Thread vishal patel

Which query parser is used when my query is this long?
My query is 
https://drive.google.com/file/d/1P609VQReKM0IBzljvG2PDnyJcfv1P3Dz/view
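(For checking: the parser is selected by defType, and the default is the 
standard lucene parser regardless of query length. Adding debugQuery=true to a 
request shows the parser that was used in the "QParser" entry of the debug 
section, e.g.:

.../solr/forms/select?q=msg_id:(10519539 10519540)&debugQuery=true

The collection name and ids above are just the ones from this thread.)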


Regards,
Vishal Patel


Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

2020-05-21 Thread vishal patel
Is anyone looking into this issue?
I am seeing the same issue.

Regards,
Vishal Patel




From: jay harkhani 
Sent: Wednesday, May 20, 2020 7:39 PM
To: solr-user@lucene.apache.org 
Subject: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

Hello,

I recently upgraded Solr from 6.1.0 to 8.5.1 and came across one issue. A query 
that has many ids (around 3000) and has grouping applied takes much more time to 
execute: in Solr 6.1.0 it takes 677ms, and in Solr 8.5.1 it takes 26090ms. While 
taking these readings we had the same Solr schema and the same number of records 
in both Solr versions.

Please refer below details for query, logs and thead dump (generate from Solr 
Admin while execute query).

Query : https://drive.google.com/file/d/1bavCqwHfJxoKHFzdOEt-mSG8N0fCHE-w/view

Logs and Thread dump stack trace
Solr 8.5.1 : 
https://drive.google.com/file/d/149IgaMdLomTjkngKHrwd80OSEa1eJbBF/view
Solr 6.1.0 : 
https://drive.google.com/file/d/13v1u__fM8nHfyvA0Mnj30IhdffW6xhwQ/view

To analyse further, we found that the query executes fast if we remove the 
grouping field or reduce the number of ids in the query. Did anything change in 
8.5.1 compared to 6.1.0? In 6.1.0, even a large number of ids along with 
grouping works fast.

Can someone please help isolate this issue?

Regards,
Jay Harkhani.


Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

2020-05-23 Thread vishal patel
Hi Jason

Thanks for the reply.

I have checked Jay's query using the "terms" query parser and it is really 
helpful to us. Executed with the "terms" query parser, the query returns within 
500 milliseconds even though grouping is applied.
Jay's Query : 
https://drive.google.com/file/d/1bavCqwHfJxoKHFzdOEt-mSG8N0fCHE-w/view

Actually, I want to apply the same approach to my query, but my field "msg_id" 
has boosts applied, and grouping is also used in my query.
I am also upgrading to Solr 8.5.1.


My query is : 
https://drive.google.com/file/d/1Op_Ja292Bcnv0Ijxw6VdAxvGlfsdczmS/view

The query above takes 30 seconds. How can I use the "terms" query parser in my 
query?

Regards,
Vishal Patel

From: Jason Gerlowski 
Sent: Friday, May 22, 2020 2:59 AM
To: solr-user@lucene.apache.org 
Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

Hi Jay,

I can't speak to why you're seeing a performance change between 6.x
and 8.x.  What I can suggest though is an alternative way of
formulating the query: you might get different performance if you run
your query using Solr's "terms" query parser:
https://lucene.apache.org/solr/guide/8_5/other-parsers.html#terms-query-parser
 It's not guaranteed to help, but there's a chance it'll work for you.
And knowing whether or not it helps might point others here towards
the cause of your slowdown.

Even if "terms" performs better for you, it's probably worth
understanding what's going on here of course.

Are all other queries running comparably?

Jason

On Thu, May 21, 2020 at 10:25 AM jay harkhani  wrote:
>
> Hello,
>
> Please refer below details.
>
> >Did you create Solrconfig.xml for the collection from scratch after 
> >upgrading and reindexing?
> Yes, We have created collection from scratch and also re-indexing.
>
> >Was it based on the latest template?
> Yes, It was as per latest template.
>
> >What happens if you reexecute the query?
> Not more visible difference. Minor change in milliseconds.
>
> >Are there other processes/containers running on the same VM?
> No
>
> >How much heap and how much total memory you have?
> My heap and total memory are same as Solr 6.1.0. heap memory 5 gb and total 
> memory 25gb. As per me there is no issue related to memory.
>
> >Maybe also you need to increase the corresponding caches in the config.
> We are not using cache in both version.
>
> Both version have same configuration.
>
> Regards,
> Jay Harkhani.
>
> 
> From: Jörn Franke 
> Sent: Thursday, May 21, 2020 7:05 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
>
> Did you create Solrconfig.xml for the collection from scratch after upgrading 
> and reindexing? Was it based on the latest template?
> If not then please try this. Maybe also you need to increase the 
> corresponding caches in the config.
>
> What happens if you reexecute the query?
>
> Are there other processes/containers running on the same VM?
>
> How much heap and how much total memory you have? You should only have a 
> minor fraction of the memory as heap and most of it „free“ (this means it is 
> used for file caches).
>
>
>
> > Am 21.05.2020 um 15:24 schrieb vishal patel :
> >
> > Any one is looking this issue?
> > I got same issue.
> >
> > Regards,
> > Vishal Patel
> >
> >
> >
> > 
> > From: jay harkhani 
> > Sent: Wednesday, May 20, 2020 7:39 PM
> > To: solr-user@lucene.apache.org 
> > Subject: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
> >
> > Hello,
> >
> > Currently I upgrade Solr version from 6.1.0 to 8.5.1 and come across one 
> > issue. Query which have more ids (around 3000) and grouping is applied 
> > takes more time to execute. In Solr 6.1.0 it takes 677ms and in Solr 8.5.1 
> > it takes 26090ms. While take reading we have same solr schema and same no. 
> > of records in both solr version.
> >
> > Please refer below details for query, logs and thead dump (generate from 
> > Solr Admin while execute query).
> >
> > Query : 
> > https://drive.google.com/file/d/1bavCqwHfJxoKHFzdOEt-mSG8N0fCHE-w/view
> >
> > Logs and Thread dump stack trace
> > Solr 8.5.1 : 
> > https://drive.google.com/file/d/149IgaMdLomTjkngKHrwd80OSEa1eJbBF/view
> > Solr 6.1.0 : 
> > https://drive.google.com/file/d/13v1u__fM8nHfyvA0Mnj30IhdffW6xhwQ/view
> >
> > To analyse further more we found that if we remove grouping field or we 
> > reduce no. of ids from query it execute fast. Is anything change in 8.5.1 
> > version compare to 6.1.0 as in 6.1.0 even for large no. Ids along with 
> > grouping it works faster?
> >
> > Can someone please help to isolate this issue.
> >
> > Regards,
> > Jay Harkhani.


Sorting in other collection in Solr 8.5.1

2020-05-23 Thread vishal patel
Hi

I am upgrading to Solr 8.5.1. I have created 2 shards, and each has one replica.
I have created 2 collections: one is form and the second is actionscomment. 
Form-related data is stored in the form collection, and the actions of those 
forms are stored in the actionscomment collection.
There are 10 lakh (1 million) documents in the form collection and 50 lakh 
(5 million) documents in the actionscomment collection.

form schema.xml
(the field definitions were stripped by the mail archive; judging from the rest 
of the thread they include id, title, form_creation_date, form_id and project_id)

actionscomment schema.xml
(field definitions stripped by the mail archive as well; judging from the rest 
of the thread they include id, title, form_id and project_id)

We show the form listing using the form and actionscomment collections, with 
only 250 records per listing page. The listing columns are id, title, form 
created date, and action names. id, title and form created date come from the 
form collection, and action names come from the actionscomment collection. We 
want to provide sorting for all columns. Sorting on id, title and form created 
date is easy because they are in the same collection.

For action name sorting, I execute 2 queries. First I query the actionscomment 
collection sorted by the title field and get the form_id list; then, using those 
form_ids, I query the form collection. But I do not get the proper sorting. 
Sometimes I get so many form_ids that my second query becomes very long.
How can I get data from the form collection in the same order as the form_id 
list that came from actionscomment?

Regards,
Vishal Patel


Sorting in other collection in Solr 8.5.1

2020-06-22 Thread vishal patel
Hi

I am upgrading to Solr 8.5.1. I have created 2 shards, and each has one replica.
I have created 2 collections: one is form and the second is actionscomment. 
Form-related data is stored in the form collection, and the actions of those 
forms are stored in the actionscomment collection.
There are 10 lakh (1 million) documents in the form collection and 50 lakh 
(5 million) documents in the actionscomment collection.

form schema.xml
(the field definitions were stripped by the mail archive; judging from the rest 
of the thread they include id, title, form_creation_date, form_id and project_id)

actionscomment schema.xml
(field definitions stripped by the mail archive as well; judging from the rest 
of the thread they include id, title, form_id and project_id)

We show the form listing using the form and actionscomment collections, with 
only 250 records per listing page. The listing columns are id, title, form 
created date, and action names. id, title and form created date come from the 
form collection, and action names come from the actionscomment collection. We 
want to provide sorting for all columns. Sorting on id, title and form created 
date is easy because they are in the same collection.

For action name sorting, I execute 2 queries. First I query the actionscomment 
collection sorted by the title field and get the form_id list; then, using those 
form_ids, I query the form collection. But I do not get the proper sorting. 
Sometimes I get so many form_ids that my second query becomes very long.
How can I get data from the form collection in the same order as the form_id 
list that came from actionscomment?

Regards,
Vishal Patel


Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

2020-06-22 Thread vishal patel
Is there any other option?

Sent from Outlook<http://aka.ms/weboutlook>

From: Mikhail Khludnev 
Sent: Sunday, May 24, 2020 3:24 AM
To: solr-user 
Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version

Unfortunately {!terms} doesn't let one ^boost terms.
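One possible workaround (a sketch only, not tested against this data): move the 
id matching into a filter query with {!terms}, which is cheap and unscored, and 
restore the intended order afterwards, since the ^boost values in the original 
query are simply a descending rank of the id list:

q=*:*
&fq={!terms f=msg_id}10519539,10519540,10523575,...
&group=true&group.field=form_id

The grouped results then have to be re-ordered on the client side (or sorted on 
some indexed field), because the boost-based ordering is gone.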

On Sat, May 23, 2020 at 10:13 AM vishal patel 
wrote:

> Hi Jason
>
> Thanks for reply.
>
> I have checked jay's query using "terms" query parser and it is really
> helpful to us. After execute using "terms" query parser it will come within
> a 500 milliseconds even though grouping is applied.
> Jay's Query :
> https://drive.google.com/file/d/1bavCqwHfJxoKHFzdOEt-mSG8N0fCHE-w/view
>
> Actually I want to apply same things in my query but my field "msg_id" is
> applied boost.group is also used in my query.
> I am also upgrading Solr 8.5.1.
>
>
> MY query is :
> https://drive.google.com/file/d/1Op_Ja292Bcnv0Ijxw6VdAxvGlfsdczmS/view
>
> I got 30 seconds for above query. How can I use the "terms" query parser
> in my query?
>
> Regards,
> Vishal Patel
> 
> From: Jason Gerlowski 
> Sent: Friday, May 22, 2020 2:59 AM
> To: solr-user@lucene.apache.org 
> Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
>
> Hi Jay,
>
> I can't speak to why you're seeing a performance change between 6.x
> and 8.x.  What I can suggest though is an alternative way of
> formulating the query: you might get different performance if you run
> your query using Solr's "terms" query parser:
>
> https://lucene.apache.org/solr/guide/8_5/other-parsers.html#terms-query-parser
>  It's not guaranteed to help, but there's a chance it'll work for you.
> And knowing whether or not it helps might point others here towards
> the cause of your slowdown.
>
> Even if "terms" performs better for you, it's probably worth
> understanding what's going on here of course.
>
> Are all other queries running comparably?
>
> Jason
>
> On Thu, May 21, 2020 at 10:25 AM jay harkhani 
> wrote:
> >
> > Hello,
> >
> > Please refer below details.
> >
> > >Did you create Solrconfig.xml for the collection from scratch after
> upgrading and reindexing?
> > Yes, We have created collection from scratch and also re-indexing.
> >
> > >Was it based on the latest template?
> > Yes, It was as per latest template.
> >
> > >What happens if you reexecute the query?
> > Not more visible difference. Minor change in milliseconds.
> >
> > >Are there other processes/containers running on the same VM?
> > No
> >
> > >How much heap and how much total memory you have?
> > My heap and total memory are same as Solr 6.1.0. heap memory 5 gb and
> total memory 25gb. As per me there is no issue related to memory.
> >
> > >Maybe also you need to increase the corresponding caches in the config.
> > We are not using cache in both version.
> >
> > Both version have same configuration.
> >
> > Regards,
> > Jay Harkhani.
> >
> > 
> > From: Jörn Franke 
> > Sent: Thursday, May 21, 2020 7:05 PM
> > To: solr-user@lucene.apache.org 
> > Subject: Re: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
> >
> > Did you create Solrconfig.xml for the collection from scratch after
> upgrading and reindexing? Was it based on the latest template?
> > If not then please try this. Maybe also you need to increase the
> corresponding caches in the config.
> >
> > What happens if you reexecute the query?
> >
> > Are there other processes/containers running on the same VM?
> >
> > How much heap and how much total memory you have? You should only have a
> minor fraction of the memory as heap and most of it „free“ (this means it
> is used for file caches).
> >
> >
> >
> > > Am 21.05.2020 um 15:24 schrieb vishal patel <
> vishalpatel200...@outlook.com>:
> > >
> > > Any one is looking this issue?
> > > I got same issue.
> > >
> > > Regards,
> > > Vishal Patel
> > >
> > >
> > >
> > > 
> > > From: jay harkhani 
> > > Sent: Wednesday, May 20, 2020 7:39 PM
> > > To: solr-user@lucene.apache.org 
> > > Subject: Query takes more time in Solr 8.5.1 compare to 6.1.0 version
> > >
> > > Hello,
> > >
> > > Currently I upgrade Solr version from 6.1.0 to 8.5.1 an

Re: Sorting in other collection in Solr 8.5.1

2020-06-24 Thread vishal patel
My listing looks like : 
https://drive.google.com/file/d/1Pw94topEJfarEA_P5JeiDOsHlvyACPK0/view
ID, Form Title and Created Date come from the form collection, while My Tasks 
comes from the actionscomments collection.
My listing fields map to schema fields as below:

collection name    Listing Field   Schema Field
forms              ID              id
forms              Form Title      title
forms              Created Date    form_creation_date
actionscomments    ID              id
actionscomments    My Tasks        title

The form collection holds form-related data and the actionscomments collection 
holds action-related data. One form can have many actions, so we keep actions 
in a separate collection.

My storage looks like below:

actionscomments collection data
 {
    "id":"ACTC10589345_656644",
    "title":"Respond",
    "form_id":"10252140",
    "project_id":"102578"
 },
 {
    "id":"ACTC10589345_709647",
    "title":"For Information",
    "form_id":"10252141",
    "project_id":"102578"
 }

form collection data
 {
    "id":"RFI019(2)",
    "title":"QA Test 01",
    "form_creation_date":"25-Jun-2018",
    "form_id":"10252140",
    "project_id":"102578"
 },
 {
    "id":"RFI011(3)",
    "title":"Test_RFI_Bolgs",
    "form_creation_date":"29-Nov-2017",
    "form_id":"10252141",
    "project_id":"102578"
 },
 {
    "id":"RFI015",
    "title":"Check",
    "form_creation_date":"20-Nov-2017",
    "form_id":"10252142",
    "project_id":"102579"
 }


As of now one form has only one action, but it is possible for one form to have 
many actions.
To get the listing for project_id:102578, we first sort ascending on the title 
field in actionscomments, which gives the form_ids in the order 10252141 then 
10252140.
Based on that ordered form_id list, we want the form collection data in the 
same order for project_id:102578.

I am expecting a listing with the form "Test_RFI_Bolgs" first and "QA Test 01" 
second.

How can we achieve this?
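One way to approximate this (a sketch reusing the boosting pattern from earlier 
in this thread, not a tested solution): boost the ordered form_ids in descending 
order in the second query and sort by score, e.g.

q=form_id:(10252141^2 10252140^1)
&fq=project_id:102578
&sort=score desc

This would return Test_RFI_Bolgs before QA Test 01, but it hits the same 
long-query cost discussed in the other threads once the form_id list grows large.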

Regards,
Vishal Patel

From: Erick Erickson 
Sent: Tuesday, June 23, 2020 5:07 PM
To: solr-user@lucene.apache.org 
Subject: Re: Sorting in other collection in Solr 8.5.1

You have two separate collections with dissimilar data, so what
does “sorting them in the same order” mean? Your example
sorts on title, so why can’t you sort them both on title? That won’t
work of course for any field that isn’t identical in both
collections.

These are actually pretty small collections. It sounds like you’re
doing what in SQL terms would be a sub-select. Have you considered
putting all the records (with different types) in the same collection
and using something like join queries or RerankQParser?

Don’t know how that fits into your model….

Best,
Erick
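For example, a cross-collection join along the lines Erick mentions might look 
like the sketch below (illustrative only; in SolrCloud, {!join} with fromIndex 
generally requires the "from" collection to be single-shard and replicated 
where the "to" collection lives):

/solr/forms/select?q={!join from=form_id to=form_id fromIndex=actionscomments}title:Respond&fq=project_id:102578

Note that {!join} filters rather than orders, so by itself it answers "which 
forms have a matching action" but not the sorting question.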

> On Jun 23, 2020, at 2:06 AM, vishal patel  
> wrote:
>
> Hi
>
> I am upgrading Solr 8.5.1. I have created 2 shards and each has one replica.
> I have created 2 collection one is form and second is actionscomment.forms 
> related data are stored in form collection and actions of that forms are 
> stored in actionscomment collection.
> There are 10 lakh documents in form and 50 lakh documents in actionscomment 
> collection.
>
> form schema.xml
>  multiValued="false" docValues="true"/>
>  docValues="true"/>
>  docValues="true"/>
>  omitNorms="true"/>
>  docValues="true"/>
>
> actionscomment schema.xml
>  multiValued="false" docValues="true"/>
>  docValues="true"/>
>  docValues="true"/>
> 
>  omitNorms="true"/>
>  docValues="true"/>
>  docValues="true"/>
>
>
>
> We are showing form listing using form and actionscomment collection. We are 
> showing only 250 records in form listing page. Our form listing columns are 
> id,title,form created date and action names. id,title,form created date and 
> action names come from form collection and action names come from 
> actionscomment collection. We want to give the sorting functionality for all 
> columns.It is easy to sort id, title and form created date because it is in 
> same collection.
>
> For action name sorting, I execute 2 query. First I execute query in 
> actionscomment collection with sort field title and get the form_id list and 
> using those form_ids I execute in form collection. But I do not get the 
> proper sorting. Sometimes I got so many form ids and my second query length 
> becomes larger.
> How can I get data from form collection same as order of form id list came 
> from actionscomment?
>
> Regards,
> Vishal Patel
> <http://aka.ms/weboutlook>



Re: Sorting in other collection in Solr 8.5.1

2020-06-24 Thread vishal patel
Please ignore this URL : 
https://drive.google.com/file/d/1Pw94topEJfarEA_P5JeiDOsHlvyACPK0/view

My listing looks like : 
https://drive.google.com/file/d/1K51qUrrdbTV65Qh83ZQD5Dm94XNte7xa/view


Sent from Outlook<http://aka.ms/weboutlook>
________
From: vishal patel 
Sent: Wednesday, June 24, 2020 1:33 PM
To: solr-user@lucene.apache.org 
Subject: Re: Sorting in other collection in Solr 8.5.1

My listing looks like : 
https://drive.google.com/file/d/1Pw94topEJfarEA_P5JeiDOsHlvyACPK0/view
ID, Form Title and Created Date come from the form collection, while My Tasks 
comes from the actionscomments collection.
My listing fields map to schema fields as below:

collection name    Listing Field   Schema Field
forms              ID              id
forms              Form Title      title
forms              Created Date    form_creation_date
actionscomments    ID              id
actionscomments    My Tasks        title

The form collection holds form-related data and the actionscomments collection 
holds action-related data. One form can have many actions, so we keep actions 
in a separate collection.

My storage looks like below:

actionscomments collection data
 {
    "id":"ACTC10589345_656644",
    "title":"Respond",
    "form_id":"10252140",
    "project_id":"102578"
 },
 {
    "id":"ACTC10589345_709647",
    "title":"For Information",
    "form_id":"10252141",
    "project_id":"102578"
 }

form collection data
 {
    "id":"RFI019(2)",
    "title":"QA Test 01",
    "form_creation_date":"25-Jun-2018",
    "form_id":"10252140",
    "project_id":"102578"
 },
 {
    "id":"RFI011(3)",
    "title":"Test_RFI_Bolgs",
    "form_creation_date":"29-Nov-2017",
    "form_id":"10252141",
    "project_id":"102578"
 },
 {
    "id":"RFI015",
    "title":"Check",
    "form_creation_date":"20-Nov-2017",
    "form_id":"10252142",
    "project_id":"102579"
 }


As of now one form has only one action, but it is possible for one form to have 
many actions.
To get the listing for project_id:102578, we first sort ascending on the title 
field in actionscomments, which gives the form_ids in the order 10252141 then 
10252140.
Based on that ordered form_id list, we want the form collection data in the 
same order for project_id:102578.

I am expecting a listing with the form "Test_RFI_Bolgs" first and "QA Test 01" 
second.

How can we achieve this?

Regards,
Vishal Patel

From: Erick Erickson 
Sent: Tuesday, June 23, 2020 5:07 PM
To: solr-user@lucene.apache.org 
Subject: Re: Sorting in other collection in Solr 8.5.1

You have two separate collections with dissimilar data, so what
does “sorting them in the same order” mean? Your example
sorts on title, so why can’t you sort them both on title? That won’t
work of course for any field that isn’t identical in both
collections.

These are actually pretty small collections. It sounds like you’re
doing what in SQL terms would be a sub-select. Have you considered
putting all the records (with different types) in the same collection
and using something like join queries or RerankQParser?

Don’t know how that fits into your model….

Best,
Erick

> On Jun 23, 2020, at 2:06 AM, vishal patel  
> wrote:
>
> Hi
>
> I am upgrading Solr 8.5.1. I have created 2 shards and each has one replica.
> I have created 2 collection one is form and second is actionscomment.forms 
> related data are stored in form collection and actions of that forms are 
> stored in actionscomment collection.
> There are 10 lakh documents in form and 50 lakh documents in actionscomment 
> collection.
>
> form schema.xml
>  multiValued="false" docValues="true"/>
>  docValues="true"/>
>  docValues="true"/>
>  omitNorms="true"/>
>  docValues="true"/>
>
> actionscomment schema.xml
>  multiValued="false" docValues="true"/>
>  docValues="true"/>
>  docValues="true"/>
> 
>  omitNorms="true"/>
>  docValues="true"/>
>  docValues="true"/>
>
>
>
> We are showing form listing using form and actionscomment collection. We are 
&

Replica goes into recovery mode in Solr 6.1.0

2020-07-06 Thread vishal patel
I am using Solr 6.1.0, Java 8 and G1GC in production. We have 2 shards and each 
shard has 1 replica. We have 3 collections.
We do not use any caches (they are disabled in solrconfig.xml). Search and 
update requests come in frequently on our live platform.

*Our commit configuration in solrconfig.xml is below (the XML markup was 
stripped by the mail archive; reconstructed here with the values exactly as 
they survived):

<autoCommit>
   <maxTime>60</maxTime>
   <maxDocs>2</maxDocs>
   <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

*We use Near Real Time searching, so we set the following in solr.in.cmd:
set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100

*Our collection details are below:

Collection    Shard1 Docs / GB   Shard1 Replica Docs / GB   Shard2 Docs / GB   Shard2 Replica Docs / GB
collection1   26913364 / 201     26913379 / 202             26913380 / 198     26913379 / 198
collection2   13934360 / 310     13934367 / 310             13934368 / 219     13934367 / 219
collection3   351539689 / 73.5   351540040 / 73.5           351540136 / 75.2   351539722 / 75.2

*My server configurations are below:

CPU (both servers): Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 Mhz, 10 Core(s), 20 Logical Processor(s)

                                           Server1             Server2
HardDisk                                   3845 GB (3.84 TB)   3485 GB (3.48 TB)
Total memory(GB)                           320                 320
Shard1 Allocated memory(GB)                55                  -
Shard2 Replica Allocated memory(GB)        55                  -
Shard2 Allocated memory(GB)                -                   55
Shard1 Replica Allocated memory(GB)        -                   55
Other Applications Allocated Memory(GB)    60                  22
Other Number Of Applications               11                  7


Sometimes one of the replicas goes into recovery mode. Why does a replica go 
into recovery: heavy search load, heavy update/insert load, or long GC pause 
times? If it is one of those, what should we change in the configuration?
Should we add more shards to address the recovery issue?

Regards,
Vishal Patel



Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-07 Thread vishal patel
Is anyone looking into my issue? Please guide me.

Regards,
Vishal Patel



From: vishal patel 
Sent: Monday, July 6, 2020 7:11 PM
To: solr-user@lucene.apache.org 
Subject: Replica goes into recovery mode in Solr 6.1.0

I am using Solr 6.1.0, Java 8 and G1GC in production. We have 2 shards and each 
shard has 1 replica. We have 3 collections.
We do not use any caches (they are disabled in solrconfig.xml). Search and 
update requests come in frequently on our live platform.

*Our commit configuration in solrconfig.xml is below (the XML markup was 
stripped by the mail archive; reconstructed here with the values exactly as 
they survived):

<autoCommit>
   <maxTime>60</maxTime>
   <maxDocs>2</maxDocs>
   <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

*We use Near Real Time searching, so we set the following in solr.in.cmd:
set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100

*Our collection details are below:

Collection    Shard1 Docs / GB   Shard1 Replica Docs / GB   Shard2 Docs / GB   Shard2 Replica Docs / GB
collection1   26913364 / 201     26913379 / 202             26913380 / 198     26913379 / 198
collection2   13934360 / 310     13934367 / 310             13934368 / 219     13934367 / 219
collection3   351539689 / 73.5   351540040 / 73.5           351540136 / 75.2   351539722 / 75.2

*My server configurations are below:

CPU (both servers): Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 Mhz, 10 Core(s), 20 Logical Processor(s)

                                           Server1             Server2
HardDisk                                   3845 GB (3.84 TB)   3485 GB (3.48 TB)
Total memory(GB)                           320                 320
Shard1 Allocated memory(GB)                55                  -
Shard2 Replica Allocated memory(GB)        55                  -
Shard2 Allocated memory(GB)                -                   55
Shard1 Replica Allocated memory(GB)        -                   55
Other Applications Allocated Memory(GB)    60                  22
Other Number Of Applications               11                  7


Sometimes one of the replicas goes into recovery mode. Why does a replica go 
into recovery: heavy search load, heavy update/insert load, or long GC pause 
times? If it is one of those, what should we change in the configuration?
Should we add more shards to address the recovery issue?

Regards,
Vishal Patel



Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-07 Thread vishal patel
Thanks for your reply.

One server has 320GB of RAM in total. It hosts 2 Solr nodes: one is shard1 and 
the second is the shard2 replica. Each Solr node has 55GB of memory allocated. 
shard1 has 585GB of data and the shard2 replica has 492GB, so there is almost 
1TB of data on this server. The server also runs other applications with 60GB 
of memory allocated to them, so about 150GB of memory is left.

Proper formatting details:
https://drive.google.com/file/d/1K9JyvJ50Vele9pPJCiMwm25wV4A6x4eD/view

Are you running multiple huge JVMs?
>> Not huge, but 60GB of memory is allocated to our 11 applications. 150GB of 
>> memory is still free.

The servers will be doing a LOT of disk IO, so look at the read and write iops. 
I expect that the solr processes are blocked on disk reads almost all the time.
>> Is there a chance of going into recovery mode if IO reads and writes are 
>> heavy or blocked?

"-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
>> Our requirement is NRT, so we keep the time low.

Regards,
Vishal Patel

From: Walter Underwood 
Sent: Tuesday, July 7, 2020 8:15 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

This isn’t a support list, so nobody looks at issues. We do try to help.

It looks like you have 1 TB of index on a system with 320 GB of RAM.
I don’t know what "Shard1 Allocated memory” is, but maybe half of
that RAM is used by JVMs or some other process, I guess. Are you
running multiple huge JVMs?

The servers will be doing a LOT of disk IO, so look at the read and
write iops. I expect that the solr processes are blocked on disk reads
almost all the time.

"-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
That is probably causing your outages.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 7, 2020, at 5:18 AM, vishal patel  
> wrote:
>
> Any one is looking my issue? Please guide me.
>
> Regards,
> Vishal Patel
>
>
> 
> From: vishal patel 
> Sent: Monday, July 6, 2020 7:11 PM
> To: solr-user@lucene.apache.org 
> Subject: Replica goes into recovery mode in Solr 6.1.0
>
> I am using Solr version 6.1.0, Java 8 version and G1GC on production. We have 
> 2 shards and each shard has 1 replica. We have 3 collection.
> We do not use any cache and also disable in Solr config.xml. Search and 
> Update requests are coming frequently in our live platform.
>
> *Our commit configuration in solr.config are below
> 
> 60
>   2
>   false
> 
> 
>   ${solr.autoSoftCommit.maxTime:-1}
> 
>
> *We used Near Real Time Searching So we did below configuration in solr.in.cmd
> set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100
>
> *Our collections details are below:
>
> Collection  Shard1  Shard1 Replica  Shard2  Shard2 Replica
> Number of Documents Size(GB)Number of Documents Size(GB)  
>   Number of Documents Size(GB)Number of Documents Size(GB)
> collection1 26913364201 26913379202 26913380  
>   198 26913379198
> collection2 13934360310 13934367310 13934368  
>   219 13934367219
> collection3 351539689   73.5351540040   73.5351540136 
>   75.2351539722   75.2
>
> *My server configurations are below:
>
>Server1 Server2
> CPU Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 Mhz, 10 Core(s), 20 
> Logical Processor(s)Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 
> Mhz, 10 Core(s), 20 Logical Processor(s)
> HardDisk(GB)3845 ( 3.84 TB) 3485 GB (3.48 TB)
> Total memory(GB)320 320
> Shard1 Allocated memory(GB) 55
> Shard2 Replica Allocated memory(GB) 55
> Shard2 Allocated memory(GB) 55
> Shard1 Replica Allocated memory(GB) 55
> Other Applications Allocated Memory(GB) 60  22
> Other Number Of Applications11  7
>
>
> Sometimes, any one replica goes into recovery mode. Why replica goes into 
> recovery? Due to heavy search OR heavy update/insert OR long GC pause time? 
> If any one of them then what should we do in configuration?
> Should we increase the shard for recovery issue?
>
> Regards,
> Vishal Patel
>



Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-08 Thread vishal patel
Actually, I had shown our collection details in a spreadsheet-style table, but 
the formatting seems to have been stripped here.

You can see them properly formatted at 
https://drive.google.com/file/d/1K9JyvJ50Vele9pPJCiMwm25wV4A6x4eD/view

Regards,
Vishal Patel

From: Rodrigo Oliveira 
Sent: Wednesday, July 8, 2020 4:23 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

Hi,

How did you produce this summary? Is there a command for it?


*Our collection details are below:

Collection    Shard1 Docs / GB   Shard1 Replica Docs / GB   Shard2 Docs / GB   Shard2 Replica Docs / GB
collection1   26913364 / 201     26913379 / 202             26913380 / 198     26913379 / 198
collection2   13934360 / 310     13934367 / 310             13934368 / 219     13934367 / 219
collection3   351539689 / 73.5   351540040 / 73.5           351540136 / 75.2   351539722 / 75.2



On Mon, Jul 6, 2020 at 10:41, vishal patel 
wrote:

> I am using Solr version 6.1.0, Java 8 version and G1GC on production. We
> have 2 shards and each shard has 1 replica. We have 3 collection.
> We do not use any cache and also disable in Solr config.xml. Search and
> Update requests are coming frequently in our live platform.
>
> *Our commit configuration in solr.config are below
> 
> 60
>2
>false
> 
> 
>${solr.autoSoftCommit.maxTime:-1}
> 
>
> *We used Near Real Time Searching So we did below configuration in
> solr.in.cmd
> set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100
>
> *Our collections details are below:
>
> Collection    Shard1 Docs / Size(GB)   Shard1 Replica Docs / Size(GB)   Shard2 Docs / Size(GB)   Shard2 Replica Docs / Size(GB)
> collection1   26913364 / 201           26913379 / 202                   26913380 / 198           26913379 / 198
> collection2   13934360 / 310           13934367 / 310                   13934368 / 219           13934367 / 219
> collection3   351539689 / 73.5         351540040 / 73.5                 351540136 / 75.2         351539722 / 75.2
>
> *My server configurations are below:
>
> CPU (both servers): Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 Mhz, 10 Core(s), 20 Logical Processor(s)
>
>                                           Server1          Server2
> HardDisk(GB)                              3845 (3.84 TB)   3485 (3.48 TB)
> Total memory(GB)                          320              320
> Shard1 Allocated memory(GB)               55               -
> Shard2 Replica Allocated memory(GB)       55               -
> Shard2 Allocated memory(GB)               -                55
> Shard1 Replica Allocated memory(GB)       -                55
> Other Applications Allocated Memory(GB)   60               22
> Other Number Of Applications              11               7
>
>
> Sometimes one of the replicas goes into recovery mode. Why does a replica go into
> recovery? Is it due to heavy search traffic, heavy update/insert traffic, or long GC pause times?
> If it is one of those, what should we change in the configuration?
> Should we add more shards to fix the recovery issue?
>
> Regards,
> Vishal Patel
>
>


Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-08 Thread vishal patel
Thanks for the reply.

what do you mean by "Shard1 Allocated memory”
>> It means the JVM heap of one Solr node/instance.

How many Solr JVMs are you running?
>> Two Solr JVMs per server, of which one hosts a shard and the other hosts a replica.

What is the heap size for your JVMs?
>> 55 GB per Solr JVM.

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>

From: Walter Underwood 
Sent: Wednesday, July 8, 2020 8:45 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

I don’t understand what you mean by "Shard1 Allocated memory”. I don’t know of
any way to dedicate system RAM to an application object like a replica.

How many Solr JVMs are you running?

What is the heap size for your JVMs?

Setting soft commit max time to 100 ms does not magically make Solr super fast.
It makes Solr do too much work, makes the work queues fill up, and makes it 
fail.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 7, 2020, at 10:55 PM, vishal patel  
> wrote:
>
> Thanks for your reply.
>
> One server has 320 GB of RAM in total. It runs 2 Solr nodes: one is shard1 and the second
> is the shard2 replica. Each Solr node has 55 GB of memory allocated. Shard1 has
> 585 GB of data and the shard2 replica has 492 GB of data, which means almost 1 TB of data on this
> server. The server also runs other applications, which have 60 GB of memory
> allocated. So about 150 GB of memory is left.
>
> Proper formatting details:
> https://drive.google.com/file/d/1K9JyvJ50Vele9pPJCiMwm25wV4A6x4eD/view
>
> Are you running multiple huge JVMs?
>>> Not huge, but 60 GB of memory is allocated across our 11 applications. 150 GB of memory is
>>> still free.
>
> The servers will be doing a LOT of disk IO, so look at the read and write 
> iops. I expect that the solr processes are blocked on disk reads almost all 
> the time.
>>> Is there a chance of it going into recovery mode when IO reads/writes are heavy or blocked?
>
> "-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
>>> Our requirement is NRT, so we keep the time short.
>
> Regards,
> Vishal Patel
> 
> From: Walter Underwood 
> Sent: Tuesday, July 7, 2020 8:15 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> This isn’t a support list, so nobody looks at issues. We do try to help.
>
> It looks like you have 1 TB of index on a system with 320 GB of RAM.
> I don’t know what "Shard1 Allocated memory” is, but maybe half of
> that RAM is used by JVMs or some other process, I guess. Are you
> running multiple huge JVMs?
>
> The servers will be doing a LOT of disk IO, so look at the read and
> write iops. I expect that the solr processes are blocked on disk reads
> almost all the time.
>
> "-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
> That is probably causing your outages.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>> On Jul 7, 2020, at 5:18 AM, vishal patel  
>> wrote:
>>
>> Is anyone looking at my issue? Please guide me.
>>
>> Regards,
>> Vishal Patel
>>
>>
>> 
>> From: vishal patel 
>> Sent: Monday, July 6, 2020 7:11 PM
>> To: solr-user@lucene.apache.org 
>> Subject: Replica goes into recovery mode in Solr 6.1.0
>>
>> I am using Solr version 6.1.0, Java 8, and G1GC in production. We
>> have 2 shards and each shard has 1 replica. We have 3 collections.
>> We do not use any cache; caching is also disabled in solrconfig.xml. Search and
>> update requests come in frequently on our live platform.
>>
>> *Our commit configuration in solrconfig.xml is below:
>> <autoCommit>
>>   <maxTime>60</maxTime>
>>   <maxDocs>2</maxDocs>
>>   <openSearcher>false</openSearcher>
>> </autoCommit>
>> <autoSoftCommit>
>>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
>> </autoSoftCommit>
>>
>> *We use Near Real Time searching, so we set the below configuration in
>> solr.in.cmd:
>> set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100
>>
>> *Our collections details are below:
>>
>> Collection    Shard1 Docs / Size(GB)   Shard1 Replica Docs / Size(GB)   Shard2 Docs / Size(GB)   Shard2 Replica Docs / Size(GB)
>> collection1   26913364 / 201           26913379 / 202                   26913380 / 198           26913379 / 198
>> collection2   13934360 / 310           13934367 / 310                   13934368 / 219           13934367 / 219
>> collection3   351539689 / 73.5         351540040 / 73.5                 351540136 / 75.2         351539722 / 75.2

Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-09 Thread vishal patel
I’ve been running Solr for a dozen years and I’ve never needed a heap larger
than 8 GB.
>> What is your data size? Is it around 1 TB like ours? Do you search and index
>> frequently? Do you use the NRT model?

My question is why the replica is going into recovery. When the replica went down, I
checked the GC log, but no GC pause was more than 2 seconds.
Also, I cannot find any reason for the recovery in the Solr log file. I want to
know why the replica goes into recovery.

Regards,
Vishal Patel

From: Walter Underwood 
Sent: Friday, July 10, 2020 3:03 AM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

Those are extremely large JVMs. Unless you have proven that you MUST
have 55 GB of heap, use a smaller heap.

I’ve been running Solr for a dozen years and I’ve never needed a heap
larger than 8 GB.

Also, there is usually no need to use one JVM per replica.

Your configuration is using 110 GB (two JVMs) just for Java
where I would configure it with a single 8 GB JVM. That would
free up 100 GB for file caches.
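
(A hedged illustration of that suggestion for solr.in.cmd, the startup file used
elsewhere in this thread; SOLR_JAVA_MEM is the standard variable for the heap flags,
and 8g is just the size Walter mentions:)

REM Cap the Solr heap at 8 GB; the rest of the RAM stays available as OS file cache
set SOLR_JAVA_MEM=-Xms8g -Xmx8g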

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 8, 2020, at 10:10 PM, vishal patel  
> wrote:
>
> Thanks for the reply.
>
> what do you mean by "Shard1 Allocated memory”
>>> It means the JVM heap of one Solr node/instance.
>
> How many Solr JVMs are you running?
>>> Two Solr JVMs per server, of which one hosts a shard and the other hosts a replica.
>
> What is the heap size for your JVMs?
>>> 55 GB per Solr JVM.
>
> Regards,
> Vishal Patel
>
> Sent from Outlook<http://aka.ms/weboutlook>
> 
> From: Walter Underwood 
> Sent: Wednesday, July 8, 2020 8:45 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> I don’t understand what you mean by "Shard1 Allocated memory”. I don’t know of
> any way to dedicate system RAM to an application object like a replica.
>
> How many Solr JVMs are you running?
>
> What is the heap size for your JVMs?
>
> Setting soft commit max time to 100 ms does not magically make Solr super 
> fast.
> It makes Solr do too much work, makes the work queues fill up, and makes it 
> fail.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>> On Jul 7, 2020, at 10:55 PM, vishal patel  
>> wrote:
>>
>> Thanks for your reply.
>>
>> One server has 320 GB of RAM in total. It runs 2 Solr nodes: one is shard1 and the second
>> is the shard2 replica. Each Solr node has 55 GB of memory allocated. Shard1 has
>> 585 GB of data and the shard2 replica has 492 GB of data, which means almost 1 TB of data in this
>> server. The server also runs other applications, which have 60 GB of memory
>> allocated. So about 150 GB of memory is left.
>>
>> Proper formatting details:
>> https://drive.google.com/file/d/1K9JyvJ50Vele9pPJCiMwm25wV4A6x4eD/view
>>
>> Are you running multiple huge JVMs?
>>>> Not huge, but 60 GB of memory is allocated across our 11 applications. 150 GB of memory
>>>> is still free.
>>
>> The servers will be doing a LOT of disk IO, so look at the read and write 
>> iops. I expect that the solr processes are blocked on disk reads almost all 
>> the time.
>>>> Is there a chance of it going into recovery mode when IO reads/writes are heavy or blocked?
>>
>> "-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
>>>> Our requirement is NRT, so we keep the time short.
>>
>> Regards,
>> Vishal Patel
>> 
>> From: Walter Underwood 
>> Sent: Tuesday, July 7, 2020 8:15 PM
>> To: solr-user@lucene.apache.org 
>> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>>
>> This isn’t a support list, so nobody looks at issues. We do try to help.
>>
>> It looks like you have 1 TB of index on a system with 320 GB of RAM.
>> I don’t know what "Shard1 Allocated memory” is, but maybe half of
>> that RAM is used by JVMs or some other process, I guess. Are you
>> running multiple huge JVMs?
>>
>> The servers will be doing a LOT of disk IO, so look at the read and
>> write iops. I expect that the solr processes are blocked on disk reads
>> almost all the time.
>>
>> "-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
>> That is probably causing your outages.
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>>
>>> On Jul 7, 2020, at 5:18 AM, vishal patel  
>>> wrote:
>>>
>>> Is anyone looking at my issue? Please guide me.

Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-10 Thread vishal patel
Thanks for your input.

Walter already said that setting soft commit max time to 100 ms is a recipe for
disaster
>> I know that, but our application was developed and has been running in the live
>> environment for the last 5 years. Actually, we want to show data very
>> quickly after insert.

you have huge JVM heaps without an explanation for the reason
>> We set the 55 GB heap because our usage involves large query searches and
>> very frequent searching and indexing.
Here is a memory snapshot which I took from GC:

https://drive.google.com/file/d/1WPYqg-wPFGnnMu8FopXs4EAGAgSq8ZEG/view (heapusage_before_gc.PNG)

https://drive.google.com/file/d/1LYEdcY9Om_0u8ltIHikU7hsuuKYQPh_m/view (JVM_memory.PNG)



you indicated that you also run some other software on the same server. Is it 
possible that the other processes hog CPU, disk or network and starve Solr?
>> I will check that

I tried upgrading Solr from 6.1.0 to 8.5.1, but we could not complete it due to some
issues. I have also asked about those here:
https://lucene.472066.n3.nabble.com/Sorting-in-other-collection-in-Solr-8-5-1-td4459506.html#a4459562

https://lucene.472066.n3.nabble.com/Query-takes-more-time-in-Solr-8-5-1-compare-to-6-1-0-version-td4458153.html


Why can we not find the reason for the recovery in the log? For example, a memory or CPU
issue, frequent indexing or searching, or a large query hit.
My logs at the time of recovery:
https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view (recovery_shard.txt)

https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view (recovery_replica.txt)

Regards,
Vishal Patel



From: Ere Maijala 
Sent: Friday, July 10, 2020 2:10 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

Walter already said that setting soft commit max time to 100 ms is a
recipe for disaster. That alone can be the issue, but if you're not
willing to try higher values, there's no way of being sure. And you have
huge JVM heaps without an explanation for the reason. If those do not
cause problems, you indicated that you also run some other software on
the same server. Is it possible that the other processes hog CPU, disk
or network and starve Solr?

I must add that Solr 6.1.0 is over four years old. You could be hitting
a bug that has been fixed for years, but even if you encounter an issue
that's still present, you will need to upgrade to get it fixed. If you
look at the number of fixes done in subsequent 6.x versions alone in the
changelog (https://lucene.apache.org/solr/8_5_1/changes/Changes.html)
you'll see that there are a lot of them. You could be hitting something
like SOLR-10420, which has been fixed for over three years.

Best,
Ere

vishal patel kirjoitti 10.7.2020 klo 7.52:
> I’ve been running Solr for a dozen years and I’ve never needed a heap larger
> than 8 GB.
>>> What is your data size? Is it around 1 TB like ours? Do you search and index
>>> frequently? Do you use the NRT model?
>
> My question is why the replica is going into recovery. When the replica went down, I
> checked the GC log, but no GC pause was more than 2 seconds.
> Also, I cannot find any reason for the recovery in the Solr log file. I want to
> know why the replica goes into recovery.
>
> Regards,
> Vishal Patel
> 
> From: Walter Underwood 
> Sent: Friday, July 10, 2020 3:03 AM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> Those are extremely large JVMs. Unless you have proven that you MUST
> have 55 GB of heap, use a smaller heap.
>
> I’ve been running Solr for a dozen years and I’ve never needed a heap
> larger than 8 GB.
>
> Also, there is usually no need to use one JVM per replica.

Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-10 Thread vishal patel
Thanks for the quick reply.

I assume caches (are they too large?), perhaps uninverted indexes.
Docvalues would help with latter ones. Do you use them?
>> We do not use any cache; we disabled the caches in solrconfig.xml.
Here are my solrconfig.xml and schema.xml:
https://drive.google.com/file/d/12SHl3YGP7jT4goikBkeyB2s1NX5_C2gz/view
https://drive.google.com/file/d/1LwA1d4OiMhQQv806tR0HbZoEjA8IyfdR/view

We use docValues on the fields that are used for sorting or faceting.

You could also try upgrading to the latest version in the 6.x series as a starter.
>> I will surely try.

So, the node in question isn't responding quickly enough to http requests and
gets put into recovery. The log for the recovering node starts too late, so I
can't say anything about what happened before 14:42:43.943 that led to
recovery.
>> There is no error before 14:42:43.943; there are only search and insert requests.
>> I understand that the node is not responding, but why is it not responding? Is it due to
>> lack of memory or some other cause?
Why can we not find the reason for the unresponsiveness in the log?

Is there any monitoring for Solr from which we can find the root cause?

Regards,
Vishal Patel



From: Ere Maijala 
Sent: Friday, July 10, 2020 4:27 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

vishal patel wrote on 10.7.2020 at 12.45:
> Thanks for your input.
>
> Walter already said that setting soft commit max time to 100 ms is a recipe
> for disaster
>>> I know that, but our application was developed and has been running in the live
>>> environment for the last 5 years. Actually, we want to show data very
>>> quickly after insert.
>
> you have huge JVM heaps without an explanation for the reason
>>> We set the 55 GB heap because our usage involves large query searches and
>>> very frequent searching and indexing.
> Here is a memory snapshot which I took from GC.

Yes, I can see that a lot of memory is in use, but the question is why.
I assume caches (are they too large?), perhaps uninverted indexes.
Docvalues would help with latter ones. Do you use them?

> I tried upgrading Solr from 6.1.0 to 8.5.1, but we could not complete it due to some
> issues. I have also asked about those here:
> https://lucene.472066.n3.nabble.com/Sorting-in-other-collection-in-Solr-8-5-1-td4459506.html#a4459562

You could also try upgrading to the latest version in 6.x series as a
starter.

> Why can we not find the reason for the recovery in the log? For example, a memory or CPU
> issue, frequent indexing or searching, or a large query hit.
> My logs at the time of recovery:
> https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view (recovery_shard.txt)

Isn't it right there on the first lines?

2020-07-09 14:42:43.943 ERROR
(updateExecutor-2-thread-21007-processing-http:11.200.212.305:8983//solr//products
x:products r:core_node1 n:11.200.212.306:8983_solr s:shard1 c:products)
[c:products s:shard1 r:core_node1 x:products]
o.a.s.u.StreamingSolrClients error
org.apache.http.NoHttpResponseException: 11.200.212.305:8983 failed to
respond

followed by a couple more error messages about the same problem and then
initiation of recovery:

2020-07-09 14:42:44.002 INFO  (qtp1239731077-771611) [c:products
s:shard1 r:core_node1 x:products] o.a.s.c.ZkController Put replica
core=products coreNodeName=core_node3 on 11.200.212.305:8983_solr into
leader-initiated recovery.

So the node in question isn't responding quickly enough to http requests
and gets put into recovery. The log for the recovering node starts too
late, so I can't say anything about what happened before 14:42:43.943
that led to recovery.

--Ere

>
> 
> From: Ere Maijala 
> Sent: Friday, July 10, 2020 2:10 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> Walter already said that setting soft commit max time to 100 ms is a
> recipe for disaster. That alone can be the issue, but if you're not
> willing to try higher values, there's no way of being sure. And you have
> huge JVM heaps without an explanation for the reason. If those do not
> cause problems, you indicated that you also run some other software on
> the same server. Is it possible that the other processes hog CPU, disk
> or network and starve Solr?
>
> I must add that Solr 6.1.0 is over four years old. You could be hitting
> a bug that has been fixed for years, but even if you encounter an issue
> that's still present, you will need to upgrade to get it fixed.

Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-13 Thread vishal patel
Thanks for your reply.

When I searched Google for my error "org.apache.http.NoHttpResponseException: failed to
respond", I found one Solr JIRA issue:
https://issues.apache.org/jira/browse/SOLR-7483.  I saw a comment there from Erick
Erickson<mailto:erickerick...@gmail.com>.
Is this issue resolved? Can someone point me to the JIRA issue that resolved it?
[SOLR-7483] Investigate ways to deal with the tlog growing indefinitely while 
it's being replayed - ASF JIRA<https://issues.apache.org/jira/browse/SOLR-7483>

It shows the same error that I got.
My error logs:
shard: https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view
replica: https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view

Regards,
Vishal Patel

From: Walter Underwood 
Sent: Friday, July 10, 2020 11:15 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

Sorting and faceting takes a lot of memory. From your charts, I would try
a 31 GB heap. That would make GC faster. 680 ms is very long for a GC
and can cause problems.

Combine a 680 ms GC with a 100 ms soft commit time and you can have
lots of trouble.

Change your soft commit time to 10000 ms (ten seconds) or longer.
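
(A hedged sketch of that change in solr.in.cmd, mirroring the setting quoted earlier
in this thread:)

REM Raise the soft commit interval from 100 ms to 10 seconds
set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=10000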

Look at a 24 hour graph of heap usage. It should look like a sawtooth,
increasing, then dropping after every full GC. The bottom of the sawtooth
is the memory that Solr actually needs. Take the highest number from
the bottom of the sawtooth and add some extra, maybe 2 GB. Try that
heap size.
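
(One hedged way to collect that picture, assuming the JDK tools are on the PATH and
12345 stands in for the Solr JVM's PID: jstat prints GC statistics at a fixed
interval, and the old-generation-used column right after each full GC approximates
the bottom of the sawtooth:)

jstat -gc 12345 10000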

Upgrade to 6.6.2. That includes all bug fixes for the 6.x release. The 6.x
release had several bad bugs, especially in the middle releases. We were
switching prod to SolrCloud while those were being released and it was
not fun.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 10, 2020, at 4:59 AM, vishal patel  
> wrote:
>
> Thanks for the quick reply.
>
> I assume caches (are they too large?), perhaps uninverted indexes.
> Docvalues would help with latter ones. Do you use them?
>>> We do not use any cache; we disabled the caches in solrconfig.xml.
> Here are my solrconfig.xml and schema.xml:
> https://drive.google.com/file/d/12SHl3YGP7jT4goikBkeyB2s1NX5_C2gz/view
> https://drive.google.com/file/d/1LwA1d4OiMhQQv806tR0HbZoEjA8IyfdR/view
>
> We use docValues on the fields that are used for sorting or faceting.
>
> You could also try upgrading to the latest version in the 6.x series as a starter.
>>> I will surely try.
>
> So, the node in question isn't responding quickly enough to http requests and
> gets put into recovery. The log for the recovering node starts too late, so I
> can't say anything about what happened before 14:42:43.943 that led to
> recovery.
>>> There is no error before 14:42:43.943; there are only search and insert requests.
>>> I understand that the node is not responding, but why is it not responding? Is it due to
>>> lack of memory or some other cause?
> Why can we not find the reason for the unresponsiveness in the log?
>
> Is there any monitoring for Solr from which we can find the root cause?
>
> Regards,
> Vishal Patel
>
>
> ____
> From: Ere Maijala 
> Sent: Friday, July 10, 2020 4:27 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> vishal patel wrote on 10.7.2020 at 12.45:
>> Thanks for your input.
>>
>> Walter already said that setting soft commit max time to 100 ms is a recipe
>> for disaster
>>>> I know that, but our application was developed and has been running in the live
>>>> environment for the last 5 years. Actually, we want to show data very
>>>> quickly after insert.
>>
>> you have huge JVM heaps without an explanation for the reason
>>>> We set the 55 GB heap because our usage involves large query searches and
>>>> very frequent searching and indexing.
>> Here is a memory snapshot which I took from GC.
>
> Yes, I can see that a lot of memory is in use, but the question is why.
> I assume caches (are they too large?), perhaps uninverted indexes.
> Docvalues would help with latter ones. Do you use them?
>
>> I tried upgrading Solr from 6.1.0 to 8.5.1, but we could not complete it due to
>> some issues. I have also asked about those here
>> https://lucene.472066.n3.nabble

Replica goes into recovery mode in Solr 6.1.0

2020-07-20 Thread vishal patel
I am using Solr version 6.1.0, Java 8, and G1GC in production. We have 2
shards and each shard has 1 replica.
Sometimes my replica goes into recovery mode, and when I check my GC log, I cannot
find a GC pause longer than 600 milliseconds. Sometimes the GC pause time
goes up to nearly 1 second, but at those times the replica does not go into recovery mode.

My Error Log:
shard: https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view
replica: https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view

When I searched Google for my error "org.apache.http.NoHttpResponseException: failed to
respond", I found one Solr JIRA issue:
https://issues.apache.org/jira/browse/SOLR-7483

Can anyone give me details about that JIRA issue? Was it resolved in another JIRA
issue?

Regards,
Vishal patel




Sent from Outlook<http://aka.ms/weboutlook>


Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-21 Thread vishal patel
Thanks for the reply.

>>The recovery is probably _caused_ by the node not responding to the update
request due to a timeout
Can we increase the update request timeout?
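
(For reference, a hedged sketch of where such timeouts live in solr.xml; the property
names match the solr.xml fragments posted elsewhere on this list, and the values are
placeholders:)

<solrcloud>
  <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
  <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
</solrcloud>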

>>What kind of documents are you indexing? I have seen situations where massive
documents take so long that the request times out and starts this process.
They are normal documents. Here is my schema file:
https://drive.google.com/file/d/12SHl3YGP7jT4goikBkeyB2s1NX5_C2gz/view

>>The logs you posted don’t show the reason the node went into recovery
in the first place, that’s the thing I’d concentrate on finding.
Really, we cannot find from the log why the replica went into recovery; before the
recovery entries there were only our insert and search requests.

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Tuesday, July 21, 2020 6:36 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

The recovery is probably _caused_ by the node not responding to the update
request due to a timeout. The JIRA you reference is unrelated I’d guess.

What kind of documents are you indexing? I have seen situations where massive
documents take so long that the request times out and starts this process.

The logs you posted don’t show the reason the node went into recovery
in the first place, that’s the thing I’d concentrate on finding.

Best,
Erick

> On Jul 21, 2020, at 1:37 AM, vishal patel  
> wrote:
>
> I am using Solr version 6.1.0, Java 8, and G1GC in production. We have
> 2 shards and each shard has 1 replica.
> Sometimes my replica goes into recovery mode, and when I check my GC log, I
> cannot find a GC pause longer than 600 milliseconds. Sometimes the GC pause
> time goes up to nearly 1 second, but at those times the replica does not go into
> recovery mode.
>
> My Error Log:
> shard: https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view
> replica: 
> https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view
>
> When I searched Google for my error "org.apache.http.NoHttpResponseException: failed to
> respond", I found one Solr JIRA issue:
> https://issues.apache.org/jira/browse/SOLR-7483
>
> Can anyone give me details about that JIRA issue? Was it resolved in another JIRA
> issue?
>
> Regards,
> Vishal patel
>
>
>
>
> Sent from Outlook<http://aka.ms/weboutlook>



Slow commit in Solr 6.1.0

2020-08-26 Thread vishal patel
I am using Solr 6.1.0. We have 2 shards and each has one replica.
When I checked the shard1 log, I found that the commit process was slow for
some collections.

Slow commit:
2020-08-25 09:08:10.328 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
2020-08-25 09:08:11.424 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@5def3a5a[forms] main]
2020-08-25 09:08:11.932 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 end_commit_flush

2020-08-25 09:08:11.935 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
2020-08-25 09:08:13.676 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@15c82f14[forms] main]
2020-08-25 09:08:14.071 INFO  (commitScheduler-124-thread-1) [c:forms s:shard1 
r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 end_commit_flush

I found that all threads other than commitScheduler-124-thread-1 were
fast (commitScheduler-115-thread-1, commitScheduler-123-thread-1, commitScheduler-121-thread-1, commitScheduler-128-thread-1).
What is the reason for the slow commit? Why are the other threads not slow?

Here is my full log:
shard1: https://drive.google.com/file/d/1jim55pbYxQPORpGJSNjmek5OdraGX79h/view
shard1 replica: 
https://drive.google.com/file/d/1o34kEO1ZZwPE6eF1ppxsQnwL7wLqcx-o/view

Regards,
Vishal Patel

Sent from Outlook<http://aka.ms/weboutlook>


Re: Slow commit in Solr 6.1.0

2020-08-26 Thread vishal patel
Thanks for your quick reply.

Commit is not called from the client side.
We do not use any cache. Here is my solrconfig.xml:
https://drive.google.com/file/d/1LwA1d4OiMhQQv806tR0HbZoEjA8IyfdR/view

We set SOLR_OPTS=%SOLR_OPTS% -Dsolr.autoSoftCommit.maxTime=100 because we
want a quick view of the data after indexing.

My main question is: why is it taking time only for the forms
collection (commitScheduler-124-thread-1)?
Why does it commit fast for the other
collections (commitScheduler-115-thread-1, commitScheduler-123-thread-1, commitScheduler-121-thread-1, commitScheduler-128-thread-1)?
My solrconfig.xml is the same for all collections.
Does it depend on the index size of the documents I
send (FORM1056159510735194, FORM1056159810735197, FORM1056160010735199)?
Does it depend on the index size of the forms collection?

Regards,
Vishal Patel

From: Erick Erickson 
Sent: Wednesday, August 26, 2020 5:36 PM
To: solr-user@lucene.apache.org 
Subject: Re: Slow commit in Solr 6.1.0

It depends on how the commit is called. You have openSearcher=true, which means 
the call
won’t return until all your autowarming is done. This _looks_ like it might be 
a commit
called from a client, which you should not do.

It’s also suspicious that these are soft commits 1 second apart. The other 
possibility is that
you have your autoSoftCommit interval set to 1000 ms. This is usually far too 
fast.

Here’s more than you want to know about how commits work:

https://lucidworks.com/post/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick

> On Aug 26, 2020, at 5:29 AM, vishal patel  
> wrote:
>
> I am using Solr 6.1.0. We have 2 shards and each has one replica.
> When I checked the shard1 log, I found that the commit process was slow for
> some collections.
>
> Slow commit:
> 2020-08-25 09:08:10.328 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 2020-08-25 09:08:11.424 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.s.SolrIndexSearcher Opening 
> [Searcher@5def3a5a[forms] main]
> 2020-08-25 09:08:11.932 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 end_commit_flush
>
> 2020-08-25 09:08:11.935 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 2020-08-25 09:08:13.676 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.s.SolrIndexSearcher Opening 
> [Searcher@15c82f14[forms] main]
> 2020-08-25 09:08:14.071 INFO  (commitScheduler-124-thread-1) [c:forms 
> s:shard1 r:core_node1 x:forms] o.a.s.u.DirectUpdateHandler2 end_commit_flush
>
> I found that all threads other than commitScheduler-124-thread-1 were
> fast (commitScheduler-115-thread-1, commitScheduler-123-thread-1, commitScheduler-121-thread-1, commitScheduler-128-thread-1).
> What is the reason for the slow commit? Why are the other threads not slow?
>
> Here is my full log:
> shard1: https://drive.google.com/file/d/1jim55pbYxQPORpGJSNjmek5OdraGX79h/view
> shard1 replica: 
> https://drive.google.com/file/d/1o34kEO1ZZwPE6eF1ppxsQnwL7wLqcx-o/view
>
> Regards,
> Vishal Patel
>
> <http://aka.ms/weboutlook>



Daylight savings time issue using NOW in Solr 6.1.0

2020-10-01 Thread vishal patel

Hi

I am using Solr 6.1.0. SOLR_TIMEZONE=UTC is set in solr.in.cmd.
My Solr server machine's time zone is also UTC.

One of my collections has the below field in its schema.


Suppose my current Solr server machine time is 2020-10-01 10:00:00.000. I have
one document in that collection, and in that document action_date is
2020-10-01T09:45:46Z.
When I search Solr with action_date:[2020-10-01T08:00:00Z TO NOW], that
record is not returned. I checked my Solr log and found that the time differed
between the Solr log time and the Solr server machine time (almost a 1 hour difference).

Why do I not get the result? Why is NOW not taking 2020-10-01T10:00:00Z?
Which time does "NOW" use? Is the difference due to daylight saving
time? How can I configure
or change the time zone so that it accounts for daylight saving time?

Regards,
Vishal



Re: Daylight savings time issue using NOW in Solr 6.1.0

2020-10-04 Thread vishal patel
Hello,

Can anyone help me?

Regards,
Vishal

Sent from Outlook<http://aka.ms/weboutlook>

From: vishal patel 
Sent: Thursday, October 1, 2020 4:51 PM
To: solr-user@lucene.apache.org 
Subject: Daylight savings time issue using NOW in Solr 6.1.0


Hi

I am using Solr 6.1.0. SOLR_TIMEZONE=UTC is set in solr.in.cmd.
My Solr server machine's time zone is also UTC.

One of my collections has the below field in its schema.


Suppose my current Solr server machine time is 2020-10-01 10:00:00.000. I have
one document in that collection, and in that document action_date is
2020-10-01T09:45:46Z.
When I search Solr with action_date:[2020-10-01T08:00:00Z TO NOW], that
record is not returned. I checked my Solr log and found that the time differed
between the Solr log time and the Solr server machine time (almost a 1 hour difference).

Why do I not get the result? Why is NOW not taking 2020-10-01T10:00:00Z?
Which time does "NOW" use? Is the difference due to daylight saving
time<https://en.wikipedia.org/wiki/Daylight_saving_time>? How can I configure
or change the time zone so that it accounts for daylight saving time?

Regards,
Vishal



Daylight savings time issue using NOW in Solr 6.1.0

2020-10-06 Thread vishal patel
Hi

I am using Solr 6.1.0. SOLR_TIMEZONE=UTC is set in solr.in.cmd.
My Solr server machine's time zone is also UTC.

One of my collections has the below field in its schema.


Suppose my current Solr server machine time is 2020-10-01 10:00:00.000. I have
one document in that collection, and in that document action_date is
2020-10-01T09:45:46Z.
When I search Solr with action_date:[2020-10-01T08:00:00Z TO NOW], that
record is not returned. I checked my Solr log and found that the time differed
between the Solr log time and the Solr server machine time (almost a 1 hour difference).

Why do I not get the result? Why is NOW not taking 2020-10-01T10:00:00Z?
Which time does "NOW" use? Is the difference due to daylight saving
time? How can I configure
or change the time zone so that it accounts for daylight saving time?
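
(For reference, a hedged note: as far as I know, NOW is computed from the JVM clock as
an absolute UTC instant (epoch milliseconds), so SOLR_TIMEZONE -- which sets the JVM's
user.timezone and affects how log timestamps are printed -- does not change the value
of NOW. The TZ request parameter only affects where date math such as NOW/DAY rounds,
e.g. (the collection name below is a placeholder):)

/solr/mycollection/select?q=action_date:[2020-10-01T08:00:00Z TO NOW/DAY%2B1DAY]&TZ=Europe/London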


Add single or batch document in Solr 6.1.0

2020-10-20 Thread vishal patel
I am using Solr 6.1.0. We have 2 shards and each has one replica.

I want to insert 100 documents into one collection. I am using the code below.

org.apache.solr.client.solrj.impl.CloudSolrClient cloudServer =
    new org.apache.solr.client.solrj.impl.CloudSolrClient(zkHost);
cloudServer.setParallelUpdates(true);
cloudServer.setDefaultCollection(collection);

I have 2 ways to add the documents: single or batch.
1) cloudServer.add(doc);  // one SolrInputDocument at a time, in a loop of 100
2) cloudServer.add(docs); // one List<SolrInputDocument> holding all 100 documents

Note: we are not calling cloudServer.commit from the application; we use the below
configuration in solrconfig.xml:

<autoCommit>
  <maxTime>60</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
2

Which one is better for performance, single or batch? Which one is faster for the
commit process?
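
(A minimal sketch of the batch variant, reusing the cloudServer from the snippet
above; the field names are placeholders:)

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;

List<SolrInputDocument> docs = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "DOC" + i);          // placeholder unique key
    doc.addField("title", "Document " + i); // placeholder field
    docs.add(doc);
}
cloudServer.add(docs); // one request carrying all 100 documents

In general, the batch form avoids 100 separate HTTP round trips, so it is usually
faster; the commit timing is governed by the autoCommit/autoSoftCommit settings above
in both cases.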

Regards,
Vishal



Uploading Features in Solr 6.6

2020-11-26 Thread vishal patel
Hi

I have read about the "Learning To Rank" concept.  I see the example
/path/myFeatures.json:

[
  {
    "name" : "documentRecency",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : {
      "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
    }
  },
  {
    "name" : "isBook",
    "class" : "org.apache.solr.ltr.feature.SolrFeature",
    "params" : {
      "fq": ["{!terms f=cat}book"]
    }
  }
]

I want to make a feature like q=title:"Test with some GB18030 encoded
characters". Can I do that?
Can I search for the sentence "test book" in a terms query like "fq": ["{!terms f=cat}test
book"]?

I want to use q and fq in a feature the same way as in a normal query.
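
(A hedged sketch of such a feature, reusing the SolrFeature class from the example
above; the feature name and the title field are assumptions. The {!field} parser
treats its whole value as one analyzed phrase, which is one way to pass a full
sentence into a feature's q parameter. Note also that, as far as I know, the terms
parser splits its input on commas, so {!terms f=cat}test book would be treated as the
single literal term "test book", not as an analyzed sentence:)

{
  "name" : "titleMatchScore",
  "class" : "org.apache.solr.ltr.feature.SolrFeature",
  "params" : {
    "q" : "{!field f=title}Test with some GB18030 encoded characters"
  }
}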

Regards,
Vishal Patel


Sent from Outlook<http://aka.ms/weboutlook>


uploading model in Solr 6.6

2020-11-26 Thread vishal patel
Hi

What is the meaning of the weight of a feature when uploading a model for
re-ranking?
How can we calculate the weight? Does the ranking depend on the weight?

Please give me more details about weights.
https://lucene.apache.org/solr/guide/8_1/learning-to-rank.html#uploading-a-model
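
(For reference, a hedged sketch of a linear model similar to the one in the linked
guide: the re-ranking score is the weighted sum of the feature values, so a feature's
weight controls how much it contributes. The weights below are example values; in
practice they are normally learned offline by a training tool rather than set by hand:)

{
  "class" : "org.apache.solr.ltr.model.LinearModel",
  "name" : "myModel",
  "features" : [
    { "name" : "documentRecency" },
    { "name" : "isBook" }
  ],
  "params" : {
    "weights" : {
      "documentRecency" : 1.0,
      "isBook" : 0.1
    }
  }
}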

Regards,
Vishal

Sent from Outlook


Re: uploading model in Solr 6.6

2020-11-30 Thread vishal patel
Can anyone help me with my question?

Regards,
Vishal


From: vishal patel 
Sent: Friday, November 27, 2020 12:18 PM
To: solr-user@lucene.apache.org 
Subject: uploading model in Solr 6.6

Hi

What is the meaning of the weight of a feature when uploading a model for
re-ranking?
How can we calculate the weight? Does the ranking depend on the weight?

Please give me more details about weights.
https://lucene.apache.org/solr/guide/8_1/learning-to-rank.html#uploading-a-model

Regards,
Vishal

Sent from Outlook<http://aka.ms/weboutlook>


Re: Shard and replica went down in Solr 6.1.0

2019-04-15 Thread vishal patel
Thanks for your reply.


Get Outlook for Android<https://aka.ms/ghei36>


From: Shawn Heisey 
Sent: Monday, April 15, 2019 12:40:59 AM
To: solr-user@lucene.apache.org
Subject: Re: Shard and replica went down in Solr 6.1.0

On 4/13/2019 9:29 PM, vishal patel wrote:
> 2> In production, lots of documents come in for indexing within a second. If I set the
> hard commit interval to 60 seconds, then searchers will open whenever the
> hard commit executes. Is that OK for performance?

The autoCommit configuration should have openSearcher set to false.
That way there will be no searchers opening on the hard commit.  It is
opening the searcher that is expensive, so doing hard commits without
opening a new searcher is normally VERY fast and uses very few
resources.  When openSearcher is false, that commit will NOT make index
changes visible.
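
(A hedged sketch of that recommendation in solrconfig.xml; the 60 second value is
just the interval discussed above:)

<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit every 60 seconds -->
  <openSearcher>false</openSearcher>  <!-- flush segments without opening a searcher -->
</autoCommit>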

> 3> We cannot set the soft commit to 60 seconds as of now because our product
> implements NRT: changes must show instantly after indexing.

As I just said, it is opening the searcher that is expensive.  If that
happens extremely frequently, which it would have to in order to achieve
instant NRT, that's going to be opening a LOT of new searchers.  This
will absolutely kill performance.  If you remove all cache warming, it
will be better, but probably still bad.

Unless the index is extremely small, it is not usually possible to
achieve a goal of "documents must be visible within one second after
indexing".

If you are getting the "Overlapping onDeckSearchers" warning, it means
you have commits that open a new searcher happening too quickly.  Let's
say that it takes five seconds to finish a commit that opens a new
searcher.  I do not know how long it takes on your index, but I've seen
it take much longer, so that's the example I'm going with.  If you have
such commits happening once a second, then the warning will be
logged because you will end up with five new searchers opening at the
same time.  Running them all simultaneously might make them take even
longer -- the problem compounds itself.

I don't have any idea why you're having replicas go down, other than
maybe this:  All the commit activity (opening new searchers) might be
keeping the system so busy that queries are taking long enough to reach
timeouts.

Thanks,
Shawn


Replica becomes leader when shard was taking a time to update document - Solr 6.1.0

2019-04-17 Thread vishal patel

We have 2 shards and 2 replicas on the production server. Somehow replica1 became
leader while a commit process was running on shard1.
Log ::

***shard1***
2019-04-08 12:52:09.930 INFO  
(searcherExecutor-30-thread-1-processing-n:shard1:8983_solr x:productData 
s:shard1 c:productData r:core_node1) [c:productData s:shard1 r:core_node1 
x:productData] o.a.s.c.QuerySenderListener QuerySenderListener done.
2019-04-08 12:54:01.397 INFO  (qtp1239731077-1359101) [c:product s:shard1 
r:core_node1 x:product] o.a.s.u.p.LogUpdateProcessorFactory [product]  
webapp=/solr path=/update params={wt=javabin&version=2}{add=[PRO23241768 
(1630250393598427136)]} 0 111711

***replica1***
2019-04-08 12:52:09.581 INFO  (qtp1239731077-1021605) [c:product s:shard1 
r:core_node3 x:product] o.a.s.u.p.LogUpdateProcessorFactory [product]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=shard1:8983/solr/product/&wt=javabin&version=2}{add=[PRO23241768
 (1630250393598427136)]} 0 0
2019-04-08 12:52:19.717 INFO  
(zkCallback-4-thread-207-processing-n:replica1:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A live node change: [WatchedEvent state:SyncConnected 
type:NodeChildrenChanged path:/live_nodes], has occurred - updating... (live 
nodes size: [4])

PRO23241768 was successfully updated at 12:52:09.581 on replica1, but the
update time was 12:54:01.397 on shard1. It took around 1.86 minutes (111711 ms).
In between, replica1 tried to become leader at 12:52:19.717 and it succeeded.

My production solr.xml:
<int name="zkClientTimeout">${zkClientTimeout:60}</int>
<int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:60}</int>
<int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:6}</int>

<shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
  <int name="socketTimeout">${socketTimeout:60}</int>
  <int name="connTimeout">${connTimeout:6}</int>
</shardHandlerFactory>

Collection : product and productData.

Version ::
solr  : 6.1.0
Zoo keeper : 3.4.6


Why did shard1 take 1.8 minutes for the update? And if the update took that long,
why did replica1 try to become leader? Do we need to adjust any
timeout?

Note: PRO23241768 was a soft-committed update, and the log level was INFO.


Re: Replica becomes leader when shard was taking a time to update document - Solr 6.1.0

2019-04-18 Thread vishal patel
Thanks for your reply.

You are right. I checked the GC log and, using GCViewer, I noticed that the pause
time was 111.4546597 secs.

GC Log :

2019-04-08T13:52:09.198+0100: 796799.689: [CMS-concurrent-mark: 1.676/30.552 
secs] [Times: user=93.42 sys=34.11, real=30.55 secs]
2019-04-08T13:52:09.198+0100: 796799.689: [CMS-concurrent-preclean-start]
2019-04-08T13:52:09.603+0100: 796800.094: [CMS-concurrent-preclean: 0.387/0.405 
secs] [Times: user=8.47 sys=1.13, real=0.40 secs]
2019-04-08T13:52:09.603+0100: 796800.095: 
[CMS-concurrent-abortable-preclean-start]
{Heap before GC invocations=112412 (full 55591):
 par new generation   total 13107200K, used 11580169K [0x8000, 
0x00044000, 0x00044000)
  eden space 10485760K, 100% used [0x8000, 0x0003, 
0x0003)
  from space 2621440K,  41% used [0x0003, 0x000342cc2600, 
0x0003a000)
  to   space 2621440K,   0% used [0x0003a000, 0x0003a000, 
0x00044000)
 concurrent mark-sweep generation total 47185920K, used 28266850K 
[0x00044000, 0x000f8000, 0x000f8000)
 Metaspace   used 49763K, capacity 50614K, committed 53408K, reserved 55296K
2019-04-08T13:52:09.939+0100: 796800.430: [GC (Allocation Failure) 796800.431: 
[ParNew
Desired survivor size 2415919104 bytes, new threshold 8 (max 8)
- age   1:  197413992 bytes,  197413992 total
- age   2:  170743472 bytes,  368157464 total
- age   3:  218531128 bytes,  586688592 total
- age   4:3636992 bytes,  590325584 total
- age   5:   18608784 bytes,  608934368 total
- age   6:  163869560 bytes,  772803928 total
- age   7:   55349616 bytes,  828153544 total
- age   8:5124472 bytes,  833278016 total
: 11580169K->985493K(13107200K), 111.4543849 secs] 
39847019K->29253720K(60293120K), 111.4546597 secs] [Times: user=302.38 
sys=109.81, real=111.46 secs]
Heap after GC invocations=112413 (full 55591):
 par new generation   total 13107200K, used 985493K [0x8000, 
0x00044000, 0x00044000)
  eden space 10485760K,   0% used [0x8000, 0x8000, 
0x0003)
  from space 2621440K,  37% used [0x0003a000, 0x0003dc265470, 
0x00044000)
  to   space 2621440K,   0% used [0x0003, 0x0003, 
0x0003a000)
 concurrent mark-sweep generation total 47185920K, used 28268227K 
[0x00044000, 0x000f8000, 0x000f8000)
 Metaspace   used 49763K, capacity 50614K, committed 53408K, reserved 55296K
}
2019-04-08T13:54:01.394+0100: 796911.885: Total time for which application 
threads were stopped: 111.4638238 seconds, Stopping threads took: 0.0069189 
seconds


May I set a max timeout in solr.xml or in any ZooKeeper file for when GC pauses
exceed 2 seconds? What should we do when GC pause times are even longer?
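
(For reference, a hedged sketch of the relevant knob in solr.xml; the value is a
placeholder. Note that, as far as I know, ZooKeeper also caps the negotiated session
timeout at maxSessionTimeout, which defaults to 20 * tickTime -- 40 seconds with the
tickTime=2000 shown in the zoo.cfg posted elsewhere on this list -- so raising only
the Solr-side value may not be enough:)

<solrcloud>
  <int name="zkClientTimeout">${zkClientTimeout:60000}</int>
</solrcloud>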

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Thursday, April 18, 2019 7:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Replica becomes leader when shard was taking a time to update 
document - Solr 6.1.0

Specifically a _leader_ being put into the down or recovering state is almost 
always because ZooKeeper cannot ping it and get a response back before it times 
out. This also points to large GC pauses on the Solr node. Using something like
GCViewer on the GC logs at the time of the problem will help a lot.

A _follower_ can go into recovery when an update takes too long but that’s 
“leader initiated recovery” and originates _from_ the leader, which is much 
different than the leader going into a down state.

Best,
Erick

> On Apr 17, 2019, at 7:54 AM, Shawn Heisey  wrote:
>
> On 4/17/2019 6:25 AM, vishal patel wrote:
>> Why did shard1 take a 1.8 minutes time for update? and if it took time for 
>> update then why did replica1 try to become leader? Is it required to update 
>> any timeout?
>
> There's no information here that can tell us why the update took so long.  My 
> best guess would be long GC pauses due to the heap size being too small.  But 
> there might be other causes.
>
> Indexing a single document should be VERY fast.  Even a large document should 
> only take a handful of milliseconds.
>
> If the request included "commit=true" as a parameter, then it might be the 
> commit that was slow, not the indexing.  You'll need to check the logs to 
> determine that.
>
> The reason that the leader changed was almost certainly the fact that the 
> update took so long.  SolrCloud would have decided that the node was down if 
> any operation took that long.
>
> Thanks,
> Shawn



Solr query takes a too much time in Solr 6.1.0

2019-05-10 Thread vishal patel
We have 2 shards and 2 replicas in the live environment. We have multiple
collections.
Sometimes a query takes a long time (QTime=52552), while many
documents are being indexed and searched within milliseconds.
When we executed the same query again from the admin panel, it did not take
long; it completed within 20 milliseconds.

My Solr Logs :
2019-05-10 09:48:56.744 INFO  (qtp1239731077-128223) [c:actionscomments 
s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
webapp=/solr path=/select 
params={q=%2Bproject_id:(2102117)%2Brecipient_id:(4642365)+%2Bentity_type:(1)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2B(is_formtype_active:true)+%2B(appType:1)&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
 hits=198 status=0 QTime=52552
2019-05-10 09:48:56.744 INFO  (qtp1239731077-127998) [c:actionscomments 
s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
webapp=/solr path=/select 
params={q=%2Bproject_id:(2102117)%2Brecipient_id:(4642365)+%2Bentity_type:(1)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2Bdue_date:[2019-05-09T19:30:00Z+TO+2019-05-09T19:30:00Z%2B1DAY]+%2B(is_formtype_active:true)+%2B(appType:1)&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
 hits=0 status=0 QTime=51970
2019-05-10 09:48:56.746 INFO  (qtp1239731077-128224) [c:actionscomments 
s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
webapp=/solr path=/select 
params={q=%2Bproject_id:(2121600+2115171+2104206)%2Brecipient_id:(2834330)+%2Bentity_type:(2)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2Bdue_date:[2019-05-10T00:00:00Z+TO+2019-05-10T00:00:00Z%2B1DAY]&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
 hits=98 status=0 QTime=51402


My schema fields are below:

What could be the problem here? Why does the query take so much time at that moment?

Sent from Outlook


Re: Solr query takes a too much time in Solr 6.1.0

2019-05-13 Thread vishal patel
Thanks for the reply.

> Executing an identical query again will likely satisfy the query from Solr's 
> caches.  Solr won't need to talk to the actual index, and it will be REALLY 
> fast.  Even a massively complex query, if it is cached, will be fast.

All caches are disabled in our solrconfig file because both our indexing and
searching rates are high in our live environment.


Sent from Outlook<http://aka.ms/weboutlook>

From: Shawn Heisey 
Sent: Friday, May 10, 2019 9:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr query takes a too much time in Solr 6.1.0

On 5/10/2019 7:32 AM, vishal patel wrote:
> We have 2 shards and 2 replicas in the live environment. We have multiple
> collections.
> Sometimes a query takes a long time (QTime=52552), while many
> documents are being indexed and searched within milliseconds.

There could be any number of causes of slow performance.

A common reason is not having enough spare memory in the machine to
allow the operating system to cache the index data.  This is memory NOT
allocated by programs (including Solr).

Another common reason is that the heap size is too small, which causes
Java to frequently perform full garbage collections, which will REALLY
kill performance.

Since there's very little information here, it's difficult for us to
diagnose the cause.  Here's a wiki page about performance problems:

https://wiki.apache.org/solr/SolrPerformanceProblems

(Disclaimer:  I am the principal author of that page)

> When we executed the same query again from the admin panel, it did not take
> long; it completed within 20 milliseconds.

Executing an identical query again will likely satisfy the query from
Solr's caches.  Solr won't need to talk to the actual index, and it will
be REALLY fast.  Even a massively complex query, if it is cached, will
be fast.

Running the information from your logs through a URL decoder, this is
what I found:

q=+project_id:(2102117)+recipient_id:(4642365) +entity_type:(1)
-action_id:(20 32) +action_status:(0) +is_active:(true)
+(is_formtype_active:true) +(appType:1)

If all of those fields are indexed, then I would not expect a properly
sized server to be slow.  If any of those fields are indexed=false and
have docValues, then it could be a schema configuration issue.
Searching docValues does work, but it's really slow.

Your query does have an empty fq ... "fq=" ... I do not know whether
that's problematic.  Try it without that to verify.  I would not expect
it to cause problems, but I can't be sure.

Thanks,
Shawn


Shard got down in Solr 6.1.0

2019-05-13 Thread vishal patel
vishal patel has shared a OneDrive file with you:
GC_log.txt <https://1drv.ms/t/s!AhS5CvIRnaQCbAyL7HxyxR6CQOU>



We have 2 shards and 2 replicas with 7 ZooKeeper nodes in our live environment.
Unexpectedly, a shard went down. From the logs, we cannot identify why the shard went
down. I have attached the GC and Solr logs.

*
My solr.xml data
**

<solr>

  <solrcloud>

    <str name="host">${host:localhost}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>

    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>

    <int name="zkClientTimeout">${zkClientTimeout:60}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:60}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:6}</int>
    <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
    <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>

  </solrcloud>

  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:60}</int>
    <int name="connTimeout">${connTimeout:6}</int>
  </shardHandlerFactory>

</solr>

*
My zoo.cfg data
**
tickTime=2000
initLimit=10
syncLimit=5
*



Can you suggest how we can track down this issue?
2019-05-10 13:00:54.559 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@3a0b9e39 
name:ZooKeeperConnection 
Watcher:10.200.312.80:1,10.200.312.81:2,10.200.312.82:3,10.200.312.83:4,10.200.312.84:5,10.200.312.85:6,10.200.312.86:7
 got event WatchedEvent state:Expired type:None path:null path:null type:None
2019-05-10 13:00:54.559 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ConnectionManager Our previous ZooKeeper session was expired. 
Attempting to reconnect to recover relationship with ZooKeeper...
2019-05-10 13:00:54.559 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.Overseer Overseer 
(id=246176007594049637-localhost:8983_solr-n_48) closing
2019-05-10 13:00:54.621 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.DefaultConnectionStrategy Connection expired - starting a new one...
2019-05-10 13:00:54.691 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
2019-05-10 13:00:54.691 ERROR (OverseerExitThread) [   ] o.a.s.c.Overseer could 
not read the data
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:348)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:345)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:309)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.access$300(Overseer.java:89)
at org.apache.solr.cloud.Overseer$ClusterStateUpdater$2.run(Overseer.java:268)
2019-05-10 13:00:57.662 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr-EventThread) [   ] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@3a0b9e39 
name:ZooKeeperConnection 
Watcher:10.200.312.80:1,10.200.312.81:2,10.200.312.82:3,10.200.312.83:4,10.200.312.84:5,10.200.312.85:6,10.200.312.86:7
 got event WatchedEvent state:SyncConnected type:None path:null path:null 
type:None
2019-05-10 13:00:57.666 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2019-05-10 13:00:57.668 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ConnectionManager Connection with ZooKeeper reestablished.
2019-05-10 13:00:57.668 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.ZkController ZooKeeper session re-connected ... refreshing core states 
after session expiration.
2019-05-10 13:00:57.700 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader Updating cluster state from ZooKeeper... 
2019-05-10 13:01:01.138 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader Loaded empty cluster properties
2019-05-10 13:01:03.382 INFO  
(zkCallback-4-thread-632-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (3)
2019-05-10 13:01:03.400 INFO  
(zkCallback-4-
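
For reference, the session timeout that governs the "Session expired" events above
is zkClientTimeout in solr.xml; a minimal sketch with the stock default (the value
shown is an assumption, not taken from this cluster):

<solrcloud>
  <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
</solrcloud>

Note that ZooKeeper caps the negotiated session timeout at maxSessionTimeout, which
defaults to 20 * tickTime (40 seconds with the tickTime=2000 shown above), so raising
zkClientTimeout beyond that also requires a zoo.cfg change.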

Re: Solr query takes a too much time in Solr 6.1.0

2019-05-13 Thread vishal patel
In our live environment, many search and indexing requests arrive within
milliseconds. We use faceting and sorting in queries.

> 3. To speed up sorting, have a separate field with docValues=true for sorting.

Is it necessary or useful to create a separate field if I use this field for
sorting or faceting?
If I do not create a separate field, is there any performance issue when the same
field is also searched in a query?

Sent from Outlook<http://aka.ms/weboutlook>

From: Bernd Fehling 
Sent: Monday, May 13, 2019 11:52 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr query takes a too much time in Solr 6.1.0

Your "sort" parameter has "sort=id+desc,id+desc".
1. It doesn't make sense to have a sort on "id" in descending order twice.
2. Be aware that the id field has the highest cadinality.
3. To speedup sorting have a separate field with docValues=true for sorting.
E.g.




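A minimal sketch of such a definition in schema.xml (the field name id_sort is
illustrative, not from the original mail):

<field name="id_sort" type="string" indexed="false" stored="false" docValues="true"/>
<copyField source="id" dest="id_sort"/>

Queries would then use sort=id_sort+desc instead of sorting on the indexed id field.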
Regards
Bernd


On 10.05.19 at 15:32, vishal patel wrote:
> We have 2 shards and 2 replicas in the live environment. We have multiple
> collections.
> Sometimes a query takes a very long time (QTime=52552). Many documents are being
> indexed and searched within milliseconds.
> When we execute the same query again using the admin panel, it does not take much
> time and completes within 20 milliseconds.
>
> My Solr Logs :
> 2019-05-10 09:48:56.744 INFO  (qtp1239731077-128223) [c:actionscomments 
> s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
> webapp=/solr path=/select 
> params={q=%2Bproject_id:(2102117)%2Brecipient_id:(4642365)+%2Bentity_type:(1)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2B(is_formtype_active:true)+%2B(appType:1)&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
>  hits=198 status=0 QTime=52552
> 2019-05-10 09:48:56.744 INFO  (qtp1239731077-127998) [c:actionscomments 
> s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
> webapp=/solr path=/select 
> params={q=%2Bproject_id:(2102117)%2Brecipient_id:(4642365)+%2Bentity_type:(1)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2Bdue_date:[2019-05-09T19:30:00Z+TO+2019-05-09T19:30:00Z%2B1DAY]+%2B(is_formtype_active:true)+%2B(appType:1)&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
>  hits=0 status=0 QTime=51970
> 2019-05-10 09:48:56.746 INFO  (qtp1239731077-128224) [c:actionscomments 
> s:shard1 r:core_node1 x:actionscomments] o.a.s.c.S.Request [actionscomments]  
> webapp=/solr path=/select 
> params={q=%2Bproject_id:(2121600+2115171+2104206)%2Brecipient_id:(2834330)+%2Bentity_type:(2)+-action_id:(20+32)+%2Baction_status:(0)+%2Bis_active:(true)+%2Bdue_date:[2019-05-10T00:00:00Z+TO+2019-05-10T00:00:00Z%2B1DAY]&shards=s1.example.com:8983/solr/actionscomments|s1r1.example.com:8983/solr/actionscomments,s2.example.com:8983/solr/actionscomments|s2r1.example.com:8983/solr/actionscomments&indent=off&shards.tolerant=true&fl=id&start=0&sort=id+desc,id+desc&fq=&rows=1}
>  hits=98 status=0 QTime=51402
>
>
> My schema fields below :
>
> [schema field definitions were stripped by the mailing-list archive; only a
> multiValued="false" attribute survives]
>
> What could be the problem here? Why does the query take so much time on those occasions?
>
> Sent from Outlook<http://aka.ms/weboutlook>
>


Usage of docValuesFormat

2019-05-22 Thread vishal patel
We enabled docValues on some schema fields for sorting and faceting query results.
Is it necessary to add docValuesFormat for faster query processing?
Which one is better: docValuesFormat="Memory" or docValuesFormat="Disk"?

Note: Our indexed data size is high in one collection, and various sort and
faceting queries are executed within a second.

Sent from Outlook


Query takes a long time Solr 6.1.0

2019-06-05 Thread vishal patel
We have 2 shards and 2 replicas in Live, and we have multiple collections. We are
performing heavy search and update loads.

-> I have attached some queries that take a long time to execute. Why do they take
so much time? Is it due to the query length?

-> Sometimes a replica goes into recovery mode, and we cannot identify the cause
from the log, but GC pause times are 15 to 20 seconds. Ideally, what should the GC
pause time be? Does GC pause time increase due to indexing or searching documents?

My Solr live data :

Collection        Total documents   shard1 size (GB)   shard2 size (GB)
documents         20419967          117                99.4
commentdetails    18305485          6.47               6.83
documentcontent   8810482           191                102
forms             4316563           80.1               76.4

Regards,
Vishal



Re: Query takes a long time Solr 6.1.0

2019-06-06 Thread vishal patel
Thanks for your reply.

> How much index data is on one server with 256GB of memory?  What is the
> max heap size on the Solr instance?  Is there only one Solr instance?

One server (256GB RAM) runs the two Solr instances below, plus other applications:
1) shard1 (80GB heap, 790GB storage, 449GB indexed data)
2) replica of shard2 (80GB heap, 895GB storage, 337GB indexed data)

The second server (256GB RAM and 1TB storage) runs the two Solr instances below,
plus other applications:
1) shard2 (80GB heap, 790GB storage, 338GB indexed data)
2) replica of shard1 (80GB heap, 895GB storage, 448GB indexed data)

Both servers' memory and disk usage:
https://drive.google.com/drive/folders/11GoZy8C0i-qUGH-ranPD8PCoPWCxeS-5

Note: On average about 40GB of heap is normally used in each Solr instance. When a
replica goes down, disk I/O is high and GC pause times exceed 15 seconds. From the
logs we cannot identify the exact cause of the replica recovering or going down: is
it due to GC pauses, high disk I/O, time-consuming queries, or heavy indexing?

Regards,
Vishal

From: Shawn Heisey 
Sent: Wednesday, June 5, 2019 7:10 PM
To: solr-user@lucene.apache.org
Subject: Re: Query takes a long time Solr 6.1.0

On 6/5/2019 7:08 AM, vishal patel wrote:
> I attached a RAR file but it did not come through properly. I have attached a txt file again.
>
> For 2 shards and 2 replicas, we have 2 servers and each has 256 GB ram
> and 1 TB storage. One shard and another shard replica in one server.

You got lucky.  Even text files usually don't make it to the list --
yours did this time.  Use a file sharing website in the future.

That is a massive query.  The primary reason that Lucene defaults to a
maxBooleanClauses value of 1024, which you are definitely exceeding
here, is that queries with that many clauses tend to be slow and consume
massive levels of resources.  It might not be possible to improve the
query speed very much here if you cannot reduce the size of the query.
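
If you do need to raise the limit, it is configured in the <query> section of
solrconfig.xml; a minimal sketch (the raised value is an arbitrary assumption, and
raising it trades memory and CPU for the ability to run such queries):

<query>
  <maxBooleanClauses>4096</maxBooleanClauses>
</query>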

Your query doesn't look like it is simple enough to replace with the
terms query parser, which has better performance than a boolean query
with thousands of "OR" clauses.
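
For comparison, a terms query parser clause looks like this (the field name and
values here are hypothetical) and avoids building one boolean clause per value:

fq={!terms f=action_id}20,32,45,67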

How much index data is on one server with 256GB of memory?  What is the
max heap size on the Solr instance?  Is there only one Solr instance?

The screenshot mentioned here will most likely relay all the info I am
looking for.  Be sure the sort is correct:

https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue

You will not be able to successfully attach the screenshot to a message.
  That will require a file sharing website.

Thanks,
Shawn


Fwd: Re: Query takes a long time Solr 6.1.0

2019-06-07 Thread vishal patel
Is anyone looking at my issue?

Get Outlook for Android<https://aka.ms/ghei36>


From: vishal patel
Sent: Thursday, June 6, 2019 5:15:15 PM
To: solr-user@lucene.apache.org
Subject: Re: Query takes a long time Solr 6.1.0

Thanks for your reply.

> How much index data is on one server with 256GB of memory?  What is the
> max heap size on the Solr instance?  Is there only one Solr instance?

One server(256GB RAM) has two below Solr instance and other application also
1) shards1 (80GB heap ,790GB Storage, 449GB Indexed data)
2) replica of shard2 (80GB heap, 895GB Storage, 337GB Indexed data)

The second server(256GB RAM and 1 TB storage) has two below Solr instance and 
other application also
1) shards2 (80GB heap, 790GB Storage, 338GB Indexed data)
2) replica of shard1 (80GB heap, 895GB Storage, 448GB Indexed data)

Both server memory and disk usage:
https://drive.google.com/drive/folders/11GoZy8C0i-qUGH-ranPD8PCoPWCxeS-5

Note: Average 40GB heap used normally in each Solr instance. when replica gets 
down at that time disk IO are high and also GC pause time above 15 seconds. We 
can not identify the exact issue of replica recovery OR down from logs. due to 
the GC pause? OR due to disk IO high? OR due to time-consuming query? OR due to 
heavy indexing?

Regards,
Vishal

From: Shawn Heisey 
Sent: Wednesday, June 5, 2019 7:10 PM
To: solr-user@lucene.apache.org
Subject: Re: Query takes a long time Solr 6.1.0

On 6/5/2019 7:08 AM, vishal patel wrote:
> I have attached RAR file but not attached properly. Again attached txt file.
>
> For 2 shards and 2 replicas, we have 2 servers and each has 256 GB ram
> and 1 TB storage. One shard and another shard replica in one server.

You got lucky.  Even text files usually don't make it to the list --
yours did this time.  Use a file sharing website in the future.

That is a massive query.  The primary reason that Lucene defaults to a
maxBooleanClauses value of 1024, which you are definitely exceeding
here, is that queries with that many clauses tend to be slow and consume
massive levels of resources.  It might not be possible to improve the
query speed very much here if you cannot reduce the size of the query.

Your query doesn't look like it is simple enough to replace with the
terms query parser, which has better performance than a boolean query
with thousands of "OR" clauses.

How much index data is on one server with 256GB of memory?  What is the
max heap size on the Solr instance?  Is there only one Solr instance?

The screenshot mentioned here will most likely relay all the info I am
looking for.  Be sure the sort is correct:

https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue

You will not be able to successfully attach the screenshot to a message.
  That will require a file sharing website.

Thanks,
Shawn


Re: Query takes a long time Solr 6.1.0

2019-06-10 Thread vishal patel
> An 80GB heap is ENORMOUS.  And you have two of those per server.  Do you
> *know* that you need a heap that large?  You only have 50 million
> documents total, two instances that each have 80GB seems completely
> unnecessary.  I would think that one instance with a much smaller heap
> would handle just about anything you could throw at 50 million documents.

> With 160GB taken by heaps, you're leaving less than 100GB of memory to
> cache over 700GB of index.  This is not going to work well, especially
> if your index doesn't have many fields that are stored.  It will cause a
> lot of disk I/O.

We have 27 collections, and each collection has many schema fields. In live, a
great many search and index create/update requests come in, and most search
requests involve sorting, faceting, grouping, and long queries.
Approximately 40GB of heap is used on average, so we allotted 80GB.

> Unless you have changed the DirectoryFactory to something that's not
> default, your process listing does not reflect over 700GB of index data.
> If you have changed the DirectoryFactory, then I would strongly
> recommend removing that part of your config and letting Solr use its
> default.

Our directoryFactory setting in solrconfig.xml: [stripped by the mailing-list archive]

Here are our schema file, solrconfig.xml, and GC log; please review them. Is
anything wrong, or do you have suggestions for improvement?
https://drive.google.com/drive/folders/1wV9bdQ5-pP4s4yc8jrYNz77YYVRmT7FG


GC log ::
2019-06-06T11:55:37.729+0100: 1053781.828: [GC (Allocation Failure) 
1053781.828: [ParNew
Desired survivor size 3221205808 bytes, new threshold 8 (max 8)
- age   1:  268310312 bytes,  268310312 total
- age   2:  220271984 bytes,  488582296 total
- age   3:   75942632 bytes,  564524928 total
- age   4:   76397104 bytes,  640922032 total
- age   5:  126931768 bytes,  767853800 total
- age   6:   92672080 bytes,  860525880 total
- age   7:2810048 bytes,  863335928 total
- age   8:   11755104 bytes,  875091032 total
: 15126407K->1103229K(17476288K), 15.7272287 secs] 
45423308K->31414239K(80390848K), 15.7274518 secs] [Times: user=212.05 
sys=16.08, real=15.73 secs]
Heap after GC invocations=68829 (full 187):
 par new generation   total 17476288K, used 1103229K [0x8000, 
0x00058000, 0x00058000)
  eden space 13981056K,   0% used [0x8000, 0x8000, 
0x0003d556)
  from space 3495232K,  31% used [0x0004aaab, 0x0004ee00f508, 
0x00058000)
  to   space 3495232K,   0% used [0x0003d556, 0x0003d556, 
0x0004aaab)
 concurrent mark-sweep generation total 62914560K, used 30311010K 
[0x00058000, 0x00148000, 0x00148000)
 Metaspace   used 50033K, capacity 50805K, committed 53700K, reserved 55296K
}
2019-06-06T11:55:53.456+0100: 1053797.556: Total time for which application 
threads were stopped: 42.4594545 seconds, Stopping threads took: 26.7301882 
seconds

For what reason did the GC pause for 42 seconds?

There is heavy searching and indexing (create & update) in our Solr Cloud.
So, should we divide the cloud across the 27 collections? Should we add one more
shard?

Sent from Outlook<http://aka.ms/weboutlook>

From: Shawn Heisey 
Sent: Friday, June 7, 2019 9:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Query takes a long time Solr 6.1.0

On 6/6/2019 5:45 AM, vishal patel wrote:
> One server(256GB RAM) has two below Solr instance and other application also
> 1) shards1 (80GB heap ,790GB Storage, 449GB Indexed data)
> 2) replica of shard2 (80GB heap, 895GB Storage, 337GB Indexed data)
>
> The second server(256GB RAM and 1 TB storage) has two below Solr instance and 
> other application also
> 1) shards2 (80GB heap, 790GB Storage, 338GB Indexed data)
> 2) replica of shard1 (80GB heap, 895GB Storage, 448GB Indexed data)

An 80GB heap is ENORMOUS.  And you have two of those per server.  Do you
*know* that you need a heap that large?  You only have 50 million
documents total, two instances that each have 80GB seems completely
unnecessary.  I would think that one instance with a much smaller heap
would handle just about anything you could throw at 50 million documents.

With 160GB taken by heaps, you're leaving less than 100GB of memory to
cache over 700GB of index.  This is not going to work well, especially
if your index doesn't have many fields that are stored.  It will cause a
lot of disk I/O.

> Both server memory and disk usage:
> https://drive.google.com/drive/folders/11GoZy8C0i-qUGH-ranPD8PCoPWCxeS-5

Unless you have changed the DirectoryFactory to something that's not
default, your process listing does not reflect over 700GB of index data.
  If you have changed the DirectoryFactory, then I would strongly
recommend removing that part of your config and letting Solr use its
default.

> Note: Average 40GB heap used normally in each Solr instance. when replica 

Deleted Docs increasing in Solr 6.1.0

2019-11-04 Thread vishal patel
We have 2 shards and 2 replicas in a testing environment. Deleted docs stand at
18749 for one collection [documents]. I have attached a screenshot of the Solr
admin panel.
(1) Would there be any impact on disk size if deleted docs increase?
(2) We tried to remove deleted docs by executing the command: curl
"http://50.40.30.20:8983/solr/documents/update" -H "Content-Type:text/xml"
--data-binary ""
After executing this command, we checked in the Solr admin but there was no change
in deleted docs. How can we remove the deleted docs?

Note: the default merge policy [TieredMergePolicy] is used.
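
The payload of the update request above was stripped by the archive. For reference,
the two payloads commonly used to purge deleted documents are (a hedged sketch, not
necessarily what was originally sent):

curl "http://50.40.30.20:8983/solr/documents/update" -H "Content-Type: text/xml" --data-binary "<commit expungeDeletes=\"true\"/>"
curl "http://50.40.30.20:8983/solr/documents/update" -H "Content-Type: text/xml" --data-binary "<optimize/>"

expungeDeletes only merges segments whose deleted-document ratio exceeds the merge
policy's threshold, so it can legitimately leave the count unchanged; <optimize/>
rewrites the whole index and does remove them, at a significant I/O cost.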



Regards,
Vishal


Solr 8.3.0

2019-11-17 Thread vishal patel

I have created 2 shards in Solr 8.3.0. We created 27 collections using the URL
below:
http://191.162.100.148:7971/solr/admin/collections?_=1573813004271&action=CREATE&autoAddReplicas=false&collection.configName=actionscomments&maxShardsPerNode=1&name=actionscomments&numShards=2&replicationFactor=1&router.name=compositeId&wt=json


After re-indexing the data, I want to add a replica of each shard. How can I add a
replica without re-creating the collection and re-indexing?
Can I add one more shard dynamically without re-creating collections and
re-indexing?


Solr 8.3.0

2019-11-17 Thread vishal patel
I have created 2 shards in Solr 8.3.0. After that I created 10 collections and
re-indexed the data.

Some fields were changed in one collection. I deleted the version-2 folder from
zoo_data and uploaded the config for that collection again.

Is it necessary to create all the collections again and index the data again?

Regards,
Vishal


Need to recreate collection when version-2 folder deleted in zookeeper

2019-11-19 Thread vishal patel
I have created 2 shards in Solr 8.3.0. After that I created 10 collections and
re-indexed the data.

Some fields were changed in one collection. I deleted the version-2 folder from
zoo_data and uploaded the config for that collection again.

Is it necessary to create all the collections again and index the data again?

Regards,
Vishal


Re: Solr 8.3.0

2019-11-19 Thread vishal patel
My autoAddReplicas is false. Is it necessary to change it? Also, the replication
factor is 1. Does it need to be updated?

Regards,
Vishal

Sent from Outlook<http://aka.ms/weboutlook>

From: Erick Erickson 
Sent: Monday, November 18, 2019 9:47 PM
To: solr-user@lucene.apache.org 
Subject: Re: Solr 8.3.0

The Collections API ADDREPLICA command.

Best,
Erick
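
A hedged sketch of such a call, reusing the collection name from the CREATE URL in
the original message (the host, port, and node value are assumptions); the new
replica pulls its index from the shard leader, so no re-indexing is needed:

http://191.162.100.148:7971/solr/admin/collections?action=ADDREPLICA&collection=actionscomments&shard=shard1&node=192.168.100.150:7971_solr

Repeat the call once per shard. The maxShardsPerNode=1 used in the original CREATE
may need to be relaxed, or the replica placed on a different node, for the
ADDREPLICA to succeed.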




Issue in SolrInputDocument on Upgrade time

2019-12-02 Thread vishal patel
Hi,

I am getting the error below while converting JSON to my object. I am using the
Gson class (gson-2.2.4.jar) to generate JSON from an object and an object from
JSON. Gson's fromJson() method throws the error below.
Note: this was working fine with solr-solrj-5.2.0.jar, but it causes an issue when
I use solr-solrj-6.1.0.jar or higher. As I checked, the SolrInputDocument class
changed in solr-solrj-5.5.0.

java.lang.IllegalArgumentException: Can not set 
org.apache.solr.common.SolrInputDocument field 
com.test.common.MySolrMessage.body to com.google.gson.internal.LinkedTreeMap
at 
sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
at 
sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
at 
sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
at java.lang.reflect.Field.set(Field.java:764)
at 
com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:108)
at 
com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:185)
at 
com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:40)
at 
com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:81)
at 
com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:1)
at 
com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:106)
at 
com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:185)
at com.google.gson.Gson.fromJson(Gson.java:825)
at com.google.gson.Gson.fromJson(Gson.java:790)
at com.google.gson.Gson.fromJson(Gson.java:739)
at com.google.gson.Gson.fromJson(Gson.java:711)


// Note: generic type parameters below were stripped by the mailing-list archive;
// they have been restored as a best guess (<T>, <IMessage>).
public class MySolrMessage<T> implements IMessage
{
    private static final long serialVersionUID = 1L;
    private T body = null;
    private String collection;
    private int action;
    private int errorCode;
    private long msgId;
    // a few parameterized constructors
    // getter and setter methods for all of the above attributes
}

public interface IMessage extends Serializable
{
    public long getMsgId();
    public void setMsgId(long id);
    public Object getBody();
    public void setBody(Object o);
    public void setErrorCode(int ec);
    public int getErrorCode();
}

public class Request {
    LinkedList<IMessage> msgList = new LinkedList<IMessage>();

    public Request() {
    }

    public Request(LinkedList<IMessage> l) {
        this.msgList = l;
    }

    public LinkedList<IMessage> getMsgList() {
        return this.msgList;
    }
}

@JsonAutoDetect(JsonMethod.FIELD)
@JsonSerialize(include = JsonSerialize.Inclusion.NON_NULL)
public class Request2
{
    @JsonProperty
    @JsonDeserialize(as = LinkedList.class, contentAs = MySolrMessage.class)
    LinkedList<MySolrMessage<SolrInputDocument>> msgList =
            new LinkedList<MySolrMessage<SolrInputDocument>>();

    // constructor names assumed to be Request2 (the post showed Request,
    // which would not compile)
    public Request2()
    {
    }

    public Request2(LinkedList<MySolrMessage<SolrInputDocument>> l)
    {
        this.msgList = l;
    }

    public LinkedList<MySolrMessage<SolrInputDocument>> getMsgList()
    {
        return this.msgList;
    }
}


public class Test {

    public static void main(String[] args) {
        SolrInputDocument solrDocument = new SolrInputDocument();
        solrDocument.addField("id", "1234");
        solrDocument.addField("name", "test");
        MySolrMessage<SolrInputDocument> asm =
                new MySolrMessage<SolrInputDocument>(solrDocument, "collection1", 1);
        IMessage message = asm;
        List<IMessage> msgList = new ArrayList<IMessage>();
        msgList.add(message);
        LinkedList<IMessage> ex = new LinkedList<IMessage>();
        ex.addAll(msgList);
        Request request = new Request(ex);
        try
        {
            String json = "";
            Gson gson = (new GsonBuilder()).serializeNulls().create();
            gson.setASessionId((String) null); // not a stock Gson method; presumably a custom extension
            json = gson.toJson(request);
            Gson gson2 = new Gson();
            Request2 retObj = gson2.fromJson(json, Request2.class); // this throws the error above
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}

Any idea?
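
One possible direction, sketched below (an assumption, not taken from this thread):
register a custom Gson deserializer for SolrInputDocument so Gson stops
materializing the body field as its internal LinkedTreeMap. The SolrInputDocument
class change you noted around 5.5 is likely what trips Gson's reflective handling,
and a registered adapter takes precedence over it. The field-copying logic here is
a sketch and must be adapted to how your serialized JSON actually nests the
document's fields.

import java.lang.reflect.Type;
import java.util.Map;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;
import com.google.gson.JsonParseException;

import org.apache.solr.common.SolrInputDocument;

public class SolrInputDocumentAdapter implements JsonDeserializer<SolrInputDocument> {
    @Override
    public SolrInputDocument deserialize(JsonElement json, Type typeOfT,
                                         JsonDeserializationContext context) throws JsonParseException {
        SolrInputDocument doc = new SolrInputDocument();
        // Copy each top-level JSON property into the document as a field value.
        // Adjust this mapping if your JSON nests fields differently.
        for (Map.Entry<String, JsonElement> e : json.getAsJsonObject().entrySet()) {
            JsonElement v = e.getValue();
            doc.addField(e.getKey(), v.isJsonPrimitive() ? v.getAsString() : v.toString());
        }
        return doc;
    }
}

It would be wired in where the failing fromJson call is made:

Gson gson2 = new GsonBuilder()
        .registerTypeAdapter(SolrInputDocument.class, new SolrInputDocumentAdapter())
        .create();
Request2 retObj = gson2.fromJson(json, Request2.class);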



Regards,
Vishal


NPE on exceeding timeAllowed on SOLR-8.1.1

2019-12-03 Thread Vishal Patel
9+72982+72983+72985+72986+72988+72991+72992+[72994+TO+72996]+72999+73000+73002+73005+[73009+TO+73014]+[73017+TO+73019]+73021+[73023+TO+73030]+[73032+TO+73035]+[73038+TO+73049]+[73052+TO+73060]+[73062+TO+73064]+[73068+TO+73072]+[73077+TO+73080]+[73085+TO+73098]+73103+73105+73106+73109+73111+73112+73116+73118+[73120+TO+73126]+73128+73132+73133+[73135+TO+73138]+[73140+TO+73143]+73145+[73147+TO+73149]+[73151+TO+73153]+73156+[73159+TO+73161]+[73163+TO+73170]+[73172+TO+73176]+[73178+TO+73180]+[73182+TO+73185]+73187+[73189+TO+73194]+73196+73197+[73199+TO+73201]+[73203+TO+73207]+[73209+TO+73214]+[73216+TO+73239]+73242+73243+73245+[73248+TO+73251]+73253+[73256+TO+73258]+73260+[73262+TO+73264]+[73266+TO+73271]+73274+73275+73281+73287+[73290+TO+73297]+73299+73307+73308+[73310+TO+73315]+[73317+TO+73323]+[73326+TO+73328]+73330+73332+7+[73335+TO+73337]+[73339+TO+73342]+73359+73361+[73363+TO+73372]+73374+73375+[73377+TO+73384]+73395+73396+73401+73403+73405+[73410+TO+73413]+[73415+TO+73419]+[73421+TO+73433]+[73436+TO+73454]+73456+73459+73462+73463+[73465+TO+73469]+73471+73475+73477+73485+73488+73490+73492+73495+73498+73499+73503+73504+[73507+TO+73511]+73515+[73518+TO+73520]+[73522+TO+73524]+[73529+TO+73531]+[73541+TO+73546]+[73548+TO+73550]+[73552+TO+73557]+[73559+TO+73562]+73573+73575+73576+73579+73594+73619+73625+73631+73634+73636+73637+[73640+TO+73645]+73648+73650+73653+73658+73667+[73670+TO+73672]+[73900+TO+73979]+[80497+TO+80516]+[81100+TO+81108]+81110+8+[82000+TO+82004]+[82006+TO+82042]+[89000+TO+89019]+9+90001+[90003+TO+90008]+[90010+TO+90025]+[90027+TO+90049]+[90051+TO+90070]+[90072+TO+90117]+[90119+TO+90135]+[90137+TO+90165]+[90200+TO+90217]+[90220+TO+90252]+[90254+TO+90330]+[90333+TO+90379]+90740+[90742+TO+90918]+[90920+TO+91146]+[91148+TO+91159]+[91173+TO+91185]+[91190+TO+91338]+[92001+TO+92006]+[99599+TO+99606]+[99625+TO+99645]+[99650+TO+99657]+99701+99843+99859+[99861+TO+99864]+99866+99869)&sort=sys_date+desc&shard.url=http://solrc3.qat.infodesk.com:8983/solr/repository_shard1_replica_n2/|http://solrc1.qat.infodesk.com:8983/solr/repository_shard1_replica_n1/&rows=50&version=2&q=(DOCUMENT:(("blood+test"+OR+"blood+tests"+OR+immunoassay*+OR+assay*+OR+"blood+screen"+OR+"blood+screening"+OR+"blood+screens"+OR+"blood+chemistry"+OR+"blood+chem"+OR+"clinical+chemistry"+OR+"clinical+chem"+OR+"donor+screening"+OR+"cellular+diagnostic"+OR+"cellular+diagnosis*"+OR+"cellular+diagnostics"+OR+"cellular+diagnose"+OR+"cellular+diagnoses"+OR+"cellular+diagnosing"+OR+immunodiagn*+OR+"molecular+diagnostic"+OR+"molecular+diagnosis"+OR+"molecular+diagnostics"+OR+"molecular+diagnose"+OR+"molecular+diagnoses"+OR+"molecular+diagnosing"+OR+"lab+test"+OR+"lab+tests"+OR+"lab+testing"+OR+"laboratory+test"+OR+"laboratory+testing"+OR+"laboratory+tests")+OR+("tissue+diagnosis"+OR+"tissue+based+diagnosis"+OR+"tissue+diagnoses"+OR+(tissue*+AND+diagnos*))+OR+(("blood+test"+OR+"blood+tests"+OR+immunoassay*+OR+assay*+OR+"blood+screen"+OR+"blood+screening"+OR+"blood+screens"+OR+"cellular+diagnostic"+OR+"cellular+diagnosis"+OR+"cellular+diagnostics"+OR+"cellular+diagnose"+OR+"cellular+diagnoses"+OR+"cellular+diagnosing"+OR+

Sometimes searching slow in Solr 6.1.0

2019-12-12 Thread vishal patel
We have 2 shards and 2 replicas in our live environment, and 26 collections in
total. We give 64GB RAM to a single Solr instance.
I have faced slow searching in our live environment. In our scenario, many update
requests (around 50,000) arrive within minutes. At those times searching becomes
slow.
The query is normal but takes 4 to 5 seconds. When the same query is executed again
later, it does not take long.
Our Solr config details:
* autoCommit is 6 and autoSoftCommit is 100.
* No caching added in the config.
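
For reference, assuming those truncated values are the stock solrconfig.xml
autoCommit maxTime of 60000 ms and autoSoftCommit maxTime of 100 ms (an assumption;
the archive appears to have dropped digits), the configuration looks like:

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:100}</maxTime>
</autoSoftCommit>

A 100 ms soft commit forces near-constant reopening of searchers, which invalidates
caches and can make searches slow exactly when updates are heavy.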

Can we control the update thread priority? Why is searching slow at those times?
Can we monitor update and search requests to gauge Solr Cloud performance?


Regards,
Vishal


Solr 8.3

2020-01-02 Thread vishal patel
When I create 2 shards and 2 replicas using the admin panel, shards and replicas
are automatically assigned to arbitrary IPs.
I want to assign a specific shard or replica to a specific Solr instance at the
time of creating a collection. Can I?

Regards,
Vishal


Re: Solr 8.3

2020-01-02 Thread vishal patel
My created collection layout in Solr Cloud is below:
10.38.33.24 is a shard and its replica is 10.38.33.27.
10.38.33.227 is a shard and its replica is 10.38.33.219.

I want to create a new collection with the same layout; the shard IPs must not
change for the new collection. How can I do this?



From: sudhir kumar 
Sent: Thursday, January 2, 2020 2:01 PM
To: solr-user@lucene.apache.org 
Subject: Re: Solr 8.3

sample url to create collection:

http://host:8080/solr/admin/collections?action=CREATE&name=collectionname&numShards=2&replicationFactor=3&maxShardsPerNode=2&createNodeSet=
host:8080_solr,host:8080_solr,host:8080_solr,host:8080_solr
&collection.configName=collectionconfig

On Thu, Jan 2, 2020 at 1:56 PM sudhir kumar  wrote:

> Hey Vishal,
>
> You can use createNodeSet property while creating collection which will
> allows you to create shards on specified IP.
>
> /admin/collections?action=CREATE&name=*name*&numShards=*number*
> &replicationFactor=*number*&*maxShardsPerNode*=*number*&*createNodeSet*=
> *nodelist*&collection.configName=*configname*
>
>
> Thanks,
>
> Sudhir


Re: Solr 8.3

2020-01-02 Thread vishal patel
I do not want to change the IP of an existing replica. I want to fix the IPs the
first time I create a collection.

I have 4 machines. my IP of each machine is below
machine1 10.38.33.28
machine2 10.38.33.29
machine3 10.38.33.30
machine4 10.38.33.31

I have created a Solr instance on each machine. When I create the first
collection (documents) using the admin panel, my structure becomes:
shard1 10.38.33.30
   replica1 10.38.33.31
shard2 10.38.33.28
   replica2 10.38.33.29

When I create a second collection (forms), my structure becomes:
shard1 10.38.33.28
   replica1 10.38.33.30
shard2 10.38.33.29
   replica2 10.38.33.31

I have attached a screenshot. [https://ibb.co/Yb2TpVX]

IPs are assigned to shards and replicas randomly when a collection is created, but
I want shard1 on 10.38.33.28, replica1 on 10.38.33.31, shard2 on 10.38.33.30, and
replica2 on 10.38.33.29 the first time I create a collection using the admin panel.

Can I fix the IP for a shard or replica when creating a new collection?

Regards,
Vishal



From: Erick Erickson 
Sent: Thursday, January 2, 2020 7:40 PM
To: solr-user@lucene.apache.org 
Subject: Re: Solr 8.3

No, you cannot change the IP of an existing replica. Either do as Sankar
mentioned when you first create the collection or use the MOVEREPLICA
collections API command.

MOVEREPLICA has existed for quite a long time, but if it’s not available, you 
can do the same with the ADDREPLICA command to add a replica to a specific 
node, wait for the replica to become fully active, then use DELETEREPLICA on 
the one you no longer want.

Best,
Erick

> On Jan 2, 2020, at 8:10 AM, Sankar Panda  wrote:
>
> Hi Vishal,
> You can create a empty nodeset and manually configure in the collection as
> desired in the admin page
> Thanks
> Sankar Panda



Re: Solr 8.3

2020-01-03 Thread vishal patel
Thanks for the reply. Actually, I don't want to change the IP of a replica; I want
to create each collection on specific shard IPs.
Per our production server configuration, we consider 10.38.33.28 to be shard1,
10.38.33.30 to be shard2, 10.38.33.31 the replica of shard1, and 10.38.33.29 the
replica of shard2.
We need to create all new collections the same way.


When I create a collection (documents) using the admin panel, it is created like
this:
shard1 10.38.33.30
   replica1 10.38.33.31
shard2 10.38.33.28
   replica2 10.38.33.29

What we actually want is:
shard1 10.38.33.28
   replica1 10.38.33.31
shard2 10.38.33.30
   replica2 10.38.33.29

Regards,
Vishal

From: Sankar Panda 
Sent: Friday, January 3, 2020 12:36 PM
To: solr-user@lucene.apache.org 
Subject: Re: Solr 8.3

Hi Vishal,

You can go to the collection in the admin console and manually change the IP
address as you want. Remove the replica and add it as per your requirements. Solr
Cloud provides this option.
Thanks
Sankar panda


Re: Solr 8.3

2020-01-05 Thread vishal patel
Thanks, Erick.

The solution you provided works for us. The reason we need it is as follows.

We currently use Solr 6.1 and plan to switch to 8.3. In 6.1, the shards and
replicas of all collections are kept on separate nodes. In production, we often
face a replica going into recovery mode; if it does not respond after some time,
we have to restart it. Now consider the same scenario on 8.3: if, on one IP, a few
collections have a shard and others have a replica, anything that goes into
recovery may be hard to restart or manage.

If you can suggest a better way to manage this or avoid such a scenario, please do.

Regards,
Vishal

From: Erick Erickson 
Sent: Friday, January 3, 2020 8:39 PM
To: solr-user@lucene.apache.org 
Subject: Re: Solr 8.3

Well, you can’t do what you want. The Admin UI is
intended as a way for new users to get
started, it was never intended to support all of the
possible options.

Look at the documentation for the Collections
CREATE API, for instance:
https://lucene.apache.org/solr/guide/8_1/collections-api.html

In particular createNodeSet, and createNodeSet.shuffle. If
you want to control absolutely everything, the special
value EMPTY for createNodeSet won’t create any replicas
and you can place each one individually with ADDREPLICA. Of
course you can script this if you need automation.
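
A hedged sketch of that approach using the IPs from this thread (the port and
collection name are assumptions):

http://10.38.33.28:8983/solr/admin/collections?action=CREATE&name=documents&numShards=2&replicationFactor=1&createNodeSet=EMPTY&collection.configName=documents
http://10.38.33.28:8983/solr/admin/collections?action=ADDREPLICA&collection=documents&shard=shard1&node=10.38.33.28:8983_solr
http://10.38.33.28:8983/solr/admin/collections?action=ADDREPLICA&collection=documents&shard=shard1&node=10.38.33.31:8983_solr
http://10.38.33.28:8983/solr/admin/collections?action=ADDREPLICA&collection=documents&shard=shard2&node=10.38.33.30:8983_solr
http://10.38.33.28:8983/solr/admin/collections?action=ADDREPLICA&collection=documents&shard=shard2&node=10.38.33.29:8983_solr

The first replica added to each shard typically becomes its leader, which matches
the desired "shards on .28/.30, replicas on .31/.29" layout.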

What you haven’t explained is _why_ you want to do this. Very
often taking this kind of control is unnecessary and a distraction.
That said, when it _is_ necessary you need to use the Collections
API rather than the admin UI.


Best,
Erick


