Parent Child Schema Design

2016-01-06 Thread Pranaya Behera

Hi,
I have read yonik.com/solr-nested-objects/, which states that no additional schema changes are needed beyond the _root_ field, which is already present in schema.xml. But it never specifies what the schema should look like for the child documents. The post also indexes data into Solr with curl and JSON, whereas I am using a Python client.


I have a core named products. Each product is one document, but it has several interlinked child documents. As of now the schema is a single flat structure. If I wanted to use a parent-child relationship, how would I go about it? Sample current schema:


[The <field .../> definitions were stripped by the archive; only attribute fragments survive: required="true" multiValued="false", multiValued="true", required="false" multiValued="false", required="false".]

Now I would like to add child documents to it. Let's say I add another entity named steps, containing id, product_id, name and description. Steps would be multivalued, as each product has multiple steps.


Can someone help me figure out how to go about this?
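For reference, the JSON update format from the blog post maps directly onto a Python dict: child documents go under the special _childDocuments_ key, and no schema changes are needed beyond _root_, though every document (parent and child) needs its own id. A minimal sketch, with the product/steps field names taken from the question and all values invented for illustration (a real call would POST the payload to http://localhost:8983/solr/products/update?commit=true with Content-Type: application/json):

```python
import json

# Parent product document with nested "steps" child documents under
# the special _childDocuments_ key; each child carries its own id.
product = {
    "id": "prod-1",
    "name": "Some product",
    "_childDocuments_": [
        {"id": "prod-1-step-1", "product_id": "prod-1",
         "name": "Step one", "description": "First step"},
        {"id": "prod-1-step-2", "product_id": "prod-1",
         "name": "Step two", "description": "Second step"},
    ],
}

# Solr's JSON update endpoint accepts a list of documents.
payload = json.dumps([product])
print(payload)
```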

--
Thanks & Regards
Pranaya Behera



Re: Parent Child Schema Design

2016-01-07 Thread Pranaya Behera

Hi Binay,
Are you saying there is no need to add anything to my existing schema? That while indexing, all I have to provide is a _childDocuments_ key with whatever key => value pairs I want, without declaring them in schema.xml?


On Thursday 07 January 2016 01:47 PM, Binoy Dalal wrote:

How to index such documents is also covered in the same blog post, under Indexing Nested Documents.
You just need to add a JSON key _childDocuments_ to the doc and specify the child doc as the value for that key.
There was a similar question on the mailing list earlier. You can find that
here:
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201512.mbox/


On Thu, 7 Jan 2016, 13:16 Pranaya Behera  wrote:


[quoted message trimmed]

--

Regards,
Binoy Dalal



--
Thanks & Regards
Pranaya Behera



Solrcloud hosting

2016-01-17 Thread Pranaya Behera

Hi,
I have 1 ZooKeeper server and 3 Solr servers. To access the search endpoint, which Solr server's URL should I use?

And is there any way to assign one domain to this SolrCloud cluster, and how?

--
Thanks & Regards
Pranaya Behera



Solrcloud error on finding active nodes.

2016-01-27 Thread Pranaya Behera

Hi,
I have created one SolrCloud collection with:

curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=card&numShards=2&replicationFactor=2&maxShardsPerNode=2&createNodeSet=localhost:8983,localhost:8984,localhost:8985&collection.configName=igp"

It reported success. In the Solr admin UI I can see the collection named card pointing to two shards in the radial graph, but nothing on the Graph tab. Both shards are shown in the leader color.

When I tried to index data into this collection, I got this error:

Indexing cardERROR StatusLogger No log4j2 configuration file found. 
Using default configuration: logging only errors to the console.
16:49:21.899 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 0
16:49:21.911 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 1
16:49:21.915 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 2
16:49:21.925 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 3
16:49:21.928 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 4
16:49:21.931 [main] ERROR 
org.apache.solr.client.solrj.impl.CloudSolrClient - Request to 
collection card failed due to (510) 
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request., retry? 5
org.apache.solr.common.SolrException: Could not find a healthy node to 
handle the request.
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:954)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)

at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:86)
at com.igp.solrindex.CardIndex.index(CardIndex.java:75)
at com.igp.solrindex.App.main(App.java:19)


Why am I getting this error?

--
Thanks & Regards
Pranaya Behera



Re: Solrcloud error on finding active nodes.

2016-01-27 Thread Pranaya Behera

Hi,
I am using Solr 5.4.0. In the admin UI, each node shows 2 shards in the leader color. ZooKeeper is configured correctly, using just the example config on a standalone server.


On Thursday 28 January 2016 03:16 AM, Susheel Kumar wrote:

Hi,

I haven't seen this error before, but which version of Solr are you using, and is ZooKeeper configured correctly? Do you see nodes down/active/leader etc. under Cloud in the Admin UI?

Thanks,
Susheel

On Wed, Jan 27, 2016 at 11:51 AM, Pranaya Behera 
wrote:


[quoted message trimmed]




--
Thanks & Regards
Pranaya Behera



Re: Solrcloud error on finding active nodes.

2016-01-27 Thread Pranaya Behera

Hi,
I have checked in the admin UI, and I have now created 3 shards with 2 replicas per shard and 1 shard per node. This is what I get:

{"card":{
  "replicationFactor":"2",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "shards":{
    "shard1":{"range":"8000-d554","state":"active","replicas":{}},
    "shard2":{"range":"d555-2aa9","state":"active","replicas":{}},
    "shard3":{"range":"2aaa-7fff","state":"active","replicas":{}}}}}

There are no replicas. How is this possible? This is what I used to create the collection:

curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=card&numShards=3&replicationFactor=2&maxShardsPerNode=1&createNodeSet=localhost:8983_solr,localhost:8984_solr,localhost:8985_solr&createNodeSet.shuffle=true&collection.configName=igp"





On Thursday 28 January 2016 03:16 AM, Susheel Kumar wrote:

[quoted messages trimmed]




--
Thanks & Regards
Pranaya Behera



Re: Solrcloud error on finding active nodes.

2016-01-27 Thread Pranaya Behera
It only happens when I specify createNodeSet (a comma-separated list of nodes). If I remove it, everything works as expected.
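A likely explanation, judging from the later CREATE call in this thread that lists nodes as localhost:8983_solr: createNodeSet expects live node names (host:port_context, as listed under /live_nodes in ZooKeeper), not bare host:port pairs. A sketch of building the CREATE URL that way, with the collection and config names from the thread:

```python
from urllib.parse import urlencode

# Node names as they appear in ZooKeeper's /live_nodes (note the
# "_solr" context suffix) rather than bare host:port pairs.
nodes = ["localhost:8983_solr", "localhost:8984_solr", "localhost:8985_solr"]

params = {
    "action": "CREATE",
    "name": "card",
    "numShards": 2,
    "replicationFactor": 2,
    "maxShardsPerNode": 2,
    "createNodeSet": ",".join(nodes),
    "collection.configName": "igp",
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```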


On Thursday 28 January 2016 12:45 PM, Pranaya Behera wrote:

[quoted messages trimmed]


--
Thanks & Regards
Pranaya Behera



How to get parent as well as children with one query ?

2016-02-01 Thread Pranaya Behera

Hi,
I have my parent documents marked by a boolean field named isParent. The children have their own ids, which don't match the parent's.

I am querying with query.setQuery("level:0"). This gives me all the parent documents, but not the associated children.
I have looked at
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers
and
https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransformerFactory
but I couldn't fully understand how to achieve this. Could someone give an example of each approach?
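As a rough sketch, the two linked pages describe complementary approaches, expressed here as plain query parameters (the isParent field is from the question; the child-side query term name:foo is invented for illustration, and this assumes parents and children were indexed together as one block):

```python
# 1) Block Join Parent Query parser: match on child fields, return
#    the matching children's parents.
parent_query = {
    "q": '{!parent which="isParent:true"}name:foo',
}

# 2) ChildDocTransformer: match parents directly, and attach each
#    parent's children to the response via [child] in fl.
with_children = {
    "q": "isParent:true",
    "fl": "*,[child parentFilter=isParent:true limit=100]",
}

print(parent_query["q"])
print(with_children["fl"])
```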


--
Thanks & Regards
Pranaya Behera



Multi-level nested documents query

2016-02-04 Thread Pranaya Behera

Hi,
 I have documents that are indexed are like this:

product
-isParent:true
   - child1
   -isParent:true
   - child1_1
   - child1_2
   - child1_3
   - child2
   -isParent:true
   - child2_1
   - child2_2
   - child2_3

I have used fl=*,[child parentFilter=isParent:true], but it doesn't give back the children.
expand=true&expand.q=*:*&expand.field=_root_&expand.rows=100 does give me all the children, but not in the single nested format that the query above produces when there is only one level of nesting; with multiple levels it gives me only the parent, not the children.
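One issue with isParent:true here is that it marks both the top-level product and the mid-level children, so parentFilter is ambiguous. A sketch of the usual remedy (per-level marker values; the level_s field and its values are hypothetical, not from the original schema):

```python
# Hypothetical per-level marker field: each level of the block gets a
# distinct value (product / child / grandchild) instead of a shared
# isParent:true flag, so each filter matches exactly one level.
params = {
    # Match only top-level parents...
    "q": "level_s:product",
    # ...and attach descendants; parentFilter must identify ONLY the
    # top-level parents, never the mid-level children.
    "fl": "*,[child parentFilter=level_s:product limit=100]",
}
print(params["fl"])
```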


--
Thanks & Regards
Pranaya Behera



Re: Multi-level nested documents query

2016-02-04 Thread Pranaya Behera

Hi Mikhail,
 Thank you for the link. I will check that blog post.

On Friday 05 February 2016 01:42 AM, Mikhail Khludnev wrote:

Hello,

I'm not sure that it's achievable overall, but at least you need to use different parent fields/terms/filters across levels, as in
http://blog.griddynamics.com/2013/12/grandchildren-and-siblings-with-block.html


On Thu, Feb 4, 2016 at 8:39 PM, Pranaya Behera 
wrote:


[quoted message trimmed]






--
Thanks & Regards
Pranaya Behera



Currency field doubts

2016-03-02 Thread Pranaya Behera

Hi,
For currency, as suggested in the wiki and the reference guide, the field type is currency; it defaults to USD and takes its exchange rates from the currency.xml file in the conf dir. We have a script that fetches current exchange rates from Google APIs and symlinks the XML file into the conf dir. In SolrCloud mode, any config change has to be uploaded to ZooKeeper and the collection reloaded before the change takes effect. The wiki says: "Replication is supported, given that you explicitly configure replication for currency.xml. Upon the arrival of a new version of currency.xml, Solr slaves will do a core reload and begin using the new exchange rates. See SolrReplication <https://wiki.apache.org/solr/SolrReplication#How_are_configuration_files_replicated.3F> for more. SolrCloud <https://wiki.apache.org/solr/SolrCloud> is also supported since we use ResourceLoader to load the file." But when I tried this, it neither uploaded the configset to ZooKeeper nor reloaded the collection. How can I handle this without a manual ZooKeeper upload and collection reload?


Now suppose some prices are stored in USD and some in INR. While querying, we can put currency(fieldname, CURRENCY_CODE) in the fl param, e.g. currency(mrp, INR) or currency(mrp, USD), and the result is converted according to currency.xml. Is it possible to return the computed mrp in several currencies at once? When I try currency(mrp, INR, USD, EUR) I get an error. Can this be done, and how?
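The currency() function takes a single target currency, but assuming Solr's fl pseudo-field aliasing (alias:function, available in 4.x and later), several conversions can be requested in one query instead of currency(mrp, INR, USD, EUR). A sketch of the request parameters:

```python
params = {
    "q": "*:*",
    # One aliased pseudo-field per target currency; each appears in
    # the response under its alias (mrp_inr, mrp_usd, mrp_eur).
    "fl": ",".join([
        "id",
        "mrp_inr:currency(mrp,INR)",
        "mrp_usd:currency(mrp,USD)",
        "mrp_eur:currency(mrp,EUR)",
    ]),
}
print(params["fl"])
```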


--
Thanks & Regards
Pranaya Behera




How to get stats on currency field?

2016-04-14 Thread Pranaya Behera

Hi,
I have a currency field type. How do I get StatsComponent to work with it? Currently StatsComponent works with strings and numerics, but not with currency fields.
Another question: how do I copy only the value part of a currency field? E.g. if my field name is "mrp" and the value is "62.00,USD", and the currency field cannot be used in StatsComponent, how can I copy only 62.00 to another field?
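Since copyField can't split the "62.00,USD" pair, one workaround (an assumption, not an official recipe) is to split the value on the client before indexing and store the numeric part in a plain double field, which StatsComponent can aggregate; the mrp_amount_d field name is invented for illustration:

```python
def add_raw_amount(doc, currency_field="mrp", target_field="mrp_amount_d"):
    """Copy the numeric part of a '62.00,USD'-style value into a
    separate double field before the doc is sent to Solr."""
    value = doc.get(currency_field)
    if value:
        amount = value.split(",", 1)[0]  # keep "62.00", drop ",USD"
        doc[target_field] = float(amount)
    return doc

doc = add_raw_amount({"id": "p1", "mrp": "62.00,USD"})
print(doc["mrp_amount_d"])  # 62.0
```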


Streaming expression for suggester

2016-04-30 Thread Pranaya Behera

Hi,
I have two collections; let's call them A and B. I want the suggester to work across both collections when searching from the front-end application.
In collection A I have 4 different fields, all of which I want the suggester to use. Should I copy them into a combined field, use it in the spellcheck component, and then use that field for the suggester?

In collection B I have only 1 field.

When a user searches in the front-end application, I would like to show results from both collections. Would a streaming expression be a viable option here? If so, how? I couldn't find any documentation for a suggester streaming expression. If not, how should I approach this?
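Until something like a suggester stream exists, one fallback (purely a client-side sketch, assuming each collection's suggester returns (term, weight) pairs) is to query both collections' suggest handlers separately and merge the results in the application:

```python
def merge_suggestions(results_a, results_b, limit=10):
    """Merge (term, weight) suggestion lists from two collections,
    highest weight first, deduplicating on the suggestion term."""
    seen, merged = set(), []
    for term, weight in sorted(results_a + results_b,
                               key=lambda t: t[1], reverse=True):
        if term not in seen:
            seen.add(term)
            merged.append((term, weight))
    return merged[:limit]

print(merge_suggestions([("solr", 10), ("solrcloud", 5)],
                        [("solr", 8), ("suggester", 7)]))
```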


Re: Streaming expression for suggester

2016-05-01 Thread Pranaya Behera

Hi Joel,
If you could point me in the right direction, I would like to take a shot at it.


On Sunday 01 May 2016 10:38 PM, Joel Bernstein wrote:

This is the type of thing that Streaming Expressions do well, but there isn't one yet for the suggester. Feel free to add a SuggestStream JIRA ticket; it should be very easy to add.


Joel Bernstein
http://joelsolr.blogspot.com/

On Sat, Apr 30, 2016 at 6:30 AM, Pranaya Behera 
wrote:


[quoted message trimmed]





Re: Streaming expression for suggester

2016-05-02 Thread Pranaya Behera

Can't I return other fields in the response if I use the SuggestComponent?

On Monday 02 May 2016 08:13 AM, Joel Bernstein wrote:

Sure, take a look at the RandomStream. You can copy its basic structure but have it work with the suggester. The link below shows the test cases as well:

https://github.com/apache/lucene-solr/commit/7b5f12e622f10206f3ab3bf9f79b9727c73c6def

Joel Bernstein
http://joelsolr.blogspot.com/

On Sun, May 1, 2016 at 2:45 PM, Pranaya Behera 
wrote:


[quoted messages trimmed]






Sorting on child document field.

2016-05-18 Thread Pranaya Behera

Hi,

How can I sort the results of a block join parent query by a field from the child documents?


Thanks & Regards

Pranaya Behera



Re: Sorting on child document field.

2016-05-19 Thread Pranaya Behera
While searching the Lucene code base I found ToParentBlockJoinSortField, but it is not exposed in Solr, or even in SolrJ. How would I use it from SolrJ? I can't find any way to query with it through the UI.


On Thursday 19 May 2016 11:29 AM, Pranaya Behera wrote:

[quoted message trimmed]





Re: Sorting on child document field.

2016-05-19 Thread Pranaya Behera

An example: say I have a product document with the regular fields name, price, desc and is_parent, and child documents such as

CA: fields a, b, c, rank

and another child document type

CB: fields x, y, z.

I am querying with {!parent which="is_parent:true"}a:some AND b:somethingelse, so only CA child documents are searched; no other child type is touched. CA has the rank field, and I want to sort the parents by it.
A product contains multiple CA documents, but the query matches exactly one of them.


On Thursday 19 May 2016 04:09 PM, Pranaya Behera wrote:
[quoted messages trimmed]







Re: Sorting on child document field.

2016-05-20 Thread Pranaya Behera

Adding the lucene user mailing list to this thread.


On Thursday 19 May 2016 06:55 PM, Pranaya Behera wrote:

[quoted messages trimmed]




Re: Sorting on child document field.

2016-05-23 Thread Pranaya Behera

Hi Mikhail,
 I saw the blog post and tried to do that with the parent block 
query {!parent}, as I don't have a reference to the parent in the child 
to use with {!join}. This is my result: 
https://gist.github.com/shadow-fox/b728683b27a2f39d1b5e1aac54b7a8fb . 
This yields the results in descending order even when I use score=max. How 
would I get them in ascending order? Any suggestions on where I am messing up?


On Saturday 21 May 2016 12:03 AM, Mikhail Khludnev wrote:

Hello,

Check this
http://blog-archive.griddynamics.com/2015/08/scoring-join-party-in-solr-53.html
Let me know if you need further comments.









Re: Sorting on child document field.

2016-05-23 Thread Pranaya Behera

Hi Mikhail,
Thanks. I missed it completely; I thought it would be handled by 
default.


On Monday 23 May 2016 02:08 PM, Mikhail Khludnev wrote:

https://cwiki.apache.org/confluence/display/solr/Common+Query+Parameters#CommonQueryParameters-ThesortParameter

sort=score asc

--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics


<mailto:mkhlud...@griddynamics.com>




Block Join Facet not giving results.

2016-06-13 Thread Pranaya Behera

Hi,

I have followed what the documentation says in this page: 
https://cwiki.apache.org/confluence/display/solr/BlockJoin+Faceting


This is my current select requestHandler in solrconfig.xml:

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
  </lst>
  <arr name="last-components">
    <str>bjqFacetComponent</str>
  </arr>
</requestHandler>

And the bjqFacetComponent is:

<searchComponent name="bjqFacetComponent" class="org.apache.solr.search.join.BlockJoinFacetComponent"/>
<searchComponent name="bjqDocsetFacetComponent" class="org.apache.solr.search.join.BlockJoinDocSetFacetComponent"/>


<requestHandler name="/bjqfacet" class="org.apache.solr.handler.component.SearchHandler">
  <lst name="invariants">
    <str name="shards.qt">/bjqfacet</str>
  </lst>
  <arr name="last-components">
    <str>bjqFacetComponent</str>
  </arr>
</requestHandler>

<requestHandler name="/bjqdocsetfacet" class="org.apache.solr.handler.component.SearchHandler">
  <lst name="invariants">
    <str name="shards.qt">/bjqdocsetfacet</str>
  </lst>
  <arr name="last-components">
    <str>bjqDocsetFacetComponent</str>
  </arr>
</requestHandler>

As the documentation says.
I am using Solr 6.0.1. I copied the schema to 
solr/server/configsets/, uploaded it to ZooKeeper via the command line, and 
then reloaded and re-indexed the collection. But 
the select handler never responds to child.facet.field for a field in the 
child documents. It always gives me zero results with nothing inside the 
array. I have looked at the documents I am indexing and found that 
there is indeed data in my child documents to match the facet field, but 
alas no results.
It gives no results with either the select handler or the bjqfacet handler. 
With the select handler all I get is the keys but not the values, i.e. the 
counts are always zero. With the bjqfacet handler I get an 
empty array: no keys, no values.


--
Thanks & Regards
Pranaya Behera



Re: Block Join Facet not giving results.

2016-06-13 Thread Pranaya Behera

Hi Mikhail,
Here is the response for

 q=*:*&debugQuery=true:

https://gist.github.com/shadow-fox/495c50cda339e2a18550e41a524f03f0


On Tuesday 14 June 2016 01:59 AM, Mikhail Khludnev wrote:

Can you post response on q=*:*&debugQuery=true?







--
Thanks & Regards
Pranaya Behera



Re: Block Join Facet not giving results.

2016-06-14 Thread Pranaya Behera

Here it is:
https://gist.github.com/shadow-fox/150c1e5d11cccd4a5bafd307c717ff85

On Tuesday 14 June 2016 01:03 PM, Mikhail Khludnev wrote:


OK. And how does response looks like on meaningful child.facet.field 
request with debugQuery?





--
Thanks & Regards
Pranaya Behera



Filter query wrt main query on block join

2016-06-14 Thread Pranaya Behera

Hi,
 I have indexed nested documents into Solr.
How do I filter on the main query using a block join query?
Here is what I have in terms of documents:
Document A -> id, name, title, is_parent=true
Document B -> id, x, y, z
Document C -> id, a, b
Documents B & C are children of A. I want to get all the parents whose 
children have x and y. So the main query becomes:

q={!parent which="is_parent:true"}x:"Some string" y:"Some other string"

Now I want to filter the result set of the previous query by which parents 
have a child with field a.
Is fq={!parent which="is_parent:true"}a:"Specific String" along with the 
q specified above correct?


Is the main query, i.e. "q", correct in terms of syntax? If not, how can 
I improve it?
What would be a correct "fq" for filtering the result set based on the 
children each of its documents has?
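As a rough sketch of how such a request could be assembled client-side (this does not validate the block-join semantics, which is exactly the open question above; it only URL-encodes the two parameters, with field names and values taken from the example):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Encodes the q and fq parameters from the question into a request
// query string. Whether fq with {!parent} is semantically right is
// the open question above; this only shows the plumbing.
public class BlockJoinParams {

    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    public static String queryString() {
        String q  = "{!parent which=\"is_parent:true\"}x:\"Some string\" y:\"Some other string\"";
        String fq = "{!parent which=\"is_parent:true\"}a:\"Specific String\"";
        return "q=" + enc(q) + "&fq=" + enc(fq);
    }

    public static void main(String[] args) {
        System.out.println(queryString());
    }
}
```

The resulting string can be appended to the collection's /select endpoint; encoding the local-params syntax matters because it contains `{`, `!`, and quotes.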




Getting dynamic fields using LukeRequest.

2016-08-09 Thread Pranaya Behera

Hi,
 I have the following script to retrieve all the fields in the 
collection. I am using SolrCloud 6.1.0.

LukeRequest lukeRequest = new LukeRequest();
lukeRequest.setNumTerms(0);
lukeRequest.setShowSchema(false);
LukeResponse lukeResponse = lukeRequest.process(cloudSolrClient);
Map<String, LukeResponse.FieldInfo> fieldInfoMap = 
lukeResponse.getFieldInfo();
for (Map.Entry<String, LukeResponse.FieldInfo> entry : 
fieldInfoMap.entrySet()) {
  entry.getKey(); // Here fieldInfoMap sometimes has size 0 and
                  // sometimes contains incomplete data.
}


Setting showSchema to true doesn't yield any result; only setting it to 
false yields results, and even those are incomplete. I can see from the 
documents that the index contains more fields than it reports.


LukeRequest hits 
/solr/product/admin/luke?numTerms=0&wt=javabin&version=2 HTTP/1.1 .


How it should be configured for solrcloud ?
I have already mentioned

<requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />

in the solrconfig.xml. It doesn't matter whether it is present in the 
solrconfig or not, as I am requesting it from SolrJ.




Fwd: Getting dynamic fields using LukeRequest.

2016-08-09 Thread Pranaya Behera




 Forwarded Message 
Subject:Getting dynamic fields using LukeRequest.
Date:   Tue, 9 Aug 2016 18:22:15 +0530
From:   Pranaya Behera 
To: solr-user@lucene.apache.org






Re: Getting dynamic fields using LukeRequest.

2016-08-09 Thread Pranaya Behera

Hi Steve,
  I did look at the Schema API, but it only gives the 
defined dynamic-field patterns, not the indexed dynamic fields. For fields 
indexed under a defined dynamic-field rule, I guess LukeRequest is 
the only option. (Please correct me if I am wrong.)

Hence I am unable to fetch every indexed field matching a defined 
dynamic field.


On 09/08/16 19:26, Steve Rowe wrote:

Not sure what the issue is with LukeRequest, but Solrj has Schema API support: 
<http://lucene.apache.org/solr/6_1_0/solr-solrj/org/apache/solr/client/solrj/request/schema/SchemaRequest.DynamicFields.html>

You can see which options are supported here: 
<https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-ListDynamicFields>

--
Steve
www.lucidworks.com







Re: Getting dynamic fields using LukeRequest.

2016-08-09 Thread Pranaya Behera
Also, when I hit each individual shard directly via the /admin/luke 
endpoint I get results that are close to correct, but against the whole 
collection it doesn't even show that there are dynamic fields.
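The per-shard workaround described above amounts to hitting each core's Luke endpoint directly and merging the field lists client-side; for example (host and core names are assumptions):

```
http://node1:8983/solr/product_shard1_replica1/admin/luke?numTerms=0&wt=json
http://node2:8983/solr/product_shard2_replica1/admin/luke?numTerms=0&wt=json
http://node3:8983/solr/product_shard3_replica1/admin/luke?numTerms=0&wt=json
```

Each response lists only the fields indexed in that core, so the union of the three is the collection-wide field set.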










Re: Getting dynamic fields using LukeRequest.

2016-08-12 Thread Pranaya Behera

Hi,
 With SolrJ I am getting inconsistent results. Previously it was 
working great, but now it does not give any expanded results, while 
the same search in the Solr admin UI does show them. By 
"previously" I mean right after the first release of 6.1.0. Now 
LukeRequest is not working, and getExpandedResults() always gives me 0 docs. 
Expanded results work in the Solr admin UI but not via SolrJ, and for 
LukeRequest I have to query each shard individually to get all the 
dynamically indexed fields.

Any solutions to this? Please mention if any more information is needed.











Inconsistent results with solr admin ui and solrj

2016-08-13 Thread Pranaya Behera

Hi,
I am running Solr 6.1.0 in SolrCloud mode. We have 3 ZooKeeper 
instances and 3 SolrCloud instances; all of them are active and 
up. One collection has 3 shards, and each shard has 2 replicas.

Every query, whether from SolrJ or the admin UI, returns inconsistent 
results, e.g.:

1. numFound fluctuates constantly.
2. A facet shows a count for a field value, but a filter query on that 
value returns 0 results.
3. Luke requests work per shard (though I am not sure they report all the 
dynamic fields correctly) but not on the whole collection when invoked from 
curl, and they don't work at all when called from SolrJ.
4. The admin UI shows expanded results; for the same query from SolrJ, 
getExpandedResults() gives 0 docs.

What could be the cause of all this? Any pointers on what to look for in 
the logs?


Re: Inconsistent results with solr admin ui and solrj

2016-08-13 Thread Pranaya Behera

Hi,
 I am using the Java client, i.e. SolrJ.

On 13/08/16 16:31, GW wrote:

No offense intended, but you are looking at a problem with your work. You
need to explain what you are doing not what is happening.

If you are trying to use PHP and the latest PECL/PEAR, it does not work so
well. It is considerably older than Solr 6.1.
This was the only issue I ran into with 6.1.











Re: Inconsistent results with solr admin ui and solrj

2016-08-13 Thread Pranaya Behera

Hi Alexandre,
  I am sure I am firing the same queries at the same 
collection every time.

How would Wireshark help? I'm sorry, I have no experience with that tool.

On 13/08/16 17:37, Alexandre Rafalovitch wrote:

Are you sure you are issuing the same queries to the same collections and
the same request handlers.

I would verify that before all else. Using network sniffers (Wireshark) if
necessary.

Regards,
 Alex






Re: Inconsistent results with solr admin ui and solrj

2016-08-15 Thread Pranaya Behera

Hi,
 a.) Yes, the index is static, not updated live. We index new documents 
over the old ones with this sequence: delete all docs, add the freshly 
fetched docs from the DB, and commit only after all the docs have been added 
to the cloud instance. The commit happens only once per collection.
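In SolrJ terms the sequence above is roughly the following (pseudocode sketch; client setup, batching, and error handling are omitted, and the collection name and helper names are assumptions):

```
// pseudocode, SolrJ-style
client.deleteByQuery("product", "*:*");   // 1. delete all docs (no commit yet)
for (batch : fetchFromDb()) {             // 2. add freshly fetched docs
    client.add("product", toSolrDocs(batch));
}
client.commit("product");                 // 3. single commit per collection
```

Until the final commit, searchers keep serving the old index; a replica that misses updates during this window will diverge exactly as described below.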
b.) I took one shard; below are the results for each of its 2 replicas.

Replica - 2
Last Modified: 33 minutes ago
Num Docs: 127970
Max Doc: 127970
Heap Memory Usage: -1
Deleted Docs: 0
Version: 14530
Segment Count: 5
Optimized: yes
Current: yes
Data:  /var/solr/data/product_shard1_replica2/data
Index: /var/solr/data/product_shard1_replica2/data/index.20160816040537452
Impl:  org.apache.solr.core.NRTCachingDirectoryFactory

Replica - 1
Last Modified: about 19 hours ago
Num Docs: 234013
Max Doc: 234013
Heap Memory Usage: -1
Deleted Docs: 0
Version: 14272
Segment Count: 7
Optimized: yes
Current: no
Data:  /var/solr/data/product_shard1_replica1/data
Index: /var/solr/data/product_shard1_replica1/data/index
Impl:  org.apache.solr.core.NRTCachingDirectoryFactory

c.) With the admin UI: if I query for all documents, *:*, it gives a 
different numFound each time.

e.g.
1.
{ "responseHeader":{ "zkConnected":true, "status":0, "QTime":7, 
"params":{ "q":"*:*", "indent":"on", "wt":"json", "_":"1471322871767"}}, 
"response":{"numFound":452300,"start":0,"maxScore":1.0,
2.
{ "responseHeader":{ "zkConnected":true, "status":0, "QTime":23, 
"params":{ "q":"*:*", "indent":"on", "wt":"json", "_":"1471322871767"}}, 
"response":{"numFound":574013,"start":0,"maxScore":1.0,

This is queried live from the Solr instances.


It happens with any type of query, whether I search the parent documents 
directly or search through the child documents to get parents. Sorting is 
used in both cases but on different fields: with a block join query the 
sort is on a child document field, otherwise it is on a parent document 
field.

d.) I don't find any errors in the logs, only warnings.

On 14/08/16 02:56, Jan Høydahl wrote:

Could it be that your cluster is not in sync, so that when Solr picks three 
nodes, results will vary depending on what replica answers?

A few questions:

a) Is your index static, i.e. not being updated live?
b) Can you try to go directly to the core menu of both replicas for each shard, 
and compare numDocs / maxDocs for each? Both replicas in each shard should have 
same count.
c) What are you querying on and sorting by? Does it happen with only one query 
and sorting?
d) Are there any errors in the logs?

If possible, please share some queries, responses, config, screenshots etc.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com








Re: Inconsistent results with solr admin ui and solrj

2016-08-16 Thread Pranaya Behera

Hi,
 I did as you said, and now the results are coming back consistent.
What should one look for when checking for these kinds of issues, such as 
mismatched counts, LukeRequest not returning all the fields, etc.? The 
replica sync is one: how can I programmatically use that info and sync 
the replicas? Is there a method in SolrJ?
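One low-tech way to spot an out-of-sync shard is to query each replica core directly with distrib=false and compare numFound (host and core names below are assumptions):

```
http://node1:8983/solr/product_shard1_replica1/select?q=*:*&rows=0&distrib=false
http://node2:8983/solr/product_shard1_replica2/select?q=*:*&rows=0&distrib=false
```

If the two numbers differ, that shard's replicas have diverged and one of them needs the kind of recovery Jan describes.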


On 16/08/16 14:50, Jan Høydahl wrote:

Hi,

There is clearly something wrong when your two replicas are not in sync. Could you 
go to the “Cloud->Tree” tab of admin UI and look in the overseer queue whether 
you find signs of stuck jobs or something?
Btw - what warnings do you see in the logs? Anything repeatedly popping up?

I would also try the following:
1. Take down the node hosting replica 1 (assuming that replica2 is the correct, 
most current)
2. Manually empty the data folder
3. Take the node up again
4. Verify that a full index recovery happens, and that they get back in sync
5. Run your indexing procedure.
6. Verify that both replicas are still in sync

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com








SolrCore is loading in the middle of indexing.

2016-08-23 Thread Pranaya Behera

Hi,
In the middle of indexing, the SolrCore gets reloaded, causing a 503 
error. Here is the stack trace of the issue:


[main] ERROR org.apache.solr.client.solrj.impl.CloudSolrClient - Request 
to collection product failed due to (503) 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
Error from server at http://x.x.x.x:8983/solr/product_shard3_replica1: 
Expected mime type application/octet-stream but got text/html. 



Error 503 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}


HTTP ERROR 503
Problem accessing /solr/product_shard3_replica1/update. Reason:
 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}



, retry? 0
[main] ERROR com.igp.solrindex.ProductIndex - Exception is
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error 
from server at http://10.0.2.6:8983/solr/product_shard3_replica1: 
Expected mime type application/octet-stream but got text/html. 



Error 503 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}


HTTP ERROR 503
Problem accessing /solr/product_shard3_replica1/update. Reason:
 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}




at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at com.igp.solrindex.ProductIndex.index(ProductIndex.java:225) 
[solrindex-1.0-SNAPSHOT.jar:?]
at com.igp.solrindex.App.main(App.java:17) 
[solrindex-1.0-SNAPSHOT.jar:?]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
Error from server at http://x.x.x.x:8983/solr/product_shard3_replica1: 
Expected mime type application/octet-stream but got text/html. 



Error 503 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}


HTTP ERROR 503
Problem accessing /solr/product_shard3_replica1/update. Reason:
 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore 
is loading,code=503}




at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:404) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:357) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$14(CloudSolrClient.java:674) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_91]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229) 
~[solrindex-1.0-SNAPSHOT.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[?:1.8.0_91]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[?:1.8.0_91]

at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_91]


Because of this error the indexing never completes.
What could be the issue here with the mime type and the core loading?
I am using Solr 6.1.0 with SolrCloud across 3 instances, with a ZooKeeper 
instance alongside each.
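As an aside, one common workaround while a core is temporarily loading is to retry the failed batch with exponential backoff instead of aborting the whole indexing run. The following is a minimal plain-Java sketch, not the actual indexer: the Runnable stands in for the real client.add(...) call, and the delay values and attempt limit are arbitrary assumptions.

```java
public class RetryOnLoading {
    // Exponential backoff delay: 500 ms, 1 s, 2 s, 4 s, ...
    public static long delayMs(int attempt) {
        return 500L << attempt;
    }

    // Run the batch, retrying up to maxAttempts times on a RuntimeException
    // (standing in for a wrapped 503 "SolrCore is loading" from the client).
    public static void withRetry(int maxAttempts, Runnable batch) {
        for (int attempt = 0; ; attempt++) {
            try {
                batch.run();
                return;
            } catch (RuntimeException e) {
                if (attempt + 1 >= maxAttempts) throw e;  // give up, rethrow
                try {
                    Thread.sleep(delayMs(attempt));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated batch that fails twice with a 503-like error, then succeeds.
        withRetry(5, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("503 SolrCore is loading");
        });
        System.out.println("succeeded after " + calls[0] + " attempts");
    }
}
```

This only papers over a transient reload; if the core keeps reloading mid-index, the underlying cause still needs to be found in the Solr logs.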




Nodes goes down but never recovers.

2017-04-20 Thread Pranaya Behera
Hi,
 Through SolrJ I am trying to upload configsets and create
collections in my solrcloud.

Setup:
1 Standalone zookeeper listening on 2181 port. version 3.4.10
-- bin/zkServer.sh start
3 Solr nodes (all running from the same solr.home), tested with both version
6.5.0 and version 6.2.1:
-- bin/solr -c -z localhost:2181 -p 8983
-- bin/solr -c -z localhost:2181 -p 8984
-- bin/solr -c -z localhost:2181 -p 8985

The first run of my Java application, which uploads the config and creates
the collections in Solr through ZooKeeper, is seamless and works fine.
Here is the clusterstatus after the first run.
https://gist.github.com/shadow-fox/5874f8b5de93fff0f5bcc8886be81d4d#file-3nodes-json

Stopped one solr node via:
-- bin/solr stop -p 8985
clusterstatus changed to:
https://gist.github.com/shadow-fox/5874f8b5de93fff0f5bcc8886be81d4d#file-3nodes1down-json

Till now everything is as expected.

Here is the remaining part where it confuses me.

Bring the downed node back up. The clusterstatus changes from 2 nodes up
with 1 node down to all 3 nodes marked down, including the node that was
just brought back up.
https://gist.github.com/shadow-fox/5874f8b5de93fff0f5bcc8886be81d4d#file-3nodes3down-json
The expected result is that the other nodes stay active while this one goes
into recovery mode and then becomes active, since it had data before I
stopped it using the script.

Now I added one more node to the cluster via
-- bin/solr -c -z localhost:2181 -p 8986
The clusterstatus changed to:
https://gist.github.com/shadow-fox/5874f8b5de93fff0f5bcc8886be81d4d#file-4node3down-json
This one just retains the previous state and adds the node to the cluster.


When the removed node, which was previously in the cluster, registered with
ZooKeeper, and holding data for the collections, is brought back up, why is
it not registered as active instead of every other node being marked down?
And what is the solution to this?

When we add more nodes to an existing cluster, how do we ensure that a new
node also gets the same collections/data, i.e. basically synchronizes with
the other nodes already present in the cluster, rather than manually creating
a collection for that specific node? As you can see from the clusterstate of
the last-added node, it is there in live_nodes but never got the collections
into its data dir.
Is there any other way to add a node to the existing cluster so that it
receives the cluster data?

For completeness, here is the code used to upload the config and create a
collection through CloudSolrClient in SolrJ (not the full code, just the part
where the operation happens):
https://gist.github.com/shadow-fox/5874f8b5de93fff0f5bcc8886be81d4d#file-code-java
That's all there is to creating a collection: upload the configset to
ZooKeeper, create the collection, and reload the collection if required.

I have tried this on my local macOS Sierra machine and also in an AWS
environment, with the same effect.



-- 
Thanks & Regards
Pranaya PR Behera


Re: Nodes goes down but never recovers.

2017-04-20 Thread Pranaya Behera
Hi,
Can someone on the mailing list confirm the same findings? I am at my wit's
end on how to fix this. Please guide me so I can create a patch for it.

On Thu, Apr 20, 2017 at 3:13 PM, Pranaya Behera
 wrote:
> [original message quoted in full; trimmed]



-- 
Thanks & Regards
Pranaya PR Behera


Re: Nodes goes down but never recovers.

2017-04-20 Thread Pranaya Behera
Hi Erick,
Even when they use different solr.home directories, which I have also tested
in an AWS environment, the same problem occurs.

Can someone verify the first message on their local setup?

On Fri, Apr 21, 2017 at 2:27 AM, Erick Erickson  wrote:
> Have you looked at the Solr logs on the node you try to bring back up?
> There are sometimes much more informative messages in the log files.
> The proverbial "smoking gun" would be messages about write locks.
>
> You say they are all using the same solr.home, which is probably the
> source of a lot of your issues. Take a look at the directory structure
> after you start up the example and you'll see different -s parameters
> for each of the instances started on the same machine, so the startup
> looks something like:
>
> bin/solr start -c -z localhost:2181 -p 898$1 -s example/cloud/node1/solr
> bin/solr start -c -z localhost:2181 -p 898$1 -s example/cloud/node2/solr
>
> and the like.
>
> Best,
> Erick
>
> On Thu, Apr 20, 2017 at 11:01 AM, Pranaya Behera
>  wrote:
>> [earlier messages quoted in full; trimmed]



-- 
Thanks & Regards
Pranaya PR Behera


Re: Nodes goes down but never recovers.

2017-04-24 Thread Pranaya Behera
Are there any other solutions for this?

On Fri, Apr 21, 2017 at 9:42 AM, Pranaya Behera
 wrote:
> [earlier messages quoted in full; trimmed]



-- 
Thanks & Regards
Pranaya PR Behera


Autoscaling in 8.2

2019-10-31 Thread Pranaya Behera
Hi,
I have one node started in SolrCloud mode, and I have created one collection
with the default configset. When I start another node that joins the same
cluster, i.e. the same ZooKeeper chroot, is there any way to create the
collection on it? Currently the new node is just sitting idle, not doing any
work.
The documentation does not mention this; it only covers replicas and shards.
When a new node joins the cluster, is there a way to propagate the collection
creation so that all the shards/replicas are created on it accordingly?

-- 
Thanks & Regards
Pranaya PR Behera


Re: Autoscaling in 8.2

2019-10-31 Thread Pranaya Behera
So my setup is this:
1 node
1 collection (created via collection API)
1 shard
1 replica
This node is connected to zookeeper.

Now let's say I add one more node to the ZooKeeper cluster. This node is new,
so no collection will be created on it automatically. It is a bare node, i.e.
just connected to the same cluster. Is there a way to sync the data to node 2,
which doesn't have the collection since it joined recently and no replica has
been created on it?

On Thu, 31 Oct, 2019, 5:58 PM Jörn Franke,  wrote:

> You need to create a replica of the collection on the other node:
>
> https://lucene.apache.org/solr/guide/6_6/collections-api.html
>
> See addreplica
>
> > On 31.10.2019 at 09:46, Pranaya Behera wrote:
> > [original message quoted in full; trimmed]
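As an illustration of the ADDREPLICA suggestion above, the corresponding Collections API call can be built as a plain URL. This is only a sketch: the host, collection, shard, and node names below are placeholder assumptions, and in a real application the same request would typically go through SolrJ instead of a hand-built URL.

```java
public class AddReplicaUrl {
    // Build a Collections API ADDREPLICA URL that places a replica of an
    // existing collection's shard on a specific node of the cluster.
    public static String build(String host, String collection, String shard, String node) {
        return "http://" + host + "/solr/admin/collections?action=ADDREPLICA"
                + "&collection=" + collection
                + "&shard=" + shard
                + "&node=" + node;
    }

    public static void main(String[] args) {
        // All names here are hypothetical examples.
        System.out.println(build("localhost:8983", "mycoll", "shard1",
                "192.168.1.5:8984_solr"));
    }
}
```

Issuing this request (e.g. via curl) for each shard of each collection is what makes a freshly joined node actually hold data instead of sitting idle in live_nodes.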
>