Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread biplobbiswas
I have the following class structure for indexing a Solr document. I am
using SolrJ 5.5.2 (the cluster runs the same Solr version, with the
collection in SolrCloud mode across 3 shards).

I added @Field(child = true) to the ChangedAttribute field, and even
though my document is indexed, it is flattened: the child object is
treated as a separate document.

So rather than 3 documents, I get 6 docs back from Solr when I query for
everything.

Any help with this issue is much appreciated.

import java.io.Serializable;

import org.apache.solr.client.solrj.beans.Field;

import lombok.Data;

@Data
public class EventDocument implements Serializable {

  private static final long serialVersionUID = 1L;

  @Field("id")
  private String solrId;
  @Field("eventName_t")
  private String eventName;
  @Field("Message_t")
  private String message;
  @Field(child = true)
  private ChangedAttribute changedAttributes;
}

@Data
class ChangedAttribute implements Serializable {

  private static final long serialVersionUID = 1L;

  @Field("id")
  private String id;
  @Field("AttributeName_t")
  private String attributeName;
  @Field("OldValue_t")
  private String oldValue;
  @Field("NewValue_t")
  private String newValue;
}
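For reference, nested documents can also be built without the annotation-based binder, by calling SolrInputDocument.addChildDocument directly. A minimal sketch, under the assumption that this mirrors the beans above — the field values, the document ids, and the collection name mentioned in the comment are illustrative, not from the original post:

```java
import java.util.List;

import org.apache.solr.common.SolrInputDocument;

public class NestedDocBuilder {

  // Build a parent document with one nested child, bypassing the
  // bean binder. Field names follow the EventDocument/ChangedAttribute
  // beans above; the values are made up for illustration.
  public static SolrInputDocument buildEventDoc() {
    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "event-1");
    parent.addField("eventName_t", "login");

    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "event-1-attr-1");
    child.addField("AttributeName_t", "status");
    child.addField("OldValue_t", "inactive");
    child.addField("NewValue_t", "active");

    // The child is indexed in the same block as the parent.
    parent.addChildDocument(child);
    return parent;
  }

  public static void main(String[] args) {
    SolrInputDocument doc = buildEventDoc();
    List<SolrInputDocument> children = doc.getChildDocuments();
    System.out.println("children: " + children.size()); // prints "children: 1"
    // To index: client.add("event_store", doc) on a CloudSolrClient, then commit.
  }
}
```

Note that even with this approach the child documents are stored as separate documents in the same block; how they come back depends on the query, as discussed later in the thread.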





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Nested-Document-is-flattened-even-with-Field-child-true-annotation-tp4335877.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread biplobbiswas
Update,

I checked with the following example as well and this also flattens the
results.

I took the example from here -
https://issues.apache.org/jira/browse/SOLR-1945


package com.airplus.poc.edl.spark.auditeventindexer;
import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.beans.Field;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

/**
 * @author Biplob Biswas on 19.05.2017.
 */

public class SolrNestedTest {

  public static void main(String[] args) throws IOException,
  SolrServerException {
new SolrNestedTest().test();
  }

  public void test() throws IOException, SolrServerException {

String zkHostString = "host:2181/solr";
CloudSolrClient client = new CloudSolrClient(zkHostString);

Test test = new Test();
test.setId("2");
Child c = new Child();
c.child = true;
c.id = "1";
test.setChild(c);
client.addBean("event_store", test, 10);

client.close();

  }

  public class Child {
@Field
public String id;
@Field
public boolean child;
  }

  public class Test {

@Field
private String id;

@Field(child = true)
private Child child;

public String getId() {
  return id;
}

public void setId(String id) {
  this.id = id;
}

public Child getChild() {
  return child;
}

public void setChild(Child child) {
  this.child = child;
}

  }
}



The response back  - 

{
  "responseHeader": {
"status": 0,
"QTime": 8,
"params": {
  "q": "*:*",
  "indent": "true",
  "wt": "json",
  "_": "1495194572357"
}
  },
  "response": {
"numFound": 2,
"start": 0,
"maxScore": 1,
"docs": [
  {
"id": "1",
"child": [
  true
]
  },
  {
"id": "2",
"_version_": 1567825059298410500
  }
]
  }
}





Re: Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread Mikhail Khludnev
Hello,

You need to use
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers
and
https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransformerFactory
to get the nested data back.
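Concretely, the two pieces combine roughly as follows. This is a sketch: it assumes the parent documents carry a marker field such as doc_type_s:parent, which the beans in the first message do not yet have — the block join parser needs some way to identify which documents in a block are parents:

```
q={!parent which="doc_type_s:parent"}AttributeName_t:status
fl=*,[child parentFilter=doc_type_s:parent]
```

The {!parent} query matches child documents and returns their parents; the [child] doc transformer then re-attaches the matching children to each parent in the response.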

On Fri, May 19, 2017 at 2:52 PM, biplobbiswas  wrote:

> Update,
>
> I checked with the following example as well and this also flattens the
> results.
> [...]



-- 
Sincerely yours
Mikhail Khludnev


Re: Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread biplobbiswas
Hi 
Mikhail Khludnev-2 wrote
> Hello,
> 
> You need to use
> https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers
> and
> https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransformerFactory
> to get the nested data back.
> 
> 
> -- 
> Sincerely yours
> Mikhail Khludnev

I had already gone through the links you posted, but they cover retrieving
nested data after indexing. My problem is that my documents are not being
indexed in a nested structure in the first place.

Could you please look at my earlier message as well, where I posted sample
code and the sample response I get back?

It is creating distinct documents for the nested structure.






Solr in NAS or Network Shared Drive

2017-05-19 Thread Ravi Kumar Taminidi
Hello. Scenario: we currently have 2 Solr servers running on 2 different
Linux machines. Is there any way we can locate the core on a NAS or network
shared drive so that both Solr instances use the same index?

Please let me know about any performance implications; our index size is
approximately 1 GB.

Thanks

Ravi



Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread David Hastings
I've always wanted to experiment with this, but you have to be very careful
that at most one of the cores does ANY writes. Also, if you have a suggester
index, you need to make sure that each core builds its own independently.
In any case, from everything I've read, the general answer is: don't do it.
I'd like to hear other people's thoughts on this, however.

On Fri, May 19, 2017 at 10:33 AM, Ravi Kumar Taminidi <
ravi.tamin...@whitepine-st.com> wrote:

> Hello,  Scenario: Currently we have 2 Solr Servers running in 2 different
> servers (linux), Is there any way can we make the Core to be located in NAS
> or Network shared Drive so both the solrs using the same Index.
> [...]


Session expired when executing streaming expression, but no long GC pauses ...

2017-05-19 Thread Timothy Potter
I'm executing a streaming expr and get this error:

Caused by: org.apache.solr.common.SolrException: Could not load
collection from ZK:
MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:638)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1482)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1092)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at 
org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:356)
... 38 more
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/collections/MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002/state.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:356)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:353)
at 
org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:1110)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1096)
... 43 more

I've scoured the GC logs for solr and there are no long pauses
(nothing even over 1 second) ... any ideas why that session could be
expired?


Re: Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread biplobbiswas
Wait, if I understand correctly: the documents are indexed like that, but we
can get them back as nested documents if we use the block join query parser?

So if I query normally with the default parser, I get all the documents
separately? Did I understand that correctly?

Thanks & regards
Biplob





Re: Session expired when executing streaming expression, but no long GC pauses ...

2017-05-19 Thread Joel Bernstein
You get this every time you run the expression?

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, May 19, 2017 at 10:44 AM, Timothy Potter 
wrote:

> I'm executing a streaming expr and get this error:
>
> Caused by: org.apache.solr.common.SolrException: Could not load
> collection from ZK:
> MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
> [...]
>


Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread Rick Leir
For an experiment, mount the NAS filesystem ro (readonly). Is there any way to 
tell Solr not to bother with a lockfile? And what happens if an update or add 
gets requested by mistake, does it take down Solr?

Why not do this all the simple way, and just replicate?

On May 19, 2017 10:41:19 AM EDT, David Hastings  
wrote:
>ive always wanted to experiment with this, but you have to be very
>careful
>that only one of the cores, or neither, can do ANY writes
>[...]

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

Re: Nested Document is flattened even with @Field(child = true) annotation

2017-05-19 Thread Rick Leir
Yes! And the join queries get complicated. Yonik has some good blogs on this.

On May 19, 2017 11:05:52 AM EDT, biplobbiswas  
wrote:
>Wait, if I understand correctly, the documents would be indexed like
>that but
>we can get back the document as nested if we perform the
>blockjoinqueryparsing?
>[...]

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

RE: Solr in NAS or Network Shared Drive

2017-05-19 Thread Davis, Daniel (NIH/NLM) [C]
You are better off just replicating to the slave using the replication
handler.

However, if there is no network connectivity, e.g. this is an offsite
cold/warm spare, then here is a solution:

The NAS likely supports some copy-on-write/snapshotting capabilities. If your
systems people will work with you, you can use the replication/backup handler
to take a NAS snapshot just after a hard commit, and then have the snapshot
replicated to another volume. I suspect Solr will have to be started on the
cold/warm spare when you fail over to the offsite copy, because I know of no
way to have the OS react to events when a snapshot is replicated by the NAS.

This kind of solution is what you might see for Oracle, or any other binary
ACID database, so you can look at best practices for integrating those
products with NetApp or EMC Celerra for more ideas.



Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread David Hastings
The reason I want to try it is that replication is not possible on a single
machine: the index size is around 350 GB plus another 400 GB, and I don't
have enough SSD to hold a replica from the master node. I also have a theory,
which I heard as well in a presentation at the LR conference in Boston this
past year, that multiple Solr instances on one machine perform better than
multiple machines. It would be interesting for Solr to have a
"read only"/"listen" state that does no writing to the index but keeps
referencing the index properties/version files.

On Fri, May 19, 2017 at 1:26 PM, Davis, Daniel (NIH/NLM) [C] <
daniel.da...@nih.gov> wrote:

> Better off to just do Replication to the slave using the replication
> handler.
> [...]
>


Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread Florian Gleixner
On 19.05.2017 16:33, Ravi Kumar Taminidi wrote:
> Hello,  Scenario: Currently we have 2 Solr Servers running in 2 different 
> servers (linux), Is there any way can we make the Core to be located in NAS 
> or Network shared Drive so both the solrs using the same Index.
> 
> Let me know if any performance issues, our size of Index is appx 1GB.
> 
> Thanks
> 
> Ravi
> 

The operating system can cache a local filesystem for an infinitely long
time, because no one else is allowed to change the data. With network
filesystems, the operating system cannot be sure that the data has not been
altered by someone else, so caches on network filesystems are usually
invalidated frequently.

I think you lose the OS caching — memory speed vs. network filesystem speed!
Not sure if mmap helps here.
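For context, "mmap" here refers to what Lucene's MMapDirectory does under the hood: it maps index files into the process address space via java.nio, so reads go through the OS page cache rather than explicit read() calls. A toy, stdlib-only sketch of that mechanism (not Solr code — just to show what memory-mapped reads look like, and why their benefit hinges on the page cache staying valid):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {

  // Map a file read-only and copy its contents out of the mapped region.
  // On a local filesystem these reads are served from the page cache;
  // on a network filesystem the cache may be invalidated at any time.
  public static String readMapped(Path p) throws IOException {
    try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      byte[] bytes = new byte[buf.remaining()];
      buf.get(bytes);
      return new String(bytes, StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempFile("seg", ".dat");
    Files.write(p, "index-bytes".getBytes(StandardCharsets.UTF_8));
    System.out.println(readMapped(p)); // prints "index-bytes"
  }
}
```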





Re: Session expired when executing streaming expression, but no long GC pauses ...

2017-05-19 Thread Timothy Potter
No, not every time, but there was no GC pause on the Solr side (no
gaps in the log, nothing in the gc log) ... in the zk log, I do see
this around the same time:

2017-05-05T13:59:52,362 - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
Closed socket connection for client /127.0.0.1:54140 which had
sessionid 0x15bd8bdd3500022
2017-05-05T13:59:52,818 - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@357] - caught
end of stream exception
org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to
read additional data from client sessionid 0x15bd8bdd3500023, likely
client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
[zookeeper-3.4.6.jar:3.4.6-1569965]
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
[zookeeper-3.4.6.jar:3.4.6-1569965]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-internal]
2017-05-05T13:59:52,819 - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
Closed socket connection for client /127.0.0.1:54200 which had
sessionid 0x15bd8bdd3500023
...

2017-05-05T14:00:00,001 - INFO  [SessionTracker:ZooKeeperServer@347] -
Expiring session 0x15bd8bdd3500023, timeout of 1ms exceeded

On Fri, May 19, 2017 at 9:48 AM, Joel Bernstein  wrote:
> You get this every time you run the expression?


Query on recovering from Shard Loss

2017-05-19 Thread Sudershan Madhavan
Hi,
Good day. I have a solrcloud cluster which has a collection with RF=2 and 
NumShards=3 on 6 Nodes. We want to test how to recover from unexpected 
situations like shard loss. So we will probably execute an rm -rf on the solr 
data directory on one of the replica or master. Now the question is, how will 
this shredded node recover from the shard loss? Are manual steps required (if
yes, then what needs to be done), or will it automatically recover from the 
replica?






Regards,

SUDERSHAN MADHAVAN | Digital River World Payments | Senior Database 
Administrator
m: +46 733 312525 |  
smadha...@digitalriver.com | 
digitalriver.com
Managing Director: Michael Roos | Registered Office: Stockholm | Company 
Number: 556548-0026
Textilgatan 31, SE-120 30 Stockholm, Sweden
Follow us on LinkedIn



Re: Query on recovering from Shard Loss

2017-05-19 Thread Erick Erickson
We need to clearly distinguish between losing a _shard_ and losing a
_replica_. You have RF=2 so you have two replicas in each shard.

If you stop a Solr node hosting one of the two replicas you'll see that the
leader function will switch to the running replica.

Now if you nuke the data directory then restart the node, then the index
will be replicated from the leader and no data is lost. It's all automatic,
just sit back and watch.

If you nuke the data directory for _all_ replicas that make up a particular
shard (2 in this case), there's nothing you can do to get it back except
re-index.

Best,
Erick

On Fri, May 19, 2017 at 4:51 AM, Sudershan Madhavan <
smadha...@digitalriver.com> wrote:

> Hi,
>
> Good day. I have a solrcloud cluster which has a collection with RF=2 and
> NumShards=3 on 6 Nodes. We want to test how to recover from unexpected
> situations like shard loss. So we will probably execute an *rm -rf *on
> the solr data directory on one of the replica or master. Now the question
> is, how will this shredded node recover from the shard loss? Are manual
> steps required(if yes, then what needs to be done), or will it
> automatically recover from the replica?
>
>
>
>
>
>
>
>
>
> Regards,
>
>
>
> SUDERSHAN MADHAVAN *|* Digital River World Payments *|* Senior Database
> Administrator
> m: +46 733 312525 *|*  smadha...@digitalriver.com *|* digitalriver.com
> 
>
> Managing Director: Michael Roos *|* Registered Office: Stockholm *|*
> Company Number: 556548-0026
>
> Textilgatan 31, SE-120 30 Stockholm, Sweden
>
> *[image: In-2C-14px]*   
> *Follow
> us on* LinkedIn 
>
>
>
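
For anyone wanting to script the drill Erick describes, a rough outline of the
steps (paths, ports, and the collection/core names below are placeholders for
a default service install, and a live cluster is assumed):

```shell
# On the node hosting the replica you want to "lose":
bin/solr stop -p 8983

# Wipe the replica's data directory -- index and tlog both go.
rm -rf /var/solr/data/mycoll_shard1_replica1/data

# Restart the node; the core rejoins ZooKeeper, goes into "recovering",
# and pulls a full index copy from the shard leader.
bin/solr start -c -p 8983 -z zk1:2181,zk2:2181,zk3:2181

# Watch the replica state go recovering -> active:
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"
```

As Erick notes, this only works while at least one healthy replica of the
shard remains to act as leader.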


RE: Solr in NAS or Network Shared Drive

2017-05-19 Thread Davis, Daniel (NIH/NLM) [C]
Docker has a "layered" filesystem strategy, where new writes are written to a 
top layer, so maybe there's a way to do this with docker.
Pretty speculative, but:

- Start a docker container based on an image containing Solr, but no index data.
- Build your index within the image.
- Shutdown solr and build a new Docker image from the container.
- Now start two new docker containers from that image, both running Solr.

Getting to this architecture may have some gotchas, as you clearly don't want 
to reindex 350gb+400gb, and don't have the storage to copy it over into a 
docker image.  Maybe OS-level backup/restore could solve this problem.   Also, 
getting docker to store/load images from NFS is a small detail - they either 
have configuration or you can use mount/symbolic links.

Pardon the outlandish solution, but I tend to think Systems engineering fix 
first, and Java coding second.   I think that maybe they could help on the Java 
side over at lucene-user, because a Java solution would need to be pretty deep, 
and involve changing some of the basics of how (and whether) locking is done.

-Original Message-
From: David Hastings [mailto:hastings.recurs...@gmail.com] 
Sent: Friday, May 19, 2017 1:33 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr in NAS or Network Shared Drive

The reason I want to try it is that replication is not possible on the single
machine: the index size is around 350GB + another 400GB, and I don't have
enough SSD to cover a replica of the master node. I also have a theory, which I
heard as well in a presentation at the LR conference in Boston this past year,
that multiple Solr instances on one machine perform better than multiple
machines. It would be interesting for Solr to have a "read only"/"listen" state
that does no writing to the index but keeps referencing the index
properties/version files.

On Fri, May 19, 2017 at 1:26 PM, Davis, Daniel (NIH/NLM) [C] < 
daniel.da...@nih.gov> wrote:

> Better off to just do Replication to the slave using the replication 
> handler.
>
> However, if there  is no network connectivity, e.g. this is an offsite 
> cold/warm spare, then here is a solution:
>
> The NAS likely supports some Copy-on-write/snapshotting capabilities.   If
> your systems people will work with you, you can use the 
> replication/backup handler to take a NAS snapshot just after hard commit, and 
> then have the
> snapshot replicated to another volume.   I suspect Solr will have to be
> started on the cold/warm spare when you do a failover to offsite, 
> because I know of no way to have the OS react to events when a 
> snapshot is replicated by the NAS.
>
> This kind of solution is what you might see for an Oracle, or any 
> other binary ACID database, so you can look at best practices for 
> integrating these products with Netapp or EMC Celera for more ideas.
>
> -Original Message-
> From: Rick Leir [mailto:rl...@leirtech.com]
> Sent: Friday, May 19, 2017 12:40 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr in NAS or Network Shared Drive
>
> For an experiment, mount the NAS filesystem ro (readonly). Is there 
> any way to tell Solr not to bother with a lockfile? And what happens 
> if an update or add gets requested by mistake, does it take down Solr?
>
> Why not do this all the simple way, and just replicate?
>
> On May 19, 2017 10:41:19 AM EDT, David Hastings < 
> hastings.recurs...@gmail.com> wrote:
> >ive always wanted to experiment with this, but you have to be very 
> >careful that only one of the cores, or neither, can do ANY writes, 
> >also if you have a suggester index you need to make sure that each 
> >core builds their own independently.  In any case from every thing 
> >ive read the general answer is dont do it.  would like to hear other 
> >peoples thoughts on this however.
> >
> >On Fri, May 19, 2017 at 10:33 AM, Ravi Kumar Taminidi < 
> >ravi.tamin...@whitepine-st.com> wrote:
> >
> >> Hello,  Scenario: Currently we have 2 Solr Servers running in 2
> >different
> >> servers (linux), Is there any way can we make the Core to be 
> >> located
> >in NAS
> >> or Network shared Drive so both the solrs using the same Index.
> >>
> >> Let me know if any performance issues, our size of Index is appx 1GB.
> >>
> >> Thanks
> >>
> >> Ravi
> >>
> >> -Original Message-
> >> From: biplobbiswas [mailto:revolutionisme+s...@gmail.com]
> >> Sent: Friday, May 19, 2017 9:23 AM
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: Nested Document is flattened even with @Field(child =
> >true)
> >> annotation
> >>
> >> Hi
> >> Mikhail Khludnev-2 wrote
> >> > Hello,
> >> >
> >> > You need to use
> >> >
> >https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherP
> >a
> >> > rsers-BlockJoinQueryParsers
> >> > and
> >> >
> >https://cwiki.apache.org/confluence/display/solr/Transforming+Result+
> >D
> >> >
> >ocuments#TransformingResultDocuments-[child]-ChildDocTransformerFacto
> >r
> >> > y
> >> > to get the nes

Re: Session expired when executing streaming expression, but no long GC pauses ...

2017-05-19 Thread Joel Bernstein
Odd, I haven't run into this behavior. Are you getting the disconnect from
the client side, or is this happening in a stream being run inside Solr?



Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, May 19, 2017 at 1:40 PM, Timothy Potter 
wrote:

> No, not every time, but there was no GC pause on the Solr side (no
> gaps in the log, nothing in the gc log) ... in the zk log, I do see
> this around the same time:
>
> 2017-05-05T13:59:52,362 - INFO
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
> Closed socket connection for client /127.0.0.1:54140 which had
> sessionid 0x15bd8bdd3500022
> 2017-05-05T13:59:52,818 - WARN
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@357] - caught
> end of stream exception
> org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to
> read additional data from client sessionid 0x15bd8bdd3500023, likely
> client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(
> NIOServerCnxn.java:228)
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> at org.apache.zookeeper.server.NIOServerCnxnFactory.run(
> NIOServerCnxnFactory.java:208)
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-internal]
> 2017-05-05T13:59:52,819 - INFO
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
> Closed socket connection for client /127.0.0.1:54200 which had
> sessionid 0x15bd8bdd3500023
> ...
>
> 2017-05-05T14:00:00,001 - INFO  [SessionTracker:ZooKeeperServer@347] -
> Expiring session 0x15bd8bdd3500023, timeout of 1ms exceeded
>
> On Fri, May 19, 2017 at 9:48 AM, Joel Bernstein 
> wrote:
> > You get this every time you run the expression?
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Fri, May 19, 2017 at 10:44 AM, Timothy Potter 
> > wrote:
> >
> >> I'm executing a streaming expr and get this error:
> >>
> >> Caused by: org.apache.solr.common.SolrException: Could not load
> >> collection from ZK:
> >> MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
> >> at org.apache.solr.common.cloud.ZkStateReader.
> getCollectionLive(
> >> ZkStateReader.java:1098)
> >> at org.apache.solr.common.cloud.ZkStateReader$
> >> LazyCollectionRef.get(ZkStateReader.java:638)
> >> at org.apache.solr.client.solrj.impl.CloudSolrClient.
> >> getDocCollection(CloudSolrClient.java:1482)
> >> at org.apache.solr.client.solrj.impl.CloudSolrClient.
> >> requestWithRetryOnStaleState(CloudSolrClient.java:1092)
> >> at org.apache.solr.client.solrj.impl.CloudSolrClient.request(
> >> CloudSolrClient.java:1057)
> >> at org.apache.solr.client.solrj.io.stream.FacetStream.open(
> >> FacetStream.java:356)
> >> ... 38 more
> >> Caused by: org.apache.zookeeper.KeeperException$
> SessionExpiredException:
> >> KeeperErrorCode = Session expired for
> >> /collections/MovieLens_Ratings_f2e6f8b0_3199_11e7_
> >> b8ab_0242ac110002/state.json
> >> at org.apache.zookeeper.KeeperException.create(
> >> KeeperException.java:127)
> >> at org.apache.zookeeper.KeeperException.create(
> >> KeeperException.java:51)
> >> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
> >> at org.apache.solr.common.cloud.SolrZkClient$7.execute(
> >> SolrZkClient.java:356)
> >> at org.apache.solr.common.cloud.SolrZkClient$7.execute(
> >> SolrZkClient.java:353)
> >> at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(
> >> ZkCmdExecutor.java:60)
> >> at org.apache.solr.common.cloud.SolrZkClient.getData(
> >> SolrZkClient.java:353)
> >> at org.apache.solr.common.cloud.ZkStateReader.
> >> fetchCollectionState(ZkStateReader.java:1110)
> >> at org.apache.solr.common.cloud.ZkStateReader.
> getCollectionLive(
> >> ZkStateReader.java:1096)
> >> ... 43 more
> >>
> >> I've scoured the GC logs for solr and there are no long pauses
> >> (nothing even over 1 second) ... any ideas why that session could be
> >> expired?
> >>
>


Re: Query on recovering from Shard Loss

2017-05-19 Thread Susheel Kumar
If you remove the content of the data directory, it should sync up
automatically. Give it a try.

Thanks,
Susheel

On Fri, May 19, 2017 at 7:51 AM, Sudershan Madhavan <
smadha...@digitalriver.com> wrote:

> Hi,
>
> Good day. I have a solrcloud cluster which has a collection with RF=2 and
> NumShards=3 on 6 Nodes. We want to test how to recover from unexpected
> situations like shard loss. So we will probably execute an *rm -rf *on
> the solr data directory on one of the replica or master. Now the question
> is, how will this shredded node recover from the shard loss? Are manual
> steps required(if yes, then what needs to be done), or will it
> automatically recover from the replica?
>
>
>
>
>
>
>
>
>
> Regards,
>
>
>
> SUDERSHAN MADHAVAN *|* Digital River World Payments *|* Senior Database
> Administrator
> m: +46 733 312525 *|*  smadha...@digitalriver.com *|* digitalriver.com
> 
>
> Managing Director: Michael Roos *|* Registered Office: Stockholm *|*
> Company Number: 556548-0026
>
> Textilgatan 31, SE-120 30 Stockholm, Sweden
>
> *[image: In-2C-14px]*   
> *Follow
> us on* LinkedIn 
>
>
>


Re: Session expired when executing streaming expression, but no long GC pauses ...

2017-05-19 Thread Piyush Kunal
The reason is usually GC pauses on the client side, not the server side.
I guess you are using the SolrJ client and this exception is thrown in the
client logs.

On Fri, May 19, 2017 at 11:46 PM, Joel Bernstein  wrote:

> Odd, I haven't run into this behavior. Are you getting the disconnect from
> the client side, or is this happening in a stream being run inside Solr?
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, May 19, 2017 at 1:40 PM, Timothy Potter 
> wrote:
>
> > No, not every time, but there was no GC pause on the Solr side (no
> > gaps in the log, nothing in the gc log) ... in the zk log, I do see
> > this around the same time:
> >
> > 2017-05-05T13:59:52,362 - INFO
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
> > Closed socket connection for client /127.0.0.1:54140 which had
> > sessionid 0x15bd8bdd3500022
> > 2017-05-05T13:59:52,818 - WARN
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@357] - caught
> > end of stream exception
> > org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to
> > read additional data from client sessionid 0x15bd8bdd3500023, likely
> > client has closed socket
> > at org.apache.zookeeper.server.NIOServerCnxn.doIO(
> > NIOServerCnxn.java:228)
> > [zookeeper-3.4.6.jar:3.4.6-1569965]
> > at org.apache.zookeeper.server.NIOServerCnxnFactory.run(
> > NIOServerCnxnFactory.java:208)
> > [zookeeper-3.4.6.jar:3.4.6-1569965]
> > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-internal]
> > 2017-05-05T13:59:52,819 - INFO
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
> > Closed socket connection for client /127.0.0.1:54200 which had
> > sessionid 0x15bd8bdd3500023
> > ...
> >
> > 2017-05-05T14:00:00,001 - INFO  [SessionTracker:ZooKeeperServer@347] -
> > Expiring session 0x15bd8bdd3500023, timeout of 1ms exceeded
> >
> > On Fri, May 19, 2017 at 9:48 AM, Joel Bernstein 
> > wrote:
> > > You get this every time you run the expression?
> > >
> > > Joel Bernstein
> > > http://joelsolr.blogspot.com/
> > >
> > > On Fri, May 19, 2017 at 10:44 AM, Timothy Potter  >
> > > wrote:
> > >
> > >> I'm executing a streaming expr and get this error:
> > >>
> > >> Caused by: org.apache.solr.common.SolrException: Could not load
> > >> collection from ZK:
> > >> MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
> > >> at org.apache.solr.common.cloud.ZkStateReader.
> > getCollectionLive(
> > >> ZkStateReader.java:1098)
> > >> at org.apache.solr.common.cloud.ZkStateReader$
> > >> LazyCollectionRef.get(ZkStateReader.java:638)
> > >> at org.apache.solr.client.solrj.impl.CloudSolrClient.
> > >> getDocCollection(CloudSolrClient.java:1482)
> > >> at org.apache.solr.client.solrj.impl.CloudSolrClient.
> > >> requestWithRetryOnStaleState(CloudSolrClient.java:1092)
> > >> at org.apache.solr.client.solrj.impl.CloudSolrClient.request(
> > >> CloudSolrClient.java:1057)
> > >> at org.apache.solr.client.solrj.io.stream.FacetStream.open(
> > >> FacetStream.java:356)
> > >> ... 38 more
> > >> Caused by: org.apache.zookeeper.KeeperException$
> > SessionExpiredException:
> > >> KeeperErrorCode = Session expired for
> > >> /collections/MovieLens_Ratings_f2e6f8b0_3199_11e7_
> > >> b8ab_0242ac110002/state.json
> > >> at org.apache.zookeeper.KeeperException.create(
> > >> KeeperException.java:127)
> > >> at org.apache.zookeeper.KeeperException.create(
> > >> KeeperException.java:51)
> > >> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
> > >> at org.apache.solr.common.cloud.SolrZkClient$7.execute(
> > >> SolrZkClient.java:356)
> > >> at org.apache.solr.common.cloud.SolrZkClient$7.execute(
> > >> SolrZkClient.java:353)
> > >> at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(
> > >> ZkCmdExecutor.java:60)
> > >> at org.apache.solr.common.cloud.SolrZkClient.getData(
> > >> SolrZkClient.java:353)
> > >> at org.apache.solr.common.cloud.ZkStateReader.
> > >> fetchCollectionState(ZkStateReader.java:1110)
> > >> at org.apache.solr.common.cloud.ZkStateReader.
> > getCollectionLive(
> > >> ZkStateReader.java:1096)
> > >> ... 43 more
> > >>
> > >> I've scoured the GC logs for solr and there are no long pauses
> > >> (nothing even over 1 second) ... any ideas why that session could be
> > >> expired?
> > >>
> >
>


Re: Performance warning: Overlapping onDeckSearchers=2 solr

2017-05-19 Thread Shawn Heisey
On 5/17/2017 9:15 AM, Jason Gerlowski wrote:
> A strawman new message could be: "Performance warning: Overlapping
> onDeckSearchers=2; consider reducing commit frequency if performance
> problems encountered"
>
> Happy to create a JIRA/patch for this; just wanted to get some
> feedback first in case there's an obvious reason the messages don't
> get explicit about the cause.

This sounds like a good idea to me.  Bikeshedding on the message:

"Performance warning: Overlapping onDeckSearchers=2; consider reducing
commit frequency and/or cache warming"

Thanks,
Shawn
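
For anyone hitting this warning, the knobs involved live in solrconfig.xml; a
sketch with illustrative (not recommended) values:

```xml
<!-- In <updateHandler>: batch writes instead of committing per request. -->
<autoCommit>
  <maxTime>60000</maxTime>          <!-- hard commit every 60s -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>          <!-- new searcher at most every 15s -->
</autoSoftCommit>

<!-- In <query>: the warning fires when commits arrive faster than
     searchers can warm; this caps how many may warm concurrently. -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```

Raising maxWarmingSearchers only hides the symptom; reducing commit frequency
and/or cache warming, as the proposed message says, is the actual fix.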



Solr 6.6 release date?

2017-05-19 Thread Chiradeep Das
When is the Solr 6.6 release date?

Waiting for the ZooKeeper 3.4.10 upgrade in the 6.6 release.


Regards,

Chiradeep


Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread Rick Leir
> multiple solr instances on one machine performs better than multiple 

Does the machine have enough RAM to support all the instances? Again, time for 
an experiment!
-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread David Hastings
My thought would be that the machine would need only the same amount of RAM
minus the heap size of the second Solr instance, since the OS will file-cache
the index into memory only once (the same files are read by both Solr
instances). My Solr slaves have about 150GB each.

On Fri, May 19, 2017 at 2:45 PM, Rick Leir  wrote:

> > multiple solr instances on one machine performs better than multiple
>
> Does the machine have enough RAM to support all the instances? Again, time
> for an experiment!
> --
> Sorry for being brief. Alternate email is rickleir at yahoo dot com


Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread Erick Erickson
One problem here is how to open new searchers on the r/o core.
Consider the autocommit setting. The cycle is:
> when the first doc comes in, start your timer
> x milliseconds later, do a commit and (perhaps) open a new searcher.

but the core referencing the index in R/O mode doesn't have any update
event to start the timer.

Even if you issue a commit to the R/O copy, there is short-circuiting
in the code that says "since nothing's changed, I'll just ignore
this". And by definition, the commit is an update call

I suppose you could force things here by issuing a reload command on
the R/O core...

WARNING: Since I strongly discourage this it's not something I've
personally verified

And the risk of corrupting your index because someone inadvertently
does something unexpected is high.

I always come back to the question of why in the world spend
engineering time on this when the  cost of a new 1TB disk is so low. I
realize there are environments where you can't "just plug in another
disk", but you see where I'm going.

Best,
Erick

On Fri, May 19, 2017 at 11:47 AM, David Hastings
 wrote:
> My thought would be that the machine would need only the same amount of ram
> minus the heap size of the second instance of solr, since it will be file
> caching the index into memory only once since its the same files, but read
> by both solr instances.  my solr slaves have about 150gb each.
>
> On Fri, May 19, 2017 at 2:45 PM, Rick Leir  wrote:
>
>> > multiple solr instances on one machine performs better than multiple
>>
>> Does the machine have enough RAM to support all the instances? Again, time
>> for an experiment!
>> --
>> Sorry for being brief. Alternate email is rickleir at yahoo dot com
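
For reference, the master/slave replication that several replies recommend
over a shared index is a small solrconfig.xml change on each side (a sketch;
the hostname and core name are placeholders):

```xml
<!-- On the master (the only core that writes): -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On each read-only slave: -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core1/replication</str>
    <str name="pollInterval">00:00:60</str>   <!-- poll every 60s -->
  </lst>
</requestHandler>
```

The slave never writes, so the searcher/commit problems discussed above for a
shared R/O index don't arise: new segments are pulled on each poll and a new
searcher is opened automatically.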


Re: Solr 6.6 release date?

2017-05-19 Thread Erick Erickson
It's under way now, perhaps as early as next week some time.

Best,
Erick

On Fri, May 19, 2017 at 11:41 AM, Chiradeep Das
 wrote:
> When is the Solr 6.6 release date?
>
> Waiting for the Zookeeper 3.4.10 upgrade in 6.6 version.
>
>
> Regards,
>
> Chiradeep


Re: Solr Admin Documents tab

2017-05-19 Thread Shawn Heisey
On 5/17/2017 10:42 AM, Rick Leir wrote:
> Chris, Shawn,
> I am using 5.2.1 . Neither the array (Shawn) nor the document list (Chris) 
> works for me in the Admin panel. However, CSV works fine.
>
> Clearly we are long overdue for an upgrade. 

I checked the PDF reference guide for 5.2 and it looks like the curl
examples are the same in that version of the reference guide as they are
in the current guide.  Maybe 5.2.1 has a bug.  If so, it would need to
be demonstrated in a current version.

I found this bug, but it's OLD, and fixed clear back in 3.x:

https://issues.apache.org/jira/browse/SOLR-2496

What exactly happens when you try the format I pointed you to and the
format that Chris suggested?  Are you getting an error?  If so, seeing
the full error message from the solr log might be helpful.

Thanks,
Shawn
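
For the record, the two JSON shapes being compared both go to /update with a
JSON content type (a sketch; the collection name is a placeholder, and a
running Solr instance is assumed):

```shell
# A bare array of documents:
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycoll/update?commit=true' \
  -d '[{"id":"1","title_t":"first"},{"id":"2","title_t":"second"}]'

# The command/object form:
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycoll/update?commit=true' \
  -d '{"add":{"doc":{"id":"3","title_t":"third"}}}'
```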



Re: Solr in NAS or Network Shared Drive

2017-05-19 Thread David Hastings
I agree completely; it was just something I've always wanted to try. If my
indexes were smaller I'd just fire up a bunch of slaves on a single machine and
nginx them out, but even 2TB SSDs are somewhat expensive and there aren't
always enough ports on the servers to keep adding more.

On Fri, May 19, 2017 at 3:33 PM, Erick Erickson 
wrote:

> One problem here is how to open new searchers on the r/o core.
> Consider  the autocommit setting. The cycle is
> > when the first doc comes in, start your timer
> > x milliseconds later, do a commit and (perhaps) open a new searcher.
>
> but the core referencing the index in R/O mode doesn't have any update
> event to start the timer.
>
> Even if you issue a commit to the R/O copy, there is short-circuiting
> in the code that says "since nothing's changed, I'll just ignore
> this". And by definition, the commit is an update call
>
> I suppose you could force things here by issuing a reload command on
> the R/O core...
>
> WARNING: Since I strongly discourage this it's not something I've
> personally verified
>
> And the risk of corrupting your index because someone inadvertently
> does something unexpected is high.
>
> I always come back to the question of why in the world spend
> engineering time on this when the  cost of a new 1TB disk is so low. I
> realize there are environments where you can't "just plug in another
> disk", but you see where I'm going.
>
> Best,
> Erick
>
> On Fri, May 19, 2017 at 11:47 AM, David Hastings
>  wrote:
> > My thought would be that the machine would need only the same amount of
> ram
> > minus the heap size of the second instance of solr, since it will be file
> > caching the index into memory only once since its the same files, but
> read
> > by both solr instances.  my solr slaves have about 150gb each.
> >
> > On Fri, May 19, 2017 at 2:45 PM, Rick Leir  wrote:
> >
> >> > multiple solr instances on one machine performs better than multiple
> >>
> >> Does the machine have enough RAM to support all the instances? Again,
> time
> >> for an experiment!
> >> --
> >> Sorry for being brief. Alternate email is rickleir at yahoo dot com
>


Solr coreContainer shut down

2017-05-19 Thread Chetas Joshi
Hello,

I am trying to set up SolrCloud (6.5.0/6.5.1). I have installed Solr as a
service.
Every time I start the Solr servers, they come up, but one by one the
coreContainers start shutting down on their own within 1-2 minutes of
being up.

Here are the Solr logs:

2017-05-19 20:45:30.926 INFO  (main) [   ] o.e.j.s.Server Started @1600ms

2017-05-19 20:47:21.252 INFO  (ShutdownMonitor) [   ] o.a.s.c.CoreContainer
Shutting down CoreContainer instance=1364767791

2017-05-19 20:47:21.262 INFO  (ShutdownMonitor) [   ] o.a.s.c.Overseer
Overseer (id=169527934494244988-:8983_solr-n_06) closing

2017-05-19 20:47:21.263 INFO
(OverseerStateUpdate-169527934494244988-:8983_solr-n_06)
[   ] o.a.s.c.Overseer Overseer Loop exiting : :8983_solr

2017-05-19 20:47:21.268 INFO  (ShutdownMonitor) [   ]
o.a.s.m.SolrMetricManager Closing metric reporters for: solr.node


The coreContainer just shuts down (no info in the solr logs). Is the jetty
servlet container having some issue? Is it possible to look at the Jetty
servlet container logs?

Thanks!
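
Jetty runs in-process, so there is no separate servlet-container log, but the
start scripts do capture stdout/stderr; a sketch of where to look, assuming
the default service install (paths and the systemd unit name may differ on
your hosts):

```shell
# Main Solr log (log4j); Jetty messages land here too:
less /var/solr/logs/solr.log

# Console output captured by the start script -- OOM kills and other
# JVM-level exits often show up only here:
less /var/solr/logs/solr-8983-console.log

# If systemd (or a stray init script) is stopping the process, the
# ShutdownMonitor line usually follows an external signal; check:
journalctl -u solr --since "-1 hour"
```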


Re: How to do CDCR with basic auth?

2017-05-19 Thread Shawn Feldman
I have the same exact issue on my box.  Basic auth works in 6.4.2 but fails
in 6.5.1.  I assume it's a bug; it probably just hasn't been acknowledged yet.

On Sun, May 14, 2017 at 2:37 PM Xie, Sean  wrote:

> Configured the JVM:
>
> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer
> -Dbasicauth=solr:SolrRocks
>
> Configured the CDCR.
>
> Started the Source cluster and
> Getting the log:
>
> .a.s.h.CdcrUpdateLogSynchronizer Caught unexpected exception
> java.lang.IllegalArgumentException: Credentials may not be null
> at org.apache.http.util.Args.notNull(Args.java:54)
> at org.apache.http.auth.AuthState.update(AuthState.java:113)
> at
> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
> at
> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
> at
> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
> at
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
> at
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
> at
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at
> org.apache.solr.handler.CdcrUpdateLogSynchronizer$UpdateLogSynchronisation.run(CdcrUpdateLogSynchronizer.java:146)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
>
>
> Somehow, the cdcr didn’t pick up the credentials when using the
> PreemptiveAuth.
>
> Is it a bug?
>
> Thanks
> Sean
>
>
>
> On 5/14/17, 3:09 PM, "Xie, Sean"  wrote:
>
> So I have configured two clusters (source and target) with basic auth
> with solr:SolrRocks, but when starting the source node, log is showing it
> couldn’t read the authentication info.
>
> I already added the –Dbasicauth=solr:SolrRocks to the JVM of the solr
> instance. Not sure where else I can configure the solr to use the auth.
>
> When starting the CDCR, the log is:
>
> 2017-05-14 15:01:02.915 WARN  (qtp1348949648-21) [c:COL1 s:shard1
> r:core_node2 x:COL1_shard1_replica2] o.a.s.h.CdcrReplicatorManager Unable
> to instantiate the log reader for target collection COL1
> org.apache.solr.client.solrj.SolrServerException:
> java.lang.IllegalArgumentException: Credentials may not be null
> at
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:473)
> at
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1376)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1127)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
> at
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at
> org.apache.solr.handler.CdcrReplicatorManager.getCheckpoint(CdcrReplicatorManager.java:196)
> at
> org.apache.solr.handler.CdcrReplicatorManager.initLogReaders(CdcrReplicatorManager.java:159)
> at
> org.apache.solr.handler.CdcrReplicatorManager.stateUpdate(CdcrReplicatorManager.java:134)
> at
> org.apache.solr.handler.CdcrStateManager.callback(CdcrStateManager.java:36)
> at
> org.apache.solr.handler.CdcrProcessStateManager.setState(CdcrProcessStateManager.java:93)
> at
> org.apache.solr.handler.CdcrRequestHandler.handleStartAction(CdcrRequestHandler.java:352)
> at
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:178)
> at

Re: How to do CDCR with basic auth?

2017-05-19 Thread Shawn Feldman
I added a ticket:

https://issues.apache.org/jira/browse/SOLR-10718

we'll see what happens

On Fri, May 19, 2017 at 3:03 PM Shawn Feldman 
wrote:

> I have the same exact issue on my box.  Basic auth works in 6.4.2 but
> fails in 6.5.1.  I assume its a bug.  probably just hasn't been
> acknowledged yet.
>
> On Sun, May 14, 2017 at 2:37 PM Xie, Sean  wrote:
>
>> Configured the JVM:
>>
>> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer
>> -Dbasicauth=solr:SolrRocks
>>
>> Configured the CDCR.
>>
>> Started the Source cluster and
>> Getting the log:
>>
>> .a.s.h.CdcrUpdateLogSynchronizer Caught unexpected exception
>> java.lang.IllegalArgumentException: Credentials may not be null
>> at org.apache.http.util.Args.notNull(Args.java:54)
>> at org.apache.http.auth.AuthState.update(AuthState.java:113)
>> at
>> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
>> at
>> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
>> at
>> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
>> at
>> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
>> at
>> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>> at
>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>> at
>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
>> at
>> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>> at
>> org.apache.solr.handler.CdcrUpdateLogSynchronizer$UpdateLogSynchronisation.run(CdcrUpdateLogSynchronizer.java:146)
>> at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at
>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:748)
>>
>>
>> Somehow, CDCR didn't pick up the credentials when using
>> PreemptiveAuth.
>>
>> Is it a bug?
>>
>> Thanks
>> Sean
>>
>>
>>
>> On 5/14/17, 3:09 PM, "Xie, Sean"  wrote:
>>
>> So I have configured two clusters (source and target) with basic auth
>> with solr:SolrRocks, but when starting the source node, the log shows it
>> couldn't read the authentication info.
>>
>> I already added -Dbasicauth=solr:SolrRocks to the JVM of the Solr
>> instance. Not sure where else I can configure the solr to use the auth.
>>
>> When starting the CDCR, the log is:
>>
>> 2017-05-14 15:01:02.915 WARN  (qtp1348949648-21) [c:COL1 s:shard1
>> r:core_node2 x:COL1_shard1_replica2] o.a.s.h.CdcrReplicatorManager Unable
>> to instantiate the log reader for target collection COL1
>> org.apache.solr.client.solrj.SolrServerException:
>> java.lang.IllegalArgumentException: Credentials may not be null
>> at
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:473)
>> at
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1376)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1127)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
>> at
>> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>> at
>> org.apache.solr.handler.CdcrReplicatorManager.getCheckpoint(CdcrReplicatorManager.java:196)
>> at
>> org.apache.solr.handler.CdcrReplicatorManager.initLogReaders(CdcrReplicatorManager.java:159)
>> at
>> org.apache.solr.handler.CdcrReplicatorManager.stateUpdate(CdcrReplicatorManager.java:134)
>> at
>> org.apache.solr.handler.CdcrStateManager.callback(CdcrStateManager.java:36)
>> at
>> org.apache.solr.handler.CdcrProcessStateManager.setState(CdcrProcessStateManager.java:93)
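
For reference, the same two JVM settings can also live in bin/solr.in.sh so that every bin/solr invocation picks them up. This is a sketch based on the commented-out examples shipped in the stock Solr 6.x solr.in.sh — verify the variable names against your version:

```
# bin/solr.in.sh (Solr 6.x) -- client-side basic auth for bin/solr commands,
# mirroring the -D flags configured above.
SOLR_AUTHENTICATION_CLIENT_CONFIGURER="org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer"
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
```

Whether the CDCR handler's internal clients honor these settings is exactly the question raised above; the config only covers clients that go through the configurer.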

Re: Solr coreContainer shut down

2017-05-19 Thread Chetas Joshi
I found the reason why this is happening!
I am using Chef and running install_solr_service.sh with the -n and -f
options, so every time chef-client runs it stops the already-running Solr
instance. Now I have removed the -f option (no upgrade), but I am running
into an error.
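
One way to keep a Chef run idempotent is to guard the installer so repeated runs become no-ops instead of re-installs or failures. A sketch — the helper name and paths are hypothetical, and the echo stands in for the real installer call:

```shell
#!/bin/sh
# Hypothetical idempotency guard: only invoke the installer when the init
# script is absent, so repeated chef-client runs leave a running Solr alone.
maybe_install() {
  init_script=$1
  if [ ! -f "$init_script" ]; then
    echo "installing"   # stand-in for: ./install_solr_service.sh solr-6.5.1.tgz -n
  else
    echo "already installed, skipping"
  fi
}

maybe_install /nonexistent/init.d/solr   # prints "installing"
```

In Chef itself the same effect is usually achieved with a `not_if`/`creates` guard on the execute resource.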

I have a question about the following piece of code.

if [ ! "$SOLR_UPGRADE" = "YES" ]; then

  if [ -f "/etc/init.d/$SOLR_SERVICE" ]; then

print_usage "/etc/init.d/$SOLR_SERVICE already exists! Perhaps Solr is
already setup as a service on this host? To upgrade Solr use the -f option."

exit 1

  fi


  if [ -e "$SOLR_EXTRACT_DIR/$SOLR_SERVICE" ]; then

print_usage "$SOLR_EXTRACT_DIR/$SOLR_SERVICE already exists! Please
move this directory / link or choose a different service name using the -s
option."

exit 1

  fi

fi


If I don't want to upgrade and the service is already installed, why
should the script exit 1 and not exit 0? Shouldn't it be:

if [ ! "$SOLR_UPGRADE" = "YES" ]; then

  if [ -f "/etc/init.d/$SOLR_SERVICE" ]; then

print_usage "/etc/init.d/$SOLR_SERVICE already exists! Perhaps Solr is
already setup as a service on this host? To upgrade Solr use the -f option."

*exit 0*

  fi


Thanks!

On Fri, May 19, 2017 at 1:59 PM, Chetas Joshi 
wrote:

> Hello,
>
> I am trying to set up a SolrCloud cluster (6.5.0/6.5.1). I have installed
> Solr as a service.
> Every time I start the Solr servers, they come up, but one by one the
> CoreContainers shut down on their own within 1-2 minutes of coming up.
>
> Here are the solr logs
>
> 2017-05-19 20:45:30.926 INFO  (main) [   ] o.e.j.s.Server Started @1600ms
>
> 2017-05-19 20:47:21.252 INFO  (ShutdownMonitor) [   ]
> o.a.s.c.CoreContainer Shutting down CoreContainer instance=1364767791
>
> 2017-05-19 20:47:21.262 INFO  (ShutdownMonitor) [   ] o.a.s.c.Overseer
> Overseer (id=169527934494244988-:8983_solr-n_06) closing
>
> 2017-05-19 20:47:21.263 INFO  (OverseerStateUpdate-169527934494244988-
> :8983_solr-n_06) [   ] o.a.s.c.Overseer Overseer Loop
> exiting : :8983_solr
>
> 2017-05-19 20:47:21.268 INFO  (ShutdownMonitor) [   ]
> o.a.s.m.SolrMetricManager Closing metric reporters for: solr.node
>
>
> The CoreContainer just shuts down (no further info in the Solr logs). Is
> the Jetty servlet container having some issue? Is it possible to look at
> the Jetty servlet container logs?
>
> Thanks!
>
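
On the Jetty-logs question quoted above: in Solr 6.x the embedded Jetty logs through SLF4J/log4j, so there is no separate container log file. One way to see container-level events is to raise the Jetty logger level in the logging config — a sketch against the stock server/resources/log4j.properties; verify the logger name for your version:

```
# server/resources/log4j.properties -- surface embedded-Jetty internals
# (including ShutdownMonitor activity) in solr.log.
log4j.logger.org.eclipse.jetty=DEBUG
```

Expect this to be verbose; drop it back to WARN once the shutdown trigger is identified.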


Re: Solr coreContainer shut down

2017-05-19 Thread Shawn Heisey
On 5/19/2017 5:05 PM, Chetas Joshi wrote:
> If I don't wanna upgrade and there is an already installed service, why
> should it be exit 1 and not exit 0? Shouldn't it be like
>
> if [ ! "$SOLR_UPGRADE" = "YES" ]; then
>
>   if [ -f "/etc/init.d/$SOLR_SERVICE" ]; then
>
> print_usage "/etc/init.d/$SOLR_SERVICE already exists! Perhaps Solr is
> already setup as a service on this host? To upgrade Solr use the -f option."
>
> *exit 0*
>
>   fi

When the script reaches this point, the installation has failed, because
the service already exists and the script wasn't asked to upgrade it. 
That is why it exits with a value of 1.  If it were to exit with 0,
whatever called the script would assume that the installation was
successful -- which is not what has happened.
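
The contract can be sketched with a tiny stand-in for the install script (the function name is hypothetical); a caller such as chef-client branches purely on the exit status:

```shell
#!/bin/sh
# Stand-in for install_solr_service.sh hitting the "already exists" branch.
install_stub() {
  echo "/etc/init.d/solr already exists! To upgrade Solr use the -f option." >&2
  return 1   # non-zero: nothing was installed
}

# The caller (chef-client, CI, a wrapper script) only sees the status:
if install_stub 2>/dev/null; then
  echo "caller: install succeeded"
else
  echo "caller: install failed"   # this branch is taken
fi
```

With exit 0 in that branch, the caller would take the success path and report a working installation that never happened.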

Why are you installing Solr again when it is already installed?

Thanks,
Shawn