composite hash

2017-06-05 Thread Shawn Feldman
I am indexing with a composite hash of "myshard/3!myid"

If I want to query with the _route_ param, what should my route look like?

_route_=myshard/3!
or
_route_=myshard!

shawn


Re: composite hash

2017-06-05 Thread Shawn Feldman
If I add the /3, will I need to reindex?

On Mon, Jun 5, 2017 at 11:50 AM Susheel Kumar  wrote:

> It should be _route_=myshard/3!
>
> On Mon, Jun 5, 2017 at 12:54 PM, Shawn Feldman 
> wrote:
>
> > I am indexing with a composite hash of "myshard/3!myid"
> >
> > If i want to query with the _route_ param, what does my route look like
> >
> > _route_=myshard/3!
> > or
> > _route_=myshard!
> > ?
> >
> > shawn
> >
>
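
For reference, a minimal SolrJ sketch of the pattern settled on above: index with the full composite ID, /bits suffix included, and pass the same prefix as the _route_ parameter at query time. This is not from the original thread; the collection name, ZooKeeper address, and query are illustrative only.

import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class CompositeRouteSketch {
  public static void main(String[] args) throws SolrServerException, IOException {
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("localhost:2181").build()) {
      // Index with a composite ID of the form "<routeKey>/<bits>!<docId>".
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "myshard/3!myid");
      client.add("mycollection", doc);
      client.commit("mycollection");

      // Query only the shards that route key maps to; the /3 bits spec is
      // part of the prefix, so it is repeated in _route_.
      SolrQuery q = new SolrQuery("*:*");
      q.set("_route_", "myshard/3!");
      QueryResponse rsp = client.query("mycollection", q);
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}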


dynamic fields during segment merge

2017-06-06 Thread Shawn Feldman
When Solr is merging segments of the tlog, what impact do dynamic fields
have? If I have 1k dynamic fields, do I pay the cost on every merge, or only
if the documents actually have those fields?

-shawn


Re: How to do CDCR with basic auth?

2017-06-06 Thread Shawn Feldman
Looks like this ticket was fixed in 6.6: SOLR-10718
<http://issues.apache.org/jira/browse/SOLR-10718>

On Fri, May 19, 2017 at 3:19 PM Shawn Feldman 
wrote:

> i added a ticket
>
> https://issues.apache.org/jira/browse/SOLR-10718
>
> we'll see what happens
>
> On Fri, May 19, 2017 at 3:03 PM Shawn Feldman 
> wrote:
>
>> I have the same exact issue on my box.  Basic auth works in 6.4.2 but
>> fails in 6.5.1.  I assume its a bug.  probably just hasn't been
>> acknowledged yet.
>>
>> On Sun, May 14, 2017 at 2:37 PM Xie, Sean  wrote:
>>
>>> Configured the JVM:
>>>
>>> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer
>>> -Dbasicauth=solr:SolrRocks
>>>
>>> Configured the CDCR.
>>>
>>> Started the Source cluster and
>>> Getting the log:
>>>
>>> .a.s.h.CdcrUpdateLogSynchronizer Caught unexpected exception
>>> java.lang.IllegalArgumentException: Credentials may not be null
>>> at org.apache.http.util.Args.notNull(Args.java:54)
>>> at org.apache.http.auth.AuthState.update(AuthState.java:113)
>>> at
>>> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
>>> at
>>> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
>>> at
>>> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
>>> at
>>> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
>>> at
>>> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>>> at
>>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>>> at
>>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>>> at
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
>>> at
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
>>> at
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
>>> at
>>> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>>> at
>>> org.apache.solr.handler.CdcrUpdateLogSynchronizer$UpdateLogSynchronisation.run(CdcrUpdateLogSynchronizer.java:146)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at
>>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:748)
>>>
>>>
>>> Somehow, the cdcr didn’t pickup the credentials when using the
>>> PreemptiveAuth.
>>>
>>> Is it a bug?
>>>
>>> Thanks
>>> Sean
>>>
>>>
>>>
>>> On 5/14/17, 3:09 PM, "Xie, Sean"  wrote:
>>>
>>> So I have configured two clusters (source and target) with basic
>>> auth with solr:SolrRocks, but when starting the source node, log is showing
>>> it couldn’t read the authentication info.
>>>
>>> I already added the –Dbasicauth=solr:SolrRocks to the JVM of the
>>> solr instance. Not sure where else I can configure the solr to use the auth.
>>>
>>> When starting the CDCR, the log is:
>>>
>>> 2017-05-14 15:01:02.915 WARN  (qtp1348949648-21) [c:COL1 s:shard1
>>> r:core_node2 x:COL1_shard1_replica2] o.a.s.h.CdcrReplicatorManager Unable
>>> to instantiate the log reader for target collection COL1
>>> org.apache.solr.client.solrj.SolrServerException:
>>> java.lang.IllegalArgumentException: Credentials may not be null
>>> at
>>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:473)
>>> at
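
For comparison, a minimal SolrJ sketch of attaching basic-auth credentials to an individual request. This is not from the thread; the base URL and collection are illustrative. CDCR's internal replicator clients cannot take this path, which is why they depend on the -Dsolr.httpclient.builder.factory / -Dbasicauth JVM properties shown above (the gap tracked by SOLR-10718).

import java.io.IOException;
import java.util.Collections;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.MapSolrParams;

public class BasicAuthRequestSketch {
  public static void main(String[] args) throws SolrServerException, IOException {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/COL1").build()) {
      QueryRequest req = new QueryRequest(
          new MapSolrParams(Collections.singletonMap("q", "*:*")));
      // Attach the solr:SolrRocks credentials from the thread to this request.
      // Internal server-to-server clients (CDCR replicators, update log
      // synchronizers) never see this call, hence the JVM properties above.
      req.setBasicAuthCredentials("solr", "SolrRocks");
      System.out.println(req.process(client).getResults().getNumFound());
    }
  }
}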

_version_ as LongPointField returns error

2017-06-12 Thread Shawn Feldman
I changed all my TrieLong fields to Point fields. _version_ always returns
an error unless I turn on docValues.

  
  

Getting this error when I index. Any ideas?


 Remote error message: Point fields can't use FieldCache. Use
docValues=true for field: _version_
solr2_1|at
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:973)
solr2_1|at
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1912)
solr2_1|at
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
solr2_1|at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
solr2_1|at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
solr2_1|at org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
solr2_1|at
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)


Re: _version_ as LongPointField returns error

2017-06-12 Thread Shawn Feldman
Logged this ticket: https://issues.apache.org/jira/browse/SOLR-10872

On Mon, Jun 12, 2017 at 10:08 AM Shawn Feldman 
wrote:

> I changed all my TrieLong Fields to Point fields.  _version_ always
> returns an error unless i turn on docvalues
>
>   
>   
>
> Getting this error when i index.  Any ideas?
>
>
>  Remote error message: Point fields can't use FieldCache. Use
> docValues=true for field: _version_
> solr2_1|at
> org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:973)
> solr2_1|at
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1912)
> solr2_1|at
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
> solr2_1|at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
> solr2_1|at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> solr2_1|at
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
> solr2_1|at
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>


Re: _version_ as LongPointField returns error

2017-06-12 Thread Shawn Feldman
Why do you need docValues, though? I'm never going to sort by version.

On Mon, Jun 12, 2017 at 10:13 AM Yonik Seeley  wrote:

> I think the _version_ field should be
>  - indexed="false"
>  - stored="false"
>  - docValues="true"
>
> -Yonik
>
>
> On Mon, Jun 12, 2017 at 12:08 PM, Shawn Feldman 
> wrote:
> > I changed all my TrieLong Fields to Point fields.  _version_ always
> returns
> > an error unless i turn on docvalues
> >
> >   
> >   
> >
> > Getting this error when i index.  Any ideas?
> >
> >
> >  Remote error message: Point fields can't use FieldCache. Use
> > docValues=true for field: _version_
> > solr2_1|at
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:973)
> > solr2_1|at
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1912)
> > solr2_1|at
> >
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
> > solr2_1|at
> >
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
> > solr2_1|at
> >
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> > solr2_1|at
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
> > solr2_1|at
> > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>


Re: _version_ as LongPointField returns error

2017-06-12 Thread Shawn Feldman
Should I make stored=false? Don't I need _version_ for the MVCC semantics?


On Mon, Jun 12, 2017 at 10:41 AM Chris Hostetter 
wrote:

>
> Just replying to some comments/discussion in general rather than
> individual msgs/sentences...
>
> * uninversion/FieldCache of *singlevalued* Points fields was fixed in
> SOLR-10472
>
> * currently a bad idea to use indexed="true" Points for _version_ due to
> SOLR-10832
>
> * AFAICT it's a good idea (in general, regardless of type) to use
> indexed="true" docValues="true" for _version_ (once SOLR-10832 is fixed)
> to ensure VersionInfo.getMaxVersionFromIndex doesn't make core
> load/reloads (and CDCR apparently) slow.
>
>
>
> : Date: Mon, 12 Jun 2017 12:32:50 -0400
> : From: Yonik Seeley 
> : Reply-To: solr-user@lucene.apache.org
> : To: "solr-user@lucene.apache.org" 
> : Subject: Re: _version_ as LongPointField returns error
> :
> : On Mon, Jun 12, 2017 at 12:24 PM, Shawn Feldman 
> wrote:
> : > Why do you need doc values though?  i'm never going to sort by version
> :
> : Solr needs a quick lookup from docid->_version_
> : If you don't have docValues, Solr tries to create an in-memory version
> : (via the FieldCache).  That's not yet supported for Point* fields.
> :
> : -Yonik
> :
> : > On Mon, Jun 12, 2017 at 10:13 AM Yonik Seeley 
> wrote:
> : >
> : >> I think the _version_ field should be
> : >>  - indexed="false"
> : >>  - stored="false"
> : >>  - docValues="true"
> : >>
> : >> -Yonik
> : >>
> : >>
> : >> On Mon, Jun 12, 2017 at 12:08 PM, Shawn Feldman <
> shawn.feld...@gmail.com>
> : >> wrote:
> : >> > I changed all my TrieLong Fields to Point fields.  _version_ always
> : >> returns
> : >> > an error unless i turn on docvalues
> : >> >
> : >> >   
> : >> >/>
> : >> >
> : >> > Getting this error when i index.  Any ideas?
> : >> >
> : >> >
> : >> >  Remote error message: Point fields can't use FieldCache. Use
> : >> > docValues=true for field: _version_
> : >> > solr2_1|at
> : >> >
> : >>
> org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:973)
> : >> > solr2_1|at
> : >> >
> : >>
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1912)
> : >> > solr2_1|at
> : >> >
> : >>
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
> : >> > solr2_1|at
> : >> >
> : >>
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
> : >> > solr2_1|at
> : >> >
> : >>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> : >> > solr2_1|at
> : >> org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
> : >> > solr2_1|at
> : >> > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> : >>
> :
>
> -Hoss
> http://www.lucidworks.com/
>
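
Putting Yonik's and Hoss's advice together, the _version_ definition would look something like the sketch below. The plong type name is an assumption (any type backed by LongPointField works), and indexed stays false only until SOLR-10832 is resolved, per Hoss's note above.

<fieldType name="plong" class="solr.LongPointField"/>
<!-- per the thread: docValues give Solr its docid -> _version_ lookup;
     switch indexed to "true" once SOLR-10832 is fixed -->
<field name="_version_" type="plong" indexed="false" stored="false" docValues="true"/>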


losing records during solr updates

2017-03-27 Thread Shawn Feldman
When we restart Solr on a leader node while we are doing updates, we've
noticed that a small percentage of data is lost, maybe 9 records out of
1k. Updating with min_rf=3, i.e. full quorum since our rf = 3, seems to
resolve this; updates then only succeed when all nodes are back up. Why
would we see record loss during a node restart? I assumed the transaction
log would get replayed. We have a 4-node cluster with 24 shards.

-shawn
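
A minimal SolrJ sketch of the min_rf check mentioned above, assuming the achieved replication factor comes back as "rf" in the response header; the collection name, ZooKeeper address, and retry policy are made up.

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class MinRfRetrySketch {
  public static void main(String[] args)
      throws SolrServerException, IOException, InterruptedException {
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("localhost:2181").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");

      UpdateRequest req = new UpdateRequest();
      req.setParam("min_rf", "3");  // ask Solr to report the achieved replication factor
      req.add(doc);

      for (int attempt = 0; attempt < 5; attempt++) {
        UpdateResponse rsp = req.process(client, "mycollection");
        Object rf = rsp.getResponseHeader().get("rf");  // achieved replication factor
        if (rf instanceof Number && ((Number) rf).intValue() >= 3) {
          return;  // all three replicas acknowledged the update
        }
        Thread.sleep(1000);  // a replica was down or restarting; wait and resend
      }
      throw new IOException("update never reached rf=3");
    }
  }
}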


Re: losing records during solr updates

2017-03-27 Thread Shawn Feldman
We are also hard committing at 15 sec and soft committing at 30 sec. I've
found that if we change syncLevel to fsync, we don't lose any data.

On Mon, Mar 27, 2017 at 1:30 PM Shawn Feldman 
wrote:

> 6.4.2
>
> On Mon, Mar 27, 2017 at 1:29 PM Alexandre Rafalovitch 
> wrote:
>
> What version of Solr is it?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 27 March 2017 at 15:25, Shawn Feldman  wrote:
> > When we restart solr on a leader node while we are doing updates, we've
> > noticed that some small percentage of data is lost.  maybe 9 records out
> of
> > 1k.  Updating using min_rf=3 or full quorum seems to resolve this since
> our
> > rf = 3.  Updates then seem to only succeed when all nodes are back up.
> Why
> > would we see record loss during a node restart?  I assumed the
> transaction
> > log would get replayed.  We have a 4 node cluster with 24 shards.
> >
> > -shawn
>
>
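
A solrconfig.xml sketch of the settings described in this thread: 15-second hard commits, 30-second soft commits, and the updateLog syncLevel raised from its default (flush) to fsync so each transaction log write is synced to disk. The surrounding updateHandler element is shown only for context; exact placement depends on your existing config.

<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- fsync each transaction log write instead of only flushing it -->
    <str name="syncLevel">FSYNC</str>
  </updateLog>
  <!-- hard commit every 15 seconds -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit every 30 seconds -->
  <autoSoftCommit>
    <maxTime>30000</maxTime>
  </autoSoftCommit>
</updateHandler>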


Re: losing records during solr updates

2017-03-27 Thread Shawn Feldman
6.4.2

On Mon, Mar 27, 2017 at 1:29 PM Alexandre Rafalovitch 
wrote:

> What version of Solr is it?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 27 March 2017 at 15:25, Shawn Feldman  wrote:
> > When we restart solr on a leader node while we are doing updates, we've
> > noticed that some small percentage of data is lost.  maybe 9 records out
> of
> > 1k.  Updating using min_rf=3 or full quorum seems to resolve this since
> our
> > rf = 3.  Updates then seem to only succeed when all nodes are back up.
> Why
> > would we see record loss during a node restart?  I assumed the
> transaction
> > log would get replayed.  We have a 4 node cluster with 24 shards.
> >
> > -shawn
>


Re: losing records during solr updates

2017-03-27 Thread Shawn Feldman
Ercan, I think you responded to the wrong thread

On Mon, Mar 27, 2017 at 2:02 PM Ercan Karadeniz 
wrote:

> 6.4.2 (latest available) or shall I use another one for familiarization
> purposes?
>
>
> 
> Von: Alexandre Rafalovitch 
> Gesendet: Montag, 27. März 2017 21:28
> An: solr-user
> Betreff: Re: losing records during solr updates
>
> What version of Solr is it?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
>
>
>
> On 27 March 2017 at 15:25, Shawn Feldman  wrote:
> > When we restart solr on a leader node while we are doing updates, we've
> > noticed that some small percentage of data is lost.  maybe 9 records out
> of
> > 1k.  Updating using min_rf=3 or full quorum seems to resolve this since
> our
> > rf = 3.  Updates then seem to only succeed when all nodes are back up.
> Why
> > would we see record loss during a node restart?  I assumed the
> transaction
> > log would get replayed.  We have a 4 node cluster with 24 shards.
> >
> > -shawn
>


Re: losing records during solr updates

2017-03-27 Thread Shawn Feldman
Here is the Solr log of our test node restarting:
https://s3.amazonaws.com/uploads.hipchat.com/17705/1138911/fvKS3t5uAnoi0pP/solrlog.txt



On Mon, Mar 27, 2017 at 2:10 PM Shawn Feldman 
wrote:

> Ercan, I think you responded to the wrong thread
>
> On Mon, Mar 27, 2017 at 2:02 PM Ercan Karadeniz <
> ercan_karade...@hotmail.com> wrote:
>
> 6.4.2 (latest available) or shall I use another one for familiarization
> purposes?
>
>
> 
> Von: Alexandre Rafalovitch 
> Gesendet: Montag, 27. März 2017 21:28
> An: solr-user
> Betreff: Re: losing records during solr updates
>
> What version of Solr is it?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
>
>
>
> On 27 March 2017 at 15:25, Shawn Feldman  wrote:
> > When we restart solr on a leader node while we are doing updates, we've
> > noticed that some small percentage of data is lost.  maybe 9 records out
> of
> > 1k.  Updating using min_rf=3 or full quorum seems to resolve this since
> our
> > rf = 3.  Updates then seem to only succeed when all nodes are back up.
> Why
> > would we see record loss during a node restart?  I assumed the
> transaction
> > log would get replayed.  We have a 4 node cluster with 24 shards.
> >
> > -shawn
>
>


Re: losing records during solr updates

2017-03-27 Thread Shawn Feldman
This update seems suspicious; the adds with the same id look like a
closure issue in the retry.

---
solr1_1 | 2017-03-27 20:19:12.397 INFO (qtp575335780-17) [c:goseg s:shard24
r:core_node12 x:goseg_shard24_replica2] o.a.s.u.p.LogUpdateProcessorFactory
[goseg_shard24_replica2] webapp=/solr path=/update
params={update.distrib=FROMLEADER&distrib.from=
http://172.17.0.10:8983/solr/goseg_shard24_replica1/&min_rf=3&wt=javabin&version=2}{add=[dev_list_segmentation_test_76661_recipients!batch4...@x.com
(1563055570139742208), dev_list_segmentation_test_76661_recipients!
batch4...@x.com (1563055570141839360),
dev_list_segmentation_test_76661_recipients!batch4...@x.com
(1563055570141839361), dev_list_segmentation_test_76661_recipients!
batch4...@x.com (1563055570142887936),
dev_list_segmentation_test_76661_recipients!batch4...@x.com
(1563055570143936512), dev_list_segmentation_test_76661_recipients!
batch4...@x.com (1563055570143936513),
dev_list_segmentation_test_76661_recipients!batch4...@x.com
(1563055570143936514), dev_list_segmentation_test_76661_recipients!
batch4...@x.com (1563055570143936515),
dev_list_segmentation_test_76661_recipients!batch4...@x.com
(1563055570144985088), dev_list_segmentation_test_76661_recipients!
batch4...@x.com (1563055570144985089)]} 0 23



On Mon, Mar 27, 2017 at 3:04 PM Shawn Feldman 
wrote:

> Here is the solr log of our test node restarting
>
> https://s3.amazonaws.com/uploads.hipchat.com/17705/1138911/fvKS3t5uAnoi0pP/solrlog.txt
>
>
>
> On Mon, Mar 27, 2017 at 2:10 PM Shawn Feldman 
> wrote:
>
> Ercan, I think you responded to the wrong thread
>
> On Mon, Mar 27, 2017 at 2:02 PM Ercan Karadeniz <
> ercan_karade...@hotmail.com> wrote:
>
> 6.4.2 (latest available) or shall I use another one for familiarization
> purposes?
>
>
> 
> Von: Alexandre Rafalovitch 
> Gesendet: Montag, 27. März 2017 21:28
> An: solr-user
> Betreff: Re: losing records during solr updates
>
> What version of Solr is it?
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
>
>
>
> On 27 March 2017 at 15:25, Shawn Feldman  wrote:
> > When we restart solr on a leader node while we are doing updates, we've
> > noticed that some small percentage of data is lost.  maybe 9 records out
> of
> > 1k.  Updating using min_rf=3 or full quorum seems to resolve this since
> our
> > rf = 3.  Updates then seem to only succeed when all nodes are back up.
> Why
> > would we see record loss during a node restart?  I assumed the
> transaction
> > log would get replayed.  We have a 4 node cluster with 24 shards.
> >
> > -shawn
>
>


Re: How to do CDCR with basic auth?

2017-05-19 Thread Shawn Feldman
I have the same exact issue on my box. Basic auth works in 6.4.2 but fails
in 6.5.1. I assume it's a bug; it probably just hasn't been acknowledged yet.

On Sun, May 14, 2017 at 2:37 PM Xie, Sean  wrote:

> Configured the JVM:
>
> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer
> -Dbasicauth=solr:SolrRocks
>
> Configured the CDCR.
>
> Started the Source cluster and
> Getting the log:
>
> .a.s.h.CdcrUpdateLogSynchronizer Caught unexpected exception
> java.lang.IllegalArgumentException: Credentials may not be null
> at org.apache.http.util.Args.notNull(Args.java:54)
> at org.apache.http.auth.AuthState.update(AuthState.java:113)
> at
> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
> at
> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
> at
> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
> at
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
> at
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
> at
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at
> org.apache.solr.handler.CdcrUpdateLogSynchronizer$UpdateLogSynchronisation.run(CdcrUpdateLogSynchronizer.java:146)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
>
>
> Somehow, the cdcr didn’t pickup the credentials when using the
> PreemptiveAuth.
>
> Is it a bug?
>
> Thanks
> Sean
>
>
>
> On 5/14/17, 3:09 PM, "Xie, Sean"  wrote:
>
> So I have configured two clusters (source and target) with basic auth
> with solr:SolrRocks, but when starting the source node, log is showing it
> couldn’t read the authentication info.
>
> I already added the –Dbasicauth=solr:SolrRocks to the JVM of the solr
> instance. Not sure where else I can configure the solr to use the auth.
>
> When starting the CDCR, the log is:
>
> 2017-05-14 15:01:02.915 WARN  (qtp1348949648-21) [c:COL1 s:shard1
> r:core_node2 x:COL1_shard1_replica2] o.a.s.h.CdcrReplicatorManager Unable
> to instantiate the log reader for target collection COL1
> org.apache.solr.client.solrj.SolrServerException:
> java.lang.IllegalArgumentException: Credentials may not be null
> at
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:473)
> at
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1376)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1127)
> at
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
> at
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at
> org.apache.solr.handler.CdcrReplicatorManager.getCheckpoint(CdcrReplicatorManager.java:196)
> at
> org.apache.solr.handler.CdcrReplicatorManager.initLogReaders(CdcrReplicatorManager.java:159)
> at
> org.apache.solr.handler.CdcrReplicatorManager.stateUpdate(CdcrReplicatorManager.java:134)
> at
> org.apache.solr.handler.CdcrStateManager.callback(CdcrStateManager.java:36)
> at
> org.apache.solr.handler.CdcrProcessStateManager.setState(CdcrProcessStateManager.java:93)
> at
> org.apache.solr.handler.CdcrRequestHandler.handleStartAction(CdcrRequestHandler.java:352)
> at
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:178)
> at

Re: How to do CDCR with basic auth?

2017-05-19 Thread Shawn Feldman
I added a ticket:

https://issues.apache.org/jira/browse/SOLR-10718

We'll see what happens.

On Fri, May 19, 2017 at 3:03 PM Shawn Feldman 
wrote:

> I have the same exact issue on my box.  Basic auth works in 6.4.2 but
> fails in 6.5.1.  I assume its a bug.  probably just hasn't been
> acknowledged yet.
>
> On Sun, May 14, 2017 at 2:37 PM Xie, Sean  wrote:
>
>> Configured the JVM:
>>
>> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer
>> -Dbasicauth=solr:SolrRocks
>>
>> Configured the CDCR.
>>
>> Started the Source cluster and
>> Getting the log:
>>
>> .a.s.h.CdcrUpdateLogSynchronizer Caught unexpected exception
>> java.lang.IllegalArgumentException: Credentials may not be null
>> at org.apache.http.util.Args.notNull(Args.java:54)
>> at org.apache.http.auth.AuthState.update(AuthState.java:113)
>> at
>> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
>> at
>> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
>> at
>> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
>> at
>> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
>> at
>> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>> at
>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>> at
>> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
>> at
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
>> at
>> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>> at
>> org.apache.solr.handler.CdcrUpdateLogSynchronizer$UpdateLogSynchronisation.run(CdcrUpdateLogSynchronizer.java:146)
>> at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at
>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:748)
>>
>>
>> Somehow, the cdcr didn’t pickup the credentials when using the
>> PreemptiveAuth.
>>
>> Is it a bug?
>>
>> Thanks
>> Sean
>>
>>
>>
>> On 5/14/17, 3:09 PM, "Xie, Sean"  wrote:
>>
>> So I have configured two clusters (source and target) with basic auth
>> with solr:SolrRocks, but when starting the source node, log is showing it
>> couldn’t read the authentication info.
>>
>> I already added the –Dbasicauth=solr:SolrRocks to the JVM of the solr
>> instance. Not sure where else I can configure the solr to use the auth.
>>
>> When starting the CDCR, the log is:
>>
>> 2017-05-14 15:01:02.915 WARN  (qtp1348949648-21) [c:COL1 s:shard1
>> r:core_node2 x:COL1_shard1_replica2] o.a.s.h.CdcrReplicatorManager Unable
>> to instantiate the log reader for target collection COL1
>> org.apache.solr.client.solrj.SolrServerException:
>> java.lang.IllegalArgumentException: Credentials may not be null
>> at
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:473)
>> at
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1376)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1127)
>> at
>> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
>> at
>> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)

Zookeeper-aware golang library

2017-05-30 Thread Shawn Feldman
I've been working on a Go library for Solr that is ZooKeeper-aware for the
past few months. Here is my library; let me know if you'd like to contribute.
I compute the hash range given a key and route and return the desired servers
in the cluster, and there is an SDK for querying and for retries during
network outages. We've found that just using a load balancer leads to some
data loss when nodes reboot, which is why I emulated the Java Solr client
and rotate through a list of Solr servers.

https://github.com/sendgrid/go-solr
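
The rotate-and-retry behavior described above looks roughly like the SolrJ sketch below (a Java sketch of the idea, not code from go-solr); the URLs and collection name are made up, and there is no backoff. In practice you would keep the clients open and remember which server answered last; this only shows the failover loop.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RotateAndRetrySketch {
  // Try each replica base URL in turn; a node that is rebooting just makes
  // the client rotate to the next server instead of failing the request.
  static QueryResponse query(List<String> baseUrls, String collection, SolrQuery q)
      throws SolrServerException {
    for (String url : baseUrls) {
      try (HttpSolrClient client = new HttpSolrClient.Builder(url).build()) {
        return client.query(collection, q);
      } catch (SolrServerException | IOException e) {
        // node down or restarting; try the next server in the list
      }
    }
    throw new SolrServerException("no live Solr servers in " + baseUrls);
  }

  public static void main(String[] args) throws SolrServerException {
    List<String> urls = Arrays.asList(
        "http://solr1:8983/solr", "http://solr2:8983/solr", "http://solr3:8983/solr");
    System.out.println(
        query(urls, "mycollection", new SolrQuery("*:*")).getResults().getNumFound());
  }
}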