Re: Issue while using Document Routing in SolrCloud 6.1

2017-10-11 Thread Emir Arnautović
Hi Ketan,
Each shard is a separate index, and if you are indexing 100 docs/sec without 
routing across two shards, you are indexing 50 docs/sec per shard. If you have 
routing and all documents are from a single tenant, a single shard has to be 
able to process 100 docs/sec. If you have two nodes, that means you have to 
process the same number of documents with half the resources. But even with one 
node it is expected to be slower, because more docs/sec per shard means more 
merging, and therefore lower throughput.
If you change your tests to index both tenants at the same time, you should not 
experience lower throughput.
When it comes to search latency, it improves because there is no merge phase in 
your query. As for search throughput in addition to latency: with routing each 
query uses half the resources, so in your case you can expect roughly double 
the throughput.
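
For reference, a composite-ID routed update and query look roughly like this 
(a sketch only; host, collection, and field names here are hypothetical):

curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"tenantA!doc1","title_s":"doc for tenant A"},
       {"id":"tenantB!doc1","title_s":"doc for tenant B"}]'
curl 'http://localhost:8983/solr/mycollection/select?q=*:*&_route_=tenantA!'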

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 11 Oct 2017, at 07:31, Ketan Thanki  wrote:
> 
> 
> Thanks Emir,
> 
> As mentioned below, I am indexing using two tenants, and my data currently 
> belongs to only one shard. Retrieval is much faster, but inserts seem slower.
> Is there any specific reason for that?
> 
> Please do the needful.
> 
> Regards,
> Ketan.
> 
> 
> 
> Hi Ketan,
> Is it possible that you are indexing only one tenant and that is causing a 
> single shard to become a hotspot?
> 
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> 
> 
> From: Ketan Thanki
> Sent: Tuesday, October 10, 2017 4:18 PM
> To: 'solr-user@lucene.apache.org'
> Subject: Issue while using Document Routing in SolrCloud 6.1
> 
> Hi,
> 
> Need help regarding the query mentioned below.
> I have configured 2 collections, each with 2 shards and 2 replicas, and I 
> have implemented composite-ID document routing for my unique field 'Id', 
> using a 2-level tenant route as mentioned below.
> e.g.: projectId:158380, modelId:3606, where the tenant bits are used as 
> projectId/2!modelId/8 for the ID below:
> "id":"79190!450!0003606#001#001#0#002754269#11760499351"
> 
> Issue: it seems that retrieval gets faster but insertion is slower compared 
> to without the routing changes.
> 
> Please do the needful.
> 
> Regards,
> Ketan.
> 



Indexing files from HDFS

2017-10-11 Thread István
Hi,

I have Solr 4.10.3 as part of a CDH5 installation and I would like to index a
huge amount of CSV files on HDFS. I was wondering what the best way of doing
that is.

Here is the current approach:

data.csv:

id, fruit
10, apple
20, orange

Indexing with the following command using search-mr-1.0.0-cdh5.11.1-job.jar

hadoop --config /etc/hadoop/conf.cloudera.yarn jar \
/opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-1.0.0-cdh5.11.1-job.jar
\
org.apache.solr.hadoop.MapReduceIndexerTool \
-D 'mapred.child.java.opts=-Xmx500m' --log4j \
/opt/cloudera/parcels/CDH/share/doc/search/examples/solr-nrt/log4j.properties
--morphline-file \
/home/user/readCSV.conf \
--output-dir hdfs://name-node.server.com:8020/user/solr/output --verbose
--go-live \
--zk-host name-node.server.com:2181/solr --collection collection0 \
hdfs://name-node.server.com:8020/user/solr/input

This leads to the following exception:

2219 [main] INFO  org.apache.solr.hadoop.MapReduceIndexerTool  - Indexing 1
files using 1 real mappers into 1 reducers
Error: java.io.IOException: Batch Write Failure
at org.apache.solr.hadoop.BatchWriter.throwIf(BatchWriter.java:239)
..
Caused by: org.apache.solr.common.SolrException: ERROR: [doc=100] unknown
field 'file_path'
at
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:185)
at
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:78)

It appears to me that the schema does not have file_path. The collection is
created through Hue and it properly identifies the two fields id and fruit.
I found out that the search-mr tool has the following code that references
the file_path:

https://github.com/cloudera/search/blob/cdh5-1.0.0_5.2.0/search-mr/src/main/java/org/apache/solr/hadoop/HdfsFileFieldNames.java#L30

I am not sure what to do in order to be able to index files on HDFS. I have
two guesses (rough sketches for both follow below):

- add the fields defined in the search tool to the schema when I create it
(not sure how that works through Hue)
- disable the HDFS metadata insertion when inserting data
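
A minimal sketch of the first option, assuming the classic schema.xml and that
the injected metadata fields all share the file_ prefix that file_path uses
(the full list is in the HdfsFileFieldNames class linked above):

<!-- catch-all for the HDFS file metadata fields added by the indexer tool -->
<dynamicField name="file_*" type="string" indexed="true" stored="true"/>

For the second guess, the kite-morphlines sanitizeUnknownSolrFields command
can drop any record fields that are not in the Solr schema before loading,
e.g. in readCSV.conf:

{ sanitizeUnknownSolrFields { solrLocator : ${SOLR_LOCATOR} } }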

Has anybody seen this before?

Thanks,
Istvan




-- 
the sun shines for all


Parsing of rq queries in LTR

2017-10-11 Thread Binoy Dalal
Hi,
For an LTR query, is there any way of checking how the `rq` is being
parsed, or specifically how the `efi` queries are treated?

For example, let's say my `rq` looks like this:
"rq":"{!ltr model=my_efi_model efi.text=my car}"

And my corresponding feature is:
SolrFeature [name=my_efi, params={q={!field f=efi_field}${text}}]

I want to see how the `my_efi` feature processes the query `q={!field
f=efi_field}${text}`.
Does it do something like `efi_field:my efi_field:car` or `efi_field=my
default_field=car` etc.

The debug query option does not provide this information and the solr logs
don't record the execution of queries made for feature value calculation.

Any inputs are much appreciated.
-- 
Regards,
Binoy Dalal


Re: Indexing files from HDFS

2017-10-11 Thread Erick Erickson
You'll probably get much more informed responses from
the Cloudera folks, especially about Hue.

Best,
Erick

On Wed, Oct 11, 2017 at 6:05 AM, István  wrote:
> Hi,
>
> I have Solr 4.10.3 as part of a CDH5 installation and I would like to index a
> huge amount of CSV files on HDFS. I was wondering what the best way of doing
> that is.
>
> Here is the current approach:
>
> data.csv:
>
> id, fruit
> 10, apple
> 20, orange
>
> Indexing with the following command using search-mr-1.0.0-cdh5.11.1-job.jar
>
> hadoop --config /etc/hadoop/conf.cloudera.yarn jar \
> /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-1.0.0-cdh5.11.1-job.jar
> \
> org.apache.solr.hadoop.MapReduceIndexerTool \
> -D 'mapred.child.java.opts=-Xmx500m' --log4j \
> /opt/cloudera/parcels/CDH/share/doc/search/examples/solr-nrt/log4j.properties
> --morphline-file \
> /home/user/readCSV.conf \
> --output-dir hdfs://name-node.server.com:8020/user/solr/output --verbose
> --go-live \
> --zk-host name-node.server.com:2181/solr --collection collection0 \
> hdfs://name-node.server.com:8020/user/solr/input
>
> This leads to the following exception:
>
> 2219 [main] INFO  org.apache.solr.hadoop.MapReduceIndexerTool  - Indexing 1
> files using 1 real mappers into 1 reducers
> Error: java.io.IOException: Batch Write Failure
> at org.apache.solr.hadoop.BatchWriter.throwIf(BatchWriter.java:239)
> ..
> Caused by: org.apache.solr.common.SolrException: ERROR: [doc=100] unknown
> field 'file_path'
> at
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:185)
> at
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:78)
>
> It appears to me that the schema does not have file_path. The collection is
> created through Hue and it properly identifies the two fields id and fruit.
> I found out that the search-mr tool has the following code that references
> the file_path:
>
> https://github.com/cloudera/search/blob/cdh5-1.0.0_5.2.0/search-mr/src/main/java/org/apache/solr/hadoop/HdfsFileFieldNames.java#L30
>
> I am not sure what to do in order to be able to index files on HDFS. I have
> two guesses:
>
> - add the fields defined in the search tool to the schema when I create it
> (not sure how that works through Hue)
> - disable the HDFS metadata insertion when inserting data
>
> Has anybody seen this before?
>
> Thanks,
> Istvan
>
>
>
>
> --
> the sun shines for all


query Slower with Document Routing while Use on Heavy Index Size

2017-10-11 Thread Ketan Thanki
HI,

I have an issue, as mentioned below, while using document routing.

1. Query is slower with a heavy index; details below.
Config: 4 shards and 4 replicas, with an 8.5 GB index size (about 2 GB per 
shard).
-With routing parameter:
q=worksetid_l:2028446%20AND%20modelid_l:23718&rows=1&_route_=1041040!2964!&wt=json
 taking QTime=3.4
-Without routing parameter:
q=worksetid_l:2028446%20AND%20modelid_l:23718&rows=1&wt=json taking 
QTime=3.2

2. Query is faster with a small index; details below.
Config: 2 shards and 2 replicas, with 190 MB in 1 shard (data indexed in only 
1 shard, with document routing).
-With routing parameter:
q=worksetid_l:103963 AND 
modelid_l:3611&rows=1&wt=json&_route_=79190!451!&wt=json taking QTime=39
-Without routing parameter:
q=worksetid_l:103963 AND modelid_l:3611&rows=1&wt=json taking QTime=1.4

So the issue is that when querying a large index, the query gets slower with 
the routing parameter, ending up about the same as without it.

Please do the needful.


-Ketan.




Querying a specific replica in SolrCloud

2017-10-11 Thread Chris Ulicny
Hi,

We're trying to investigate a possible data issue between two replicas in
our cloud setup. We have docValues enabled for a string field, and when we
facet by it, the results come back with the expected numbers per value, or
zero for all values.

Is there a way to tell which replica is handling a request via debug or
some other parameter, or to specify which replica to route the request to?

Thanks,
Chris


solr 7.0.1: exception running post to crawl simple website

2017-10-11 Thread Kevin Layer
I want to use solr to index a markdown website.  The files
are in native markdown, but they are served in HTML (by markserv).

Here's what I did:

docker run --name solr -d -p 8983:8983 -t solr
docker exec -it --user=solr solr bin/solr create_core -c handbook

Then, to crawl the site:

quadra[git:master]$ docker exec -it --user=solr solr bin/post -c handbook 
http://quadra.franz.com:9091/index.md -recursive 10 -delay 0 -filetypes md
/docker-java-home/jre/bin/java -classpath /opt/solr/dist/solr-core-7.0.1.jar 
-Dauto=yes -Drecursive=10 -Ddelay=0 -Dfiletypes=md -Dc=handbook -Ddata=web 
org.apache.solr.util.SimplePostTool http://quadra.franz.com:9091/index.md
SimplePostTool version 5.0.0
Posting web pages to Solr url http://localhost:8983/solr/handbook/update/extract
Entering auto mode. Indexing pages with content-types corresponding to file 
endings md
SimplePostTool: WARNING: Never crawl an external web site faster than every 10 
seconds, your IP will probably be blocked
Entering recursive mode, depth=10, delay=0s
Entering crawl at level 0 (1 links total, 1 new)
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.solr.util.SimplePostTool$PageFetcher.readPageFromUrl(SimplePostTool.java:1138)
at org.apache.solr.util.SimplePostTool.webCrawl(SimplePostTool.java:603)
at 
org.apache.solr.util.SimplePostTool.postWebPages(SimplePostTool.java:563)
at 
org.apache.solr.util.SimplePostTool.doWebMode(SimplePostTool.java:365)
at org.apache.solr.util.SimplePostTool.execute(SimplePostTool.java:187)
at org.apache.solr.util.SimplePostTool.main(SimplePostTool.java:172)
quadra[git:master]$ 


Any ideas on what I did wrong?

Thanks.

Kevin


Re: Querying a specific replica in SolrCloud

2017-10-11 Thread Erick Erickson
You can route a request to a specific replica by
solr_node:port/solr/collection1_shard1_replica1/query?distrib=false&blah blah blah

The "distrib=false" bit will cause the query to go to that replica and
only that replica. You can get the shard (collection1_shard1_replica1)
from the admin UI "cores" dropdown.

You can also try adding "&shards.info=true" to the standard request like:
solr_node:port/solr/collection/query?shards.info=true&blah blah blah
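
Concretely, the two approaches look roughly like this (a sketch; host, port,
and names are hypothetical):

curl 'http://localhost:8983/solr/collection1_shard1_replica1/select?q=*:*&rows=0&distrib=false'
curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=0&shards.info=true'

The shards.info section of the response lists, per shard, the shardAddress of
the replica that actually served that part of the request.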

Best,
Erick

On Wed, Oct 11, 2017 at 7:58 AM, Chris Ulicny  wrote:
> Hi,
>
> We're trying to investigate a possible data issue between two replicas in
> our cloud setup. We have docValues enabled for a string field, and when we
> facet by it, the results come back with the expected numbers per value, or
> zero for all values.
>
> Is there a way to tell which replica is handling a request via debug or
> some other parameter, or to specify which replica to route the request to?
>
> Thanks,
> Chris


Re: Solr staying constant on popularity indexes

2017-10-11 Thread S G
I find myself in the same boat as TI when a Solr node goes into recovery.
The Solr UI and the logs are really of no help at that time.
It would be really nice to enhance the Solr UI with the features mentioned
in the original post.


On Tue, Oct 10, 2017 at 4:14 AM, Charlie Hull  wrote:

> On 10/10/2017 11:02, Bernd Fehling wrote:
>
>> Questions coming to my mind:
>>
>> Is there a "Resiliency Status" page for SolrCloud somewhere?
>>
>> How would SolrCloud behave in a Jepsen test?
>>
>
> This has been done in 2014 - see
> https://lucidworks.com/2014/12/10/call-maybe-solrcloud-jepsen-flaky-networks/
>
> Charlie
>
>>
>> Regards
>> Bernd
>>
>> Am 10.10.2017 um 09:22 schrieb Toke Eskildsen:
>>
>>> On Mon, 2017-10-09 at 20:50 -0700, Tech Id wrote:
>>>
>>>> Being a long term Solr user, I tried to do a little comparison myself
>>>> and actually found some interesting features in ES.
>>>>
>>>> 1. No zookeeper - I have burnt my hands with some zookeeper issues
>>>> in the past and it is no fun to deal with. Kafka and Storm are also
>>>> trying to burden zookeeper less and less because ZK cannot handle
>>>> heavy traffic.

>>>
>>> ZooKeeper is not the easiest beast to tame, but it does have its
>>> plusses. The greatest being that it is pretty good at what it does:
>>> https://aphyr.com/posts/291-call-me-maybe-zookeeper
>>>
>>> Home-cooked distribution systems might be a lot easier to use,
>>> primarily because they tend to be a perfect fit for the technology they
>>> support, but they are hard to get right:
>>> https://aphyr.com/posts/323-call-me-maybe-elasticsearch-1-5-0
>>>
>>>> 2. REST APIs - this is a big wow over the complicated syntax Solr
>>>> uses. I think V2 APIs are coming to address this, but they did come a
>>>> bit late in the game.

>>>
>>> I guess you mean JSON APIs? Anyway, I fully agree that the old Solr
>>> syntax is extremely clunky as soon as we move beyond the simple "just
>>> supply a few search terms"-scenario.
>>>
>>> - Toke Eskildsen, Royal Danish Library
>>>
>>>
>
> --
> Charlie Hull
> Flax - Open Source Enterprise Search
>
> tel/fax: +44 (0)8700 118334
> mobile:  +44 (0)7767 825828
> web: www.flax.co.uk
>


Re: Querying a specific replica in SolrCloud

2017-10-11 Thread Chris Ulicny
Thanks! I was trying the distrib=false option but was apparently using it
incorrectly for the cloud. The shards.info parameter was what I was
originally looking for.


On Wed, Oct 11, 2017 at 1:09 PM Erick Erickson 
wrote:

> You can route a request to a specific replica by
> solr_node:port/solr/collection1_shard1_replica1/query?distrib=false&blah blah blah
>
> The "distrib=false" bit will cause the query to go to that replica and
> only that replica. You can get the shard (collection1_shard1_replica1)
> from the admin UI "cores" dropdown.
>
> You can also try adding "&shards.info=true" to the standard request like:
> solr_node:port/solr/collection/query?shards.info=true&blah blah blah
>
> Best,
> Erick
>
> On Wed, Oct 11, 2017 at 7:58 AM, Chris Ulicny  wrote:
> > Hi,
> >
> > We're trying to investigate a possible data issue between two replicas in
> > our cloud setup. We have docValues enabled for a string field, and when
> we
> > facet by it, the results come back with the expected numbers per value,
> or
> > zero for all values.
> >
> > Is there a way to tell which replica is handling a request via debug or
> > some other parameter, or to specify which replica to route the request
> to?
> >
> > Thanks,
> > Chris
>


Inconsistent results for facet queries

2017-10-11 Thread Chris Ulicny
Hi,

We've run into a strange issue with our deployment of solrcloud 6.3.0.
Essentially, a standard facet query on a string field usually comes back
empty when it shouldn't. However, every now and again the query actually
returns the correct values. This is only affecting a single shard in our
setup.

The behavior pattern generally looks like the query works properly when it
hasn't been run recently, and then returns nothing after the query seems to
have been cached (< 50ms QTime). Wait a while and you get the correct
result followed by blanks. It doesn't matter which replica of the shard is
queried; the results are the same.

The general query in question looks like
/select?q=*:*&facet=true&facet.field=market&rows=0&fq=

The field is defined in the schema as <field name="market" type="string" ... docValues="true"/>

There are numerous other fields defined similarly, and they do not exhibit
the same behavior when used as the facet.field value. They consistently
return the right results on the shard in question.

If we add facet.method=enum to the query, we get the correct results every
time (though slower). So our assumption is that something only sporadically
works when the fc method is chosen by default.
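
For reference, the variant that consistently returns correct results for us is
the same query with the method forced:

/select?q=*:*&facet=true&facet.field=market&facet.method=enum&rows=0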

A few other notes about the collection. This collection is not freshly
indexed, but has not had any particularly bad failures beyond follower
replicas going down due to PKIAuthentication timeouts (has been fixed). It
has also had a full reindex after a schema change added docValues some
fields (including the one above), but the collection wasn't emptied first.
We are using the composite router to co-locate documents.

Currently, our plan is just to reindex all of the documents on the affected
shard to see if that fixes the problem. Any ideas on what might be
happening or ways to troubleshoot this are appreciated.

Thanks,
Chris


RE: Parsing of rq queries in LTR

2017-10-11 Thread Brian Yee
I have a similar question. I am performing my feature extraction with the 
following:

fl= [features+efi.query=bakeware 3-piece set]

I'm pretty sure the dash is causing my query to error. But I'm also not sure 
how the spaces impact the efi param. I tried putting the term in quotes, but 
that does not work.
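
In case it helps, local-param values containing spaces can usually be
single-quoted in the rq form; a sketch, reusing the model name from the
original post (whether the [features] fl transformer accepts the same quoting
I have not verified):

rq={!ltr model=my_efi_model efi.text='bakeware 3-piece set'}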

--Brian

-Original Message-
From: Binoy Dalal [mailto:binoydala...@gmail.com] 
Sent: Wednesday, October 11, 2017 10:51 AM
To: SOLR users group 
Subject: Parsing of rq queries in LTR

Hi,
For an LTR query, is there any way of checking how the `rq` is being parsed, 
or specifically how the `efi` queries are treated?

For example, let's say my `rq` looks like this:
"rq":"{!ltr model=my_efi_model efi.text=my car}"

And my corresponding feature is:
SolrFeature [name=my_efi, params={q={!field f=efi_field}${text}}]

I want to see how the `my_efi` feature processes the query `q={!field 
f=efi_field}${text}`.
Does it do something like `efi_field:my efi_field:car` or `efi_field=my 
default_field=car` etc.

The debug query option does not provide this information and the solr logs 
don't record the execution of queries made for feature value calculation.

Any inputs are much appreciated.
--
Regards,
Binoy Dalal


Re: Inconsistent results for facet queries

2017-10-11 Thread Erick Erickson
bq: ...but the collection wasn't emptied first

This is what I'd suspect is the problem. Here's the issue: Segments
aren't merged identically on all replicas. So at some point you had
this field indexed without docValues, changed that and re-indexed. But
the segment merging could "read" the first segment it's going to merge
and think it knows about docValues for that field, when in fact that
segment had the old (non-DV) definition.

This would not necessarily be the same on all replicas even on the _same_ shard.

This can propagate through all following segment merges IIUC.

So my bet is that if you index into a new collection, everything will
be fine. You can also just delete everything first, but I usually
prefer a new collection so I'm absolutely and positively sure that the
above can't happen.
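
If it helps, creating a fresh collection and swapping it in behind an alias
looks roughly like this (a sketch; names, counts, and config are
hypothetical):

curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll_v2&numShards=2&replicationFactor=2&collection.configName=myconf'
# reindex into mycoll_v2 and verify, then point clients at it via an alias:
curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=mycoll&collections=mycoll_v2'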

Best,
Erick

On Wed, Oct 11, 2017 at 12:51 PM, Chris Ulicny  wrote:
> Hi,
>
> We've run into a strange issue with our deployment of solrcloud 6.3.0.
> Essentially, a standard facet query on a string field usually comes back
> empty when it shouldn't. However, every now and again the query actually
> returns the correct values. This is only affecting a single shard in our
> setup.
>
> The behavior pattern generally looks like the query works properly when it
> hasn't been run recently, and then returns nothing after the query seems to
> have been cached (< 50ms QTime). Wait a while and you get the correct
> result followed by blanks. It doesn't matter which replica of the shard is
> queried; the results are the same.
>
> The general query in question looks like
> /select?q=*:*&facet=true&facet.field=market&rows=0&fq=
>
> The field is defined in the schema as <field name="market" type="string" ... docValues="true"/>
>
> There are numerous other fields defined similarly, and they do not exhibit
> the same behavior when used as the facet.field value. They consistently
> return the right results on the shard in question.
>
> If we add facet.method=enum to the query, we get the correct results every
> time (though slower). So our assumption is that something only sporadically
> works when the fc method is chosen by default.
>
> A few other notes about the collection. This collection is not freshly
> indexed, but has not had any particularly bad failures beyond follower
> replicas going down due to PKIAuthentication timeouts (has been fixed). It
> has also had a full reindex after a schema change added docValues to some
> fields (including the one above), but the collection wasn't emptied first.
> We are using the composite router to co-locate documents.
>
> Currently, our plan is just to reindex all of the documents on the affected
> shard to see if that fixes the problem. Any ideas on what might be
> happening or ways to troubleshoot this are appreciated.
>
> Thanks,
> Chris


Getting user-level KeeperException

2017-10-11 Thread Gunalan V
Hello,

Could someone please let me know what this user-level KeeperException in
ZooKeeper means, and how to fix it?





Thanks,
GVK

2017-10-12 01:56:25,276 [myid:3] - INFO  
[CommitProcessor:3:ZooKeeperServer@687] - Established session 0x35f0e3edd390001 
with negotiated timeout 15000 for client /10.138.66.12:33935
 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 
(n.peerEpoch) LEADING (my state)
2017-10-12 01:42:01,778 [myid:2] - INFO  
[LearnerHandler-/10.138.66.12:47249:LearnerHandler@346] - Follower sid: 3 : 
info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@26153a56
2017-10-12 01:42:01,781 [myid:2] - INFO  
[LearnerHandler-/10.138.66.12:47249:LearnerHandler@401] - Synchronizing with 
Follower sid: 3 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2017-10-12 01:42:01,781 [myid:2] - INFO  
[LearnerHandler-/10.138.66.12:47249:LearnerHandler@475] - Sending SNAP
2017-10-12 01:42:01,781 [myid:2] - INFO  
[LearnerHandler-/10.138.66.12:47249:LearnerHandler@499] - Sending snapshot last 
zxid of peer is 0x0  zxid of leader is 0x1sent zxid of db as 0x1
2017-10-12 01:42:01,789 [myid:2] - INFO  
[LearnerHandler-/10.138.66.12:47249:LearnerHandler@535] - Received 
NEWLEADER-ACK message from 3
2017-10-12 01:45:39,311 [myid:2] - INFO  [SyncThread:2:FileTxnLog@203] - 
Creating new log file: log.10001
2017-10-12 01:45:39,645 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@486] - Processed session termination for 
sessionid: 0x35f0e3edd39
2017-10-12 01:51:46,310 [myid:2] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182:NIOServerCnxnFactory@192] - Accepted 
socket connection from /10.138.66.12:39420
2017-10-12 01:51:46,376 [myid:2] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182:ZooKeeperServer@942] - Client 
attempting to establish new session at /10.138.66.12:39420
2017-10-12 01:51:46,389 [myid:2] - INFO  
[CommitProcessor:2:ZooKeeperServer@687] - Established session 0x25f0e3de4e5 
with negotiated timeout 3 for client /10.138.66.12:39420
2017-10-12 01:51:46,470 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@486] - Processed session termination for 
sessionid: 0x25f0e3de4e5
2017-10-12 01:51:46,474 [myid:2] - INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2182:NIOServerCnxn@1044] - Closed socket 
connection for client /10.138.66.12:39420 which had sessionid 0x25f0e3de4e5
2017-10-12 01:56:24,946 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@486] - Processed session termination for 
sessionid: 0x15f0e3de4e1
2017-10-12 01:56:25,307 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x7 zxid:0x1003c 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,315 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0xd zxid:0x1003e 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,316 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0xe zxid:0x1003f 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,318 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x10 zxid:0x10040 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,323 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x16 zxid:0x10042 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,328 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x1c zxid:0x10044 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,394 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x51 zxid:0x1004f 
txntype:-1 reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists 
for /overseer
2017-10-12 01:56:25,399 [myid:2] - INFO  [ProcessThread(sid:2 
cport:-1)::PrepRequestProcessor@648] - Got user-level KeeperException when 
processing sessionid:0x35f0e3edd390001 type:create cxid:0x52 zxid