println(termPositionsEnum.getPayload()); // returns null
}
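A hedged sketch of the two usual reasons `getPayload()` returns null in the Lucene 4.x API used in this thread: either the field was never analyzed with a payload-producing filter (e.g. DelimitedPayloadTokenFilter), or `getPayload()` was called before advancing the positions enum — the payload belongs to the *current position*, so `nextDoc()` and `nextPosition()` must be called first. The identifiers below are illustrative, not taken from the original post:

```java
// Sketch (Lucene 4.x): getPayload() is only meaningful after nextDoc()
// and nextPosition(), and only if the field was indexed with payloads,
// e.g. via DelimitedPayloadTokenFilter on "term|payload" input.
DocsAndPositionsEnum positions = termsEnum.docsAndPositions(null, null);
while (positions.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
    for (int i = 0; i < positions.freq(); i++) {
        positions.nextPosition();                  // advance to a position first
        BytesRef payload = positions.getPayload(); // null only if none was indexed here
        System.out.println(payload);
    }
}
```

If the analysis chain has no payload filter, every position legitimately carries a null payload, which matches the behavior reported above.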
luke (screenshot):
<http://lucene.472066.n3.nabble.com/file/n4145641/luke.png>
Am I missing some configuration, or am I doing this the wrong way? Any help in
resolving this issue will be appreciated.
Thanks in advance
Ranjith Venkatesan
I have explained this in the above post with screenshots. Indexing fails
when any node is down while shard splitting is in progress.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Split-Shard-Document-loss-and-down-time-tp4082002p4082994.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Erick,
I have a question. Suppose an error occurs during a shard split — is
there any way to revert the split action? This is seriously
breaking my head. For me, documents are getting lost when any node for
that shard is dead while the shard split is in progress.
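As far as I know the 4.x Collections API has no single "undo split" call; the usual manual cleanup is based on the fact that a failed SPLITSHARD leaves the parent shard active and the sub-shards in the inactive "construction" state, and DELETESHARD can remove inactive shards so the split can be retried. A hedged SolrJ 4.x sketch, with placeholder collection and shard names — verify the details against your version:

```java
// Hedged sketch (SolrJ 4.x, placeholder names): remove an inactive
// sub-shard left behind by a failed SPLITSHARD, then retry the split.
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("action", "DELETESHARD");
params.set("collection", "mycollection");
params.set("shard", "shard1_0"); // one of the failed sub-shards
QueryRequest request = new QueryRequest(params);
request.setPath("/admin/collections");
server.request(request);
```

DELETESHARD refuses to delete an active shard, so the parent data stays safe; only the partially built sub-shards are removed.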
Thanks
Ranjith
I have tried 4.4 too; it produces the same kind of problem.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Split-Shard-Document-loss-and-down-time-tp4082002p4082180.html
ateRequest.java:117)
        at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
        at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
        at tokyosolrindex.Main.main(Main.java:44)
Is there any way to overcome this?
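One common mitigation for a commit that fails transiently (for instance while a shard split or leader election is in progress) is to retry with backoff. Below is a self-contained, pure-Java sketch of that pattern; the helper names are ours, not SolrJ's, and in the thread's case the `Callable` body would wrap `server.commit()`:

```java
import java.util.concurrent.Callable;

public class RetryCommit {
    // Generic retry-with-backoff helper (illustrative names). Retries the
    // operation up to `attempts` times, sleeping longer after each failure,
    // and rethrows the last exception if all attempts fail.
    static <T> T retry(Callable<T> op, int attempts, long baseDelayMs) throws Exception {
        for (int i = 1; ; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (i >= attempts) throw e;    // give up after the last attempt
                Thread.sleep(baseDelayMs * i); // simple linear backoff
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Toy stand-in for server.commit(): fails twice, then succeeds.
        final int[] calls = {0};
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

This does not fix the underlying split problem, but it keeps the indexing client alive across a short window of cluster instability.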
Thanks in advance
RANJITH VENKATESAN
Thanks for the reply. I think this approach will work only for new
collections. Is there any way to move some existing cores to a new
machine or node?
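One approach that should work on 4.x, sketched below with placeholder host, collection, and core names: create a replica of the shard on the new machine via the CoreAdmin API (it replicates the index from the leader), wait for it to become active, then unload the core on the full machine. Treat the exact SolrJ calls as assumptions to verify on your version:

```java
// Hedged sketch (SolrJ 4.x, placeholder names): add a replica on the new
// node, then drop the old core once the replica has caught up.
HttpSolrServer newNode = new HttpSolrServer("http://newhost:8983/solr");
CoreAdminRequest.Create create = new CoreAdminRequest.Create();
create.setCoreName("mycollection_shard1_replica2");
create.setCollection("mycollection");
create.setShardId("shard1");
create.process(newNode);

// ...after the cluster state shows the new replica as "active":
HttpSolrServer oldNode = new HttpSolrServer("http://oldhost:8983/solr");
CoreAdminRequest.unloadCore("mycollection_shard1_replica1", oldNode);
```

Doing the unload only after the new replica is active means the shard never drops below its replication factor during the move.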
--
View this message in context:
http://lucene.472066.n3.nabble.com/Machine-memory-full-tp4080511p4081235.html
Currently one machine's memory is 60% full. My question is how to handle this
scenario without interrupting the service.
Thanks in advance
RANJITH VENKATESAN
--
View this message in context:
http://lucene.472066.n3.nabble.com/Machine-memory-full-tp4080511.html
My assumption is that maxConnections is governed by Jetty or Tomcat. Is
that so? If not, how do I configure maxConnections in Solr and ZooKeeper?
In my case up to 1000 users may search simultaneously, and indexing will
be happening at the same time.
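The assumption is essentially right: Solr itself does not enforce a connection limit for search traffic; the servlet container does. With the Jetty that ships with Solr 4.x, the bound is the request thread pool in etc/jetty.xml. The values below are illustrative, not recommendations:

```xml
<!-- etc/jetty.xml: concurrent HTTP requests are bounded by this pool -->
<Set name="ThreadPool">
  <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <Set name="minThreads">10</Set>
    <Set name="maxThreads">10000</Set>
  </New>
</Set>
```

ZooKeeper's analogous knob is `maxClientCnxns` in zoo.cfg, which caps connections per client IP; search clients normally talk to Solr rather than ZooKeeper, so 1000 concurrent searchers mostly stress the container pool, not the ensemble.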
Thanks in advance
Ranjith Venkatesan
closed, and hence the node was able to join the cloud.
But when the network became available again, after that exception the node
could not join the cloud.
Thanks in advance
RANJITH VENKATESAN
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Zookeeper-Too-Many-file
Hi,
The tickTime in ZooKeeper was high. When I reduced it to 2000 ms, the Solr node
status gets updated in under 20 s, which resolved my issue. Thanks for helping me.
I have one more question:
1. Is it advisable to reduce the tickTime further?
2. Or, what is the most appropriate tickTime for maximum performance?
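The tickTime fix above follows from how ZooKeeper negotiates session timeouts: the server clamps the timeout a client requests into the window [2 × tickTime, 20 × tickTime], and an abruptly killed node is only marked down once the negotiated timeout expires. So a large tickTime silently inflates the requested zkClientTimeout and delays the /live_nodes update. A pure-Java sketch of that clamp (method name is ours, the rule is ZooKeeper's documented behavior):

```java
public class ZkTimeout {
    // ZooKeeper clamps the client's requested session timeout into
    // [2 * tickTime, 20 * tickTime]; the node is only marked down after
    // the *negotiated* timeout expires.
    static int negotiatedTimeoutMs(int requestedMs, int tickTimeMs) {
        int min = 2 * tickTimeMs;
        int max = 20 * tickTimeMs;
        return Math.max(min, Math.min(max, requestedMs));
    }

    public static void main(String[] args) {
        // With tickTime=2000, the requested zkClientTimeout=15000 is honored:
        System.out.println(negotiatedTimeoutMs(15000, 2000));  // 15000
        // With a large tickTime such as 30000, it is clamped UP to 60000,
        // so a dead node stays "live" for up to a minute:
        System.out.println(negotiatedTimeoutMs(15000, 30000)); // 60000
    }
}
```

Reducing tickTime below 2000 ms shortens the detection window further, but it also tightens the ensemble's own heartbeat and makes sessions expire under brief GC pauses or network hiccups, so 2000 ms is the commonly used default.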
My zkClientTimeout is set to 15000 by default.
I am using an external ZooKeeper 3.4.5 ensemble, also running on 3 machines. I
am using only one shard, with the replication factor set to 3.
A normal shutdown updates the Solr state as soon as the node goes down; I am
facing this issue only with abrupt shutdowns.
We are going to use Solr in production. There is a chance that a machine
shuts down due to power failure, or that the network is disconnected
by manual intervention. We need to address those cases as well to build
a robust system.
--
View this message in context:
http://lucene.47206
The same scenario happens if the network to any one of the machines is
unavailable (i.e., if we manually disconnect the network cable, the status of
the node is not updated immediately).
Please help me with this issue.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Live-Nodes-not-
the "Gone" node.
My questions are:
1. Why does it take so long to update the status of the inactive node?
2. If the leader node itself is killed, I cannot use the
service until the status of the node is updated.
Thanks in advance
Ranjith Venkatesan
Hi,
I am new to Solr. I want to find the size of a collection dynamically via SolrJ.
I have tried many ways but could not succeed with any of them. Please help me
with this issue.
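If "size" means the number of documents, a common SolrJ idiom is a match-all query with rows=0 and reading numFound; if it means index size on disk, the CoreAdmin STATUS action reports a per-core sizeInBytes figure. A hedged SolrJ 4.x sketch for the first case, with a placeholder URL:

```java
// Sketch (SolrJ 4.x, placeholder URL): count documents without fetching any.
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/mycollection");
SolrQuery query = new SolrQuery("*:*");
query.setRows(0); // we only want the count, not the documents
QueryResponse response = server.query(query);
long numDocs = response.getResults().getNumFound();
System.out.println("documents: " + numDocs);
```

With rows=0 the response carries no stored fields, so this stays cheap even on large collections.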
Thanks in advance
Ranjith Venkatesan
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Collection-s