Solr 4.7 Payload
Hi all,

I am evaluating Lucene payloads, using Solr 4.7.2. I am able to index documents with payloads, but I cannot retrieve the payload from DocsAndPositionsEnum: getPayload() returns just null, even though terms.hasPayloads() returns true and I can see the payload value in Luke (image attached below). My schema for the payload field is in schema.xml.

My indexing code:

    for (int i = 1; i <= 1000; i++) {
        SolrInputDocument doc1 = new SolrInputDocument();
        doc1.addField("id", "test:" + i);
        doc1.addField("uid", "" + i);
        doc1.addField("payloads", "_UID_|" + i + "f");
        doc1.addField("content", "test");
        server.add(doc1);
        if (i % 1 == 0) {   // i % 1 is always 0, so this commits on every add
            server.commit();
        }
    }
    server.commit();

Search code:

    DocsAndPositionsEnum termPositionsEnum =
            solrSearcher.getAtomicReader().termPositionsEnum(t);
    int doc = -1;
    while ((doc = termPositionsEnum.nextDoc()) != DocsAndPositionsEnum.NO_MORE_DOCS) {
        System.out.println(termPositionsEnum.getPayload()); // returns null
    }

Luke: <http://lucene.472066.n3.nabble.com/file/n4145641/luke.png>

Am I missing some configuration, or am I going about this the wrong way? Any help in resolving this issue will be appreciated.

Thanks in advance,
Ranjith Venkatesan

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-7-Payload-tp4145641.html
Sent from the Solr - User mailing list archive at Nabble.com.
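In Lucene 4.x a payload is attached to a position, so getPayload() is only meaningful after nextPosition() has been called; the search loop above never advances positions, which is a likely cause of the null. A hedged sketch of a corrected read loop, reusing `solrSearcher` and the term `t` from the post (decodeFloat assumes the field's payloads were written with the float encoder, as the "31f"-style values suggest):

```java
import org.apache.lucene.analysis.payloads.PayloadHelper;
import org.apache.lucene.index.DocsAndPositionsEnum;
import org.apache.lucene.util.BytesRef;

// Sketch only: getPayload() must be preceded by nextPosition(),
// once per position, for each matching document.
DocsAndPositionsEnum postings =
        solrSearcher.getAtomicReader().termPositionsEnum(t);
int doc;
while ((doc = postings.nextDoc()) != DocsAndPositionsEnum.NO_MORE_DOCS) {
    int freq = postings.freq();               // number of positions in this doc
    for (int i = 0; i < freq; i++) {
        postings.nextPosition();              // advance to the position first
        BytesRef payload = postings.getPayload(); // now non-null if one was indexed
        if (payload != null) {
            float value = PayloadHelper.decodeFloat(payload.bytes, payload.offset);
            System.out.println(doc + " -> " + value);
        }
    }
}
```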
Solr Live Nodes not updating immediately
Hi,

I am new to Solr. Currently I am using Solr 4.3.0 in a SolrCloud setup across 3 machines. If I kill a node running on any of the machines with "kill -9", the status of the killed node is not updated immediately in the Solr web console; it takes 20+ minutes for the node to be marked as "Gone". My questions:

1. Why does it take so much time to update the status of the inactive node?
2. If the leader node itself is killed, I cannot use the service until the status of the node gets updated.

Thanks in advance,
Ranjith Venkatesan
Re: Solr Live Nodes not updating immediately
The same scenario happens if the network to any one of the machines is unavailable (i.e. if we manually disconnect the network cable, the node's status is also not updated immediately). Please help me with this issue.
Re: Solr Live Nodes not updating immediately
We are going to use Solr in production. There are chances that a machine shuts down due to power failure, or that the network is disconnected due to manual intervention. We need to address those cases as well to build a robust system.
Re: Solr Live Nodes not updating immediately
My zkClientTimeout is set to the default of 15000. I am using an external ZooKeeper 3.4.5 ensemble, which is also running on 3 machines, and a single shard with the replication factor set to 3. A normal shutdown updates the Solr state as soon as the node goes down; I am only facing this issue with abrupt shutdown (kill -9) or network problems.
Re: Solr Live Nodes not updating immediately
Hi,

The tickTime in ZooKeeper was too high. When I reduced it to 2000 ms, the Solr node status was updated in under 20 s, which resolved my issue. Thanks for helping me. I have two more questions:

1. Is it advisable to reduce the tickTime further?
2. What is the most appropriate tickTime that gives maximum performance while still updating the Solr node status quickly?

I have included my zoo.cfg configuration:

    tickTime=2000
    dataDir=/home/local/ranjith-1785/sources/solrcloud/zookeeper-3.4.5_Server1/zoodata
    clientPort=2181
    initLimit=5
    syncLimit=2
    maxClientCnxns=180
    server.1=localhost:2888:3888
    server.2=localhost:3000:4000
    server.3=localhost:2500:3500
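The tickTime matters here because the ZooKeeper server clamps every client's requested session timeout to the range [2 × tickTime, 20 × tickTime] (the defaults for minSessionTimeout / maxSessionTimeout), and Solr's zkClientTimeout is only honored within that window. A small self-contained sketch of the clamping (the 60000 tickTime is a hypothetical "too high" value for illustration):

```java
// Sketch of how a ZooKeeper server negotiates a client session timeout:
// the requested value is clamped to [2 * tickTime, 20 * tickTime].
public class SessionTimeoutBounds {
    static int negotiated(int tickTimeMs, int requestedMs) {
        int min = 2 * tickTimeMs;
        int max = 20 * tickTimeMs;
        return Math.max(min, Math.min(max, requestedMs));
    }

    public static void main(String[] args) {
        // With tickTime=2000, zkClientTimeout=15000 lies inside [4000, 40000]
        // and is honored as-is, so a dead node is noticed within ~15 s.
        System.out.println(negotiated(2000, 15000));  // 15000
        // With a hypothetical tickTime=60000, the same request is forced up
        // to 120000 ms, so failure detection takes 2+ minutes.
        System.out.println(negotiated(60000, 15000)); // 120000
    }
}
```

This also answers question 1: reducing tickTime further lowers the floor on the session timeout, but very small values make the ensemble sensitive to GC pauses and network jitter; 2000 ms is the commonly used default.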
Solr Zookeeper - Too Many file descriptors on network failure
Hi,

I am having an issue with network failure to one of the nodes (or many). When the network is down, the number of open sockets on that machine keeps increasing, until at some point it throws a "too many file descriptors" exception. If the network comes back before that exception, all the open sockets are closed and the node is able to rejoin the cloud. But if the network comes back after that exception, the node cannot rejoin the cloud.

Thanks in advance,
RANJITH VENKATESAN
Solr-Max connections
Hi,

I am using Solr 4.3.0 with ZooKeeper 3.4.5. In my scenario, users will communicate with Solr via the ZooKeeper ports. My question is: how many users can access Solr simultaneously?

In ZooKeeper I configured maxClientCnxns, but that only limits connections from a single host (user?). My assumption is that the maximum number of connections is governed by Jetty or Tomcat. Is that so? If not, how do I configure the maximum connections in Solr and ZooKeeper? In my case, 1000 users may search simultaneously, and indexing will also happen at the same time.

Thanks in advance,
Ranjith Venkatesan
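The assumption in the post is essentially right: SolrJ's CloudSolrServer only reads cluster state from ZooKeeper, while the actual search and update traffic goes over HTTP to the servlet container, so concurrent-user capacity is bounded by Jetty/Tomcat, not by maxClientCnxns. A hedged sketch of the relevant thread-pool setting in Jetty's jetty.xml (the exact element layout depends on the Jetty version shipped with your Solr; values are illustrative):

```xml
<!-- example/etc/jetty.xml: the request thread pool caps concurrent HTTP work -->
<Set name="ThreadPool">
  <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <Set name="minThreads">10</Set>
    <!-- illustrative upper bound; size for peak concurrent searches + indexing -->
    <Set name="maxThreads">10000</Set>
  </New>
</Set>
```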
Machine memory full
Hi,

Currently I am using Solr 4.3 for my product. I create a collection for each user, so the number of collections keeps increasing. I have hosted 3 Solr servers and 3 ZooKeeper servers, each with a 400 GB disk and 8 GB of RAM. There is a possibility that the 400 GB disk fills up at some point; currently one of the machines is 60% full. My question is how to handle this scenario without interrupting the service.

Thanks in advance,
RANJITH VENKATESAN
Re: Machine memory full
Thanks for the reply. I think this approach will work only for new collections. Is there any approach to move some existing cores to a new machine or node?
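In the Solr 4.3 era (before the ADDREPLICA collections API), a common way to move an existing shard was to create a core for the same collection/shard on the new node via the CoreAdmin API; the new core then recovers its index from the shard leader, after which the old core can be unloaded. A hedged sketch that just constructs the CREATE URL (host, collection, and core names are hypothetical):

```java
// Sketch: build the CoreAdmin CREATE URL that attaches a new replica of an
// existing shard on another node (pre-ADDREPLICA approach, Solr 4.3).
public class AddReplicaUrl {
    static String createCoreUrl(String node, String collection,
                                String shard, String coreName) {
        return "http://" + node + "/solr/admin/cores?action=CREATE"
                + "&name=" + coreName
                + "&collection=" + collection
                + "&shard=" + shard;
    }

    public static void main(String[] args) {
        System.out.println(createCoreUrl("newhost:8983", "user123", "shard1",
                                         "user123_shard1_replica2"));
    }
}
```

Once the new replica reports "active" in clusterstate.json, the core on the full machine can be removed with a CoreAdmin UNLOAD call against the old node.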
Solr Split Shard - Document loss and down time
We are using Solr 4.3.0 for our search application, and we will be splitting shards at run time. I simulated a scenario; let me explain it first. I am indexing some 2 docs via SolrJ, and at the same time I trigger my split shard command. I also kill the leader node while both of the above operations are in progress. In this case I am facing downtime (indexing fails) and document loss, and the delete of the original shard after the split does not complete.

Cloud view after the split shard completed:
<http://lucene.472066.n3.nabble.com/file/n4082002/Solr2.png>
<http://lucene.472066.n3.nabble.com/file/n4082002/Solr3.png>
<http://lucene.472066.n3.nabble.com/file/n4082002/Solr4.png>

An error was also thrown during indexing; let me post that too:

    Doc:::11203
    Doc:::11204
    1 Aug, 2013 9:24:52 PM org.apache.solr.common.cloud.ZkStateReader$2 process
    INFO: A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)
    Doc:::11205
    org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
        at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:331)
        at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:306)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
        at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
        at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
        at tokyosolrindex.Main.main(Main.java:44)

Is there any approach to overcome this?

Thanks in advance,
RANJITH VENKATESAN
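The "No live SolrServers" exception above is transient: CloudSolrServer throws it while the leader is down or the cluster state is changing, and the client in the post aborts on the first failure. Until the split/failover window is handled server-side, the indexing client can at least avoid losing documents by retrying failed adds. A minimal, hypothetical retry wrapper in plain Java (SolrServerException stands in for any transient failure; `attempts` is assumed to be at least 1):

```java
import java.util.concurrent.Callable;

// Minimal retry helper: re-attempts a transient operation (e.g. a SolrJ
// server.add(doc) call) a fixed number of times with a fixed backoff
// before giving up. Purely illustrative.
public class Retry {
    static <T> T withRetries(Callable<T> op, int attempts, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();           // success: return immediately
            } catch (Exception e) {
                last = e;                   // remember the failure and back off
                Thread.sleep(backoffMs);
            }
        }
        throw last;                         // all attempts exhausted
    }
}
```

Usage would wrap each add, e.g. retrying `server.add(doc)` five times with a one-second backoff, so a brief leader election no longer drops documents on the client side.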
Re: Solr Split Shard - Document loss and down time
I have tried 4.4 too; it produces the same kind of problem.
Re: Solr Split Shard - Document loss and down time
Hi Erick,

I have a question. If an error occurs during a shard split, is there any approach to revert the split action? This is seriously breaking my head: for me, documents get lost when any node of that shard is dead while the shard split is in progress.

Thanks,
Ranjith
Re: Solr Split Shard - Document loss and down time
I have explained it in the post above, with screenshots. Indexing fails when any node is down while shard splitting is in progress.
Solr Collection's Size
Hi,

I am new to Solr. I want to find the size of a collection dynamically via SolrJ. I have tried many ways but could not succeed with any of them. Please help me with this issue.

Thanks in advance,
Ranjith Venkatesan
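One possible approach, sketched under the assumption of a single-core collection: the CoreAdmin STATUS response carries per-core index details (the admin UI's core overview reads the same data), including document count and on-disk size. Server URL and core name here are hypothetical:

```java
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;
import org.apache.solr.common.util.NamedList;

// Sketch: read a core's index stats via the CoreAdmin STATUS call.
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
CoreAdminResponse status = CoreAdminRequest.getStatus("collection1", server);
NamedList<Object> coreInfo = status.getCoreStatus("collection1");
@SuppressWarnings("unchecked")
NamedList<Object> index = (NamedList<Object>) coreInfo.get("index");
System.out.println("numDocs:     " + index.get("numDocs"));
System.out.println("sizeInBytes: " + index.get("sizeInBytes"));
```

For a multi-shard collection, the same call would need to be made against each shard's core and the sizes summed.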