First, for tweets, committing every 500 docs is much too frequent, especially from the client, and super-especially if you have multiple clients running. I'd recommend you just configure solrconfig this way as a place to start and do NOT commit from any clients:

1> a hard commit (openSearcher=false) every minute (or maybe 5 minutes)
2> a soft commit every minute
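In solrconfig.xml, that setup would look something like the fragment below. The 60-second intervals are just the starting point suggested above; tune them to your latency needs:

```xml
<!-- Hard commit: flush indexed docs to durable storage every 60 seconds.
     openSearcher=false keeps this cheap by not opening a new searcher. -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: make newly indexed docs visible to searches every 60 seconds.
     This controls the indexed-to-searchable latency mentioned below. -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
```

With these in place, clients just send documents and let the server handle commit timing.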
The latter governs how long it'll be between when a doc is indexed and when it can be searched. Here's a long post about how all this works:

https://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

As for the rest, it's definitely a puzzle. If it continues, a complete stack trace would be a good place to start.

Best,
Erick

On Sat, Nov 8, 2014 at 9:47 AM, Bruno Osiek <baos...@gmail.com> wrote:
> Hi,
>
> I am a newbie SolrCloud enthusiast. My goal is to implement an
> infrastructure to enable text analysis (clustering, classification,
> information extraction, sentiment analysis, etc).
>
> My development environment consists of one machine: quad-core processor,
> 16GB RAM and 1TB HD.
>
> I have started with Apache Flume, using Twitter as the source and
> SolrCloud (within JBoss AS 7) as the sink, and Zookeeper (5 servers)
> to upload configuration and manage the cluster.
>
> The pseudo-distributed cluster consists of one collection with three
> shards, each with three replicas.
>
> Everything runs smoothly for a while. After 50,000 tweets are committed
> (actually CloudSolrServer commits every batch of 500 documents),
> SolrCloud randomly starts logging exceptions: Lucene file not found,
> IndexWriter cannot be opened, replication unsuccessful, and the like.
> Recovery starts with no success until the replica goes down.
>
> I have tried different Solr versions (4.10.2, 4.9.1 and lastly 4.8.1)
> with the same results.
>
> I have looked everywhere for help before writing this email. My guess
> right now is that the problem lies in the connection between SolrCloud
> and Zookeeper, although I haven't seen any such exception.
>
> Any reference or help will be welcome.
>
> Cheers,
> B.