This worked perfectly for me, Erick. Thank you.
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-replace-a-solr-cloud-node-tp4287556p4288655.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have a 4-machine cluster with ~100 collections. Each collection has
numShards=2 and replicationFactor=2. The data directory size of each node is
~120GB. One of my nodes is having a hardware issue, so I need to replace
it. How can I do that without taking the whole cluster down? The IP of the new
node will be
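For reference, the usual approach (assuming Solr 4.8+, where the Collections API has ADDREPLICA) is to start the replacement node against the same ZooKeeper ensemble, add a replica onto it for each shard the failing node hosts, and delete the old copies once the new ones are active. A minimal sketch, where collection1, shard1, newhost:8983_solr, and core_node3 are all hypothetical placeholders:

```shell
# Sketch only; substitute your real collection, shard, new node name,
# and the dead replica's core-node name.
SOLR="http://localhost:8983/solr"
ADD="$SOLR/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=newhost:8983_solr"
DEL="$SOLR/admin/collections?action=DELETEREPLICA&collection=collection1&shard=shard1&replica=core_node3"
echo "$ADD"
echo "$DEL"
# Run ADDREPLICA first, wait until the new replica shows as active in the
# cluster state, then run DELETEREPLICA:
# curl "$ADD"
# curl "$DEL"
```

Repeat the pair for every shard that had a replica on the failing machine.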
Thanks for the reply, Shawn. I will try it out.
The reason I am forced to do a hard commit through code is to handle a
problem I am facing with transaction logs.
I have to delete tlogs manually at regular intervals, and hence I want to
issue a hard commit before deleting them to ensure that no
How can I issue a hard commit through SolrJ such that openSearcher=false?
Also, how can I issue the same request through HTTP? Will this work?
curl
"http://localhost:8983/solr/collection1/update?commit=true&openSearcher=false"
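For what it's worth, openSearcher=false is accepted as a plain request parameter on /update alongside commit=true, and in SolrJ the same parameters can be set on an UpdateRequest (via setParam) before processing it. A minimal HTTP sketch, with a hypothetical collection name:

```shell
# Sketch; collection1 is a placeholder for the real collection.
URL="http://localhost:8983/solr/collection1/update?commit=true&openSearcher=false"
echo "$URL"
# curl "$URL"
```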
Should I open a JIRA, in case there is no explanation for why transaction
logs suddenly start piling up for some shard/replica?
I have provided a very detailed explanation in a different thread:
http://lucene.472066.n3.nabble.com/Transaction-logs-not-getting-deleted-td4184635.html
Also can s
Can someone please reply to these questions?
Thanks in advance.
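Since tlogs are only rotated and pruned when a hard commit happens, tlogs piling up usually point at hard commits not firing on that replica. A typical solrconfig.xml auto-commit block, with an illustrative 60-second interval (not a value taken from this thread):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- hard commit at most every 60s; flushes segments and rotates tlogs -->
    <maxTime>60000</maxTime>
    <!-- keep the commit cheap: do not open a new searcher -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```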
It's for both. I am facing a problem, and I want to get to the root of it
by understanding what happens when we issue an update.
The problem I am facing is that sometimes old transaction logs are not
getting deleted for one or two replicas in my SolrCloud setup, no matter
how many times I do
Thanks, Erick. It's super helpful!
So here's my understanding so far:
1. On update, write the doc to the tlog (which is used only for recovery).
2. As soon as the buffered docs grow larger than ramBufferSizeMB, flush them
to a new segment inside the index directory.
3. Up to this point, even though t
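The flush threshold in step 2 is configured in solrconfig.xml; 100 MB is the usual default (the value below is shown for illustration):

```xml
<indexConfig>
  <!-- buffered documents are flushed to a new segment once they exceed this -->
  <ramBufferSizeMB>100</ramBufferSizeMB>
</indexConfig>
```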
Thanks for the reply, Yonik. I am very new to SolrCloud and am trying to
understand how update requests are handled and what exactly happens at the
file-system level.
1. So let's say I send an update request and don't issue any type of
commit (neither hard nor soft); will the document ever touch the index
I want to know what gets written to the index from the tlog directory
whenever a soft commit is issued.
I have a test SolrCloud setup, and I can see that even if I disable the
hard commit and only issue soft commits, the index directory still
keeps growing little by little, so I am presu
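For completeness, a soft commit can be issued over HTTP as below (collection name hypothetical). Note that index-directory growth under soft commits alone is expected: segments are still flushed to disk whenever the ramBufferSizeMB threshold is crossed, independently of commits.

```shell
# Sketch; collection1 is a placeholder.
URL="http://localhost:8983/solr/collection1/update?commit=true&softCommit=true"
echo "$URL"
# curl "$URL"
```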
Dear Experts,
I have a SolrCloud setup - 8 machines, 7 collections (replicationFactor=2,
numShards=8). The transaction log for one replica of a collection is not
getting deleted and has grown to ~4GB.
Here are the stats for this collection:
*Solr Version:* 4.10.0
*NumDocs:* 33.5 million
*Softcom
Dear Experts,
I have a strange problem where select q=*:* is returning a different number of
documents each time. Sometimes it returns numFound = 5866712 and sometimes it
returns numFound = 5852274. *numFound is always one of these 2 values.*
Here is the query:
*http://localhost:5011/solr/mycollection/s
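Two stable but different numFound values usually mean the two replicas of some shard are out of sync, and each query happens to be served by a different one. Querying each replica core directly with distrib=false (no distributed fan-out) shows which pair disagrees; the core name below is a hypothetical example:

```shell
# Sketch; mycollection_shard1_replica1 is a placeholder -- repeat for every
# replica of every shard and compare the counts.
URL="http://localhost:5011/solr/mycollection_shard1_replica1/select?q=*:*&rows=0&distrib=false"
echo "$URL"
# curl "$URL"
```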
How did you fix this? I am also getting the same error with 4.10.
Is there any open source or commercial analyzer for the Kannada language?
If anyone has experience with indexing Kannada documents, please share the
relevant information.
What is the best way to merge 2 collections into one?
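One hedged sketch: for non-cloud cores there is a CoreAdmin MERGEINDEXES action (core names below are hypothetical). In SolrCloud, though, reindexing into a new collection is generally the safer route, since a raw index merge bypasses document routing.

```shell
# Sketch; target_core and source_core are placeholders for real core names.
URL="http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=target_core&srcCore=source_core"
echo "$URL"
# curl "$URL"
```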
Thanks, Chris. I understand this. But this test is to determine the *maximum*
latency a query can have, and hence I have disabled all caches.
After disabling all caches in solrconfig, I was able to remove the "latency
variation" for a single query in most of the cases. But *sort* queries are
still show
Yes, I have also commented out the "newSearcher" and "firstSearcher" queries in
solrconfig.xml.
I want to run some query benchmarks, so I want to disable all types of caches
in Solr. I commented out filterCache, queryResultCache, and documentCache in
solrconfig.xml. I don't care about queryResultWindowSize because numdocs is 10
in all the cases.
Are there any other hidden caches which I should know
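One cache that is easy to miss: fieldValueCache is created implicitly even when it is absent from solrconfig.xml (it backs faceting on multi-valued fields), so declaring it explicitly at least makes it visible. Also, Lucene's internal FieldCache, which sorting relies on, cannot be disabled, which can explain residual warm-up variation on sort queries. An explicit declaration, with illustrative sizes:

```xml
<!-- created implicitly if omitted; declared here so it shows up explicitly -->
<fieldValueCache class="solr.FastLRUCache" size="10" autowarmCount="0" />
```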
I have a 4-machine cluster. I want to create a collection with 1 shard and 1
replica, so I only need 2 machines. Is there a way I can explicitly specify
the machines on which my new collection should be created?
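Assuming the Collections API, the CREATE action takes a createNodeSet parameter listing the exact nodes to place the collection on. The collection and host names below are hypothetical:

```shell
# Sketch; newcoll and the host names are placeholders.
URL="http://localhost:8983/solr/admin/collections?action=CREATE&name=newcoll&numShards=1&replicationFactor=2&createNodeSet=host1:8983_solr,host2:8983_solr"
echo "$URL"
# curl "$URL"
```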
Solr Experts,
Is there a way to delete a single field from the index (without reindexing)?
Let's say I have documents like below:
abab ababa
bh vsha sa
abab ababa
bh vsha sa
Now I want to delete the field named "data_type" from *ALL* the documents. Is
this possible with
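There is no way to drop a field from existing segments in place, but an atomic update that sets the field to null removes it document by document. This requires every other field to be stored, and Solr rewrites each document internally, so it is effectively a per-document reindex. A sketch with a hypothetical collection and doc id (you would loop over all matching ids):

```shell
# Sketch; collection1 and the id value are placeholders.
URL="http://localhost:8983/solr/collection1/update?commit=true"
BODY='[{"id":"1","data_type":{"set":null}}]'
echo "$BODY"
# curl "$URL" -H 'Content-Type: application/json' --data-binary "$BODY"
```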