Well, we are doing the same thing (in a way). We have to do frequent mass
deletions; at a time we are deleting around 20M+ documents. All I am doing is,
after the deletion, firing the below command on each of our Solr nodes and
keeping some patience, as it takes quite a long time.
curl -vvv
"http://node1.so
Hi Vaibhav,
Could you check whether the *suggest.dictionary* directory mySuggester is
present or not; try creating it with mkdir, and if the problem still persists,
try giving the full path.
I found a good article at the link below, check that as well.
[http://romiawasthy.blogspot.com/2014/06/configure-solr-suggester.
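(Just a sketch, not from the original mail: once the suggester config and its
directory are in place, you can usually force a build and test it with a
request like the one below. The host, core name, and query term are made up;
suggest.build=true rebuilds the dictionary, suggest.q is the prefix to look up.)

  # hypothetical example: build and query the suggester named mySuggester
  curl "http://localhost:8983/solr/collection1/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true&suggest.q=vai"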
ally limited by memory, so trying to fit enough of your
> single huge index into memory may be problematical.
>
> This feels like an XY problem, _why_ are you asking about this? What
> is the use-case you want to handle by this?
>
> Best,
> Erick
>
> On T
Just a dumb question, but how can I make SolrCloud fault tolerant for queries?
The reason I am asking is that I have 12 different physical servers and I am
running 12 Solr shards on them; whenever any one of them goes down for any
reason, it gives me the below error. I have 3 ZooK
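(An aside that is not in the thread, offered as a hedged sketch: with no
replicas, a distributed query normally fails as soon as one shard is
unreachable. One commonly used knob is shards.tolerant=true, which returns
partial results from the surviving shards instead of an error. Host and
collection names below are hypothetical.)

  # hypothetical example: allow partial results when a shard is down
  curl "http://node1.example.com:8983/solr/collection1/select?q=*:*&shards.tolerant=true"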
FYI, I searched Google for this problem but didn't find any satisfactory
answer. Here is the current situation: I have 8 shards in my SolrCloud, backed
by 3 ZooKeeper instances, all set up on AWS EC2; all 8 shards are leaders with
no replicas. I have only 1 collection, say collection1, divid
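(Again only a hedged sketch, not from the thread: with every shard a leader and
no replicas there is no failover for queries, so the usual route is to add
replicas with the Collections API ADDREPLICA action. The host, shard, and
target node names below are made up.)

  # hypothetical example: add a replica of shard1 on another node
  curl "http://node1.example.com:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=node2.example.com:8983_solr"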