Hi Erick,
Thanks for your kind reply.
In order to handle more documents in SolrCloud, we are thinking of using
many collections, and each collection will also have several shards.
The basic idea for dealing with that much data is that when a collection
is filled with enough data, we will create a new collection.
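For example, the rollover could be done through the Collections API,
roughly like the sketch below (the host, collection name, configset, and
shard/replica counts are just placeholders, not our real settings):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch: roll over to a new collection once the current one is
// "full". Host, names, and shard/replica counts are placeholders.
public class CreateNextCollection {
    public static void main(String[] args) throws Exception {
        String url = "http://solr-node:8983/solr/admin/collections"
                + "?action=CREATE"
                + "&name=logs_part_42"                // next collection in the series
                + "&numShards=16"                     // sized for the expected volume
                + "&replicationFactor=1"
                + "&collection.configName=logs_conf"; // shared configset in ZooKeeper

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> rsp = http.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(rsp.body()); // Solr reports per-shard creation status
    }
}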
Hmmm, I sure hope you have _lots_ of shards. At that rate, a single
shard is probably going to run up against internal limits in a _very_
short time (the most docs I've seen successfully served on a single
shard run around 300M).
It seems that, to handle any reasonable retention period, you need lots
and lots of shards: at 300K docs/sec, a single 300M-doc shard would fill
in about 1,000 seconds, i.e. under 20 minutes.
In my case, the ingest rate is very high (above 300K docs/sec) and data
keeps being inserted, so CPU is already a bottleneck because of indexing.
Older-style master/slave replication over HTTP or scp takes a long time
to copy big index files from the master to the slaves.
That's why I set up two separate SolrClouds: one for indexing and one
for serving searches.
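Just to make the split concrete, here is a client-side sketch of the
idea (hosts and the collection name are made up, and it deliberately
doesn't show how the index gets from the indexing cloud to the reader
cloud):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch of the two-cloud split: writes go only to the "indexing" cloud,
// queries go only to the "reader" cloud. Hosts/collection are placeholders.
public class SplitClouds {
    static final String WRITE_CLOUD = "http://indexing-solr:8983";
    static final String READ_CLOUD  = "http://reader-solr:8983";
    static final String COLLECTION  = "logs_current";

    static final HttpClient http = HttpClient.newHttpClient();

    // Batch updates hit the indexing cloud's JSON update handler.
    static void indexBatch(String jsonDocs) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create(WRITE_CLOUD + "/solr/" + COLLECTION
                        + "/update?commitWithin=10000"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(jsonDocs, StandardCharsets.UTF_8))
            .build();
        http.send(req, HttpResponse.BodyHandlers.ofString());
    }

    // Searches never touch the indexing cloud, so its CPU stays on indexing.
    // The query string is assumed to be URL-encoded already.
    static String search(String query) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create(READ_CLOUD + "/solr/" + COLLECTION + "/select?q=" + query))
            .GET().build();
        return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}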
I guess I'm not quite sure what the point is. So can you back up a bit
and explain what problem this is trying to solve? Because all it
really appears to be doing that's not already done with stock Solr
is saving some disk space, and perhaps your "reader" SolrCloud
is having some more cycles to devote to serving queries.
Hi Jae,
Sounds a bit complicated and messy to me, but maybe I'm missing something.
What are you trying to accomplish with this approach? Which problems do
you have that are making you look for a non-straightforward setup?
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management