I am trying to decide what the right approach would be: one big core, or many
smaller cores, hosted by a Solr instance.

I think there are trade-offs either way, but I wanted to see what others do.
By small I mean about 5-10 million documents; by large, around 50 million.

It seems like small cores are better because:
- If one server can host, say, 70 million documents before memory becomes an
issue, we can get very close to that limit with a bunch of small indexes,
whereas a single 50-million-document index leaves 20 million documents of
capacity unused. And when a software update comes out that lets us host 90
million, we could simply add a few more small cores (see the sketch after
this list).
- It takes less time to build ten 5-million-document indexes than one
50-million-document index.
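
To make the capacity argument concrete, here is a rough back-of-the-envelope
sketch in Python. It only uses the example numbers above (70M per server, 5M
small cores, 50M big core); they are illustrative, not measurements:

    # Rough capacity arithmetic with the example figures from this post.
    SERVER_BUDGET = 70_000_000
    SMALL_CORE = 5_000_000
    BIG_CORE = 50_000_000

    small_cores = SERVER_BUDGET // SMALL_CORE          # 14 small cores fit
    docs_small = small_cores * SMALL_CORE              # 70,000,000 -> full budget used
    docs_big = (SERVER_BUDGET // BIG_CORE) * BIG_CORE  # 50,000,000 -> 20M headroom idle

    print(docs_small, docs_big)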

It seems like larger cores are better because:
- Each core returns its own result set, so if I want 1,000 results and there
are 100 cores, the network transfers 100,000 documents for that search,
whereas with only 10 much larger cores just 10,000 documents would be sent
over the network (see the sketch after this list).
- It would postpone the point at which I hit URI length limits, since there
would be fewer cores listed in each distributed request.
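
Here is a toy Python sketch of those two costs. It assumes a plain
distributed search where every shard is asked for the full rows count, and
the host and core names are made up purely for illustration:

    rows = 1000

    def shards_param(num_cores):
        # Build a Solr-style shards parameter, e.g.
        # "host1:8983/solr/core0,host2:8983/solr/core1,..."
        return ",".join(f"host{i % 10 + 1}:8983/solr/core{i}"
                        for i in range(num_cores))

    for num_cores in (10, 100):
        param = shards_param(num_cores)
        print(f"{num_cores} cores: {num_cores * rows} results requested "
              f"across shards; shards= parameter is {len(param)} chars")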

Any thoughts?  Other trade-offs?

How do you find what the right size is for you?
