: I managed to assign the individual cores to a collection using the collection
: API to create the collection and then the solr.xml to define the core(s) and
: its collection. This *seemed* to work. I even test indexed a set of documents
: checking totals before and after as well as content. Aga
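(A minimal sketch of the approach described above, assuming Solr 4.x with the
legacy solr.xml format; the host, port, shard, and collection names here are
hypothetical, and the exact CREATE parameters depend on the setup:)

    # create the collection via the Collections API
    curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=1"

    <!-- then, in solr.xml, point each existing core at the collection -->
    <cores adminPath="/admin/cores">
      <core name="core1" instanceDir="core1" collection="mycollection" shard="shard1"/>
      <core name="core2" instanceDir="core2" collection="mycollection" shard="shard2"/>
    </cores>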
On 01/02/2014 12:44 PM, Chris Hostetter wrote:
: Not really ... uptime is irrelevant because they aren't in production. I just
: don't want to spend the time reloading 1TB of documents.
terminology confusion: you mean you don't want to *reindex* all of the
documents ... in solr "reloading" a core means something specific &
different from what you are describing.
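(For concreteness: a core RELOAD is a CoreAdmin API call that re-reads the
core's configuration and schema without touching the indexed data; e.g., with
a hypothetical host and core name:)

    curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=core1"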
On 01/02/2014 08:29 AM, michael.boom wrote:
Hi David,
"They are loaded with a lot of data so avoiding a reload is of the utmost
importance."
Well, reloading a core won't cause any data loss. Is 100% availability
during the process what you need?
Not really ... uptime is irrelevant because they aren't in production. I
just don't want to spend the time reloading 1TB of documents.
Hi,
I have a few cores on the same machine that share the schema.xml and
solrconfig.xml from an earlier setup. Basically from the older
distribution method of using
shards=localhost:1234/core1,localhost:1234/core2[,etc]
for searching.
They are unique sets of documents, i.e., no overlap of documents between
the cores.
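(That pre-SolrCloud style amounts to manual distributed search, where each
request lists the shards explicitly; a sketch with hypothetical hosts and
cores, using the usual host:port/solr/core form for each shard entry:)

    curl "http://localhost:1234/solr/core1/select?q=*:*&shards=localhost:1234/solr/core1,localhost:1234/solr/core2"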