Hello!

I assumed that re-indexing would be painful in your case; if it
weren't, you probably would have re-indexed already :) My guess (I
haven't tested it myself) is that you could create another collection
inside your cluster, configure it to use the old Lucene 4.0 codec
(setting the version in solrconfig.xml should be enough) and re-index
into it. Re-indexing will still have to happen one way or another,
though. Or maybe someone knows a better way?
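
For reference (and, as I said, untested), the version setting I mean
is the luceneMatchVersion element in the solrconfig.xml of the new
collection; whether that alone is really enough to get a 4.0-readable
index out of Solr 4.1 would still need to be checked:

  <luceneMatchVersion>LUCENE_40</luceneMatchVersion>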



-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

> On Fri, Mar 1, 2013 at 11:28 AM, Rafał Kuć <r....@solr.pl> wrote:
>> Hello!
>>
>> I suppose the only way to make this work will be reindexing the data.
>> As you know, Solr 4.1 uses Lucene 4.1, which introduced a new default
>> codec with stored fields compression, and this is one of the reasons
>> you can't read that index with 4.0.
>>

> Thank you. My first inclination is to "reindex" the documents, but the
> only store of these documents is the Solr index itself. I am trying to
> find a way to create a new core and index the data from the old core
> into the new one. I'm not finding any good ways of going about this.

> Note that we are talking about ~18,000,000 (yes, 18 million) small
> documents similar to 'tweets' (mostly under 1 KiB each, very very few
> over 5 KiB).
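
To sketch what re-indexing from the old core into a new one could look
like: something along these lines with SolrJ should do it, as long as
every field you need is stored. It is untested and assumes that "id"
is your unique key field and that both cores live under the usual
/solr/<corename> URLs:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

// Copies stored fields from the old core into the new one, page by page.
public class CoreCopy {
  public static void main(String[] args) throws Exception {
    HttpSolrServer source = new HttpSolrServer("http://localhost:8983/solr/oldcore");
    HttpSolrServer target = new HttpSolrServer("http://localhost:8983/solr/newcore");

    final int rows = 1000;
    int start = 0;
    while (true) {
      SolrQuery q = new SolrQuery("*:*");
      q.addSortField("id", SolrQuery.ORDER.asc); // stable order for paging
      q.setStart(start);
      q.setRows(rows);

      QueryResponse rsp = source.query(q);
      if (rsp.getResults().isEmpty()) {
        break;
      }

      List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
      for (SolrDocument doc : rsp.getResults()) {
        SolrInputDocument in = new SolrInputDocument();
        for (String field : doc.getFieldNames()) {
          if ("_version_".equals(field)) {
            continue; // let the new core assign its own versions
          }
          in.addField(field, doc.getFieldValue(field));
        }
        batch.add(in);
      }
      target.add(batch);
      start += rows;
    }

    target.commit();
    source.shutdown();
    target.shutdown();
  }
}

Two caveats: anything that is not stored="true" in your schema cannot
be recovered this way, and plain start/rows paging gets slower and
slower at high offsets, so with 18 million documents it may be better
to walk the unique key in ranges (fq=id:[X TO Y]) instead of paging
with start.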

