Fair point indeed. It depends on how your update process works, though. One
trick is to assign a batch number to each indexing run and then delete any
documents that aren't from the current run, so strictly speaking you don't
need to overwrite documents in order to "replace" them.
Erik
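
For the record, a minimal sketch of that batch-number trick, assuming a
hypothetical batch_id field and core name, and using Solr's standard JSON
update API (Python with requests):

    import requests

    SOLR_UPDATE = "http://localhost:8983/solr/mycore/update"  # hypothetical core
    CURRENT_BATCH = 42  # number assigned to this reindexing run

    # Stamp every document from this run with the current batch number.
    docs = [
        {"id": "user-1", "type": "user", "batch_id": CURRENT_BATCH},
        {"id": "review-1", "type": "review", "batch_id": CURRENT_BATCH},
    ]
    requests.post(SOLR_UPDATE, json=docs).raise_for_status()

    # Once the run has finished, delete everything NOT stamped with this
    # batch; that sweeps away documents that vanished from the source data.
    requests.post(
        SOLR_UPDATE,
        json={"delete": {"query": f"*:* -batch_id:{CURRENT_BATCH}"}},
        params={"commit": "true"},
    ).raise_for_status()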
>> Or perhaps use the UUID auto id feature.
If I use a UUID, then how can I update a particular document? I think that
with this approach the documents won't have any stable identity.
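
This is exactly where a stable uniqueKey matters: Solr's atomic updates
address a document by its id, so with a random UUID you would have to search
for the document first just to learn its key. A small sketch, with a made-up
composite id, field name, and core name:

    import requests

    SOLR_UPDATE = "http://localhost:8983/solr/mycore/update"  # hypothetical core

    # Atomic update: set one field on the document whose uniqueKey is known.
    # "review-17" and "rating" are made-up illustrative names.
    patch = [{"id": "review-17", "rating": {"set": 5}}]
    requests.post(
        SOLR_UPDATE, json=patch, params={"commit": "true"}
    ).raise_for_status()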
Make each document have a composite unique key: user-1, user-2, review-1,
etc.
Easier said than done if you're just posting the CSV directly to Solr, but an
update script could help (a client-side sketch follows below this message).
Or perhaps use the UUID auto id feature.
Erik
> On Nov 17, 2015, at 08:14, Mugeesh Husain wrote:
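
Here is a sketch of that composite-key idea done client-side rather than in a
server-side update script, assuming hypothetical CSV files that each have an
id column; it prefixes every id with its entity type before posting to Solr:

    import csv
    import requests

    SOLR_UPDATE = "http://localhost:8983/solr/mycore/update"  # hypothetical core

    def index_csv(path, entity_type):
        """Give every CSV row a composite unique key like 'user-1'."""
        docs = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["id"] = f"{entity_type}-{row['id']}"  # e.g. 'review-17'
                row["type"] = entity_type  # keep the entity type queryable
                docs.append(row)
        requests.post(
            SOLR_UPDATE, json=docs, params={"commit": "true"}
        ).raise_for_status()

    index_csv("users.csv", "user")
    index_csv("reviews.csv", "review")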
When you index into Solr, you are overlapping the definitions into one
schema. Therefore, you will need a unified uniqueKey.
There are a couple of approaches:
1) Maybe you don't actually store the data as three types of entities.
Think about what you will want to find and structure the data to
match.
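
To illustrate that first approach: with one unified schema and a single
uniqueKey, the entity type can simply be another field, and a filter query
narrows searches to one kind of document. A sketch, assuming hypothetical
type and name fields and core name:

    import requests

    SOLR_SELECT = "http://localhost:8983/solr/mycore/select"  # hypothetical core

    # One schema, one uniqueKey; the filter query carves out just the users.
    resp = requests.get(
        SOLR_SELECT,
        params={"q": "name:alice", "fq": "type:user", "wt": "json"},
    )
    resp.raise_for_status()
    for doc in resp.json()["response"]["docs"]:
        print(doc["id"])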