Thanks for sharing your solution and experience.

I'm currently considering loading all article data (100 million docs)
and all personal data (4 million docs) into one core, with a selector
field "db" containing either "article" or "pdata".
But I'm still not really satisfied with this solution.
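For what it's worth, the combined-core idea boils down to adding a filter query on the selector field to every search, so each "logical index" is just a slice of the one core. A minimal Python sketch of building such query parameters (the field and core names are the ones from this thread; the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def build_query(text, db, rows=10):
    """Build Solr /select parameters for the combined core.

    The 'db' selector field distinguishes "article" docs from
    personal-data ("pdata") docs, as proposed above.
    """
    return urlencode({
        "q": text,
        "fq": f"db:{db}",   # restrict the search to one logical index
        "rows": rows,
        "wt": "json",
    })

# First search the articles ...
article_params = build_query("solr indexing", "article")
# ... then a follow-up lookup against the personal data.
pdata_params = build_query("author:miller", "pdata")
```

Since "db" is a low-cardinality field, the filter query is cached by Solr, so the overhead of sharing one core should be small.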

Anyway, MySQL is a good hint.

Regards,
Bernd

Am 30.05.2016 um 13:36 schrieb John Blythe:
> We had previously done something of the sort. With some sources of truth type 
> of cores we would do initial searches on customer transaction data before 
> fetching the related information from those "truth" tables. We would use the 
> various pertinent fields from results #1 to find related data in core #2.
> 
> Just last week we moved to making this match during the initial processing 
> stage of core #1. Instead of processing our data during a large XML import, we 
> moved towards having the processing save this information in a new table (in 
> MySQL) and then having Solr do a quick read directly from that source. It's 
> given us more flexibility and a ton more speed/efficiency in terms of giving 
> our users that second-tier data right out of the gate with their first result set.
> 
> Worth noting: our hand was a bit forced as some search results would need to 
> be in the thousands and as such the secondary lookup would be incredibly slow 
> and painful, so YMMV
> 
> 
> On May 30, 2016, 6:21 AM -0400, Bernd 
> Fehling<bernd.fehl...@uni-bielefeld.de>, wrote:
>> Does anyone have experience with searching across two indices?
>>
>> E.g. having one index with nearly static data (like personal data)
>> and a second index with articles, which change quite a lot.
>>
>> A search would then start with the articles and, from the list of results
>> (e.g. the first page, 10 articles), start a sub-search in the other
>> index for personal data to display the results side by side.
>>
>> Has anyone managed this and how?
>>
>> If not, how would you try to solve this?
>>
>>
>> Regards,
>> Bernd
> 
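As a footnote to John's two-phase lookup: the pattern of taking the first result page and issuing one batched follow-up query against the second core (rather than one lookup per hit) can be sketched like this. Pure Python, with the Solr calls stubbed out; the field names and sample data are purely illustrative:

```python
def batch_filter_query(field, values):
    """Collapse the join keys from the first result page into a
    single filter query for the second core, instead of issuing
    one lookup per hit."""
    joined = " OR ".join(str(v) for v in values)
    return f"{field}:({joined})"

# Pretend these are the first hits from core #1 (articles).
page_one = [{"id": 1, "author_id": 17},
            {"id": 2, "author_id": 23},
            {"id": 3, "author_id": 17}]

# De-duplicate the join keys, then issue ONE sub-search on core #2.
keys = sorted({doc["author_id"] for doc in page_one})
fq = batch_filter_query("author_id", keys)
# -> "author_id:(17 OR 23)"
```

With page sizes in the thousands, as in John's case, the OR list can get long; Solr's {!terms} query parser is the usual way to pass large key lists, but the batching idea is the same.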
