About large XML files and HTTP overhead: you can tell Solr to load the
file directly from the file system. This streams thousands of
documents from one XML file without loading everything into memory at
once.
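A minimal sketch of this, assuming remote streaming is enabled in
solrconfig.xml (enableRemoteStreaming="true" on the requestDispatcher),
Solr is running on the default port, and /data/docs.xml is a placeholder
path for your file:

```shell
# Ask Solr to read the update XML straight off the local disk.
# The stream.file parameter makes Solr open the file itself, so the
# documents are streamed rather than uploaded over HTTP in one body.
# /data/docs.xml is a placeholder; substitute your real file path.
curl 'http://localhost:8983/solr/update?stream.file=/data/docs.xml&commit=true'
```

Note that the path must be readable by the Solr server process, not the
machine running curl.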

There is a new book on Solr that will help you through this early learning phase:

http://www.packtpub.com/solr-1-4-enterprise-search-server

On Mon, Nov 2, 2009 at 6:24 AM, Alexey Serba <ase...@gmail.com> wrote:
> Hi Eugene,
>
>> - ability to iterate over all documents, returned in search, as Lucene does
>>  provide within a HitCollector instance. We would need to extract and
>>  aggregate various fields, stored in index, to group results and
>>  aggregate them in some way.
>>  in some way.
>> ....
>> Also I did not find any way in the tutorial to access the search results with
>> all fields to be processed by our application.
>>
> http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Faceted-Search-Solr
> Check out faceted search; you can probably achieve your goal using the
> Facet Component.
>
> There's also Field Collapsing patch
> http://wiki.apache.org/solr/FieldCollapsing
>
>
> Alex
>
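The faceting approach Alexey mentions can be tried with a plain query
like the one below. This is only a sketch: "category" is a hypothetical
field name standing in for whatever field you want to group on, and fl
controls which stored fields come back for your application to process:

```shell
# Hypothetical facet query: count hits per value of the "category"
# field, and return all stored fields (fl=*) of the matching documents
# so the client can aggregate them further.
curl 'http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=category&fl=*&wt=json'
```

The facet counts arrive alongside the normal result list, so one
request gives you both the documents and the per-field aggregation.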



-- 
Lance Norskog
goks...@gmail.com
