See also Adrien Grand's blog post on this feature (he implemented it):

<http://blog.jpountz.net/post/35667727458/stored-fields-compression-in-lucene-4-1>

Steve

On Feb 20, 2013, at 7:22 AM, Erick Erickson <erickerick...@gmail.com> wrote:

> bq: Does the new compressed stored field format in Solr 4.1 do anything to
> reduce the number of disk seeks required to retrieve all document fields?
> 
> Probably, but I doubt it helps a whole lot, although I confess I really
> don't know the guts. Let's assume that all the stored content for a doc is
> contiguous. The odds of having more than one doc in a block read from disk
> go up with compression, which would reduce the number of seeks. But the
> odds of any of the top 20 docs in a corpus of, say, 20M docs being close
> enough together for this to happen are probably pretty small.
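> 
> To make that concrete, here's a rough, untested sketch against the Lucene
> 4.x API showing where those reads happen. The index path and "id" field
> are made up for illustration:
> 
> import java.io.File;
> 
> import org.apache.lucene.document.Document;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.MatchAllDocsQuery;
> import org.apache.lucene.search.ScoreDoc;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.store.FSDirectory;
> 
> public class StoredFieldsDemo {
>     public static void main(String[] args) throws Exception {
>         // FSDirectory.open() picks MMapDirectory on 64-bit JVMs.
>         DirectoryReader reader =
>             DirectoryReader.open(FSDirectory.open(new File("/path/to/index")));
>         try {
>             IndexSearcher searcher = new IndexSearcher(reader);
>             TopDocs top = searcher.search(new MatchAllDocsQuery(), 20);
>             for (ScoreDoc sd : top.scoreDocs) {
>                 // Each doc() call reads (and, in 4.1, decompresses) the
>                 // .fdt chunk holding that docID; hits landing in distinct
>                 // chunks mean distinct reads, the common case when the
>                 // top 20 docIDs are scattered across a large index.
>                 Document doc = searcher.doc(sd.doc);
>                 System.out.println(doc.get("id"));
>             }
>         } finally {
>             reader.close();
>         }
>     }
> }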
> 
> But read Uwe's excellent blog on MMapDirectory here:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
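> 
> If you'd rather not rely on FSDirectory.open()'s platform check, you can
> construct it explicitly -- again an untested sketch, with a made-up path:
> 
> import java.io.File;
> 
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.store.MMapDirectory;
> 
> public class MMapOpen {
>     public static void main(String[] args) throws Exception {
>         // Memory-map the index files so repeated stored-field reads are
>         // served from the OS page cache rather than explicit disk I/O.
>         DirectoryReader reader =
>             DirectoryReader.open(new MMapDirectory(new File("/path/to/index")));
>         System.out.println("maxDoc=" + reader.maxDoc());
>         reader.close();
>     }
> }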
> 
> Best,
> Erick
> 
> 
> On Tue, Feb 19, 2013 at 11:24 PM, Shawn Heisey <s...@elyograg.org> wrote:
> 
>> On 2/19/2013 6:47 PM, Erick Erickson wrote:
>> 
>>> It Depends (tm). Storing data in a Solr index pretty much just consumes
>>> disk space; the *.fdt and *.fdx files aren't really germane to the amount
>>> of memory needed for search. There will be some additional memory
>>> requirements for the document cache, though. And you'll also consume
>>> resources if you use &fl=*, since there's more disk seeking going on to
>>> fetch the fields.
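>>> 
>>> For reference, that's the documentCache in solrconfig.xml; a typical
>>> (purely illustrative) entry looks like:
>>> 
>>> <documentCache class="solr.LRUCache"
>>>                size="512"
>>>                initialSize="512"
>>>                autowarmCount="0"/>
>>> 
>>> autowarmCount is normally 0 here because internal Lucene docIDs aren't
>>> stable across commits.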
>>> 
>> 
>> Does the new compressed stored field format in Solr 4.1 do anything to
>> reduce the number of disk seeks required to retrieve all document fields?
>> 
>> Thanks,
>> Shawn
>> 
>> 
