Because eventually you'd run out of file handles. Each Lucene segment is
made up of several files, and an open searcher holds a descriptor for most
of them. Imagine a long-running server with 100,000 segments: totally
unmanageable.
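
To put a rough number on it, here's a sketch in plain Java (the index path
is a placeholder) that counts the files backing a single index directory:

    // Count the on-disk files behind a Lucene index directory. Each
    // segment contributes several files (.tim, .doc, .fdt, ...), and an
    // open searcher holds a file descriptor for most of them.
    import java.io.IOException;
    import java.nio.file.*;

    public class CountIndexFiles {
        public static void main(String[] args) throws IOException {
            Path indexDir = Paths.get("/var/solr/data/index"); // placeholder
            int files = 0;
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(indexDir)) {
                for (Path p : stream) files++;
            }
            System.out.println(files + " files in " + indexDir);
            // 100,000 segments at ~10 files each would mean on the order of
            // 1,000,000 descriptors -- far past a typical ulimit.
        }
    }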

I think Shawn was emphasizing that RAM requirements don't depend on the
number of segments (see the MMapDirectory sketch after the quoted thread).
There are other resources that files consume, however.
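
That's what the merge policy in Jame's question below is for: merging keeps
the segment count, and with it the file-handle count and the per-segment
search overhead, bounded. A minimal sketch of tuning it at the Lucene level
(constructor signatures vary by Lucene version; the path and numbers are
placeholders):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.TieredMergePolicy;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import java.nio.file.Paths;

    public class MergePolicySketch {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
            TieredMergePolicy mp = new TieredMergePolicy();
            mp.setSegmentsPerTier(10.0);        // aim for ~10 segments per size tier
            mp.setMaxMergedSegmentMB(5 * 1024); // stop merging past ~5 GB segments
            IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
            iwc.setMergePolicy(mp);
            try (IndexWriter writer = new IndexWriter(dir, iwc)) {
                // As documents are added, Lucene merges small segments in the
                // background, keeping segment and file-handle counts bounded.
            }
        }
    }

Solr exposes the same knobs through the merge policy settings in
solrconfig.xml.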

Best
Erick

On Fri, Oct 5, 2012 at 1:08 PM, jame vaalet <jamevaa...@gmail.com> wrote:
> Hi Shawn,
> Thanks for the detailed explanation.
> I have one doubt: you said it doesn't matter how many segments an index
> has, but then why does Solr have this merge policy which merges segments
> frequently? Why can't it leave the segments as they are rather than
> merging smaller ones into bigger ones?
>
> thanks
>
> On 5 October 2012 05:46, Shawn Heisey <s...@elyograg.org> wrote:
>
>> On 10/4/2012 3:22 PM, jame vaalet wrote:
>>
>>> So imagine I have merged the 150 GB index into a single segment; this
>>> would make a single 150 GB segment in memory. When new docs are indexed,
>>> it wouldn't alter this 150 GB index unless I update or delete the older
>>> docs, right? Will a 150 GB single segment have problems with memory
>>> swapping at the OS level?
>>>
>>
>> Supplement to my previous reply:  the real memory mentioned in the last
>> paragraph does not include the memory that the OS uses to cache disk
>> access.  If more memory is needed and all the free memory is being used by
>> the disk cache, the OS will throw away part of the disk cache (a
>> near-instantaneous operation that should never involve disk I/O) and give
>> that memory to the application that requests it.
>>
>> Here's a very good breakdown of how memory gets used with MMapDirectory in
>> Solr.  It's applicable to any program that uses memory mapping, not just
>> Solr:
>>
>> http://java.dzone.com/articles/use-lucene%E2%80%99s-mmapdirectory
>>
>> Thanks,
>> Shawn
>>
>>
>
>
> --
>
> -JAME
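
For reference, the MMapDirectory behavior Shawn describes above comes down
to something like this sketch (a recent Lucene API is assumed; the path is
a placeholder):

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.MMapDirectory;
    import java.nio.file.Paths;

    public class MMapSketch {
        public static void main(String[] args) throws Exception {
            // The index files are mapped into virtual address space; the
            // pages live in the OS disk cache, not the Java heap, so a
            // 150 GB index does not require 150 GB of heap (or even of
            // physical RAM).
            MMapDirectory dir = new MMapDirectory(Paths.get("/path/to/index"));
            try (IndexReader reader = DirectoryReader.open(dir)) {
                System.out.println("numDocs: " + reader.numDocs());
            }
        }
    }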
