…with 10s of gigabytes of memory. Without it, all large
programs run much more slowly than they could. It is not a Solr or JVM problem.
- Original Message -
| From: "jun Wang"
| To: solr-user@lucene.apache.org
| Sent: Wednesday, October 10, 2012 6:36:09 PM
| Subject: Re: segment number during optimize of index
I have another question: does the number of segments affect the speed of
index updates?
2012/10/10 jame vaalet
Guys,
thanks for all the inputs. I was continuing my research to learn more about
segments in Lucene. Below are my conclusions; please correct me if I am wrong.
1. Segments are independent sub-indexes in separate files; while indexing,
it is better to create a new segment, as it doesn't have to modify an existing one.
If I were you (and not knowing all your details), I would optimize the
indices that are static (not being modified), and I would optimize down
to 1 segment. I would do it when search traffic is low.
Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - h
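For reference, optimizing a static index down to one segment goes through Solr's HTTP update handler. The sketch below just builds the request URL; the host, port, and core name `core1` are assumptions, while `optimize`, `maxSegments`, and `waitSearcher` are standard update parameters:

```python
from urllib.parse import urlencode

def optimize_url(base="http://localhost:8983/solr/core1", max_segments=1):
    # waitSearcher=false makes the call return immediately instead of
    # blocking until the new searcher is registered.
    params = urlencode({"optimize": "true",
                        "maxSegments": max_segments,
                        "waitSearcher": "false"})
    return f"{base}/update?{params}"

print(optimize_url())
# To actually issue it against a live Solr:
# import urllib.request; urllib.request.urlopen(optimize_url())
```

Running this during a low-traffic window, as suggested above, limits the impact of the large merge I/O on searches.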
My first reaction is that you have too much stuff on a single machine. Your
cumulative index size is 2.4 TB. Granted, it's a beefy machine, but still...
And index size isn't all that helpful, as it includes the raw stored data, which
doesn't really come into play for sizing things; subtract out the
*.fdt
Hi Eric,
I am in a major dilemma with my index now. I have got 8 cores, each around
300 GB in size; half of the documents in them are deleted documents, and on
top of that each has around 100 segments as well. Do I issue an expungeDeletes
and allow the merge policy to take care of the segments, or optimize the index?
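The two options in that question can be sketched as Solr update requests (host and core name are assumptions): `expungeDeletes` asks the merge policy to merge away only the segments containing deletions on the next commit, while `optimize` with a `maxSegments` target bounds the rewrite instead of forcing everything into one giant segment:

```python
from urllib.parse import urlencode

# Hypothetical core; substitute your own base URL.
BASE = "http://localhost:8983/solr/core1/update"

# Option 1: reclaim space from deleted docs without a full rewrite.
expunge = BASE + "?" + urlencode({"commit": "true",
                                  "expungeDeletes": "true"})

# Option 2: a bounded optimize, e.g. down to 10 segments per core.
optimize = BASE + "?" + urlencode({"optimize": "true",
                                   "maxSegments": 10})

print(expunge)
print(optimize)
```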
…because eventually you'd run out of file handles. Imagine a
long-running server with 100,000 segments. Totally
unmanageable.
I think Shawn was emphasizing that RAM requirements don't
depend on the number of segments. There are other
resources that files consume, however.
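A back-of-the-envelope sketch of the file-handle point: a non-compound Lucene segment is stored as several files (.fdt, .fdx, .tim, and so on), so open-file usage scales with segment count. The per-segment file count below is an illustrative assumption, not an exact figure:

```python
FILES_PER_SEGMENT = 10   # assumed average for a non-compound segment
TYPICAL_ULIMIT = 1024    # a common default open-file limit per process

def open_files(num_segments, files_per_segment=FILES_PER_SEGMENT):
    """Rough count of file descriptors an index with this many
    segments would keep open."""
    return num_segments * files_per_segment

print(open_files(100))      # a well-merged index: well under the limit
print(open_files(100_000))  # the unmanageable case from the mail above
```

With 100,000 segments the estimate is around a million open files, orders of magnitude past a default ulimit, which is exactly why a merge policy has to keep the count bounded.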
Best
Erick
On Fri, Oct 5,
Hi Shawn,
thanks for the detailed explanation.
I have one doubt: you said it doesn't matter how many segments an index has,
but then why does Solr have this merge policy that merges segments
frequently? Why can't it leave the segments as they are rather than merging
smaller ones into bigger ones?
Thanks
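One way to see why the merge policy exists: every query must consult every live segment, so an unmerged index grows one segment per commit, while log/tiered merging (merge factor assumed to be 10 here) keeps the count roughly logarithmic in the number of commits. A toy model:

```python
def segments_without_merging(commits):
    # No merge policy: every flush/commit leaves a new segment behind.
    return commits

def segments_with_merging(commits, factor=10):
    # LogMergePolicy-style model: each tier holds at most `factor - 1`
    # segments before they merge into one segment of the next tier, so
    # the live count is the digit sum of `commits` in base `factor`.
    count, n = 0, commits
    while n:
        count += n % factor
        n //= factor
    return count

print(segments_without_merging(10_000))  # 10000 segments: unmanageable
print(segments_with_merging(10_000))     # merges collapse cleanly to 1
print(segments_with_merging(9_999))      # worst case here: 9+9+9+9 = 36
```

So merging trades periodic write amplification for a small, bounded set of segments that queries (and file handles) have to deal with.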
On 10/4/2012 3:22 PM, jame vaalet wrote:
So imagine I have merged the 150 GB index into a single segment; this would
make a single segment of 150 GB in memory. When new docs are indexed, it
wouldn't alter this 150 GB index unless I update or delete the older docs,
right? Will a 150 GB single segment have problems with memory swapping at the
OS level
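On the swapping question: Lucene typically memory-maps segment files (MMapDirectory), so a large segment is not read into the Java heap; the OS pages in only the pages queries actually touch and evicts them under memory pressure. A minimal sketch of that behavior using Python's mmap, with a small sparse file standing in for a segment:

```python
import mmap
import os
import tempfile

# A sparse 1 MiB file as a stand-in for a (much larger) segment file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(1 << 20)
    path = f.name

with open(path, "rb") as f, \
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    # Reading one byte faults in a single OS page, not the whole file.
    byte = mm[512 * 1024]

print(byte)  # untouched sparse pages read back as zero
os.remove(path)
```

The practical upshot for the 150 GB case: the heap doesn't need to hold the segment, but you do want enough free RAM for the OS page cache to keep the hot parts of the index resident.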
You can certainly optimize down to just 1 segment.
Note that this is the most expensive option and that when you do that
you may actually hurt performance for a bit because Solr/Lucene may
need to re-read a bunch of data from the index for sorting and
faceting purposes. You will also invalidate the caches.