Hi,
Are you using the compound file format? If yes, have you set it properly in
solrconfig.xml? If not, change <useCompoundFile> to 'true' (it is 'false' by
default) under the <indexDefaults> and <mainIndex> tags.
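As a rough sketch, the relevant pieces of solrconfig.xml would look something
like this (element names are from the stock Solr 1.3 example config; adjust to
whatever else you already have in those sections):

  <indexDefaults>
    <!-- write new segments as a single compound (.cfs) file -->
    <useCompoundFile>true</useCompoundFile>
  </indexDefaults>

  <mainIndex>
    <!-- keep the main index section consistent with the defaults above -->
    <useCompoundFile>true</useCompoundFile>
  </mainIndex>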
Aleksander Stensby wrote:
>
> Hey guys,
> I'm getting some strange behavior here, and I'm won
Hi,
Just wanted to know: is the DataImportHandler available in Solr 1.3
thread-safe? I would like to run multiple instances of the DataImportHandler
concurrently, each posting a different set of data from the DB to the index.
Can I do this by registering the DIH multiple times under different names in
solrconfig.xml?
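This is roughly what I have in mind in solrconfig.xml (the handler names and
config file names below are just placeholders):

  <requestHandler name="/dataimport-products"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config-products.xml</str>
    </lst>
  </requestHandler>

  <requestHandler name="/dataimport-users"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config-users.xml</str>
    </lst>
  </requestHandler>

Each handler would then get its own full-import/delta-import commands, so
whether they can safely run at the same time is exactly the thread-safety
question above.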
Just use the query analysis link with appropriate values. It will show you how
each filter factory and analyzer breaks up the terms at the various stages of
analysis. Especially check the EnglishPorterFilterFactory analysis.
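In a default install the analysis page is at
http://localhost:8983/solr/admin/analysis.jsp (adjust host/port/context to
your setup). The chain it walks through is whatever your field type declares
in schema.xml; a typical text field with a synonym filter and the Porter
stemmer looks roughly like this (field type name and exact chain are
illustrative, not necessarily your schema):

  <fieldType name="text" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- expands synonyms from synonyms.txt -->
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
              ignoreCase="true" expand="true"/>
      <filter class="solr.StopFilterFactory" words="stopwords.txt"
              ignoreCase="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- Porter stemming; terms in protwords.txt are left unstemmed -->
      <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
    </analyzer>
  </fieldType>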
Jeff Newburn wrote:
>
> I am trying to figure out how the synonym filter processe
We are about to release field collapsing on our production site, but our index
is not as big as yours.
Collapsing is definitely an added overhead. You should do some load testing
and benchmark on a dataset of the size you expect for your production project,
as SOLR-236 is currently available only
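If it helps to sketch the load test: with the SOLR-236 patch applied, a
collapsing request is typically just a normal select with collapse parameters
added, e.g.

  http://localhost:8983/solr/select?q=*:*&collapse.field=your_group_field

Treat the parameter names as an assumption to verify against the particular
patch revision you apply; they have changed between versions of SOLR-236.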
I have been reading the Solr 1.3 wiki, which says that to fetch documents from
each core in a multi-core setup we need to request each core independently.
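In other words, with the example multi-core setup each core gets its own URL
(the core names here are just the wiki's example names):

  http://localhost:8983/solr/core0/select?q=...
  http://localhost:8983/solr/core1/select?q=...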
I was under the impression that the Solr multi-core feature might be using
Lucene's MultiSearcher to search across multiple cores.
Anyone with
One correction:
I have set the documentCache as:
initialSize=512
size=710
autowarmCount=512
The total insertions into the documentCache go up to at most 45 in a day, with
0 evictions, which means it never grows to 710.
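For reference, that corresponds to a documentCache entry in solrconfig.xml
along these lines (the class is the stock LRUCache; only the numbers come from
my settings above):

  <documentCache class="solr.LRUCache"
                 size="710"
                 initialSize="512"
                 autowarmCount="512"/>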
Thanx
Mike Klaas wrote:
>
>
> On 22-May-08, at 4:27 AM, guru
> Sent: Wednesday, May 21, 2008 2:23 PM
> To: solr-user@lucene.apache.org
> Subject: Re: SOLR OOM (out of memory) problem
>
>
> On 21-May-08, at 4:46 AM, gurudev wrote:
>
>>
>> Just to add more:
>>
>> The JVM heap allocated is 6GB with initial heap si
Hi Akeel,
- Stopwords are common words of a language which, as such, carry no meaning in
searches, e.g. a, an, the, where, who, am. The analyzer in Lucene ignores such
words and does not index them. You can also specify your own stopwords in
stopwords.txt in Solr.
- Protwords are the words that should be protected from stemming; you list
them in protwords.txt and the stemming filter leaves them unchanged.
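A minimal sketch of how the two files are typically wired into an analyzer
chain in schema.xml (the field type name is illustrative):

  <fieldType name="text_en" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- drops the terms listed in stopwords.txt -->
      <filter class="solr.StopFilterFactory" words="stopwords.txt"
              ignoreCase="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- stems everything except the terms listed in protwords.txt -->
      <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
    </analyzer>
  </fieldType>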
Just to add more:
The JVM heap allocated is 6GB, with an initial heap size of 2GB. We use
quad-processor (8 CPU) Linux servers for the Solr slaves.
We use facet searches and sorting.
The documentCache is set to 7 million (which is the total number of documents
in the index), and the filterCache to 1
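For context, that heap setup corresponds to JVM options along these lines (the
exact launch command depends on the servlet container, so take this as an
assumption rather than our actual command line):

  java -Xms2g -Xmx6g -jar start.jar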
gurudev wrote:
>
>
Hi,
We currently host an index of approximately 12GB on 5 Solr slave machines,
which are load balanced in a cluster. At some point, after 8-10 hours, one of
the Solr slaves gives an Out of Memory error, after which it just stops
responding; it then requires a restart, and after the restart it