On Fri, Aug 01, 2008 at 03:36:13PM -0400, Ian Connor wrote:
> I have a number of documents in files
>
> 1.xml
> 2.xml
> ...
> 17M.xml
>
> I have been using cat to join them all together:
>
> cat 1.xml 2.xml ... 1000.xml | grep -v '<\/add>' > /tmp/post.xml
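A minimal sketch of this join-and-post approach (file names, paths, and the Solr URL here are hypothetical). Note that the `grep -v '</add>'` above removes only the closing tags, leaving one `<add>` opener per input file; the sketch below also strips the duplicate openers so the combined batch stays well-formed:

```shell
# Hypothetical sample batches, with tags on their own lines so that
# line-oriented grep can strip them.
printf '<add>\n<doc><field name="id">1</field></doc>\n</add>\n' > /tmp/1.xml
printf '<add>\n<doc><field name="id">2</field></doc>\n</add>\n' > /tmp/2.xml

# Drop every per-file <add>/</add> line, then wrap the stream exactly once.
{
  echo '<add>'
  cat /tmp/1.xml /tmp/2.xml | grep -v -e '<add>' -e '</add>'
  echo '</add>'
} > /tmp/post.xml

# The combined batch could then be posted (URL is an assumption):
#   curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' \
#        --data-binary @/tmp/post.xml
```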
>
> and posting them with curl.
...size of your documents and the
analysis being done on them.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Ian Connor <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Friday, August 1, 2008 5:08:00 PM
> Subject: Re: f
I am on fedora and just running with jetty (I guess that means it will
not just use as much RAM as I have and I need to specify it when I
load java).
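That guess is right: the JVM will not grow past its default heap on its own, so the maximum has to be passed at launch. A hedged sketch for the example Jetty setup that ships with Solr (install path and heap sizes are assumptions, not recommendations):

```shell
# Start Solr's bundled Jetty with an explicit heap ceiling.
# -Xms/-Xmx take no space before the size; 5g here is illustrative.
cd /path/to/solr/example    # hypothetical install location
java -Xms2g -Xmx5g -jar start.jar
```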
So, if I have 8GB RAM, are you suggesting that I set -Xmx5000m or
something large, and then set mergeFactor to:
1
should I also increase any of t
Configure Solr to use as much RAM as you can afford and not merge too often via
mergeFactor.
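In Solr 1.x these index-time knobs live in solrconfig.xml; the values below are illustrative only, and ramBufferSizeMB assumes Solr 1.3 or later:

```xml
<!-- solrconfig.xml: index-time settings (illustrative values) -->
<indexDefaults>
  <!-- Higher mergeFactor = more segments accumulate before a merge,
       i.e. fewer merge pauses during bulk indexing. -->
  <mergeFactor>25</mergeFactor>
  <!-- Buffer more of the index in RAM before flushing a segment
       (Solr 1.3+; earlier releases use maxBufferedDocs instead). -->
  <ramBufferSizeMB>256</ramBufferSizeMB>
</indexDefaults>
```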
It's not clear (to me) from your explanation when you see 3000 docs/second and
when only 100 docs/second.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch