Getting a solid-state drive (SSD) might help.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
>
> From: Jesús Martín García
> To: solr-user@lucene.apache.org
> Sent: Monday, October 17, 2011 6:19 AM
> Subject: millions of records problem
You could perhaps use this technique; I'm currently reading up on it myself:
http://khaidoan.wikidot.com/solr-common-gram-filter
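
For reference, a minimal sketch of what that might look like in schema.xml
(the field type name and words file here are just placeholders for
illustration, not from anyone's actual setup):

  <fieldType name="text_cgrams" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.CommonGramsFilterFactory"
              words="commonwords.txt" ignoreCase="true"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.CommonGramsQueryFilterFactory"
              words="commonwords.txt" ignoreCase="true"/>
    </analyzer>
  </fieldType>

The index-side filter emits two-word "common grams" for terms listed in the
words file (in addition to the single terms), and the query-side variant
keeps only the grams, so phrase queries containing very frequent words can
avoid scanning their huge postings lists.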
On 17 October 2011 12:57, Jan Høydahl wrote:
> Hi,
>
> What exactly do you mean by "slow" search? 1s? 10s?
> Which operating system, how many CPUs, which servlet container and how much
> RAM have you allocated to your JVM? (-Xmx)
Hi,
What exactly do you mean by "slow" search? 1s? 10s?
Which operating system, how many CPUs, which servlet container and how much RAM
have you allocated to your JVM? (-Xmx)
What kind and size of docs? Your numbers indicate about 100 bytes per doc
(50 GB / 500 million docs ≈ 100 bytes)?
What kind of searches? Facets? Sorting? Wildcards?
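
For reference, the heap size is whatever you pass on the JVM command line;
with the Solr 1.4 example distribution under Jetty that would look something
like this (8g is only a placeholder, not a recommendation):

  java -Xms8g -Xmx8g -jar start.jar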
Hi,
I've got 500 million documents in Solr, each with the same number of
fields and similar width. The version of Solr I'm using is 1.4.1 with
Lucene 2.9.3.
I don't have the option to use shards, so the whole index has to live on
a single machine...
The size of the index is about 50 GB and t