There is a Lucene JIRA that tracks this document limit issue here:
https://issues.apache.org/jira/browse/LUCENE-2420
I am not aware of any explicit mention of the single-index Lucene document
limit at the Solr level.
-- Jack Krupansky
You went over the max limit for number of docs.
On Monday, May 28, 2012, tosenthu wrote:
> Hi
>
> I have an index of size 1 TB, which I prepared by setting up a
> background script to index records. The index was fine for the last 2
> days, and I have not disturbed the process. Suddenly, when I query, I
> get a negative value in numFound.
So a smaller shard size is needed.
-- Jack Krupansky
-----Original Message-----
From: tosenthu
Sent: Monday, May 28, 2012 1:25 PM
To: solr-user@lucene.apache.org
Subject: Re: Negative value in numFound
The RAM is about 14.5 GB, allocated for Tomcat.
I now have 2 shards, and I was under the impression I could handle it with a
couple of shards. But in this case each shard can only grow up to 2^31-1
records, so I would need many such shards to support 12 billion records.
I will try to have more cores and shards.
Do you plan to redistribute documents between shards so that no shard grows
too large? That gets back to the preceding question.
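For instance, a minimal client-side routing sketch (the host and core names
are placeholders): hash each record's unique key at index time so documents
spread evenly and no shard approaches the 2^31-1 ceiling.

public class ShardRouter {
    private static final String[] SHARD_URLS = {
        "http://host1:8983/solr/core0",   // placeholder hosts/cores
        "http://host2:8983/solr/core1"
    };

    // Hash the unique key to a shard index; the mask keeps the value
    // non-negative even when hashCode() is negative.
    static String routeFor(String uniqueKey) {
        int shard = (uniqueKey.hashCode() & 0x7fffffff) % SHARD_URLS.length;
        return SHARD_URLS[shard];
    }

    public static void main(String[] args) {
        // The same key always routes to the same shard URL.
        System.out.println(routeFor("record-12345"));
    }
}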
-- Jack Krupansky
-----Original Message-----
From: tosenthu
Sent: Monday, May 28, 2012 11:34 AM
To: solr-user@lucene.apache.org
Subject: Re: Negative value in numFound
Hi
It is a multicore, but when I searched with the shards query, even then I get
this response, which is again a negative value.
OOM is a problem.
You need more RAM and more machines, and maybe more shards.
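To double-check how much heap the Solr JVM actually got, here is a quick
sketch using the standard Runtime API (run it inside the same JVM, e.g. from
a debug JSP):

public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is what the
        // JVM has currently committed.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        long committedMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        System.out.println("max heap MB: " + maxMb);
        System.out.println("committed heap MB: " + committedMb);
    }
}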
-- Jack Krupansky
-----Original Message-----
From: tosenthu
Sent: Monday, May 28, 2012 11:29 AM
To: solr-user@lucene.apache.org
Subject: Re: Negative value in numFound
There was an Out Of Memory, but still the indexing was happening further.
Hi
It is a multicore, but when I searched with the shards query, even then I get
this response, which is again a negative value.
It might be that the total number of records is > 2147483647 (2^31-1), but is
this limitation documented anywhere? What is the strategy to overcome this
situation? The expectation is to support around 12 billion records.
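A distributed query of that kind looks roughly like this in SolrJ (a sketch
with placeholder host and core names; note that SolrJ reports numFound as a
Java long):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardsQuery {
    public static void main(String[] args) throws Exception {
        // Query one core and fan out across all shards via the shards param.
        SolrServer server =
            new CommonsHttpSolrServer("http://host1:8983/solr/core0");
        SolrQuery q = new SolrQuery("*:*");
        q.set("shards", "host1:8983/solr/core0,host2:8983/solr/core1");
        QueryResponse rsp = server.query(q);
        System.out.println("numFound: " + rsp.getResults().getNumFound());
    }
}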
In some cases a multi-shard architecture might significantly slow down the
search process at this index size...
By the way, how much RAM do you use?
There was an Out Of Memory, but still the indexing was happening further.
Is this for a single-shard or multi-shard index?
There is a 2^31-1 limit for a single Lucene index since document numbers are
"int" (32-bit signed in Java) in Lucene, but with Solr shards you can have a
multiple of that, based on number of shards.
If you are multi-shard, maybe one of the shards went over the limit.
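That would explain the negative numFound. Here is a minimal Java sketch of
the wrap-around (the shard counts are made-up numbers): once a total held in
a 32-bit int passes 2^31-1, it goes negative.

public class NumFoundOverflow {
    public static void main(String[] args) {
        long shard1Hits = 1500000000L;  // 1.5 billion (assumed)
        long shard2Hits = 1200000000L;  // 1.2 billion (assumed)
        // Forcing the merged total into a 32-bit int wraps it negative.
        int numFound = (int) (shard1Hits + shard2Hits);
        System.out.println(numFound);   // -1594967296, not 2700000000
    }
}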
Hm... Do you have any errors in the logs? During search, or during indexing?
The details are below:
Solr: 3.5
Schema: 53 fields, 8 of them indexed
OS: CentOS 5.4, 64-bit
Java: 1.6.0, 64-bit
Apache Tomcat: 7.0.22
CPU: Intel(R) Xeon(R) L5518 @ 2.13GHz (16 processors)
Index disk: /dev/mapper/index 5.9T 1.9T 4.0T 33% /Index
Records: around 2 billion
Hi!
Can you please share your hardware parameters, the version of Solr you're
using, and your schema.xml file?
Thanks.