Thanks everyone!
On Thu, Aug 30, 2012 at 11:11 AM, pravesh wrote:
> We have a 48GB index on a single shard with 20+ million documents,
> recently migrated to Solr 3.5.
> We host searches on a cluster of Solr servers, but I do see us
> migrating to Solr sharding going forward.
We have a 48GB index on a single shard with 20+ million documents, recently
migrated to Solr 3.5.
We host searches on a cluster of Solr servers, but I do see us migrating to
Solr sharding going forward.
Thanx
Pravesh
Here's a blog outlining why this is so hard to answer:
http://searchhub.org/dev/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
Just one example from your post: you mention index size as a metric. It's
often nearly useless. Stored data ('stored="true"') is placed in files that
add to the index size on disk but have very little impact on search speed
or memory requirements.
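A hedged illustration of that point: what big stored fields cost you at query
time depends mostly on how much stored data you actually retrieve, not on how
much sits on disk. The sketch below (hypothetical host, core, field, and query,
none taken from this thread) runs the same query twice, once fetching only ids
and once fetching full stored documents, so the two times can be compared.

# Sketch only: compare the cost of matching vs. retrieving stored fields.
# Host, core, field, and query values are placeholders, not from the thread.
import json
import time
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr/mycore/select"

def timed_query(params):
    url = SOLR + "?" + urllib.parse.urlencode(params)
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return time.time() - start, body["response"]["numFound"]

# Same query, same row count; only the stored fields fetched differ.
common = {"q": "title:solr", "rows": 100, "wt": "json"}

t_ids, found = timed_query({**common, "fl": "id,score"})  # match only
t_full, _ = timed_query({**common, "fl": "*"})            # match + stored data

print(f"{found} hits; ids only: {t_ids:.3f}s, full stored docs: {t_full:.3f}s")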
Unfortunately the answer for this can vary quite a bit based on a
number of factors:
1. Whether or not fields are stored,
2. Document size,
3. Total term count,
4. Solr version
etc.
We have two major indexes, one for servicing online queries, and one
for batch processing. Our batch index is perf
If you want to do it with one index, I think you'll know how after reading
this mail.
I think maybe you can divide the index into several partitions (I'm not sure
what to call them), each with one master and several slaves if you use Solr,
and have one request fan out into several queries.
That can reduce the size of each index file.
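For what it's worth, Solr's built-in distributed search does roughly this via
the standard `shards` request parameter: one node fans the query out to every
shard and merges the ranked results. A minimal sketch, with hypothetical host
names, core name, and query (none of them come from this thread):

# Sketch of a fanned-out query using Solr distributed search.
# Hosts, core name, and query are placeholders.
import json
import urllib.parse
import urllib.request

# Any one node can act as the "broker"; it forwards the query to every shard
# listed in the shards parameter and merges the results.
BROKER = "http://shard1:8983/solr/mycore/select"
SHARDS = ",".join([
    "shard1:8983/solr/mycore",
    "shard2:8983/solr/mycore",
    "shard3:8983/solr/mycore",
])

params = {
    "q": "title:solr",
    "shards": SHARDS,   # standard Solr distributed-search parameter
    "rows": 10,
    "wt": "json",
}
with urllib.request.urlopen(BROKER + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)

print("total hits across shards:", result["response"]["numFound"])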
: I'd be interested to know what is the ideal size for an index to achieve 1
: sec response time for queries. I'd appreciate if you can share any numbers.
that's a fairly impossible question to answer ... the lucene email
archives have lots of discussion about how the number of documents isn't
really the determining factor.
- Original Message
From: Mike Klaas <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, March 27, 2007 6:20:40 PM
Subject: Re: maximum index size
On 3/27/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:
> I know there are a bunch of variables here (RAM, number of fields, hits, etc.),
Hi Mike,
I'd be interested to know what is the ideal size for an index to achieve 1
sec response time for queries. I'd appreciate if you can share any numbers.
Thanks,
Venkatesh
On 3/27/07, Mike Klaas <[EMAIL PROTECTED]> wrote:
On 3/27/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:
> I know there are a bunch of variables here (RAM, number of fields, hits, etc.),
did that.
Thanks,
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: Mike Klaas <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, March 27, 2007 6:20:40 PM
Subject: Re: maximum index size
Hi Andre,
Comments are inline.
What hardware are you running?
4 dual-proc 64GB blades for each searcher, plus a broker that merges results,
on 64-bit SUSE Linux running JDK 1.6 with an 8GB heap.
Do you use collection distribution?
Nope. I use Hadoop to index the documents.
Thanks,
Venkatesh
I've 50 million documents, each about 10K in size, and 4 index partitions,
each consisting of 12.5 million documents. Each index partition is about
80GB. A search typically takes about 3-5 seconds. Single-word searches are
faster than multi-word searches. I'm still working on finding the ideal
index size.
On 3/27/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:
> If you are going to store a document for each customer then some field
> must indicate to which customer the document instance belongs. In
> that case, why not index a single copy of each document, with a field
> containing a list of customers?
- Original Message
From: Mike Klaas <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, March 27, 2007 3:20:40 PM
Subject: Re: maximum index size
If you are going to store a document for each customer then some field
must indicate to which customer the document instance belongs. In that case,
why not index a single copy of each document, with a field containing a list
of customers?
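To make that concrete, here is a hedged sketch of the single-copy approach over
Solr's HTTP API. The host, core name, field names, and the assumption of a
JSON-capable update handler (Solr 4+) are mine, not from the thread: each
document carries a multi-valued customers field, and each customer's searches
add a filter query on that field.

# Sketch: one copy of each document, tagged with the customers that may see it.
# Host, core, and field names are placeholders; assumes JSON update support.
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8983/solr/mycore"

# Index a single copy of a document with a multi-valued "customers" field.
docs = [{"id": "doc-1", "title": "Quarterly report", "customers": ["acme", "globex"]}]
req = urllib.request.Request(
    BASE + "/update?commit=true",
    data=json.dumps(docs).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req).read()

# At query time, restrict each customer to its own documents with a filter query.
params = {"q": "title:report", "fq": "customers:acme", "wt": "json"}
with urllib.request.urlopen(BASE + "/select?" + urllib.parse.urlencode(params)) as resp:
    hits = json.load(resp)["response"]["numFound"]
print("hits visible to acme:", hits)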
On 3/27/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:
I know there are a bunch of variables here (RAM, number of fields, hits, etc.),
but I am trying to get a sense of how big of an index in terms of number of
documents Solr can reasonably handle. I have heard indexes of 3-4 million
documents ru