On 1/7/2014 7:48 AM, Steven Bower wrote:
> I was looking at the code for getIndexSize() on the ReplicationHandler to
> get at the size of the index on disk. From what I can tell, because this
> does directory.listAll() to get all the files in the directory, the size on
> disk includes not only what is searchable at the moment but potentially
> also
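The behavior Steven describes can be sketched in plain Java. This is not the actual ReplicationHandler source; it is a minimal sketch of the same pattern (list every file in the index directory and sum their lengths), using java.nio instead of Lucene's Directory API so it runs standalone. Note how every file on disk is counted, whether or not the current searcher references it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class IndexSizeSketch {
    // Sum the on-disk size of every file in a directory, analogous to how
    // getIndexSize() walks directory.listAll(): all files are counted,
    // including ones no longer referenced by the live searcher.
    static long sizeOnDisk(Path indexDir) throws IOException {
        try (Stream<Path> files = Files.list(indexDir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try {
                                return Files.size(p);
                            } catch (IOException e) {
                                return 0L; // file vanished mid-walk (e.g. merged away)
                            }
                        })
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical index directory with two segment-like files.
        Path dir = Files.createTempDirectory("index-size-demo");
        Files.write(dir.resolve("_0.cfs"), new byte[1024]);
        Files.write(dir.resolve("segments_1"), new byte[128]);
        System.out.println(sizeOnDisk(dir)); // prints 1152
    }
}
```

Because deleted-but-not-yet-merged segment files still sit in the directory, a sum computed this way can be noticeably larger than the size of what is currently searchable.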
Interesting bit, thanks, Rafał!
On Mon, Apr 8, 2013 at 12:54 PM, Rafał Kuć wrote:
> Hello!
>
> Let me answer the first part of your question. Please have a look at
> https://svn.apache.org/repos/asf/lucene/dev/trunk/dev-tools/size-estimator-lucene-solr.xls
> It should help you make an estimation about your index size.
>
> --
> Regards,
> Rafał Kuć
> Sematext :: http://sematext.com/ :: Solr -
This may not be a well-detailed question, but I will try to make it clear.
I am crawling web pages and will index them with SolrCloud 4.2. What I want
to predict is the index size.
I will have approximately 2 billion web pages, and I assume each of them
will be 100 KB.
I know that it depends on sto
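As a back-of-envelope check on the poster's own figures (2 billion pages at 100 KB each, taking KB = 1000 bytes), the raw crawl alone is about 200 TB. The resulting index size is a separate question that depends heavily on which fields are stored and indexed, which is what the thread's size-estimator spreadsheet is for. A quick sketch of the arithmetic:

```java
public class CorpusSizeEstimate {
    public static void main(String[] args) {
        long pages = 2_000_000_000L;   // ~2 billion pages (poster's figure)
        long bytesPerPage = 100_000L;  // 100 KB per page, taking KB = 1000 bytes
        long rawBytes = pages * bytesPerPage;
        // 2e9 * 1e5 = 2e14 bytes = 200 TB of raw crawl data
        System.out.println(rawBytes / 1_000_000_000_000L + " TB raw");
    }
}
```

This is only the input size; the on-disk index can be a fraction of it (indexed-only fields) or comparable to it (if full page bodies are stored).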
emory. We haven't pushed this into production yet, but initial
load-testing results look promising.
Hope this helps!
> -----Original Message-----
> From: Jim Adams [mailto:jasolru...@gmail.com]
> Sent: Tuesday, June 23, 2009 1:24 PM
> To: solr-user@lucene.apache.org
> Subje
Can anyone give me a rule of thumb for knowing when you need to go to
multicore or shards? How many records can be in an index before it breaks
down? Does it break down? Is it 10 million? 20 million? 50 million?
Thanks, Jim