Thanks for all your inputs.
On Fri, Apr 22, 2011 at 8:36 PM, Otis Gospodnetic-2 [via Lucene] <
ml-node+2851624-1936255218-340...@n3.nabble.com> wrote:
> Rahul,
>
> Here's a suggestion:
> Write a simple app that uses *Lucene* to create N indices, one for each of the
> documents you want to test. Then you can look at their sizes on disk.
> Not sure if it's super valuable to see sizes of individual documents, but you
> can do it as described above.
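
For anyone reading this later, here is a minimal sketch of what Otis describes: one index per document, then read the directory size off disk. It assumes a recent Lucene release (the IndexWriter/FSDirectory API has shifted since the 3.x line this thread dates from), and the class name and the two inline sample strings are just placeholders.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PerDocIndexSize {

    // Index a single document into its own directory and return the
    // resulting index size in bytes.
    static long indexSizeForDoc(String text, Path indexDir) throws IOException {
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        try (FSDirectory dir = FSDirectory.open(indexDir);
             IndexWriter writer = new IndexWriter(dir, cfg)) {
            Document doc = new Document();
            // Stored vs. unstored makes a big difference to the on-disk size,
            // as Erick points out further down the thread.
            doc.add(new TextField("body", text, Field.Store.YES));
            writer.addDocument(doc);
            writer.forceMerge(1); // single segment, so the size is easy to read off
        }
        long total = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(indexDir)) {
            for (Path f : files) {
                total += Files.size(f);
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        List<String> docs = List.of("first test document",
                                    "second, somewhat longer test document");
        for (int i = 0; i < docs.size(); i++) {
            Path dir = Files.createTempDirectory("doc-index-" + i);
            System.out.printf("doc %d -> %d bytes%n", i, indexSizeForDoc(docs.get(i), dir));
        }
    }
}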
I think you could approximate this with some empirical measurements, i.e. index
1,000 'typical' documents and see what the resulting index size is. Of course
you may need to adjust this number upwards if there is a lot of variability in
document size.
When I built the search engine that ran fe
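
A quick sketch of that bulk-measurement idea, again written against a recent Lucene API rather than the 3.x version current at the time: index a sample of typical documents into one index and divide the on-disk size by the document count. The three inline strings stand in for a realistic sample of ~1,000 documents, and the class name is made up.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class AverageDocSize {
    public static void main(String[] args) throws IOException {
        // Placeholder corpus: in practice load ~1,000 'typical' documents here.
        List<String> typicalDocs = List.of("doc one ...", "doc two ...", "doc three ...");

        Path indexDir = Files.createTempDirectory("bulk-index");
        try (FSDirectory dir = FSDirectory.open(indexDir);
             IndexWriter writer = new IndexWriter(dir,
                     new IndexWriterConfig(new StandardAnalyzer()))) {
            for (String text : typicalDocs) {
                Document doc = new Document();
                // Unstored here, so the measurement covers the inverted index only.
                doc.add(new TextField("body", text, Field.Store.NO));
                writer.addDocument(doc);
            }
            writer.forceMerge(1);
        }

        long totalBytes = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(indexDir)) {
            for (Path f : files) totalBytes += Files.size(f);
        }
        System.out.printf("%d docs, %d bytes total, ~%d bytes/doc%n",
                typicalDocs.size(), totalBytes, totalBytes / typicalDocs.size());
    }
}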
There's no way I know of to do this.
Why is this important to you? Because I'm not
sure what actionable information this gives you.
The number will vary based on whether the fields
are stored or not. And storing the fields has
very little effect on search memory requirements.
What are you hoping to accomplish?