There's no good answer here. But a 3GB index isn't very big, so I doubt
sharding is necessary. Absolute numbers aren't much use here IMO.
What you probably want to do is create a stress test for your system. Run
the stress test occasionally, and when the average response times start
to go up, you're hitting some limit; investigate from there. Make sure
you aren't measuring cache warming etc.; the QTime in the response is
what you should be paying most attention to.


As for whether field collapsing is supported in a sharded environment,
see: http://wiki.apache.org/solr/FieldCollapsing
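For what it's worth, a distributed query is just the normal request plus a
shards parameter listing the cores to fan out to, along these lines (host
names and core paths made up):

http://host1:8983/solr/select?q=field:value&shards=host1:8983/solr,host2:8983/solr

Whether the collapse patch's parameters behave correctly when combined with
such a request is exactly what that wiki page covers, so check it against
the patch version you're running.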

Best
Erick

On Thu, Jan 12, 2012 at 1:41 AM, Sujatha Arun <suja.a...@gmail.com> wrote:
> Hello,
>
> I am looking into a trigger point for sharding indexes based on response time,
> and would like to define an acceptable response time.
>
> Given a 3GB index, when can I think of sharding? The response times are
> variable based on the query and range from 100ms to 600ms. We are running
> Solr 1.3 with a collapse patch for this instance.
>
> A simple query gets a quick response; however, a complex Boolean query with
> collapse takes longer. Should we base the acceptable time on the most complex
> queries, the simplest ones, or somewhere in between?
>
> Does sharding work with a collapse patch?
>
> Any pointer on defining an acceptable response time, and at what point we
> can think of sharding based on the response time?
>
>
> Regards
> Sujatha
