(I'm working with Raghav on this): We've got several parallel workers that
add documents in batches of 16 through pysolr. With commitWithin set to 60
seconds, the commit causes Solr to freeze; with commitWithin at 5 seconds,
everything works fine. In both cases, throughput is around 500
documents/second.
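For reference, each worker's indexing loop looks roughly like the sketch below. This is a minimal reconstruction, not our exact code: the URL, collection name, and helper names are placeholders, and commitWithin is the pysolr parameter (milliseconds).

```python
BATCH_SIZE = 16

def batches(docs, size=BATCH_SIZE):
    """Split a list of docs into fixed-size batches."""
    return [docs[i:i + size] for i in range(0, len(docs), size)]

def make_solr(url):
    # Deferred import; assumes pysolr is installed.
    import pysolr
    return pysolr.Solr(url, timeout=10)

def index(solr, docs, commit_within_ms="60000"):
    # commitWithin is in milliseconds: "60000" (60 s) is the setting
    # that freezes Solr for us; "5000" (5 s) works fine.
    for batch in batches(docs):
        solr.add(batch, commitWithin=commit_within_ms)

# Each worker does roughly (URL is a placeholder):
# solr = make_solr("http://localhost:8983/solr/collection1")
# index(solr, worker_docs)
```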

We can certainly give it a try with the Beta.

Thanks,
/Martin

On Mon, Aug 27, 2012 at 7:30 PM, Mark Miller <markrmil...@gmail.com> wrote:

> How are you adding the docs? In batch, streaming, a doc at a time?
>
> Any chance you can try with the Beta?
>
> On Mon, Aug 27, 2012 at 9:35 AM, Raghav Karol <r...@issuu.com> wrote:
> > Hello *,
> >
> > We are using SolrCloud 4.0-Alpha and have a 4-machine setup.
> >
> > Machine 1 - 16 Solr cores - Shard 1 - 16
> > Machine 2 - 16 Solr cores - Shard 17 - 32
> > Machine 3 - 16 Solr cores - Replica 1 - 16
> > Machine 4 - 16 Solr cores - Replica 17 - 32
> >
> > Indexing at 500 docs/sec and committing every 60 seconds, i.e., every
> 30,000 documents, causes Solr to freeze. There is nothing in the logs to
> indicate errors or replication activity - Solr just appears to freeze.
> >
> > Increasing the commit frequency, we observed that commits of at most
> 2,500 docs worked fine.
> >
> > Are we using SolrCloud and replications incorrectly?
> >
> > --
> > Raghav
> >
> >
> >
>
>
>
> --
> - Mark
>
> http://www.lucidimagination.com
>
