I would say that running more Solr instances, each one with its own data
directory, could help if you can classify your docs in such a way that you
can put "A" type docs in index "A", "B" type docs in index "B", and so on.

2009/1/21 wojtekpia <wojte...@hotmail.com>

>
> I'm using a recent version of Sun's JVM (6 update 7) with the concurrent
> generational collector. I've tried several other collectors; none of them
> seemed to help the situation.
>
> I've tried reducing my heap allocation. Search performance got worse as I
> reduced the heap. I didn't monitor the garbage collector in those tests,
> but I imagine its behavior would've gotten better. (As a side note, I do a
> lot of faceting and sorting; I have 10M records in this index, with an
> approximate index file size of 10GB.)
>
> This index is on a single machine, in a single Solr core. Would splitting
> it across multiple Solr cores on a single machine help? I'd like to find
> the limit of this machine before spreading the data to more machines.
>
> Thanks,
>
> Wojtek
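
As an aside on the collector mentioned above: assuming "concurrent
generational collector" means CMS on Sun's JVM 6, the flags usually look
roughly like the line below when starting the example Jetty (the heap sizes
are placeholders, not a recommendation for your box):

    java -Xms4g -Xmx4g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -jar start.jar

Adding -verbose:gc or -XX:+PrintGCDetails to the same command line is a cheap
way to check whether the pauses line up with full collections.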


-- 
Alexander Ramos Jardim
