Hi Shawn,

I expect indexing to be a little slower with replication, but in my case it is 3 times worse. I can't explain this.
The monitored resource consumption is as follows: all the tests showed an I/O utilization of 100 MB/s while loading data into the disk cache, a disk cache utilization of 20 GB, and a core utilization of 100% (all 8 cores), so it seems that the bottleneck is the cores and not the RAM. I don't expect a performance improvement from increasing RAM. Am I wrong?

Thanks,
Luca

On Fri, Jan 8, 2016 at 4:40 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 1/8/2016 7:55 AM, Luca Quarello wrote:
> > I used solr5.3.1 and I sincerely expected response times with replica
> > configuration near to response times without replica configuration.
> >
> > Do you agree with me?
> >
> > I read here
> > http://lucene.472066.n3.nabble.com/Solr-Cloud-Query-Scaling-td4110516.html
> > that "Queries do not need to be routed to leaders; they can be handled by
> > any replica in a shard. Leaders are only needed for handling update
> > requests."
> >
> > I haven't found this behaviour. In my case CONF2 and CONF3 have all
> > replicas on VM2, but analyzing core utilization during a request, it is
> > 100% on both machines. Why?
>
> Indexing is a little bit slower with replication -- the update must
> happen on all replicas.
>
> If your index is sharded (which I believe you did indicate in your
> initial message), you may find that all replicas get used even for
> queries. It is entirely possible that some of the shard subqueries will
> be processed on one replica and some of them will be processed on other
> replicas. I do not know if this commonly happens, but I would not be
> surprised if it does. If the machines are sized appropriately for the
> index, this separation should speed up queries, because you have the
> resources of multiple machines handling one query.
>
> That phrase "sized appropriately" is very important. Your initial
> message indicated that you have a 90GB index, and that you are running
> in virtual machines. Typically VMs have fairly small memory sizes. It
> is very possible that you simply don't have enough memory in the VM for
> good performance with an index that large. With 90GB of index data on
> one machine, I would hope for at least 64GB of RAM, and I would prefer
> to have 128GB. If there is more than 90GB of data on one machine, then
> even more memory would be needed.
>
> Thanks,
> Shawn
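
P.S. One way to check Shawn's point that a single replica can answer a query on its own is to hit one replica core directly with distrib=false and look at QTime. Below is a minimal sketch against the Solr HTTP API; the host, port, and core name are placeholders for the actual replica on VM2, not values from this thread.

    # Minimal sketch: query one replica core directly with distrib=false so
    # the request is served by that single core, with no fan-out to other
    # shards or replicas. Host, port, and core name are placeholders.
    import json
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "q": "*:*",
        "rows": 0,
        "distrib": "false",   # answer the query from this core only
        "debug": "timing",    # include per-component timing in the response
        "wt": "json",
    })
    url = "http://vm2:8983/solr/collection1_shard1_replica2/select?" + params

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    print("numFound:", data["response"]["numFound"])
    print("QTime (ms):", data["responseHeader"]["QTime"])

For a normal distributed query against the whole collection, adding shards.info=true to the request shows which replica answered each shard subquery, which should make it clear whether both VMs really take part in a single search.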