On 5/10/2016 7:46 PM, lltvw wrote:
> The args used to start Solr are as follows, and I have uploaded a
> screenshot to http://www.yupoo.com/photos/qzone3927066199/96064170/;
> please take a look, thanks.
>
> -DSTOP.PORT=7989
> -DSTOP.KEY=
> -DzkHost=node1:2181,node2:2181,node3:2181/solr
> -Dso
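
For reference, the heap is normally set on the same command line; a minimal
sketch assuming the usual start.jar launch, with the -Xms/-Xmx values added
for illustration alongside the properties shown above:

    # explicit 10G heap; everything except -Xms/-Xmx mirrors the screenshot
    java -Xms10g -Xmx10g \
         -DSTOP.PORT=7989 -DSTOP.KEY= \
         -DzkHost=node1:2181,node2:2181,node3:2181/solr \
         -jar start.jar
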
On 5/9/2016 11:42 PM, lltvw wrote:
> By double-checking the params used to start Solr with the jps command, I
> found that the max heap size is already set to 10G. So I made a big
> mistake yesterday.
>
> But using the Solr admin UI, I selected the collection with the
> performance problem, and in the overview page I
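
A quick way to confirm which value is actually in effect is jps itself; it
lists each JVM together with the flags it was started with (the grep
pattern is only an example):

    # look for -Xmx / -Xms on the Solr process
    jps -lvm | grep start.jar
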
On Tue, 2016-05-10 at 00:41 +0800, lltvw wrote:
> Recently we set up a 4.10 SolrCloud env with about 90 million docs
> indexed in it. This SolrCloud has 12 shards, each shard on one separate
> machine, but when we try to search some info on SolrCloud, the response
> time is about 300ms.
Could you pr
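
One way to see where the 300ms is spent is to ask Solr for its timing
breakdown; a hedged example, with host and collection names made up:

    # debug=timing adds per-component prepare/process times to the response
    curl "http://node1:8983/solr/collection1/select?q=*:*&rows=10&wt=json&debug=timing"
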
On 5/9/2016 9:11 PM, lltvw wrote:
> You are right, the max heap is 512MB, thanks.
90 million documents split into 12 shards means 7.5 million documents
per shard.
With that many documents and a 512MB heap, you're VERY lucky if Solr
doesn't experience OutOfMemoryError problems -- which will make S
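
If the heap really is 512MB, the Solr log is worth checking directly; a
simple sketch, assuming the default log file name (the path varies by
install):

    # any hits here mean the heap is too small for this index and query load
    grep -i OutOfMemoryError solr.log
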
On 5/9/2016 4:41 PM, lltvw wrote:
> Shawn, thanks.
>
> Each machine has 48G of memory installed, with about 20G now free. I
> checked the JVM heap size in the Solr admin UI; the heap size shown is
> about 20M.
What is the *max* heap? An unmodified install of Solr 5.x or later has
a max heap of 512MB.
In the admin
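
The same numbers are also available without the UI; a sketch using the
system info handler, with host and port assumed:

    # the jvm -> memory section reports used, total, and max heap
    curl "http://node1:8983/solr/admin/info/system?wt=json"
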
On 5/9/2016 10:52 AM, lltvw wrote:
> Sorry, I left out the size of each shard; it is about 3G each. Thanks.
>
> On 2016-05-10 00:41:13, lltvw wrote:
>> Recently we set up a 4.10 SolrCloud env with about 90 million docs
>> indexed in it. This SolrCloud has 12 shards, each shard on one separate
>> machine