I see that my tuning is outdated, as per
http://wiki.zimbra.com/wiki/OpenLDAP_Performance_Tuning.
I'll make these changes and observe the servers.

In the meantime, a new observation regarding syncrepl: if I run ldapmodify for
5-10 entries in one go, the changes get replicated, but if I change about 300
entries in one go, some entries do not get replicated to some servers. Those
changes are not replicated until I modify the affected entries again.

Does this indicate a provider resource problem, a consumer resource problem,
or is it normal behaviour?
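In case it helps to pin this down, below is a rough sketch of how I plan to
compare contextCSN on the provider and each consumer after a large batch of
modifications, to see whether the consumers are merely lagging or have
genuinely missed changes. It uses python-ldap; the hostnames and the base DN
(dc=example,dc=com) are placeholders for our real values, and an anonymous
bind is assumed.

    # Sketch: compare contextCSN across provider and consumers.
    # Hostnames and base DN below are placeholders, not our real setup.
    import ldap

    BASE_DN = "dc=example,dc=com"  # placeholder suffix
    SERVERS = {
        "provider":  "ldap://provider.example.com",
        "consumer1": "ldap://consumer1.example.com",
        "consumer2": "ldap://consumer2.example.com",
    }

    def context_csn(uri):
        conn = ldap.initialize(uri)
        conn.simple_bind_s()  # anonymous bind; add credentials if needed
        # contextCSN is an operational attribute on the suffix entry,
        # so it has to be requested explicitly.
        result = conn.search_s(BASE_DN, ldap.SCOPE_BASE,
                               "(objectClass=*)", ["contextCSN"])
        conn.unbind_s()
        return result[0][1].get("contextCSN", [])

    for name, uri in SERVERS.items():
        print(name, [csn.decode() for csn in context_csn(uri)])

If a consumer's contextCSN stays behind the provider's indefinitely (rather
than catching up after a while), that would suggest the changes were missed
rather than just delayed.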



----- Original Message -----
From: Quanah Gibson-Mount
Sent: 03/11/12 02:05 AM
To: Amol Kulkarni, Howard Chu
Subject: Re: slow or inconsistent syncrepl

--On Saturday, March 10, 2012 2:34 PM +0100 Amol Kulkarni
<[email protected]> wrote:

>>> That depends entirely on the speed of your server and network.
>
> Taking the server part first - I'm also doubt that my provider server is
> underpowered.
>
> Following are the ldap specific parameters on our provider server :
>
> entries : 0.5 million
> avg entry size : 1k to 4k
> cachesize : 0.5 million
> dncachesize : 0.5 million
> database type : bdb
> bdb cachesize : 3 gb
> ldap threads : 16
>
> Foll is hardware configuration of the provider server :
> ram : 8gb
> swap : 8gb
> cpu : Virtual machine with 4 cpus. (vmware vsphere)
> architecture : 64 bit
>
> Also we have some administrative services/tasks running on the provider
> server other than openldap. But the sar output seems normal i.e iowait
> below 10 and idle time above 50.
>
> Does the hardware configuration seem ok for the given ldap size and
> configuration ?
>
> If not, can u suggest some changes?

You don't provide any useful information for suggesting changes. I would
advise you read over <http://wiki.zimbra.com/wiki/OpenLDAP_Performance_Tuning>

--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
