[ https://issues.apache.org/jira/browse/HADOOP-15976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868457#comment-16868457 ]

Lukas Majercak commented on HADOOP-15976:
-----------------------------------------

We do not use LDAP anymore, but this sounds reasonable. The only thing I'd say 
is that LdapGroupsMapping now has a failover feature, so you might want to look 
into that too; maybe just override the failover() function and return immediately. 
Anyway, this should be clearer once we have unit tests for this new 
MultiLdapGroupsMapping.
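
For illustration, here is a rough sketch of the "return immediately from 
failover()" idea at the mapping level: spread lookups round-robin over several 
delegate mappings and, on a failure, move straight to the next server instead of 
retrying the same one. GroupMappingServiceProvider is the real Hadoop interface; 
the class shape and field names below are hypothetical and may not match the 
attached patch.

{code:java}
// Hypothetical sketch only; the patch's actual MultiLdapGroupsMapping may be
// structured differently. The delegates would typically be LdapGroupsMapping
// instances, each configured with a different LDAP server URL.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.security.GroupMappingServiceProvider;

public class MultiLdapGroupsMapping implements GroupMappingServiceProvider {
  private final List<GroupMappingServiceProvider> delegates; // one per server
  private final AtomicInteger next = new AtomicInteger();

  public MultiLdapGroupsMapping(List<GroupMappingServiceProvider> delegates) {
    this.delegates = delegates;
  }

  @Override
  public List<String> getGroups(String user) throws IOException {
    // Round-robin across servers; on failure, fail over to the next server
    // immediately rather than retrying the one that is already overloaded.
    IOException last = null;
    for (int attempt = 0; attempt < delegates.size(); attempt++) {
      int idx = Math.floorMod(next.getAndIncrement(), delegates.size());
      try {
        return delegates.get(idx).getGroups(user);
      } catch (IOException e) {
        last = e; // move straight on to the next server
      }
    }
    throw last != null ? last : new IOException("no LDAP servers configured");
  }

  @Override
  public void cacheGroupsRefresh() throws IOException {
    // Nothing cached at this level.
  }

  @Override
  public void cacheGroupsAdd(List<String> groups) throws IOException {
    // No-op, as in LdapGroupsMapping.
  }
}
{code}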

> NameNode performance degradation when a single LDAP server becomes a bottleneck 
> in the LDAP-based mapping module 
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-15976
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15976
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>    Affects Versions: 3.1.1
>            Reporter: fengyongshe
>            Assignee: fengyongshe
>            Priority: Major
>         Attachments: HADOOP-15976.patch, image003(12-05-1(12-05-10-36-26).jpg
>
>
> On a 2000+ node cluster, we use OpenLDAP to manage users and groups. When 
> LdapGroupsMapping is used, group lookups cause serious faults, including NameNode 
> performance degradation and NameNode crashes: 
> WARN security.Groups: Potential performance problem:
>  getGroups(user=xxxx) took 46817 milliseconds.
>  INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(252)) - 
> FSNamesystem write lock held for 46817 ms via java.lang.Thread.getStackTrace
> We found the LDAP server becomes the bottleneck for NameNode operations; a single 
> LDAP server only supports hundreds of requests per second.
> P.S. The server was running nslcd.


