[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977325#comment-15977325
 ] 

Yongjun Zhang commented on HADOOP-14104:
----------------------------------------

Hive code
{code}
private boolean isEncryptionEnabled(DFSClient client, Configuration conf) {
  try {
    DFSClient.class.getMethod("isHDFSEncryptionEnabled");
  } catch (NoSuchMethodException e) {
    // the method is available since Hadoop 2.7.1;
    // if we run with an older Hadoop, check the key provider URI ourselves
    return !conf.getTrimmed(DFSConfigKeys.DFS_ENCRYPTION_KEY_PROVIDER_URI, "").isEmpty();
  }
  return client.isHDFSEncryptionEnabled();
}
{code}
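
The reflection probe above can be exercised in isolation. A minimal, self-contained sketch of the same pattern (the {{NewerApi}} class and {{hasMethod}} helper below are illustrative stand-ins, not Hadoop or Hive APIs):
{code}
// Sketch of the reflection-based capability probe: check whether a
// zero-arg method exists on a class, and fall back when it does not.
// "NewerApi" stands in for DFSClient running against a newer Hadoop.
public class FeatureProbe {
  static class NewerApi {
    public boolean isFeatureEnabled() { return true; }
  }

  // Returns true iff the named zero-arg public method exists on cls.
  static boolean hasMethod(Class<?> cls, String name) {
    try {
      cls.getMethod(name);
      return true;
    } catch (NoSuchMethodException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(hasMethod(NewerApi.class, "isFeatureEnabled")); // true
    System.out.println(hasMethod(NewerApi.class, "missingMethod"));    // false
  }
}
{code}
Note that {{getMethod}} only sees public methods; that is sufficient here since {{isHDFSEncryptionEnabled}} is part of the public DFSClient surface.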


> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
>                 Key: HADOOP-14104
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14104
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: kms
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>             Fix For: 2.8.1, 3.0.0-alpha3
>
>         Attachments: HADOOP-14104-branch-2.8.patch, 
> HADOOP-14104-branch-2.patch, HADOOP-14104-trunk.patch, 
> HADOOP-14104-trunk-v1.patch, HADOOP-14104-trunk-v2.patch, 
> HADOOP-14104-trunk-v3.patch, HADOOP-14104-trunk-v4.patch, 
> HADOOP-14104-trunk-v5.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can be only one kms.
> In a multi-cluster environment, a client reading encrypted data from 
> multiple clusters will only get a kms token for the local cluster.
> Not sure whether the target version is correct.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
