[
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904225#comment-15904225
]
Andrew Wang commented on HADOOP-14104:
--------------------------------------
Hi Yongjun,
bq. When the target is the remote cluster, could we fail to find Codec info or
wrong codec info because we don't have remote cluster's configuration?
The FileEncryptionInfo has a cipher type field. If the client doesn't support
the cipher, then it can't read/write the file, and will throw an exception. If
it does support that cipher, it uses its local config to determine the correct
codec implementation to use (i.e. java or native).
From a safety point of view, we're okay to use the local config. If the client
is too old and doesn't understand a new cipher type, it'll abort. Supporting a
new cipher necessarily requires upgrading the client (and potentially also
installing native libraries), so I think this behavior is okay.
> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
> Issue Type: Improvement
> Components: kms
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch,
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to the current implementation of the kms provider in the client
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from
> multiple clusters, it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.
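The single-provider limitation in the description comes from the client
holding one provider URI in its local configuration. A minimal sketch of such
a client-side entry is below; the property name matches the Hadoop config key
of that era, but the host and port are hypothetical examples:

```xml
<!-- Client-side config: only one KMS can be named here, so tokens are
     only fetched for this (local) cluster's KMS. -->
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://https@kms.cluster-a.example.com:9600/kms</value>
</property>
```

Asking the namenode for its kms provider path, as this JIRA proposes, removes
the need for the client conf to know every remote cluster's KMS up front.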
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)