[
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yi Liu updated HADOOP-10603:
----------------------------
Attachment: HADOOP-10603.patch
This patch is split from HADOOP-10150 with minor modifications; it’s *not* a
final patch and I’m still improving it.
The {Encryptor} and {Decryptor} interfaces in the patch are similar to the
{Compressor} and {Decompressor} interfaces.
There is also a {DirectDecompressor} interface with:
{code}
public void decompress(ByteBuffer src, ByteBuffer dst) throws IOException;
{code}
This avoids some byte copying. Should we define {Encryptor} and {Decryptor}
this way instead of as they are defined in the patch?
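For discussion, a ByteBuffer-based definition in the {DirectDecompressor} style could look like the sketch below; the method names and comments are only my illustration, not the patch's definitions:
{code}
import java.io.IOException;
import java.nio.ByteBuffer;

/** Sketch only: ByteBuffer-based Encryptor, mirroring DirectDecompressor. */
public interface Encryptor {
  /** Encrypts all remaining bytes in src into dst, avoiding an extra byte[] copy. */
  void encrypt(ByteBuffer src, ByteBuffer dst) throws IOException;
}

/** Sketch only: the matching ByteBuffer-based Decryptor. */
interface Decryptor {
  /** Decrypts all remaining bytes in src into dst. */
  void decrypt(ByteBuffer src, ByteBuffer dst) throws IOException;
}
{code}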
{CryptoFSDataOutputStream} extends and wraps {FSDataOutputStream}, and
{CryptoFSDataInputStream} extends and wraps {FSDataInputStream}. They can be
used in Hadoop FileSystem to provide encryption/decryption functionality.
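For illustration only, here is a minimal sketch of the extend-and-wrap structure for the input side. The inner class name and the pass-through method bodies are placeholders (comments mark where real decryption and cipher repositioning would go); this is not the patch's code:
{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.Seekable;

/** Sketch only: decrypting stream that delegates seek/positioned reads to the wrapped stream. */
class SketchCryptoInputStream extends InputStream implements Seekable, PositionedReadable {
  private final FSDataInputStream in;

  SketchCryptoInputStream(FSDataInputStream in) {
    this.in = in;
  }

  @Override
  public int read() throws IOException {
    // Real code would decrypt the byte read from the underlying stream.
    return in.read();
  }

  @Override
  public void seek(long pos) throws IOException {
    // Real code would also reposition the cipher (e.g. reset the counter/IV for 'pos').
    in.seek(pos);
  }

  @Override
  public long getPos() throws IOException {
    return in.getPos();
  }

  @Override
  public boolean seekToNewSource(long targetPos) throws IOException {
    return in.seekToNewSource(targetPos);
  }

  @Override
  public int read(long position, byte[] buffer, int offset, int length) throws IOException {
    // Real code would decrypt the bytes read at 'position'.
    return in.read(position, buffer, offset, length);
  }

  @Override
  public void readFully(long position, byte[] buffer, int offset, int length) throws IOException {
    in.readFully(position, buffer, offset, length);
  }

  @Override
  public void readFully(long position, byte[] buffer) throws IOException {
    in.readFully(position, buffer);
  }
}

/** Sketch only: extends FSDataInputStream so existing callers keep the same stream type. */
public class CryptoFSDataInputStream extends FSDataInputStream {
  public CryptoFSDataInputStream(FSDataInputStream in) throws IOException {
    super(new SketchCryptoInputStream(in));
  }
}
{code}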
The test cases will be updated once we finalize the interfaces. I have another
JIRA to cover testing the crypto streams in HDFS.
Thoughts?
> Crypto input and output streams implementing Hadoop stream interfaces
> ---------------------------------------------------------------------
>
> Key: HADOOP-10603
> URL: https://issues.apache.org/jira/browse/HADOOP-10603
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: security
> Reporter: Alejandro Abdelnur
> Assignee: Yi Liu
> Fix For: 3.0.0
>
> Attachments: HADOOP-10603.patch
>
>
> A common set of Crypto Input/Output streams. They would be used by
> CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills.
> Note we cannot use the JDK Cipher Input/Output streams directly because we
> need to support the additional interfaces that the Hadoop FileSystem streams
> implement (Seekable, PositionedReadable, ByteBufferReadable,
> HasFileDescriptor, CanSetDropBehind, CanSetReadahead,
> HasEnhancedByteBufferAccess, Syncable).
--
This message was sent by Atlassian JIRA
(v6.2#6252)