[ https://issues.apache.org/jira/browse/HADOOP-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13732684#comment-13732684 ]

Daryn Sharp commented on HADOOP-9820:
-------------------------------------

bq. exception is being thrown because currently the only header that is 
acceptable when wrapping is enabled is the RPC-header-callId=sasl with the 
SASL-state=wrapped header. If you don't get that then throw the exception 
(which will go with its own response header).

The more specific cases I had in mind:
* Client and server are using mismatched ciphers.  The server can't decode the 
wrapped data.  The server doesn't know what cipher the client is using, so it 
can't send a wrapped response with the exception.  Sending a fatal non-wrapped 
RPC exception of "wrong cipher" exposes no sensitive data.  I guess we just 
close the connection and the client sees EOF (see the sketch below).
* Server wants to send non-sensitive control messages like "is session alive" 
or "close session".  Requiring non-sensitive messages to be wrapped/unwrapped 
seems like overkill.

All that said, I'll disallow non-wrapped responses.
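
For illustration, a rough server-side sketch of the cipher-mismatch case above 
(this is not the actual ipc.Server code; the class and readWrappedRequest are 
made up for the example): if unwrap fails there is nothing safe to send back 
wrapped, and a non-wrapped error is disallowed, so the connection is simply 
closed and the client sees EOF.

{code:java}
import java.io.IOException;
import java.nio.channels.SocketChannel;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

class WrappedRpcReader {
  private final SaslServer saslServer;  // negotiated with QOP auth-int/auth-conf
  private final SocketChannel channel;

  WrappedRpcReader(SaslServer saslServer, SocketChannel channel) {
    this.saslServer = saslServer;
    this.channel = channel;
  }

  /** Unwrap one SASL-wrapped request, or drop the connection if we can't. */
  byte[] readWrappedRequest(byte[] wrapped) throws IOException {
    try {
      // unwrap() verifies integrity and decrypts; it fails if the peer's
      // cipher/QOP doesn't match what this side negotiated
      return saslServer.unwrap(wrapped, 0, wrapped.length);
    } catch (SaslException se) {
      // Can't build a wrapped error the client could decode, and an
      // unwrapped error would be a protocol violation -- just close.
      channel.close();
      throw new IOException("SASL unwrap failed; closing connection", se);
    }
  }
}
{code}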

bq. SaslRpcClient.SaslRpc*Stream should be named SaslRpcClient.Wrapped*Stream.
Ok.

bq. The default stream buffer size should be configurable instead of hard coded 
"64*1024".
That's the spec default if the buffer size isn't negotiated, so it can't be a 
configurable option.  There are Java properties to request a different buffer 
size, but if we want to add Hadoop config options to override those, that's 
a separate feature.
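
To make that concrete, a minimal sketch at the javax.security.sasl level 
(mechanism, protocol and server names below are placeholders, not Hadoop's 
actual values) showing where a different buffer size is requested and where the 
negotiated sizes can be read back; if nothing is negotiated, implementations 
fall back to the 64K default.

{code:java}
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

class SaslBufferSizeSketch {
  static SaslClient createClient() throws SaslException {
    Map<String, String> props = new HashMap<>();
    props.put(Sasl.QOP, "auth-conf");      // request privacy (wrapping)
    props.put(Sasl.MAX_BUFFER, "131072");  // ask for a larger receive buffer
    return Sasl.createSaslClient(
        new String[] {"GSSAPI"}, null, "hdfs", "nn.example.com", props, null);
  }

  static void printNegotiated(SaslClient client) {
    // Only meaningful after authentication completes.
    Object maxRecv = client.getNegotiatedProperty(Sasl.MAX_BUFFER);
    Object maxSend = client.getNegotiatedProperty(Sasl.RAW_SEND_SIZE);
    System.out.println("negotiated max receive buffer = " + maxRecv);
    System.out.println("negotiated max raw send size  = " + maxSend);
  }
}
{code}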
                
> RPCv9 wire protocol is insufficient to support multiplexing
> -----------------------------------------------------------
>
>                 Key: HADOOP-9820
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9820
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 3.0.0, 2.1.0-beta
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HADOOP-9820.patch
>
>
> RPCv9 is intended to allow future support of multiplexing.  This requires all 
> wire messages to be tagged with an RPC header so a demux can decode and route 
> the messages accordingly.
> RPC ping packets and SASL QOP wrapped data are known not to be tagged with a 
> header.
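
For context, a conceptual sketch of what "tagged with an RPC header" means to a 
demux (this is not the actual RPCv9 wire format or Hadoop code; the tag values 
and class are invented for illustration): every frame starts with a small 
header, so pings and SASL-wrapped payloads can be routed without understanding 
their contents.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

class RpcDemuxSketch {
  // hypothetical tag values, for illustration only
  static final int CALL = 0, PING = 1, SASL_WRAPPED = 2;

  void route(DataInputStream in) throws IOException {
    int length = in.readInt();        // length-prefixed frame
    int tag = in.readUnsignedByte();  // per-message header tag
    byte[] body = new byte[length - 1];
    in.readFully(body);
    switch (tag) {
      case PING:         /* keepalive, nothing to do */ break;
      case SASL_WRAPPED: handleWrapped(body); break;  // unwrap, then re-route the inner RPC
      case CALL:         handleCall(body);    break;
      default: throw new IOException("unknown message tag " + tag);
    }
  }

  void handleWrapped(byte[] body) { /* unwrap and dispatch */ }
  void handleCall(byte[] body)    { /* dispatch to a handler */ }
}
{code}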
