[ https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602588#comment-14602588 ]

Tony Reix commented on HADOOP-12106:
------------------------------------

I think that the issue is there, when data is built:

./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java


  private void process(ByteBuffer inBuffer, ByteBuffer outBuffer)
        throws IOException {
    ...
    int n = cipher.update(inBuffer, outBuffer);     // >> HERE <<
    System.out.println("@ JceAesCtrCryptoCodec.process: 0" + " n= " + n
        + " inputSize= " + inputSize);
    if (n < inputSize) {
      /**
       * Typically code will not get here. Cipher#update will consume all
       * input data and put result in outBuffer.
       * Cipher#doFinal will reset the crypto context.
       */
      contextReset = true;
      System.out.println("@ JceAesCtrCryptoCodec.process: 0"
          + " Typically code will not get here" + " contextReset= " + contextReset);
      cipher.doFinal(inBuffer, outBuffer);
    }
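If update() is indeed allowed to keep ciphertext in an internal buffer, a portable caller has to flush it with doFinal(), which is what the branch above attempts. A minimal standalone sketch of that pattern (the class name and the demo() helper are mine, not Hadoop code); note that doFinal() resets the crypto context, so the real codec must then re-initialize the IV, which is what the contextReset flag tracks:

```java
import java.nio.ByteBuffer;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CtrFlushSketch {

    // Encrypts in -> out. If update() returns fewer bytes than it consumed
    // (as reported on the IBM JVM), doFinal() flushes the buffered remainder.
    static int encryptChunk(Cipher cipher, ByteBuffer in, ByteBuffer out)
            throws Exception {
        int inputSize = in.remaining();
        int n = cipher.update(in, out);
        if (n < inputSize) {
            // The provider buffered some ciphertext; doFinal() emits it and
            // resets the context (caller must re-init the IV afterwards).
            n += cipher.doFinal(in, out);
        }
        return n;
    }

    // Returns the number of ciphertext bytes produced for 'size' input bytes.
    static int demo(int size) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE,
               new SecretKeySpec(new byte[16], "AES"),
               new IvParameterSpec(new byte[16]));
        ByteBuffer in = ByteBuffer.allocateDirect(size);
        ByteBuffer out = ByteBuffer.allocateDirect(size + 16);
        return encryptChunk(c, in, out);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("produced=" + demo(2634));
    }
}
```

With AES/CTR (a stream mode, no padding) the total ciphertext length must equal the plaintext length, so demo(2634) should always produce 2634 bytes whichever path is taken.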

On AIX, I get traces like:
@ CryptoOutputStream.write: 1 len= 5283 remaining= 8192
@ CryptoOutputStream.write: END len= 0 remaining= 8192
@ CryptoOutputStream.encrypt: 0 padding= 0 inBuffer.position()= 5283
@ CryptoOutputStream.encrypt: 01 encryptor.encrypt
@ JceAesCtrCryptoCodec.encrypt: 0
@ JceAesCtrCryptoCodec.process: 0 n= 5280 inputSize= 5283
@ JceAesCtrCryptoCodec.process: 0 Typically code will not get here 
contextReset= true

Though on Ubuntu/x86_64/OpenJDK, I get:
@ CryptoOutputStream.write: END len= 0 remaining= 8192
@ CryptoOutputStream.encrypt: 0 padding= 0 inBuffer.position()= 4149
@ CryptoOutputStream.encrypt: 01 encryptor.encrypt
@ JceAesCtrCryptoCodec.encrypt: 0
@ JceAesCtrCryptoCodec.process: 0 n= 4149 inputSize= 4149


At the end of CryptoOutputStream.write(), the line:
              int n = cipher.update(inBuffer, outBuffer);
returns n < inputSize on AIX/IBM JVM, whereas on Linux/OpenJDK we get n == inputSize.

It looks like cipher is an instance of javax.crypto.Cipher.



More traces:

System.out.println("@ JceAesCtrCryptoCodec.process: BEFORE cipher.update"
    + " inBuffer= " + inBuffer.toString() + " outBuffer= " + outBuffer.toString());
        int n = cipher.update(inBuffer, outBuffer);
System.out.println("@ JceAesCtrCryptoCodec.process: 0" + " n= " + n
    + " inputSize= " + inputSize);
System.out.println("@ JceAesCtrCryptoCodec.process: AFTER cipher.update"
    + " inBuffer= " + inBuffer.toString() + " outBuffer= " + outBuffer.toString());


AIX/IBM JVM:
@ JceAesCtrCryptoCodec.process: BEFORE cipher.update
        inBuffer  = java.nio.DirectByteBuffer[pos=0 lim=2634 cap=8192]
        outBuffer= java.nio.DirectByteBuffer[pos=0 lim=8192 cap=8192]
@ JceAesCtrCryptoCodec.process: 0 n= 2624 inputSize= 2634
@ JceAesCtrCryptoCodec.process: AFTER  cipher.update
        inBuffer  = java.nio.DirectByteBuffer[pos=2634 lim=2634 cap=8192]
        outBuffer= java.nio.DirectByteBuffer[pos=2624 lim=8192 cap=8192]

2624 instead of 2634 !!!


OpenJDK:
@ JceAesCtrCryptoCodec.process: BEFORE cipher.update
        inBuffer  = java.nio.DirectByteBuffer[pos=0 lim=7623 cap=8192]
        outBuffer= java.nio.DirectByteBuffer[pos=0 lim=8192 cap=8192]
@ JceAesCtrCryptoCodec.process: 0 n= 7623 inputSize= 7623
@ JceAesCtrCryptoCodec.process: AFTER  cipher.update
        inBuffer  = java.nio.DirectByteBuffer[pos=7623 lim=7623 cap=8192]
        outBuffer= java.nio.DirectByteBuffer[pos=7623 lim=8192 cap=8192]


So cipher.update(inBuffer, outBuffer) does not behave the same way on the IBM JVM: the input buffer is fully consumed, but fewer output bytes are reported.

However, I do not know the expected behavior of Cipher.update().
Maybe returning n < inputSize is correct, though rare, behavior, and the
Hadoop code does not handle this rare case correctly because it has never
been tested with the IBM JVM?
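For reference, the instrumentation above can be reproduced outside Hadoop in a few lines (the class name is mine). On OpenJDK this prints n == inputSize with both buffer positions fully advanced; per the traces above, the IBM JVM would advance inBuffer fully but report a smaller n:

```java
import java.nio.ByteBuffer;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class UpdateContractCheck {

    // Performs a single update() call on 'size' bytes of input and
    // returns {n, in.position(), out.position()} for inspection.
    static int[] check(int size) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE,
               new SecretKeySpec(new byte[16], "AES"),
               new IvParameterSpec(new byte[16]));
        ByteBuffer in = ByteBuffer.allocateDirect(size);
        ByteBuffer out = ByteBuffer.allocateDirect(8192);
        System.out.println("BEFORE in=" + in + " out=" + out);
        int n = c.update(in, out);
        System.out.println("n=" + n + " inputSize=" + size);
        System.out.println("AFTER  in=" + in + " out=" + out);
        return new int[] { n, in.position(), out.position() };
    }

    public static void main(String[] args) throws Exception {
        check(2634);
    }
}
```

Running this on each JVM side by side would show whether the discrepancy lives entirely inside the provider's update() implementation, independent of any Hadoop code.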

> org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS fails with IBM JVM
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-12106
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12106
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.6.0, 2.7.0
>         Environment: Hadoop 2.6.0 and 2.7+
>  - AIX/PowerPC/IBMJVM
>  - Ubuntu/i386/IBMJVM
>            Reporter: Tony Reix
>         Attachments: mvn.Test.TestCryptoStreamsForLocalFS.res20.AIX.Errors, 
> mvn.Test.TestCryptoStreamsForLocalFS.res20.Ubuntu-i386.IBMJVM.Errors, 
> mvn.Test.TestCryptoStreamsForLocalFS.res22.OpenJDK.Errors
>
>
> On AIX (IBM JVM available only), many sub-tests of :
>    org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
> fail:
>  Tests run: 13, Failures: 5, Errors: 1, Skipped: 
>   - testCryptoIV
>   - testSeek
>   - testSkip
>   - testAvailable
>   - testPositionedRead
> When testing SAME exact code on Ubuntu/i386 :
>   - with OpenJDK, all tests are OK
>   - with IBM JVM, tests randomly fail.
> The issue may be in the IBM JVM, or in some Hadoop code that does not 
> perfectly handle behavioral differences in the IBM JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
