[ 
https://issues.apache.org/jira/browse/HADOOP-17293?focusedWorklogId=495943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-495943
 ]

ASF GitHub Bot logged work on HADOOP-17293:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Oct/20 15:07
            Start Date: 06/Oct/20 15:07
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2361:
URL: https://github.com/apache/hadoop/pull/2361#issuecomment-704335369


   Tests with markers=keep & delete are all good, other than:
   * continuous read() under-fulfillment breaking tests which expect their 
buffers to always be filled. Root cause is clearly some networking issue, but where?
   * ITestS3AInputStreamPerformance: the stream had to be reopened 3 times, rather 
than once. Didn't recur on a rerun.
   
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationInFileystem
   [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 386.753 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
   [ERROR] testDecompressionSequential128K(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)  Time elapsed: 215.757 s  <<< FAILURE!
   java.lang.AssertionError: 
   open operations in
   null expected:<1> but was:<3>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.assertOpenOperationCount(ITestS3AInputStreamPerformance.java:189)
        at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.assertStreamOpenedExactlyOnce(ITestS3AInputStreamPerformance.java:181)
        at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testDecompressionSequential128K(ITestS3AInputStreamPerformance.java:324)
   ```
   
   Assumption: again, network playing up.
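
   On the first bullet: java.io.InputStream.read(byte[], int, int) is allowed to return fewer bytes than requested even when the stream is nowhere near EOF, so any test asserting that one read() call fills its buffer is relying on behaviour the contract does not promise. An illustrative read-fully loop (class and method names are mine, not from the PR) that avoids the flakiness:

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.IOException;
   import java.io.InputStream;

   public class ReadFully {
       // Loop until the buffer is full or EOF is reached;
       // returns the number of bytes actually read.
       static int readFully(InputStream in, byte[] buf) throws IOException {
           int off = 0;
           while (off < buf.length) {
               int n = in.read(buf, off, buf.length - off);
               if (n < 0) break;   // EOF before the buffer filled
               off += n;           // short read: keep going, don't fail
           }
           return off;
       }

       // Convenience wrapper for demonstration.
       static int readFullyFromBytes(byte[] data, byte[] buf) throws IOException {
           return readFully(new ByteArrayInputStream(data), buf);
       }
   }
   ```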


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 495943)
    Time Spent: 1h  (was: 50m)

> refreshing S3Guard records after TTL-triggered-HEAD breaks some workflows
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-17293
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17293
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> an incidental part of HADOOP-13230 was a fix to innerGetFileStatus, wherein 
> after a HEAD request we would update the DDB record, thereby resetting its TTL.
> Applications which make remote updates to buckets without going through 
> S3Guard are now triggering failures in applications in the cluster when 
> those applications go to open the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
