[
https://issues.apache.org/jira/browse/HADOOP-18456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606651#comment-17606651
]
ASF GitHub Bot commented on HADOOP-18456:
-----------------------------------------
steveloughran opened a new pull request, #4909:
URL: https://github.com/apache/hadoop/pull/4909
The fix for HADOOP-18456/IMPALA-11592 is, if our hypothesis is correct,
in WeakReferenceMap.create(): a strong reference to the new value is
kept in a local variable *and referred to later*, so that the JVM will
not GC it.
### Description of PR
WeakReferenceMap.create() is now resilient to GC taking place during
its creation process.
Local variables were renamed to show when refs are strong vs. weak.
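The pattern described above can be sketched in a minimal, self-contained form. This is not the Hadoop WeakReferenceMap code; the class, field, and method names below are hypothetical stand-ins for illustration only:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a map with weakly referenced values, showing
// why create() must hold (and later use) a strong local reference.
public class WeakValueMap<K, V> {
    private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> factory;

    public WeakValueMap(Function<K, V> factory) {
        this.factory = factory;
    }

    public V create(K key) {
        // Strong reference in a local variable: as long as this local is
        // used again later, the new value cannot be garbage collected
        // while the map holds only a WeakReference to it.
        V strongRef = factory.apply(key);
        map.put(key, new WeakReference<>(strongRef));
        // Returning strongRef is the "referred to later" part. Returning
        // map.get(key).get() instead could observe null if a GC ran
        // between put() and get(). (Note: a JIT may collect an object
        // even while a local is in scope if it is never used again;
        // Java 9+ offers Reference.reachabilityFence for this.)
        return strongRef;
    }

    public V get(K key) {
        WeakReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }
}
```

The key design point is that the caller receives the strong reference directly, so the value stays reachable for as long as the caller needs it, independent of GC activity against the weak entry in the map.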
### How was this patch tested?
There's a new test, but otherwise code review.
### For code changes:
- [X] Does the title of this PR start with the corresponding JIRA issue
id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> NullPointerException in ObjectListingIterator's constructor
> -----------------------------------------------------------
>
> Key: HADOOP-18456
> URL: https://issues.apache.org/jira/browse/HADOOP-18456
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Reporter: Quanlong Huang
> Assignee: Steve Loughran
> Priority: Blocker
>
> We saw NullPointerExceptions in Impala's S3 tests: IMPALA-11592. It's thrown
> from the hadoop jar:
> {noformat}
> Caused by: java.lang.NullPointerException
>     at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.<init>(Listing.java:621)
>     at org.apache.hadoop.fs.s3a.Listing.createObjectListingIterator(Listing.java:163)
>     at org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:144)
>     at org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:212)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4790)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$37(S3AFileSystem.java:4732)
>     at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
>     at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
>     at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2363)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2382)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4731)
>     at org.apache.impala.common.FileSystemUtil.listFiles(FileSystemUtil.java:754)
>     ...
> {noformat}
> We are using a private build of the hadoop jar. Version: CDP
> 3.1.1.7.2.16.0-164
> Code snippet of where the NPE throws:
> {code:java}
> 604   @Retries.RetryRaw
> 605   ObjectListingIterator(
> 606       Path listPath,
> 607       S3ListRequest request,
> 608       AuditSpan span) throws IOException {
> 609     this.listPath = listPath;
> 610     this.maxKeys = listingOperationCallbacks.getMaxKeys();
> 611     this.request = request;
> 612     this.objectsPrev = null;
> 613     this.iostats = iostatisticsStore()
> 614         .withDurationTracking(OBJECT_LIST_REQUEST)
> 615         .withDurationTracking(OBJECT_CONTINUE_LIST_REQUEST)
> 616         .build();
> 617     this.span = span;
> 618     this.s3ListResultFuture = listingOperationCallbacks
> 619         .listObjectsAsync(request, iostats, span);
> 620     this.aggregator = IOStatisticsContext.getCurrentIOStatisticsContext()
> 621         .getAggregator(); // <---- thrown here
> 622   }
> {code}
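The failure mode hypothesized above can be illustrated with a minimal sketch. None of the names below are Hadoop APIs; Context and lookupSafely are hypothetical stand-ins. A value reachable only through a WeakReference may be collected at any time, so chaining a method call directly onto the lookup result is exactly the NPE pattern in the stack trace:

```java
import java.lang.ref.WeakReference;

// Hypothetical illustration: a weakly referenced value can vanish
// under GC, so the raw lookup result must be null-checked before use.
public class WeakLookupNpe {
    // Stand-in for the statistics context held via a WeakReference.
    static class Context {
        String getAggregator() { return "aggregator"; }
    }

    static String lookupSafely(WeakReference<Context> ref) {
        Context ctx = ref.get(); // may be null once GC has run
        // Chaining ref.get().getAggregator() directly, with no null
        // check, is what throws NullPointerException in the trace above.
        return ctx != null ? ctx.getAggregator() : null;
    }

    public static void main(String[] args) {
        Context strong = new Context(); // strong ref keeps the value alive
        WeakReference<Context> ref = new WeakReference<>(strong);
        System.out.println(lookupSafely(ref));
    }
}
```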
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]