[
https://issues.apache.org/jira/browse/HADOOP-18456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17607051#comment-17607051
]
ASF GitHub Bot commented on HADOOP-18456:
-----------------------------------------
steveloughran commented on code in PR #4909:
URL: https://github.com/apache/hadoop/pull/4909#discussion_r975123137
##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWeakReferenceMap.java:
##########
@@ -125,11 +128,67 @@ public void testDemandCreateEntries() {
}
+ /**
+ * It is an error to have a factory which returns null.
+ */
+ @Test
+ public void testFactoryReturningNull() throws Throwable {
+ referenceMap = new WeakReferenceMap<>(
+ (k) -> null,
+ null);
+ intercept(NullPointerException.class, () ->
+ referenceMap.get(0));
+ }
+
+ /**
+ * Test that the WeakReferenceThreadMap extension rejects null assignment.
+ */
+ @Test
+ public void testWeakReferenceThreadMapRejectsNullAssignment()
+ throws Throwable {
+ WeakReferenceThreadMap<String> threadMap = new WeakReferenceThreadMap<>(
+ id -> "Entry for thread ID " + id,
+ null);
+
+ Assertions.assertThat(threadMap.setForCurrentThread("hello"))
+ .describedAs("current thread map value on first set")
+ .isNull();
+
+ // second set returns the value set previously
+ Assertions.assertThat(threadMap.setForCurrentThread("hello"))
+ .describedAs("current thread map value on second set")
+ .isEqualTo("hello");
+ }
Review Comment:
ok
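For context on what the new test above is checking: a demand-creating weak-reference map must reject a factory that yields null, because a null value is indistinguishable from a garbage-collected entry. Below is a minimal standalone sketch of that invariant; the class and method names are hypothetical stand-ins, not the actual Hadoop `WeakReferenceMap` implementation.

```java
// Hypothetical sketch of a demand-creating weak-reference map that,
// like the test above, raises NullPointerException when its factory
// returns null. Names here are illustrative, not Hadoop's.
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class WeakMapSketch<K, V> {
  private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();
  private final Function<K, V> factory;

  WeakMapSketch(Function<K, V> factory) {
    this.factory = factory;
  }

  V get(K key) {
    WeakReference<V> ref = map.get(key);
    V value = (ref == null) ? null : ref.get();
    if (value == null) {
      // reject a null from the factory immediately; this is the NPE
      // which testFactoryReturningNull intercepts
      value = Objects.requireNonNull(factory.apply(key),
          "factory returned null for key " + key);
      map.put(key, new WeakReference<>(value));
    }
    return value;
  }

  public static void main(String[] args) {
    WeakMapSketch<Integer, String> m = new WeakMapSketch<>(k -> null);
    try {
      m.get(0);
      System.out.println("no exception");
    } catch (NullPointerException e) {
      System.out.println("NPE as expected");
    }
  }
}
```

The null check doubles as the demand-create path: a cleared weak reference and a missing entry are handled identically, which is why a legitimately-null value can never be allowed in.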
> NullPointerException in ObjectListingIterator's constructor
> -----------------------------------------------------------
>
> Key: HADOOP-18456
> URL: https://issues.apache.org/jira/browse/HADOOP-18456
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.3.9
> Reporter: Quanlong Huang
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
>
> We saw NullPointerExceptions in Impala's S3 tests: IMPALA-11592. It's thrown from the hadoop jar:
> {noformat}
> Caused by: java.lang.NullPointerException
>   at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.<init>(Listing.java:621)
>   at org.apache.hadoop.fs.s3a.Listing.createObjectListingIterator(Listing.java:163)
>   at org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:144)
>   at org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:212)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4790)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$37(S3AFileSystem.java:4732)
>   at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
>   at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
>   at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2363)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2382)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4731)
>   at org.apache.impala.common.FileSystemUtil.listFiles(FileSystemUtil.java:754)
> ... {noformat}
> We are using a private build of the hadoop jar. Version: CDP 3.1.1.7.2.16.0-164
> Code snippet of where the NPE is thrown:
> {code:java}
> 604 @Retries.RetryRaw
> 605 ObjectListingIterator(
> 606 Path listPath,
> 607 S3ListRequest request,
> 608 AuditSpan span) throws IOException {
> 609 this.listPath = listPath;
> 610 this.maxKeys = listingOperationCallbacks.getMaxKeys();
> 611 this.request = request;
> 612 this.objectsPrev = null;
> 613 this.iostats = iostatisticsStore()
> 614 .withDurationTracking(OBJECT_LIST_REQUEST)
> 615 .withDurationTracking(OBJECT_CONTINUE_LIST_REQUEST)
> 616 .build();
> 617 this.span = span;
> 618 this.s3ListResultFuture = listingOperationCallbacks
> 619 .listObjectsAsync(request, iostats, span);
> 620 this.aggregator = IOStatisticsContext.getCurrentIOStatisticsContext()
> 621 .getAggregator(); // <---- thrown here
> 622 }
> {code}
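The snippet shows the NPE can only come from line 621's chained call: `getCurrentIOStatisticsContext()` must have returned null, so `.getAggregator()` dereferences nothing. A minimal standalone sketch of that failure mode follows; the thread-local stand-in and class name are hypothetical, chosen only to mirror the shape of the call.

```java
// Hypothetical sketch of the failure mode at Listing.java:621: a per-thread
// context lookup returns null, and the chained .getAggregator()-style call
// on the result throws NullPointerException. Names are illustrative.
public class NullContextNpeSketch {
  // stand-in for the per-thread statistics context lookup
  static final ThreadLocal<Object> CURRENT_CONTEXT = new ThreadLocal<>();

  public static void main(String[] args) {
    // nothing was ever set for this thread, so the lookup yields null
    Object context = CURRENT_CONTEXT.get();
    try {
      // mirrors chaining a method call directly onto the lookup result
      context.toString();
      System.out.println("context present");
    } catch (NullPointerException e) {
      System.out.println("NPE from null thread context");
    }
  }
}
```

Splitting the chained call into a lookup plus an explicit null check (or guaranteeing the lookup never returns null) would turn the crash into a diagnosable condition.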
--
This message was sent by Atlassian Jira
(v8.20.10#820010)