steveloughran commented on code in PR #6010:
URL: https://github.com/apache/hadoop/pull/6010#discussion_r1312878914


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java:
##########
@@ -345,6 +345,7 @@ private void uploadBlockAsync(DataBlocks.DataBlock blockToUpload,
             return null;
           } finally {
             IOUtils.close(blockUploadData);
+            blockToUpload.close();

Review Comment:
   Right: because the line above calls BlockUploadData.close(), most of the cleanup should already take place; the .startUpload() call on L315 sets the blockToUpload.buffer ref to null, so that reference doesn't retain a hold on the ByteBuffer.
   
   This is why we haven't seen problems like OOM or excess disk usage yet: cleanup was happening.
   
   Can you review the code to make sure the sequence at L348 always does the right thing and doesn't fail due to an attempted double delete of the file?
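
   The concern above is about a block being closed twice through two cleanup paths. A minimal sketch of the usual defence, with a hypothetical `DiskBlock` class (not the actual ABFS `DataBlocks.DataBlock` API), is to make `close()` idempotent via a compare-and-set guard, so a second call is a no-op rather than a second file delete:
   
   ```java
   import java.io.Closeable;
   import java.util.concurrent.atomic.AtomicBoolean;
   
   /**
    * Hypothetical sketch, not the real ABFS classes: a disk-backed block
    * whose close() is idempotent, so invoking it after an earlier cleanup
    * (e.g. via BlockUploadData.close()) never attempts a second delete.
    */
   class DiskBlock implements Closeable {
     private final AtomicBoolean closed = new AtomicBoolean(false);
     int deleteAttempts = 0;  // visible only so the demo can count deletes
   
     @Override
     public void close() {
       // compareAndSet flips false -> true exactly once, so the delete
       // runs at most once even if both cleanup paths call close()
       if (closed.compareAndSet(false, true)) {
         deleteAttempts++;  // stand-in for deleting the backing file
       }
     }
   }
   ```
   
   With a guard like this, the extra `blockToUpload.close()` in the `finally` block is harmless even when the earlier cleanup already released the block.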



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
