[
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479174#comment-16479174
]
Steve Loughran commented on HADOOP-15469:
-----------------------------------------
Risk of change? Only if you execute a job where at job setup all was good, and
yet at job completion something other than _temporary has arrived. It's just a
safety check, but one which isn't handling that case.
If we wanted to retain it, you could do something like:
if exists(dest), ls dest, filter out the temporary _* and hidden .* files,
then fail iff the filtered list is non-empty.
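A minimal sketch of that filtered check, in plain Java (the class and method
names here are illustrative, not the Hadoop committer API): list the
destination, discard _-prefixed temporary and .-prefixed hidden entries, and
fail only if anything visible remains.

```java
import java.util.List;

// Sketch of the retained safety check: instead of failing because the
// destination directory merely exists, inspect its contents and fail only
// if it holds entries that are neither _* temporary nor .* hidden files.
// hasVisibleOutput and its argument are hypothetical names for illustration.
public class DestCheckSketch {
    static boolean hasVisibleOutput(List<String> entries) {
        return entries.stream()
                .anyMatch(name -> !name.startsWith("_") && !name.startsWith("."));
    }
}
```

With this, a destination containing only `_temporary` and `.spark-staging`
would pass, while one containing `part-00000` would still trigger the failure.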
> S3A directory committer commit job fails if _temporary directory created
> under dest
> -----------------------------------------------------------------------------------
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.1.0
> Environment: spark test runs
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary
> files/dirs have been created. Spark work can create such a dir for placement
> of absolute files.
> This is because commitJob() looks for the dest dir existing, not containing
> non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means
> jobs which would commit with the classic committer & overwrite=false will
> fail.
> Proposed fix: remove the check.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]