[
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987305#comment-14987305
]
Brahma Reddy Battula commented on HADOOP-12053:
-----------------------------------------------
bq. The patch adds test TestHarFileSystemBasics#testCheckPath, but I'm not sure
the test really covers the bug. I tried reverting the main code change and
running the test, and it still passed. Ideally, we'd have a test that fails
before the fix and then passes after the fix.
I think the test does fail without the other changes in the patch, on a tree
that predates HADOOP-12304. The exception thrown looks like the following:
{noformat}
org.apache.hadoop.fs.InvalidPathException: Invalid path name Wrong FS: har://file-localhost/hadoop/work/hadoop-common-project/hadoop-common/build/test/data/localfs/path1/path2/my.har, expected: har://file-localhost:0
	at org.apache.hadoop.fs.AbstractFileSystem.checkPath(AbstractFileSystem.java:391)
	at org.apache.hadoop.fs.TestHarFileSystemBasics.testCheckPath(TestHarFileSystemBasics.java:417)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}
But with HADOOP-12304 in place, the test won't fail, with or without the other
changes in the patch. So IMO HADOOP-12304 already fixed this issue itself; only
the terminology used in the two descriptions differs. In other words, this JIRA
is a duplicate of HADOOP-12304.
But when I looked at the following trace, where I had originally seen the
issue, the line numbers (see {{DelegateToFileSystem.java:124}}) tell me that
the HADOOP-12304 patch was already present there.
{noformat}
Exception in thread "main" org.apache.hadoop.fs.InvalidPathException: Invalid path name Wrong FS: har://hdfs-hacluster/tmp/archived/application_1444976980780_0001-application_1444976980780_0001-1444977302478.har/securedn/logs/application_1444976980780_0001, expected: har://hdfs-hacluster/
	at org.apache.hadoop.fs.AbstractFileSystem.checkPath(AbstractFileSystem.java:390)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:124)
	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1169)
	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1165)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1165)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1630)
{noformat}
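For reference, the change this JIRA tracks comes down to the default URI port
that {{HarFs}} reports. A minimal sketch of the shape of the change, assuming
{{HarFs}} extends {{DelegateToFileSystem}} as on trunk (the actual diff is in
the attached patches, not here):
{code}
package org.apache.hadoop.fs;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;

// Sketch only, not the attached patch: report a default URI port of 0
// instead of -1, so AbstractFileSystem#checkPath can accept paths whose
// URIs omit the port when fs.defaultFS is configured without one.
public class HarFs extends DelegateToFileSystem {
  HarFs(final URI theUri, final Configuration conf)
      throws IOException, URISyntaxException {
    super(theUri, new HarFileSystem(), conf, "har", false);
  }

  @Override
  public int getUriDefaultPort() {
    return 0; // was -1, which per this report cannot pass checkPath
  }
}
{code}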
[~jira.shegalov], as [~cnauroth] pointed out, we should have a testcase that
demonstrates this bug. Can you update the testcase?
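Something along these lines might demonstrate it. This is only a hypothetical
sketch meant to slot into {{TestHarFileSystemBasics}}; the {{localArchivePath}}
fixture is assumed, not taken from the patch:
{code}
// Hypothetical sketch, not the attached patch: drive checkPath through the
// FileContext -> DelegateToFileSystem route from the trace above, against a
// local .har so no cluster is needed. localArchivePath is an assumed fixture
// pointing at a .har created during test setup.
@Test
public void testFileContextExistsOnHar() throws Exception {
  Configuration conf = new Configuration();
  conf.set("fs.defaultFS", "file:///"); // default FS configured without a port
  Path harPath = new Path("har://file-localhost"
      + localArchivePath.toUri().getPath());
  harPath = harPath.getFileSystem(conf).makeQualified(harPath);
  // Before the fix this died in AbstractFileSystem#checkPath with
  // InvalidPathException ("Wrong FS: ..."); after it, exists() should answer.
  Assert.assertTrue(FileContext.getFileContext(harPath.toUri(), conf)
      .util().exists(harPath));
}
{code}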
> Harfs defaulturiport should be Zero ( should not -1)
> ----------------------------------------------------
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Brahma Reddy Battula
> Assignee: Gera Shegalov
> Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch,
> HADOOP-12053.003.patch
>
>
> HarFs overrides the {{getUriDefaultPort}} method of AbstractFileSystem and
> returns -1. But -1 cannot pass the {{checkPath}} method when
> {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
> *Test Code:*
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0
>       && applicationId.toString().compareTo(edges[1]) <= 0) {
>     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
>     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
>     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
>         harPath, applicationId, appOwner,
>         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
>     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
>         .exists(remoteAppDir)) {
>       remoteDirSet.add(remoteAppDir);
>     }
>   }
> }
> {code}