[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=566404&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-566404 ]
ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 15/Mar/21 17:22
Start Date: 15/Mar/21 17:22
Worklog Time Spent: 10m
Work Description: ayushtkn commented on pull request #2732:
URL: https://github.com/apache/hadoop/pull/2732#issuecomment-799599534
Thanks @steveloughran for trying that out. I will figure out a way to run
that UT.
I found a doc:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html#Supporting_FileSystems_with_login_and_authentication_parameters
I will sort out the credential setup and try following this doc; let me know
if this isn't the right doc to follow.
I also added an HDFS contract test, and it hit the same exception as S3A:
```
java.lang.IllegalArgumentException: Wrong FS: file:/Users/ayushsaxena/code/hadoop-code/osCode/hadoop/hadoop-tools/hadoop-distcp/target/test-dir/TestHDFSContractDistCp/testDistCpWithIterator/local/dest, expected: hdfs://localhost:58099
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:806)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:257)
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1272)
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1259)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1204)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1200)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1218)
	at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2161)
	at org.apache.hadoop.fs.FileSystem$5.<init>(FileSystem.java:2287)
	at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:2284)
	at org.apache.hadoop.tools.contract.AbstractContractDistCpTest.testDistCpWithIterator(AbstractContractDistCpTest.java:622)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
```
I fixed it, so I expect S3A to work as well. I still need to check the ABFS
side; I believe it doesn't perform a filesystem check (`checkPath`), but I
will figure it out. Thanks!
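The `Wrong FS` failure above is raised by `FileSystem.checkPath` when a path's scheme or authority does not match the filesystem's own URI (here a `file:` destination was handed to an `hdfs://localhost:58099` filesystem). A minimal, Hadoop-free sketch of that check, just to illustrate the failure mode; the real `FileSystem.checkPath` also handles default URIs and canonical ports:

```java
import java.net.URI;
import java.util.Objects;

// Simplified sketch of the scheme/authority validation that
// FileSystem.checkPath performs; not the actual Hadoop implementation.
public class CheckPathSketch {
    private final URI fsUri;

    public CheckPathSketch(URI fsUri) {
        this.fsUri = fsUri;
    }

    /** Throws IllegalArgumentException if the path belongs to another filesystem. */
    public void checkPath(URI path) {
        String scheme = path.getScheme();
        if (scheme == null) {
            return; // scheme-less (relative) paths are assumed to be ours
        }
        if (!scheme.equalsIgnoreCase(fsUri.getScheme())
                || !Objects.equals(path.getAuthority(), fsUri.getAuthority())) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        CheckPathSketch hdfs = new CheckPathSketch(URI.create("hdfs://localhost:58099"));
        hdfs.checkPath(URI.create("hdfs://localhost:58099/user/src")); // accepted
        try {
            hdfs.checkPath(URI.create("file:/tmp/local/dest")); // wrong scheme
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This also suggests why the fix matters for every store: any `FileSystem` that skips such a check (as suspected for ABFS above) would silently list the wrong tree instead of failing fast.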
Issue Time Tracking
-------------------
Worklog Id: (was: 566404)
Time Spent: 4h 20m (was: 4h 10m)
> DistCp: Reduce memory usage on copying huge directories
> -------------------------------------------------------
>
> Key: HADOOP-17531
> URL: https://issues.apache.org/jira/browse/HADOOP-17531
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Ayush Saxena
> Assignee: Ayush Saxena
> Priority: Critical
> Labels: pull-request-available
> Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log
>
> Time Spent: 4h 20m
> Remaining Estimate: 0h
>
> Presently DistCp uses a producer-consumer style setup while building the
> listing; both the input queue and the output queue are unbounded, so the
> listStatus results grow quite large.
> Relevant code:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635
> This performs a breadth-first style traversal (it uses a queue instead of
> the earlier stack), so if the files sit at lower depths it effectively opens
> up the entire tree before it starts processing.
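The memory difference between the queue-based and stack-based traversals described above can be illustrated with a toy model (the directory names are hypothetical, and `SimpleCopyListing` itself is far more involved). The only thing measured is the peak size of the "pending" collection: FIFO order (breadth-first, the current behaviour) accumulates whole levels of the tree, while LIFO order (depth-first, the earlier stack-based behaviour) holds roughly one root-to-leaf slice at a time:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy traversal comparing queue (BFS) vs stack (DFS) frontier growth.
public class TraversalSketch {
    /** path -> child paths; paths absent from the map have no children. */
    static final Map<String, List<String>> TREE = buildTree();

    static Map<String, List<String>> buildTree() {
        // A small two-level tree: 4 directories under "/", 3 entries each.
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/", List.of("/d0", "/d1", "/d2", "/d3"));
        for (String d : tree.get("/")) {
            tree.put(d, List.of(d + "/e0", d + "/e1", d + "/e2"));
        }
        return tree;
    }

    /** Walks the whole tree and returns the peak size of the pending deque. */
    public static int peakPending(boolean breadthFirst) {
        Deque<String> pending = new ArrayDeque<>();
        pending.add("/");
        int peak = 1;
        while (!pending.isEmpty()) {
            // FIFO = breadth-first (queue), LIFO = depth-first (stack).
            String path = breadthFirst ? pending.pollFirst() : pending.pollLast();
            pending.addAll(TREE.getOrDefault(path, List.of()));
            peak = Math.max(peak, pending.size());
        }
        return peak;
    }

    public static void main(String[] args) {
        System.out.println("queue (breadth-first) peak: " + peakPending(true));  // 12
        System.out.println("stack (depth-first) peak: " + peakPending(false));   // 6
    }
}
```

On this tiny tree the queue's peak is already twice the stack's; on a tree with millions of entries at one level, an unbounded queue holds all of them at once, which matches the GC pressure this issue reports.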
--
This message was sent by Atlassian Jira
(v8.3.4#803005)