[ 
https://issues.apache.org/jira/browse/HADOOP-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18035711#comment-18035711
 ] 

ASF GitHub Bot commented on HADOOP-18090:
-----------------------------------------

github-actions[bot] commented on PR #4700:
URL: https://github.com/apache/hadoop/pull/4700#issuecomment-3494233982

   We're closing this stale PR because it has been open for 100 days with no 
activity. This isn't a judgement on the merit of the PR in any way. It's just a 
way of keeping the PR queue manageable.
   If you feel like this was a mistake, or you would like to continue working 
on it, please feel free to re-open it and ask for a committer to remove the 
stale tag and review again.
   Thanks all for your contribution.




> Exclude com/jcraft/jsch classes from being shaded/relocated
> -----------------------------------------------------------
>
>                 Key: HADOOP-18090
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18090
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.3.1
>            Reporter: mkv
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Spark 3.2.0 transitively introduces hadoop-client-api and 
> hadoop-client-runtime dependencies.
> When we create a SFTPFileSystem instance 
> (org.apache.hadoop.fs.sftp.SFTPFileSystem) it tries to load the relocated 
> classes from _com.jcraft.jsch_ package.
> The filesystem instance creation fails with error:
> {code:java}
> java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.com.jcraft.jsch.SftpException
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357) {code}
> Excluding the client artifacts from Spark's transitive dependencies and 
> depending directly on hadoop-common/hadoop-client is the workaround that 
> works for us.
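A minimal sketch of the workaround described above, expressed as Maven dependency configuration. The Spark artifact coordinates and version numbers are illustrative assumptions, not taken from the issue; only the hadoop-client-api/hadoop-client-runtime/hadoop-common artifact names come from the report:

{code:xml}
<!-- Sketch only: exclude the shaded Hadoop client artifacts that relocate
     com.jcraft.jsch, then depend on the unshaded hadoop-common directly.
     Coordinates/versions below are placeholders. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.12</artifactId>
  <version>3.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client-api</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Unshaded artifact, so com.jcraft.jsch classes resolve unrelocated -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.3.1</version>
</dependency>
{code}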



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
