[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127882#comment-14127882 ]

Colin Patrick McCabe commented on HADOOP-11064:
-----------------------------------------------

bq. I think that's a good sign that we could port hadoop.dll to use CMake and 
eliminate these kinds of pitfalls. That's a topic for another time though.

I agree, it would be great to see the Windows build of {{hadoop.dll}} use CMake.

bq. FWIW, I'd prefer some kind of solution that bundles the native build into a 
versioned jar. As far as I can tell, that provides the cleanest isolation for 
this use case. Unfortunately, the build infrastructure requirements for this 
look infeasible to me in the short-term.

It seems like this is a general issue with native dependencies of jars 
submitted through YARN.  It would be better to solve it inside YARN than to 
force a draconian internal API compatibility policy onto libhadoop.

bq. These issues make me think the versioning solution needs additional design 
work and buy-in from a large portion of the community before we settle on it. 
Restoring the old function signatures would be helpful to unblock downstream 
projects that are trying to test against 2.6.0-SNAPSHOT clusters right now. 
Granted, 2.6.0-SNAPSHOT isn't a real release, so we're technically under no 
obligation, but I see it as a matter of good citizenship.

I think the versioning solution does show good citizenship.  We are going out 
of our way to work around this issue with YARN and native dependencies.  It's 
not a perfect workaround, but I don't see how it can be, unless YARN is changed 
to somehow distribute these dependencies (as you hinted at?).
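
For illustration only, here is one shape such a versioning workaround could take 
(the class, method, and version constant below are invented for this sketch, not 
what the patch actually does): the Java side records the version it was built 
for and compares it against a version reported by the loaded library, so a 
mismatch fails fast with an explanatory message instead of a late 
{{UnsatisfiedLinkError}}.

{code}
// Hypothetical sketch of a load-time version handshake; not the committed
// design and not a real Hadoop API.
public final class LibhadoopVersionCheck {
  /** Version the Java classes were compiled against (assumed constant). */
  private static final String EXPECTED = "2.6.0";

  /** Assumed native hook returning the version baked into libhadoop. */
  private static native String getBuildVersion();

  static {
    // Loads libhadoop.so / hadoop.dll; nothing is resolved yet.
    System.loadLibrary("hadoop");
  }

  /** Call once before using any other native method. */
  public static void checkVersion() {
    String actual = getBuildVersion();
    if (!EXPECTED.equals(actual)) {
      throw new RuntimeException("libhadoop reports version " + actual
          + " but these jars expect " + EXPECTED
          + "; disable the native path or install matching binaries.");
    }
  }
}
{code}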

Also, what are the "old" function signatures?  The ones for Hadoop 2.4?  2.5?  
2.5.1?  libhadoop changes all the time and its development isn't frozen.
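
For context on why any such change breaks older jars: the JVM resolves a Java 
{{native}} declaration to a C symbol derived from the class and method name, and 
the lookup happens lazily at the first call.  The sketch below is purely 
illustrative (the class, method, and parameters are invented, not the real 
{{NativeCrc32}} code); it only shows where the {{UnsatisfiedLinkError}} comes from.

{code}
import java.nio.ByteBuffer;

// Illustrative only -- invented names, not the actual NativeCrc32 source.
public class NativeChecksumExample {
  static {
    // Loads libhadoop.so / hadoop.dll.  No native methods are resolved yet.
    System.loadLibrary("hadoop");
  }

  // An older jar carries a declaration like this, which the JVM maps to a
  // C symbol such as Java_NativeChecksumExample_verifyChunks the first time
  // the method is called.
  private static native void verifyChunks(ByteBuffer data, ByteBuffer sums);

  public static void main(String[] args) {
    // If the installed native library no longer exports a matching symbol
    // (e.g. the method was renamed or removed on the native side), the
    // lookup fails here, at call time, with java.lang.UnsatisfiedLinkError
    // -- even though loadLibrary above succeeded.
    verifyChunks(ByteBuffer.allocateDirect(512), ByteBuffer.allocateDirect(8));
  }
}
{code}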

I am +1 for v3 of the patch... and based on previous conversations, I think 
most people would be OK with adding versioning to libhadoop.  Perhaps bring it 
up on hadoop-dev if you think this needs more eyes?

I apologize again for being difficult here... you guys have done a lot of good 
work on the native library and on Windows support.  But I want to make sure 
we're not constrained in the future.  It may be a while before we can get a 
perfect solution to this.

> UnsatisfiedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 
> method changes
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-11064
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11064
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: native
>    Affects Versions: 2.6.0
>         Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
> 2.4 JARs
>            Reporter: Steve Loughran
>            Assignee: Colin Patrick McCabe
>            Priority: Blocker
>         Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, 
> HADOOP-11064.003.patch
>
>
> The private native method names and signatures in {{NativeCrc32}} were 
> changed in HDFS-6561; as a result, hadoop-common-2.4 JARs get unsatisfied 
> link errors when they try to perform checksums. 
> This essentially prevents Hadoop 2.4 applications from running on Hadoop 2.6 
> unless they are rebuilt and repackaged with the hadoop-2.6 JARs.


