I've never found the rationale for GHC's binary incompatibility
very convincing (yes, we want cross-package optimizations, and
yes, we like it when GHC V(n+1) does a better job of compiling
package P than GHC Vn did; but why can't GHC V(n+1) do at least
as good a job as GHC Vn with a package P compiled by GHC Vn?
Augment the .hi file format instead of replacing it completely,
or have a generic it-works-with-all-versions-but-won't-be-fast
section, preceded by a preferably-use-this-for-speed-with-version-x
section).
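Concretely, I mean something like the following purely hypothetical
sketch (none of these names exist in GHC, whose real interface type
is ModIface):

    -- Hypothetical sketch only, not GHC's actual interface format.
    type GhcVersion = String            -- e.g. "6.8.3"

    -- Enough to type-check against the module, in a form any GHC
    -- version can read, but carrying no optimisation information.
    data GenericSection = GenericSection
      { exportedNames :: [String]
      , exportedTypes :: [String]
      }

    -- Optimisation information (unfoldings, strictness, ...) that
    -- only the producing GHC version knows how to use.
    data VersionedSection = VersionedSection
      { unfoldings :: [String]
      , strictness :: [String]
      }

    data PortableIface = PortableIface
      { ifaceModule   :: String
      , ifaceGeneric  :: GenericSection                    -- works with all versions
      , ifaceSpecific :: [(GhcVersion, VersionedSection)]  -- preferably use for speed
      }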
It's more than that: the RTS ABI changes between versions too. For example,
in 6.8 we introduced pointer tagging, which is a complete ABI change. In
6.12 we'll probably introduce the new code generator, which will no doubt
include a complete revamp of the calling conventions, and hence complete
ABI incompatibility.
Right, thanks for pointing that out. But that only affects GHC API
clients that use the full range, frontend/backend/runtime, such as
GHC itself or GHCi. Many GHC API clients, like Haddock or HaRe, only
need the frontend, and for them a partial solution that does not
address the RTS ABI issue would be sufficient, right?
And then a Haddock 2 built with GHC V1 could process source code
that depends on libraries built with V2, by reading the portable parts
of their interface files, even though GHC V1 wouldn't be able to link
or load the V2-built libraries.
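To make "frontend only" concrete, the kind of GHC API use such tools
need looks roughly like this (a sketch only: MyModule.hs is a
placeholder, GHC.Paths comes from the ghc-paths package, and the exact
signatures vary between GHC API versions, which is rather the point):

    import GHC
    import GHC.Paths (libdir)   -- from the ghc-paths package

    -- Frontend-only use of the GHC API: parse and type-check a module,
    -- never touching the code generator or the RTS.
    main :: IO ()
    main = runGhc (Just libdir) $ do
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags
      t <- guessTarget "MyModule.hs" Nothing
      setTargets [t]
      _ <- load LoadAllTargets
      modSum <- getModSummary (mkModuleName "MyModule")
      parsed <- parseModule modSum
      _ <- typecheckModule parsed
      return ()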
Without such a partial solution, GHC-API-based tools like Haddock 2
seem severely crippled compared to their non-GHC-API-based
predecessors, as the Haddock/GHC build issues demonstrate.
Yes, we could probably make new modules able to call old modules by
implementing the necessary compatibility goop in the code generator,
but that's a lot of work, and hard to get right.
I've sometimes wondered why we can make GHC V1-compiled code
available to C clients, but not to GHC V2-compiled clients (or even
to Hugs, Yhc, etc.). I assume the reasons are efficiency and the
non-automated data type conversions?
In the special case of linking GHC V2-built libraries with GHC V1-built
libraries, couldn't each library use its own version of an RTS package,
as it would if exported via FFI?
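For comparison, exposing GHC-compiled code through the stable C ABI is
as simple as this (a minimal sketch; a C client initialises that
library's own RTS via hs_init/hs_exit from HsFFI.h and then calls
triple like any other C function):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Triple (triple) where

    import Foreign.C.Types (CInt)

    -- Exported through the C calling convention, so any client that can
    -- call C can call it, regardless of which compiler built the client,
    -- at the cost of marshalling everything through C types and of each
    -- side dragging along its own runtime system.
    foreign export ccall triple :: CInt -> CInt

    triple :: CInt -> CInt
    triple x = 3 * x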
I've been saying to various people that one of our top priorities for 6.12
should be to make it possible to build packages with a well-defined ABI, so
that you can upgrade a package in place without having to rebuild
everything that depends on it. Then, when we ship a new minor version of
GHC, we can ensure that the packages have the same ABIs as in the previous
version, so you won't have to rebuild all your locally installed packages
(we'd have to require no RTS ABI changes between minor versions).
That sounds very promising. Any progress in reducing binary
incompatibilities between libraries built with different versions of
GHC would be welcome.
The work I've been doing on using fingerprints for recompilation
checking is a step in this direction: we can now calculate a fingerprint
for the ABI of a package. The missing part is being able to compile a
package with a stable, predictable ABI, and that's the part I want to
tackle in 6.12. There will still be ABI incompatibility between major GHC
releases, but in between major releases we'll be a lot freer to replace
individual packages and the compiler independently.
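(The idea, as I understand it, is roughly the following, sketched here
with the Fingerprint combinators nowadays exposed in base's
GHC.Fingerprint; purely illustrative, since the real implementation
hashes the typechecked declarations stored in the interface files, not
strings:)

    import GHC.Fingerprint (Fingerprint, fingerprintString, fingerprintFingerprints)

    -- Illustrative only: hash the rendered signature of each export,
    -- combine those into a per-module ABI hash, then combine the
    -- per-module hashes into a package ABI fingerprint.
    moduleAbi :: [String] -> Fingerprint
    moduleAbi exportSigs =
      fingerprintFingerprints (map fingerprintString exportSigs)

    packageAbi :: [[String]] -> Fingerprint
    packageAbi = fingerprintFingerprints . map moduleAbi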
Ah, I'd been wondering about that when I saw the patch notice,
but couldn't see how it would help with cross-GHC-version
compatibility: wouldn't the recompilation checker just notice
that the fingerprint isn't the newest possible and therefore want
to rebuild the old library?
Great to know that the issue is now so high on your list,
Claus