I think you've veered onto a different (but related) topic here. I was talking about binary compatibility between libraries; you're talking about allowing the GHC API to make use of .hi files generated by a different version of GHC.

Right. I was also assuming that, to link with a library, you'd have to
compile against it, but your scenario seems to be: compile against
version A, then link against any compatible version. I can see how
that would arise with dynamically loaded libraries.

You can have binary compatibility without .hi-file compatibility (due to .hi format changes), and vice versa. Hmm, complicated, isn't it?

Complicated, indeed. And with increasing use of the GHC API,
the effects of those complications are spreading. The bootstrap
issues are only the first warning.

I'm not proposing right now to strive for any kind of .hi-file compatibility except between minor releases of GHC. That is, I think it would be nice, and quite easily achievable, for GHC 6.8.3 to read 6.8.2's .hi files (perhaps it can already, I don't know).

Haddock will be part of the Haskell Platform, so it'll come with a set of compatible tools and libraries, and most users won't encounter any problems.

I'm used to building and installing one version of Alex/Happy/Haddock somewhere, and only rebuilding them when I actually need to update them: when builds start to fail or features need to change. Now I'll need to rebuild Haddock even if I haven't updated it, and do so for every single version of GHC+libraries I want to use it with. The same is going to happen with every tool built on top of the GHC API (HaRe, tag file generator, library search engine, ...): each and every one of those tools will have to be rebuilt or re-downloaded,
and a separate version of each of those tools will have to be installed,
for every version of the Haskell Platform.

Gluing all components into monolithic blocks of concrete that contain
copies of everything wasn't my idea of how the Haskell Platform would improve things!-)

Please reconsider striving for a form of cross-version compatibility
that will solve this issue for GHC Api clients, as a priority on par with binary compatibility.

In the special case of linking GHC V2-built libraries with GHC V1-built
libraries, couldn't each library use its own version of an RTS package, as it would if exported via FFI?

There are so many problems with this that I don't know where to start. Who does the marshalling? Who generates the marshalling code? What about cross-runtime pointers? Have you thought this through? :-)

Not really, apparently!-) I was thinking that each item of data would be self-describing and self-evaluating, but while evaluating something can call the RTS associated with that something, one would still need a way
to inspect/use the evaluation results based on one RTS from code based
on another RTS. Which is where the marshalling issues come in and the idea goes down the drain :-( Perhaps there is a way to have a universal,
version-independent description format (e.g., you don't need to know
how a function is represented, as long as you know how to call it; you
don't need to know how a data structure is represented, as long as you
can fold over it; etc.), but I doubt that could be bolted on as an afterthought to something as complex as GHC.
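
To make the "fold over it" idea slightly less hand-wavy, here is a minimal Haskell sketch (names like FoldList are made up for illustration; it is just the familiar encoding of a list by its fold, nothing GHC-specific): if producers hand out only the fold, consumers never depend on the concrete representation, which would then be free to differ between versions.

    {-# LANGUAGE RankNTypes #-}
    module FoldDemo where

    -- A list described only by how it can be folded: consumers never
    -- see, or depend on, the concrete representation.
    newtype FoldList a =
      FoldList { foldList :: forall r. (a -> r -> r) -> r -> r }

    -- Producer side: wrap an ordinary list into the opaque description.
    fromList :: [a] -> FoldList a
    fromList xs = FoldList (\cons nil -> foldr cons nil xs)

    -- Consumer side: use the description without knowing how it is built.
    toList :: FoldList a -> [a]
    toList f = foldList f (:) []

    sumFold :: FoldList Int -> Int
    sumFold f = foldList f (+) 0

Scaling that up to arbitrary data, functions, sharing, and two runtimes in one process is, of course, exactly where the marshalling problems reappear.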

Ah, I'd been wondering about that when I saw the patch notice,
but couldn't see how it would help with cross-GHC-version
compatibility: wouldn't the recompilation checker just notice
that the fingerprint isn't the newest possible and therefore want
to rebuild the old library?

I don't think I understand this question, but I'll try to answer. It helps with cross-version compatibility by telling us when we have managed to create a compatible ABI - the fingerprints will be the same (this glosses over a large number of details and problems, many of which I haven't thought through yet, but that's the basic idea).

The point I kept stumbling over was: the same as what? To compare
fingerprints, you need two: one in the binary library, one generated
from somewhere. But to generate a fingerprint, don't you have to
compile the sources?

Perhaps there is no problem there, but it seems to come down to
Haskell's decision not to have module interfaces: I can't say that
I'm importing any module providing a specified interface; I can
only say that I'm importing a given module, with the interface
implied by what that module exports. So I can't have a fingerprint
for a module import interface and compare that against the fingerprint in a binary library. I can only compare with the fingerprint of another module that hopefully exports a compatible interface.
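
Just to make the comparison concrete, here is roughly the kind of check I have in mind, using GHC.Fingerprint from (later versions of) base. The idea of fingerprinting a textual rendering of the import interface, and all the names below, are my own invention; GHC's real fingerprints are computed over interface declarations, not source text.

    module IfaceCheck where

    import GHC.Fingerprint (Fingerprint, fingerprintString)

    -- Hypothetical canonical rendering of the interface a client was
    -- compiled against (export list, types, ...).
    expectedIface :: String
    expectedIface = "Data.Foo: foo :: Int -> Int, bar :: String -> String"

    -- The fingerprint the client expects ...
    expectedFp :: Fingerprint
    expectedFp = fingerprintString expectedIface

    -- ... compared against the fingerprint recorded with an installed library.
    compatibleWith :: Fingerprint -> Bool
    compatibleWith recordedFp = recordedFp == expectedFp

The sticking point is where expectedIface comes from: without explicit interface declarations, the only way to obtain it is to compile some module that happens to export the right things.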

A related issue is the boot files for recursive modules: they
attempt to describe module interfaces independently of module
source code, to break up the recursion. Or consider modules parameterised by their imports, which would avoid many a need for module recursion and give a language-level representation of what you're trying to achieve with library binary compatibility; but again, that needs a representation of module interfaces.
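
For anyone who hasn't met them, here is a minimal boot-file example (module and function names invented): the .hs-boot file is precisely a hand-written, interface-only description that GHC checks against the real A.hs.

    -- A.hs-boot: an interface-only description of module A, so that B
    -- can import it while A itself imports B.
    module A where

    data T
    f :: T -> Int

    -- B.hs: imports only the boot interface of A.
    module B where

    import {-# SOURCE #-} A

    g :: T -> Int
    g t = f t + 1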

Perhaps Haskell' should reconsider the idea of having an
explicit representation of module interfaces, to get all that
work out of the dark and into the open?
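
In the meantime, the closest approximation within the language itself is to reify an interface as an ordinary value and parameterise client code over it, e.g. a record of operations; a well-worn workaround rather than a proposal, and all names below are made up:

    module LoggerDemo where

    -- An interface reified as an ordinary value: the operations a
    -- client needs, bundled in a record.
    data Logger = Logger
      { logMsg   :: String -> IO ()
      , logError :: String -> IO ()
      }

    -- Client code depends only on the interface value, not on any
    -- particular logging module.
    runJob :: Logger -> IO ()
    runJob l = do
      logMsg l "starting"
      logError l "not implemented yet"

    -- One possible implementation, supplied at the call site.
    stdoutLogger :: Logger
    stdoutLogger = Logger putStrLn (putStrLn . ("error: " ++))

But that only moves interfaces to the value level; the compatibility question lives at the module and package level, which is where an explicit representation is missing.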

Claus
