Is the Opal compiler itself written in Opal? I guess it must be, or else you
wouldn't be having this problem.

We have a similar problem with the Zorba XQuery engine. In that case, while
Zorba is coded in C++, some of the C++ files are generated from XQueries,
which leads to a similar chicken/egg problem. Our needs are not exactly the
same as yours, but they're pretty similar, so perhaps what we did would be
educational.

I worked for a long time to come up with a system which met the following
requirements:

1. Allowed building straight out of subversion (i.e., without requiring Zorba
to already exist). This obviously requires checking the generated .cpp files
into subversion along with the input XQueries (.xq files).
2. Would automatically re-generate the .cpp files if any of the input .xq
files were modified (obviously only on second or subsequent builds once
there was a functional Zorba executable).
3. Would accomplish (2) in such a way that developers could not easily
forget to check in the new pre-generated .cpp files when the input .xq files
were changed.
4. Would NOT trigger full rebuilds unnecessarily.

That last one was the most difficult. The naive way to meet requirements
(1)-(3) with CMake was to check for a functional Zorba executable at CMake
configuration time and, if it existed, add build rules (using
ADD_CUSTOM_COMMAND) to generate the .cpp files, with dependencies on the .xq
files. Unfortunately, that meant that if you started with a clean tree,
built, and then re-ran cmake, the next build would rebuild the world, because
the newly-added rules would re-generate all the files unnecessarily (some of
them are actually .h files which end up being included by almost
everything).
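
For illustration, that naive version looked roughly like this (ZORBA_EXE,
ZORBA_ARGS and foo.xq are placeholders here, not our real code); the culprit
is the if() around the rule, which is only re-evaluated when cmake itself
runs:

    # Naive approach: decide at *configure* time whether Zorba exists.
    set(ZORBA_EXE ${CMAKE_BINARY_DIR}/bin/zorba${CMAKE_EXECUTABLE_SUFFIX})
    if(EXISTS ${ZORBA_EXE})
      # Only taken on the second and later cmake runs; the newly-added rule
      # then regenerates foo.cpp with a fresh timestamp, which triggers a
      # rebuild of everything that depends on it.
      add_custom_command(
        OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/foo.cpp
        COMMAND ${ZORBA_EXE} ${ZORBA_ARGS}            # placeholder arguments
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/foo.xq)
    endif()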

My solution is fairly complex, but it DOES meet all the requirements.

A. The pre-generated .cpp files are checked in to svn, but in subdirectories
named "pregenerated", NOT in the normal source directories.
B. CMake itself does not check whether Zorba is functional. Instead, it
unconditionally adds, for every generated source file, a custom command that
executes a script. (I wrote the script in CMake as well, for portability.)
This custom command depends on the .xq files and on the script file. (A
rough sketch of this wiring follows the list below.)
C. This script checks whether Zorba works (a sketch of the script also
follows the list).
   i. If Zorba does NOT work, it copies the pre-generated source file into
the current *binary* directory using "cmake -E copy_if_different".
  ii. If Zorba DOES work, it runs the XQuery and generates the file into the
current *binary* directory. It then uses "cmake -E copy_if_different" to
copy the newly-generated file back into the corresponding "pregenerated"
directory. (This meets requirement (3): svn will now see those files as
modified, so it's hard to forget to check them in.)
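
To make (B) a bit more concrete, here's a stripped-down sketch of the
wiring; the names XQUERY_SOURCES, GENERATED_CPPS and generate_source.cmake
are invented for this example, not our actual code:

    # CMakeLists.txt: always add the rule; the script decides what to do.
    set(GEN_SCRIPT ${CMAKE_CURRENT_SOURCE_DIR}/cmake/generate_source.cmake)
    set(ZORBA_EXE  ${CMAKE_BINARY_DIR}/bin/zorba${CMAKE_EXECUTABLE_SUFFIX})

    foreach(xq ${XQUERY_SOURCES})
      get_filename_component(base ${xq} NAME_WE)
      set(out ${CMAKE_CURRENT_BINARY_DIR}/${base}.cpp)
      add_custom_command(
        OUTPUT  ${out}
        COMMAND ${CMAKE_COMMAND}
                -DZORBA_EXE=${ZORBA_EXE}
                -DINPUT_XQ=${CMAKE_CURRENT_SOURCE_DIR}/${xq}
                -DOUTPUT_CPP=${out}
                -DPREGEN_CPP=${CMAKE_CURRENT_SOURCE_DIR}/pregenerated/${base}.cpp
                -P ${GEN_SCRIPT}
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/${xq} ${GEN_SCRIPT}
        COMMENT "Generating ${base}.cpp")
      list(APPEND GENERATED_CPPS ${out})
    endforeach()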
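
The script itself (step C) then boils down to something like the following.
The Zorba command line shown is purely illustrative; I'm not reproducing our
real invocation here:

    # generate_source.cmake (hypothetical name), run via "cmake -P" at build time.
    # Expects ZORBA_EXE, INPUT_XQ, OUTPUT_CPP and PREGEN_CPP to be passed with -D.
    if(EXISTS "${ZORBA_EXE}")
      # Zorba has been built: run the XQuery, writing into the *binary* directory.
      execute_process(
        COMMAND "${ZORBA_EXE}" -f -q "${INPUT_XQ}"   # illustrative arguments only
        OUTPUT_FILE "${OUTPUT_CPP}"
        RESULT_VARIABLE result)
      if(NOT result EQUAL 0)
        message(FATAL_ERROR "Zorba failed on ${INPUT_XQ}")
      endif()
      # Copy the fresh output back over the checked-in copy, so svn sees it as
      # modified and the developer is reminded to commit it (requirement 3).
      execute_process(COMMAND "${CMAKE_COMMAND}" -E copy_if_different
                              "${OUTPUT_CPP}" "${PREGEN_CPP}")
    else()
      # No working Zorba yet (clean bootstrap): fall back to the pre-generated file.
      execute_process(COMMAND "${CMAKE_COMMAND}" -E copy_if_different
                              "${PREGEN_CPP}" "${OUTPUT_CPP}")
    endif()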

This all works because of a slightly odd quirk in CMake: if you list a .cpp
file as an input to ADD_LIBRARY() or ADD_EXECUTABLE(), CMake will happily
compile that file from either the current source dir OR the current binary
dir. In this case, I've set things up so that it will only ever find the
file in the binary directory. That gives me the fine-grained control I need
over exactly when the .cpp file's timestamp changes, which makes all the
dependency checking work correctly.
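
A tiny example of what that looks like (file names invented):

    # foo.cpp does not exist in the current *source* directory (only
    # pregenerated/foo.cpp does), so CMake resolves this bare name to
    # ${CMAKE_CURRENT_BINARY_DIR}/foo.cpp -- i.e. the file produced by the
    # custom command above.
    add_library(runtime
      handwritten.cpp
      foo.cpp)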

As I said, it's complex, but it meets all our requirements - and I tried
probably a half-dozen simpler arrangements that fell down in one way or
another prior to coming up with this little mess. The biggest downside is
that it defers the "is Zorba working?" check until build time, and in fact
it has to repeat this check for EVERY generated source file, which is
slightly wasteful. However, two facts make this not a big deal:

- The script first checks whether the Zorba executable actually EXISTS,
which is fast. If it doesn't, it quickly falls through to the "copy
pre-generated file" step.
- On later builds, the script is only actually executed when the input .xq
files change, because of the dependencies I set up in ADD_CUSTOM_COMMAND().

The upshot is that the slowness only matters when a developer actually
changes the input .xq files. We generate about 110 files, and that repeated
check adds probably 7 seconds to the build, so it's not the end of the world
even then.

So! We don't have a "distribution tarball" as you do, but this arrangement
allows us to have a single consistent setup in source control that works
whether you're building from scratch or already have a successful build.
Perhaps something similar could meet your needs without having to resort to
checking in a tarball. I would imagine it would be very challenging to keep
that tarball in sync as the input files change!

One important note: The scheme we have currently does NOT work very well in
Visual Studio 2010. It works fine with VS2010's nmake, but if you generate a
VS2010 IDE project, things go awry - I believe the current symptom is that
it tends to rebuild everything ALL the time. We have not yet figured out if
this is a flaw with our scheme, with VS2010, or with CMake's VS2010
generator, although our current bet is on CMake - certainly there have been
other reports of difficulties with custom commands in VS2010. If you're
coming from a strictly Makefile-based project, this probably won't matter to
you, but I thought I should mention it.

Ceej
aka Chris Hillery


2010/9/17 Christoph Höger <choe...@umpa-net.de>

> Hi all,
>
> we are working on a port of our old, handcrafted build system for OPAL
> (http://user.cs.tu-berlin.de/~opal/opal-language.html)
> to cmake.
>
> The Opal compiler produces C code, so in theory bootstrapping that beast
> on any machine should be easy as long as the C code is shipped in a
> distribution tarball (make will detect that it is already present and not
> run the not-yet-existing compiler). After unpacking that distribution
> tarball, we re-invoke a configure script on the build tree (to get host
> characteristics like integer sizes) and then simply run make.
>
> When switching to CMake we would not have a configure script, and the
> C sources would live in the build directory. So how can we create such a
> distribution tarball? Is there anything in the CMake universe that could
> help us, or do we have to write a CMake script for that task?
>
> best regards,
>
> Christoph
>
> --
> Christoph Höger
>
> Technische Universität Berlin
> Fakultät IV - Elektrotechnik und Informatik
> Übersetzerbau und Programmiersprachen
>
> Sekr. TEL12-2, Ernst-Reuter-Platz 7, 10587 Berlin
>
> Tel.: +49 (30) 314-24890
> E-Mail: christoph.hoe...@tu-berlin.de