Eric Noulard wrote:
2007/12/11, Jason Stewart <[EMAIL PROTECTED]>:
This is not scientific, but I wrote a quick perl script to compile a
simple library that we use (GCTPc). It consists of 70 C files, most
of them between 5K and 6K, with a few as large as 70K. The script
just uses the time() function to grab the elapsed seconds and runs three
tests. The first runs one cl.exe process with all 70 files and the '-c'
flag to only compile. The second compiles each C file with its own
invocation of cl.exe. The last repeats the first, but with the new,
experimental '/MP' flag that does multiprocessor builds.
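
Roughly, the script looks like this (a simplified sketch; the file
glob and the exact cl.exe command lines here are illustrative, not
the real script):

#!/usr/bin/perl
# Simplified sketch of the timing script; the file glob and the
# exact cl.exe command lines are illustrative, not the real script.
use strict;
use warnings;

my @sources = glob("*.c");    # the 70 C files

sub timed {
    my ($label, $build) = @_;
    my $start = time();
    $build->();
    printf "%-12s: %d seconds\n", $label, time() - $start;
}

# Test 1: one cl.exe process compiling all files at once
timed("all files", sub { system("cl.exe", "-c", @sources) });

# Test 2: a separate cl.exe invocation per file
timed("single files", sub { system("cl.exe", "-c", $_) for @sources });

# Test 3: one invocation with the experimental /MP flag
timed("mp build", sub { system("cl.exe", "/MP", "-c", @sources) });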

I get the following times for these files:
    all files    : 2 seconds
    single files : 7 seconds
    mp build     : 1 second

I repeated this test with a set of 15 larger C++ files that use
template code and take significantly longer to compile, and I got the
following times:
    all files    : 60 seconds
    single files : 78 seconds
    mp build     : 51 seconds

So, even on the single-processor build, the single invocation is
almost 25% faster.


Take it all with a grain of salt.

I'm not a big MS Platform user, but I like the idea
of compilation speed-up very much.

I personally use ccache (http://ccache.samba.org/)
on Linux + gcc and see a 2x to 4x _SPEEDUP_
(when recompiling; the initial compilation is slower)
on a C++ project with 50+ files and moderate template usage.
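
If you want to measure that on your own project, here is a rough
sketch (it assumes a Makefile-based build where CC can be overridden;
adapt it to your setup):

#!/usr/bin/perl
# Rough sketch: time a clean rebuild without ccache, then twice with
# it (the first ccache pass only fills the cache; the second shows
# the warm-cache speedup). Assumes a Makefile build where CC can be
# overridden; adapt to your own setup.
use strict;
use warnings;

for my $cc ("gcc", "ccache gcc", "ccache gcc") {
    system("make", "clean") == 0 or die "make clean failed";
    my $start = time();
    system("make", "CC=$cc") == 0 or die "make failed";
    printf "%-12s: %d seconds\n", $cc, time() - $start;
}

The second ccache pass is the interesting number; the first one pays
the cost of filling the cache.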

Yes, recompiling goes faster, but if you have actually made changes to
the source code (which is often the case when you're compiling ;-)),
then ccache won't give you anything.

Most people I've seen claim that ccache is useful are people who are
stuck with a build system with broken dependencies and need to do
"make clean" a lot. ;-)

Actually, ccache is at its best when you can use it to share build
results.
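
For example (a sketch; the /shared/ccache path is hypothetical and
stands in for any directory that all the build machines can reach):

#!/usr/bin/perl
# Sketch: point ccache at a shared cache directory so that object
# files compiled on one machine are reused on the others. The
# /shared/ccache path is hypothetical; any directory reachable by
# all build machines will do.
use strict;
use warnings;

$ENV{CCACHE_DIR}   = "/shared/ccache";
$ENV{CCACHE_UMASK} = "002";    # keep cached results group-writable
system("make", "CC=ccache gcc") == 0 or die "build failed";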

--
/Jesper


