On 2014-08-25, 2:22 PM, Gregory Szorc wrote:
On 8/25/14 11:00 AM, Ehsan Akhgari wrote:
On 2014-08-25, 1:54 PM, Nathan Froyd wrote:
----- Original Message -----
Often my workflow looks something like this:
- change files in directory D
- rebuild only D, get a list of errors to fix
- ...iterate until no more errors
- try to rebuild a few related directories, fixing errors there
- then rebuild the entire tree, hopefully without errors
Often the changes to directory D are to header files that are included all over the tree. If I could only do full rebuilds, I would have to wait for a bunch of unrelated files to compile before I could see if the directory I'm interested in works. Then, if fixing an error required making a change to one of the header files again, I would have to wait for a ton of files to recompile again. A process that would have taken 20 minutes could be drawn out into an hour.

This happens really often when changing the JS engine or xpconnect, since files like jsapi.h, jsfriendapi.h, and xpcpublic.h are included almost everywhere.
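To make that loop concrete, here is a rough Python driver for the edit/rebuild cycle described above. It assumes a mozilla-central checkout and the usual "./mach build <relative-directory>" partial-build invocation; the script itself is purely illustrative, nothing like it exists in the tree.

#!/usr/bin/env python
# Illustrative driver for the edit/rebuild loop described above.  Assumes it
# is run from the top of a mozilla-central checkout and that
# "./mach build <dir>" performs a partial build of just that directory.
import subprocess
import sys


def partial_build(directory):
    """Run a partial build of `directory`; return True if it succeeded."""
    return subprocess.call(["./mach", "build", directory]) == 0


def iterate(directory):
    # Keep rebuilding until the directory compiles cleanly; the developer
    # fixes errors between iterations.
    while not partial_build(directory):
        try:
            input("Fix the errors above, then press Enter to rebuild "
                  + directory + "...")
        except EOFError:
            return


if __name__ == "__main__":
    iterate(sys.argv[1] if len(sys.argv) > 1 else "js/src")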
Strawman idea: add support for:
mach check-syntax <directory>...
which runs the compiler with -fsyntax-only (GCCish) or /Zs (MSVC) on
the source files in the given directory(ies). This sort of thing
should be fairly straightforward, given that we already support
something similar for generating preprocessed files in a given
directory. Then you get to keep the fast workflow, the build folks
get to ignore single-directory rebuilds, and everybody winds up
happy--well, as soon as people adjust to typing new commands for their
workflow, of course...
(This obviously doesn't address linking errors, but AIUI, the workflow
above isn't about linking errors, but build errors.)
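To make the strawman concrete, here is a minimal sketch of such a syntax-only pass, written as a standalone Python script rather than a real mach command. It assumes the caller supplies the compiler and flags by hand (a real implementation would pull them from the build backend) and only uses the GCC/clang spelling -fsyntax-only; MSVC would need /Zs instead.

#!/usr/bin/env python
# Minimal sketch of a per-directory syntax-only check.  A real
# "mach check-syntax" would get the compiler, flags, and file list from the
# build backend; here they come from the command line, which is only meant
# to illustrate the idea.
import os
import subprocess
import sys

SOURCE_EXTENSIONS = (".cpp", ".cc", ".c", ".mm")


def check_syntax(directory, compiler="clang++", flags=()):
    failures = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if not name.endswith(SOURCE_EXTENSIONS):
                continue
            path = os.path.join(root, name)
            # -fsyntax-only parses and type-checks but skips codegen;
            # the MSVC spelling would be /Zs instead.
            cmd = [compiler, "-fsyntax-only"] + list(flags) + [path]
            if subprocess.call(cmd) != 0:
                failures.append(path)
    return failures


if __name__ == "__main__":
    bad = check_syntax(sys.argv[1], flags=sys.argv[2:])
    sys.exit(1 if bad else 0)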
That will do nothing to reduce the overhead of the build system. Most
of the errors that Bill is talking about can be found the first time the
code is exercised by the compiler, so removing the codegen won't help
all that much.
Personally I find 15s of overhead for full builds bad enough to want a way
to build only one directory. Also, I wonder whether the numbers Gregory was
citing were measured on our slowest platform, Windows. If not, they will
probably translate to minutes on Windows, which is unacceptable IMO.
I was citing Linux/OS X. Windows is kind of screwed. New process overhead
and poor I/O through our POSIX-based tools conspire to make the build times
horrible. I don't think there's much we can do on Windows except a) cut down
on the number of processes we create during the build, or b) use a build
backend like Tup that monitors the filesystem and doesn't have to stat the
entire world at build time to construct a dependency graph.
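As a toy contrast between the two approaches (not a description of how Tup actually works internally), the sketch below stats every file in the tree in one function and, in the other, uses the third-party watchdog package to accumulate a dirty set from filesystem notifications between builds. The paths and the watchdog dependency are assumptions made for the example.

# Toy contrast between "stat the world on every build" and an event-driven
# backend that only looks at files it was told changed.  Uses the
# third-party watchdog package (pip install watchdog); this is an
# illustration, not how Tup is implemented.
import os
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


def stat_the_world(tree_root, last_build_time):
    """Recursive-make style: stat every file to discover what changed."""
    dirty = []
    for root, _dirs, files in os.walk(tree_root):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_mtime > last_build_time:
                dirty.append(path)
    return dirty


class DirtySet(FileSystemEventHandler):
    """Event-driven style: the OS tells us what changed; no tree walk."""
    def __init__(self):
        self.paths = set()

    def on_modified(self, event):
        if not event.is_directory:
            self.paths.add(event.src_path)


if __name__ == "__main__":
    handler = DirtySet()
    observer = Observer()
    observer.schedule(handler, ".", recursive=True)
    observer.start()
    time.sleep(30)  # edit some files under the current directory meanwhile
    observer.stop()
    observer.join()
    print("dirty files:", sorted(handler.paths))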
"kind of screwed" shouldn't be our only answer here. Recently someone
measured the CreateProcess overhead, and it came out to about 60ms per
process for them: <http://llvm.org/bugs/show_bug.cgi?id=20253#c4>. I don't
think anybody has measured the I/O overhead, but with some numbers we would
be able to tell where most of the overhead comes from. Still, without a
non-recursive build backend that isn't so keen on spawning shells and new
processes, I don't think we can ever get full builds to a low enough
overhead that we can kill per-directory builds. So it would be nice not to
plan on killing them before we have a useful replacement in place.
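For anyone who wants a number for their own machine, a crude way to estimate it is to time spawning a trivial child process in a loop. The sketch below measures process creation plus the child's startup cost, so treat it as an upper bound rather than a reproduction of the 60ms figure in the bug above.

# Crude measurement of per-process overhead: time N launches of a trivial
# child process.  This includes the child's own startup cost, so it is an
# upper bound on raw CreateProcess/fork+exec overhead.
import subprocess
import sys
import time

N = 200
start = time.time()
for _ in range(N):
    # A child that does nothing; on Windows the dominant cost is
    # CreateProcess, on POSIX it is fork+exec.
    subprocess.check_call([sys.executable, "-c", "pass"])
elapsed = time.time() - start
print("%.1f ms per process" % (elapsed / N * 1000))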
Cheers,
Ehsan