On Thu, Nov 14, 2013 at 09:43:01PM -0800, Gregory Szorc wrote:
> C++ developers,
>
> Over 90% of the CPU time required to build the tree is spent
> compiling or linking C/C++. So, anything we can do to make that
> faster will make the overall build faster.
>
> I put together a quick patch [1] to make it rather simple to extract
> compiler resource usage and very basic code metrics during builds.
>
> Simply apply that patch and build with `mach build --profile-compiler`
> and your machine will produce all kinds of potentially interesting
> measurements. They will be stuffed into objdir/.mozbuild/compilerprof/.
> If you don't feel like waiting (it will take about 5x longer than a
> regular build because it performs separate preprocessing, ast, and
> codegen compiler invocations 3 times each), grab an archive of an OS X
> build I just performed from [2] and extract it to objdir/.mozbuild/.
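Nice. Since the raw measurements end up as JSON under
objdir/.mozbuild/compilerprof/, it should be easy to poke at them with a
few lines of Python, e.g. to rank translation units by how long codegen
took. A rough sketch (I'm guessing at the field names here, I haven't
looked at what the patch actually writes):

    import glob
    import json
    import os

    # Directory the profiling build writes its per-file JSON into.
    profile_dir = "objdir/.mozbuild/compilerprof"

    results = []
    for path in glob.glob(os.path.join(profile_dir, "*.json")):
        with open(path) as f:
            data = json.load(f)
        # "codegen_time" and "preprocessor_lines" are guessed names;
        # adjust them to whatever the profiles really contain.
        results.append((data.get("codegen_time", 0.0),
                        data.get("preprocessor_lines", 0),
                        path))

    # Print the ten files with the slowest codegen.
    for codegen_time, pp_lines, path in sorted(results, reverse=True)[:10]:
        print("%8.2fs %9d lines  %s" % (codegen_time, pp_lines,
                                        os.path.basename(path)))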
A while ago I did something similar by just writing a shell script that
ran the compiler under perf, and then setting CC / CXX to be that shell
script. I'd guess that for most purposes doing each step three times is
overkill; we compile enough files that averaging over all of them should
give representative data.

> I put together an extremely simple `mach compiler-analyze` command
> to sift through the results. e.g.
>
> $ mach compiler-analyze preprocessor-relevant-lines
> $ mach compiler-analyze codegen-sizes
> $ mach compiler-analyze codegen-total-times
>
> Just run `mach help compiler-analyze` to see the full list of what
> can be printed. Or, write your own code to analyze the produced JSON
> files.
>
> I'm sure people who love getting down and dirty with C++ will be
> interested in this data. I have no doubt that there are compiler time
> and code size wins waiting to be discovered through this data. We may
> even uncover a perf issue or two. Who knows.

It's certainly interesting, but I expect finding commonalities between
files will be rather nontrivial :(

> Here are some questions I have after casually looking at the data:
>
> * The mean ratio of .o size to lines from preprocessor is 16.35
> bytes/line. Why do 38/4916 (0.8%) files have a ratio over 100? Why
> are a lot of these in js/ and gfx?

My first guess would be templates.

> * What's in the 150 files that have 100k+ lines after preprocessing
> that makes them so large?

Including a whole bunch of headers, maybe? Or maybe they just use a
bunch of expando macros?

> * Why does lots of js/'s source code gravitate towards the "bad"
> extreme for most of the metrics (code size, compiler time,
> preprocessor size)?

Do we even know these things are bad? It seems to me that if you use
less code to produce the same amount of binary code, that's a good
thing, so it may just be that the code in js/src is particularly dense
in some way (templates again?). Similarly, a large preprocessed size may
just mean a lot of inline functions, which may be a good thing for
performance.

Trev

> Disclaimer: This patch is currently hacked together. If there is an
> interest in getting this checked into the tree, I can clean it up
> and do it. Just file a bug in Core :: Build Config and I'll make it
> happen when I have time. Or, if an interested party wants to
> champion getting it landed, I'll happily hand it off :)
>
> [1] http://hg.gregoryszorc.com/gecko-collab/rev/741f3074e313
> [2] https://people.mozilla.org/~gszorc/compiler_profiles.tar.bz2
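P.S. In case anyone wants to try the perf approach I mentioned at the
top: the wrapper really was nothing fancy. A sketch of the idea, written
here in Python rather than shell for readability (the script name, the
COMPILER_PERF_LOG variable and the perf options are just illustrative):

    #!/usr/bin/env python
    # perf_cc.py: run the real compiler under `perf stat` and accumulate
    # the counters in a log file. Point CC/CXX at this script with the
    # real compiler as the first argument, e.g. CXX="perf_cc.py clang++".
    import os
    import subprocess
    import sys

    real_compiler = sys.argv[1]
    compiler_args = sys.argv[2:]

    # Illustrative default; override with COMPILER_PERF_LOG if you like.
    log_path = os.environ.get("COMPILER_PERF_LOG", "/tmp/compiler-perf.log")

    # `perf stat -x,` emits machine-readable counters; --append adds one
    # record per compiler invocation to the same log.
    cmd = ["perf", "stat", "-x", ",", "-o", log_path, "--append", "--",
           real_compiler] + compiler_args

    # Propagate the compiler's exit status so the build behaves normally.
    sys.exit(subprocess.call(cmd))

That gives per-invocation numbers without rebuilding anything three
times, though with much less detail than Gregory's patch collects.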