On 2017-03-08 5:12 PM, Steve Fink wrote:
> On 03/08/2017 06:21 AM, Ehsan Akhgari wrote:
>> On 2017-03-07 2:49 PM, Eric Rahm wrote:
>>> I often wonder if unified builds are making things slower for folks
>>> who use ccache (I assume one file changing would mean a rebuild of
>>> the entire unified chunk).  I'm not sure if there's a solution to
>>> that, but it would be interesting to see if compiling w/o ccache is
>>> actually faster at this point.
>> Unified builds are the only way we build, so this is only of
>> theoretical interest.  But at any rate, if your use case is building
>> the tree once every couple of days, you should definitely disable
>> ccache, with or without unified builds.  ccache is only helpful for
>> folks who end up compiling the same code over and over again (for
>> example, if you use interactive rebase a lot, or if you switch
>> between branches, or use other VCS commands that touch file
>> modification times without changing the contents).  Basically, if
>> you don't switch between branches a lot and don't write a lot of C++
>> code, ccache probably hurts you more than it helps you.
> 
> We only build unified, but it's not a binary thing -- within the JS
> engine, I found that unified builds substantially speed up full
> compiles, but also substantially slow down incremental compiles. 

Yes, that is the general trade-off.
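
(To make the ccache advice above actionable: a minimal sketch, assuming
ccache was enabled through your mozconfig.  CCACHE_DISABLE is ccache's
own switch, so exporting it turns ccache off no matter how it was
hooked up.)

    # In your mozconfig, drop this line if you have it:
    #   ac_add_options --with-ccache
    # Or disable ccache just for a one-off build:
    export CCACHE_DISABLE=1
    ./mach build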

> And
> that wasn't a great tradeoff, given that it's pretty common to iterate
> on a single C++ file: you'd end up rebuilding the concatenation of
> dozens of files, which took a long time before the compile errors
> showed up. I found setting FILES_PER_UNIFIED_FILE to 6 to be a pretty good
> compromise [1], but it depends on the average size of your .cpp files
> and the common usage patterns of people using that portion of the tree.
> Oh, and the number of files in a directory -- if you don't have that
> many files and you have a lot of cores, then you want enough chunks to
> keep your cores busy.

Also on the kind of code you're compiling.  C++ compilers can spend a
ton of time doing things like instantiating templates, so parts of the
codebase (like SpiderMonkey) that use more of the C++ features that are
slow to compile will be worse off with more .cpp files per unified
file.
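
For reference, the knob Steve mentioned lives in moz.build, next to the
sources it chunks.  A minimal sketch (the chunk size and file names are
made up for illustration; pick a value by measuring your own directory):

    # moz.build (sketch; hypothetical file names)
    FILES_PER_UNIFIED_FILE = 6  # smaller chunks: faster incremental
                                # rebuilds, slower full builds
    UNIFIED_SOURCES += [
        'Interpreter.cpp',
        'TypeInference.cpp',
    ]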

> Also note that dropping the unified sizes did *not* speed up the time to
> get a new JS shell binary after a single-cpp change, oddly enough. The
> compile time was much faster, but it seemingly made up for it by moving
> the time into a slower link. I guess unified compilation is sort of
> doing some of the linking up-front? Still, when iterations are bound by
> the time it takes to see the latest round of compile errors, dropping
> the unified chunk sizes can be a pretty big win. You might want to
> consider it for your component.

It's pretty hard to reason about this.  I think the best idea is to
experiment with different values and find a sweet spot.  To be
completely honest, I don't remember anymore how we picked the current
default, so it may just have been a random hard-coded number.  Don't
trust the default too much.  :-)
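
If you do experiment, the measurement itself is cheap.  A rough sketch
(the touched file is just an example; use one you actually iterate on):

    # Full-build cost at the current chunk size:
    ./mach clobber && time ./mach build

    # Single-file iteration cost:
    touch js/src/vm/Interpreter.cpp
    time ./mach build binaries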
