I've been playing around with various Amazon EC2 instances lately. They
offer an x1.32xlarge instance that features 128 vCPUs. So I did what any
engineer with access to a corporate AWS account would do: I obtained an
instance and measured how long it took to build Firefox!
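
If you want to play along at home, launching one is a single AWS CLI
call. The AMI ID below is a placeholder, and this isn't necessarily how
I provisioned mine:

$ aws ec2 run-instances --instance-type x1.32xlarge \
      --image-id ami-XXXXXXXX --count 1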

$ time ./mach build
...
Overall system resources - Wall time: 237s; CPU: 37%; Read bytes: 626688;
Write bytes: 15316516864; Read time: 32; Write time: 2306432

real        3m59.590s
user    172m42.840s
sys       12m49.444s
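
As a sanity check on that 37% CPU figure, divide total CPU time by wall
time to get the average number of busy cores:

    (10362.8s user + 769.4s sys) / 239.6s wall ≈ 46.5 cores busy
    46.5 / 128 vCPUs ≈ 36%

which agrees with what the resource monitor reported.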

According to `dstat`, we do manage to saturate all available CPU cores and
get 0% idle CPU for a large chunk of the "compile" tier.
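
Watching it live is easy enough. Something like the following (not
necessarily the exact invocation I used) prints timestamped per-second
CPU stats and makes the idle stretches obvious:

$ dstat -tc 1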

I think that level of saturation is impressive. Even more impressive is
that there were no race conditions! The build system has come a long way.

It's worth noting that a c4.8xlarge (which "only" has 36 vCPUs) can do a
full build in 5 minutes, only about a minute slower. Some of that gap is
explained by the c4's higher clock speed. But most of it comes down to
the fact that there are still large chunks of the build that can't
saturate all available cores. In particular, there are long stretches of
low core utilization:

* during configure
* between the "export" and "compile" tiers (WebIDL and IPDL processing
delay start of compile tier)
* during libxul linking
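
If you want to see these gaps for yourself, the build system writes a
resource log for every build, and you should be able to replay the last
one with:

$ ./mach resource-usage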

Just thought people would like to know how well C++ compilation scales in
our build system these days. We've come a long way.