On 4/28/2015 4:28 PM, ISHIKAWA, Chiaki wrote:
On 2015/04/28 23:30, Toan Pham wrote:
Linking is the process of resolving symbols across object files and
external libraries, and it should not take that long. When you have a
chance, try compiling the project on a RAM-backed filesystem like tmpfs.
I use one most of the time, even though I have an SSD. The tmpfs
will speed up compilation significantly.
With a full debug build of Thunderbird (without -gsplit-dwarf), I think
libxul.so grows to nearly 1 GB in size.
So creating it imposes a heavy workload in terms of I/O, and
in terms of memory pressure too, since an ordinary linker's data
structures for handling that many symbols simply run out of
32-bit address space during linking.
Currently, with -gsplit-dwarf, libxul.so is 357.2 MB, which is still rather large.
Given that size I may want to set aside 2 GB of RAM for tmpfs,
but as of now, linking libxul finishes in a reasonable time by
using the GNU gold linker and the -gsplit-dwarf option to GCC.
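For reference, that combination is typically wired up through a mozconfig along these lines (a sketch only; it assumes GCC plus GNU binutils with gold installed, and exact option spellings may vary with your tree):

```
# mozconfig sketch (assumed setup: GCC toolchain, gold available)
ac_add_options --enable-debug

# Keep DWARF debug info in side .dwo files so the linker never sees it.
export CFLAGS="$CFLAGS -gsplit-dwarf"
export CXXFLAGS="$CXXFLAGS -gsplit-dwarf"

# Link with GNU gold instead of BFD ld.
export LDFLAGS="$LDFLAGS -fuse-ld=gold"
```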
Thank you for the suggestion; if something goes wrong with
linking or the build, I may try creating a large RAM-backed tmpfs.
At the same time, I have to report that I monitored memory usage
during a build and saw that most of the memory was used as cache/buffers
during linking. Hence I am not sure a RAM-backed tmpfs would bring
much speedup in my environment. My CPU is probably 1 to 1.5 generations
behind the latest one, which may explain the slow speed.
Linking effectively requires building a list of used files,
concatenating the used sections together, dropping redundant COMDATs,
and then patching offsets into the binary, which requires scanning all
of the .o files at least once--the key, then, is to make sure that the
scanning process never touches disk. In other words, you need enough RAM
that the .o files stay resident in the file system cache from when they
were last touched in a compile-edit-rebuild cycle. Outside of ensuring
you have at least 8GB of RAM [1], the only recommendation I can give for
speeding up is using split-dwarf (removes the ginormous debug
information from the linking equation) and using gold, at least on
Linux. I don't think ramfs-style builds speed things up if the
filesystem is already resident in cache.
[1] To be clear, 8GB of RAM isn't the minimum necessary to build mozilla
code (that number appears to be ~4GB). It's just that if you have less
than 8GB of RAM, you're using an underpowered machine to begin with and
therefore should expect that your build times are going to be slow as a
result. Buying more RAM is a relatively cheap investment and it solves
problems much more effectively than trying to convince people to
radically overhaul build systems.
--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
_______________________________________________
dev-builds mailing list
dev-builds@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-builds