Actually, in the current Mozilla build system, gold is the default linker if your OS already has ld.gold installed. The following code is copied from mozilla/configure (my tree is tb-beta 34-b1):

if test "$GNU_CC" -a -n "$MOZ_FORCE_GOLD"; then
    if $CC -Wl,--version 2>&1 | grep -q "GNU ld"; then
        GOLD=$($CC -print-prog-name=ld.gold)
        case "$GOLD" in
        /*)
            ;;
        *)
            GOLD=$(which $GOLD)
            ;;
        esac
        if test -n "$GOLD"; then
            mkdir -p $_objdir/build/unix/gold
            rm -f $_objdir/build/unix/gold/ld
            ln -s "$GOLD" $_objdir/build/unix/gold/ld
            if $CC -B $_objdir/build/unix/gold -Wl,--version 2>&1 | grep -q "GNU gold"; then
                LDFLAGS="$LDFLAGS -B $_objdir/build/unix/gold"
            else
                rm -rf $_objdir/build/unix/gold
            fi
        fi
    fi
fi
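For anyone who wants to exercise this code path explicitly, a minimal mozconfig sketch might look as follows. Hedged: MOZ_FORCE_GOLD is the variable the configure snippet above tests; the objdir path is only an example, not a project default.

```sh
# mozconfig sketch; MOZ_FORCE_GOLD is the variable tested by the configure
# snippet above, and the objdir path is only an example.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir
export MOZ_FORCE_GOLD=1   # have configure set up the ld.gold -B wrapper

# You can check which linker the compiler drives by default with the same
# probe configure uses:
#   $CC -Wl,--version 2>&1 | grep -q "GNU gold"
```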

On 2015-04-29 07:07, Mike Hommey wrote:
On Tue, Apr 28, 2015 at 05:24:16PM -0500, Joshua Cranmer wrote:
On 4/28/2015 4:28 PM, ISHIKAWA, Chiaki wrote:
On 2015/04/28 23:30, Toan Pham wrote:
Linking is the process of resolving symbols from object files against
external libraries, and it should not take that long. When you have a
chance, try compiling the project under a RAM file system such as tmpfs.
I use it most of the time, even though I have an SSD drive. The tmpfs
will speed up compilation significantly.
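For reference, a minimal sketch of setting up such a tmpfs on Linux. Assumptions, not project defaults: root access, a 4 GB size, and an object directory path chosen only for illustration.

```sh
# Sketch: mount a RAM-backed tmpfs over the build's object directory.
# The size and path are assumptions, not project defaults. Needs root.
mkdir -p ~/mozilla/objdir
sudo mount -t tmpfs -o size=4G tmpfs ~/mozilla/objdir
```

Note that the contents of a tmpfs vanish on unmount or reboot, so a full rebuild is needed after each boot.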


With a full debug build of Thunderbird (without -gsplit-dwarf), I think
libxul.so becomes close to 1 GB in size. Creating it therefore puts a
heavy workload on I/O, and on memory as well, since an ordinary linker's
data structures for handling the large number of symbols simply run out
of 32-bit address space during linking.

Currently, with -gsplit-dwarf, libxul.so is 357.2 MB. That is still rather large.

I may want to set aside 2 GB of RAM for tmpfs given this size of
libxul.so, but as of now, linking libxul runs in a reasonable time using
the GNU gold linker and the -gsplit-dwarf option to GCC.
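The effect of -gsplit-dwarf is easy to see on a toy file: the debug information lands in a side .dwo file that the linker never has to read. A minimal sketch, assuming GCC 4.8 or later; the file names are arbitrary:

```shell
# Sketch: -gsplit-dwarf moves debug info into a .dwo file next to the .o,
# so the linker only ever scans the (much smaller) .o files.
cat > demo.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF
gcc -g -gsplit-dwarf -c demo.c -o demo.o   # also writes demo.dwo
ls -l demo.o demo.dwo
```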

Thank you for the suggestion; if something goes wrong with linking or
the build, I may try creating a large tmpfs in RAM.

At the same time, I should report that I monitored memory usage during the
build and saw that most of the memory was used as cache/buffers during
linking. Hence I am not sure whether a RAM-backed tmpfs would bring much
speedup in my environment. My CPU is probably 1 to 1.5 generations behind
the latest one; that may explain the slow speed.

Linking effectively requires building a list of used files, concatenating
the used sections together, dropping redundant COMDATs, and then patching
offsets into the binary, which requires scanning all of the .o files at
least once. The key, then, is to make sure that the scanning process never
touches disk. In other words, you need enough RAM that the .o files stay
resident in the file system cache from when they were last touched in a
compile-edit-rebuild cycle. Outside of ensuring you have at least 8GB of RAM
[1], the only recommendation I can give for speeding things up is using
split DWARF (it removes the ginormous debug information from the linking
equation) and using gold, at least on Linux. I don't think ramfs-style
builds speed things up if the filesystem is already resident in cache.
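The page-cache effect described above can be observed with a trivial sketch; the file here is just a stand-in for a large .o, and the sizes are arbitrary:

```shell
# Sketch: a recently written file (like a .o from a fresh compile) is read
# back from the page cache, not from disk.
dd if=/dev/zero of=big.o bs=1M count=64 2>/dev/null  # stand-in for a large .o
time cat big.o > /dev/null   # warm read: served from the file system cache
# To measure the cold case you would drop caches first (needs root):
#   sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
rm -f big.o
```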

(note gold is the default for local builds)


_______________________________________________
dev-builds mailing list
dev-builds@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-builds
