On 2015-05-06 10:04, dliang wrote:
> On 2015-04-30 18:53, ishikawa wrote:
>> On 2015-04-30 10:51, dliang wrote:
>>> Actually, in the current Mozilla build system, gold is the default linker
>>> if your OS already has it installed. The following code is copied from
>>> mozilla/configure (my source version is tb-beta 34-b1):
>>>
>>> if test "$GNU_CC" -a -n "$MOZ_FORCE_GOLD"; then
>>>     if $CC -Wl,--version 2>&1 | grep -q "GNU ld"; then
>>>         GOLD=$($CC -print-prog-name=ld.gold)
>>>         case "$GOLD" in
>>>         /*)
>>>             ;;
>>>         *)
>>>             GOLD=$(which $GOLD)
>>>             ;;
>>>         esac
>>>         if test -n "$GOLD"; then
>>>             mkdir -p $_objdir/build/unix/gold
>>>             rm -f $_objdir/build/unix/gold/ld
>>>             ln -s "$GOLD" $_objdir/build/unix/gold/ld
>>>             if $CC -B $_objdir/build/unix/gold -Wl,--version 2>&1 | grep -q "GNU gold"; then
>>>                 LDFLAGS="$LDFLAGS -B $_objdir/build/unix/gold"
>>>             else
>>>                 rm -rf $_objdir/build/unix/gold
>>>             fi
>>>         fi
>>>     fi
>>> fi
>>>
>>> On 2015-04-29 07:07, Mike Hommey wrote:
>>>> On Tue, Apr 28, 2015 at 05:24:16PM -0500, Joshua Cranmer wrote:
>>>>> On 4/28/2015 4:28 PM, ISHIKAWA, Chiaki wrote:
>>>>>> On 2015/04/28 23:30, Toan Pham wrote:
>>>>>>> Linking is a process of resolving symbols from object files with
>>>>>>> external libraries, and it should not take that long.  When you have a
>>>>>>> chance, try to compile the project under a ram filesystem like tmpfs.
>>>>>>> I use it most of the time, even when I have an SSD drive.  The tmpfs
>>>>>>> will speed up compilation significantly.
>>>>>>>
>>>>>>
>>>>>> With a full debug build of Thunderbird (without -gsplit-dwarf), I think
>>>>>> libxul.so grows to close to 1GB in size.
>>>>>> So creating it puts a heavy workload on I/O, and on memory as well,
>>>>>> since an ordinary linker's data structures for handling that many
>>>>>> symbols simply run out of 32-bit address space during linking.
>>>>>>
>>>>>> Currently, with -gsplit-dwarf, libxul.so is 357.2MB. That is still
>>>>>> rather large.
>>>>>>
>>>>>> I may want to try to set aside 2GB of RAM to tmpfs given this size of
>>>>>> libxul.so, but as of now, linking libxul runs in a reasonable time by
>>>>>> using GNU gold linker and -gsplit-dwarf option to GCC.
>>>>>>
>>>>>> Thank you for the suggestion, if something goes wrong with
>>>>>> linking/build I may try to create a large tmpfs using RAM.
>>>>>>
>>>>>> At the same time, I have to report that I monitored memory usage during
>>>>>> the build, and saw that most memory was used as cache/buffers during
>>>>>> linking. Hence I am not sure whether a RAM-backed tmpfs will bring much
>>>>>> speedup in my environment. My CPU is probably 1 to 1.5 generations
>>>>>> behind the latest one. That may explain the slow speed.
>>>>>
>>>>> Linking effectively requires building a list of used files, concatenating
>>>>> the used sections together, dropping redundant COMDATs, and then patching
>>>>> offsets into the binary, which requires scanning all of the .o files at
>>>>> least once--the key, then, is to make sure that the scanning process never
>>>>> touches disk. In other words, you need enough RAM that the .o files stay
>>>>> resident in the file system cache from when they were last touched in a
>>>>> compile-edit-rebuild cycle. Outside of ensuring you have at least 8GB of
>>>>> RAM [1], the only recommendation I can give for speeding up is using
>>>>> split-dwarf (removes the ginormous debug information from the linking
>>>>> equation) and using gold, at least on Linux. I don't think ramfs style
>>>>> builds speed up builds if the filesystem is already resident in cache.
>>>>
>>>> (note gold is the default for local builds)
>>>>
>>
>> Dear Liang,
>>
>> I am not entirely sure whether configure in my environment picks up GNU gold
>> automatically.
>> Actually, I created a ~/bin/ld shell script that invokes the GNU gold linker
>> explicitly, and put my ~/bin near the beginning of the PATH environment
>> variable, before /usr/bin, so that the "ld" found by configure is always my
>> ~/bin/ld and invokes GNU gold no matter what /usr/bin/ld symlinks to.
>> (This trick is also useful for OTHER programs whose configure scripts don't
>> look for GNU gold seriously.)
>>
>> You might want to check that the GNU gold on your computer, under whatever
>> filename, *IS* actually invoked during the build of TB.
>> (I suppose major Linux distributions install the GNU gold linker as ld.gold
>> by default, but you may want to check just in case. Which distribution of
>> Linux do you use, assuming you use Linux?)
>>
>> If your link time is still in the 10-20 minute range using the GNU gold
>> linker, you might want to increase RAM.
>> How much RAM does your computer have?
>> I think 8GB is the bare minimum for comfortable linking.
>> (I am assuming that you use a 64-bit OS.)
>>
>> Also, what is your CPU? I am just curious.
>> With four cores allocated to a virtual machine, my build time after
>> modifying several files is not that bad, including a not-so-long link time
>> (don't forget the -gsplit-dwarf CC option), and this is inside VirtualBox.
>>
>> If you have several disk drives, try spreading the load: source files on
>> one disk, the object tree on another, and so on, on different I/O channels
>> (i.e., not multiplexed on one bus) if possible, although this is easier
>> said than done when your disks are already almost full.
>> Also, try using the fastest disk for object files. (You may need to
>> experiment here.)
>>
>> That is all I can think of speeding up link time significantly.
>>
>> CI
>>
> 
> Dear ishikawa,
> Thank you for taking the time to reply despite your busy schedule.

You are very welcome. :-)

> Your answer perfectly solved my problems. Thank you very much.

From reading the following, I think you have a reasonably powerful CPU.
As in my own experience, a slow disk and 8GB of memory put a limit on I/O
throughput, so -gsplit-dwarf was very helpful.
I now use -gsplit-dwarf all the time.

Anyway, welcome to the exciting world of developers who compile TB locally :-)

CI
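P.S. For the "is gold actually being used?" question in this thread, a quick sanity check is to ask the compiler driver which ld it resolves to, and then ask that linker to identify itself (the same trick mozilla/configure uses; exact output depends on your toolchain):

```shell
# Which ld binary will gcc invoke?
gcc -print-prog-name=ld

# What does that linker call itself? The first line says "GNU gold" if
# gold is the effective linker, or "GNU ld" for the classic BFD linker.
gcc -Wl,--version 2>&1 | head -n 1
```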

> Now my TB re-build time has been reduced to about one minute, and the time
> spent linking libxul.so is roughly 20~50 seconds; this is all thanks to
> the build option "-gsplit-dwarf".
> My CPU info is as follows:
> processor     : 3
> vendor_id     : GenuineIntel
> cpu family    : 6
> model         : 58
> model name    : Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
> stepping      : 9
> microcode     : 0x17
> cpu MHz               : 1600.000
> cache size    : 6144 KB
> 
> and my memory info:
> MemTotal:        8067636 kB
> MemFree:         2164108 kB
> Buffers:          564716 kB
> Cached:          3872940 kB
> 
> my linux distribution is Ubuntu 14.04.2 LTS
> 
> These facts prove that "-gsplit-dwarf" is a very useful option.
> About the gold-ld discussion: I believe the TB build system uses gold-ld
> when linking libxul, but during compilation it just uses ld, so I used the
> command "ln -s /usr/bin/gold /usr/bin/ld" to replace ld.
> And about the use of tmpfs: after trying it, I think it will not save time,
> because my CPU usage is almost 100% throughout the whole build, so I think
> CPU frequency, not disk I/O, is the limiting factor for build time.
> 
_______________________________________________
dev-builds mailing list
dev-builds@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-builds
