RE: A case exposing code sink issue
> Yes, the number of iterations of the i loop simply is too difficult for
> our loop iteration calculator to comprehend:
>
>   for (i=k; i<500; i+=k)
>
> iterates roundup((500-k)/k) times. In particular, if the step is
> non-constant our nr-of-iteration calculator gives up.

I'm trying to give an even smaller case:

int a[512] ;
int *a_p ;

int f(int k)
{
  int i ;

  for(i=0; i<k; i++)
  {
    a_p = &a[i] ;
    *a_p = 7 ;
  }
}

For this case, we have a very simple loop step "i++", then we would have
the GIMPLE before expand like below,

  :
  # i_13 = PHI
  # ivtmp.10_9 = PHI
  a_p_lsm.6_4 = &a[i_13];
  ivtmp.10_1 = ivtmp.10_9 + 4;
  D.4085_16 = (void *) ivtmp.10_1;
  MEM[base: D.4085_16, offset: 0B] = 7;
  i_6 = i_13 + 1;
  if (i_6 != k_3(D))
    goto ;
  else
    goto ;

  :
  # a_p_lsm.6_11 = PHI
  a_p = a_p_lsm.6_11;
  goto ;

Why can't we still sink &a[i_13] out of the loop? For example, I would
expect to generate code like below,

  :
  # i_13 = PHI
  # ivtmp.10_9 = PHI
  i_14 = i_13;
  ivtmp.10_1 = ivtmp.10_9 + 4;
  D.4085_16 = (void *) ivtmp.10_1;
  MEM[base: D.4085_16, offset: 0B] = 7;
  i_6 = i_13 + 1;
  if (i_6 != k_3(D))
    goto ;
  else
    goto ;

  :
  # a_p_lsm.6_11 = PHI
  a_p_lsm.6_4 = &a[i_14];
  a_p = a_p_lsm.6_11;
  goto ;

This way the computation of &a[i] would be saved within the loop.

Any idea?

Thanks,
-Jiangning
Re: A case exposing code sink issue
On Thu, Dec 22, 2011 at 9:25 AM, Jiangning Liu wrote:
>> Yes, the number of iterations of the i loop simply is too difficult for
>> our loop iteration calculator to comprehend:
>>
>>   for (i=k; i<500; i+=k)
>>
>> iterates roundup((500-k)/k) times. In particular, if the step is
>> non-constant our nr-of-iteration calculator gives up.
>
> I'm trying to give an even smaller case:
>
> int a[512] ;
> int *a_p ;
>
> int f(int k)
> {
>   int i ;
>
>   for(i=0; i<k; i++)
>   {
>     a_p = &a[i] ;
>     *a_p = 7 ;
>   }
> }
>
> For this case, we have a very simple loop step "i++", then we would have
> the GIMPLE before expand like below,
>
>   :
>   # i_13 = PHI
>   # ivtmp.10_9 = PHI
>   a_p_lsm.6_4 = &a[i_13];
>   ivtmp.10_1 = ivtmp.10_9 + 4;
>   D.4085_16 = (void *) ivtmp.10_1;
>   MEM[base: D.4085_16, offset: 0B] = 7;
>   i_6 = i_13 + 1;
>   if (i_6 != k_3(D))
>     goto ;
>   else
>     goto ;
>
>   :
>   # a_p_lsm.6_11 = PHI
>   a_p = a_p_lsm.6_11;
>   goto ;
>
> Why can't we still sink &a[i_13] out of the loop? For example, I would
> expect to generate code like below,
>
>   :
>   # i_13 = PHI
>   # ivtmp.10_9 = PHI
>   i_14 = i_13;
>   ivtmp.10_1 = ivtmp.10_9 + 4;
>   D.4085_16 = (void *) ivtmp.10_1;
>   MEM[base: D.4085_16, offset: 0B] = 7;
>   i_6 = i_13 + 1;
>   if (i_6 != k_3(D))
>     goto ;
>   else
>     goto ;
>
>   :
>   # a_p_lsm.6_11 = PHI
>   a_p_lsm.6_4 = &a[i_14];
>   a_p = a_p_lsm.6_11;
>   goto ;
>
> This way the computation of &a[i] would be saved within the loop.
>
> Any idea?

The job to do this is final value replacement, not sinking (we do not
sink non-invariant expressions - you'd have to translate them through
the loop-closed SSA exit PHI node, certainly doable, patches welcome
;)).

Richard.

> Thanks,
> -Jiangning
Re: GCC 4.7.0 Status Report (2011-12-06)
On 12/06/11 01:18:28, Joseph S. Myers wrote:
> [...] It still seems reasonable to aim for
> entering Stage 4 (regression fixes and documentation changes only) in
> early January and the 4.7.0 release in March or April.

At what point in time would the GCC 4.7 branch be created, and the
trunk then be open for new contributions (not planned for the 4.7
release)? Is that also early Jan.?

Thanks,
- Gary
Re: GCC 4.7.0 Status Report (2011-12-06)
On Thu, 22 Dec 2011, Gary Funck wrote:
> On 12/06/11 01:18:28, Joseph S. Myers wrote:
> > [...] It still seems reasonable to aim for
> > entering Stage 4 (regression fixes and documentation changes only) in
> > early January and the 4.7.0 release in March or April.
>
> At what point in time would the GCC 4.7 branch be created,
> and the trunk would then be open for new contributions
> (not planned for the 4.7 release)? Is that also early Jan.?

Recent practice has been to create the branch just before making the
-rc1 release, so likely in March.

-- 
Joseph S. Myers
jos...@codesourcery.com
gcc-4.5-20111222 is now available
Snapshot gcc-4.5-20111222 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.5-20111222/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.5 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_5-branch revision 182640

You'll find:

 gcc-4.5-20111222.tar.bz2     Complete GCC

  MD5=827a023f5446d3fd248088ed2a1fed4d
  SHA1=1e1813bf3e026c39ea706802292d881cdaa1

Diffs from 4.5-20111215 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.5
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
Re: Which Binutils should I use for performing daily regression test on trunk?
On Thu, Dec 22, 2011 at 12:43 AM, Ian Lance Taylor wrote:
> Terry Guo writes:
>
>> I plan to set up daily regression test on trunk for target
>> ARM-NONE-EABI and post results to gcc-testresults mailing list. Which
>> Binutils should I use, the Binutils trunk or the latest released
>> Binutils? And which way is recommended, building from a combined tree
>> or building separately? If there is something I should pay attention
>> to, please let me know. Thanks very much.
>
> For gcc testing, the latest released binutils is normally fine. You
> should only move to binutils trunk if there is some specific bug you
> need to work around temporarily.
>
> I personally would recommend building binutils separately. If you
> choose to build a combined tree, then you should ignore the previous
> paragraph and always use binutils trunk. For a combined tree you should
> always use sources from the same development date, so using gcc trunk
> implies using binutils trunk.
>
> Ian

A combined build with the latest gcc and binutils trunk has the
advantage of monitoring both trunks. I'd prefer this approach.

- Joey
Re: Which Binutils should I use for performing daily regression test on trunk?
> From: Terry Guo
> Date: Wed, 21 Dec 2011 04:25:46 +0100

> I plan to set up daily regression test on trunk for target
> ARM-NONE-EABI and post results to gcc-testresults mailing
> list.

Nice. I see others do it for that target, but apparently not for a
pristine tree (the results having many failures I don't see for
cris-elf, and the added "Summary adjusted for local xfails" I don't
understand), others restricting to --enable-languages=c.

> Which Binutils should I use, the Binutils trunk or the latest released
> Binutils? And which way is recommended, building from a combined tree
> or building separately? If there is something I should pay attention
> to, please let me know. Thanks very much.

You need to worry about newlib and sim too (the latter assuming you use
the simulator cohabiting with the gdb project and not e.g. qemu).

Maybe it's better to recap how my autotesters work instead of pointing
out caveats and making decisions for you.

I have one gcc autotester instance (independent updates etc.) for trunk
and each open release branch for cris-elf, but I'll use the singular
below as it's just one script. It imports tarballs: the latest one
FAIL-free for cris-elf and cris-axis-linux-gnu exported from my
autotester for binutils trunk, and similarly one from the sim
autotester. The tarball for newlib is handled somewhat similarly, but
untested, see below.

The gcc autotester imports those tarballs when it's regression-free and
not busy testing. It does an extra test round to check that results
from the update do not regress, and bails out of the update if there is
a regression. All binutils, newlib, and sim tarballs are imported
together; if there's a regression in that phase I have to investigate
anyway, so there's no use separating the updates.

The sim and binutils are each built separately, as there's absolutely
no warranty that they're combinable (that they'll build together) at
every point in time, more so as the gcc tree can carry regressions for
quite some time (well, at least several months) and even more so for
the release branches.

The newlib tarball is combined into the gcc tree (or rather the other
way round, as gcc files override), but it itself is not tested
separately. The newlib testsuite is dead or dysfunctional; all tests
fail, and no, I have not reported this or investigated. Mea culpa, but
at least that's not a regression. Newlib problems visible as a
regression with a gcc update are dealt with properly. Thus, I have no
separate newlib autotester, just a newlib
auto-checkout-and-tarball-creator.

Autotesters are triggered by emails from the *-cvs@ lists (a procmail
recipe feeding "batch") but sleep for 15 minutes to pick up quick
corrections, and as a last precaution to avoid hammering the anon
cvs/svn servers (a sure-fire way to get blacklisted).

I rarely post results ...ok, one sent now. Results are only sent
manually. Each gcc autotester bugs me (only me) by email when there are
regressions that I haven't, in a separate file, marked as known after
entering a PR, or on an error, e.g. when updating (anon update can fail
temporarily), and schedules a later restart (with "at") for errors that
are expected to be temporary.

After a tree update I also check (using "find -newer" on a file created
before the update) and exit early if there are only changes in
subdirectories and files irrelevant to this target, for example Ada,
go, libgomp, and target-specific files and directories for other
targets. Only one instance of each autotester runs at a time, guarded
using "lockfile" and early exits.

And now you may wonder why I don't just post the damn script. Let's
just consider that a gift; less code to look at. 8-} Actually, the
"main" test bits are in contrib/regression/btest-gcc.sh. The calling
script with the "updating" infrastructure hasn't seemed generic enough
to warrant the interest to take me over the cleanup-and-post threshold.
And it's all so obvious, at least in retrospect. :) Maybe later.

I believe this is about the same as how Geoff K's autotester for
powerpc-eabisim (IIRC) used to work. N.B., no update scripts for that
setup were posted.

Happy Holidays,
H-P

PS. currently (r182649) regression-free since T0=2007-01-05-16:47:21 (UTC)!