Re: [RFC] Characters per line: from punch card (80) to line printer (132)
On 05/12/2019 18:21, Robin Curtis wrote:
> My IBM Selectric golfball electronic printer only does 90 characters
> on A4 in portrait mode... (at 10 cps) (as for my all-electric TELEX
> teleprinter machine!)
>
> Is this debate for real?! - or is this a Christmas spoof?

I can't speak for the debate, but the pain is real.

Andrew
Re: PPC64 libmvec implementation of sincos
On Thu, Dec 5, 2019 at 6:45 PM GT wrote:
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, December 5, 2019 4:44 AM, Richard Biener wrote:
>
> > ...
> >
> > > I'm trying to identify the source code which needs modification
> > > but I need help proceeding. I am comparing two compilations: the
> > > first is a simple file with a call to sin in a loop. Vectorization
> > > succeeds. The second is an almost identical file but with a call
> > > to sincos in the loop. Vectorization fails.
> > >
> > > In gdb, the earliest code location where the two compilations
> > > differ is in function number_of_iterations_exit_assumptions in
> > > file tree-ssa-loop-niter.c. The line
> > >   op0 = gimple_cond_lhs (stmt);
> > > returns a tree which, when analyzed in function instantiate_scev_r
> > > (in file tree-scalar-evolution.c), results in the first branch of
> > > the switch being taken for sincos. For sin, the 2nd branch of the
> > > switch is taken.
> > >
> > > How can I correlate stmt in the source line above to the relevant
> > > line in any dump among those created using debugging dump option
> > > -fdump-tree-all?
> >
> > grep ;)
> >
> > Can you provide a testcase with a simd attribute annotated cexpi
> > that one can play with?
>
> On an x86_64 system, run Example 2 at this link:
> sourceware.org/glibc/wiki/libmvec
>
> After verifying vectorization (by finding a name with prefix _ZGV and
> suffix _sin in a.out), replace the call to sin by one to sincos. The
> file should be similar to this:
>
> #include <math.h>
>
> int N = 3200;
> double c[3200];
> double b[3200];
> double a[3200];
>
> int main (void)
> {
>   int i;
>
>   for (i = 0; i < N; i += 1)
>     {
>       sincos (a[i], &b[i], &c[i]);
>     }
>
>   return (0);
> }
>
> In addition to the options shown in Example 2, I passed GCC flags
> -fopt-info-all, -fopt-info-internal and -fdump-tree-all to obtain
> more verbose messages.
>
> That should show vectorization failing for sincos, and diagnostics on
> the screen indicating reason(s) for the failure.
>
> To perform the runs on PPC64 requires building both GCC and GLIBC
> with modifications not yet accepted into the main development
> branches of the projects.
>
> Please let me know if you are able to run on x86_64; if not, then
> perhaps I can push the local GCC changes to some github repository.
> GLIBC changes are available at branch tuliom/libmvec of the
> development repository.

So I used

void sincos(double x, double *sin, double *cos);
_Complex double __attribute__((__simd__("notinbranch")))
__builtin_cexpi (double);

int N = 3200;
double c[3200];
double b[3200];
double a[3200];

int main (void)
{
  int i;

  for (i = 0; i < N; i += 1)
    {
      sincos (a[i], &b[i], &c[i]);
    }

  return (0);
}

and get

t.c:2:58: warning: unsupported return type ‘complex double’ for simd

so I suppose that would need fixing / ABI adjustments. Then
vectorization fails with the expected

t.c:13:3: note: ==> examining statement: _8 = __builtin_cexpi (_1);
t.c:13:3: note: get vectype for scalar type: complex double
t.c:15:5: missed: not vectorized: unsupported data-type complex double
t.c:13:3: missed: can't determine vectorization factor.

For the ABI thing the alternative is to go with "something" for sincos
and have the vectorizer query that something at cexpi vectorization
time, emitting code for that ABI. But of course the vectorizer needs
to be taught to deal with the cexpi call in the IL, which was very low
priority because there wasn't any SIMD implementation of sincos (with
whatever ABI).

I can help with that to some extent, but I wonder what OpenMP says
about _Complex types and simd functions for those? Jakub?

Richard.

> Bert.
Re: LTO : question about get_untransformed_body
On Wed, Dec 4, 2019 at 6:03 PM Erick Ochoa wrote:
>
> On 2019-12-04 7:52 a.m., Richard Biener wrote:
> > On Tue, Dec 3, 2019 at 11:51 PM Erick Ochoa wrote:
> >>
> >> Hi,
> >>
> >> I am trying to use the function
> >> `cgraph_node::get_untransformed_body` during the wpa stage of a
> >> SIMPLE_IPA_PASS transformation. While the execute function
> >
> > I think SIMPLE_IPA_PASSes have no "WPA" stage but run at LTRANS time
> > (WPA transform stage). So you might simply see that not all bodies
> > are available in your LTRANS unit?
>
> This makes a lot of sense, and it is indeed documented in the GCC
> internals manual:
>
>     A small inter-procedural pass (SIMPLE_IPA_PASS) is a pass that
>     does everything at once and thus it can not be executed during
>     WPA in WHOPR mode. It defines only the Execute stage and during
>     this stage it accesses and modifies the function bodies. Such
>     passes are useful for optimization at LGEN or LTRANS time and are
>     used, for example, to implement early optimization before writing
>     object files.
>
> I got confused because the dump_file for my pass includes the
> substring 'wpa'. Should I file a bug to change the name of the
> dumpfile to something more like `ltrans*`?

Well, you probably placed your pass in the IPA pipeline rather than the
all_late_ipa_passes one? In principle IPA passes _can_ pull in bodies
during WPA analysis, that just will be very costly.

> Thanks for your help!
>
> >> is running, I need to access the body of a function in order to
> >> iterate over the gimple instructions in the first basic block. I
> >> have found that the majority of the cgraph_node will return
> >> successfully. However, there are some functions which consistently
> >> produce a segmentation fault following this stacktrace:
> >>
> >> ```
> >> 0xbc2adb crash_signal
> >>     /home/eochoa/code/gcc/gcc/toplev.c:328
> >> 0xa54858 lto_get_decl_name_mapping(lto_file_decl_data*, char const*)
> >>     /home/eochoa/code/gcc/gcc/lto-section-in.c:367
> >> 0x7030e7 cgraph_node::get_untransformed_body()
> >>     /home/eochoa/code/gcc/gcc/cgraph.c:3516
> >> 0x150613f get_bb1_callees
> >>     /home/eochoa/code/gcc/gcc/ipa-initcall-cp.c:737
> >> 0x150613f reach_nodes_via_bb1_dfs
> >> ```
> >>
> >> Is there a way for `cgraph_node::get_untransformed_body` to succeed
> >> consistently? (I.e. are there some preconditions that I need to
> >> make sure are in place before calling
> >> cgraph_node::get_untransformed_body?)
> >>
> >> I am using gcc version 10.0.0 20191127 (experimental)
> >>
> >> Thanks
Re: PPC64 libmvec implementation of sincos
On Fri, Dec 06, 2019 at 11:48:03AM +0100, Richard Biener wrote:
> So I used
>
> void sincos(double x, double *sin, double *cos);
> _Complex double __attribute__((__simd__("notinbranch")))
> __builtin_cexpi (double);

While Intel-ABI-Vector-Function-2015-v0.9.8.pdf talks about complex
numbers, the reason we punt:

  unsupported return type ‘complex double’ for simd

etc. is that we really don't support VECTOR_TYPE with COMPLEX_TYPE
element type, I guess the vectorizer doesn't do anything with that
either unless some earlier optimization was able to scalarize the
complex halves.

In theory we could represent the vector counterparts of complex types
as just vectors of double width with element type of COMPLEX_TYPE
element type, have a look at what exactly ICC does to find out if the
vector ordering is real0 complex0 real1 complex1 ... or
real0 real1 real2 ... complex0 complex1 complex2 ... and tweak
everything that needs to cope.

	Jakub
Re: PPC64 libmvec implementation of sincos
On Fri, Dec 6, 2019 at 12:15 PM Jakub Jelinek wrote:
>
> On Fri, Dec 06, 2019 at 11:48:03AM +0100, Richard Biener wrote:
> > So I used
> >
> > void sincos(double x, double *sin, double *cos);
> > _Complex double __attribute__((__simd__("notinbranch")))
> > __builtin_cexpi (double);
>
> While Intel-ABI-Vector-Function-2015-v0.9.8.pdf talks about complex
> numbers, the reason we punt:
>   unsupported return type ‘complex double’ for simd
> etc. is that we really don't support VECTOR_TYPE with COMPLEX_TYPE
> element type, I guess the vectorizer doesn't do anything with that
> either unless some earlier optimization was able to scalarize the
> complex halves.
> In theory we could represent the vector counterparts of complex types
> as just vectors of double width with element type of COMPLEX_TYPE
> element type, have a look at what exactly ICC does to find out if the
> vector ordering is real0 complex0 real1 complex1 ... or
> real0 real1 real2 ... complex0 complex1 complex2 ...
> and tweak everything that needs to cope.

I hope real0 complex0, ...

Anyway, the first step is to support vectorizing code where parts of it
are already vectors:

typedef double v2df __attribute__((vector_size(16)));
#define N 1024
v2df a[N];
double b[N];
double c[N];
void foo()
{
  for (int i = 0; i < N; ++i)
    {
      v2df tem = a[i];
      b[i] = tem[0];
      c[i] = tem[1];
    }
}

that can be "re-vectorized" for AVX for example. If you substitute
_Complex double for the vector type we only handle it during
vectorization because forwprop combines the load and the __real/__imag
which helps.

Richard.

> Jakub
Re: Proposal for the transition timetable for the move to GIT
> On Sep 19, 2019, at 6:34 PM, Maxim Kuvyrkov wrote:
>
>> On Sep 17, 2019, at 3:02 PM, Richard Earnshaw (lists) wrote:
>>
>> At the Cauldron this weekend the overwhelming view for the move to
>> GIT soon was finally expressed.
>>
> ...
>>
>> So in summary my proposed timetable would be:
>>
>> Monday 16th December 2019 - cut off date for picking which git
>> conversion to use
>>
>> Tuesday 31st December 2019 - SVN repo becomes read-only at end of
>> stage 3.
>>
>> Thursday 2nd January 2020 - (ie read-only + 2 days) new git repo
>> comes on line for live commits.
>>
>> Doing this over the new year holiday period has both advantages and
>> disadvantages. On the one hand the traffic is light, so the impact
>> to most developers will be quite low; on the other, it is a holiday
>> period, so getting the right key folk to help might be difficult. I
>> won't object strongly if others feel that slipping a few days (but
>> not weeks) would make things significantly easier.
>
> The timetable looks entirely reasonable to me.
>
> I have regenerated my primary version this week, and it's up at
> https://git.linaro.org/people/maxim-kuvyrkov/gcc-pretty.git/ . So far
> I have received only minor issue reports about it, and all known
> problems have been fixed. I could use a bit more scrutiny :-).

I think now is a good time to give a status update on the svn->git
conversion I maintain. See
https://git.linaro.org/people/maxim-kuvyrkov/gcc-pretty.git/ .

1. The conversion has all SVN live branches converted as branches
   under refs/heads/* .

2. The conversion has all SVN live tags converted as annotated tags
   under refs/tags/* .

3. If desired, it would be trivial to add all deleted / leaf SVN
   branches and tags. They would be named as
   branches/my-deleted-branch@12345, where @12345 is the revision at
   which the branch was deleted. Branches created and deleted multiple
   times would have separate entries corresponding to delete
   revisions.

4. Git committer and git author entries are very accurate (imo, better
   than reposurgeon's, but I'm biased). Developers' names and email
   addresses are mined from commit logs, changelogs and source code,
   and have historically accurate attributions to employers' email
   addresses.

5. Since there is interest in reparenting branches to fix cvs2svn
   merge issues, I've added this feature to my scripts as well (turned
   out to be trivial). I'll keep the original gcc-pretty.git repo
   intact and will upload the new one at
   https://git.linaro.org/people/maxim-kuvyrkov/gcc-reparent.git/ --
   should be live by Monday.

Finally, there seem to be quite a few misunderstandings about the
scripts I've developed and their limitations. Most of these
misunderstandings stem from the assumption that all git-svn
limitations must apply to my scripts. That's not the case. SVN merges,
branch/tag reparenting, and adjusting of commit logs are all handled
correctly in my scripts. I welcome criticism with pointers to
revisions which have been incorrectly converted.

The general conversion workflow is (this really is a poor-man's
translator of one DAG into another):

1. Parse SVN history of the entire SVN root (svn log -qv
   file:///svnrepo/) and build a list of branch points.
2. From the branch points build a DAG of "basic blocks" of revision
   history. Each basic block is a consecutive set of commits where
   only the last commit can be a branchpoint.
3. Walk the DAG and ...
4. ... use git-svn to individually convert these basic blocks.
4a. Optionally, post-process the git result of basic block conversion
    using "git filter-branch" and similar tools.

Git-svn is used in a limited role, and it does its job very well in
this role.

Regards,

--
Maxim Kuvyrkov
https://www.linaro.org
Re: LTO : question about get_untransformed_body
On 2019-12-06 5:50 a.m., Richard Biener wrote:
> On Wed, Dec 4, 2019 at 6:03 PM Erick Ochoa wrote:
>>
>> On 2019-12-04 7:52 a.m., Richard Biener wrote:
>>> On Tue, Dec 3, 2019 at 11:51 PM Erick Ochoa wrote:
>>>>
>>>> Hi,
>>>>
>>>> I am trying to use the function
>>>> `cgraph_node::get_untransformed_body` during the wpa stage of a
>>>> SIMPLE_IPA_PASS transformation. While the execute function
>>>
>>> I think SIMPLE_IPA_PASSes have no "WPA" stage but run at LTRANS
>>> time (WPA transform stage). So you might simply see that not all
>>> bodies are available in your LTRANS unit?
>>
>> This makes a lot of sense, and it is indeed documented in the GCC
>> internals manual:
>>
>>     A small inter-procedural pass (SIMPLE_IPA_PASS) is a pass that
>>     does everything at once and thus it can not be executed during
>>     WPA in WHOPR mode. It defines only the Execute stage and during
>>     this stage it accesses and modifies the function bodies. Such
>>     passes are useful for optimization at LGEN or LTRANS time and
>>     are used, for example, to implement early optimization before
>>     writing object files.
>>
>> I got confused because the dump_file for my pass includes the
>> substring 'wpa'. Should I file a bug to change the name of the
>> dumpfile to something more like `ltrans*`?
>
> Well, you probably placed your pass in the IPA pipeline rather than
> the all_late_ipa_passes one? In principle IPA passes _can_ pull in
> bodies during WPA analysis, that just will be very costly.
>
>> Thanks for your help!

I am a bit confused. Can you please clarify:

> I think SIMPLE_IPA_PASSes have no "WPA" stage but run at LTRANS time
> (WPA transform stage).

I did have a SIMPLE_IPA_PASS and the dump file contained "wpa" in it.

> Well, you probably placed your pass in the IPA pipeline rather than
> the all_late_ipa_passes one?

I did place my pass in the IPA pipeline rather than the
all_late_ipa_passes one.

Are SIMPLE_IPA_PASSes normally used in all_late_ipa_passes, which
means they are normally scheduled to run at LTRANS time? However, if I
place my SIMPLE_IPA_PASS in the IPA pipeline, then my SIMPLE_IPA_PASS
will execute at the "WPA" stage?

> In principle IPA passes _can_ pull in bodies during WPA

So, if SIMPLE_IPA_PASSes are placed in the IPA pipeline, is there a
reason why `cgraph_node::get_untransformed_body` would crash?

Thanks again!

>>>> is running, I need to access the body of a function in order to
>>>> iterate over the gimple instructions in the first basic block. I
>>>> have found that the majority of the cgraph_node will return
>>>> successfully. However, there are some functions which consistently
>>>> produce a segmentation fault following this stacktrace:
>>>>
>>>> ```
>>>> 0xbc2adb crash_signal
>>>>     /home/eochoa/code/gcc/gcc/toplev.c:328
>>>> 0xa54858 lto_get_decl_name_mapping(lto_file_decl_data*, char const*)
>>>>     /home/eochoa/code/gcc/gcc/lto-section-in.c:367
>>>> 0x7030e7 cgraph_node::get_untransformed_body()
>>>>     /home/eochoa/code/gcc/gcc/cgraph.c:3516
>>>> 0x150613f get_bb1_callees
>>>>     /home/eochoa/code/gcc/gcc/ipa-initcall-cp.c:737
>>>> 0x150613f reach_nodes_via_bb1_dfs
>>>> ```
>>>>
>>>> Is there a way for `cgraph_node::get_untransformed_body` to
>>>> succeed consistently? (I.e. are there some preconditions that I
>>>> need to make sure are in place before calling
>>>> cgraph_node::get_untransformed_body?)
>>>>
>>>> I am using gcc version 10.0.0 20191127 (experimental)
>>>>
>>>> Thanks
Re: PPC64 libmvec implementation of sincos
‐‐‐ Original Message ‐‐‐
On Friday, December 6, 2019 6:38 AM, Richard Biener wrote:

> On Fri, Dec 6, 2019 at 12:15 PM Jakub Jelinek ja...@redhat.com wrote:
>
> > On Fri, Dec 06, 2019 at 11:48:03AM +0100, Richard Biener wrote:
> >
> > > So I used
> > >
> > > void sincos(double x, double *sin, double *cos);
> > > _Complex double __attribute__((__simd__("notinbranch")))
> > > __builtin_cexpi (double);
> >
> > While Intel-ABI-Vector-Function-2015-v0.9.8.pdf talks about complex
> > numbers, the reason we punt:
> >   unsupported return type ‘complex double’ for simd
> > etc. is that we really don't support VECTOR_TYPE with COMPLEX_TYPE
> > element type, I guess the vectorizer doesn't do anything with that
> > either unless some earlier optimization was able to scalarize the
> > complex halves.
> > In theory we could represent the vector counterparts of complex
> > types as just vectors of double width with element type of
> > COMPLEX_TYPE element type, have a look at what exactly ICC does to
> > find out if the vector ordering is real0 complex0 real1 complex1
> > ... or real0 real1 real2 ... complex0 complex1 complex2 ... and
> > tweak everything that needs to cope.
>
> I hope real0 complex0, ...
>
> Anyway, the first step is to support vectorizing code where parts of
> it are already vectors:
>
> typedef double v2df __attribute__((vector_size(16)));
> #define N 1024
> v2df a[N];
> double b[N];
> double c[N];
> void foo()
> {
>   for (int i = 0; i < N; ++i)
>     {
>       v2df tem = a[i];
>       b[i] = tem[0];
>       c[i] = tem[1];
>     }
> }
>
> that can be "re-vectorized" for AVX for example. If you substitute
> _Complex double for the vector type we only handle it during
> vectorization because forwprop combines the load and the
> __real/__imag which helps.

Are we certain the change we want is to support _Complex double so
that cexpi is auto-vectorized?

Looking at the resulting executable of the code with sincos in the
loop, the only function called is sincos. Not __builtin_cexpi or any
variant of cexpi. File gcc/builtins.c expands calls to __builtin_cexpi
to sincos! What is gained by the compiler going through the
transformations sincos -> __builtin_cexpi -> sincos?

Bert.
Re: Proposal for the transition timetable for the move to GIT
Maxim Kuvyrkov :
> The general conversion workflow is (this really is a poor-man's
> translator of one DAG into another):
>
> 1. Parse SVN history of entire SVN root (svn log -qv file:///svnrepo/)
>    and build a list of branch points.
> 2. From the branch points build a DAG of "basic blocks" of revision
>    history. Each basic block is a consecutive set of commits where
>    only the last commit can be a branchpoint.
> 3. Walk the DAG and ...
> 4. ... use git-svn to individually convert these basic blocks.
> 4a. Optionally, post-process git result of basic block conversion
>     using "git filter-branch" and similar tools.
>
> Git-svn is used in a limited role, and it does its job very well in
> this role.

Your approach sounds pretty reasonable except for that part. I don't
trust git-svn at *all* - I've collided with it too often during past
conversions. It has a nasty habit of leaving damage in places that are
difficult to audit.

I agree that you've made a best possible effort to avoid being bitten
by using it only for basic blocks. That was clever and the right thing
to do, and I *still* don't trust it.

--
Eric S. Raymond  http://www.catb.org/~esr/
Re: Proposal for the transition timetable for the move to GIT
On December 6, 2019 6:21:11 PM GMT+01:00, "Eric S. Raymond" wrote:
> Maxim Kuvyrkov :
> > The general conversion workflow is (this really is a poor-man's
> > translator of one DAG into another):
> >
> > 1. Parse SVN history of entire SVN root (svn log -qv
> >    file:///svnrepo/) and build a list of branch points.
> > 2. From the branch points build a DAG of "basic blocks" of revision
> >    history. Each basic block is a consecutive set of commits where
> >    only the last commit can be a branchpoint.
> > 3. Walk the DAG and ...
> > 4. ... use git-svn to individually convert these basic blocks.
> > 4a. Optionally, post-process git result of basic block conversion
> >     using "git filter-branch" and similar tools.
> >
> > Git-svn is used in a limited role, and it does its job very well in
> > this role.
>
> Your approach sounds pretty reasonable except for that part. I don't
> trust git-svn at *all* - I've collided with it too often during past
> conversions. It has a nasty habit of leaving damage in places that
> are difficult to audit.
>
> I agree that you've made a best possible effort to avoid being bitten
> by using it only for basic blocks. That was clever and the right
> thing to do, and I *still* don't trust it.

To me, looking from the outside, the talks about reposurgeon doing
damage and a rewrite (in the last minute) would fix it doesn't make a
trustworthy appearance either ;)

I guess the basic block usage could be emulated by svn checkouts, svn
log and manual diffing and installing revs on the git. And I can't
really imagine how that cannot work with git-svn given it is used in
the wild.

Richard.
Re: LTO : question about get_untransformed_body
On December 6, 2019 5:46:25 PM GMT+01:00, Erick Ochoa wrote:
>
> On 2019-12-06 5:50 a.m., Richard Biener wrote:
> > On Wed, Dec 4, 2019 at 6:03 PM Erick Ochoa wrote:
> >>
> >> On 2019-12-04 7:52 a.m., Richard Biener wrote:
> >>> On Tue, Dec 3, 2019 at 11:51 PM Erick Ochoa wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I am trying to use the function
> >>>> `cgraph_node::get_untransformed_body` during the wpa stage of a
> >>>> SIMPLE_IPA_PASS transformation. While the execute function
> >>>
> >>> I think SIMPLE_IPA_PASSes have no "WPA" stage but run at LTRANS
> >>> time (WPA transform stage). So you might simply see that not all
> >>> bodies are available in your LTRANS unit?
> >>
> >> This makes a lot of sense, and it is indeed documented in the GCC
> >> internals manual:
> >>
> >>     A small inter-procedural pass (SIMPLE_IPA_PASS) is a pass that
> >>     does everything at once and thus it can not be executed during
> >>     WPA in WHOPR mode. It defines only the Execute stage and
> >>     during this stage it accesses and modifies the function
> >>     bodies. Such passes are useful for optimization at LGEN or
> >>     LTRANS time and are used, for example, to implement early
> >>     optimization before writing object files.
> >>
> >> I got confused because the dump_file for my pass includes the
> >> substring 'wpa'. Should I file a bug to change the name of the
> >> dumpfile to something more like `ltrans*`?
> >
> > Well, you probably placed your pass in the IPA pipeline rather than
> > the all_late_ipa_passes one? In principle IPA passes _can_ pull in
> > bodies during WPA analysis, that just will be very costly.
> >
> >> Thanks for your help!
>
> I am a bit confused. Can you please clarify:
>
> > I think SIMPLE_IPA_PASSes have no "WPA" stage but run at LTRANS
> > time (WPA transform stage).
>
> I did have a SIMPLE_IPA_PASS and the dump file contained "wpa" in it.
>
> > Well, you probably placed your pass in the IPA pipeline rather than
> > the all_late_ipa_passes one?
>
> I did place my pass in the IPA pipeline rather than the
> all_late_ipa_passes one.
>
> Are SIMPLE_IPA_PASSes normally used in the all_late_ipa_passes which
> means they are normally scheduled to run at LTRANS time? However, if
> I place my SIMPLE_IPA_PASS during the IPA pipeline, then my
> SIMPLE_IPA_PASS will execute at "WPA" stage?

Yes.

> > In principle IPA passes _can_ pull in bodies during WPA
>
> So, if SIMPLE_IPA_PASSes are placed in the IPA pipeline, is there a
> reason why `cgraph_node::get_untransformed_body` would crash?

That I have no idea.

Richard.

> Thanks again!
>
> >>>> is running, I need to access the body of a function in order to
> >>>> iterate over the gimple instructions in the first basic block. I
> >>>> have found that the majority of the cgraph_node will return
> >>>> successfully. However, there are some functions which
> >>>> consistently produce a segmentation fault following this
> >>>> stacktrace:
> >>>>
> >>>> ```
> >>>> 0xbc2adb crash_signal
> >>>>     /home/eochoa/code/gcc/gcc/toplev.c:328
> >>>> 0xa54858 lto_get_decl_name_mapping(lto_file_decl_data*, char const*)
> >>>>     /home/eochoa/code/gcc/gcc/lto-section-in.c:367
> >>>> 0x7030e7 cgraph_node::get_untransformed_body()
> >>>>     /home/eochoa/code/gcc/gcc/cgraph.c:3516
> >>>> 0x150613f get_bb1_callees
> >>>>     /home/eochoa/code/gcc/gcc/ipa-initcall-cp.c:737
> >>>> 0x150613f reach_nodes_via_bb1_dfs
> >>>> ```
> >>>>
> >>>> Is there a way for `cgraph_node::get_untransformed_body` to
> >>>> succeed consistently? (I.e. are there some preconditions that I
> >>>> need to make sure are in place before calling
> >>>> cgraph_node::get_untransformed_body?)
> >>>>
> >>>> I am using gcc version 10.0.0 20191127 (experimental)
> >>>>
> >>>> Thanks
Re: PPC64 libmvec implementation of sincos
On December 6, 2019 5:50:25 PM GMT+01:00, GT wrote:
> ‐‐‐ Original Message ‐‐‐
> On Friday, December 6, 2019 6:38 AM, Richard Biener wrote:
>
> > On Fri, Dec 6, 2019 at 12:15 PM Jakub Jelinek wrote:
> >
> > > On Fri, Dec 06, 2019 at 11:48:03AM +0100, Richard Biener wrote:
> > >
> > > > So I used
> > > >
> > > > void sincos(double x, double *sin, double *cos);
> > > > _Complex double __attribute__((__simd__("notinbranch")))
> > > > __builtin_cexpi (double);
> > >
> > > While Intel-ABI-Vector-Function-2015-v0.9.8.pdf talks about
> > > complex numbers, the reason we punt:
> > >   unsupported return type ‘complex double’ for simd
> > > etc. is that we really don't support VECTOR_TYPE with
> > > COMPLEX_TYPE element type, I guess the vectorizer doesn't do
> > > anything with that either unless some earlier optimization was
> > > able to scalarize the complex halves.
> > > In theory we could represent the vector counterparts of complex
> > > types as just vectors of double width with element type of
> > > COMPLEX_TYPE element type, have a look at what exactly ICC does
> > > to find out if the vector ordering is real0 complex0 real1
> > > complex1 ... or real0 real1 real2 ... complex0 complex1 complex2
> > > ... and tweak everything that needs to cope.
> >
> > I hope real0 complex0, ...
> >
> > Anyway, the first step is to support vectorizing code where parts
> > of it are already vectors:
> >
> > typedef double v2df __attribute__((vector_size(16)));
> > #define N 1024
> > v2df a[N];
> > double b[N];
> > double c[N];
> > void foo()
> > {
> >   for (int i = 0; i < N; ++i)
> >     {
> >       v2df tem = a[i];
> >       b[i] = tem[0];
> >       c[i] = tem[1];
> >     }
> > }
> >
> > that can be "re-vectorized" for AVX for example. If you substitute
> > _Complex double for the vector type we only handle it during
> > vectorization because forwprop combines the load and the
> > __real/__imag which helps.
>
> Are we certain the change we want is to support _Complex double so
> that cexpi is auto-vectorized?
>
> Looking at the resulting executable of the code with sincos in the
> loop, the only function called is sincos. Not __builtin_cexpi or any
> variant of cexpi. File gcc/builtins.c expands calls to
> __builtin_cexpi to sincos! What is gained by the compiler going
> through the transformations sincos -> __builtin_cexpi -> sincos?

Yes, we want to support vectorizing cexpi because that is what the
compiler will lower sincos to. The sincos API is painful to deal with
due to the data dependences it introduces.

Now, the vectorizer can of course emit calls to a vectorized sincos;
it just needs to be able to deal with cexpi input IL.

Richard.

> Bert.
Re: Proposal for the transition timetable for the move to GIT
Richard Biener :
> To me, looking from the outside, the talks about reposurgeon doing
> damage and a rewrite (in the last minute) would fix it doesn't make a
> trustworthy appearance either ;)

*shrug* Hard problems are hard. Every time I do a conversion that is
at a record size I have to rebuild parts of the analyzer, because the
problem domain is seriously gnarly. I'm having to rebuild more than
usual this time because the GCC repo is a monster that stresses the
analyzer in particularly unusual ways.

Reposurgeon has been used for several major conversions, including
groff and Emacs.

I don't mean to be nasty to Maxim, but I have not yet seen *anybody*
who thought they could get the job done with ad-hoc scripts turn out
to be correct. Unfortunately, the costs of failure are often
well-hidden problems in the converted history that people trip over
months and years later.

Experience matters at this. So does staying away from tools like
git-svn that are known to be bad.

--
Eric S. Raymond  http://www.catb.org/~esr/
Re: Proposal for the transition timetable for the move to GIT
On 12/6/19 12:46 PM, Eric S. Raymond wrote:
> Richard Biener :
>> To me, looking from the outside, the talks about reposurgeon doing
>> damage and a rewrite (in the last minute) would fix it doesn't make
>> a trustworthy appearance either ;)
>
> *shrug* Hard problems are hard. Every time I do a conversion that is
> at a record size I have to rebuild parts of the analyzer, because the
> problem domain is seriously gnarly. I'm having to rebuild more than
> usual this time because the GCC repo is a monster that stresses the
> analyzer in particularly unusual ways.
>
> Reposurgeon has been used for several major conversions, including
> groff and Emacs.
>
> I don't mean to be nasty to Maxim, but I have not yet seen *anybody*
> who thought they could get the job done with ad-hoc scripts turn out
> to be correct. Unfortunately, the costs of failure are often
> well-hidden problems in the converted history that people trip over
> months and years later.
>
> Experience matters at this. So does staying away from tools like
> git-svn that are known to be bad.

I have nothing useful to contribute regarding the actual mechanics of
the repository conversion (I'm a total dummy about the internals of
both git and svn and stick with only the most basic usages in my daily
work), but from a software engineering and project management
perspective I'm also put off by the talk of having to do last-minute
rewrites of a massively complex project. [Insert image of prehistoric
animals trapped in tar pit here.]

Shouldn't it be possible to *test* whether Maxim's git-svn conversion
is correct, e.g. by diffing the git and svn versions at appropriate
places in the history, or comparing revision histories of each file at
branch tips, or something like that? Instead of just asserting that
it's full of bugs, without any evidence either way? I'd expect that
the same testing would need to be performed on the reposurgeon version
in order to have any confidence that it is any less buggy.

Do we have any volunteers who could independently work on QA of
whatever git repository we end up with?

-Sandra
Re: Proposal for the transition timetable for the move to GIT
On 12/6/19 6:21 PM, Eric S. Raymond wrote:
> Your approach sounds pretty reasonable except for that part. I don't
> trust git-svn at *all* - I've collided with it too often during past
> conversions. It has a nasty habit of leaving damage in places that
> are difficult to audit.

So, which steps are we taking to ensure such damage does not occur
with either method of conversion? Do we have any verification scripts
already?

Bernd
Re: Branch and tag deletions
On Wed, Dec 04, 2019 at 09:53:31AM +, Richard Earnshaw (lists) wrote:
> On 03/12/2019 22:43, Segher Boessenkool wrote:
> >> But you've still got the ongoing branch death issue to deal with, and
> >> that was my point.  If you want to keep them, and you don't want them
> >> polluting the working namespace, you have to do *some* renaming of
> >> them.
> >
> > Sure, but how many of those will there be?  This is a different scale
> > problem from that with the SVN branches and tags, which makes it a
> > quite different problem.
>
> Over time, likely as many, if not more than for svn.

How many per year?  How many *random* branches will die each year?

It is my hope that people will *not* put their random WIP stuff in the
global namespace *at all*, ever.  You can put it in a user branch to make
it visible to others, and that is good enough for most purposes.  Such a
topic branch will never be merged to trunk in any case!

> In GIT branch development is the norm for most developers.  It's true
> that most of those are private and get serialized before upstreaming
> (unless we move to a different development model, but that's a different
> discussion), but we will likely have at least as many public development
> branches in git as we've ever had in SVN.

But how many *did* we have in SVN?  And how many were just because we did
not have user branches, and used a more clunky development style?

> The other way to solve it is documentation.  We have a web page which is
> *supposed* to list all the development branches we have.  When a branch
> is renamed, the rename can be listed alongside the mark that the branch
> is now dead.

I still think we should just put everything inherited from SVN into its
own namespace, "old-svn" or whatever.  And make that exactly what is in
SVN now, and don't change it.  This is easiest, and does it have any
downsides?

How many branches do we want to keep alive after all development on them
has stopped?  How many did we have in SVN?  As you suggest, we have not
kept track of that well, so the only safe thing to do now is to keep
everything.

> > Release branches and releases are a very different thing.  A release
> > is some fixed source, like a tarball.  A release branch is a branch,
> > and what code that is can (and will, and does) change.
> >
> > Not that I have good suggestions how to make this less confusing.
> > Well, maybe we should keep calling it "gcc-9-branch" and
> > "gcc-9_2_0-release"?
>
> Branches are branches and appear in the heads name space, tags are tags
> and appear in the tags name space.  There's no way of confusing the two.

  git checkout is-this-a-tag-or-a-branch

(I know you can say heads/* and tags/*.  But the point is, it is
confusing to users if you have the same name for two very different
things.)

> >>> What does this mean?  "other", "general"?
> >>
> >> Anything that's not vendor/user specific and not a release - a topic
> >> branch most likely
> >
> > Should we often have those?  We can just use user branches for this?
>
> It depends.  Some branches are definitely collaborations, so probably
> want to be more public.  I'm trying not to be prescriptive in this
> regard.

> > We *want* to rebase etc. on topic branches: allow non-fast-forwards.
> > And that is *very* problematic if multiple people can write to that
> > branch.
>
> Rebasing a publicly visible branch is a no-no.  It causes carnage for
> anyone trying to track it.  But this is straying into development
> workflows again, and that's not for discussion during the conversion
> (feature creep).  Rebasing a per-user branch is explicitly allowed.

It only causes problems for people who do not *know* it rebases, as well
(you need to rebase stuff that builds on it, too, and you should never
merge it -- you shouldn't anyway, but hey).  It is exactly the same as
when you put out new iterations of a patch set (via email or whatever).


Segher
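The branch-vs-tag confusion described above is easy to reproduce.  In the sketch below (hypothetical name "gcc-9" used for both a branch and a tag), plain revision syntax warns about the ambiguity and resolves to the *tag* first per gitrevisions(7), while `git checkout` would silently prefer the *branch*:

```shell
# Demonstrate a branch and a tag sharing one name (name hypothetical).
git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m one
git branch gcc-9                 # a branch named gcc-9, at commit one
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m two
git tag gcc-9                    # and a tag with the same name, at two
# Plain revision syntax warns "refname 'gcc-9' is ambiguous" and
# resolves to the tag; "git checkout gcc-9" would pick the branch.
git rev-parse gcc-9
# Disambiguating explicitly is always possible, if clunky:
git rev-parse heads/gcc-9 tags/gcc-9
```

This is exactly why giving a release tag and its release branch the same bare name is a trap for users, whichever ref ultimately "wins" in a given command.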
gcc-8-20191206 is now available
Snapshot gcc-8-20191206 is now available on
  https://gcc.gnu.org/pub/gcc/snapshots/8-20191206/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 8 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-8-branch revision 279068

You'll find:

 gcc-8-20191206.tar.xz               Complete GCC

  SHA256=5c1e5e1aecb13c31840db29f5eb3ef3c90f1d842b8a8d438756525860e1e1247
  SHA1=e6e53c6d7f9e304fdb2a9aa9d07961c39fe3e274

Diffs from 8-20191129 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-8
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
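The published checksums can be checked with `sha256sum -c` from GNU coreutils.  The sketch below uses a stand-in file so it runs without downloading anything; for the real snapshot, put the SHA256 line from the announcement (formatted as `<hash>  <filename>`) next to the downloaded tarball:

```shell
# Verify a download against its published SHA256 (stand-in file here;
# the real check uses the hash from the announcement above).
printf 'stand-in tarball contents\n' > gcc-8-20191206.tar.xz
sha256sum gcc-8-20191206.tar.xz > gcc-8-20191206.tar.xz.sha256
sha256sum -c gcc-8-20191206.tar.xz.sha256
```

`sha256sum -c` prints `<filename>: OK` and exits 0 when the file matches, and exits non-zero otherwise.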
Re: Branch and tag deletions
On Thu, Dec 05, 2019 at 11:55:12AM +0300, Maxim Kuvyrkov wrote:
> > On Dec 3, 2019, at 11:10 PM, Richard Earnshaw (lists) wrote:
> > And see above for the type of fetch spec you'd need to pull and see
> > them locally, with the structure I suggest, you can even write
> >
> >   fetch = refs/vendors/ibm/heads/*:refs/remotes/origin/ibm/*
> >
> > and only the ibm sub-part of that hierarchy would be fetched.
>
> IMO, it is valuable to have user and vendor branches under the default
> refs/heads/* hierarchy.  I find it useful to see that IBM's or Arm's
> branches were updated since I last pulled from upstream.  The fact that
> branches were updated means that there is development I may want to
> look at or keep note of.

Yes.  And while that can be configured, it has to be done every time
someone makes a new clone.  And inevitably people will screw this up,
and inevitably it causes more problems than it helps when people have
different configurations for this.

> I feel strongly about vendor branches, and much less strongly about
> user branches.  Individual users can be less careful in following best
> git practices, can commit random stuff and rewrite history of their
> branches.

Absolutely.  This should be encouraged even, imo: people should make
their "feature" branches contain exactly what they intend to commit.


Segher
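The fetch spec Richard quotes can be exercised end to end against a local stand-in for the upstream repository (all names here are hypothetical).  A vendor pushes a branch outside refs/heads/*, and other clones only see it after opting in with the extra fetch line:

```shell
# Local stand-in for upstream; "ibm" and the branch name are made up.
git init -q --bare upstream.git
git clone -q upstream.git work && cd work
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
git push -q origin HEAD:refs/heads/master
# A vendor pushes a branch outside refs/heads/*:
git push -q origin HEAD:refs/vendors/ibm/heads/ibm-topic
# It is invisible to a default fetch; opting in takes one config line:
git config --add remote.origin.fetch \
    'refs/vendors/ibm/heads/*:refs/remotes/origin/ibm/*'
git fetch -q origin
git rev-parse --verify refs/remotes/origin/ibm/ibm-topic
```

This illustrates both sides of the argument above: the namespace stays tidy by default, but every clone that wants vendor branches must repeat the `git config --add` step by hand.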
Minimal GCC version to compile the trunk
Hi all,

Right now the trunk does not compile with GCC 4.4.7 (the GCC that comes
with CentOS 6; yes, I know it is old) after revision 277200 (October 19).
The error message is:

../../gcc/gcc/cp/cvt.c:1043: error: operands to ?: have different types ‘escaped_string’ and ‘const char [1]’
../../gcc/gcc/cp/cvt.c:1060: error: operands to ?: have different types ‘escaped_string’ and ‘const char [1]’

Is it acceptable to put in a workaround for this?

Thanks,
Andrew Pinski

PS A simplified testcase which shows the issue (the original #include
lines were mangled in the archive; <stdlib.h> and <stddef.h> cover free
and NULL):

#include <stdlib.h>
#include <stddef.h>

class escaped_string
{
public:
  escaped_string () { m_owned = false; m_str = NULL; }
  ~escaped_string () { if (m_owned) free (m_str); }
  operator const char *() const { return m_str; }
  void escape (const char *);
private:
  escaped_string (const escaped_string &) {}
  escaped_string &operator= (const escaped_string &) { return *this; }
  char *m_str;
  bool m_owned;
};

void
f (void)
{
  escaped_string a;
  const char *b = a ? a : "";
}
Re: Proposal for the transition timetable for the move to GIT
On Fri, Dec 06, 2019 at 02:46:04PM -0500, Eric S. Raymond wrote:
> Experience matters at this.  So does staying away from tools like
> git-svn that are known to be bad.

git-svn is an excellent tool, if you use it for something it is fit for.
And that is what Maxim did.

Knowing what tool to use, and how, when, and where, is what experience is.


Segher