Re: Need sanity check on DSE vs expander issue

2019-12-20 Thread Jakub Jelinek
On Fri, Dec 20, 2019 at 08:09:26AM +0100, Richard Biener wrote:
> >That (of course) only writes 80 bits of data because of XFmode, leaving
> >48 bits uninitialized.  We then read those bits, or-ing the
> >uninitialized data into ored_words and all hell breaks loose later.
> >
> >Am I losing my mind?  ISTM that dse and the expander have to agree on
> >how much data is written by the store to m.value.
> 
> It looks like MEM_SIZE is wrong here, so you need to figure out how we arrive at 
> this (I guess TYPE_SIZE vs. MODE_SIZE mismatch is biting us here?)
> 
> That is, either the MEM should have BLKmode or the mode size should match
> MEM_SIZE. Maybe DSE can avoid looking at MEM_SIZE for non-BLKmode MEMs? 

I guess we need some mode flag saying whether the mode has any gaps in it (probably
in real.[ch]) and make sure that XFmode (are there any other modes like this?)
has it set on targets that don't write all the bits.

Jakub
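
For reference, the problem is easy to reproduce by hand. Below is a stand-alone
sketch (an editorial illustration, assuming an x86-64 target where long double
is the 80-bit XFmode type stored in 16 bytes) that poisons the union instead of
zeroing it, so the padding bytes an XFmode store leaves untouched become visible:

  /* Sketch only: assumes x86-64, where sizeof (long double) == 16 but an
     assignment stores just the low 10 bytes (XFmode).  Bytes 10..15 keep
     the 0xAA poison -- exactly the data that leaks through when DSE drops
     the memset in the test case above.  */
  #include <stdio.h>
  #include <string.h>

  union u { long double value; unsigned char bytes[16]; };

  int main (void)
  {
    union u m;
    memset (&m, 0xAA, sizeof (m));   /* poison instead of zeroing */
    m.value = 1.0L;                  /* writes only 10 of the 16 bytes */
    for (size_t i = 0; i < sizeof (m); i++)
      printf ("%02x ", m.bytes[i]);  /* trailing bytes still print "aa" */
    printf ("\n");
    return 0;
  }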



Re: Executable file

2019-12-20 Thread Jonathan Wakely
On Mon, 16 Dec 2019 at 11:49, lindorx  wrote:
> I want to know how to compile the specified executable format with GCC. I
> use GCC on the Windows platform. But I want to compile the ELF format
> file.

You need a cross compiler.

But this is the wrong mailing list for your question. Please use the
gcc-help list instead.


Re: Need sanity check on DSE vs expander issue

2019-12-20 Thread Richard Biener
On December 20, 2019 8:25:18 AM GMT+01:00, Jeff Law  wrote:
>On Fri, 2019-12-20 at 08:09 +0100, Richard Biener wrote:
>> On December 20, 2019 3:20:40 AM GMT+01:00, Jeff Law 
>wrote:
>> > I need a sanity check here.
>> > 
>> > Given this code:
>> > 
>> > > typedef union { long double value; unsigned int word[4]; }
>> > memory_long_double;
>> > > static unsigned int ored_words[4];
>> > > static void add_to_ored_words (long double x)
>> > > {
>> > >   memory_long_double m;
>> > >   size_t i;
>> > >   memset (&m, 0, sizeof (m));
>> > >   m.value = x;
>> > >   for (i = 0; i < 4; i++)
>> > > {
>> > >   ored_words[i] |= m.word[i];
>> > > }
>> > > }
>> > > 
>> > 
>> > DSE is removing the memset as it thinks the assignment to m.value
>is
>> > going to set the entire union.
>> > 
>> > But when we translate that into RTL we use XFmode:
>> > 
>> > > ;; m.value ={v} x_6(D);
>> > > 
>> > > (insn 7 6 0 (set (mem/v/j/c:XF (plus:DI (reg/f:DI 77
>> > virtual-stack-vars)
>> > > (const_int -16 [0xfff0])) [2
>m.value+0
>> > S16 A128])
>> > > (reg/v:XF 86 [ x ])) "j.c":13:11 -1
>> > >  (nil))
>> > > 
>> > 
>> > That (of course) only writes 80 bits of data because of XFmode,
>leaving
>> > 48 bits uninitialized.  We then read those bits, or-ing the
>> > uninitialized data into ored_words and all hell breaks loose later.
>> > 
>> > Am I losing my mind?  ISTM that dse and the expander have to agree
>on
>> > how much data is written by the store to m.value.
>> 
>> It looks like MEM_SIZE is wrong here, so you need to figure out how we
>arrive at this (I guess TYPE_SIZE vs. MODE_SIZE mismatch is biting us
>here?)
>> 
>> That is, either the MEM should have BLKmode or the mode size should
>match
>> MEM_SIZE. Maybe DSE can avoid looking at MEM_SIZE for non-BLKmode
>MEMs? 
>It's gimple DSE that removes the memset, so it shouldn't be mucking
>around with modes at all.  stmt_kills_ref_p seems to think the
>assignment to m.value sets all of m.
>
>The ao_ref for memset looks reasonable:
>
>> (gdb) p *ref
>> $14 = {ref = 0x0, base = 0x77ffbea0, offset = {long>> = {coeffs = {0}}, }, 
>>   size = {> = {coeffs = {128}}, fields>}, max_size = {> = {
>>   coeffs = {128}}, }, ref_alias_set = 0,
>base_alias_set = 0, volatile_p = false}
>> 
>128 bits with a base of VAR_DECL m.
>
>We're looking to see if this statement will kill the ref:
>
>> (gdb) p debug_gimple_stmt (stmt)
>> # .MEM_8 = VDEF <.MEM_6>
>> m.value ={v} x_7(D);
>> $21 = void
>> (gdb) p debug_tree (lhs)
>>  > type volatile XF
>> size 
>> unit-size 
>> align:128 warn_if_not_align:0 symtab:0 alias-set -1
>canonical-type 0x7fffea988690 precision:80>
>> side-effects volatile
>> arg:0 > type sizes-gimplified volatile type_0 BLK size 128> unit-size 
>> align:128 warn_if_not_align:0 symtab:0 alias-set -1
>canonical-type 0x7fffea988348 fields 
>context 
>> pointer_to_this >
>> side-effects addressable volatile used read BLK j.c:10:31
>size  unit-size 0x7fffea7f3d38 16>
>> align:128 warn_if_not_align:0 context 0x7fffea97bd00 add_to_ored_words>
>> chain 0x7fffea9430a8 size_t>
>> used unsigned read DI j.c:11:10
>> size 
>> unit-size 
>> align:64 warn_if_not_align:0 context 0x7fffea97bd00 add_to_ored_words>>>
>> arg:1 > type XF size  unit-size 0x7fffea7f3d38 16>
>> align:128 warn_if_not_align:0 symtab:0 alias-set -1
>canonical-type 0x7fffea8133f0 precision:80
>> pointer_to_this >
>> XF j.c:6:29 size  unit-size
>
>> align:128 warn_if_not_align:0 offset_align 128
>> offset 
>> bit-offset  context
>
>> chain 0x7fffea981f18>
>> TI j.c:6:49 size 
>unit-size 
>> align:32 warn_if_not_align:0 offset_align 128 offset
> bit-offset 0> context >>
>> j.c:13:4 start: j.c:13:3 finish: j.c:13:9>
>> $22 = void
>> 
>
>stmt_kills_ref_p calls get_ref_base_and_extent on that LHS object.  The
>returned base is the same as ref->base.  The returned offset is zero
>with size/max_size of 128 bits.  So according to
>get_ref_base_and_extent the assignment is going to write 128 bits and
>thus kills the memset.
>
>One might argue that's where the problems start -- somewhere in
>get_ref_base_and_extent.  
>
>I'm largely offline the next couple weeks...
>
>I don't have any "real" failures I'm tracking because of this, but it
>does cause some configure generated tests to give the wrong result. 
>Thankfully the inconsistency just doesn't matter for any of the
>packages that are affected.

It's certainly something to look at. I'm largely offline already so please file 
a bug report so we don't forget. I'll have a detailed look next year. 

Thanks, 
Richard. 

>
>Jeff
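
For readers following along, here is a deliberately simplified, self-contained
sketch of the kind of range-based "kill" test being discussed. It is not GCC's
stmt_kills_ref_p, just an illustration of the premise that breaks here: a later
store only makes an earlier store dead if it rewrites every byte the earlier
store covered, and that fails when the declared size of the LHS (TYPE_SIZE, 128
bits) exceeds the bytes the XFmode store actually writes.

  /* Illustration only -- not GCC internals.  Byte ranges stand in for the
     ao_ref offset/size pairs from the debugging session above.  */
  #include <assert.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct byte_range { unsigned long offset, size; };

  /* A later store kills an earlier one only if it covers it completely.  */
  static bool
  covers (struct byte_range later, struct byte_range earlier)
  {
    return later.offset <= earlier.offset
           && later.offset + later.size >= earlier.offset + earlier.size;
  }

  int main (void)
  {
    struct byte_range memset_all = { 0, 16 };  /* memset (&m, 0, sizeof (m)) */
    struct byte_range declared   = { 0, 16 };  /* TYPE_SIZE of m.value       */
    struct byte_range written    = { 0, 10 };  /* bytes an XFmode store sets */

    assert (covers (declared, memset_all));    /* memset looks dead...       */
    assert (!covers (written, memset_all));    /* ...but 6 bytes survive     */
    puts ("kill-check illustration done");
    return 0;
  }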



Re: Does gcc automatically lower optimization level for very large routines?

2019-12-20 Thread Richard Biener
On December 20, 2019 1:41:19 AM GMT+01:00, Jeff Law  wrote:
>On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
>> Hi, Dmitry,
>> 
>> Thanks for the response. 
>> 
>> Yes, routine size alone cannot determine the complexity of the
>routine. Different compiler analyses might have different formulas with
>multiple parameters to compute its complexity. 
>> 
>> However, the common issue is: when the complexity of a specific
>routine for a specific compiler analysis exceeds a threshold, the
>compiler might consume all the available memory and abort the
>compilation. 
>> 
>> Therefore,  in order to avoid the failed compilation due to out of
>memory, some compilers might set a threshold for the complexity of a
>specific compiler analysis (for example, the more aggressive data flow
>analysis), when the threshold is met, the specific aggressive analysis
>will be turned off for this specific routine. Or the optimization level
>will be lowered for the specific routine (and given a warning during
>compilation time for such adjustment).  
>> 
>> I am wondering whether GCC has such capability? Or any option
>provided to increase or decrease the threshold for some of the common
>analysis (for example, data flow)?
>> 
>There are various places where if we hit a limit, then we throttle
>optimization.  But it's not done consistently or pervasively.
>
>Those limits are typically around things like CFG complexity.

Note we also have (not consistently used) -Wmissed-optimizations, which is 
supposed to warn when we run into this kind of limiting, telling the user which 
knob he might be able to tune. 

Richard. 

>We do _not_ try to recover after an out of memory error, or anything
>like that.
>
>jeff
>> 




OpenACC regression and development pace

2019-12-20 Thread Thomas Koenig

Hi, I just saw this:

FAIL: gfortran.dg/goacc/finalize-1.f   -O   scan-tree-dump-times gimple 
"(?n)#pragma omp target oacc_enter_exit_data 
map\\(delete:MEM\\[\\(c_char \\*\\)[^\\]]+\\] \\[len: [^\\]]+\\]\\) 
map\\(to:del_f_p \\[pointer set, len: [0-9]+\\]\\) 
map\\(alloc:del_f_p\\.data \\[pointer assign, bias: [^\\]]+\\]\\) 
finalize$" 1
FAIL: gfortran.dg/goacc/finalize-1.f   -O   scan-tree-dump-times gimple 
"(?n)#pragma omp target oacc_enter_exit_data 
map\\(force_from:MEM\\[\\(c_char \\*\\)[^\\]]+\\] \\[len: [^\\]]+\\]\\) 
map\\(to:cpo_f_p \\[pointer set, len: [0-9]+\\]\\) 
map\\(alloc:cpo_f_p\\.data \\[pointer assign, bias: [^\\]]+\\]\\) 
finalize$" 1


Regarding what is currently going on with OpenACC: I do not claim to
understand this area of the compiler, but it certainly seems that the
current development is too hasty - too many patches flying around,
too many regressions occurring.  It might be better to slow this
down somewhat, and to conduct a more thorough review process before
committing.

Regards

Thomas


Re: Does gcc automatically lower optimization level for very large routines?

2019-12-20 Thread Qing Zhao
Thanks a lot for all these help.

So, currently, if GCC compilation aborts due to this reason, what’s the best 
way for the user to resolve it? 
I added “#pragma GCC optimize (“O1”)” to the large routine in order to 
work around this issue.  
Is there a better way to do it?

Is GCC planning to resolve such issues better in the future?

thanks.

Qing
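
For the archives, the workaround mentioned above can be scoped to just the
offending function. A minimal sketch using GCC's documented optimize pragma and
attribute (the function names are placeholders, not from the original report):

  /* Two equivalent ways to lower the optimization level for one large
     function only; both are documented GCC extensions.  */

  #pragma GCC push_options
  #pragma GCC optimize ("O1")
  void huge_generated_function (void)
  {
    /* ... machine-generated body that is too expensive at -O2/-O3 ... */
  }
  #pragma GCC pop_options

  /* Per-function attribute form, no pragma bracketing needed.  */
  __attribute__ ((optimize ("O1")))
  void another_huge_function (void)
  {
    /* ... */
  }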

> On Dec 20, 2019, at 5:13 AM, Richard Biener  
> wrote:
> 
> On December 20, 2019 1:41:19 AM GMT+01:00, Jeff Law  > wrote:
>> On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
>>> Hi, Dmitry,
>>> 
>>> Thanks for the response. 
>>> 
>>> Yes, routine size alone cannot determine the complexity of the
>> routine. Different compiler analyses might have different formulas with
>> multiple parameters to compute its complexity. 
>>> 
>>> However, the common issue is: when the complexity of a specific
>> routine for a specific compiler analysis exceeds a threshold, the
>> compiler might consume all the available memory and abort the
>> compilation. 
>>> 
>>> Therefore,  in order to avoid the failed compilation due to out of
>> memory, some compilers might set a threshold for the complexity of a
>> specific compiler analysis (for example, the more aggressive data flow
>> analysis), when the threshold is met, the specific aggressive analysis
>> will be turned off for this specific routine. Or the optimization level
>> will be lowered for the specific routine (and given a warning during
>> compilation time for such adjustment).  
>>> 
>>> I am wondering whether GCC has such capability? Or any option
>> provided to increase or decrease the threshold for some of the common
>> analysis (for example, data flow)?
>>> 
>> There are various places where if we hit a limit, then we throttle
>> optimization.  But it's not done consistently or pervasively.
>> 
>> Those limits are typically around things like CFG complexity.
> 
> Note we also have (not consistently used) -Wmissed-optimizations, which is 
> supposed to warn when we run into this kind of limiting, telling the user 
> which knob he might be able to tune. 
> 
> Richard. 
> 
>> We do _not_ try to recover after an out of memory error, or anything
>> like that.
>> 
>> jeff



Re: Does gcc automatically lower optimization level for very large routines?

2019-12-20 Thread Jonathan Wakely
On Fri, 20 Dec 2019 at 16:05, Qing Zhao wrote:
>
> Thanks a lot for all these help.
>
> So, currently, if GCC compilation aborts due to this reason, what’s the best 
> way for the user to resolve it?
> I added “#pragma GCC optimize (“O1”)” to the large routine in order to 
> work around this issue.
> Is there a better way to do it?

Make your functions smaller, or get more RAM.

> Is GCC planning to resolve such issues better in the future?

Nobody has stated any such plans, so you can assume there are no such plans.


Re: Proposal for the transition timetable for the move to GIT

2019-12-20 Thread Segher Boessenkool
Hi!

On Wed, Dec 18, 2019 at 01:43:19PM -0700, Jeff Law wrote:
> On Wed, 2019-12-18 at 13:50 -0600, Segher Boessenkool wrote:
> > On Wed, Dec 18, 2019 at 11:07:11AM -0700, Jeff Law wrote:
> > > > That isn't what I said.  I said that freshly constructed complex 
> > > > software
> > > > will have more and deeper errors than stupid simple scripts do (or I
> > > > implied that at least, maybe it wasn't clear).  And I only say this
> > > > because the opposite was claimed, which is laughable imnsho.
> > > But it's not that freshly constructed, at least not in my mind.  All
> > > the experience ESR has from the python implementation carries to the Go
> > > implementation.
> > 
> > What, writing code in Python made him learn Go?
> ?!?  What does that question have to do with anything?

There is a lot more needed to write reliable programs than just domain
knowledge.  git-svn is used for this exact purpose (converting svn
commits to git commits) millions of times per day, for I-don't-know-
how long already.  Yes, I trust that better than newly written code.

The point is completely moot if we actually verify and compare the
resulting trees, of course.

> Ultimately I don't care about the Unix philosophy.  I'm pragmatic.  If
> reposurgeon gives us a better conversion, and it sounds very much like
> it already does, then the fact that it doesn't follow the Unix
> philosophy is irrelevant to me.

Exactly the same here!

But we need to look at the actual candidate conversions to determine this.
Not just say "I like X better than Y".  That is at best subjective; we can
do better than that.

> > > Where I think we could have done better would have been to get more
> > > concrete detail from ESR about the problems with git-svn.  That was
> > > never forthcoming and it's a disappointment.
> > 
> > Yes.  And as far as I can see you can wait forever for it.  Oh well, we
> > have a lot of experience in waiting.
> Umm, no, I'm not suggesting we wait in any way at all.

And neither am I.  I don't wait for things I do not expect to happen.
And I want a Git conversion soon, not wait more years for it.

> Based on what
> I've heard from Joseph, I'd vote today to go with reposurgeon as soon
> as it's convenient for the people doing the conversion and our
> development cycle.
> 
> This highlights one big issue that we have as a project.  Specifically
> that we don't have a clear cut way to make these kinds of technical
> decisions when there isn't unanimous consent.

This isn't a technical decision really.  Both candidate conversions are
perfect technically already (or will be soon).  All that is left is a)
aesthetics, so everyone wants something else; b) some people are dead
set against falsifying history (including me), while other people think
something that looks slightly better is more important; and c) what tags
and branches do we not carry over from svn at all?  We'll keep the svn
repo around as well, anyway.

If Joseph and Richard agree a candidate is good, then I will agree as
well.  All that can be left is nit-picking, and that is not worth it
anyway: the repository will not be perfect no matter what, people have
made mistakes, we can only fix some superficial ones.  Some of those
are practically important (because they are annoying), but most are not.


Segher


Re: C2X Proposal, merge '.' and '->' C operators

2019-12-20 Thread J Decker
On Tue, Dec 17, 2019 at 2:53 AM Florian Weimer  wrote:

> * J. Decker:
>
> > Here's the gist of what I would propose...
> > https://gist.github.com/d3x0r/f496d0032476ed8b6f980f7ed31280da
> >
> > In C, there are two operators . and -> used to access members of struct
> and
> > union types. These operators are specified such that they are always
> paired
> > in usage; for example, if the left hand expression is a pointer to a
> struct
> > or union, then the operator -> MUST be used. There is no occasion where .
> > and -> may be interchanged, given the existing specification.
>
> This is incompatible with C++.  I don't think it's worthwhile to change
> C in this way.
>

ya, while I only just saw this, I thought shortly after posting that c++
compatibility might be an issue; and they have separate operator overloads
for -> and . (which V8 uses such that `Local lo;`  `lo.IsEmpty();`
and `lo->Get()` are interchangeable).

However, if not specifically overridden it could be possible to make a
similar change there.   (and conversely not having the operator support the
C++ back port wouldn't be an issue).  It's still an error in the native
language context to use '.' on a pointer or '->' on a class/struct... and
the modification is really a patch to that error to just do the other
thing...
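
For anyone skimming the thread, a small sketch of the pairing rule the proposal
targets; the relaxed forms appear only in comments, since no current C compiler
accepts them:

  /* Today's rule: '.' needs a struct/union operand, '->' needs a pointer to
     one, and the two are never interchangeable.  The proposal would let the
     compiler pick the right access (inserting the dereference) itself.  */
  #include <stdio.h>

  struct point { int x, y; };

  int main (void)
  {
    struct point s = { 1, 2 };
    struct point *p = &s;

    printf ("%d %d\n", s.x, p->y);   /* required pairing today           */
    /* printf ("%d\n", p.x);            proposed: '.' through a pointer  */
    /* printf ("%d\n", s->y);           the converse, if '->' is kept    */
    return 0;
  }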



>
> Thanks,
> Florian
>
>


Re: C2X Proposal, merge '.' and '->' C operators

2019-12-20 Thread J Decker
On Fri, Dec 20, 2019 at 11:59 AM J Decker  wrote:

>
>
> On Tue, Dec 17, 2019 at 2:53 AM Florian Weimer  wrote:
>
>> * J. Decker:
>>
>> > Here's the gist of what I would propose...
>> > https://gist.github.com/d3x0r/f496d0032476ed8b6f980f7ed31280da
>> >
>> > In C, there are two operators . and -> used to access members of struct
>> and
>> > union types. These operators are specified such that they are always
>> paired
>> > in usage; for example, if the left hand expression is a pointer to a
>> struct
>> > or union, then the operator -> MUST be used. There is no occasion where
>> .
>> > and -> may be interchanged, given the existing specification.
>>
>> This is incompatible with C++.  I don't think it's worthwhile to change
>> C in this way.
>>
>
> ya, while I only just saw this, I thought shortly after posting that c++
> compatibility might be an issue; and they have separate operators overrides
> for -> and . (which V8 uses such that `Local lo;`  `lo.IsEmpty();`
> and `lo->Get()`  are interchangeable.
>
> However, if not specifically overridden it could be possible to make a
> similar change there.   (and conversely not having the operator support the
> C++ back port wouldn't be an issue).  It's still an error in the native
> language context to use '.' on a pointer or '->' on a class/struct... and
> the modification is really a patch to that error to just do the other
> thing...
>
and add -> on references?


>
>
>
>>
>> Thanks,
>> Florian
>>
>>


Re: C2X Proposal, merge '.' and '->' C operators

2019-12-20 Thread J Decker
On Fri, Dec 20, 2019 at 12:03 PM J Decker  wrote:

>
>
> On Fri, Dec 20, 2019 at 11:59 AM J Decker  wrote:
>
>>
>>
>> On Tue, Dec 17, 2019 at 2:53 AM Florian Weimer 
>> wrote:
>>
>>> * J. Decker:
>>>
>>> > Here's the gist of what I would propose...
>>> > https://gist.github.com/d3x0r/f496d0032476ed8b6f980f7ed31280da
>>> >
>>> > In C, there are two operators . and -> used to access members of
>>> struct and
>>> > union types. These operators are specified such that they are always
>>> paired
>>> > in usage; for example, if the left hand expression is a pointer to a
>>> struct
>>> > or union, then the operator -> MUST be used. There is no occasion
>>> where .
>>> > and -> may be interchanged, given the existing specification.
>>>
>>> This is incompatible with C++.  I don't think it's worthwhile to change
>>> C in this way.
>>>
>>
>> ya, while I only just saw this, I thought shortly after posting that c++
>> compatibility might be an issue; and they have separate operators overrides
>> for -> and . (which V8 uses such that `Local lo;`  `lo.IsEmpty();`
>> and `lo->Get()`  are interchangeable.
>>
>> However, if not specifically overridden it could be possible to make a
>> similar change there.   (and conversely not having the operator support the
>> C++ back port wouldn't be an issue).  It's still an error in the native
>> language context to use '.' on a pointer or '->' on a class/struct... and
>> the modification is really a patch to that error to just do the other
>> thing...
>>
> and add -> on references?
>

My first patch was to make '.' and '->' interchangeable; it could more
specifically promote '.' to be either, with the intent to deprecate '->'
(in, like, 2119).
This might simplify the scope of the modification to C++: just augment the
default '.' to behave as '->' on a native pointer to a struct/class/union
(I'm not sure how the new safe_ptr templated things end up reacting; I'd
imagine they provide operator overloads, which would take precedence...)


>
>
>>
>>
>>
>>>
>>> Thanks,
>>> Florian
>>>
>>>


Re: Commit messages and the move to git

2019-12-20 Thread Joseph Myers
On Thu, 19 Dec 2019, Jonathan Wakely wrote:

> I've attached an updated list to this mail, which removes the items
> we've analysed. There are 531 remaining.

With the current version of the script (including the various whitelisted 
component pairs discussed) and with data freshly downloaded from Bugzilla 
(but with GCC commit messages from a couple of days ago, I'll do a fresh 
conversion run shortly), I now get a list of 481, attached.

-- 
Joseph S. Myers
jos...@codesourcery.com

re PR rtl-optimization/13024 (gcj can't build current rhug [checkme: java SVN 
r73752])
backport: re PR rtl-optimization/12816 (internal compiler error 
pari-2.1.5/Olinux-i686 [checkme: c++ SVN r75851])
revert: re PR tree-optimization/16115 (double-destruction problem with argument 
passing via temporary (breaks auto_ptr) [checkme: c++ SVN r84147])
re PR tree-optimization/15262 ([tree-ssa] Alias analyzer cannot handle 
addressable fields [checkme: c SVN r86398])
re PR rtl-optimization/15857 (Wrong code with optimization >= -O1 [checkme: c++ 
SVN r87429])
re PR c/16403 (Floating-point assignment to int is inconsistent [checkme: 
INVALID SVN r94142])
re PR c++/20505 (internal error when compiling with -ggdb2 and no error with 
-ggdb1 [checkme: debug SVN r97528])
re PR tree-optimization/21562 (Quiet bad codegen (unrolling + tail call 
interaction) [checkme: c SVN r103245])
re PR c/21419 (Accepts writting to const via asm [checkme: tree-optimization 
SVN r104991])
re PR awt/26641 (AWT mouse event handling regression [checkme: libgcj SVN 
r112464])
re PR java/28024 (libjava build failure on Solaris 2.8 (sun4u) [checkme: 
INVALID SVN r114637])
re PR java/28024 (libjava build failure on Solaris 2.8 (sun4u) [checkme: 
INVALID SVN r114639])
re PR driver/30714 (gcj driver doesn't recognize files starting with II 
[checkme: java SVN r121666])
re PR driver/30714 (gcj driver doesn't recognize files starting with II 
[checkme: java SVN r121667])
re PR debug/33739 (Failure of gfortran.dg/literal_character_constant_1_*.F with 
-m64 -g on Darwin [checkme: fortran SVN r130244])
re PR c++/31863 (g++-4.1: out of memory with -O1/-O2 [checkme: middle-end SVN 
r131405])
re PR c/34601 (ICE with undefined enum [checkme: middle-end SVN r131506])
re PR middle-end/34668 (ICE in find_compatible_field with -combine [checkme: c 
SVN r131572])
re PR tree-optimization/34885 (ICE in copy_reference_ops_from_ref, at 
tree-ssa-sccvn.c:574 [checkme: c SVN r131694])
re PR c++/34953 (ICE on destructor + noreturn-function at -O3 [checkme: 
middle-end SVN r131782])
re PR translation/35002 (Incorrect spelling of "hottest" [checkme: c SVN 
r131940])
re PR driver/30330 (-Wdeprecated is not documented [checkme: documentation SVN 
r132131])
re PR c/35526 (ICE on memcpy [checkme: middle-end SVN r133106])
re PR c/35526 (ICE on memcpy [checkme: middle-end SVN r133108])
re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp SVN 
r133195])
re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp SVN 
r133220])
re PR preprocessor/34866 (valgrind error indication in testsuite from 
errors.c:156:cpp_error with gcc.dg/cpp/Wmissingdirs.c [checkme: libcpp SVN 
r134421])
re PR preprocessor/15500 (gcc ignores locale when converting wide string 
literals to wchar_t strings [checkme: libcpp SVN r134441])
re PR preprocessor/33415 (Can't compile .cpp file with UTF-8 BOM. [checkme: 
libcpp SVN r134507])
re PR c++/35652 (offset warning should be given in the front-end [checkme: c 
SVN r134714])
re PR fortran/36117 (Use MPFR for bessel function (optimization, rejects valid 
F2008) [checkme: middle-end SVN r135113])
re PR c++/36185 (wrong code with  -O2 -fgcse-sm [checkme: rtl-optimization SVN 
r135159])
re PR c++/35336 (Broken diagnostic: 'bit_field_ref' not supported by dump_expr 
[checkme: middle-end SVN r136662])
re PR c++/36460 (No space between >'s not always handled in C++0x [checkme: c 
SVN r136919])
re PR middle-end/36571 (Default untyped return for AVR is byte register. 
[checkme: c SVN r136926])
re PR debug/34908 (valgrind error indication from testsuite hashtab.c : 
htab_hash_string [checkme: fortran SVN r136989])
re PR debug/34908 (valgrind error indication from testsuite hashtab.c : 
htab_hash_string [checkme: fortran SVN r137001])
re PR tree-optimization/34371 (verify_stmts failed (incorrect sharing of tree 
nodes) [checkme: fortran SVN r137088])
re PR rtl-optimization/36672 (IRA + -fno-pic ICE in emit_swap_insn, at 
reg-stack.c:829 [checkme: preprocessor SVN r137581])
re PR ada/15479 (Ada manual problems [checkme: documentation SVN r137793])
re PR ada/36957 (ACATS ce3801b ICE emit_move_insn, at expr.c:3381 post tuple 
merge [checkme: tree-optimization SVN r138217])
re PR ada/15479 (Ada manual problems [checkme: documentation SVN r138293])
re PR middle-end/36633 (warning "array subscript is below array bounds" on 
delete [] with -O2, -Wall [checkme: c++ SVN r138425])
re PR c++/17880 (-Wsequence-point doesn't warn inside if, while, do conditions, 
for beg/

Re: Commit messages and the move to git

2019-12-20 Thread Jonathan Wakely
On Fri, 20 Dec 2019 at 20:30, Joseph Myers  wrote:
>
> On Thu, 19 Dec 2019, Jonathan Wakely wrote:
>
> > I've attached an updated list to this mail, which removes the items
> > we've analysed. There are 531 remaining.
>
> With the current version of the script (including the various whitelisted
> component pairs discussed) and with data freshly downloaded from Bugzilla
> (but with GCC commit messages from a couple of days ago, I'll do a fresh
> conversion run shortly), I now get a list of 481, attached.

Should "libcpp" be a compalias of "preprocessor"?

re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
SVN r133195])
re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
SVN r133220])
re PR preprocessor/34866 (valgrind error indication in testsuite from
errors.c:156:cpp_error with gcc.dg/cpp/Wmissingdirs.c [checkme: libcpp
SVN r134421])
re PR preprocessor/15500 (gcc ignores locale when converting wide
string literals to wchar_t strings [checkme: libcpp SVN r134441])
re PR preprocessor/33415 (Can't compile .cpp file with UTF-8 BOM.
[checkme: libcpp SVN r134507])


Re: Commit messages and the move to git

2019-12-20 Thread Joseph Myers
On Fri, 20 Dec 2019, Jonathan Wakely wrote:

> On Fri, 20 Dec 2019 at 20:30, Joseph Myers  wrote:
> >
> > On Thu, 19 Dec 2019, Jonathan Wakely wrote:
> >
> > > I've attached an updated list to this mail, which removes the items
> > > we've analysed. There are 531 remaining.
> >
> > With the current version of the script (including the various whitelisted
> > component pairs discussed) and with data freshly downloaded from Bugzilla
> > (but with GCC commit messages from a couple of days ago, I'll do a fresh
> > conversion run shortly), I now get a list of 481, attached.
> 
> Should "libcpp" be a compalias of "preprocessor"?
> 
> re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
> SVN r133195])
> re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
> SVN r133220])
> re PR preprocessor/34866 (valgrind error indication in testsuite from
> errors.c:156:cpp_error with gcc.dg/cpp/Wmissingdirs.c [checkme: libcpp
> SVN r134421])
> re PR preprocessor/15500 (gcc ignores locale when converting wide
> string literals to wchar_t strings [checkme: libcpp SVN r134441])
> re PR preprocessor/33415 (Can't compile .cpp file with UTF-8 BOM.
> [checkme: libcpp SVN r134507])

Added that alias, thanks.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Commit messages and the move to git

2019-12-20 Thread Jonathan Wakely
On Fri, 20 Dec 2019 at 21:41, Joseph Myers  wrote:
>
> On Fri, 20 Dec 2019, Jonathan Wakely wrote:
>
> > On Fri, 20 Dec 2019 at 20:30, Joseph Myers  wrote:
> > >
> > > On Thu, 19 Dec 2019, Jonathan Wakely wrote:
> > >
> > > > I've attached an updated list to this mail, which removes the items
> > > > we've analysed. There are 531 remaining.
> > >
> > > With the current version of the script (including the various whitelisted
> > > component pairs discussed) and with data freshly downloaded from Bugzilla
> > > (but with GCC commit messages from a couple of days ago, I'll do a fresh
> > > conversion run shortly), I now get a list of 481, attached.
> >
> > Should "libcpp" be a compalias of "preprocessor"?
> >
> > re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
> > SVN r133195])
> > re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp
> > SVN r133220])
> > re PR preprocessor/34866 (valgrind error indication in testsuite from
> > errors.c:156:cpp_error with gcc.dg/cpp/Wmissingdirs.c [checkme: libcpp
> > SVN r134421])
> > re PR preprocessor/15500 (gcc ignores locale when converting wide
> > string literals to wchar_t strings [checkme: libcpp SVN r134441])
> > re PR preprocessor/33415 (Can't compile .cpp file with UTF-8 BOM.
> > [checkme: libcpp SVN r134507])
>
> Added that alias, thanks.

I've sent another pull request fixing another 20. Here is the list
with those 20 removed (and this still includes the libcpp vs
preprocessor ones that will be handled by the new alias).
re PR rtl-optimization/13024 (gcj can't build current rhug [checkme: java SVN 
r73752])
backport: re PR rtl-optimization/12816 (internal compiler error 
pari-2.1.5/Olinux-i686 [checkme: c++ SVN r75851])
revert: re PR tree-optimization/16115 (double-destruction problem with argument 
passing via temporary (breaks auto_ptr) [checkme: c++ SVN r84147])
re PR tree-optimization/15262 ([tree-ssa] Alias analyzer cannot handle 
addressable fields [checkme: c SVN r86398])
re PR rtl-optimization/15857 (Wrong code with optimization >= -O1 [checkme: c++ 
SVN r87429])
re PR c++/20505 (internal error when compiling with -ggdb2 and no error with 
-ggdb1 [checkme: debug SVN r97528])
re PR tree-optimization/21562 (Quiet bad codegen (unrolling + tail call 
interaction) [checkme: c SVN r103245])
re PR awt/26641 (AWT mouse event handling regression [checkme: libgcj SVN 
r112464])
re PR driver/30714 (gcj driver doesn't recognize files starting with II 
[checkme: java SVN r121666])
re PR driver/30714 (gcj driver doesn't recognize files starting with II 
[checkme: java SVN r121667])
re PR debug/33739 (Failure of gfortran.dg/literal_character_constant_1_*.F with 
-m64 -g on Darwin [checkme: fortran SVN r130244])
re PR c++/31863 (g++-4.1: out of memory with -O1/-O2 [checkme: middle-end SVN 
r131405])
re PR c/34601 (ICE with undefined enum [checkme: middle-end SVN r131506])
re PR middle-end/34668 (ICE in find_compatible_field with -combine [checkme: c 
SVN r131572])
re PR tree-optimization/34885 (ICE in copy_reference_ops_from_ref, at 
tree-ssa-sccvn.c:574 [checkme: c SVN r131694])
re PR c++/34953 (ICE on destructor + noreturn-function at -O3 [checkme: 
middle-end SVN r131782])
re PR c/35526 (ICE on memcpy [checkme: middle-end SVN r133106])
re PR c/35526 (ICE on memcpy [checkme: middle-end SVN r133108])
re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp SVN 
r133195])
re PR preprocessor/35322 (ICE with incomplete macro [checkme: libcpp SVN 
r133220])
re PR preprocessor/34866 (valgrind error indication in testsuite from 
errors.c:156:cpp_error with gcc.dg/cpp/Wmissingdirs.c [checkme: libcpp SVN 
r134421])
re PR preprocessor/15500 (gcc ignores locale when converting wide string 
literals to wchar_t strings [checkme: libcpp SVN r134441])
re PR preprocessor/33415 (Can't compile .cpp file with UTF-8 BOM. [checkme: 
libcpp SVN r134507])
re PR fortran/36117 (Use MPFR for bessel function (optimization, rejects valid 
F2008) [checkme: middle-end SVN r135113])
re PR c++/36185 (wrong code with  -O2 -fgcse-sm [checkme: rtl-optimization SVN 
r135159])
re PR c++/35336 (Broken diagnostic: 'bit_field_ref' not supported by dump_expr 
[checkme: middle-end SVN r136662])
re PR middle-end/36571 (Default untyped return for AVR is byte register. 
[checkme: c SVN r136926])
re PR debug/34908 (valgrind error indication from testsuite hashtab.c : 
htab_hash_string [checkme: fortran SVN r136989])
re PR debug/34908 (valgrind error indication from testsuite hashtab.c : 
htab_hash_string [checkme: fortran SVN r137001])
re PR tree-optimization/34371 (verify_stmts failed (incorrect sharing of tree 
nodes) [checkme: fortran SVN r137088])
re PR rtl-optimization/36672 (IRA + -fno-pic ICE in emit_swap_insn, at 
reg-stack.c:829 [checkme: preprocessor SVN r137581])
re PR ada/36957 (ACATS ce3801b ICE emit_move_insn, at expr.c:3381 post tuple 
merge [checkme: tree-optimization SVN r138217])
re PR middle-end/36633 (warning "array subscript is below array boun

gcc-8-20191220 is now available

2019-12-20 Thread gccadmin
Snapshot gcc-8-20191220 is now available on
  https://gcc.gnu.org/pub/gcc/snapshots/8-20191220/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 8 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-8-branch 
revision 279679

You'll find:

 gcc-8-20191220.tar.xz   Complete GCC

  SHA256=9b72338bccbe3642d6125493b71271c4281e6bcb6b8877e433fa5cb25c5a0990
  SHA1=bb3049d2a259637f96d5282260e3ac3ff7bcc51d

Diffs from 8-20191213 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-8
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Does gcc automatically lower optimization level for very large routines?

2019-12-20 Thread Segher Boessenkool
On Fri, Dec 20, 2019 at 02:57:57AM +0100, Dmitry Mikushin wrote:
> Trying to plan memory consumption ahead-of-work contradicts with the nature
> of the graph traversal. Estimation may work very well for something simple
> like linear or log-linear behavior.

Almost everything we do is (almost) linear.

> But many compiler algorithms are known
> to be polynomial or exponential

Many?  There are a few (register allocation is a well-known example),
but anything more than almost linear is quite easy to make blow up.  It
is also not super hard in most cases to make things linear, it just
needs careful attention.

> (or even worse in case of bugs).

Well, sure, if there is a bug *anything* can go wrong ;-)


Segher


Re: Commit messages and the move to git

2019-12-20 Thread Joseph Myers
On Fri, 20 Dec 2019, Jonathan Wakely wrote:

> I've sent another pull request fixing another 20. Here is the list
> with those 20 removed (and this still includes the libcpp vs
> preprocessor ones that will be handled by the new alias).

Thanks, merged.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Does gcc automatically lower optimization level for very large routines?

2019-12-20 Thread Dmitry Mikushin
Yes, much more. When you traverse a CFG, the analysis develops into a tree
(for example, a tree of uses). That is, every basic block could be
*recursively* a root of an individual linear iteration over up to all basic
blocks. Sum them up, and you get a polynomial expression. I don't
insist that this is the best way, but it is often the way it is.
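
A rough, purely illustrative sketch of that shape (not any real GCC pass): each
walk from a single root is linear, but starting one from every basic block
multiplies the total cost by the number of blocks.

  /* Illustration only: per-root linear walks summed over all roots give
     O(blocks * (blocks + edges)) work overall.  */
  #include <stdbool.h>

  struct block { int nsucc; struct block **succ; bool visited; };

  static void
  walk (struct block *b)                    /* one linear DFS */
  {
    if (b->visited)
      return;
    b->visited = true;
    for (int i = 0; i < b->nsucc; i++)
      walk (b->succ[i]);
  }

  static void
  analyze (struct block *blocks, int n)
  {
    for (int root = 0; root < n; root++)    /* n roots ...                  */
      {
        for (int i = 0; i < n; i++)
          blocks[i].visited = false;        /* ... each with a full re-walk */
        walk (&blocks[root]);
      }
  }

  int main (void)
  {
    /* A tiny straight-line CFG: b0 -> b1 -> b2 -> b3.  */
    struct block b[4] = { { 0 } };
    struct block *s0 = &b[1], *s1 = &b[2], *s2 = &b[3];
    b[0].nsucc = 1; b[0].succ = &s0;
    b[1].nsucc = 1; b[1].succ = &s1;
    b[2].nsucc = 1; b[2].succ = &s2;
    analyze (b, 4);
    return 0;
  }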

On Fri, 20 Dec 2019 at 23:52, Segher Boessenkool  wrote:

> On Fri, Dec 20, 2019 at 02:57:57AM +0100, Dmitry Mikushin wrote:
> > Trying to plan memory consumption ahead-of-work contradicts with the
> nature
> > of the graph traversal. Estimation may work very well for something
> simple
> > like linear or log-linear behavior.
>
> Almost everything we do is (almost) linear.
>
> > But many compiler algorithms are known
> > to be polynomial or exponential
>
> Many?  There are a few (register allocation is a well-known example),
> but anything more than almost linear is quite easy to make blow up.  It
> is also not super hard in most cases to make things linear, it just
> needs careful attention.
>
> > (or even worse in case of bugs).
>
> Well, sure, if there is a bug *anything* can go wrong ;-)
>
>
> Segher
>


Re: Commit messages and the move to git

2019-12-20 Thread Jonathan Wakely
On Fri, 20 Dec 2019 at 22:58, Joseph Myers  wrote:
>
> On Fri, 20 Dec 2019, Jonathan Wakely wrote:
>
> > I've sent another pull request fixing another 20. Here is the list
> > with those 20 removed (and this still includes the libcpp vs
> > preprocessor ones that will be handled by the new alias).
>
> Thanks, merged.

... aand another merge request :-)

Going to sleep now though, so that's all from me for now.


Re: Commit messages and the move to git

2019-12-20 Thread Joseph Myers
On Fri, 20 Dec 2019, Jonathan Wakely wrote:

> On Fri, 20 Dec 2019 at 22:58, Joseph Myers  wrote:
> >
> > On Fri, 20 Dec 2019, Jonathan Wakely wrote:
> >
> > > I've sent another pull request fixing another 20. Here is the list
> > > with those 20 removed (and this still includes the libcpp vs
> > > preprocessor ones that will be handled by the new alias).
> >
> > Thanks, merged.
> 
> ... aand another merge request :-)

Thanks, merged.  (This last one won't be in the conversion I'm running 
right now.)

-- 
Joseph S. Myers
jos...@codesourcery.com