gcc.gnu.org/wiki/ – broken because /moin_static1910/ files fail with 404

2022-11-11 Thread Tobias Burnus

Hi all,

this seems to be a very recent regression: https://gcc.gnu.org/wiki/ is 
currently only limited usable.

Looking at the browser console, the problem is:

GET https://gcc.gnu.org/moin_static1910/common/js/common.js[HTTP/2 404
Not Found 131ms]
GEThttps://gcc.gnu.org/moin_static1910/modern/css/common.css[HTTP/2 404
Not Found 131ms] GET
https://gcc.gnu.org/moin_static1910/modern/css/screen.css [HTTP/2 404
Not Found 131ms] GET
https://gcc.gnu.org/moin_static1910/modern/css/print.css [HTTP/2 404 Not
Found 131ms] GET
https://gcc.gnu.org/moin_static1910/modern/css/projection.css [HTTP/2
404 Not Found 131ms] Any idea? Tobias

-
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 
München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas 
Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht 
München, HRB 106955


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Sam James via Gcc


> On 11 Nov 2022, at 03:33, Zack Weinberg  wrote:
> 
> On Thu, Nov 10, 2022, at 10:08 PM, Sam James wrote:
>>> On 10 Nov 2022, at 21:10, Michael Orlitzky  wrote:
>>> While everyone else is discussing big ideas, it would be helpful for me
>>> personally if autoconf just made a release with the latest bugfixes.
>> 
>> Before I dive into the rest of this thread: yes, this is one of
>> my main thoughts on the matter. Autoconf has a huge network
>> effect problem and letting the existing fixes start to propagate
>> would be most helpful.
> 
> It would be relatively easy for me to take a couple hours this weekend and 
> put out a 2.72 release with everything that's already in trunk and nothing 
> else.  Anyone have reasons I _shouldn't_ do that?
> [...]
> 
> I have not been following the y2038 work closely.  Is it going to affect 
> things in a good way or a bad way??

I've started a discussion on libc-alpha about this, but I think it depends on 
how you view
the migration. I've come to the conclusion it's probably good but only after 
thinking
about it a lot. I wish it'd been discussed on the mailing lists first, as it's 
not obvious
that it's okay, and I'm not sure if others will even share my view.

Let's have the conversation there as it'll be easier to track.

Thanks for prompting me to write this up finally.


signature.asc
Description: Message signed with OpenPGP


Re: Links to web pages are broken.

2022-11-11 Thread Martin Liška
On 11/10/22 18:01, Jonathan Wakely wrote:
> Maybe just "docs" or "trunkdocs" or "latestdocs" instead of
> "onlinedocs-new", since that is (1) very long, and (2) will look silly
> in ten years when it's not new and we need to add
> onlinedocs-even-newer 😉

I do support it, it would be probably nicer than the complicated Rewrite rule
Jonathan prepared.

> 
> Or even onlinedocs/latest/ for the new stuff, and leave the old stuff
> there in onlinedocs/ (without linking to it) so that old links work.

I think we should add a new HTML header to the older documentation
saying that's legacy. Something one can see here:
https://matplotlib.org/3.3.4/tutorials/index.html

Martin


Re: Different outputs in Gimple pass dump generated by two different architectures

2022-11-11 Thread Marc Glisse via Gcc

On Thu, 10 Nov 2022, Kevin Lee wrote:


While looking at the failure for gcc.dg/uninit-pred-9_b.c, I observed that
x86-64 and risc-v has a different output for the gimple pass since
r12-4790-g4b3a325f07acebf4
.


Probably since earlier.

What would be causing the difference? Is this intended? Link 
 for details. Thank you!


See LOGICAL_OP_NON_SHORT_CIRCUIT in fold-const.cc (and various discussions 
on the topic in mailing lists and bugzilla).


--
Marc Glisse


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Sam James via Gcc


> On 10 Nov 2022, at 17:16, Zack Weinberg  wrote:
> 
> I’m the closest thing Autoconf has to a lead maintainer at present.
> 
> It’s come to my attention (via https://lwn.net/Articles/913505/ and
> https://fedoraproject.org/wiki/Changes/PortingToModernC) that GCC and
> Clang both plan to disable several “legacy” C language features by
> default in a near-future release (GCC 14, Clang 16) (see the Fedora
> wiki link for a list).  I understand that this change potentially
> breaks a lot of old dusty code, and in particular that
> Autoconf-generated configure scripts use constructs that may *silently
> give the wrong answer to a probe* when a stricter compiler is in use.

Thank you for asking. The fixes in git get us there, I think, as far
as you can, with the exception of the stuff you (and I) mention below.

> 
> [...]
> 
> The biggest remaining (potential) problem, that I’m aware of, is that
> AC_CHECK_FUNC unconditionally declares the function we’re probing for
> as ‘char NAME (void)’, and asks the compiler to call it with no
> arguments, regardless of what its prototype actually is.  It is not
> clear to me whether this will still work with the planned changes to
> the compilers.  Both GCC 12 and Clang 14 have on-by-default warnings
> triggered by ‘extern char memcpy(void);’ (or any other standard
> library function whose prototype is coded into the compiler) and this
> already causes problems for people who run configure scripts with
> CC='cc -Werror'.  Unfortunately this is very hard to fix — we would
> have to build a comprehensive list of library functions into Autoconf,
> mapping each to either its documented prototype or to a header where
> it ought to be declared; in the latter case we would also have to make
> e.g. AC_CHECK_FUNCS([getaddrinfo]) imply AC_CHECK_HEADERS([sys/types.h
> sys/socket.h netdb.h]) which might mess up configure scripts that
> aren’t expecting headers to be probed at that point.
> 
> How important do you think it is for this to be fixed?
> 
> Are there any other changes you would like to see in a near-future
> Autoconf 2.72 in order to make this transition easier?

This might be a WONTFIX but let me mention it just for
the record:
1. AC_CHECK_FUNCS is, as you've noticed, very noisy.

I would support having a hardcoded list for certain CHOSTs
as Rich suggests to reduce noise, but I don't think we can
do this accurately very quickly.

I think for Gentoo's efforts, I might just build up a set
of cache variables for glibc/musl on each arch to
reduce the noise in logs. But that's time consuming
and brittle still, so I'm not sure.

(Note that Gentoo and Fedora are taking two complementary
but different approaches here:
we're diffing config.logs and other compiler
output, in addition to build logs, while Florian for Fedora
Is building a list of functions which *we know* are available
In a specific environment and patching gcc to spit out
logs when something in said list is missing. This mitigates
noise for things like functions in libbsd, which I'm finding
a bit of a pain.)

I'll see what others say.

2. I often have to set the following cache variables to
reduce noise in logs:
* ac_cv_c_undeclared_builtin_options="none needed"
* ac_cv_header_sys_types_h_makedev=no
* gl_cv_compiler_check_decl_option="-Werror=implicit-function-declaration" 
(obviously this is gnulib)
* gl_cv_minmax_in_limits_h=no

I don't know if we can do anything to make these tests smarter
or just leave it as-is. It's fine if we can't, as exporting the cache
vars is not a bad workaround for us when doing testing.

> 
> zw
> 
> p.s. GCC and Clang folks: As long as you’re changing the defaults out
> from under people, can you please also remove the last few predefined
> user-namespace macros (-Dlinux, -Dunix, -Darm, etc) from all the
> -std=gnuXX modes?

I support this as well. This kind of thing has led to endless
bugs in userland, see https://reviews.llvm.org/D137511.



signature.asc
Description: Message signed with OpenPGP


Re: -Wint-conversion, -Wincompatible-pointer-types, -Wpointer-sign: Are they hiding constraint C violations?

2022-11-11 Thread David Brown via Gcc

On 10/11/2022 20:16, Florian Weimer via Gcc wrote:

* Marek Polacek:


On Thu, Nov 10, 2022 at 07:25:21PM +0100, Florian Weimer via Gcc wrote:

GCC accepts various conversions between pointers and ints and different
types of pointers by default, issuing a warning.

I've been reading the (hopefully) relevant partso f the C99 standard,
and it seems to me that C implementations are actually required to
diagnose errors in these cases because they are constraint violations:
the types are not compatible.


It doesn't need to be a hard error, a warning is a diagnostic message,
which is enough to diagnose a violation of any syntax rule or
constraint.

IIRC, the only case where the compiler _must_ emit a hard error is for
#error.


Hmm, you could be right.

The standard says that constraint violations are not undefiend behavior,
but of course it does not define what happens in the presence of a
constraint violation.  So the behavior is undefined by omission.  This
seems to be a contradiction.



Section 5.1.1.3p1 of the C standard covers diagnostics.  (I'm looking at 
the C11 version at the moment, but numbering is mostly consistent 
between C standards.)  If there is at least one constraint violation or 
syntax error in the translation unit, then the compiler must emit at 
least one diagnostic message.  That is all that is required.


The C standard does not (as far as I know) distinguish between "error 
messages" and "warnings", or require that diagnostics stop compilation 
or the production of output files.


So that means a conforming compiler can sum up all warnings and errors 
with a single "You did something wrong" message - and it can still 
produce an object file.  It is even allowed to generate the same message 
when /nothing/ is wrong.  The minimum behaviour to be conforming here is 
not particularly helpful!


Also note that gcc, with default flags, is not a conforming compiler - 
it does not conform to any language standards.  You need at least 
"-std=c99" (or whatever) and "-Wpedantic".  Even then, I think gcc falls 
foul of the rule in 5.1.1.3p1 that says at least one diagnostic must be 
issued for a syntax or constraint violation "even if the behaviour is 
explicitly specified as undefined or implementation-defined".  I am not 
entirely sure, but I think some of the extensions that are enabled even 
in non-gnu standards modes could contradict that.


I personally think the key question for warnings on things like pointer 
compatibility depends on whether the compiler will do what the 
programmer expects.  If you have a target where "int" and "long" are the 
same size, a programmer might use "pointer-to-int" to access a "long", 
and vice-versa.  (This can easily be done accidentally on something like 
32-bit ARM, where "int32_t" is "long" rather than "int".)  If the 
compiler may use this incompatibility for type-based alias analysis and 
optimise on the assumption that the "pointer-to-int" never affects a 
"long", then such mixups should by default be at least a warning, if not 
a hard error.  The primary goal for warnings and error messages must be 
to stop the programmer writing code that is wrong and does not do what 
they expect (as best the compiler can guess what the programmer expects).


The secondary goal is to help the programmer write good quality code, 
and avoid potentially risky constructs - things that might work now, but 
could fail with other compiler versions, flags, targets, etc.  It is not 
unreasonable to have warnings in this category need "-Wall" or explicit 
flags.  (I'd like to see more warnings in gcc by default, and more of 
them as errors, but compatibility with existing build scripts is important.)




I assumed that there was a rule similar to the the rule for #error for
any kind of diagnostic, which would mean that GCC errors are diagnostic
messages in the sense of the standard, but GCC warnings are not.


I believe that both "error" and "warning" messages are "diagnostics" in 
the terms of the standard.


As I said above, the minimum requirements of the standard provide a very 
low bar here.  A useful compiler must do far better (and gcc /does/ do 
far better).




I wonder how C++ handles this.

Thanks,
Florian






Re: gcc.gnu.org/wiki/ – broken because /moin_static1910/ files fail with 404

2022-11-11 Thread Mark Wielaard
Hi Tobias,

On Fri, Nov 11, 2022 at 09:14:17AM +0100, Tobias Burnus wrote:
> this seems to be a very recent regression: https://gcc.gnu.org/wiki/
> is currently only limited usable.
>
> Looking at the browser console, the problem is:
> 
> GET https://gcc.gnu.org/moin_static1910/common/js/common.js [HTTP/2 404 Not 
> Found 131ms]
> GET https://gcc.gnu.org/moin_static1910/modern/css/common.css [HTTP/2 404 Not 
> Found 131ms]
> GET https://gcc.gnu.org/moin_static1910/modern/css/screen.css [HTTP/2 404 Not 
> Found 131ms]
> GET https://gcc.gnu.org/moin_static1910/modern/css/print.css [HTTP/2 404 Not 
> Found 131ms]
> GET https://gcc.gnu.org/moin_static1910/modern/css/projection.css [HTTP/2 404 
> Not Found 131ms]
>
> Any idea? Tobias

I don't know how this happened, but I assume the moin_static1910 link
under /www/gcc/htdocs to the site-package MoinMoin/web/static/htdocs
somehow got misplaced. I added a symlink and all seems fine again.

Cheers,

Mark


Re: gcc.gnu.org/wiki/ – broken because /moin_static1910/ files fail with 404

2022-11-11 Thread Gerald Pfeifer
On Fri, 11 Nov 2022, Mark Wielaard wrote:
> I don't know how this happened, but I assume the moin_static1910 link
> under /www/gcc/htdocs to the site-package MoinMoin/web/static/htdocs
> somehow got misplaced. I added a symlink and all seems fine again.

Thank you, Mark!

I believe this was on me. - I'm pretty sure it happened as I did some 
changes (around install/) in support of Martin's work. User error. :-(

Gerald


Re: Links to web pages are broken.

2022-11-11 Thread Georg-Johann Lay




Am 11.11.22 um 09:48 schrieb Martin Liška:

On 11/10/22 18:01, Jonathan Wakely wrote:

Maybe just "docs" or "trunkdocs" or "latestdocs" instead of
"onlinedocs-new", since that is (1) very long, and (2) will look silly
in ten years when it's not new and we need to add
onlinedocs-even-newer 😉


I do support it, it would be probably nicer than the complicated Rewrite rule
Jonathan prepared.



Or even onlinedocs/latest/ for the new stuff, and leave the old stuff
there in onlinedocs/ (without linking to it) so that old links work.


I think we should add a new HTML header to the older documentation
saying that's legacy. Something one can see here:
https://matplotlib.org/3.3.4/tutorials/index.html

Martin


Why it's legacy?

If I want docs for v12.2, then I don't care whether its generated by 
sphinx or texinfo.


Just using a different documentation system won't render the very 
content legacy?


I think it's a good idea to keep the old docs under their known URLs.


BTW, also search results from search engines are all 404.


Johann




Re: Links to web pages are broken.

2022-11-11 Thread Martin Liška
On 11/11/22 13:14, Georg-Johann Lay wrote:
> 
> 
> Am 11.11.22 um 09:48 schrieb Martin Liška:
>> On 11/10/22 18:01, Jonathan Wakely wrote:
>>> Maybe just "docs" or "trunkdocs" or "latestdocs" instead of
>>> "onlinedocs-new", since that is (1) very long, and (2) will look silly
>>> in ten years when it's not new and we need to add
>>> onlinedocs-even-newer 😉
>>
>> I do support it, it would be probably nicer than the complicated Rewrite rule
>> Jonathan prepared.
>>
>>>
>>> Or even onlinedocs/latest/ for the new stuff, and leave the old stuff
>>> there in onlinedocs/ (without linking to it) so that old links work.
>>
>> I think we should add a new HTML header to the older documentation
>> saying that's legacy. Something one can see here:
>> https://matplotlib.org/3.3.4/tutorials/index.html
>>
>> Martin
> 
> Why it's legacy?
> 
> If I want docs for v12.2, then I don't care whether its generated by sphinx 
> or texinfo.
> 
> Just using a different documentation system won't render the very content 
> legacy?

Sure, that will be very same story only we'll have an older releases using 
Sphinx.
I'm not fully convinced, but e.g. adding the banner for all unsupported 
branches seems
reasonable to me. But I don't have a strong opinion about it.

> 
> I think it's a good idea to keep the old docs under their known URLs.
> 
> 
> BTW, also search results from search engines are all 404.

You mean Google links, right? Correct, so that's why we incline now to keeping
old links and introducing a new url /docs

Thanks for comments,
Martin

> 
> 
> Johann
> 
> 



Handling of large stack objects in GPU code generation -- maybe transform into heap allocation?

2022-11-11 Thread Thomas Schwinge
Hi!

For example, for Fortran code like:

write (*,*) "Hello world"

..., 'gfortran' creates:

struct __st_parameter_dt dt_parm.0;

try
  {
dt_parm.0.common.filename = 
&"source-gcc/libgomp/testsuite/libgomp.oacc-fortran/print-1_.f90"[1]{lb: 1 sz: 
1};
dt_parm.0.common.line = 29;
dt_parm.0.common.flags = 128;
dt_parm.0.common.unit = 6;
_gfortran_st_write (&dt_parm.0);
_gfortran_transfer_character_write (&dt_parm.0, &"Hello world"[1]{lb: 1 
sz: 1}, 11);
_gfortran_st_write_done (&dt_parm.0);
  }
finally
  {
dt_parm.0 = {CLOBBER(eol)};
  }

The issue: the stack object 'dt_parm.0' is a half-KiB in size (yes,
really! -- there's a lot of state in Fortran I/O apparently).  That's a
problem for GPU execution -- here: OpenACC/nvptx -- where typically you
have small stacks.  (For example, GCC/OpenACC/nvptx: 1 KiB per thread;
GCC/OpenMP/nvptx is an exception, because of its use of '-msoft-stack'
"Use custom stacks instead of local memory for automatic storage".)

Now, the Nvidia Driver tries to accomodate for such largish stack usage,
and dynamically increases the per-thread stack as necessary (thereby
potentially reducing parallelism) -- if it manages to understand the call
graph.  In case of libgfortran I/O, it evidently doesn't.  Not being able
to disprove existance of recursion is the common problem, as I've read.
At run time, via 'CU_JIT_INFO_LOG_BUFFER' you then get, for example:

warning : Stack size for entry function 'MAIN__$_omp_fn$0' cannot be 
statically determined

That's still not an actual problem: if the GPU kernel's stack usage still
fits into 1 KiB.  Very often it does, but if, as happens in libgfortran
I/O handling, there is another such 'dt_parm' put onto the stack, the
stack then overflows; device-side SIGSEGV.

(There is, by the way, some similar analysis by Tom de Vries in
 "[nvptx, openacc, openmp, testsuite]
Recursive tests may fail due to thread stack limit".)

Of course, you shouldn't really be doing I/O in GPU kernels, but people
do like their occasional "'printf' debugging", so we ought to make that
work (... without pessimizing any "normal" code).

I assume that generally reducing the size of 'dt_parm' etc. is out of
scope.

There is a way to manually set a per-thread stack size, but it's not
obvious which size to set: that sizes needs to work for the whole GPU
kernel, and should be as low as possible (to maximize parallelism).
I assume that even if GCC did an accurate call graph analysis of the GPU
kernel's maximum stack usage, that still wouldn't help: that's before the
PTX JIT does its own code transformations, including stack spilling.

There exists a 'CU_JIT_LTO' flag to "Enable link-time optimization
(-dlto) for device code".  This might help, assuming that it manages to
simplify the libgfortran I/O code such that the PTX JIT then understands
the call graph.  But: that's available only starting with recent
CUDA 11.4, so not a general solution -- if it works at all, which I've
not tested.

Similarly, we could enable GCC's LTO for device code generation -- but
that's a big project, out of scope at this time.  And again, we don't
know if that at all helps this case.

I see a few options:

(a) Figure out what it is in the libgfortran I/O implementation that
causes "Stack size [...] cannot be statically determined", and re-work
that code to avoid that, or even disable certain things for nvptx, if
feasible.

(b) Also for GCC/OpenACC/nvptx use the GCC/OpenMP/nvptx '-msoft-stack'.
I don't really want to do that however: it does introduce a bit of
complexity in all the generated device code and run-time overhead that we
generally would like to avoid.

(c) I'm contemplating a tweak/compiler pass for transforming such large
stack objects into heap allocation (during nvptx offloading compilation).
'malloc'/'free' do exist; they're slow, but that's not a problem for the
code paths this is to affect.  (Might also add some compile-time
diagnostic, of course.)  Could maybe even limit this to only be used
during libgfortran compilation?  This is then conceptually a bit similar
to (b), but localized to relevant parts only.  Has such a thing been done
before in GCC, that I could build upon?

Any other clever ideas?


Grüße
 Thomas
-
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 
München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas 
Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht 
München, HRB 106955


Re: Handling of large stack objects in GPU code generation -- maybe transform into heap allocation?

2022-11-11 Thread Richard Biener via Gcc
On Fri, Nov 11, 2022 at 3:13 PM Thomas Schwinge  wrote:
>
> Hi!
>
> For example, for Fortran code like:
>
> write (*,*) "Hello world"
>
> ..., 'gfortran' creates:
>
> struct __st_parameter_dt dt_parm.0;
>
> try
>   {
> dt_parm.0.common.filename = 
> &"source-gcc/libgomp/testsuite/libgomp.oacc-fortran/print-1_.f90"[1]{lb: 1 
> sz: 1};
> dt_parm.0.common.line = 29;
> dt_parm.0.common.flags = 128;
> dt_parm.0.common.unit = 6;
> _gfortran_st_write (&dt_parm.0);
> _gfortran_transfer_character_write (&dt_parm.0, &"Hello world"[1]{lb: 
> 1 sz: 1}, 11);
> _gfortran_st_write_done (&dt_parm.0);
>   }
> finally
>   {
> dt_parm.0 = {CLOBBER(eol)};
>   }
>
> The issue: the stack object 'dt_parm.0' is a half-KiB in size (yes,
> really! -- there's a lot of state in Fortran I/O apparently).  That's a
> problem for GPU execution -- here: OpenACC/nvptx -- where typically you
> have small stacks.  (For example, GCC/OpenACC/nvptx: 1 KiB per thread;
> GCC/OpenMP/nvptx is an exception, because of its use of '-msoft-stack'
> "Use custom stacks instead of local memory for automatic storage".)
>
> Now, the Nvidia Driver tries to accomodate for such largish stack usage,
> and dynamically increases the per-thread stack as necessary (thereby
> potentially reducing parallelism) -- if it manages to understand the call
> graph.  In case of libgfortran I/O, it evidently doesn't.  Not being able
> to disprove existance of recursion is the common problem, as I've read.
> At run time, via 'CU_JIT_INFO_LOG_BUFFER' you then get, for example:
>
> warning : Stack size for entry function 'MAIN__$_omp_fn$0' cannot be 
> statically determined
>
> That's still not an actual problem: if the GPU kernel's stack usage still
> fits into 1 KiB.  Very often it does, but if, as happens in libgfortran
> I/O handling, there is another such 'dt_parm' put onto the stack, the
> stack then overflows; device-side SIGSEGV.
>
> (There is, by the way, some similar analysis by Tom de Vries in
>  "[nvptx, openacc, openmp, testsuite]
> Recursive tests may fail due to thread stack limit".)
>
> Of course, you shouldn't really be doing I/O in GPU kernels, but people
> do like their occasional "'printf' debugging", so we ought to make that
> work (... without pessimizing any "normal" code).
>
> I assume that generally reducing the size of 'dt_parm' etc. is out of
> scope.
>
> There is a way to manually set a per-thread stack size, but it's not
> obvious which size to set: that sizes needs to work for the whole GPU
> kernel, and should be as low as possible (to maximize parallelism).
> I assume that even if GCC did an accurate call graph analysis of the GPU
> kernel's maximum stack usage, that still wouldn't help: that's before the
> PTX JIT does its own code transformations, including stack spilling.
>
> There exists a 'CU_JIT_LTO' flag to "Enable link-time optimization
> (-dlto) for device code".  This might help, assuming that it manages to
> simplify the libgfortran I/O code such that the PTX JIT then understands
> the call graph.  But: that's available only starting with recent
> CUDA 11.4, so not a general solution -- if it works at all, which I've
> not tested.
>
> Similarly, we could enable GCC's LTO for device code generation -- but
> that's a big project, out of scope at this time.  And again, we don't
> know if that at all helps this case.
>
> I see a few options:
>
> (a) Figure out what it is in the libgfortran I/O implementation that
> causes "Stack size [...] cannot be statically determined", and re-work
> that code to avoid that, or even disable certain things for nvptx, if
> feasible.
>
> (b) Also for GCC/OpenACC/nvptx use the GCC/OpenMP/nvptx '-msoft-stack'.
> I don't really want to do that however: it does introduce a bit of
> complexity in all the generated device code and run-time overhead that we
> generally would like to avoid.
>
> (c) I'm contemplating a tweak/compiler pass for transforming such large
> stack objects into heap allocation (during nvptx offloading compilation).
> 'malloc'/'free' do exist; they're slow, but that's not a problem for the
> code paths this is to affect.  (Might also add some compile-time
> diagnostic, of course.)  Could maybe even limit this to only be used
> during libgfortran compilation?  This is then conceptually a bit similar
> to (b), but localized to relevant parts only.  Has such a thing been done
> before in GCC, that I could build upon?
>
> Any other clever ideas?

Shrink st_parameter_dt (it's part of the ABI though, kind of).  Lots of the
bloat is from things that are unused for simpler I/O cases (so some
"inheritance" could help), and lots of the bloat is from using
string/length pairs using char * + size_t for what looks like could be
encoded a lot more efficiently.

There's probably not much low-hanging fruit.

Converting to heap allocation is difficult outside of the frontend and yo

Re: Handling of large stack objects in GPU code generation -- maybe transform into heap allocation?

2022-11-11 Thread Janne Blomqvist via Gcc
On Fri, Nov 11, 2022 at 4:13 PM Thomas Schwinge  wrote:
> For example, for Fortran code like:
>
> write (*,*) "Hello world"
>
> ..., 'gfortran' creates:

> The issue: the stack object 'dt_parm.0' is a half-KiB in size (yes,
> really! -- there's a lot of state in Fortran I/O apparently).

> Any other clever ideas?

There's a lot of potential options to set during Fortran I/O, but in
the vast majority of cases only a few are used. So a better library
interface would be to transfer only those options that are used, and
then let the full set of options live in heap memory managed by
libgfortran. Say some kind of simple byte-code format, with an
'opcode' saying which option it is, followed by the value.

See also https://gcc.gnu.org/bugzilla/show_bug.cgi?id=48419 for some
rough ideas in this direction, although I'm not personally working on
GFortran at this time so somebody else would have to pick it up.


-- 
Janne Blomqvist


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Aaron Ballman via Gcc
On Thu, Nov 10, 2022 at 4:05 PM Paul Eggert  wrote:
>
> On 2022-11-10 10:19, Aaron Ballman wrote:
> > In terms of the Clang side of things, I don't think we've formed any
> > sort of official stance on how to handle that yet. It's UB (you can
> > declare the C standard library interface without UB but calling any
> > function with a mismatched signature is UB)
>
> The Autoconf-generated code is never executed, so this is not a runtime
> issue; it's merely an issue of undefined behavior during translation.

FWIW, the only thing the (Clang) compiler is aware of is translation.
So from the frontend perspective, we can't tell the difference between
"trust me this is safe because it never gets executed" and "this is a
CVE". We believe the runtime behavior is sufficiently dangerous to
warrant a conservative view that any call to a function will be a call
that gets executed at runtime, hence a definitive signature mismatch
is something we feel comfortable diagnosing (in some form) by default.

> A
> problem could occur with a picky compiler or linker that rejects modules
> with mismatched function type declarations. Does Clang do that, or
> require or use such a linker? If not, there's no practical problem here.
> If so, it'd be helpful if Clang continued to support its traditional
> behavior that doesn't reject Autoconf's test cases, and for this to be
> the default.

Clang doesn't require such a linker (we work with various system linkers).

> Autoconf arose because one cannot ask something like "Can I call the
> renameat2 function with its usual signature?" in standard C code and one
> must escape into something like the shell to handle such questions. C23
> and GCC and Clang have added a few features to answer such questions,
> such as __has_include. But these features don't go nearly far enough.
> For example, __has_include tells me only whether the include file
>  exists; it won't tell me whether I can successfully include
> , or whether  will declare the function 'bar', or whether
> 'bar' will have a signature compatible with my code's calls to 'bar'.
>
> If I could request a single thing from the C23/GCC/Clang side, I'd ask
> for better facilities to be able to ask such questions within C code,
> without using the shell. Then a good chunk of Autoconf could dry up and
> blow away.
>
> I realize that I'm asking for a lot. For example, a traditional
> implementation cannot answer the renameat2 question without consulting
> the linker. That being said, this ability is essential for modular
> programming at the low level, and if compilers don't provide this
> ability Autoconf will simply have to do the best it can, regardless of
> whether it generates source code that relies on undefined behavior.

This would be challenging for an implementation like Clang where we
work with an arbitrary C runtime library (which may be dynamically
loaded on the target machine) and an arbitrary linker.

~Aaron


Issues with Sphinx

2022-11-11 Thread Andrew Pinski via Gcc
Can we just revert back to texinfo?
Sphinx requires manual page splitting which is a downgrade from texinfo.
Stable URLs and links was something which we pushed for fixes for texinfo too.
And many other issues with sphinx which makes it better if we revert
back to texinfo until those are fixed including compile time is a
problem now.

Thanks,
Andrew


Re: Different outputs in Gimple pass dump generated by two different architectures

2022-11-11 Thread Andrew Pinski via Gcc
On Fri, Nov 11, 2022 at 12:57 AM Marc Glisse via Gcc  wrote:
>
> On Thu, 10 Nov 2022, Kevin Lee wrote:
>
> > While looking at the failure for gcc.dg/uninit-pred-9_b.c, I observed that
> > x86-64 and risc-v has a different output for the gimple pass since
> > r12-4790-g4b3a325f07acebf4
> > .
>
> Probably since earlier.
>
> > What would be causing the difference? Is this intended? Link
> >  for details. Thank you!
>
> See LOGICAL_OP_NON_SHORT_CIRCUIT in fold-const.cc (and various discussions
> on the topic in mailing lists and bugzilla).

I filed https://gcc.gnu.org/PR107642 to cover the issues around
LOGICAL_OP_NON_SHORT_CIRCUIT and BRANCH_COST.
I hope to get some time in the GCC 14 timeframe to flush out some of
these target macro/hooks issue where the
definitions are not so clear.

Thanks,
Andrew

>
> --
> Marc Glisse


Re: why does gccgit require pthread?

2022-11-11 Thread Jonathan Wakely via Gcc
On Mon, 7 Nov 2022 at 13:51, Jonathan Wakely wrote:
>
> On Mon, 7 Nov 2022 at 13:33, LIU Hao wrote:
> >
> > 在 2022-11-07 20:57, Jonathan Wakely 写道:
> > > It would be a lot nicer if playback::context met the C++ Lockable
> > > requirements, and playback::context::compile () could just take a
> > > scoped lock on *this:
> > >
> > >
> >
> > Yeah yeah that makes a lot of sense. Would you please just commit that? I 
> > don't have write access to
> > GCC repo, and it takes a couple of hours for me to bootstrap GCC just for 
> > this tiny change.
>
> Somebody else needs to approve it first. I'll combine our patches and
> test and submit it properly for approval.

Here's a complete patch that actually builds now, although I'm seeing
a stage 2 vs stage 3 comparison error which I don't have time to look
into right now.
commit 5dde4bd09c4706617120a42c5953908ae39b5751
Author: Jonathan Wakely 
Date:   Fri Nov 11 12:48:29 2022

jit: Use std::mutex instead of pthread_mutex_t

This allows JIT to be built with a thread model other than posix, where
pthread isn't available.

By renaming the acquire_mutex () and release_mutex () member functions
to lock() and unlock() we make the playback::context type meet the C++
Lockable requirements. This allows it to be used with a scoped lock
(i.e. RAII) type as std::lock_guard. This automatically releases the
mutex when leaving the scope.

Co-authored-by: LIU Hao 

gcc/jit/ChangeLog:

* jit-playback.cc (playback::context::scoped_lock): Define RAII
lock type.
(playback::context::compile): Use scoped_lock to acquire mutex
for the active playback context.
(jit_mutex): Change to std::mutex.
(playback::context::acquire_mutex): Rename to ...
(playback::context::lock): ... this.
(playback::context::release_mutex): Rename to ...
(playback::context::unlock): ... this.
* jit-playback.h (playback::context): Rename members and declare
scoped_lock.
* jit-recording.cc (INCLUDE_PTHREAD_H): Remove unused define.
* libgccjit.cc (version_mutex): Change to std::mutex.
(struct jit_version_info): Use std::lock_guard to acquire and
release mutex.

gcc/ChangeLog:

* system.h [INCLUDE_MUTEX]: Include header for std::mutex.

diff --git a/gcc/jit/jit-playback.cc b/gcc/jit/jit-playback.cc
index d227d36283a..bf006903a44 100644
--- a/gcc/jit/jit-playback.cc
+++ b/gcc/jit/jit-playback.cc
@@ -19,7 +19,7 @@ along with GCC; see the file COPYING3.  If not see
 .  */
 
 #include "config.h"
-#define INCLUDE_PTHREAD_H
+#define INCLUDE_MUTEX
 #include "system.h"
 #include "coretypes.h"
 #include "target.h"
@@ -2302,6 +2302,20 @@ block (function *func,
   m_label_expr = NULL;
 }
 
+// This is basically std::lock_guard but it can call the private lock/unlock
+// members of playback::context.
+struct playback::context::scoped_lock
+{
+  scoped_lock (context &ctx) : m_ctx (&ctx) { m_ctx->lock (); }
+  ~scoped_lock () { m_ctx->unlock (); }
+
+  context *m_ctx;
+
+  // Not movable or copyable.
+  scoped_lock (scoped_lock &&) = delete;
+  scoped_lock &operator= (scoped_lock &&) = delete;
+};
+
 /* Compile a playback::context:
 
- Use the context's options to cconstruct command-line options, and
@@ -2353,15 +2367,12 @@ compile ()
   m_recording_ctxt->get_all_requested_dumps (&requested_dumps);
 
   /* Acquire the JIT mutex and set "this" as the active playback ctxt.  */
-  acquire_mutex ();
+  scoped_lock lock(*this);
 
   auto_string_vec fake_args;
   make_fake_args (&fake_args, ctxt_progname, &requested_dumps);
   if (errors_occurred ())
-{
-  release_mutex ();
-  return;
-}
+return;
 
   /* This runs the compiler.  */
   toplev toplev (get_timer (), /* external_timer */
@@ -2388,10 +2399,7 @@ compile ()
  followup activities use timevars, which are global state.  */
 
   if (errors_occurred ())
-{
-  release_mutex ();
-  return;
-}
+return;
 
   if (get_bool_option (GCC_JIT_BOOL_OPTION_DUMP_GENERATED_CODE))
 dump_generated_code ();
@@ -2403,8 +2411,6 @@ compile ()
  convert the .s file to the requested output format, and copy it to a
  given file (playback::compile_to_file).  */
   postprocess (ctxt_progname);
-
-  release_mutex ();
 }
 
 /* Implementation of class gcc::jit::playback::compile_to_memory,
@@ -2662,18 +2668,18 @@ playback::compile_to_file::copy_file (const char 
*src_path,
 /* This mutex guards gcc::jit::recording::context::compile, so that only
one thread can be accessing the bulk of GCC's state at once.  */
 
-static pthread_mutex_t jit_mutex = PTHREAD_MUTEX_INITIALIZER;
+static std::mutex jit_mutex;
 
 /* Acquire jit_mutex and set "this" as the active playback ctxt.  */
 
 void
-playback::context::acquire_mutex ()
+playback::context::lock ()
 {
   auto_tim

Re: why does gccgit require pthread?

2022-11-11 Thread Jonathan Wakely via Gcc
On Fri, 11 Nov 2022 at 17:16, Jonathan Wakely wrote:
>
> On Mon, 7 Nov 2022 at 13:51, Jonathan Wakely wrote:
> >
> > On Mon, 7 Nov 2022 at 13:33, LIU Hao wrote:
> > >
> > > 在 2022-11-07 20:57, Jonathan Wakely 写道:
> > > > It would be a lot nicer if playback::context met the C++ Lockable
> > > > requirements, and playback::context::compile () could just take a
> > > > scoped lock on *this:
> > > >
> > > >
> > >
> > > Yeah yeah that makes a lot of sense. Would you please just commit that? I 
> > > don't have write access to
> > > GCC repo, and it takes a couple of hours for me to bootstrap GCC just for 
> > > this tiny change.
> >
> > Somebody else needs to approve it first. I'll combine our patches and
> > test and submit it properly for approval.
>
> Here's a complete patch that actually builds now, although I'm seeing
> a stage 2 vs stage 3 comparison error which I don't have time to look
> into right now.

A clean build fixed that. This patch bootstraps and passes testing on
x86_64-pc-linux-gnu (CentOS 8 Stream).

OK for trunk?


Re: Different outputs in Gimple pass dump generated by two different architectures

2022-11-11 Thread Kevin Lee
On Fri, Nov 11, 2022 at 8:39 AM Andrew Pinski  wrote:
>
> On Fri, Nov 11, 2022 at 12:57 AM Marc Glisse via Gcc  wrote:
> >
> > On Thu, 10 Nov 2022, Kevin Lee wrote:

> > > What would be causing the difference? Is this intended? Link
> > >  for details. Thank you!
> >
> > See LOGICAL_OP_NON_SHORT_CIRCUIT in fold-const.cc (and various discussions
> > on the topic in mailing lists and bugzilla).
>

Thank you for the pointer Marc!

> I filed https://gcc.gnu.org/PR107642 to cover the issues around
> LOGICAL_OP_NON_SHORT_CIRCUIT and BRANCH_COST.
> I hope to get some time in the GCC 14 timeframe to flesh out some of
> these target macro/hook issues where the
> definitions are not so clear.
>

This PR was a great explanation for the macro. Thanks!


Re: [PATCH] Various pages: SYNOPSIS: Use VLA syntax in function parameters

2022-11-11 Thread Martin Uecker via Gcc
Am Donnerstag, den 10.11.2022, 23:19 + schrieb Joseph Myers:
> On Thu, 10 Nov 2022, Martin Uecker via Gcc wrote:
> 
> > One problem with WG14 papers is that people put in too much,
> > because the overhead is so high and the standard is not updated
> > very often.  It would be better to build such feature more
> > incrementally, which could be done more easily with a compiler
> > extension.  One could start supporting just [.x] but not more
> > complicated expressions.
> 
> Even a compiler extension requires the level of detail of specification 
> that you get with a WG14 paper (and the level of work on finding bugs in 
> that specification), to avoid the problem we've had before with too many 
> features added in GCC 2.x days where a poorly defined feature is "whatever 
> the compiler accepts".

I think the effort needed to specify the feature correctly
can be minimized by making the first version of the feature
as simple as possible.  

> If you use .x as the notation but don't limit it to [.x], you have a 
> completely new ambiguity between ordinary identifiers and member names
> 
> struct s { int a; };
> void f(int a, int b[((struct s) { .a = 1 }).a]);
> 
> where it's newly ambiguous whether ".a = 1" is an assignment to the 
> expression ".a" or a use of a designated initializer.

If we only allowed [ . a ] then this example would not be allowed.

If we need more flexibility, we could extend it incrementally.

> (I think that if you add any syntax for this, GNU VLA forward declarations 
> are clearly to be preferred to inventing something new like [.x] which 
> introduces its own problems.)

I also prefer this.

I proposed forward declarations but WG14 and also people in this
discussion did not like them.  If we would actually start using
them, we could propose them again for the next revision.

Martin





Re: Announcement: Porting the Docs to Sphinx - tomorrow

2022-11-11 Thread Gerald Pfeifer
On Tue, 8 Nov 2022, Martin Liška wrote:
> After the migration, people should be able to build (and install) GCC 
> even if they miss Sphinx (similar happens now if you miss makeinfo). 

My nightly *install* (not build) on amd64-unknown-freebsd12.2 broke 
(from what I can tell due to this - it's been working fine most of 
the last several 1000 days):

  if [ -f doc/g++.1 ]; then rm -f 
/home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; /usr/bin/install -c -m 644 
doc/g++.1 /home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; chmod a-x 
/home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; fimake -C 
/scratch/tmp/gerald/GCC-HEAD/gcc/../doc man 
SOURCEDIR=/scratch/tmp/gerald/GCC-HEAD/gcc/fortran/doc/gfortran 
BUILDDIR=/scratch/tmp/gerald/OBJ--0954/gcc/doc/gfortran/man SPHINXBUILD=
  make[3]: make[3]: don't know how to make w. Stop
  make[3]: stopped in /scratch/tmp/gerald/GCC-HEAD/doc
  gmake[2]: *** [/scratch/tmp/gerald/GCC-HEAD/gcc/fortran/Make-lang.in:164: 
doc/gfortran/man/man/gfortran.1] Error 2
  gmake[2]: Leaving directory '/scratch/tmp/gerald/OBJ--0954/gcc'
  gmake[1]: *** [Makefile:5310: install-strip-gcc] Error 2
  gmake[1]: Leaving directory '/scratch/tmp/gerald/OBJ--0954'
  gmake: *** [Makefile:2734: install-strip] Error 2

(This appears to be the case with "make -j1 install-strip". Not sure where 
that "w" target is coming from?)

Gerald


Re: Announcement: Porting the Docs to Sphinx - tomorrow

2022-11-11 Thread Sandra Loosemore

On 11/11/22 13:52, Gerald Pfeifer wrote:

On Tue, 8 Nov 2022, Martin Liška wrote:

After the migration, people should be able to build (and install) GCC
even if they miss Sphinx (similar happens now if you miss makeinfo).


My nightly *install* (not build) on amd64-unknown-freebsd12.2 broke
(from what I can tell due to this - it's been working fine most of
the last several 1000 days):

   if [ -f doc/g++.1 ]; then rm -f 
/home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; /usr/bin/install -c -m 644 
doc/g++.1 /home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; chmod a-x 
/home/gerald/gcc-ref12-amd64/share/man/man1/g++.1; fimake -C 
/scratch/tmp/gerald/GCC-HEAD/gcc/../doc man 
SOURCEDIR=/scratch/tmp/gerald/GCC-HEAD/gcc/fortran/doc/gfortran 
BUILDDIR=/scratch/tmp/gerald/OBJ--0954/gcc/doc/gfortran/man SPHINXBUILD=
   make[3]: make[3]: don't know how to make w. Stop
   make[3]: stopped in /scratch/tmp/gerald/GCC-HEAD/doc
   gmake[2]: *** [/scratch/tmp/gerald/GCC-HEAD/gcc/fortran/Make-lang.in:164: 
doc/gfortran/man/man/gfortran.1] Error 2
   gmake[2]: Leaving directory '/scratch/tmp/gerald/OBJ--0954/gcc'
   gmake[1]: *** [Makefile:5310: install-strip-gcc] Error 2
   gmake[1]: Leaving directory '/scratch/tmp/gerald/OBJ--0954'
   gmake: *** [Makefile:2734: install-strip] Error 2

(This appears to be the case with "make -j1 install-strip". Not sure where
that "w" target is coming from?)


I've seen something similar:  "make install" seems to be passing an 
empty SPHINXBUILD= option to the docs Makefile which is not equipped to 
handle that.  I know the fix is to get a recent-enough version of Sphinx 
installed (and I'm going to work on that over the weekend), but it ought 
to fail more gracefully, or not try to install docs that cannot be built 
without Sphinx.


-Sandra



gcc-11-20221111 is now available

2022-11-11 Thread GCC Administrator via Gcc
Snapshot gcc-11-20221111 is now available on
  https://gcc.gnu.org/pub/gcc/snapshots/11-20221111/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 11 git branch
with the following options: git://gcc.gnu.org/git/gcc.git branch 
releases/gcc-11 revision fddd45dc637338e11fad66f1fb7d964102d3466e

You'll find:

 gcc-11-20221111.tar.xz   Complete GCC

  SHA256=2c86932305237c9600dc676e01b5adff39c2957a0776eb938b5de650bc3b3326
  SHA1=070c3ab85f4df500c3fdac0bfc92cb9d39556564

Diffs from 11-20221104 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-11
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Announcement: Porting the Docs to Sphinx - 9. November 2022

2022-11-11 Thread David Malcolm via Gcc
On Mon, 2022-10-17 at 15:28 +0200, Martin Liška wrote:
> Hello.
> 
> Based on the very positive feedback I was given at the Cauldron
> Sphinx Documentation BoF,
> I'm planning migrating the documentation on 9th November. There are
> still some minor comments
> from Sandra when it comes to the PDF output, but we can address that
> once the conversion is done.
> 
> The reason I'm sending the email now is that I was waiting for latest
> Sphinx release (5.3.0) that
> simplifies reference format for options and results in much simpler
> Option summary section ([1])
> 
> The current GCC master (using Sphinx 5.3.0) converted docs can be
> seen here:
> https://splichal.eu/scripts/sphinx/
> 
> If you see any issues with the converted documentation, or have a
> feedback about it,
> please reply to this email.
> 
> Cheers,
> Martin
> 
> [1] https://github.com/sphinx-doc/sphinx/pull/10840
> [1]
> https://splichal.eu/scripts/sphinx/gcc/_build/html/gcc-command-options/option-summary.html
> 

FWIW, to help organize the various bugs people have reported due to the
sphinx migration, I've gone ahead and created a tracker bug in GCC
bugzilla.  See:
  https://gcc.gnu.org/bugzilla/showdependencytree.cgi?id=sphinx-migration
aka:
  https://gcc.gnu.org/bugzilla/showdependencytree.cgi?id=107655

Not all of them have "sphinx" in the title, so hope I got them all. 
Please add any I missed, and add any new ones you file as blocking this
bug.

Hope this is helpful
Dave



Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Sam James via Gcc


> On 11 Nov 2022, at 03:33, Zack Weinberg  wrote:
> 
> On Thu, Nov 10, 2022, at 10:08 PM, Sam James wrote:
>>> On 10 Nov 2022, at 21:10, Michael Orlitzky  wrote:
>>> While everyone else is discussing big ideas, it would be helpful for me
>>> personally if autoconf just made a release with the latest bugfixes.
>> 
>> Before I dive into the rest of this thread: yes, this is one of
>> my main thoughts on the matter. Autoconf has a huge network
>> effect problem and letting the existing fixes start to propagate
>> would be most helpful.
> 
> It would be relatively easy for me to take a couple hours this weekend and 
> put out a 2.72 release with everything that's already in trunk and nothing 
> else.  Anyone have reasons I _shouldn't_ do that?
> 
>> Note that in autoconf git, we've also got
>> https://git.savannah.gnu.org/cgit/autoconf.git/commit/?id=f6657256a37da44c987c04bf9cd75575dfca3b60
>> which is going to affect time_t efforts too
> 
> I have not been following the y2038 work closely.  Is it going to affect 
> things in a good way or a bad way??
> 

Back to the original thread: I suspect it might be a better idea to 
(temporarily) revert the two changes and omit them from 2.72 to allow the other 
changes to get out.

That's not a judgement on whether the changes will ultimately remain in 
autoconf, I'm just
hesitant to allow a discussion I've kicked off to derail something that we were 
planning
on doing anyway.

What do you think?




Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Paul Eggert

On 2022-11-11 15:25, Sam James wrote:

That's not a judgement on whether the changes will ultimately remain in 
autoconf, I'm just
hesitant to allow a discussion I've kicked off to derail something that we were 
planning
on doing anyway.

What do you think?


I'm hesitant to do that partly because the changes to _TIME_BITS are 
already released in multiple packages and need to be dealt with, 
regardless of whether they're backed out of Autoconf. This is because 
they've been in Gnulib since July and several packages based on these 
Gnulib changes have been released since then. Current Gnulib assumes 
these changes will appear in the next Autoconf release; if that's not 
true, we'll need to upgrade Gnulib and in the meantime the other 
packages released since July would still have the changes whatever we do 
with Gnulib and/or Autoconf.


Since distros need to deal with the issue anyway, regardless of what 
Autoconf and/or Gnulib does, I don't see why backing the changes out of 
Autoconf will help all that much.


It should be pretty easy for a distro to say "hold on, I don't want 64-bit 
time_t yet" without changing either Autoconf or Gnulib, so if you want to 
go that route please feel free to do so.


Re: [PATCH] Various pages: SYNOPSIS: Use VLA syntax in function parameters

2022-11-11 Thread Joseph Myers
On Fri, 11 Nov 2022, Martin Uecker via Gcc wrote:

> > Even a compiler extension requires the level of detail of specification 
> > that you get with a WG14 paper (and the level of work on finding bugs in 
> > that specification), to avoid the problem we've had before with too many 
> > features added in GCC 2.x days where a poorly defined feature is "whatever 
> > the compiler accepts".
> 
> I think the effort needed to specify the feature correctly
> can be minimized by making the first version of the feature
> as simple as possible.  

The version of constexpr in the current C2x working draft is more or less 
as simple as possible.  It also went through lots of revisions to get 
there.  I'm currently testing an implementation of C2x constexpr for GCC 
13, and there are still several issues with the specification I found in 
the implementation process, beyond those raised in WG14 discussions, for 
which I'll need to raise NB comments to clarify things.

I think that illustrates that you need the several iterations on the 
specification process, *and* making it as simple as possible, *and* 
getting implementation experience, *and* the implementation experience 
being with a close eye to what it implies for all the details in the 
specification rather than just getting something vaguely functional but 
not clearly specified.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Zack Weinberg via Gcc
Nick Bowler  writes:
> My gut feeling is that Autoconf should just determine the necessary
> options to get compatible behaviour out of these modern compilers, at
> least for the purpose of running configure tests.  For example, Autoconf
> should probably build the AC_CHECK_FUNC programs using gcc's
> -fno-builtin option

I fear this will cause more problems than it solves.  Messing with
compiler options inside a configure script has a track record of
clashing with “outer” build tools that expect to be able to dictate the
options.

> It saddens me to see so much breakage happening in "modern C", a
> language that has (until now) a long history of new language features
> being carefully introduced to avoid these sort of problems.

I don’t exactly _disagree_ with this.  Quite a few of the compatibility-
breaking changes going into C2x (promoting ‘bool’ to a true keyword, for
instance, and changing the meaning of an empty argument list in a
function declaration) strike me as unnecessary churn.  However, the
specific set of changes that are under discussion right now—removal of
implicit function declarations, implicit int, and old-style function
definitions from the _default_ language accepted by C compilers—I’m very
much in favor of, because they make life significantly easier for people
writing _new_ code.  It’s not healthy for a language to always
prioritize old code over new code.

(Yes, you _can_ opt in to all three of those changes now, but you have
to type a bunch of -W options.  With my day job hat on, I am very much
looking forward to a day where ‘cc test.c’ errors out on implicit
function declarations, because then I won’t have to _explain_ implicit
function declarations, and why they are dangerous, to my students
anymore.)

>> p.s. GCC and Clang folks: As long as you’re changing the defaults out
>> from under people, can you please also remove the last few predefined
>> user-namespace macros (-Dlinux, -Dunix, -Darm, etc) from all the
>> -std=gnuXX modes?
>
> Meh, even though these macros are a small thing I don't accept the
> "things are breaking anyway so let's break even more things" attitude.

Getting rid of these is another change that will make life easier for
people writing new code.

zw


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Zack Weinberg via Gcc
Rich Felker  writes:
> On Thu, Nov 10, 2022 at 12:16:20PM -0500, Zack Weinberg wrote:
>> The biggest remaining (potential) problem, that I’m aware of, is that
>> AC_CHECK_FUNC unconditionally declares the function we’re probing for
>> as ‘char NAME (void)’, and asks the compiler to call it with no
>> arguments, regardless of what its prototype actually is.
…
> Thanks for bringing this up. It is very important and I am very much
> in favor of making these changes and doing it in a way that existing
> broken and unmaintained software can be made to work just by
> re-generating configure scripts with up-to-date autoconf, even if that
> means hard-coding a list of headers needed to get the right
> declarations and automatically pulling them in.

Right.  In principle, I think AC_CHECK_FUNCS([symbol]), where ‘symbol’ is
in either ISO C or in XSI, could be made to do a check of the form you
suggest at the end of https://ewontfix.com/13/

#include <stdlib.h>
#include <locale.h>
int main()
{
double (*p)(const char *, char **, locale_t) = strtod_l;
}

It’s “just” a matter of compiling that big list of headers and expected
function signatures.  I’d also want to do something to ensure that this
assignment is not optimized out, so the linker has to process an
undefined reference to the symbol.

For symbols that are not covered by the built-in list, we could require
people to indicate the necessary headers and signature somehow.
Hypothetical notation

AC_CHECK_FUNCS([
[argp_parse, [argp.h],
   [error_t], [const struct argp *, int, char **,
   unsigned int, int *, void *]]
])

Note that this still isn’t perfect; if some system provides a function
with an identical type signature, but different *semantics*, to the one
the application wants, no compilation test can detect that.  Autoconf’s
not going to step away from its “avoid compile-and-run tests, that
breaks cross compilation” stance.

> I've been writing/complaining about autoconf doing this wrong for
> decades, with the best writeup around 9 years ago at
> https://ewontfix.com/13/. Part of the reason is that this has bitten
> musl libc users over and over again due to configure finding symbols
> that were intended only as ABI-compat and trying to use them (without
> declarations) at the source level

I vaguely recall some cases where this bit glibc and Apple’s libc as
well.

In principle, you’re supposed to be able to declare some ISO C functions
yourself, e.g.

extern int printf(const char *, ...);
int main(void) {
printf("hello world\n");
}

is strictly conforming per C99, but this bypasses any symbol renaming
applied by stdio.h.

> What I'd like to see happen is complete deprecation of the autoconf
> link-only tests

Do you have a concrete list of documented Autoconf macros that you would
like to see deprecated for this reason?

zw


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Zack Weinberg via Gcc
Florian Weimer  writes:
> based on a limited attempt to get this fixed about three years
> ago, I expect that many of the problematic packages have not had their
> configure scripts regenerated using autoconf for a decade or more.  This
> means that as an autoconf maintainer, you unfortunately won't be able to
> help us much.

I’m sadly not surprised.

This is definitely more work than I can see myself doing on a volunteer
basis, but a 2.69.1 patch release (nothing that's not already on trunk,
just cherry-picking the changes needed to support the newer compilers,
and also newer Perl, Bash, and M4) is a thing that could happen.

> Thanks, these changes are going to be helpful to get a clean run from
> our Fedora tester.

Autoconf’s own test suite is sadly not very thorough.  If you find more
problems I will prioritize them.

> Once you include the header, you also need to know function parameters,
> otherwise you won't be able to form a valid call.

You can assign to a function pointer variable if you know the complete
type signature, which is desirable for other reasons (see reply to Rich).
Needing to know how to form argument *values* could be much more trouble,
but I don’t think it should be necessary.

>> p.s. GCC and Clang folks: As long as you’re changing the defaults out
>> from under people,
>
> Hmph, I wouldn't frame it this way.  We are aware of GCC's special role
> as the system compiler.  We're trying to upstream the changes to sources
> before flipping the compiler default.  (The burden of being a system
> compiler and all that.)  A 25-year transition period evidently wasn't
> enough, so some effort is still needed.  We may conclude that removing
> these extensions is too costly even in 2024.

I didn’t mean to imply that I disliked any of the changes.  In fact,
with my day job (CS professor) hat on, I am quite looking forward to not
having to warn the kids about these legacy features anymore (we don’t
_teach_ them, but they inevitably use them by accident, particularly
implicit function declarations, and then get confused because ‘cc’ with
no -W options doesn’t catch the mistake).

>> can you please also remove the last few predefined
>> user-namespace macros (-Dlinux, -Dunix, -Darm, etc) from all the
>> -std=gnuXX modes?
>
> That's a good point, I'll think about how we can instrument GCC to
> support tracking that.  We won't be able help with -Darm on the Fedora
> side (the AArch64 port doesn't have that, and there's no longer a Fedora
> 32-bit Arm port), but -Dlinux and -Dunix we can help with.

These are also a trip hazard for novices, and the only way to turn them
off is with -std=cXX, which also turns another trip hazard (trigraphs)
*on*… so yeah, anything you can do to help speed up their removal, I
think it’d be worthwhile.

zw


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Sam James via Gcc


> On 12 Nov 2022, at 03:40, Zack Weinberg  wrote:
> 
> Florian Weimer  writes:
>> based on a limited attempt to get this fixed about three years
>> ago, I expect that many of the problematic packages have not had their
>> configure scripts regenerated using autoconf for a decade or more.  This
>> means that as an autoconf maintainer, you unfortunately won't be able to
>> help us much.
> 
> I’m sadly not surprised.
> 
> This is definitely more work than I can see myself doing on a volunteer
> basis, but a 2.69.1 patch release (nothing that's not already on trunk,
> just cherry-picking the changes needed to support the newer compilers,
> and also newer Perl, Bash, and M4) is a thing that could happen.

I didn't want to ask you to do this because I felt fortunate enough
you were volunteering to handle 2.72, but this would indeed be a help,
because then I won't have to try persuade people they should totally upgrade,
and it should happen naturally enough with distro upgrades.

If you are willing, that would be welcome.

Of course, we'll have to go lobby them, but that is what it is :)





Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Joseph Myers
On Fri, 11 Nov 2022, Zack Weinberg via Gcc wrote:

> These are also a trip hazard for novices, and the only way to turn them
> off is with -std=cXX, which also turns another trip hazard (trigraphs)
> *on*… so yeah, anything you can do to help speed up their removal, I
> think it’d be worthwhile.

As of GCC 13, -std=c2x will disable trigraphs, since they've been removed 
from C2x.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: How can Autoconf help with the transition to stricter compilation defaults?

2022-11-11 Thread Sam James via Gcc


> On 12 Nov 2022, at 00:53, Paul Eggert  wrote:
> 
> On 2022-11-11 15:25, Sam James wrote:
>> That's not a judgement on whether the changes will ultimately remain in 
>> autoconf, I'm just
>> hesitant to allow a discussion I've kicked off to derail something that we 
>> were planning
>> on doing anyway.
>> What do you think?
> 
> I'm hesitant to do that partly because the changes to _TIME_BITS are already 
> released in multiple packages and need to be dealt with, regardless of 
> whether they're backed out of Autoconf. This is because they've been in 
> Gnulib since July and several packages based on these Gnulib changes have 
> been released since then. Current Gnulib assumes these changes will appear in 
> the next Autoconf release; if that's not true, we'll need to upgrade Gnulib 
> and in the meantime the other packages released since July would still have 
> the changes whatever we do with Gnulib and/or Autoconf.
> 
> Since distros need to deal with the issue anyway, regardless of what Autoconf 
> and/or Gnulib does, I don't see why backing the changes out of Autoconf will 
> help all that much.
> 
> It should pretty easy for a distro to say "hold on, I don't want 64-bit 
> time_t yet" without changing either Autoconf or Gnulib so if you want to go 
> that route please feel free to do so.

The fact it's already shipped in gnulib & that the "real problem" is in glibc 
IMO means that I don't feel
strongly about reverting it.

You're right that distros have the toggle so as long as we mention this 
prominently enough in NEWS,
I don't have a strong objection to master being released as-is.




Re: [PATCH] Various pages: SYNOPSIS: Use VLA syntax in function parameters

2022-11-11 Thread Martin Uecker via Gcc
Am Samstag, den 12.11.2022, 01:09 + schrieb Joseph Myers:
> On Fri, 11 Nov 2022, Martin Uecker via Gcc wrote:
> 
> > > Even a compiler extension requires the level of detail of specification 
> > > that you get with a WG14 paper (and the level of work on finding bugs in 
> > > that specification), to avoid the problem we've had before with too many 
> > > features added in GCC 2.x days where a poorly defined feature is 
> > > "whatever 
> > > the compiler accepts".
> > 
> > I think the effort needed to specify the feature correctly
> > can be minimized by making the first version of the feature
> > as simple as possible.  
> 
> The version of constexpr in the current C2x working draft is more or less 
> as simple as possible.  It also went through lots of revisions to get 
> there.  I'm currently testing an implementation of C2x constexpr for GCC 
> 13, and there are still several issues with the specification I found in 
> the implementation process, beyond those raised in WG14 discussions, for 
> which I'll need to raise NB comments to clarify things.

constexpr had no implementation experience in C at all, and the
assumption that C++ experience should somehow count is not really
justified.

> I think that illustrates that you need the several iterations on the 
> specification process, *and* making it as simple as possible, *and* 
> getting implementation experience, *and* the implementation experience 
> being with a close eye to what it implies for all the details in the 
> specification rather than just getting something vaguely functional but 
> not clearly specified.

I agree. We should work on specification and on prototyping
new features in parallel.

Martin