Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Martin Liška
On 07/18/2018 04:29 PM, Matthias Klose wrote:
> On 18.07.2018 14:49, Joel Sherrill wrote:
>> On Wed, Jul 18, 2018, 7:15 AM Jonathan Wakely  wrote:
>>
>>> On Wed, 18 Jul 2018 at 13:06, Eric S. Raymond wrote:

 Jonathan Wakely :
> On Wed, 18 Jul 2018 at 11:56, David Malcolm wrote:
>> Python 2.6 onwards is broadly compatible with Python 3.*. and is
>>> about
>> to be 10 years old.  (IIRC it was the system python implementation in
>> RHEL 6).
>
> It is indeed. Without some regular testing with Python 2.6 it could be
> easy to introduce code that doesn't actually work on that old version.
> I did that recently, see PR 86112.
>
> This isn't an objection to using Python (I like it, and anyway I don't
> touch the parts of GCC that you're talking about using it for). Just a
> caution that trying to restrict yourself to a portable subset isn't
> always easy for casual users of a language (also a problem with C++98
> vs C++11 vs C++14 as I'm sure many GCC devs are aware).

 It's not very difficult to write "polyglot" Python that is indifferent
 to which version it runs under.  I had to solve this problem for
 reposurgeon; techniques documented here...
>>>
>>> I don't see any mention of avoiding dict comprehensions (not supported
>>> until 2.7, so unusable on RHEL6/CentOS6 and SLES 11).
>>>
>>> I maintain it's easy to unwittingly use a feature (such as dict
>>> comprehensions) which works fine on your machine, but aren't supported
>>> by all versions you intend to support. Regular testing with the oldest
>>> version is needed to prevent that (which was the point I was making).
>>>
>>
>> I think the RTEMS Community may be a good precedent here. RTEMS is always
>> cross compiled and we are as host agnostic as possible. We use as close to
>> the latest release of GCC, binutils, gdb, and newlib as possible. Our host
>> side tools are in a combination of Python and C++. We use Sphinx for
>> documentation.
>>
>> We are careful to use the Python on RHEL 6 as a baseline. You can build an
>> RTEMS environment there. But at least one of the Sphinx pieces requires a
>> Python of at least RHEL 7 vintage.
>>
>> We have a lot of what I will politely call institutional and large
>> organization users who have to adhere to strict IT policies. I think RHEL 7
>> is common but can't swear there is no RHEL 6 out there and because of that,
>> we set the Python 2.x as a minimum.
>>
>> Yes these are old. And for native new distribution use, it doesn't matter.
>> But for cross and local upgrades, old distributions matter. Particularly
>> those targeting enterprise users. And those are glacially slow.
>>
>> As an aside, it was not being able to build the RTEMS documentation that
>> pushed me off RHEL 6 as my primary personal environment last year. I wanted
>> to be using the oldest distribution I thought was in use in our community.
> 
> doesn't RHEL 6 have overlays for that very reason, to install a newer Python3?
> 
> Please don't start with Python2 anymore.  It will be discontinued in less than
> two years, and then you'll have distributions without Python2 at all.  If you
> don't have a recent Python3, then you can probably build it for your platform
> yourself.

Fully agree with that.  Coming up with new scripts written in Python 2 really
makes no sense.  Even if we agree on transitioning the option scripts to Python,
I'm planning to do that in the time frame of the GCC 10 release.

Martin

> 
> Python3 is also cross-buildable, and much easier to cross-build than guile or 
> perl.
> 
> Matthias
> 



Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Matthias Klose
On 19.07.2018 22:20, Karsten Merker wrote:
> David Malcolm wrote:
>> On Tue, 2018-07-17 at 14:49 +0200, Martin Liška wrote:
>>> I've recently touched the AWK option-generation machinery and it's
>>> quite unpleasant to make any adjustments.  My question is
>>> simple: can we start using a scripting language like Python
>>> and replace the usage of the AWK scripts?  It's probably a question
>>> for the Steering Committee, but I would like to see feedback from
>>> the community.
>>
>> As you know, I'm a fan of Python.  As I noted elsewhere in this
>> thread, one issue is Python 2 vs Python 3 (and minimum
>> versions).  Within Python 2.*, Python 2.6 onwards is broadly
>> compatible with Python 3.*, and there's a well-known common
>> subset that works in both languages.
>>
>> To what extent would this complicate bootstrap?  (I don't think
>> so, in that it would appear to be just an external build-time
>> dependency on the build machine).
>>
>> Would this make it harder for people to build GCC?  It's one
>> more dependency, but CPython is widely available and relatively
>> easy to build.  (I don't have experience of doing bring-up of a
>> new architecture, though).
> 
> Hello,
> 
> I have recently been working on bringing up a new Debian port for
> the riscv64 architecture from scratch, so I would like to add
> some of my personal experiences here.
> 
> Adding a dependency on python for building gcc would make life
> for distribution porters quite a bit harder.  There are a bunch
> of packages that are more or less essential for a modern Linux
> distribution but at the same time extremely difficult to properly
> cross-build.  For a distribution porter trying to bootstrap a new
> architecture, this means that one has to resort to native
> building sooner or later, i.e. one has to build native toolchain
> packages and then work forward from there.  During the bootstrap
> process it is often necessary to break dependency cycles and
> natively rebuild toolchain packages with different build-profiles
> enabled, or to build newer versions of the same toolchain packages
> with bugfixes for the new architecture.
> 
> A dependency on python would mean that to be able to do a native
> rebuild of the toolchain one would need a native python.  The
> problem here is that python has an enormous number of transitive
> build-dependencies and not all of them are easily cross-buildable,
> i.e. one needs a native compiler to build some of them in a
> bootstrap scenario.  This can lead to a catch-22-style situation
> where one would need a native python package and its dependencies
> for natively building the gcc package and a native gcc package
> for building (some of) the dependencies of the python package.
> 
> With awk we don't have this problem as in contrast to python awk
> doesn't pull in any dependencies that aren't required by gcc
> anyway.  From a distro porter's point of view I would therefore
> appreciate very much if it would be possible to avoid adding a
> python dependency to the gcc build process.

I don't see that as an issue.  As said in another reply in this thread, you can
do a staged Python build, which has the same build dependencies as awk (maybe
except for the db/gdbm module).  And if you need to, you can also cross-build
Python, more easily than, for example, Perl or Guile.

Matthias


Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Martin Liška
On 07/19/2018 04:47 PM, Jeff Law wrote:
> On 07/18/2018 03:28 PM, Segher Boessenkool wrote:
>> On Wed, Jul 18, 2018 at 11:51:36AM +0200, Richard Biener wrote:
>>> We already conditionally require Perl for building for some targets so I 
>>> wonder
>>> if using perl would be better ...
>>
>> At least perl is GPL (Python is not).
>>
>>
>> What would the advantage of using Python be?  I haven't heard any yet.
>> Awk may be a bit clunky but at least it is easily readable for anyone.
> I've found python *far* easier to read than awk.  And you can actually
> run a debugger on your python code to see what it's doing.
> Jeff
> 

Yes, the preference for Python is mainly because of the object-oriented
programming paradigm.  It's handy to have functionality encapsulated in methods,
and one can unit-test parts of the script.  Currently the AWK scripts are a mix
of input/output transformation and emission of various printf('#error ...')
sanity checks.  In general the script is not easily readable and contains
multiple global arrays that simulate the encapsulation classes would provide.

Martin



Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Martin Liška
On 07/19/2018 10:20 PM, Karsten Merker wrote:
> David Malcolm wrote:
>> On Tue, 2018-07-17 at 14:49 +0200, Martin Liška wrote:
>>> I've recently touched the AWK option-generation machinery and it's
>>> quite unpleasant to make any adjustments.  My question is
>>> simple: can we start using a scripting language like Python
>>> and replace the usage of the AWK scripts?  It's probably a question
>>> for the Steering Committee, but I would like to see feedback from
>>> the community.
>>
>> As you know, I'm a fan of Python.  As I noted elsewhere in this
>> thread, one issue is Python 2 vs Python 3 (and minimum
>> versions).  Within Python 2.*, Python 2.6 onwards is broadly
>> compatible with Python 3.*, and there's a well-known common
>> subset that works in both languages.
>>
>> To what extent would this complicate bootstrap?  (I don't think
>> so, in that it would appear to be just an external build-time
>> dependency on the build machine).
>>
>> Would this make it harder for people to build GCC?  It's one
>> more dependency, but CPython is widely available and relatively
>> easy to build.  (I don't have experience of doing bring-up of a
>> new architecture, though).
> 
> Hello,
> 
> I have recently been working on bringing up a new Debian port for
> the riscv64 architecture from scratch, so I would like to add
> some of my personal experiences here.
> 
> Adding a dependency on python for building gcc would make life
> for distribution porters quite a bit harder.  There are a bunch
> of packages that are more or less essential for a modern Linux
> distribution but at the same time extremely difficult to properly
> cross-build.  For a distribution porter trying to bootstrap a new
> architecture, this means that one has to resort to native
> building sooner or later, i.e. one has to build native toolchain
> packages and then work forward from there.  During the bootstrap
> process it is often necessary to break dependency cycles and
> natively rebuild toolchain packages with different build-profiles
> enabled, or to build newer versions of the same toolchain packages
> with bugfixes for the new architecture.
> 
> A dependency on python would mean that to be able to do a native
> rebuild of the toolchain one would need a native python.  The
> problem here is that python has an enormous number of transitive
> build-dependencies and not all of them are easily cross-buildable,
> i.e. one needs a native compiler to build some of them in a
> bootstrap scenario.  This can lead to a catch-22-style situation
> where one would need a native python package and its dependencies
> for natively building the gcc package and a native gcc package
> for building (some of) the dependencies of the python package.

Hi.

This problem is covered quite well elsewhere in this thread.  You weren't CC'd,
so please take a look:

https://gcc.gnu.org/ml/gcc/2018-07/msg00233.html

So for your use case, cross-compiling Python (without the fancy modules that
have extra dependencies) should be enough to make the transition to a native
distribution.

Martin

> 
> With awk we don't have this problem as in contrast to python awk
> doesn't pull in any dependencies that aren't required by gcc
> anyway.  From a distro porter's point of view I would therefore
> appreciate very much if it would be possible to avoid adding a
> python dependency to the gcc build process.
> 
> Regards,
> Karsten
> 
> P.S.: I am not subscribed to the list, so it would be nice
>   if you could CC me on replies.
> 



Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Martin Liška
On 07/18/2018 08:03 PM, Matthias Klose wrote:
> On 18.07.2018 19:29, Paul Koning wrote:
>>
>>
>>> On Jul 18, 2018, at 1:22 PM, Boris Kolpackov  
>>> wrote:
>>>
>>> Paul Koning  writes:
>>>
> On Jul 18, 2018, at 11:13 AM, Boris Kolpackov  
> wrote:
>
> I wonder what will be the expected way to obtain a suitable version of
> Python if one is not available on the build machine? With awk I can
> build it from source pretty much anywhere. Is building newer versions
> of Python on older targets a similarly straightforward process (somehow
> I doubt it)? What about Windows?

 It's the same sort of thing: untar the sources, configure, make, make
 install.
> 
> Windows binaries and MacOSX binaries are available from upstream.  The build
> process on *ix targets is autoconf-based and as easy as for awk/gawk.
> 
>>> Will this also install all the Python packages one might plausibly want
>>> to use in GCC?
> 
> Some extension modules depend on external libraries, but even if those don't
> exist, the build succeeds without building these extension modules.  The
> sources come with embedded libs for zlib, libmpdec, and libexpat.  They don't
> include libffi (only in 3.7), libsqlite, libgdbm, libbluetooth, libdb.  I
> suppose the usage of such modules should be banned by policy.  The only thing
> needed is either libdb (Berkeley/Sleepycat) or gdbm to build the anydbm
> module, which might be necessary.
> 
>> It installs the entire standard Python library (corresponding to the 1800+ 
>> pages of the library manual).  I expect that will easily cover anything GCC 
>> might want to do.
> 
> The current usage of awk and perl doesn't include any third-party libraries.
> That's the point where the usage of Python should start as well.

Thank you, Matthias, for the explanation of the dependency issues.  I can
confirm that the option-handling scripts can easily work without any fancy
modules.

Martin

> 
> Matthias
> 



TREE_USED and DECL_READ_P

2018-07-20 Thread 冠人 王 via gcc
GCC version: 7.3.0
In the source file gcc/c/c-decl.c,
lines 1265 to 1281 decide which situations lead to warnings for unused variables.
I am confused about line 1266:
I think checking DECL_READ_P alone is enough for the program to
decide whether to warn about an unused variable, so I could eliminate
"!TREE_USED (p) ||"
Am I right?
The comment for "DECL_READ_P" is:
"In VAR_DECL and PARM_DECL, set when the decl has been used except for being set."
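
A minimal sketch (not from the original message) of the user-visible difference
the two flags correspond to, compiled with -Wall on GCC 7; the file name and the
helper functions are hypothetical, and how exactly the checks around those lines
combine is best confirmed in c-decl.c itself:

/* gcc -Wall -c unused.c */

int source (void);
void sink (int);

void f (void)
{
  int a;              /* never referenced again: -Wunused-variable,
                         the case that !TREE_USED catches  */
  int b = source ();  /* written but never read: -Wunused-but-set-variable,
                         the distinction DECL_READ_P tracks  */
  int c = source ();  /* set and later read: no warning  */
  sink (c);
}

Which warning fires depends on both flags, so whether the !TREE_USED (p) test is
redundant hinges on how the surrounding condition separates these two cases.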




O2 Aggressive Optimisation by GCC

2018-07-20 Thread Umesh Kalappa
Hi All ,

We are looking at the C sample i.e

extern int i, j;

int test()
{
    while (1)
    {
        i++;
        j = 20;
    }
    return 0;
}

command used (gcc 8.1.0):
gcc -S test.c -O2

the generated asm for x86:

.L2:
jmp .L2

We understand that the infinite loop is not deterministic and that the compiler
is free to treat it as UB and do aggressive optimization, but we need to keep
side effects like j = 20 untouched by the optimization.

Please note that using the volatile qualifier for i and j, or an empty
asm("") in the while loop, will stop the optimizer, but we don't want to
do that.

Could anyone from the community please share their insights into why the above
transformation is right?

And without using volatile or a memory barrier, how can we stop the above
transformation?


Thank you in advance.
~Umesh


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Jakub Jelinek
On Fri, Jul 20, 2018 at 05:49:12PM +0530, Umesh Kalappa wrote:
> We are looking at the C sample i.e
> 
> extern int i,j;
> 
> int test()
> {
> while(1)
> {   i++;
> j=20;
> }
> return 0;
> }
> 
> command used :(gcc 8.1.0)
> gcc -S test.c -O2
> 
> the generated asm for x86
> 
> .L2:
> jmp .L2
> 
> we understand that,the infinite loop is not  deterministic ,compiler
> is free to treat as that as UB and do aggressive optimization ,but we
> need keep the side effects like j=20 untouched by optimization .

Don't invoke UB in your code and you won't be surprised; it is as simple as
that.  Once you invoke UB, anything can happen.

Jakub


Re: GCC 8.2 Release Candidate available from gcc.gnu.org

2018-07-20 Thread Bill Seurer

On 07/19/18 07:28, Richard Biener wrote:


A release candidate for GCC 8.2 is available from

  ftp://gcc.gnu.org/pub/gcc/snapshots/8.2.0-RC-20180719/

and shortly its mirrors.  It has been generated from SVN revision 262876.

I have so far bootstrapped and tested the release candidate on
x86_64-unknown-linux-gnu.  Please test it and report any issues to
bugzilla.

If all goes well I'd like to release 8.2 on Thursday, July 26th.



I bootstrapped and tested this on power 7 and power 8 big endian and 
power 8 and power 9 little endian and saw no problems.


--

-Bill Seurer



Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Segher Boessenkool
On Fri, Jul 20, 2018 at 11:49:05AM +0200, Martin Liška wrote:
> Fully agree with that. Coming up with a new scripts written in python2 really
> makes no sense.

Then python cannot be a build requirement for GCC, since some of our
primary targets do not ship python3.


Segher


Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Paul Koning



> On Jul 20, 2018, at 12:37 PM, Segher Boessenkool  
> wrote:
> 
> On Fri, Jul 20, 2018 at 11:49:05AM +0200, Martin Liška wrote:
>> Fully agree with that. Coming up with a new scripts written in python2 really
>> makes no sense.
> 
> Then python cannot be a build requirement for GCC, since some of our
> primary targets do not ship python3.

Is it required that GCC must build with only the stock support elements on the 
primary target platforms?  Or is it allowed to require installing 
prerequisites?  Yes, some platforms are so far behind they still don't ship 
Python 3, but installing it is straightforward.

paul



Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Segher Boessenkool
On Fri, Jul 20, 2018 at 12:54:36PM -0400, Paul Koning wrote:
> 
> 
> > On Jul 20, 2018, at 12:37 PM, Segher Boessenkool 
> >  wrote:
> > 
> > On Fri, Jul 20, 2018 at 11:49:05AM +0200, Martin Liška wrote:
> >> Fully agree with that. Coming up with a new scripts written in python2 
> >> really
> >> makes no sense.
> > 
> > Then python cannot be a build requirement for GCC, since some of our
> > primary targets do not ship python3.
> 
> Is it required that GCC must build with only the stock support elements on 
> the primary target platforms?

Not that I know.  But why should we make it hugely harder for essentially
no benefit?

All the arguments against awk were arguments against *the current scripts*.

And yes, we can (and perhaps should) rewrite those build scripts as C code,
just like all the other gen* we have.

> Or is it allowed to require installing prerequisites?  Yes, some platforms 
> are so far behind they still don't ship Python 3, but installing it is 
> straightforward.

Installing it is not straightforward at all.


Segher


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Martin Sebor

On 07/20/2018 06:19 AM, Umesh Kalappa wrote:

Hi All ,

We are looking at the C sample i.e

extern int i,j;

int test()
{
while(1)
{   i++;
j=20;
}
return 0;
}

command used :(gcc 8.1.0)
gcc -S test.c -O2

the generated asm for x86

.L2:
jmp .L2

we understand that,the infinite loop is not  deterministic ,compiler
is free to treat as that as UB and do aggressive optimization ,but we
need keep the side effects like j=20 untouched by optimization .

Please note that using the volatile qualifier for i and j  or empty
asm("") in the while loop,will stop the optimizer ,but we don't want
do  that.

Anyone from the community ,please share their insights why above
transformation is right ?


The loop isn't necessarily undefined (and compilers don't look
for undefined behavior as opportunities to optimize code), but
because it doesn't terminate it's not possible for a conforming
C program to detect the side-effects in its body.  The only way
to detect it is to examine the object code as you did.

Compilers are allowed (and expected) to transform source code
into efficient object code as long as the transformations don't
change the observable effects of the program.  That's just what
happens in this case.

Martin
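
A small sketch (not part of the reply) of the contrast described above, using a
hypothetical finite variant of the loop: once the function can return, a
conforming caller can observe i and j afterwards, so the stores must survive;
at -O2 they are typically just collapsed into a single update of each variable
rather than one per iteration.

extern int i, j;

int test_finite (int n)
{
    for (int k = 0; k < n; k++)
    {
        i++;     /* observable after return: kept, usually folded to i += n */
        j = 20;  /* observable after return: kept, usually as one guarded store */
    }
    return 0;
}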


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Richard Biener
On July 20, 2018 7:59:10 PM GMT+02:00, Martin Sebor  wrote:
>On 07/20/2018 06:19 AM, Umesh Kalappa wrote:
>> Hi All ,
>>
>> We are looking at the C sample i.e
>>
>> extern int i,j;
>>
>> int test()
>> {
>> while(1)
>> {   i++;
>> j=20;
>> }
>> return 0;
>> }
>>
>> command used :(gcc 8.1.0)
>> gcc -S test.c -O2
>>
>> the generated asm for x86
>>
>> .L2:
>> jmp .L2
>>
>> we understand that,the infinite loop is not  deterministic ,compiler
>> is free to treat as that as UB and do aggressive optimization ,but we
>> need keep the side effects like j=20 untouched by optimization .
>>
>> Please note that using the volatile qualifier for i and j  or empty
>> asm("") in the while loop,will stop the optimizer ,but we don't want
>> do  that.
>>
>> Anyone from the community ,please share their insights why above
>> transformation is right ?
>
>The loop isn't necessarily undefined (and compilers don't look
>for undefined behavior as opportunities to optimize code), but

The variable i overflows.

>because it doesn't terminate it's not possible for a conforming
>C program to detect the side-effects in its body.  The only way
>to detect it is to examine the object code as you did.

I'm not sure we perform this kind of dead code elimination but yes, we could. 
Make i unsigned and check whether that changes behavior. 

>Compilers are allowed (and expected) to transform source code
>into efficient object code as long as the transformations don't
>change the observable effects of the program.  That's just what
>happens in this case.
>
>Martin



RE: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Konovalov, Vadim
> From: Segher Boessenkool
> On Fri, Jul 20, 2018 at 12:54:36PM -0400, Paul Koning wrote:
> > >> Fully agree with that. Coming up with a new scripts written in python2 
> > >> really
> > >> makes no sense.
> > > 
> > > Then python cannot be a build requirement for GCC, since some of our
> > > primary targets do not ship python3.
> > 
> > Is it required that GCC must build with only the stock
> > support elements on the primary target platforms?
> 
> Not that I know.  But why should we make it hugely harder for essentially
> no benefit?
> 
> All the arguments against awk were arguments against *the current scripts*.
> 
> And yes, we can (and perhaps should) rewrite those build scripts as C code,
> just like all the other gen* we have.

+1 

> > Or is it allowed to require installing prerequisites?  Yes,
> > some platforms are so far behind they still don't ship Python 3, but 
> > installing

Sometimes those platforms are not behind; they may lack Python for other
reasons - maybe they are too far ahead and simply don't have Python yet?

> > it is straightforward.
> 
> Installing it is not straightforward at all.

I also agree with this.

Please consider that neither Python 2 nor Python 3 supports a GCC-based build
chain on Windows.

For me, that is a showstopper.


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Martin Sebor

On 07/20/2018 12:17 PM, Richard Biener wrote:

On July 20, 2018 7:59:10 PM GMT+02:00, Martin Sebor  wrote:

On 07/20/2018 06:19 AM, Umesh Kalappa wrote:

Hi All ,

We are looking at the C sample i.e

extern int i,j;

int test()
{
while(1)
{   i++;
j=20;
}
return 0;
}

command used :(gcc 8.1.0)
gcc -S test.c -O2

the generated asm for x86

.L2:
jmp .L2

we understand that,the infinite loop is not  deterministic ,compiler
is free to treat as that as UB and do aggressive optimization ,but we
need keep the side effects like j=20 untouched by optimization .

Please note that using the volatile qualifier for i and j  or empty
asm("") in the while loop,will stop the optimizer ,but we don't want
do  that.

Anyone from the community ,please share their insights why above
transformation is right ?


The loop isn't necessarily undefined (and compilers don't look
for undefined behavior as opportunities to optimize code), but


The variable i overflows.


Good point!

It doesn't change the answer or the behavior of any compiler
I tested (although ICC and Oracle cc both emit the assignment
as well as the increment regardless of whether the variables
are signed).  I don't think it should change it either.

Going further, and as much value as I put on diagnosing bugs,
I also wouldn't see it as helpful to diagnose this kind of
eliminated undefined behavior (so long as the result of
the overflow wasn't used).  What might be helpful, though,
is diagnosing the infinite loop similarly to IBM xlc and
Oracle cc.  Maybe not in the constant case, but in the non-constant
cases it might help catch bugs.

Martin




because it doesn't terminate it's not possible for a conforming
C program to detect the side-effects in its body.  The only way
to detect it is to examine the object code as you did.


I'm not sure we perform this kind of dead code elimination but yes, we could. 
Make i unsigned and check whether that changes behavior.


Compilers are allowed (and expected) to transform source code
into efficient object code as long as the transformations don't
change the observable effects of the program.  That's just what
happens in this case.

Martin






Re: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Matthias Klose
On 20.07.2018 20:53, Konovalov, Vadim wrote:
>> From: Segher Boessenkool
>> On Fri, Jul 20, 2018 at 12:54:36PM -0400, Paul Koning wrote:
> Fully agree with that. Coming up with a new scripts written in python2 
> really
> makes no sense.

 Then python cannot be a build requirement for GCC, since some of our
 primary targets do not ship python3.
>>>
>>> Is it required that GCC must build with only the stock
>>> support elements on the primary target platforms?
>>
>> Not that I know.  But why
>> should we make it hugely harder for essentially
>> no benefit?
>>
>> All the arguments
>> against awk were arguments against *the current scripts*.
>>
>> And yes, we can (and
>> perhaps should) rewrite those build scripts as C code,
>> just like all the other
>> gen* we have.
> 
> +1 
> 
>>> Or is it allowed to require installing prerequisites?  Yes,
>>> some platforms are so far behind they still don't ship Python 3, but 
>>> installing
> 
> Sometimes those are not behind, those could have no python for other reasons 
> - 
> maybe those are too forward? They just don't have python yet?
> 
>>> it is straightforward.
>>
>> Installing it is not straightforward at all.
> 
> I also agree with this;

all == "Installing it is not straightforward" ?

I do question this. I mentioned elsewhere what is needed.

> Please consider that both Python - 2 and 3 - they both do not 
> support build chain on Windows with GCC
> 
> for me, it is a showstopper

This seems to be a different issue.  However, I have to say that I'm not booting
Windows on a regular basis.  Does "build chain on Windows" mean Cygwin?  If yes,
there surely is Python available prebuilt.

Matthias


RE: [RFC] Adding Python as a possible language and its usage

2018-07-20 Thread Konovalov, Vadim
> From: Matthias Klose 
> To: Konovalov, Vadim; Segher Boessenkool;
> On 20.07.2018 20:53, Konovalov, Vadim wrote:
> > Sometimes those are not behind, those could have no python for other 
> > reasons - 
> > maybe those are too forward? They just don't have python yet?
> > 
> >>> it is straightforward.
> >>
> >> Installing it is not straightforward at all.
> > 
> > I also agree with this;
> 
> all == "Installing it is not straightforward" ?
> 
> I do question this. I mentioned elsewhere what is needed.

What is needed is not always present.

> > Please consider that both Python - 2 and 3 - they both do not 
> > support build chain on Windows with GCC
> > 
> > for me, it is a showstopper
> 
> This seems to be a different issue.  However I have to say
> that I'm not booting
> Windows on a regular basis.  Does build chain on Windows
> means Cygwin?  If yes,
> there surely is Python available prebuilt.

Cygwin is a very different platform.  Rebuilding Python on Cygwin is supported,
yes, but that is a very different matter.

I was talking about Windows, not Cygwin.

Rebuilding Python on Windows (without Cygwin) with GCC is not supported.
I was surprised to discover that, and I will gladly accept and use it
when it eventually supports a GCC+Windows rebuild.

There are some blogs on the Internet about someone who eventually did a build
on Windows with GCC, but why wasn't that effort propagated into the Python
mainstream?

Most of those blogs are from 2006 or 2008; they are rather obsolete and cannot
be easily reused.

https://wiki.python.org/moin/WindowsCompilers

mentions:

GCC - MinGW (x86)
MinGW is an alternative C/C++ compiler that works with all Python versions up
to 3.4.

But that is not accurate: the instructions are unfinished and do not work even
when followed as intended.

> Matthias


Re: ChangeLog's: do we have to?

2018-07-20 Thread Joseph Myers
As far as I am concerned, the problem with ChangeLogs is one with the 
format rather than one with having files called ChangeLog.  (The GNU 
Coding Standards have permitted automatic generation of ChangeLog at 
release time from version control information since 1996.)

The main issues I see with the format are:

(a) It's a format designed for pre-VCS days and thus involves writing 
descriptions that duplicate what's plain from reading the change itself, 
rather than descriptions that are written on the basis that readers have 
access to the change itself and so the description can better concentrate 
on a higher-level overview of the changes and information that isn't 
obvious simply from reading them.

(b) It forces all descriptions to be at the level of describing what 
changed in each individual named entity in the source code, whether or not 
that is the most helpful level for understanding that particular change 
(and whether or not entity names either exist or are convenient for the 
ChangeLog - a convention designed for C function names works less well for 
C++).

I think that for many projects these outweigh the benefits from forcing 
someone to read through all their changes carefully to write the ChangeLog 
entry.

In the discussions on bug-standards, RMS was amenable to removing the 
requirement for ChangeLog format and the per-entity descriptions of what 
changed - *if* a sufficiently good tool is provided to produce a list of 
the names of the entities changed by a commit.  I don't see such a tool as 
being useful - I think the cases where people might use such lists can all 
be adequately addressed using existing git facilities to e.g. search for 
changes to an entity rather than list entities involved in a change.  But 
I haven't managed to persuade RMS of that.

So, if someone writes the entity-listing tool, we should be able to get the 
GNU Coding Standards requirement for ChangeLog format removed (whether or 
not it's still necessary to generate some kind of log of changes at 
release time and put it in a file called ChangeLog).  The tool should be 
able to handle, at least, the main cases where diff hunk headers get the 
name of the changed entity wrong (when the relevant funcname line is 
inside the hunk rather than before it, when the changed entity is a 
#define, at least).

https://lists.gnu.org/archive/html/bug-standards/2018-05/msg00011.html

If the requirement is removed from the GNU Coding Standards, individual 
GNU packages can then make their own decisions about whether to keep using 
ChangeLog format or not.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: ChangeLog's: do we have to?

2018-07-20 Thread Joseph Myers
On Thu, 5 Jul 2018, Aldy Hernandez wrote:

> However, even if you could "git log --grep" the commit messages, I assume your
> current use is grepping for function names and such, right? Being able to grep
> a commit message won't solve that problem, or am I missing something?

If you know what function and file you're interested in, you can use git 
log -L to find changes to it (I've used that in the GCC context before), 
or of course other tools such as git blame.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: gcc-gnat for Linux/MIPS-32bit-be, and HPPA2

2018-07-20 Thread Florian Weimer
* Jeff Law:

> On 07/19/2018 02:19 PM, Carlo Pisani wrote:
>> hi
>> is there any chance someone has a working gcc-ada compiler? for
>> - Linux/MIPS (big endian, MIPS3, MIPS4 or MIPS32)
>> - Linux/HPPA2
>> 
>> I have successfully compiled gcc-ada for SGI_IRIX (MIPS4/BE)
>> but ... every attempt to create a cross-compiler(1) fails
>> 
>> on HPPA I have never seen an Ada compiler
> We certainly had it on PA HPUX.  Of course that platform is long dead.
>
> It looks like Debian's PA port has had an Ada compiler in the past.  I'd
> be surprised if they still don't -- once you've built it once it's
> fairly easy to carry forward.

Right, it's still building:



However, I'm not sure if it qualifies as a PA-RISC 2.0 port.  There's
certainly no 64-bit userspace.


Re: Good news, bad news on the repository conversion

2018-07-20 Thread Joseph Myers
I don't see any commits at 
git://thyrsus.com/repositories/gcc-conversion.git since January.  Are 
there further changes that haven't been pushed there?  (For example, I 
sent a few additions to the author map on 13 Feb.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Repo conversion troubles.

2018-07-20 Thread Joseph Myers
On Mon, 9 Jul 2018, Eric S. Raymond wrote:

> Richard Biener :
> > 12 hours from remote I guess? The subversion repository is available 
> > through rsync so you can create a local mirror to work from (we've been 
> > doing that at suse for years) 
> 
> I'm saying I see rsync plus local checkout take 10-12 hours.  I asked Jason
> about this and his response was basically "Well...we don't do that often."

Isn't that a local checkout *of top-level of the repository*, i.e. 
checking out all branches and tags?  Which is indeed something developers 
would never normally do - they'd just check out the particular branches 
they're working on.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Repo conversion troubles.

2018-07-20 Thread Joseph Myers
On Mon, 9 Jul 2018, Alexandre Oliva wrote:

> On Jul  9, 2018, Jeff Law  wrote:
> 
> > On 07/09/2018 01:57 PM, Eric S. Raymond wrote:
> >> Jeff Law :
> >>> I'm not aware of any such merges, but any that occurred most likely
> >>> happened after mid-April when the trunk was re-opened for development.
> 
> >> I'm pretty certain things were still good at r256000.  I've started that
> >> check running.  Not expecting results in less than twelve hours.
> 
> > r256000 would be roughly Christmas 2017.
> 
> When was the RAID/LVM disk corruption incident?  Could it possibly have
> left any of our svn repo metadata in a corrupted way that confuses
> reposurgeon, and that leads to such huge differences?

That was 14/15 Aug 2017, and all the SVN revision data up to r251080 were 
restored from backup within 24 hours or so.  I found no signs of damage to 
revisions from the 24 hours or so between r251080 and the time of the 
corruption when I examined diffs for all those revisions by hand at that 
time.

(If anyone rsynced corrupted old revisions from the repository during the 
window of corruption, those corrupted old revisions might remain in their 
rsynced repository copy because the restoration preserved file times and 
size, just fixing corrupted contents.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Repo conversion troubles.

2018-07-20 Thread Joseph Myers
On Tue, 10 Jul 2018, Jonathan Wakely wrote:

> > Large-scale, I'm afraid.  The context diff is about a GLOC.
> 
> I don't see how that's possible. Most of those files are tiny, or
> change very rarely, so I don't see how that large a diff can happen.

Concretely, the *complete GCC source tree* (trunk, that is) is under 1 GB.  
A complete diff generating the whole source tree from nothing would only 
be about 15 MLOC.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Allan Sandfeld Jensen
On Freitag, 20. Juli 2018 14:19:12 CEST Umesh Kalappa wrote:
> Hi All ,
> 
> We are looking at the C sample i.e
> 
> extern int i,j;
> 
> int test()
> {
> while(1)
> {   i++;
> j=20;
> }
> return 0;
> }
> 
> command used :(gcc 8.1.0)
> gcc -S test.c -O2
> 
> the generated asm for x86
> 
> .L2:
> jmp .L2
> 
> we understand that,the infinite loop is not  deterministic ,compiler
> is free to treat as that as UB and do aggressive optimization ,but we
> need keep the side effects like j=20 untouched by optimization .
> 
> Please note that using the volatile qualifier for i and j  or empty
> asm("") in the while loop,will stop the optimizer ,but we don't want
> do  that.
> 
But you need to do that!  If you want changes to a variable to be observable in
another thread, you need to use either volatile, atomics, or some kind of memory
barrier, implicit or explicit.  It would be the same if the loop weren't
infinite: the compiler would keep the value in a register during the loop and
only write it to memory on exiting the test() function.

'Allan




Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Jonathan Wakely
On Fri, 20 Jul 2018 at 23:06, Allan Sandfeld Jensen wrote:
>
> On Freitag, 20. Juli 2018 14:19:12 CEST Umesh Kalappa wrote:
> > Hi All ,
> >
> > We are looking at the C sample i.e
> >
> > extern int i,j;
> >
> > int test()
> > {
> > while(1)
> > {   i++;
> > j=20;
> > }
> > return 0;
> > }
> >
> > command used :(gcc 8.1.0)
> > gcc -S test.c -O2
> >
> > the generated asm for x86
> >
> > .L2:
> > jmp .L2
> >
> > we understand that,the infinite loop is not  deterministic ,compiler
> > is free to treat as that as UB and do aggressive optimization ,but we
> > need keep the side effects like j=20 untouched by optimization .
> >
> > Please note that using the volatile qualifier for i and j  or empty
> > asm("") in the while loop,will stop the optimizer ,but we don't want
> > do  that.
> >
> But you need to do that! If you want changes to a variable to be observable in
> another thread, you need to use either volatile,

No, volatile doesn't work for that.

http://www.isvolatileusefulwiththreads.in/C/

> atomic, or some kind of
> memory barrier implicit or explicit. This is the same if the loop wasn't
> infinite, the compiler would keep the value in register during the loop and
> only write it to memory on exiting the test() function.
>
> 'Allan
>
>
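
A minimal sketch (not from the thread) of the direction these replies point in:
if i and j are declared with C11 atomics, stores to them become legitimately
observable from another thread, so the optimizer has to keep them even in an
endless loop.  The declarations below are assumptions and would have to change
everywhere the variables are used; memory_order_relaxed is likewise only a
placeholder for whatever ordering the real code needs.

#include <stdatomic.h>

extern atomic_int i, j;

int test (void)
{
    while (1)
    {
        /* atomic read-modify-write; C11 defines it to wrap on overflow */
        atomic_fetch_add_explicit (&i, 1, memory_order_relaxed);
        /* an atomic store another thread may observe; it is not removed */
        atomic_store_explicit (&j, 20, memory_order_relaxed);
    }
    return 0;
}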


Re: O2 Aggressive Optimisation by GCC

2018-07-20 Thread Allan Sandfeld Jensen
On Samstag, 21. Juli 2018 00:21:48 CEST Jonathan Wakely wrote:
> On Fri, 20 Jul 2018 at 23:06, Allan Sandfeld Jensen wrote:
> > On Freitag, 20. Juli 2018 14:19:12 CEST Umesh Kalappa wrote:
> > > Hi All ,
> > > 
> > > We are looking at the C sample i.e
> > > 
> > > extern int i,j;
> > > 
> > > int test()
> > > {
> > > while(1)
> > > {   i++;
> > > 
> > > j=20;
> > > 
> > > }
> > > return 0;
> > > }
> > > 
> > > command used :(gcc 8.1.0)
> > > gcc -S test.c -O2
> > > 
> > > the generated asm for x86
> > > 
> > > .L2:
> > > jmp .L2
> > > 
> > > we understand that,the infinite loop is not  deterministic ,compiler
> > > is free to treat as that as UB and do aggressive optimization ,but we
> > > need keep the side effects like j=20 untouched by optimization .
> > > 
> > > Please note that using the volatile qualifier for i and j  or empty
> > > asm("") in the while loop,will stop the optimizer ,but we don't want
> > > do  that.
> > 
> > But you need to do that! If you want changes to a variable to be
> > observable in another thread, you need to use either volatile,
> 
> No, volatile doesn't work for that.
> 
It does, but you shouldn't use it for that, due to many other reasons (though
the Linux kernel still does).  But if the OP wants to code primitives without
using system calls or atomics, he might as well go the traditional route.

'Allan




Re: Repo conversion troubles.

2018-07-20 Thread Eric S. Raymond
Joseph Myers :
> On Mon, 9 Jul 2018, Eric S. Raymond wrote:
> 
> > Richard Biener :
> > > 12 hours from remote I guess? The subversion repository is available 
> > > through rsync so you can create a local mirror to work from (we've been 
> > > doing that at suse for years) 
> > 
> > I'm saying I see rsync plus local checkout take 10-12 hours.  I asked Jason
> > about this and his response was basically "Well...we don't do that often."
> 
> Isn't that a local checkout *of top-level of the repository*, i.e. 
> checking out all branches and tags?  Which is indeed something developers 
> would never normally do - they'd just check out the particular branches 
> they're working on.

It is.  I have to check out all tags and branches to validate the conversion.
-- 
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




Re: Good news, bad news on the repository conversion

2018-07-20 Thread Eric S. Raymond
Joseph Myers :
> I don't see any commits at 
> git://thyrsus.com/repositories/gcc-conversion.git since January.  Are 
> there further changes that haven't been pushed there?  (For example, I 
> sent a few additions to the author map on 13 Feb.)

Yes, that copy is rather stale.  I need to do some annoying sysadmin stuff
on the downstairs machine to get it live again.

Anything you sent me by email is merged into the live repo here on the Beast.
-- 
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




Re: Repo conversion troubles.

2018-07-20 Thread Eric S. Raymond
Joseph Myers :
> On Mon, 9 Jul 2018, Alexandre Oliva wrote:
> 
> > On Jul  9, 2018, Jeff Law  wrote:
> > 
> > > On 07/09/2018 01:57 PM, Eric S. Raymond wrote:
> > >> Jeff Law :
> > >>> I'm not aware of any such merges, but any that occurred most likely
> > >>> happened after mid-April when the trunk was re-opened for development.
> > 
> > >> I'm pretty certain things were still good at r256000.  I've started that
> > >> check running.  Not expecting results in less than twelve hours.
> > 
> > > r256000 would be roughly Christmas 2017.
> > 
> > When was the RAID/LVM disk corruption incident?  Could it possibly have
> > left any of our svn repo metadata in a corrupted way that confuses
> > reposurgeon, and that leads to such huge differences?
> 
> That was 14/15 Aug 2017, and all the SVN revision data up to r251080 were 
> restored from backup within 24 hours or so.  I found no signs of damage to 
> revisions from the 24 hours or so between r251080 and the time of the 
> corruption when I examined diffs for all those revisions by hand at that 
> time.

Agreed. I don't think that incident is at the root of the problems.
-- 
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




That light at the end of the tunnel?

2018-07-20 Thread Eric S. Raymond
That light at the end of the tunnel turned out to be an oncoming train.

Until recently I thought the conversion was near finished. I'd had
verified clean conversions across trunk and all branches, except for
one screwed-up branch that the management agreed we could discard.

I had some minor issues left with execute-permission propagation and how
to interpret mid-branch deletes.  I solved the former and was working
on the latter.  I expected to converge on a final result well before
the end of the year, probably in August or September.

Then, as I reported here, my most recent test conversion produced
incorrect content on trunk.  That's very bad, because the sheer size
of the GCC repository makes bug forensics extremely slow. Just loading
the SVN dump file for examination in reposurgeon takes 4.5 hours; full
conversions are back up to 9 hours now.  The repository is growing
about as fast as my ability to find speed optimizations.

Then it got worse. I backed up to a commit that I remembered as
producing a clean conversion, and it didn't. This can only mean that
the reposurgeon changes I've been making to handle weird branch-copy
cases have been fighting each other.

For those of you late to the party, interpreting the operation
sequences in Subversion dump files is simple and produces results that
are easy to verify - except near branch copy operations. The way those
interact with each other and other operations is extremely murky.

There is *a* correct semantics defined by what the Subversion code
does.  But if any of the Subversion devs ever fully understood it,
they no longer do. The dump format was never documented by them. It is
only partly documented now because I reverse-engineered it.  But the
document I wrote has questions in it that the Subversion devs can't
answer.

It's not unusual for me to trip over a novel branch-copy-related
weirdness while converting a repo.  Normally the way I handle this is
by performing a bisection procedure to pin down the bad commit.  Then I:

(1) Truncate the dump to the shortest leading segment that
reproduces the problem.

(2) Perform a strip operation that replaces all content blobs with
unique small cookies that identify their source commit. Verify that it still
reproduces...

(3) Perform a topological reduce that drops out all uninteresting
commits, that is pure content changes not adjacent to any branch
copies or property changes. Verify that it still reproduces...

(4) Manually remove irrelevant branches with reposurgeon.
Verify that it still reproduces...

At this point I normally have a fairly small test repository (never,
previously, more than 200 or so commits) that reproduces
the issue. I watch conversions at increasing debug levels until I
figure out what is going on. Then I fix it and the reduced dump
becomes a new regression test.

In this way I make monotonic progress towards a dumpfile analyzer
that ever more closely emulates what the Subversion code is doing.
It's not anything like easy, and gets less so as the edge cases I'm
probing get more recondite.  But until now it worked.

The size of the GCC repository defeats this strategy. By back of the
envelope calculation, a single full bisection would take a minimum of
18 days.  Realistically it would probably be closer to a month.

That means that, under present assumptions, it's game over
and we've lost.  The GCC repo is just too large and weird.

My tools need to get a lot faster, like more than an order of
magnitude faster, before digging out of the bad situation the
conversion is now in will be practical.

Hardware improvements won't do that.  Nobody knows how to build a
machine that can crank a single process enough faster than 1.3GHz.
And the problem doesn't parallelize.

There is a software change that might do it.  I have been thinking
about translating reposurgeon from Python to Go. Preliminary
experiments with a Go version of repocutter show that it has a
40x speed advantage over the Python version.  I don't think I'll
get quite that much speedup on reposurgeon, but I'm pretty
optimistic about getting enough speedup to make debugging the GCC
conversion tractable.  Even at half that, 9 hour test runs would
collapse to 13 minutes.

The problem with this plan is that a full move to Go will be very
difficult.  *Very* difficult.  As in, work time in an unknown and
possibly large number of months.

GCC management will have to make a decision about how patient
it is willing to be.  I am at this point not sure it wouldn't be
better to convert your existing tree state and go from there, keeping
the Subversion history around for archival purposes.
-- 
Eric S. Raymond <http://www.catb.org/~esr/>

..every Man has a Property in his own Person. This no Body has any
Right to but himself.  The Labour of his Body, and the Work of his
Hands, we may say, are properly his.  The great and chief end
therefore, of Mens uniting into Commonwealths, and putting themselves
under Government, is the P