Re: Feature request for "friendship" of pointers in "C"

2020-03-18 Thread Andreas Schwab
On Mär 17 2020, Holger Lamm wrote:

> ANSI C 6.5.8 (5) confirms that "... pointers to structure members
> declared later compare greater than pointers to members declared
> earlier in the structure"; I found no definition of the address of a
> structure vs. the address of a structure member, but there would be no
> reason to have padding *before* the first element.

See 6.7.2.1#15.
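
For reference, 6.7.2.1p15 guarantees that a pointer to a structure object,
suitably converted, points to its initial member and vice versa, and that
there is no unnamed padding at the beginning of a structure.  A minimal
sketch of what that allows (hypothetical names, not taken from the
original mail):

struct base { int x; };
struct A { struct base obj; int more; } aObj;

void
demo (void)
{
  struct base *pb = (struct base *) &aObj;  /* points to aObj.obj           */
  struct A *pa = (struct A *) pb;           /* round-trips to the container */
  pa->more = pb->x;                         /* both views name the same object */
}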

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."


Re: Feature request for "friendship" of pointers in "C"

2020-03-18 Thread Florian Weimer
* aotto:

> Hi, the following scenario has a "definition hole" in the "C" language
>
> code example:
>
> -
> struct base {
>    ...
> };
>
> struct A {
>    struct base obj;
>    ...
> } aObj;
>
> struct B {
>    struct base obj;
>    ...
> } bObj;
>
> void method_base (struct base * hdl, ...);
>
> method_base(&aObj, ...)
> method_base(&bObj, ...)
> 
>
> - a POINTER to "A" is also a valid POINTER to "base"
>
> - a POINTER to "B" is also a valid POINTER to "base"

This is close to one of the extensions enabled by -fplan9-extensions.
It accepts such code if you make the base member anonymous:

struct A {
  struct base;
  ...
} aObj;

struct B {
  struct base;
  ...
} bObj;
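
A minimal sketch of how this looks in practice (field names are made up,
not part of the original mail; build with gcc -fplan9-extensions):

struct base { int refcount; };

struct A {
  struct base;              /* anonymous member, named by its tag */
  int a_data;
} aObj;

void
method_base (struct base *hdl)
{
  hdl->refcount++;
}

int
main (void)
{
  method_base (&aObj);      /* &aObj converts to struct base * implicitly */
  return aObj.refcount;     /* fields of the anonymous member are visible */
}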


Re: Not usable email content encoding

2020-03-18 Thread Florian Weimer
* Frank Ch. Eigler via Gcc:

>> > Are you trying to copy from the raw message representation?
>> 
>> Everyone trying to work with a patch (instead of just the email) always
>> is working with the raw message.  Just  patch < mbox  or  git-am mbox
>> for example.
>> 
>> https://gcc.gnu.org/contribute.html says
>>   It is strongly discouraged to post patches as MIME parts of type
>>   application/whatever, disposition attachment or encoded as base64 or
>>   quoted-printable.
>> 
>> (which many people still do not follow, making reviewing their patches
>> much harder than needed).
>
> The key here is to realize that the raw message is not what you get
> back from the mailing list reflector, and also not the raw message
> that was sent by the sender.  In this day of mta intermediaries,
> proxies, reflectors, it may be time to revisit that suggestion.

But these largely are new problems.  It used to work flawlessly.
Patch reencoding problems go back to the redhat.com changes last
November (I understand the responsible vendor is working on a fix, but
I'm not up-to-date on the current developments).

Since the sourceware.org Mailman migration, the From: header is being
rewritten, without any compelling reason.  I certainly do not do any
DMARC checking here, so the rewriting does not benefit me.


Re: Not usable email content encoding

2020-03-18 Thread Frank Ch. Eigler via Gcc
Hi -

> > The key here is to realize that the raw message is not what you get
> > back from the mailing list reflector, and also not the raw message
> > that was sent by the sender.  In this day of mta intermediaries,
> > proxies, reflectors, it may be time to revisit that suggestion.
> 
> But these largely are new problems.  It used to work flawlessly.

I understand that's frustrating.  But these workflows were counting on
literally unspecified behaviours not changing, or outright standards
violations continuing.

> Patch reencoding problems go back to the redhat.com changes last
> November (I understand the responsible vendor is working on a fix,
> but I'm not up-to-date on the current developments).

This one is a standards-compliant reencoding.  Even if mimecast (?)
stops doing it, we can't be sure nothing else will.

> Since the sourceware.org Mailman migration, the From: header is being
> rewritten, without any compelling reason.  I certainly do not do any
> DMARC checking here, so the rewriting does not benefit me.

It benefits you because more and more email services are rejecting or
interfering with mail that is not clean enough.  If you want to
receive mail reliably, or send and have confidence that it is
received, clean mail benefits you.


- FChE



RE: How to extend SLP to support this case

2020-03-18 Thread Tamar Christina
Thanks Richard!

This is very helpful to see where you’re going with the changes!

Cheers,
Tamar

From: Richard Biener
Sent: Friday, March 13, 2020 11:57 AM
To: Tamar Christina
Cc: Kewen.Lin; GCC Development; Segher Boessenkool
Subject: Re: How to extend SLP to support this case

On Tue, Mar 10, 2020 at 12:32 PM Tamar Christina
<tamar.christ...@arm.com> wrote:

> -Original Message-
> From: Gcc <gcc-boun...@gcc.gnu.org> On Behalf Of Richard Biener
> Sent: Tuesday, March 10, 2020 11:12 AM
> To: Kewen.Lin <li...@linux.ibm.com>
> Cc: GCC Development <gcc@gcc.gnu.org>; Segher Boessenkool
> <seg...@kernel.crashing.org>
> Subject: Re: How to extend SLP to support this case
>
> On Tue, Mar 10, 2020 at 7:52 AM Kewen.Lin <li...@linux.ibm.com> wrote:
> >
> > Hi all,
> >
> > I'm investigating whether GCC can vectorize the below case on ppc64le.
> >
> >   extern void test(unsigned int t[4][4]);
> >
> >   void foo(unsigned char *p1, int i1, unsigned char *p2, int i2)
> >   {
> > unsigned int tmp[4][4];
> > unsigned int a0, a1, a2, a3;
> >
> > for (int i = 0; i < 4; i++, p1 += i1, p2 += i2) {
> >   a0 = (p1[0] - p2[0]) + ((p1[4] - p2[4]) << 16);
> >   a1 = (p1[1] - p2[1]) + ((p1[5] - p2[5]) << 16);
> >   a2 = (p1[2] - p2[2]) + ((p1[6] - p2[6]) << 16);
> >   a3 = (p1[3] - p2[3]) + ((p1[7] - p2[7]) << 16);
> >
> >   int t0 = a0 + a1;
> >   int t1 = a0 - a1;
> >   int t2 = a2 + a3;
> >   int t3 = a2 - a3;
> >
> >   tmp[i][0] = t0 + t2;
> >   tmp[i][2] = t0 - t2;
> >   tmp[i][1] = t1 + t3;
> >   tmp[i][3] = t1 - t3;
> > }
> > test(tmp);
> >   }
> >
> > With unlimited costs, I saw loop aware SLP can vectorize it but with
> > very inefficient codes.  It builds the SLP instance from store group
> > {tmp[i][0] tmp[i][1] tmp[i][2] tmp[i][3]}, builds nodes {a0, a0, a0,
> > a0}, {a1, a1, a1, a1}, {a2, a2, a2, a2}, {a3, a3, a3, a3} after
> > parsing operands for tmp* and t*.  It means it's unable to make the
> > isomorphic group for a0, a1, a2, a3, although they appears isomorphic
> > to merge.  Even if it can recognize over_widening pattern and do some
> > parallel for two a0 from two iterations, but it's still inefficient (high 
> > cost).
> >
> > In this context, it looks better to build {a0, a1, a2, a3} first by
> > leveraging isomorphic computation trees constructing them, eg:
> >   w1_0123 = load_word(p1)
> >   V1_0123 = construct_vec(w1_0123)
> >   w1_4567 = load_word(p1 + 4)
> >   V1_4567 = construct_vec(w1_4567)
> >   w2_0123 = load_word(p2)
> >   V2_0123 = construct_vec(w2_0123)
> >   w2_4567 = load_word(p2 + 4)
> >   V2_4567 = construct_vec(w2_4567)
> >   V_a0123 = (V1_0123 - V2_0123) + (V1_4567 - V2_4567)<<16
> >
> > But how to teach it to be aware of this? Currently the processing
> > starts from bottom to up (from stores), can we do some analysis on the
> > SLP instance, detect some pattern and update the whole instance?
>
> In theory yes (Tamar had something like that for AARCH64 complex rotations
> IIRC).  And yes, the issue boils down to how we handle SLP discovery.  I'd 
> like
> to improve SLP discovery but it's on my list only after I managed to get rid 
> of
> the non-SLP code paths.  I have played with some ideas (even produced
> hackish patches) to find "seeds" to form SLP groups from using multi-level
> hashing of stmts.

I still have this but missed the stage-1 deadline after doing the rewriting to 
C++ 😊

We've also been looking at this and the approach I'm investigating now is
trying to get the SLP codepath to handle this after it's been fully
unrolled.  I'm looking into whether the build-slp can be improved to work
for the group size == 16 case that it tries but fails on.

My intention is to see if doing so would make it simpler to recognize this
as just 4 linear loads and two permutes.  I think the loop aware SLP will
have a much harder time with this seeing the load permutations it thinks it
needs because of the permutes caused by the +/- pattern.

One idea I had before, from your comment on the complex number patch, is to
try and move up TWO_OPERATORS and undo the permute always when doing +/-.
This would simplify the load permute handling, and if a target doesn't have
an instruction to support this it would just fall back to doing an explicit
permute after the loads.  But I wasn't sure this approach would get me the
results I wanted.

In the end you don't want a loop here at all.  And in order to do the above
with TWO_OPERATORS I would have to let the SLP pattern matcher be able to
reduce the group size and increase the number of iterations during the
matching, otherwise the matching itself becomes quite difficult in certain
cases.

Just to show where I'm heading, I'm attaching current work-in-progress that
introduces an explicit SLP merge node and implements SLP_TREE_TWO_OPERATORS
that way:

     v1 + v2   v1 - v2
         \       /
          merge

the SLP merge op
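
As a side note, the vector construction sketched in the quoted mail above
(w1_0123/V_a0123 and friends) corresponds roughly to the following, written
with GNU C vector extensions; the helper and function names here are made
up for illustration and are not from any patch:

typedef unsigned int v4si __attribute__ ((vector_size (16)));

/* Zero-extend four consecutive bytes into a vector of unsigned int.  */
static inline v4si
widen4 (const unsigned char *p)
{
  return (v4si) { p[0], p[1], p[2], p[3] };
}

/* One iteration of the original loop, vectorized across a0..a3
   (the V_a0123 value in the quoted sketch).  */
v4si
row_transform (const unsigned char *p1, const unsigned char *p2)
{
  return (widen4 (p1) - widen4 (p2))
         + ((widen4 (p1 + 4) - widen4 (p2 + 4)) << 16);
}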

Re: [GSoC 2020] Automatic Detection of Parallel Compilation Viability

2020-03-18 Thread Richard Biener
On Tue, 17 Mar 2020, Giuliano Belinassi wrote:

> Hi, Richi
> 
> Thank you for your review!
> 
> On 03/16, Richard Biener wrote:
> > On Fri, 13 Mar 2020, Giuliano Belinassi wrote:
> > 
> > > Hi, all
> > > 
> > > I want to propose and apply for the following GSoC project: Automatic
> > > Detection of Parallel Compilation Viability.
> > > 
> > > Here is the proposal, and I am attaching a pdf file for better
> > > readability:
> > > 
> > > **Automatic Detection of Parallel Compilation Viability**
> > > 
> > > Giuliano Belinassi\
> > > Timezone: GMT-3:00\
> > > University of São Paulo -- Brazil\
> > > IRC: giulianob in \#gcc\
> > > Email: [`giuliano.belina...@usp.br`](mailto:giuliano.belina...@usp.br)\
> > > Github: \
> > > 
> > > About Me: Computer Science Bachelor (University of São Paulo), currently
> > > pursuing a Masters Degree in Computer Science at the same institution.
> > > I've always been fascinated by topics such as High-Performance Computing
> > > and Code Optimization, having worked with a parallel implementation of a
> > > Boundary Elements Method software in GPU. I am currently conducting
> > > research on compiler parallelization and developing the
> > > [ParallelGcc](https://gcc.gnu.org/wiki/ParallelGcc) project, having
> > > already presented it in [GNU Cauldron
> > > 2019](https://www.youtube.com/watch?v=jd6R3IK__1Q).
> > > 
> > > **Skills**: Strong knowledge in C, Concurrency, Shared Memory
> > > Parallelism, Multithreaded Debugging and other typical programming
> > > tools.
> > > 
> > > Brief Introduction
> > > 
> > > In [ParallelGcc](https://gcc.gnu.org/wiki/ParallelGcc), we showed that
> > > parallelizing the Intra Procedural optimizations improves speed when
> > > compiling huge files by a factor of 1.8x on a 4-core machine, and also
> > > showed that these optimizations take 75% of compilation time.
> > > 
> > > In this project we plan to use the LTO infrastructure to improve
> > > compilation performance in the non-LTO case, with a tradeoff of
> > > generating a binary as good as if LTO is disabled. Here, we will
> > > automatically detect when a single file will benefit from parallelism,
> > > and proceed with the compilation in parallel if so.
> > > 
> > > Use of LTO
> > > 
> > > The Link Time Optimization (LTO) is a compilation technique that allows
> > > the compiler to analyse the program as a whole, instead of analysing and
> > > compiling one file at a time. Therefore, LTO is able to collect more
> > > information about the program and generate a better optimization plan.
> > > LTO is divided in three parts:
> > > 
> > > -   *LGEN (Local Generation)*: Each file is translated to GIMPLE. This
> > > stage runs sequentially in each file and, therefore, in parallel in
> > > the project compilation.
> > > 
> > > -   *WPA (Whole Program Analysis)*: Run the Inter Procedural Analysis
> > > (IPA) in the entire program. This stage runs serially in the
> > > project.
> > > 
> > > -   *LTRANS (Local Transformation)*: Execute all Intra Procedural
> > > Optimizations in each partition. This stage runs in parallel.
> > > 
> > > Since WPA can bottleneck the compilation because it runs serially in the
> > > entire project, LTO was designed to produce faster binaries, not to
> > > produce binaries fast.
> > > 
> > > Here, the proposed use of LTO to address this problem is to run the IPA
> > > for each Translation Unit (TU), instead of in the Whole Program, and
> > > automatically detect when to partition the TU into multiple LTRANS to
> > > improve performance. The advantage of this approach is:
> > 
> > "to improve compilation performance"
> > 
> > > -   It can generate binaries as good as when LTO is disabled.
> > > 
> > > -   It is faster, as we can partition big files into multiple partitions
> > > and compile these partitions in parallel
> > > 
> > > -   It can interact with GNU Make Jobserver, improving CPU utilization.
> > 
> > The previous already improves CPU utilization, I guess GNU make jobserver
> > integration avoids CPU overcommit.
> > 
> > > Planned Tasks
> > > 
> > > I plan to use the GSoC time to develop the following topics:
> > > 
> > > -   Week \[1, 3\] -- April 27 to May 15:\
> > > Update `cc1`, `cc1plus`, `f771`, ..., to partition the data after
> > > IPA analysis directly into multiple LTRANS partitions, instead of
> > > generating a temporary GIMPLE file.
> > 
> > To summarize in my own words:
> > 
> >   After IPA analysis partition the CU into possibly multiple LTRANS 
> >   partitions even for non-LTO compilations. Invoke LTRANS compilation
> >   for partitions 2..n without writing intermediate IL through mechanisms
> >   like forking.
> > 
> > I might say that you could run into "issues" here with asm_out_file
> > already opened and partially written to.  Possibly easier (but harder
> > on the driver side) would be to stream LTO LTRANS IL for partitions
> > 2..n and handle those lik

Effects of adding a *description member to structs?

2020-03-18 Thread Raj J Putari via Gcc
That way we can have a clean subsystem of commands for easy processing.


Re: Not usable email content encoding

2020-03-18 Thread Florian Weimer
* Frank Ch. Eigler:

> Hi -
>
>> > The key here is to realize that the raw message is not what you get
>> > back from the mailing list reflector, and also not the raw message
>> > that was sent by the sender.  In this day of mta intermediaries,
>> > proxies, reflectors, it may be time to revisit that suggestion.
>> 
>> But these largely are new problems.  It used to work flawlessly.
>
> I understand that's frustrating.  But these workflows were counting on
> literally unspecified behaviours not changing, or outright standards
> violations continuing.

Delivery of each individual message is unspecified as well, but we
still count on it happening.  I'm sorry, but I think this argument sounds
a bit vacuous.  With our own infrastructure, we should be able to get
it to behave in the way we need.

>> Patch reencoding problems go back to the redhat.com changes last
>> November (I understand the responsible vendor is working on a fix,
>> but I'm not up-to-date on the current developments).
>
> This one is a standards-compliant reencoding.  Even if mimecast (?)
> stops doing it, we can't be sure nothing else will.

But it's rather unusual in the RFC 822 world, especially with the
decline of sendmail and its peculiar 8BITMIME handling.  I understand
that you get a lot of this in the corporate mail world originally
influenced by X.400, but such messages are rarely handled by
sourceware these days, probably because people rather use Gmail than
wrestle with their inadequate corporate email.

>> Since the sourceware.org Mailman migration, the From: header is being
>> rewritten, without any compelling reason.  I certainly do not do any
>> DMARC checking here, so the rewriting does not benefit me.
>
> It benefits you because more and more email services are rejecting or
> interfering with mail that is not clean enough.  If you want to
> receive mail reliably, or send and have confidence that it is
> received, clean mail benefits you.

It's definitely not cleaner after Mailman has applied its destructive
header rewriting.  It just replaces one address spoofing with another.

What are the plans for Mailman on sourceware?  Will it be replaced
soon with something else, given that the software stack it runs on is
effectively EOL?

It should not be too hard to add a configuration option where
subscribers can opt out of header rewriting, but with Mailman's
upstream status, I'm not sure if that's a worthwhile effort.


Re: Not usable email content encoding

2020-03-18 Thread Michael Matz
Hi,

On Wed, 18 Mar 2020, Frank Ch. Eigler via Gcc wrote:

> > > The key here is to realize that the raw message is not what you get
> > > back from the mailing list reflector, and also not the raw message
> > > that was sent by the sender.  In this day of mta intermediaries,
> > > proxies, reflectors, it may be time to revisit that suggestion.
> > 
> > But these largely are new problems.  It used to work flawlessly.
> 
> I understand that's frustrating.  But these workflows were counting on
> literally unspecified behaviours not changing, or outright standards
> violations continuing.

Wut?  How is "not mangle the mail body" in any way violating standards?  
You're talking about rewriting or adding headers (where the former is Real 
Bad, no matter what DMARC wants to impose), but the suggestion is based on 
not rewriting the body.  If the body (including attachments) is rewritten 
in any way then that simply is a bug.

> > Patch reencoding problems go back to the redhat.com changes last
> > November (I understand the responsible vendor is working on a fix,
> > but I'm not up-to-date on the current developments).
> 
> This one is a standards-compliant reencoding.  Even if mimecast (?)
> stops doing it, we can't be sure nothing else will.
> 
> > Since the sourceware.org Mailman migration, the From: header is being
> > rewritten, without any compelling reason.  I certainly do not do any
> > DMARC checking here, so the rewriting does not benefit me.
> 
> It benefits you because more and more email services are rejecting or
> interfering with mail that is not clean enough.  If you want to
> receive mail reliably, or send and have confidence that it is
> received, clean mail benefits you.

Depends on your definition of "clean".  If by that you mean rewriting mail 
bodies then I'm not sure what to say.


Ciao,
Michael.


Re: Not usable email content encoding

2020-03-18 Thread Frank Ch. Eigler via Gcc
Hi -

> [...]  You're talking about rewriting or adding headers (where the
> former is Real Bad, no matter what DMARC wants to impose), but the
> suggestion is based on not rewriting the body.  If the body
> (including attachments) is rewritten in any way then that simply is a
> bug. [...]

We're mixing two things.

The From: header rewriting for DMARC participants is something sourceware
is doing now.

The Content-Transfer-Encoding: change is done by intermediate MTAs
whose identity is unknown.  (I don't believe this behaviour is
forbidden by RFCs, but even if it were, we may have no way of fixing
the mystery MTA.)


- FChE


Re: [GSoC 2020] Automatic Detection of Parallel Compilation Viability

2020-03-18 Thread Richard Biener
On Tue, 17 Mar 2020, Giuliano Belinassi wrote:

> Hi, all
> 
> I have applied some of the review suggestions to the proposal. Please see
> the new proposal here:

Looks good, some editorial changes below

> https://www.ime.usp.br/~belinass/Automatic_Detection_of_Parallel_Compilation_Viability.pdf
> 
> **Automatic Detection of Parallel Compilation Viability**
> 
> Giuliano Belinassi\
> Timezone: GMT-3:00\
> University of São Paulo -- Brazil\
> IRC: giulianob in \#gcc\
> Email: [`giuliano.belina...@usp.br`](mailto:giuliano.belina...@usp.br)\
> Github: \
> Date:
> 
> About Me: Computer Science Bachelor (University of São Paulo), currently
> pursuing a Masters Degree in Computer Science at the same institution.
> I've always been fascinated by topics such as High-Performance Computing
> and Code Optimization, having worked with a parallel implementation of a
> Boundary Elements Method software in GPU. I am currently conducting
> research on compiler parallelization and developing the
> [ParallelGcc](https://gcc.gnu.org/wiki/ParallelGcc) project, having
> already presented it in [GNU Cauldron
> 2019](https://www.youtube.com/watch?v=jd6R3IK__1Q).
> 
> **Skills**: Strong knowledge in C, Concurrency, Shared Memory
> Parallelism, Multithreaded Debugging and other typical programming
> tools.
> 
> Brief Introduction
> 
> In [ParallelGcc](https://gcc.gnu.org/wiki/ParallelGcc), we showed that
> parallelizing the Intra Procedural optimizations improves speed when
> compiling huge files by a factor of 1.8x on a 4-core machine, and also
> showed that these optimizations take 75% of compilation time.
> 
> In this project we plan to use the LTO infrastructure to improve
> compilation performance in the non-LTO case, with a tradeoff of
> generating a binary as good as if LTO is disabled. Here, we will
> automatically detect when a single file will benefit from parallelism,
> and proceed with the compilation in parallel if so.
> 
> Use of LTO
> 
> The Link Time Optimization (LTO) is a compilation technique that allows
> the compiler to analyse the program as a whole, instead of analysing and
> compiling one file at a time. Therefore, LTO is able to collect more
> information about the program and generate a better optimization plan.
> LTO is divided in three parts:
> 
> -   *LGEN (Local Generation)*: Each file is translated to GIMPLE. This
> stage runs sequentially in each file and, therefore, in parallel in
> the project compilation.
> 
> -   *WPA (Whole Program Analysis)*: Run the Inter Procedural Analysis
> (IPA) in the entire program. This stage runs serially in the
> project.
> 
> -   *LTRANS (Local Transformation)*: Execute all Intra Procedural
> Optimizations in each partition. This stage runs in parallel.
> 
> Since WPA can bottleneck the compilation because it runs serially in the
> entire project, LTO was designed to produce faster binaries, not to
> produce binaries fast.
> 
> Here, the proposed use of LTO to address this problem is to run the IPA
> for each Translation Unit (TU), instead in the Whole Program, and

This proposal is to use LTO to produce binaries fast by running
the IPA phase separately for each Translation Unit (TU), instead of on the 
Whole Program and ...

> automatically detect when to partition the TU into multiple LTRANS to
> improve compilation performance. The advantage of this approach is:
> 
> -   It can generate binaries as good as when LTO is disabled.
> 
> -   It is faster, as we can partition big files into multiple partitions
> and compile these partitions in parallel.
> 
> -   It can interact with GNU Make Jobserver, improving CPU utilization.

This reads a bit odd, regular compilation already interacts with the
GNU Make Jobserver.  I'd reorder and reword it w/o dashes like

We can partition big files into multiple partitions and compile these 
partitions in parallel which should improve CPU utilization by exposing
smaller chunks to the GNU Make Jobserver.  Code generation quality
should be unaffected by this.

Thanks,
Richard.

> Planned Tasks
> 
> I plan to use the GSoC time to develop the following topics:
> 
> -   Week \[1, 3\] -- April 27 to May 15:\
> Update `cc1`, `cc1plus`, `f771`, ..., to partition the Compilation
> Unit (CU) after IPA analysis directly into multiple LTRANS
> partitions, instead of generating a temporary GIMPLE file, and to
> accept an additional parameter `-fsplit-outputs=`, to which
> the generated ASM filenames will be written.
> 
> There are two possible cases in which I could work on:
> 
> 1.  *Fork*: After the CU is partitioned into multiple LTRANS, then
> `cc1` will fork and compile these partitions, each of them
> generating an ASM file and writing the generated asm name into
> ``. Note that if the number of partitions is one, then
> this part is not necessary.
> 
> 2.  *Stream LTRANS IR*: After CU is partitioned into multiple

Re: Not usable email content encoding

2020-03-18 Thread Bernd Schmidt

On 3/18/20 3:22 PM, Frank Ch. Eigler via Gcc wrote:

The From: header rewriting for DMARC participants is something sourceware
is doing now.


Out of curiosity, is this rewriting you are talking about the cause for 
a lot of mails showing up as "From: GCC List" rather than their real 
senders? This has become very annoying recently.



Bernd


Re: Not usable email content encoding

2020-03-18 Thread Frank Ch. Eigler via Gcc
Hi -

> > The From: header rewriting for DMARC participants is something sourceware
> > is doing now.
> 
> Out of curiosity, is this rewriting you are talking about the cause for a
> lot of mails showing up as "From: GCC List" rather than their real senders?
> This has become very annoying recently.

Yes, for emails from domains with declared interest in email
cleanliness, via DMARC records in DNS.  We have observed mail
-blocked- at third parties, even just days ago, when we failed to
sufficiently authenticate outgoing reflected emails.

AIUI, all this effort is driven by wanting to defeat not just spammers
but also real security problems like phishing enabled by forgery,
including specifically the From: header.

- FChE


Re: Not usable email content encoding

2020-03-18 Thread Michael Matz
Hello,

On Wed, 18 Mar 2020, Frank Ch. Eigler via Gcc wrote:

> > > The From: header rewriting for DMARC participants is something sourceware
> > > is doing now.
> > 
> > Out of curiosity, is this rewriting you are talking about the cause for a
> > lot of mails showing up as "From: GCC List" rather than their real senders?
> > This has become very annoying recently.
> 
> Yes, for emails from domains with declared interest in email
> cleanliness, via DMARC records in DNS.  We have observed mail
> -blocked- at third parties, even just days ago, when we failed to
> sufficiently authenticate outgoing reflected emails.

Was this blocking also a problem before mailman (i.e. two weeks ago)?
Why did nobody scream about not having received mail?  Or why is it blocked
now, but wasn't before?  Can it be made so again, like it was with ezmlm?

(And DMARC's requirement of having to rewrite From: headers should make it
clear to everyone that it's stupid.)


Ciao,
Michael.


access to Subversion links forbidden?

2020-03-18 Thread Martin Sebor via Gcc

I've been getting Error 403 (Forbidden - You don't have permission
to access /viewcvs on this server) following the Subversion links
in Bugzilla for some time now (they worked for me before the switch
to Git, but I'm not sure if they also did before the recent hardware 
upgrade).


For example:
  https://gcc.gnu.org/viewcvs?rev=268827&root=gcc&view=rev
  https://gcc.gnu.org/viewcvs?rev=267096&root=gcc&view=rev
  https://gcc.gnu.org/viewcvs?rev=244881&root=gcc&view=rev

Are they supposed to work and if so, is anyone else having trouble with
them and is it a known problem that's already being worked on?

Thanks
Martin


Re: access to Subversion links forbidden?

2020-03-18 Thread Nicholas Krause via Gcc




On 3/18/20 3:49 PM, Martin Sebor via Gcc wrote:

I've been getting Error 403 (Forbidden - You don't have permission
to access /viewcvs on this server) following the Subversion links
in Bugzilla for some time now (they worked for me before the switch
to Git, but I'm not sure if they also did before the recent hardware 
upgrade).


For example:
  https://gcc.gnu.org/viewcvs?rev=268827&root=gcc&view=rev
  https://gcc.gnu.org/viewcvs?rev=267096&root=gcc&view=rev
  https://gcc.gnu.org/viewcvs?rev=244881&root=gcc&view=rev

Are they supposed to work and if so, is anyone else having trouble with
them and is it a known problem that's already being worked on?

Thanks
Martin


Martin,
I've been having trouble with them as well for the last day or so. If I
recall correctly they worked fine a few weeks ago post Git, so I'm assuming
it's a current issue.

Nick


Re: access to Subversion links forbidden?

2020-03-18 Thread Tobias Burnus

That's a known to-do item – see "cvsweb/svn" under
https://sourceware.org/sourceware-wiki/MigrationWorkItems/?updated

Tobias

On 3/18/20 9:07 PM, Nicholas Krause via Gcc wrote:



On 3/18/20 3:49 PM, Martin Sebor via Gcc wrote:

I've been getting Error 403 (Forbidden - You don't have permission
to access /viewcvs on this server) following the Subversion links
in Bugzilla for some time now (they worked for me before the switch
to Git, but I'm not sure if they also did before the recent hardware
upgrade).

For example:
https://gcc.gnu.org/viewcvs?rev=268827&root=gcc&view=rev
https://gcc.gnu.org/viewcvs?rev=267096&root=gcc&view=rev
https://gcc.gnu.org/viewcvs?rev=244881&root=gcc&view=rev

Are they supposed to work and if so, is anyone else having trouble with
them and is it a known problem that's already being worked on?

Thanks
Martin


Martin,
I've been having trouble with them as well for the last day or so. If
I recall correctly they worked fine a few weeks ago post Git, so I'm
assuming it's a current issue.

Nick

-
Mentor Graphics (Deutschland) GmbH, Arnulfstraße 201, 80634 München / Germany
Registergericht München HRB 106955, Geschäftsführer: Thomas Heurung, Alexander 
Walter


Re: Not usable email content encoding

2020-03-18 Thread Jim Wilson
I'm one of the old timers that likes our current work flow, but even I
think that we are risking our future by staying with antiquated tools.
One of the first things I need to teach new people is how to use email
"properly".  It is a barrier to entry for new contributors, since our
requirements aren't how the rest of the world uses email anymore.
LLVM has phabricator.  Some git based projects are using gerrit.
Github and gitlab are useful services.  We need to think about setting
up easier ways for people to submit patches, rather than trying to fix
all of the MUAs and MTAs in the world.

Jim


Re: Not usable email content encoding

2020-03-18 Thread Jonathan Wakely via Gcc
On Wed, 18 Mar 2020 at 21:54, Jim Wilson wrote:
>
> I'm one of the old timers that likes our current work flow, but even I
> think that we are risking our future by staying with antiquated tools.
> One of the first things I need to teach new people is how to use email
> "properly".  It is a barrier to entry for new contributors, since our
> requirements aren't how the rest of the world uses email anymore.
> LLVM has phabricator.

Which is horrible.

> Some git based projects are using gerrit.

Which I looked into previously and decided I didn't like it. If I
recall correctly, gerrit has to "own" the repo, and so it's only
possible to commit to the repo by pushing to gerrit first, then
getting the patch approved. That is fine for write-after-approval, but
adds a step for maintainers who can approve their own changes.

I think it also only very recently gained the ability to group a
series of patches together, as it wants a single commit per review.


> Github and gitlab are useful services.  We need to think about setting
> up easier ways for people to submit patches, rather than trying to fix
> all of the MUAs and MTAs in the world.

There's also https://pagure.io/pagure which is Free Software.

I think it would be great if one of those forge services could be set
up to run *in parallel* to our existing workflow, so that we can
accept pull requests there without forcing all work to go through it
(like gerrit). New contributors who are used to the GitHub model could
submit pull requests there, and maintainers could merge them into the
repo with the click of a button. For significant changes it's more
likely maintainers would pull the branch into their local repo, test
it, and then push manually. But if we had a CI service testing pull
requests (like Travis CI does for GitHub) and a pull request was shown
to introduce no regressions then it could be merged with a single
click.


Re: Not usable email content encoding

2020-03-18 Thread Frank Ch. Eigler via Gcc
Hi, Jim -

> [gerrit etc.]

Good points.

> [...]  We need to think about setting up easier ways for people to
> submit patches, rather than trying to fix all of the MUAs and MTAs
> in the world.

Another related point.  We are comingling email as a communication
medium AND a commit transport medium.  For the former, as in patch
review / RFC, one may not require a form of the patch that is finally
committable to master, so the exact From: etc. may not matter.  For
the latter, attachments are more bullet-proof.

- FChE


Re: Not usable email content encoding

2020-03-18 Thread Jonathan Wakely via Gcc
N.B. the CC list has got too big and is causing posts to this thread
to be held for moderator approval.


Re: Not usable email content encoding

2020-03-18 Thread Joseph Myers
On Wed, 18 Mar 2020, Jonathan Wakely via Gcc wrote:

> > Some git based projects are using gerrit.
> 
> Which I looked into previously and decided I didn't like it. If I
> recall correctly, gerrit has to "own" the repo, and so it's only

The glibc experiment with gerrit worked without it owning the repo.  
There were a few issues with email notifications that were addressed by 
local patches to gerrit, and a few other such issues with email 
interaction that didn't get addressed but looked like they could have been 
- but nothing for which addressing it seemed unacceptable upstream or that 
looked fatal to using gerrit without it owning the repo and (given some 
fixes) working reasonably well with email (similarly well to Bugzilla, say 
- sending email notifications to a list with sensible content, doing 
something sensible with email replies).

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Not usable email content encoding

2020-03-18 Thread Frank Ch. Eigler via Gcc
Hi -

> N.B. the CC list has got too big and is causing posts to this thread
> to be held for moderator approval.

Ah, can cycle through the lists and raise that limit.
The default 10 is too low.

- FChE


Re: Not usable email content encoding

2020-03-18 Thread Jonathan Wakely via Gcc
On Wed, 18 Mar 2020 at 22:45, Joseph Myers wrote:
>
> On Wed, 18 Mar 2020, Jonathan Wakely via Gcc wrote:
>
> > > Some git based projects are using gerrit.
> >
> > Which I looked into previously and decided I didn't like it. If I
> > recall correctly, gerrit has to "own" the repo, and so it's only
>
> The glibc experiment with gerrit worked without it owning the repo.

Ah, good to know, thanks.


Re: Not usable email content encoding

2020-03-18 Thread Segher Boessenkool
Hi!

On Wed, Mar 18, 2020 at 06:33:08PM -0400, Frank Ch. Eigler wrote:
> > [...]  We need to think about setting up easier ways for people to
> > submit patches, rather than trying to fix all of the MUAs and MTAs
> > in the world.
> 
> Another related point.  We are comingling email as a communication
> medium AND a commit transport medium.  For the former, as in patch
> review / RFC, one may not require a form of the patch that is finally
> committable to master, so the exact From: etc. may not matter.

But OTOH, it is extremely valuable to review the commit message at the
same time as the patch.  Which we now *can*, for contributors who follow
a more "git-like" workflow.

> For the latter, attachments are more bullet-proof.

Disregarding binary attachments, which are unworkable with many tools
(and which are disallowed on gcc-patches for that reason), is this
really true?  Do some MUAs (or MTAs) mess up only the first body part
they encounter?  Or what?


Segher


Re: Not usable email content encoding

2020-03-18 Thread Segher Boessenkool
Hi!

On Wed, Mar 18, 2020 at 02:52:14PM -0700, Jim Wilson wrote:
> I'm one of the old timers that likes our current work flow, but even I
> think that we are risking our future by staying with antiquated tools.

It's not ancient tools, it is low-requirement generic tools, and
everyone can use that to build a workflow that works for him/herself.

> One of the first things I need to teach new people is now to use email
> "properly".  It is a barrier to entry for new contributors, since our
> requirements aren't how the rest of the world uses email anymore.

Knowing how to use email properly is a very useful life skill.

A large part of the Free Software (and open source) world still uses
email as primary communication medium, too.

Also you might want to read  https://lwn.net/Articles/702177/  for a
different viewpoint.

> LLVM has phabricator.  Some git based projects are using gerrit.
> Github and gitlab are useful services.  We need to think about setting
> up easier ways for people to submit patches, rather than trying to fix
> all of the MUAs and MTAs in the world.

People should not just send patches: they need to explain why it would
be a good idea to include that patch, which requires a lot of talking
*about* the patch.  Email is a difficult medium for that, but it is
still _much_ better than any of the code review website things, imo.

The key point is that email is completely free-form, I think?


Segher


Possible Bug in make_more_copies

2020-03-18 Thread Nicholas Krause via Gcc

Greetings Segher,

I'm not sure if I'm misunderstanding something in the combine code, but in
make_more_copies in combine.c this seems very odd:
  if (!(REG_P (dest) && !HARD_REGISTER_P (dest)))
    continue;

  rtx src = SET_SRC (set);
  if (!(REG_P (src) && HARD_REGISTER_P (src)))
    continue;

Is there any good reason we are assuming the destination can't be a hard
register and the source can't be a regular one here?

If we're making pseudo-register copies, wouldn't it be:
  rtx dest = SET_DEST (set);
  if (REG_P (dest) && HARD_REGISTER_P (dest))
    continue;

I'm assuming you have a good reason for checking both hard and regular
registers in this function, but it looks really odd to me.

Nick


Re: Possible Bug in make_more_copies

2020-03-18 Thread Segher Boessenkool
Hi Nick,

On Wed, Mar 18, 2020 at 08:56:11PM -0400, Nicholas Krause wrote:
> I'm not sure if I'm misunderstanding something in the combine code, but in
> make_more_copies in combine.c this seems very odd:
>
>   if (!(REG_P (dest) && !HARD_REGISTER_P (dest)))
>     continue;
>
>   rtx src = SET_SRC (set);
>   if (!(REG_P (src) && HARD_REGISTER_P (src)))
>     continue;
>
> Is there any good reason we are assuming the destination can't be a hard
> register and the source can't be a regular one here?

The destination should be a pseudo-register, and the source should be
a hard register.  If those are true, *then* we make a copy to a new
intermediate pseudo.

We do not want combine to move the hard register into other instructions
(that is the register allocator's job, and it does a much better job of
it, combine just does the greedy first-fit solution).  But, it turns out
that combining a register move with another instruction often is
beneficial (in effect just replacing that other instruction with a
better one).  make_more_copies makes a fresh new register copy to keep
the status quo for that (other parts of combine now disallow combining
a move from a hard register into other insns).
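
To make that concrete with an invented example (illustrative RTL, not an
actual dump; register numbers are made up), a set like

  (set (reg:DI 123) (reg:DI 3))        ; pseudo 123 <- hard reg 3

is rewritten by make_more_copies into

  (set (reg:DI 200) (reg:DI 3))        ; fresh intermediate pseudo
  (set (reg:DI 123) (reg:DI 200))      ; original dest now copies a pseudo

so the hard register stays in exactly one simple copy insn, and combine is
free to forward the pseudo-to-pseudo copy into later instructions.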


Segher


Re: Not usable email content encoding

2020-03-18 Thread Christopher Faylor
On Wed, Mar 18, 2020 at 06:44:15PM -0400, Frank Ch. Eigler wrote:
>> N.B. the CC list has got too big and is causing posts to this thread
>> to be held for moderator approval.
>
>Ah, can cycle through the lists and raise that limit.
>The default 10 is too low.

Didn't you have to lower that limit for outpost, fche?

I believe it used to be 10 for server1 too, though, fwiw.

cgf



Re: Not usable email content encoding

2020-03-18 Thread Christopher Faylor
On Wed, Mar 18, 2020 at 11:30:22PM -0400, Christopher Faylor wrote:
>On Wed, Mar 18, 2020 at 06:44:15PM -0400, Frank Ch. Eigler wrote:
>>> N.B. the CC list has got too big and is causing posts to this thread
>>> to be held for moderator approval.
>>
>>Ah, can cycle through the lists and raise that limit.
>>The default 10 is too low.
>
>Didn't you have to lower that limit for outpost, fche?
>
>I believe it used to be 10 for server1 too, though, fwiw.

Another annoying thing about mailman is this double inclusion of the
same address in the Cc.

cgf