Re: [RFC] MIPS ABI Extension for IEEE Std 754 Non-Compliant Interlinking

2015-11-16 Thread Maciej W. Rozycki
On Sat, 14 Nov 2015, Cary Coutant wrote:

> > 3.3.2 Static Linking Object Acceptance Rules
> >
> >  The static linker shall follow the user selection as to the linking mode
> > used, either of `strict' and `relaxed'.  The selection will be made
> > according to the usual way assumed for the environment used, which may be
> > a command-line option, a property setting, etc.
> >
> >  In the `strict' linking mode both `strict' and `legacy' objects can be
> > linked together.  All shall follow the same legacy-NaN or 2008-NaN ABI, as
> > denoted by the EF_MIPS_NAN2008 flag described in Section 3.1.  The value
> > of the flag shall be the same across all the objects linked together.  The
> > output of a link involving any `strict' objects shall be marked as
> > `strict'.  No `relaxed' objects shall be allowed in the same link.
> >
> >  In the `relaxed' linking mode any `strict', `relaxed' and `legacy'
> > objects can be linked together, regardless of the value of their
> > EF_MIPS_NAN2008 flag.  If the flag has the same value across all objects
> > linked, then the value shall be propagated to the binary produced.  The
> > output shall be marked as `relaxed'.  It is recommended that the linker
> > provides a way to warn the user whenever a `relaxed' link is made of
> > `strict' and `legacy' objects only.
> 
> This paragraph first says that "If the flag has the same value across
> all objects linked, then the value shall be propagated to the binary
> produced", but then says the "output shall be marked as `relaxed'."
> Are you missing an "Otherwise" there?

 The EF_MIPS_NAN2008 flag (if it is the same across all the objects linked 
in the `relaxed' mode) shall be propagated to the binary produced and the 
output marked as `relaxed'.  If you think this is not clear from the 
wording used, then I can consider rewording the paragraph.  Please note 
however that nowhere in this document is the term `flag' used to refer to 
the `relaxed' vs `strict' vs `legacy' annotation, so I'm not really sure 
why this should be ambiguous.
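
 For illustration, here's a minimal sketch of the relaxed-mode merging 
restated above (the structures and names are hypothetical rather than 
actual linker code): the output of a relaxed link is always marked 
`relaxed', and EF_MIPS_NAN2008 is propagated only when it agrees across 
all the inputs.

  // Hypothetical sketch, not BFD/gold code: relaxed-mode flag merging.
  #include <vector>

  struct InputObject { bool nan2008; };     // EF_MIPS_NAN2008 of one input

  struct OutputMarking {
    bool relaxed = true;      // a relaxed link always yields `relaxed'
    bool nan2008 = false;     // meaningful only if `consistent' is true
    bool consistent = false;  // did all inputs agree on the flag?
  };

  OutputMarking merge_relaxed(const std::vector<InputObject> &inputs) {
    OutputMarking out;
    if (inputs.empty())
      return out;
    out.nan2008 = inputs.front().nan2008;
    out.consistent = true;
    for (const InputObject &in : inputs)
      if (in.nan2008 != out.nan2008)
        out.consistent = false;             // mixed flags: nothing propagated
    return out;
  }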

> Early on in the document, you mention "this applies regardless of
> whether it relies on the use of NaN data or IEEE Std 754 arithmetic in
> the first place," yet your solution is only two-state. Wouldn't it be
> better to have a three-state solution where objects that do not in
> fact rely on the NaN representation at all can be marked as "don't
> care"? Such objects could always be mixed with either strict or
> relaxed objects, regardless of linking mode.

 I find it interesting that you raise this point, as this was actually 
considered and deferred for investigation at a later stage.

 The reason is that we actually have a no-FP annotation already -- in the 
GNU attribute section (propagated these days to the `fp_abi' member of 
MIPS ABI flags by BFD) -- however the encoding is so unfortunate that in 
ELF binary objects it is impossible to tell apart objects explicitly 
annotated as containing no FP code (typically compiler-generated code, 
such as that produced by GCC invoked with the `-mno-float' command-line 
option) from objects with no annotation at all (either legacy 
compiler-generated code or, often, handcoded assembly sources).  This is 
because the no-FP annotation is also the default value (0) of the 
attribute in question and is therefore removed in processing by BFD 
rather than being recorded in the ELF output.
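
 To illustrate the ambiguity (with hypothetical names and values; the 
real handling lives in BFD's GNU attribute code): any attribute holding 
its default value is simply not emitted, so an explicit no-FP marking of 
0 and the absence of any marking become indistinguishable in the ELF 
output.

  // Hypothetical sketch of why the explicit no-FP annotation is lost.
  enum FpAnnotation {
    FP_DEFAULT_OR_NONE = 0,  // both "no annotation" and "-mno-float" map here
    FP_DOUBLE          = 1,
    FP_SINGLE          = 2,
    FP_SOFT            = 3
  };

  // Attributes holding their default value are dropped rather than
  // recorded, so a reader of the object cannot recover the distinction.
  bool attribute_recorded(FpAnnotation value) {
    return value != FP_DEFAULT_OR_NONE;
  }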

 It is of course possible to tell the corresponding sources apart, however 
it requires a rewrite of parts of attribute handling in BFD so that the 
original annotation is actually unambiguously propagated to an ELF object 
produced by the assembler.  Such a change is considerable enough on its 
own that we decided to make it separately; it can be added later at any 
time, as an extra case in addition to the two already handled here, which 
are required anyway.  This follows the principle of taking one step at a 
time.

 There is also the question of a no-FP executable pulling in an FP DSO, 
either at load time or with dlopen(3): how is such a process supposed 
to be configured -- as `strict' or `relaxed'?  No clear answer has been 
found to this question yet.

 Does this explanation address your concern?

  Maciej


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Toon Moene

On 11/16/2015 12:58 AM, Steve Kargl wrote:


On Mon, Nov 16, 2015 at 12:04:06AM +0100, Thomas Koenig wrote:



See

http://arstechnica.com/information-technology/2015/11/llvm-to-get-fortran-compiler-that-targets-parallel-gpus-in-clusters/

It is not entirely clear what they plan to do.

Use gfortran via dragonegg?



The 3 DOE labs in the USA have contracted PGI to port
(some of) their Fortran FE to LLVM and open source the
result.

http://lists.llvm.org/pipermail/llvm-dev/2015-November/092404.html


To put this in a (timeline) perspective:

On the 18th of March, 2000, I announced Andy Vaught's work on the g95 
front-end to the gcc-patches mailing list.


In 2004 (!) we merged the resulting compiler and run-time library into 
the gcc (cvs) repository (obviously, after the tree-ssa infrastructure 
went in - 2004-05-17, but before the creation of the 4.0 release branch 
- 2005-02-25). Then it took another 2 months for 4.0 to be released.


Unless PGI manages to summon massively large (parallel) working groups 
to accomplish this, it might take a few years to come to fruition.


--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: complex support when using -std=c++11

2015-11-16 Thread Jason Merrill

On 11/15/2015 04:09 PM, D Haley wrote:

Thanks for the prompt reply. I am not an expert here, so I probably
don't know the correct solution for gcc. We are using -std=c++11 to
maximise source compatibility for any users seeking to recompile our
code on whatever compiler/toolchain they have.


Note that _Complex isn't part of C++11, so you shouldn't be using it in 
code that's intended to be portable to any C++11 implementation.


But certainly the current G++ behavior can be improved.

Jason



Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Jack Howarth
On Mon, Nov 16, 2015 at 2:14 PM, Toon Moene  wrote:
> On 11/16/2015 12:58 AM, Steve Kargl wrote:
>
>> On Mon, Nov 16, 2015 at 12:04:06AM +0100, Thomas Koenig wrote:
>
>
>>> See
>>>
>>>
>>> http://arstechnica.com/information-technology/2015/11/llvm-to-get-fortran-compiler-that-targets-parallel-gpus-in-clusters/
>>>
>>> It is not entirely clear on what they plan to do.
>>>
>>> Use gfortran via dragonegg?
>
>
>> The 3 DOE labs in the USA have contracted PGI to port
>> (some of) there Fortran FE to LLVM and open source the
>> result.
>>
>> http://lists.llvm.org/pipermail/llvm-dev/2015-November/092404.html
>
>
> To put this in a (timeline) perspective:
>
> On the 18th of March, 2000, I announced Andy Vaught's work on the g95
> front-end to the gcc-patches mailing list.
>
> In 2004 (!) we merged the resulting compiler and run-time library into the
> gcc (cvs) repository (obviously, after the tree-ssa infrastructure went in -
> 2004-05-17, but before the creation of the 4.0 release branch - 2005-02-25).
> Then it took another 2 months for 4.0 to be released.
>
> Unless PGI manages to summon massively large (parallel) working groups to
> accomplish this, it might take a few years to fruition.
>

On the other hand, the llvm-dev posting implies that PGI will be
starting from an existing fortran front-end. If they only need to code
the middle-/back-end integration of llvm into a pre-existing mature
fortran front-end, the promised late 2016 release date might not be so
unlikely.

> --
> Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
> Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
> At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
> Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Toon Moene

On 11/16/2015 10:11 PM, Jack Howarth wrote:


On Mon, Nov 16, 2015 at 2:14 PM, Toon Moene  wrote:



To put this in a (timeline) perspective:

On the 18th of March, 2000, I announced Andy Vaught's work on the g95
front-end to the gcc-patches mailing list.

In 2004 (!) we merged the resulting compiler and run-time library into the
gcc (cvs) repository (obviously, after the tree-ssa infrastructure went in -
2004-05-17, but before the creation of the 4.0 release branch - 2005-02-25).
Then it took another 2 months for 4.0 to be released.

Unless PGI manages to summon massively large (parallel) working groups to
accomplish this, it might take a few years to fruition.



On the other hand, the llvm-dev posting implies that PGI will be
starting from an existing fortran front-end. If they only need to code
the middle-/back-end integration of llvm into a pre-existing mature
fortran front-end, the promised late 2016 release date might not be so
unlikely.


The g95 front-end I mentioned in my 2000-03-18 post to the gcc-patches 
mailing list was "an existing front-end" by virtue of the fact that Andy 
Vaught mailed it to me and it did the work.


Between 2000 and 2004, this front-end was coupled to the rest of the 
infrastructure of the GNU Compiler Collection. This was not trivial 
(just as it will not be trivial to couple the PGI front-end to the LLVM 
infrastructure).


We'll see how many years it'll take, but don't count on me holding my 
breath.


--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: basic asm and memory clobbers

2015-11-16 Thread Jeff Law

On 11/15/2015 06:23 PM, David Wohlferd wrote:

On 11/9/2015 1:32 AM, Segher Boessenkool wrote:

On Sun, Nov 08, 2015 at 04:10:01PM -0800, David Wohlferd wrote:

It seems like a doc update is what is needed to close PR24414 (Old-style
asms don't clobber memory).

What is needed to close the bug is to make the compiler work properly.


The question of course is, what does 'properly' mean?  My assertion is
that 10 years on, 'properly' means whatever it's doing now. Changing it
at this point will probably break more than it fixes, and (as you said)
there is a plausible work-around using extended asm.

So while this bug could be resolved as 'invalid' (since the compiler is
behaving 'properly'), I'm thinking to split the difference and 'fix' it
with a doc patch that describes the supported behavior.
I'd disagree.  A traditional asm has to be considered an opaque blob 
that can read, write or clobber any register or memory location.


It's also the case that assuming an old style asm can read or clobber 
any memory location is the safe, conservative thing to do.  So the right 
thing in my mind is to ensure that behaviour and document it.


Andrew's logic is just plain wrong in that BZ.





Whether that means clobbering memory or not, I don't much care -- with
the status quo, if you want your asm to clobber memory you have to use
extended asm; if basic asm is made to clobber memory, if you want your
asm to *not* clobber memory you have to use extended asm (which you
can with no operands by writing e.g.  asm("bork" : );  ).  So both
behaviours are available whether we make a change or not.
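
For illustration (using `nop' as a stand-in instruction so the snippet 
assembles), the spellings being contrasted are:

  void f(void) {
    asm("nop");                 // basic asm: no operands, clobbers unspecified
    asm("nop" : );              // extended asm, no operands: no memory clobber
    asm("nop" : : : "memory");  // extended asm declaring a memory clobber
  }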

But changing things now will likely break user code.
Having a traditional asm clobber memory should not break user code.  It 
may pessimize it slightly, but if it does, that code was already broken.



(dot space space).


+Basic @code{asm} statements are not treated as though they used a "memory"
+clobber, although they do implicitly perform a clobber of the flags
+(@pxref{Clobbers}).

They do not clobber the flags.  Observe:


Ouch.  i386 shows the same thing for basic asm.

Sadly, I suspect this isn't consistent across targets.

Jeff




Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Jack Howarth
On Mon, Nov 16, 2015 at 4:19 PM, Toon Moene  wrote:
> On 11/16/2015 10:11 PM, Jack Howarth wrote:
>
>> On Mon, Nov 16, 2015 at 2:14 PM, Toon Moene  wrote:
>
>
>>> To put this in a (timeline) perspective:
>>>
>>> On the 18th of March, 2000, I announced Andy Vaught's work on the g95
>>> front-end to the gcc-patches mailing list.
>>>
>>> In 2004 (!) we merged the resulting compiler and run-time library into
>>> the
>>> gcc (cvs) repository (obviously, after the tree-ssa infrastructure went
>>> in -
>>> 2004-05-17, but before the creation of the 4.0 release branch -
>>> 2005-02-25).
>>> Then it took another 2 months for 4.0 to be released.
>>>
>>> Unless PGI manages to summon massively large (parallel) working groups to
>>> accomplish this, it might take a few years to fruition.
>>>
>>
>> On the other hand, the llvm-dev posting implies that PGI will be
>> starting from an existing fortran front-end. If they only need to code
>> the middle-/back-end integration of llvm into a pre-existing mature
>> fortran front-end, the promised late 2016 release date might not be so
>> unlikely.
>
>
> The g95 front-end I mentioned in my 2000-03-18 post to the gcc-patches
> mailing list was "an existing front-end" by virtue of the fact that Andy
> Vaught mailed it to me and it did the work.
>
> Between 2000 and 2004, this front-end was coupled to the rest of the
> infrastructure of the GNU Compiler Collection. This was not trivial (just as
> it will not be trivial to couple the PGI front-end to the LLVM
> infrastructure).
>
> We'll see how many years it'll take, but don't count me in on holding my
> breath.
>

Of course one unknown is whether PGI had already done any work
internally with the llvm middle-/back-end. If so, they might not be
starting from scratch.

>
> --
> Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
> Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
> At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
> Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Toon Moene

On 11/16/2015 10:33 PM, Jack Howarth wrote:


Of course one unknown is whether PGI had already done any work
internally with the llvm middle-/back-end. If so, they might not be
starting from scratch.


Perhaps it helps if I repost the following from 12 years ago:

https://gcc.gnu.org/ml/fortran/2003-11/msg00052.html

Kind regards,

--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: inline asm and multi-alternative constraints

2015-11-16 Thread Jeff Law

On 11/11/2015 02:19 AM, David Wohlferd wrote:

On 11/9/2015 1:52 PM, Jeff Law wrote:

On 11/07/2015 12:50 AM, David Wohlferd wrote:


- Starting with 'modifiers', "=+&" and (reluctantly) "%" seem reasonable
for inline asm.  But both "#*" seem sketchy.

Right.  =+& are no-brainer yes, as are the constants 0-9.  % is
probably OK as well.

#* are similar to !? in that they are inherently tied into the
register class preferencing implementation and documenting them would
be inadvisable.


Actually, #* are already doc'ed in the user guide.  Are you advising
they be removed?
Yes.  Much like ?! they are pretty tied to implementation details, just 
not so badly :-)


It may seem like they've got clearer semantics, and they did at one 
time, but they don't anymore.
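
For illustration, here's a minimal sketch (x86 AT&T syntax; toy code, not 
from this thread) of the modifiers that do stay documented -- `=' for a 
write-only output, `+' for a read-write operand and `&' for an early 
clobber:

  unsigned long add2(unsigned long a, unsigned long b) {
    unsigned long sum;
    asm("mov %1, %0\n\t"        // %0 is written here, before %2 is read ...
        "add %2, %0"            // ... so it must be marked early-clobber
        : "=&r"(sum)            // `=' write-only output, `&' early clobber
        : "r"(a), "r"(b));
    return sum;
  }

  unsigned long incr(unsigned long x) {
    asm("add $1, %0"
        : "+r"(x));             // `+' read-write operand
    return x;
  }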





If so, the attached patch does this.  It also removes references to
define_peephole2 and define_splits from the user guide version of this
page.  There are other parts of this page that are more md than ug, but
these are the ones that annoyed me the most.

I'll install momentarily.

jeff



Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Jack Howarth
On Mon, Nov 16, 2015 at 4:35 PM, Toon Moene  wrote:
> On 11/16/2015 10:33 PM, Jack Howarth wrote:
>
>> Of course one unknown is whether PGI had already done any work
>> internally with the llvm middle-/back-end. If so, they might not be
>> starting from scratch.
>
>
> Perhaps it helps if I repost the following from 12 years ago:
>
> https://gcc.gnu.org/ml/fortran/2003-11/msg00052.html
>
> Kind regards,
>

FYI, this posting has a bit more detail on the actual implementation...

http://lists.llvm.org/pipermail/llvm-dev/2015-November/092438.html

>
> --
> Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
> Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
> At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
> Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Toon Moene

On 11/16/2015 11:02 PM, Jack Howarth wrote:


FYI, this posting has a bit more detail on the actual implementation...

http://lists.llvm.org/pipermail/llvm-dev/2015-November/092438.html


That surely helps - thanks.

--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Devirtualization causing undefined symbol references at link?

2015-11-16 Thread Steven Noonan
Hi folks,

(I'm not subscribed to the list, so please CC me on all responses.)

This is using GCC 5.2 on Linux x86_64. On a project at work I've found
that one of our shared libraries refuses to link because of some
symbol references it shouldn't be making. If I add "-fno-devirtualize
-fno-devirtualize-speculatively" to the compile flags, the issue goes
away and everything links/runs fine. The issue does *not* appear on
GCC 4.8 (which is used by our current production toolchain).

First of all, does anyone have any ideas off the top of their head why
devirtualization would break like this?

Second, I'm looking for any ideas on how to gather meaningful data to
submit a useful bug report for this issue. The best idea I've come up
with so far is to preprocess one of the sources with the incorrect
references and use 'delta' to reduce it to a minimal preprocessed
source file that references one of these incorrect symbols.
Unfortunately this is a sluggish process because such a minimal test
case would need to compile correctly to an object file -- so "delta"
is reducing it very slowly. So far I'm down from 11MB preprocessed
source to 1.1MB preprocessed source after running delta a few times.

- Steven


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Andrew Pinski
On Mon, Nov 16, 2015 at 2:09 PM, Toon Moene  wrote:
> On 11/16/2015 11:02 PM, Jack Howarth wrote:
>
>> FYI, this posting has a bit more detail on the actual implementation...
>>
>> http://lists.llvm.org/pipermail/llvm-dev/2015-November/092438.html
>
>
> That surely helps - thanks.


Basically, NVIDIA bought PGI and is now open-sourcing their Fortran
front-end.  Nothing magical really.
NVIDIA is trying to have the "community" do more of their
development for them.
This is an anti-open/free-source way of doing things.

Thanks,
Andrew

>
>
> --
> Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
> Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
> At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
> Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: LLVM to get massive GPU support with Fortran

2015-11-16 Thread Jack Howarth
On Mon, Nov 16, 2015 at 5:24 PM, Andrew Pinski  wrote:
> On Mon, Nov 16, 2015 at 2:09 PM, Toon Moene  wrote:
>> On 11/16/2015 11:02 PM, Jack Howarth wrote:
>>
>>> FYI, this posting has a bit more detail on the actual implementation...
>>>
>>> http://lists.llvm.org/pipermail/llvm-dev/2015-November/092438.html
>>
>>
>> That surely helps - thanks.
>
>
> Basically NVIDIA bought PGI and now is open source their fortran
> front-end.  Nothing magical really.
> Basically NVIDIA is trying to have the "community" do more of their
> development for them.
> This is an anti-open/free source way of doing things.
>

Well, if Nvidia/PGI wanted to shift their fortran compiler to use llvm
for its GPU support, they were bound to open source it. The whole
model for llvm is based on vendors wanting to work within the open
source tree because of the pain involved in maintaining independent
forks based on llvm (due to the heavy upstream churn).

> Thanks,
> Andrew
>
>>
>>
>> --
>> Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
>> Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
>> At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
>> Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: basic asm and memory clobbers

2015-11-16 Thread David Wohlferd

On 11/16/2015 1:29 PM, Jeff Law wrote:

On 11/15/2015 06:23 PM, David Wohlferd wrote:

On 11/9/2015 1:32 AM, Segher Boessenkool wrote:

On Sun, Nov 08, 2015 at 04:10:01PM -0800, David Wohlferd wrote:
It seems like a doc update is what is needed to close PR24414 (Old-style
asms don't clobber memory).

What is needed to close the bug is to make the compiler work properly.


The question of course is, what does 'properly' mean?  My assertion is
that 10 years on, 'properly' means whatever it's doing now. Changing it
at this point will probably break more than it fixes, and (as you said)
there is a plausible work-around using extended asm.

So while this bug could be resolved as 'invalid' (since the compiler is
behaving 'properly'), I'm thinking to split the difference and 'fix' it
with a doc patch that describes the supported behavior.
I'd disagree.  A traditional asm has to be considered an opaque blob 
that can read, write or clobber any register or memory location.


When I first encountered basic asm, my expectation was that of course it 
clobbers.  It HAS to, right?  But that said, let me give my best devil's 
advocate impersonation and ask: Why?


- There is no standard that says it must do this.
- I'm only aware of 1 person who has ever asked for this change. And the 
request has been deemed so unimportant it has languished for a very long 
time.
- There is a plausible work-around with extended asm, which (mostly) has 
clear semantics regarding clobbers.
- While the change probably won't introduce bad code, if it does it will 
be in ways that are going to be difficult to track down, in an area 
where few have the expertise to debug.
- Existing code that currently does things 'right' (ie push/pop any 
modified registers) will suddenly be doing things 'wrong,' or at least 
wastefully.
- Other than top-level asm, it seems like every existing basic asm will 
(probably) get a new performance penalty (memory usage + code size + 
cycles) to allow for situations they may already be handling correctly 
or that don't apply.


True, these aren't particularly compelling reasons to not make the 
change.  But I also don't see any compelling benefits to offset them.


For existing users, presumably they have already found whatever solution 
they need and will just be annoyed that they have to revisit their code 
to see the impact of this change.  Will they need to #if to ensure 
consistent performance/function between gcc versions?  For future users, 
they will have the docs telling them the behavior, and pointing them to 
the (now well documented) extended asm.  Where's the benefit?


If someone were proposing basic asm as a new feature, I'd absolutely be 
arguing that it should clobber everything.  Or I might argue that basic 
asm should only be allowed at top-level (where I don't believe 
clobbering matters?) and everything else should be extended asm so we 
KNOW what to clobber (hmm...).
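
As a minimal sketch of that top-level case (GNU as syntax assumed, toy 
content): file-scope basic asm has no surrounding code whose registers or 
memory could be affected, so the missing clobber list genuinely doesn't 
matter there.

  // Toy file-scope ("top-level") asm: only basic asm is allowed here.
  asm(".pushsection .comment\n\t"
      ".asciz \"toy top-level basic asm\"\n\t"
      ".popsection");

  int main(void) { return 0; }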


But changing this so gcc tries (probably futilely) to emulate other 
implementations of asm...  That seems like a weak case to support a 
change to this long-time behavior.  Unless there are other benefits I'm 
just not seeing?


--
Ok, that's my best shot.  You have way more expertise and experience 
here than I do, so I expect that after you think it over, you'll make 
the right call.  And despite my attempt here to defend the opposite 
side, I'm not entirely sure what the right call is.  But these seem like 
the right questions.


Either way, let me know if I can help.

It's also the case that assuming an old style asm can read or clobber 
any memory location is the safe, conservative thing to do. 


Well, safe-r.  Even if you make this change, embedding basic asm in C 
routines still seems risky.  Well, riskier than extended which is risky 
enough.


So the right thing in my mind is to ensure that behaviour 


The right thing in my mind is to find ways to prod people into using 
extended asm instead of basic.  Then they explicitly specify their 
requirements rather than depending on clunky all-or-nothing defaults.
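
A minimal sketch of what that looks like in practice (x86 assumed; toy 
functions, not from this thread) -- each statement states exactly what it 
reads, writes and clobbers instead of relying on a blanket default:

  static inline unsigned long byteswap(unsigned long x) {
    asm("bswap %0"
        : "+r"(x));                       // read-write operand, nothing else touched
    return x;
  }

  static inline void full_barrier(void) {
    asm volatile("mfence" ::: "memory");  // explicit memory clobber, no operands
  }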


Maybe to the extent of gcc deprecating (non-top level) basic over time 
(-fallow-basic-asm=[none|top|any] where v6 defaults to 'any' and v7 
defaults to 'top').  I'd be surprised if gcc went this way, but that 
doesn't mean it wouldn't be better.



and document it.


and to document it.


Andrew's logic is just plain wrong in that BZ.





Whether that means clobbering memory or not, I don't much care -- with
the status quo, if you want your asm to clobber memory you have to use
extended asm; if basic asm is made to clobber memory, if you want your
asm to *not* clobber memory you have to use extended asm (which you
can with no operands by writing e.g.  asm("bork" : );  ).  So both
behaviours are available whether we make a change or not.

But changing things now will likely break user code.
Having a traditional asm clobber memory should not break user code.  
It may pessimize it slightly, but if it does,