Re: C PreProcessor GCC-specific features ideas.

2014-04-23 Thread Renato Golin
On 23 April 2014 09:03, David Brown  wrote:
> Again, this is stepping /way/ outside the appropriate bounds of a
> built-in pre-processor.
>
> I don't disagree with the idea of improving upon autotools.  But I don't
> think adding features to the pre-processor is the way to go.

I completely agree.


> The big step would be to support modules in cooperation with LLVM, and
> thus eliminate "#include":

From what it seems, this is the only sane thing to do in the compiler.

While that's not yet an option, Make magic looks a lot less dirty than any
pre-processor extension.

cheers,
--renato


Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM

2014-06-24 Thread Renato Golin
On 24 June 2014 15:11, Vladimir Makarov  wrote:
>   A few people asked me about new performance comparison of latest GCC
> and LLVM.  So I've finished it and put it on my site
>
> http://vmakarov.fedorapeople.org/spec/
>
>   The comparison is achievable from 2014 link and links under it in
> the left frame.

Hi Vladimir,

Nice comparison!

It's in the same ballpark as my own findings (with SPEC and other
benchmarks) on ARM: around +10% performance for GCC over LLVM, but
also larger code size, and massive compilation-time savings with LLVM.

I wonder how much of that is due to auto-vectorization (LLVM turns it
on at -O2 and above; I suppose GCC only does so at -O3?). From Ramana's
point, though, there may be nothing significant if you haven't enabled NEON.

It's also interesting to see LTO being a major driver of
recent performance improvements in both compilers.

cheers,
--renato


Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM

2014-06-24 Thread Renato Golin
On 24 June 2014 18:16, Eric Christopher  wrote:
> Might want to try asking them to run some comparison numbers though. I
> remember they did before EuroLLVM a while back when we were looking at
> merging our two aarch64 ports.

http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-April/072393.html

In that context, "ARM64" was the new (merged into) back-end, "AArch64"
was the old (merged from) back-end.

In this context, the first line of each result is the relevant bit,
though it came from a back-end that no longer exists (per se),
so it should be taken with a grain of salt.

cheers,
--renato


Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM

2014-06-25 Thread Renato Golin
On 25 June 2014 10:26, Bingfeng Mei  wrote:
> Why is GCC code size so much bigger than LLVM? Does -Ofast have more unrolling
> on GCC? It doesn't seem increasing code size help performance (164.gzip & 
> 197.parser)
> Is there comparisons for O2? I guess that is more useful for typical
> mobile/embedded programmers.

Hi Bingfeng,

My analysis wasn't as thorough as Vladimir's, but I found that GCC
wasn't eliminating some large blocks of dead code or inlining as much
as LLVM was. I haven't dug deeper, though. Some of the differences
were quite big; I'd be surprised if they could all be explained by
loop unrolling and vectorization...

cheers,
--renato


Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM

2014-07-15 Thread Renato Golin
On 15 July 2014 15:43, Jan Hubicka  wrote:
> I also noticed that GCC code size is bigger for both firefox and libreoffice.
> There was some extra bloat in 4.9 compared to 4.8.
> Martin did some tests with -O2 and various flags, perhaps we could trottle
> some of -O2 optimizations.

Now that you mention it, I do believe that was 4.9 compared
with both 4.8 and LLVM 3.4, all at -O3, around February.

Unfortunately, I can't share the results with you, but since both
Firefox and LibreOffice show the same behaviour, I guess you already
have a way forward.

cheers,
--renato


Re: [GNU Tools Cauldron 2014] GCC and LLVM collaboration

2014-08-05 Thread Renato Golin
On 5 August 2014 16:36, Prathamesh Kulkarni  wrote:
> Hi,
>I have written notes on  "GCC and LLVM collaboration BOF"
> presented at the Cauldron. I would be grateful if you would
> review it for me.

Hi Prathamesh,

Sounds about right.

Other reviews, FYI:

http://llvmweekly.org/issue/29

http://article.gmane.org/gmane.comp.compilers.llvm.devel/75207

cheers,
--renato


Re: Testing Leak Sanitizer?

2014-11-28 Thread Renato Golin
On 27 November 2014 at 21:48, Christophe Lyon
 wrote:
> On 30 September 2014 at 19:08, Konstantin Serebryany
>  wrote:
>> Correct, you can run tests from llvm tree with any compiler.
>> https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerTestSuite
>>
>
> I've read that document, and as a first step I wanted to build LLVM +
> run the tests in the "best case" (before any modifications I could
> make, and to have a reference to compare with GCC).
> I have a few questions.
>
> To have clang as the "toolchain I want to test", I added the clang
> sources under llvm_tmp_src/tools,  and compiler-rt sources under
> projects.
>
> I managed to run the tests, but I couldn't find the detailed logs.
> I added -DLLVM_LIT_ARGS=-v when calling cmake, which gave me a list like:
> XFAIL: AddressSanitizer64 :: TestCases/use-after-scope.cc (245 of  249)
> PASS: AddressSanitizer64 :: TestCases/use-after-poison.cc (246 of 249)
>
> 1- I suppose there are more details, like gcc.log. Where are they?
> 2- this is running x86_64 native tests, how can I cross-test with
> aarch64 (using qemu for instance)?

Hi Christophe,

I'm adding Greg, since he made it work a while ago. I remember he
added a few options to CMake and LIT to run the tests on an emulator
(basically QEMU), but I'm not sure all the cases were covered and
everything was working.

Meanwhile, can you build it natively on AArch64? I remember I ran
all the compiler-rt tests on AArch64, including the sanitizers, last
March. The results were encouraging... :)

cheers,
--renato


Re: Problem with extremely large procedures and 64-bit code

2015-01-23 Thread Renato Golin
On 23 January 2015 at 16:07, Ricardo Telichevesky  wrote:
> gcc: Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn) - don't
> know what that means expected  a number like 4.2.1 or something like that,
> 2.53 GHz Intel Core 2 Duo

Hi Ricardo,

This is not gcc at all, it's Clang+LLVM. :/

I'm not sure why Apple decided to call the binary "gcc", which
obviously causes more confusion than it solves, but that's
beside the point. You should try Richard's suggestions again on the
Linux/GCC setup you originally started with.

cheers,
--renato


Re: Compiler warnings while compiling gcc with clang‏

2015-05-05 Thread Renato Golin
On 5 May 2015 at 05:58, Andrew Pinski  wrote:
> These two are bogus and really clang in GCC's mind.  The main reason
> is the standard says struct and class are the same thing.

Apart from the fact that classes are private by default and structs
are not. They may be similar for layout purposes, and it may be ok to
interchange them on local re-declarations when the compiler doesn't
need the type completely defined, but they're not the same thing.

The compiler might be smart and use the protection model that the
original declaration used (private/public), but what that warning is
saying is that you have refactored your code to use classes and
forgotten to update all uses, which is a very valid warning. I can't see
why one would *want* to keep the "struct" keyword. If you're already
compiling in C++ mode, removing it from variable/argument declarations
should be valid, and re-declaring incomplete types should be done with
class.

cheers,
--renato


Re: Compiler warnings while compiling gcc with clang‏

2015-05-05 Thread Renato Golin
On 5 May 2015 at 11:23, Trevor Saunders  wrote:
> Saying forward declaration should be done with class is a value choice
> you've made.

Yes.


>  Given forward declarations with struct and class are
> interchangable it seems like a perfectly valid choice to me to decide
> you don't care to bother fix up all the forward declaration when you
> change from class to struct.

Indeed.


> or really care about consistancy I guess,

That's my view. And I understand it might not be a common one.


> but it
> seems to me since the code it warns about isn't "wrong" in any real way
> the warning doesn't deserve to be in -Wall or really even in -Wextra.

Absolutely agreed.

However, warnings are not just for errors or even potential errors;
they're there to help you write better code. Whether you consider
"better" to mean just correct, or also easy to read, is open to
interpretation.

My view is that being easy to read and consistent goes a long way
towards maintainability and avoiding future errors. I know a lot of
excellent programmers who disagree with me; that's a matter of
opinion.

Having said that, we seem to agree that -Wall is *only* about
potential errors, not clarity. Maybe -pedantic would be a better place
for this warning.

cheers,
--renato


Fwd: LLVM collaboration?

2014-02-07 Thread Renato Golin
Folks,

I'm about to do something I've been advised against, but since I
normally don't have good judgement, I'll risk it, because I think it's
worth it. I know some people here share my views and this is the
reason I'm writing this.


The problem

For a long time now I've been hearing people on the LLVM list say
things like "oh, ld should not accept this deprecated instruction, but we
can't change that" or "that would be a good idea, but we need to talk to
the GCC guys first", and, to be honest, nobody ever does.

Worse still, with Clang and LLVM getting more traction recently, and
with a lot of very interesting academic work being done, a lot of new
things are getting into LLVM first (like the sanitizers, or some
specialized pragmas), and we're dangerously close to starting to have
clang-extensions, which, in my humble opinion, would be a nightmare.

We, on the other side of the fence, know very well how hard it is to
keep up with legacy undocumented gcc-extensions, and the ARM side is
particularly filled with magical things, so I know very well how you
guys would feel if, one day, you had to start implementing clang stuff
without even participating in the original design, just because someone
relies on it.

So, as far as I can see (please, correct me if I'm wrong), there are
two critical problems that we're facing right now:

1. There IS an unnecessary fence between GCC and LLVM.

License arguments are one reason why we can't share code as easily as
we would like, but there is no argument against sharing ideas,
cross-reporting bugs, helping each other implement a better
compiler/linker/assembler/libraries just because of an artificial
wall. We need to break this wall.

I rarely see GCC folks reporting bugs on our side, or people saying
"we should check with the GCC folks" actually doing it. We're not
contagious folks, you know. Talking to GCC engineers won't make me a
lesser LLVM engineer, and vice-versa.

I happen to have a very deep respect for GCC *and* for my preferred
personal license (GPLv3), but I also happen to work with LLVM, and I
like it a lot. There is no contradiction on those statements, and I
wish more people could share my opinion.

2. There are decisions that NEED to be shared.

In the past, GCC implemented a lot of extensions because the standards
weren't good enough. This has changed, but the fact that there will
always be things that don't belong in any standard, and are very
specific to the toolchain's inner workings, hasn't.

It would be beneficial to both toolchains to have a shared forum where
we could not only discuss how to solve problems better, but also keep
track of the results, so we can use them as guidelines when implementing
those features.

Further still, other compilers would certainly benefit from such
guidelines, if they want to interact with our toolchains. So, this
wouldn't be just for our sake, but also for future technologies. We
had a hard time figuring out why GCC would do this or that, and in the
end there was always a reason (mostly good, sometimes not so much),
but we wasted a lot of time chasing problems lost in translation.


The Open Source Compiler Initiative

My view is that we're unnecessarily duplicating a lot of the work to
create a powerful toolchain. The license problems won't go away, so I
don't think LLVM will ever disappear. But we're engineers, not
lawyers, so we should solve the bigger technical problem in a way that
we know how: by making things work.

For the last year or two, Clang and GCC have been approaching an
asymptote of what people believe a toolchain should be, but we won't
converge to the same solution unless we talk. If we keep our ideas
enclosed inside our own communities (who has the time to follow both
the gcc and llvm lists?), we'll forever fly around the expected target
and never reach it.

To solve the technical problem of duplicated work we just need to
start talking to each other. This mailing list (or LLVM's) is not a
good place, since the traffic is huge and not everyone is interested,
so I think we should have something else (another list? a web page? a
bugzilla?) where we'd record all common problems and proposals for new
features (not present in any standards), so that at least we know what
the problems are.

Fixing a problem or accepting a proposal would go a long way
towards having it treated as kosher by both compilers, and that could
then be considered the standard compiler implementation, so that other
compilers, even the closed source ones, could follow suit.

I'll be at the GNU Cauldron this year, feel free to come and discuss
this and other ideas. I hope to participate more in the GCC side of
things, and I wish some of you guys would do the same on our side. And
hopefully, in a few years, we'll all be on the same side.

I'll stop here, TL;DR-wise. Please reply copying me, as I'm not
(yet) subscribed to this list.

Best Regards,
--renato


Re: LLVM collaboration?

2014-02-07 Thread Renato Golin
On 7 February 2014 21:53, Diego Novillo  wrote:
> I think this would be worth a BoF, at the very least. Would you be
> willing to propose one? I just need an abstract to get it in the
> system. We still have some room left for presentations.

Hi Diego,

Thanks, that'd be great!

A BoF would give us more time to discuss the issue, even though I'd
like to start the conversation a lot earlier. Plus, I have a lot more
to learn than to talk about. ;)

Something along the lines of...

* GCC and LLVM collaboration / The Open Source Compiler Initiative

With LLVM mature enough to feature as the default toolchain in some
Unix distributions, and with the inherent (and profitable) sharing of
solutions, ideas and code between the two, we need to start talking at
a more profound level. There will always be problems that can't be
included in any standard (language, extension, or machine-specific)
and are intrinsic to the compilation infrastructure. For those, and
other common problems, we need solutions common to at least LLVM
and GCC, but ideally to any open source (and even closed source)
toolchain. In this BoF session, we shall discuss how far this
collaboration can take us, how we should start, and what the next
steps are to make this happen.

cheers,
--renato


Re: LLVM collaboration?

2014-02-07 Thread Renato Golin
On 7 February 2014 22:33, Andrew Pinski  wrote:
> I think if it is going to be called anything, it should be GNU and LLVM
> collaboration, since GCC does not include binutils/gdb while LLVM
> includes the assembler/etc.

Good point. I do mean the whole toolchain.

cheers,
--renato


Re: LLVM collaboration?

2014-02-07 Thread Renato Golin
On 7 February 2014 22:42, Jonathan Wakely  wrote:
> The sanitizers are IMHO an impressive example of collaboration. The
> process may not be perfect, but the fact is that those powerful tools
> are available in both compilers - I think that's amazing!

I agree.


> Like the Blocks extension? :-)

So, as an example, I started a discussion about our internal
vectorizer and how we could control it from pragmas, to test and report
errors. It turned out a lot bigger than I imagined, with people
defending inclusion in OpenMP pragmas, or implementing them as C++11
annotations, and even talking about back-porting annotations to C89
code as an extension. Seriously, that gave me the chills.

Working on the ARM debugger, the compiler, and now with LLVM, I had to
work around and understand GNU-isms and the contract that the kernel has
with the toolchain, which I don't think is entirely healthy. We should,
yes, have a close relationship with them, but some proposals are
easier to implement in one compiler than in another, and others are just
implemented because it was the quickest implementation, or generated
the smallest code, or whatever. These are things I was expecting to see
in closed-source toolchains (and I have), but not in an open source one.

At the very least, some discussion could point to defects in one or the
other toolchain, as well as in the kernel. We've seen a fair amount of
bad code that GCC accepts in the kernel just because it can (VLAIS,
nested functions), not because it's sensible, and that's actually
making the kernel code worse with respect to the standard. Opinions
will vary, and I don't expect everyone to agree with me that those are
nasty (nor do I want a flame war over it, please), but some consensus
would be good.


> I expect that many GCC devs aren't reporting bugs because they're just
> not using LLVM.  I don't report OpenBSD bugs either, not because I
> dislike OpenBSD, I just don't use it.

I understand that, and I take your point. I wasn't asking everyone
to use it, but to enquire about new extensions when they come your
way, as we should do when they come our way. I'm guilty here too, this
being my first email to the gcc list (and I have been publicly bashed at
FOSDEM because of it, which I appreciate).


> For things that don't belong in any standard, such as warning options,
> that's an area where the compilers may be in competition to provide a
> better user-experience, so it's unsurprising that options get added to
> one compiler first without discussing it with the other project. What
> tends to happen with warnings is someone says "hey, clang has this
> warning, we should add it too" e.g. -Wdelete-non-virtual-dtor or
> -Winclude-guard, so we may end up agreeing eventually anyway.

I think you have touched on a very good point: "competition to provide
the best user experience". Do we really need that?

Front-end warnings are quite easy to replicate, but some other flags
may have slightly different semantics in each compiler, and making the
user tell the difference is cruel. Inline assembly magic and new
ASM directives are another issue that populates the kernel (and we've
been implementing all of them in our assembler to compile the kernel).
That will simply never go away.

I question some of those decisions, as I have questioned some of ARM's
decisions on its ABIs: things that had a purpose, but whose core
reason is gone, so we can move along. Some consensus would probably
have helped design a better, longer-lasting solution; a lot more
consensus would have halted any progress, so we have to be careful.
But I specifically don't think that extensions required by third
parties (like the kernel) should be discussed directly with any one
compiler, as that will perpetuate this problem.

Some kernel developers, including Linus, are very receptive to
compiling it with Clang, so new extensions will probably be discussed
with both. Now, if we require them to talk to each community
separately, I'm sure the user experience will be terrible when trying to
consolidate it.

I don't want us to compete, I want us to collaborate. I don't believe
LLVM is ever going to steal GCC's shine, but both will coexist, and
having a friendly coexistence would be a lot better for everyone.
Aren't we doing open source for a better world? I can't see where
competition fits into this.

As I said before, one of my main goals of working with LLVM is to make
*GCC* better. I believe that having two toolchains is better than one
for that very reason, but maintaining two completely separate and
competing toolchains is not sustainable, even for open source
projects.

cheers,
--renato


Re: Fwd: LLVM collaboration?

2014-02-07 Thread Renato Golin
On 7 February 2014 23:30, Joseph S. Myers  wrote:
> I think there are other closely related issues, as GCC people try to work
> around issues with glibc, or vice versa, rather than coordinating what
> might be the best solution involving changes to both components,

Hi Joseph,

Thanks for the huge email; all of it (IMHO) was spot on. I agree with
your arguments, and one of the reasons I finally sent my email
is that I'm starting to see all of this on the LLVM side, too.

Because of licenses, we have to replicate libgcc, libstdc++, the
linker, etc. And in many ways, features get added in random places
because it's the easiest route, or because it's the right place to be,
even though there isn't anything controlling or monitoring the feature
in the grand scheme of things. This will, in the end, invariably take
us down the route that GNU went a few years back, when people
had to wear radioactive suits to work on some parts of GCC.

So, I guess my email was more of a cry for help than a request to
play nice (as some would infer). I don't think we should repeat the
same mistakes you guys made, but I also think that we have a lot to
offer, as you mention, in looking at extensions and proposing them to
standards bodies, keeping kernel requests sane, having a unified
argument on specific changes, and so on.

The perfect world would be one where any compiler could use any
assembler, linker and libraries interchangeably. While that may never
happen, as a long-term goal it would at least draw us a nice asymptote
to follow. Like everyone here and there, I don't have enough time to
work through every detail and follow all the lists, but if we encourage
crossover, or even cross-posting between the two lists, we might
solve common problems without wasting additional time.

--renato


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
Hi Jan,

I think this is a very good example where we could all collaborate
(including binutils).

I'll leave your reply intact, so that Chandler (CC'd) can get a bit
more context. I'm copying him because he (and I believe Diego) had
more contact with LTO than I had.

If I got it right, LTO today:

- needs the drivers to explicitly declare the plugin
- needs the library available somewhere
- may have to change the library loading semantics (via LD_PRELOAD)

Since both toolchains do the magic, binutils has no incentive to
create any automatic detection of objects.

The part that I didn't get is what you said about backward
compatibility. Would LTO work on a newer binutils with the liblto but
with an older compiler that knew nothing about LTO?

Your proposal is, then, to get binutils:

- recognizing LTO logic in the objects
- automatically loading liblto if recognized
- warning if not

I'm assuming the extra symbols would be discarded if no library is
found, together with the warning, right? Maybe an error if -Wall or
whatever.

Can we get someone from the binutils community to opine on that?

cheers,
--renato

On 11 February 2014 02:29, Jan Hubicka  wrote:
> One practical experience I have with LLVM developers is sharing experiences
> about getting Firefox to work with LTO with Rafael Espindola, and I think it
> was useful for both of us. I am definitely open to more discussion.
>
> Let's try a specific topic that has been on my TODO list for some time.
>
> I would like to make it possible for multiple compilers to be used to LTO a
> single binary. As we are all making LTO more useful, I think it is a matter of
> time until people start shipping LTO object files by default, and users
> will end up feeding them into different compilers or incompatible versions of
> the same compiler. We probably want to make this work, even though the
> cross-module optimization will not happen in this case.
>
> The plugin interface in binutils seems to do its job well, both for GCC and
> LLVM, and I hope that open64 and ICC will eventually join, too.
>
> The trouble however is that one needs to pass an explicit --plugin argument
> specifying the particular plugin to load, and so GCC ships with its own
> wrappers (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does a
> similar thing.
>
> It may be smoother if binutils was able to load multiple plugins at once and
> grab plugins from system and user installed compilers without an explicit
> --plugin argument.
>
> Binutils should probably also have a way to detect LTO object files and
> produce more useful diagnostics than they do now, when there is no plugin
> claiming them.
>
> There are some PRs filed on the topic
> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
> but not much progress on them.
>
> I wonder if we can get this designed and implemented.
>
> On the other hand, GCC currently maintains a non-plugin path for LTO that is
> now only used by the darwin port due to the lack of a plugin-enabled LD there.
> It seems that the liblto used by darwin is loosely compatible with the plugin
> API, but it makes it harder to have different compilers share it (one has to
> LD_PRELOAD liblto to a different one prior to executing the linker?)
>
> I wonder, is there a chance to implement the linker plugin API as libLTO glue,
> or add plugin support to the native Darwin tools?
>
> Honza


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
On 11 February 2014 16:00, Jan Hubicka  wrote:
> I basically think that binutils should have a way for an installed compiler to
> register a plugin and load all plugins by default (or perhaps for performance,
> or upon detecting a compatible LTO object file in some way, perhaps also by
> information given in the config file) and let them claim the LTO objects they
> understand.

Right, so this would not necessarily be related to LTO, but to the
binutils plugin system. In my very limited experience with LTO and
binutils, I can't see how that would be different from just adding a
--plugin option in the compiler, unless it's something that the linker
would detect automatically without any compiler's involvement.


> With the backward compatibility I mean that if we release a new version of
> compiler that can no longer read the LTO objects of older compiler, one can
> just install both versions and have their plugins to claim only LTO objects
> they understand. Just if they were two different compilers.

Yes, this makes total sense.


> Finally I think we can make binutils recognize GCC/LLVM LTO objects
> as a special case and produce a friendly message when users try to handle
> them without a plugin, as opposed to today's strange errors about file formats
> or missing symbols.

Yes, that as well seems pretty obvious, and mostly orthogonal to the
other two proposals.

cheers,
--renato

PS: Removing Chandler, as he was not the right person to look at this.
I'll ask others on the LLVM list to chime in on this thread.


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
Now copying Rafael, who can give us some more insight on the LLVM LTO side.

cheers,
--renato


Zero-cost toolchain "standardization" process

2014-02-11 Thread Renato Golin
Hi Folks,

First of all, I'd like to thank everyone for their great responses and
heart-warming encouragement for such an enterprise. This will be my
last email about this subject on these lists, so I'd like to just let
everyone know what (and where) I'll be heading next with this topic.
Feel free to reply to me personally; I don't want to spawn an ugly
two-list thread.

As many of you noted, not everyone is actively interested in this, and
for good reasons. The last thing we want is yet-another standard
getting in the way of actually implementing features and innovating,
which both LLVM and GCC are good at. Following the comments on the GCC
list, slashdot and Phoronix forums, I think the only sensible thing is
to do what everyone said we should: talk.

Also, just this week, we got GCC developers having patches accepted in
LLVM (sanitizers) and LLVM developers discussing LTO strategies on the
GCC list. Both interactions have already shown need for improvements
on both sides. This is a *really* good start!

The proposal, then, is to have a zero-cost process, where only the
interested parties need to take action. A reactive system where
standards are agreed *after* implementation.

1. A new feature request / implementation on one of the toolchains
gets outlined, describing what's being done, in a shared place.
Basically, copy and paste from the active thread's summary into a
shared area.

2. Interested parties (pre-registered) get a reminder that new content
is available. From here, two things can happen:

  2.1 The other toolchain has the feature, in which case developers should:
    2.1.1 Agree that this is, indeed, the way they have implemented it,
          and check the box: "standard agreed".
    2.1.2 Disagree on the implementation and describe what they've done instead.

  2.2 The other toolchain doesn't have it:
    2.2.1 Agree with the implementation and mark it as "standard agreed"
          and "future work".
    2.2.2 Disagree on the implementation and mark it as "to discuss".

In both disagreement cases, pre-registered developers of both
toolchains would receive emails outlining the conflict, and they can
discuss as much as they want until common ground is found, or not.
It's perfectly fine to "agree to disagree" when no "common standard"
is reached.

Some important notes:

* No toolchain should be forced to accommodate the standard, but it
would be good to *at least* describe what they do instead, so that
users don't get surprised.
* No toolchain should be forced to stick with the agreed standard, and
discussions about migrating to a better implementation would naturally
happen on a cross-toolchain forum.
* No toolchain should be forced to implement a feature just because
the other toolchain did. It's perfectly fine to never implement it, if
the need never arises.
* No developer should be forced to follow the emails or even care
about the process. Other developers in their own communities should,
if necessary, enforce their own standards, at their own pace, which may
or may not agree with the shared one.

How is that different from doing nothing?

First, and most important, it will log our cross-toolchain actions.
Today, we only have two good examples of cross-interaction, and
neither is visible from the other side. When (if) we start having
more, it'd be good to be able to search through them, or contribute to
them on an ad-hoc basis, if a new feature is proposed. We'll have a
documented record of the non-standard things that we're doing, before
they go into other standards, too.

Second, it'll expose what we already have as "standard", and enable a
common channel for like-minded people to solve problems in both
toolchains. It'll also offload both lists from having to follow every
development, while still giving those interested a way to discuss
and agree on a common standard.

Finally, once this "database" of implementation details is big enough,
it would be easy to spot the conflicts, and it'll serve as a good TODO
list for commoning up implementation details, or even for future
compilers to choose one or the other. Entire projects, or theses, could
be written based on it, fostering more innovation in the process.

What now?

Well, people really interested in building such a system should (for
now) email me directly. If I get enough feedback, we can start
discussing in private (or on another list) how we're going to
proceed.

During the brainstorm phase, or if not enough people are interested, I
still think we shouldn't stop talking. The interaction is already
happening and it's really good, I think we should just continue and
see where this takes us. Maybe by the GNU Cauldron, enough people
would want to contribute, maybe later, maybe never. Whatever works!

To be honest, I'm already really happy with the outcome, so for me, my
target was achieved!

I will report what happens during the next few months on the GCC+LLVM
BoF, so if you're at least mildly interested, please do attend. For
those looking for a few more answers to all the 

Vectorizer Pragmas

2014-02-15 Thread Renato Golin
Folks,

One of the things that we've been discussing for a while, where there
are just too many options out there and none fits exactly what we're
looking for (obviously), is vectorization control pragmas.

Our initial semantics works on a specific loop / lexical block to:
 * turn vectorization on/off (even if -fvec is disabled)
 * specify the vector width (number of lanes)
 * specify the unroll factor (either to help with vectorization or to
use when vectorization is not profitable)

Later metadata could be added to:
 * determine memory safety at specific distances
 * determine vectorized functions to use for specific widths
 * etc

The current discussion is floating around four solutions:

1. A local pragma (#pragma vectorize), which is losing badly on the
argument that it's yet another pragma to do mostly the same thing many
others do.

2. Using the OMP SIMD pragmas (#pragma simd, #pragma omp simd), which are
already standardised (OpenMP 4.0, I think), but they don't cover all the
semantics we may want in the future; plus they're segregated and may
confuse users.

3. Using GCC-style optimize pragmas (#pragma Clang optimize), which
could be Clang-specific without polluting other compilers' namespaces.
The problem here is that we'd end up with duplicated flags with
closely-related-but-different semantics between #pragma GCC and
#pragma Clang variants.

4. Using C++11 annotations. This would be the cleanest way, but would
only be valid in C++11 mode and could be very well a different way to
express an identical semantics to the pragmas, which are valid in all
C variants and Fortran.

I'm trying to avoid adding new semantics to old problems, but I'm also
trying to avoid spreading closely related semantics across a multitude
of pragmas, annotations and who knows what else.

Does GCC have anything similar? Do you guys have any ideas we could use?

I'm open to anything, even in favour of one of the defective
propositions above. I'd rather have something than nothing, but I'd
also rather have something that most people agree on.
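
To make options 1 and 2 concrete, here's a rough sketch of what each
might look like on a trivial loop. The "#pragma vectorize" spelling and
its clauses are hypothetical (that is exactly what's under discussion);
the "#pragma omp simd" form is the standard OpenMP 4.0 one:

    /* Option 1: a dedicated, compiler-specific pragma (hypothetical spelling) */
    void scale1(int n, float *a, const float *b)
    {
        #pragma vectorize enable width(8) unroll(2)
        for (int i = 0; i < n; i++)
            a[i] = b[i] * 2.0f;
    }

    /* Option 2: the standardised OpenMP 4.0 form */
    void scale2(int n, float *a, const float *b)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            a[i] = b[i] * 2.0f;
    }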

cheers,
--renato


Re: Vectorizer Pragmas

2014-02-15 Thread Renato Golin
On 15 February 2014 19:26, Jakub Jelinek  wrote:
> GCC supports #pragma GCC ivdep/#pragma simd/#pragma omp simd, the last one
> can be used without rest of OpenMP by using -fopenmp-simd switch.

Do the simd/omp pragmas have control over the tree vectorizer? Or are
they just flags for the OpenMP implementation?


> I don't see why we would need more ways to do the same thing.

Me neither! That's what I'm trying to avoid.

Do you guys use those pragmas for everything related to the
vectorizer? I found that the Intel pragmas (not just simd and omp) are
a pretty good fit for most of the functionality we need.

Does GCC use Intel pragmas to control the vectorizer? Would be good to
know how you guys did it, so that we can follow the same pattern.

Can GCC vectorize lexical blocks as well? Or just loops?

If those pragmas can't be used on lexical blocks, would it be desirable
to extend that in GCC? The Intel guys are pretty happy implementing
simd, omp, etc. in LLVM, and I think if the lexical-block problem is
common, they may even be open to extending the semantics.

cheers,
--renato


Re: Vectorizer Pragmas

2014-02-15 Thread Renato Golin
On 15 February 2014 22:49, Tim Prince  wrote:
> In my experience, the (somewhat complicated) gcc --param options work
> sufficiently well for specification of unrolling.

There is precedent for --param in LLVM, so we could go this way, too,
though I can't see how it'd be applied to a specific function, loop
or lexical block.


> In the same vein, I haven't seen any cases where gcc 4.9 is excessively 
> aggressive in
> vectorization, so that a #pragma novector plus scalar unroll  is needed, as
> it is with Intel compilers.
> (...)
> If your idea is to obtain selective effective
> auto-vectorization in source code which is sufficiently broken that -O2
> -ftree-vectorize can't be considered or -fno-strict-aliasing has to be set,
> I'm not about to second such a motion.

Our main idea with this is to help people report missed vectorization
in their code, and to give them a way to achieve performance until LLVM
catches up.

Another case for this (and other pragmas controlling the optimization
level on a per-function basis) is to help debugging of specific
functions while leaving others untouched.

I'd not condone the use of such pragmas in a persistent manner, nor
for any code that goes into production, nor to work around broken code
at higher optimization levels.

cheers,
--renato


Re: Vectorizer Pragmas

2014-02-16 Thread Renato Golin
On 16 February 2014 17:23, Tobias Burnus  wrote:
> As '#pragma omp simd' doesn't generate any threads and doesn't call the
> OpenMP run-time library (libgomp), I would claim that it only controls the
> tree vectorizer. (Hence, -fopenmp-simd was added as it permits this control
> without enabling thread parallelization or dependence on libgomp or
> libpthread.)

Right, this is a bit confusing, but it should suffice for our purposes,
which are very similar to GCC's.


> Compiler vendors (and users) have different ideas whether the SIMD pragmas
> should give the compiler only a hint or completely override the compiler's
> heuristics. In case of the Intel compiler, the user rules; in case of GCC,
> it only influences the heuristics unless one passes explicitly
> -fsimd-cost-model=unlimited (cf. also -Wopenmp-simd).

We prefer to be on the safe side, too. We're adding a warning callback
mechanism to warn about possibly dangerous situations (debug messages
already do that), possibly with the same idea as -Wopenmp-simd. But
the intent is not to vectorize if we're sure it'll break things. Only
when in doubt will we trust the pragmas/flags.

The flag -fsimd-cost-model=unlimited might be a bit too heavy on other
loops, and is the kind of thing that I'd rather have as a pragma or
not at all.


> As a user, I found Intel's pragmas interesting, but at the end regarded
> OpenMP's SIMD directives/pragmas as sufficient.

That was the kind of user experience that I was looking for, thanks!


> According to http://gcc.gnu.org/projects/tree-ssa/vectorization.html,
> basic-block vectorization (SLP) support exists since 2009.

Would it be desirable to use some pragmas to control lexical blocks,
too? I'm not sure omp/cilk pragmas apply to lexical blocks...
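
For reference, by "lexical block" vectorization I mean straight-line
code like the sketch below, which SLP can turn into a single vector
operation; the loop pragmas don't obviously have a spelling for this
case:

    /* four independent, adjacent operations on consecutive elements:
       classic SLP material, no loop involved */
    void slp_block(float *a, const float *b, const float *c)
    {
        a[0] = b[0] + c[0];
        a[1] = b[1] + c[1];
        a[2] = b[2] + c[2];
        a[3] = b[3] + c[3];
    }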

cheers,
--renato


Re: Vectorizer Pragmas

2014-02-17 Thread Renato Golin
On 16 February 2014 23:44, Tim Prince  wrote:
> I don't think many people want to use both OpenMP 4 and older Intel
> directives together.

I have less and less incentive to use anything other than omp4,
cilk and whatever. I think we should be able to map all our internal
needs onto those pragmas.

On the other hand, if you guys have any cross-discussion with Intel
folks about it, I'd love to hear about it. Since our support for those
directives is a bit behind, it would be good not to duplicate the
efforts in the long run.

Thanks!
--renato


Re: Vectorizer Pragmas

2014-02-17 Thread Renato Golin
On 17 February 2014 14:47, Tim Prince  wrote:
> I'm continuing discussions with former Intel colleagues.  If you are asking
> for insight into how Intel priorities vary over time, I don't expect much,
> unless the next beta compiler provides some inferences.  They have talked
> about implementing all of OpenMP 4.0 except user defined reduction this
> year.  That would imply more activity in that area than on cilkplus,

I'm expecting this. Any proposal to support Cilk in LLVM would be
purely temporary and not endorsed in any way.


> although some fixes have come in the latter.  On the other hand I had an
> issue on omp simd reduction(max: ) closed with the decision "will not be
> fixed."

We still haven't got pragmas for induction/reduction logic, so I'm not
too worried about them.


> I have an icc problem report in on fixing omp simd safelen so it is more
> like the standard and less like the obsolete pragma simd vectorlength.

Our width metadata is slightly different in that it means "try to use
that length", rather than "it's safe to use that length", which is why
I'm holding off on using safelen for the moment.
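
To illustrate the difference with a small made-up example: safelen(8)
is a promise that no loop-carried dependence has a distance smaller
than 8, so vector lengths up to 8 are *safe* here, whereas our width
metadata would merely *request* a length:

    void shift_add(int n, float *a, const float *b)
    {
        /* a[i] depends on a[i - 8], so the dependence distance is 8:
           vectorizing with up to 8 lanes is safe, more than 8 is not */
        #pragma omp simd safelen(8)
        for (int i = 8; i < n; i++)
            a[i] = a[i - 8] + b[i];
    }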


> Also, I have some problem reports active attempting to get clarification of
> their omp target implementation.

Same here... RTFM is not enough in this case. ;)


> You may have noticed that omp parallel for simd in current Intel compilers
> can be used for combined thread and simd parallelism, including the case
> where the outer loop is parallelizable and vectorizable but the inner one is
> not.

That's my fear of going with omp simd directly. I don't want to be
throwing threads all over the place when all I really want is vector
code.

For the time being, my proposal is to use the legacy pragmas:
vector/novector, unroll/nounroll and simd vectorlength, which map nicely
onto the metadata we already have and don't incur the OpenMP overhead.
Later on, if OpenMP ends up with simple non-threaded pragmas, we should
use those and deprecate the legacy ones.
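
For illustration, those legacy spellings would look roughly like this
(modelled on the Intel-style pragmas; the exact clause names are still
open on our side):

    void example(int n, float *y, const float *x, float a)
    {
        /* ask for 8 lanes and an unroll factor of 2 on this loop;
           none of this implies OpenMP threads or a runtime library */
        #pragma simd vectorlength(8)
        #pragma unroll(2)
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];

        /* and explicitly keep this one scalar */
        #pragma novector
        for (int i = 0; i < n; i++)
            if (x[i] < 0.0f)
                y[i] = 0.0f;
    }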

If GCC is trying to do the same thing for non-threaded vector
code, I'd be glad to be involved in the discussion. Some LLVM folks
think this should be an OpenMP discussion; I personally think that's
pushing the boundaries of an inherently threaded library extension a
bit too far.

cheers,
--renato


Re: ARM inline assembly usage in Linux kernel

2014-02-19 Thread Renato Golin
On 19 February 2014 11:58, Richard Sandiford
 wrote:
> I agree that having an unrecognised asm shouldn't be a hard error until
> assembly time though.  Saleem, is the problem that this is being rejected
> earlier?

Hi Andrew, Richard,

Thanks for your reviews! We agree that we should actually just ignore
the contents until object emission.

Just for context, one of the reasons we enabled inline assembly
checks is for some obscure cases where the snippet changes the
instruction set (ARM -> Thumb) and the rest of the function becomes
garbage. Our initial implementation was to always emit .arm/.thumb
after *any* inline assembly, which would become a no-op in the worst
case. But since we had easy access to the assembler, we thought: "why
not?".

The idea now is to parse the snippet for cases like .arm/.thumb,
but only emit a warning iff -Wbad-inline-asm (or whatever) is set (and
not make it on by default); otherwise, ignore it. We're hoping our
assembler will be able to cope with the multiple levels of indirection
automagically. ;)

Thanks again!
--renato


Re: ARM inline assembly usage in Linux kernel

2014-02-19 Thread Renato Golin
On 19 February 2014 23:19, Andrew Pinski  wrote:
> With the unified assembly format, you should not need those
> .arm/.thumb and in fact emitting them can make things even worse.

If only we could get rid of all pre-UAL inline assembly on the planet... :)

That has been the only reason we added support for it in our
assembler: GAS supports it, and people still use it (or have
legacy code they won't change).

If the binutils folks (and you guys) are happy to start seriously
phasing out pre-UAL support, I'd be more than happy to do so on our
end. Do you think I should start that conversation on the binutils
list?

Maybe a new serious compulsory warning, to start?

cheers,
--renato


Re: ARM inline assembly usage in Linux kernel

2014-02-20 Thread Renato Golin
On 20 February 2014 10:11, Ramana Radhakrishnan
 wrote:
> The current behaviour is that when the compiler generates code for
> Thumb1 and Thumb2 we switch back to the appropriate state after inline
> assembler is emitted. We don't switch back to ARM state on the (fairly
> robust) assumption that most inline assembler is written in ARM state.

We went one step further (possibly unnecessarily): we check what the
current state is before going into inline asm and always emit the
correct code directive afterwards.

We're reverting the bad decision (my fault!) to validate inline
assembly in -S mode.


> In any case when users are switching ARM and Thumb states, they need
> to be careful anyway to make sure that the *machine* is going to get
> back to the *correct* state and having a screen full of possibly
> meaningless compile time errors may not be the most productive.

Maybe it'd be better to have fixed the error reporting in the first place. ;)


> .arm / .thumb directives should not assemble to any instruction least
> of all nop. You mean ignored here :).

Yes. ;)

cheers,
--renato


Re: ARM inline assembly usage in Linux kernel

2014-02-20 Thread Renato Golin
On 20 February 2014 12:59, Ramana Radhakrishnan
 wrote:
> It's not really because GAS supports it, but there exists a large body
> of code out there which uses inline assembler with pre-UAL syntax. I'm
> not sure people will appreciate a blanket break in one version of the
> toolchain and especially when people could quite easily mix and match
> between compiler versions and binutils versions.

Hi Ramana,

I agree, I didn't mean it was GAS' fault.


> Before anything else the compiler needs to be fixed and there are some
> corner cases to deal with build attributes especially for Thumb1 in
> the assembler before we can starting thinking about deprecating
> pre-UAL syntax.

Absolutely. But there needs to be an interest in the GNU community to
drive these changes forward. In LLVM we're very much pro-UAL and it
took us quite a lot of convincing to support pre-UAL syntax in the
*parser only*, but we'll never generate it ourselves. Everything we
generate is (or should be) UAL.


> It may be of
> interest for 4.9 + 1 = (4.10 /5.0) in GCC and the next binutils
> revision.

If people are really interested, I can start the ball rolling in the
binutils list.


> Adding the warning by default to GAS is just part of the solution.

It'll only be the second step, yes, with the first one being to fix
the remaining ugly bugs. There will be many more...

cheers,
--renato


Re: Scheduler:LLVM vs gcc, which is better

2014-03-12 Thread Renato Golin
On 12 March 2014 15:13, lin zuojian  wrote:
> Hi Chandler,
> I have looked into their "Machine Instr Scheduler", and find out
> that LLVM have not yet enable them by default.And further test find
> they are still not yet working.(e.g,-mtune=cortex-a9,a15,a53
> generates the same code).

Can I encourage you to move this thread to the LLVM list?

--renato


Builtin: stack pointer

2014-03-27 Thread Renato Golin
Hi there,

There is a common pattern in bare-metal code of referring to a
register's value directly through a variable:

 register unsigned long current_stack_pointer asm("sp");

Not only does that depend on the register name (so you need to split
by arch with ifdefs), but it also relies on a non-guaranteed property of
register variables, and uses a form of inline asm that is not supported
by Clang/LLVM (for several reasons).

The LLVMLinux team have submitted a proposal that works around this on
a target-independent way:

http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-October/066325.html

Basically, it introduces the builtin __builtin_stack_pointer(), which
returns the stack pointer register's value. There's no guarantee
that the register will contain the information you want (for example,
if the surrounding code uses it); it is only meant to replace the
construct above.
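
For reference, the "split by arch with ifdefs" version ends up looking
something like the sketch below (register names are per-target, and
reading a local register variable like this is exactly the
non-guaranteed behaviour mentioned above); the proposed builtin would
collapse it into one target-independent line:

    static inline unsigned long read_stack_pointer(void)
    {
    #if defined(__arm__) || defined(__aarch64__)
        register unsigned long sp asm("sp");
    #elif defined(__x86_64__)
        register unsigned long sp asm("rsp");
    #else
    #error "stack pointer register name not known for this target"
    #endif
        return sp;   /* works in practice, but is not guaranteed by the standard */
    }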

Here's what we're planning on implementing in LLVM (with docs
explaining the semantics):

http://llvm-reviews.chandlerc.com/D3184

Is this something that can be done in another target-independent way?
If not, is this something that GCC would also be willing to implement,
so that we can replace the register/asm patterns with it? It would make
kernel code simpler to have a single solution.

cheers,
--renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 10:12, Andreas Schwab  wrote:
> Can't you use __builtin_frame_address (0) instead?

That would give me the frame pointer, not the stack pointer, and the
user would have to manually calculate the offset to get the actual
stack pointer, which would be target-specific, possibly making it even
worse. Is that what you meant?
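
In other words, the suggestion amounts to something like this sketch,
and the commented-out arithmetic is precisely the target-specific part
we'd like to avoid writing by hand:

    void *approximate_sp(void)
    {
        void *fp = __builtin_frame_address(0);   /* frame pointer, not sp */
        /* sp = (char *)fp - <target- and frame-layout-specific offset>,
           which is the arithmetic we'd rather not hard-code per target */
        return fp;
    }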

cheers,
--renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 10:29, Andreas Schwab  wrote:
> Depends on what you need the value for.

Mostly unwind code that uses both FP and SP, example:

http://git.linuxfoundation.org/?p=llvmlinux/kernel.git;a=commit;h=a875939682dc43bf244bd39a356bca76ac190d77
http://git.linuxfoundation.org/?p=llvmlinux/kernel.git;a=commit;h=9fe678c3c08165ba20eb191717a29e070a323297
http://git.linuxfoundation.org/?p=llvmlinux/kernel.git;a=commit;h=5f661357016dcb28a7a21b307124a679e0c8ec06

Sometimes, inline asm that needs to mark the stack as used:

http://git.linuxfoundation.org/?p=llvmlinux/kernel.git;a=commit;h=96bb7f2706e70d217e84f2576db4cb30f86c13ae

cheers,
--renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 10:47, Andrew Pinski  wrote:
> Please don't add a close list to the CC of GCC lists it is annoying.

I didn't realise this list was closed, sorry.

--renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
Hi Jakub,

Just to make it clear, I'm not an official representative of Clang or
LLVM, nor was I involved in all the discussions about implementing
extensions. I do not have an agenda to promote LLVM changes.

> To me this sounds like clang proposing extensions because they aren't
> willing to implement existing extensions, not a good reason to change.

Whatever I say or propose is only my limited view of the world as of a
few years ago (when I started working with LLVM) and I'm mainly
discussing this here exactly because I don't want to implement
extensions just for the sake of it, and I want opinions from all
angles before taking any *technical* decision. The arguments you guys
are giving to me are solid and will be taken back to the LLVM list to
discuss this further (after I do some research on past discussions).

Please don't take my emails as "Clang pushing GCC to do things", as
they're *clearly* not. If anything, they're only a display of my own
ignorance.

AFAIK, the whole idea of __builtin_stack_pointer was to mimic the
already existing __builtin_frame_address, but the usage and examples
you and Andrew gave me (separate files, not meaningful in all targets)
show why it hasn't been done in GCC.

I honestly took it as a simple oversight by kernel engineers,
implementing something that could be done better in a way GCC already
understands (as happens a lot; an example is the inline asm + macro
issue we raised earlier). It seems that this time it was not.


On 27 March 2014 10:44, Jakub Jelinek  wrote:
> I don't see what is wrong with this, this isn't inline asm, it is
> the local register var GNU extension,

The argument I remember hearing is a combination of:
 1. The "register" specifier is not guaranteed to keep variables'
values in registers (C99, 6.7.1p4)
 2. Representing register names is not possible in LLVM IR at the moment

I understand that these reasons are not stronger than the case both
you and Andrew made.

In theory, if we can lower SP as a register value, we should be able
to lower any register. So we should be able to create an intrinsic
that maps that specific construct into a name, for example:

%reg = @llvm.register("sp")

And lower the same way Marc's code lowers the stack pointer. I'll
follow up on the LLVM list.


> which is far more general then this
> proposal.  Using stack pointer is inherently target specific anyway, some
> targets (e.g. sparc64) apply e.g. a bias to the stack pointer register,
> it is unclear what this builtin would do in that case.

This is true. Combined with the fact Andrew mentioned, that the code is
split into separate files anyway, our proposal loses a lot of weight.

Thanks both for your comments!

cheers,
--renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 11:33, Jakub Jelinek  wrote:
> Sure, normally register keyword is just a hint, that e.g. GCC I think
> ignores whenever optimizing (for -O0 it means a variable doesn't have to
> be allocated on the stack), but when it is part of the GNU global/local 
> register
> variable extension (i.e. there is both register keyword and
> asm (""), then register is not a mere hint, but a requirement.

Yes, I know. That's why I said it was a "joint issue" of that AND the
IR problem. The extension syntax is clear and precise.


> E.g. glibc uses such inline asms for inline syscalls heavily, other projects
> similarly.

Good to know. I have plans to compile glibc with LLVM soon, and this
might just be another strong argument for implementing named
registers in LLVM.
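
For context, a minimal sketch of that glibc-style pattern, assuming an
AArch64 Linux target (syscall number in x8, arguments in x0..x5, result
back in x0); the real glibc macros are more involved:

    #include <stddef.h>

    static long my_write(int fd, const void *buf, size_t count)
    {
        register long nr asm("x8") = 64;          /* __NR_write on AArch64 */
        register long a0 asm("x0") = fd;
        register long a1 asm("x1") = (long)buf;
        register long a2 asm("x2") = (long)count;

        asm volatile("svc #0"
                     : "+r"(a0)
                     : "r"(nr), "r"(a1), "r"(a2)
                     : "memory");

        return a0;   /* return value (or negative errno) comes back in x0 */
    }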

Thanks,
--renato


Re: [llvmlinux] Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 14:44, Behan Webster  wrote:
> That is what led to this proposal.

I'm having a go at implementing named registers, and I also have
started a thread in the LLVM mailing list. Let's see how it goes...


> For the existing cases this is true. However such a builtin would allow it
> to be used in common code (as a theoretical example).

I'd not be happy to see this usage in userland, to be honest. Even as
a builtin the guarantees we can give are pretty limited.

cheers,
-renato


Re: Builtin: stack pointer

2014-03-27 Thread Renato Golin
On 27 March 2014 15:06,   wrote:
> But unwind code is inherently platform-dependent.  Your objection to the 
> inline asm that references SP by name is that it's platform dependent.  The 
> builtin would reduce the amount of platform dependent code by one line, i.e., 
> probably much less than one percent.

There were some use cases, but to be honest, I'm not comfortable with
regular users trying to access the stack pointer for anything.
Everything else (low-level, bare-metal) I can think of is
platform-dependent elsewhere anyway, so that argument was invalid.

cheers,
--renato