Re: Should the build system use ar rcs instead of ranlib + ar rc?

2016-07-16 Thread Andreas Schwab
Andrew Pinski  writes:

> On Fri, Jul 15, 2016 at 6:46 PM, Patrick Palka  wrote:
>> The build step that invokes "ranlib libbackend.a" (which immediately
>> follows the invocation of "ar rc libbackend.a ...") takes over 7 seconds
>> on my machine and causes the entire 450MB archive to be rewritten.  By
>> instead making the build system use ar rcs -- so that the archive and
>> its index are built at once -- the time it takes to rebuild the
>> compiler is reduced by 25%, from 27s to 20s (in a --disable-bootstrap
>> tree after touching a random source file).  This is a pretty significant
>> reduction in compile time and disk I/O.
>>
>> Is this a good idea?
>
>> Yes and no.  Do we know whether all ar implementations support rcs now?

It is easy to find out: run a configure check.
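
Something along these lines would do (a sketch only, not the actual
patch; the gcc_cv_ar_rcs variable name is made up for illustration):

  # Probe whether $AR accepts the 's' modifier, so that "ar rcs" can
  # replace the separate "ar rc" + "ranlib" steps.
  AC_MSG_CHECKING([whether $AR supports rcs])
  echo 'int conftest_var;' > conftest.c
  ${CC} -c conftest.c
  rm -f conftest.a
  if ${AR} rcs conftest.a conftest.o > /dev/null 2>&1; then
    gcc_cv_ar_rcs=yes
  else
    gcc_cv_ar_rcs=no
  fi
  rm -f conftest.*
  AC_MSG_RESULT([$gcc_cv_ar_rcs])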

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: Should the build system use ar rcs instead of ranlib + ar rc?

2016-07-16 Thread Patrick Palka
On Sat, Jul 16, 2016 at 4:27 AM, Andreas Schwab  wrote:
> Andrew Pinski  writes:
>
>> On Fri, Jul 15, 2016 at 6:46 PM, Patrick Palka  wrote:
>>> The build step that invokes "ranlib libbackend.a" (which immediately
>>> follows the invocation of "ar rc libbackend.a ...") takes over 7 seconds
>>> on my machine and causes the entire 450MB archive to be rewritten.  By
>>> instead making the build system use ar rcs -- so that the archive and
>>> its index are built at once -- the time it takes to rebuild the
>>> compiler is reduced by 25%, from 27s to 20s (in a --disable-bootstrap
>>> tree after touching a random source file).  This is a pretty significant
>>> reduction in compile time and disk I/O.
>>>
>>> Is this a good idea?
>>
>> Yes and no.  Do we know whether all ar implementations support rcs now?
>
> It is easy to find out: run a configure check.

I see; I was not aware that non-GNU ar is supported.  I posted a patch
on gcc-patches that adds such a configure check:
https://gcc.gnu.org/ml/gcc-patches/2016-07/msg00991.html



Re: "error: static assertion failed: [...]"

2016-07-16 Thread Martin Sebor

From a diagnostics point of view, neither version is quoted:

c/c-parser.c: error_at (assert_loc, "static assertion failed: %E", string);

cp/semantics.c: error ("static assertion failed: %s",

To be "quoted", it would need to use either %q or %<%>. Note that %qs
would produce `foo' not "foo". Nevertheless, we probably want to print
'x' for character literals and not `'x'' and "string" for string
literals and not `string'. Thus, the wiki should probably be amended to
clarify this.

Also, there is a substantial difference between %E and %s when the
string contains control characters such as \n, \t, \u, etc.  Clang uses
something similar to %E.
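
As a concrete (made-up) example of the difference:

  /* With %s the \n below is printed as a literal newline, splitting the
     diagnostic across two lines; with %E the string appears exactly as
     written in the source, escape sequence and all.  */
  _Static_assert (sizeof (int) == 16, "int is\nnot 16 bytes");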


I agree that the two sets of quotes in '"string"' don't look quite
right, and that letting embedded control sequences affect the compiler
output probably isn't a good idea.  Which, AFAICT, leaves a plain %E
as the only option.

The nice thing about %qE (or %<%E%>) vs. plain %E is that the former
highlights the text of the string (i.e., makes it bold).  That makes
the reason for the error stand out.  It would be nice if there were
a way to do that without adding the extra pair of quotes or giving
up on the control character transformation.



For comparison, we use %s to print

test.c:1:9: note: #pragma message:
string
#pragma message "\nstring"


That seems potentially unsafe and might be worth changing to match
the static_assert.

Martin


Re: [RFC] Rationale for passing vectors by value in SIMD registers

2016-07-16 Thread Andrew Pinski
On Sat, Feb 15, 2014 at 12:16 AM, Matthew Fortune  wrote:
>> On Fri, Feb 14, 2014 at 2:17 AM, Matthew Fortune  wrote:
>> > MIPS is currently evaluating the benefit of using SIMD registers to
>> > pass vector data by value.  It is currently unclear how important it
>> > is for vector data to be passed in SIMD registers, i.e. the need for
>> > passing vector data by value in real-world code is not immediately
>> > obvious.  The performance advantage is therefore also unclear.
>> >
>> > Can anyone offer insight into the rationale behind design decisions
>> > made for other architectures' ABIs?  For example, the x86 and x86_64
>> > calling convention for vector data types presumes that they will be
>> > passed in SSE/AVX registers and raises warnings if they are passed
>> > when SSE/AVX support is not enabled.  This is what MIPS is currently
>> > considering; however, there are two concerns:
>> >
>> > 1) What about the ability to create architecture/implementation-
>> > independent APIs that may include vector types in their prototypes?
>> > Such APIs may be built for varying levels of hardware support to make
>> > the most of a specific architecture implementation but be called from
>> > otherwise implementation-agnostic code.  To support such a scenario
>> > we would need a common calling convention usable on all architecture
>> > variants.
>> >
>> > 2) Although vector types are not specifically covered by existing ABI
>> > definitions for MIPS, we unfortunately have a de facto standard for
>> > how to pass them by value: vector types are simply considered to be
>> > small structures and passed as such following normal ABI rules.  This
>> > is still a concern even though it is generally accepted that there is
>> > some room for change when it comes to vector data types in an
>> > existing ABI.
>> >
>> > If anyone could offer a brief history of the x86 ABI with respect to
>> > vector data types, that would also be interesting.  One question is
>> > whether the use of vector registers in the calling convention was
>> > only enabled by default once there was a critical mass of
>> > implementations, at which point the default ABI was changed to start
>> > making assumptions about the availability of features like SSE and
>> > AVX.
>> >
>> > Comments from any other architecture that has had to make such
>> > changes over time would also be welcome.
>>
>> PPC, ARM, and AArch64 are common targets where vectors are passed and
>> returned by value.  The idea is simple: sometimes you have functions
>> like vector float vsinf(vector float a) that you want to be fast,
>> avoiding a round trip to L1 (or even L2).  These kinds of functions
>> are common in vector programming; that is, extending the scalar
>> versions to vector versions.
>
> I suppose this cost (L1/L2) is mitigated to some extent if the base ABI
> were to pass a vector in multiple GP/FP registers rather than via the
> stack.  There would of course still be a cost to marshal the data
> between GP/FP and SIMD registers.  For a support routine like vsinf I
> would expect it also to need a reduced clobber set, so that the caller's
> live SIMD registers don't need saving/restoring; such registers would
> normally be caller-saved.  If the routine were to clobber all SIMD
> registers anyway, then the improvement in argument passing seems
> negligible.
>
> Does anyone know of any open-source projects that have started adopting
> generic vector types and show the use of this kind of construct?

Yes, glibc provides these functions on x86 now; a sketch of the kind of
interface involved follows.
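
For reference, a minimal sketch of such an interface using GCC's generic
vector extension (the type and function names here are illustrative, not
glibc's actual libmvec symbols):

  /* Four-lane single-precision vector type.  */
  typedef float v4sf __attribute__ ((vector_size (16)));

  v4sf vsinf (v4sf a);   /* vector counterpart of sinf */

  v4sf
  shifted_sines (v4sf x)
  {
    /* With a SIMD calling convention, x arrives in a vector register
       and vsinf takes and returns its value in vector registers, so no
       round trip through the stack (and hence L1/L2) is needed.  */
    return vsinf (x) + x;
  }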

Thanks,
Andrew
