Re: [C++ Patch] PR 51911 V2 ("G++ accepts new auto { list }")

2015-09-12 Thread Paolo Carlini

Hi,

On 09/11/2015 10:05 PM, Jason Merrill wrote:

On 09/11/2015 03:11 PM, Paolo Carlini wrote:

this is a slightly reworked (simplified) version of a patch I sent a
while ago. The issue is that we are not enforcing 5.3.4/2 at all in the
parser, thus we end up rejecting the first test below with a misleading
error message talking about list-initialization (and a wrong location),
because we diagnose it too late, as with 'auto foo{3, 4, 5};', and simply
accepting the second. Tested x86_64-linux.


Hmm, I think we really ought to accept

  new auto { 2 }

to be consistent with all the other recent changes to treat { elt } 
like (elt); this seems like a piece that was missed from DR 1467.  Do 
you agree, Ville?
I see, while waiting for Ville, maybe I can ask what we should do in 
case he agrees. The error message we currently emit for new auto { 3, 4, 
5 } seems suboptimal in various ways:


51911.C:6:31: error: direct-list-initialization of ‘auto’ requires exactly one element [-fpermissive]

 auto foo = new auto { 3, 4, 5 };
                               ^
51911.C:6:31: note: for deduction to ‘std::initializer_list’, use copy-list-initialization (i.e. add ‘=’ before the ‘{’)


the caret is under the last '}' and the note doesn't make much sense (of 
course do_auto_deduction doesn't know we are handling a new). Thus I 
wonder whether we should have something in the parser anyway, and with 
which exact wording (just tweak what I sent earlier, replacing 'exactly 
one parenthesized expression' with 'exactly one element'?)
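To make the two cases concrete, here is a minimal sketch (illustrative only, not the testcase from the patch) of what accepting the DR 1467 reading would mean:

```cpp
#include <cassert>

// Under the DR 1467 reading Jason describes, new auto { 2 } deduces
// int, just like new auto (2), while the multi-element form stays
// ill-formed.
int deduce_braced_new() {
  auto *p = new auto { 2 };   // p deduced as int*
  int v = *p;
  delete p;
  return v;
  // auto *q = new auto { 3, 4, 5 };  // error: requires exactly one element
}
```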


Thanks,
Paolo.





Re: [SPARC] Simplify const_all_ones_operand

2015-09-12 Thread Eric Botcazou
[Sorry for the delay]

> gen_rtx_CONST_VECTOR ensures that there is a single instance of:
> 
>(const_vector:M [(const_int -1) ... (const_int -1)])
> 
> for each M, so pointer equality with CONSTM1_RTX is enough.  Also,
> HOST_BITS_PER_WIDE_INT == 32 is doubly dead: HOST_WIDE_INT is always
> 64 bits now, and we always use const_int rather than const_double
> or const_wide_int for all-ones values (or any other value that
> fits in a signed HOST_WIDE_INT).
> 
> This seemed like a better fix than using the helper functions
> that I'm about to post.
> 
> Tested with a cross-compiler and ensured that the predicate was
> still accepting all (-)1 values.  OK to install?
> 
> Thanks,
> Richard
> 
> gcc/
>   * config/sparc/predicates.md (const_all_ones_operand): Use
>   CONSTM1_RTX to simplify definition.

OK, thanks.

-- 
Eric Botcazou


Re: [C++ Patch] PR 51911 V2 ("G++ accepts new auto { list }")

2015-09-12 Thread Ville Voutilainen
On 11 September 2015 at 23:05, Jason Merrill  wrote:
> Hmm, I think we really ought to accept
>
>   new auto { 2 }
>
> to be consistent with all the other recent changes to treat { elt } like
> (elt); this seems like a piece that was missed from DR 1467.  Do you agree,
> Ville?


Yes. I thought we already accept it.


Re: [patch] libstdc++/67173 Fix filesystem::canonical for Solaris 10.

2015-09-12 Thread Jonathan Wakely
On 11 September 2015 at 18:39, Martin Sebor wrote:
> On 09/11/2015 08:21 AM, Jonathan Wakely wrote:
>>
>> Solaris 10 doesn't follow POSIX in accepting a null pointer as the
>> second argument to realpath(), so allocate a buffer for it.
>
>
> FWIW, the NULL requirement is new in Issue 7. In Issue 6, the behavior
> is implementation-defined, and before then it was an error. Solaris 10
> claims conformance to SUSv2 and its realpath fails with EINVAL.
> Solaris 11 says it conforms to Issue 6 but according to the man page
> its realpath already implements the Issue 7 requirement.

Thanks.

> I suspect the same problem will come up on other systems so checking
> the POSIX version might be better than testing for each OS.

The problem with doing that is that many BSD systems have supported
passing NULL as an extension long before issue 7, so if we just check
something like _POSIX_VERSION >= 200809L then we can only canonicalize
paths up to PATH_MAX on many systems where the extension is available
but _POSIX_VERSION implies conformance to an older standard.

So maybe we want an autoconf macro saying whether realpath() accepts
NULL, and just hardcode it for the targets known to support it, and
only check _POSIX_VERSION for the unknowns.
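A minimal sketch of that approach, with USE_REALPATH_NULL standing in for the proposed (hypothetical) autoconf macro:

```cpp
#include <climits>   // PATH_MAX, where the platform defines it
#include <cstdlib>   // realpath, free
#include <string>

// Sketch only: canonicalize a path using the realpath(path, NULL)
// extension where configure says it works; otherwise fall back to a
// PATH_MAX buffer, limiting the paths we can canonicalize.
std::string canonicalize(const char *path) {
#ifdef USE_REALPATH_NULL
  char *p = ::realpath(path, nullptr);   // POSIX.1-2008 behaviour
  if (!p)
    return {};
  std::string result(p);
  ::free(p);
  return result;
#else
  char buf[PATH_MAX];
  if (!::realpath(path, buf))
    return {};
  return std::string(buf);
#endif
}
```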


RE: [Patch,tree-optimization]: Add new path Splitting pass on tree ssa representation

2015-09-12 Thread Ajit Kumar Agarwal


-Original Message-
From: Jeff Law [mailto:l...@redhat.com] 
Sent: Thursday, September 10, 2015 3:10 AM
To: Ajit Kumar Agarwal; Richard Biener
Cc: GCC Patches; Vinod Kathail; Shail Aditya Gupta; Vidhumouli Hunsigida; 
Nagaraju Mekala
Subject: Re: [Patch,tree-optimization]: Add new path Splitting pass on tree ssa 
representation

On 08/26/2015 11:29 PM, Ajit Kumar Agarwal wrote:
>
> Thanks. The following testcase testsuite/gcc.dg/tree-ssa/ifc-5.c
>
> void dct_unquantize_h263_inter_c (short *block, int n, int qscale,
>   int nCoeffs)
> {
>   int i, level, qmul, qadd;
>
>   qadd = (qscale - 1) | 1;
>   qmul = qscale << 1;
>
>   for (i = 0; i <= nCoeffs; i++)
>     {
>       level = block[i];
>       if (level < 0)
>         level = level * qmul - qadd;
>       else
>         level = level * qmul + qadd;
>       block[i] = level;
>     }
> }
>
> The above loop is a candidate for path splitting, as the IF block merges
> at the latch of the loop, and path splitting duplicates the latch of the
> loop (the statement block[i] = level) into its THEN and ELSE
> predecessors.
>
> Due to the above path splitting, the if-conversion is disabled, the
> IF-THEN-ELSE is not if-converted, and the test case fails.
>>So I think the question then becomes which of the two styles generally 
>>results in better code?  The path-split version or the older if-converted 
>>version.

>>If the latter, then this may suggest that we've got the path splitting code
>>at the wrong stage in the optimizer pipeline or that we need better
>>heuristics for when to avoid applying path splitting.

The code generated by path splitting is useful when it exposes DCE, PRE, 
and CCP candidates, whereas if-conversion is useful when it exposes 
vectorization candidates. If if-conversion doesn't expose vectorization 
and path splitting doesn't expose DCE or PRE redundancy candidates, it's 
hard to predict which is better. If if-conversion does not expose 
vectorization but path splitting exposes DCE, PRE, and CCP redundancy 
candidates, then path splitting is useful. Path splitting also increases 
the granularity of the THEN and ELSE paths, enabling better register 
allocation and code scheduling.

The suggestion to keep path splitting later in the pipeline, after 
if-conversion and vectorization, is useful as it doesn't break the 
existing DejaGnu tests. Another benefit of keeping path splitting later 
in the pipeline is that it can always duplicate the merge node into its 
predecessors after the if-conversion and vectorization passes, when those 
are not applicable to the loops. But this suppresses the CCP and PRE 
candidates that appear earlier in the optimization pipeline.
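For reference, the latch duplication described above can be sketched outside GCC (illustrative C++, not compiler code):

```cpp
// Path-split form of the quoted dct_unquantize_h263_inter_c loop:
// the latch statement block[i] = level is duplicated into both arms,
// so each path through the body becomes independent.
void unquantize_split(short *block, int nCoeffs, int qscale) {
  int qadd = (qscale - 1) | 1;
  int qmul = qscale << 1;
  for (int i = 0; i <= nCoeffs; i++) {
    int level = block[i];
    if (level < 0) {
      level = level * qmul - qadd;
      block[i] = (short) level;   // duplicated latch copy (THEN arm)
    } else {
      level = level * qmul + qadd;
      block[i] = (short) level;   // duplicated latch copy (ELSE arm)
    }
  }
}
```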


>
> There were following review comments from the above patch.
>
> +/* This function performs the feasibility tests for path splitting
> +   to perform. Return false if the feasibility for path splitting
> +   is not done and returns true if the feasibility for path
> +   splitting is done. Following feasibility tests are performed.
> +
> +   1. Return false if the join block has rhs casting for assign
> +      gimple statements.
>
> Comments from Jeff:
>
>>> These seem totally arbitrary.  What's the reason behind each of 
>>> these restrictions?  None should be a correctness requirement 
>>> AFAICT.
>
> In the above patch I have added the check given in point 1 to the loop
> latch, so path splitting is disabled, if-conversion happens, and the
> test case passes.
>>That sounds more like a work-around/hack.  There's nothing inherent with a 
>>type conversion that should disable path splitting.

I have sent the patch with this change and I will remove the above check from 
the patch.

>>What happens if we delay path splitting to a point after if-conversion is 
>>complete?

This is a better suggestion, as explained above, but the gains achieved 
through path splitting by keeping it earlier in the pipeline (before 
if-conversion, tree-vectorization, and tree-vrp) are lost when those 
later optimizations are not applicable to the loops in question.

I have made the above changes and the existing setup doesn't break, but 
the gains achieved in benchmarks like rgbcmy_lite (EEMBC) are lost. Path 
splitting gives gains of 9% on those EEMBC benchmarks, whose loops are 
not amenable to if-conversion and vectorization, so the gain comes from 
the path splitting optimization itself.

>>Alternately, could if-conversion export a routine which indicates if a 
>>particular sub-graph is likely to be if-convertable?  The path splitting pass 
>>could then use that routine to help determine if the path ought to be split 
>>or if it should instead rely on if-conversion.

Exporting the above routine from if-conversion is not useful, as the 
heuristics used in if-conversion populate the data dependences through 
scalar evolution, which is triggered much later in the optimization 
pipeline.

[libgfortran,committed] Fix some issues revealed by sanitizer

2015-09-12 Thread FX
Three recent PRs (67527, 67535, 67536) have reported latent issues in 
libgfortran, where C code relies on undefined behavior: (i) a large left 
shift of a signed value, and (ii) calls to memcpy(dst,src,len) where 
src == NULL and len == 0.

After confirming that all three issues are indeed undefined behavior, I 
committed the attached patch to trunk to fix them.
Regtested on x86_64-apple-darwin15.
Issues are latent, so I don’t think a backport is in order.

Cheers,
FX




lib.diff
Description: Binary data


Re: [Patch] Teach RTL ifcvt to handle multiple simple set instructions

2015-09-12 Thread Eric Botcazou
> Some targets have -mbranch-cost to allow overriding the default costing.
>   visium has a branch cost of 10!

Yeah, the GR5 variant is pipelined but has no branch prediction; moreover 
there is an additional adverse effect coming from the instruction bus...

>   Several ports have a cost of 6 either unconditionally or when the branch
>   is not well predicted.

9 for UltraSPARC3, although this should probably be lowered if predictable_p.

-- 
Eric Botcazou


Re: [PATCH] Convert SPARC to LRA

2015-09-12 Thread Eric Botcazou
> Richard, Eric, any objections?

Do we really need to promote to 64-bit if TARGET_ARCH64?  Most 32-bit 
instructions are still available.  Otherwise this looks good to me.

You need to update https://gcc.gnu.org/backends.html

-- 
Eric Botcazou


[Darwin, Driver/specs] A bit more TLC, factor version-min code, lose some dead specs.

2015-09-12 Thread Iain Sandoe
Hi,

This is a clean-up and code re-factoring patch; NFC intended.

a) The arcane version specs that attempted to figure out a version-min on the 
basis of inspecting other c/l flags have been dead for some time (since the 
driver started inserting a default); so let's lose those.

b) We'll need access to the version-min at the outer level in the 
darwin-specific driver code in order to use that to figure out where to find 
sysroots, so let's factor the code to do that and shorten it at the same time.

c) In the absence of any other information, the best choice we can make for 
version-min is 10.5, since that's the only version that (fully) supports all of 
our archs...

d) ... however, we normally do have a version-min default, so let's make sure 
we provide that as the init.

e) If a user elects to call a compiler (cc1*, f951, etc.) without a 
version-min, the init provided in (d) will kick in, and stop the compiler from 
segv-ing.  However, we should warn the user that a default was used, because 
it's very likely not what was intended.

OK for trunk?
Iain

gcc/
* config/darwin-driver.c (darwin_default_min_version): Refactor code.
(darwin_driver_init): Note a version-min when provided on the c/l.
* config/darwin.c (darwin_override_options): Warn the user if the
compiler is invoked without a version-min.
* config/darwin.h (%darwin_minversion): Remove.
* config/i386/darwin.h: Likewise.
* config/rs6000/darwin.h: Likewise.
* config/darwin.opt (mmacosx-version-min=): Use the configured
default, rather than an arbitrary constant.

From 04cfd2ea513fdaa48826891dbc87615f97270950 Mon Sep 17 00:00:00 2001
From: Iain Sandoe 
Date: Mon, 7 Sep 2015 09:59:45 +0100
Subject: [PATCH] [Darwin, driver] Revise and clean up system version
 detection.

This re-factors the system version detection and makes the version
string available to the darwin_driver_init() routine, when it is
available.  We also delete all the "darwin_minversion" spec stuff
which is unused and redundant.  The default value for compilers is
now set to match the configured default.  If compilers are invoked
directly without an explicit system version, a warning is given.
---
 gcc/config/darwin-driver.c | 109 +++--
 gcc/config/darwin.c|   8 
 gcc/config/darwin.h|  10 ++---
 gcc/config/darwin.opt  |   5 +--
 gcc/config/darwin12.h  |   3 ++
 gcc/config/i386/darwin.h   |  10 -
 gcc/config/rs6000/darwin.h |  12 -
 7 files changed, 65 insertions(+), 92 deletions(-)

diff --git a/gcc/config/darwin-driver.c b/gcc/config/darwin-driver.c
index 727ea53..4042a68 100644
--- a/gcc/config/darwin-driver.c
+++ b/gcc/config/darwin-driver.c
@@ -96,73 +96,36 @@ darwin_find_version_from_kernel (void)
included in tm.h).  This may be overidden by setting the flag explicitly
(or by the MACOSX_DEPLOYMENT_TARGET environment).  */
 
-static void
-darwin_default_min_version (unsigned int *decoded_options_count,
-   struct cl_decoded_option **decoded_options)
+static const char *
+darwin_default_min_version (void)
 {
-  const unsigned int argc = *decoded_options_count;
-  struct cl_decoded_option *const argv = *decoded_options;
-  unsigned int i;
-  const char *new_flag;
-
-  /* If the command-line is empty, just return.  */
-  if (argc <= 1)
-return;
-  
-  /* Don't do this if the user specified -mmacosx-version-min= or
- -mno-macosx-version-min.  */
-  for (i = 1; i < argc; i++)
-if (argv[i].opt_index == OPT_mmacosx_version_min_)
-  return;
-
-  /* Retrieve the deployment target from the environment and insert
- it as a flag.  */
-  {
-const char * macosx_deployment_target;
-macosx_deployment_target = getenv ("MACOSX_DEPLOYMENT_TARGET");
-if (macosx_deployment_target
-   /* Apparently, an empty string for MACOSX_DEPLOYMENT_TARGET means
-  "use the default".  Or, possibly "use 10.1".  We choose
-  to ignore the environment variable, as if it was never set.  */
-   && macosx_deployment_target[0])
-  {
-   ++*decoded_options_count;
-   *decoded_options = XNEWVEC (struct cl_decoded_option,
-   *decoded_options_count);
-   (*decoded_options)[0] = argv[0];
-   generate_option (OPT_mmacosx_version_min_, macosx_deployment_target,
-1, CL_DRIVER, &(*decoded_options)[1]);
-   memcpy (*decoded_options + 2, argv + 1,
-   (argc - 1) * sizeof (struct cl_decoded_option));
-   return;
-  }
-  }
+  /* Try to retrieve the deployment target from the environment.  */
+  const char *new_flag = getenv ("MACOSX_DEPLOYMENT_TARGET");
 
+  /* Apparently, an empty string for MACOSX_DEPLOYMENT_TARGET means
+ "use the default".  Or, possibly "use 10.1".  We choose
+ to ignore the environment variable, as if it was never set.  */
+  if (new_flag == NULL || new_flag[0] == 0

[COMMITTED] Fix ICE compiling sbgdec.c in ffmpeg package

2015-09-12 Thread John David Anglin
The attached change fixes an ICE caused by pa_output_move_double's failure 
to handle a DImode high const_int operand, 
(high:DI (const_int 86400000000 [0x141dd76000])).  The need to handle this 
operand form in pa_output_move_double is quite infrequent, so the problem 
has gone unnoticed for many years.

This issue only affects 32-bit targets.

DImode constants are split by pa_emit_move_sequence into high and lo_sum parts. 
 These parts are
handled by separate insn patterns in pa.md.  There is a third pattern which 
handles all other DImode moves
using pa_output_move_double for moves involving the integer registers.  There 
is an 'i' immediate_operand
constraint to handle immediate operands.  The problem occurs when an 
equivalent reg value, 
REG_EQUIV (high:DI (const_int 86400000000 [0x141dd76000])), is substituted 
for a register operand and the instruction is NOT re-recognized.

Splitting DImode constants was presumably done to improve optimization 
opportunities.  However, this only
works because the predicate does not allow immediate operands.  This is not 
recommended since insn recognition
may fail if an immediate operand is substituted (e.g., a const_int) as there is 
no predicate that accepts a const_int.
However, if the predicate is changed to allow immediate operands, then the 
high/lo_sum splits are just put back
together and the optimization benefit lost.  There is no cost benefit in doing 
this, so I'm not sure why it happens.
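For illustration, the word split itself amounts to the following (the helper name is hypothetical, not the actual pa.c routine):

```cpp
#include <cstdint>

// Sketch: on a 32-bit target a DImode (64-bit) constant is handled as
// two SImode (32-bit) words.  For a HIGH operand, only the word
// carrying the high part needs the ldil output.
void split_di(uint64_t v, uint32_t &msw, uint32_t &lsw) {
  msw = static_cast<uint32_t>(v >> 32);  // most-significant word
  lsw = static_cast<uint32_t>(v);        // least-significant word
}
```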

There is also an issue regarding the insn length when we have a high immediate 
operand.  It should be
12 not 16 for that case.  I intend to look at fixing this but it's not a major 
issue.

Tested on hppa-unknown-linux-gnu and hppa2.0w-hp-hpux11.11 with no regressions. 
 Committed to active
branches and trunk.

Dave
--
John David Anglin   dave.ang...@bell.net


2015-09-12  John David Anglin  

* config/pa/pa.c (pa_output_move_double): Enhance to handle HIGH
CONSTANT_P operands.

Index: config/pa/pa.c
===
--- config/pa/pa.c  (revision 226978)
+++ config/pa/pa.c  (working copy)
@@ -2464,6 +2464,7 @@
   enum { REGOP, OFFSOP, MEMOP, CNSTOP, RNDOP } optype0, optype1;
   rtx latehalf[2];
   rtx addreg0 = 0, addreg1 = 0;
+  int highonly = 0;
 
   /* First classify both operands.  */
 
@@ -2674,7 +2675,14 @@
   else if (optype1 == OFFSOP)
 latehalf[1] = adjust_address_nv (operands[1], SImode, 4);
   else if (optype1 == CNSTOP)
-split_double (operands[1], &operands[1], &latehalf[1]);
+{
+  if (GET_CODE (operands[1]) == HIGH)
+   {
+ operands[1] = XEXP (operands[1], 0);
+ highonly = 1;
+   }
+  split_double (operands[1], &operands[1], &latehalf[1]);
+}
   else
 latehalf[1] = operands[1];
 
@@ -2727,8 +2735,11 @@
   if (addreg1)
 output_asm_insn ("ldo 4(%0),%0", &addreg1);
 
-  /* Do that word.  */
-  output_asm_insn (pa_singlemove_string (latehalf), latehalf);
+  /* Do high-numbered word.  */
+  if (highonly)
+output_asm_insn ("ldil L'%1,%0", latehalf);
+  else
+output_asm_insn (pa_singlemove_string (latehalf), latehalf);
 
   /* Undo the adds we just did.  */
   if (addreg0)


[Darwin, driver] Make our sysroot handling the same as clang's.

2015-09-12 Thread Iain Sandoe
Hi,

At present, we have somewhat strange sysroot handling, in that --sysroot 
causes -isysroot to be passed to cc1* ...
... but no -syslibroot to collect2/ld.
Conversely, -isysroot /XYZZY is passed as -isysroot to cc1* and as 
-syslibroot to collect2/ld.

AFAIU the options, it ought to be the other way around.

However (possibly to be 'compatible' with GCC) currently clang has the same 
behaviour for both -isysroot and --sysroot.

In preparation for other improvements, I want to use the --sysroot properly, so 
the following patch makes GCC's behaviour match that of clang (for the Darwin 
platform only).  TODO: is to start a joint conversation with the clang folks 
about retiring the -syslibroot from the "-isysroot" case ( but that ship might 
have sailed - this behaviour is already "in the wild" ).

checked out with x86_64-darwin12 and an i686-darwin9 X x86_64-darwin12 cross, 
where the built-in configured --with-sysroot= is correctly overridden by 
--sysroot= on the c/l.

Unfortunately, other than detecting the version of ld in use, there's no 
sensible configure mechanism to set HAVE_LD_SYSROOT, and it seems somewhat 
overkill to do that for something that's essentially a constant for the 
versions of ld that work with current GCC.

OK for trunk?
Iain

gcc/
* config/darwin.h (TARGET_SYSTEM_ROOT): Remove from here (use the
mechanism in gcc.c instead).
(HAVE_LD_SYSROOT): New.
(SYSROOT_SPEC): New.
(LINK_SYSROOT_SPEC): Revise to remove the default for target sysroot.
(STANDARD_STARTFILE_PREFIX_1): New.
(STANDARD_STARTFILE_PREFIX_2): New.

From 3d20de66ec2e7da5f371175253f01d0dd74dc8c0 Mon Sep 17 00:00:00 2001
From: Iain Sandoe 
Date: Fri, 11 Sep 2015 23:39:35 +0100
Subject: [PATCH] [darwin] Make sysroot stuff the same as clang

---
 gcc/config/darwin.h | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/gcc/config/darwin.h b/gcc/config/darwin.h
index bb4451a..7d093c9 100644
--- a/gcc/config/darwin.h
+++ b/gcc/config/darwin.h
@@ -207,12 +207,21 @@ extern GTY(()) int darwin_ms_struct;
 #undef  LINK_GCC_C_SEQUENCE_SPEC
 #define LINK_GCC_C_SEQUENCE_SPEC "%G %L"
 
-#ifdef TARGET_SYSTEM_ROOT
-#define LINK_SYSROOT_SPEC \
-  "%{isysroot*:-syslibroot %*;:-syslibroot " TARGET_SYSTEM_ROOT "}"
-#else
+/* ld64 supports a sysroot, it just has a different name and there's no easy
+   way to check for it at config time.  */
+#undef HAVE_LD_SYSROOT
+#define HAVE_LD_SYSROOT 1
+/* It seems the only (working) way to get a space after %R is to append a
+   dangling '/'.  */
+#define SYSROOT_SPEC "%{!isysroot*:-syslibroot %R/ }"
+
+/* Do the same as clang, for now, and insert the sysroot for ld when an
+   isysroot is specified.  */
 #define LINK_SYSROOT_SPEC "%{isysroot*:-syslibroot %*}"
-#endif
+
+/* Suppress the addition of extra prefix paths when a sysroot is in use.  */
+#define STANDARD_STARTFILE_PREFIX_1 ""
+#define STANDARD_STARTFILE_PREFIX_2 ""
 
 #define DARWIN_PIE_SPEC "%{fpie|pie|fPIE:}"
 
-- 
2.2.1





Re: [PATCH][20/n] Remove GENERIC stmt combining from SCCVN

2015-09-12 Thread Eric Botcazou
>   * fold-const.c (fold_binary_loc): Move simplifying of comparisons
>   against the highest or lowest possible integer ...
>   * match.pd: ... as patterns here.

This incorrectly dropped the calls to omit_one_operand_loc, resulting in the 
failure of the attached Ada test: if the operand has side effects, you cannot 
replace the entire comparison with just 'true' or 'false'.
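The point can be seen in miniature with a C++ analogue (illustrative, not the Ada test): the comparison folds to true, but the operand's side effect must survive, which is what omit_one_operand_loc arranges.

```cpp
#include <climits>

// x++ <= INT_MAX is always true for int, but folding the comparison
// to 'true' must not drop the increment of x.
bool compare_with_increment(int &x) {
  return x++ <= INT_MAX;
}
```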


* gnat.dg/overflow_sum3.adb: New test.

-- 
Eric Botcazou

--  { dg-do run }
--  { dg-options "-gnato" }

procedure Overflow_Sum3 is

   function Ident (I : Integer) return Integer is
   begin
  return I;
   end;

   X : Short_Short_Integer := Short_Short_Integer (Ident (127));

begin
   if X+1 <= 127 then
  raise Program_Error;
   end if;
exception
   when Constraint_Error => null;
end;


Fix PR ada/66965

2015-09-12 Thread Eric Botcazou
This just removes the problematic test.  Applied on the mainline.


2015-09-12  Eric Botcazou  

PR ada/66965
* gnat.dg/specs/addr1.ads: Remove.


-- 
Eric Botcazou


Re: [patch] libstdc++/67173 Fix filesystem::canonical for Solaris 10.

2015-09-12 Thread Martin Sebor

On 09/12/2015 04:09 AM, Jonathan Wakely wrote:

On 11 September 2015 at 18:39, Martin Sebor wrote:

On 09/11/2015 08:21 AM, Jonathan Wakely wrote:


Solaris 10 doesn't follow POSIX in accepting a null pointer as the
second argument to realpath(), so allocate a buffer for it.



FWIW, the NULL requirement is new in Issue 7. In Issue 6, the behavior
is implementation-defined, and before then it was an error. Solaris 10
claims conformance to SUSv2 and its realpath fails with EINVAL.
Solaris 11 says it conforms to Issue 6 but according to the man page
its realpath already implements the Issue 7 requirement.


Thanks.


I suspect the same problem will come up on other systems so checking
the POSIX version might be better than testing for each OS.


The problem with doing that is that many BSD systems have supported
passing NULL as an extension long before issue 7, so if we just check
something like _POSIX_VERSION >= 200809L then we can only canonicalize
paths up to PATH_MAX on many systems where the extension is available
but _POSIX_VERSION implies conformance to an older standard.


You're right. I agree that just checking the POSIX version may not
lead to optimal results. Some implementations also support multiple
versions and the one in effect may not be the one most recent. To
get the most out of those, it's usually recommended to set
_POSIX_C_SOURCE to the latest version before including any headers,
then test the supported version, and when the supported version is
less than the one requested and involves implementation defined
behavior (as in Issue 6) or undefined behavior that's known to be
used to provide extensions (as in SUSv2), check the implementation
version just as the patch does.
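A sketch of that pattern (the version macros are the standard POSIX ones; the helper name is illustrative):

```cpp
// Request the newest POSIX version first, before any header is
// included, then test what the implementation actually grants.
#define _POSIX_C_SOURCE 200809L
#include <unistd.h>

// Issue 7 (200809L) requires realpath(path, NULL) to allocate the
// result; for older versions the answer must come from elsewhere
// (e.g. a hardcoded per-target table, as suggested in this thread).
bool realpath_may_take_null() {
#if defined _POSIX_VERSION && _POSIX_VERSION >= 200809L
  return true;
#else
  return false;
#endif
}
```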



So maybe we want an autoconf macro saying whether realpath() accepts
NULL, and just hardcode it for the targets known to support it, and
only check _POSIX_VERSION for the unknowns.


That will work for Issue 6 where the realpath behavior is
implementation-defined. The test wouldn't yield reliable results
for SUSv2 implementations where the behavior is undefined. There,
the result would have to be hardcoded based on what the manual says.
An autoconf test won't help with the ENAMETOOLONG problem since it
might depend on the filesystem. To overcome that, libstdc++ would
need to do the path traversal itself.

Martin


Re: [patch] libstdc++/67173 Fix filesystem::canonical for Solaris 10.

2015-09-12 Thread Martin Sebor

On 09/12/2015 12:07 PM, Martin Sebor wrote:

On 09/12/2015 04:09 AM, Jonathan Wakely wrote:

On 11 September 2015 at 18:39, Martin Sebor wrote:

On 09/11/2015 08:21 AM, Jonathan Wakely wrote:


Solaris 10 doesn't follow POSIX in accepting a null pointer as the
second argument to realpath(), so allocate a buffer for it.



FWIW, the NULL requirement is new in Issue 7. In Issue 6, the behavior
is implementation-defined, and before then it was an error. Solaris 10
claims conformance to SUSv2 and its realpath fails with EINVAL.
Solaris 11 says it conforms to Issue 6 but according to the man page
its realpath already implements the Issue 7 requirement.


Thanks.


I suspect the same problem will come up on other systems so checking
the POSIX version might be better than testing for each OS.


The problem with doing that is that many BSD systems have supported
passing NULL as an extension long before issue 7, so if we just check
something like _POSIX_VERSION >= 200809L then we can only canonicalize
paths up to PATH_MAX on many systems where the extension is available
but _POSIX_VERSION implies conformance to an older standard.


You're right. I agree that just checking the POSIX version may not
lead to optimal results. Some implementations also support multiple
versions and the one in effect may not be the one most recent. To
get the most out of those, it's usually recommended to set
_POSIX_C_SOURCE to the latest version before including any headers,
then test the supported version, and when the supported version is
less than the one requested and involves implementation defined
behavior (as in Issue 6) or undefined behavior that's known to be
used to provide extensions (as in SUSv2), check the implementation
version just as the patch does.



So maybe we want an autoconf macro saying whether realpath() accepts
NULL, and just hardcode it for the targets known to support it, and
only check _POSIX_VERSION for the unknowns.


That will work for Issue 6 where the realpath behavior is
implementation-defined. The test wouldn't yield reliable results
for SUSv2 implementations where the behavior is undefined.


Sorry -- I meant "SUSv2 where the behavior is an error, or in
implementations where the behavior is undefined" (in general).

But based on what you said, the BSD implementations that accept
NULL are non-conforming so they would need to be treated as such
(i.e., outside of POSIX).


There,
the result would have to be hardcoded based on what the manual says.
An autoconf test won't help with the ENAMETOOLONG problem since it
might depend on the filesystem. To overcome that, libstdc++ would
need to do the path traversal itself.

Martin




Re: [PATCH] PR28901 -Wunused-variable ignores unused const initialised variables

2015-09-12 Thread Mark Wielaard
On Sat, Sep 12, 2015 at 12:29:05AM +0200, Bernd Schmidt wrote:
> On 09/12/2015 12:12 AM, Mark Wielaard wrote:
> >12 years ago it was decided that -Wunused-variable shouldn't warn about
> >static const variables because some code used const static char rcsid[]
> >strings which were never used but wanted in the code anyway. But as the
> >bug points out this hides some real bugs. These days the usage of rcsids
> >is not very popular anymore. So this patch changes the default to warn
> >about unused static const variables with -Wunused-variable. And it adds
> >a new option -Wno-unused-const-variable to turn this warning off. New
> >testcases are included to test the new warning with -Wunused-variable
> >and suppressing it with -Wno-unused-const-variable or unused attribute.
> 
> >PR c/28901
> >* gcc.dg/unused-4.c: Adjust warning for static const.
> >* gcc.dg/unused-variable-1.c: New test.
> >* gcc.dg/unused-variable-2.c: Likewise.
> 
> Should these go into c-c++-common?

No. It is C only. But I realize that isn't really clear from my patch,
nor from the documentation I wrote for it. Since in C++ a const at file
scope has internal linkage by default and const variables always need to
be initialized, they are used differently than in C: where in C you would
use a #define, in C++ you would use a const. So the
cxx_warn_unused_global_decl () lang hook explicitly says not to warn
about them. Although I think that is correct, I now think it is confusing
that you cannot enable the warning for C++ if you really want to. It
should be off by default for C++, but on by default for C when
-Wunused-variable is enabled. It would be nice to be able to use it for
C++ too if the user really wants to, instead of having the warning
suppression for C++ hardcoded.

> Otherwise I'm ok with the patch, please
> wait a few days to see if there are objections to this change then commit.

I'll rewrite my patch a little, add some C++ testcases, and update the
documentation. Then we can discuss again.

Thanks,

Mark


Re: [PATCH] Convert SPARC to LRA

2015-09-12 Thread David Miller
From: Eric Botcazou 
Date: Sat, 12 Sep 2015 16:04:09 +0200

>> Richard, Eric, any objections?
> 
> Do we really need to promote to 64-bit if TARGET_ARCH64?  Most 32-bit 
> instructions are still available.  Otherwise this looks good to me.

No, we don't, we can just promote to 32-bit.  I'll make that adjustment
and update the backends page as well.

Thanks.