Re: RFA: AVR: Support building AVR Linux targets
On Mon, 12 Aug 2013, Nick Clifton wrote: > Hi Dennis, Hi Anatoly, Hi Eric, > > I have run into a small problem building GCC for an AVR Linux target - > glibc-c.o is not being built. It turns out that the section handling > "avr-*-*" in the config.gcc file is redefining tmake_file without > allowing for the fact that t-glibc has already been added to it. Your patch itself makes sense on general principles, but the concept of an AVR Linux target doesn't - this is an 8-bit processor. Really, the bug you've found is that there's an avr-*-* case that is too general, matching nonsensical targets such as AVR Linux rather than just avr-*-none / avr-*-elf. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH/Merge Request] Vtable Verification feature.
On Mon, 12 Aug 2013, Caroline Tice wrote: > I have finished my first pass at the porting documentation. It is now > available on the vtable verification feature web site: > http://gcc.gnu.org/wiki/vtv Please provide the links there in a form that does not require JavaScript to read them (in particular, does not require running non-free JavaScript, as per GNU Project principles). That's certainly possible for Google Docs documents; you can read <https://docs.google.com/document/pub?id=1ZfyfkB62EFaR4_g4JKm4--guz3vxm9pciOBziMHTnK4&pli=1> (GCC Architectural Goals) without JavaScript, for example. -- Joseph S. Myers jos...@codesourcery.com
Re: [RFC] MULTILIB_COMPATIBLE option support in multilib
On Tue, 13 Aug 2013, Ilya Enkovich wrote: > Any comments on the used approach? You don't have any documentation (fragments.texi) in your patch. Apart from that, this patch doesn't appear to do anything to engage with all the problems of having multiple multilibs in use at once, such as the inability of ld to handle multiple sysroots (so the single sysroot determined by SYSROOT_SUFFIX_SPEC would just get quietly used), and the need to get all search paths for one multilib (for libraries, and, separately, for headers) used before any paths for another - those paths may be accumulated from multiple specs rather than just a single spec, so I don't think anything just locally changing how specs are processed can suffice. Certainly a much more detailed analysis of all the search paths that are related to a multilib in some way, and how the driver causes the linker and cc1 to search in those paths, and how multiple relevant options get processed by the linker and cc1 after being passed to them, would be needed to justify that a particular patch does achieve the desired search order. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Tue, 20 Aug 2013, Cong Hou wrote: > When a sin() (cos(), log(), etc.) function is called on a value of > float type and the returned double value is converted to another value > of float type, GCC converts this function call into a float version > (sinf()) in the optimization mode. This avoids two type conversions > and the float version function call usually takes less time. However, > this can result in a different result and is therefore unsafe. Whether it's safe depends on the existence of input values for which the double-rounded result is different from the single-rounded result. It's certainly safe for some of the functions for which the optimization is enabled, regardless of the precisions involved (fabs, logb). For sqrt, if the smaller precision is p then the larger precision being at least 2p+3 (I think) makes things safe. For the other cases, exhaustive searches may be needed to determine when the conversion is safe. (Actually, this code appears to deal with cases with up to three types involved - the operand, the result and the function that's called. This complicates the analysis a bit, but the same principles apply.) -- Joseph S. Myers jos...@codesourcery.com
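For illustration, the transformation under discussion looks like this at the source level (a sketch only, not code from the patch; the function name is made up):

  #include <math.h>

  float
  narrow_sin (float x)
  {
    float r1 = (float) sin ((double) x);   /* what the user wrote          */
    float r2 = sinf (x);                   /* what the optimization emits  */
    /* r1 is rounded twice (once to double inside sin, once on conversion
       to float); r2 is rounded once, so the two can differ for some
       inputs unless a precision condition like the one described above
       holds for the types involved.  */
    return r1 - r2;
  }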
Re: [RFC] Add conditional compare support
On Wed, 21 Aug 2013, Zhenqiang Chen wrote: > SHORT_CIRCUIT expression will be converted to CCMP expressions > at the end of front-end. That doesn't sound like the right place to me. We want the same code to be generated whether users write && and || directly or write corresponding sequences of if conditions. In general we want to move away from optimizing complicated expressions in fold-const, towards having GIMPLE passes that detect the relevant patterns however they were written (and without needing to combine GIMPLE back into trees for passing through the old folder). So, I think you should detect this at some point in the GIMPLE optimizations rather than changing fold-const at all (and as a consequence, you then shouldn't need to change front-end pretty-printers). -- Joseph S. Myers jos...@codesourcery.com
Re: powerpc64le multilibs and multiarch dir
On Thu, 22 Aug 2013, Alan Modra wrote: > For multiarch, powerpc64le-linux now will use powerpc64le-linux-gnu. > Given a typical big-endian native toolchain with os dirs /lib and > /lib64, we'll use /lible and /lib64le if supporting little-endian as > well. If you happen to use /lib and /lib32, then the little-endian > variants are /lible and /lib32le. For completeness I also support > building big-endian multilibs on a little-endian host. Given those directory names, what are the defined dynamic linker paths for 32-bit and 64-bit little-endian (which can't depend on which of the various directory arrangements may be in use)? Does the Power Architecture support, in principle, a single system running both big-endian and little-endian processes at the same time, or is it a matter of hardware configuration or boot-time setup? Unless both can run at once, it doesn't seem particularly useful to define separate directories for big and little endian since a particular system would be just one or the other. Other architectures don't define separate directory names for endianness variants unless multiarch naming conventions are in use. (ARM does support having processes with both endiannesses - indeed, a process can change its own endianness with the unprivileged SETEND instruction (deprecated in ARMv8) - although the Linux kernel has no support for this the last time I checked, and big-endian ARM is rarely used.) -- Joseph S. Myers jos...@codesourcery.com
Re: [Patch] Fix infinite loop/crash if array initializer index equals max value
On Thu, 22 Aug 2013, Selvaraj, Senthil_Kumar wrote: > 2013-08-23 Senthil Kumar Selvaraj > * c-typeck.c (output_pending_init_elements): Handle overflow of > constructor_unfilled_index. This patch needs to include a testcase in the testsuite that fails before and passes after the patch. (I realise such a test may only be able to run for a subset of targets.) -- Joseph S. Myers jos...@codesourcery.com
Re: libtool update for powerpc64le-linux
On Fri, 23 Aug 2013, Alan Modra wrote: > I'd like to import upstream libtool into gcc to support powerpc64le, Has the sysroot semantics issue been resolved in upstream libtool, or do you mean "import with 3334f7ed5851ef1e96b052f2984c4acdbf39e20c reverted"? -- Joseph S. Myers jos...@codesourcery.com
Re: RFA: AVR: Support building AVR Linux targets
On Fri, 23 Aug 2013, nick clifton wrote: > Hi Joseph, > > Your patch itself makes sense on general principles, but the concept of an > > AVR Linux target doesn't - this is an 8-bit processor. Really, the bug > > you've found is that there's an avr-*-* case that is too general, matching > > nonsensical targets such as AVR Linux rather than just avr-*-none / > > avr-*-elf. > > Fair enough. Would you like me to > > A. Apply the patch anyway. On the general principle that target specific > sections in config.gcc add to tmake_file rather than override it. > > B. Do nothing. On the general principle that if it ain't broke don't fix > it. > > C. Draw up another patch that restricts the AVR patterns in config.gcc to > -none and -elf. A and C - I think both changes should be applied. -- Joseph S. Myers jos...@codesourcery.com
RE: [PATCH] Add a new option "-ftree-bitfield-merge" (patch / doc inside)
On Fri, 23 Aug 2013, Zoran Jovanovic wrote: > New test case involving unions is added. Tests for unions should include *execution* tests that the relevant bits have the right values after the sequence of assignments. -- Joseph S. Myers jos...@codesourcery.com
Re: Clean up pretty printers [15/n]
On Sun, 25 Aug 2013, Gabriel Dos Reis wrote:

> Instead of defining the same macro several times in different translation units, we can just make it a function and use it where needed.

Have you made sure that po/exgettext still extracts the relevant messages for translation (I'm not sure how good it is at identifying C++ functions with arguments called gmsgid, or how good xgettext then is at identifying relevant C++ function calls)? In particular, even if other cases get identified properly, conditional expressions such as

> @@ -379,15 +375,15 @@
>    switch (code)
>      {
>      case INTEGER_TYPE:
> -      pp_string (pp, (TYPE_UNSIGNED (t)
> -                      ? M_("
> -                      : M_("
> +      pp->translate_string (TYPE_UNSIGNED (t)
> +                            ? "
> +                            : "

may need each case of the conditional expression to be marked for extraction for translation, or to be separated into two separate calls using "if" (we've had that issue before with conditional expressions in diagnostics). -- Joseph S. Myers jos...@codesourcery.com
Re: Clean up pretty printers [15/n]
On Sun, 25 Aug 2013, Gabriel Dos Reis wrote:

> I don't know whether after the switch C++ xgettext is being invoked explicitly with -C or --c++, or whether we are still relying on xgettext to pick the language based on the file extension. This is probably another reason why we might want to rename files to use appropriate extensions. In the meantime, we might want to explicitly specify the language for the input source file.
>
> po/exgettext only looks whether the parameter name ends with 'msgid'.

And uses --language=c, --language=c++ and --language=GCC-source depending on the source files and functions in question. The issue is both whether the awk code properly handles C++ function declarations to identify the parameter, and whether whatever way it passes the identified function name to xgettext (if it does identify the function) results in xgettext (with whatever language it uses) handling the call correctly.

> | > @@ -379,15 +375,15 @@
> | >    switch (code)
> | >      {
> | >      case INTEGER_TYPE:
> | > -      pp_string (pp, (TYPE_UNSIGNED (t)
> | > -                      ? M_("
> | > -                      : M_("
> | > +      pp->translate_string (TYPE_UNSIGNED (t)
> | > +                            ? "
> | > +                            : "
> |
> | may need each case of the conditional expression to be marked for extraction for translation, or to be separated into two separate calls using "if" (we've had that issue before with conditional expressions in diagnostics).
>
> Hmm, why would that be needed now, and not before?
> (not that I am fond of the conditional, but only by curiosity.)

Previously, each string was inside a separate call to M_() - the strings themselves were the msgid parameters. Now, the msgid parameter is not a single string but a more complicated expression and xgettext may not handle such expressions properly the way it handles having just a single string as parameter. -- Joseph S. Myers jos...@codesourcery.com
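For illustration, the "two separate calls using 'if'" alternative would look something like the following; the string contents here are placeholders, since the real msgids are truncated in the hunk quoted above:

  if (TYPE_UNSIGNED (t))
    pp->translate_string ("<placeholder-unsigned:");   /* placeholder msgid */
  else
    pp->translate_string ("<placeholder-signed:");     /* placeholder msgid */

With each msgid again a plain string literal argument, exgettext/xgettext can extract both messages for translation.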
Re: Clean up pretty printers [15/n]
On Sun, 25 Aug 2013, Gabriel Dos Reis wrote: > | Previously, each string was inside a separate call to M_() - the strings > | themselves were the msgid parameters. Now, the msgid parameter is not a > | single string but a more complicated expression and xgettext may not > | handle such expressions properly the way it handles having just a single > | string as parameter. > > OK, thanks for the explanation. > > Do you think the same issue arises with diagnostic_set_info, > diagnostic_append_note? I don't see any problematic calls to those functions with this sort of conditional expression. -- Joseph S. Myers jos...@codesourcery.com
Re: wide-int branch now up for public comment and review
On Sun, 25 Aug 2013, Mike Stump wrote: > On Aug 23, 2013, at 8:02 AM, Richard Sandiford > wrote: > > We really need to get rid of the #include "tm.h" in wide-int.h. > > MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent > > thing in there. If that comes from tm.h then perhaps we should put it > > into a new header file instead. > > BITS_PER_UNIT comes from there as well, and I'd need both. Grabbing the > #defines we generate is easy enough, but BITS_PER_UNIT would be more > annoying. No port in the tree makes use of it yet (other than 8). So, > do we just assume BITS_PER_UNIT is 8? Regarding avoiding tm.h dependence through BITS_PER_UNIT (without actually converting it from a target macro to a target hook), see my suggestions at <http://gcc.gnu.org/ml/gcc-patches/2010-11/msg02617.html>. It would seem fairly reasonable, if in future other macros are converted to hooks and it's possible to build multiple back ends into a single compiler binary, to require that all such back ends share a value of BITS_PER_UNIT. BITS_PER_UNIT describes the number of bits in QImode - the RTL-level byte. I don't think wide-int should care about that at all. As I've previously noted, many front-end uses of BITS_PER_UNIT really care about the C-level char and so should be TYPE_PRECISION (char_type_node). Generally, before thinking about how to get BITS_PER_UNIT somewhere, consider whether the code is actually correct to be using BITS_PER_UNIT at all - whether it's the RTL-level QImode that is really what's relevant to the code. -- Joseph S. Myers jos...@codesourcery.com
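For example, a front-end use that really means the width of the C char type can be written without any target macro (a sketch using the existing tree node):

  /* Number of bits in the C "char" type, for front-end code: */
  unsigned int char_bits = TYPE_PRECISION (char_type_node);
  /* ...rather than BITS_PER_UNIT, which describes QImode, the RTL-level byte. */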
Re: RFA: prefer double over same-size float as conversion result
In convert_arguments I think you should be comparing TYPE_MAIN_VARIANT (valtype) against double_type_node and long_double_type_node, rather than just valtype. This is PR c/35649 (so include that number in your ChangeLog entry and close that bug as fixed). -- Joseph S. Myers jos...@codesourcery.com
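A sketch of the comparison being suggested (illustrative, not the actual patch text):

  tree mv = TYPE_MAIN_VARIANT (valtype);
  if (mv == double_type_node || mv == long_double_type_node)
    /* ... handle the double / long double result case ... */;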
Re: [ubsan] Introduce pointer_sized_int_node
On Wed, 28 Aug 2013, Richard Biener wrote: > > That is not easily possible. The thing is, in the C FEs they are created > > from the C/C++ type name (for stdint purposes): > > if (INTPTR_TYPE) > > intptr_type_node = > > TREE_TYPE (identifier_global_value (c_get_ident (INTPTR_TYPE))); > > if (UINTPTR_TYPE) > > uintptr_type_node = > > TREE_TYPE (identifier_global_value (c_get_ident (UINTPTR_TYPE))); > > and are supposed to match the stdint.h previously used type, because > > it is part of ABI etc. (long vs. long long vs. int e.g. when two of these > > are the same precision). > > So the middle-end uintptr type needs to be something different, while it > > ideally should have the same precision, it is not the same language type. > > But it's the C ABI type - and we do need the C ABI types from within > the middle-end. Joseph, any idea? The possible issues that I see are: * Types getting determined from strings (that are the values of macros such as INTPTR_TYPE) by the front ends. This is no real problem, in that the Fortran front end already emulates what's required in get_typenode_from_name. Of course it would be cleaner for all these macros to be defined to evaluate to enum values rather than strings, so the middle-end doesn't need such emulation of C type name parsing. See the thread starting at <http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00964.html> for a conversion to using enum values for the stdint.h macros - that could be updated for current trunk. Of course the aim would be for other macros such as SIZE_TYPE to be converted as well, and indeed to get a conversion to hooks (probably a few separate hooks for different groups of types, taking appropriate enum values as arguments, rather than just one for all the typedefs or a separate hook for every typedef). * Targets still not having the type information in GCC. As I said in <http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00248.html>, listing those targets, it's probably time to deprecate them (and then freely check in changes that break them by requiring those types to be defined). -- Joseph S. Myers jos...@codesourcery.com
Re: Fwd: Re: [PATCH]. Fix HAVE_SYS_SDT_H for cross-compilation
On Fri, 30 Aug 2013, Christian Bruel wrote: > So to cross build a target library > --with-build-sysroot=dir looks appropriate to specify the alternative > host root path. > but > --with-sysroot looks not appropriate because it changes the search paths > (that should still be /usr/include on the target tree). If you are building a cross compiler to a target such as GNU/Linux for which the concept of native directory arrangements makes sense, then you need --with-sysroot. That specifies the directory that the final installed compiler will use to find headers and libraries (as modified by sysroot suffixes and the usual relocation if the compiler is installed somewhere other than its configured prefix). If, when you are building such a compiler, the sysroot is located somewhere other than the location specified with --with-sysroot, then you need --with-build-sysroot as well. That's the entire purpose of --with-build-sysroot: the case where the build-time location of the sysroot for a sysrooted compiler you are building differs from the configured location given with --with-sysroot. I don't see what your use is for a cross compiler without --with-sysroot. It's simply not correct for the installed cross compiler to search /usr/include, and if you don't specify a sysroot at all then it will just search directories such as /include and /lib under the installation (which cannot represent the complexity of standard GNU/Linux directory arrangements; note the absolute paths, intended to be interpreted relative to a sysroot, in the libc.so linker script from glibc, for example), so that compiler needs a sysroot. All the above is about cross compilers - compilers with $host != $target. Your patch suggests you are actually using a cross compiler to build a native compiler - $build != $host == $target. In that case, it's best not to build target libraries at all, as they will already have been built with the $build-x-$target cross compiler that must have previously been built from the same GCC sources, with the same configuration. That is, "make all-host" and "make install-host", and copy the libraries from the previous build. And since you already have such a $build-x-$target compiler, it would seem best to determine what directory that compiler actually searches for headers and compute target_header_dir that way, to the extent that the target headers need examining to determine configuration of the compiler proper. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Fri, 30 Aug 2013, Cong Hou wrote: > I have done a simple experiment and found that it may only be safe to do > this conversion for fabs() and sqrt(). For fabs() it is easy to see the > safety. For sqrt(), I tried many random floating point values on two > versions and did not see a difference. Although it cannot prove its safety, > my proposed patch (shown below) does change the situation. > > However, for other functions, my experiment shows it is unsafe to do this > conversion. Those functions are: > > sin, cos, sinh, cosh, asin, acos, asinh, acosh, tan, tanh, atan, atanh, > log, log10, log1p, log2, logb, cbrt, erf, erfc, exp, exp2, expm1. I don't see why it would be unsafe for logb - can you give an example (exact float input value as hex float, and the values you believe logb should return for float and double). For sqrt you appear to have ignored my explanation of the necessary and sufficient condition on the precisions of the respective types. The conversion is not safe for sqrt if the two types are double and long double and long double is x86 extended, for example. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Sat, 31 Aug 2013, Cong Hou wrote:

> > I don't see why it would be unsafe for logb - can you give an example (exact float input value as hex float, and the values you believe logb should return for float and double).
>
> Please try the following code (you will get different results depending on whether you use optimization mode):
>
> #include <stdio.h>
> #include <math.h>
>
> int main()
> {
>   int i = 0x3edc67d5;
>   float f = *((float*)&i);
>   float r1 = logb(f);
>   float r2 = logbf(f);
>   printf("%x %x\n", *((int*)&r1), *((int*)&r2));
> }

(a) Please stop sending HTML email, so your messages reach the mailing list, and resend your messages so far to the list. The mailing list needs to see the whole of both sides of the discussion of any patch being proposed for GCC.

(b) I referred to the values *you believe logb should return*. Optimization is not meant to preserve library bugs; the comparison should be on the basis of correctly rounded results from both float and double functions. The correct return from logb appears to be -2 here, and I get that from both logb and logbf with current git glibc. The existence of a bug in some old library is not relevant here.

(c) I always advise writing such tests as *valid C code* using hex floats (or if really necessary, unions), not *invalid C code* with aliasing violations. -- Joseph S. Myers jos...@codesourcery.com
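A version of the quoted test written in the way suggested, with no pointer type punning (illustrative only; the bit pattern 0x3edc67d5, read as an IEEE binary32 value, is the hex float 0x1.b8cfaap-2):

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    float f = 0x1.b8cfaap-2f;              /* same value as bit pattern 0x3edc67d5 */
    printf ("%g %g\n", logb (f), logbf (f));   /* both should print -2 with a correct libm */
    return 0;
  }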
Re: [RFC] Changes to the wide-int classes
On Sun, 1 Sep 2013, Richard Sandiford wrote: > like to get rid of them and just keep the genuine operators. The problem > is that I'd have liked the AND routine to be "wi::and", but of course that > isn't possible with "and" being a keyword, so I went for "wi::bit_and" > instead. Same for "not" and "wi::bit_not", and "or" and "wi::bit_or". > Then it seemed like the others should be bit_* too, and "wi::bit_and_not" > just seems a bit unwieldy... > > Hmm, if we decide to forbid the use of "and" in gcc, perhaps we could > #define it to something safe. But that would probably be too confusing. "and" in C++ is not a keyword, but an alternative token (like %> etc.). As such, it can't be defined as a macro, or used as a macro name in #define, #ifdef etc., and does not get converted to 0 in #if conditions but is interpreted as an operator there. (The status of "new" and "delete" in this regard is less clear; see <http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#369>.) -- Joseph S. Myers jos...@codesourcery.com
Re: [patch 4/4] -fstrict-volatile-bitfields cleanup v3: remove from defaults on all targets
On Mon, 2 Sep 2013, Richard Earnshaw wrote: > On 01/09/13 14:10, Bernd Edlinger wrote: > > IMHO the AAPCS forbids packed structures. Therefore we need not > > interfere with the C++ memory model if we have unaligned data. > > The AAPCS neither forbids nor requires packed structures. They're a GNU > extension and as such not part of standard C++. Thus the semantics of > such an operation are irrelevant to the AAPCS: you get to choose what the > behaviour is in this case... The trouble is that AAPCS semantics are incompatible with the default GNU semantics for non-packed structures as well - AAPCS strict-volatile-bitfields is only compatible with --param allow-store-data-races=1, which is not the default for any language variant accepted by GCC (and I say that the default language semantics here should not depend on the target architecture). -- Joseph S. Myers jos...@codesourcery.com
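A sketch of the kind of conflict being described (member names made up): with AAPCS-style strict volatile bit-fields, the store below is performed as a read-modify-write of the declared type's width (32 bits here), which also rewrites the storage of the separate member b - a store data race under the C11/C++11 memory model unless --param allow-store-data-races=1 is in effect.

  struct s
  {
    volatile int bf : 8;   /* accessed with the width of "int" under AAPCS rules   */
    char b;                /* a distinct memory location sharing that 32-bit word  */
  } x;

  void set_bf (void) { x.bf = 1; }   /* 32-bit read-modify-write also writes back x.b */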
RE: [patch 4/4] -fstrict-volatile-bitfields cleanup v3: remove from defaults on all targets
On Tue, 3 Sep 2013, Bernd Edlinger wrote: > >> The trouble is that AAPCS semantics are incompatible with the default GNU > >> semantics for non-packed structures as well - AAPCS > >> strict-volatile-bitfields is only compatible with --param > >> allow-store-data-races=1, which is not the default for any language > >> variant accepted by GCC (and I say that the default language semantics > >> here should not depend on the target architecture). > > > > As I said it should be easy to fulfil AAPCS requirements if they do not > > violate > > language constraints during code generation and thus warn about accesses > > that are emitted in a way not conforming to AAPCS (a warning at struct > > declaration time would be nicer, but I guess requires more coding and > > thought, > > though at the point we compute DECL_BIT_FIELD_REPRESENTATIVE in > > stor-layout.c would be a suitable place). > > > > Just to make that clear, Sandra's patch tries to follow the AAPCS > requirements, > _even if_ they violate the language constraints. > > Thus -fstrict-volatile-bitfields implies --param allow-store-data-races=1. And my concern is specifically about the defaults - the default for ARM should be the same C/C++ language semantics as on other targets - rather than what happens if someone specifies -fstrict-volatile-bitfields explicitly. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Tue, 3 Sep 2013, Xinliang David Li wrote: > >From Joseph: > > "The > conversion is not safe for sqrt if the two types are double and long > double and long double is x86 extended, for example." > > This is not reflected in the patch. No, the problem is that it tries to reflect it but hardcodes the specific example I gave, rather than following the logic I explained regarding the precisions of the types involved, which depend on the target. And since I only gave a simplified analysis, for two types when this function deals with cases involving three types, the patch submission needs to include its own analysis for the full generality of three types to justify the logic used (as inequalities involving the three precisions). (I suspect it reduces to the case of two types so you don't need to go into the details of reasoning about floating point to produce the more general analysis. But in any case, it's for the patch submitter to give the full explanation.) -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Tue, 3 Sep 2013, Cong Hou wrote:

> +     CASE_MATHFN (SQRT)
> +/* sqrtl(double) cannot be safely converted to sqrt(double). */
> +if (fcode == BUILT_IN_SQRTL &&
> +(TYPE_MODE (type) == TYPE_MODE (double_type_node)) &&
> +!flag_unsafe_math_optimizations)
> +  break;

Please reread my previous messages on this subject and try again, with regard to both the patch itself and the accompanying analysis. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Tue, 3 Sep 2013, Cong Hou wrote: > Could you please tell me how to check the precision of long double in > GCC on different platforms? REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->p (but you should be referring to the relevant types - "type", the type being converted to, "itype", the type of the function being called in the source code, "TREE_TYPE (arg0)", the type of the argument after extensions have been removed, and "newtype", computed from those - so you should have expressions like the above with two or more of those four types, but not with long_double_type_node directly). The patch submission will need to include a proper analysis to justify to the reader why the particular inequality with particular types from those four is correct in all cases where the relevant code may be executed. -- Joseph S. Myers jos...@codesourcery.com
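For illustration only, the kind of precision comparison this points at might look like the sketch below; which of the four types the inequality must relate, and the exact bound, are precisely what the requested analysis has to establish (the 2p+2 bound is the one cited later in the thread):

  int prec_itype   = REAL_MODE_FORMAT (TYPE_MODE (itype))->p;
  int prec_newtype = REAL_MODE_FORMAT (TYPE_MODE (newtype))->p;

  /* For sqrt, narrowing the call is value-safe only if the wider
     precision sufficiently exceeds the narrower one.  */
  bool sqrt_narrowing_safe = prec_itype >= 2 * prec_newtype + 2;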
Re: [Patch] Fix infinite loop/crash if array initializer index equals max value
On Wed, 4 Sep 2013, Senthil Kumar Selvaraj wrote: > Reattaching the patch with a testcase for the AVR target. I'm not sure > how to generalize the testcase for other targets - the constant is the > max value (unsigned) of the mode used to represent initialized array > indices. Logically that should be __SIZE_MAX__, unless there's an unusual choice of size_t. Are there any issues with large compiler memory consumption if you use __SIZE_MAX__ there for ordinary 32-bit or 64-bit targets? If so, that would be a reason to keep the test restricted to AVR; if not, it would seem better to have __SIZE_MAX__ in the test (or a macro defined in the test that defaults to __SIZE_MAX__ but is overridden for particular targets, if some targets need it overridden). -- Joseph S. Myers jos...@codesourcery.com
RE: [patch 4/4] -fstrict-volatile-bitfields cleanup v3: remove from defaults on all targets
On Wed, 4 Sep 2013, Bernd Edlinger wrote: > How about this for a compromise: Let's make the > default of -fstrict-volatile-bitfields an optional > configure argument for gcc 4.9, that can be used for every > target, even for X86_64 of you want ? I think it's generally a bad idea for semantic options, such as --param allow-store-data-races=1, to depend on configure options like that (as opposed to non-semantic options such as what architecture variant is optimized for by default). Of course, *if* the user passes --param allow-store-data-races=1, then -fstrict-volatile-bitfields becomes non-semantic at the C/C++ language level and it's reasonable for an architecture to default it to on. Any configure option related to this should maybe be a stronger version of --disable-threads, meaning that not only is no thread library support to be included in any of GCC's libraries but that the compiler will never be used to build any code for which any concurrency or data race issues arise (including through signals) and so --param allow-store-data-races=1 can be the default (along with any target-specific consequences of that default). Call it --enable-data-races and require it to be used along with --disable-threads, or something like that. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Wed, 4 Sep 2013, Cong Hou wrote: > I have made a new patch according to your comments. I found several > references saying that the precision 2p+2 is OK for the sqrt > conversion (one here: > http://www.cs.berkeley.edu/~fateman/generic/algorithms.pdf). The new > patch is pasted as below. This patch submission still fails to pay attention to various of my comments. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Wed, 4 Sep 2013, Xinliang David Li wrote:

> > This patch submission still fails to pay attention to various of my comments.
>
> If you can provide inlined comments in the patch, that will be more useful and productive.

I have explained things several times in this thread already. I see no point in repeating things when what I would say has already been said and ignored. As far as I can tell, this latest patch submission has taken one line from the message it is in response to, and largely ignored the following two paragraphs (including where I explicitly say that said line should not appear literally in the source code at all). But, repeating what I said before yet again:

  (but you should be referring to the relevant types

The patch does not do that properly. It refers explicitly to long_double_type_node and double_type_node.

  - "type", the type being converted to, "itype", the type of the function being called in the source code, "TREE_TYPE (arg0)", the type of the argument after extensions have been removed, and "newtype", computed from those

The patch only engages with "type". I suspect "newtype" is what it really means there when using "type". When it uses long_double_type_node and double_type_node, those should be "itype".

  - so you should have expressions like the above with two or more of those four types, but not with long_double_type_node directly).

See above. The patch uses long_double_type_node directly, contrary to what I explicitly said. You are free to disagree with something I say in a review - but in that case you need to reply specifically to the review comment explaining your rationale for disagreeing with it. Just ignoring the comment and not mentioning the disagreement wastes the time of reviewers.

  The patch submission will need to include a proper analysis to justify to the reader why the particular inequality with particular types from those four is correct in all cases where the relevant code may be executed.

The comments only refer to "T1" and "T2" without explaining how they relate to the original expression (three types) or the variables within GCC dealing with various types (four types, because newtype gets involved). I said back in <http://gcc.gnu.org/ml/gcc-patches/2013-09/msg00161.html> and <http://gcc.gnu.org/ml/gcc-patches/2013-08/msg01384.html> that three types are involved - when I say "the patch submission needs to include its own analysis for the full generality of three types", again, it's inappropriate for a patch to omit such an analysis without justification.

The patch submission needs to include an analysis that properly explains the transformation involved and the conditions under which it is safe. Maybe starting along the lines of:

  We have an expression of the form (T1) sqrtT2 ((T2) exprT3), where exprT3 has type T3 (TREE_TYPE (ARG0)), T2 is the type of the floating-point square root function being used (ITYPE), T1 is TYPE and all these types are binary floating-point types. We wish to optimize if possible to an expression of the form (T1) sqrtT4 ((T4) exprT3), where T4 (NEWTYPE) is narrower than T2.

(Then explain the choice of T4 and the conditions under which the transformation is safe, with appropriate references.)

I suggest that for the next patch submission you (the patch submitter) make sure you spend at least an hour properly understanding the issues and all the previous messages in the thread and writing up the detailed, coherent explanation of the transformation and analysis of the issues that I asked for some time ago, as if for a reader who hasn't read any of this thread or looked at this transformation in GCC before. I've certainly spent longer than that on review in this thread. It's normal for a patch involving anything at all tricky to take hours to write up (and also normal for one to discover, in the course of writing the detailed coherent analysis for people who aren't familiar with the code and the issues involved, that there's in fact something wrong with the patch and it needs revisiting before submission). -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Enable non-complex math builtins from C99 for Bionic
On Thu, 5 Sep 2013, Bernhard Reutner-Fischer wrote: > uClibc has C99 math support optionally as well as other optional, nonstandard > feature sets. Your patch does not seem to check (in a cross-compilable > fashion, of course) if C99 math is supported in libc or not, thus regressing > on uClibc with C99_MATH enabled. It is a basic principle that it should be possible to bootstrap cross tools by building the compiler, once, then using it to build runtime libraries. We haven't got there yet, but configure-time tests for library features that affect how the compiler behaves are best avoided so as to support such bootstraps - and if present, it's best for there to be a corresponding configure option to override them, and a command-line option to control things on a per-multilib basis. To the extent that we do have configure support for checking library headers if those are available when the compiler is configured, it only supports checking the default multilib and not headers for other multilibs. -- Joseph S. Myers jos...@codesourcery.com
Re: RFC - Next refactoring steps
On Thu, 5 Sep 2013, Andrew MacLeod wrote: > Or are you suggesting that coretypes.h is a file we can assume is available? Every .c file should start with includes of config.h, system.h and coretypes.h, in that order, so all other headers can assume they have been included. (In principle I think coretypes.h should be split up so that files included in the driver don't see at all, even through minimal coretypes.h declarations, types such as "tree" that don't make sense in the driver. But that probably requires splitting up various other headers, and I don't think it's a priority.) -- Joseph S. Myers jos...@codesourcery.com
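Concretely, that means every GCC .c file is expected to begin:

  #include "config.h"
  #include "system.h"
  #include "coretypes.h"
  /* ... then tm.h and any other headers the file needs ... */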
Re: RFC - Next refactoring steps
On Fri, 6 Sep 2013, Richard Biener wrote: > But yes, making all dependencies explicit puts #includes where they > belong and shows where header refactoring would make sense. It also > removes weird includes from .c files that are only necessary to make > included required headers not break. One difficulty is that e.g. flags.h uses SWITCHABLE_TARGET (tm.h) but is included in lots of front-end files that don't actually care about the SWITCHABLE_TARGET use in flags.h and have no other reason to include tm.h. Since we generally want to eliminate tm.h uses in front-end files (converting target macros used there into hooks, etc.), making the "dependency" explicit in the obvious way isn't such a good idea. Instead, probably the bits that don't depend on SWITCHABLE_TARGET should move out of flags.h to other headers, the residual bit that does depend on tm.h should become e.g. switchable-target-flags.h (and include tm.h), and files that currently include flags.h should change to including the generated options.h (and any other headers needed for things formerly got from flags.h). (If other headers do start including tm.h, of course it becomes more complicated to identify which front-end files are still including tm.h, as they may include it indirectly.) -- Joseph S. Myers jos...@codesourcery.com
Re: crash fix for unhandled operation
On Mon, 9 Sep 2013, Mike Stump wrote: > Presently gcc just dies with a crash for an unhandled operation, the > below handles it better. > > I'm torn between sorry and error, error might be better. Thoughts? error means there is something wrong with the user's source code, and should generally be associated with the location of an erroneous source code construct. I don't see how it can be appropriate here; my impression is that this code should never fail for any compiler input, valid or invalid, and so an ICE seems better than sorry (which is for some well-defined source code feature lacking GCC support, or exceeding implementation limits in GCC). -- Joseph S. Myers jos...@codesourcery.com
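In GCC's diagnostic machinery the distinction looks roughly like this (illustrative calls only; the format strings and the opname variable are made up):

  error ("invalid operands to binary %qs", opname);      /* the user's source is wrong        */
  sorry ("%qs is not supported on this target", opname); /* valid code, support not written   */
  gcc_unreachable ();                                    /* should never happen for any input:
                                                            an ICE, i.e. a compiler bug       */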
Re: crash fix for unhandled operation
On Mon, 9 Sep 2013, Mike Stump wrote: > On Sep 9, 2013, at 3:48 PM, "Joseph S. Myers" wrote: > > On Mon, 9 Sep 2013, Mike Stump wrote: > > > >> Presently gcc just dies with a crash for an unhandled operation, the > >> below handles it better. > >> > >> I'm torn between sorry and error, error might be better. Thoughts? > > > > error means there is something wrong with the user's source code, > > Right. If one has mode X, and there are no instructions for mode X, the > thing that is wrong with their source code is wanting to perform an > operation in mode X. The language standard doesn't mandate that the > operation for mode X works, so, we are free to just give an error. Well, your patch was missing the testcase, or explanation for why a testcase can't be added to the testsuite, so I don't know what sort of source code you have in mind here. But "mode" is only a source-code concept to the extent that the user uses "mode" attributes. If the user does a division of "int" values, say, that's fully defined by ISO C, and so if there's back-end support for it missing, that's a back-end bug and an ICE is appropriate; it doesn't reflect anything wrong with the user's source code. If however the user used a "mode" attribute for a mode for which the division operation is unsupported, the front end should give an error for that operation, just as the front end gives an error for dividing two struct values (unless of course that's supported by the language, in which case it gets lowered to appropriate GIMPLE); it doesn't generate GIMPLE with a struct division because that's invalid GIMPLE, and it shouldn't generate GIMPLE with a division on types for which the operation isn't supported. Note that for 32-bit x86, simply doing int x __attribute__((mode(TI))); gives "error: unable to emulate 'TI'"; you don't even need to try any operations on that mode. For target-specific types with more fine-grained restrictions on permitted operations, there are several target hooks such as invalid_unary_op, invalid_binary_op and invalid_parameter_type. That's how the errors should be given, so that the invalid GIMPLE is never generated. > > should generally be associated with the location of an erroneous source > > code construct. > > Indeed, the ^ points to exactly what is wrong in their source, which is > (relatively new and) nice. My observation has been that errors given by RTL expanders are at bad locations because input_location at that point tends to be set to the end of the function (at best). In particular, I've seen that for errors from target-specific built-in function expanders. But maybe this has improved since I saw such bad locations. -- Joseph S. Myers jos...@codesourcery.com
Re: [patch driver]: Fix relocatable toolchain path-replacement in driver
On Wed, 11 Sep 2013, Kai Tietz wrote: > This change fixes a quirk happening for relocated toolchains. Driver > remembers original-build directory The original *build* directory should never be known to the driver; only the *configured* prefix. This area is complicated and subtle; you need to be very careful and precise in the analysis included in any patch submission related to it. Get things absolutely clear in your head and then write a complete, careful, precise self-contained explanation of everything relevant in GCC, as if for an audience not at all familiar with the code in question. > in std_prefix variable for being able later to modify path. Sadly > this std_prefix variable gets modified > later on, and so update_path can't work any longer as desired. This You don't say what "as desired" is. prefix.c contains a long description of *what* the path updates are, but no explanation of *why* or any overall design; it appears to be something Windows-specific. update_path should, as I understand it, always be called with PATH being a relocated path (one that has had the configured prefix converted to the prefix where the toolchain is in fact installed, using make_relative_prefix, or that doesn't need any relocation, e.g. a path passed with a -B option). In incpath.c, for example, you see how a path is computed using make_relative_prefix before update_path is called. Thus, it is correct for std_prefix to be the relocated prefix rather than the unrelocated one. If there is a bug, I'd say it's in whatever code is calling update_path without having first done the make_relative_prefix relocation on the path being passed to update_path. Thus, it is that caller that you need to fix. -- Joseph S. Myers jos...@codesourcery.com
Re: [patch driver]: Fix relocatable toolchain path-replacement in driver
On Wed, 11 Sep 2013, Kai Tietz wrote: > > update_path should, as I understand it, always be called with PATH being a > > relocated path (one that has had the configured prefix converted to the > > prefix where the toolchain is in fact installed, using > > make_relative_prefix, or that doesn't need any relocation, e.g. a path > > passed with a -B option). In incpath.c, for example, you see how a path > > is computed using make_relative_prefix before update_path is called. > > Thus, it is correct for std_prefix to be the relocated prefix rather than > > the unrelocated one. > > I understand this differently. See here translate_name function (used > by update_path). If path is empty it falls back to PREFIX (not > std_prefix). So I understand that this routine tries to cut off > the compiled-in prefix from paths and replaces it by key's path. But > well, I might have got this wrong. It doesn't really make sense to me for there to be a mixture of direct replacement of the configure-time prefix based on keys, and first replacing the configure-time prefix with the install location and then replacing that based on keys. I think the most appropriate design is as I said: make_relative_prefix, and nothing else, deals with replacing the configure-time prefix with the install location, and then prefix.c deals with any subsequent replacements of the install location. On that basis, translate_name should be using std_prefix not PREFIX, and any callers of these prefix.c interfaces that pass paths still using the configure-time prefix should be fixed so that the make_relative_prefix substitution occurs first. You've found evidence of an inconsistency. A fix needs to be based on a clear design - a clear understanding of what sort of paths should be used where and what interfaces should translate them to other kinds of paths - rather than just locally changing the paths used in one particular place without a basis for arguing that the change fits a coherent design and so won't cause problems elsewhere. -- Joseph S. Myers jos...@codesourcery.com
Re: gcc_GAS_FLAGS: Add more gcc_cv_as_flags overrides
On Wed, 11 Sep 2013, Thomas Schwinge wrote: > configure:23559: checking assembler for thread-local storage support I don't expect anyone to work on this, but such a configure test is silly - it's a test where a failure is more likely to indicate the test is buggy than that the feature tested for is actually missing. I think each GCC version should have a minimum version of GNU binutils, to be used except for a whitelist of targets where some other assembler/linker are known to be supported, and should check the binutils version number (unless configured for one of the special targets not to use GNU binutils) and fail if it's too old, but not check for assembler or linker features in cases where the binutils version requirement means they can be assumed. (If the relevant conditional code in GCC wouldn't be used anyway for the proprietary Unix targets with non-GNU system assembler/linker, it can be made unconditional.) So only targets where TLS support was a recent addition to the GNU assembler, or those supporting a non-GNU assembler where TLS may or may not be supported, would need such a test at all. -- Joseph S. Myers jos...@codesourcery.com
Re: [patch][PR/42955] Don't install $(target)/bin/gcc, gfortran, etc.
On Wed, 11 Sep 2013, Brooks Moses wrote: > Ping^3? > > Joseph, I'd been cc'ing you on this because it's driver-related and I > didn't find a more-obvious reviewer. Is there someone else I should > be asking to review it? Alternately, is this a change that should be > discussed on gcc@ before having the actual patch reviewed? I'm thinking of this as a build-system change rather than a driver one. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH][ubsan] Add VLA bound instrumentation
On Thu, 12 Sep 2013, Joseph S. Myers wrote: > (Actually, I believe sizes (in bytes) greater than target PTRDIFF_MAX, not > just SIZE_MAX, should be caught, because pointer subtraction cannot work > reliably with larger objects. So it's not just when the size or > multiplication overflow size_t, but when they overflow ptrdiff_t.) And, to add a bit more to the list of possible ubsan features (is this todo list maintained anywhere?), even if the size is such that operations on the array are in principle defined, it's possible that adjusting the stack pointer by too much may take it into other areas of memory and so cause stack overflow that doesn't get detected by the kernel. So maybe ubsan should imply -fstack-check or similar. Everything about VLA checking - checks on the size being positive and on it not being larger than PTRDIFF_MAX, and on avoiding stack overflow - applies equally to alloca: calls to alloca should also be instrumented. (But I think alloca (0) is valid.) -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH][ubsan] Add VLA bound instrumentation
On Thu, 12 Sep 2013, Marek Polacek wrote: > This patch adds the instrumentation of VLA bounds. Basically, it just > checks that the size of a VLA is positive. I.e., We also issue an error > if the size of the VLA is 0. It catches e.g. This is not an objection to this patch, but there are a few other bits of VLA bound instrumentation that could be done as well. If the size has a wide-enough type to be bigger than the target's SIZE_MAX, and is indeed bigger than SIZE_MAX, that could be detected at runtime as well. Or if the multiplication of array size and element size exceeds SIZE_MAX (this covers both elements of constant size, and elements that are themselves VLAs, but the former can be handled more efficiently by comparing against an appropriate constant rather than needing a runtime test for whether a multiplication in size_t overflows). (Actually, I believe sizes (in bytes) greater than target PTRDIFF_MAX, not just SIZE_MAX, should be caught, because pointer subtraction cannot work reliably with larger objects. So it's not just when the size or multiplication overflow size_t, but when they overflow ptrdiff_t.) -- Joseph S. Myers jos...@codesourcery.com
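Sketches of the extra cases described above (function and variable names are made up):

  void f (unsigned long long n, int m)
  {
    char a[n];      /* n may exceed the target's SIZE_MAX (or PTRDIFF_MAX)
                       when the size expression has a type wider than size_t */
    int  b[m][m];   /* the element type int[m] is itself variable-sized, so
                       the byte size m * (m * sizeof (int)) can overflow     */
  }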
Re: [patch c] Fix target/57848: internal compiler error on builtin and '#pragma GCC target()' option
On Fri, 13 Sep 2013, Kai Tietz wrote: > ChangeLog > > 2013-09-13 Kai Tietz > > PR target/57484 > * c/c-decl.c (c_builtin_function_ext_scope): Remove > wrong assumption that it is never called on a preexisting > symbol. c/ has its own ChangeLog so the entry should go there without the initial c/ in the file name in the ChangeLog entry. The patch is OK with the addition of the testcase you gave to gcc.target/i386/. -- Joseph S. Myers jos...@codesourcery.com
Re: New GCC options for loop vectorization
On Fri, 13 Sep 2013, Richard Biener wrote:

> @@ -1691,6 +1695,12 @@ common_handle_option (struct gcc_options
>        opts->x_flag_ipa_reference = false;
>        break;
>
> +    case OPT_ftree_vectorize:
> +      if (!opts_set->x_flag_tree_loop_vectorize)
> +        opts->x_flag_tree_loop_vectorize = value;
> +      if (!opts_set->x_flag_tree_slp_vectorize)
> +        opts->x_flag_tree_slp_vectorize = value;
> +      break;
>
> doesn't look obviously correct.  Does that handle

It looks right to me. The general principle is that the more specific option takes precedence over the less specific one, whatever the order on the command line.

> -ftree-vectorize -fno-tree-loop-vectorize -ftree-vectorize

Should mean -ftree-slp-vectorize.

> -ftree-loop-vectorize -fno-tree-vectorize

Should mean -ftree-loop-vectorize.

> -ftree-slp-vectorize -fno-tree-vectorize

Should mean -ftree-slp-vectorize. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH][ubsan] Add VLA bound instrumentation
On Fri, 13 Sep 2013, Marek Polacek wrote: > On Thu, Sep 12, 2013 at 04:05:48PM +0000, Joseph S. Myers wrote: > > cause stack overflow that doesn't get detected by the kernel. So maybe > > ubsan should imply -fstack-check or similar. > > Well, I have a patch for that, but I no longer think that ubsan should > imply -fstack-check, since e.g. > > int > main (void) > { > int x = -1; > int b[x - 4]; > /* ... */ > return 0; > } > > segfaults at runtime on int b[x - 4]; line when -fstack-check is used > (even without sanitizing), so we wouldn't give proper diagnostics > for stmts following that line... A guaranteed segfault is better than doing something undefined. But I'd expect sanitizing to make the initial check that the array size in bytes is in the range [1, PTRDIFF_MAX] and -fstack-check only to come into play if that passes (for sizes that are too large for the stack limit in effect at runtime although within the range that is in principle valid). You probably don't want to enable -fstack-check from ubsan until the checks for the range [1, PTRDIFF_MAX] are in place. (Those checks, incidentally, would need to apply not just to arrays whose specified size is variable, but also to constant-size arrays of variable-size arrays - if you have a VLA type, then define an array VLA array[10]; then you need to check that the result of the multiplication of sizes in bytes doesn't exceed PTRDIFF_MAX. So the more general checks can't all go in the place where you're inserting the checks for a single variable size in isolation.) -- Joseph S. Myers jos...@codesourcery.com
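The constant-size-array-of-VLA case mentioned above, as code (names made up):

  #include <stddef.h>

  void g (size_t n)
  {
    typedef int vla_t[n];   /* variably modified element type               */
    vla_t array[10];        /* the written bound is constant, but the total
                               size 10 * n * sizeof (int) can still exceed
                               PTRDIFF_MAX and needs a runtime check         */
  }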
Re: [PATCH] Don't always instrument shifts (PR sanitizer/58413)
On Fri, 13 Sep 2013, Marek Polacek wrote: > This is kind of fugly, but don't have anything better at the moment. > 2013-09-13 Marek Polacek > > PR sanitizer/58413 > c-family/ > * c-ubsan.c (ubsan_instrument_shift): Don't instrument > an expression if we can prove it is correct. Shouldn't the conditions used here for an expression being proved correct match those for instrumentation, i.e. depend on flag_isoc99 and on (cxx_dialect == cxx11 || cxx_dialect == cxx1y)? -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Don't always instrument shifts (PR sanitizer/58413)
On Mon, 16 Sep 2013, Marek Polacek wrote: > On Fri, Sep 13, 2013 at 07:18:24PM +0000, Joseph S. Myers wrote: > > On Fri, 13 Sep 2013, Marek Polacek wrote: > > > > > This is kind of fugly, but don't have anything better at the moment. > > > 2013-09-13 Marek Polacek > > > > > > PR sanitizer/58413 > > > c-family/ > > > * c-ubsan.c (ubsan_instrument_shift): Don't instrument > > > an expression if we can prove it is correct. > > > > Shouldn't the conditions used here for an expression being proved correct > > match those for instrumentation, i.e. depend on flag_isoc99 and on > > (cxx_dialect == cxx11 || cxx_dialect == cxx1y)? > > I don't think so: for the unsigned case we could restrict it to C > only, but it doesn't hurt doing it even for C++; in the signed case > we care only about C, but we can't restrict it to flag_isoc99 only, > since we need to prove the correctnes even for ANSI C. I don't understand how this answers my question. * The following principle applies: for any command-line options, with ubsan enabled, if an integer operation with particular (non-constant) operands is accepted by the sanitization code at runtime, the same operation with the same operand values (and types) as constants should be accepted at compile time (and at runtime) in contexts where an integer constant expression is required. Does this patch make the compiler meet this principle, for all the different command-line options that vary what is accepted at runtime? * The following principle also applies: for any command-line options, with ubsan enabled, if an integer operation with particular (non-constant) operands is rejected by the sanitization code at runtime, the same operation with the same operand values (and types) as constants should be rejected at compile time (or at runtime) in contexts where an integer constant expression is required. Does this patch make the compiler meet this principle, for all the different command-line options that vary what is accepted at runtime? -- Joseph S. Myers jos...@codesourcery.com
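For example, with 32-bit int (illustrative):

  enum { E = 1 << 31 };   /* integer-constant-expression context: must be
                             accepted or rejected consistently with the
                             runtime check                                   */
  int shift (int s) { return 1 << s; }   /* runtime context: instrumented by
                                            ubsan and diagnosed when s == 31,
                                            if the shift is undefined in the
                                            selected language standard       */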
Re: [gomp4] C UDR support
On Wed, 18 Sep 2013, Jakub Jelinek wrote: > Hi! > > This is the C FE counterpart of the C++ FE OpenMP 4.0 user defined > reductions, as C has no templates, class inheritance, doesn't > allow #pragma omp declare reduction in structs/unions, the implementation is > significantly simpler. > > Joseph, any comments on this? No comments here. -- Joseph S. Myers jos...@codesourcery.com
Re: [Patch, Darwin] update t-* and x-* fragments after switch to auto-deps.
On Sat, 28 Sep 2013, Iain Sandoe wrote: > * config/t-darwin (darwin.o, darwin-c.o, darwin-f.o, darwin-driver.o): > Use COMPILE and POSTCOMPILE. > * config/x-darwin (host-darwin.o): Likewise. > * config/i386/x-darwin (host-i386-darwin.o): Likewise. > * config/rs6000/x-darwin (host-ppc-darwin.o): Likewise. > * config/rs6000/x-darwin64 (host-ppc64-darwin.o): Likewise. Do you need these compilation rules at all? Or could you change config.host to use paths such as config/host-darwin.o rather than just host-darwin.o, and so allow the generic rules to be used (my understanding was that the auto-deps patch series made lots of such changes to the locations of .o files in the build tree to avoid needing special compilation rules for particular files)? -- Joseph S. Myers jos...@codesourcery.com
Re: Defining C99 predefined macros for whole translation unit
le, const char *fname)
+{
+  return _cpp_stack_include (pfile, fname, true, IT_DEFAULT);
+}
+
 /* Do appropriate cleanup when a file INC's buffer is popped off
    the input stack.  */
 void
Index: libcpp/include/cpplib.h
===================================================================
--- libcpp/include/cpplib.h	(revision 191711)
+++ libcpp/include/cpplib.h	(working copy)
@@ -1,6 +1,6 @@
 /* Definitions for CPP library.
    Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003,
-   2004, 2005, 2007, 2008, 2009, 2010, 2011
+   2004, 2005, 2007, 2008, 2009, 2010, 2011, 2012
    Free Software Foundation, Inc.
    Written by Per Bothner, 1994-95.
@@ -1010,6 +1010,7 @@ extern bool cpp_included (cpp_reader *, const char
 extern bool cpp_included_before (cpp_reader *, const char *, source_location);
 extern void cpp_make_system_header (cpp_reader *, int, int);
 extern bool cpp_push_include (cpp_reader *, const char *);
+extern bool cpp_push_default_include (cpp_reader *, const char *);
 extern void cpp_change_file (cpp_reader *, enum lc_reason, const char *);
 extern const char *cpp_get_path (struct _cpp_file *);
 extern cpp_dir *cpp_get_dir (struct _cpp_file *);
Index: libcpp/init.c
===================================================================
--- libcpp/init.c	(revision 191711)
+++ libcpp/init.c	(working copy)
@@ -1,7 +1,7 @@
 /* CPP Library.
    Copyright (C) 1986, 1987, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998,
    1999, 2000, 2001, 2002, 2003, 2004, 2005, 2007, 2008,
-   2009, 2010, 2011 Free Software Foundation, Inc.
+   2009, 2010, 2011, 2012 Free Software Foundation, Inc.
    Contributed by Per Bothner, 1994-95.
    Based on CCCP program by Paul Rubin, June 1986
    Adapted to ANSI C, Richard Stallman, Jan 1987
@@ -593,7 +593,7 @@ cpp_read_main_file (cpp_reader *pfile, const char
     }

   pfile->main_file
-    = _cpp_find_file (pfile, fname, &pfile->no_search_path, false, 0);
+    = _cpp_find_file (pfile, fname, &pfile->no_search_path, false, 0, false);

   if (_cpp_find_failed (pfile->main_file))
     return NULL;
Index: libcpp/internal.h
===================================================================
--- libcpp/internal.h	(revision 191711)
+++ libcpp/internal.h	(working copy)
@@ -117,7 +117,7 @@ extern unsigned char *_cpp_unaligned_alloc (cpp_re
 #define BUFF_LIMIT(BUFF) ((BUFF)->limit)

 /* #include types.  */
-enum include_type {IT_INCLUDE, IT_INCLUDE_NEXT, IT_IMPORT, IT_CMDLINE};
+enum include_type {IT_INCLUDE, IT_INCLUDE_NEXT, IT_IMPORT, IT_CMDLINE, IT_DEFAULT};

 union utoken
 {
@@ -625,7 +625,7 @@ extern void _cpp_destroy_hashtable (cpp_reader *);
 /* In files.c */
 typedef struct _cpp_file _cpp_file;
 extern _cpp_file *_cpp_find_file (cpp_reader *, const char *, cpp_dir *,
-				  bool, int);
+				  bool, int, bool);
 extern bool _cpp_find_failed (_cpp_file *);
 extern void _cpp_mark_file_once_only (cpp_reader *, struct _cpp_file *);
 extern void _cpp_fake_include (cpp_reader *, const char *);
-- Joseph S. Myers jos...@codesourcery.com
Re: [Ping]FW: [PATCH] Cilk Plus merging to trunk (2 of n)
On Thu, 4 Oct 2012, Iyer, Balaji V wrote: > Did you get a chance to look at this submission? I think I have > fixed all the changes you have mentioned. Is it OK for trunk? I expect to look at the revised specification and patch later this month. My initial impression from a glance at the revised specification was that it was much improved, but some issues would probably still require further work on the specification - and once those had been resolved, then a detailed review of the patch would make sense. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH, gdc] - Merging gdc (GNU D Compiler) into gcc
On Thu, 4 Oct 2012, Iain Buclaw wrote: > I would like to get a bump on this. > > It's been a while, and there have been quite a number of changes since > the initial post that address many of the issues raised. Rather than > reposting patches, someone mentioned attaching changelog, well, here > it is. > > Repository is still located here: https://github.com/D-Programming-GDC/GDC > > Would it be possible to have a re-newed review? I'm still keen on > pushing this, however I'm not certain of the right plan of execution. > :-) Post whatever exact patches you propose for current mainline and want reviewed (or links if they are too big). Probably also post a self-contained description of the copyright / license / assignment situation for the code, with a pointer to the public statement from the SC of FSF approval of anything outside the standard arrangements described in the Mission Statement. (I think the idea was something like the Go front end - the front end proper being externally maintained, while the GCC interface code is assigned to the FSF as usual - that is, every contributor of more than fifteen lines or so of copyrightable text in the GCC interface code has an assignment on file with the FSF. But whatever the arrangement is, it should be approved by the FSF.) -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH, gdc] - Merging gdc (GNU D Compiler) into gcc
On Thu, 4 Oct 2012, Iain Buclaw wrote: > The only patches to gcc proper are documentation-related and adding > the D frontend / libphobos to configure and make files. I would have > thought that these would typically only be included with the actual > front-end? Looking back at my previous review comments, I suggested that you might need to split up c-common.[ch] so that certain parts of attribute handling could be shared with D, because duplicate code copied from elsewhere in GCC was not an appropriate implementation approach. Have you then eliminated the duplicate code in some other way that does not involve splitting up those files so code can be shared? -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH, gdc] - Merging gdc (GNU D Compiler) into gcc
On Fri, 5 Oct 2012, Iain Buclaw wrote: > Would the best approach be to move all handle_* functions and any > helper functions into a new source file that can be shared between > frontends, and define two new frontend hooks, > LANG_HOOK_ATTRIBUTE_TABLE and LANG_HOOK_FORMAT_ATTRIBUTE_TABLE ? I don't know what the best approach would be. It's up to you to work out a clean design (that is, a design that makes logical sense as a way to implement the relevant features - starting from the features, not the existing code, as a justification for the design) and justify in your patch posting why you think it's the right way to refactor this code so it can be shared. -- Joseph S. Myers jos...@codesourcery.com
Re: patch to fix constant math - first small patch
On Fri, 5 Oct 2012, Kenneth Zadeck wrote: > +# define HOST_HALF_WIDE_INT_PRINT "h" This may cause problems on hosts not supporting %hd (MinGW?), and there's no real need for using "h" here given the promotion of short to int; you can just use "" (rather than e.g. needing special handling in xm-mingw32.h like is done for HOST_LONG_LONG_FORMAT). -- Joseph S. Myers jos...@codesourcery.com
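For illustration, a minimal self-contained example (not GCC code; the macro here is only a stand-in for the one under discussion) of why the empty string suffices - the short argument is promoted to int before it reaches printf, so a plain %d conversion already matches:

  #include <stdio.h>

  /* Stand-in for the macro under discussion: empty rather than "h",
     relying on the promotion of short to int in variadic calls.  */
  #define HOST_HALF_WIDE_INT_PRINT ""

  int
  main (void)
  {
    short x = -123;
    printf ("%" HOST_HALF_WIDE_INT_PRINT "d\n", x);  /* prints -123 */
    return 0;
  }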
Ping Re: Defining C99 predefined macros for whole translation unit
Ping. This patch <http://gcc.gnu.org/ml/gcc-patches/2012-09/msg01907.html> (non-C parts) is pending review. -- Joseph S. Myers jos...@codesourcery.com
Re: [RFC] New feature to reuse one multilib among different targets
On Wed, 10 Oct 2012, Terry Guo wrote: > Hello Joseph, > > Please help to review this new Multilib feature. It intends to provide user Your patch doesn't include documentation for fragments.texi (which needs to define the semantics without reference to the details of what gcc.c's internal datastructures for multilibs, as output by genmultilib, might look like). I am unconvinced that directly adding to the drivers' internal datastructures like this is a sensible interface for specifying multilib choice in target makefile fragments. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] -Wsizeof-pointer-memaccess improvements (PR c/54381)
On Wed, 10 Oct 2012, Jakub Jelinek wrote: > Hi! > > This is something Florian requested on ml and Gerard in bugzilla > after -Wsizeof-pointer-memaccess has been added in August for the C FE, > but I've been deferring the work until C++ FE support will be in too. > > It adds diagnostics to some more builtins (stpncpy, bcopy, bcmp, bzero, > snprintf and vsnprintf), also __builtin_*_chk for all the old and new > builtins where applicable, and changes the wording of *cmp* builtin > diagnostics (which don't have source/destination arguments, but two source > arguments). > > To support e.g. snprintf/vsnprintf, I had to change the FEs a little bit, > now first 3 arguments are remembered whether they were sizeof, their > original arguments and locus, instead of the last argument. > > Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk? The C front-end changes are OK. -- Joseph S. Myers jos...@codesourcery.com
Re: PR fortran/51727: make module files reproducible, question on C++ in gcc
On Sat, 13 Oct 2012, Diego Novillo wrote: > On 2012-10-13 14:04 , Tobias Schlüter wrote: > > On 2012-10-13 20:00, Diego Novillo wrote: > > > On 2012-10-13 09:26 , Tobias Schlüter wrote: > > > > first a question also to non-gfortraners: if I want to use std::map, > > > > where do I "#include <map>"? In system.h? > > > > > > No. Ada includes system.h in pure C code. Why not include it exactly > > > where you need it? > > > > Ok, I was just wondering if there's a rule because a quick grep revealed > > no non-header source file that includes system headers. > > Joseph or others may have reason to create a system.hxx file or some such, but > in principle I don't see why we shouldn't include std library headers as we > need them. > > Joseph? The poisoning of various standard library functions that should not be used directly in GCC can be problematic if system headers, using those functions, are included after that poisoning. Thus in general it's better for system header includes to go in system.h, before the poisoning. In addition, where there is some autoconf configuration about whether to include a header, it's generally preferred to keep down the number of places with such a conditional on host features (encapsulating whatever features the header provides in some way so that other code can use them unconditionally). -- Joseph S. Myers jos...@codesourcery.com
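A minimal self-contained illustration of that ordering constraint (not actual GCC sources; the poisoned identifier is chosen arbitrarily):

  #include <map>              /* OK: the system header comes first.  */
  #pragma GCC poison strdup   /* From here on, any occurrence of 'strdup',
                                 including one in a header included later,
                                 is an error.  */

  int
  main ()
  {
    std::map<int, int> m;
    m[1] = 2;
    return m.size () == 1 ? 0 : 1;
  }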
Re: encoding all aliases options in .opt files
On Sat, 13 Oct 2012, Manuel López-Ibáñez wrote: > OK. The attached patch implements this. Does the approach look ok? I > will write changelog and more comments if it seems reasonable. One Without the comments explaining the semantics of the new functions and their parameters, I'm not going to attempt to reverse-engineer what the approach in question is intended to be and so whether it's reasonable or not. > 2) While fixing this, I was thinking: Why the difference between > Joined and Separate? Why not make every function that takes an > argument work with and without '=' and with separate argument. It > seems we could remove a lot of options this way. What I am proposing > is, instead of: I see no significant advantage to users in having all these different ways to pass options, and multiple disadvantages: * If you add yet more ways of passing options, then any future overhaul of option handling needs to preserve those in addition to all the others. The fewer ways there are, the fewer constraints on the implementation. * Options like this > aux-infoFILE /* we could accept this to be compatible with some > options like -B */ are liable to be a pain for anyone, seeing such an option, to work out what is the option name that they might search for to get more information, and what is the argument. * In general, if all users are using an option in the same form it's easier for them to tell that another person really is using the same option rather than something that might be different from the option they are used to. * This also may not work so well with JoinedOrMissing options. > Wstrict-aliasing2 Shades of the old -gdwarf2 that once meant "DWARF 1, debug level 2", not to be confused with -gdwarf-2. I don't think such cryptic forms should be encouraged. > Wstrict-aliasing 2 Ambiguous with the existing -Wstrict-aliasing option followed by a linker input file called 2. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Rs6000 infrastructure cleanup (switches), revised patch #4
On Fri, 12 Oct 2012, Michael Meissner wrote: > I decided to see if it was possible to simplify the change over by adding > another flag word in the .opt handling to give the old names (TARGET_ and > MASK_). For Joseph Myers and Neil Booth, the issue is when changing all > of the switches that use Mask(xxx) and InverseMask(xxx) to also use Var(xxx), > the option machinery changes the names of the macros to OPTION_ and > OPTION_MASK_, which in turn causes lots and lots of changes for patch > review. Some can't be omitted, where we referred to the 'target_flags' and > 'target_flags_explicit' fields, but at least it reduces the number of other > changes. I think changing the names is safer - it's immediately obvious as a build failure if you missed anything. If you have MASK_* names for bits in more than one flags variable, there's a risk of accidentally testing a bit in the wrong variable, or ORing together bits that belong in different variables in a way that can't possibly work, without this causing immediately visible problems. Maybe you're actually only using the names for a single variable, but it still seems error-prone for future changes. I guess TARGET_* names should be safe in a way that MASK_* ones aren't for multiple variables - but then I wouldn't have options to do things two different ways, but instead use TARGET_* instead of OPTION_* and fix existing uses of OPTION_* for such bits. I don't know if with C++ it's possible to keep the names the same *and* ensure that compile time errors occur if bits from different variables are used together or a bit is used with the wrong variable *and* avoid any other issues occurring as a consequence of such changes. -- Joseph S. Myers jos...@codesourcery.com
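A stand-alone illustration of that hazard (hypothetical names, not the rs6000 patch itself): with plain integer masks, nothing stops a bit defined for one flags word from being tested against a different variable, and the mistake compiles silently:

  #define MASK_FOO 0x1   /* bit defined for target_flags */
  #define MASK_BAR 0x1   /* bit defined for a second flags word */

  static int target_flags;
  static int other_isa_flags;

  int
  main (void)
  {
    other_isa_flags |= MASK_BAR;
    /* Intended: test MASK_BAR in other_isa_flags.  Actual: the wrong
       variable is tested, yet nothing diagnoses it.  */
    return (target_flags & MASK_BAR) != 0;
  }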
Ping^2 Re: Defining C99 predefined macros for whole translation unit
Ping^2. This patch <http://gcc.gnu.org/ml/gcc-patches/2012-09/msg01907.html> (non-C parts) is still pending review. -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 0 of 16)
On Mon, 15 Oct 2012, Gary Funck wrote: > Various UPC language related checks and operations > are called in the "C" front-end and middle-end. > To insure that these operations are defined, > when linked with the other language front-ends > and compilers, these functions are stub-ed, > in a fashion similar to Objective C: Is there a reason you chose this approach rather than the -fcilkplus approach of enabling an extension in the C front end given a command-line option? (If you don't want to support e.g. the ObjC / UPC combination, you can always give an error in such cases.) In general I think such conditionals are preferable to linking in stub variants of functions - and I'm sure people doing all-languages LTO bootstraps will appreciate not having to do link-time optimization of the language-independent parts of the compiler yet more times because of yet another binary like cc1, cc1plus, ... that links in much the same code. The functions you stub out would then all start with assertions that they are only ever called in UPC mode - or if they are meant to be called in C mode but do nothing in that case, with appropriate checks that return early for C (if needed). -- Joseph S. Myers jos...@codesourcery.com
Re: [c-family] Fix -fdump-ada-spec buglet in C++
On Mon, 15 Oct 2012, Eric Botcazou wrote: > Hi, > > since the sizetype change, we generate invalid Ada for flexible array members > with -fdump-ada-spec in C++. The attached patch fixes this issue and also > partially revamps the code to polish some rough edges. > > Tested on x86_64-suse-linux, OK for mainline? OK. -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 0 of 16)
On Mon, 15 Oct 2012, Gary Funck wrote: > On 10/15/12 17:06:28, Joseph S. Myers wrote: > > On Mon, 15 Oct 2012, Gary Funck wrote: > > > Various UPC language related checks and operations > > > are called in the "C" front-end and middle-end. > > > To insure that these operations are defined, > > > when linked with the other language front-ends > > > and compilers, these functions are stub-ed, > > > in a fashion similar to Objective C: > > > > Is there a reason you chose this approach rather than the -fcilkplus > > approach of enabling an extension in the C front end given a command-line > > option? (If you don't want to support e.g. the ObjC / UPC combination, > > you can always give an error in such cases.) > > Back when we began to develop GUPC, it was recommended that we > introduce the UPC capability as a language dialect, similar to > Objective C. That is the approach that we have taken. Recommended where? I think that approach has been a bad idea for a long time and the approach of building into cc1, as taken by the cilkplus patches, is better (and that really most objc/ code should be like c-family/, built once and linked into both cc1 and cc1plus, though in its present state that's much harder to achieve). > I agree that there is no de facto reason that cc1upc is built > other than the fact we use a similar approach to Objective C. > However, I think that re-working this aspect of how GUPC is > implemented will require a fair amount of time/effort. If we I'd expect it to be a fairly straightforward rework (as would making ObjC a mode of cc1, if ObjC++ didn't exist). -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 01 of 16)
On Mon, 15 Oct 2012, Gary Funck wrote: > Some UPC-specific configuration options are added to > the top-level configure.ac and gcc/configure.ac scripts. Any patch that includes new configure options should include the documentation for those options in install.texi. Also please include ChangeLog entries with each patch submission, and a more detailed description of the changes involved and the rationale for them and implementation choices made. Also please see the review comments I sent previously in <http://gcc.gnu.org/ml/gcc-patches/2010-07/msg02136.html> (at least, this patch has a license notice in obsolete form, referring to GPLv2 and giving an FSF postal address instead of a URL). The changes to darwin.c, darwin.h, i386.c, rs6000.c should definitely be given their own rationale and called to the attention of relevant target maintainers, possibly through extracting them into separate patches in the series with meaningful subject lines mentioning the relevant targets, not mixed into what's ostensibly a build-system patch. The configure options need justification for their existence and (given that they exist) for working using #if, given that (as I said before, in the above referenced 2010 message, in <http://gcc.gnu.org/ml/gcc-patches/2011-07/msg00083.html> and <http://gcc.gnu.org/ml/gcc-patches/2011-07/msg00135.html>), "if" conditionals are preferred to #if and it's better for users to be able to choose things when they run the compiler rather than only having it determined by the person building the compiler (so configure options determining defaults are better than configure options that force the compiler to behave in a particular way - even if the option affects runtime library builds, it's still better to have a command-line option and the configure option just affect the default, since then at least people can see how code generation changes with the option and potentially build multilibs with both variants). In general, please make sure that you take account of previous feedback on previous GUPC patch submissions, and explain in your submissions how you have adjusted things for this feedback, or, if you are not making a previously requested change, why you consider the approach taken in your patch to be optimal. The change to ACX_BUGURL in configure.ac is simply wrong. Adding $(C_STUB_OBJS) uses to random language makefiles is very suspicious. stub-objc.o isn't needed by non-C-family front ends. Nor should stub-upc.o be - although I think the whole approach of using stub functions like that is wrong, as discussed. The language-independent compiler should be using language-independent interfaces to process language-independent internal representations; if there are additions to the language-independent IR to handle UPC, then the relevant code to handle them should be built in unconditionally. Front ends should not have their own Makefile.in if at all avoidable, only Make-lang.in - why do you have gcc/upc/Makefile.in? Note that it says "Derived from objc/Makefile.in", which was removed in r37088 (2000-10-27). See what I said in my 2010 review about "careful review for whether it ... takes account of cleanups done in the past decade or so" - you really do need to read through the entirety of the code to be submitted, line by line, and check how it stands up to the conventions followed elsewhere in the compiler and to issues that get raised in reviews of other patches nowadays. 
(But with rearrangement to be an option to the C front end, I expect you shouldn't need upc/Make-lang.in either.) -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 02 of 16)
On Mon, 15 Oct 2012, Gary Funck wrote: > Attached, patch 02 of 16. > > This patch set lists the small set of changes required > in the C++ and Objective C front-end to accommodate > changes needed for UPC. > > gcc/c/c-decl.c is also included here, to illustrate > both the UPC front-end changes and to motivate the introduction of > c_build_qualified_type_1() which accepts an additional argument > which is the "layout qualifier" (blocking factor) that is unique to UPC. Please go back to my previous comments, then post a full patch series *once those issues are addressed*. Each submission needs ChangeLog entries and more detailed descriptions / rationale. For example, explain why a lang hook change is made in c-objc-common.h. Make sure that you consistently use error_at, not error, for new diagnostics, or explain in your rationale why a location isn't readily available. Make sure that source lines do not go over 80 columns. There are at least some other coding standards issues; some visible at a glance are the comment /* for static declarations of shared arrays */ (start with a capital letter, end with '.', actually say what the semantics of this variable's value are if it needs a comment) and a missing space in TREE_CODE(decl). Avoid any diff lines that are solely changing trailing whitespace (you have at least one diff hunk in c-decl.c that appears to do nothing but add such whitespace). Some indentation also looks suspicious - look at the @@ -8859,6 +9048,23 @@ declspecs_add_qual hunk, for example, where inconsistent indentation reveals that you are indenting with spaces in some places, except for one line using a TAB (all new lines should be indented with TABs for every 8 spaces). -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 01 of 16)
On Mon, 15 Oct 2012, Gary Funck wrote: > Regarding ChangeLog entries, I can add them to each > change set, if that is needed. However, I was hoping to > wait on that until the patches are generally approved, > in principle. As a patch submitter, it's your responsibility to make it easy for reviewers to review your patch. The ChangeLogs provide a roadmap for the patch - showing what is changed where, at a glance. They go alongside the plain text rationale, explaining at a higher level the structure and rationale for the changes and anything that might seem unclear as to why the patches work in a particular way. Each patch submission's Subject: line should best include a brief description of that patch as well, not just a patch number. Any changes that are not immediately and obviously inherently UPC-specific deserve specific explanation in the plain text rationale. That certainly includes such things as the langhook change I mentioned, the target changes and the changes to non-C-family front-end Make-lang.in files. In the case of the target changes, there should be a high-level explanation of how other target maintainers might determine whether changes to their targets might be appropriate for UPC, or how you determined that only those targets should be changed. For changes developed over several years, reading through them in detail to prepare a ChangeLog is particularly valuable as it will show up places where there are spurious changes (such as those to whitespace) that may have resulted from code being changed and changed again in the course of development, but that are not needed for a clean patch submission. I don't really think your division into 16 patches is a particularly good one - it separates things that should be together, and joins things that might better be separate. Putting documentation in a separate patch from the things documented is always bad, for example - if you have new target hooks, put the .texi changes in the same patch that adds those hooks. A better division might be: * Target changes, split out per target. * Changes to existing C front-end files (including c-family). * Changes to any other front ends, split out by front end. * New front-end files. * Changes to the language and target independent compiler (including build system code). * Library. * Compiler testsuite. -- Joseph S. Myers jos...@codesourcery.com
Re: LangEnabledBy with arguments
On Sun, 14 Oct 2012, Manuel López-Ibáñez wrote: > 2012-10-14 Manuel López-Ibáñez > > PR c/53063 > PR c/40989 > gcc/ > * optc-gen.awk: Handle new form of LangEnabledBy. > * opts.c (set_Wstrict_aliasing): Declare here. Make static. > * common.opt (Wstrict-aliasing=,Wstrict-overflow=): Do not use Init. > * doc/options.texi (LangEnabledBy): Document new form. > * flags.h (set_Wstrict_aliasing): Do not declare. > c-family/ > * c.opt (Wstrict-aliasing=,Wstrict-overflow=): Use LangEnabledBy. > * c-opts.c (c_common_handle_option): Do not set them here. Add > comment. > (c_common_post_options): Likewise. > testsuite/ > * gcc.dg/Wstrict-overflow-24.c: New. OK. -- Joseph S. Myers jos...@codesourcery.com
Re: EnabledBy(Wa && Wb)
On Mon, 15 Oct 2012, Manuel López-Ibáñez wrote: > 2012-10-15 Manuel López-Ibáñez > > PR c/53063 > PR c/40989 > * doc/options.texi (EnabledBy): Document new form. > * optc-gen.awk: Handle new form of EnabledBy. > * common.opt (Wunused-but-set-parameter): Use EnabledBy. > (Wunused-parameter): Likewise. > * opts.c (finish_options): Do not handle them explicitly. > * opt-functions.awk (search_var_name): New. OK. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Rs6000 infrastructure cleanup (switches), revised patch #4
On Mon, 15 Oct 2012, Michael Meissner wrote: > > I think changing the names is safer - it's immediately obvious as a build > > failure if you missed anything. If you have MASK_* names for bits in more > > than one flags variable, there's a risk of accidentally testing a bit in > > the wrong variable, or ORing together bits that belong in different > > variables in a way that can't possibly work, without this causing > > immediately visible problems. Maybe you're actually only using the names > > for a single variable, but it still seems error-prone for future changes. > > Well to be safest, we should have a prefix for each word if you define more > than one flag word. Preferably a name that the user can specify in the .opt > file. Yes, for MASK_*. > > I guess TARGET_* names should be safe in a way that MASK_* ones aren't for > > multiple variables - but then I wouldn't have options to do things two > > different ways, but instead use TARGET_* instead of OPTION_* and fix > > existing uses of OPTION_* for such bits. > > I can see the MASK/OPTION_MASK thing, but not having TARGET_* means there are > lots and lots of code changes. > > Unfortunately in order to bring the number of changes down to a point where > the > patches can be reviewed, my previous patches did: > > #define TARGET_FOO OPTION_FOO > #define MASK_FOO OPTION_MASK_FOO The first of those #defines might be an intermediate step towards actually replacing OPTION_FOO by TARGET_FOO everywhere (since there seems to be no actual need for the different naming convention there, only for the masks). But I don't really think we should delay the mechanical replacement much (changing all OPTION_* that aren't OPTION_MASK_* to be TARGET_* should be a straightforward change to make automatically, although a pain to review the results of that substitution so maybe best kept in a separate patch from one doing anything more substantive). That is: 1. Patch adding TARGET_FOO aliases for OPTION_FOO (small change to the awk scripts and associated documentation, I expect). 2. Large, mechanical, automatically generated patch to change existing OPTION_FOO users (or maybe one such patch per target). 3. Patch removing the OPTION_FOO name (small change to awk scripts and documentation). Then you've eliminated one unnecessary cause of changes when moving bits out of target_flags. > If TargetName were defined, it would use TARGET_ instead of OPTION_, > but the OPTION_MASK_ would not be changed. Not needed, given the above sequence of changes. > If SetFunction was defined, the opt*.awk files would generate: > > #define SET_FOO(VALUE)\ > do { \ > if (VALUE) \ > target_flags &= ~MASK_FOO;\ > else\ > target_flags |= MASK_FOO; \ > } while (0) > > If ExplicitFunction was defined, the opt*.awk files would generate: > > #define EXPLICIT_FOO(VALUE) \ > ((global_options_set.x_target_flags & MASK_FOO) != 0) I'd like any such new macros to take an argument that's the pointer to the relevant options structure (global_options, global_options_set). If the place where the macro is called has a pointer available, then it can be passed in, otherwise pass in &global_options or &global_options_set unless and until such a pointer becomes available in the relevant place. > How would you feel about SetFunction, ExplicitFunction, and the reduced > TargetName? 
The principle of having macros for setting flags or testing if they are explicitly set is fine, though it's not clear to me that they need any such special settings as SetFunction and ExplicitFunction (rather than being generated unconditionally). I'd quite like the macros such as target_flags that refer to global options to end up not being lvalues at all. That helps ensure that option settings are only modified in limited places that have options pointers. It would be nice eventually for such things as "optimize" and "target" attributes to be able to swap options structures, and to work closer to how options on the command line are processed - for that, you want careful control on what places actually modify options at all. -- Joseph S. Myers jos...@codesourcery.com
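As a rough self-contained sketch of that suggestion (hypothetical macro names and a cut-down stand-in for the options structure, not the code the awk scripts would actually generate), the setter and the explicitly-set query would take the options pointer as an argument, with callers passing &global_options or &global_options_set until a local pointer is available:

  struct gcc_options { int x_target_flags; };   /* cut-down stand-in */

  #define MASK_FOO 0x1

  #define TARGET_SET_FOO(OPTS, VALUE)          \
    do {                                       \
      if (VALUE)                               \
        (OPTS)->x_target_flags |= MASK_FOO;    \
      else                                     \
        (OPTS)->x_target_flags &= ~MASK_FOO;   \
    } while (0)

  #define TARGET_EXPLICIT_FOO(OPTS_SET)        \
    (((OPTS_SET)->x_target_flags & MASK_FOO) != 0)

  int
  main (void)
  {
    struct gcc_options opts = { 0 }, opts_set = { 0 };
    TARGET_SET_FOO (&opts, 1);                /* caller passes &global_options */
    return TARGET_EXPLICIT_FOO (&opts_set);   /* or &global_options_set */
  }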
Re: [PATCH] Rs6000 infrastructure cleanup (switches), revised patch #4
On Tue, 16 Oct 2012, Michael Meissner wrote: > It occurs to me that now that we've committed to GCC being done in C++, we > could just make global_options{,_set} be a class instead of a structure. So > you could say: > > global_options.set_FOO (value) > > Or: > > global_options.set_FOO (); > global_options.clear_FOO (); > > I could generate the macros (or inline functions) if you would prefer to stick > the C style of doing things. However, as an old C dinosaur, I'm not sure of > all of the ramifications of doing this. It just seems it would be cleaner to > use the class structure, instead of passing pointers. In general, as much as possible should use an instance of struct gcc_options that is passed explicitly to the relevant code (or associated with the function being compiled, etc.), rather than using global_options directly (explicitly or implicitly). The existing way of doing that is using a pointer to a gcc_options structure. With a class I'd think you'd still need to pass it around as either a pointer or a reference (even if you then use member functions for some operations on these structures), and I'm not aware of any particular advantage of using a reference. I do not think most functions that happen to take a gcc_options pointer (often along with lots of other pointers to other pieces of state) are particularly suited to being member functions of gcc_options. Given that existing practice is passing pointers around, I'd think that's appropriate for any new such functions / macros, unless and until we have some clear notion of when functionality should or should not be a member function of gcc_options. -- Joseph S. Myers jos...@codesourcery.com
Re: RFC: Merge the GUPC branch into the GCC 4.8 trunk (patch 01 of 16)
On Thu, 18 Oct 2012, Gary Funck wrote: > GUPC (then called GCC/UPC) dates back to the GCC 2.7 and GCC 2.95 days. > The GCC 2.7 based implementation was a prototype, and the GCC 2.95 version > was a first attempt to implement a UPC compiler that fits within the GCC > build framework. At the time, we had discussions with Mark Mitchell, > Mike Stump and perhaps others regarding the best method for introducing > a UPC compiler into GCC. The consensus recommendation at the time was to > implement GCC/UPC as a language dialect, similar to Objective-C. > Thus, much of the initial work was patterned after ObjC; it used > stubs and built a distinct front-end, so that is what we did. GCC coding style has moved on a long way since then, and GCC generally has been moving away from conditional compilation, from #if to if conditions, from conditionally building something in to making the choice when the compiler is run. I don't think ObjC as a separate front end is a good idea now, and don't think UPC should be added that way. I also still don't think making UPC into a mode of the C front end should be more than a day or two's work. > My main issue with entertaining large re-structuring of GUPC > at the moment is that I would like to see if we can introduce > GUPC into GCC 4.8. As you point out, there are a lot of If the concern is 4.8, I'd think the priority should be the changes to the language and target independent code - changes to handle whatever additions UPC may make to GENERIC / GIMPLE - which will need middle-end maintainer review and which are probably the highest-risk part of the changes. > Also, specifying the flavor of PTS representation on the UPC > compiler command line (for example) is a departure from the > current method of configuring and invoking GUPC. There is a As noted above, use of #if is no longer generally good practice in GCC. Even "if (macro)", which might turn into "if (0)", is better than #if - at least it means all the code does get checked by the compiler, whichever configuration is built - but command-line options are certainly better. > JSM: Adding $(C_STUB_OBJS) uses to random language makefiles is very > JSM: suspicious. stub-objc.o isn't needed by non-C-family front ends. Nor > JSM: should stub-upc.o be - although I think the whole approach of using stub > JSM: functions like that is wrong, as discussed. > > I'm not fond of stubs or adding extra files to other front-ends, > but those changes seemed necessary at the time to get things to link. "seemed necessary at the time to get things to link" isn't sufficient reason. You'd need to state what symbols, used in what files, were undefined. But then the solution won't be to link in a stub file - it will be to link in real code, unconditionally. Whatever additions you make to GENERIC / GIMPLE for UPC should be handled entirely by language-independent code - that is, code that is always linked in, for all languages, to handle those GENERIC / GIMPLE features (even if most front ends don't generate those features), rather than code only linked in for particular front ends. > JSM: The language-independent > JSM: compiler should be using language-independent interfaces to process > JSM: language-independent internal representations; if there are additions to > JSM: the language-independent IR to handle UPC, then the relevant code to > JSM: handle them should be built in unconditionally. > > We followed the Objective C approach for better/worse. ObjC does not require stubs to be linked into non-C-family front ends. 
Whatever code exists in the language-independent compiler to handle GIMPLE from the ObjC front end gets linked into all compilers without any need to link in any stubs. > The few additional tree nodes needed for UPC are defined > in a language dependent tree definition file. The UPC-specific Whatever those get gimplified to needs to be language-independent (that is, handled after gimplification entirely by language-independent parts of the compiler). -- Joseph S. Myers jos...@codesourcery.com
Re: PR c/53063 Use (Lang)EnabledBy for a bunch of options
On Wed, 17 Oct 2012, Manuel López-Ibáñez wrote: > This patch changes most trivial cases to use the new (Lang)EnabledBy. > > Bootstrapped and regression tested on x86_64-linux-gnu. OK? > > 2012-10-17 Manuel López-Ibáñez > > PR c/53063 > PR c/40989 > c-family/ > * c.opt (Waddress,Wchar-subscripts,Wsign-conversion,Wimplicit, > Wimplicit-function-declaration,Wimplicit-int,Wsizeof-pointer-memaccess, > Wnarrowing,Wparentheses,Wpointer-sign,Wreturn-type,Wsequence-point, > Wsign-compare,Wuninitialized,Wmaybe-uninitialized,Wunused, > Wvolatile-register-var): Add LangEnabledBy or EnabledBy. > * c-opts.c (c_common_handle_option): Remove explicit handling from > here. > (c_common_post_options): Likewise. > gcc/ > * opts.c (finish_options): Remove explicit handling from here. OK. > Not sure how to encode these dependencies. Ideas? No specific suggestions here. If a particular case isn't common enough to have a .opt mechanism for it, then in principle the use of the _set structures tracking what was explicitly set could replace the use of -1 initializers. But since the _set structures generally correspond to what was set directly, whereas implicit setting by another option also replaces the -1 value, this might only work given the extension of _set structures to record the distance between the option passed by the user and the one set, not just whether each option was set directly. -- Joseph S. Myers jos...@codesourcery.com
Re: PR c/53063 Handle Wformat with LangEnabledBy
On Wed, 17 Oct 2012, Manuel López-Ibáñez wrote: > documentation but I can also implement -Wformat=0 being an alias for > -Wno-format and -Wformat=1 an alias for -Wformat and simply reject > -Wno-format=. I think that's what's wanted; -Wno-format= should be rejected, -Wformat= should take an arbitrary integer level (of which at present all those above 2 are equivalent to 2, just as -On for n > 3 is equivalent to -O3). -- Joseph S. Myers jos...@codesourcery.com
Re: [patch] PR 54930: add -Wreturn-local-addr
On Thu, 18 Oct 2012, Jonathan Wakely wrote: > This adds a warning switch for the existing "returning address of > local variable" warnings in the C and C++ FEs which are enabled by > default but have no switch controlling them. Adding a switch allows it > to be turned into an error with -Werror=return-local-addr. I've added > two tests checking the -Werror form works. The C front-end changes are OK. -- Joseph S. Myers jos...@codesourcery.com
RE: [Ping]FW: [PATCH] Cilk Plus merging to trunk (2 of n)
_node in some cases, not converting at all in other cases. But the spec says "The expressions in a triplet are converted to ptrdiff_t.", so you need to convert to target ptrdiff_t, not target int. And there's a requirement that "Each of the expressions in a section triplet shall have integer type.". So you need to check that, and give an error if it doesn't have integer type, before converting - and of course add testcases for each of the possible positions for an expression having one that doesn't have integer type. In c-typeck.c you disable some errors and warnings for expressions containing array notations. I don't know where the later point is at which you check for such errors - but in any case, you need testcases for these diagnostics on those cases to show that they aren't being lost. In invoke.texi you have: +@opindex flag_enable_cilkplus But @opindex is for the user-visible options, not for internal variables. That is, @opindex fcilkplus would be appropriate. In passes.texi you refer to "the Cilk runtime library (located in libcilkrts directory)". But no such directory is added by this patch. Only add references to it in documentation with the patch that adds the directory. -- Joseph S. Myers jos...@codesourcery.com
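A hypothetical testsuite sketch of the kind of case being asked for (the dg-error wording is a guess, not the actual diagnostic): each expression in a section triplet must have integer type, so a floating-point length should be rejected:

  /* { dg-do compile } */
  /* { dg-options "-fcilkplus" } */

  int a[10], b[10];

  void
  f (void)
  {
    a[0:5:1] = b[0:5:1];   /* OK: all triplet expressions have integer type.  */
    a[0:5.0] = b[0:5];     /* { dg-error "integer" } */
  }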
Ping^3 Re: Defining C99 predefined macros for whole translation unit
Ping^3. This patch <http://gcc.gnu.org/ml/gcc-patches/2012-09/msg01907.html> (non-C parts) is still pending review. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] [2/10] AArch64 Port
On Tue, 23 Oct 2012, Marcus Shawcroft wrote: > +@item -mcmodel=tiny > +@opindex mcmodel=tiny > +Generate code for the tiny code model. The program and its statically > defined > +symbols must be within 1GB of each other. Pointers are 64 bits. Programs > can > +be statically or dynamically linked. This model is not fully implemented and > +mostly treated as "small". Say @samp{small} instead of using "" quotes in Texinfo sources. -- Joseph S. Myers jos...@codesourcery.com
Re: PR c/53063 Handle Wformat with LangEnabledBy
On Tue, 23 Oct 2012, Manuel López-Ibáñez wrote: > The problem is how to represent that Wformat-y2k is enabled by > -Wformat=X with X >= 2, while Wformat-zero-length is enabled by X >=1. > > One possibility is to allow to specify a condition directly: I guess that's reasonable. -- Joseph S. Myers jos...@codesourcery.com
Re: ARC port (1/5): configuration file patches
On Tue, 23 Oct 2012, Joern Rennecke wrote: > + tm_file="dbxelf.h elfos.h linux.h ${tm_file}" Should be using gnu-user.h linux.h glibc-stdint.h, not linux.h on its own. -- Joseph S. Myers jos...@codesourcery.com
Re: RFA: Add lock_lenth attribute to support the ARC port (Was: Re: Ping: RFA: add lock_length attribute to break branch-shortening cycles)
On Tue, 23 Oct 2012, Joern Rennecke wrote: > I'll be posting the ARC port shortly; it does not fit into a single 100 KB > posting, so I'm thinking of splitting it in a configury patch and xz > compressed files/tarballs for arc.c, arc.md, libgcc, and the rest of the port. The size limit is 400 kB, not 100 kB. Hopefully that means you don't need to compress so many bits and can attach more as plain patches for future revisions. -- Joseph S. Myers jos...@codesourcery.com
Re: ARC port (2/5): gcc/config/arc/arc.c
Diagnostics should not end with '.' (or '\n', in some cases here); also avoid starting with a capital letter in cases such as "Operand". -- Joseph S. Myers jos...@codesourcery.com
Re: ARC port (3/5): gcc/config/arc/arc.md
Same diagnostic comment regarding one fatal_error call here; in addition, diagnostic functions should not be called directly from .md files at all, because .md files aren't processed by exgettext to extract messages for translation. -- Joseph S. Myers jos...@codesourcery.com
Re: ARC port (4/5): libgcc/config/arc/
If you need special t-* logic for fp-bit.c / dp-bit.c, rather than being able to use t-fdpbit, then you need comments explaining why the special logic rather than the generic code is used. -- Joseph S. Myers jos...@codesourcery.com
Re: ARC port (5/5): rest of gcc/{,common/}config/arc/
Don't use ATTRIBUTE_UNUSED on a parameter that, as in arc_option_init_struct, is in fact used unconditionally. handle_option hooks take a location, so use warning_at not plain warning in arc_handle_option. Same diagnostic issue as noted before applies. There are lots of obsolete bits in your ASM_SPEC and LINK_SPEC, such as %{v}, %{version:-v}, %{b}, %{Wl,*:%*}, %{!dynamic-linker:...}. Remove them. -- Joseph S. Myers jos...@codesourcery.com
RE: [Ping]FW: [PATCH] Cilk Plus merging to trunk (2 of n)
On Wed, 24 Oct 2012, Iyer, Balaji V wrote: > > Where in the patch you use int for the size of something (e.g. a list) on > > the host, > > please use size_t. > > Can you please give me an example where I am violating this rule? Here > is a link to the last submitted patch in case you need it > (http://gcc.gnu.org/ml/gcc-patches/2012-10/msg00431.html). For example, see build_array_notation_expr. The variables ii, jj, rhs_list_size, lhs_list_size should be size_t. I see no reason in principle those lists should not have 2^31 or more elements, on a 64-bit host. Unless there is some reason why, whatever the host and target, the values of some variable in the compiler could never need to exceed 2^31, int is probably not the right type for that variable. > Also you mention "host" and "target." What do you exactly mean by that? > I generally use those terms in like a cross-compiler setting (i.e. host > = the machine on which you are compiling and target means the target > architecture you are compiling for). That's what I mean. You're measuring the size of an array on the host, so use size_t not int to measure that size. If you were measuring something on the target, you might need HOST_WIDE_INT (for example, a compiler on a 32-bit host can reasonably build objects for a 64-bit target that declare arrays with more than 2^31 elements, so size_t is not the right type to use in the compiler for the size of a target array). -- Joseph S. Myers jos...@codesourcery.com
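A small self-contained illustration of the host-side half of that distinction (not the Cilk Plus code itself; the names are invented): the element count and the loop index are size_t because a 64-bit host can hold more than 2^31 - 1 elements:

  #include <stddef.h>
  #include <stdio.h>

  /* The element count and the loop index are size_t, not int.  */
  static size_t
  count_nonzero (const int *list, size_t list_size)
  {
    size_t n = 0;
    for (size_t ii = 0; ii < list_size; ii++)
      if (list[ii] != 0)
        n++;
    return n;
  }

  int
  main (void)
  {
    int data[] = { 1, 0, 2, 0, 3 };
    printf ("%zu\n", count_nonzero (data, sizeof data / sizeof data[0]));
    return 0;
  }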
Re: PR c/53066 Wshadow should not warn for shadowing an extern function
The code changes seem fine, but I don't think a testcase should depend on the details of what system strings.h declares (or even that it exists) like that. If you want a system header declaration, add a header to the testsuite that uses #pragma GCC system_header to be sure it's handled as such, and include that header in the testcase; there are several existing examples of this in gcc.dg. -- Joseph S. Myers jos...@codesourcery.com
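A hypothetical sketch of that arrangement (the file name, the choice of index and the expected-warning behaviour are illustrative only, not the actual testcase):

  /* wshadow-sys-decls.h -- hypothetical header added next to the test.  */
  #pragma GCC system_header
  extern char *index (const char *, int);

  /* The testcase then includes that header instead of <strings.h>:  */
  /* { dg-do compile } */
  /* { dg-options "-Wshadow" } */
  #include "wshadow-sys-decls.h"

  void
  f (void)
  {
    int index = 0;   /* Shadows the extern function declared above; no
                        -Wshadow warning is expected here.  */
    (void) index;
  }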
Re: Patch: add @direntry for gcov
On Tue, 30 Oct 2012, Tom Tromey wrote: > This patch adds a @direntry for gcov. > > I noticed that it was missing today, when I tried to find the gcov > manual from the info "dir" node. Then I found out that I had filed PR > 50899 for this ages ago. > > Ok? OK. -- Joseph S. Myers jos...@codesourcery.com
Re: [patch][RFC] Filename based shared library versioning on AIX
On Wed, 31 Oct 2012, Michael Haubenwallner wrote: > I would like to introduce filename-based shared library versioning (known as > the "soname" in ELF world) for AIX, activated by the '--enable-aix-soname' > configure flag. Patches adding new configure options should include the changes to install.texi to document those options for people building GCC. -- Joseph S. Myers jos...@codesourcery.com
Re: [Patch] Update libquadmath from GLIBC
On Wed, 31 Oct 2012, Tobias Burnus wrote: > Tobias Burnus wrote: > > libquadmath's math functions are based on (but not identical to) GLIBC's > > sysdeps/ieee754/ldbl-128 functions. In the attached patch, I have ported the > > bug fixes from GLIBC over to libquadmath. Hopefully, the port is complete > > and correct. > > Slightly updated version, committed as Rev. 193037. At a glance, I don't see any sign of commit c0df8e693f34b535bd6ee1b691bc4ca6bc3b4579 Author: Joseph Myers Date: Thu Mar 22 12:52:50 2012 + Fix low-part sign handling in sin/cos for ldbl-128 and ldbl-128ibm. being included (although the effects of that change were only small reductions, on average, in ulps errors); I don't know if other relevant commits to the ldbl-128 code (since it was last merged from glibc) are missing. (And unsurprisingly it doesn't have the fma change I committed today relating to spurious exceptions on after-rounding architectures - FWIW, all the architectures currently using libquadmath are after-rounding.) There have been lots of improvements to the complex functions in glibc lately (those are generally in math/ not sysdeps/ieee754/), although libquadmath seems to be following glibc's versions of those less closely, and fixing those in glibc is still very much a work in progress (inverse trig and hyperbolic functions, and cpow, still need major work to be reasonably accurate for all inputs and avoid spurious exceptions / produce correct exceptions). Separately, soft-fp in GCC could do with being updated from glibc - that might eliminate a few spurious underflow exceptions if you get those in your test results. (Note that current soft-fp should, ideally, have FP_TRAPPING_EXCEPTIONS definitions added to sfp-machine.h files, where hardware exceptions are supported at all by those files. The relevant architecture maintainers - aarch64, i386, ia64 - could reasonably be asked to add those.) printf and strtod code in libquadmath looks like it could do with being updated from glibc as well - strtod in particular has had major changes in glibc since it was last updated. -- Joseph S. Myers jos...@codesourcery.com
Re: [Patch] Update libquadmath from GLIBC
On Wed, 31 Oct 2012, Tobias Burnus wrote: > Jakub Jelinek: > > I think it would be nice if you also posted the changes you did to > > test-ldouble.c and libm-test.inc, so that next time we could more easily > > test it again. > > See attachment. (I didn't do it properly at first, thus, I had to propagate > the changes to the right files ) Is there also a change to gen-libm-test.pl (or elsewhere) to change "l" or "L" suffixes on constants (inputs and expected results) to "Q"? You define TEST_LDOUBLE, but don't seem to redefine LDBL_MANT_DIG - does this mean that the tests conditioned e.g. on LDBL_MANT_DIG >= 113 don't run? (Really I'd like the tests in glibc to be conditioned on the particular features they need, e.g. at least a certain number of mantissa bits, rather than hardcoding conditions such as "defined TEST_LDOUBLE && LDBL_MANT_DIG >= 113".) When the draft C floating-point TS is more advanced, it's expected to define standard names for standard IEEE 754 types, when the implementation supports them (such as _Float128), and standard function names (such as expf128) - at that point it might make sense to consider rearranging the glibc ldbl-128 code so that it can be used both to provide _Float128 functions and "long double" functions, and making glibc provide these functions as they will now have standard C bindings. (libquadmath will still make sense for non-glibc or older-glibc systems, but hopefully the implementations can be closer to the glibc versions then. And supporting the various parts of the floating-point TS in glibc would be a lot of work.) -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Reducing number of alias checks in vectorization.
On Tue, 1 Oct 2013, Cong Hou wrote: > +#include > +#include > +#include > + > #include "config.h" Whatever the other issues about including these headers at all, any system header (C or C++) must always be included *after* config.h, as config.h may define feature test macros that are only properly effective if defined before any system headers are included, and these macros (affecting such things as the size of off_t) need to be consistent throughout GCC. -- Joseph S. Myers jos...@codesourcery.com
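A stand-alone illustration of why the order matters (the feature-test macro here merely stands in for whatever config.h defines): defined after the first system header it would have no effect, so every system header has to come after config.h:

  /* This must come before ANY system header to take effect - which is
     what including config.h first guarantees inside GCC.  */
  #define _FILE_OFFSET_BITS 64

  #include <sys/types.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* On a 32-bit GNU/Linux host this prints 8 only because the macro
       was seen before <sys/types.h>; defined any later it would be
       ignored, and off_t would not match the rest of the build.  */
    printf ("%zu\n", sizeof (off_t));
    return 0;
  }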
Re: Copyright years for new old ports (Re: Ping^6: contribute Synopsys Designware ARC port)
On Wed, 2 Oct 2013, Joern Rennecke wrote: > From my understanding, the condition for adding the current Copyright year > without a source code change is to have a release in that year. Are we > sure 4.9.0 will be released this year? "release" here includes availability of a development version in public version control, as well as snapshots and non-FSF releases. The effect is that if the first copyright year in a GCC source file is 1987 or later, a single range -2013 can be used. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] libgccjit.so: an embeddable JIT-compiler based on GCC
On Wed, 2 Oct 2013, David Malcolm wrote: > The idea is that GCC is configured with a special --enable-host-shared > option, which leads to it being built as position-independent code. You > would configure it with host==target, given that the generated machine > code will be executed within the same process (the whole point of JIT). Configuring with a different target also makes sense - consider JIT generation of code for a GPU (for example). (Of course lots of other cleanups would be needed to be able to generate code for multiple targets within the same process / same copy of libgccjit.so, but the idea seems useful. There are probably tricks available involving building multiple copies with different SONAMEs / symbol names. Anyway, it may make sense now to consider defining the interfaces in a way that would support choosing the target for which you get a context.) > * There are some grotesque kludges in internal-api.c, especially in > how we go from .s assembler files to a DSO (grep for "gross hack" ;) ) Apart from making the assembler into a shared library itself, it would also be nice to avoid needing those files at all (via an API for writing assembler text that doesn't depend on a FILE * pointer, although on GNU hosts you can always use in-memory files via open_memstream, which would make the internal changes much smaller). But in the absence of such a cleanup, running the assembler via the driver should work, although inefficient. What do you do about errors and warnings (other than hope that they don't get generated)? Again, one step is to avoid anything that directly uses stderr / stdout (explicitly, or implicitly through functions such as printf), so allowing FILE * pointers to instead be something opened with open_memstream, while a larger cleanup would narrow the interface to writing such text as far as possible (to well-defined diagnostic and dumping interfaces that don't directly take a FILE *, eliminating all ad hoc calls to fprintf etc.). -- Joseph S. Myers jos...@codesourcery.com
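For the in-memory-files point, a minimal GNU/POSIX example of open_memstream (illustration only, nothing to do with the actual libgccjit code): the FILE * is used like asm_out_file, but the text ends up in a malloc'd buffer rather than in a .s file on disk:

  #define _POSIX_C_SOURCE 200809L
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    char *buf = NULL;
    size_t size = 0;
    FILE *f = open_memstream (&buf, &size);
    if (f == NULL)
      return 1;
    fprintf (f, "\t.globl\tfoo\n");   /* stands in for assembler output */
    fclose (f);                       /* buf and size are now valid */
    printf ("captured %zu bytes of assembler text\n", size);
    free (buf);
    return 0;
  }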
Re: [PATCH] Fixing improper conversion from sin() to sinf() in optimization mode.
On Fri, 6 Sep 2013, Cong Hou wrote: > 4: (float) sqrtl ((long double) double_val) -> (float) sqrt (double_val) I don't believe this case is in fact safe even if precision (long double) >= precision (double) * 2 + 2 (when your patch would allow it). The result that precision (double) * 2 + 2 is sufficient for the result of rounding the long double value to double to be the same as the result of rounding once from infinite precision to double would I think also mean the same when rounding of the infinite-precision result to float happens once - that is, if instead of (float) sqrt (double_val) you have fsqrt (double_val) (fsqrt being the proposed function in draft TS 18661-1 for computing a square root of a double value, rounded exactly once to float). But here the question isn't whether rounding to long double then float gives the same result as rounding to float. It's whether rounding to long double then float gives the same result as rounding to double then float. Consider, for example, double_val = 0x1.02011p+0. The square root rounded once to float (and so the result if you go via long double and long double is sufficiently precise) is 0x1.02p+0. But the square root rounded to double is 0x1.01p+0, which on rounding to float produces 0x1p+0. > 5: (double) sqrtl ((long double) float_val) -> sqrt ((double) float_val) (This case, however, *is* safe if long double is sufficiently precise; the conversion of float to long double is trivially equivalent to a conversion involving an intermediate "double" type, so it reduces to a case for which the standard formula involving precisions of just two types applies.) -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] libgccjit.so: an embeddable JIT-compiler based on GCC
On Thu, 3 Oct 2013, David Malcolm wrote: > Right now all you get back from the result is a "void*" which you're > meant to cast to machine code for the CPU. I guess we could add an And right now the library is calling dlopen. Which means that although what the user gets is a void *, the dynamic linker processing has dealt with registering eh_frame information and the right hooks have been passed through for GDB to see that an object has been loaded and access its debug information. Is this use of dlopen intended to be part of the interface, or just a temporary hack with users eventually needing to use other interfaces for eh_frame and debug info handling? (GDB has some support for JITs, but I haven't looked into the details of how it works.) > option on the gcc_jit_context for setting which ISA you want code for. Even apart from completely separate ISAs, there's also the matter of other command-line options such as -march=, which I'd think users should be able to specify. And the complication that default options to cc1 can be determined by specs, whether in .h files or from configure options such as --with=arch=, so cc1's defaults may not be the same as the defaults when you run the driver (which are likely to be better defaults for the JIT than cc1's). And if the user wants to use -march=native (which seems sensible for the common use case of a JIT, generating code for the same system as runs the compiler), that's handled through specs. Maybe the driver should be converted to library code so it's possible to run command-line options through it and generate the command line that would get passed to cc1 (but then use that to call a function in the same process - with a different copy of global options data, diagnostic context etc. to avoid any issues from those being initialized twice - rather than running a subprocess). That would also allow generating the right options for the assembler to pass to the library version of the assembler. > > > * There are some grotesque kludges in internal-api.c, especially in > > > how we go from .s assembler files to a DSO (grep for "gross hack" ;) ) > > > > Apart from making the assembler into a shared library itself, it would > > also be nice to avoid needing those files at all (via an API for writing > > assembler text that doesn't depend on a FILE * pointer, although on GNU > > hosts you can always use in-memory files via open_memstream, which would > > make the internal changes much smaller). But in the absence of such a > > cleanup, running the assembler via the driver should work, although > > inefficient. > > (nods) Note that all of the kludgery (if that's a word) is hidden > inside the library, so we can fix it all up without affecting client > code: the client-facing API doesn't expose any of this. > > FWIW I added timevars for measuring the invocation of the driver; that > kludge makes up about 50% of the overall time taken. I haven't yet > broken that down into assembler vs linker vs fork/exec overhead, but > clearly this is something that could use optimization - leading to > (very) vague thoughts involving turning parts of binutils into libraries > also. First I guess it might simply be a library that receives blocks of text that currently go to asm_out_file - but eventually there might be efficiency to be gained by passing binary data to the assembler in some cases (e.g. for large blocks of debug info) rather than generating and then parsing text. 
(So some asm_out methods would gain implementations passing such data instead of generating text.) -- Joseph S. Myers jos...@codesourcery.com
Re: Patch to split out new warning flag for floating point conversion
On Wed, 9 Oct 2013, Joshua J Cogliati wrote: > Because this changes -Wextra, when compiling with -Werror and -Wextra, > some code will not compile now. The code in gcc that this occurred in > was changed to use explicit casts. The patch would be shorter if I think those changes should be submitted separately, as cleanups that can be proposed on their own merits. Note that casts use spaces, "(type) expr", as detailed at <http://gcc.gnu.org/codingconventions.html#C_Formatting>. -- Joseph S. Myers jos...@codesourcery.com
Re: Patch to split out new warning flag for floating point conversion
Also note that this patch needs to add testcases to the testsuite (gcc/testsuite/c-c++-common/, probably) testing what cases generate warnings with the new option and what cases don't. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Add --enable-host-shared configuration option
On Wed, 9 Oct 2013, David Malcolm wrote: > This patch adds an "--enable-host-shared" option throughout the various > configure/Make machinery for host code, adding "-fPIC" where appropriate > when enabled. Please document this in install.texi (even if it isn't particularly useful at the stage where it just means PIC rather than actual shared libraries). -- Joseph S. Myers jos...@codesourcery.com
Re: [1/6] OpenMP 4.0 C FE support
My only comment is that you change the parameters to c_parser_binary_expression, but don't change the comment above that function documenting what they are. -- Joseph S. Myers jos...@codesourcery.com
Re: Patch: Add #pragma ivdep support to the ME and C FE
On Thu, 10 Oct 2013, Tobias Burnus wrote: > Joseph: Could you have a look at the C part? Thanks! Your error "missing loop condition loop with IVDEP pragma" should, I think, say "loop condition in loop". Since the pragma is "ivdep" (case-sensitive), it should also say %<ivdep%> not IVDEP. This restriction should be clarified in the documentation, which could do with examples. There should be at least one execution test for correct execution of loops with this pragma, and at least one test for this diagnostic. The documentation refers to "safelen" which sounds like implementation terminology; this is the user manual and needs to describe things in user terms, not in terms of the implementation. -- Joseph S. Myers jos...@codesourcery.com
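For the documentation and execution-test points, a minimal example of the pragma's intended use (the loop and the checking main are invented for illustration): the user asserts the iterations are independent, and the required loop condition is present:

  #include <stdlib.h>

  static void
  scale (float *a, const float *b, int n)
  {
  #pragma GCC ivdep
    for (int i = 0; i < n; i++)
      a[i] = 2.0f * b[i];
  }

  int
  main (void)
  {
    float in[8], out[8];
    for (int i = 0; i < 8; i++)
      in[i] = (float) i;
    scale (out, in, 8);
    for (int i = 0; i < 8; i++)
      if (out[i] != 2.0f * (float) i)
        abort ();
    return 0;
  }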