Re: Integer overflow in operator new
On Monday, 9 April 2007 10:23, Florian Weimer wrote:
> * Ross Ridge:
>> Florian Weimer writes:
>>> I don't think this check is correct. Consider num = 0x3334 and
>>> size = 6. It seems that the check is difficult to perform
>>> efficiently unless the architecture provides unsigned
>>> multiplication with overflow detection, or an instruction to
>>> implement __builtin_clz.
>>
>> This should work instead:
>>
>>   inline size_t __compute_size(size_t num, size_t size) {
>>     if (num > ~size_t(0) / size)
>>       return ~size_t(0);
>>     return num * size;
>>   }
>
> Yeah, but that division is fairly expensive if it can't be performed
> at compile time. OTOH, if __compute_size is inlined in all places,
> code size does increase somewhat.

You could avoid the division in nearly all cases by checking for
reasonably-sized arguments first:

  inline size_t __compute_size(size_t num, size_t size) {
    static const int max_bits = sizeof(size_t) * CHAR_BIT;  /* CHAR_BIT from <limits.h> */
    int low_num, low_size;
    low_num  = num  < ((size_t)1 << (max_bits * 5 / 8));
    low_size = size < ((size_t)1 << (max_bits * 3 / 8));
    if (__builtin_expect(low_num && low_size, 1)
        || num <= ~(size_t)0 / size)
      return num * size;
    else
      return ~size_t(0);
  }

--
Ross Smith [EMAIL PROTECTED]
Auckland, New Zealand
  "Those who can make you believe absurdities can make you commit
   atrocities."  -- Voltaire
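For comparison, GCC 5 and later provide __builtin_mul_overflow, which exposes
exactly the "unsigned multiplication with overflow detection" mentioned above.
A minimal sketch of the same saturating computation using it (the function name
is illustrative, not from the thread):

  #include <cstddef>

  // Hypothetical variant of __compute_size using the overflow-checking
  // builtin; saturates to the all-ones value on overflow, like the
  // versions discussed above.
  inline std::size_t compute_size_checked(std::size_t num, std::size_t size)
  {
      std::size_t result;
      if (__builtin_mul_overflow(num, size, &result))
          return ~std::size_t(0);
      return result;
  }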
Re: Integer overflow in operator new
On Monday, 9 April 2007 13:09, J.C. Pizarro wrote:
> This code is bigger than Joe Buck's.
>
> Joe Buck's code:    10 instructions
> Ross Ridge's code:  16 instructions
> Ross Smith's code:  16 instructions

Well, yes, but it also doesn't have the bug Joe's code had. That was
sort of the whole point. If you don't care whether it gives the right
answer, you might as well just leave the status quo.

--
Ross Smith [EMAIL PROTECTED]
Auckland, New Zealand
  "Those who can make you believe absurdities can make you commit
   atrocities."  -- Voltaire
Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]
On Sunday, 29 May 2005 03:17, Uros Bizjak wrote:
> There is no problem that Bugzilla is un-intuitive, it is far from
> that. The users don't file bug reports because they are afraid of
> filing an invalid report or a duplicate.

I strongly suspect you're mistaken about the reason.

> Is perhaps some kind of anonymous account needed (as in Slashdot's
> case) to encourage these users to file bug reports?

I think this is probably the real showstopper. I'll admit I haven't
exactly made a scientific survey here, but I suspect a lot of people
give up when they see the login form.

Whenever I see something like "we need a valid email address" on a
corporate web site, I always take it for granted that it's because they
want to spam me. If I really need the information behind the login
wall, I set up a throwaway address or use www.bugmenot.com. Of course
I'm not accusing the FSF of being spammers, but you can't expect the
casual user of GCC, who isn't aware of its background, to know that.

I'd bet that this is the real reason so few people file bug reports.
As soon as they see the demand for an email address, alarm bells start
going off in their minds, and they go away. (If the email request was
on the bug report form itself instead of a login, and was _optional_,
probably more people would be willing to fill it in.)

--
Ross Smith [EMAIL PROTECTED]
Auckland, New Zealand
  "Plausible rockets are rare. Plausible space travel is rare. Most SF
   authors could not calculate a mass ratio if you put them in a sunken
   pit filled with ravenous sliderules."  -- James Nicoll
Re: order of -D and -U is significant
On 2009-08-05, at 04:03, Joe Buck wrote:
> Another alternative would be an extra flag that would turn on
> conformance to the spec.

Traditionally spelled -posixly-correct in other GNU software. This
would presumably also affect other options, such as making the default
-std=c99 instead of gnu89.

--
Ross Smith
Re: Over-sensitive warning, or some quirk of C++ language rules?
On 2010-01-10, at 00:31, Dave Korn wrote:
> Simple testcase, using h...@155680.
>
> $ cat badwarn.cpp
>
> extern void bar (void);
> int foo (void) __attribute__ ((__noreturn__));
>
> int
> foo (void)
> {
>   while (1)
>     {
>       bar ();
>     }
> }
>
> $ g++-4 -c badwarn.cpp -Wall
> badwarn.cpp: In function 'int foo()':
> badwarn.cpp:12:1: warning: no return statement in function returning
> non-void

gcc 4.0.1, 4.2.1, and 4.3.4 don't warn about this. Looks like a
regression.

--
Ross Smith
Re: Serious code generation/optimisation bug (I think)
Zoltán Kócsi wrote:
> On Thu, 29 Jan 2009 08:53:10, Andrew Haley wrote:
>> We're talking about gcc on ARM. gcc on ARM uses 0 for the null
>> pointer constant, therefore a linker cannot place an object at
>> address zero. All the rest is irrelevant.
>
> Um, the linker *must* place the vector table at address zero, because
> the ARM, at least the ARM7TDMI, fetches all exception vectors from
> there. Dictated by the HW, not the compiler.

This sounds like a genuine bug in gcc, then. As far as I can see,
Andrew is right -- if the ARM hardware requires a legitimate object to
be placed at address zero, then a standard C compiler has to use some
other value for the null pointer.

--
Ross Smith
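To illustrate the collision being discussed (a sketch under assumptions, not
code from the thread: the array name is made up, and the linker script is
assumed to place it at address 0x0), consider what happens when a legitimate
object really does live at the null address:

  // Assumed to be placed at address 0x0 by the linker script, as the
  // ARM7TDMI exception vectors require.
  extern const unsigned long vector_table[8];

  bool table_present()
  {
      const unsigned long *p = vector_table;
      // If the object really is linked at address zero, the runtime bit
      // pattern of p equals the null pointer, yet the optimizer is entitled
      // to fold this test to true on the assumption that no object lives at
      // the null address (see -fdelete-null-pointer-checks). Either way,
      // the program's notion of "null" and the hardware's use of address 0
      // have collided.
      return p != 0;
  }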
Re: About strict-aliasing warning
Paolo Bonzini wrote:
>> "-Wstrict-aliasing
>>
>>  This option is only active when -fstrict-aliasing is active. It
>>  warns about code which might break the strict aliasing rules that
>>  the compiler is using for optimization. The warning does not catch
>>  all cases, but does attempt to catch the more common pitfalls. It
>>  is included in -Wall. It is equivalent to -Wstrict-aliasing=3."
>>
>> and -O2 would activate -fstrict-aliasing by default, which should
>> also activate this option.
>
> No, the text above means that "-fstrict-aliasing" is a *necessary*
> condition to get aliasing warnings, not a sufficient condition.
>
> Do you have suggestions for how to clarify the text?

Perhaps the first sentence should read something like "This option is
only respected when -fstrict-aliasing is active", or "This option has
no effect unless -fstrict-aliasing is active". I follow what the
existing wording was intended to mean, but I can also see how it could
easily be interpreted to mean that -fstrict-aliasing automatically
implies -Wstrict-aliasing.

--
Ross Smith
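For concreteness, a minimal sketch (not from the thread; the names are
arbitrary) of the pun-and-dereference pattern the warning is aimed at.
Compiled with g++ -O2 -Wall, so that -fstrict-aliasing is in effect, this
typically draws "dereferencing type-punned pointer will break
strict-aliasing rules":

  // Reading a float's representation through an unsigned* violates the
  // strict aliasing rules; -Wstrict-aliasing is meant to flag it.
  unsigned bits_of(float f)
  {
      return *reinterpret_cast<unsigned *>(&f);
  }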
Re: Using __sync_* builtins within libgcc code
Paolo Carlini wrote:
> Joseph S. Myers wrote:
>> I hold that it is a bug that i686-* tools default to -march=i386
>> instead of -march=i686 (whereas e.g. sparcv9-* tools default to
>> -mcpu=sparcv9, and -mcpu means -march for SPARC).
>
> Seconded.

I've long been of the opinion that -march=native should be the default.

--
Ross Smith
Re: gcc-in-cxx branch created
Ian Lance Taylor wrote:
> I expect that we will find it appropriate to use STL containers, as in
>
>   for (Type::iterator p = container.begin(); p != container.end(); ++p)

For loops like this I'd recommend using some kind of FOREACH macro (the
functional equivalent of BOOST_FOREACH; this is easy to write when you
can use GCC's typeof feature). I've found that using a FOREACH macro
improves code readability significantly. Not only is the normal case
shorter and clearer, but it also means that when you do spell out a for
loop explicitly, it acts as a signal to the reader that "we're doing
something more complicated than straightforward iteration here, so read
carefully".

--
Ross Smith
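A minimal sketch of the kind of FOREACH macro meant here, built on GCC's
__typeof__ extension; the macro name, do_something, and the example container
are all just placeholders:

  #include <vector>

  // Bare-bones version: `c` appears more than once, so it should be a
  // simple lvalue; a production macro would be more careful.
  #define FOREACH(var, c) \
      for (__typeof__((c).begin()) var = (c).begin(); \
           var != (c).end(); ++var)

  void do_something(int x);

  void example(const std::vector<int> &v)
  {
      FOREACH(p, v)          // reads as "for each element of v"
          do_something(*p);
  }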
Re: [RFC] Marking C++ new operator as malloc?
Gabriel Dos Reis wrote:
> Joe Buck <[EMAIL PROTECTED]> writes:
>
> | On Sat, Sep 08, 2007 at 04:33:50PM -0500, Gabriel Dos Reis wrote:
> | > "Richard Guenther" <[EMAIL PROTECTED]> writes:
> | >
> | > | On 9/8/07, Chris Lattner <[EMAIL PROTECTED]> wrote:
> | > | > I understand, but allowing users to override new means that the
> | > | > actual implementation may not honor the aliasing guarantees of
> | > | > attribute malloc.
> | > |
> | > | Well, you can argue that all hell breaks loose if you do so. A
> | > | sane ::new is required for almost everything :)
> | >
> | > I suspect the question is how do you distinguish a sane new from an
> | > insane one.
> |
> | Does it matter?
>
> No, it does not. The reason is 3.7.3.1/2
>
> [...] If the request succeeds, the value returned shall be a nonnull
> pointer value (4.10) p0 different from any previously returned value
> p1, unless that value p1 was subsequently passed to an operator delete.

That's not sufficient. First, merely requiring pointers to be different
isn't the same as requiring them not to alias, which requires the blocks
of memory they point to not to overlap. Second, the standard only
requires the pointers returned by new to be different from each other,
not from any other pointer in the program.

Probably the most common use of a custom new is to allocate memory from
a user-controlled pool instead of the standard free store. Somewhere in
the program there will be a pointer to the complete pool, which aliases
every pointer returned by that version of new. Any such pool-based new
doesn't meet the requirements of the malloc attribute.
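A rough sketch of the pool-based replacement new described above (the pool
size, the 16-byte alignment, and the names are all made up, and the matching
operator delete is omitted). Every pointer it returns aliases memory that is
also reachable through the pool array, which is exactly the kind of aliasing
attribute((malloc)) would let the optimizer assume away:

  #include <cstddef>
  #include <new>

  static char pool[1 << 20];          // the user-controlled pool
  static std::size_t pool_used = 0;   // simple bump-allocator state

  void *operator new(std::size_t n)
  {
      if (n == 0)
          n = 1;
      n = (n + 15) & ~std::size_t(15);     // assumed 16-byte alignment
      if (pool_used + n > sizeof pool)
          throw std::bad_alloc();
      void *p = pool + pool_used;          // aliases the global `pool`
      pool_used += n;
      return p;
  }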
Re: Optimization of conditional access to globals: thread-unsafe?
Erik Trulsson wrote:
> On Sun, Oct 28, 2007 at 06:06:17PM, Dave Korn wrote:
>> As far as I know, there is no separate 'pthreads' spec apart from what
>> is defined in the Threads section (2.9) of the SUS
>> (http://tinyurl.com/2wdq2u) and what it says about the various pthread_
>> functions in the system interfaces (http://tinyurl.com/2r7c5k) chapter.
>> None of that, as far as I have been able to determine, makes any kind
>> of claims about access to shared state or the use of volatile.
>
> Having just been pointed to that copy of the SUS, I must agree. I can't
> find anything in there saying anything at all about what is required to
> safely share data between threads. If that is really so it seems
> 'pthreads' are even more under-specified than I thought (and I had
> fairly low expectations in that regard.) I really hope there is
> something I have missed.

I think the relevant part is here:
http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap04.html#tag_04_10

[begin quote]

4.10 Memory Synchronization

Applications shall ensure that access to any memory location by more
than one thread of control (threads or processes) is restricted such
that no thread of control can read or modify a memory location while
another thread of control may be modifying it. Such access is
restricted using functions that synchronize thread execution and also
synchronize memory with respect to other threads. The following
functions synchronize memory with respect to other threads:

  fork()
  pthread_barrier_wait()
  pthread_cond_broadcast()
  pthread_cond_signal()
  pthread_cond_timedwait()
  pthread_cond_wait()
  pthread_create()
  pthread_join()
  pthread_mutex_lock()
  pthread_mutex_timedlock()
  pthread_mutex_trylock()
  pthread_mutex_unlock()
  pthread_spin_lock()
  pthread_spin_trylock()
  pthread_spin_unlock()
  pthread_rwlock_rdlock()
  pthread_rwlock_timedrdlock()
  pthread_rwlock_timedwrlock()
  pthread_rwlock_tryrdlock()
  pthread_rwlock_trywrlock()
  pthread_rwlock_unlock()
  pthread_rwlock_wrlock()
  sem_post()
  sem_trywait()
  sem_wait()
  wait()
  waitpid()

The pthread_once() function shall synchronize memory for the first call
in each thread for a given pthread_once_t object.

Unless explicitly stated otherwise, if one of the above functions
returns an error, it is unspecified whether the invocation causes
memory to be synchronized.

Applications may allow more than one thread of control to read a memory
location simultaneously.

[end quote]

--
Ross Smith
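As a concrete illustration of the rule quoted above (a minimal sketch, not
from the thread; the variable and function names are invented): every access
to the shared location is bracketed by pthread_mutex_lock()/
pthread_mutex_unlock(), two of the memory-synchronizing functions in the list.

  #include <pthread.h>

  static int shared_flag = 0;          // shared between threads
  static pthread_mutex_t flag_lock = PTHREAD_MUTEX_INITIALIZER;

  void set_flag()
  {
      pthread_mutex_lock(&flag_lock);  // synchronizes memory (4.10)
      shared_flag = 1;
      pthread_mutex_unlock(&flag_lock);
  }

  int get_flag()
  {
      pthread_mutex_lock(&flag_lock);
      int v = shared_flag;
      pthread_mutex_unlock(&flag_lock);
      return v;
  }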
Re: -Wparentheses lumps too much together
Paul Brook wrote:
> James K. Lowden wrote:
>> 1) most combinations of && and || don't need parentheses because
>>
>>      (a && b) || (c && d)
>>
>> is by far more common than
>>
>>      a && (b || c) && d
>>
>> and, moreover, broken code fails at runtime, and
>
> I dispute these claims. The former may be statistically more common,
> but I'd be surprised if the difference is that big. I can think of
> several fairly common situations where both would be used.
>
> Any time you've got any sort of nontrivial condition, I always find it
> better to include the explicit parentheses. Especially if a, b, c, and
> d are relatively complex relational expressions rather than simple
> variables.

I second Paul's points. The precedence of && relative to || is not
widely enough known for a warning about mixing them to be off by
default (at least for people sane enough to use -Wall). A couple of
data points:

First, I've been writing C and C++ for a living for nearly 20 years
now, and I didn't know that && had higher precedence than ||. I
vaguely recalled that they had different precedence, but I couldn't
have told you which came first without looking it up. I'd happily bet
that the same is true of the overwhelming majority of developers who
aren't compiler hackers. Giving && and || different precedence is one
of those things that feels so totally counterintuitive that I have
trouble remembering it no matter how many times I look it up. I have a
firm coding rule of always parenthesising them when they're used
together. (Likewise &, |, and ^, which have similar issues. I can't
remember whether -Wparentheses warns about those too.)

Second, I just grepped the codebase I'm currently working on (about 60k
lines of C++) for anywhere && and || appear on the same line. I got 49
hits: 29 where && was evaluated before ||, 20 the other way around.
(All of them, I'm happy to say, properly parenthesised.) So while
&&-inside-|| seems to be slightly more common, I'd certainly dispute
James's claim that it's "far more common".

--
Ross Smith
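For anyone who, like the author, has to look the precedence up: a small
sketch (variable names are arbitrary) of what mixing the operators actually
means, and the two fully parenthesised readings -Wparentheses pushes you to
choose between:

  bool demo(bool a, bool b, bool c)
  {
      bool mixed  = a || b && c;   // g++ -Wall warns: suggest parentheses
                                   //   around '&&' within '||'
      bool actual = a || (b && c); // what `mixed` really computes, because
                                   //   && binds tighter than ||
      bool naive  = (a || b) && c; // what a casual reader might assume
      (void)naive;                 // only here for comparison
      return mixed == actual;      // always true: same expression
  }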