[Bug target/28808] Alignment problem in __gthread_once_t in vxWorks
--- Comment #3 from aaron at aarongraham dot com 2009-10-29 15:53 --- It appears that this one is fixed as of SVN revision 146566: http://gcc.gnu.org/viewcvs/trunk/gcc/gthr-vxworks.h?view=log -- http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28808
[Bug libstdc++/41861] [C++0x] <condition_variable> does not use monotonic_clock
--- Comment #6 from aaron at aarongraham dot com 2009-11-10 04:38 ---

So it appears that the problem is gthreads. The monotonic_clock support is purely superficial in gcc until gthreads supports such a concept. Developers will need to create their own clock and modify the standard library headers to use it should they require a reasonable level of reliability in the face of a possibly-changing system clock.

But I think the Howard/Detlef debate is a separate issue. I believe they have determined that a condition_variable (and mutex) must continue to use a specific clock once the object is created, and to sync all given time points to that clock, and are arguing over whether or not that is implementable. No big deal. I just don't believe there is any particular requirement that it be the system_clock (and, if there were, I would think that to be a big mistake).

In almost every project I've worked on, our purposes would be much better served if a monotonic_clock were used instead. Rarely do we care what the epoch is. What we do care about is timer reliability even when NTP (or some other mechanism) is changing the clock. But that's just my experience.

Thanks for looking into this. I'm hoping for a resolution that doesn't make <condition_variable> and <mutex> all but useless as provided by the standard library sans modification. The boost team has already made some egregious mistakes in this area.

--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=41861
[Bug c++/86143] New: ICE capturing constexpr chrono duration
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86143

Bug ID: 86143
Summary: ICE capturing constexpr chrono duration
Product: gcc
Version: 8.1.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c++
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com
Target Milestone: ---

The following test, compiled via 'g++ -c test.cc':

#include <chrono>
#include <thread>

using namespace std::chrono_literals;

extern void set_duration(std::chrono::nanoseconds);

int main() {
  constexpr auto dur = 1ms;
  std::thread([]{ set_duration(dur); }).join();
}

Result:

during RTL pass: expand
test.cc: In lambda function:
test.cc:9:32: internal compiler error: in make_decl_rtl, at varasm.c:1322
   std::thread([]{ set_duration(dur); }).join();
                                ^~~
Please submit a full bug report, with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.

Compiler built for ARM (arm-unknown-linux-gnueabi)
[Bug libstdc++/58931] condition_variable::wait_until overflowed by large time_point
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58931 --- Comment #5 from Aaron Graham --- Created attachment 43261 --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=43261&action=edit Patch to check for overflow
[Bug libstdc++/58931] condition_variable::wait_until overflowed by large time_point
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58931

Aaron Graham changed:

    What |Removed |Added
    CC   |        |aaron at aarongraham dot com

--- Comment #3 from Aaron Graham ---

This is still a problem in current gcc trunk. The bug is in the condition_variable::wait_until clock conversion. It doesn't check for overflow in that math. Since the steady_clock and system_clock epochs can be very different, it's likely to overflow with values much less than max().

template<typename _Clock, typename _Duration>
  cv_status
  wait_until(unique_lock<mutex>& __lock,
             const chrono::time_point<_Clock, _Duration>& __atime)
  {
    // DR 887 - Sync unknown clock to known clock.
    const typename _Clock::time_point __c_entry = _Clock::now();
    const __clock_t::time_point __s_entry = __clock_t::now();
    const auto __delta = __atime - __c_entry;
    const auto __s_atime = __s_entry + __delta;
    return __wait_until_impl(__lock, __s_atime);
  }

I modified my version of gcc to use steady_clock as condition_variable's "known clock" (__clock_t). This is more correct according to the C++ standard and most importantly it makes condition_variable resilient to clock changes when used in conjunction with steady_clock. Because of this, in my case, it works fine with steady_clock::time_point::max(), but fails with system_clock::time_point::max().

Because I made that change and since I don't do timed waits on system_clock (which is unsafe), the overflow hasn't been a problem for me and I haven't fixed it.
[Bug libstdc++/58931] condition_variable::wait_until overflowed by large time_point
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58931 --- Comment #4 from Aaron Graham --- See bug 41861 for discussion of steady_clock wrt condition_variable.
[Bug c++/63829] New: Crash in __tls_init when -mcpu=arm1176jzf-s is used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63829

Bug ID: 63829
Summary: Crash in __tls_init when -mcpu=arm1176jzf-s is used
Product: gcc
Version: 4.9.2
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c++
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com

ARM processor, Raspberry Pi

Offending code:

#include <memory>

thread_local std::unique_ptr<int> tls_test;

struct foo {
  foo() { tls_test.reset(new int(42)); }
} const foo_instance;

int main() {}

The following works:    g++ test.cc -std=c++14
The following crashes:  g++ test.cc -std=c++14 -mcpu=arm1176jzf-s

I will attach the full disassembly for both. Here's the basic gdb output:

Core was generated by `./a.out'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x8668 in __tls_init ()
(gdb) bt
#0  0x8668 in __tls_init ()
#1  0x8a14 in TLS wrapper function for tls_test ()
#2  0x86ec in foo::foo() ()
#3  0x8648 in __static_initialization_and_destruction_0(int, int) ()
#4  0x86d0 in _GLOBAL__sub_I_tls_test ()
#5  0x8a78 in __libc_csu_init ()
#6  0x4f508f18 in __libc_start_main () from /opt/armtools/20141030/arm-brcm-linux-gnueabi/sysroot/lib/libc.so.6
#7  0x84fc in _start ()
(gdb) info reg
r0             0x10da0      69024
r1             0xffff       65535
r2             0xc          12
r3             0x0          0
r4             0x2          2
r5             0x10c64      68708
r6             0x2          2
r7             0x1          1
r8             0xafb35654   2947765844
r9             0xafb3565c   2947765852
r10            0x4f3a7000   1329229824
r11            0xafb354ac   2947765420
r12            0xffff0fff   4294905855
sp             0xafb354a8   0xafb354a8
lr             0x8a14       35348
pc             0x8668       0x8668 <__tls_init+16>
cpsr           0x60000010   1610612752
(gdb)
[Bug c++/63829] Crash in __tls_init when -mcpu=arm1176jzf-s is used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63829 --- Comment #1 from Aaron Graham --- Created attachment 33945 --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=33945&action=edit Disassembly of crashing result.
[Bug c++/63829] Crash in __tls_init when -mcpu=arm1176jzf-s is used
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63829 --- Comment #2 from Aaron Graham --- Created attachment 33946 --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=33946&action=edit Disassembly of good result.
[Bug libstdc++/63829] _Lock_policy used in thread.cc can cause incompatibilities with binaries using different -mcpu
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63829

Aaron Graham changed:

    What      |Removed                    |Added
    Component |c++                        |libstdc++
    Summary   |Crash in __tls_init when   |_Lock_policy used in
              |-mcpu=arm1176jzf-s is used |thread.cc can cause
              |                           |incompatibilities with
              |                           |binaries using different
              |                           |-mcpu

--- Comment #3 from Aaron Graham ---

This is a C++ standard library problem. If the toolchain is compiled for generic arm processors, the C++ standard library uses the "_S_mutex" _Lock_policy (see ext/concurrence.h).

  // Compile time constant that indicates prefered locking policy in
  // the current configuration.
  static const _Lock_policy __default_lock_policy =
#ifdef __GTHREADS
#if (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_2) \
     && defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4))
  _S_atomic;
#else
  _S_mutex;
#endif
#else
  _S_single;
#endif

If the compiler is then used to build binaries using -mcpu=arm1176jzf-s (or cortex-a9 or just about anything else) then those binaries use the "_S_atomic" _Lock_policy and are *incompatible* with the standard library built with the compiler.

Here's some simpler code that was failing because of this problem:

#include <thread>

void foo() {}

int main() {
  std::thread(&foo).join();
}

This fails because execute_native_thread_routine accesses a shared_ptr, thereby requiring all binaries that link to it to use its locking policy, or else.

I've solved this problem in my own setup by building the toolchain and application binaries with the same -mcpu. A more general solution might be to move more code out of places like thread.cc and into the headers.
[Bug libstdc++/887] libstdc++-v3 manipulator setw does not work writing to a file
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=887

--- Comment #9 from Aaron Graham ---

Thanks. I had already patched our gcc so that the gthreads cond always gets initialized with CLOCK_MONOTONIC, then I switched __clock_t in condition_variable to steady_clock. It was a very simple change and works well, but not nearly as portable as yours.

I also disabled all timed waits on mutex (gcc already has an ifdef for that) in order to avoid problems there. In my opinion, people shouldn't be using timed waits on mutexes anyway, since they are not cooperatively interruptible. If we did need them for some reason, I would reimplement timed mutex in terms of condition_variable and a regular mutex.

It seems strange that this is no big deal to lots of people.

On Jul 7, 2015 11:51 AM, "mac at mcrowe dot com" wrote:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=41861
>
> --- Comment #10 from Mike Crowe ---
> (In reply to Mike Crowe from comment #9)
> > 3. condition_variable should support wait_until using at least steady_clock
> > (CLOCK_MONOTONIC) and system_clock (CLOCK_REALTIME.) Relative wait
> > operations should use steady_clock. User-defined clocks should probably
> > convert to steady_clock.
> >
> > I believe that only option 3 makes any sense but this requires an equivalent
> > to pthread_cond_timedwait that supports specifying the clock. The glibc
> > implementation of such a function would be straightforward.
>
> I've proposed a patch that implements this option at:
>
> https://gcc.gnu.org/ml/libstdc++/2015-07/msg00024.html
>
> and the required glibc change at:
>
> https://sourceware.org/ml/libc-alpha/2015-07/msg00193.html
>
> --
> You are receiving this mail because:
> You reported the bug.
[Bug libstdc++/100259] New: ODR violations in <experimental/internet>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100259

Bug ID: 100259
Summary: ODR violations in <experimental/internet>
Product: gcc
Version: 12.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: libstdc++
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com
Target Milestone: ---

The current implementation in <experimental/internet> has functions that violate the ODR:

  std::experimental::net::ip::make_error_code
  std::experimental::net::ip::make_error_condition
  std::experimental::net::ip::make_network_v4

It seems these should be inline and/or constexpr. There are probably others.
[Bug c++/100322] New: Switching from std=c++17 to std=c++20 causes performance regression in relationals
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100322

Bug ID: 100322
Summary: Switching from std=c++17 to std=c++20 causes performance regression in relationals
Product: gcc
Version: 12.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c++
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com
Target Milestone: ---

Experiment here: https://godbolt.org/z/PT73cn5e5

#include <chrono>

using clk = std::chrono::steady_clock;

bool compare_count(clk::duration a, clk::duration b) {
  return a.count() > b.count();
}

bool compare(clk::duration a, clk::duration b) {
  return a > b;
}

Compiling with -std=c++17 I get:

_Z13compare_countNSt6chrono8durationIxSt5ratioILx1ELx10S3_:
        cmp     r2, r0
        sbcs    r3, r3, r1
        ite     lt
        movlt   r0, #1
        movge   r0, #0
        bx      lr
_Z7compareNSt6chrono8durationIxSt5ratioILx1ELx10S3_:
        cmp     r2, r0
        sbcs    r3, r3, r1
        ite     lt
        movlt   r0, #1
        movge   r0, #0
        bx      lr

Compiling with -std=c++20 I get:

_Z13compare_countNSt6chrono8durationIxSt5ratioILx1ELx10S3_:
        cmp     r2, r0
        sbcs    r3, r3, r1
        ite     lt
        movlt   r0, #1
        movge   r0, #0
        bx      lr
_Z7compareNSt6chrono8durationIxSt5ratioILx1ELx10S3_:
        cmp     r1, r3
        it      eq
        cmpeq   r0, r2
        beq     .L4
        cmp     r0, r2
        sbcs    r3, r1, r3
        bge     .L5
        mov     r0, #-1
.L3:
        cmp     r0, #0
        ite     le
        movle   r0, #0
        movgt   r0, #1
        bx      lr
.L4:
        movs    r0, #0
        b       .L3
.L5:
        movs    r0, #1
        b       .L3

(Note that clang doesn't have this problem)
[Bug c/108518] New: Format-overflow warning using `%.*s` directive with null but zero-length string
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108518

Bug ID: 108518
Summary: Format-overflow warning using `%.*s` directive with null but zero-length string
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com
Target Milestone: ---

https://godbolt.org/z/YGra91Woa

#include <stdio.h>

int main() {
  // This causes a format-overflow warning, but it
  // should not warn if size() is 0
  printf("%.*s\n", 0, (char*)0);
}

The warning is:

<source>: In function 'int main()':
<source>:6:13: warning: '%.*s' directive argument is null [-Wformat-overflow=]
    6 |     printf("%.*s\n", 0, (char*)0);
      |             ^~~~

I see this commonly when using std::string_view with printf. In cases where it knows that you're passing a default-constructed string_view it produces this warning. It should not produce this warning if the length being printed is 0.
[Bug libstdc++/106802] New: Comparators in <functional> don't work with orderings in <compare>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106802

Bug ID: 106802
Summary: Comparators in <functional> don't work with orderings in <compare>
Product: gcc
Version: unknown
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: libstdc++
Assignee: unassigned at gcc dot gnu.org
Reporter: aaron at aarongraham dot com
Target Milestone: ---

gcc does not allow this to compile:

  std::less<>{}(std::strong_ordering::less, 0);

Even though `std::strong_ordering::less < 0` is perfectly legal and well-formed.

It will compile this (but clang will not):

  std::less<>{}(std::strong_ordering::less, nullptr);

Godbolt link: https://gcc.godbolt.org/z/9ed16KbhP