From: Matthew Malcomson <mmalcom...@nvidia.com>

N.b. including testsuite maintainers in the hope of getting review of the new effective targets I've added.
-------------- >8 ------- 8< -----------

Tests we include are:
- A set of tests checking the observed arithmetic behaviour of the new builtins.
- A test ensuring the floating point exception handling of the new builtins for each of the standard floating point types.
- A test checking that we error for unknown types in a similar manner to before.

------------------------------
Name effective targets sync_double_runtime (and similarly for each different floating point type). This not only corresponds slightly better to the underlying effective targets that we use (e.g. sync_long_long_runtime), but also shows the user of the effective target that it requires the ability to run executables (which is required at least in part due to the way in which this effective target is checked).

Ensure we include `dg-add-options` in those testcases which have such an option, allowing them to run without error on as many targets as possible.

We generate checks for whether each floating point type is atomic. These generated checks simply map the floating point type to the effective target of the integral type of the same size. The assumption is that if a target cannot perform integral atomic operations on a given size, it also cannot perform atomic operations on floating point types of that size. This holds for the current mechanism by which GCC implements things like atomic loads.

N.b. the actual effective target TCL procedures are inlined instead of generated. While this is more code, it also means that there is a standard string to `grep` for: when someone finds one of these effective targets in a testcase, they can easily see what that effective target actually checks.

Ensure we include a comment describing why it's OK to require the ability to run an executable to pass these effective targets.

N.b. in order to make this effective target work with C++ as well as C we use checks on `__cplusplus`, and use ${tool}_load rather than gcc_load so that it also works in the g++ testsuite.

------------------------------
Do not link in libatomic for atomic-op-fp* testcases.

As it stands, libatomic cannot easily be linked in for these testcases in this directory. Doing so requires a bit of setup that is handled in gcc.dg/atomic/atomic.exp but is not performed by the gcc.dg/dg.exp test runner. In a following patchset we hope to include libatomic by default, and the testsuite should be updated at that point.

N.b. at this point in the patchset these tests will always fail (they need a later patch that introduces a flag to avoid requiring libatomic).

------------------------------
Require libatomic for atomic-op-fp-fenv.c.

This test clearly needs libatomic (partly due to the use of __atomic_feraiseexcept I introduce, partly due to the __atomic_store implementations required on targets which don't have `long double` atomic loads/stores). Since it clearly needs libatomic, put it in the gcc.dg/atomic directory, which always links libatomic and only runs its tests if libatomic is available.

------------------------------
The directives added spell out the following:
1) This test is a run test (i.e. it should be executed and the execution should succeed).
2) We want to specify -std=c23 for the floating point types introduced in that standard.
3) If libatomic is not available, the target needs to support atomic operations on this data size.
4) The target always needs to support this floating point type.

This allows running the test when libatomic is not available but the target supports atomic operations directly on this data size, and also running the test via libatomic when the target does not support atomic operations directly on the given data size. The latter works when combined with a later patch that makes libatomic available to these testcases.
------------------------------
We also include a testcase checking that we still emit the correct errors, e.g. when bfloat16 is not available on the target, or when it is available but not an arithmetic type.

gcc/testsuite/ChangeLog:

	* lib/target-supports.exp
	(check_effective_target_bfloat16_storage_only): New.
	(check_effective_target_sync_float_runtime): New.
	(check_effective_target_sync_double_runtime): New.
	(check_effective_target_sync_long_double_runtime): New.
	(check_effective_target_sync_bfloat16_runtime): New.
	(check_effective_target_sync_float16_runtime): New.
	(check_effective_target_sync_float32_runtime): New.
	(check_effective_target_sync_float64_runtime): New.
	(check_effective_target_sync_float128_runtime): New.
	(check_effective_target_sync_float32x_runtime): New.
	(check_effective_target_sync_float64x_runtime): New.
	* gcc.dg/atomic-op-fp-errs.c: New test.
	* gcc.dg/atomic-op-fp.c: New test.
	* gcc.dg/atomic-op-fpf.c: New test.
	* gcc.dg/atomic-op-fpf128.c: New test.
	* gcc.dg/atomic-op-fpf16.c: New test.
	* gcc.dg/atomic-op-fpf16b.c: New test.
	* gcc.dg/atomic-op-fpf32.c: New test.
	* gcc.dg/atomic-op-fpf32x.c: New test.
	* gcc.dg/atomic-op-fpf64.c: New test.
	* gcc.dg/atomic-op-fpf64x.c: New test.
	* gcc.dg/atomic-op-fpl.c: New test.
	* gcc.dg/atomic/atomic-op-fp-fenv.c: New test.
Signed-off-by: Matthew Malcomson <mmalcom...@nvidia.com>
---
 gcc/testsuite/gcc.dg/atomic-op-fp-errs.c      |  14 +
 gcc/testsuite/gcc.dg/atomic-op-fp.c           | 198 +++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf.c          | 198 +++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf128.c       | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf16.c        | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf16b.c       | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf32.c        | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf32x.c       | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf64.c        | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpf64x.c       | 201 ++++++++++
 gcc/testsuite/gcc.dg/atomic-op-fpl.c          | 198 +++++++++
 .../gcc.dg/atomic/atomic-op-fp-fenv.c         | 376 ++++++++++++++++++
 gcc/testsuite/lib/target-supports.exp         | 146 ++++++-
 13 files changed, 2536 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fp-errs.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fp.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf128.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf16.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf16b.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf32.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf32x.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf64.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpf64x.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic-op-fpl.c
 create mode 100644 gcc/testsuite/gcc.dg/atomic/atomic-op-fp-fenv.c

diff --git a/gcc/testsuite/gcc.dg/atomic-op-fp-errs.c b/gcc/testsuite/gcc.dg/atomic-op-fp-errs.c
new file mode 100644
index 00000000000..bf98343b350
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/atomic-op-fp-errs.c
@@ -0,0 +1,14 @@
+/* Test that invalid floating point types still raise an error.  */
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c23" } */
+/* { dg-add-options bfloat16 } */
+
+/* { dg-error "unknown type name '__bf16" "" { target { ! { bfloat16 || bfloat16_storage_only } } } .+2 } */
+/* { dg-error "operand type '__bf16 .' is incompatible with argument 1" "" { target { bfloat16_storage_only && { ! bfloat16 } } } .+2 } */
+__bf16 do_bfloat16_fetch_add (__bf16 *p, __bf16 val) {
+  return __atomic_fetch_add (p, val, 0);
+}
+/* { dg-error "_Float16. is not supported on this target" "" { target { ! float16_noopt } } .+1 } */
+_Float16 do_float16_fetch_add (_Float16 *p, _Float16 val) {
+  return __atomic_fetch_add (p, val, 0);
+}
diff --git a/gcc/testsuite/gcc.dg/atomic-op-fp.c b/gcc/testsuite/gcc.dg/atomic-op-fp.c
new file mode 100644
index 00000000000..1477dbed058
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/atomic-op-fp.c
@@ -0,0 +1,198 @@
+/* Test __atomic routines for existence and proper execution on double
+   values with each valid memory model.  */
+/* { dg-do run } */
+/* Can use fallback if libatomic is available, otherwise need hardware support. */
+/* { dg-require-effective-target sync_double_runtime { target { ! libatomic_available } } } */
+
+/* Test the execution of the __atomic_*OP builtin routines for a double.  */
+
+extern void abort(void);
+
+double v, count, res;
+
+/* The fetch_op routines return the original value before the operation.
+   */
+
+void
+test_fetch_add ()
+{
+  v = res = 0.0;
+  count = 1.0;
+
+  if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0)
+    abort ();
+
+  if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1)
+    abort ();
+
+  if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2)
+    abort ();
+
+  if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3)
+    abort ();
+
+  if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4)
+    abort ();
+
+  if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5)
+    abort ();
+}
+
+
+void
+test_fetch_sub()
+{
+  v = res = 20;
+  count = 0.0;
+
+  if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--)
+    abort ();
+
+  if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--)
+    abort ();
+
+  if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--)
+    abort ();
+
+  if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--)
+    abort ();
+
+  if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--)
+    abort ();
+
+  if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--)
+    abort ();
+}
+
+/* The OP_fetch routines return the new value after the operation.
+   */
+
+void
+test_add_fetch ()
+{
+  v = res = 0.0;
+  count = 1.0;
+
+  if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1)
+    abort ();
+
+  if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2)
+    abort ();
+
+  if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3)
+    abort ();
+
+  if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4)
+    abort ();
+
+  if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5)
+    abort ();
+
+  if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6)
+    abort ();
+}
+
+
+void
+test_sub_fetch ()
+{
+  v = res = 20;
+  count = 0.0;
+
+  if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res)
+    abort ();
+
+  if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res)
+    abort ();
+
+  if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res)
+    abort ();
+
+  if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res)
+    abort ();
+
+  if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res)
+    abort ();
+
+  if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res)
+    abort ();
+}
+
+/* Test the OP routines with a result which isn't used.  Use both variations
+   within each function.
+   */
+
+void
+test_add ()
+{
+  v = res = 0.0;
+  count = 1.0;
+
+  __atomic_add_fetch (&v, count, __ATOMIC_RELAXED);
+  if (v != 1)
+    abort ();
+
+  __atomic_fetch_add (&v, count, __ATOMIC_CONSUME);
+  if (v != 2)
+    abort ();
+
+  __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE);
+  if (v != 3)
+    abort ();
+
+  __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE);
+  if (v != 4)
+    abort ();
+
+  __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL);
+  if (v != 5)
+    abort ();
+
+  __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST);
+  if (v != 6)
+    abort ();
+}
+
+
+void
+test_sub()
+{
+  v = res = 20;
+  count = 0.0;
+
+  __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED);
+  if (v != --res)
+    abort ();
+
+  __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME);
+  if (v != --res)
+    abort ();
+
+  __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE);
+  if (v != --res)
+    abort ();
+
+  __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE);
+  if (v != --res)
+    abort ();
+
+  __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL);
+  if (v != --res)
+    abort ();
+
+  __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST);
+  if (v != --res)
+    abort ();
+}
+
+int
+main ()
+{
+  test_fetch_add ();
+  test_fetch_sub ();
+
+  test_add_fetch ();
+  test_sub_fetch ();
+
+  test_add ();
+  test_sub ();
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf.c b/gcc/testsuite/gcc.dg/atomic-op-fpf.c
new file mode 100644
index 00000000000..467fc1348f5
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/atomic-op-fpf.c
@@ -0,0 +1,198 @@
+/* Test __atomic routines for existence and proper execution on float
+   values with each valid memory model.  */
+/* { dg-do run } */
+/* Can use fallback if libatomic is available, otherwise need hardware support. */
+/* { dg-require-effective-target sync_float_runtime { target { ! libatomic_available } } } */
+
+/* Test the execution of the __atomic_*OP builtin routines for a float.  */
+
+extern void abort(void);
+
+float v, count, res;
+
+/* The fetch_op routines return the original value before the operation.
*/ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf128.c b/gcc/testsuite/gcc.dg/atomic-op-fpf128.c new file mode 100644 index 00000000000..86f6ab39565 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf128.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float128 + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float128_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float128_runtime } */ +/* { dg-add-options float128 } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float128. */ + +extern void abort(void); + +_Float128 v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf16.c b/gcc/testsuite/gcc.dg/atomic-op-fpf16.c new file mode 100644 index 00000000000..81b54d819dd --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf16.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float16 + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float16_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float16_runtime } */ +/* { dg-add-options float16 } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float16. */ + +extern void abort(void); + +_Float16 v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf16b.c b/gcc/testsuite/gcc.dg/atomic-op-fpf16b.c new file mode 100644 index 00000000000..0ff38878d5a --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf16b.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on __bf16 + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_bfloat16_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target bfloat16_runtime } */ +/* { dg-add-options bfloat16 } */ + +/* Test the execution of the __atomic_*OP builtin routines for a __bf16. */ + +extern void abort(void); + +__bf16 v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf32.c b/gcc/testsuite/gcc.dg/atomic-op-fpf32.c new file mode 100644 index 00000000000..29e59dc98c7 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf32.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float32 + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float32_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float32_runtime } */ +/* { dg-add-options float32 } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float32. */ + +extern void abort(void); + +_Float32 v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf32x.c b/gcc/testsuite/gcc.dg/atomic-op-fpf32x.c new file mode 100644 index 00000000000..32526732846 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf32x.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float32x + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float32x_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float32x_runtime } */ +/* { dg-add-options float32x } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float32x. */ + +extern void abort(void); + +_Float32x v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf64.c b/gcc/testsuite/gcc.dg/atomic-op-fpf64.c new file mode 100644 index 00000000000..be75f7c06b5 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf64.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float64 + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float64_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float64_runtime } */ +/* { dg-add-options float64 } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float64. */ + +extern void abort(void); + +_Float64 v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpf64x.c b/gcc/testsuite/gcc.dg/atomic-op-fpf64x.c new file mode 100644 index 00000000000..89ee198dfd2 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpf64x.c @@ -0,0 +1,201 @@ +/* Test __atomic routines for existence and proper execution on _Float64x + values with each valid memory model. */ +/* { dg-do run } */ +/* { dg-additional-options "-std=c23" } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_float64x_runtime { target { ! 
libatomic_available } } } */ +/* { dg-require-effective-target float64x_runtime } */ +/* { dg-add-options float64x } */ + +/* Test the execution of the __atomic_*OP builtin routines for a _Float64x. */ + +extern void abort(void); + +_Float64x v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic-op-fpl.c b/gcc/testsuite/gcc.dg/atomic-op-fpl.c new file mode 100644 index 00000000000..06429855ae4 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic-op-fpl.c @@ -0,0 +1,198 @@ +/* Test __atomic routines for existence and proper execution on long double + values with each valid memory model. */ +/* { dg-do run } */ +/* Can use fallback if libatomic is available, otherwise need hardware support. */ +/* { dg-require-effective-target sync_long_double_runtime { target { ! libatomic_available } } } */ + +/* Test the execution of the __atomic_*OP builtin routines for a long double. 
*/ + +extern void abort(void); + +long double v, count, res; + +/* The fetch_op routines return the original value before the operation. */ + +void +test_fetch_add () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_fetch_add (&v, count, __ATOMIC_RELAXED) != 0) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_CONSUME) != 1) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQUIRE) != 2) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_RELEASE) != 3) + abort (); + + if (__atomic_fetch_add (&v, count, __ATOMIC_ACQ_REL) != 4) + abort (); + + if (__atomic_fetch_add (&v, 1, __ATOMIC_SEQ_CST) != 5) + abort (); +} + + +void +test_fetch_sub() +{ + v = res = 20; + count = 0.0; + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_RELAXED) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_CONSUME) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQUIRE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE) != res--) + abort (); + + if (__atomic_fetch_sub (&v, count + 1, __ATOMIC_ACQ_REL) != res--) + abort (); + + if (__atomic_fetch_sub (&v, 1, __ATOMIC_SEQ_CST) != res--) + abort (); +} + +/* The OP_fetch routines return the new value after the operation. 
*/ + +void +test_add_fetch () +{ + v = res = 0.0; + count = 1.0; + + if (__atomic_add_fetch (&v, count, __ATOMIC_RELAXED) != 1) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_CONSUME) != 2) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQUIRE) != 3) + abort (); + + if (__atomic_add_fetch (&v, 1, __ATOMIC_RELEASE) != 4) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL) != 5) + abort (); + + if (__atomic_add_fetch (&v, count, __ATOMIC_SEQ_CST) != 6) + abort (); +} + + +void +test_sub_fetch () +{ + v = res = 20; + count = 0.0; + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_CONSUME) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQUIRE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, 1, __ATOMIC_RELEASE) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL) != --res) + abort (); + + if (__atomic_sub_fetch (&v, count + 1, __ATOMIC_SEQ_CST) != --res) + abort (); +} + +/* Test the OP routines with a result which isn't used. Use both variations + within each function. 
*/ + +void +test_add () +{ + v = res = 0.0; + count = 1.0; + + __atomic_add_fetch (&v, count, __ATOMIC_RELAXED); + if (v != 1) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_CONSUME); + if (v != 2) + abort (); + + __atomic_add_fetch (&v, 1.0 , __ATOMIC_ACQUIRE); + if (v != 3) + abort (); + + __atomic_fetch_add (&v, 1.0, __ATOMIC_RELEASE); + if (v != 4) + abort (); + + __atomic_add_fetch (&v, count, __ATOMIC_ACQ_REL); + if (v != 5) + abort (); + + __atomic_fetch_add (&v, count, __ATOMIC_SEQ_CST); + if (v != 6) + abort (); +} + + +void +test_sub() +{ + v = res = 20; + count = 0.0; + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_RELAXED); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_CONSUME); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, 1, __ATOMIC_ACQUIRE); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, 1, __ATOMIC_RELEASE); + if (v != --res) + abort (); + + __atomic_sub_fetch (&v, count + 1, __ATOMIC_ACQ_REL); + if (v != --res) + abort (); + + __atomic_fetch_sub (&v, count + 1, __ATOMIC_SEQ_CST); + if (v != --res) + abort (); +} + +int +main () +{ + test_fetch_add (); + test_fetch_sub (); + + test_add_fetch (); + test_sub_fetch (); + + test_add (); + test_sub (); + + return 0; +} diff --git a/gcc/testsuite/gcc.dg/atomic/atomic-op-fp-fenv.c b/gcc/testsuite/gcc.dg/atomic/atomic-op-fp-fenv.c new file mode 100644 index 00000000000..2f3ffab6886 --- /dev/null +++ b/gcc/testsuite/gcc.dg/atomic/atomic-op-fp-fenv.c @@ -0,0 +1,376 @@ +/* Test for builtin fetch_add with floating point types. Test + floating-point exceptions for compound assignment are consistent with + result (so that if multiple iterations of the compare-and-exchange + loop are needed, exceptions get properly cleared). 
*/ +/* { dg-do run } */ +/* { dg-options "-std=c11 -pedantic-errors -pthread -U_POSIX_C_SOURCE -D_POSIX_C_SOURCE=200809L" } */ +/* { dg-add-options ieee } */ +/* { dg-additional-options "-mfp-trap-mode=sui" { target alpha*-*-* } } */ +/* { dg-additional-options "-D_XOPEN_SOURCE=600" { target *-*-solaris2* } } */ +/* { dg-additional-options "-D_HPUX_SOURCE" { target *-*-hpux* } } */ +/* { dg-require-effective-target fenv_exceptions } */ +/* { dg-require-effective-target pthread } */ +/* { dg-timeout-factor 2 } */ + +#include <fenv.h> +#include <float.h> +#include <pthread.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> + +/* N.b. in this testcase we only check that the floating point + environment is correctly handled. We do not check if the addition + semantics are correctly implemented. That is tested in + atomic-op-fp.c and its similar associated files. */ + +#define TEST_ALL_EXCEPT (FE_DIVBYZERO \ + | FE_INEXACT \ + | FE_INVALID \ + | FE_OVERFLOW \ + | FE_UNDERFLOW) + +#if defined __alpha__ || defined __aarch64__ + #define ITER_COUNT 100 +#else + #define ITER_COUNT 10000 +#endif + +static volatile _Atomic bool thread_ready, thread_stop; + +/* Generate test code (with NAME used to name functions and variables) + for __atomic_fetch_##OP and __atomic_##OP##_fetch calls on a variable + of type LHSTYPE. One thread repeatedly stores the values INIT1 and + INIT2 in a variable, while the other repeatedly executes the atomic + call having set floating-point exceptions to BEXC. For both + calls we check whether the INIT1 or INIT2 was used in the operation + (using BEFORETEST1(var) for the fetch_op variant and AFTERTEST1(var) + for the op_fetch variant). + When INIT1 was used, the floating-point exceptions should be BEXC | + EXC1; otherwise, they should be BEXC | EXC2. A function + test_main_##NAME is generated that returns nonzero on failure, zero + on success. 
*/ + +#define TEST_FUNCS(NAME, LHSTYPE, OP, VAL, MEMORDER, BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) \ + \ + static volatile LHSTYPE var_##NAME; \ + \ + static void *test_thread_##NAME (void *arg) \ + { \ + thread_ready = true; \ + LHSTYPE temp1 = INIT1; \ + LHSTYPE temp2 = INIT2; \ + while (!thread_stop) \ + { \ + sched_yield (); \ + __atomic_store (&var_##NAME, &temp1, __ATOMIC_RELAXED); \ + sched_yield (); \ + __atomic_store (&var_##NAME, &temp2, __ATOMIC_RELAXED); \ + sched_yield (); \ + } \ + return NULL; \ + } \ + \ + static int test_main_##NAME (void) \ + { \ + LHSTYPE temp1 = INIT1; \ + LHSTYPE temp2 = INIT2; \ + thread_stop = false; \ + thread_ready = false; \ + __atomic_store (&var_##NAME, &temp1, __ATOMIC_RELAXED); \ + pthread_t thread_id; \ + int pret = pthread_create (&thread_id, NULL, test_thread_##NAME, NULL); \ + if (pret != 0) \ + { \ + printf ("pthread_create failed: %d\n", pret); \ + return 1; \ + } \ + int num_1_pass_a = 0, num_1_fail_a = 0, num_2_pass_a = 0, \ + num_2_fail_a = 0; \ + int num_1_pass_b = 0, num_1_fail_b = 0, num_2_pass_b = 0, \ + num_2_fail_b = 0; \ + while (!thread_ready) \ + sched_yield (); \ + for (int i = 0; i < ITER_COUNT; i++) \ + { \ + feclearexcept (FE_ALL_EXCEPT); \ + feraiseexcept (BEXC); \ + LHSTYPE r = __atomic_fetch_##OP (&var_##NAME, VAL, MEMORDER); \ + int rexc = fetestexcept (TEST_ALL_EXCEPT); \ + sched_yield (); \ + if (BEFORETEST1 (r)) \ + { \ + if (rexc == ((BEXC) | (EXC1))) \ + num_1_pass_a++; \ + else \ + num_1_fail_a++; \ + __atomic_store (&var_##NAME, &temp2, __ATOMIC_RELAXED); \ + } \ + else \ + { \ + if (rexc == ((BEXC) | (EXC2))) \ + num_2_pass_a++; \ + else \ + num_2_fail_a++; \ + __atomic_store (&var_##NAME, &temp1, __ATOMIC_RELAXED); \ + } \ + } \ + for (int i = 0; i < ITER_COUNT; i++) \ + { \ + feclearexcept (FE_ALL_EXCEPT); \ + feraiseexcept (BEXC); \ + LHSTYPE r = __atomic_##OP##_fetch (&var_##NAME, VAL, MEMORDER); \ + int rexc = fetestexcept (TEST_ALL_EXCEPT); \ + sched_yield 
(); \ + if (AFTERTEST1 (r)) \ + { \ + if (rexc == ((BEXC) | (EXC1))) \ + num_1_pass_b++; \ + else \ + num_1_fail_b++; \ + __atomic_store (&var_##NAME, &temp2, __ATOMIC_RELAXED); \ + } \ + else \ + { \ + if (rexc == ((BEXC) | (EXC2))) \ + num_2_pass_b++; \ + else \ + num_2_fail_b++; \ + __atomic_store (&var_##NAME, &temp1, __ATOMIC_RELAXED); \ + } \ + } \ + thread_stop = true; \ + pthread_join (thread_id, NULL); \ + printf (#NAME " A: (a) %d pass, %d fail; (b) %d pass, %d fail\n", \ + num_1_pass_a, num_1_fail_a, num_2_pass_a, num_2_fail_a); \ + printf (#NAME " B: (a) %d pass, %d fail; (b) %d pass, %d fail\n", \ + num_1_pass_b, num_1_fail_b, num_2_pass_b, num_2_fail_b); \ + return num_1_fail_a || num_2_fail_a || num_1_fail_b || num_2_fail_b; \ + } + +/* Use this to generate tests for each of the memory orders. + Each memory order should have the same result, so the tests are easy + to generate for each one. */ +#define ALL_MEMORDERS(X, NAME, LHSTYPE, OP, VAL, BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_relaxed, LHSTYPE, OP, VAL, __ATOMIC_RELAXED, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_consume, LHSTYPE, OP, VAL, __ATOMIC_CONSUME, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_acquire, LHSTYPE, OP, VAL, __ATOMIC_ACQUIRE, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_release, LHSTYPE, OP, VAL, __ATOMIC_RELEASE, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_acq_rel, LHSTYPE, OP, VAL, __ATOMIC_ACQ_REL, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) \ + X (NAME##_##OP##_seq_cst, LHSTYPE, OP, VAL, __ATOMIC_SEQ_CST, BEXC, INIT1, \ + BEFORETEST1, AFTERTEST1, EXC1, INIT2, EXC2) + +/* Use this macro to generate tests for both fetch_add/add_fetch pairs, + and the fetch_sub/sub_fetch pairs with a negative value. + N.b. 
the expansion of one operation could result in generating the + other depending on what operations the target provides. */ +#define ADD_AND_SUB(X, NAME, LHSTYPE, VAL, BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) \ + ALL_MEMORDERS (X, NAME, LHSTYPE, add, VAL, BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) \ + ALL_MEMORDERS (X, NAME, LHSTYPE, sub, (-(VAL)), BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) + +#define IS_0(X) ((X) == 0) +#define NOT_0(X) ((X) != 0) +#define NOT_DBL_EPSILON_2(X) ((X) != DBL_EPSILON / 2) +#define NOT_FLT_EPSILON_2(X) ((X) != FLT_EPSILON / 2) +#define NOT_MINUS_DBL_EPSILON_2(X) ((X) != -DBL_EPSILON / 2) +#define NOT_MINUS_FLT_EPSILON_2(X) ((X) != -FLT_EPSILON / 2) +#define NOT_MINUS_LDBL_EPSILON_2(X) ((X) != -LDBL_EPSILON / 2) + +#if LDBL_MANT_DIG != 106 +# define LDBL_TESTS_MANT_DIG_NOT106(X) \ + ADD_AND_SUB(X, long_double_add_overflow, long double, \ + /* Addition value */ LDBL_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ LDBL_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, long_double_sub_overflow, long double, \ + /* Addition value */ -LDBL_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ -LDBL_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) +#else +# define LDBL_TESTS_MANT_DIG_NOT106(X) +#endif + +#define ALL_TESTS(X) \ + ADD_AND_SUB(X, float_invalid, float, \ + /* Addition value */ __builtin_inff (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ -__builtin_inff(), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ 
+ ADD_AND_SUB(X, float_invalid_prev, float, \ + /* Addition value */ __builtin_inff (), \ + /* Before flags */ FE_DIVBYZERO | FE_INEXACT | FE_OVERFLOW | FE_UNDERFLOW, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ -__builtin_inff (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + ADD_AND_SUB(X, float_add_overflow, float, \ + /* Addition value */ FLT_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ FLT_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, float_add_overflow_prev, float, \ + /* Addition value */ FLT_MAX, \ + /* Before flags */ FE_INVALID, \ + /* INIT1 */ FLT_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, float_add_inexact, float, \ + /* Addition value */ FLT_EPSILON / 2, \ + /* Before flags */ 0, \ + /* INIT1 */ 1.0f, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ NOT_FLT_EPSILON_2, \ + /* exception FLAGS when INIT1 */ FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, float_sub_invalid, float, \ + /* Addition value */ -__builtin_inff (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ __builtin_inff (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + ADD_AND_SUB(X, float_sub_overflow, float, \ + /* Addition value */ -FLT_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ -FLT_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS 
when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, float_sub_inexact, float, \ + /* Addition value */ -FLT_EPSILON / 2, \ + /* Before flags */ 0, \ + /* INIT1 */ -1.0f, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ NOT_MINUS_FLT_EPSILON_2, \ + /* exception FLAGS when INIT1 */ FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, double_add_invalid, double, \ + /* Addition value */ __builtin_inf (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ -__builtin_inf (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + ADD_AND_SUB(X, double_add_overflow, double, \ + /* Addition value */ DBL_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ DBL_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, double_add_inexact, double, \ + /* Addition value */ DBL_EPSILON / 2, \ + /* Before flags */ 0, \ + /* INIT1 */ 1.0, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ NOT_DBL_EPSILON_2, \ + /* exception FLAGS when INIT1 */ FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, double_sub_invalid, double, \ + /* Addition value */ -__builtin_inf (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ __builtin_inf (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + ADD_AND_SUB(X, double_sub_overflow, double, \ + /* Addition value */ -DBL_MAX, \ + /* Before flags */ 0, \ + /* INIT1 */ -DBL_MAX, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ 
__builtin_isinf, \ + /* exception FLAGS when INIT1 */ FE_OVERFLOW | FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, double_sub_inexact, double, \ + /* Addition value */ -DBL_EPSILON / 2, \ + /* Before flags */ 0, \ + /* INIT1 */ -1.0, \ + /* BEFORETEST for INIT1 */ NOT_0, \ + /* AFTERTEST for INIT1 */ NOT_MINUS_DBL_EPSILON_2, \ + /* exception FLAGS when INIT1 */ FE_INEXACT, \ + /* INIT2 */ 0, \ + /* exception FLAGS when INIT2 */ 0) \ + ADD_AND_SUB(X, long_double_add_invalid, long double, \ + /* Addition value */ __builtin_infl (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ -__builtin_infl (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + ADD_AND_SUB(X, long_double_sub_invalid, long double, \ + /* Addition value */ -__builtin_infl (), \ + /* Before flags */ 0, \ + /* INIT1 */ 0, \ + /* BEFORETEST for INIT1 */ IS_0, \ + /* AFTERTEST for INIT1 */ __builtin_isinf, \ + /* exception FLAGS when INIT1 */ 0, \ + /* INIT2 */ __builtin_infl (), \ + /* exception FLAGS when INIT2 */ FE_INVALID) \ + LDBL_TESTS_MANT_DIG_NOT106(X) + + +ALL_TESTS(TEST_FUNCS) + +int +main (void) +{ + int ret = 0; +#define DOCHECK(NAME, LHSTYPE, OP, VAL, MEMORDER, BEXC, INIT1, BEFORETEST1, \ + AFTERTEST1, EXC1, INIT2, EXC2) \ + ret |= test_main_##NAME (); + + ALL_TESTS (DOCHECK); + if (ret != 0) + abort (); + else + exit (0); +} diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp index b69960c4cfa..da57bbf5a78 100644 --- a/gcc/testsuite/lib/target-supports.exp +++ b/gcc/testsuite/lib/target-supports.exp @@ -4115,6 +4115,18 @@ proc add_options_for_bfloat16 { flags } { return "$flags" } +# Return 1 if the target supports bfloat16 as a storage type and does not +# support bfloat16 as an arithmetic type. 
+proc check_effective_target_bfloat16_storage_only {} {
+    if {![check_effective_target_bfloat16]
+	&& [check_no_compiler_messages_nocache bfloat16_storage_only object {
+	    __bf16 doload (__bf16 *x) { return *x; }
+	} [add_options_for_bfloat16 ""]] } {
+	return 1
+    }
+    return 0
+}
+
 # Return 1 if the target supports all four forms of fused multiply-add
 # (fma, fms, fnma, and fnms) for both float and double.
 
@@ -9960,7 +9972,139 @@ proc check_effective_target_sync_char_short { } {
	 || [check_effective_target_mips_llsc] }}]
 }
 
-# Return 1 if thread_fence does not rely on __sync_synchronize
+# While check_compile always generates a file with the `.c` suffix, it doesn't
+# force g++ to compile it as if it were C.  Hence we have to account for C++
+# *and* C.
+proc generate_floating_sync_check_template { typename } {
+    return [format {
+#ifdef __cplusplus
+	extern "C" {
+#endif
+	extern int printf(const char *, ...);
+#ifdef __cplusplus
+	}
+#endif
+	int main () {
+#ifdef __cplusplus
+	    static_assert
+#else
+	    _Static_assert
+#endif
+		(sizeof(%1$s) == 1
+		 || sizeof(%1$s) == 2
+		 || sizeof(%1$s) == 4
+		 || sizeof(%1$s) == 8
+		 || sizeof(%1$s) == 16
+		 , "%1$s size not found");
+	    if (sizeof(%1$s) == sizeof(char) || sizeof(%1$s) == sizeof(short)) {
+		printf("sync_char_short");
+	    } else if (sizeof(%1$s) == sizeof(int) || sizeof(%1$s) == sizeof(long)) {
+		printf("sync_int_long");
+	    } else if (sizeof(%1$s) == sizeof(long long)) {
+		printf("sync_long_long_runtime");
+	    } else {
+		printf("sync_int_128_runtime");
+	    }
+	    return 0;
+	}
+    } $typename]
+}
+foreach { typename typeid } {
+    float float
+    double double
+    {long double} long_double
+    __bf16 bfloat16
+    _Float16 _Float16
+    _Float32 _Float32
+    _Float64 _Float64
+    _Float128 _Float128
+    _Float32x _Float32x
+    _Float64x _Float64x
+} {
+    proc atomic_${typeid}_evalcheck { typename typeidarg } {
+	set template [generate_floating_sync_check_template $typename]
+	set tempvar [check_compile size_$typeidarg executable $template ""]
+	# If it failed to compile then the type isn't available and this
+	# floating type is certainly not atomic on this target.
+	if { ![string match "" [lindex $tempvar 0]] } {
+	    return 0;
+	}
+	# Now we want to run the binary and see what it prints out.
+	# N.b. it feels OK to require the ability to run binaries on this
+	# target for the sync_<floating-point-type>_runtime effective targets
+	# since they are specifically about whether we can run atomic code with
+	# those floating point types.
+	global tool
+	set output [${tool}_load "./[lindex $tempvar 1]"]
+	remote_file build delete [lindex $tempvar 1]
+	if { [lindex $output 0] != "pass" } {
+	    return 0;
+	}
+	return [check_effective_target_[lindex $output 1]]
+    }
+    # Could use the below to generate the effective target procedures within
+    # this for loop.  But we write the effective target procs out manually in
+    # order to ease searching for them with `grep`.
+    # set genstring {proc check_effective_target_sync_%1$s_runtime {} {
+    #	  return [check_cached_effective_target sync_%1$s \
+    #		      "atomic_%1$s_evalcheck {%2$s} %1$s"]
+    # }}
+    # set eval_string [format $genstring $typeid $typename]
+    # eval $eval_string
+}
+unset typename
+unset typeid
+
+proc check_effective_target_sync_float_runtime {} {
+    return [check_cached_effective_target sync_float \
+		"atomic_float_evalcheck {float} float"]
+}
+
+proc check_effective_target_sync_double_runtime {} {
+    return [check_cached_effective_target sync_double \
+		"atomic_double_evalcheck {double} double"]
+}
+
+proc check_effective_target_sync_long_double_runtime {} {
+    return [check_cached_effective_target sync_long_double \
+		"atomic_long_double_evalcheck {long double} long_double"]
+}
+
+proc check_effective_target_sync_bfloat16_runtime {} {
+    return [check_cached_effective_target sync_bfloat16 \
+		"atomic_bfloat16_evalcheck {__bf16} bfloat16"]
+}
+
+proc check_effective_target_sync_float16_runtime {} {
+    return [check_cached_effective_target sync_float16 \
+		"atomic__Float16_evalcheck {_Float16} _Float16"]
+}
+
+proc check_effective_target_sync_float32_runtime {} {
+    return [check_cached_effective_target sync_float32 \
+		"atomic__Float32_evalcheck {_Float32} _Float32"]
+}
+
+proc check_effective_target_sync_float64_runtime {} {
+    return [check_cached_effective_target sync_float64 \
+		"atomic__Float64_evalcheck {_Float64} _Float64"]
+}
+
+proc check_effective_target_sync_float128_runtime {} {
+    return [check_cached_effective_target sync_float128 \
+		"atomic__Float128_evalcheck {_Float128} _Float128"]
+}
+
+proc check_effective_target_sync_float32x_runtime {} {
+    return [check_cached_effective_target sync_float32x \
+		"atomic__Float32x_evalcheck {_Float32x} _Float32x"]
+}
+
+proc check_effective_target_sync_float64x_runtime {} {
+    return [check_cached_effective_target sync_float64x \
+		"atomic__Float64x_evalcheck {_Float64x} _Float64x"]
+}
+
+# Return 1 if thread_fence does not rely on __sync_synchronize
 # library function
 proc check_effective_target_thread_fence {}
-- 
2.43.0