https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80092

Thomas Schwinge <tschwinge at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|                            |nvptx
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-03-23
     Ever confirmed|0                           |1

--- Comment #2 from Thomas Schwinge <tschwinge at gcc dot gnu.org> ---
(In reply to Tom de Vries from comment #0)
> Atm, when running for target nvptx, we run into unsupported features in the
> tests.
> 
> F.i. in the g++ testsuite:
> ...
> $ grep -c "sorry, unimplemented:" g++.log 
> 12693
> ...

... a lot...

> more in detail:
> ...
> $ grep "sorry, unimplemented:" g++.log | sed 's/.*sorry, unimplemented://' |
> dos2unix | sort -u 
>  converting lambda which uses '...' to function pointer
>  global constructors not supported on this target
>  global destructors not supported on this target
>  indirect jumps are not available on this target
>  mangling __underlying_type
>  non-trivial designated initializers not supported
>  passing arguments to ellipsis of inherited constructor 'B::B(int, ...)
> [inherited from A]'
>  target cannot support alloca.
>  target cannot support nonlocal goto.
> ...
> 
> All those tests are FAILs, while they should be UNSUPPORTED.
> 
> In principle, having those as FAILs is not a problem when regression
> testing. We can compare tests results, and only look at changes.
> 
> But it's better to introduce effective-target keywords for those features,
> and mark the tests as such. That will reduce the noise rate because of
> unsupported features being used or not due to code generation changes.

But that would be a rather large effort -- both to do, and to keep up to date.  Is
it really worth it?
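
For context, the proposal amounts to adding checks along these lines (a sketch only; the `global_constructor` keyword name and proc body are hypothetical, merely following the existing `check_effective_target_*` convention in gcc/testsuite/lib/target-supports.exp):

```tcl
# Hypothetical new check in gcc/testsuite/lib/target-supports.exp,
# modeled on the existing check_effective_target_* procs.
# Returns 1 if the target supports running global constructors.
proc check_effective_target_global_constructor { } {
    if { [istarget nvptx-*-*] } {
        # nvptx emits "sorry, unimplemented: global constructors
        # not supported on this target".
        return 0
    }
    return 1
}
```

Each affected test would then carry a directive such as `// { dg-require-effective-target global_constructor }`, so the result becomes UNSUPPORTED instead of FAIL -- and every one of the "sorry, unimplemented" categories listed above would need its own such keyword, plus annotations on the corresponding tests.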
