https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96566

--- Comment #6 from Tom de Vries <vries at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #3)
> Either the test can be skipped on nvptx or any targets that don't emit
> something like a .zero similar directive, or we should after the size of
> variable is too large diagnostic throw the initializer away (set it to
> error_mark_node)?
> Of course, I guess the timeout will happen even if the object size is not
> too large for the warning, just slightly below it,
> struct Ax_m3 { char a[PTRDIFF_MAX / 32 - 3], ax[]; };
> struct Ax_m3 xm3_3 = { { 0 }, { 1, 2, 3 } };
> will IMHO still timeout if it needs to emit 288 quadrillion "0, " strings.

Agreed. I browsed the ptx spec a bit, hoping for a better way to express
this, but it seems there isn't one, even in the latest ptx version (7.0).

As for the ptx back-end, we could add an -minit-limit option, with a
reasonable default.

With a size of 0xfffffff we take 5s and generate a 193MB assembly file.

With a size of 0xffffffff we take 1m10s and generate a 3.1GB assembly file.

So perhaps the former could be a good default.

Then, when hitting the limit, we error out instead of timing out or running
out of disk space.