https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79647

--- Comment #6 from Martin Sebor <msebor at gcc dot gnu.org> ---
I'm not entirely sure.  But the trouble with the test case is that it converts
between signed and unsigned integers of different sizes.  The async_jobs
variable's type is int but memset takes size_t (which is unsigned long in
LP64).  Constraining a signed variable to a non-negative range should work, but
in complex code the inferred range can easily be lost and the value treated as
possibly negative again.  The best way to guarantee this doesn't happen is to
make the variable unsigned.  In the test case, change async_jobs to unsigned
and add the hunk below.  That has the same effect and avoids the warning.

             }
+            if (async_jobs > 99999) {
+                BIO_printf(bio_err,
+                           "%s: too many async_jobs\n",
+                           prog);
+                goto opterr;
+            }

Btw., there's another hidden instance of the same issue lurking here.
app_malloc is declared to take its size as a signed int, and there's nothing to
let GCC know it's an allocation function.  The function would be better
declared like so, to help GCC find buffer overflows and detect excessive
allocations.

void* app_malloc(size_t sz, const char *what) __attribute__ ((alloc_size (1)));

With this declaration (and without the int -> unsigned change above), GCC also
warns for the call to it, for the same reason:

apps/speed.c: In function ‘speed_main’:
apps/speed.c:1514:14: warning: argument 1 range [18446742819579101184,
18446744073709551032] exceeds maximum object size 9223372036854775807
[-Walloc-size-larger-than=]
In file included from apps/speed.c:36:0:
apps/apps.h:473:7: note: in a call to allocation function ‘app_malloc’ declared
here
