https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78786
--- Comment #3 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
Well, concerning "%.*Rf", indeed, mpfr_snprintf allocates 2 GB for a short period. But I notice that glibc is much worse. Consider the following program.

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <mpfr.h>

int main (int argc, char **argv)
{
  mpfr_t x;
  int i, m, r;

  if (argc != 3)
    exit (1);
  i = atoi (argv[1]);
  m = atoi (argv[2]);
  if (i <= 0)
    i = INT_MAX + i;
  mpfr_init2 (x, 64);
  mpfr_set_ui (x, 0, MPFR_RNDN);
  r = m == 0 ? snprintf (0, 0, "%.*f", i, 0.0) :
      m == 1 ? gmp_snprintf (0, 0, "%.*f", i, 0.0) :
      m == 2 ? mpfr_snprintf (0, 0, "%.*f", i, 0.0) :
      m == 3 ? mpfr_snprintf (0, 0, "%.*Rf", i, x) :
      (fprintf (stderr, "bad value of m\n"), -2);
  printf ("%d\n", r);
  mpfr_clear (x);
  return 0;
}

With the arguments "-33 0", this program outputs:

2147483616 10487712

i.e. glibc uses 10 GB (and it takes much time). With the arguments "-33 3", it outputs:

2147483616 2099120

i.e. MPFR uses only 2 GB.

If I replace the number 0 by 0.5 and use the size 20000000, then MPFR still takes less memory than glibc; this time, MPFR is much slower than glibc, but this was expected (the very specific case of small-precision input with large-precision output is simply not optimized).

Now, in both cases, it seems that both glibc and MPFR compute the result even though it will not be used (in the worst cases, this may be necessary in order to distinguish between 9.999...99 and 10.000...00, which have different sizes, but not in the average case). Is this a problem in practice? If this is done to obtain the size of a buffer for a subsequent snprintf call with the same arguments, then not really.

An easier improvement that could be done in MPFR is to detect int overflow early, but this is a corner case here, and the user could first have optimized things on his side.

Note that glibc is buggy in this respect:
https://sourceware.org/bugzilla/show_bug.cgi?id=17829