https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84666
Bug ID: 84666
Summary: ostringstream prints floats 2x slower than snprintf,
when precision>=37
Product: gcc
Version: 7.2.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: libstdc++
Assignee: unassigned at gcc dot gnu.org
Reporter: b7.10110111 at gmail dot com
Target Milestone: ---
Created attachment 43541
--> https://gcc.gnu.org/bugzilla/attachment.cgi?id=43541&action=edit
Test program
If you compile and run the attached test program, you'll notice that
ostringstream performance becomes 2x slower at precision >= 38 (and 1.5x slower
on average at precision == 37). I've traced it to _M_insert_float using an
initial buffer that is too small regardless of the requested precision, which
forces a second call to std::__convert_from_v.
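
For reference, the effect can be reproduced with a minimal benchmark along
these lines (a sketch only; the attached test program may differ in details):

    #include <chrono>
    #include <cstdio>
    #include <iostream>
    #include <sstream>

    int main()
    {
        using clock = std::chrono::steady_clock;
        const double value = 3.141592653589793;
        const int iterations = 200000;

        for (int prec = 30; prec <= 45; ++prec)
        {
            // Time operator<< on an ostringstream at the given precision.
            auto t0 = clock::now();
            for (int i = 0; i < iterations; ++i)
            {
                std::ostringstream oss;
                oss.precision(prec);
                oss << value;
            }
            auto t1 = clock::now();

            // Time snprintf at the same precision for comparison.
            char buf[128];
            auto t2 = clock::now();
            for (int i = 0; i < iterations; ++i)
                std::snprintf(buf, sizeof buf, "%.*g", prec, value);
            auto t3 = clock::now();

            using std::chrono::duration_cast;
            using std::chrono::milliseconds;
            std::cout << "precision " << prec
                      << ": ostringstream "
                      << duration_cast<milliseconds>(t1 - t0).count() << " ms"
                      << ", snprintf "
                      << duration_cast<milliseconds>(t3 - t2).count() << " ms\n";
        }
    }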
The offending line, together with its preceding comment, is:

    // First try a buffer perhaps big enough (most probably sufficient
    // for non-ios_base::fixed outputs)
    int __cs_size = __max_digits * 3;
Here __max_digits is a numeric trait of _ValueT and does not depend on __prec.
It seems more correct to use __prec instead of, or in addition to, __max_digits
here.
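
One possible direction (a sketch only, not a tested patch; the exact bound is
an assumption, not taken from any existing change) would be to let the
first-try buffer grow with the requested precision:

    // Sketch: let the first attempt account for the requested precision,
    // so a single __convert_from_v call suffices even for large __prec.
    int __cs_size = __max_digits * 3 + __prec;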
Interestingly, a few lines below, in the #else branch of #if
_GLIBCXX_USE_C99_STDIO, we can see that __prec is taken into account in the
calculation of __cs_size. Apparently, on Kubuntu 14.04 amd64,
_GLIBCXX_USE_C99_STDIO is set to 1, so that __prec-aware branch is not used.
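
For what it's worth, one way to check which branch a given installation uses
(a quick sketch; any libstdc++ standard header pulls in <bits/c++config.h>,
which defines the macro when the C99 stdio path is enabled):

    #include <iostream>

    int main()
    {
    #if defined(_GLIBCXX_USE_C99_STDIO) && _GLIBCXX_USE_C99_STDIO
        // The __max_digits * 3 path (the one showing the slowdown) is used.
        std::cout << "_GLIBCXX_USE_C99_STDIO is set\n";
    #else
        // The __prec-aware sizing in the #else branch is used.
        std::cout << "_GLIBCXX_USE_C99_STDIO is not set\n";
    #endif
    }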