https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111741
Bug ID: 111741
Summary: gcc long double precision
Product: gcc
Version: 11.4.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: bernardwidynski at gmail dot com
Target Milestone: ---

Created attachment 56081
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=56081&action=edit
C program to compute sum of numbers 1, 2, 3, ... N

It is my understanding that the long double in gcc has 80 bits of precision.
I've run a simple program which shows that it has less than 80 bits of
precision.

The numbers 1, 2, 3, ... N are summed and compared with N*(N+1)/2.

For the case where N = 2^32, the sums compare correctly.
For the case where N = 2^33, the sums are different.

2^33*(2^33+1)/2 requires fewer than 80 bits of precision. Why doesn't the
long double have the capacity for this computation?

See the attached program and output file. This was run on Cygwin64 using gcc
version 11.4.0 on an Intel Core i7-9700.
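
For reference, here is a minimal sketch of the kind of test this report
describes. It is not the attached program (attachment 56081); the loop
structure, function names, and output format are assumptions. It sums
1, 2, 3, ... N in a long double accumulator, compares the result against
N*(N+1)/2 evaluated directly in long double, and prints LDBL_MANT_DIG to
show the actual significand width of long double on the target.

/* Minimal sketch of the kind of test described above; the actual
 * attachment 56081 may differ.  Sum 1, 2, 3, ... N in a long double
 * accumulator and compare against N*(N+1)/2. */
#include <stdio.h>
#include <float.h>

static void check(unsigned long long n)
{
    long double sum = 0.0L;
    for (unsigned long long i = 1; i <= n; i++)
        sum += (long double)i;

    /* Closed form, also evaluated in long double. */
    long double expected = (long double)n * ((long double)n + 1.0L) / 2.0L;

    printf("N = %llu: loop sum = %.0Lf  N*(N+1)/2 = %.0Lf  -> %s\n",
           n, sum, expected, sum == expected ? "equal" : "different");
}

int main(void)
{
    /* On x86 targets long double is the 80-bit extended format, whose
     * significand is 64 bits wide (LDBL_MANT_DIG == 64). */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);

    check(1ULL << 32);   /* running sum stays below 2^64: each addition is exact */
    check(1ULL << 33);   /* running sum exceeds 2^64: additions of odd i must round */
    return 0;
}

Printing LDBL_MANT_DIG is meant to highlight that the 80-bit format carries
a 64-bit significand, which appears to be the relevant limit for exact
integer accumulation here rather than the 80-bit total width. Note also that
the N = 2^33 case runs about 8.6 billion loop iterations, so it takes a
noticeable amount of time.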