It may appear that floating-point numbers are displayed by the Unit Test Framework with an excessive number of decimal digits. However, the number of digits shown is chosen to avoid apparently nonsensical displays like [1.00000 != 1.00000] when comparing exact unity against a value increased by just one least significant binary digit, using the default precision for float of just 6 decimal digits given by std::numeric_limits<float>::digits10. The number of decimal digits displayed is the value proposed for a future C++ Standard in A Proposal to add a max significant decimal digits value, to be called std::numeric_limits<FPT>::max_digits10.
For 32-bit floats, 9 decimal digits are needed to ensure that a single-bit change produces a different decimal digit string. Using 9 decimal digits, the display becomes much more helpful: [1.00000000 != 1.00000012], showing that the two values are in fact different.
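As an illustration (a minimal sketch, not part of the framework, assuming a C++11 compiler), the following program prints 1.0f and the next representable float, first at the default 6 significant digits and then at 9; only the latter distinguishes the two values:

// Minimal sketch: show why 9 significant digits are needed for 32-bit float.
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    float a = 1.0f;
    float b = std::nextafter(a, 2.0f); // one least significant bit above 1.0f

    // Default precision for float (digits10 == 6): both values print as "1".
    std::cout << std::setprecision(std::numeric_limits<float>::digits10)
              << a << " != " << b << '\n';

    // 9 significant digits (max_digits10 for an IEEE754 32-bit float):
    // prints "1 != 1.00000012", so the difference becomes visible.
    std::cout << std::setprecision(9)
              << a << " != " << b << '\n';
}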
For IEEE754 32-bit float values, 9 decimal digits are shown; for IEEE754 64-bit double, 17 decimal digits; for IEEE754 80-bit extended long double, 21 decimal digits; and for IEEE754 128-bit quadruple long double and SPARC 128-bit extended long double, 36 decimal digits. For floating-point types, a convenient formula to calculate max_digits10 is: 2 + std::numeric_limits<FPT>::digits * 3010/10000
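As a cross-check (a minimal sketch, not framework code), the formula can be evaluated for the common formats and compared with std::numeric_limits<T>::max_digits10, which was standardised in C++11:

// Minimal sketch: evaluate 2 + digits * 3010/10000 and compare with max_digits10.
#include <iostream>
#include <limits>

template <typename FPT>
void show(const char* name)
{
    int formula = 2 + std::numeric_limits<FPT>::digits * 3010 / 10000;
    std::cout << name << ": formula = " << formula
              << ", max_digits10 = " << std::numeric_limits<FPT>::max_digits10
              << '\n';
}

int main()
{
    show<float>("float");              // 2 + 24 * 3010/10000 = 9
    show<double>("double");            // 2 + 53 * 3010/10000 = 17
    show<long double>("long double");  // 21 for 80-bit extended, 36 for 128-bit quad
}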
Note: a user-defined floating-point type UDFPT must specialize std::numeric_limits<UDFPT> (in particular, it must provide std::numeric_limits<UDFPT>::digits) for this formula to apply.
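A hedged sketch of what such a definition could look like, using a hypothetical wrapper type my_float (the name and representation are purely illustrative): std::numeric_limits is specialized so that the digits member used by the formula exists.

#include <limits>

// Hypothetical user-defined floating-point type wrapping a double.
struct my_float
{
    double value;
};

namespace std
{
    // Explicit specialization so that numeric_limits<my_float>::digits is defined.
    template <>
    class numeric_limits<my_float>
    {
    public:
        static const bool is_specialized = true;
        static const int digits = 53;    // significand bits of the wrapped double
        static const int digits10 = 15;
    };
}

// 2 + std::numeric_limits<my_float>::digits * 3010 / 10000 evaluates to 17.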