Safe Numerics

Rationale and FAQ

1. Is this really necessary? If I'm writing the program with the requisite care and competence, problems noted in the introduction will never arise. Should they arise, they should be fixed "at the source" and not with a "band aid" to cover up bad practice.
2. Can safe types be used as drop-in replacements for built-in types?
3. Why are there special types for literals such as safe_signed_literal<42>? Why not just use std::integral_constant<int, 42>?
4. Why is safe...literal needed at all? What's the matter with const safe<int>(42)?
5. Are safe type operations constexpr? That is, can they be invoked at compile time?
6. Why define safe_literal? Isn't it effectively the same as std::integral_constant?
7. Why is Boost.Convert not used?
8. Why is the library named "safe ..." rather than something like "checked ..." ?
9. Given that the library is called "numerics" why is floating point arithmetic not addressed?
10. Isn't putting a defensive check just before any potential undefined behavior often considered a bad practice?
11. It looks like the implementation presumes two's complement arithmetic at the hardware level. So this library is not portable - correct? What about other hardware architectures?
12. According to C/C++ standards, unsigned integers cannot overflow - they are modular integers which "wrap around". Yet the safe numerics library detects and traps this behavior as errors. Why is that?
13. Why does the library require C++14?
14. This is a C++ library - yet you refer to C/C++. Which is it?
15. Some compilers (including gcc and clang) include builtin functions for checked addition, multiplication, etc. Does this library use these intrinsics?
16. Some compilers (including gcc and clang) include a builtin function for detecting constants. This seemed an attractive way to eliminate the requirement for the safe_literal type. Alas, these builtin functions are defined as macros. Constants passed through functions down into the safe numerics library cannot be detected as constants. So the opportunity to make the library even more efficient by moving more operations to compile time doesn't exist - contrary to my hopes and expectations.

1.

Is this really necessary? If I'm writing the program with the requisite care and competence, problems noted in the introduction will never arise. Should they arise, they should be fixed "at the source" and not with a "band aid" to cover up bad practice.

This surprised me when it was first raised. But some of the feedback I've received makes me think that it's a widely held view. The best answer is to consider the examples in the Tutorials and Motivating Examples section of the library documentation. I believe they convincingly demonstrate that any program which does not use this library must be assumed to contain arithmetic errors.

2.

Can safe types be used as drop-in replacements for built-in types?

Almost. Replacing all built-in types with their safe counterparts should result in a program that will compile and run as expected. Occasionally compile time errors will occur and adjustments to the source code will be required. Typically these will result in code which is more correct.
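
A minimal sketch of what "drop-in" means in practice. The header path and namespace here are taken from the example under question 14 below and may differ between Boost versions; the exact conversion and checking behavior depends on the promotion and exception policies in use.

    // illustrative sketch only - header path and namespace assumed from question 14
    #include <boost/numeric/safe_numerics/safe_integer.hpp>
    #include <exception>
    #include <iostream>

    // the original function: may silently return an incorrect (wrapped) result
    int f(int x, int y){
        return x * y;
    }

    // the "drop-in" version: same body, but an incorrect result cannot escape
    int f_safe(boost::numeric::safe<int> x, boost::numeric::safe<int> y){
        return x * y; // throws at run time if the product doesn't fit in an int
    }

    int main(){
        try {
            std::cout << f_safe(46341, 46341) << '\n'; // product exceeds INT_MAX on 32-bit int
        }
        catch(const std::exception & e){
            std::cout << e.what() << '\n';
        }
        return 0;
    }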

3.

Why are there special types for literals such as safe_signed_literal<42>? Why not just use std::integral_constant<int, 42>?

By defining our own "special" type we can simplify the interface. Using std::integral_constant requires one to specify both the type and the value. Using safe_signed_literal<42> doesn't require a parameter for the type, so the library can select the best type to hold the specified value. It also means that one cannot specify a type-value pair which is inconsistent.

4.

Why is safe...literal needed at all? What's the matter with const safe<int>(42)?

const safe<int>(42) looks like it might be what we want: an immutable value which invokes the "safe" operators when used in an expression. But there is one problem: std::numeric_limits<safe<int>> reports a range of [INT_MIN, INT_MAX] even though the value is fixed at 42 at compile time. It is this range which is used at compile time to calculate the range of the result of an operation.

So when an operation is performed, the range of the result is calculated from [INT_MIN, INT_MAX] rather than from [42, 42].
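
A sketch of the difference. The header paths below are assumptions modeled on the example under question 14 and may differ between Boost versions.

    // illustrative sketch only - header paths are assumptions
    #include <boost/numeric/safe_numerics/safe_integer.hpp>
    #include <boost/numeric/safe_numerics/safe_integer_literal.hpp>

    using namespace boost::numeric;

    int main(){
        const safe<int> x = 42;      // numeric_limits range is [INT_MIN, INT_MAX]
        safe_signed_literal<42> y;   // numeric_limits range is [42, 42]

        // the range of x + x must be computed from [INT_MIN, INT_MAX], so a
        // runtime check may be required; the range of y + y is known to be
        // [84, 84] at compile time, so no runtime check is needed
        auto a = x + x;
        auto b = y + y;
        (void)a;
        (void)b;
        return 0;
    }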

5.

Are safe type operations constexpr? That is, can they be invoked at compile time?

Yes. safe type construction and calculations are all constexpr. Note that to get maximum benefit, you'll have to use safe...literal to specify the primitive values at compile time.
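
For example (a sketch under the same header-path assumptions as the previous example):

    // illustrative sketch only - a safe operation on literal types can be
    // computed and checked entirely at compile time
    #include <boost/numeric/safe_numerics/safe_integer.hpp>
    #include <boost/numeric/safe_numerics/safe_integer_literal.hpp>

    using namespace boost::numeric;

    int main(){
        constexpr safe_signed_literal<2> x;
        constexpr safe_signed_literal<3> y;
        constexpr auto z = x + y;          // evaluated at compile time
        static_assert(z == 5, "arithmetic performed by the compiler");
        return 0;
    }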

6.

Why define safe_literal? Isn't it effectively the same as std::integral_constant?

Almost, but there are still good reasons to create a different type.

  • std::integral_constant<int, 42> requires specification of the type as well as the value, so it's less convenient than safe_signed_literal, which maps to the smallest type required to hold the value.

  • std::numeric_limits<std::integral_constant<int, 42>>::is_integer returns false. This would complicate the implementation of the library (see the example after this list).

  • The type trait is_safe<std::integral_constant<int, 42>> would have to be defined to return true.

  • But globally altering the traits of std::integral_constant might have unintended side effects on other code. These could well be surprises which create errors that are hard to find and hard to work around.
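
The numeric_limits point in particular can be verified with the standard library alone:

    // the primary std::numeric_limits template, which is not specialized for
    // std::integral_constant, reports is_integer == false
    #include <limits>
    #include <type_traits>

    static_assert(
        !std::numeric_limits<std::integral_constant<int, 42>>::is_integer,
        "integral_constant is not recognized as an integer type"
    );
    static_assert(
        std::numeric_limits<int>::is_integer,
        "...even though the wrapped type is"
    );

    int main(){ return 0; }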

7.

Why is Boost.Convert not used?

I couldn't figure out how to use it from the documentation.

8.

Why is the library named "safe ..." rather than something like "checked ..." ?

I used "safe" in large part because this is what has been used by other similar libraries. Maybe a better word might have been "correct" but that would raise similar concerns. I'm not inclined to change this. I've tried to make it clear in the documentation what the problem that the library addressed is.

9.

Given that the library is called "numerics" why is floating point arithmetic not addressed?

Actually, I believe that this can and should be applied to any type T which satisfies the Numeric type requirement as defined in the documentation. So there should be specializations safe<float> and related types, as well as new types like safe<fixed_decimal>, etc. But the current version of the library only addresses integer types. Hopefully the library will evolve to match the promise implied by its name.

10.

Isn't putting a defensive check just before any potential undefined behavior often considered a bad practice?

By whom? Is leaving code which can produce incorrect results better? Note that the documentation contains references to various sources which recommend exactly this approach to mitigate the problems created by this C/C++ behavior. See [Seacord].
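
For example, the kind of defensive check referred to above, in the style of the precondition tests recommended by sources such as [Seacord] for signed addition, tests the operands before the operation so the undefined behavior never actually occurs:

    // the check happens before the addition, so signed overflow never occurs
    #include <climits>
    #include <cstdio>

    bool checked_add(int a, int b, int & result){
        if((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return false;           // the sum would not fit in an int
        result = a + b;
        return true;
    }

    int main(){
        int r;
        if(checked_add(INT_MAX, 1, r))
            std::printf("%d\n", r);
        else
            std::printf("addition would overflow\n");
        return 0;
    }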

11.

It looks like the implementation presumes two's complement arithmetic at the hardware level. So this library is not portable - correct? What about other hardware architectures?

As far as is known as of this writing, the library does not presume that the underlying hardware is two's complement. However, this has yet to be verified in any rigorous way.

12.

According to C/C++ standards, unsigned integers cannot overflow - they are modular integers which "wrap around". Yet the safe numerics library detects and traps this behavior as errors. Why is that?

The guiding purpose of the library is to trap incorrect arithmetic behavior - not just undefined behavior. Although a savvy user may understand and keep in mind that an unsigned integer is really a modular type, the plain reading of an arithmetic expression conveys the idea that all operands are ordinary integers. Also, unsigned integers are often used where modular arithmetic is not intended, such as for array indices. Finally, the modulus for such an integer would vary depending upon the machine architecture. For these reasons, in the context of this library, an unsigned integer is considered to be a representation of a subset of integers. Note that this decision is consistent with [INT30-C], “Ensure that unsigned integer operations do not wrap”, in the CERT C Secure Coding Standard [Seacord].
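
A small illustration of how the plain reading of an expression and the modular result diverge:

    // well defined by the standard, but almost certainly not what the
    // author of the expression intended
    #include <cstdint>
    #include <iostream>

    int main(){
        std::uint32_t available = 2;
        std::uint32_t requested = 3;
        // a reader expects "how many are left", i.e. -1 or an error;
        // the modular result wraps around instead
        std::uint32_t remaining = available - requested;
        std::cout << remaining << '\n';   // prints 4294967295
        return 0;
    }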

13.

Why does the library require C++14?

The original version of the library used C++11. Feedback from CppCon, the Boost Library Incubator, and the Boost developers' mailing list convinced me that I had to address the issue of run-time penalty much more seriously. I resolved to eliminate or minimize it. This led to more elaborate meta-programming. But this wasn't enough. It became apparent that the only way to really minimize the run-time penalty was to implement compile-time integer range arithmetic - a fairly elaborate sub-library. By doing range arithmetic at compile time, I could skip runtime checking on many or most integer operations. While C++11 constexpr wasn't quite powerful enough to do the job, C++14 constexpr is. The library currently relies very heavily on C++14 constexpr. I think that those who delve into the library will be very surprised at the extent to which minor changes in user code can produce guaranteed correct integer code with zero run-time penalty.
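
To illustrate the idea (this is not the library's actual machinery, just a constexpr sketch of compile-time range arithmetic): if the ranges of the operands are known at compile time, the range of the result can be computed at compile time, and a runtime check is only needed when that result range doesn't fit the target type.

    // illustrative sketch only - not the library's implementation
    #include <cstdint>
    #include <limits>

    struct interval {
        std::int64_t lo;
        std::int64_t hi;
    };

    // the range of x + y, given the ranges of x and y
    constexpr interval add(const interval & x, const interval & y){
        return interval{x.lo + y.lo, x.hi + y.hi};
    }

    constexpr bool fits_in_int16(const interval & i){
        return i.lo >= std::numeric_limits<std::int16_t>::min()
            && i.hi <= std::numeric_limits<std::int16_t>::max();
    }

    int main(){
        constexpr interval a{0, 100};
        constexpr interval b{0, 100};
        // any sum of two values in [0, 100] fits in int16_t, so this
        // particular addition would need no runtime check at all
        static_assert(fits_in_int16(add(a, b)), "no runtime check required");
        return 0;
    }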

14.

This is a C++ library - yet you refer to C/C++. Which is it?

C++ has evolved way beyond the original C language. But C++ is still (mostly) compatible with C, so most C programs can also be compiled with a C++ compiler. The problems of incorrect arithmetic afflict both C and C++. Suppose we have a legacy C program designed for some embedded system:

  • Replace all int declarations with int16_t and all long declarations with int32_t.

  • Create a file containing something like the following and include it at the beginning of every source file.

    #ifdef TEST
    // using C++ on the test platform
    #include <cstdint>
    #include <boost/numeric/safe_numerics/safe_integer.hpp>
    #include <boost/numeric/safe_numerics/cpp.hpp>
    using pic16_promotion = boost::numeric::cpp<
        8,  // char
        8,  // short
        8,  // int
        16, // long
        32  // long long
    >;
    // define safe types used in the desktop version of the program.
    template <typename T> // T is char, int, etc. data type
    using safe_t = boost::numeric::safe<
        T,
        pic16_promotion,
        boost::numeric::default_exception_policy // used for compiling and running tests
    >;
    typedef safe_t<std::int_least16_t> int16_t;
    typedef safe_t<std::int_least32_t> int32_t;
    #else
    /* using C on the embedded platform */
    typedef int int16_t;
    typedef long int32_t;
    #endif
    
    
  • Compile tests on the desktop with a C++14 compiler and with the macro TEST defined.

  • Run the tests and change the code to address any thrown exceptions.

  • Compile for the target C platform with the macro TEST undefined.

This example illustrates how this library, implemented with C++14, can be useful in the development of correct code for programs written in C.

15.

Some compilers (including gcc and clang) include builtin functions for checked addition, multiplication, etc. Does this library use these intrinsics?

No. I attempted to use these intrinsics, but they are currently not constexpr, so I couldn't use them without breaking constexpr compatibility for the safe numerics primitives.
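
For reference, this is the kind of intrinsic in question (GCC/Clang); as the answer above notes, at the time of writing it could not be used in a constant expression.

    // __builtin_add_overflow returns true when the mathematical result
    // does not fit in the destination
    #include <climits>
    #include <cstdio>

    int main(){
        int result;
        if(__builtin_add_overflow(INT_MAX, 1, &result))
            std::printf("overflow detected\n");
        else
            std::printf("%d\n", result);
        return 0;
    }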

16.

Some compilers (including gcc and clang) include a builtin function for detecting constants. This seemed an attractive way to eliminate the requirement for the safe_literal type. Alas, these builtin functions are defined as macros. Constants passed through functions down into the safe numerics library cannot be detected as constants. So the opportunity to make the library even more efficient by moving more operations to compile time doesn't exist - contrary to my hopes and expectations.

