...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
#include <boost/math/tools/roots.hpp>

namespace boost { namespace math { namespace tools { // Note namespace boost::math::tools.

// Newton-Raphson
template <class F, class T>
T newton_raphson_iterate(F f, T guess, T min, T max, int digits);

template <class F, class T>
T newton_raphson_iterate(F f, T guess, T min, T max, int digits, std::uintmax_t& max_iter);

// Halley
template <class F, class T>
T halley_iterate(F f, T guess, T min, T max, int digits);

template <class F, class T>
T halley_iterate(F f, T guess, T min, T max, int digits, std::uintmax_t& max_iter);

// Schröder
template <class F, class T>
T schroder_iterate(F f, T guess, T min, T max, int digits);

template <class F, class T>
T schroder_iterate(F f, T guess, T min, T max, int digits, std::uintmax_t& max_iter);

// Newton's method for complex-analytic functions
template <class F, class Complex>
Complex complex_newton(F f, Complex guess,
                       int max_iterations = std::numeric_limits<typename Complex::value_type>::digits);

// Cancellation-avoiding quadratic root finder
template <class T>
auto quadratic_roots(T const& a, T const& b, T const& c);

}}} // namespaces boost::math::tools
These functions all perform iterative root-finding using derivatives:
newton_raphson_iterate performs second-order Newton-Raphson iteration.
halley_iterate and schroder_iterate perform third-order Halley and Schröder iteration.
complex_newton performs Newton's method on complex-analytic functions.
quadratic_roots solves quadratic equations using various tricks to keep catastrophic cancellation from occurring in computation of the discriminant.
Parameters of the real-valued root finding functions
f: Type F must be a callable function object (or C++ lambda) that accepts one parameter and returns a std::pair, std::tuple, boost::tuple or boost::fusion::tuple. For the second-order iterative method (Newton-Raphson), the tuple should have two elements containing the evaluation of the function and its first derivative. For the third-order methods (Halley and Schröder), the tuple should have three elements containing the evaluation of the function and its first and second derivatives.
guess: The initial starting value. A good guess is crucial to quick convergence!
min: The minimum possible value for the result; this is used as an initial lower bracket.
max: The maximum possible value for the result; this is used as an initial upper bracket.
digits: The desired number of binary digits of precision.
max_iter: An optional maximum number of iterations to perform. On exit, this is updated to the actual number of iterations performed.
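As a sketch of how these parameters fit together, here is a hedged example computing a cube root with newton_raphson_iterate; the functor, guess, brackets and digits value are illustrative choices, not prescribed by the library:

#include <boost/math/tools/roots.hpp>
#include <limits>
#include <utility>

double cbrt_newton(double x) // assumes x > 1 so that the root lies in [0, x]
{
   // Functor returning f(t) = t^3 - x and its first derivative f'(t) = 3t^2.
   auto f = [x](double t) { return std::make_pair(t * t * t - x, 3 * t * t); };
   double guess = x / 2; // crude but adequate starting point
   int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.6);
   return boost::math::tools::newton_raphson_iterate(f, guess, 0.0, x, digits);
}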
When using these functions you should note that:
max_iter = (std::numeric_limits<std::uintmax_t>::max)() is effectively 'iterate for ever'.
These functions may fail to converge, or may converge to the wrong value, if min and max are the wrong way around or if the iterates converge to a local minimum.
If the function is monotonic and has only one root, the brackets min and max may as well be set as wide as possible, for example 0 and numeric_limits<T>::max(); for more complicated functions a tight bracket is protection against jumping out to seek the 'wrong' root.
The value of digits is crucial to good performance: if it is set too high, then at best you will get one extra (unnecessary) iteration, and at worst the last few steps will proceed by bisection. A good starting point is 0.6 * D for Newton and 0.4 * D for Halley or Schröder iteration, where D is std::numeric_limits<T>::digits.
If you need some diagnostic output to see what is going on, you can #define BOOST_MATH_INSTRUMENT before the #include <boost/math/tools/roots.hpp>, and also ensure display of all the potentially significant digits with cout.precision(std::numeric_limits<double>::digits10), or even all representable digits with cout.precision(std::numeric_limits<double>::max_digits10); but be warned, this may produce copious output!
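For instance, a hedged sketch of the max_iter overload (the function and values here are assumptions for demonstration):

#include <boost/math/tools/roots.hpp>
#include <cstdint>
#include <limits>
#include <utility>

int main()
{
   // f(t) = t^2 - 2 with derivative 2t; root at sqrt(2).
   auto f = [](double t) { return std::make_pair(t * t - 2, 2 * t); };
   int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.6);
   std::uintmax_t max_iter = 50; // cap the iteration count
   double r = boost::math::tools::newton_raphson_iterate(f, 1.5, 1.0, 2.0, digits, max_iter);
   // On return, r is close to sqrt(2) and max_iter holds the number of iterations actually performed.
}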
Newton-Raphson Method

Given an initial guess x0, subsequent values are computed using:

x_{n+1} = x_n - f(x_n) / f'(x_n)

Out-of-bounds steps revert to bisection of the current bounds.
Under ideal conditions, the number of correct digits doubles with each iteration.
Halley's Method

Given an initial guess x0, subsequent values are computed using:

x_{n+1} = x_n - 2 f(x_n) f'(x_n) / (2 f'(x_n)^2 - f(x_n) f''(x_n))

Over-compensation by the second derivative (one which would proceed in the wrong direction) causes the method to revert to a Newton-Raphson step.
Out-of-bounds steps revert to bisection of the current bounds.
Under ideal conditions, the number of correct digits trebles with each iteration.
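A hedged sketch of halley_iterate with a three-element tuple, again computing a cube root (the choices below are illustrative):

#include <boost/math/tools/roots.hpp>
#include <limits>
#include <tuple>

double cbrt_halley(double x) // assumes x > 1 so that the root lies in [0, x]
{
   // f(t) = t^3 - x, f'(t) = 3t^2, f''(t) = 6t.
   auto f = [x](double t) { return std::make_tuple(t * t * t - x, 3 * t * t, 6 * t); };
   // Third-order convergence: ~40% of the bits is a reasonable digits target.
   int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.4);
   return boost::math::tools::halley_iterate(f, x / 2, 0.0, x, digits);
}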
Schröder's Method

Given an initial guess x0, subsequent values are computed using:

x_{n+1} = x_n - f(x_n) / f'(x_n) - f''(x_n) f(x_n)^2 / (2 f'(x_n)^3)

Over-compensation by the second derivative (one which would proceed in the wrong direction) causes the method to revert to a Newton-Raphson step. Likewise, a Newton step is used whenever that Newton step would change the next value by more than 10%.
Out-of-bounds steps revert to bisection of the current bounds.
Under ideal conditions, the number of correct digits trebles with each iteration.
This is Schröder's general result (equation 18 from Stewart, G. W. "On Infinitely Many Algorithms for Solving Equations." English translation of Schröder's original paper. College Park, MD: University of Maryland, Institute for Advanced Computer Studies, Department of Computer Science, 1993.)
This method guarantees at least quadratic convergence (the same as Newton's method), and is known to work well in the presence of multiple roots: something that neither Newton nor Halley can do.
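To illustrate the multiple-root claim, a hedged sketch (the function and values are assumptions for demonstration):

#include <boost/math/tools/roots.hpp>
#include <limits>
#include <tuple>

int main()
{
   // f(t) = (t - 1)^2 has a double root at t = 1; f'(t) = 2(t - 1), f''(t) = 2.
   auto f = [](double t) { return std::make_tuple((t - 1) * (t - 1), 2 * (t - 1), 2.0); };
   int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.4);
   double r = boost::math::tools::schroder_iterate(f, 0.25, 0.0, 2.0, digits);
   // r should converge to the double root at 1, a case where plain Newton
   // degrades to linear convergence.
}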
Complex Newton's Method

The complex Newton method works slightly differently from the rest of the methods: since there is no way to bracket roots in the complex plane, the min and max arguments are not accepted. Failure to reach a root is communicated by returning NaNs. Remember that if a function has many roots, then which root the complex Newton's method converges to is essentially impossible to predict a priori; see the Newton fractal for more information. Finally, the derivative of f must be continuous at the root, or else non-roots can be found; see here for an example.
An example usage of complex_newton is given in examples/daubechies_coefficients.cpp.
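A minimal hedged sketch, using an assumed polynomial z^2 + 1 whose roots are ±i:

#include <boost/math/tools/roots.hpp>
#include <complex>
#include <utility>

int main()
{
   using C = std::complex<double>;
   // f(z) = z^2 + 1 and its derivative 2z, packed into a pair.
   auto f = [](C z) { return std::make_pair(z * z + C(1), C(2) * z); };
   C guess(0.5, 0.5); // a guess in the upper half-plane steers iteration toward +i
   C root = boost::math::tools::complex_newton(f, guess);
   // Components of root will be NaN if the iteration failed to converge.
}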
Quadratics

To solve a quadratic ax^2 + bx + c = 0, we may use
auto [x0, x1] = boost::math::tools::quadratic_roots(a, b, c);
If the roots are real, they are arranged so that x0 ≤ x1. If the roots are complex and the inputs are real, x0 and x1 are both std::numeric_limits<Real>::quiet_NaN(). In this case we must cast a, b and c to a complex type to extract the complex roots. If a, b and c are integral, then the roots are of type double.

The routine is much faster if the fused-multiply-add instruction is available on your architecture; if the fma is not available, the function resorts to slow emulation. Finally, speed is improved if you compile for your particular architecture. For instance, if you compile without any architecture flags, then the std::fma call compiles down to a call to _fma, which dynamically chooses to emulate or execute the vfmadd132sd instruction based on the capabilities of the architecture. If instead you compile with (say) -march=native, then no dynamic choice is made: the vfmadd132sd instruction is always executed if available, and emulation is used if not.
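When the roots are complex, the casting approach mentioned above looks roughly like this (a hedged sketch; the coefficients are assumptions for illustration):

#include <boost/math/tools/roots.hpp>
#include <complex>

int main()
{
   using C = std::complex<double>;
   // x^2 + x + 1 = 0 has discriminant -3, so the real-input call would return NaNs.
   auto [x0, x1] = boost::math::tools::quadratic_roots(C(1), C(1), C(1));
   // x0 and x1 are the two roots (-1 ± i*sqrt(3))/2.
}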