

Daubechies Filters

#include <boost/math/filters/daubechies.hpp>

namespace boost::math::filters {

template <typename Real, size_t p>
constexpr std::array<Real, 2*p> daubechies_scaling_filter();

template <typename Real, size_t p>
std::array<Real, 2*p> daubechies_wavelet_filter();

} // namespace boost::math::filters

The Daubechies filters provided by Boost.Math return the filter coefficients of the compactly-supported family discovered by Daubechies.

A basic usage is as follows:

using boost::math::filters::daubechies_scaling_filter;
using boost::math::filters::daubechies_wavelet_filter;

auto h = daubechies_scaling_filter<double, 8>();
auto g = daubechies_wavelet_filter<double, 8>();

The filters take the number of vanishing moments p as a template parameter and return a std::array with 2p entries, twice as many entries as vanishing moments.

Caveats and Ambiguities

Numerous notational conventions for the filter coefficients exist. The first ambiguity is whether they should be indexed by the number of vanishing moments, or the number of filter taps. Boost.Math indexes the family by the number of vanishing moments. This indexing convention is the same as PyWavelets and Mathematica, but Numerical Recipes and Wikipedia index by the number of filter taps.

The next ambiguity is the overall scale. Boost (and PyWavelets) normalize the filters so that their L2 norm is 1. Others normalize so that the L2 norm is √2.

Finally, we can use a convolutional representation of the filters, or a dot product representation. The dot product has all elements reversed relative to the convolutional representation.

The Boost representation agrees with Table 1 of Daubechies' 1988 paper "Orthonormal Bases of Compactly Supported Wavelets" and with Table 7.2 of Mallat's "A Wavelet Tour of Signal Processing".

The Boost convention:

// ...
constexpr size_t p = 2;
auto h = boost::math::filters::daubechies_scaling_filter<double, p>();
std::cout << "h = {" << h[0] << ", " << h[1] << ", " << h[2] << ", " << h[3] << "}\n";
// output:
h = {0.48296291314453416, 0.83651630373780794, 0.22414386804201339, -0.12940952255126037}

Mathematica conventions:

WaveletFilterCoefficients[DaubechiesWavelet[2], "PrimalLowpass"]
{{0, 0.341506}, {1, 0.591506}, {2, 0.158494}, {3, -0.0915064}}

(Multiplying all elements of the Mathematica filter by √2 gives the Boost filter.)

PyWavelets conventions:

>>> import pywt
>>> import numpy as np
>>> print(pywt.Wavelet('db2').dec_lo)
[-0.12940952255126037, 0.2241438680420134, 0.8365163037378079, 0.48296291314453416]
>>> np.linalg.norm(pywt.Wavelet('db2').dec_lo)

(Reversing the PyWavelets filter gives the Boost filter.)

The wavelet filters introduce additional ambiguity, because they are defined only up to orthogonality with the scaling filter, so both their sign and scale are fixed by convention.

For this reason, we demonstrate some other conventions for the wavelet filter as well:


auto g = boost::math::filters::daubechies_wavelet_filter<double, p>();
std::cout << "g = {" << g[0] << ", " << g[1] << ", " << g[2] << ", " << g[3] << "}\n";
// output:
g = {-0.12940952255126037, -0.22414386804201339, 0.83651630373780794, -0.48296291314453416}


WaveletFilterCoefficients[DaubechiesWavelet[2], "PrimalHighpass"]
{{-2, -0.0915064}, {-1, -0.158494}, {0, 0.591506}, {1, -0.341506}}

(Note that Mathematica indexes the filter starting at -2, whereas Boost starts at 0; the scaling convention also differs.)


>>> print(pywt.Wavelet('db2').dec_hi)
[-0.48296291314453416, 0.8365163037378079, -0.2241438680420134, -0.12940952255126037]

(Again, reversing the PyWavelets filter gives the Boost filter.)


The filters are accurate to octuple precision.