
Random Search

Synopsis

#include <boost/math/optimization/random_search.hpp>

namespace boost::math::optimization {

template <typename ArgumentContainer> struct random_search_parameters {
    using Real = typename ArgumentContainer::value_type;
    ArgumentContainer lower_bounds;
    ArgumentContainer upper_bounds;
    size_t max_function_calls = 0;
    ArgumentContainer const * initial_guess = nullptr;
};

template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer random_search(const Func cost_function,
                                random_search_parameters<ArgumentContainer> const &params,
                                URBG &gen,
                                std::invoke_result_t<Func, ArgumentContainer> value_to_reach
                                  = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
                                std::atomic<bool> *cancellation = nullptr,
                                std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr,
                                std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr);

} // namespaces

The random_search function searches for a global minimum of a cost function. There is no special sauce to this algorithm: it merely blasts function calls over threads. Its existence is justified by the "no free lunch" theorem of optimization, which "establishes that for any algorithm, any elevated performance over one class of problems is offset by performance over another class." In practice, it is not clear that the conditions of the NFL theorem hold, and on test cases random_search is slower and less accurate than (say) differential evolution, jSO, and CMA-ES. However, rapid convergence is often not the goal: for example, we may want to spend some time exploring the cost function surface before moving to a faster-converging algorithm. In addition, random search is embarrassingly parallel, which allows us to avoid the performance problems induced by Amdahl's law.
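
A minimal usage sketch follows, assuming a simple quadratic ("sphere") cost function over std::vector<double>; the bounds, seed, and evaluation budget are illustrative choices, not library defaults.

#include <boost/math/optimization/random_search.hpp>
#include <iostream>
#include <random>
#include <vector>

using boost::math::optimization::random_search;
using boost::math::optimization::random_search_parameters;

// Illustrative cost function: the "sphere" function, minimized at the origin.
double sphere(std::vector<double> const & x) {
    double s = 0.0;
    for (auto xi : x) {
        s += xi * xi;
    }
    return s;
}

int main() {
    random_search_parameters<std::vector<double>> params;
    params.lower_bounds = {-1.0, -1.0, -1.0}; // search box, chosen for illustration
    params.upper_bounds = { 1.0,  1.0,  1.0};
    params.max_function_calls = 100000;       // evaluation budget, chosen for illustration
    std::mt19937_64 gen(12345);
    auto argmin = random_search(sphere, params, gen);
    std::cout << "Minimum found at (" << argmin[0] << ", " << argmin[1] << ", " << argmin[2] << ")\n";
    std::cout << "Cost at that point: " << sphere(argmin) << "\n";
}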

Parameters

The function

template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer random_search(const Func cost_function,
                                random_search_parameters<ArgumentContainer> const &params,
                                URBG &gen,
                                std::invoke_result_t<Func, ArgumentContainer> value_to_reach
                                  = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
                                std::atomic<bool> *cancellation = nullptr,
                                std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr,
                                std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr)

Parameters:

cost_function: The cost function to be minimized. It accepts an ArgumentContainer and returns the (floating-point) cost.
params: The parameters to the algorithm as described above. The lower_bounds and upper_bounds define the box over which candidate arguments are drawn uniformly at random, max_function_calls bounds the total number of cost function evaluations, and initial_guess (if provided) is included among the evaluated arguments.
gen: A uniform random bit generator, such as std::mt19937_64.
value_to_reach: An optional target cost; if the cost function attains a value at or below it, the search returns immediately. This is the most robust termination criterion, but the optimal cost is rarely known in advance.
cancellation: An optional atomic boolean which, when set to true, gracefully stops the computation and returns the best argument found so far. Cancellation is not instantaneous; in-progress function calls must complete.
current_minimum_cost: An optional atomic updated with the smallest cost observed so far, allowing progress to be monitored from another thread.
queries: An optional vector into which every (argument, cost) pair evaluated is stored; useful for debugging and benchmarking.

Returns:

The ArgumentContainer corresponding to the minimum cost found by the optimization.
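
The following sketch illustrates the value_to_reach argument, assuming it acts as an early-exit target as described above; the 1e-3 target, bounds, seed, and evaluation budget are illustrative assumptions.

#include <boost/math/optimization/random_search.hpp>
#include <iostream>
#include <random>
#include <vector>

int main() {
    using namespace boost::math::optimization;
    random_search_parameters<std::vector<double>> params;
    params.lower_bounds = {-1.0, -1.0};
    params.upper_bounds = { 1.0,  1.0};
    params.max_function_calls = 10000000; // large budget; value_to_reach should stop the search first
    std::mt19937_64 gen(42);
    auto cost = [](std::vector<double> const & x) { return x[0] * x[0] + x[1] * x[1]; };
    // Stop as soon as a cost <= 1e-3 is observed (an arbitrary illustrative target):
    auto argmin = random_search(cost, params, gen, 1e-3);
    std::cout << "Search stopped at cost " << cost(argmin) << "\n";
}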

Examples

An example exhibiting graceful cancellation and progress observability can be studied in random_search_example.cpp.
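
The sketch below is in the same spirit (random_search_example.cpp remains the authoritative example): the search runs on a worker thread while the main thread polls current_minimum_cost and then requests cancellation. The cost function, timings, seed, and bounds are illustrative assumptions.

#include <boost/math/optimization/random_search.hpp>
#include <atomic>
#include <chrono>
#include <future>
#include <iostream>
#include <limits>
#include <random>
#include <thread>
#include <vector>

using namespace boost::math::optimization;

// Illustrative cost function; any callable accepting the argument container works.
double sphere(std::vector<double> const & x) {
    double s = 0.0;
    for (auto xi : x) {
        s += xi * xi;
    }
    return s;
}

int main() {
    random_search_parameters<std::vector<double>> params;
    params.lower_bounds = {-10.0, -10.0};
    params.upper_bounds = { 10.0,  10.0};
    params.max_function_calls = 100000000; // effectively "run until cancelled"
    std::atomic<bool> cancel{false};
    std::atomic<double> current_minimum_cost{std::numeric_limits<double>::quiet_NaN()};
    std::mt19937_64 gen(87654321);

    // Run the search on a worker thread so the main thread can observe progress
    // and request cancellation.
    auto future = std::async(std::launch::async, [&]() {
        return random_search(sphere, params, gen,
                             std::numeric_limits<double>::quiet_NaN(), // no target value
                             &cancel, &current_minimum_cost);
    });

    // Poll the best cost seen so far, then cancel after a short wall-clock budget.
    for (int i = 0; i < 5; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Best cost so far: " << current_minimum_cost.load() << "\n";
    }
    cancel = true; // request graceful cancellation; in-flight function calls still complete
    auto argmin = future.get();
    std::cout << "Final argmin: (" << argmin[0] << ", " << argmin[1] << ")\n";
}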

References

D. H. Wolpert and W. G. Macready, "No Free Lunch Theorems for Optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, April 1997.
