Imagine you are writing a top-down dynamic programming algorithm. You have already expressed the dependencies as recursive functions, and now you need to figure out how to reduce memory usage and compute the subproblems in the right order. Alternatively, you can take the brute-force approach and simply cache the results. How do you write the caching so that it can be reused quickly?
Let’s start with code for hashing arbitrary parameters. We can borrow the hash_combine implementation from Boost:
#include <cstddef>
#include <functional>
#include <tuple>

namespace std {
namespace {
// hash_combine from Boost: mixes a value's hash into an existing seed.
template <class T>
inline void hash_combine(std::size_t& seed, T const& v) {
    seed ^= hash<T>()(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

// Recursively combine the hash of every tuple element into the seed.
template <class Tuple, size_t Index = std::tuple_size<Tuple>::value - 1>
struct HashValueImpl {
    static void apply(size_t& seed, Tuple const& tuple) {
        HashValueImpl<Tuple, Index - 1>::apply(seed, tuple);
        hash_combine(seed, get<Index>(tuple));
    }
};

template <class Tuple>
struct HashValueImpl<Tuple, 0> {
    static void apply(size_t& seed, Tuple const& tuple) {
        hash_combine(seed, get<0>(tuple));
    }
};
}  // anonymous namespace

// Specialize std::hash for any std::tuple, so tuples can be used as keys
// in unordered containers.
template <typename... TT>
struct hash<std::tuple<TT...>> {
    size_t operator()(std::tuple<TT...> const& tt) const {
        size_t seed = 0;
        HashValueImpl<std::tuple<TT...>>::apply(seed, tt);
        return seed;
    }
};
}  // namespace std
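With this specialization in place, a std::tuple can be used directly as the key of an unordered_map. A minimal sketch (the map and its contents are just an illustration):

#include <iostream>
#include <string>
#include <tuple>
#include <unordered_map>

int main() {
    // Compiles because std::hash<std::tuple<int, std::string>> is now defined.
    std::unordered_map<std::tuple<int, std::string>, double> prices;
    auto key = std::make_tuple(42, std::string("widget"));
    prices[key] = 9.99;
    std::cout << prices[key] << "\n";  // prints 9.99
}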
Now we can write a generic function for caching results:
#include <tuple>
#include <type_traits>
#include <unordered_map>

using namespace std;

// Returns f(x...), computing it only once per distinct argument tuple.
template <class F, class... Args>
auto cached(F&& f, const Args&... x) {
    // One cache per (F, Args...) instantiation, keyed by the argument tuple.
    static unordered_map<tuple<Args...>, typename result_of<F(Args...)>::type> cache;
    auto key = tuple<Args...>(x...);
    auto y = cache.find(key);
    if (y != cache.end()) return y->second;
    return cache[key] = f(x...);
}
So the first parameter is the function to call, and the parameter pack holds the actual arguments. The rest is a standard cache: look up the argument tuple, and on a miss compute the result and store it.
And this is how we call it:
int fib(int n) {
    if (n < 2) return n;
    return cached(fib, n - 2) + cached(fib, n - 1);
}
You can see that instead of calling fib(n - 2) directly, I call the cached function, passing the function that computes the result together with all of its arguments. That’s it.
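Because the cache is keyed by the full argument tuple, the same helper works unchanged for recurrences with several parameters. As an illustration, here is a hypothetical grid_paths function built on the same cached helper, counting monotone right/down paths from (0, 0) to (m, n):

long long grid_paths(int m, int n) {
    if (m == 0 || n == 0) return 1;
    // Each distinct (m, n) pair is computed once and stored under its tuple key.
    return cached(grid_paths, m - 1, n) + cached(grid_paths, m, n - 1);
}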