Futures: asynchronous invocation

In concurrent programming, a future is an object whose actual value is yet to be computed. You can think of it as a handle to an asynchronous invocation of a computation that yields a value.

Many so-called concurrent programming languages support this idea as a native construct offered by the core language itself. Unfortunately, C++ does not. Well, at least not in the current standard; C++0x (or shall I say C++1x?) is going to support std::future as part of the massive new thread library, which is based on boost::thread. In this post we will implement a simple yet powerful future object of our own.
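For a feel of the shape of that interface, here is a minimal sketch using the standard facility as it eventually shipped in the <future> header. Take it purely as an illustration: the exact names were still in flux while the standard was being drafted, and it needs a C++0x-capable compiler and library.

#include <future>
#include <iostream>

int answer () { return 42; }

int main () {
    // the default launch policy lets the implementation decide whether to
    // spawn a thread right away or defer the computation until get()
    std::future<int> f = std::async(answer);
    std::cout << f.get() << std::endl;   // blocks until the result is ready
}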

As you might guess, our future object will create its own thread of execution to carry out the asynchronous computation; we will use boost::thread for that. A straightforward implementation follows:

#ifndef FUTURE_HPP_
#define FUTURE_HPP_

#include <boost/utility.hpp>   // boost::noncopyable
#include <boost/thread.hpp>
#include <boost/bind.hpp>      // boost::bind, used in the constructor below
#include <tr1/functional>

// pull the TR1 names (function, bind, ...) into namespace std for convenience
namespace std { using namespace std::tr1; }

template <typename T>
class future : boost::noncopyable {
    public:
        future (const std::function<T ()> &func);
        operator T& ();       // blocks until the result is available
        bool ready () const;  // non-blocking readiness check

    private:
        void compute ();

        volatile bool ready_, joined_;
        std::function<T ()> func_;
        boost::mutex mutex_;
        T result_;

        // declared (and therefore initialized) last, so the worker thread
        // only starts once every other member is ready
        boost::thread thread_;
};

template <typename T>
future<T>::future (const std::function<T ()> &func)
 : ready_(false), joined_(false), func_(func),
   thread_(boost::bind(&future::compute, this))
{ }

template <typename T>
future<T>::operator T& () {
    if (!joined_) {
        // locking to prevent race conditions on join() which causes
        // undefined behavior if called more than once.
        // using double checked locking with all of its downsides.
        boost::mutex::scoped_lock lock(mutex_);
        if (!joined_) {
            thread_.join();
            joined_ = true;
        }
    }
    return result_;
}

template <typename T>
bool future<T>::ready () const {
    return ready_;
}

template <typename T>
void future<T>::compute () {
    // runs on the worker thread: store the result first, then flag readiness
    result_ = func_();
    ready_ = true;
}

#endif // FUTURE_HPP_

A few points worth noting about this implementation:

  • We have to use a mutex to prevent a race condition on the join() operation, which is not thread safe and causes undefined behavior if called more than once.
  • Double-checked locking is problematic in many cases (I strongly suggest reading the article “C++ and the Perils of Double-Checked Locking” by Alexandrescu and Meyers) but should be fine here; a sketch of a boost::call_once-based alternative appears right after this list.
  • The use of std::tr1::function and boost::bind (also available in TR1) greatly simplifies the implementation and its interface.
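For the curious, here is a rough sketch of how boost::call_once could replace the hand-rolled double-checked locking. The names below (join_flag, join_worker and so on) are purely illustrative and not part of future.hpp:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/once.hpp>

namespace {
    boost::once_flag join_flag = BOOST_ONCE_INIT;
    boost::thread   *worker = 0;       // points at the computation thread
    unsigned         result = 0;       // written by the worker before it exits

    void compute ()     { result = 42; }
    void join_worker () { worker->join(); }   // guaranteed to run at most once
}

int main () {
    boost::thread thread(compute);
    worker = &thread;

    // any number of readers may race here; call_once serializes the join
    boost::call_once(join_flag, join_worker);
    std::cout << result << std::endl;
}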

It is also worth mentioning that the Boost folks provide similar functionality as part of boost::thread, but its interface seems a little uncomfortable. At least to me.
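For comparison, this is roughly what the Boost.Thread variant looks like, following its documentation. Type names such as packaged_task and unique_future may differ slightly between Boost versions, so treat this as a sketch:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>

int the_answer () { return 42; }

int main () {
    boost::packaged_task<int> task(the_answer);
    boost::unique_future<int> result = task.get_future();

    boost::thread worker(boost::move(task));   // run the task on its own thread

    std::cout << result.get() << std::endl;    // blocks until the task finishes
    worker.join();
}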

Now, a usage example is in order. How would you feel about Fibonacci numbers making another appearance?

#include <iostream>
#include <tr1/tuple>
#include <boost/bind.hpp>

#include "future.hpp"

namespace std { using namespace std::tr1; }

// iteratively computes the n-th Fibonacci number modulo `modulo`
unsigned nth_fib (unsigned n, unsigned modulo) {
    unsigned prev=1, curr=1;
    for (unsigned c=2;c<n;++c)
        std::tie(prev, curr) =
            std::make_tuple(curr, (prev+curr) % modulo);
    return curr;
}

int main () {
    // both computations start immediately, each in its own thread
    future<unsigned> fib1(boost::bind(nth_fib, 1000000, 4567)),
                     fib2(boost::bind(nth_fib, 3000000, 1234));

    // poll until both results are available
    while (!fib1.ready() || !fib2.ready()) {
        std::cout << "working.." << std::endl;
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }

    // the implicit conversion operator yields the computed values
    std::cout << "fib1: " << fib1 << " fib2: " << fib2 << std::endl;
}

Just a reminder: using tuples, as std::tie does above, is pretty efficient.
