performance machine-learning scalability vowpalwabbit online-algorithm

VowpalWabbit: Differences and scalability


I am trying to ascertain how VowpalWabbit's "state" is maintained as the size of our input set grows. In a typical machine learning environment, if I have 1000 input vectors, I would expect to send all of those at once, wait for a model building phase to complete, and then use the model to create new predictions.

In VW, it appears that the "online" nature of the algorithm shifts this paradigm toward something more performant and capable of adjusting in real time.

  1. How is this real-time model modification implemented?

  2. Does VW consume increasing resources as the total amount of input data grows over time? That is, as I add more data to my VW model (while it is still small), do the real-time adjustment calculations begin to take longer once the cumulative number of feature-vector inputs grows into the thousands, tens of thousands, or millions?


Solution

  • Just to add to carlosdc's good answer.

    Some of the features that set vowpal wabbit apart, and allow it to scale to tera-feature (10^12) data sizes, are:

    The online weight vector: vowpal wabbit maintains an in-memory weight vector, which is essentially the vector of weights for the model it is building. This is what you call "the state" in your question.
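
    To make that concrete, here is a minimal sketch in plain Python (not vw's actual code, loss function, or update rule) of how an online learner keeps only a weight vector as state and adjusts it one example at a time:

        # Hypothetical sketch: the model "state" is just a weight vector,
        # updated by a gradient step on each example as it arrives.
        weights = {}            # feature -> weight ("the state")
        learning_rate = 0.5

        def predict(example):
            # example: {feature: value}
            return sum(weights.get(f, 0.0) * v for f, v in example.items())

        def learn(example, label):
            # Squared-loss gradient step on a single example.
            error = predict(example) - label
            for f, v in example.items():
                weights[f] = weights.get(f, 0.0) - learning_rate * error * v

        # Each incoming example immediately adjusts the model:
        learn({0: 1.0, 3: 2.0}, label=1.0)
        learn({1: 1.0, 3: 0.5}, label=0.0)

    Because each update touches only the features present in the current example, the cost of an update depends on the size of that example, not on how many examples have been seen before.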

    Unbounded data size: The size of the weight vector is proportional to the number of features (independent input variables), not to the number of examples (instances). This is what makes vowpal wabbit, unlike many other (non-online) learners, scale in space. Since it doesn't need to load all the data into memory the way a typical batch learner does, it can still learn from data sets that are too big to fit in memory.
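
    A sketch of what that looks like in practice, reusing the learn function from the sketch above plus a hypothetical parse helper: the training data is streamed one line at a time, so memory usage is bounded by the weight vector rather than by the data set.

        def parse(line):
            # Hypothetical line format: "label name1:value1 name2:value2 ..."
            # (every feature is assumed to carry an explicit value here).
            label, *feats = line.split()
            example = {name: float(value)
                       for name, _, value in (f.partition(":") for f in feats)}
            return example, float(label)

        def stream_train(path):
            # One pass over the file; memory stays proportional to the
            # weight vector, not to the number of lines in the file.
            with open(path) as lines:
                for line in lines:
                    example, label = parse(line)
                    learn(example, label)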

    Cluster mode: vowpal wabbit supports running on multiple hosts in a cluster, imposing a binary tree graph structure on the nodes and using the all-reduce reduction from leaves to root.
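
    A conceptual sketch of that all-reduce pattern (networking and VW's actual cluster protocol are omitted): partial values are summed from the leaves up to the root, then the combined value is broadcast back down, so every node ends up with the same total.

        # Conceptual all-reduce over a binary tree of nodes.
        def reduce_up(node, partials, children):
            # Sum this node's partial value with everything below it.
            total = partials[node]
            for child in children.get(node, []):
                total += reduce_up(child, partials, children)
            return total

        def broadcast_down(node, total, children, results):
            # Push the combined value back to every node in the subtree.
            results[node] = total
            for child in children.get(node, []):
                broadcast_down(child, total, children, results)

        children = {"root": ["a", "b"], "a": [], "b": []}
        partials = {"root": 1.0, "a": 2.0, "b": 3.0}
        results = {}
        broadcast_down("root", reduce_up("root", partials, children), children, results)
        # results == {"root": 6.0, "a": 6.0, "b": 6.0}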

    Hash trick: vowpal wabbit employs what's called the hashing trick. All feature names get hashed into an integer using murmurhash-32. This has several advantages: it is very simple and time-efficient, since there is no hash-table management or collision resolution to deal with, while features are still allowed to occasionally collide. It turns out (in practice) that a small number of feature collisions in a training set with thousands of distinct features acts like an implicit regularization term. Counter-intuitively, this often improves model accuracy rather than decreasing it. It is also agnostic to the sparseness (or density) of the feature space. Finally, it allows the input feature names to be arbitrary strings, unlike most conventional learners which require feature names/IDs to be both a) numeric and b) unique.
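
    A sketch of the hashing trick itself (vw uses murmurhash; zlib.crc32 is just a stand-in here): arbitrary feature-name strings are hashed directly into a fixed-size weight array, with no feature dictionary to maintain.

        import zlib

        NUM_BITS = 18                        # vw's default table is 2^18 weights
        MASK = (1 << NUM_BITS) - 1
        hashed_weights = [0.0] * (1 << NUM_BITS)

        def feature_index(name):
            # Any string maps to a slot in the fixed-size weight array.
            return zlib.crc32(name.encode("utf-8")) & MASK

        # Distinct names may occasionally collide into the same slot; in
        # practice this behaves like a mild, implicit regularizer.
        i = feature_index("user_city=Boston")
        j = feature_index("item_color=red")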

    Parallelism: vowpal wabbit exploits multi-core CPUs by running the parsing and the learning in two separate threads, adding further to its speed. This is what allows vw to learn as fast as it can read data. It turns out that most of the algorithms supported in vw are, counter-intuitively, bottlenecked by I/O speed rather than by learning speed.
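
    A sketch of that two-thread pipeline (not VW's actual implementation), reusing the parse and learn helpers from the sketches above: one thread reads and parses input while the other consumes parsed examples and updates the model, so parsing and learning overlap.

        import queue
        import threading

        examples = queue.Queue(maxsize=1024)

        def parser_thread(path):
            # Reads and parses input, handing parsed examples to the learner.
            with open(path) as lines:
                for line in lines:
                    examples.put(parse(line))
            examples.put(None)                 # sentinel: end of input

        def learner_thread():
            # Consumes parsed examples and updates the weight vector.
            while True:
                item = examples.get()
                if item is None:
                    break
                example, label = item
                learn(example, label)

        t1 = threading.Thread(target=parser_thread, args=("train.txt",))
        t2 = threading.Thread(target=learner_thread)
        t1.start()
        t2.start()
        t1.join()
        t2.join()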

    Checkpointing and incremental learning: vowpal wabbit allows you to save your model to disk while you learn, and then to load the model and continue learning where you left off with the --save_resume option.
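
    For example, using the vowpalwabbit Python bindings (a hedged sketch; the exact API names vary between package versions, but the flags are the same ones the command-line tool accepts):

        from vowpalwabbit import pyvw

        # First session: learn, then write the model plus the optimizer
        # state needed to resume (--save_resume) to disk via -f.
        vw1 = pyvw.vw("--save_resume -f model.vw --quiet")
        vw1.learn("1 | price:0.23 sqft:0.25")
        vw1.finish()                 # flushes model.vw to disk

        # Later session: load the saved model (-i) and keep learning
        # from where the previous session left off.
        vw2 = pyvw.vw("-i model.vw --save_resume --quiet")
        vw2.learn("0 | price:0.18 sqft:0.15")
        vw2.finish()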

    Test-like error estimate: The average loss calculated by vowpal wabbit "as it goes" is always measured on unseen (out-of-sample) data (*). This eliminates the need for pre-planned hold-out sets or cross-validation. The error rate you see during training is 'test-like'.
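
    A sketch of this "progressive validation" idea, reusing the hypothetical predict and learn helpers from the first sketch: each example is scored before the model learns from it, so the running average loss is always computed on data the model has not yet seen.

        total_loss, n = 0.0, 0

        def progressive_step(example, label):
            global total_loss, n
            # Score the example first (it is still unseen)...
            error = predict(example) - label
            total_loss += error * error
            n += 1
            # ...and only then let the model learn from it.
            learn(example, label)
            return total_loss / n      # running "test-like" average loss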

    Beyond linear models: vowpal wabbit supports several algorithms, including matrix factorization (roughly, sparse-matrix SVD), Latent Dirichlet Allocation (LDA), and more. It also supports on-the-fly generation of term interactions (bi-linear, quadratic, cubic, and a feed-forward sigmoid neural net with a user-specified number of units), multi-class classification (in addition to basic regression and binary classification), and more.
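
    As an illustration of one of these, here is a sketch of what quadratic term interactions mean conceptually (vw generates them on the fly between namespaces; this is not its implementation): every feature in one namespace is crossed with every feature in the other.

        def quadratic(ns_a, ns_b):
            # ns_a, ns_b: {feature_name: value} for two namespaces.
            return {f"{a}^{b}": va * vb
                    for a, va in ns_a.items()
                    for b, vb in ns_b.items()}

        crossed = quadratic({"user=alice": 1.0}, {"movie=matrix": 1.0})
        # {'user=alice^movie=matrix': 1.0}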

    There are tutorials and many examples in the official vw wiki on github.

    (*) One exception is if you use multiple passes with the --passes N option.