python machine-learning random-forest machine-learning-model rapids

Deploy a RAPIDS cuML Random Forest model to a Windows virtual machine where RAPIDS/cuML can't be installed


I need to perform inference with a cuml.dask.ensemble.RandomForestClassifier model on a GPU-less Windows virtual machine where RAPIDS/cuML can't be installed.

My plan is to use Treelite: import the model into Treelite and generate a shared library (a .dll file on Windows). After that, I would use treelite_runtime.Predictor to load the shared library and perform inference on the target machine.
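
For reference, this is roughly the inference step I have in mind on the target Windows machine (just a sketch: the './mymodel.dll' path is hypothetical, and if I read the treelite_runtime 0.92 docs correctly, the input has to be wrapped in a Batch object rather than passed as a raw NumPy array):

import numpy as np
import treelite_runtime

# Load the compiled shared library produced by Treelite's export_lib
predictor = treelite_runtime.Predictor('./mymodel.dll', verbose=True)

# Wrap the feature matrix in a Batch (treelite_runtime 0.92 API)
X_new = np.random.rand(100, 10).astype(np.float32)
batch = treelite_runtime.Batch.from_npy2d(X_new)

# CPU-only inference, no RAPIDS/cuML needed on this machine
y_pred = predictor.predict(batch)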

The problem is that I have no idea how to import the RandomForestClassifier model into Treelite to create a Treelite model.

I have tried using 'convert_to_treelite_model', but the object it returns doesn't look like a Treelite model and I don't know how to use it.

See the attached code (executed under Linux, so I use the gcc toolchain and generate a '.so' file).

I get the exception "'cuml.fil.fil.TreeliteModel' object has no attribute 'export_lib'" when I try to call the 'export_lib' function...

import numpy as np
import pandas as pd
import cudf
from sklearn import model_selection, datasets
from cuml.dask.common import utils as dask_utils
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import dask_cudf
from cuml.dask.ensemble import RandomForestClassifier as cumlDaskRF
import treelite
import treelite_runtime

if __name__ == '__main__':
    # This will use all GPUs on the local host by default
    cluster = LocalCUDACluster(threads_per_worker=1)
    c = Client(cluster)

    # Query the client for all connected workers
    workers = c.has_what().keys()
    n_workers = len(workers)
    n_streams = 8 # Performance optimization

    # Data parameters
    train_size = 10000
    test_size = 100
    n_samples = train_size + test_size
    n_features = 10

    # Random Forest building parameters
    max_depth = 6
    n_bins = 16
    n_trees = 100

    X, y = datasets.make_classification(n_samples=n_samples, n_features=n_features,
                                     n_clusters_per_class=1, n_informative=int(n_features / 3),
                                     random_state=123, n_classes=5)
    X = X.astype(np.float32)
    y = y.astype(np.int32)
    X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=test_size)

    n_partitions = n_workers

    # First convert to cudf (with real data, you would likely load in cuDF format to start)
    X_train_cudf = cudf.DataFrame.from_pandas(pd.DataFrame(X_train))
    y_train_cudf = cudf.Series(y_train)
    X_test_cudf = cudf.DataFrame.from_pandas(pd.DataFrame(X_test))

    # Partition with Dask
    # In this case, each worker will train on 1/n_partitions fraction of the data
    X_train_dask = dask_cudf.from_cudf(X_train_cudf, npartitions=n_partitions)
    y_train_dask = dask_cudf.from_cudf(y_train_cudf, npartitions=n_partitions)
    x_test_dask = dask_cudf.from_cudf(X_test_cudf, npartitions=n_partitions)

    # Persist to cache the data in active memory
    X_train_dask, y_train_dask, x_test_dask = dask_utils.persist_across_workers(c, [X_train_dask, y_train_dask, x_test_dask], workers=workers)

    cuml_model = cumlDaskRF(max_depth=max_depth, n_estimators=n_trees, n_bins=n_bins, n_streams=n_streams)
    cuml_model.fit(X_train_dask, y_train_dask)

    wait(cuml_model.rfs) # Allow asynchronous training tasks to finish

    # HACK: comb_model is None if a prediction isn't performed before calling 'get_combined_model'.
    # I don't know why...

    cuml_y_pred = cuml_model.predict(x_test_dask).compute()
    cuml_y_pred = cuml_y_pred.to_array()
    del cuml_y_pred

    comb_model = cuml_model.get_combined_model()

    treelite_model = comb_model.convert_to_treelite_model()
    toolchain = 'gcc'
    treelite_model.export_lib(toolchain=toolchain, libpath='./mymodel.so', verbose=True) # <----- EXCEPTION!

    del cuml_model
    del treelite_model

    predictor = treelite_runtime.Predictor('./mymodel.so', verbose=True)
    y_pred = predictor.predict(X_test)

    # ......

Note: I'm running the code on an Ubuntu box with 2 NVIDIA RTX 2080 Ti GPUs, using the following library versions:

cudatoolkit               10.1.243
cudnn                     7.6.0
cudf                      0.15.0
cuml                      0.15.0
dask                      2.30.0 
dask-core                 2.30.0 
dask-cuda                 0.15.0 
dask-cudf                 0.15.0 
rapids                    0.15.1
treelite                  0.92
treelite-runtime          0.92

Solution

  • At the moment Treelite does not have a serialization method that can be directly used. We have an internal serialization method that we use to pickle cuML's RF model.

    I would recommend opening a feature request in Treelite's GitHub repo (https://github.com/dmlc/treelite) asking for a way to serialize and deserialize Treelite models.

    Furthermore, the output of the convert_to_treelite_model function is in fact a Treelite model. It is displayed as:

    In [2]: treelite_model
    Out[2]: <cuml.fil.fil.TreeliteModel at 0x7f11ceeca840>
    

    This is because we expose the C++ Treelite code in Cython, so you have direct access to Treelite's C++ handle.
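
    As a possible interim workaround (just a sketch, not an officially supported path): since the Cython object wraps the Treelite C++ handle, you may be able to rebuild a treelite.Model from the Treelite Python package around that handle and then call export_lib on it. Both the handle attribute on the cuML object and the ability of treelite.Model to adopt a raw handle are assumptions about your specific cuML/Treelite versions, and handle ownership (who frees it) is not guaranteed, so treat this purely as an experiment:

    import ctypes
    import treelite

    # ASSUMPTION: the cuml.fil.fil.TreeliteModel object exposes its C++ handle
    # (e.g. as an address via a `handle` attribute); this varies by cuML version.
    raw_handle = comb_model.convert_to_treelite_model().handle

    # ASSUMPTION: treelite.Model (0.92) can be constructed around an existing
    # ModelHandle; double-free behaviour between cuML and Treelite is not guaranteed.
    tl_model = treelite.Model(handle=ctypes.c_void_p(raw_handle))

    # With a treelite.Model from the Treelite package, export_lib is available
    tl_model.export_lib(toolchain='gcc', libpath='./mymodel.so', verbose=True)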