First tutorial

This tutorial will demonstrate the basic workflow.

import treelite

Regression Example

In this tutorial, we will use a small regression example to describe the full workflow.

Load the Boston house prices dataset

Let us use the Boston house prices dataset from scikit-learn (sklearn.datasets.load_boston()). It consists of 506 houses with 13 distinct features:

from sklearn.datasets import load_boston
X, y = load_boston(return_X_y=True)
print('dimensions of X = {}'.format(X.shape))
print('dimensions of y = {}'.format(y.shape))
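Note that scikit-learn 1.2 and later no longer ship load_boston. If the dataset is unavailable on your system, any dense 2-D float array of shape (n_samples, n_features) works as input for the rest of this tutorial. A minimal synthetic stand-in (shapes chosen to match the Boston dataset; the data itself is random, not the real dataset) might look like:

```python
import numpy as np

# Hypothetical stand-in: 506 samples with 13 features, like the Boston dataset
rng = np.random.default_rng(0)
X = rng.standard_normal((506, 13))
y = X @ rng.standard_normal(13)    # simple linear target, just for demonstration
print('dimensions of X = {}'.format(X.shape))
print('dimensions of y = {}'.format(y.shape))
```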

Train a tree ensemble model using XGBoost

The first step is to train a tree ensemble model using XGBoost (dmlc/xgboost).

Disclaimer: Treelite does NOT depend on the XGBoost package in any way. XGBoost was used here only to provide a working example.

import xgboost
dtrain = xgboost.DMatrix(X, label=y)
params = {'max_depth': 3, 'eta': 1, 'silent': 1, 'objective': 'reg:linear'}
bst = xgboost.train(params, dtrain, 20, [(dtrain, 'train')])

Pass XGBoost model into treelite

Next, we feed the trained model into treelite. If you used XGBoost to train the model, it takes only one line of code:

model = treelite.Model.from_xgboost(bst)


Using other packages to train decision trees

With additional work, you can use models trained with other machine learning packages. See this page for instructions.

Generate shared library

Given a tree ensemble model, treelite will produce a prediction subroutine (internally represented as a C program). To use the subroutine for prediction tasks, we package it as a dynamic shared library, which exports the prediction subroutine for other programs to use.

Before proceeding, you should decide which of the following compilers is available on your system and set the variable toolchain appropriately:

  • gcc
  • clang
  • msvc (Microsoft Visual C++)

toolchain = 'clang'   # change this value as necessary

The choice of toolchain will be used to compile the prediction subroutine into native code.
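If you are unsure which compiler is installed, one way to pick a value (a sketch using the standard library, not part of treelite's API) is to probe the PATH:

```python
import shutil

# Probe for a usable compiler on PATH; fall back to MSVC otherwise.
toolchain = None
for candidate in ('gcc', 'clang'):
    if shutil.which(candidate) is not None:
        toolchain = candidate
        break
if toolchain is None:
    toolchain = 'msvc'   # assume Microsoft Visual C++ (e.g. on Windows)
```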

Now we are ready to generate the library.

model.export_lib(toolchain=toolchain, libpath='./mymodel.dylib', verbose=True)
                            #                            ^^^^^
                            # set correct file extension here; see the following paragraph


File extension for shared library

Make sure to use the correct file extension for the library, depending on the operating system:

  • Windows: .dll
  • Mac OS X: .dylib
  • Linux / Other UNIX: .so
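The correct extension can also be selected programmatically. A small helper (hypothetical, not provided by treelite) based on sys.platform:

```python
import sys

def shared_lib_ext():
    """Return the shared-library file extension for the current OS."""
    if sys.platform.startswith('win'):
        return '.dll'                # Windows
    if sys.platform == 'darwin':
        return '.dylib'              # Mac OS X
    return '.so'                     # Linux / other UNIX

libpath = './mymodel' + shared_lib_ext()
```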


Want to deploy the model to another machine?

This tutorial assumes that predictions will be made on the same machine that is running treelite. If you’d like to deploy your model to another machine (that may not have treelite installed), see the page Deploying models.


Reducing compilation time for large models

For large models, export_lib() may take a long time to finish. To reduce compilation time, enable the parallel_comp option by writing

model.export_lib(toolchain=toolchain, libpath='./mymodel.dylib',
                 params={'parallel_comp': 32}, verbose=True)

which splits the prediction subroutine into 32 source files that get compiled in parallel. Adjust this number according to the number of cores on your machine.
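Rather than hard-coding 32, the value can be derived from the machine's core count. A heuristic sketch (parallel_comp is the parameter named above; the one-file-per-core choice is an assumption, not a treelite recommendation):

```python
import os

# Use one generated source file per available core as a starting heuristic.
n_parallel = os.cpu_count() or 1
params = {'parallel_comp': n_parallel}
```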

Use the shared library to make predictions

Once the shared library has been generated, we feed it into a separate module (treelite.runtime) known as the runtime. The optimized prediction subroutine is exposed through the Predictor class:

import treelite.runtime     # runtime module
predictor = treelite.runtime.Predictor('./mymodel.dylib', verbose=True)

Next, we decide which of the houses in X to make predictions for. Say, rows 10 through 19 (note that rbegin is inclusive and rend is exclusive):

batch = treelite.runtime.Batch.from_npy2d(X, rbegin=10, rend=20)

We used the method from_npy2d() because the matrix X was a dense NumPy array (numpy.ndarray). If X were a sparse matrix (scipy.sparse.csr_matrix), we would have used the method from_csr() instead.
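To illustrate the distinction, here is a tiny sketch (assuming SciPy is installed) of converting a dense array to CSR form; a sparse matrix like this is what from_csr() would accept:

```python
import numpy as np
from scipy.sparse import csr_matrix

X_dense = np.array([[0.0, 1.5],
                    [2.0, 0.0]])
X_sparse = csr_matrix(X_dense)   # compressed sparse row representation

# Dense input  -> treelite.runtime.Batch.from_npy2d(X_dense, ...)
# Sparse input -> treelite.runtime.Batch.from_csr(X_sparse)
print(X_sparse.nnz)              # number of stored nonzero entries
```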

out_pred = predictor.predict(batch, verbose=True)