Treelite: model compiler for decision tree ensembles
Treelite is a model compiler for decision tree ensembles, aimed at efficient deployment.
Use machine learning package of your choice
Treelite accommodates a wide range of decision tree ensemble models. In particular, it handles both random forests and gradient boosted trees.
Treelite can read models produced by XGBoost, LightGBM, and scikit-learn. If you trained your model with another package, you can use the flexible builder class.
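Whatever package produced the ensemble, each tree boils down to the same information: internal nodes carry a (feature, threshold, left child, right child) split, and leaves carry an output value. The sketch below is a pure-Python illustration of that node-level view; the names `Split`, `Leaf`, and `predict_tree` are invented for this example and are not Treelite's builder API.

```python
# Hypothetical node records; Treelite's builder class collects the same
# kind of information, but through its own API.
from dataclasses import dataclass

@dataclass
class Leaf:
    value: float          # output produced by this leaf

@dataclass
class Split:
    feature: int          # index of the feature to test
    threshold: float      # go left if x[feature] < threshold
    left: object          # child taken when the test passes
    right: object         # child taken otherwise

def predict_tree(node, x):
    """Walk a single tree from the root down to a leaf."""
    while isinstance(node, Split):
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.value

# A depth-1 toy tree: split on feature 0, with a leaf on each side
tree = Split(feature=0, threshold=0.5, left=Leaf(-1.0), right=Leaf(1.0))
print(predict_tree(tree, [0.3]))  # -1.0
```

A builder class only needs to be fed this per-node information, which is why models from any training package can be imported.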
Deploy with minimal dependencies
It is a great hassle to install machine learning packages (e.g. XGBoost, LightGBM, scikit-learn) on every machine your tree model will run on. This is no longer necessary: Treelite exports your model as a stand-alone prediction library, so predictions can be made without any machine learning package installed.
Universal, lightweight specification for all tree models
Are you designing an optimized prediction runtime for tree models? Do not be overwhelmed by the variety of tree models in the wild. Treelite lets you convert many kinds of tree models into a single, lightweight exchange format. You can serialize (save) any tree model into a byte sequence or a file. Plus, Treelite is designed to be used as a component in prediction runtimes. Currently, Treelite is used by Amazon SageMaker Neo and RAPIDS cuML.
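To make "serialize into a byte sequence" concrete, here is a minimal pure-Python sketch that round-trips tree information through bytes. This is an illustration only: Treelite's actual exchange format is a compact binary layout, not JSON, and the encoding below is invented for this example.

```python
import json

# A toy tree encoded as nested lists:
#   ["leaf", value]  or  ["split", feature, threshold, left, right]
tree = ["split", 0, 0.5, ["leaf", -1.0], ["leaf", 1.0]]

def serialize(tree) -> bytes:
    """Turn the tree structure into a byte sequence."""
    return json.dumps(tree).encode("utf-8")

def deserialize(blob: bytes):
    """Recover the tree structure from the byte sequence."""
    return json.loads(blob.decode("utf-8"))

blob = serialize(tree)
assert deserialize(blob) == tree  # round trip preserves the model
```

A single well-defined byte format is what lets downstream runtimes consume models regardless of which package trained them.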
Install Treelite from PyPI:
python3 -m pip install --user treelite treelite_runtime
Import your tree ensemble model into Treelite:
import treelite
model = treelite.Model.load('my_model.model', model_format='xgboost')
Deploy a source archive:
# Produce a zipped source directory, containing all model information
# Run `make` on the target machine
model.export_srcpkg(platform='unix', toolchain='gcc',
                    pkgpath='./mymodel.zip', libname='mymodel.so',
                    verbose=True)
Deploy a shared library:
# Like export_srcpkg, but generates a shared library immediately
# Use this only when the host and target machines are compatible
model.export_lib(toolchain='gcc', libpath='./mymodel.so', verbose=True)
Make predictions on the target machine:
import treelite_runtime
predictor = treelite_runtime.Predictor('./mymodel.so', verbose=True)
dmat = treelite_runtime.DMatrix(X)
out_pred = predictor.predict(dmat)
Read the First tutorial for a more detailed example. See Deploying models for additional instructions on deployment.
A note on API compatibility
Since Treelite is in early development, its API may change substantially in the future.
How Treelite works
The workflow involves two distinct machines: the host machine, which generates a prediction subroutine from a given tree model, and the target machine, which runs the subroutine. The two machines exchange a single C file that contains all relevant information about the tree model. Only the host machine needs to have Treelite installed; the target machine requires only a working C compiler.
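The generated C file essentially unrolls each tree into nested if/else branches over feature thresholds and sums the leaf outputs across the ensemble. Below is a hedged pure-Python sketch of that shape; the real output is C code, and the trees and function names here are invented for illustration.

```python
def predict_tree0(x):
    # One tree, compiled down to straight-line branching code
    if x[0] < 0.5:
        return -1.0
    else:
        return 1.0

def predict_tree1(x):
    if x[1] < 2.0:
        return 0.25
    else:
        return -0.25

def predict(x):
    # The ensemble prediction is the sum over all trees
    # (a model-specific transform, e.g. sigmoid, is omitted here)
    return predict_tree0(x) + predict_tree1(x)

print(predict([0.3, 1.0]))  # -0.75
```

Because the subroutine is plain branching code with no library calls, the target machine needs nothing beyond a C compiler to build and run it.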
- Treelite API
- Treelite runtime API
- General Tree Inference Library (GTIL)
- Treelite C API
- Treelite runtime Rust API
- Knobs and Parameters
- Notes on Serialization
- Documentation for the C++ codebase