Treelite
Functions

int TreelitePredictorLoad (const char *library_path, int num_worker_thread, PredictorHandle *out)
    Load prediction code into memory. This function assumes that the prediction code has already been compiled into a dynamic shared library object (.so/.dll/.dylib).

int TreelitePredictorPredictBatch (PredictorHandle handle, DMatrixHandle batch, int verbose, int pred_margin, PredictorOutputHandle out_result, size_t *out_result_size)
    Make predictions on a batch of data rows (synchronously). This function internally divides the workload among all worker threads.

int TreeliteCreatePredictorOutputVector (PredictorHandle handle, DMatrixHandle batch, PredictorOutputHandle *out_output_vector)
    Convenience function to allocate an output vector able to hold the prediction result for a given data matrix. The vector's length will be identical to TreelitePredictorQueryResultSize() and its type will be identical to TreelitePredictorQueryLeafOutputType(). To prevent a memory leak, make sure to de-allocate the vector with TreeliteDeletePredictorOutputVector().

int TreeliteDeletePredictorOutputVector (PredictorHandle handle, PredictorOutputHandle output_vector)
    De-allocate an output vector.

int TreelitePredictorQueryResultSize (PredictorHandle handle, DMatrixHandle batch, size_t *out)
    Given a batch of data rows, query the necessary size of the array to hold predictions for all data points.

int TreelitePredictorQueryNumClass (PredictorHandle handle, size_t *out)
    Get the number of classes in the loaded model. The number is 1 for most tasks; it is greater than 1 for multiclass classification.

int TreelitePredictorQueryNumFeature (PredictorHandle handle, size_t *out)
    Get the width (number of features) of each instance used to train the loaded model.

int TreelitePredictorQueryPredTransform (PredictorHandle handle, const char **out)
    Get the name of the post-prediction transformation used to train the loaded model.

int TreelitePredictorQuerySigmoidAlpha (PredictorHandle handle, float *out)
    Get the alpha value of the sigmoid transformation used to train the loaded model.

int TreelitePredictorQueryGlobalBias (PredictorHandle handle, float *out)
    Get the global bias which adjusts predicted margin scores.

int TreelitePredictorQueryThresholdType (PredictorHandle handle, const char **out)

int TreelitePredictorQueryLeafOutputType (PredictorHandle handle, const char **out)

int TreelitePredictorFree (PredictorHandle handle)
    Delete predictor from memory.
Predictor interface
int TreeliteCreatePredictorOutputVector (PredictorHandle handle, DMatrixHandle batch, PredictorOutputHandle *out_output_vector)
Convenience function to allocate an output vector able to hold the prediction result for a given data matrix. The vector's length will be identical to TreelitePredictorQueryResultSize() and its type will be identical to TreelitePredictorQueryLeafOutputType(). To prevent a memory leak, make sure to de-allocate the vector with TreeliteDeletePredictorOutputVector().
Note: to access the element values in the output vector, cast the opaque handle (of type PredictorOutputHandle) to an appropriate pointer LeafOutputType*, where LeafOutputType is float, double, or uint32_t. The sketch after this entry walks through the steps.
Parameters:
    handle: predictor
    batch: the data matrix containing a batch of rows
    out_output_vector: handle to the newly allocated output vector
Definition at line 54 of file c_api_runtime.cc.
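A minimal sketch of those steps in C. Assumptions: the runtime header is available as <treelite/c_api_runtime.h>, the model's leaf output type is float (query TreelitePredictorQueryLeafOutputType() in real code), predictor and dmat were created elsewhere, and return codes are not checked.

    #include <stdio.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    /* predictor and dmat are assumed to exist already: see
     * TreelitePredictorLoad() and the DMatrix API (documented elsewhere). */
    void run_batch(PredictorHandle predictor, DMatrixHandle dmat) {
      PredictorOutputHandle out_vec;
      size_t out_len;

      /* 1. Allocate an output vector of the correct length and type. */
      TreeliteCreatePredictorOutputVector(predictor, dmat, &out_vec);

      /* 2. Run prediction; out_len receives the number of valid entries. */
      TreelitePredictorPredictBatch(predictor, dmat, /*verbose=*/0,
                                    /*pred_margin=*/0, out_vec, &out_len);

      /* 3. Cast the opaque handle to the leaf output type (float assumed). */
      const float *values = (const float *)out_vec;
      for (size_t i = 0; i < out_len; ++i) {
        printf("prediction[%zu] = %f\n", i, values[i]);
      }

      /* 4. Release the output vector to avoid a memory leak. */
      TreeliteDeletePredictorOutputVector(predictor, out_vec);
    }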
int TreeliteDeletePredictorOutputVector (PredictorHandle handle, PredictorOutputHandle output_vector)
De-allocate an output vector.
Parameters:
    handle: predictor
    output_vector: output vector to delete from memory
Definition at line 63 of file c_api_runtime.cc.
int TreelitePredictorFree (PredictorHandle handle)
Delete predictor from memory.
Parameters:
    handle: predictor to remove
Definition at line 135 of file c_api_runtime.cc.
int TreelitePredictorLoad (const char *library_path, int num_worker_thread, PredictorHandle *out)
Load prediction code into memory. This function assumes that the prediction code has already been compiled into a dynamic shared library object (.so/.dll/.dylib).
Parameters:
    library_path: path to the library object file containing prediction code
    num_worker_thread: number of worker threads (-1 to use the maximum number)
    out: handle to the predictor
Definition at line 31 of file c_api_runtime.cc.
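A short usage sketch: "./model.so" is a hypothetical path to a compiled model library, and -1 requests the maximum number of worker threads.

    #include <stdio.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    int main(void) {
      PredictorHandle predictor;
      /* "./model.so" is a hypothetical path; adjust for your platform. */
      if (TreelitePredictorLoad("./model.so", -1, &predictor) != 0) {
        fprintf(stderr, "failed to load predictor\n");
        return 1;
      }
      /* ... build a DMatrix and call TreelitePredictorPredictBatch() ... */
      TreelitePredictorFree(predictor);  /* release the predictor when done */
      return 0;
    }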
int TreelitePredictorPredictBatch (PredictorHandle handle, DMatrixHandle batch, int verbose, int pred_margin, PredictorOutputHandle out_result, size_t *out_result_size)
Make predictions on a batch of data rows (synchronously). This function internally divides the workload among all worker threads.
Note: this function does not allocate the result vector. Use the TreeliteCreatePredictorOutputVector() convenience function to allocate a vector of the right length and type.
Note: to access the element values in the output vector, cast the opaque handle (of type PredictorOutputHandle) to an appropriate pointer LeafOutputType*, where LeafOutputType is float, double, or uint32_t. See the sketch after this entry for managing the buffer yourself.
Parameters:
    handle: predictor
    batch: the data matrix containing a batch of rows
    verbose: whether to produce extra messages
    pred_margin: whether to produce raw margin scores instead of transformed probabilities
    out_result: resulting output vector; this pointer must point to an array whose length is given by TreelitePredictorQueryResultSize() and whose type matches TreelitePredictorQueryLeafOutputType()
    out_result_size: used to save the length of the output vector, which is guaranteed to be less than or equal to TreelitePredictorQueryResultSize()
Definition at line 39 of file c_api_runtime.cc.
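If you manage the result buffer yourself instead of calling TreeliteCreatePredictorOutputVector(), the sizing logic looks roughly like the sketch below. Assumptions: the leaf output type name is reported as "float32", a caller-owned pointer can be passed through PredictorOutputHandle, and error handling is omitted.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    void predict_with_own_buffer(PredictorHandle predictor, DMatrixHandle dmat) {
      size_t capacity, out_len;
      const char *leaf_type;

      /* How many entries must the result array hold? */
      TreelitePredictorQueryResultSize(predictor, dmat, &capacity);
      /* Which element type should be allocated? (type name "float32" assumed) */
      TreelitePredictorQueryLeafOutputType(predictor, &leaf_type);
      if (strcmp(leaf_type, "float32") != 0) {
        fprintf(stderr, "this sketch only handles float32 output\n");
        return;
      }

      float *buf = (float *)malloc(capacity * sizeof(float));
      /* The caller-owned buffer is passed through the opaque output handle. */
      TreelitePredictorPredictBatch(predictor, dmat, /*verbose=*/0,
                                    /*pred_margin=*/0,
                                    (PredictorOutputHandle)buf, &out_len);
      /* Only the first out_len entries (<= capacity) are valid. */
      for (size_t i = 0; i < out_len; ++i) {
        printf("%f\n", buf[i]);
      }
      free(buf);
    }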
int TreelitePredictorQueryGlobalBias (PredictorHandle handle, float *out)
Get the global bias which adjusts predicted margin scores.
Parameters:
    handle: predictor
    out: global bias value
Definition at line 110 of file c_api_runtime.cc.
int TreelitePredictorQueryNumClass (PredictorHandle handle, size_t *out)
Get the number of classes in the loaded model. The number is 1 for most tasks; it is greater than 1 for multiclass classification.
Parameters:
    handle: predictor
    out: number of classes (the per-row length of the prediction array)
Definition at line 79 of file c_api_runtime.cc.
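Since the result array covers every data point, the class count can be combined with TreelitePredictorQueryResultSize() to reason about the output layout. A small sketch; the assumption that each row contributes exactly num_class consecutive entries is the usual convention, not something this page states.

    #include <stdio.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    void describe_output(PredictorHandle predictor, DMatrixHandle dmat) {
      size_t num_class, result_size;
      TreelitePredictorQueryNumClass(predictor, &num_class);
      TreelitePredictorQueryResultSize(predictor, dmat, &result_size);

      /* Each data point contributes num_class entries (assumed layout),
       * so result_size / num_class rows are covered. */
      printf("classes per row : %zu\n", num_class);
      printf("result capacity : %zu\n", result_size);
      printf("rows covered    : %zu\n", result_size / num_class);
    }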
int TreelitePredictorQueryNumFeature (PredictorHandle handle, size_t *out)
Get the width (number of features) of each instance used to train the loaded model.
Parameters:
    handle: predictor
    out: number of features
Definition at line 86 of file c_api_runtime.cc.
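A quick sketch of using this query as a sanity check before building a data matrix. my_num_columns is a hypothetical value from the caller's own data pipeline; whether narrower matrices are acceptable depends on how the DMatrix is constructed, so only a warning is printed on mismatch.

    #include <stdio.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    void report_feature_width(PredictorHandle predictor, size_t my_num_columns) {
      size_t num_feature;
      TreelitePredictorQueryNumFeature(predictor, &num_feature);
      if (my_num_columns != num_feature) {
        fprintf(stderr, "note: model expects %zu features, data has %zu columns\n",
                num_feature, my_num_columns);
      }
    }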
int TreelitePredictorQueryPredTransform (PredictorHandle handle, const char **out)
Get the name of the post-prediction transformation used to train the loaded model.
Parameters:
    handle: predictor
    out: name of the post-prediction transformation
Definition at line 93 of file c_api_runtime.cc.
int TreelitePredictorQueryResultSize (PredictorHandle handle, DMatrixHandle batch, size_t *out)
Given a batch of data rows, query the necessary size of the array to hold predictions for all data points.
Parameters:
    handle: predictor
    batch: the data matrix containing a batch of rows
    out: used to store the length of the prediction array
Definition at line 71 of file c_api_runtime.cc.
int TreelitePredictorQuerySigmoidAlpha (PredictorHandle handle, float *out)
Get the alpha value of the sigmoid transformation used to train the loaded model.
Parameters:
    handle: predictor
    out: alpha value of the sigmoid transformation
Definition at line 103 of file c_api_runtime.cc.
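To show how the query functions fit together, here is a hedged sketch that converts one raw margin score (obtained with pred_margin=1) back into a probability when the model uses a sigmoid transform. The formula 1 / (1 + exp(-alpha * margin)), the transform name "sigmoid", and the assumption that the global bias is already folded into the margin follow the usual conventions and are not guaranteed by this page.

    #include <math.h>
    #include <string.h>
    #include <treelite/c_api_runtime.h>  /* header name assumed */

    float margin_to_probability(PredictorHandle predictor, float margin) {
      const char *transform;
      float alpha, bias;
      TreelitePredictorQueryPredTransform(predictor, &transform);
      TreelitePredictorQuerySigmoidAlpha(predictor, &alpha);
      TreelitePredictorQueryGlobalBias(predictor, &bias);
      (void)bias;  /* informational only; assumed already included in the margin */
      if (strcmp(transform, "sigmoid") == 0) {
        /* Usual sigmoid definition (assumed): 1 / (1 + exp(-alpha * margin)) */
        return 1.0f / (1.0f + expf(-alpha * margin));
      }
      return margin;  /* other transforms are not handled in this sketch */
    }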