.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_tutorial/plot_bbegin_measure_time.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_tutorial_plot_bbegin_measure_time.py:


Benchmark ONNX conversion
=========================

.. index:: benchmark

Example :ref:`l-simple-deploy-1` converts a simple model. This example
builds a similar model on random data and compares the processing time
each runtime option requires to compute predictions.

Training a pipeline
+++++++++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 17-47

.. code-block:: default

    import numpy
    from pandas import DataFrame
    from tqdm import tqdm
    from onnx.reference import ReferenceEvaluator
    from sklearn import config_context
    from sklearn.datasets import make_regression
    from sklearn.ensemble import (
        GradientBoostingRegressor,
        RandomForestRegressor,
        VotingRegressor,
    )
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from onnxruntime import InferenceSession
    from skl2onnx import to_onnx
    from skl2onnx.tutorial import measure_time

    N = 11000
    X, y = make_regression(N, n_features=10)
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.01)
    print("Train shape", X_train.shape)
    print("Test shape", X_test.shape)

    reg1 = GradientBoostingRegressor(random_state=1)
    reg2 = RandomForestRegressor(random_state=1)
    reg3 = LinearRegression()
    ereg = VotingRegressor([("gb", reg1), ("rf", reg2), ("lr", reg3)])
    ereg.fit(X_train, y_train)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Train shape (110, 10)
    Test shape (10890, 10)

.. code-block:: none

    VotingRegressor(estimators=[('gb', GradientBoostingRegressor(random_state=1)),
                                ('rf', RandomForestRegressor(random_state=1)),
                                ('lr', LinearRegression())])


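A quick sanity check, not part of the original example: a ``VotingRegressor``
fitted without weights simply averages its base estimators' predictions, and
that averaged model is what the benchmark below converts to ONNX. A minimal
sketch with a smaller ensemble (the estimator names ``lr``/``dt`` are
illustrative):

```python
import numpy
from sklearn.datasets import make_regression
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(100, n_features=5, random_state=0)
ereg = VotingRegressor(
    [("lr", LinearRegression()), ("dt", DecisionTreeRegressor(random_state=0))]
)
ereg.fit(X, y)

# Without weights, the ensemble prediction is the plain mean of the
# fitted base estimators' predictions.
base = numpy.column_stack([est.predict(X) for est in ereg.estimators_])
assert numpy.allclose(ereg.predict(X), base.mean(axis=1))
```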
.. GENERATED FROM PYTHON SOURCE LINES 48-57

Measure the processing time
+++++++++++++++++++++++++++

We use function :func:`skl2onnx.tutorial.measure_time`.
The page about `assume_finite
<https://scikit-learn.org/stable/modules/generated/sklearn.config_context.html>`_
may be useful if you need to optimize the prediction.
We measure the processing time per observation, whether the observation
belongs to a batch or is predicted on its own.

.. GENERATED FROM PYTHON SOURCE LINES 57-74

.. code-block:: default

    sizes = [(1, 50), (10, 50), (100, 10)]

    with config_context(assume_finite=True):
        obs = []
        for batch_size, repeat in tqdm(sizes):
            context = {"ereg": ereg, "X": X_test[:batch_size]}
            mt = measure_time(
                "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
            )
            mt["size"] = context["X"].shape[0]
            mt["mean_obs"] = mt["average"] / mt["size"]
            obs.append(mt)

    df_skl = DataFrame(obs)
    df_skl

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number  size  mean_obs
    0  0.014108   0.004352  0.008928  0.029686      50      10     1  0.014108
    1  0.011358   0.003103  0.008228  0.020336      50      10    10  0.001136
    2  0.013486   0.002790  0.009885  0.018525      10      10   100  0.000135


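For intuition, ``measure_time`` behaves much like the standard :mod:`timeit`
module. The sketch below approximates its semantics, not skl2onnx's actual
implementation; ``simple_measure_time`` is a hypothetical helper name, and the
reported fields mirror the columns of the table above:

```python
import timeit
from statistics import mean, stdev

import numpy


def simple_measure_time(stmt, context, repeat=10, number=10):
    # Sketch of skl2onnx.tutorial.measure_time (assumption, not the
    # actual implementation): run `stmt` `number` times per trial,
    # `repeat` trials, then report per-call statistics.
    timer = timeit.Timer(stmt, globals=context)
    raw = timer.repeat(repeat=repeat, number=number)
    per_call = [t / number for t in raw]  # div_by_number=True
    return {
        "average": mean(per_call),
        "deviation": stdev(per_call),
        "min_exec": min(per_call),
        "max_exec": max(per_call),
        "repeat": repeat,
        "number": number,
    }


res = simple_measure_time("x @ x", {"x": numpy.eye(100)})
print(sorted(res))
```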
.. GENERATED FROM PYTHON SOURCE LINES 75-76

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 76-79

.. code-block:: default

    df_skl.set_index("size")[["mean_obs"]].plot(title="scikit-learn", logx=True, logy=True)

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :alt: scikit-learn
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 80-85

ONNX runtime
++++++++++++

The same benchmark is run with the two available ONNX runtimes:
:epkg:`onnxruntime` and the ``ReferenceEvaluator`` from the
:epkg:`ONNX` package.

.. GENERATED FROM PYTHON SOURCE LINES 85-129

.. code-block:: default

    onx = to_onnx(ereg, X_train[:1].astype(numpy.float32), target_opset=14)

    sess = InferenceSession(onx.SerializeToString(), providers=["CPUExecutionProvider"])
    oinf = ReferenceEvaluator(onx)

    obs = []
    for batch_size, repeat in tqdm(sizes):
        # scikit-learn
        context = {"ereg": ereg, "X": X_test[:batch_size].astype(numpy.float32)}
        mt = measure_time(
            "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
        )
        mt["size"] = context["X"].shape[0]
        mt["skl"] = mt["average"] / mt["size"]

        # onnxruntime
        context = {"sess": sess, "X": X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "sess.run(None, {'X': X})[0]",
            context,
            div_by_number=True,
            number=10,
            repeat=repeat,
        )
        mt["ort"] = mt2["average"] / mt["size"]

        # ReferenceEvaluator
        context = {"oinf": oinf, "X": X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "oinf.run(None, {'X': X})[0]",
            context,
            div_by_number=True,
            number=10,
            repeat=repeat,
        )
        mt["pyrt"] = mt2["average"] / mt["size"]

        # end
        obs.append(mt)

    df = DataFrame(obs)
    df

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number  size       skl       ort      pyrt
    0  0.012201   0.003428  0.008608  0.021660      50      10     1  0.012201  0.000030  0.018960
    1  0.011550   0.003386  0.008357  0.023143      50      10    10  0.001155  0.000021  0.003811
    2  0.013936   0.004899  0.009295  0.023858      10      10   100  0.000139  0.000004  0.002129


.. GENERATED FROM PYTHON SOURCE LINES 130-131

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 131-136

.. code-block:: default

    df.set_index("size")[["skl", "ort", "pyrt"]].plot(
        title="Average prediction time per runtime", logx=True, logy=True
    )

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :alt: Average prediction time per runtime
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 137-143

:epkg:`ONNX` runtimes are much faster than :epkg:`scikit-learn` at
predicting a single observation. :epkg:`scikit-learn` is optimized for
training and for batch prediction. That explains why :epkg:`scikit-learn`
and the ONNX runtimes seem to converge for big batches: they rely on
similar implementations, parallelization, and languages
(:epkg:`C++`, :epkg:`openmp`).


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (1 minutes 19.181 seconds)


.. _sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_bbegin_measure_time.py <plot_bbegin_measure_time.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_bbegin_measure_time.ipynb <plot_bbegin_measure_time.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_