.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_tutorial/plot_bbegin_measure_time.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_tutorial_plot_bbegin_measure_time.py:

Benchmark ONNX conversion
=========================

.. index:: benchmark

Example :ref:`l-simple-deploy-1` converts a simple model. This example
takes a similar example but on random data and compares the processing
time required by each option to compute predictions.

Training a pipeline
+++++++++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 17-48

.. code-block:: Python

    import numpy
    from pandas import DataFrame
    from tqdm import tqdm
    from onnx.reference import ReferenceEvaluator
    from sklearn import config_context
    from sklearn.datasets import make_regression
    from sklearn.ensemble import (
        GradientBoostingRegressor,
        RandomForestRegressor,
        VotingRegressor,
    )
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from onnxruntime import InferenceSession
    from skl2onnx import to_onnx
    from skl2onnx.tutorial import measure_time

    N = 11000
    X, y = make_regression(N, n_features=10)
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.01)
    print("Train shape", X_train.shape)
    print("Test shape", X_test.shape)

    reg1 = GradientBoostingRegressor(random_state=1)
    reg2 = RandomForestRegressor(random_state=1)
    reg3 = LinearRegression()
    ereg = VotingRegressor([("gb", reg1), ("rf", reg2), ("lr", reg3)])
    ereg.fit(X_train, y_train)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Train shape (110, 10)
    Test shape (10890, 10)

.. code-block:: none

    VotingRegressor(estimators=[('gb', GradientBoostingRegressor(random_state=1)),
                                ('rf', RandomForestRegressor(random_state=1)),
                                ('lr', LinearRegression())])


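The ensemble just trained is a plain average of its estimators. As a quick
sanity check, the sketch below (a separate toy example, with a much smaller
forest than above so it runs fast) verifies that a
:class:`~sklearn.ensemble.VotingRegressor` without weights predicts the mean
of its fitted estimators' predictions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(200, n_features=10, random_state=0)
ereg = VotingRegressor(
    [
        ("rf", RandomForestRegressor(random_state=1, n_estimators=10)),
        ("lr", LinearRegression()),
    ]
).fit(X, y)

# Without weights, the ensemble prediction is the mean of the
# fitted estimators' predictions.
manual = np.mean([est.predict(X[:5]) for est in ereg.estimators_], axis=0)
print(np.allclose(manual, ereg.predict(X[:5])))
```

This matters for the benchmark below: each call to ``ereg.predict`` runs every
underlying estimator, so the measured time is dominated by the slowest ones.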
.. GENERATED FROM PYTHON SOURCE LINES 49-58

Measure the processing time
+++++++++++++++++++++++++++

We use function :func:`skl2onnx.tutorial.measure_time`. The page about
`assume_finite `_ may be useful if you need to optimize the prediction.
We measure the processing time per observation, whether the observation
belongs to a batch or is a single one.

.. GENERATED FROM PYTHON SOURCE LINES 58-75

.. code-block:: Python

    sizes = [(1, 50), (10, 50), (100, 10)]

    with config_context(assume_finite=True):
        obs = []
        for batch_size, repeat in tqdm(sizes):
            context = {"ereg": ereg, "X": X_test[:batch_size]}
            mt = measure_time(
                "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
            )
            mt["size"] = context["X"].shape[0]
            mt["mean_obs"] = mt["average"] / mt["size"]
            obs.append(mt)

    df_skl = DataFrame(obs)
    df_skl

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number  size  mean_obs
    0  0.004927   0.000599  0.004307  0.006856      50      10     1  0.004927
    1  0.005127   0.000824  0.004372  0.008680      50      10    10  0.000513
    2  0.006102   0.000432  0.005597  0.006985      10      10   100  0.000061
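For readers without skl2onnx at hand, statistics of the same shape as the
table above can be sketched with the standard library. The ``bench`` helper
below is an illustrative name, not part of skl2onnx, and only approximates
what :func:`skl2onnx.tutorial.measure_time` reports:

```python
import timeit

import numpy as np


def bench(stmt, context, number=10, repeat=5):
    # Run *stmt* `repeat` times; each run executes it `number` times.
    # Return per-call statistics in seconds.
    times = np.array(timeit.repeat(stmt, globals=context, number=number, repeat=repeat))
    times /= number
    return {
        "average": float(times.mean()),
        "deviation": float(times.std()),
        "min_exec": float(times.min()),
        "max_exec": float(times.max()),
    }


# Example: time a vectorized NumPy reduction.
ctx = {"x": np.random.rand(1000)}
res = bench("x.sum()", ctx)
print(res["average"] > 0)
```

Dividing by ``number`` is what the ``div_by_number=True`` argument above
suggests: the reported figures are per call, not per run of ``number`` calls.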
.. GENERATED FROM PYTHON SOURCE LINES 76-77

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 77-80

.. code-block:: Python

    df_skl.set_index("size")[["mean_obs"]].plot(title="scikit-learn", logx=True, logy=True)

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :alt: scikit-learn
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 81-86

ONNX runtime
++++++++++++

The same is done with the two available ONNX runtimes.

.. GENERATED FROM PYTHON SOURCE LINES 86-130

.. code-block:: Python

    onx = to_onnx(ereg, X_train[:1].astype(numpy.float32), target_opset=14)
    sess = InferenceSession(onx.SerializeToString(), providers=["CPUExecutionProvider"])
    oinf = ReferenceEvaluator(onx)

    obs = []
    for batch_size, repeat in tqdm(sizes):
        # scikit-learn
        context = {"ereg": ereg, "X": X_test[:batch_size].astype(numpy.float32)}
        mt = measure_time(
            "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
        )
        mt["size"] = context["X"].shape[0]
        mt["skl"] = mt["average"] / mt["size"]

        # onnxruntime
        context = {"sess": sess, "X": X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "sess.run(None, {'X': X})[0]",
            context,
            div_by_number=True,
            number=10,
            repeat=repeat,
        )
        mt["ort"] = mt2["average"] / mt["size"]

        # ReferenceEvaluator
        context = {"oinf": oinf, "X": X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "oinf.run(None, {'X': X})[0]",
            context,
            div_by_number=True,
            number=10,
            repeat=repeat,
        )
        mt["pyrt"] = mt2["average"] / mt["size"]

        # end
        obs.append(mt)

    df = DataFrame(obs)
    df

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number  size       skl       ort      pyrt
    0  0.005081   0.000962  0.004419  0.010415      50      10     1  0.005081  0.000036  0.007617
    1  0.004960   0.000507  0.004386  0.007150      50      10    10  0.000496  0.000008  0.001852
    2  0.006076   0.000710  0.005279  0.007308      10      10   100  0.000061  0.000004  0.001088
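The table is easier to read as speedups rather than absolute times: the ratio
of per-observation averages can be computed directly on the DataFrame. The
snippet below uses illustrative values of the same shape and magnitude as the
table above, not the measured ones:

```python
from pandas import DataFrame

# Illustrative per-observation averages shaped like the table above
# (not the measured values).
df = DataFrame(
    {
        "size": [1, 10, 100],
        "skl": [5.1e-03, 5.0e-04, 6.1e-05],
        "ort": [3.6e-05, 8.0e-06, 4.0e-06],
        "pyrt": [7.6e-03, 1.9e-03, 1.1e-03],
    }
).set_index("size")

# How many times faster onnxruntime is than scikit-learn per batch size.
df["speedup_ort"] = df["skl"] / df["ort"]
print(df["speedup_ort"])
```

The ratio shrinks as the batch grows, which matches the convergence discussed
below the second graph.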
.. GENERATED FROM PYTHON SOURCE LINES 131-132

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 132-137

.. code-block:: Python

    df.set_index("size")[["skl", "ort", "pyrt"]].plot(
        title="Average prediction time per runtime", logx=True, logy=True
    )

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :alt: Average prediction time per runtime
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 138-144

:epkg:`ONNX` runtimes are much faster than :epkg:`scikit-learn` at predicting
a single observation. :epkg:`scikit-learn` is optimized for training and for
batch prediction. That explains why :epkg:`scikit-learn` and the ONNX runtimes
seem to converge for big batches: they use similar implementations,
parallelization, and languages (:epkg:`C++`, :epkg:`openmp`).

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 36.073 seconds)

.. _sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_bbegin_measure_time.ipynb <plot_bbegin_measure_time.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_bbegin_measure_time.py <plot_bbegin_measure_time.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_bbegin_measure_time.zip <plot_bbegin_measure_time.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_