.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_backend.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_plot_backend.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_backend.py:


.. _l-example-backend-api:

ONNX Runtime Backend for ONNX
=============================

.. index:: backend

*ONNX Runtime* extends the
`onnx backend API <https://github.com/onnx/onnx/blob/main/docs/ImplementingAnOnnxBackend.md>`_
to run predictions with this runtime.
Let's use the API to compute the prediction
of a simple logistic regression model.

.. GENERATED FROM PYTHON SOURCE LINES 20-32

.. code-block:: Python

    import skl2onnx
    import onnxruntime
    import onnx
    import sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    import numpy
    from onnxruntime import get_device
    import numpy as np
    import onnxruntime.backend as backend

.. GENERATED FROM PYTHON SOURCE LINES 33-34

Let's create an ONNX graph first.

.. GENERATED FROM PYTHON SOURCE LINES 34-43

.. code-block:: Python

    data = load_iris()
    X, Y = data.data, data.target
    logreg = LogisticRegression(C=1e5).fit(X, Y)
    model = skl2onnx.to_onnx(logreg, X.astype(np.float32))
    name = "logreg_iris.onnx"

    with open(name, "wb") as f:
        f.write(model.SerializeToString())

.. GENERATED FROM PYTHON SOURCE LINES 44-45

Let's use the ONNX backend API to test it.

.. GENERATED FROM PYTHON SOURCE LINES 45-56

.. code-block:: Python

    model = onnx.load(name)
    rep = backend.prepare(model)
    x = np.array(
        [[-1.0, -2.0, 5.0, 6.0], [-1.0, -2.0, -3.0, -4.0], [-1.0, -2.0, 7.0, 8.0]],
        dtype=np.float32,
    )
    label, proba = rep.run(x)
    print("label={}".format(label))
    print("probabilities={}".format(proba))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    label=[2 0 2]
    probabilities=[{0: 0.0, 1: 0.0, 2: 1.0}, {0: 1.0, 1: 1.9515885113950192e-38, 2: 0.0}, {0: 0.0, 1: 0.0, 2: 1.0}]

.. GENERATED FROM PYTHON SOURCE LINES 57-59

The device depends on how the package was compiled, with GPU support or CPU only.

.. GENERATED FROM PYTHON SOURCE LINES 59-61

.. code-block:: Python

    print(get_device())

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    GPU

.. GENERATED FROM PYTHON SOURCE LINES 62-64

The backend can also load the model directly from the file,
without going through *onnx*.

.. GENERATED FROM PYTHON SOURCE LINES 64-74

.. code-block:: Python

    rep = backend.prepare(name)
    x = np.array(
        [[-1.0, -2.0, -3.0, -4.0], [-1.0, -2.0, -3.0, -4.0], [-1.0, -2.0, -3.0, -4.0]],
        dtype=np.float32,
    )
    label, proba = rep.run(x)
    print("label={}".format(label))
    print("probabilities={}".format(proba))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    label=[0 0 0]
    probabilities=[{0: 1.0, 1: 1.9515885113950192e-38, 2: 0.0}, {0: 1.0, 1: 1.9515885113950192e-38, 2: 0.0}, {0: 1.0, 1: 1.9515885113950192e-38, 2: 0.0}]

.. GENERATED FROM PYTHON SOURCE LINES 75-78

The backend API is also implemented by other frameworks, which makes it
easier to switch between runtimes while keeping the same code;
a short sketch of such runtime-agnostic code follows the version listing below.

.. GENERATED FROM PYTHON SOURCE LINES 80-81

**Versions used for this example**

.. GENERATED FROM PYTHON SOURCE LINES 81-87

.. code-block:: Python

    print("numpy:", numpy.__version__)
    print("scikit-learn:", sklearn.__version__)
    print("onnx: ", onnx.__version__)
    print("onnxruntime: ", onnxruntime.__version__)
    print("skl2onnx: ", skl2onnx.__version__)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    numpy: 1.26.4
    scikit-learn: 1.6.dev0
    onnx: 1.17.0
    onnxruntime: 1.18.0+cu118
    skl2onnx: 1.17.0
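Because the entry points are the same for every backend (``prepare``, ``run``,
``supports_device``), code written against this API does not need to know which
runtime executes it. The sketch below illustrates that idea; it is not part of
the generated script. It assumes the ``logreg_iris.onnx`` file created above is
still on disk and that a CPU device is available, and it checks device support
before preparing the model.

.. code-block:: Python

    import numpy as np
    import onnxruntime.backend as backend

    # "CPU" is an assumption for this sketch; a GPU build would also accept "CUDA".
    device = "CPU"
    if backend.supports_device(device):
        # Reuse the model file written earlier in this example.
        rep = backend.prepare("logreg_iris.onnx", device)
        x = np.array([[-1.0, -2.0, 5.0, 6.0]], dtype=np.float32)
        label, proba = rep.run(x)
        print("label={}".format(label))
        print("probabilities={}".format(proba))

Switching to another framework that implements the same backend API would, in
principle, only require changing the ``import onnxruntime.backend`` line.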
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 17.934 seconds)


.. _sphx_glr_download_auto_examples_plot_backend.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_backend.ipynb <plot_backend.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_backend.py <plot_backend.py>`

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_