.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_metadata.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_plot_metadata.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_metadata.py:


Metadata
========

.. index:: metadata

The ONNX format contains metadata related to how the model was produced.
This metadata is useful when the model is deployed to production, to keep
track of which instance was used at a specific time. Let's see how to
read it from a simple logistic regression model trained with
*scikit-learn*.

.. GENERATED FROM PYTHON SOURCE LINES 18-39

.. code-block:: default


    import skl2onnx
    import onnxruntime
    import sklearn
    import numpy
    from onnxruntime import InferenceSession
    import onnx
    from onnxruntime.datasets import get_example

    example = get_example("logreg_iris.onnx")

    model = onnx.load(example)
    print("doc_string={}".format(model.doc_string))
    print("domain={}".format(model.domain))
    print("ir_version={}".format(model.ir_version))
    print("metadata_props={}".format(model.metadata_props))
    print("model_version={}".format(model.model_version))
    print("producer_name={}".format(model.producer_name))
    print("producer_version={}".format(model.producer_version))

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    doc_string=
    domain=onnxml
    ir_version=3
    metadata_props=[]
    model_version=0
    producer_name=OnnxMLTools
    producer_version=1.2.0.0116


.. GENERATED FROM PYTHON SOURCE LINES 40-41

With *ONNX Runtime*:

.. GENERATED FROM PYTHON SOURCE LINES 41-52
.. code-block:: default


    sess = InferenceSession(example)
    meta = sess.get_modelmeta()
    print("custom_metadata_map={}".format(meta.custom_metadata_map))
    print("description={}".format(meta.description))
    print("domain={}".format(meta.domain))
    print("graph_name={}".format(meta.graph_name))
    print("producer_name={}".format(meta.producer_name))
    print("version={}".format(meta.version))

.. rst-class:: sphx-glr-script-out

.. code-block:: pytb

    Traceback (most recent call last):
      File "/home/xadupre/github/sklearn-onnx/docs/examples/plot_metadata.py", line 42, in <module>
        sess = InferenceSession(example)
      File "/home/xadupre/github/onnxruntime/build/linux_cuda/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
        raise e
      File "/home/xadupre/github/onnxruntime/build/linux_cuda/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/home/xadupre/github/onnxruntime/build/linux_cuda/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
        raise ValueError(
    ValueError: This ORT build has ['CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['CUDAExecutionProvider', 'CPUExecutionProvider'], ...)


.. GENERATED FROM PYTHON SOURCE LINES 53-54

**Versions used for this example**

.. GENERATED FROM PYTHON SOURCE LINES 54-60

.. code-block:: default


    print("numpy:", numpy.__version__)
    print("scikit-learn:", sklearn.__version__)
    print("onnx: ", onnx.__version__)
    print("onnxruntime: ", onnxruntime.__version__)
    print("skl2onnx: ", skl2onnx.__version__)


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 0.008 seconds)


.. _sphx_glr_download_auto_examples_plot_metadata.py:

.. only:: html
  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_metadata.py <plot_metadata.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_metadata.ipynb <plot_metadata.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_