onnx.reference

DefaultNone

class onnx.reference.op_run.DefaultNone[source]

Default value used for a parameter when it is not set but the operator defines a default behavior for it.
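
Since DefaultNone is a class used as a sentinel, code can tell it apart from an explicit None with an identity check. A minimal sketch (the resolve helper is hypothetical and not part of the API):

from onnx.reference.op_run import DefaultNone

# Hypothetical helper: DefaultNone is a sentinel distinct from None,
# so "attribute not set" can be told apart from "attribute set to None".
def resolve(value, fallback):
    return fallback if value is DefaultNone else value

print(resolve(DefaultNone, 0.5))  # 0.5
print(resolve(None, 0.5))         # None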

ReferenceEvaluator

class onnx.reference.ReferenceEvaluator(proto: Any, opsets: dict[str, int] | None = None, functions: list[ReferenceEvaluator | FunctionProto] | None = None, verbose: int = 0, new_ops: list[OpRun] | None = None, optimized: bool = True)[source]

Computes the outputs of an ONNX proto (ModelProto, FunctionProto, GraphProto, NodeProto).

This is a pure python implementation of ONNX specifications. Mismatches may remain between the official specifications and the implementation here. In the case of such a mismatch, the official spec overrides this implementation.

Parameters:
  • proto – onnx.ModelProto, onnx.GraphProto, onnx.FunctionProto, onnx.NodeProto, filename or bytes

  • verbose – display intermediate results on the standard output during the execution

  • opsets – if proto is an instance of GraphProto, opsets must be defined by a dictionary mapping each domain to its opset version (see the sketch after this list)

  • functions – known onnx functions

  • new_ops – this runtime can be used to test the implementations of new operators; new_ops is a list of classes derived from OpRun, and every class must define the static attribute op_domain. There may be multiple implementations of the same operator; the first one in the list is used.

  • optimized – some operators have two implementations: a naive one that follows the mathematical definition of the operator and a more efficient one. This is the case for operator Conv: the naive version is ten times slower than the optimized one, which uses the decomposition Conv = im2col + Gemm. If True, all optimized kernels are added to new_ops and are used instead of the inner implementation, unless list new_ops already contains one.
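
As mentioned above for opsets, here is a minimal sketch of evaluating a bare GraphProto, which carries no opset information of its own (the graph and the opset version 18 are illustrative):

import numpy as np
from onnx import TensorProto
from onnx.helper import make_graph, make_node, make_tensor_value_info
from onnx.reference import ReferenceEvaluator

# A one-node graph computing Y = X + X.
X = make_tensor_value_info("X", TensorProto.FLOAT, [None])
Y = make_tensor_value_info("Y", TensorProto.FLOAT, [None])
graph = make_graph([make_node("Add", ["X", "X"], ["Y"])], "g", [X], [Y])

# A GraphProto does not store opset imports, so they must be supplied here.
sess = ReferenceEvaluator(graph, opsets={"": 18})
print(sess.run(None, {"X": np.array([1.0, 2.0], dtype=np.float32)}))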

The class maps every node to its associated implementation. When a subgraph or a function is encountered, it uses this class again to execute it. The next example shows how to run ReferenceEvaluator with an onnx model stored in file model.onnx.

import numpy as np
from onnx.reference import ReferenceEvaluator

X = np.array(...)
sess = ReferenceEvaluator("model.onnx")
results = sess.run(None, {"X": X})
print(results[0])  # display the first result

Parameter verbose may be used to show intermediate results.

import numpy as np
from onnx.reference import ReferenceEvaluator

X = np.array(...)
sess = ReferenceEvaluator("model.onnx", verbose=1)
results = sess.run(None, {"X": X})
print(results[0])  # display the first result

The class can use any implementation available in folder ops. Adding an implementation requires two changes. The first one is the implementation itself. Any existing node can be used as a template. The second is one line in file _op_list.py to import the file and let the reference evaluator know it exists.

This class can also be used to test an implementation of a custom operator. Let’s assume this new operator is InvAlpha from domain custom. The implementation must take place in a class inheriting from OpRun. It must also define attribute op_domain. Here is an example which computes \(\frac{1}{X + \alpha}\).

from onnx.reference.op_run import OpRun

class InvAlpha(OpRun):

    op_domain = "custom"

    def _run(self, x, alpha=None):  # type: ignore
        # None must be the default value, it is automatically
        # replaced by class OpRun with either the default value
        # specified in the NodeProto or an attribute value defined
        # in a `FunctionProto`.
        return (1 / (x + alpha),)

alpha is an attribute. It can be defined by the onnx node or by the function using this node. It is safe to assume that attributes are known at the same time as the inputs. Class ReferenceEvaluator must know about this new implementation, which is done by specifying argument new_ops.

sess = ReferenceEvaluator(onnx_model, new_ops=[InvAlpha])
got = sess.run(None, {"X": x})[0]
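
For completeness, here is a hedged sketch of how such an onnx_model could be assembled with onnx.helper before being evaluated; it reuses the InvAlpha class defined above, and the tensor names, opset versions and alpha value are only illustrative:

import numpy as np
from onnx import TensorProto
from onnx.helper import (
    make_graph, make_model, make_node, make_opsetid, make_tensor_value_info)

# A one-node model Y = InvAlpha(X) in the custom domain, with alpha = 0.5.
X_info = make_tensor_value_info("X", TensorProto.FLOAT, [None])
Y_info = make_tensor_value_info("Y", TensorProto.FLOAT, [None])
node = make_node("InvAlpha", ["X"], ["Y"], domain="custom", alpha=0.5)
onnx_model = make_model(
    make_graph([node], "g", [X_info], [Y_info]),
    opset_imports=[make_opsetid("", 18), make_opsetid("custom", 1)])

sess = ReferenceEvaluator(onnx_model, new_ops=[InvAlpha])
x = np.array([1.0, 3.0], dtype=np.float32)
got = sess.run(None, {"X": x})[0]  # equals 1 / (x + 0.5)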

A specific node can also be evaluated on its own.

import numpy as np
from onnx.reference.ops._op_list import Celu

x = np.array([[0, 1], [-1, 2]], dtype=np.float32)
y = Celu.eval(x, alpha=0.5)
print(y)
[[ 0.          1.        ]
 [-0.43233237  2.        ]]

This can also be expressed as:

import numpy as np
from onnx.reference.ops import load_op

Celu = load_op("", "Celu")  # domain is ""
x = np.array([[0, 1], [-1, 2]], dtype=np.float32)
y = Celu.eval(x, alpha=0.5)
print(y)
[[ 0.          1.        ]
 [-0.43233237  2.        ]]

It is possible to overwrite an existing operator. The class name must be the same. The domain does not have to be specified for the default domain. However, by default, class OpRun loads the most recent schema for this operator. A specific one can be selected by adding the static attribute op_schema of type OpSchema.

from onnx.reference.ops.op_conv import Conv as _Conv

class Conv(_Conv):

    op_schema = instance_of_OpSchema()

    def _run(self, ...):
        ...
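
A hedged, more concrete sketch of the same idea, pinning the override to a specific schema with onnx.defs.get_schema (the opset version 11 and the pass-through _run are illustrative only):

from onnx.defs import get_schema
from onnx.reference.ops.op_conv import Conv as _Conv

class Conv(_Conv):
    # Pin the override to the Conv schema as of opset 11.
    op_schema = get_schema("Conv", 11)

    def _run(self, X, W, B=None, **kwargs):
        # Delegate to the parent implementation; a real override
        # would change the computation here.
        return _Conv._run(self, X, W, B, **kwargs)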

An operator may behave differently in a later opset. In that case, a new implementation needs to be registered, e.g. Pad_11 and Pad_18. Pad_11 is the implementation chosen for opsets in [11, 17], Pad_18 is selected for any greater opset. Both classes must be imported into file _op_list.py to register their existence with the runtime.

An operator may have a reference implementation such as `CastLike`
and still be defined as a function. By default, the reference implementation
is used. This behavior can be changed by adding a class to the list
of overwritten operators. It must inherit from OpRunExpand.

from onnx.reference.op_run import OpRunExpand

class CastLike(OpRunExpand):
    op_domain = ""

ref = ReferenceEvaluator(model, new_ops=[CastLike])
# ...

This mechanism is used in unit tests to check the function implementation a schema may define.

property input_names

Returns the input names.

property opsets

Returns the opsets.

property output_names

Returns the output names.

run(output_names, feed_inputs: dict[str, Any], attributes: dict[str, Any] | None = None, intermediate: bool = False) → dict[str, Any] | list[Any][source]

Executes the onnx model.

Parameters:
  • output_names – requested outputs by name, None for all

  • feed_inputs – dictionary { input name: input value }

  • attributes – attributes value if the instance runs a FunctionProto

  • intermediate – if True, the function returns all results, final and intermediate ones, in a single dictionary; if False, only the final results are returned in a list (see the sketch below)

Returns:

list of requested outputs if intermediate is False, named results in a dictionary otherwise
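
A minimal sketch of the difference, reusing the session and input from the first example (the names of the intermediate results depend on the model):

outputs = sess.run(None, {"X": X})                         # list of final results
all_results = sess.run(None, {"X": X}, intermediate=True)  # dict of every result
print(list(all_results))  # names of all computed results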

OpFunction

class onnx.reference.op_run.OpFunction(onnx_node: NodeProto, run_params: dict[str, Any] | None, impl: Any | None = None, attributes: dict[str, Any] | None = None)[source]

Runs a custom function.

classmethod create(n_inputs: int | None = None, n_outputs: int | None = None, verbose: int = 0, **kwargs: Any) → Any

Instantiates this class based on the given information.

Parameters:
  • n_inputs – number of inputs (default is defined by the operator schema)

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto

property domain: str

Returns node attribute domain.

classmethod eval(*args: list[Any], n_outputs: int | None = None, verbose: int = 0, **kwargs: Any) → Any

Evaluates this operator.

Parameters:
  • *args – inputs

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto

static implicit_inputs(graph: GraphProto) → list[str]

Returns all variables not registered as inputs and not produced by a node inside the graph. These inputs are part of the context existing in the graph calling this one.
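
A minimal sketch: a branch graph reads X from its enclosing graph without declaring it as an input, so X is reported as an implicit input (the method is inherited from OpRun):

from onnx import TensorProto
from onnx.helper import make_graph, make_node, make_tensor_value_info
from onnx.reference.op_run import OpRun

Y = make_tensor_value_info("Y", TensorProto.FLOAT, [None])
branch = make_graph([make_node("Neg", ["X"], ["Y"])], "branch", [], [Y])
print(OpRun.implicit_inputs(branch))  # expected to print ['X']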

property input: Iterable[str]

Returns node attribute input.

classmethod make_node(n_inputs: int | None = None, n_outputs: int | None = None, **kwargs: Any) → NodeProto

Creates an ONNX node for this class based on the given information.

Parameters:
  • n_inputs – number of inputs (default is defined by the operator schema)

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto

Method eval creates an onnx node with method make_node and then evaluates it.

import numpy as np
from onnx.reference.ops._op_list import Celu

onnx_node = Celu.make_node(alpha=0.5)
print(onnx_node)
input: "x0"
output: "y0"
op_type: "Celu"
attribute {
  name: "alpha"
  f: 0.5
  type: FLOAT
}

need_context() → bool

Tells the runtime if this node needs the context (all the results produced so far), as it may silently access one of them (operators Scan, If, Loop). The default answer is False.

property output: Iterable[str]

Returns node attribute output.

run(*args, linked_attributes=None, context=None)

Calls method _run, catches exceptions, displays a longer error message.

Parameters:
  • *args – inputs

  • linked_attributes – used if this node has an attribute linked to an attribute of the function it belongs to

  • context – if this node is part of a subgraph, context is a dictionary with the values this node may use

Returns:

tuple of results

OpRun

class onnx.reference.op_run.OpRun(onnx_node: NodeProto, run_params: dict[str, Any], schema: Any | None = None)[source]

Ancestor to all operators in this subfolder.

Parameters:
  • onnx_node – onnx node

  • run_params – additional parameters such as verbose, opsets (there can be more than one if the operator has a subgraph), and log, a logging function

  • schema – operator schema

classmethod create(n_inputs: int | None = None, n_outputs: int | None = None, verbose: int = 0, **kwargs: Any) → Any[source]

Instantiates this class based on the given information.

Parameters:
  • n_inputs – number of inputs (default is defined by the operator schema)

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto
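
A minimal usage sketch of create, assuming, as eval does internally, that it returns an instance whose run method executes the operator:

import numpy as np
from onnx.reference.ops._op_list import Celu

op = Celu.create(alpha=0.5)  # instance bound to a freshly created node
x = np.array([[0, 1], [-1, 2]], dtype=np.float32)
print(op.run(x)[0])  # run returns a tuple of outputs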

property domain: str

Returns node attribute domain.

classmethod eval(*args: list[Any], n_outputs: int | None = None, verbose: int = 0, **kwargs: Any) → Any[source]

Evaluates this operator.

Parameters:
  • *args – inputs

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto

static implicit_inputs(graph: GraphProto) → list[str][source]

Returns all variables not registered as inputs and not produced by a node inside the graph. These inputs are part of the context existing in the graph calling this one.

property input: Iterable[str]

Returns node attribute input.

classmethod make_node(n_inputs: int | None = None, n_outputs: int | None = None, **kwargs: Any) → NodeProto[source]

Creates an ONNX node for this class based on the given information.

Parameters:
  • n_inputs – number of inputs (default is defined by the operator schema)

  • n_outputs – number of outputs (default is defined by the operator schema)

  • verbose – verbosity

  • **kwargs – node attributes

Returns:

NodeProto

Method eval creates an onnx node with method make_node and then evaluates it.

import numpy as np
from onnx.reference.ops._op_list import Celu

onnx_node = Celu.make_node(alpha=0.5)
print(onnx_node)
input: "x0"
output: "y0"
op_type: "Celu"
attribute {
  name: "alpha"
  f: 0.5
  type: FLOAT
}

need_context() → bool[source]

Tells the runtime if this node needs the context (all the results produced so far), as it may silently access one of them (operators Scan, If, Loop). The default answer is False.

property output: Iterable[str]

Returns node attribute output.

run(*args, linked_attributes=None, context=None)[source]

Calls method _run, catches exceptions, displays a longer error message.

Parameters:
  • *args – inputs

  • linked_attributes – used if this node has an attribute linked to an attribute of the function it belongs to

  • context – if this node is part of a subgraph, context is a dictionary with the values this node may use

Returns:

tuple of results

RuntimeTypeError

class onnx.reference.op_run.RuntimeTypeError[source]

Raised when the type of a variable is unexpected.

SparseTensor

class onnx.reference.op_run.SparseTensor(values: ndarray, indices: ndarray, shape: tuple[int])[source]

Simple representation of a sparse tensor. It is based on numpy but does not require scipy.
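
A minimal sketch of building one, assuming the constructor arguments are stored as attributes of the same names; the indices layout (one row of coordinates per value) is also an assumption made for this illustration:

import numpy as np
from onnx.reference.op_run import SparseTensor

# Two non-zero values of a 2x3 tensor.
sp = SparseTensor(
    values=np.array([1.0, 5.0], dtype=np.float32),
    indices=np.array([[0, 1], [1, 2]], dtype=np.int64),
    shape=(2, 3))
print(sp.shape, sp.values, sp.indices)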