Supported scikit-learn Models¶
skl2onnx can currently convert the following list of models. They were tested using onnxruntime.
All the following classes overload methods such as OnnxSklearnPipeline does. They wrap existing scikit-learn classes by dynamically creating a new one which inherits from OnnxOperatorMixin, which implements the to_onnx methods.
Covered Converters¶
| Name | Package | Supported |
|---|---|---|
| ARDRegression | linear_model | Yes |
| AdaBoostClassifier | ensemble | Yes |
| AdaBoostRegressor | ensemble | Yes |
| AdditiveChi2Sampler | kernel_approximation | |
| AffinityPropagation | cluster | |
| AgglomerativeClustering | cluster | |
| BaggingClassifier | ensemble | Yes |
| BaggingRegressor | ensemble | Yes |
| BaseDecisionTree | tree | |
| BaseEnsemble | ensemble | |
| BayesianGaussianMixture | mixture | Yes |
| BayesianRidge | linear_model | Yes |
| BernoulliNB | naive_bayes | Yes |
| BernoulliRBM | neural_network | |
| Binarizer | preprocessing | Yes |
| Birch | cluster | |
| CCA | cross_decomposition | |
| CalibratedClassifierCV | calibration | Yes |
| CategoricalNB | naive_bayes | Yes |
| ClassifierChain | multioutput | |
| ComplementNB | naive_bayes | Yes |
| DBSCAN | cluster | |
| DecisionTreeClassifier | tree | Yes |
| DecisionTreeRegressor | tree | Yes |
| DictVectorizer | feature_extraction | Yes |
| DictionaryLearning | decomposition | |
| ElasticNet | linear_model | Yes |
| ElasticNetCV | linear_model | Yes |
| EllipticEnvelope | covariance | |
| EmpiricalCovariance | covariance | |
| ExtraTreeClassifier | tree | Yes |
| ExtraTreeRegressor | tree | Yes |
| ExtraTreesClassifier | ensemble | Yes |
| ExtraTreesRegressor | ensemble | Yes |
| FactorAnalysis | decomposition | |
| FastICA | decomposition | |
| FeatureAgglomeration | cluster | |
| FeatureHasher | feature_extraction | |
| FunctionTransformer | preprocessing | Yes |
| GammaRegressor | linear_model | |
| GaussianMixture | mixture | Yes |
| GaussianNB | naive_bayes | Yes |
| GaussianProcessClassifier | gaussian_process | |
| GaussianProcessRegressor | gaussian_process | Yes |
| GaussianRandomProjection | random_projection | Yes |
| GenericUnivariateSelect | feature_selection | Yes |
| GradientBoostingClassifier | ensemble | Yes |
| GradientBoostingRegressor | ensemble | Yes |
| GraphicalLasso | covariance | |
| GraphicalLassoCV | covariance | |
| GridSearchCV | model_selection | Yes |
| HuberRegressor | linear_model | Yes |
| IncrementalPCA | decomposition | Yes |
| IsolationForest | ensemble | Yes |
| IsotonicRegression | isotonic | |
| KBinsDiscretizer | preprocessing | Yes |
| KMeans | cluster | Yes |
| KNNImputer | impute | Yes |
| KNeighborsClassifier | neighbors | Yes |
| KNeighborsRegressor | neighbors | Yes |
| KNeighborsTransformer | neighbors | Yes |
| KernelCenterer | preprocessing | |
| KernelDensity | neighbors | |
| KernelPCA | decomposition | |
| KernelRidge | kernel_ridge | |
| LabelBinarizer | preprocessing | Yes |
| LabelEncoder | preprocessing | Yes |
| LabelPropagation | semi_supervised | |
| LabelSpreading | semi_supervised | |
| Lars | linear_model | Yes |
| LarsCV | linear_model | Yes |
| Lasso | linear_model | Yes |
| LassoCV | linear_model | Yes |
| LassoLars | linear_model | Yes |
| LassoLarsCV | linear_model | Yes |
| LassoLarsIC | linear_model | Yes |
| LatentDirichletAllocation | decomposition | |
| LedoitWolf | covariance | |
| LinearDiscriminantAnalysis | discriminant_analysis | Yes |
| LinearRegression | linear_model | Yes |
| LinearSVC | svm | Yes |
| LinearSVR | svm | Yes |
| LocalOutlierFactor | neighbors | |
| LogisticRegression | linear_model | Yes |
| LogisticRegressionCV | linear_model | Yes |
| MLPClassifier | neural_network | Yes |
| MLPRegressor | neural_network | Yes |
| MaxAbsScaler | preprocessing | Yes |
| MeanShift | cluster | |
| MinCovDet | covariance | |
| MinMaxScaler | preprocessing | Yes |
| MiniBatchDictionaryLearning | decomposition | |
| MiniBatchKMeans | cluster | Yes |
| MiniBatchSparsePCA | decomposition | |
| MissingIndicator | impute | |
| MultiLabelBinarizer | preprocessing | |
| MultiOutputClassifier | multioutput | |
| MultiOutputRegressor | multioutput | |
| MultiTaskElasticNet | linear_model | Yes |
| MultiTaskElasticNetCV | linear_model | Yes |
| MultiTaskLasso | linear_model | Yes |
| MultiTaskLassoCV | linear_model | Yes |
| MultinomialNB | naive_bayes | Yes |
| NMF | decomposition | |
| NearestCentroid | neighbors | |
| NearestNeighbors | neighbors | Yes |
| NeighborhoodComponentsAnalysis | neighbors | Yes |
| Normalizer | preprocessing | Yes |
| NuSVC | svm | Yes |
| NuSVR | svm | Yes |
| Nystroem | kernel_approximation | |
| OAS | covariance | |
| OPTICS | cluster | |
| OneClassSVM | svm | Yes |
| OneHotEncoder | preprocessing | Yes |
| OneVsOneClassifier | multiclass | |
| OneVsRestClassifier | multiclass | Yes |
| OrdinalEncoder | preprocessing | Yes |
| OrthogonalMatchingPursuit | linear_model | Yes |
| OrthogonalMatchingPursuitCV | linear_model | Yes |
| OutputCodeClassifier | multiclass | |
| PCA | decomposition | Yes |
| PLSCanonical | cross_decomposition | |
| PLSRegression | cross_decomposition | Yes |
| PLSSVD | cross_decomposition | |
| PassiveAggressiveClassifier | linear_model | Yes |
| PassiveAggressiveRegressor | linear_model | Yes |
| Perceptron | linear_model | Yes |
| PoissonRegressor | linear_model | |
| PolynomialFeatures | preprocessing | Yes |
| PowerTransformer | preprocessing | Yes |
| QuadraticDiscriminantAnalysis | discriminant_analysis | |
| QuantileTransformer | preprocessing | |
| RANSACRegressor | linear_model | Yes |
| RBFSampler | kernel_approximation | |
| RFE | feature_selection | Yes |
| RFECV | feature_selection | Yes |
| RadiusNeighborsClassifier | neighbors | Yes |
| RadiusNeighborsRegressor | neighbors | Yes |
| RadiusNeighborsTransformer | neighbors | |
| RandomForestClassifier | ensemble | Yes |
| RandomForestRegressor | ensemble | Yes |
| RandomTreesEmbedding | ensemble | |
| RandomizedSearchCV | model_selection | |
| RegressorChain | multioutput | |
| Ridge | linear_model | Yes |
| RidgeCV | linear_model | Yes |
| RidgeClassifier | linear_model | Yes |
| RidgeClassifierCV | linear_model | Yes |
| RobustScaler | preprocessing | Yes |
| SGDClassifier | linear_model | Yes |
| SGDRegressor | linear_model | Yes |
| SVC | svm | Yes |
| SVR | svm | Yes |
| SelectFdr | feature_selection | Yes |
| SelectFpr | feature_selection | Yes |
| SelectFromModel | feature_selection | Yes |
| SelectFwe | feature_selection | Yes |
| SelectKBest | feature_selection | Yes |
| SelectPercentile | feature_selection | Yes |
| ShrunkCovariance | covariance | |
| SimpleImputer | impute | Yes |
| SkewedChi2Sampler | kernel_approximation | |
| SparseCoder | decomposition | |
| SparsePCA | decomposition | |
| SparseRandomProjection | random_projection | |
| SpectralBiclustering | cluster | |
| SpectralClustering | cluster | |
| SpectralCoclustering | cluster | |
| StackingClassifier | ensemble | Yes |
| StackingRegressor | ensemble | Yes |
| StandardScaler | preprocessing | Yes |
| TheilSenRegressor | linear_model | Yes |
| TransformedTargetRegressor | compose | |
| TruncatedSVD | decomposition | Yes |
| TweedieRegressor | linear_model | |
| VarianceThreshold | feature_selection | Yes |
| VotingClassifier | ensemble | Yes |
| VotingRegressor | ensemble | Yes |
These results were obtained with scikit-learn version 0.23.2; 113 of 179 models are covered.
Pipeline¶

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPipeline(steps, memory=None, verbose=False, op_version=None)[source]¶
Combines Pipeline and OnnxSubGraphOperatorMixin.

onnx_converter()¶ Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.

onnx_parser(scope=None, inputs=None)¶ Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.

onnx_shape_calculator()¶ Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.

to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶ Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options).
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed.
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted.
final_types – a python list. Works the same way as initial_types but is not mandatory; it is used to overwrite the type (if type is not None) and the name of every output.

to_onnx_operator(inputs=None, outputs=None)¶ This function must be overloaded.


class skl2onnx.algebra.sklearn_ops.OnnxSklearnColumnTransformer(op_version=None)[source]¶
Combines ColumnTransformer and OnnxSubGraphOperatorMixin.

onnx_converter()¶ Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.

onnx_parser(scope=None, inputs=None)¶ Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.

onnx_shape_calculator()¶ Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.

to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶ Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options).
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed.
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted.
final_types – a python list. Works the same way as initial_types but is not mandatory; it is used to overwrite the type (if type is not None) and the name of every output.

to_onnx_operator(inputs=None, outputs=None)¶ This function must be overloaded.


class skl2onnx.algebra.sklearn_ops.OnnxSklearnFeatureUnion(op_version=None)[source]¶
Combines FeatureUnion and OnnxSubGraphOperatorMixin.

onnx_converter()¶ Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.

onnx_parser(scope=None, inputs=None)¶ Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.

onnx_shape_calculator()¶ Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.

to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶ Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options).
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed.
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted.
final_types – a python list. Works the same way as initial_types but is not mandatory; it is used to overwrite the type (if type is not None) and the name of every output.

to_onnx_operator(inputs=None, outputs=None)¶ This function must be overloaded.

Available ONNX operators¶
skl2onnx maps every ONNX operator to a class that is easy to insert into a graph. These operators are added dynamically, and the list depends on the installed ONNX package. The documentation for these operators can be found on GitHub: ONNX Operators.md and ONNX-ML Operators. Combined with onnxruntime, this mapping makes it easy to check the output of an ONNX operator on any data, as shown in the example Play with ONNX operators.
OnnxAbs¶

class skl2onnx.algebra.onnx_ops.OnnxAbs(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 6.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxAbs_1¶

class skl2onnx.algebra.onnx_ops.OnnxAbs_1(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 1.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Attributes
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAbs_6¶

class skl2onnx.algebra.onnx_ops.OnnxAbs_6(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 6.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxAcos¶

class skl2onnx.algebra.onnx_ops.OnnxAcos(*args, **kwargs)¶
Version
Onnx name: Acos
This version of the operator has been available since version 7.
Summary
Calculates the arccosine (inverse of cosine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arccosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcos_7¶

class skl2onnx.algebra.onnx_ops.OnnxAcos_7(*args, **kwargs)¶
Version
Onnx name: Acos
This version of the operator has been available since version 7.
Summary
Calculates the arccosine (inverse of cosine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arccosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcosh¶

class skl2onnx.algebra.onnx_ops.OnnxAcosh(*args, **kwargs)¶
Version
Onnx name: Acosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arccosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcosh_9¶

class skl2onnx.algebra.onnx_ops.OnnxAcosh_9(*args, **kwargs)¶
Version
Onnx name: Acosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arccosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdagrad¶

class skl2onnx.algebra.onnx_ops.OnnxAdagrad(*args, **kwargs)¶
Version
Onnx name: Adagrad
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:
The initial learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A learning-rate decay factor “decay_factor”.
A small constant “epsilon” to avoid dividing-by-zero.
At each ADAGRAD iteration, the optimized tensors are moved along a direction computed from their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, the variables in this operator’s input list are, in order, “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”) and the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Compute a scalar learning-rate factor. At the first update of X, T is generally
// 0 (0-based update index) or 1 (1-based update index).
r = R / (1 + T * decay_factor);
// Add the gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G;
// Compute the new accumulated squared gradient.
H_new = H + G_regularized * G_regularized;
// Compute the adaptive part of the per-coordinate learning rate. Note that Sqrt(…)
// computes the elementwise square-root.
H_adaptive = Sqrt(H_new) + epsilon
// Compute the new value of “X”.
X_new = X - r * G_regularized / H_adaptive;
If one assigns this operator to optimize multiple inputs, for example “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.
Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of Figure 1’s composite mirror descent update.
Attributes
decay_factor: The decay factor of the learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Defaults to 0 so that increasing update counts doesn’t reduce the learning rate. Default value is 0.0 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value is 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Defaults to 0, which means no regularization. Default value is 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: Updated values of optimized tensors, followed by the updated values of their accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
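The pseudo code above can be transcribed into a short NumPy function. This is an illustrative sketch of the update rule for a single tensor, not the operator's actual implementation.

```python
import numpy as np

def adagrad_step(R, T, X, G, H, norm_coefficient=0.0,
                 decay_factor=0.0, epsilon=1e-6):
    """One ADAGRAD update following the operator's pseudo code."""
    r = R / (1 + T * decay_factor)             # decayed scalar learning rate
    G_regularized = norm_coefficient * X + G   # add the L2 gradient term
    H_new = H + G_regularized ** 2             # accumulate squared gradient
    H_adaptive = np.sqrt(H_new) + epsilon      # per-coordinate scaling
    X_new = X - r * G_regularized / H_adaptive
    return X_new, H_new

X_new, H_new = adagrad_step(R=0.1, T=0, X=np.array([1.0]),
                            G=np.array([-1.0]), H=np.array([2.0]))
```

With these inputs, H_new accumulates to 3.0 and X moves against the gradient, scaled by the square-root of the accumulator.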
OnnxAdagrad_1¶

class skl2onnx.algebra.onnx_ops.OnnxAdagrad_1(*args, **kwargs)¶
Version
Onnx name: Adagrad
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:
The initial learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A learning-rate decay factor “decay_factor”.
A small constant “epsilon” to avoid dividing-by-zero.
At each ADAGRAD iteration, the optimized tensors are moved along a direction computed from their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, the variables in this operator’s input list are, in order, “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”) and the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Compute a scalar learning-rate factor. At the first update of X, T is generally
// 0 (0-based update index) or 1 (1-based update index).
r = R / (1 + T * decay_factor);
// Add the gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G;
// Compute the new accumulated squared gradient.
H_new = H + G_regularized * G_regularized;
// Compute the adaptive part of the per-coordinate learning rate. Note that Sqrt(…)
// computes the elementwise square-root.
H_adaptive = Sqrt(H_new) + epsilon
// Compute the new value of “X”.
X_new = X - r * G_regularized / H_adaptive;
If one assigns this operator to optimize multiple inputs, for example “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.
Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of Figure 1’s composite mirror descent update.
Attributes
decay_factor: The decay factor of the learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Defaults to 0 so that increasing update counts doesn’t reduce the learning rate. Default value is 0.0 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value is 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Defaults to 0, which means no regularization. Default value is 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: Updated values of optimized tensors, followed by the updated values of their accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdam¶

class skl2onnx.algebra.onnx_ops.OnnxAdam(*args, **kwargs)¶
Version
Onnx name: Adam
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. First of all, Adam requires some parameters:
The learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A small constant “epsilon” to avoid dividing-by-zero.
Two coefficients, “alpha” and “beta”.
At each Adam iteration, the optimized tensors are moved along a direction computed from their exponentially-averaged historical gradient and exponentially-averaged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of the required information is
the value of “X”,
“X”’s gradient (denoted by “G”),
“X”’s exponentially-averaged historical gradient (denoted by “V”), and
“X”’s exponentially-averaged historical squared gradient (denoted by “H”).
Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are
the new value of “X” (called “X_new”),
the new exponentially-averaged historical gradient (denoted by “V_new”), and
the new exponentially-averaged historical squared gradient (denoted by “H_new”).
Those outputs are computed following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Add the gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G
// Update the exponentially-averaged historical gradient.
V_new = alpha * V + (1 - alpha) * G_regularized
// Update the exponentially-averaged historical squared gradient.
H_new = beta * H + (1 - beta) * G_regularized * G_regularized
// Compute the elementwise square-root of H_new. V_new will be elementwise
// divided by H_sqrt for a better update direction.
H_sqrt = Sqrt(H_new) + epsilon
// Compute the learning-rate. Note that “alpha**T”/“beta**T” is alpha’s/beta’s T-th power.
R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R
// Compute the new value of “X”.
X_new = X - R_adjusted * V_new / H_sqrt
// Post-update regularization.
X_final = (1 - norm_coefficient_post) * X_new
If there are multiple inputs to be optimized, the pseudo code is applied independently to each of them.
Attributes
alpha: Coefficient of previously accumulated gradient in the running average. Defaults to 0.9. Default value is 0.8999999761581421 (FLOAT).
beta: Coefficient of previously accumulated squared gradient in the running average. Defaults to 0.999. Default value is 0.9990000128746033 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value is 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Defaults to 0, which means no regularization. Default value is 0.0 (FLOAT).
norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2 applied after the update. Defaults to 0, which means no regularization. Default value is 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
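The Adam pseudo code above can likewise be transcribed into NumPy for a single tensor. Again, this is an illustrative sketch of the update rule under the operator's default attributes, not its actual implementation.

```python
import numpy as np

def adam_step(R, T, X, G, V, H, alpha=0.9, beta=0.999, epsilon=1e-6,
              norm_coefficient=0.0, norm_coefficient_post=0.0):
    """One Adam update following the operator's pseudo code."""
    G_regularized = norm_coefficient * X + G            # L2 gradient term
    V_new = alpha * V + (1 - alpha) * G_regularized     # running gradient
    H_new = beta * H + (1 - beta) * G_regularized ** 2  # running sq. gradient
    H_sqrt = np.sqrt(H_new) + epsilon
    # Bias-corrected learning rate; falls back to R when T == 0.
    R_adjusted = R * np.sqrt(1 - beta ** T) / (1 - alpha ** T) if T > 0 else R
    X_new = X - R_adjusted * V_new / H_sqrt
    X_final = (1 - norm_coefficient_post) * X_new       # post-update reg.
    return X_final, V_new, H_new

X_final, V_new, H_new = adam_step(R=0.1, T=0, X=np.array([1.0]),
                                  G=np.array([-1.0]),
                                  V=np.array([0.0]), H=np.array([0.0]))
```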
OnnxAdam_1¶

class skl2onnx.algebra.onnx_ops.OnnxAdam_1(*args, **kwargs)¶
Version
Onnx name: Adam
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. First of all, Adam requires some parameters:
The learningrate “R”.
The update count “T”. That is, the number of training iterations conducted.
A L2norm regularization coefficient “norm_coefficient”.
A small constant “epsilon” to avoid dividingbyzero.
Two coefficients, “alpha” and “beta”.
At each Adam iteration, the optimized tensors are moved along a direction computed based on their exponentiallyaveraged historical gradient and exponentiallyaveraged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of required information is
the value of “X”,
“X“‘s gradient (denoted by “G”),
“X“‘s exponentiallyaveraged historical gradient (denoted by “V”), and
“X“‘s exponentiallyaveraged historical squared gradient (denoted by “H”).
Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are
the new value of “X” (called “X_new”),
the new exponentiallyaveraged historical gradient (denoted by “V_new”), and
the new exponentiallyaveraged historical squared gradient (denoted by “H_new”).
Those outputs are computed following the pseudo code below.
Let “+”, ““, “*”, and “/” are all elementwise arithmetic operations with numpystyle broadcasting support. The pseudo code to compute those outputs is:
// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G

// Update exponentially-averaged historical gradient.
V_new = alpha * V + (1 - alpha) * G_regularized

// Update exponentially-averaged historical squared gradient.
H_new = beta * H + (1 - beta) * G_regularized * G_regularized

// Compute the element-wise square-root of H_new. V_new will be element-wisely
// divided by H_sqrt for a better update direction.
H_sqrt = Sqrt(H_new) + epsilon

// Compute learning-rate. Note that "alpha**T"/"beta**T" is alpha's/beta's T-th power.
R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R

// Compute new value of "X".
X_new = X - R_adjusted * V_new / H_sqrt

// Post-update regularization.
X_final = (1 - norm_coefficient_post) * X_new
If there are multiple inputs to be optimized, the pseudo code will be applied independently to each of them.
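As a plain-Python sketch of the pseudocode above, applied to a single scalar parameter (a simplified model for illustration, not the runtime’s implementation):

```python
import math

def adam_step(r, t, x, g, v, h,
              alpha=0.9, beta=0.999, epsilon=1e-6,
              norm_coefficient=0.0, norm_coefficient_post=0.0):
    """One Adam update for a single scalar, following the pseudocode above."""
    # Add gradient of the L2 regularization term.
    g_reg = norm_coefficient * x + g
    # Update the exponentially-averaged gradient and squared gradient.
    v_new = alpha * v + (1 - alpha) * g_reg
    h_new = beta * h + (1 - beta) * g_reg * g_reg
    # Element-wise square root of H_new, guarded by epsilon.
    h_sqrt = math.sqrt(h_new) + epsilon
    # Bias-corrected learning rate (alpha**t / beta**t are t-th powers).
    r_adj = r * math.sqrt(1 - beta ** t) / (1 - alpha ** t) if t > 0 else r
    x_new = x - r_adj * v_new / h_sqrt
    # Post-update regularization.
    return (1 - norm_coefficient_post) * x_new, v_new, h_new
```

For multiple tensors the operator applies exactly this update independently to each (X, G, V, H) group in the variadic input list.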
Attributes
* alpha: Coefficient of previously accumulated gradient in running average. Default value is 0.9 (FLOAT).
* beta: Coefficient of previously accumulated squared-gradient in running average. Default value is 0.999 (FLOAT).
* epsilon: Small scalar to avoid dividing by zero. Default value is 1e-06 (FLOAT).
* norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default value is 0 (FLOAT), which means no regularization.
* norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2 applied after the update. Default value is 0 (FLOAT), which means no regularization.
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdd¶

class
skl2onnx.algebra.onnx_ops.
OnnxAdd
(*args, **kwargs)¶ Version
Onnx name: Add
This version of the operator has been available since version 7.
Summary
Performs element-wise binary addition (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
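The multidirectional broadcast rule mentioned above can be sketched in plain Python (a simplified model of the shape computation, not the converter’s own code):

```python
from itertools import zip_longest

def broadcast_shape(shape_a, shape_b):
    """Result shape of multidirectional (Numpy-style) broadcasting:
    dimensions are aligned from the right; a dimension of 1 stretches
    to match the other operand, anything else must match exactly."""
    out = []
    for da, db in zip_longest(reversed(shape_a), reversed(shape_b), fillvalue=1):
        if da != db and 1 not in (da, db):
            raise ValueError(f"shapes {shape_a} and {shape_b} do not broadcast")
        out.append(max(da, db))
    return tuple(reversed(out))
```

For instance, adding a tensor of shape (2, 3, 4, 5) and one of shape (5,) yields shape (2, 3, 4, 5).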
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.
OnnxAdd_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxAdd_1
(*args, **kwargs)¶ Version
Onnx name: Add
This version of the operator has been available since version 1.
Summary
Performs element-wise binary addition (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
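The legacy shape rule above (a 1-element tensor, or a contiguous run of A’s dimensions, suffix-matched unless axis is given) can be checked with a short sketch (an illustrative model, not the runtime’s validation code):

```python
def legacy_broadcast_ok(shape_a, shape_b, axis=None):
    """Check the Add-1 style limited broadcast: B must be a 1-element
    tensor, or match a contiguous run of A's dimensions. Without axis,
    suffix matching is assumed."""
    if all(d == 1 for d in shape_b):          # scalar or 1-element tensor
        return True
    start = axis if axis is not None else len(shape_a) - len(shape_b)
    return shape_a[start:start + len(shape_b)] == tuple(shape_b)
```

This reproduces the supported examples listed above, e.g. B of shape (3, 4) against A of shape (2, 3, 4, 5) is only valid with axis=1.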
Attributes
* axis: If set, defines the broadcast dimensions. See doc for details. No default value.
* broadcast: Pass 1 to enable broadcasting. Default value is 0 (INT).
* consumed_inputs: legacy optimization attribute. No default value.
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdd_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxAdd_6
(*args, **kwargs)¶ Version
Onnx name: Add
This version of the operator has been available since version 6.
Summary
Performs element-wise binary addition (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
* axis: If set, defines the broadcast dimensions. See doc for details. No default value.
* broadcast: Pass 1 to enable broadcasting. Default value is 0 (INT).
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.
OnnxAdd_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxAdd_7
(*args, **kwargs)¶ Version
Onnx name: Add
This version of the operator has been available since version 7.
Summary
Performs element-wise binary addition (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.
OnnxAnd¶

class
skl2onnx.algebra.onnx_ops.
OnnxAnd
(*args, **kwargs)¶ Version
Onnx name: And
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the and logical operation element-wise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxAnd_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxAnd_1
(*args, **kwargs)¶ Version
Onnx name: And
This version of the operator has been available since version 1.
Summary
Returns the tensor resulting from performing the and logical operation element-wise on the input tensors A and B.
If broadcasting is enabled, the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
* axis: If set, defines the broadcast dimensions. No default value.
* broadcast: Enable broadcasting. Default value is 0 (INT).
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxAnd_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxAnd_7
(*args, **kwargs)¶ Version
Onnx name: And
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the and logical operation element-wise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxArgMax¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMax
(*args, **kwargs)¶ Version
Onnx name: ArgMax
This version of the operator has been available since version 12.
Summary
Computes the indices of the max elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input; otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
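The select_last_index behavior described above can be illustrated for a 1-D input (an illustrative sketch, not the runtime’s implementation):

```python
def argmax_1d(values, select_last_index=False):
    """Index of the maximum; with select_last_index=True the last
    occurrence wins when the maximum appears more than once."""
    best = max(values)
    hits = [i for i, v in enumerate(values) if v == best]
    return hits[-1] if select_last_index else hits[0]
```

With input [1, 3, 3, 2], the default returns the first occurrence of the max (index 1), while select_last_index returns the last (index 2).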
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
* select_last_index: Whether to select the last index or the first index if the max appears at multiple indices; default is False (first index). Default value is 0 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMax_1
(*args, **kwargs)¶ Version
Onnx name: ArgMax
This version of the operator has been available since version 1.
Summary
Computes the indices of the max elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMax_11
(*args, **kwargs)¶ Version
Onnx name: ArgMax
This version of the operator has been available since version 11.
Summary
Computes the indices of the max elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMax_12
(*args, **kwargs)¶ Version
Onnx name: ArgMax
This version of the operator has been available since version 12.
Summary
Computes the indices of the max elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input; otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
* select_last_index: Whether to select the last index or the first index if the max appears at multiple indices; default is False (first index). Default value is 0 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMin
(*args, **kwargs)¶ Version
Onnx name: ArgMin
This version of the operator has been available since version 12.
Summary
Computes the indices of the min elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input; otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
* select_last_index: Whether to select the last index or the first index if the min appears at multiple indices; default is False (first index). Default value is 0 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMin_1
(*args, **kwargs)¶ Version
Onnx name: ArgMin
This version of the operator has been available since version 1.
Summary
Computes the indices of the min elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMin_11
(*args, **kwargs)¶ Version
Onnx name: ArgMin
This version of the operator has been available since version 11.
Summary
Computes the indices of the min elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxArgMin_12
(*args, **kwargs)¶ Version
Onnx name: ArgMin
This version of the operator has been available since version 12.
Summary
Computes the indices of the min elements of the input tensor’s elements along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input; otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
* axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is 0 (INT).
* keepdims: Keep the reduced dimension or not; default 1 means keep the reduced dimension. Default value is 1 (INT).
* select_last_index: Whether to select the last index or the first index if the min appears at multiple indices; default is False (first index). Default value is 0 (INT).
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArrayFeatureExtractor¶

class
skl2onnx.algebra.onnx_ops.
OnnxArrayFeatureExtractor
(*args, **kwargs)¶ Version
Onnx name: ArrayFeatureExtractor
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Select elements of the input tensor based on the indices passed.
The indices are applied to the last axes of the tensor.
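For a 2-D input, applying the indices to the last axis amounts to column selection, which can be sketched as (an illustrative model, not the runtime’s implementation):

```python
def array_feature_extractor(data, indices):
    """Pick the given positions along the last axis of a 2-D input,
    i.e. select columns from each row."""
    return [[row[i] for i in indices] for row in data]
```

For example, selecting indices [0, 2] from [[1, 2, 3], [4, 5, 6]] keeps the first and third column of each row.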
Inputs
X (heterogeneous)T: Data to be selected
Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.
Outputs
Z (heterogeneous)T: Selected output data as an array
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.
OnnxArrayFeatureExtractor_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxArrayFeatureExtractor_1
(*args, **kwargs)¶ Version
Onnx name: ArrayFeatureExtractor
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Select elements of the input tensor based on the indices passed.
The indices are applied to the last axes of the tensor.
Inputs
X (heterogeneous)T: Data to be selected
Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.
Outputs
Z (heterogeneous)T: Selected output data as an array
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.
OnnxAsin¶

class
skl2onnx.algebra.onnx_ops.
OnnxAsin
(*args, **kwargs)¶ Version
Onnx name: Asin
This version of the operator has been available since version 7.
Summary
Calculates the arcsine (inverse of sine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arcsine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsin_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxAsin_7
(*args, **kwargs)¶ Version
Onnx name: Asin
This version of the operator has been available since version 7.
Summary
Calculates the arcsine (inverse of sine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arcsine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsinh¶

class
skl2onnx.algebra.onnx_ops.
OnnxAsinh
(*args, **kwargs)¶ Version
Onnx name: Asinh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arcsine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsinh_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxAsinh_9
(*args, **kwargs)¶ Version
Onnx name: Asinh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arcsine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtan¶

class
skl2onnx.algebra.onnx_ops.
OnnxAtan
(*args, **kwargs)¶ Version
Onnx name: Atan
This version of the operator has been available since version 7.
Summary
Calculates the arctangent (inverse of tangent) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arctangent of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtan_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxAtan_7
(*args, **kwargs)¶ Version
Onnx name: Atan
This version of the operator has been available since version 7.
Summary
Calculates the arctangent (inverse of tangent) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arctangent of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtanh¶

class
skl2onnx.algebra.onnx_ops.
OnnxAtanh
(*args, **kwargs)¶ Version
Onnx name: Atanh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arctangent of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtanh_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxAtanh_9
(*args, **kwargs)¶ Version
Onnx name: Atanh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arctangent of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool¶

class
skl2onnx.algebra.onnx_ops.
OnnxAveragePool
(*args, **kwargs)¶ Version
Onnx name: AveragePool
This version of the operator has been available since version 11.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
where pad_shape[i] is the sum of pads along axis i.
auto_pad is a DEPRECATED attribute. If you are using it currently, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And pad shape will be following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding pad when attribute count_include_pad is zero).
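The output-shape formulas above can be computed per spatial axis with a small sketch (an illustrative helper, not part of the library; pad_total stands for the sum of begin and end pads on that axis):

```python
import math

def avgpool_out_dim(in_dim, kernel, stride, pad_total, ceil_mode=False):
    """Output size along one spatial axis, following the formulas above."""
    f = math.ceil if ceil_mode else math.floor
    return int(f((in_dim + pad_total - kernel) / stride + 1))
```

For example, a length-5 axis pooled with kernel 2 and stride 2 and no padding yields 2 outputs with floor mode and 3 with ceil mode.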
Attributes
* auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, add the extra padding at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is “NOTSET” (STRING).
* ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is 0 (INT).
* count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0, which does not include pad. Default value is 0 (INT).
* kernel_shape (required): The size of the kernel along each axis. No default value.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin…x1_end, x2_end,…], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis. No default value.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis. No default value.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxAveragePool_1
(*args, **kwargs)¶ Version
Onnx name: AveragePool
This version of the operator has been available since version 1.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
where pad_shape[i] is the sum of pads along axis i.
auto_pad is a DEPRECATED attribute. If you are using it currently, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And pad shape will be following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements exclude pad.
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (type: STRING).
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on the kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_10¶

class
skl2onnx.algebra.onnx_ops.
OnnxAveragePool_10
(*args, **kwargs)¶ Version
Onnx name: AveragePool
This version of the operator has been available since version 10.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
where pad_shape[i] is the sum of pads along axis i.
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
The pad shape will be the following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (type: STRING).
ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is 0 (type: INT).
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default value is 0 (type: INT), i.e. padding is not counted.
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on the kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxAveragePool_11
(*args, **kwargs)¶ Version
Onnx name: AveragePool
This version of the operator has been available since version 11.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
where pad_shape[i] is the sum of pads along axis i.
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
The pad shape will be the following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (type: STRING).
ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is 0 (type: INT).
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default value is 0 (type: INT), i.e. padding is not counted.
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on the kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxAveragePool_7
(*args, **kwargs)¶ Version
Onnx name: AveragePool
This version of the operator has been available since version 7.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be the following:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
where pad_shape[i] is the sum of pads along axis i.
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be the following:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
The pad shape will be the following if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the attribute count_include_pad is zero).
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (type: STRING).
count_include_pad: Whether to include pad pixels when calculating values for the edges. Default value is 0 (type: INT), i.e. padding is not counted.
kernel_shape (required): The size of the kernel along each axis.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on the kernel, stride, and pad sizes. The floor value of the dimension is used.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization¶

class
skl2onnx.algebra.onnx_ops.
OnnxBatchNormalization
(*args, **kwargs)¶ Version
Onnx name: BatchNormalization
This version of the operator has been available since version 9.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
For previous (deprecated) non-spatial cases, implementors are advised to flatten the input shape to (N x C * D1 * D2 … * Dn) before a BatchNormalization op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also simply be omitted.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is 9.999999747378752e-06 (type: FLOAT).
momentum: Factor used in computing the running mean and variance, e.g. running_mean = running_mean * momentum + mean * (1 - momentum). Default value is 0.8999999761581421 (type: FLOAT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size and C is the number of channels. Statistics are computed for every channel of C over the N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts a single-dimension input of size N, in which case C is assumed to be 1.
scale (heterogeneous)T: Scale tensor of shape (C).
B (heterogeneous)T: Bias tensor of shape (C).
mean (heterogeneous)T: running (training) or estimated (testing) mean tensor of shape (C).
var (heterogeneous)T: running (training) or estimated (testing) variance tensor of shape (C).
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
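In test mode, the normalization described above reduces to an element-wise formula per channel. The sketch below is illustrative only (a hypothetical helper, not the onnxruntime kernel, and it handles a single channel rather than a full (N x C x …) tensor):

```python
import math

def batch_norm_test_mode(x, scale, bias, mean, var, epsilon=1e-5):
    """Test-mode BatchNormalization for one channel:
    Y = scale * (X - mean) / sqrt(var + epsilon) + bias, element-wise,
    using the estimated mean and variance rather than batch statistics."""
    inv_std = 1.0 / math.sqrt(var + epsilon)
    return [scale * (v - mean) * inv_std + bias for v in x]

# With mean=2 and var=1, the value 2.0 maps to ~0 and 1.0/3.0 map symmetrically.
out = batch_norm_test_mode([1.0, 2.0, 3.0], scale=1.0, bias=0.0, mean=2.0, var=1.0)
```

In training mode, the running statistics would additionally be updated as running_mean = running_mean * momentum + mean * (1 - momentum), which the sketch omits.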
OnnxBatchNormalization_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxBatchNormalization_1
(*args, **kwargs)¶ Version
Onnx name: BatchNormalization
This version of the operator has been available since version 1.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
Attributes
consumed_inputs (required): Legacy optimization attribute.
epsilon: The epsilon value to use to avoid division by zero; default is 1e-5f. Default value is 9.999999747378752e-06 (type: FLOAT).
is_test: If set to nonzero, run spatial batch normalization in test mode; default is 0. Default value is 0 (type: INT).
momentum: Factor used in computing the running mean and variance, e.g. running_mean = running_mean * momentum + mean * (1 - momentum); default is 0.9f. Default value is 0.8999999761581421 (type: FLOAT).
spatial: If true, compute the mean and variance across all spatial elements; if false, compute the mean and variance per feature. Default is 1. Default value is 1 (type: INT).
Inputs
X (heterogeneous)T: The input 4-dimensional tensor of shape NCHW.
scale (heterogeneous)T: The scale as a 1-dimensional tensor of size C to be applied to the output.
B (heterogeneous)T: The bias as a 1-dimensional tensor of size C to be applied to the output.
mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1-dimensional tensor of size C.
var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1-dimensional tensor of size C.
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output 4-dimensional tensor of the same shape as X.
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxBatchNormalization_6
(*args, **kwargs)¶ Version
Onnx name: BatchNormalization
This version of the operator has been available since version 6.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
Attributes
epsilon: The epsilon value to use to avoid division by zero; default is 1e-5f. Default value is 9.999999747378752e-06 (type: FLOAT).
is_test: If set to nonzero, run spatial batch normalization in test mode; default is 0. Default value is 0 (type: INT).
momentum: Factor used in computing the running mean and variance, e.g. running_mean = running_mean * momentum + mean * (1 - momentum); default is 0.9f. Default value is 0.8999999761581421 (type: FLOAT).
spatial: If true, compute the mean and variance across all spatial elements; if false, compute the mean and variance per feature. Default is 1. Default value is 1 (type: INT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: The scale as a 1-dimensional tensor of size C to be applied to the output.
B (heterogeneous)T: The bias as a 1-dimensional tensor of size C to be applied to the output.
mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1-dimensional tensor of size C.
var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1-dimensional tensor of size C.
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X.
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxBatchNormalization_7
(*args, **kwargs)¶ Version
Onnx name: BatchNormalization
This version of the operator has been available since version 7.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is 9.999999747378752e-06 (type: FLOAT).
momentum: Factor used in computing the running mean and variance, e.g. running_mean = running_mean * momentum + mean * (1 - momentum). Default value is 0.8999999761581421 (type: FLOAT).
spatial: If true, compute the mean and variance per activation; if false, compute the mean and variance per feature over each mini-batch. Default value is 1 (type: INT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: If spatial is true, the dimension of scale is (C). If spatial is false, the dimensions of scale are (C x D1 x … x Dn).
B (heterogeneous)T: If spatial is true, the dimension of bias is (C). If spatial is false, the dimensions of bias are (C x D1 x … x Dn).
mean (heterogeneous)T: If spatial is true, the dimension of the running mean (training) or the estimated mean (testing) is (C). If spatial is false, the dimensions of the running mean (training) or the estimated mean (testing) are (C x D1 x … x Dn).
var (heterogeneous)T: If spatial is true, the dimension of the running variance (training) or the estimated variance (testing) is (C). If spatial is false, the dimensions of the running variance (training) or the estimated variance (testing) are (C x D1 x … x Dn).
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxBatchNormalization_9
(*args, **kwargs)¶ Version
Onnx name: BatchNormalization
This version of the operator has been available since version 9.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
For previous (deprecated) non-spatial cases, implementors are advised to flatten the input shape to (N x C * D1 * D2 … * Dn) before a BatchNormalization op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also simply be omitted.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is 9.999999747378752e-06 (type: FLOAT).
momentum: Factor used in computing the running mean and variance, e.g. running_mean = running_mean * momentum + mean * (1 - momentum). Default value is 0.8999999761581421 (type: FLOAT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size and C is the number of channels. Statistics are computed for every channel of C over the N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts a single-dimension input of size N, in which case C is assumed to be 1.
scale (heterogeneous)T: Scale tensor of shape (C).
B (heterogeneous)T: Bias tensor of shape (C).
mean (heterogeneous)T: running (training) or estimated (testing) mean tensor of shape (C).
var (heterogeneous)T: running (training) or estimated (testing) variance tensor of shape (C).
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBinarizer¶

class
skl2onnx.algebra.onnx_ops.
OnnxBinarizer
(*args, **kwargs)¶ Version
Onnx name: Binarizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Maps the values of the input tensor to either 0 or 1, elementwise, based on the outcome of a comparison against a threshold value.
Attributes
threshold: Values greater than this are mapped to 1, others to 0. Default value is 0.0 (type: FLOAT).
Inputs
X (heterogeneous)T: Data to be binarized
Outputs
Y (heterogeneous)T: Binarized output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.
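The Binarizer semantics above can be sketched in a few lines of plain Python (an illustrative helper, not the onnxruntime implementation): values strictly greater than the threshold map to 1, all others to 0.

```python
def binarize(x, threshold=0.0):
    """Map each value to 1 if it is strictly greater than the threshold,
    otherwise to 0, mirroring the ai.onnx.ml Binarizer described above."""
    return [1 if v > threshold else 0 for v in x]

# A value exactly equal to the threshold is NOT greater, so it maps to 0.
print(binarize([-1.5, 0.0, 0.5, 2.0], threshold=0.5))  # [0, 0, 0, 1]
```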
OnnxBinarizer_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxBinarizer_1
(*args, **kwargs)¶ Version
Onnx name: Binarizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Maps the values of the input tensor to either 0 or 1, elementwise, based on the outcome of a comparison against a threshold value.
Attributes
threshold: Values greater than this are mapped to 1, others to 0. Default value is 0.0 (type: FLOAT).
Inputs
X (heterogeneous)T: Data to be binarized
Outputs
Y (heterogeneous)T: Binarized output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.
OnnxBitShift¶

class
skl2onnx.algebra.onnx_ops.
OnnxBitShift
(*args, **kwargs)¶ Version
Onnx name: BitShift
This version of the operator has been available since version 11.
Summary
The bitwise shift operator performs an element-wise operation. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, the bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and the other input Y specifies the amount of shifting. For example, if “direction” is “RIGHT”, X is [1, 4], and Y is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X = [1, 2] and Y = [1, 2], the corresponding output Z would be [2, 8].
Because this operator supports Numpy-style broadcasting, X’s and Y’s shapes are not necessarily identical.
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Attributes
direction (required): Direction of moving bits. It can be either “RIGHT” (for right shift) or “LEFT” (for left shift). This attribute has no default value.
Inputs
X (heterogeneous)T: First operand, input to be shifted.
Y (heterogeneous)T: Second operand, amounts of shift.
Outputs
Z (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.
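The examples in the summary can be reproduced with Python’s native shift operators. This sketch is illustrative only (a hypothetical helper on equal-length lists; Numpy-style broadcasting is omitted for brevity):

```python
def bit_shift(x, y, direction):
    """Element-wise BitShift: shift each element of x by the matching
    element of y, to the left or right depending on `direction`."""
    if direction == "LEFT":
        return [a << b for a, b in zip(x, y)]
    return [a >> b for a, b in zip(x, y)]

print(bit_shift([1, 4], [1, 1], "RIGHT"))  # [0, 2]
print(bit_shift([1, 2], [1, 2], "LEFT"))   # [2, 8]
```

Note that the ONNX operator is constrained to unsigned integer tensors, whereas Python integers are unbounded; the sketch ignores that width restriction.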
OnnxBitShift_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxBitShift_11
(*args, **kwargs)¶ Version
Onnx name: BitShift
This version of the operator has been available since version 11.
Summary
The bitwise shift operator performs an element-wise operation. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, the bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and the other input Y specifies the amount of shifting. For example, if “direction” is “RIGHT”, X is [1, 4], and Y is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X = [1, 2] and Y = [1, 2], the corresponding output Z would be [2, 8].
Because this operator supports Numpy-style broadcasting, X’s and Y’s shapes are not necessarily identical.
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Attributes
direction (required): Direction of moving bits. It can be either “RIGHT” (for right shift) or “LEFT” (for left shift). This attribute has no default value.
Inputs
X (heterogeneous)T: First operand, input to be shifted.
Y (heterogeneous)T: Second operand, amounts of shift.
Outputs
Z (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.
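As a behavioral sketch (not the converter itself), the shift semantics above can be reproduced with NumPy, whose broadcasting rules match the ones described:

```python
import numpy as np

def bitshift(x, y, direction="RIGHT"):
    # Element-wise shift with NumPy-style broadcasting; an illustrative
    # helper mirroring the BitShift summary above.
    x = np.asarray(x, dtype=np.uint64)
    y = np.asarray(y, dtype=np.uint64)
    return x >> y if direction == "RIGHT" else x << y

# The two examples from the summary:
print(bitshift([1, 4], [1, 1], "RIGHT"))  # [0 2]
print(bitshift([1, 2], [1, 2], "LEFT"))   # [2 8]
```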
OnnxCast¶

class
skl2onnx.algebra.onnx_ops.
OnnxCast
(*args, **kwargs)¶ Version
Onnx name: Cast
This version of the operator has been available since version 9.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.
Casting from string tensors in plain (e.g., “3.14” and “1000”) and scientific numeric representations (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting the string “100.5” to an integer may result in 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way would be mapped to positive infinity. Similarly, this case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) would be used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior. Converting a string representing a floating-point value, such as “2.718”, to INT is also undefined behavior.
Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value change caused by range differences between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain input types. Casting from complex is not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain output types. Casting to complex is not supported.
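The string-parsing and truncation rules above can be illustrated with NumPy casts (a sketch of the semantics, not the ONNX runtime itself):

```python
import numpy as np

# String tensors in plain and scientific notation cast to float,
# including the reserved special-value literals:
s = np.array(["3.14", "1e-5", "INF", "NaN"])
f = s.astype(np.float32)
print(f)  # finite values, positive infinity, and NaN

# Numeric-to-numeric casts may lose precision or truncate:
print(np.float32(3.1415926459))            # 64-bit value rounded to 32 bits
print(np.array([100.5]).astype(np.int64))  # fractional part dropped
```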
OnnxCastMap¶

class
skl2onnx.algebra.onnx_ops.
OnnxCastMap
(*args, **kwargs)¶ Version
Onnx name: CastMap
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.
Attributes
cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is ``name: “cast_to” s: “TO_FLOAT” type: STRING``
map_form: Indicates whether to only output as many values as are in the input (dense), or position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is ``name: “map_form” s: “DENSE” type: STRING``
max_map: If the value of map_form is ‘SPARSE’, this attribute indicates the total length of the output tensor. Default value is ``name: “max_map” i: 1 type: INT``
Inputs
X (heterogeneous)T1: The input map that is to be cast to a tensor
Outputs
Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys
Type Constraints
T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.
T2 tensor(string), tensor(float), tensor(int64): The output is a 1D tensor of string, float, or integer.
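A pure-Python sketch of the dense vs. sparse packing described above (the function name and defaults are illustrative, not the runtime implementation):

```python
def cast_map(m, map_form="DENSE", max_map=1):
    # Output values are ordered in ascending order of their int64 keys.
    keys = sorted(m)
    if map_form == "DENSE":
        return [m[k] for k in keys]      # one output per input entry
    out = [0.0] * max_map                # SPARSE: the key indexes the output
    for k in keys:
        out[k] = m[k]                    # requires k <= max_map - 1
    return out

print(cast_map({2: 3.0, 0: 1.0}))                       # [1.0, 3.0]
print(cast_map({2: 3.0, 0: 1.0}, "SPARSE", max_map=4))  # [1.0, 0.0, 3.0, 0.0]
```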
OnnxCastMap_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxCastMap_1
(*args, **kwargs)¶ Version
Onnx name: CastMap
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.
Attributes
cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is ``name: “cast_to” s: “TO_FLOAT” type: STRING``
map_form: Indicates whether to only output as many values as are in the input (dense), or position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is ``name: “map_form” s: “DENSE” type: STRING``
max_map: If the value of map_form is ‘SPARSE’, this attribute indicates the total length of the output tensor. Default value is ``name: “max_map” i: 1 type: INT``
Inputs
X (heterogeneous)T1: The input map that is to be cast to a tensor
Outputs
Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys
Type Constraints
T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.
T2 tensor(string), tensor(float), tensor(int64): The output is a 1D tensor of string, float, or integer.
OnnxCast_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxCast_1
(*args, **kwargs)¶ Version
Onnx name: Cast
This version of the operator has been available since version 1.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.
OnnxCast_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxCast_6
(*args, **kwargs)¶ Version
Onnx name: Cast
This version of the operator has been available since version 6.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.
OnnxCast_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxCast_9
(*args, **kwargs)¶ Version
Onnx name: Cast
This version of the operator has been available since version 9.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.
Casting from string tensors in plain (e.g., “3.14” and “1000”) and scientific numeric representations (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting the string “100.5” to an integer may result in 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way would be mapped to positive infinity. Similarly, this case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) would be used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior. Converting a string representing a floating-point value, such as “2.718”, to INT is also undefined behavior.
Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value change caused by range differences between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain input types. Casting from complex is not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain output types. Casting to complex is not supported.
OnnxCategoryMapper¶

class
skl2onnx.algebra.onnx_ops.
OnnxCategoryMapper
(*args, **kwargs)¶ Version
Onnx name: CategoryMapper
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts strings to integers and vice versa.
Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.
Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.
If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.
Attributes
cats_int64s: The integers of the map. This sequence must be the same length as the ‘cats_strings’ sequence. Default value is ````
cats_strings: The strings of the map. This sequence must be the same length as the ‘cats_int64s’ sequence Default value is ````
default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_int64” i: -1 type: INT``
default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_string” s: “_Unused” type: STRING``
Inputs
X (heterogeneous)T1: Input data
Outputs
Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.
Type Constraints
T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].
T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.
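The mapping behavior can be sketched in plain Python (a hypothetical helper mirroring the attribute semantics above, not the runtime kernel):

```python
def category_mapper(x, cats_int64s, cats_strings,
                    default_int64=-1, default_string="_Unused"):
    # Strings map to integers, integers map to strings; values not
    # found in the map fall back to the relevant default attribute.
    if all(isinstance(v, str) for v in x):
        table = dict(zip(cats_strings, cats_int64s))
        return [table.get(v, default_int64) for v in x]
    table = dict(zip(cats_int64s, cats_strings))
    return [table.get(v, default_string) for v in x]

print(category_mapper(["cat", "dog", "bird"], [0, 1], ["cat", "dog"]))  # [0, 1, -1]
print(category_mapper([1, 7], [0, 1], ["cat", "dog"]))  # ['dog', '_Unused']
```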
OnnxCategoryMapper_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxCategoryMapper_1
(*args, **kwargs)¶ Version
Onnx name: CategoryMapper
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts strings to integers and vice versa.
Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.
Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.
If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.
Attributes
cats_int64s: The integers of the map. This sequence must be the same length as the ‘cats_strings’ sequence. Default value is ````
cats_strings: The strings of the map. This sequence must be the same length as the ‘cats_int64s’ sequence Default value is ````
default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_int64” i: -1 type: INT``
default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_string” s: “_Unused” type: STRING``
Inputs
X (heterogeneous)T1: Input data
Outputs
Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.
Type Constraints
T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].
T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.
OnnxCeil¶

class
skl2onnx.algebra.onnx_ops.
OnnxCeil
(*args, **kwargs)¶ Version
Onnx name: Ceil
This version of the operator has been available since version 6.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
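For reference, numpy.ceil exhibits the same elementwise behavior:

```python
import numpy as np

x = np.array([-1.5, 0.2, 2.0])
print(np.ceil(x))  # [-1.  1.  2.]
```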
OnnxCeil_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxCeil_1
(*args, **kwargs)¶ Version
Onnx name: Ceil
This version of the operator has been available since version 1.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.
Attributes
consumed_inputs: legacy optimization attribute. Default value is ````
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCeil_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxCeil_6
(*args, **kwargs)¶ Version
Onnx name: Ceil
This version of the operator has been available since version 6.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCelu¶

class
skl2onnx.algebra.onnx_ops.
OnnxCelu
(*args, **kwargs)¶ Version
Onnx name: Celu
This version of the operator has been available since version 12.
Summary
Continuously Differentiable Exponential Linear Units: perform the linear unit element-wise on the input tensor X using the formula:
max(0,x) + min(0,alpha*(exp(x/alpha)-1))
Attributes
alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is ``name: “alpha” f: 1.0 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float): Constrain input and output types to float32 tensors.
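A direct NumPy transcription of the formula above (a sketch of the math, not the runtime kernel):

```python
import numpy as np

def celu(x, alpha=1.0):
    # max(0,x) + min(0, alpha*(exp(x/alpha)-1)), applied element-wise.
    x = np.asarray(x, dtype=np.float32)
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x / alpha) - 1.0))

print(celu([-1.0, 0.0, 2.0]))  # negative inputs saturate toward -alpha
```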
OnnxCelu_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxCelu_12
(*args, **kwargs)¶ Version
Onnx name: Celu
This version of the operator has been available since version 12.
Summary
Continuously Differentiable Exponential Linear Units: perform the linear unit element-wise on the input tensor X using the formula:
max(0,x) + min(0,alpha*(exp(x/alpha)-1))
Attributes
alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is ``name: “alpha” f: 1.0 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float): Constrain input and output types to float32 tensors.
OnnxClip¶

class
skl2onnx.algebra.onnx_ops.
OnnxClip
(*args, **kwargs)¶ Version
Onnx name: Clip
This version of the operator has been available since version 12.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which the element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which the element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
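The same interval semantics are available in numpy.clip, where an omitted bound corresponds to leaving the optional input unset:

```python
import numpy as np

x = np.array([-3.0, 0.5, 7.0])
# Both bounds given:
print(np.clip(x, -1.0, 1.0))              # [-1.   0.5  1. ]
# Only the upper bound applies:
print(np.clip(x, a_min=None, a_max=1.0))  # [-3.   0.5  1. ]
```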
OnnxClip_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxClip_1
(*args, **kwargs)¶ Version
Onnx name: Clip
This version of the operator has been available since version 1.
Summary
Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max() respectively.
Attributes
consumed_inputs: legacy optimization attribute. Default value is ````
max: Maximum value, above which element is replaced by max Default value is ````
min: Minimum value, under which element is replaced by min Default value is ````
Inputs
input (heterogeneous)T: Input tensor whose elements to be clipped
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxClip_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxClip_11
(*args, **kwargs)¶ Version
Onnx name: Clip
This version of the operator has been available since version 11.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which the element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which the element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxClip_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxClip_12
(*args, **kwargs)¶ Version
Onnx name: Clip
This version of the operator has been available since version 12.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which the element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which the element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxClip_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxClip_6
(*args, **kwargs)¶ Version
Onnx name: Clip
This version of the operator has been available since version 6.
Summary
Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max() respectively.
Attributes
max: Maximum value, above which the element is replaced by max. Default value is ``name: “max” f: 3.4028234663852886e+38 type: FLOAT``
min: Minimum value, under which the element is replaced by min. Default value is ``name: “min” f: -3.4028234663852886e+38 type: FLOAT``
Inputs
input (heterogeneous)T: Input tensor whose elements to be clipped
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCompress¶

class
skl2onnx.algebra.onnx_ops.
OnnxCompress
(*args, **kwargs)¶ Version
Onnx name: Compress
This version of the operator has been available since version 11.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
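Since the operator is defined to behave like numpy.compress, its semantics can be checked directly:

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
# Keep the rows where the condition is True:
print(np.compress([False, True, True], x, axis=0))  # [[3 4] [5 6]]
# With no axis, the input is flattened first; a short condition
# simply discards the remaining elements:
print(np.compress([False, True], x))                # [2]
```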
OnnxCompress_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxCompress_11
(*args, **kwargs)¶ Version
Onnx name: Compress
This version of the operator has been available since version 11.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
OnnxCompress_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxCompress_9
(*args, **kwargs)¶ Version
Onnx name: Compress
This version of the operator has been available since version 9.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
OnnxConcat¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcat
(*args, **kwargs)¶ Version
Onnx name: Concat
This version of the operator has been available since version 11.
Summary
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
Attributes
axis (required): Which axis to concat on. A negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(inputs). Default value is ````
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
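numpy.concatenate follows the same rule: shapes must agree on every axis except the concatenation axis, and a negative axis counts from the back:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])
# Row counts differ, so concatenation on axis 0 is allowed:
print(np.concatenate([a, b], axis=0).shape)   # (3, 2)
# axis=-1 concatenates along the last dimension:
print(np.concatenate([a, a], axis=-1).shape)  # (2, 4)
```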
OnnxConcatFromSequence¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcatFromSequence
(*args, **kwargs)¶ Version
Onnx name: ConcatFromSequence
This version of the operator has been available since version 11.
Summary
Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.
Attributes
axis (required): Which axis to concat on. Accepted range is [-r, r-1], where r is the rank of input tensors. When new_axis is 1, accepted range is [-r-1, r]. Default value is ````
new_axis: Insert and concatenate on a new axis or not; the default 0 means do not insert a new axis. Default value is ``name: “new_axis” i: 0 type: INT``
Inputs
input_sequence (heterogeneous)S: Sequence of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
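The new_axis attribute switches between numpy.concatenate-like and numpy.stack-like behavior, as noted above:

```python
import numpy as np

seq = [np.array([1, 2]), np.array([3, 4])]
# new_axis = 0 behaves like numpy.concatenate:
print(np.concatenate(seq, axis=0))  # [1 2 3 4]
# new_axis = 1 behaves like numpy.stack:
print(np.stack(seq, axis=0))        # [[1 2] [3 4]]
```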
OnnxConcatFromSequence_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcatFromSequence_11
(*args, **kwargs)¶ Version
Onnx name: ConcatFromSequence
This version of the operator has been available since version 11.
Summary
Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.
Attributes
axis (required): Which axis to concat on. Accepted range is [-r, r-1], where r is the rank of input tensors. When new_axis is 1, accepted range is [-r-1, r]. Default value is ````
new_axis: Insert and concatenate on a new axis or not; the default 0 means do not insert a new axis. Default value is ``name: “new_axis” i: 0 type: INT``
Inputs
input_sequence (heterogeneous)S: Sequence of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConcat_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcat_1
(*args, **kwargs)¶ Version
Onnx name: Concat
This version of the operator has been available since version 1.
Summary
Concatenate a list of tensors into a single tensor
Attributes
axis: Which axis to concat on. Default value is ````
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain output types to float tensors.
OnnxConcat_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcat_11
(*args, **kwargs)¶ Version
Onnx name: Concat
This version of the operator has been available since version 11.
Summary
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
Attributes
axis (required): Which axis to concat on. A negative value means counting dimensions from the back. Accepted range is [-r, r - 1] where r = rank(inputs).
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
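To make the axis semantics concrete, here is a pure-Python sketch of Concat for rank-2 inputs (nested lists stand in for tensors; this mirrors the operator's behaviour rather than its implementation):

```python
# Pure-Python sketch of Concat-11 semantics for 2-D inputs
# (nested lists stand in for tensors).

def concat2d(tensors, axis):
    rank = 2
    if axis < 0:
        axis += rank  # a negative axis counts dimensions from the back
    if axis == 0:
        # stack the blocks of rows one after another
        return [row for t in tensors for row in t]
    if axis == 1:
        # join corresponding rows side by side
        return [sum(rows, []) for rows in zip(*tensors)]
    raise ValueError("axis out of accepted range [-r, r-1]")

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(concat2d([a, b], axis=0))   # 4 x 2 result
print(concat2d([a, b], axis=-1))  # 2 x 4 result
```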
OnnxConcat_4¶

class
skl2onnx.algebra.onnx_ops.
OnnxConcat_4
(*args, **kwargs)¶ Version
Onnx name: Concat
This version of the operator has been available since version 4.
Summary
Concatenate a list of tensors into a single tensor
Attributes
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConstant¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstant
(*args, **kwargs)¶ Version
Onnx name: Constant
This version of the operator has been available since version 12.
Summary
This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, or value_* must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format.
value: The value for the elements of the output tensor.
value_float: The value for the sole element for the scalar, float32, output tensor.
value_floats: The values for the elements for the 1D, float32, output tensor.
value_int: The value for the sole element for the scalar, int64, output tensor.
value_ints: The values for the elements for the 1D, int64, output tensor.
value_string: The value for the sole element for the scalar, UTF-8 string, output tensor.
value_strings: The values for the elements for the 1D, UTF-8 string, output tensor.
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstantOfShape¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstantOfShape
(*args, **kwargs)¶ Version
Onnx name: ConstantOfShape
This version of the operator has been available since version 9.
Summary
Generate a tensor with given value and shape.
Attributes
value: (Optional) The value of the output elements. Should be a one-element tensor. If not specified, it defaults to a tensor of value 0 and datatype float32.
Inputs
input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If empty tensor is given, the output would be a scalar. All values must be >= 0.
Outputs
output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor are taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.
Type Constraints
T1 tensor(int64): Constrain input types.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types to be numerics.
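The behaviour described above can be sketched in pure Python: build a nested list of the requested shape filled with a single value, defaulting to 0 as the spec does (plain lists stand in for tensors here):

```python
# Pure-Python sketch of ConstantOfShape semantics: a nested list of the
# requested shape filled with `value` (default 0.0, mirroring the spec's
# float32 default). An empty shape yields a scalar.

def constant_of_shape(shape, value=0.0):
    if not shape:
        return value  # empty shape -> scalar
    head, *tail = shape
    if head < 0:
        raise ValueError("all shape values must be >= 0")
    return [constant_of_shape(tail, value) for _ in range(head)]

print(constant_of_shape([2, 3], value=1.0))  # 2 x 3 tensor of ones
print(constant_of_shape([]))                 # scalar 0.0
```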
OnnxConstantOfShape_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstantOfShape_9
(*args, **kwargs)¶ Version
Onnx name: ConstantOfShape
This version of the operator has been available since version 9.
Summary
Generate a tensor with given value and shape.
Attributes
value: (Optional) The value of the output elements. Should be a one-element tensor. If not specified, it defaults to a tensor of value 0 and datatype float32.
Inputs
input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If empty tensor is given, the output would be a scalar. All values must be >= 0.
Outputs
output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor are taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.
Type Constraints
T1 tensor(int64): Constrain input types.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types to be numerics.
OnnxConstant_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstant_1
(*args, **kwargs)¶ Version
Onnx name: Constant
This version of the operator has been available since version 1.
Summary
A constant tensor.
Attributes
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConstant_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstant_11
(*args, **kwargs)¶ Version
Onnx name: Constant
This version of the operator has been available since version 11.
Summary
A constant tensor. Exactly one of the two attributes, either value or sparse_value, must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format.
value: The value for the elements of the output tensor.
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstant_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstant_12
(*args, **kwargs)¶ Version
Onnx name: Constant
This version of the operator has been available since version 12.
Summary
This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, or value_* must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format.
value: The value for the elements of the output tensor.
value_float: The value for the sole element for the scalar, float32, output tensor.
value_floats: The values for the elements for the 1D, float32, output tensor.
value_int: The value for the sole element for the scalar, int64, output tensor.
value_ints: The values for the elements for the 1D, int64, output tensor.
value_string: The value for the sole element for the scalar, UTF-8 string, output tensor.
value_strings: The values for the elements for the 1D, UTF-8 string, output tensor.
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstant_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxConstant_9
(*args, **kwargs)¶ Version
Onnx name: Constant
This version of the operator has been available since version 9.
Summary
A constant tensor.
Attributes
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConv¶

class
skl2onnx.algebra.onnx_ops.
OnnxConv
(*args, **kwargs)¶ Version
Onnx name: Conv
This version of the operator has been available since version 11.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
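The relationship between input size, kernel, pads and strides described above can be seen in a naive reference implementation. A single-channel 2-D sketch (ONNX Conv computes a cross-correlation; nested lists stand in for the (N x C x H x W) tensors of the spec):

```python
# Naive single-channel 2-D Conv sketch (cross-correlation, as ONNX Conv
# computes it), with explicit `pads` and `strides`.

def conv2d(x, w, pads=(0, 0, 0, 0), strides=(1, 1)):
    ph0, pw0, ph1, pw1 = pads  # [x1_begin, x2_begin, x1_end, x2_end]
    h, wd = len(x), len(x[0])
    kh, kw = len(w), len(w[0])
    # zero-pad the input explicitly
    padded = [[0.0] * (wd + pw0 + pw1) for _ in range(h + ph0 + ph1)]
    for i in range(h):
        for j in range(wd):
            padded[i + ph0][j + pw0] = x[i][j]
    # output dims are functions of kernel size, stride size and pad lengths
    oh = (h + ph0 + ph1 - kh) // strides[0] + 1
    ow = (wd + pw0 + pw1 - kw) // strides[1] + 1
    y = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            y[i][j] = sum(
                padded[i * strides[0] + a][j * strides[1] + b] * w[a][b]
                for a in range(kh) for b in range(kw))
    return y

x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
k = [[1.0, 0.0],
     [0.0, 1.0]]
print(conv2d(x, k))  # each output pixel sums the pixel and its diagonal neighbour
```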
OnnxConvInteger¶

class
skl2onnx.algebra.onnx_ops.
OnnxConvInteger
(*args, **kwargs)¶ Version
Onnx name: ConvInteger
This version of the operator has been available since version 10.
Summary
The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point, and computes the output. The production MUST never overflow. The accumulation may overflow if and only if in 32 bits.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each axis.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input ‘w’.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each axis.
Inputs
Between 2 and 4 inputs.
x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and its default value is 0. It’s a scalar, which means a per-tensor/layer quantization.
w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and its default value is 0. It could be a scalar or a 1D tensor, which means a per-tensor/layer or per-output-channel quantization. If it’s a 1D tensor, its number of elements should be equal to the number of output channels (M).
Outputs
y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.
T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.
T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.
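The zero-point handling is the essential difference from plain Conv: each 8-bit operand has its zero point subtracted before multiplying, and the products are accumulated in a wider integer. A 1-D pure-Python sketch (Python ints stand in for the int32 accumulator of the spec):

```python
# 1-D ConvInteger sketch: 8-bit inputs have their zero points subtracted,
# then products are accumulated in Python ints (standing in for int32).

def conv_integer_1d(x, w, x_zero_point=0, w_zero_point=0):
    kx = len(w)
    out_len = len(x) - kx + 1  # no padding, stride 1
    return [
        sum((x[i + a] - x_zero_point) * (w[a] - w_zero_point)
            for a in range(kx))
        for i in range(out_len)
    ]

x = [10, 20, 30, 40]  # uint8 data with zero point 10
print(conv_integer_1d(x, [1, 2], x_zero_point=10))  # [20, 50, 80]
```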
OnnxConvInteger_10¶

class
skl2onnx.algebra.onnx_ops.
OnnxConvInteger_10
(*args, **kwargs)¶ Version
Onnx name: ConvInteger
This version of the operator has been available since version 10.
Summary
The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point, and computes the output. The production MUST never overflow. The accumulation may overflow if and only if in 32 bits.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each axis.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input ‘w’.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each axis.
Inputs
Between 2 and 4 inputs.
x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and its default value is 0. It’s a scalar, which means a per-tensor/layer quantization.
w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and its default value is 0. It could be a scalar or a 1D tensor, which means a per-tensor/layer or per-output-channel quantization. If it’s a 1D tensor, its number of elements should be equal to the number of output channels (M).
Outputs
y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.
T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.
T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.
OnnxConvTranspose¶

class
skl2onnx.algebra.onnx_ops.
OnnxConvTranspose
(*args, **kwargs)¶ Version
Onnx name: ConvTranspose
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)
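The pads-generation equations above can be checked numerically. A pure-Python sketch for one spatial axis (integer division stands in for the truncating division of the spec):

```python
# Numeric check of the ConvTranspose shape equations: given an explicit
# output_shape for one axis, recover total_padding and the pads split.

def convtranspose_pads(input_size, kernel, stride, output_shape,
                       dilation=1, output_padding=0, auto_pad="NOTSET"):
    total = (stride * (input_size - 1) + output_padding
             + ((kernel - 1) * dilation + 1) - output_shape)
    if auto_pad != "SAME_UPPER":
        start = total // 2
        end = total - total // 2
    else:
        start = total - total // 2
        end = total // 2
    return start, end

# stride 2, 3-wide kernel, 4-long input, 7-long output:
# total_padding = 2*3 + 3 - 7 = 2  ->  pads (1, 1)
print(convtranspose_pads(input_size=4, kernel=3, stride=2, output_shape=7))
```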
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in "output_padding" must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn’t directly affect the computed output values; it only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If "output_shape" is explicitly provided, "output_padding" does not contribute additional size to "output_shape" but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConvTranspose_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxConvTranspose_1
(*args, **kwargs)¶ Version
Onnx name: ConvTranspose
This version of the operator has been available since version 1.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
output_padding: The zero-padding added to one side of the output. This is also called adjs/adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConvTranspose_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxConvTranspose_11
(*args, **kwargs)¶ Version
Onnx name: ConvTranspose
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd padding amount, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: Number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in "output_padding" must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn’t directly affect the computed output values; it only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If "output_shape" is explicitly provided, "output_padding" does not contribute additional size to "output_shape" but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
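The attributes above jointly determine the output spatial size. As a sketch, here is the per-axis relation implied by the descriptions of strides, dilations, pads and output_padding; the formula and the helper name are assumptions for illustration, not stated verbatim on this page:

```python
def conv_transpose_output_size(input_size, kernel, stride=1, dilation=1,
                               pad_begin=0, pad_end=0, output_padding=0):
    # Spatial output size along one axis of a transposed convolution,
    # with explicit pads (assumed formula, sketch only).
    return (stride * (input_size - 1) + output_padding
            + ((kernel - 1) * dilation + 1) - pad_begin - pad_end)

# e.g. an input of size 3 with a kernel of 3, stride 1, no padding -> 5
size_a = conv_transpose_output_size(3, 3)
# stride 2 with a kernel of 2 doubles the spatial extent: 3 -> 6
size_b = conv_transpose_output_size(3, 2, stride=2)
```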
OnnxConv_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxConv_1
(*args, **kwargs)¶ Version
Onnx name: Conv
This version of the operator has been available since version 1.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: dilation value along each spatial axis of the filter.
group: number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
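A minimal NumPy sketch of the computation this operator performs, restricted to a single example, a single input channel and a single feature map (N=1, C=1, M=1, group=1, no dilation); the helper name conv2d is hypothetical and not part of skl2onnx:

```python
import numpy as np

def conv2d(X, W, pads=(0, 0, 0, 0), strides=(1, 1)):
    # Direct 2-D convolution (cross-correlation, as in ONNX Conv).
    # pads = (top, left, bottom, right), following the pads attribute layout.
    x = np.pad(X, ((pads[0], pads[2]), (pads[1], pads[3])))
    kh, kw = W.shape
    oh = (x.shape[0] - kh) // strides[0] + 1
    ow = (x.shape[1] - kw) // strides[1] + 1
    Y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Sum of the elementwise product between the kernel and the window.
            Y[i, j] = (x[i * strides[0]:i * strides[0] + kh,
                         j * strides[1]:j * strides[1] + kw] * W).sum()
    return Y

X = np.arange(9.0).reshape(3, 3)   # 3x3 input
W = np.ones((2, 2))                # 2x2 kernel
Y = conv2d(X, W)                   # 2x2 output, per the shape formula
```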
OnnxConv_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxConv_11
(*args, **kwargs)¶ Version
Onnx name: Conv
This version of the operator has been available since version 11.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input; in case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET".
dilations: dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
group: number of groups input channels and output channels are divided into. Default value is 1.
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be [x1_begin, x2_begin, ..., x1_end, x2_end, ...], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCos¶

class
skl2onnx.algebra.onnx_ops.
OnnxCos
(*args, **kwargs)¶ Version
Onnx name: Cos
This version of the operator has been available since version 7.
Summary
Calculates the cosine of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The cosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCos_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxCos_7
(*args, **kwargs)¶ Version
Onnx name: Cos
This version of the operator has been available since version 7.
Summary
Calculates the cosine of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The cosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCosh¶

class
skl2onnx.algebra.onnx_ops.
OnnxCosh
(*args, **kwargs)¶ Version
Onnx name: Cosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic cosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
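As a quick check of the definition, the hyperbolic cosine can be computed directly from exponentials (a sketch; the helper name is for illustration only):

```python
import math

def cosh(x):
    # cosh(x) = (exp(x) + exp(-x)) / 2
    return (math.exp(x) + math.exp(-x)) / 2

value = cosh(1.0)   # matches math.cosh(1.0)
```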
OnnxCosh_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxCosh_9
(*args, **kwargs)¶ Version
Onnx name: Cosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic cosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCumSum¶

class
skl2onnx.algebra.onnx_ops.
OnnxCumSum
(*args, **kwargs)¶ Version
Onnx name: CumSum
This version of the operator has been available since version 11.
Summary
Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to 1.
Example:
input_x = [1, 2, 3], axis = 0
default:                output = [1, 3, 6]
exclusive=1:            output = [0, 1, 3]
exclusive=0, reverse=1: output = [6, 5, 3]
exclusive=1, reverse=1: output = [5, 3, 0]
Attributes
exclusive: If set to 1, returns the exclusive sum, in which the top element is not included. In other terms, if set to 1, the j-th output element is the sum of the first (j-1) elements; otherwise, it is the sum of the first j elements. Default value is 0.
reverse: If set to 1, performs the sums in the reverse direction. Default value is 0.
Inputs
x (heterogeneous)T: An input tensor that is to be processed.
axis (heterogeneous)T2: (Optional) A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. A negative value means counting dimensions from the back.
Outputs
y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float), tensor(double): Input can be of any tensor type.
T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
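The inclusive/exclusive/reverse combinations in the example above can be reproduced with a small pure-Python sketch (the helper name cumsum_1d is hypothetical, for illustration only):

```python
from itertools import accumulate

def cumsum_1d(x, exclusive=False, reverse=False):
    # 1-D sketch of ONNX CumSum semantics: inclusive by default,
    # optionally exclusive and/or summed in the reverse direction.
    if reverse:
        x = x[::-1]
    y = list(accumulate(x))
    if exclusive:
        y = [0] + y[:-1]   # shift so the current element is excluded
    if reverse:
        y = y[::-1]
    return y

x = [1, 2, 3]
```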
OnnxCumSum_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxCumSum_11
(*args, **kwargs)¶ Version
Onnx name: CumSum
This version of the operator has been available since version 11.
Summary
Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to 1.
Example:
input_x = [1, 2, 3], axis = 0
default:                output = [1, 3, 6]
exclusive=1:            output = [0, 1, 3]
exclusive=0, reverse=1: output = [6, 5, 3]
exclusive=1, reverse=1: output = [5, 3, 0]
Attributes
exclusive: If set to 1, returns the exclusive sum, in which the top element is not included. In other terms, if set to 1, the j-th output element is the sum of the first (j-1) elements; otherwise, it is the sum of the first j elements. Default value is 0.
reverse: If set to 1, performs the sums in the reverse direction. Default value is 0.
Inputs
x (heterogeneous)T: An input tensor that is to be processed.
axis (heterogeneous)T2: (Optional) A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. A negative value means counting dimensions from the back.
Outputs
y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float), tensor(double): Input can be of any tensor type.
T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
OnnxDepthToSpace¶

class
skl2onnx.algebra.onnx_ops.
OnnxDepthToSpace
(*args, **kwargs)¶ Version
Onnx name: DepthToSpace
This version of the operator has been available since version 11.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])
In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the following order: column, row, and the depth. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])
tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])
y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
mode: DCR (default) for depth-column-row order re-arrangement. Use CRD for column-row-depth order. Default value is "DCR".
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
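The DCR pseudo-code above runs as-is with NumPy; a tiny sketch (the helper name depth_to_space_dcr is hypothetical):

```python
import numpy as np

def depth_to_space_dcr(x, blocksize):
    # Direct transcription of the DCR-mode pseudo-code above.
    b, c, h, w = x.shape
    tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
    tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
    return np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])

x = np.arange(4).reshape(1, 4, 1, 1)   # N=1, C=4, H=1, W=1
y = depth_to_space_dcr(x, 2)           # shape (1, 1, 2, 2)
```

The four depth values are laid out as a 2x2 spatial block in the single output channel.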
OnnxDepthToSpace_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxDepthToSpace_1
(*args, **kwargs)¶ Version
Onnx name: DepthToSpace
This version of the operator has been available since version 1.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions.
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxDepthToSpace_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxDepthToSpace_11
(*args, **kwargs)¶ Version
Onnx name: DepthToSpace
This version of the operator has been available since version 11.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])
In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the following order: column, row, and the depth. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])
tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])
y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
mode: DCR (default) for depth-column-row order re-arrangement. Use CRD for column-row-depth order. Default value is "DCR".
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxDequantizeLinear¶

class
skl2onnx.algebra.onnx_ops.
OnnxDequantizeLinear
(*args, **kwargs)¶ Version
Onnx name: DequantizeLinear
This version of the operator has been available since version 10.
Summary
The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero point to compute the full-precision tensor. The dequantization formula is y = (x - x_zero_point) * x_scale. 'x_scale' and 'x_zero_point' must have the same shape. 'x_zero_point' and 'x' must have the same type. 'x' and 'y' must have the same shape. In the case of dequantizing int32, there's no zero point (the zero point is assumed to be 0).
Inputs
Between 2 and 3 inputs.
x (heterogeneous)T: N-D quantized input tensor to be dequantized.
x_scale (heterogeneous)tensor(float): Scale for input 'x'. It's a scalar, which means per-tensor/layer quantization.
x_zero_point (optional, heterogeneous)T: Zero point for input 'x'. It's a scalar, which means per-tensor/layer quantization. It's optional; 0 is the default value when it's not specified.
Outputs
y (heterogeneous)tensor(float): N-D full-precision output tensor. It has the same shape as input 'x'.
Type Constraints
T tensor(int8), tensor(uint8), tensor(int32): Constrain ‘x_zero_point’ and ‘x’ to 8bit/32bit integer tensor.
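The dequantization formula is simple enough to sketch directly in pure Python (the helper name is hypothetical):

```python
def dequantize_linear(x, x_scale, x_zero_point=0):
    # y = (x - x_zero_point) * x_scale, computed elementwise in full precision
    return [(xi - x_zero_point) * x_scale for xi in x]

# uint8-style data quantized with scale 0.5 and zero point 128
y = dequantize_linear([128, 130, 126], 0.5, 128)   # [0.0, 1.0, -1.0]
```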
OnnxDequantizeLinear_10¶

class
skl2onnx.algebra.onnx_ops.
OnnxDequantizeLinear_10
(*args, **kwargs)¶ Version
Onnx name: DequantizeLinear
This version of the operator has been available since version 10.
Summary
The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero point to compute the full-precision tensor. The dequantization formula is y = (x - x_zero_point) * x_scale. 'x_scale' and 'x_zero_point' must have the same shape. 'x_zero_point' and 'x' must have the same type. 'x' and 'y' must have the same shape. In the case of dequantizing int32, there's no zero point (the zero point is assumed to be 0).
Inputs
Between 2 and 3 inputs.
x (heterogeneous)T: N-D quantized input tensor to be dequantized.
x_scale (heterogeneous)tensor(float): Scale for input 'x'. It's a scalar, which means per-tensor/layer quantization.
x_zero_point (optional, heterogeneous)T: Zero point for input 'x'. It's a scalar, which means per-tensor/layer quantization. It's optional; 0 is the default value when it's not specified.
Outputs
y (heterogeneous)tensor(float): N-D full-precision output tensor. It has the same shape as input 'x'.
Type Constraints
T tensor(int8), tensor(uint8), tensor(int32): Constrain ‘x_zero_point’ and ‘x’ to 8bit/32bit integer tensor.
OnnxDet¶

class
skl2onnx.algebra.onnx_ops.
OnnxDet
(*args, **kwargs)¶ Version
Onnx name: Det
This version of the operator has been available since version 11.
Summary
Det calculates the determinant of a square matrix or batches of square matrices. Det takes one input tensor of shape [*, M, M], where * is zero or more batch dimensions, and the innermost 2 dimensions form square matrices. The output is a tensor of shape [*], containing the determinants of all input sub-matrices. E.g., when the input is 2-D, the output is a scalar (shape is empty: []).
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to floatingpoint tensors.
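The [*, M, M] to [*] batching behavior matches NumPy's determinant; a short sketch:

```python
import numpy as np

# A batch of two 2x2 matrices: shape [*, M, M] with * = (2,)
x = np.array([[[1.0, 2.0], [3.0, 4.0]],
              [[2.0, 0.0], [0.0, 2.0]]])
y = np.linalg.det(x)   # shape (2,): one determinant per batch element
```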
OnnxDet_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxDet_11
(*args, **kwargs)¶ Version
Onnx name: Det
This version of the operator has been available since version 11.
Summary
Det calculates the determinant of a square matrix or batches of square matrices. Det takes one input tensor of shape [*, M, M], where * is zero or more batch dimensions, and the innermost 2 dimensions form square matrices. The output is a tensor of shape [*], containing the determinants of all input sub-matrices. E.g., when the input is 2-D, the output is a scalar (shape is empty: []).
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to floatingpoint tensors.
OnnxDictVectorizer¶

class
skl2onnx.algebra.onnx_ops.
OnnxDictVectorizer
(*args, **kwargs)¶ Version
Onnx name: DictVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Uses an index mapping to convert a dictionary to an array.
Given a dictionary, each key is looked up in the vocabulary attribute corresponding to the key type. The index into the vocabulary array at which the key is found is then used to index the output 1D tensor ‘Y’ and insert into it the value found in the dictionary ‘X’.
The key type of the input map must correspond to the element type of the defined vocabulary attribute. Therefore, the output array will be equal in length to the index mapping vector parameter. All keys in the input dictionary must be present in the index mapping vector. For each item in the input dictionary, insert its value in the output array. Any keys not present in the input dictionary, will be zero in the output array.
For example: if the string_vocabulary parameter is set to ["a", "c", "b", "z"], then an input of {"a": 4, "c": 8} will produce an output of [4, 8, 0, 0].
Attributes
int64_vocabulary: An integer vocabulary array. One and only one of the vocabularies must be defined.
string_vocabulary: A string vocabulary array. One and only one of the vocabularies must be defined.
Inputs
X (heterogeneous)T1: A dictionary.
Outputs
Y (heterogeneous)T2: A 1D tensor holding values from the input dictionary.
Type Constraints
T1 map(string, int64), map(int64, string), map(int64, float), map(int64, double), map(string, float), map(string, double): The input must be a map from strings or integers to either strings or a numeric type. The key and value types cannot be the same.
T2 tensor(int64), tensor(float), tensor(double), tensor(string): The output will be a tensor of the value type of the input map. Its shape will be [1, C], where C is the length of the input dictionary.
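The lookup described above reduces to indexing the input dictionary by each vocabulary entry; a pure-Python sketch reproducing the worked example (the helper name is hypothetical):

```python
def dict_vectorizer(x, vocabulary):
    # Each vocabulary entry indexes one output slot; keys absent from x become 0.
    return [x.get(key, 0) for key in vocabulary]

y = dict_vectorizer({"a": 4, "c": 8}, ["a", "c", "b", "z"])   # [4, 8, 0, 0]
```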
OnnxDictVectorizer_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxDictVectorizer_1
(*args, **kwargs)¶ Version
Onnx name: DictVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Uses an index mapping to convert a dictionary to an array.
Given a dictionary, each key is looked up in the vocabulary attribute corresponding to the key type. The index into the vocabulary array at which the key is found is then used to index the output 1D tensor ‘Y’ and insert into it the value found in the dictionary ‘X’.
The key type of the input map must correspond to the element type of the defined vocabulary attribute. Therefore, the output array will be equal in length to the index mapping vector parameter. All keys in the input dictionary must be present in the index mapping vector. For each item in the input dictionary, insert its value in the output array. Any keys not present in the input dictionary, will be zero in the output array.
For example: if the string_vocabulary parameter is set to ["a", "c", "b", "z"], then an input of {"a": 4, "c": 8} will produce an output of [4, 8, 0, 0].
Attributes
int64_vocabulary: An integer vocabulary array. One and only one of the vocabularies must be defined.
string_vocabulary: A string vocabulary array. One and only one of the vocabularies must be defined.
Inputs
X (heterogeneous)T1: A dictionary.
Outputs
Y (heterogeneous)T2: A 1D tensor holding values from the input dictionary.
Type Constraints
T1 map(string, int64), map(int64, string), map(int64, float), map(int64, double), map(string, float), map(string, double): The input must be a map from strings or integers to either strings or a numeric type. The key and value types cannot be the same.
T2 tensor(int64), tensor(float), tensor(double), tensor(string): The output will be a tensor of the value type of the input map. Its shape will be [1, C], where C is the length of the input dictionary.
OnnxDiv¶

class
skl2onnx.algebra.onnx_ops.
OnnxDiv
(*args, **kwargs)¶ Version
Onnx name: Div
This version of the operator has been available since version 7.
Summary
Performs elementwise binary division (with Numpystyle broadcasting support).
This operator supports multidirectional (i.e., Numpystyle) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to highprecision numeric tensors.
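Multidirectional broadcasting follows NumPy's rules, so the operator's semantics can be previewed directly in NumPy:

```python
import numpy as np

A = np.array([[10.0, 20.0], [30.0, 40.0]])
B = np.array([10.0, 10.0])   # shape (2,) broadcasts against (2, 2)
C = A / B                    # elementwise division, NumPy-style broadcasting
```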
OnnxDiv_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxDiv_1
(*args, **kwargs)¶ Version
Onnx name: Div
This version of the operator has been available since version 1.
Summary
Performs elementwise binary division (with limited broadcast support).
If necessary, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or have its shape as a contiguous subset of the first tensor's shape. The start of the mutually equal shape is specified by the argument "axis"; if it is not set, suffix matching is assumed. 1-dim expansion doesn't work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
axis: If set, defines the broadcast dimensions. See doc for details.
broadcast: Pass 1 to enable broadcasting. Default value is 0.
consumed_inputs: legacy optimization attribute.
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDiv_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxDiv_6
(*args, **kwargs)¶ Version
Onnx name: Div
This version of the operator has been available since version 6.
Summary
Performs elementwise binary division (with limited broadcast support).
If necessary, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or have its shape as a contiguous subset of the first tensor's shape. The start of the mutually equal shape is specified by the argument "axis"; if it is not set, suffix matching is assumed. 1-dim expansion doesn't work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
axis: If set, defines the broadcast dimensions. See doc for details.
broadcast: Pass 1 to enable broadcasting. Default value is 0.
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to highprecision numeric tensors.
OnnxDiv_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxDiv_7
(*args, **kwargs)¶ Version
Onnx name: Div
This version of the operator has been available since version 7.
Summary
Performs elementwise binary division (with Numpystyle broadcasting support).
This operator supports multidirectional (i.e., Numpystyle) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to highprecision numeric tensors.
OnnxDropout¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 12.
Summary
Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar). It produces two tensor outputs, output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true then the output Y will be a random dropout. Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass the training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
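The scaling rule above can be sketched in NumPy (illustration only: the mask here is fixed for reproducibility, whereas the real operator samples it at random):

```python
import numpy as np

# Inverted-dropout scaling: kept elements are scaled by 1 / (1 - ratio)
# so that no rescaling is needed at inference time.
ratio = 0.5
scale = 1.0 / (1.0 - ratio)
data = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([True, False, True, True])  # illustrative fixed mask
output = scale * data * mask
```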
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
seed: (Optional) Seed to the random generator; if not specified we will auto-generate one. Default value is not set.
Inputs
Between 1 and 3 inputs.
data (heterogeneous)T: The input data as Tensor.
ratio (optional, heterogeneous)T1: The ratio of random dropout, with value in [0, 1). If this input was not set, or if it was set to 0, the output would be a simple copy of the input. If it’s nonzero, output will be a random dropout of the scaled input, which is typically the case during training. It is an optional value, if not specified it will default to 0.5.
training_mode (optional, heterogeneous)T2: If set to true then it indicates dropout is being used for training. It is an optional value hence unless specified explicitly, it is false. If it is false, ratio is ignored and the operation mimics inference mode where nothing will be dropped from the input data and if mask is requested as output it will contain all ones.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T2: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(float16), tensor(float), tensor(double): Constrain input ‘ratio’ types to float tensors.
T2 tensor(bool): Constrain output ‘mask’ types to boolean tensors.
OnnxDropout_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout_1
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 1.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done.
Attributes
consumed_inputs: legacy optimization attribute. Default value is not set.
is_test: (int, default 0) if nonzero, run dropout in test mode where the output is simply Y = X. Default value is 0.
ratio: (float, default 0.5) the ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask. If is_test is nonzero, this output is not filled.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDropout_10¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout_10
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 10.
Summary
Dropout takes one input floating tensor and produces two tensor outputs, output (floating tensor) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
ratio: The ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T1: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(bool): Constrain output mask types to boolean tensors.
OnnxDropout_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout_12
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 12.
Summary
Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar). It produces two tensor outputs, output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true then the output Y will be a random dropout. Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass the training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
seed: (Optional) Seed to the random generator; if not specified we will auto-generate one. Default value is not set.
Inputs
Between 1 and 3 inputs.
data (heterogeneous)T: The input data as Tensor.
ratio (optional, heterogeneous)T1: The ratio of random dropout, with value in [0, 1). If this input was not set, or if it was set to 0, the output would be a simple copy of the input. If it’s nonzero, output will be a random dropout of the scaled input, which is typically the case during training. It is an optional value, if not specified it will default to 0.5.
training_mode (optional, heterogeneous)T2: If set to true then it indicates dropout is being used for training. It is an optional value hence unless specified explicitly, it is false. If it is false, ratio is ignored and the operation mimics inference mode where nothing will be dropped from the input data and if mask is requested as output it will contain all ones.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T2: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(float16), tensor(float), tensor(double): Constrain input ‘ratio’ types to float tensors.
T2 tensor(bool): Constrain output ‘mask’ types to boolean tensors.
OnnxDropout_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout_6
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 6.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done.
Attributes
is_test: (int, default 0) if nonzero, run dropout in test mode where the output is simply Y = X. Default value is 0.
ratio: (float, default 0.5) the ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask. If is_test is nonzero, this output is not filled.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDropout_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxDropout_7
(*args, **kwargs)¶ Version
Onnx name: Dropout
This version of the operator has been available since version 7.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
ratio: The ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDynamicQuantizeLinear¶

class
skl2onnx.algebra.onnx_ops.
OnnxDynamicQuantizeLinear
(*args, **kwargs)¶ Version
Onnx name: DynamicQuantizeLinear
This version of the operator has been available since version 11.
Summary
A function to fuse the calculation of scale, zero point and the FP32-to-8-bit conversion of FP32 input data. Outputs Scale, ZeroPoint and Quantized Input for a given FP32 input. Scale is calculated as:
y_scale = (max(x) - min(x)) / (qmax - qmin)
where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in the case of uint8, and the data range is adjusted to include 0.
Zero point is calculated as:
intermediate_zero_point = qmin - min(x) / y_scale
y_zero_point = cast(round(saturate(intermediate_zero_point)))
where saturation clamps to [0, 255] for uint8 or [-128, 127] for int8 (right now only uint8 is supported), and rounding is to nearest with ties to even.
Data quantization formula is:
y = saturate(round(x / y_scale) + y_zero_point)
with the same saturation and rounding rules as above.
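A minimal NumPy sketch of these formulas for the uint8 case (an illustration, not the skl2onnx or onnxruntime implementation):

```python
import numpy as np

def dynamic_quantize_linear(x):
    # uint8 quantization range; the data range is adjusted to include 0.
    qmin, qmax = 0, 255
    rmin = min(float(x.min()), 0.0)
    rmax = max(float(x.max()), 0.0)
    y_scale = (rmax - rmin) / (qmax - qmin)
    # Zero point: saturate, then round to nearest (ties to even, as np.rint does).
    y_zero_point = int(np.clip(np.rint(qmin - rmin / y_scale), qmin, qmax))
    # Quantize: round, shift by zero point, saturate to [qmin, qmax].
    y = np.clip(np.rint(x / y_scale) + y_zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(y_scale), np.uint8(y_zero_point)
```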
Inputs
x (heterogeneous)T1: Input tensor
Outputs
y (heterogeneous)T2: Quantized output tensor
y_scale (heterogeneous)tensor(float): Output scale. It’s a scalar, which means a per-tensor/layer quantization.
y_zero_point (heterogeneous)T2: Output zero point. It’s a scalar, which means a per-tensor/layer quantization.
Type Constraints
T1 tensor(float): Constrain ‘x’ to float tensor.
T2 tensor(uint8): Constrain ‘y_zero_point’ and ‘y’ to 8bit unsigned integer tensor.
OnnxDynamicQuantizeLinear_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxDynamicQuantizeLinear_11
(*args, **kwargs)¶ Version
Onnx name: DynamicQuantizeLinear
This version of the operator has been available since version 11.
Summary
A function to fuse the calculation of scale, zero point and the FP32-to-8-bit conversion of FP32 input data. Outputs Scale, ZeroPoint and Quantized Input for a given FP32 input. Scale is calculated as:
y_scale = (max(x) - min(x)) / (qmax - qmin)
where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in the case of uint8, and the data range is adjusted to include 0.
Zero point is calculated as:
intermediate_zero_point = qmin - min(x) / y_scale
y_zero_point = cast(round(saturate(intermediate_zero_point)))
where saturation clamps to [0, 255] for uint8 or [-128, 127] for int8 (right now only uint8 is supported), and rounding is to nearest with ties to even.
Data quantization formula is:
y = saturate(round(x / y_scale) + y_zero_point)
with the same saturation and rounding rules as above.
Inputs
x (heterogeneous)T1: Input tensor
Outputs
y (heterogeneous)T2: Quantized output tensor
y_scale (heterogeneous)tensor(float): Output scale. It’s a scalar, which means a per-tensor/layer quantization.
y_zero_point (heterogeneous)T2: Output zero point. It’s a scalar, which means a per-tensor/layer quantization.
Type Constraints
T1 tensor(float): Constrain ‘x’ to float tensor.
T2 tensor(uint8): Constrain ‘y_zero_point’ and ‘y’ to 8bit unsigned integer tensor.
OnnxEinsum¶

class
skl2onnx.algebra.onnx_ops.
OnnxEinsum
(*args, **kwargs)¶ Version
Onnx name: Einsum
This version of the operator has been available since version 12.
Summary
An einsum of the form
`term1, term2 -> output-term`
produces an output tensor using the equation
`output[output-term] = reduce-sum( input1[term1] * input2[term2] )`
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2) that do not occur in the output-term. The Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation convention. The equation string contains a comma-separated sequence of lower-case letters. Each term corresponds to an operand tensor, and the characters within the terms correspond to operand dimensions. This sequence may be followed by "->" to separate the left and right hand sides of the equation. If the equation contains "->" followed by the right-hand side, the explicit (non-classical) form of the Einstein summation is performed, and the right-hand-side indices indicate output tensor dimensions. In other cases, output indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the equation. When a dimension character is repeated in the left-hand side, it represents summation along that dimension. The equation may contain an ellipsis ("...") to enable broadcasting. An ellipsis must indicate a fixed number of dimensions; specifically, every occurrence of ellipsis in the equation must represent the same number of dimensions. The right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the beginning of the output. The equation string may contain space (U+0020) characters.
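The equation syntax described above maps directly onto numpy.einsum (illustration only); for example, an explicit-mode equation summing over the shared index j is matrix multiplication:

```python
import numpy as np

# "ij,jk->ik": j appears in both input terms but not in the output term,
# so it is summed over -- exactly the reduce-sum described above.
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = np.einsum("ij,jk->ik", A, B)  # equivalent to A @ B
```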
Attributes
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic, heterogeneous)T: Operands
Outputs
Output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numerical tensor types.
OnnxEinsum_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxEinsum_12
(*args, **kwargs)¶ Version
Onnx name: Einsum
This version of the operator has been available since version 12.
Summary
An einsum of the form
`term1, term2 -> output-term`
produces an output tensor using the equation
`output[output-term] = reduce-sum( input1[term1] * input2[term2] )`
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2) that do not occur in the output-term. The Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation convention. The equation string contains a comma-separated sequence of lower-case letters. Each term corresponds to an operand tensor, and the characters within the terms correspond to operand dimensions. This sequence may be followed by "->" to separate the left and right hand sides of the equation. If the equation contains "->" followed by the right-hand side, the explicit (non-classical) form of the Einstein summation is performed, and the right-hand-side indices indicate output tensor dimensions. In other cases, output indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the equation. When a dimension character is repeated in the left-hand side, it represents summation along that dimension. The equation may contain an ellipsis ("...") to enable broadcasting. An ellipsis must indicate a fixed number of dimensions; specifically, every occurrence of ellipsis in the equation must represent the same number of dimensions. The right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the beginning of the output. The equation string may contain space (U+0020) characters.
Attributes
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic, heterogeneous)T: Operands
Outputs
Output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numerical tensor types.
OnnxElu¶

class
skl2onnx.algebra.onnx_ops.
OnnxElu
(*args, **kwargs)¶ Version
Onnx name: Elu
This version of the operator has been available since version 6.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
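A NumPy sketch of the function above (illustration only):

```python
import numpy as np

def elu(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)
```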
Attributes
alpha: Coefficient of ELU. Default value is 1.0.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxElu_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxElu_1
(*args, **kwargs)¶ Version
Onnx name: Elu
This version of the operator has been available since version 1.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
Attributes
alpha: Coefficient of ELU, default to 1.0. Default value is 1.0.
consumed_inputs: legacy optimization attribute. Default value is not set.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxElu_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxElu_6
(*args, **kwargs)¶ Version
Onnx name: Elu
This version of the operator has been available since version 6.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
Attributes
alpha: Coefficient of ELU. Default value is 1.0.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxEqual¶

class
skl2onnx.algebra.onnx_ops.
OnnxEqual
(*args, **kwargs)¶ Version
Onnx name: Equal
This version of the operator has been available since version 11.
Summary
Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
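In NumPy terms (illustration only), the operation corresponds to np.equal with broadcasting:

```python
import numpy as np

# Elementwise equality with broadcasting: B of shape (3,) is compared
# against each row of A of shape (2, 3), yielding a boolean tensor.
A = np.array([[1, 2, 3], [3, 2, 1]])
B = np.array([1, 2, 1])
C = np.equal(A, B)
```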
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxEqual_1
(*args, **kwargs)¶ Version
Onnx name: Equal
This version of the operator has been available since version 1.
Summary
Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors A and B.
If broadcasting is enabled, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
axis: If set, defines the broadcast dimensions. Default value is not set.
broadcast: Enable broadcasting. Default value is 0.
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(int32), tensor(int64): Constrains input to integral tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxEqual_11
(*args, **kwargs)¶ Version
Onnx name: Equal
This version of the operator has been available since version 11.
Summary
Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxEqual_7
(*args, **kwargs)¶ Version
Onnx name: Equal
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(int32), tensor(int64): Constrains input to integral tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxErf¶

class
skl2onnx.algebra.onnx_ops.
OnnxErf
(*args, **kwargs)¶ Version
Onnx name: Erf
This version of the operator has been available since version 9.
Summary
Computes the error function of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The error function of the input tensor computed elementwise. It has the same shape and type of the input.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxErf_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxErf_9
(*args, **kwargs)¶ Version
Onnx name: Erf
This version of the operator has been available since version 9.
Summary
Computes the error function of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The error function of the input tensor computed elementwise. It has the same shape and type of the input.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxExp¶

class
skl2onnx.algebra.onnx_ops.
OnnxExp
(*args, **kwargs)¶ Version
Onnx name: Exp
This version of the operator has been available since version 6.
Summary
Calculates the exponential of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The exponential of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxExp_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxExp_1
(*args, **kwargs)¶ Version
Onnx name: Exp
This version of the operator has been available since version 1.
Summary
Calculates the exponential of the given input tensor, elementwise.
Attributes
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The exponential of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxExp_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxExp_6
(*args, **kwargs)¶ Version
Onnx name: Exp
This version of the operator has been available since version 6.
Summary
Calculates the exponential of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The exponential of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxExpand¶

class
skl2onnx.algebra.onnx_ops.
OnnxExpand
(*args, **kwargs)¶ Version
Onnx name: Expand
This version of the operator has been available since version 8.
Summary
Broadcast the input tensor following the given shape and the broadcast rule. The broadcast rule is similar to numpy.array(input) * numpy.ones(shape): dimensions are right-aligned, and two corresponding dimensions must either have the same value or one of them must be equal to 1. Also, this operator is similar to numpy.broadcast_to(input, shape), but the major difference is that numpy.broadcast_to() does not allow shape to be smaller than input.size(). It is possible that output.shape is not equal to shape, when some dimensions in shape are equal to 1, or when shape.ndim < input.shape.ndim.
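For the cases where the target shape is not smaller than the input, the behaviour matches numpy.broadcast_to (illustration only):

```python
import numpy as np

# A (3, 1) input expanded to (2, 3, 4): dimensions are right-aligned,
# the size-1 column dimension stretches to 4, and a leading 2 is added.
x = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
y = np.broadcast_to(x, (2, 3, 4))     # shape (2, 3, 4)
```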
Inputs
input (heterogeneous)T: Input tensor
shape (heterogeneous)tensor(int64): A 1D tensor indicates the shape you want to expand to, following the broadcast rule
Outputs
output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensors.
OnnxExpand_8¶

class
skl2onnx.algebra.onnx_ops.
OnnxExpand_8
(*args, **kwargs)¶ Version
Onnx name: Expand
This version of the operator has been available since version 8.
Summary
Broadcast the input tensor following the given shape and the broadcast rule. The broadcast rule is similar to numpy.array(input) * numpy.ones(shape): dimensions are right-aligned, and two corresponding dimensions must either have the same value or one of them must be equal to 1. Also, this operator is similar to numpy.broadcast_to(input, shape), but the major difference is that numpy.broadcast_to() does not allow shape to be smaller than input.size(). It is possible that output.shape is not equal to shape, when some dimensions in shape are equal to 1, or when shape.ndim < input.shape.ndim.
Inputs
input (heterogeneous)T: Input tensor
shape (heterogeneous)tensor(int64): A 1D tensor indicates the shape you want to expand to, following the broadcast rule
Outputs
output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensors.
OnnxEyeLike¶

class
skl2onnx.algebra.onnx_ops.
OnnxEyeLike
(*args, **kwargs)¶ Version
Onnx name: EyeLike
This version of the operator has been available since version 9.
Summary
Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D tensors are supported, i.e. input T1 must be of rank 2. The shape of the output tensor is the same as the input tensor. The data type can be specified by the ‘dtype’ argument. If ‘dtype’ is not specified, then the type of input tensor is used. By default, the main diagonal is populated with ones, but attribute ‘k’ can be used to populate upper or lower diagonals. The ‘dtype’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message and be valid as an output type.
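The output can be sketched with numpy.eye (illustration only): same shape as the 2D input, with the diagonal selected by k:

```python
import numpy as np

# EyeLike with k=1: ones on the first upper diagonal of a tensor
# that has the same shape (and, here, dtype) as the 2D input.
x = np.zeros((3, 4), dtype=np.float32)              # any 2D input
y = np.eye(x.shape[0], x.shape[1], k=1, dtype=x.dtype)
```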
Attributes
dtype: (Optional) The data type for the elements of the output tensor. If not specified, the data type of the input tensor T1 is used. If the input tensor T1 is also not specified, then the type defaults to ‘float’. Default value is not set.
k: (Optional) Index of the diagonal to be populated with ones. Default is 0. If T2 is the output, this op sets T2[i, i+k] = 1. k = 0 populates the main diagonal, k > 0 populates an upper diagonal, and k < 0 populates a lower diagonal. Default value is 0.
Inputs
input (heterogeneous)T1: 2D input tensor to copy shape, and optionally, type information from.
Outputs
output (heterogeneous)T2: Output tensor, same shape as input tensor T1.
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Strings and complex are not supported.
OnnxEyeLike_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxEyeLike_9
(*args, **kwargs)¶ Version
Onnx name: EyeLike
This version of the operator has been available since version 9.
Summary
Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D tensors are supported, i.e. input T1 must be of rank 2. The shape of the output tensor is the same as the input tensor. The data type can be specified by the ‘dtype’ argument. If ‘dtype’ is not specified, then the type of input tensor is used. By default, the main diagonal is populated with ones, but attribute ‘k’ can be used to populate upper or lower diagonals. The ‘dtype’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message and be valid as an output type.
Attributes
dtype: (Optional) The data type for the elements of the output tensor. If not specified,the data type of the input tensor T1 is used. If input tensor T1 is also notspecified, then type defaults to ‘float’. Default value is ````
k: (Optional) Index of the diagonal to be populated with ones. Default is 0. If T2 is the output, this op sets T2[i, i+k] = 1. k = 0 populates the main diagonal, k > 0 populates an upper diagonal, and k < 0 populates a lower diagonal. Default value is ``name: “k”
i: 0 type: INT ``
Inputs
input (heterogeneous)T1: 2D input tensor to copy shape, and optionally, type information from.
Outputs
output (heterogeneous)T2: Output tensor, same shape as input tensor T1.
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Strings and complex are not supported.
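The EyeLike semantics above can be sketched with NumPy (the `eyelike` helper name is ours for illustration, not a skl2onnx or onnxruntime API):

```python
import numpy as np

def eyelike(x, k=0, dtype=None):
    """Minimal NumPy sketch of ONNX EyeLike: output has the same shape
    as the rank-2 input, ones on the k-th diagonal, zeros elsewhere."""
    if x.ndim != 2:
        raise ValueError("EyeLike requires a rank-2 input")
    rows, cols = x.shape
    # If dtype is not given, reuse the input tensor's element type.
    return np.eye(rows, cols, k=k, dtype=dtype or x.dtype)

x = np.zeros((3, 4), dtype=np.int32)
out = eyelike(x, k=1)  # ones at (0,1), (1,2), (2,3)
```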
OnnxFeatureVectorizer¶

class
skl2onnx.algebra.onnx_ops.
OnnxFeatureVectorizer
(*args, **kwargs)¶ Version
Onnx name: FeatureVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Concatenates input tensors into one continuous output.
All input shapes are 2D and are concatenated along the second dimension. 1D tensors are treated as [1,C]. Inputs are copied to the output maintaining the order of the input arguments.
All inputs must be integers or floats, while the output will be all floating point values.
Attributes
Inputs
Between 1 and 2147483647 inputs.
X (variadic, heterogeneous)T1: An ordered collection of tensors, all with the same element type.
Outputs
Y (heterogeneous)tensor(float): The output array, elements ordered as the inputs.
Type Constraints
T1 tensor(int32), tensor(int64), tensor(float), tensor(double): The input type must be a tensor of a numeric type.
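The concatenation rule above can be sketched in NumPy (a hand-rolled illustration, not the ai.onnx.ml runtime kernel):

```python
import numpy as np

def feature_vectorizer(*inputs):
    """NumPy sketch of FeatureVectorizer: 1-D inputs are treated as
    [1, C], inputs are concatenated in order along the second
    dimension, and the output is floating point."""
    mats = [np.atleast_2d(np.asarray(x)) for x in inputs]
    return np.concatenate(mats, axis=1).astype(np.float32)

out = feature_vectorizer([1, 2], [[3.0, 4.0]])  # shape (1, 4)
```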
OnnxFeatureVectorizer_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxFeatureVectorizer_1
(*args, **kwargs)¶ Version
Onnx name: FeatureVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Concatenates input tensors into one continuous output.
All input shapes are 2D and are concatenated along the second dimension. 1D tensors are treated as [1,C]. Inputs are copied to the output maintaining the order of the input arguments.
All inputs must be integers or floats, while the output will be all floating point values.
Attributes
Inputs
Between 1 and 2147483647 inputs.
X (variadic, heterogeneous)T1: An ordered collection of tensors, all with the same element type.
Outputs
Y (heterogeneous)tensor(float): The output array, elements ordered as the inputs.
Type Constraints
T1 tensor(int32), tensor(int64), tensor(float), tensor(double): The input type must be a tensor of a numeric type.
OnnxFlatten¶

class
skl2onnx.algebra.onnx_ops.
OnnxFlatten
(*args, **kwargs)¶ Version
Onnx name: Flatten
This version of the operator has been available since version 11.
Summary
Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, … d_n) then the output will have shape (d_0 X d_1 … X d_(axis-1), d_axis X d_(axis+1) … X d_n).
Attributes
axis: Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [-r, r], where r is the rank of the input tensor. Negative value means counting dimensions from the back. When axis = 0, the shape of the output tensor is (1, d_0 X d_1 … X d_n), where the shape of the input tensor is (d_0, d_1, … d_n). Default value is ``name: “axis” i: 1 type: INT``
Inputs
input (heterogeneous)T: A tensor of rank >= axis.
Outputs
output (heterogeneous)T: A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output to all tensor types.
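The shape rule above is just a reshape; a NumPy sketch (the `flatten` helper is illustrative, not the runtime implementation):

```python
import numpy as np

def flatten(x, axis=1):
    """NumPy sketch of ONNX Flatten: collapse dimensions before `axis`
    into the first output dimension and the rest into the second."""
    if axis < 0:
        axis += x.ndim  # negative axis allowed from opset 11 onwards
    outer = int(np.prod(x.shape[:axis], dtype=np.int64))
    inner = int(np.prod(x.shape[axis:], dtype=np.int64))
    return x.reshape(outer, inner)

x = np.arange(24).reshape(2, 3, 4)
flatten(x, axis=2).shape  # (6, 4)
```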
OnnxFlatten_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxFlatten_1
(*args, **kwargs)¶ Version
Onnx name: Flatten
This version of the operator has been available since version 1.
Summary
Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, … d_n) then the output will have shape (d_0 X d_1 … X d_(axis-1), d_axis X d_(axis+1) … X d_n).
Attributes
axis: Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. When axis = 0, the shape of the output tensor is (1, d_0 X d_1 … X d_n), where the shape of the input tensor is (d_0, d_1, … d_n). Default value is ``name: “axis” i: 1 type: INT``
Inputs
input (heterogeneous)T: A tensor of rank >= axis.
Outputs
output (heterogeneous)T: A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxFlatten_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxFlatten_11
(*args, **kwargs)¶ Version
Onnx name: Flatten
This version of the operator has been available since version 11.
Summary
Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, … d_n) then the output will have shape (d_0 X d_1 … X d_(axis-1), d_axis X d_(axis+1) … X d_n).
Attributes
axis: Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [-r, r], where r is the rank of the input tensor. Negative value means counting dimensions from the back. When axis = 0, the shape of the output tensor is (1, d_0 X d_1 … X d_n), where the shape of the input tensor is (d_0, d_1, … d_n). Default value is ``name: “axis” i: 1 type: INT``
Inputs
input (heterogeneous)T: A tensor of rank >= axis.
Outputs
output (heterogeneous)T: A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output to all tensor types.
OnnxFlatten_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxFlatten_9
(*args, **kwargs)¶ Version
Onnx name: Flatten
This version of the operator has been available since version 9.
Summary
Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, … d_n) then the output will have shape (d_0 X d_1 … X d_(axis-1), d_axis X d_(axis+1) … X d_n).
Attributes
axis: Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. When axis = 0, the shape of the output tensor is (1, d_0 X d_1 … X d_n), where the shape of the input tensor is (d_0, d_1, … d_n). Default value is ``name: “axis” i: 1 type: INT``
Inputs
input (heterogeneous)T: A tensor of rank >= axis.
Outputs
output (heterogeneous)T: A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output to all tensor types.
OnnxFloor¶

class
skl2onnx.algebra.onnx_ops.
OnnxFloor
(*args, **kwargs)¶ Version
Onnx name: Floor
This version of the operator has been available since version 6.
Summary
Floor takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the floor function, y = floor(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
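For float tensors the elementwise semantics match NumPy’s floor:

```python
import numpy as np

# ONNX Floor applies y = floor(x) elementwise; for float inputs this
# agrees with numpy.floor (rounds toward negative infinity).
x = np.array([-1.5, -0.2, 0.0, 0.7, 2.9], dtype=np.float32)
y = np.floor(x)
# y == [-2., -1., 0., 0., 2.]
```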
OnnxFloor_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxFloor_1
(*args, **kwargs)¶ Version
Onnx name: Floor
This version of the operator has been available since version 1.
Summary
Floor takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the floor function, y = floor(x), is applied to the tensor elementwise.
Attributes
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxFloor_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxFloor_6
(*args, **kwargs)¶ Version
Onnx name: Floor
This version of the operator has been available since version 6.
Summary
Floor takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the floor function, y = floor(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGRU¶

class
skl2onnx.algebra.onnx_ops.
OnnxGRU
(*args, **kwargs)¶ Version
Onnx name: GRU
This version of the operator has been available since version 7.
Summary
Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
z - update gate
r - reset gate
h - hidden gate
t - time step (t-1 means previous time step)
W[zrh] - W parameter weight matrix for update, reset, and hidden gates
R[zrh] - R recurrence weight matrix for update, reset, and hidden gates
Wb[zrh] - W bias vectors for update, reset, and hidden gates
Rb[zrh] - R bias vectors for update, reset, and hidden gates
WB[zrh] - W parameter weight matrix for backward update, reset, and hidden gates
RB[zrh] - R recurrence weight matrix for backward update, reset, and hidden gates
WBb[zrh] - W bias vectors for backward update, reset, and hidden gates
RBb[zrh] - R bias vectors for backward update, reset, and hidden gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh):
zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz)
rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr)
ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # default, when linear_before_reset = 0
ht = g(Xt*(Wh^T) + (rt (.) (Ht-1*(Rh^T) + Rbh)) + Wbh) # when linear_before_reset != 0
Ht = (1 - zt) (.) ht + zt (.) Ht-1
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators.For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 2 (or 4 if bidirectional) activation functions for update, reset, and hidden gates. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
linear_before_reset: When computing the output of the hidden gate, apply the linear transformation before multiplying by the output of the reset gate. Default value is ``name: “linear_before_reset” i: 0 type: INT``
Inputs
Between 3 and 6 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: If not specified  assumed to be 0
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified  assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
Outputs
Between 0 and 2 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
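The opset-7 equations above can be sketched in NumPy for one direction with the default f=Sigmoid, g=Tanh (a hand-rolled illustration with biases assumed zero, not the onnxruntime kernel):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Wr, Wh, Rz, Rr, Rh,
             linear_before_reset=False):
    """One forward GRU time step; x_t is [batch, input_size], h_prev is
    [batch, hidden_size], W* are [hidden, input], R* are [hidden, hidden]."""
    z = sigmoid(x_t @ Wz.T + h_prev @ Rz.T)            # update gate zt
    r = sigmoid(x_t @ Wr.T + h_prev @ Rr.T)            # reset gate rt
    if linear_before_reset:
        h_hat = np.tanh(x_t @ Wh.T + r * (h_prev @ Rh.T))
    else:
        h_hat = np.tanh(x_t @ Wh.T + (r * h_prev) @ Rh.T)
    return (1 - z) * h_hat + z * h_prev                # Ht

# Sanity check: with all-zero weights, z = r = 0.5 and h_hat = 0,
# so the new hidden state is exactly 0.5 * H(t-1).
```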
OnnxGRU_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGRU_1
(*args, **kwargs)¶ Version
Onnx name: GRU
This version of the operator has been available since version 1.
Summary
Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
z - update gate
r - reset gate
h - hidden gate
t - time step (t-1 means previous time step)
W[zrh] - W parameter weight matrix for update, reset, and hidden gates
R[zrh] - R recurrence weight matrix for update, reset, and hidden gates
Wb[zrh] - W bias vectors for update, reset, and hidden gates
Rb[zrh] - R bias vectors for update, reset, and hidden gates
WB[zrh] - W parameter weight matrix for backward update, reset, and hidden gates
RB[zrh] - R recurrence weight matrix for backward update, reset, and hidden gates
WBb[zrh] - W bias vectors for backward update, reset, and hidden gates
RBb[zrh] - R bias vectors for backward update, reset, and hidden gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh):
zt = f(Xt*(Wz^T) + Ht-1*Rz + Wbz + Rbz)
rt = f(Xt*(Wr^T) + Ht-1*Rr + Wbr + Rbr)
ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*Rh + Rbh + Wbh) # default, when linear_before_reset = 0
ht = g(Xt*(Wh^T) + (rt (.) (Ht-1*Rh + Rbh)) + Wbh) # when linear_before_reset != 0
Ht = (1 - zt) (.) ht + zt (.) Ht-1
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default value is ````
activations: A list of 2 (or 4 if bidirectional) activation functions for update, reset, and hidden gates. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
output_sequence: The sequence output for the hidden is optional if 0. Default 0. Default value is ``name: “output_sequence” i: 0 type: INT``
Inputs
Between 3 and 6 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: If not specified  assumed to be 0
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified  assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
Outputs
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.
Y_h (heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
OnnxGRU_3¶

class
skl2onnx.algebra.onnx_ops.
OnnxGRU_3
(*args, **kwargs)¶ Version
Onnx name: GRU
This version of the operator has been available since version 3.
Summary
Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
z - update gate
r - reset gate
h - hidden gate
t - time step (t-1 means previous time step)
W[zrh] - W parameter weight matrix for update, reset, and hidden gates
R[zrh] - R recurrence weight matrix for update, reset, and hidden gates
Wb[zrh] - W bias vectors for update, reset, and hidden gates
Rb[zrh] - R bias vectors for update, reset, and hidden gates
WB[zrh] - W parameter weight matrix for backward update, reset, and hidden gates
RB[zrh] - R recurrence weight matrix for backward update, reset, and hidden gates
WBb[zrh] - W bias vectors for backward update, reset, and hidden gates
RBb[zrh] - R bias vectors for backward update, reset, and hidden gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh):
zt = f(Xt*(Wz^T) + Ht-1*Rz + Wbz + Rbz)
rt = f(Xt*(Wr^T) + Ht-1*Rr + Wbr + Rbr)
ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*Rh + Rbh + Wbh) # default, when linear_before_reset = 0
ht = g(Xt*(Wh^T) + (rt (.) (Ht-1*Rh + Rbh)) + Wbh) # when linear_before_reset != 0
Ht = (1 - zt) (.) ht + zt (.) Ht-1
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators.For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 2 (or 4 if bidirectional) activation functions for update, reset, and hidden gates. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
linear_before_reset: When computing the output of the hidden gate, apply the linear transformation before multiplying by the output of the reset gate. Default value is ``name: “linear_before_reset” i: 0 type: INT``
output_sequence: The sequence output for the hidden is optional if 0. Default 0. Default value is ``name: “output_sequence” i: 0 type: INT``
Inputs
Between 3 and 6 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: If not specified  assumed to be 0
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified  assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
Outputs
Between 0 and 2 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
OnnxGRU_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxGRU_7
(*args, **kwargs)¶ Version
Onnx name: GRU
This version of the operator has been available since version 7.
Summary
Computes a one-layer GRU. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
z - update gate
r - reset gate
h - hidden gate
t - time step (t-1 means previous time step)
W[zrh] - W parameter weight matrix for update, reset, and hidden gates
R[zrh] - R recurrence weight matrix for update, reset, and hidden gates
Wb[zrh] - W bias vectors for update, reset, and hidden gates
Rb[zrh] - R bias vectors for update, reset, and hidden gates
WB[zrh] - W parameter weight matrix for backward update, reset, and hidden gates
RB[zrh] - R recurrence weight matrix for backward update, reset, and hidden gates
WBb[zrh] - W bias vectors for backward update, reset, and hidden gates
RBb[zrh] - R bias vectors for backward update, reset, and hidden gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh):
zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz)
rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr)
ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # default, when linear_before_reset = 0
ht = g(Xt*(Wh^T) + (rt (.) (Ht-1*(Rh^T) + Rbh)) + Wbh) # when linear_before_reset != 0
Ht = (1 - zt) (.) ht + zt (.) Ht-1
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators.For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 2 (or 4 if bidirectional) activation functions for update, reset, and hidden gates. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
linear_before_reset: When computing the output of the hidden gate, apply the linear transformation before multiplying by the output of the reset gate. Default value is ``name: “linear_before_reset” i: 0 type: INT``
Inputs
Between 3 and 6 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[zrh] and WB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[zrh] and RB[zrh] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 3*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for the gates. Concatenation of [Wb[zrh], Rb[zrh]] and [WBb[zrh], RBb[zrh]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 6*hidden_size]. Optional: If not specified  assumed to be 0
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified  assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
Outputs
Between 0 and 2 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
OnnxGather¶

class
skl2onnx.algebra.onnx_ops.
OnnxGather
(*args, **kwargs)¶ Version
Onnx name: Gather
This version of the operator has been available since version 11.
Summary
Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension of data (by default outermost one as axis=0) indexed by indices, and concatenates them in an output tensor of rank q + (r - 1).
axis = 0 :
Let k = indices[i_{0}, …, i_{q-1}] Then output[i_{0}, …, i_{q-1}, j_{0}, …, j_{r-2}] = input[k, j_{0}, …, j_{r-2}]
data = [ [1.0, 1.2], [2.3, 3.4], [4.5, 5.7], ] indices = [ [0, 1], [1, 2], ] output = [ [ [1.0, 1.2], [2.3, 3.4], ], [ [2.3, 3.4], [4.5, 5.7], ], ]
axis = 1 :
Let k = indices[i_{0}, …, i_{q-1}] Then output[i_{0}, …, i_{q-1}, j_{0}, …, j_{r-2}] = input[j_{0}, k, j_{1}, …, j_{r-2}]
data = [ [1.0, 1.2, 1.9], [2.3, 3.4, 3.9], [4.5, 5.7, 5.9], ] indices = [ [0, 2], ] axis = 1, output = [ [ [1.0, 1.9], [2.3, 3.9], [4.5, 5.9], ], ]
Attributes
axis: Which axis to gather on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)Tind: Tensor of int32/int64 indices, of any rank q. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + (r - 1).
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
Tind tensor(int32), tensor(int64): Constrain indices to integer types
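For in-range indices, the axis = 0 example above matches NumPy’s take:

```python
import numpy as np

# Gather along axis 0: each index in `indices` selects a row of `data`,
# producing a tensor of rank q + (r - 1) = 2 + (2 - 1) = 3.
data = np.array([[1.0, 1.2], [2.3, 3.4], [4.5, 5.7]])
indices = np.array([[0, 1], [1, 2]])
out = np.take(data, indices, axis=0)   # shape (2, 2, 2)
# out[0] == [[1.0, 1.2], [2.3, 3.4]]
```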
OnnxGatherElements¶

class
skl2onnx.algebra.onnx_ops.
OnnxGatherElements
(*args, **kwargs)¶ Version
Onnx name: GatherElements
This version of the operator has been available since version 11.
Summary
GatherElements takes two inputs data and indices of the same rank r >= 1 and an optional attribute axis that identifies an axis of data (by default, the outermost axis, that is axis 0). It is an indexing operation that produces its output by indexing into the input data tensor at index positions determined by elements of the indices tensor. Its output shape is the same as the shape of indices and consists of one value (gathered from the data) for each element in indices.
For instance, in the 3D case (r = 3), the output produced is determined by the following equations:
out[i][j][k] = input[index[i][j][k]][j][k] if axis = 0, out[i][j][k] = input[i][index[i][j][k]][k] if axis = 1, out[i][j][k] = input[i][j][index[i][j][k]] if axis = 2,
This operator is also the inverse of ScatterElements. It is similar to Torch’s gather operation.
Example 1:
data = [ [1, 2], [3, 4], ] indices = [ [0, 0], [1, 0], ] axis = 1 output = [ [ [1, 1], [4, 3], ], ]
Example 2:
data = [ [1, 2, 3], [4, 5, 6], [7, 8, 9], ] indices = [ [1, 2, 0], [2, 0, 0], ] axis = 0 output = [ [ [4, 8, 3], [7, 2, 3], ], ]
Attributes
axis: Which axis to gather on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)Tind: Tensor of int32/int64 indices, with the same rank r as the input. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of the same shape as indices.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
Tind tensor(int32), tensor(int64): Constrain indices to integer types
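The GatherElements semantics above match numpy’s take_along_axis exactly: the output has the same shape as indices, and each element is looked up along the chosen axis. A minimal sketch using Example 1 from the summary:

```python
import numpy as np

# GatherElements with axis=1: out[i][j] = data[i][indices[i][j]].
data = np.array([[1, 2], [3, 4]])
indices = np.array([[0, 0], [1, 0]])

out = np.take_along_axis(data, indices, axis=1)
print(out)  # [[1 1]
            #  [4 3]]
```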
OnnxGatherElements_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxGatherElements_11
(*args, **kwargs)¶ Version
Onnx name: GatherElements
This version of the operator has been available since version 11.
Summary
GatherElements takes two inputs data and indices of the same rank r >= 1 and an optional attribute axis that identifies an axis of data (by default, the outermost axis, that is axis 0). It is an indexing operation that produces its output by indexing into the input data tensor at index positions determined by elements of the indices tensor. Its output shape is the same as the shape of indices and consists of one value (gathered from the data) for each element in indices.
For instance, in the 3D case (r = 3), the output produced is determined by the following equations:
out[i][j][k] = input[index[i][j][k]][j][k] if axis = 0, out[i][j][k] = input[i][index[i][j][k]][k] if axis = 1, out[i][j][k] = input[i][j][index[i][j][k]] if axis = 2,
This operator is also the inverse of ScatterElements. It is similar to Torch’s gather operation.
Example 1:
data = [ [1, 2], [3, 4], ] indices = [ [0, 0], [1, 0], ] axis = 1 output = [ [ [1, 1], [4, 3], ], ]
Example 2:
data = [ [1, 2, 3], [4, 5, 6], [7, 8, 9], ] indices = [ [1, 2, 0], [2, 0, 0], ] axis = 0 output = [ [ [4, 8, 3], [7, 2, 3], ], ]
Attributes
axis: Which axis to gather on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)Tind: Tensor of int32/int64 indices, with the same rank r as the input. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of the same shape as indices.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
Tind tensor(int32), tensor(int64): Constrain indices to integer types
OnnxGatherND¶

class
skl2onnx.algebra.onnx_ops.
OnnxGatherND
(*args, **kwargs)¶ Version
Onnx name: GatherND
This version of the operator has been available since version 12.
Summary
Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.
indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data
batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e. the leading b dimensions of the data tensor and indices represent the batches, and the gather starts from the b+1 dimension.
Some salient points about the inputs’ rank and shape:
r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
The first b dimensions of the shape of indices tensor and data tensor must be equal.
b < min(q, r) is to be honored.
The indices_shape[-1] should have a value between 1 (inclusive) and rank r-b (inclusive)
All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[…,i] <= data_shape[i] - 1. It is an error if any of the index values are out of bounds.
The output is computed as follows:
The output tensor is obtained by mapping each indextuple in the indices tensor to the corresponding slice of the input data.
If indices_shape[-1] > r-b => error condition
If indices_shape[-1] == r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension r-b, where N is an integer equal to the product of 1 and all the elements in the batch dimensions of the indices_shape. Let us think of each such r-b ranked tensor as indices_slice. Each scalar value corresponding to data[0:b-1, indices_slice] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below)
If indices_shape[-1] < r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension < r-b. Let us think of each such tensor as indices_slice. Each tensor slice corresponding to data[0:b-1, indices_slice, :] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Examples 2, 3, 4 and 5 below)
This operator is the inverse of ScatterND.
Example 1
batch_dims = 0
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[0,0],[1,1]] # indices_shape = [2, 2]
output = [0,3] # output_shape = [2]
Example 2
batch_dims = 0
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[1],[0]] # indices_shape = [2, 1]
output = [[2,3],[0,1]] # output_shape = [2, 2]
Example 3
batch_dims = 0
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[0,1],[1,0]] # indices_shape = [2, 2]
output = [[2,3],[4,5]] # output_shape = [2, 2]
Example 4
batch_dims = 0
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
Example 5
batch_dims = 1
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[1],[0]] # indices_shape = [2, 1]
output = [[2,3],[4,5]] # output_shape = [2, 2]
Attributes
batch_dims: The number of batch dimensions. The gather of indexing starts from dimension of data[batch_dims:] Default value is ``name: “batch_dims”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + r - indices_shape[-1] - 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
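For the batch_dims = 0 case described above, GatherND can be sketched in numpy with a small hypothetical helper (`gather_nd` below is illustrative, not a skl2onnx function): each row along the last axis of indices is an index-tuple selecting a slice of data.

```python
import numpy as np

def gather_nd(data, indices):
    """Sketch of GatherND with batch_dims = 0."""
    data = np.asarray(data)
    indices = np.asarray(indices)
    # Move the index-tuple axis to the front, then use the resulting
    # coordinate arrays as an advanced-indexing tuple into data.
    return data[tuple(np.moveaxis(indices, -1, 0))]

# Example 3 from the summary above:
data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])  # data_shape = [2, 2, 2]
indices = np.array([[0, 1], [1, 0]])                   # indices_shape = [2, 2]
print(gather_nd(data, indices))  # [[2 3]
                                 #  [4 5]]
```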
OnnxGatherND_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxGatherND_11
(*args, **kwargs)¶ Version
Onnx name: GatherND
This version of the operator has been available since version 11.
Summary
Given data tensor of rank r >= 1, and indices tensor of rank q >= 1, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1.
indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data
Some salient points about the inputs’ rank and shape:
r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
The indices_shape[-1] should have a value between 1 (inclusive) and rank r (inclusive)
All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[…,i] <= data_shape[i] - 1. It is an error if any of the index values are out of bounds.
The output is computed as follows:
The output tensor is obtained by mapping each indextuple in the indices tensor to the corresponding slice of the input data.
If indices_shape[-1] > r => error condition
If indices_shape[-1] == r, since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor containing 1-D tensors of dimension r. Let us think of each such r ranked tensor as indices_slice. Each scalar value corresponding to data[indices_slice] is filled into the corresponding location of the (q-1)-dimensional tensor to form the output tensor (Example 1 below)
If indices_shape[-1] < r, since the rank of indices is q, indices can be thought of as a (q-1)-dimensional tensor containing 1-D tensors of dimension < r. Let us think of each such tensor as indices_slice. Each tensor slice corresponding to data[indices_slice, :] is filled into the corresponding location of the (q-1)-dimensional tensor to form the output tensor (Examples 2, 3, and 4 below)
This operator is the inverse of ScatterND.
Example 1
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[0,0],[1,1]] # indices_shape = [2, 2]
output = [0,3] # output_shape = [2]
Example 2
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[1],[0]] # indices_shape = [2, 1]
output = [[2,3],[0,1]] # output_shape = [2, 2]
Example 3
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[0,1],[1,0]] # indices_shape = [2, 2]
output = [[2,3],[4,5]] # output_shape = [2, 2]
Example 4
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + r - indices_shape[-1] - 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
OnnxGatherND_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxGatherND_12
(*args, **kwargs)¶ Version
Onnx name: GatherND
This version of the operator has been available since version 12.
Summary
Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.
indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data
batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e. the leading b dimensions of the data tensor and indices represent the batches, and the gather starts from the b+1 dimension.
Some salient points about the inputs’ rank and shape:
r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
The first b dimensions of the shape of indices tensor and data tensor must be equal.
b < min(q, r) is to be honored.
The indices_shape[-1] should have a value between 1 (inclusive) and rank r-b (inclusive)
All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[…,i] <= data_shape[i] - 1. It is an error if any of the index values are out of bounds.
The output is computed as follows:
The output tensor is obtained by mapping each indextuple in the indices tensor to the corresponding slice of the input data.
If indices_shape[-1] > r-b => error condition
If indices_shape[-1] == r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension r-b, where N is an integer equal to the product of 1 and all the elements in the batch dimensions of the indices_shape. Let us think of each such r-b ranked tensor as indices_slice. Each scalar value corresponding to data[0:b-1, indices_slice] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below)
If indices_shape[-1] < r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension < r-b. Let us think of each such tensor as indices_slice. Each tensor slice corresponding to data[0:b-1, indices_slice, :] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Examples 2, 3, 4 and 5 below)
This operator is the inverse of ScatterND.
Example 1
batch_dims = 0
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[0,0],[1,1]] # indices_shape = [2, 2]
output = [0,3] # output_shape = [2]
Example 2
batch_dims = 0
data = [[0,1],[2,3]] # data_shape = [2, 2]
indices = [[1],[0]] # indices_shape = [2, 1]
output = [[2,3],[0,1]] # output_shape = [2, 2]
Example 3
batch_dims = 0
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[0,1],[1,0]] # indices_shape = [2, 2]
output = [[2,3],[4,5]] # output_shape = [2, 2]
Example 4
batch_dims = 0
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
Example 5
batch_dims = 1
data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
indices = [[1],[0]] # indices_shape = [2, 1]
output = [[2,3],[4,5]] # output_shape = [2, 2]
Attributes
batch_dims: The number of batch dimensions. The gather of indexing starts from dimension of data[batch_dims:] Default value is ``name: “batch_dims”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)tensor(int64): Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + r - indices_shape[-1] - 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
OnnxGather_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGather_1
(*args, **kwargs)¶ Version
Onnx name: Gather
This version of the operator has been available since version 1.
Summary
Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension of data (by default outermost one as axis=0) indexed by indices, and concatenates them in an output tensor of rank q + (r - 1). Example 1:
data = [ [1.0, 1.2], [2.3, 3.4], [4.5, 5.7], ] indices = [ [0, 1], [1, 2], ] output = [ [ [1.0, 1.2], [2.3, 3.4], ], [ [2.3, 3.4], [4.5, 5.7], ], ]
Example 2:
data = [ [1.0, 1.2, 1.9], [2.3, 3.4, 3.9], [4.5, 5.7, 5.9], ] indices = [ [0, 2], ] axis = 1, output = [ [ [1.0, 1.9], [2.3, 3.9], [4.5, 5.9], ], ]
Attributes
axis: Which axis to gather on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] Default value is ``name: “axis”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)Tind: Tensor of int32/int64 indices, of any rank q. All index values are expected to be within bounds. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + (r - 1).
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
Tind tensor(int32), tensor(int64): Constrain indices to integer types
OnnxGather_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxGather_11
(*args, **kwargs)¶ Version
Onnx name: Gather
This version of the operator has been available since version 11.
Summary
Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension of data (by default outermost one as axis=0) indexed by indices, and concatenates them in an output tensor of rank q + (r - 1).
axis = 0 :
Let k = indices[i_{0}, …, i_{q-1}] Then output[i_{0}, …, i_{q-1}, j_{0}, …, j_{r-2}] = input[k, j_{0}, …, j_{r-2}]
data = [ [1.0, 1.2], [2.3, 3.4], [4.5, 5.7], ] indices = [ [0, 1], [1, 2], ] output = [ [ [1.0, 1.2], [2.3, 3.4], ], [ [2.3, 3.4], [4.5, 5.7], ], ]
axis = 1 :
Let k = indices[i_{0}, …, i_{q-1}] Then output[i_{0}, …, i_{q-1}, j_{0}, …, j_{r-2}] = input[j_{0}, k, j_{1}, …, j_{r-2}]
data = [ [1.0, 1.2, 1.9], [2.3, 3.4, 3.9], [4.5, 5.7, 5.9], ] indices = [ [0, 2], ] axis = 1, output = [ [ [1.0, 1.9], [2.3, 3.9], [4.5, 5.9], ], ]
Attributes
axis: Which axis to gather on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis”
i: 0 type: INT ``
Inputs
data (heterogeneous)T: Tensor of rank r >= 1.
indices (heterogeneous)Tind: Tensor of int32/int64 indices, of any rank q. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.
Outputs
output (heterogeneous)T: Tensor of rank q + (r - 1).
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to any tensor type.
Tind tensor(int32), tensor(int64): Constrain indices to integer types
OnnxGemm¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 11.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
A’ = transpose(A) if transA else A
B’ = transpose(B) if transB else B
Compute Y = alpha * A’ * B’ + beta * C, where input tensor A has shape (M, K) or (K, M), input tensor B has shape (K, N) or (N, K), input tensor C is broadcastable to shape (M, N), and output tensor Y has shape (M, N). A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB. This operator supports unidirectional broadcasting (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check Broadcasting in ONNX. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
Between 2 and 3 inputs.
A (heterogeneous)T: Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is nonzero.
B (heterogeneous)T: Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is nonzero.
C (optional, heterogeneous)T: Optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectional broadcastable to (M, N).
Outputs
Y (heterogeneous)T: Output tensor of shape (M, N).
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64): Constrain input and output types to float/int tensors.
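The Gemm computation above is plain BLAS level-3 arithmetic and can be sketched directly in numpy (`gemm` below is an illustrative helper, not part of skl2onnx; the defaults mirror the operator’s: alpha = beta = 1.0, no transposes, and C broadcast against A’ @ B’):

```python
import numpy as np

def gemm(A, B, C=0, alpha=1.0, beta=1.0, transA=0, transB=0):
    """Sketch of ONNX Gemm: Y = alpha * A' @ B' + beta * C."""
    Ap = A.T if transA else A
    Bp = B.T if transB else B
    return alpha * (Ap @ Bp) + beta * C

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # (M, K)
B = np.array([[5.0, 6.0], [7.0, 8.0]])  # (K, N)
C = np.ones((2, 2))                     # broadcastable to (M, N)
Y = gemm(A, B, C, alpha=2.0)            # 2 * A @ B + C = [[39, 45], [87, 101]]
```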
OnnxGemm_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm_1
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 1.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3 Compute Y = alpha * A * B + beta * C, where input tensor A has dimension (M X K), input tensor B has dimension (K X N), input tensor C and output tensor Y have dimension (M X N). If attribute broadcast is nonzero, input tensor C will be broadcasted to match the dimension requirement. A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B, the default value is 1.0. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C, the default value is 1.0. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * broadcast: Whether C should be broadcasted Default value is ``name: “broadcast” i: 0 type: INT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
A (heterogeneous)T: Input tensor A
B (heterogeneous)T: Input tensor B
C (heterogeneous)T: Input tensor C, can be inplace.
Outputs
Y (heterogeneous)T: Output tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGemm_11¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm_11
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 11.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
A’ = transpose(A) if transA else A
B’ = transpose(B) if transB else B
Compute Y = alpha * A’ * B’ + beta * C, where input tensor A has shape (M, K) or (K, M), input tensor B has shape (K, N) or (N, K), input tensor C is broadcastable to shape (M, N), and output tensor Y has shape (M, N). A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB. This operator supports unidirectional broadcasting (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check Broadcasting in ONNX. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
Between 2 and 3 inputs.
A (heterogeneous)T: Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is nonzero.
B (heterogeneous)T: Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is nonzero.
C (optional, heterogeneous)T: Optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectional broadcastable to (M, N).
Outputs
Y (heterogeneous)T: Output tensor of shape (M, N).
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64): Constrain input and output types to float/int tensors.
OnnxGemm_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm_6
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 6.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3 Compute Y = alpha * A * B + beta * C, where input tensor A has dimension (M X K), input tensor B has dimension (K X N), input tensor C and output tensor Y have dimension (M X N). If attribute broadcast is nonzero, input tensor C will be broadcasted to match the dimension requirement. A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B, the default value is 1.0. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C, the default value is 1.0. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * broadcast: Whether C should be broadcasted Default value is ``name: “broadcast” i: 0 type: INT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
A (heterogeneous)T: Input tensor A
B (heterogeneous)T: Input tensor B
C (heterogeneous)T: Input tensor C
Outputs
Y (heterogeneous)T: Output tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGemm_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm_7
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 7.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
A’ = transpose(A) if transA else A
B’ = transpose(B) if transB else B
Compute Y = alpha * A’ * B’ + beta * C, where input tensor A has shape (M, K) or (K, M), input tensor B has shape (K, N) or (N, K), input tensor C is broadcastable to shape (M, N), and output tensor Y has shape (M, N). A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB. This operator supports unidirectional broadcasting (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check Broadcasting in ONNX.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
A (heterogeneous)T: Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is nonzero.
B (heterogeneous)T: Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is nonzero.
C (heterogeneous)T: Input tensor C. The shape of C should be unidirectional broadcastable to (M, N).
Outputs
Y (heterogeneous)T: Output tensor of shape (M, N).
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGemm_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxGemm_9
(*args, **kwargs)¶ Version
Onnx name: Gemm
This version of the operator has been available since version 9.
Summary
General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
A’ = transpose(A) if transA else A
B’ = transpose(B) if transB else B
Compute Y = alpha * A’ * B’ + beta * C, where input tensor A has shape (M, K) or (K, M), input tensor B has shape (K, N) or (N, K), input tensor C is broadcastable to shape (M, N), and output tensor Y has shape (M, N). A will be transposed before doing the computation if attribute transA is nonzero, same for B and transB. This operator supports unidirectional broadcasting (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check Broadcasting in ONNX.
Attributes
alpha: Scalar multiplier for the product of input tensors A * B. Default value is ``name: “alpha”
f: 1.0 type: FLOAT `` * beta: Scalar multiplier for input tensor C. Default value is ``name: “beta” f: 1.0 type: FLOAT `` * transA: Whether A should be transposed Default value is ``name: “transA” i: 0 type: INT `` * transB: Whether B should be transposed Default value is ``name: “transB” i: 0 type: INT ``
Inputs
A (heterogeneous)T: Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is nonzero.
B (heterogeneous)T: Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is nonzero.
C (heterogeneous)T: Input tensor C. The shape of C should be unidirectional broadcastable to (M, N).
Outputs
Y (heterogeneous)T: Output tensor of shape (M, N).
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64): Constrain input and output types to float/int tensors.
OnnxGlobalAveragePool¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalAveragePool
(*args, **kwargs)¶ Version
Onnx name: GlobalAveragePool
This version of the operator has been available since version 1.
Summary
GlobalAveragePool consumes an input tensor X and applies average pooling across the values in the same channel. This is equivalent to AveragePool with kernel size equal to the spatial dimension of input tensor.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
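The GlobalAveragePool behavior above reduces to a mean over every spatial axis, keeping the N and C dimensions and leaving each spatial dimension as size 1 (a numpy sketch, not part of skl2onnx):

```python
import numpy as np

# Average over the spatial axes (H, W) of an (N, C, H, W) tensor,
# producing an (N, C, 1, 1) output as the operator specifies.
X = np.random.rand(2, 3, 4, 5)
Y = X.mean(axis=(2, 3), keepdims=True)
print(Y.shape)  # (2, 3, 1, 1)
```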
OnnxGlobalAveragePool_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalAveragePool_1
(*args, **kwargs)¶ Version
Onnx name: GlobalAveragePool
This version of the operator has been available since version 1.
Summary
GlobalAveragePool consumes an input tensor X and applies average pooling across the values in the same channel. This is equivalent to AveragePool with kernel size equal to the spatial dimension of input tensor.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGlobalLpPool¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalLpPool
(*args, **kwargs)¶ Version
Onnx name: GlobalLpPool
This version of the operator has been available since version 2.
Summary
GlobalLpPool consumes an input tensor X and applies Lp pooling across the values in the same channel. This is equivalent to LpPool with kernel size equal to the spatial dimension of the input tensor.
Attributes
p: p value of the Lp norm used to pool over the input data. Default value is 2.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
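A NumPy sketch (not the skl2onnx API) of the Lp pooling the operator describes, reducing every spatial axis with the Lp norm:

```python
import numpy as np

def global_lp_pool(x: np.ndarray, p: int = 2) -> np.ndarray:
    # Lp norm over all spatial axes (everything past N and C), kept as size 1.
    spatial_axes = tuple(range(2, x.ndim))
    return (np.abs(x) ** p).sum(axis=spatial_axes, keepdims=True) ** (1.0 / p)

x = np.array([[[3.0, 4.0]]])      # shape (N, C, D1) = (1, 1, 2)
print(global_lp_pool(x, p=2))     # [[[5.]]]  (sqrt(3^2 + 4^2))
```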
OnnxGlobalLpPool_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalLpPool_1
(*args, **kwargs)¶ Version
Onnx name: GlobalLpPool
This version of the operator has been available since version 1.
Summary
GlobalLpPool consumes an input tensor X and applies Lp pooling across the values in the same channel. This is equivalent to LpPool with kernel size equal to the spatial dimension of the input tensor.
Attributes
p: p value of the Lp norm used to pool over the input data. Default value is 2.0.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. Dimensions will be N x C x 1 x 1
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGlobalLpPool_2¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalLpPool_2
(*args, **kwargs)¶ Version
Onnx name: GlobalLpPool
This version of the operator has been available since version 2.
Summary
GlobalLpPool consumes an input tensor X and applies Lp pooling across the values in the same channel. This is equivalent to LpPool with kernel size equal to the spatial dimension of the input tensor.
Attributes
p: p value of the Lp norm used to pool over the input data. Default value is 2.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGlobalMaxPool¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalMaxPool
(*args, **kwargs)¶ Version
Onnx name: GlobalMaxPool
This version of the operator has been available since version 1.
Summary
GlobalMaxPool consumes an input tensor X and applies max pooling across the values in the same channel. This is equivalent to MaxPool with kernel size equal to the spatial dimension of input tensor.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
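A NumPy sketch (not the skl2onnx API) of global max pooling, taking the maximum over every spatial axis per channel:

```python
import numpy as np

def global_max_pool(x: np.ndarray) -> np.ndarray:
    # Max over all spatial axes (everything past N and C), kept as size 1.
    return x.max(axis=tuple(range(2, x.ndim)), keepdims=True)

x = np.arange(24, dtype=np.float32).reshape(1, 2, 3, 4)  # (N, C, H, W)
y = global_max_pool(x)
print(y.shape)  # (1, 2, 1, 1)
```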
OnnxGlobalMaxPool_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGlobalMaxPool_1
(*args, **kwargs)¶ Version
Onnx name: GlobalMaxPool
This version of the operator has been available since version 1.
Summary
GlobalMaxPool consumes an input tensor X and applies max pooling across the values in the same channel. This is equivalent to MaxPool with kernel size equal to the spatial dimension of input tensor.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
Outputs
Y (heterogeneous)T: Output data tensor from pooling across the input tensor. The output tensor has the same rank as the input. The first two dimensions of output shape are the same as the input (N x C), while the other dimensions are all 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxGradient¶

class
skl2onnx.algebra.onnx_ops.
OnnxGradient
(*args, **kwargs)¶ Version
Onnx name: Gradient
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Gradient operator computes the partial derivatives of a specific tensor w.r.t. some other tensors. This operator is widely used in gradient-based training algorithms. To illustrate its use, let’s consider the following computation graph,
      X
      |
      v
W --> Conv --> H --> Gemm --> Y
                      ^
                      |
                      Z
where W and Z are trainable tensors. Note that operators’ attributes are omitted for the sake of simplicity. Let dY/dW (dY/dZ) be the gradient of Y with respect to W (Z). The user can compute the gradient by inserting a Gradient operator to form another graph, described below.
The same graph with a Gradient node added: the node Gradient(xs=["W", "Z"], zs=["X"], y="Y") takes W, Z, and X as inputs (the tensors named in "xs" followed by those in "zs", so W/Z/X are its 1st/2nd/3rd inputs) and produces dY/dW (1st output of Gradient) and dY/dZ (2nd output of Gradient).
By definition, the tensor “y” is a function of independent variables in “xs” and “zs”. Since we only compute the gradient of “y” w.r.t. the differentiable variables in “xs”, this Gradient only outputs dY/dW and dY/dZ. Note that “H” cannot appear in “xs” and “zs”. The reason is that “H” can be determined by tensors “W” and “X” and therefore “H” is not an independent variable.
All outputs are optional. For example, the user can assign an empty string to the 1st output name of that Gradient to skip the generation of dY/dW. Note that the concept of optional outputs can also be found in ONNX’s RNN, GRU, and LSTM.
The Gradient operator can also compute derivatives with respect to intermediate tensors. For example, the gradient of Y with respect to H can be computed via
the node Gradient(xs=["H", "Z"], y="Y"), which takes H and Z as its 1st and 2nd inputs (as listed in "xs") and produces dY/dH (1st output of Gradient) and dY/dZ (2nd output of Gradient).
It is possible to represent high-order differentiation using Gradient operators. For example, given the following linear model:
W --> Gemm --> Y --> Loss --> O
       ^              ^
       |              |
       X              L
To compute the 2nd order derivative of O with respect to W (denoted by d^2O/dW^2), one can do
two chained Gradient nodes: the first, Gradient(xs=["X", "W"], zs=["L"], y="O"), produces dO/dX (1st output) and dO/dW (2nd output); the second, Gradient(xs=["X", "W"], zs=["L"], y="dO/dW"), then produces d(dO/dW)/dX (1st output) and d^2O/dW^2 (2nd output).
The tensors named in attributes “xs”, “zs”, and “y” define the differentiated computation graph, and the inputs to Gradient node define the values at which the gradient is computed. We can feed different tensors to the identified graph. For example, one can compute the gradient of Y with respect to H at a specific value of H, H_1, by providing that value as an input to the Gradient node.
In the graph W --> Conv --> H --> Gemm --> Y above, feeding H_1 as the 1st input and Z_1 as the 2nd input of Gradient(xs=["H", "Z"], y="Y") yields dY/dH evaluated at H = H_1 (1st output of Gradient) and dY/dZ (2nd output of Gradient).
When the inputs of Gradient are the tensors named in “xs” and “zs”, the computation can be optimized. More specifically, intermediate variables produced in the forward pass can be reused if the gradient is computed via reverse-mode auto-differentiation.
Attributes
xs (required): Input tensor names of the differentiated subgraph. It contains only the necessary differentiated inputs of a (sub)graph. Variables (usually called intermediate variables) that can be generated from inputs cannot be included in this attribute.
y (required): The targeted tensor. It can be viewed as the output of the differentiated function. The attribute “xs” and attribute “zs” are the minimal independent variable set that determines the value of “y”.
zs: Input tensor names of the differentiated subgraph. It contains only the necessary non-differentiated inputs of a (sub)graph. Variables (usually called intermediate variables) that can be generated from inputs cannot be included in this attribute.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic)T1: The values fed into the graph identified by the attributes. The ith input is the value of the ith tensor specified in the concatenated list of the attribute “xs” and the attribute “zs”. For example, if xs=[“A”, “B”] and zs=[“C”], the first input is used as the value of symbol “A” and the 3rd input is substituted for all the occurrences of “C”.
Outputs
Between 1 and 2147483647 outputs.
Outputs (variadic)T2: The gradient of the tensor specified by the attribute “y” with respect to each of the tensors specified in the attribute “xs”. The ith output is the gradient of “y” with respect to the ith tensor specified in the attribute “xs”.
Type Constraints
T1 tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Allow inputs to be any kind of tensor.
T2 tensor(float16), tensor(float), tensor(double): Allow outputs to be any kind of floating-point tensor.
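The first-order example can be sanity-checked numerically. The sketch below is plain NumPy, not the Gradient operator itself, and it assumes Loss is the sum of squared errors (the spec leaves Loss unspecified): the analytic gradient dO/dW of O = Loss(Gemm(X, W), L) is compared against central finite differences.

```python
import numpy as np

# Assumption: Loss(Y, L) = sum((Y - L)^2), so dO/dW = 2 * X.T @ (X @ W - L).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
L = rng.normal(size=(4, 2))

def objective(W):
    return float(((X @ W - L) ** 2).sum())

analytic = 2 * X.T @ (X @ W - L)

# Central finite differences, one entry of W at a time.
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        numeric[i, j] = (objective(Wp) - objective(Wm)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```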
OnnxGradient_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGradient_1
(*args, **kwargs)¶ Version
Onnx name: Gradient
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Gradient operator computes the partial derivatives of a specific tensor w.r.t. some other tensors. This operator is widely used in gradient-based training algorithms. To illustrate its use, let’s consider the following computation graph,
      X
      |
      v
W --> Conv --> H --> Gemm --> Y
                      ^
                      |
                      Z
where W and Z are trainable tensors. Note that operators’ attributes are omitted for the sake of simplicity. Let dY/dW (dY/dZ) be the gradient of Y with respect to W (Z). The user can compute the gradient by inserting a Gradient operator to form another graph, described below.
The same graph with a Gradient node added: the node Gradient(xs=["W", "Z"], zs=["X"], y="Y") takes W, Z, and X as inputs (the tensors named in "xs" followed by those in "zs", so W/Z/X are its 1st/2nd/3rd inputs) and produces dY/dW (1st output of Gradient) and dY/dZ (2nd output of Gradient).
By definition, the tensor “y” is a function of independent variables in “xs” and “zs”. Since we only compute the gradient of “y” w.r.t. the differentiable variables in “xs”, this Gradient only outputs dY/dW and dY/dZ. Note that “H” cannot appear in “xs” and “zs”. The reason is that “H” can be determined by tensors “W” and “X” and therefore “H” is not an independent variable.
All outputs are optional. For example, the user can assign an empty string to the 1st output name of that Gradient to skip the generation of dY/dW. Note that the concept of optional outputs can also be found in ONNX’s RNN, GRU, and LSTM.
The Gradient operator can also compute derivatives with respect to intermediate tensors. For example, the gradient of Y with respect to H can be computed via
the node Gradient(xs=["H", "Z"], y="Y"), which takes H and Z as its 1st and 2nd inputs (as listed in "xs") and produces dY/dH (1st output of Gradient) and dY/dZ (2nd output of Gradient).
It is possible to represent high-order differentiation using Gradient operators. For example, given the following linear model:
W --> Gemm --> Y --> Loss --> O
       ^              ^
       |              |
       X              L
To compute the 2nd order derivative of O with respect to W (denoted by d^2O/dW^2), one can do
two chained Gradient nodes: the first, Gradient(xs=["X", "W"], zs=["L"], y="O"), produces dO/dX (1st output) and dO/dW (2nd output); the second, Gradient(xs=["X", "W"], zs=["L"], y="dO/dW"), then produces d(dO/dW)/dX (1st output) and d^2O/dW^2 (2nd output).
The tensors named in attributes “xs”, “zs”, and “y” define the differentiated computation graph, and the inputs to Gradient node define the values at which the gradient is computed. We can feed different tensors to the identified graph. For example, one can compute the gradient of Y with respect to H at a specific value of H, H_1, by providing that value as an input to the Gradient node.
In the graph W --> Conv --> H --> Gemm --> Y above, feeding H_1 as the 1st input and Z_1 as the 2nd input of Gradient(xs=["H", "Z"], y="Y") yields dY/dH evaluated at H = H_1 (1st output of Gradient) and dY/dZ (2nd output of Gradient).
When the inputs of Gradient are the tensors named in “xs” and “zs”, the computation can be optimized. More specifically, intermediate variables produced in the forward pass can be reused if the gradient is computed via reverse-mode auto-differentiation.
Attributes
xs (required): Input tensor names of the differentiated subgraph. It contains only the necessary differentiated inputs of a (sub)graph. Variables (usually called intermediate variables) that can be generated from inputs cannot be included in this attribute.
y (required): The targeted tensor. It can be viewed as the output of the differentiated function. The attribute “xs” and attribute “zs” are the minimal independent variable set that determines the value of “y”.
zs: Input tensor names of the differentiated subgraph. It contains only the necessary non-differentiated inputs of a (sub)graph. Variables (usually called intermediate variables) that can be generated from inputs cannot be included in this attribute.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic)T1: The values fed into the graph identified by the attributes. The ith input is the value of the ith tensor specified in the concatenated list of the attribute “xs” and the attribute “zs”. For example, if xs=[“A”, “B”] and zs=[“C”], the first input is used as the value of symbol “A” and the 3rd input is substituted for all the occurrences of “C”.
Outputs
Between 1 and 2147483647 outputs.
Outputs (variadic)T2: The gradient of the tensor specified by the attribute “y” with respect to each of the tensors specified in the attribute “xs”. The ith output is the gradient of “y” with respect to the ith tensor specified in the attribute “xs”.
Type Constraints
T1 tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Allow inputs to be any kind of tensor.
T2 tensor(float16), tensor(float), tensor(double): Allow outputs to be any kind of floating-point tensor.
OnnxGraphCall¶

class
skl2onnx.algebra.onnx_ops.
OnnxGraphCall
(*args, **kwargs)¶ Version
Onnx name: GraphCall
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
The GraphCall operator invokes a graph inside TrainingInfoProto’s algorithm field. The GraphCall inputs and outputs are bound to those of the invoked graph by position. If a graph input has an initializer, that input is considered optional. All graph outputs are optional.
Python syntax is used below to describe dictionaries and lists.
Assume that ModelProto’s graph field has
name: “MyInferenceGraph”
input: [“X”, “W”, “Z”]
initializer: [W]
output: [“Y”]
as visualized below for inference.
      X
      |
      v
W --> Conv --> H --> Gemm --> Y
                      ^
                      |
                      Z
Assume that the training algorithm contains
inputs: [“X_1”, “Z_1”, “C”]
initializer: [T]
outputs: [“W_new”]
with a dictionary
update_binding: {“W”: “W_new”, “T”: “T_new”}
Inside the training algorithm graph, one can invoke the inference graph by adding a GraphCall node with
inputs: [“X_1”, “W”, “Z_1”]
outputs: [“Y_1”]
and an attribute graph_name=”MyInferenceGraph”.
The initializers in update_binding (“W” and “T” in this case) are considered globally visible and mutable variables, which can be used as inputs of operators in the training graph.
An example training algorithm graph may look like
In the example training graph, GraphCall(graph_name="MyInferenceGraph") takes X_1, the global mutable W (a variable from the inference graph), and Z_1 and produces Y_1. Y_1 and C feed a Loss node producing O. Gradient(xs=["W"], zs=["X_1", "Z_1", "C"], y="O") then produces dO_dW, the gradient of W. The globally visible and mutable counter T (the number of training iterations) is incremented by a scalar one via Add to give T_new, dO_dW is divided by T via Div, and the result is subtracted from W via Sub to produce W_new.
where Loss is a dummy node which computes the minimized objective function.
The variable “W” is an optional input in the called graph. If the user omits it, the input list of GraphCall becomes [“X_1”, “”, “Z_1”]. In this case, from the view of the computation graph, the Conv operator invoked by GraphCall may still be connected to the global “W” variable, and therefore the structure of the computation graph is unchanged.
Attributes
graph_name (required): The invoked graph’s name. The only allowed value is the name of the inference graph, which is stored in “ModelProto.graph.name” in the ONNX model format.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic)T: Inputs fed to the invoked graph. The ith input here goes to the ith input of the invoked graph. To omit an optional input in this field, the user can drop it or use an empty string.
Outputs
Between 1 and 2147483647 outputs.
Outputs (variadic)T: The outputs generated by the called graph. Its ith value is bound to the ith output of the called graph. Similar to the inputs, all outputs are optional.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Allow inputs and outputs to be any kind of tensor.
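GraphCall itself only exists inside ONNX training models, but the positional binding with initializer-backed optional inputs can be mimicked in plain Python. All names below are hypothetical, not the skl2onnx API:

```python
# Rough Python analogy of GraphCall's binding rules (names hypothetical).
W_default = 2.0  # plays the role of the initializer for graph input "W"

def my_inference_graph(X, W=None, Z=0.0):  # input: ["X", "W", "Z"]
    if W is None:        # an omitted input ("") falls back to the initializer
        W = W_default
    H = W * X            # stands in for Conv
    return H + Z         # stands in for Gemm, producing "Y"

# GraphCall with input list ["X_1", "", "Z_1"]: the "W" slot is omitted.
Y_1 = my_inference_graph(3.0, None, 1.0)
print(Y_1)  # 7.0
```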
OnnxGraphCall_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGraphCall_1
(*args, **kwargs)¶ Version
Onnx name: GraphCall
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
The GraphCall operator invokes a graph inside TrainingInfoProto’s algorithm field. The GraphCall inputs and outputs are bound to those of the invoked graph by position. If a graph input has an initializer, that input is considered optional. All graph outputs are optional.
Python syntax is used below to describe dictionaries and lists.
Assume that ModelProto’s graph field has
name: “MyInferenceGraph”
input: [“X”, “W”, “Z”]
initializer: [W]
output: [“Y”]
as visualized below for inference.
      X
      |
      v
W --> Conv --> H --> Gemm --> Y
                      ^
                      |
                      Z
Assume that the training algorithm contains
inputs: [“X_1”, “Z_1”, “C”]
initializer: [T]
outputs: [“W_new”]
with a dictionary
update_binding: {“W”: “W_new”, “T”: “T_new”}
Inside the training algorithm graph, one can invoke the inference graph by adding a GraphCall node with
inputs: [“X_1”, “W”, “Z_1”]
outputs: [“Y_1”]
and an attribute graph_name=”MyInferenceGraph”.
The initializers in update_binding (“W” and “T” in this case) are considered globally visible and mutable variables, which can be used as inputs of operators in the training graph.
An example training algorithm graph may look like
In the example training graph, GraphCall(graph_name="MyInferenceGraph") takes X_1, the global mutable W (a variable from the inference graph), and Z_1 and produces Y_1. Y_1 and C feed a Loss node producing O. Gradient(xs=["W"], zs=["X_1", "Z_1", "C"], y="O") then produces dO_dW, the gradient of W. The globally visible and mutable counter T (the number of training iterations) is incremented by a scalar one via Add to give T_new, dO_dW is divided by T via Div, and the result is subtracted from W via Sub to produce W_new.
where Loss is a dummy node which computes the minimized objective function.
The variable “W” is an optional input in the called graph. If the user omits it, the input list of GraphCall becomes [“X_1”, “”, “Z_1”]. In this case, from the view of the computation graph, the Conv operator invoked by GraphCall may still be connected to the global “W” variable, and therefore the structure of the computation graph is unchanged.
Attributes
graph_name (required): The invoked graph’s name. The only allowed value is the name of the inference graph, which is stored in “ModelProto.graph.name” in the ONNX model format.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic)T: Inputs fed to the invoked graph. The ith input here goes to the ith input of the invoked graph. To omit an optional input in this field, the user can drop it or use an empty string.
Outputs
Between 1 and 2147483647 outputs.
Outputs (variadic)T: The outputs generated by the called graph. Its ith value is bound to the ith output of the called graph. Similar to the inputs, all outputs are optional.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Allow inputs and outputs to be any kind of tensor.
OnnxGreater¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreater
(*args, **kwargs)¶ Version
Onnx name: Greater
This version of the operator has been available since version 9.
Summary
Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
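The elementwise comparison with multidirectional broadcasting behaves like numpy.greater; a short sketch (independent of skl2onnx):

```python
import numpy as np

A = np.array([[1, 5, 3]])   # shape (1, 3)
B = np.array([[2], [4]])    # shape (2, 1); broadcasts against A to (2, 3)
C = np.greater(A, B)        # elementwise A > B, boolean result
print(C)
# [[False  True  True]
#  [False  True False]]
```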
OnnxGreaterOrEqual¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreaterOrEqual
(*args, **kwargs)¶ Version
Onnx name: GreaterOrEqual
This version of the operator has been available since version 12.
Summary
Returns the tensor resulting from performing the greater_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
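The semantics match numpy.greater_equal; a minimal sketch (independent of skl2onnx):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([2.0, 2.0, 2.0])
R = np.greater_equal(A, B)  # elementwise A >= B, boolean result
print(R)  # [False  True  True]
```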
OnnxGreaterOrEqual_12¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreaterOrEqual_12
(*args, **kwargs)¶ Version
Onnx name: GreaterOrEqual
This version of the operator has been available since version 12.
Summary
Returns the tensor resulting from performing the greater_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxGreater_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreater_1
(*args, **kwargs)¶ Version
Onnx name: Greater
This version of the operator has been available since version 1.
Summary
Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors A and B.
If broadcasting is enabled, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
axis: If set, defines the broadcast dimensions.
broadcast: Enable broadcasting. Default value is 0.
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrains input to float tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxGreater_7¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreater_7
(*args, **kwargs)¶ Version
Onnx name: Greater
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrains input to float tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxGreater_9¶

class
skl2onnx.algebra.onnx_ops.
OnnxGreater_9
(*args, **kwargs)¶ Version
Onnx name: Greater
This version of the operator has been available since version 9.
Summary
Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxHardSigmoid¶

class
skl2onnx.algebra.onnx_ops.
OnnxHardSigmoid
(*args, **kwargs)¶ Version
Onnx name: HardSigmoid
This version of the operator has been available since version 6.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
alpha: Value of alpha. Default value is 0.2.
beta: Value of beta. Default value is 0.5.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
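The formula y = max(0, min(1, alpha * x + beta)) with the default alpha=0.2 and beta=0.5 can be sketched in NumPy (independent of skl2onnx):

```python
import numpy as np

def hard_sigmoid(x: np.ndarray, alpha: float = 0.2, beta: float = 0.5) -> np.ndarray:
    # max(0, min(1, alpha * x + beta)), applied elementwise
    return np.clip(alpha * x + beta, 0.0, 1.0)

x = np.array([-5.0, 0.0, 5.0])
print(hard_sigmoid(x))  # [0.  0.5 1. ]
```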
OnnxHardSigmoid_1¶

class
skl2onnx.algebra.onnx_ops.
OnnxHardSigmoid_1
(*args, **kwargs)¶ Version
Onnx name: HardSigmoid
This version of the operator has been available since version 1.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
alpha: Value of alpha. Default value is 0.2.
beta: Value of beta. Default value is 0.5.
consumed_inputs: legacy optimization attribute.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxHardSigmoid_6¶

class
skl2onnx.algebra.onnx_ops.
OnnxHardSigmoid_6
(*args, **kwargs)¶ Version
Onnx name: HardSigmoid
This version of the operator has been available since version 6.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
alpha: Value of alpha. Default value is 0.2.
beta: Value of beta. Default value is 0.5.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxHardmax¶

class
skl2onnx.algebra.onnx_ops.
OnnxHardmax
(*args, **kwargs)¶ Version
Onnx name: Hardmax
This version of the operator has been available since version 11.
Summary
The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch of the given input.
The input does not need to explicitly be a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor input in [a_0, a_1, …, a_{k-1}, a_k, …, a_{n-1}], where k is the axis provided, the input will be coerced into a 2-dimensional tensor with dimensions [a_0 * … * a_{k-1}, a_k * … * a_{n-1}]. For the default case where axis=1, this means the input tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * … * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * … * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors. The output tensor has the same shape and contains the hardmax values of the corresponding input.
Attributes
axis: Describes the axis of the inputs when coerced to 2D; defaults to one because the 0th axis most likely describes the batch_size. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is 1.
Inputs
input (heterogeneous)T: The input tensor that’s coerced into a 2D matrix of size (NxD) as described above.
Outputs
output (heterogeneous)T: The output values with the same shape as input tensor (the original size without coercion).
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
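The coercion-then-hardmax behaviour described above can be sketched in NumPy (illustrative only; `hardmax` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def hardmax(x, axis=1):
    # Coerce x to a 2-D [N, D] view as described above, then mark the first
    # maximum of each row with 1 and everything else with 0.
    n = int(np.prod(x.shape[:axis], dtype=np.int64)) if axis > 0 else 1
    flat = x.reshape(n, -1)
    out = np.zeros_like(flat)
    out[np.arange(flat.shape[0]), flat.argmax(axis=1)] = 1  # argmax picks the first max
    return out.reshape(x.shape)
```

Note that on ties, only the first maximum gets a 1, matching the "1 for the first maximum value" rule.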
OnnxHardmax_1¶

class skl2onnx.algebra.onnx_ops.OnnxHardmax_1(*args, **kwargs)¶
Version
Onnx name: Hardmax
This version of the operator has been available since version 1.
Summary
The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch of the given input. The input is a 2D tensor (Tensor<float>) of size (batch_size x input_feature_dimensions). The output tensor has the same shape and contains the hardmax values of the corresponding input.
Input does not need to explicitly be a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor input in [a_0, a_1, …, a_{k-1}, a_k, …, a_{n-1}] and k is the axis provided, then input will be coerced into a 2-dimensional tensor with dimensions [a_0 * … * a_{k-1}, a_k * … * a_{n-1}]. For the default case where axis=1, this means the input tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * … * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * … * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors.
Attributes
axis: Describes the axis of the inputs when coerced to 2D; defaults to one because the 0th axis most likely describes the batch_size. Default value is ``name: "axis" i: 1 type: INT``
Inputs
input (heterogeneous)T: The input tensor that’s coerced into a 2D matrix of size (NxD) as described above.
Outputs
output (heterogeneous)T: The output values with the same shape as input tensor (the original size without coercion).
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxHardmax_11¶

class skl2onnx.algebra.onnx_ops.OnnxHardmax_11(*args, **kwargs)¶
Version
Onnx name: Hardmax
This version of the operator has been available since version 11.
Summary
The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch of the given input.
The input does not need to explicitly be a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor input in [a_0, a_1, …, a_{k-1}, a_k, …, a_{n-1}] and k is the axis provided, then input will be coerced into a 2-dimensional tensor with dimensions [a_0 * … * a_{k-1}, a_k * … * a_{n-1}]. For the default case where axis=1, this means the input tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * … * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * … * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors. The output tensor has the same shape and contains the hardmax values of the corresponding input.
Attributes
axis: Describes the axis of the inputs when coerced to 2D; defaults to one because the 0th axis most likely describes the batch_size. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is ``name: "axis" i: 1 type: INT``
Inputs
input (heterogeneous)T: The input tensor that’s coerced into a 2D matrix of size (NxD) as described above.
Outputs
output (heterogeneous)T: The output values with the same shape as input tensor (the original size without coercion).
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxIdentity¶

class skl2onnx.algebra.onnx_ops.OnnxIdentity(*args, **kwargs)¶
Version
Onnx name: Identity
This version of the operator has been available since version 1.
Summary
Identity operator
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: Tensor to copy input into.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxIdentity_1¶

class skl2onnx.algebra.onnx_ops.OnnxIdentity_1(*args, **kwargs)¶
Version
Onnx name: Identity
This version of the operator has been available since version 1.
Summary
Identity operator
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: Tensor to copy input into.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxIf¶

class skl2onnx.algebra.onnx_ops.OnnxIf(*args, **kwargs)¶
Version
Onnx name: If
This version of the operator has been available since version 11.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch. Default value is ````
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch. Default value is ````
Inputs
cond (heterogeneous)B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes. For example, if in a model file, the first output of then_branch is typed float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.
Type Constraints
V tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): All Tensor types
B tensor(bool): Only bool
OnnxIf_1¶

class skl2onnx.algebra.onnx_ops.OnnxIf_1(*args, **kwargs)¶
Version
Onnx name: If
This version of the operator has been available since version 1.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch. Default value is ````
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch. Default value is ````
Inputs
cond (heterogeneous)B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same shape and same data type.
Type Constraints
V tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): All Tensor types
B tensor(bool): Only bool
OnnxIf_11¶

class skl2onnx.algebra.onnx_ops.OnnxIf_11(*args, **kwargs)¶
Version
Onnx name: If
This version of the operator has been available since version 11.
Summary
If conditional
Attributes
else_branch (required): Graph to run if condition is false. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the then_branch. Default value is ````
then_branch (required): Graph to run if condition is true. Has N outputs: values you wish to be live-out to the enclosing scope. The number of outputs must match the number of outputs in the else_branch. Default value is ````
Inputs
cond (heterogeneous)B: Condition for the if
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)V: Values that are live-out to the enclosing scope. The return values in the then_branch and else_branch must be of the same data type. The then_branch and else_branch may produce tensors with the same element type and different shapes. If corresponding outputs from the then-branch and the else-branch have static shapes S1 and S2, then the shape of the corresponding output variable of the if-node (if present) must be compatible with both S1 and S2 as it represents the union of both possible shapes. For example, if in a model file, the first output of then_branch is typed float tensor with shape [2] and the first output of else_branch is another float tensor with shape [3], If's first output should have (a) no shape set, or (b) a shape of rank 1 with neither dim_value nor dim_param set, or (c) a shape of rank 1 with a unique dim_param. In contrast, the first output cannot have the shape [2] since [2] and [3] are not compatible.
Type Constraints
V tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): All Tensor types
B tensor(bool): Only bool
OnnxImputer¶

class skl2onnx.algebra.onnx_ops.OnnxImputer(*args, **kwargs)¶
Version
Onnx name: Imputer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Replaces inputs that equal one value with another, leaving all other elements alone.
This operator is typically used to replace missing values in situations where they have a canonical representation, such as -1, 0, NaN, or some extreme value.
One and only one of imputed_value_floats or imputed_value_int64s should be defined – floats if the input tensor holds floats, integers if the input tensor holds integers. The imputed values must all fit within the width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined, which one depends on whether floats or integers are being processed.
The imputed_value attribute length can be 1 element, or it can have one element per input feature. In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature.
Attributes
imputed_value_floats: Value(s) to change to. Default value is ````
imputed_value_int64s: Value(s) to change to. Default value is ````
replaced_value_float: A value that needs replacing. Default value is ``name: "replaced_value_float" f: 0.0 type: FLOAT``
replaced_value_int64: A value that needs replacing. Default value is ``name: "replaced_value_int64" i: 0 type: INT``
Inputs
X (heterogeneous)T: Data to be processed.
Outputs
Y (heterogeneous)T: Imputed output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input type must be a tensor of a numeric type, either [N,C] or [C]. The output type will be of the same tensor type and shape.
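The replace-and-broadcast behaviour described above can be sketched in NumPy (illustrative only; `impute` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def impute(X, imputed_value_floats, replaced_value_float=0.0):
    # imputed_value_floats has length 1 or F for input of shape [*, F];
    # a length-F array is broadcast along the last dimension.
    imputed = np.broadcast_to(np.asarray(imputed_value_floats, dtype=X.dtype), X.shape)
    if np.isnan(replaced_value_float):
        mask = np.isnan(X)            # NaN never compares equal, so test it explicitly
    else:
        mask = X == replaced_value_float
    return np.where(mask, imputed, X)

X = np.array([[np.nan, 2.0], [3.0, np.nan]])
Y = impute(X, [0.5, 9.0], replaced_value_float=np.nan)  # per-feature imputed values
```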
OnnxImputer_1¶

class skl2onnx.algebra.onnx_ops.OnnxImputer_1(*args, **kwargs)¶
Version
Onnx name: Imputer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Replaces inputs that equal one value with another, leaving all other elements alone.
This operator is typically used to replace missing values in situations where they have a canonical representation, such as -1, 0, NaN, or some extreme value.
One and only one of imputed_value_floats or imputed_value_int64s should be defined – floats if the input tensor holds floats, integers if the input tensor holds integers. The imputed values must all fit within the width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined, which one depends on whether floats or integers are being processed.
The imputed_value attribute length can be 1 element, or it can have one element per input feature. In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature.
Attributes
imputed_value_floats: Value(s) to change to. Default value is ````
imputed_value_int64s: Value(s) to change to. Default value is ````
replaced_value_float: A value that needs replacing. Default value is ``name: "replaced_value_float" f: 0.0 type: FLOAT``
replaced_value_int64: A value that needs replacing. Default value is ``name: "replaced_value_int64" i: 0 type: INT``
Inputs
X (heterogeneous)T: Data to be processed.
Outputs
Y (heterogeneous)T: Imputed output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input type must be a tensor of a numeric type, either [N,C] or [C]. The output type will be of the same tensor type and shape.
OnnxInstanceNormalization¶

class skl2onnx.algebra.onnx_ops.OnnxInstanceNormalization(*args, **kwargs)¶
Version
Onnx name: InstanceNormalization
This version of the operator has been available since version 6.
Summary
Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022.
y = scale * (x - mean) / sqrt(variance + epsilon) + B, where mean and variance are computed per instance per channel.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is ``name: "epsilon" f: 9.999999747378752e-06 type: FLOAT``
Inputs
input (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: The input 1dimensional scale tensor of size C.
B (heterogeneous)T: The input 1dimensional bias tensor of size C.
Outputs
output (heterogeneous)T: The output tensor of the same shape as input.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
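The per-instance, per-channel formula above can be sketched in NumPy (illustrative only; `instance_norm` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def instance_norm(x, scale, B, epsilon=1e-5):
    # x: (N, C, *spatial); mean and variance are computed per instance, per channel
    axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    shape = (1, -1) + (1,) * (x.ndim - 2)   # reshape scale/B to broadcast over C
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + B.reshape(shape)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))
y = instance_norm(x, np.ones(3), np.zeros(3))
```

With scale = 1 and B = 0, each (instance, channel) slice of the output has mean ~0 and standard deviation ~1.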
OnnxInstanceNormalization_1¶

class skl2onnx.algebra.onnx_ops.OnnxInstanceNormalization_1(*args, **kwargs)¶
Version
Onnx name: InstanceNormalization
This version of the operator has been available since version 1.
Summary
Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022.
y = scale * (x - mean) / sqrt(variance + epsilon) + B, where mean and variance are computed per instance per channel.
Attributes
consumed_inputs: legacy optimization attribute. Default value is ````
epsilon: The epsilon value to use to avoid division by zero, default is 1e-5f. Default value is ``name: "epsilon" f: 9.999999747378752e-06 type: FLOAT``
Inputs
input (heterogeneous)T: The input 4dimensional tensor of shape NCHW.
scale (heterogeneous)T: The input 1dimensional scale tensor of size C.
B (heterogeneous)T: The input 1dimensional bias tensor of size C.
Outputs
output (heterogeneous)T: The output 4dimensional tensor of the same shape as input.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxInstanceNormalization_6¶

class skl2onnx.algebra.onnx_ops.OnnxInstanceNormalization_6(*args, **kwargs)¶
Version
Onnx name: InstanceNormalization
This version of the operator has been available since version 6.
Summary
Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022.
y = scale * (x - mean) / sqrt(variance + epsilon) + B, where mean and variance are computed per instance per channel.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is ``name: "epsilon" f: 9.999999747378752e-06 type: FLOAT``
Inputs
input (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: The input 1dimensional scale tensor of size C.
B (heterogeneous)T: The input 1dimensional bias tensor of size C.
Outputs
output (heterogeneous)T: The output tensor of the same shape as input.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxIsInf¶

class skl2onnx.algebra.onnx_ops.OnnxIsInf(*args, **kwargs)¶
Version
Onnx name: IsInf
This version of the operator has been available since version 10.
Summary
Map infinity to true and other values to false.
Attributes
detect_negative: (Optional) Whether to map negative infinity to true. Defaults to 1 so that negative infinity induces true. Set this attribute to 0 if negative infinity should be mapped to false. Default value is ``name: "detect_negative" i: 1 type: INT``
detect_positive: (Optional) Whether to map positive infinity to true. Defaults to 1 so that positive infinity induces true. Set this attribute to 0 if positive infinity should be mapped to false. Default value is ``name: "detect_positive" i: 1 type: INT``
Inputs
X (heterogeneous)T1: input
Outputs
Y (heterogeneous)T2: output
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float tensors.
T2 tensor(bool): Constrain output types to boolean tensors.
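The effect of the two detect flags can be sketched in NumPy (illustrative only; `is_inf` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def is_inf(x, detect_negative=1, detect_positive=1):
    # Each flag independently enables detection of one sign of infinity;
    # NaN and finite values always map to False.
    pos = np.isposinf(x) if detect_positive else np.zeros(x.shape, dtype=bool)
    neg = np.isneginf(x) if detect_negative else np.zeros(x.shape, dtype=bool)
    return pos | neg

x = np.array([np.inf, -np.inf, 0.0, np.nan])
```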
OnnxIsInf_10¶

class skl2onnx.algebra.onnx_ops.OnnxIsInf_10(*args, **kwargs)¶
Version
Onnx name: IsInf
This version of the operator has been available since version 10.
Summary
Map infinity to true and other values to false.
Attributes
detect_negative: (Optional) Whether to map negative infinity to true. Defaults to 1 so that negative infinity induces true. Set this attribute to 0 if negative infinity should be mapped to false. Default value is ``name: "detect_negative" i: 1 type: INT``
detect_positive: (Optional) Whether to map positive infinity to true. Defaults to 1 so that positive infinity induces true. Set this attribute to 0 if positive infinity should be mapped to false. Default value is ``name: "detect_positive" i: 1 type: INT``
Inputs
X (heterogeneous)T1: input
Outputs
Y (heterogeneous)T2: output
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float tensors.
T2 tensor(bool): Constrain output types to boolean tensors.
OnnxIsNaN¶

class skl2onnx.algebra.onnx_ops.OnnxIsNaN(*args, **kwargs)¶
Version
Onnx name: IsNaN
This version of the operator has been available since version 9.
Summary
Returns which elements of the input are NaN.
Inputs
X (heterogeneous)T1: input
Outputs
Y (heterogeneous)T2: output
Type Constraints
T1 tensor(float16), tensor(float), tensor(double): Constrain input types to float tensors.
T2 tensor(bool): Constrain output types to boolean tensors.
OnnxIsNaN_9¶

class skl2onnx.algebra.onnx_ops.OnnxIsNaN_9(*args, **kwargs)¶
Version
Onnx name: IsNaN
This version of the operator has been available since version 9.
Summary
Returns which elements of the input are NaN.
Inputs
X (heterogeneous)T1: input
Outputs
Y (heterogeneous)T2: output
Type Constraints
T1 tensor(float16), tensor(float), tensor(double): Constrain input types to float tensors.
T2 tensor(bool): Constrain output types to boolean tensors.
OnnxLRN¶

class skl2onnx.algebra.onnx_ops.OnnxLRN(*args, **kwargs)¶
Version
Onnx name: LRN
This version of the operator has been available since version 1.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2, …, Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk]) ^ beta
Attributes
alpha: Scaling parameter. Default value is ``name: "alpha" f: 9.999999747378752e-05 type: FLOAT``
beta: The exponent. Default value is ``name: "beta" f: 0.75 type: FLOAT``
bias: Default value is ``name: "bias" f: 1.0 type: FLOAT``
size (required): The number of channels to sum over. Default value is ````
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output tensor, which has the shape and type as input tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
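The windowed square_sum and normalization above can be sketched in NumPy (illustrative only; `lrn` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def lrn(X, size, alpha=1e-4, beta=0.75, bias=1.0):
    # X: (N, C, *spatial); square_sum sums squares over a window of `size` channels
    C = X.shape[1]
    square_sum = np.zeros_like(X)
    for c in range(C):
        lo = max(0, c - (size - 1) // 2)
        hi = min(C - 1, c + -(-(size - 1) // 2))  # -(-a // b) is ceil(a / b)
        square_sum[:, c] = (X[:, lo:hi + 1] ** 2).sum(axis=1)
    return X / (bias + alpha / size * square_sum) ** beta

X = np.arange(24, dtype=np.float64).reshape(1, 4, 3, 2)
Y = lrn(X, size=3)
```

With bias = 1 the denominator is always >= 1, so every output magnitude is at most the corresponding input magnitude.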
OnnxLRN_1¶

class skl2onnx.algebra.onnx_ops.OnnxLRN_1(*args, **kwargs)¶
Version
Onnx name: LRN
This version of the operator has been available since version 1.
Summary
Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, …, dk] in a tensor of shape (N x C x D1 x D2, …, Dk), its region is {X[n, i, d1, …, dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.
square_sum[n, c, d1, …, dk] = sum(X[n, i, d1, …, dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).
Y[n, c, d1, …, dk] = X[n, c, d1, …, dk] / (bias + alpha / size * square_sum[n, c, d1, …, dk]) ^ beta
Attributes
alpha: Scaling parameter. Default value is ``name: "alpha" f: 9.999999747378752e-05 type: FLOAT``
beta: The exponent. Default value is ``name: "beta" f: 0.75 type: FLOAT``
bias: Default value is ``name: "bias" f: 1.0 type: FLOAT``
size (required): The number of channels to sum over. Default value is ````
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output tensor, which has the shape and type as input tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxLSTM¶

class skl2onnx.algebra.onnx_ops.OnnxLSTM(*args, **kwargs)¶
Version
Onnx name: LSTM
This version of the operator has been available since version 7.
Summary
Computes a one-layer LSTM. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
i - input gate
o - output gate
f - forget gate
c - cell gate
t - time step (t-1 means previous time step)
W[iofc] - W parameter weight matrix for input, output, forget, and cell gates
R[iofc] - R recurrence weight matrix for input, output, forget, and cell gates
Wb[iofc] - W bias vectors for input, output, forget, and cell gates
Rb[iofc] - R bias vectors for input, output, forget, and cell gates
P[iof] - P peephole weight vector for input, output, and forget gates
WB[iofc] - W parameter weight matrix for backward input, output, forget, and cell gates
RB[iofc] - R recurrence weight matrix for backward input, output, forget, and cell gates
WBb[iofc] - W bias vectors for backward input, output, forget, and cell gates
RBb[iofc] - R bias vectors for backward input, output, forget, and cell gates
PB[iof] - P peephole weight vector for backward input, output, and forget gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh, h=Tanh):
it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Pi (.) Ct-1 + Wbi + Rbi)
ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Pf (.) Ct-1 + Wbf + Rbf)
ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)
Ct = ft (.) Ct-1 + it (.) ct
ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Po (.) Ct + Wbo + Rbo)
Ht = ot (.) h(Ct)
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 3 (or 6 if bidirectional) activation functions for input, output, forget, cell, and hidden. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: "direction" s: "forward" type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
input_forget: Couple the input and forget gates if 1. Default value is ``name: "input_forget" i: 0 type: INT``
Inputs
Between 3 and 8 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[iofc] and WB[iofc] (if bidirectional) along dimension 0. The tensor has shape [num_directions, 4*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[iofc] and RB[iofc] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 4*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for input gate. Concatenation of [Wb[iofc], Rb[iofc]], and [WBb[iofc], RBb[iofc]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 8*hidden_size]. Optional: If not specified  assumed to be 0.
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified  assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
initial_c (optional, heterogeneous)T: Optional initial value of the cell. If not specified  assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
P (optional, heterogeneous)T: The weight tensor for peepholes. Concatenation of P[iof] and PB[iof] (if bidirectional) along dimension 0. It has shape [num_directions, 3*hidden_size]. Optional: If not specified - assumed to be 0.
Outputs
Between 0 and 3 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Y_c (optional, heterogeneous)T: The last output value of the cell. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
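The default-activation equations above can be sketched for a single time step in NumPy (illustrative only, without peepholes or bidirectional handling; `lstm_step` is a hypothetical helper, not a skl2onnx API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(Xt, H_prev, C_prev, W, R, Wb, Rb):
    # One time step with the default activations (f=Sigmoid, g=Tanh, h=Tanh).
    # W: (4*hidden, input), R: (4*hidden, hidden), gates stacked in the
    # i, o, f, c order used by the W[iofc]/R[iofc] notation above.
    gates = Xt @ W.T + H_prev @ R.T + Wb + Rb
    zi, zo, zf, zc = np.split(gates, 4, axis=1)
    i, o, f = sigmoid(zi), sigmoid(zo), sigmoid(zf)
    c = np.tanh(zc)
    Ct = f * C_prev + i * c       # Ct = ft (.) Ct-1 + it (.) ct
    Ht = o * np.tanh(Ct)          # Ht = ot (.) h(Ct)
    return Ht, Ct

batch, input_size, hidden = 2, 3, 4
Xt = np.ones((batch, input_size))
H0 = np.zeros((batch, hidden))
C0 = np.ones((batch, hidden))
W = np.zeros((4 * hidden, input_size))
R = np.zeros((4 * hidden, hidden))
Wb = np.zeros(4 * hidden)
Rb = np.zeros(4 * hidden)
H1, C1 = lstm_step(Xt, H0, C0, W, R, Wb, Rb)
```

With all-zero weights every gate pre-activation is 0, so i = o = f = 0.5 and c = 0, giving Ct = 0.5 * C_prev and Ht = 0.5 * tanh(Ct).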
OnnxLSTM_1¶

class skl2onnx.algebra.onnx_ops.OnnxLSTM_1(*args, **kwargs)¶
Version
Onnx name: LSTM
This version of the operator has been available since version 1.
Summary
Computes a one-layer LSTM. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X  input tensor
i  input gate
o  output gate
f  forget gate
c  cell gate
t  time step (t1 means previous time step)
W[iofc]  W parameter weight matrix for input, output, forget, and cell gates
R[iofc]  R recurrence weight matrix for input, output, forget, and cell gates
Wb[iofc]  W bias vectors for input, output, forget, and cell gates
Rb[iofc]  R bias vectors for input, output, forget, and cell gates
P[iof]  P peephole weight vector for input, output, and forget gates
WB[iofc]  W parameter weight matrix for backward input, output, forget, and cell gates
RB[iofc]  R recurrence weight matrix for backward input, output, forget, and cell gates
WBb[iofc]  W bias vectors for backward input, output, forget, and cell gates
RBb[iofc]  R bias vectors for backward input, output, forget, and cell gates
PB[iof]  P peephole weight vector for backward input, output, and forget gates
H  Hidden state
num_directions  2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh, h=Tanh):
it = f(Xt*(Wi^T) + Ht-1*Ri + Pi (.) Ct-1 + Wbi + Rbi)
ft = f(Xt*(Wf^T) + Ht-1*Rf + Pf (.) Ct-1 + Wbf + Rbf)
ct = g(Xt*(Wc^T) + Ht-1*Rc + Wbc + Rbc)
Ct = ft (.) Ct-1 + it (.) ct
ot = f(Xt*(Wo^T) + Ht-1*Ro + Po (.) Ct + Wbo + Rbo)
Ht = ot (.) h(Ct)
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 3 (or 6 if bidirectional) activation functions for input, output, forget, cell, and hidden. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
input_forget: Couple the input and forget gates if 1, default 0. Default value is ``name: “input_forget” i: 0 type: INT``
output_sequence: The sequence output for the hidden is optional if 0. Default 0. Default value is ``name: “output_sequence” i: 0 type: INT``
Inputs
Between 3 and 8 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[iofc] and WB[iofc] (if bidirectional) along dimension 0. The tensor has shape [num_directions, 4*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[iofc] and RB[iofc] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 4*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for input gate. Concatenation of [Wb[iofc], Rb[iofc]], and [WBb[iofc], RBb[iofc]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 8*hidden_size]. Optional: If not specified - assumed to be 0.
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
initial_c (optional, heterogeneous)T: Optional initial value of the cell. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
P (optional, heterogeneous)T: The weight tensor for peepholes. Concatenation of P[iof] and PB[iof] (if bidirectional) along dimension 0. It has shape [num_directions, 3*hidden_size]. Optional: If not specified - assumed to be 0.
Outputs
Between 0 and 3 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size]. It is optional if output_sequence is 0.
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Y_c (optional, heterogeneous)T: The last output value of the cell. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
OnnxLSTM_7¶

class skl2onnx.algebra.onnx_ops.OnnxLSTM_7(*args, **kwargs)¶
Version
Onnx name: LSTM
This version of the operator has been available since version 7.
Summary
Computes a one-layer LSTM. This operator is usually supported via some custom implementation such as CuDNN.
Notations:
X - input tensor
i - input gate
o - output gate
f - forget gate
c - cell gate
t - time step (t-1 means previous time step)
W[iofc] - W parameter weight matrix for input, output, forget, and cell gates
R[iofc] - R recurrence weight matrix for input, output, forget, and cell gates
Wb[iofc] - W bias vectors for input, output, forget, and cell gates
Rb[iofc] - R bias vectors for input, output, forget, and cell gates
P[iof] - P peephole weight vector for input, output, and forget gates
WB[iofc] - W parameter weight matrix for backward input, output, forget, and cell gates
RB[iofc] - R recurrence weight matrix for backward input, output, forget, and cell gates
WBb[iofc] - W bias vectors for backward input, output, forget, and cell gates
RBb[iofc] - R bias vectors for backward input, output, forget, and cell gates
PB[iof] - P peephole weight vector for backward input, output, and forget gates
H - Hidden state
num_directions - 2 if direction == bidirectional else 1
Activation functions:
Relu(x) - max(0, x)
Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})
Sigmoid(x) - 1/(1 + e^{-x})
(NOTE: Below are optional)
Affine(x) - alpha*x + beta
LeakyRelu(x) - x if x >= 0 else alpha * x
ThresholdedRelu(x) - x if x >= alpha else 0
ScaledTanh(x) - alpha*Tanh(beta*x)
HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)
Elu(x) - x if x >= 0 else alpha*(e^x - 1)
Softsign(x) - x/(1 + |x|)
Softplus(x) - log(1 + e^x)
Equations (Default: f=Sigmoid, g=Tanh, h=Tanh):
it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Pi (.) Ct-1 + Wbi + Rbi)
ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Pf (.) Ct-1 + Wbf + Rbf)
ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)
Ct = ft (.) Ct-1 + it (.) ct
ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Po (.) Ct + Wbo + Rbo)
Ht = ot (.) h(Ct)
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
activation_alpha: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. For example with LeakyRelu, the default alpha is 0.01. Default value is ````
activation_beta: Optional scaling values used by some activation functions. The values are consumed in the order of activation functions, for example (f, g, h) in LSTM. Default values are the same as of corresponding ONNX operators. Default value is ````
activations: A list of 3 (or 6 if bidirectional) activation functions for input, output, forget, cell, and hidden. The activation functions must be one of the activation functions specified above. Optional: See the equations for default if not specified. Default value is ````
clip: Cell clip threshold. Clipping bounds the elements of a tensor in the range of [-threshold, +threshold] and is applied to the input of activations. No clip if not specified. Default value is ````
direction: Specify if the RNN is forward, reverse, or bidirectional. Must be one of forward (default), reverse, or bidirectional. Default value is ``name: “direction” s: “forward” type: STRING``
hidden_size: Number of neurons in the hidden layer. Default value is ````
input_forget: Couple the input and forget gates if 1. Default value is ``name: “input_forget” i: 0 type: INT``
Inputs
Between 3 and 8 inputs.
X (heterogeneous)T: The input sequences packed (and potentially padded) into one 3D tensor with the shape of [seq_length, batch_size, input_size].
W (heterogeneous)T: The weight tensor for the gates. Concatenation of W[iofc] and WB[iofc] (if bidirectional) along dimension 0. The tensor has shape [num_directions, 4*hidden_size, input_size].
R (heterogeneous)T: The recurrence weight tensor. Concatenation of R[iofc] and RB[iofc] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 4*hidden_size, hidden_size].
B (optional, heterogeneous)T: The bias tensor for input gate. Concatenation of [Wb[iofc], Rb[iofc]], and [WBb[iofc], RBb[iofc]] (if bidirectional) along dimension 0. This tensor has shape [num_directions, 8*hidden_size]. Optional: If not specified - assumed to be 0.
sequence_lens (optional, heterogeneous)T1: Optional tensor specifying lengths of the sequences in a batch. If not specified - assumed all sequences in the batch to have length seq_length. It has shape [batch_size].
initial_h (optional, heterogeneous)T: Optional initial value of the hidden. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
initial_c (optional, heterogeneous)T: Optional initial value of the cell. If not specified - assumed to be 0. It has shape [num_directions, batch_size, hidden_size].
P (optional, heterogeneous)T: The weight tensor for peepholes. Concatenation of P[iof] and PB[iof] (if bidirectional) along dimension 0. It has shape [num_directions, 3*hidden_size]. Optional: If not specified - assumed to be 0.
Outputs
Between 0 and 3 outputs.
Y (optional, heterogeneous)T: A tensor that concats all the intermediate output values of the hidden. It has shape [seq_length, num_directions, batch_size, hidden_size].
Y_h (optional, heterogeneous)T: The last output value of the hidden. It has shape [num_directions, batch_size, hidden_size].
Y_c (optional, heterogeneous)T: The last output value of the cell. It has shape [num_directions, batch_size, hidden_size].
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(int32): Constrain seq_lens to integer tensor.
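As a sanity check on the equations above, the following is a minimal NumPy sketch of a single forward LSTM step with the default activations (f=Sigmoid, g=h=Tanh). The function name is hypothetical; the [i, o, f, c] gate stacking follows the W/R layout described above, and the peepholes default to zero. It illustrates the semantics only and is not the runtime implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(Xt, H_prev, C_prev, W, R, Wb, Rb, P=None):
    """One forward LSTM step (single direction, default activations).

    W has shape [4*hidden_size, input_size] and R has shape
    [4*hidden_size, hidden_size], stacked in [i, o, f, c] order;
    Wb and Rb have shape [4*hidden_size]; P (peepholes) is [i, o, f].
    """
    Wi, Wo, Wf, Wc = np.split(W, 4)
    Ri, Ro, Rf, Rc = np.split(R, 4)
    Wbi, Wbo, Wbf, Wbc = np.split(Wb, 4)
    Rbi, Rbo, Rbf, Rbc = np.split(Rb, 4)
    Pi, Po, Pf = np.split(P, 3) if P is not None else (0.0, 0.0, 0.0)
    # Gate equations, mirroring the LSTM-7 equations above.
    it = sigmoid(Xt @ Wi.T + H_prev @ Ri.T + Pi * C_prev + Wbi + Rbi)
    ft = sigmoid(Xt @ Wf.T + H_prev @ Rf.T + Pf * C_prev + Wbf + Rbf)
    ct = np.tanh(Xt @ Wc.T + H_prev @ Rc.T + Wbc + Rbc)
    Ct = ft * C_prev + it * ct
    ot = sigmoid(Xt @ Wo.T + H_prev @ Ro.T + Po * Ct + Wbo + Rbo)
    Ht = ot * np.tanh(Ct)
    return Ht, Ct
```

Iterating this step over the seq_length axis of X reproduces the Y/Y_h/Y_c outputs for a single direction.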
OnnxLabelEncoder¶

class skl2onnx.algebra.onnx_ops.OnnxLabelEncoder(*args, **kwargs)¶
Version
Onnx name: LabelEncoder
This version of the operator has been available since version 2 of domain ai.onnx.ml.
Summary
Maps each element in the input tensor to another value.
The mapping is determined by the two parallel ‘keys_*’ and ‘values_*’ attributes. The i-th value in the specified ‘keys_*’ attribute is mapped to the i-th value in the specified ‘values_*’ attribute. This implies that the input’s element type and the element type of the specified ‘keys_*’ attribute should be identical, while the output type is identical to the specified ‘values_*’ attribute. If an input element cannot be found in the specified ‘keys_*’ attribute, the ‘default_*’ value that matches the specified ‘values_*’ attribute is used as its output value.
Let’s consider an example which maps a string tensor to an integer tensor. Assume ‘keys_strings’ is [“Amy”, “Sally”], ‘values_int64s’ is [5, 6], and ‘default_int64’ is ‘-1’. The input [“Dori”, “Amy”, “Amy”, “Sally”, “Sally”] would be mapped to [-1, 5, 5, 6, 6].
Since this operator is a one-to-one mapping, its input and output shapes are the same. Notice that only one of ‘keys_*’/’values_*’ can be set.
For key lookup, bitwise comparison is used so even a float NaN can be mapped to a value in ‘values_*’ attribute.
Attributes
default_float: A float. Default value is ``name: "default_float" f: 0.0 type: FLOAT``
default_int64: An integer. Default value is ``name: "default_int64" i: -1 type: INT``
default_string: A string. Default value is ``name: "default_string" s: "_Unused" type: STRING``
keys_floats: A list of floats. Default value is ````
keys_int64s: A list of ints. Default value is ````
keys_strings: A list of strings. One and only one of ‘keys_*’s should be set. Default value is ````
values_floats: A list of floats. Default value is ````
values_int64s: A list of ints. Default value is ````
values_strings: A list of strings. One and only one of ‘value_*’s should be set. Default value is ````
Inputs
X (heterogeneous)T1: Input data. It can be either tensor or scalar.
Outputs
Y (heterogeneous)T2: Output data.
Type Constraints
T1 tensor(string), tensor(int64), tensor(float): The input type is a tensor of any shape.
T2 tensor(string), tensor(int64), tensor(float): Output type is determined by the specified ‘values_*’ attribute.
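The keys/values mapping described above behaves like a plain dictionary lookup with a fallback. The sketch below (a hypothetical helper, not the runtime implementation, and ignoring the bitwise NaN comparison detail) reproduces the example from the summary:

```python
def label_encode(inputs, keys, values, default):
    # The parallel 'keys_*'/'values_*' attributes form a lookup table;
    # elements absent from the keys fall back to the 'default_*' value.
    table = dict(zip(keys, values))
    return [table.get(x, default) for x in inputs]

# keys_strings / values_int64s / default_int64 from the summary's example.
encoded = label_encode(["Dori", "Amy", "Amy", "Sally", "Sally"],
                       keys=["Amy", "Sally"], values=[5, 6], default=-1)
# encoded == [-1, 5, 5, 6, 6]
```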
OnnxLabelEncoder_1¶

class skl2onnx.algebra.onnx_ops.OnnxLabelEncoder_1(*args, **kwargs)¶
Version
Onnx name: LabelEncoder
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts strings to integers and vice versa.
If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.
Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.
When converting from integers to strings, the string is fetched from the ‘classes_strings’ list, by simple indexing.
When converting from strings to integers, the string is looked up in the list and the index at which it is found is used as the converted value.
Attributes
default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_int64” i: -1 type: INT``
default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_string” s: “_Unused” type: STRING``
Inputs
X (heterogeneous)T1: Input data.
Outputs
Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.
Type Constraints
T1 tensor(string), tensor(int64): The input type must be a tensor of integers or strings, of any shape.
T2 tensor(string), tensor(int64): The output type will be a tensor of strings or integers, and will have the same shape as the input.
OnnxLabelEncoder_2¶

class skl2onnx.algebra.onnx_ops.OnnxLabelEncoder_2(*args, **kwargs)¶
Version
Onnx name: LabelEncoder
This version of the operator has been available since version 2 of domain ai.onnx.ml.
Summary
Maps each element in the input tensor to another value.
The mapping is determined by the two parallel ‘keys_*’ and ‘values_*’ attributes. The i-th value in the specified ‘keys_*’ attribute is mapped to the i-th value in the specified ‘values_*’ attribute. This implies that the input’s element type and the element type of the specified ‘keys_*’ attribute should be identical, while the output type is identical to the specified ‘values_*’ attribute. If an input element cannot be found in the specified ‘keys_*’ attribute, the ‘default_*’ value that matches the specified ‘values_*’ attribute is used as its output value.
Let’s consider an example which maps a string tensor to an integer tensor. Assume ‘keys_strings’ is [“Amy”, “Sally”], ‘values_int64s’ is [5, 6], and ‘default_int64’ is ‘-1’. The input [“Dori”, “Amy”, “Amy”, “Sally”, “Sally”] would be mapped to [-1, 5, 5, 6, 6].
Since this operator is a one-to-one mapping, its input and output shapes are the same. Notice that only one of ‘keys_*’/’values_*’ can be set.
For key lookup, bitwise comparison is used so even a float NaN can be mapped to a value in ‘values_*’ attribute.
Attributes
default_float: A float. Default value is ``name: "default_float" f: 0.0 type: FLOAT``
default_int64: An integer. Default value is ``name: "default_int64" i: -1 type: INT``
default_string: A string. Default value is ``name: "default_string" s: "_Unused" type: STRING``
keys_floats: A list of floats. Default value is ````
keys_int64s: A list of ints. Default value is ````
keys_strings: A list of strings. One and only one of ‘keys_*’s should be set. Default value is ````
values_floats: A list of floats. Default value is ````
values_int64s: A list of ints. Default value is ````
values_strings: A list of strings. One and only one of ‘value_*’s should be set. Default value is ````
Inputs
X (heterogeneous)T1: Input data. It can be either tensor or scalar.
Outputs
Y (heterogeneous)T2: Output data.
Type Constraints
T1 tensor(string), tensor(int64), tensor(float): The input type is a tensor of any shape.
T2 tensor(string), tensor(int64), tensor(float): Output type is determined by the specified ‘values_*’ attribute.
OnnxLeakyRelu¶

class skl2onnx.algebra.onnx_ops.OnnxLeakyRelu(*args, **kwargs)¶
Version
Onnx name: LeakyRelu
This version of the operator has been available since version 6.
Summary
LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one output data (Tensor<T>) where the function f(x) = alpha * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise.
Attributes
alpha: Coefficient of leakage. Default value is ``name: “alpha” f: 0.009999999776482582 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
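The elementwise function above is straightforward to express in NumPy; this is an illustrative sketch of the operator's semantics (the function name is hypothetical), with alpha defaulting to the operator's 0.01 coefficient of leakage:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = x for x >= 0, f(x) = alpha * x for x < 0, elementwise.
    return np.where(x >= 0, x, alpha * x)

y = leaky_relu(np.array([-2.0, 0.0, 3.0]))
# y == [-0.02, 0.0, 3.0]
```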
OnnxLeakyRelu_1¶

class skl2onnx.algebra.onnx_ops.OnnxLeakyRelu_1(*args, **kwargs)¶
Version
Onnx name: LeakyRelu
This version of the operator has been available since version 1.
Summary
LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one output data (Tensor<T>) where the function f(x) = alpha * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise.
Attributes
alpha: Coefficient of leakage, default to 0.01. Default value is ``name: “alpha” f: 0.009999999776482582 type: FLOAT``
consumed_inputs: Legacy optimization attribute. Default value is ````
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxLeakyRelu_6¶

class skl2onnx.algebra.onnx_ops.OnnxLeakyRelu_6(*args, **kwargs)¶
Version
Onnx name: LeakyRelu
This version of the operator has been available since version 6.
Summary
LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one output data (Tensor<T>) where the function f(x) = alpha * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise.
Attributes
alpha: Coefficient of leakage. Default value is ``name: “alpha” f: 0.009999999776482582 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxLess¶

class skl2onnx.algebra.onnx_ops.OnnxLess(*args, **kwargs)¶
Version
Onnx name: Less
This version of the operator has been available since version 9.
Summary
Returns the tensor resulted from performing the less logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
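The broadcasting behaviour described above matches NumPy's `<` operator, which can serve as a reference for the operator's semantics (illustrative only; the runtime implementation is separate):

```python
import numpy as np

# Elementwise "less" with Numpy-style broadcasting: B of shape (3,)
# is broadcast against A of shape (2, 3), yielding a boolean tensor.
A = np.array([[1, 5, 3],
              [4, 2, 6]])
B = np.array([2, 2, 2])
C = A < B
# C == [[True, False, False],
#       [False, False, False]]
```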
OnnxLessOrEqual¶

class skl2onnx.algebra.onnx_ops.OnnxLessOrEqual(*args, **kwargs)¶
Version
Onnx name: LessOrEqual
This version of the operator has been available since version 12.
Summary
Returns the tensor resulted from performing the less_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxLessOrEqual_12¶

class skl2onnx.algebra.onnx_ops.OnnxLessOrEqual_12(*args, **kwargs)¶
Version
Onnx name: LessOrEqual
This version of the operator has been available since version 12.
Summary
Returns the tensor resulted from performing the less_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxLess_1¶

class skl2onnx.algebra.onnx_ops.OnnxLess_1(*args, **kwargs)¶
Version
Onnx name: Less
This version of the operator has been available since version 1.
Summary
Returns the tensor resulted from performing the less logical operation elementwise on the input tensors A and B.
If broadcasting is enabled, the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
axis: If set, defines the broadcast dimensions. Default value is ````
broadcast: Enable broadcasting. Default value is ``name: “broadcast” i: 0 type: INT``
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrains input to float tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxLess_7¶

class skl2onnx.algebra.onnx_ops.OnnxLess_7(*args, **kwargs)¶
Version
Onnx name: Less
This version of the operator has been available since version 7.
Summary
Returns the tensor resulted from performing the less logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrains input to float tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxLess_9¶

class skl2onnx.algebra.onnx_ops.OnnxLess_9(*args, **kwargs)¶
Version
Onnx name: Less
This version of the operator has been available since version 9.
Summary
Returns the tensor resulted from performing the less logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxLinearClassifier¶

class skl2onnx.algebra.onnx_ops.OnnxLinearClassifier(*args, **kwargs)¶
Version
Onnx name: LinearClassifier
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Linear classifier
Attributes
classlabels_ints: Class labels when using integer labels. One and only one ‘classlabels’ attribute must be defined. Default value is ````
classlabels_strings: Class labels when using string labels. One and only one ‘classlabels’ attribute must be defined. Default value is ````
coefficients (required): A collection of weights of the model(s). Default value is ````
intercepts: A collection of intercepts. Default value is ````
multi_class: Indicates whether to do OvR or multinomial (0=OvR is the default). Default value is ``name: “multi_class” i: 0 type: INT``
post_transform: Indicates the transform to apply to the scores vector. One of ‘NONE,’ ‘SOFTMAX,’ ‘LOGISTIC,’ ‘SOFTMAX_ZERO,’ or ‘PROBIT.’ Default value is ``name: “post_transform” s: “NONE” type: STRING``
Inputs
X (heterogeneous)T1: Data to be classified.
Outputs
Y (heterogeneous)T2: Classification outputs (one class per example).
Z (heterogeneous)tensor(float): Classification scores ([N,E] - one score for each class and example).
Type Constraints
T1 tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type, and of shape [N,C] or [C]. In the latter case, it will be treated as [1,C].
T2 tensor(string), tensor(int64): The output will be a tensor of strings or integers.
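The scoring can be sketched in NumPy as below, assuming post_transform is ‘NONE’ and a hypothetical layout of one coefficient row and one intercept per class; the function name is an assumption for illustration, not the converter's implementation:

```python
import numpy as np

def linear_classifier(X, coefficients, intercepts, classlabels):
    # scores has shape [N, E]: one score per class (E) and example (N);
    # the predicted label Y is the best-scoring class for each example.
    scores = X @ coefficients.T + intercepts
    labels = [classlabels[i] for i in scores.argmax(axis=1)]
    return labels, scores

labels, scores = linear_classifier(
    np.array([[2.0, 1.0], [0.0, 3.0]]),
    coefficients=np.array([[1.0, 0.0], [0.0, 1.0]]),
    intercepts=np.array([0.0, 0.0]),
    classlabels=["cat", "dog"])
# labels == ["cat", "dog"]
```

A post_transform such as ‘SOFTMAX’ would be applied to each row of scores before they are returned as Z.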
OnnxLinearClassifier_1¶

class skl2onnx.algebra.onnx_ops.OnnxLinearClassifier_1(*args, **kwargs)¶
Version
Onnx name: LinearClassifier
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Linear classifier
Attributes
classlabels_ints: Class labels when using integer labels. One and only one ‘classlabels’ attribute must be defined. Default value is ````
classlabels_strings: Class labels when using string labels. One and only one ‘classlabels’ attribute must be defined. Default value is ````
coefficients (required): A collection of weights of the model(s). Default value is ````
intercepts: A collection of intercepts. Default value is ````
multi_class: Indicates whether to do OvR or multinomial (0=OvR is the default). Default value is ``name: “multi_class”
i: 0 type: INT `` * post_transform: Indicates the transform to apply to the scores vector.<br>One of ‘NONE,’ ‘SOFTMAX,’ ‘LOGISTIC,’ ‘SOFTMAX_ZERO,’ or ‘PROBIT’ Default value is ``name: “post_transform” s: “NONE” type: STRING ``
Inputs
X