Supported scikit-learn Models#

skl2onnx can currently convert the list of models below. They were tested with onnxruntime. Each of the following classes overloads the same methods as OnnxSklearnPipeline does: it wraps an existing scikit-learn class by dynamically creating a new class that inherits from OnnxOperatorMixin, which implements the to_onnx method.
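For instance, a fitted wrapper can be converted directly. A minimal sketch, assuming scikit-learn, skl2onnx and onnxruntime are installed; the dataset and hyperparameters are only illustrative:

```python
import numpy as np
from sklearn.datasets import load_iris
from skl2onnx.algebra.sklearn_ops import OnnxSklearnLogisticRegression

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)

# Behaves like sklearn.linear_model.LogisticRegression, but also inherits
# to_onnx from OnnxOperatorMixin.
model = OnnxSklearnLogisticRegression(max_iter=500)
model.fit(X, y)

# One training sample is enough for to_onnx to guess the input type.
onx = model.to_onnx(X[:1])

with open("logreg_iris.onnx", "wb") as f:
    f.write(onx.SerializeToString())
```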

Covered Converters#

| Name | Package | Supported |
| --- | --- | --- |
| ARDRegression | linear_model | Yes |
| AdaBoostClassifier | ensemble | Yes |
| AdaBoostRegressor | ensemble | Yes |
| AdditiveChi2Sampler | kernel_approximation | |
| AffinityPropagation | cluster | |
| AgglomerativeClustering | cluster | |
| BaggingClassifier | ensemble | Yes |
| BaggingRegressor | ensemble | Yes |
| BaseDecisionTree | tree | |
| BaseEnsemble | ensemble | |
| BayesianGaussianMixture | mixture | Yes |
| BayesianRidge | linear_model | Yes |
| BernoulliNB | naive_bayes | Yes |
| BernoulliRBM | neural_network | |
| Binarizer | preprocessing | Yes |
| Birch | cluster | |
| BisectingKMeans | cluster | |
| CCA | cross_decomposition | |
| CalibratedClassifierCV | calibration | Yes |
| CategoricalNB | naive_bayes | Yes |
| ClassifierChain | multioutput | |
| ComplementNB | naive_bayes | Yes |
| DBSCAN | cluster | |
| DecisionTreeClassifier | tree | Yes |
| DecisionTreeRegressor | tree | Yes |
| DictVectorizer | feature_extraction | Yes |
| DictionaryLearning | decomposition | |
| ElasticNet | linear_model | Yes |
| ElasticNetCV | linear_model | Yes |
| EllipticEnvelope | covariance | |
| EmpiricalCovariance | covariance | |
| ExtraTreeClassifier | tree | Yes |
| ExtraTreeRegressor | tree | Yes |
| ExtraTreesClassifier | ensemble | Yes |
| ExtraTreesRegressor | ensemble | Yes |
| FactorAnalysis | decomposition | |
| FastICA | decomposition | |
| FeatureAgglomeration | cluster | |
| FeatureHasher | feature_extraction | Yes |
| FunctionTransformer | preprocessing | Yes |
| GammaRegressor | linear_model | Yes |
| GaussianMixture | mixture | Yes |
| GaussianNB | naive_bayes | Yes |
| GaussianProcessClassifier | gaussian_process | Yes |
| GaussianProcessRegressor | gaussian_process | Yes |
| GaussianRandomProjection | random_projection | Yes |
| GenericUnivariateSelect | feature_selection | Yes |
| GradientBoostingClassifier | ensemble | Yes |
| GradientBoostingRegressor | ensemble | Yes |
| GraphicalLasso | covariance | |
| GraphicalLassoCV | covariance | |
| GridSearchCV | model_selection | Yes |
| HDBSCAN | cluster | |
| HistGradientBoostingClassifier | ensemble | Yes |
| HistGradientBoostingRegressor | ensemble | Yes |
| HuberRegressor | linear_model | Yes |
| IncrementalPCA | decomposition | Yes |
| IsolationForest | ensemble | Yes |
| IsotonicRegression | isotonic | |
| KBinsDiscretizer | preprocessing | Yes |
| KMeans | cluster | Yes |
| KNNImputer | impute | Yes |
| KNeighborsClassifier | neighbors | Yes |
| KNeighborsRegressor | neighbors | Yes |
| KNeighborsTransformer | neighbors | Yes |
| KernelCenterer | preprocessing | Yes |
| KernelDensity | neighbors | |
| KernelPCA | decomposition | Yes |
| KernelRidge | kernel_ridge | |
| LabelBinarizer | preprocessing | Yes |
| LabelEncoder | preprocessing | Yes |
| LabelPropagation | semi_supervised | |
| LabelSpreading | semi_supervised | |
| Lars | linear_model | Yes |
| LarsCV | linear_model | Yes |
| Lasso | linear_model | Yes |
| LassoCV | linear_model | Yes |
| LassoLars | linear_model | Yes |
| LassoLarsCV | linear_model | Yes |
| LassoLarsIC | linear_model | Yes |
| LatentDirichletAllocation | decomposition | |
| LedoitWolf | covariance | |
| LinearDiscriminantAnalysis | discriminant_analysis | Yes |
| LinearRegression | linear_model | Yes |
| LinearSVC | svm | Yes |
| LinearSVR | svm | Yes |
| LocalOutlierFactor | neighbors | Yes |
| LogisticRegression | linear_model | Yes |
| LogisticRegressionCV | linear_model | Yes |
| MLPClassifier | neural_network | Yes |
| MLPRegressor | neural_network | Yes |
| MaxAbsScaler | preprocessing | Yes |
| MeanShift | cluster | |
| MinCovDet | covariance | |
| MinMaxScaler | preprocessing | Yes |
| MiniBatchDictionaryLearning | decomposition | |
| MiniBatchKMeans | cluster | Yes |
| MiniBatchNMF | decomposition | |
| MiniBatchSparsePCA | decomposition | |
| MissingIndicator | impute | |
| MultiLabelBinarizer | preprocessing | |
| MultiOutputClassifier | multioutput | Yes |
| MultiOutputRegressor | multioutput | Yes |
| MultiTaskElasticNet | linear_model | Yes |
| MultiTaskElasticNetCV | linear_model | Yes |
| MultiTaskLasso | linear_model | Yes |
| MultiTaskLassoCV | linear_model | Yes |
| MultinomialNB | naive_bayes | Yes |
| NMF | decomposition | |
| NearestCentroid | neighbors | |
| NearestNeighbors | neighbors | Yes |
| NeighborhoodComponentsAnalysis | neighbors | Yes |
| Normalizer | preprocessing | Yes |
| NuSVC | svm | Yes |
| NuSVR | svm | Yes |
| Nystroem | kernel_approximation | |
| OAS | covariance | |
| OPTICS | cluster | |
| OneClassSVM | svm | Yes |
| OneHotEncoder | preprocessing | Yes |
| OneVsOneClassifier | multiclass | Yes |
| OneVsRestClassifier | multiclass | Yes |
| OrdinalEncoder | preprocessing | Yes |
| OrthogonalMatchingPursuit | linear_model | Yes |
| OrthogonalMatchingPursuitCV | linear_model | Yes |
| OutputCodeClassifier | multiclass | |
| PCA | decomposition | Yes |
| PLSCanonical | cross_decomposition | |
| PLSRegression | cross_decomposition | Yes |
| PLSSVD | cross_decomposition | |
| PassiveAggressiveClassifier | linear_model | Yes |
| PassiveAggressiveRegressor | linear_model | Yes |
| Perceptron | linear_model | Yes |
| PoissonRegressor | linear_model | Yes |
| PolynomialCountSketch | kernel_approximation | |
| PolynomialFeatures | preprocessing | Yes |
| PowerTransformer | preprocessing | Yes |
| QuadraticDiscriminantAnalysis | discriminant_analysis | Yes |
| QuantileRegressor | linear_model | Yes |
| QuantileTransformer | preprocessing | |
| RANSACRegressor | linear_model | Yes |
| RBFSampler | kernel_approximation | |
| RFE | feature_selection | Yes |
| RFECV | feature_selection | Yes |
| RadiusNeighborsClassifier | neighbors | Yes |
| RadiusNeighborsRegressor | neighbors | Yes |
| RadiusNeighborsTransformer | neighbors | |
| RandomForestClassifier | ensemble | Yes |
| RandomForestRegressor | ensemble | Yes |
| RandomTreesEmbedding | ensemble | Yes |
| RandomizedSearchCV | model_selection | |
| RegressorChain | multioutput | |
| Ridge | linear_model | Yes |
| RidgeCV | linear_model | Yes |
| RidgeClassifier | linear_model | Yes |
| RidgeClassifierCV | linear_model | Yes |
| RobustScaler | preprocessing | Yes |
| SGDClassifier | linear_model | Yes |
| SGDOneClassSVM | linear_model | Yes |
| SGDRegressor | linear_model | Yes |
| SVC | svm | Yes |
| SVR | svm | Yes |
| SelectFdr | feature_selection | Yes |
| SelectFpr | feature_selection | Yes |
| SelectFromModel | feature_selection | Yes |
| SelectFwe | feature_selection | Yes |
| SelectKBest | feature_selection | Yes |
| SelectPercentile | feature_selection | Yes |
| SelfTrainingClassifier | semi_supervised | |
| SequentialFeatureSelector | feature_selection | |
| ShrunkCovariance | covariance | |
| SimpleImputer | impute | Yes |
| SkewedChi2Sampler | kernel_approximation | |
| SparseCoder | decomposition | |
| SparsePCA | decomposition | |
| SparseRandomProjection | random_projection | |
| SpectralBiclustering | cluster | |
| SpectralClustering | cluster | |
| SpectralCoclustering | cluster | |
| SplineTransformer | preprocessing | |
| StackingClassifier | ensemble | Yes |
| StackingRegressor | ensemble | Yes |
| StandardScaler | preprocessing | Yes |
| TargetEncoder | preprocessing | |
| TheilSenRegressor | linear_model | Yes |
| TransformedTargetRegressor | compose | |
| TruncatedSVD | decomposition | Yes |
| TweedieRegressor | linear_model | Yes |
| VarianceThreshold | feature_selection | Yes |
| VotingClassifier | ensemble | Yes |
| VotingRegressor | ensemble | Yes |

The scikit-learn version used here is 1.4.dev0; 130 of the 191 models listed above are covered.
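The registered converters can also be listed programmatically. A small sketch, assuming skl2onnx.supported_converters is available in the installed release:

```python
import skl2onnx

# from_sklearn=True returns the scikit-learn class names instead of the
# internal "Sklearn"-prefixed aliases.
names = skl2onnx.supported_converters(from_sklearn=True)
print(len(names))
print(names[:5])
```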

Converters Documentation#

OnnxCastRegressor
OnnxCastTransformer
OnnxReplaceTransformer
OnnxSklearnARDRegression
OnnxSklearnAdaBoostClassifier
OnnxSklearnAdaBoostRegressor
OnnxSklearnBaggingClassifier
OnnxSklearnBaggingRegressor
OnnxSklearnBayesianGaussianMixture
OnnxSklearnBayesianRidge
OnnxSklearnBernoulliNB
OnnxSklearnBinarizer
OnnxSklearnCalibratedClassifierCV
OnnxSklearnCategoricalNB
OnnxSklearnColumnTransformer
OnnxSklearnComplementNB
OnnxSklearnCountVectorizer
OnnxSklearnDecisionTreeClassifier
OnnxSklearnDecisionTreeRegressor
OnnxSklearnDictVectorizer
OnnxSklearnElasticNet
OnnxSklearnElasticNetCV
OnnxSklearnExtraTreeClassifier
OnnxSklearnExtraTreeRegressor
OnnxSklearnExtraTreesClassifier
OnnxSklearnExtraTreesRegressor
OnnxSklearnFeatureHasher
OnnxSklearnFeatureUnion
OnnxSklearnFunctionTransformer
OnnxSklearnGammaRegressor
OnnxSklearnGaussianMixture
OnnxSklearnGaussianNB
OnnxSklearnGaussianProcessClassifier
OnnxSklearnGaussianProcessRegressor
OnnxSklearnGaussianRandomProjection
OnnxSklearnGenericUnivariateSelect
OnnxSklearnGradientBoostingClassifier
OnnxSklearnGradientBoostingRegressor
OnnxSklearnGridSearchCV
OnnxSklearnHistGradientBoostingClassifier
OnnxSklearnHistGradientBoostingRegressor
OnnxSklearnHuberRegressor
OnnxSklearnIncrementalPCA
OnnxSklearnIsolationForest
OnnxSklearnKBinsDiscretizer
OnnxSklearnKMeans
OnnxSklearnKNNImputer
OnnxSklearnKNeighborsClassifier
OnnxSklearnKNeighborsRegressor
OnnxSklearnKNeighborsTransformer
OnnxSklearnKernelCenterer
OnnxSklearnKernelPCA
OnnxSklearnLabelBinarizer
OnnxSklearnLabelEncoder
OnnxSklearnLars
OnnxSklearnLarsCV
OnnxSklearnLasso
OnnxSklearnLassoCV
OnnxSklearnLassoLars
OnnxSklearnLassoLarsCV
OnnxSklearnLassoLarsIC
OnnxSklearnLinearDiscriminantAnalysis
OnnxSklearnLinearRegression
OnnxSklearnLinearSVC
OnnxSklearnLinearSVR
OnnxSklearnLocalOutlierFactor
OnnxSklearnLogisticRegression
OnnxSklearnLogisticRegressionCV
OnnxSklearnMLPClassifier
OnnxSklearnMLPRegressor
OnnxSklearnMaxAbsScaler
OnnxSklearnMinMaxScaler
OnnxSklearnMiniBatchKMeans
OnnxSklearnMultiOutputClassifier
OnnxSklearnMultiOutputRegressor
OnnxSklearnMultiTaskElasticNet
OnnxSklearnMultiTaskElasticNetCV
OnnxSklearnMultiTaskLasso
OnnxSklearnMultiTaskLassoCV
OnnxSklearnMultinomialNB
OnnxSklearnNearestNeighbors
OnnxSklearnNeighborhoodComponentsAnalysis
OnnxSklearnNormalizer
OnnxSklearnNuSVC
OnnxSklearnNuSVR
OnnxSklearnOneClassSVM
OnnxSklearnOneHotEncoder
OnnxSklearnOneVsOneClassifier
OnnxSklearnOneVsRestClassifier
OnnxSklearnOrdinalEncoder
OnnxSklearnOrthogonalMatchingPursuit
OnnxSklearnOrthogonalMatchingPursuitCV
OnnxSklearnPCA
OnnxSklearnPLSRegression
OnnxSklearnPassiveAggressiveClassifier
OnnxSklearnPassiveAggressiveRegressor
OnnxSklearnPerceptron
OnnxSklearnPipeline
OnnxSklearnPoissonRegressor
OnnxSklearnPolynomialFeatures
OnnxSklearnPowerTransformer
OnnxSklearnQuadraticDiscriminantAnalysis
OnnxSklearnQuantileRegressor
OnnxSklearnRANSACRegressor
OnnxSklearnRFE
OnnxSklearnRFECV
OnnxSklearnRadiusNeighborsClassifier
OnnxSklearnRadiusNeighborsRegressor
OnnxSklearnRandomForestClassifier
OnnxSklearnRandomForestRegressor
OnnxSklearnRandomTreesEmbedding
OnnxSklearnRidge
OnnxSklearnRidgeCV
OnnxSklearnRidgeClassifier
OnnxSklearnRidgeClassifierCV
OnnxSklearnRobustScaler
OnnxSklearnSGDClassifier
OnnxSklearnSGDOneClassSVM
OnnxSklearnSGDRegressor
OnnxSklearnSVC
OnnxSklearnSVR
OnnxSklearnSelectFdr
OnnxSklearnSelectFpr
OnnxSklearnSelectFromModel
OnnxSklearnSelectFwe
OnnxSklearnSelectKBest
OnnxSklearnSelectPercentile
OnnxSklearnSimpleImputer
OnnxSklearnStackingClassifier
OnnxSklearnStackingRegressor
OnnxSklearnStandardScaler
OnnxSklearnTfidfTransformer
OnnxSklearnTfidfVectorizer
OnnxSklearnTheilSenRegressor
OnnxSklearnTruncatedSVD
OnnxSklearnTweedieRegressor
OnnxSklearnVarianceThreshold
OnnxSklearnVotingClassifier
OnnxSklearnVotingRegressor
OnnxSklearn_ConstantPredictor

OnnxCastRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxCastRegressor(estimator, *, dtype=<class 'numpy.float32'>)#

OnnxOperatorMixin for CastRegressor

OnnxCastTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxCastTransformer(*, dtype=<class 'numpy.float32'>)#

OnnxOperatorMixin for CastTransformer

OnnxReplaceTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxReplaceTransformer(*, from_value=0, to_value=nan, dtype=<class 'numpy.float32'>)#

OnnxOperatorMixin for ReplaceTransformer

OnnxSklearnARDRegression#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnARDRegression(*, max_iter=None, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, threshold_lambda=10000.0, fit_intercept=True, copy_X=True, verbose=False, n_iter='deprecated')#

OnnxOperatorMixin for ARDRegression

OnnxSklearnAdaBoostClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnAdaBoostClassifier(estimator=None, *, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None, base_estimator='deprecated')#

OnnxOperatorMixin for AdaBoostClassifier

OnnxSklearnAdaBoostRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnAdaBoostRegressor(estimator=None, *, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None, base_estimator='deprecated')#

OnnxOperatorMixin for AdaBoostRegressor

OnnxSklearnBaggingClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBaggingClassifier(estimator=None, n_estimators=10, *, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0, base_estimator='deprecated')#

OnnxOperatorMixin for BaggingClassifier

OnnxSklearnBaggingRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBaggingRegressor(estimator=None, n_estimators=10, *, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0, base_estimator='deprecated')#

OnnxOperatorMixin for BaggingRegressor

OnnxSklearnBayesianGaussianMixture#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=None, covariance_prior=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10)#

OnnxOperatorMixin for BayesianGaussianMixture

OnnxSklearnBayesianRidge#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBayesianRidge(*, max_iter=None, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, alpha_init=None, lambda_init=None, compute_score=False, fit_intercept=True, copy_X=True, verbose=False, n_iter='deprecated')#

OnnxOperatorMixin for BayesianRidge

OnnxSklearnBernoulliNB#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBernoulliNB(*, alpha=1.0, force_alpha='warn', binarize=0.0, fit_prior=True, class_prior=None)#

OnnxOperatorMixin for BernoulliNB

OnnxSklearnBinarizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnBinarizer(*, threshold=0.0, copy=True)#

OnnxOperatorMixin for Binarizer

OnnxSklearnCalibratedClassifierCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnCalibratedClassifierCV(estimator=None, *, method='sigmoid', cv=None, n_jobs=None, ensemble=True, base_estimator='deprecated')#

OnnxOperatorMixin for CalibratedClassifierCV

OnnxSklearnCategoricalNB#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnCategoricalNB(*, alpha=1.0, force_alpha='warn', fit_prior=True, class_prior=None, min_categories=None)#

OnnxOperatorMixin for CategoricalNB

OnnxSklearnColumnTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnColumnTransformer(transformers, *, remainder='drop', sparse_threshold=0.3, n_jobs=None, transformer_weights=None, verbose=False, verbose_feature_names_out=True)#

OnnxOperatorMixin for ColumnTransformer

OnnxSklearnComplementNB#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnComplementNB(*, alpha=1.0, force_alpha='warn', fit_prior=True, class_prior=None, norm=False)#

OnnxOperatorMixin for ComplementNB

OnnxSklearnCountVectorizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnCountVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)#

OnnxOperatorMixin for CountVectorizer

OnnxSklearnDecisionTreeClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnDecisionTreeClassifier(*, criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0, monotonic_cst=None)#

OnnxOperatorMixin for DecisionTreeClassifier

OnnxSklearnDecisionTreeRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnDecisionTreeRegressor(*, criterion='squared_error', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, ccp_alpha=0.0, monotonic_cst=None)#

OnnxOperatorMixin for DecisionTreeRegressor

OnnxSklearnDictVectorizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnDictVectorizer(*, dtype=<class 'numpy.float64'>, separator='=', sparse=True, sort=True)#

OnnxOperatorMixin for DictVectorizer

OnnxSklearnElasticNet#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for ElasticNet

OnnxSklearnElasticNetCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, positive=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for ElasticNetCV

OnnxSklearnExtraTreeClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreeClassifier(*, criterion='gini', splitter='random', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0, monotonic_cst=None)#

OnnxOperatorMixin for ExtraTreeClassifier

OnnxSklearnExtraTreeRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreeRegressor(*, criterion='squared_error', splitter='random', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=1.0, random_state=None, min_impurity_decrease=0.0, max_leaf_nodes=None, ccp_alpha=0.0, monotonic_cst=None)#

OnnxOperatorMixin for ExtraTreeRegressor

OnnxSklearnExtraTreesClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreesClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)#

OnnxOperatorMixin for ExtraTreesClassifier

OnnxSklearnExtraTreesRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreesRegressor(n_estimators=100, *, criterion='squared_error', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=1.0, max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)#

OnnxOperatorMixin for ExtraTreesRegressor

OnnxSklearnFeatureHasher#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnFeatureHasher(n_features=1048576, *, input_type='dict', dtype=<class 'numpy.float64'>, alternate_sign=True)#

OnnxOperatorMixin for FeatureHasher

OnnxSklearnFeatureUnion#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnFeatureUnion(transformer_list, *, n_jobs=None, transformer_weights=None, verbose=False)#

OnnxOperatorMixin for FeatureUnion

OnnxSklearnFunctionTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnFunctionTransformer(func=None, inverse_func=None, *, validate=False, accept_sparse=False, check_inverse=True, feature_names_out=None, kw_args=None, inv_kw_args=None)#

OnnxOperatorMixin for FunctionTransformer

OnnxSklearnGammaRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGammaRegressor(*, alpha=1.0, fit_intercept=True, solver='lbfgs', max_iter=100, tol=0.0001, warm_start=False, verbose=0)#

OnnxOperatorMixin for GammaRegressor

OnnxSklearnGaussianMixture#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10)#

OnnxOperatorMixin for GaussianMixture

OnnxSklearnGaussianNB#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianNB(*, priors=None, var_smoothing=1e-09)#

OnnxOperatorMixin for GaussianNB

OnnxSklearnGaussianProcessClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianProcessClassifier(kernel=None, *, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, max_iter_predict=100, warm_start=False, copy_X_train=True, random_state=None, multi_class='one_vs_rest', n_jobs=None)#

OnnxOperatorMixin for GaussianProcessClassifier

OnnxSklearnGaussianProcessRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianProcessRegressor(kernel=None, *, alpha=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, normalize_y=False, copy_X_train=True, n_targets=None, random_state=None)#

OnnxOperatorMixin for GaussianProcessRegressor

OnnxSklearnGaussianRandomProjection#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianRandomProjection(n_components='auto', *, eps=0.1, compute_inverse_components=False, random_state=None)#

OnnxOperatorMixin for GaussianRandomProjection

OnnxSklearnGenericUnivariateSelect#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGenericUnivariateSelect(score_func=<function f_classif>, *, mode='percentile', param=1e-05)#

OnnxOperatorMixin for GenericUnivariateSelect

OnnxSklearnGradientBoostingClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGradientBoostingClassifier(*, loss='log_loss', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)#

OnnxOperatorMixin for GradientBoostingClassifier

OnnxSklearnGradientBoostingRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGradientBoostingRegressor(*, loss='squared_error', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)#

OnnxOperatorMixin for GradientBoostingRegressor

OnnxSklearnGridSearchCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnGridSearchCV(estimator, param_grid, *, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False)#

OnnxOperatorMixin for GridSearchCV

OnnxSklearnHistGradientBoostingClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnHistGradientBoostingClassifier(loss='log_loss', *, learning_rate=0.1, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, interaction_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None, class_weight=None)#

OnnxOperatorMixin for HistGradientBoostingClassifier

OnnxSklearnHistGradientBoostingRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnHistGradientBoostingRegressor(loss='squared_error', *, quantile=None, learning_rate=0.1, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, interaction_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None)#

OnnxOperatorMixin for HistGradientBoostingRegressor

OnnxSklearnHuberRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnHuberRegressor(*, epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05)#

OnnxOperatorMixin for HuberRegressor

OnnxSklearnIncrementalPCA#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnIncrementalPCA(n_components=None, *, whiten=False, copy=True, batch_size=None)#

OnnxOperatorMixin for IncrementalPCA

OnnxSklearnIsolationForest#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnIsolationForest(*, n_estimators=100, max_samples='auto', contamination='auto', max_features=1.0, bootstrap=False, n_jobs=None, random_state=None, verbose=0, warm_start=False)#

OnnxOperatorMixin for IsolationForest

OnnxSklearnKBinsDiscretizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKBinsDiscretizer(n_bins=5, *, encode='onehot', strategy='quantile', dtype=None, subsample='warn', random_state=None)#

OnnxOperatorMixin for KBinsDiscretizer

OnnxSklearnKMeans#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKMeans(n_clusters=8, *, init='k-means++', n_init='warn', max_iter=300, tol=0.0001, verbose=0, random_state=None, copy_x=True, algorithm='lloyd')#

OnnxOperatorMixin for KMeans

OnnxSklearnKNNImputer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNNImputer(*, missing_values=nan, n_neighbors=5, weights='uniform', metric='nan_euclidean', copy=True, add_indicator=False, keep_empty_features=False)#

OnnxOperatorMixin for KNNImputer

OnnxSklearnKNeighborsClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsClassifier(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None)#

OnnxOperatorMixin for KNeighborsClassifier

OnnxSklearnKNeighborsRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsRegressor(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None)#

OnnxOperatorMixin for KNeighborsRegressor

OnnxSklearnKNeighborsTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsTransformer(*, mode='distance', n_neighbors=5, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None)#

OnnxOperatorMixin for KNeighborsTransformer

OnnxSklearnKernelCenterer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKernelCenterer#

OnnxOperatorMixin for KernelCenterer

OnnxSklearnKernelPCA#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnKernelPCA(n_components=None, *, kernel='linear', gamma=None, degree=3, coef0=1, kernel_params=None, alpha=1.0, fit_inverse_transform=False, eigen_solver='auto', tol=0, max_iter=None, iterated_power='auto', remove_zero_eig=False, random_state=None, copy_X=True, n_jobs=None)#

OnnxOperatorMixin for KernelPCA

OnnxSklearnLabelBinarizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLabelBinarizer(*, neg_label=0, pos_label=1, sparse_output=False)#

OnnxOperatorMixin for LabelBinarizer

OnnxSklearnLabelEncoder#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLabelEncoder#

OnnxOperatorMixin for LabelEncoder

OnnxSklearnLars#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLars(*, fit_intercept=True, verbose=False, normalize='deprecated', precompute='auto', n_nonzero_coefs=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, jitter=None, random_state=None)#

OnnxOperatorMixin for Lars

OnnxSklearnLarsCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize='deprecated', precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True)#

OnnxOperatorMixin for LarsCV

OnnxSklearnLasso#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLasso(alpha=1.0, *, fit_intercept=True, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for Lasso

OnnxSklearnLassoCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, precompute='auto', max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, positive=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for LassoCV

OnnxSklearnLassoLars#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLars(alpha=1.0, *, fit_intercept=True, verbose=False, normalize='deprecated', precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, positive=False, jitter=None, random_state=None)#

OnnxOperatorMixin for LassoLars

OnnxSklearnLassoLarsCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize='deprecated', precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True, positive=False)#

OnnxOperatorMixin for LassoLarsCV

OnnxSklearnLassoLarsIC#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLarsIC(criterion='aic', *, fit_intercept=True, verbose=False, normalize='deprecated', precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, positive=False, noise_variance=None)#

OnnxOperatorMixin for LassoLarsIC

OnnxSklearnLinearDiscriminantAnalysis#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearDiscriminantAnalysis(solver='svd', shrinkage=None, priors=None, n_components=None, store_covariance=False, tol=0.0001, covariance_estimator=None)#

OnnxOperatorMixin for LinearDiscriminantAnalysis

OnnxSklearnLinearRegression#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearRegression(*, fit_intercept=True, copy_X=True, n_jobs=None, positive=False)#

OnnxOperatorMixin for LinearRegression

OnnxSklearnLinearSVC#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearSVC(penalty='l2', loss='squared_hinge', *, dual='warn', tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000)#

OnnxOperatorMixin for LinearSVC

OnnxSklearnLinearSVR#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual='warn', verbose=0, random_state=None, max_iter=1000)#

OnnxOperatorMixin for LinearSVR

OnnxSklearnLocalOutlierFactor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLocalOutlierFactor(n_neighbors=20, *, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='auto', novelty=False, n_jobs=None)#

OnnxOperatorMixin for LocalOutlierFactor

OnnxSklearnLogisticRegression#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLogisticRegression(penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)#

OnnxOperatorMixin for LogisticRegression

OnnxSklearnLogisticRegressionCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnLogisticRegressionCV(*, Cs=10, fit_intercept=True, cv=None, dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='auto', random_state=None, l1_ratios=None)#

OnnxOperatorMixin for LogisticRegressionCV

OnnxSklearnMLPClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMLPClassifier(hidden_layer_sizes=(100,), activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)#

OnnxOperatorMixin for MLPClassifier

OnnxSklearnMLPRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMLPRegressor(hidden_layer_sizes=(100,), activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)#

OnnxOperatorMixin for MLPRegressor

OnnxSklearnMaxAbsScaler#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMaxAbsScaler(*, copy=True)#

OnnxOperatorMixin for MaxAbsScaler

OnnxSklearnMinMaxScaler#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMinMaxScaler(feature_range=(0, 1), *, copy=True, clip=False)#

OnnxOperatorMixin for MinMaxScaler

OnnxSklearnMiniBatchKMeans#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMiniBatchKMeans(n_clusters=8, *, init='k-means++', max_iter=100, batch_size=1024, verbose=0, compute_labels=True, random_state=None, tol=0.0, max_no_improvement=10, init_size=None, n_init='warn', reassignment_ratio=0.01)#

OnnxOperatorMixin for MiniBatchKMeans

OnnxSklearnMultiOutputClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiOutputClassifier(estimator, *, n_jobs=None)#

OnnxOperatorMixin for MultiOutputClassifier

OnnxSklearnMultiOutputRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiOutputRegressor(estimator, *, n_jobs=None)#

OnnxOperatorMixin for MultiOutputRegressor

OnnxSklearnMultiTaskElasticNet#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for MultiTaskElasticNet

OnnxSklearnMultiTaskElasticNetCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, random_state=None, selection='cyclic')#

OnnxOperatorMixin for MultiTaskElasticNetCV

OnnxSklearnMultiTaskLasso#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskLasso(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic')#

OnnxOperatorMixin for MultiTaskLasso

OnnxSklearnMultiTaskLassoCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, random_state=None, selection='cyclic')#

OnnxOperatorMixin for MultiTaskLassoCV

OnnxSklearnMultinomialNB#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultinomialNB(*, alpha=1.0, force_alpha='warn', fit_prior=True, class_prior=None)#

OnnxOperatorMixin for MultinomialNB

OnnxSklearnNearestNeighbors#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnNearestNeighbors(*, n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None)#

OnnxOperatorMixin for NearestNeighbors

OnnxSklearnNeighborhoodComponentsAnalysis#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnNeighborhoodComponentsAnalysis(n_components=None, *, init='auto', warm_start=False, max_iter=50, tol=1e-05, callback=None, verbose=0, random_state=None)#

OnnxOperatorMixin for NeighborhoodComponentsAnalysis

OnnxSklearnNormalizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnNormalizer(norm='l2', *, copy=True)#

OnnxOperatorMixin for Normalizer

OnnxSklearnNuSVC#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnNuSVC(*, nu=0.5, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)#

OnnxOperatorMixin for NuSVC

OnnxSklearnNuSVR#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnNuSVR(*, nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1)#

OnnxOperatorMixin for NuSVR

OnnxSklearnOneClassSVM#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneClassSVM(*, kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, nu=0.5, shrinking=True, cache_size=200, verbose=False, max_iter=-1)#

OnnxOperatorMixin for OneClassSVM

OnnxSklearnOneHotEncoder#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneHotEncoder(*, categories='auto', drop=None, sparse='deprecated', sparse_output=True, dtype=<class 'numpy.float64'>, handle_unknown='error', min_frequency=None, max_categories=None, feature_name_combiner='concat')#

OnnxOperatorMixin for OneHotEncoder

OnnxSklearnOneVsOneClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneVsOneClassifier(estimator, *, n_jobs=None)#

OnnxOperatorMixin for OneVsOneClassifier

OnnxSklearnOneVsRestClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneVsRestClassifier(estimator, *, n_jobs=None, verbose=0)#

OnnxOperatorMixin for OneVsRestClassifier

OnnxSklearnOrdinalEncoder#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrdinalEncoder(*, categories='auto', dtype=<class 'numpy.float64'>, handle_unknown='error', unknown_value=None, encoded_missing_value=nan, min_frequency=None, max_categories=None)#

OnnxOperatorMixin for OrdinalEncoder

OnnxSklearnOrthogonalMatchingPursuit#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrthogonalMatchingPursuit(*, n_nonzero_coefs=None, tol=None, fit_intercept=True, normalize='deprecated', precompute='auto')#

OnnxOperatorMixin for OrthogonalMatchingPursuit

OnnxSklearnOrthogonalMatchingPursuitCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrthogonalMatchingPursuitCV(*, copy=True, fit_intercept=True, normalize='deprecated', max_iter=None, cv=None, n_jobs=None, verbose=False)#

OnnxOperatorMixin for OrthogonalMatchingPursuitCV

OnnxSklearnPCA#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPCA(n_components=None, *, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', n_oversamples=10, power_iteration_normalizer='auto', random_state=None)#

OnnxOperatorMixin for PCA

OnnxSklearnPLSRegression#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True)#

OnnxOperatorMixin for PLSRegression

OnnxSklearnPassiveAggressiveClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPassiveAggressiveClassifier(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='hinge', n_jobs=None, random_state=None, warm_start=False, class_weight=None, average=False)#

OnnxOperatorMixin for PassiveAggressiveClassifier

OnnxSklearnPassiveAggressiveRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)#

OnnxOperatorMixin for PassiveAggressiveRegressor

OnnxSklearnPerceptron#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPerceptron(*, penalty=None, alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False)#

OnnxOperatorMixin for Perceptron

OnnxSklearnPipeline#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPipeline(steps, *, memory=None, verbose=False)#

OnnxOperatorMixin for Pipeline

OnnxSklearnPoissonRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPoissonRegressor(*, alpha=1.0, fit_intercept=True, solver='lbfgs', max_iter=100, tol=0.0001, warm_start=False, verbose=0)#

OnnxOperatorMixin for PoissonRegressor

OnnxSklearnPolynomialFeatures#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPolynomialFeatures(degree=2, *, interaction_only=False, include_bias=True, order='C')#

OnnxOperatorMixin for PolynomialFeatures

OnnxSklearnPowerTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPowerTransformer(method='yeo-johnson', *, standardize=True, copy=True)#

OnnxOperatorMixin for PowerTransformer

OnnxSklearnQuadraticDiscriminantAnalysis#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnQuadraticDiscriminantAnalysis(*, priors=None, reg_param=0.0, store_covariance=False, tol=0.0001)#

OnnxOperatorMixin for QuadraticDiscriminantAnalysis

OnnxSklearnQuantileRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnQuantileRegressor(*, quantile=0.5, alpha=1.0, fit_intercept=True, solver='warn', solver_options=None)#

OnnxOperatorMixin for QuantileRegressor

OnnxSklearnRANSACRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRANSACRegressor(estimator=None, *, min_samples=None, residual_threshold=None, is_data_valid=None, is_model_valid=None, max_trials=100, max_skips=inf, stop_n_inliers=inf, stop_score=inf, stop_probability=0.99, loss='absolute_error', random_state=None)#

OnnxOperatorMixin for RANSACRegressor

OnnxSklearnRFE#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRFE(estimator, *, n_features_to_select=None, step=1, verbose=0, importance_getter='auto')#

OnnxOperatorMixin for RFE

OnnxSklearnRFECV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRFECV(estimator, *, step=1, min_features_to_select=1, cv=None, scoring=None, verbose=0, n_jobs=None, importance_getter='auto')#

OnnxOperatorMixin for RFECV

OnnxSklearnRadiusNeighborsClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRadiusNeighborsClassifier(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', outlier_label=None, metric_params=None, n_jobs=None)#

OnnxOperatorMixin for RadiusNeighborsClassifier

OnnxSklearnRadiusNeighborsRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRadiusNeighborsRegressor(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None)#

OnnxOperatorMixin for RadiusNeighborsRegressor

OnnxSklearnRandomForestClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRandomForestClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)#

OnnxOperatorMixin for RandomForestClassifier

OnnxSklearnRandomForestRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRandomForestRegressor(n_estimators=100, *, criterion='squared_error', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=1.0, max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)#

OnnxOperatorMixin for RandomForestRegressor

OnnxSklearnRandomTreesEmbedding#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRandomTreesEmbedding(n_estimators=100, *, max_depth=5, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_leaf_nodes=None, min_impurity_decrease=0.0, sparse_output=True, n_jobs=None, random_state=None, verbose=0, warm_start=False)#

OnnxOperatorMixin for RandomTreesEmbedding

OnnxSklearnRidge#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidge(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=None, tol=0.0001, solver='auto', positive=False, random_state=None)#

OnnxOperatorMixin for Ridge

OnnxSklearnRidgeCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, scoring=None, cv=None, gcv_mode=None, store_cv_values=False, alpha_per_target=False)#

OnnxOperatorMixin for RidgeCV

OnnxSklearnRidgeClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeClassifier(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=None, tol=0.0001, class_weight=None, solver='auto', positive=False, random_state=None)#

OnnxOperatorMixin for RidgeClassifier

OnnxSklearnRidgeClassifierCV#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeClassifierCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, scoring=None, cv=None, class_weight=None, store_cv_values=False)#

OnnxOperatorMixin for RidgeClassifierCV

OnnxSklearnRobustScaler#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnRobustScaler(*, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False)#

OnnxOperatorMixin for RobustScaler

OnnxSklearnSGDClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSGDClassifier(loss='hinge', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, n_jobs=None, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, average=False)#

OnnxOperatorMixin for SGDClassifier

OnnxSklearnSGDOneClassSVM#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSGDOneClassSVM(nu=0.5, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, warm_start=False, average=False)#

OnnxOperatorMixin for SGDOneClassSVM

OnnxSklearnSGDRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSGDRegressor(loss='squared_error', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False)#

OnnxOperatorMixin for SGDRegressor

OnnxSklearnSVC#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSVC(*, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)#

OnnxOperatorMixin for SVC

OnnxSklearnSVR#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSVR(*, kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)#

OnnxOperatorMixin for SVR

OnnxSklearnSelectFdr#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFdr(score_func=<function f_classif>, *, alpha=0.05)#

OnnxOperatorMixin for SelectFdr

OnnxSklearnSelectFpr#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFpr(score_func=<function f_classif>, *, alpha=0.05)#

OnnxOperatorMixin for SelectFpr

OnnxSklearnSelectFromModel#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFromModel(estimator, *, threshold=None, prefit=False, norm_order=1, max_features=None, importance_getter='auto')#

OnnxOperatorMixin for SelectFromModel

OnnxSklearnSelectFwe#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFwe(score_func=<function f_classif>, *, alpha=0.05)#

OnnxOperatorMixin for SelectFwe

OnnxSklearnSelectKBest#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectKBest(score_func=<function f_classif>, *, k=10)#

OnnxOperatorMixin for SelectKBest

OnnxSklearnSelectPercentile#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectPercentile(score_func=<function f_classif>, *, percentile=10)#

OnnxOperatorMixin for SelectPercentile

OnnxSklearnSimpleImputer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnSimpleImputer(*, missing_values=nan, strategy='mean', fill_value=None, copy=True, add_indicator=False, keep_empty_features=False)#

OnnxOperatorMixin for SimpleImputer

OnnxSklearnStackingClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnStackingClassifier(estimators, final_estimator=None, *, cv=None, stack_method='auto', n_jobs=None, passthrough=False, verbose=0)#

OnnxOperatorMixin for StackingClassifier

OnnxSklearnStackingRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnStackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0)#

OnnxOperatorMixin for StackingRegressor

OnnxSklearnStandardScaler#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnStandardScaler(*, copy=True, with_mean=True, with_std=True)#

OnnxOperatorMixin for StandardScaler

OnnxSklearnTfidfTransformer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnTfidfTransformer(*, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)#

OnnxOperatorMixin for TfidfTransformer

OnnxSklearnTfidfVectorizer#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnTfidfVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.float64'>, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)#

OnnxOperatorMixin for TfidfVectorizer

OnnxSklearnTheilSenRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnTheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False)#

OnnxOperatorMixin for TheilSenRegressor

OnnxSklearnTruncatedSVD#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnTruncatedSVD(n_components=2, *, algorithm='randomized', n_iter=5, n_oversamples=10, power_iteration_normalizer='auto', random_state=None, tol=0.0)#

OnnxOperatorMixin for TruncatedSVD

OnnxSklearnTweedieRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnTweedieRegressor(*, power=0.0, alpha=1.0, fit_intercept=True, link='auto', solver='lbfgs', max_iter=100, tol=0.0001, warm_start=False, verbose=0)#

OnnxOperatorMixin for TweedieRegressor

OnnxSklearnVarianceThreshold#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnVarianceThreshold(threshold=0.0)#

OnnxOperatorMixin for VarianceThreshold

OnnxSklearnVotingClassifier#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnVotingClassifier(estimators, *, voting='hard', weights=None, n_jobs=None, flatten_transform=True, verbose=False)#

OnnxOperatorMixin for VotingClassifier

OnnxSklearnVotingRegressor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnVotingRegressor(estimators, *, weights=None, n_jobs=None, verbose=False)#

OnnxOperatorMixin for VotingRegressor

OnnxSklearn_ConstantPredictor#

class skl2onnx.algebra.sklearn_ops.OnnxSklearn_ConstantPredictor#

OnnxOperatorMixin for _ConstantPredictor

Pipeline#

class skl2onnx.algebra.sklearn_ops.OnnxSklearnPipeline(steps, *, memory=None, verbose=False)#

OnnxOperatorMixin for Pipeline

onnx_converter()#

Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.

onnx_parser()#

Returns a parser for this model. If not overloaded, it calls the converter to guess the number of outputs. If it still fails, it fetches the parser mapped to the first scikit-learn parent it can find.

onnx_shape_calculator()#

Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.

to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None, target_opset=None, verbose=0)#

Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded. A usage sketch follows the parameter list below.

Parameters:
  • X – training data; at least one sample is required, as it is used to guess the type of the input data.

  • name – name of the model; if None, it is replaced by the class name.

  • options – specific options given to converters (see Converters with options).

  • white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed.

  • black_op – black list of ONNX nodes forbidden while converting a pipeline; if empty, none are blacklisted.

  • final_types – a Python list; it works the same way as initial_types but is not mandatory. It is used to overwrite the type (if the type is not None) and the name of every output.

  • target_opset – overwrites self.op_version.

  • verbose – displays information while converting.
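A hedged usage sketch of to_onnx on a fitted OnnxSklearnPipeline; the dataset, names and opset value are illustrative assumptions, while zipmap is a documented classifier option:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from skl2onnx.algebra.sklearn_ops import OnnxSklearnPipeline

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)

pipe = OnnxSklearnPipeline(steps=[
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=500)),
])
pipe.fit(X, y)

# X[:1] provides a sample to guess the input type; options are keyed by
# the estimator class they apply to.
onx = pipe.to_onnx(
    X[:1],
    name="iris_pipeline",
    options={LogisticRegression: {"zipmap": False}},
    target_opset=17,
)
```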

to_onnx_operator(inputs=None, outputs=None, target_opset=None, options=None)#

This function must be overloaded.

class skl2onnx.algebra.sklearn_ops.OnnxSklearnColumnTransformer(transformers, *, remainder='drop', sparse_threshold=0.3, n_jobs=None, transformer_weights=None, verbose=False, verbose_feature_names_out=True)#

OnnxOperatorMixin for ColumnTransformer

The methods onnx_converter(), onnx_parser(), onnx_shape_calculator(), to_onnx() and to_onnx_operator() behave as documented above for OnnxSklearnPipeline.

class skl2onnx.algebra.sklearn_ops.OnnxSklearnFeatureUnion(transformer_list, *, n_jobs=None, transformer_weights=None, verbose=False)#

OnnxOperatorMixin for FeatureUnion

Its methods likewise behave as documented above for OnnxSklearnPipeline.

Available ONNX operators#

skl2onnx maps every ONNX operator to a class that is easy to insert into a graph. These operator classes are added dynamically, so the list depends on the installed onnx package. Their documentation can be found on GitHub: ONNX Operators.md and ONNX-ML Operators. Combined with onnxruntime, this mapping makes it easy to check the output of any ONNX operator on any data, as shown in the example Play with ONNX operators.
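For instance, the following sketch composes two of these operator classes and checks the result with onnxruntime, in the spirit of that example; the opset value is an assumption, so pick one supported by your runtime:

```python
import numpy as np
from skl2onnx.algebra.onnx_ops import OnnxAbs, OnnxAdd
from skl2onnx.common.data_types import FloatTensorType
from onnxruntime import InferenceSession

# Build a tiny graph computing Y = |X| + 1.
op = OnnxAdd(
    OnnxAbs("X", op_version=13),
    np.array([1.0], dtype=np.float32),
    op_version=13,
    output_names=["Y"],
)
onx = op.to_onnx({"X": FloatTensorType([None, 2])}, target_opset=13)

# Check the output of the graph on sample data.
sess = InferenceSession(onx.SerializeToString(),
                        providers=["CPUExecutionProvider"])
print(sess.run(None, {"X": np.array([[-1.0, 2.0]], dtype=np.float32)}))
```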

OnnxAbs
OnnxAbs_1
OnnxAbs_13
OnnxAbs_6
OnnxAcos
OnnxAcos_7
OnnxAcosh
OnnxAcosh_9
OnnxAdagrad
OnnxAdagrad_1
OnnxAdam
OnnxAdam_1
OnnxAdd
OnnxAdd_1
OnnxAdd_13
OnnxAdd_14
OnnxAdd_6
OnnxAdd_7
OnnxAffineGrid
OnnxAffineGrid_20
OnnxAnd
OnnxAnd_1
OnnxAnd_7
OnnxArgMax
OnnxArgMax_1
OnnxArgMax_11
OnnxArgMax_12
OnnxArgMax_13
OnnxArgMin
OnnxArgMin_1
OnnxArgMin_11
OnnxArgMin_12
OnnxArgMin_13
OnnxArrayFeatureExtractor
OnnxArrayFeatureExtractor_1
OnnxAsin
OnnxAsin_7
OnnxAsinh
OnnxAsinh_9
OnnxAtan
OnnxAtan_7
OnnxAtanh
OnnxAtanh_9
OnnxAveragePool
OnnxAveragePool_1
OnnxAveragePool_10
OnnxAveragePool_11
OnnxAveragePool_19
OnnxAveragePool_7
OnnxBatchNormalization
OnnxBatchNormalization_1
OnnxBatchNormalization_14
OnnxBatchNormalization_15
OnnxBatchNormalization_6
OnnxBatchNormalization_7
OnnxBatchNormalization_9
OnnxBernoulli
OnnxBernoulli_15
OnnxBinarizer
OnnxBinarizer_1
OnnxBitShift
OnnxBitShift_11
OnnxBitwiseAnd
OnnxBitwiseAnd_18
OnnxBitwiseNot
OnnxBitwiseNot_18
OnnxBitwiseOr
OnnxBitwiseOr_18
OnnxBitwiseXor
OnnxBitwiseXor_18
OnnxBlackmanWindow
OnnxBlackmanWindow_17
OnnxCast
OnnxGreater
OnnxGreaterOrEqual
OnnxGreaterOrEqual_12
OnnxGreaterOrEqual_16
OnnxGreater_1
OnnxGreater_13
OnnxGreater_7
OnnxGreater_9
OnnxGridSample
OnnxGridSample_16
OnnxGridSample_20
OnnxGroupNormalization
OnnxGroupNormalization_18
OnnxHammingWindow
OnnxHammingWindow_17
OnnxHannWindow
OnnxHannWindow_17
OnnxHardSigmoid
OnnxHardSigmoid_1
OnnxHardSigmoid_6
OnnxHardSwish
OnnxHardSwish_14
OnnxHardmax
OnnxHardmax_1
OnnxHardmax_11
OnnxHardmax_13
OnnxIdentity
OnnxIdentity_1
OnnxIdentity_13
OnnxIdentity_14
OnnxIdentity_16
OnnxIdentity_19
OnnxIf
OnnxIf_1
OnnxIf_11
OnnxIf_13
OnnxIf_16
OnnxIf_19
OnnxImputer
OnnxImputer_1
OnnxInstanceNormalization
OnnxInstanceNormalization_1
OnnxInstanceNormalization_6
OnnxIsInf
OnnxIsInf_10
OnnxIsNaN
OnnxIsNaN_13
OnnxIsNaN_9
OnnxLRN
OnnxLRN_1
OnnxLRN_13
OnnxLSTM
OnnxLSTM_1
OnnxLSTM_14
OnnxLSTM_7
OnnxLabelEncoder
OnnxLabelEncoder_1
OnnxLabelEncoder_2
OnnxLayerNormalization
OnnxLayerNormalization_17
OnnxLeakyRelu
OnnxLeakyRelu_1
OnnxLeakyRelu_16
OnnxLeakyRelu_6
OnnxLess
OnnxLessOrEqual
OnnxLessOrEqual_12
OnnxLessOrEqual_16
OnnxLess_1
OnnxLess_13
OnnxLess_7
OnnxLess_9
OnnxLinearClassifier
OnnxReduceL2_11
OnnxReduceL2_13
OnnxReduceL2_18
OnnxReduceLogSum
OnnxReduceLogSumExp
OnnxReduceLogSumExp_1
OnnxReduceLogSumExp_11
OnnxReduceLogSumExp_13
OnnxReduceLogSumExp_18
OnnxReduceLogSum_1
OnnxReduceLogSum_11
OnnxReduceLogSum_13
OnnxReduceLogSum_18
OnnxReduceMax
OnnxReduceMax_1
OnnxReduceMax_11
OnnxReduceMax_12
OnnxReduceMax_13
OnnxReduceMax_18
OnnxReduceMean
OnnxReduceMean_1
OnnxReduceMean_11
OnnxReduceMean_13
OnnxReduceMean_18
OnnxReduceMin
OnnxReduceMin_1
OnnxReduceMin_11
OnnxReduceMin_12
OnnxReduceMin_13
OnnxReduceMin_18
OnnxReduceProd
OnnxReduceProd_1
OnnxReduceProd_11
OnnxReduceProd_13
OnnxReduceProd_18
OnnxReduceSum
OnnxReduceSumSquare
OnnxReduceSumSquare_1
OnnxReduceSumSquare_11
OnnxReduceSumSquare_13
OnnxReduceSumSquare_18
OnnxReduceSum_1
OnnxReduceSum_11
OnnxReduceSum_13
OnnxRegexFullMatch
OnnxRegexFullMatch_20
OnnxRelu
OnnxRelu_1
OnnxRelu_13
OnnxRelu_14
OnnxRelu_6
OnnxReshape
OnnxReshape_1
OnnxReshape_13
OnnxReshape_14
OnnxReshape_19
OnnxReshape_5
OnnxResize
OnnxResize_10
OnnxResize_11
OnnxResize_13
OnnxResize_18
OnnxResize_19
OnnxReverseSequence
OnnxReverseSequence_10
OnnxRoiAlign
OnnxRoiAlign_10
OnnxRoiAlign_16
OnnxRound
OnnxRound_11
OnnxSTFT
OnnxSTFT_17
OnnxSVMClassifier

OnnxCastLike

OnnxLinearClassifier_1

OnnxSVMClassifier_1

OnnxCastLike_15

OnnxLinearRegressor

OnnxSVMRegressor

OnnxCastLike_19

OnnxLinearRegressor_1

OnnxSVMRegressor_1

OnnxCastMap

OnnxLog

OnnxScaler

OnnxCastMap_1

OnnxLogSoftmax

OnnxScaler_1

OnnxCast_1

OnnxLogSoftmax_1

OnnxScan

OnnxCast_13

OnnxLogSoftmax_11

OnnxScan_11

OnnxCast_19

OnnxLogSoftmax_13

OnnxScan_16

OnnxCast_6

OnnxLog_1

OnnxScan_19

OnnxCast_9

OnnxLog_13

OnnxScan_8

OnnxCategoryMapper

OnnxLog_6

OnnxScan_9

OnnxCategoryMapper_1

OnnxLoop

OnnxScatter

OnnxCeil

OnnxLoop_1

OnnxScatterElements

OnnxCeil_1

OnnxLoop_11

OnnxScatterElements_11

OnnxCeil_13

OnnxLoop_13

OnnxScatterElements_13

OnnxCeil_6

OnnxLoop_16

OnnxScatterElements_16

OnnxCelu

OnnxLoop_19

OnnxScatterElements_18

OnnxCelu_12

OnnxLpNormalization

OnnxScatterND

OnnxCenterCropPad

OnnxLpNormalization_1

OnnxScatterND_11

OnnxCenterCropPad_18

OnnxLpPool

OnnxScatterND_13

OnnxClip

OnnxLpPool_1

OnnxScatterND_16

OnnxClip_1

OnnxLpPool_11

OnnxScatterND_18

OnnxClip_11

OnnxLpPool_18

OnnxScatter_11

OnnxClip_12

OnnxLpPool_2

OnnxScatter_9

OnnxClip_13

OnnxMatMul

OnnxSelu

OnnxClip_6

OnnxMatMulInteger

OnnxSelu_1

OnnxCol2Im

OnnxMatMulInteger_10

OnnxSelu_6

OnnxCol2Im_18

OnnxMatMul_1

OnnxSequenceAt

OnnxCompress

OnnxMatMul_13

OnnxSequenceAt_11

OnnxCompress_11

OnnxMatMul_9

OnnxSequenceConstruct

OnnxCompress_9

OnnxMax

OnnxSequenceConstruct_11

OnnxConcat

OnnxMaxPool

OnnxSequenceEmpty

OnnxConcatFromSequence

OnnxMaxPool_1

OnnxSequenceEmpty_11

OnnxConcatFromSequence_11

OnnxMaxPool_10

OnnxSequenceErase

OnnxConcat_1

OnnxMaxPool_11

OnnxSequenceErase_11

OnnxConcat_11

OnnxMaxPool_12

OnnxSequenceInsert

OnnxConcat_13

OnnxMaxPool_8

OnnxSequenceInsert_11

OnnxConcat_4

OnnxMaxRoiPool

OnnxSequenceLength

OnnxConstant

OnnxMaxRoiPool_1

OnnxSequenceLength_11

OnnxConstantOfShape

OnnxMaxUnpool

OnnxSequenceMap

OnnxConstantOfShape_20

OnnxMaxUnpool_11

OnnxSequenceMap_17

OnnxConstantOfShape_9

OnnxMaxUnpool_9

OnnxShape

OnnxConstant_1

OnnxMax_1

OnnxShape_1

OnnxConstant_11

OnnxMax_12

OnnxShape_13

OnnxConstant_12

OnnxMax_13

OnnxShape_15

OnnxConstant_13

OnnxMax_6

OnnxShape_19

OnnxConstant_19

OnnxMax_8

OnnxShrink

OnnxConstant_9

OnnxMean

OnnxShrink_9

OnnxConv

OnnxMeanVarianceNormalization

OnnxSigmoid

OnnxConvInteger

OnnxMeanVarianceNormalization_13

OnnxSigmoid_1

OnnxConvInteger_10

OnnxMeanVarianceNormalization_9

OnnxSigmoid_13

OnnxConvTranspose

OnnxMean_1

OnnxSigmoid_6

OnnxConvTranspose_1

OnnxMean_13

OnnxSign

OnnxConvTranspose_11

OnnxMean_6

OnnxSign_13

OnnxConv_1

OnnxMean_8

OnnxSign_9

OnnxConv_11

OnnxMelWeightMatrix

OnnxSin

OnnxCos

OnnxMelWeightMatrix_17

OnnxSin_7

OnnxCos_7

OnnxMin

OnnxSinh

OnnxCosh

OnnxMin_1

OnnxSinh_9

OnnxCosh_9

OnnxMin_12

OnnxSize

OnnxCumSum

OnnxMin_13

OnnxSize_1

OnnxCumSum_11

OnnxMin_6

OnnxSize_13

OnnxCumSum_14

OnnxMin_8

OnnxSize_19

OnnxDFT

OnnxMish

OnnxSlice

OnnxDFT_17

OnnxMish_18

OnnxSlice_1

OnnxDeformConv

OnnxMod

OnnxSlice_10

OnnxDeformConv_19

OnnxMod_10

OnnxSlice_11

OnnxDepthToSpace

OnnxMod_13

OnnxSlice_13

OnnxDepthToSpace_1

OnnxMomentum

OnnxSoftmax

OnnxDepthToSpace_11

OnnxMomentum_1

OnnxSoftmaxCrossEntropyLoss

OnnxDepthToSpace_13

OnnxMul

OnnxSoftmaxCrossEntropyLoss_12

OnnxDequantizeLinear

OnnxMul_1

OnnxSoftmaxCrossEntropyLoss_13

OnnxDequantizeLinear_10

OnnxMul_13

OnnxSoftmax_1

OnnxDequantizeLinear_13

OnnxMul_14

OnnxSoftmax_11

OnnxDequantizeLinear_19

OnnxMul_6

OnnxSoftmax_13

OnnxDet

OnnxMul_7

OnnxSoftplus

OnnxDet_11

OnnxMultinomial

OnnxSoftplus_1

OnnxDictVectorizer

OnnxMultinomial_7

OnnxSoftsign

OnnxDictVectorizer_1

OnnxNeg

OnnxSoftsign_1

OnnxDiv

OnnxNeg_1

OnnxSpaceToDepth

OnnxDiv_1

OnnxNeg_13

OnnxSpaceToDepth_1

OnnxDiv_13

OnnxNeg_6

OnnxSpaceToDepth_13

OnnxDiv_14

OnnxNegativeLogLikelihoodLoss

OnnxSplit

OnnxDiv_6

OnnxNegativeLogLikelihoodLoss_12

OnnxSplitToSequence

OnnxDiv_7

OnnxNegativeLogLikelihoodLoss_13

OnnxSplitToSequence_11

OnnxDropout

OnnxNonMaxSuppression

OnnxSplit_1

OnnxDropout_1

OnnxNonMaxSuppression_10

OnnxSplit_11

OnnxDropout_10

OnnxNonMaxSuppression_11

OnnxSplit_13

OnnxDropout_12

OnnxNonZero

OnnxSplit_18

OnnxDropout_13

OnnxNonZero_13

OnnxSplit_2

OnnxDropout_6

OnnxNonZero_9

OnnxSqrt

OnnxDropout_7

OnnxNormalizer

OnnxSqrt_1

OnnxDynamicQuantizeLinear

OnnxNormalizer_1

OnnxSqrt_13

OnnxDynamicQuantizeLinear_11

OnnxNot

OnnxSqrt_6

OnnxDynamicQuantizeLinear_20

OnnxNot_1

OnnxSqueeze

OnnxEinsum

OnnxOneHot

OnnxSqueeze_1

OnnxEinsum_12

OnnxOneHotEncoder

OnnxSqueeze_11

OnnxElu

OnnxOneHotEncoder_1

OnnxSqueeze_13

OnnxElu_1

OnnxOneHot_11

OnnxStringConcat

OnnxElu_6

OnnxOneHot_9

OnnxStringConcat_20

OnnxEqual

OnnxOptional

OnnxStringNormalizer

OnnxEqual_1

OnnxOptionalGetElement

OnnxStringNormalizer_10

OnnxEqual_11

OnnxOptionalGetElement_15

OnnxStringSplit

OnnxEqual_13

OnnxOptionalGetElement_18

OnnxStringSplit_20

OnnxEqual_19

OnnxOptionalHasElement

OnnxSub

OnnxEqual_7

OnnxOptionalHasElement_15

OnnxSub_1

OnnxErf

OnnxOptionalHasElement_18

OnnxSub_13

OnnxErf_13

OnnxOptional_15

OnnxSub_14

OnnxErf_9

OnnxOr

OnnxSub_6

OnnxExp

OnnxOr_1

OnnxSub_7

OnnxExp_1

OnnxOr_7

OnnxSum

OnnxExp_13

OnnxPRelu

OnnxSum_1

OnnxExp_6

OnnxPRelu_1

OnnxSum_13

OnnxExpand

OnnxPRelu_16

OnnxSum_6

OnnxExpand_13

OnnxPRelu_6

OnnxSum_8

OnnxExpand_8

OnnxPRelu_7

OnnxTan

OnnxEyeLike

OnnxPRelu_9

OnnxTan_7

OnnxEyeLike_9

OnnxPad

OnnxTanh

OnnxFeatureVectorizer

OnnxPad_1

OnnxTanh_1

OnnxFeatureVectorizer_1

OnnxPad_11

OnnxTanh_13

OnnxFlatten

OnnxPad_13

OnnxTanh_6

OnnxFlatten_1

OnnxPad_18

OnnxTfIdfVectorizer

OnnxFlatten_11

OnnxPad_19

OnnxTfIdfVectorizer_9

OnnxFlatten_13

OnnxPad_2

OnnxThresholdedRelu

OnnxFlatten_9

OnnxPow

OnnxThresholdedRelu_10

OnnxFloor

OnnxPow_1

OnnxTile

OnnxFloor_1

OnnxPow_12

OnnxTile_1

OnnxFloor_13

OnnxPow_13

OnnxTile_13

OnnxFloor_6

OnnxPow_15

OnnxTile_6

OnnxGRU

OnnxPow_7

OnnxTopK

OnnxGRU_1

OnnxQLinearConv

OnnxTopK_1

OnnxGRU_14

OnnxQLinearConv_10

OnnxTopK_10

OnnxGRU_3

OnnxQLinearMatMul

OnnxTopK_11

OnnxGRU_7

OnnxQLinearMatMul_10

OnnxTranspose

OnnxGather

OnnxQuantizeLinear

OnnxTranspose_1

OnnxGatherElements

OnnxQuantizeLinear_10

OnnxTranspose_13

OnnxGatherElements_11

OnnxQuantizeLinear_13

OnnxTreeEnsembleClassifier

OnnxGatherElements_13

OnnxQuantizeLinear_19

OnnxTreeEnsembleClassifier_1

OnnxGatherND

OnnxRNN

OnnxTreeEnsembleClassifier_3

OnnxGatherND_11

OnnxRNN_1

OnnxTreeEnsembleRegressor

OnnxGatherND_12

OnnxRNN_14

OnnxTreeEnsembleRegressor_1

OnnxGatherND_13

OnnxRNN_7

OnnxTreeEnsembleRegressor_3

OnnxGather_1

OnnxRandomNormal

OnnxTrilu

OnnxGather_11

OnnxRandomNormalLike

OnnxTrilu_14

OnnxGather_13

OnnxRandomNormalLike_1

OnnxUnique

OnnxGelu

OnnxRandomNormal_1

OnnxUnique_11

OnnxGelu_20

OnnxRandomUniform

OnnxUnsqueeze

OnnxGemm

OnnxRandomUniformLike

OnnxUnsqueeze_1

OnnxGemm_1

OnnxRandomUniformLike_1

OnnxUnsqueeze_11

OnnxGemm_11

OnnxRandomUniform_1

OnnxUnsqueeze_13

OnnxGemm_13

OnnxRange

OnnxUpsample

OnnxGemm_6

OnnxRange_11

OnnxUpsample_10

OnnxGemm_7

OnnxReciprocal

OnnxUpsample_7

OnnxGemm_9

OnnxReciprocal_1

OnnxUpsample_9

OnnxGlobalAveragePool

OnnxReciprocal_13

OnnxWhere

OnnxGlobalAveragePool_1

OnnxReciprocal_6

OnnxWhere_16

OnnxGlobalLpPool

OnnxReduceL1

OnnxWhere_9

OnnxGlobalLpPool_1

OnnxReduceL1_1

OnnxXor

OnnxGlobalLpPool_2

OnnxReduceL1_11

OnnxXor_1

OnnxGlobalMaxPool

OnnxReduceL1_13

OnnxXor_7

OnnxGlobalMaxPool_1

OnnxReduceL1_18

OnnxZipMap

OnnxGradient

OnnxReduceL2

OnnxZipMap_1

OnnxGradient_1

OnnxReduceL2_1

OnnxAbs#

class skl2onnx.algebra.onnx_ops.OnnxAbs(*args, **kwargs)#

Version

Onnx name: Abs

This version of the operator has been available since version 13.

Summary

Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxAbs_1#

class skl2onnx.algebra.onnx_ops.OnnxAbs_1(*args, **kwargs)#

Version

Onnx name: Abs

This version of the operator has been available since version 1.

Summary

Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAbs_13#

class skl2onnx.algebra.onnx_ops.OnnxAbs_13(*args, **kwargs)#

Version

Onnx name: Abs

This version of the operator has been available since version 13.

Summary

Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxAbs_6#

class skl2onnx.algebra.onnx_ops.OnnxAbs_6(*args, **kwargs)#

Version

Onnx name: Abs

This version of the operator has been available since version 6.

Summary

Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxAcos#

class skl2onnx.algebra.onnx_ops.OnnxAcos(*args, **kwargs)#

Version

Onnx name: Acos

This version of the operator has been available since version 7.

Summary

Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arccosine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAcos_7#

class skl2onnx.algebra.onnx_ops.OnnxAcos_7(*args, **kwargs)#

Version

Onnx name: Acos

This version of the operator has been available since version 7.

Summary

Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arccosine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAcosh#

class skl2onnx.algebra.onnx_ops.OnnxAcosh(*args, **kwargs)#

Version

Onnx name: Acosh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arccosine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAcosh_9#

class skl2onnx.algebra.onnx_ops.OnnxAcosh_9(*args, **kwargs)#

Version

Onnx name: Acosh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arccosine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdagrad#

class skl2onnx.algebra.onnx_ops.OnnxAdagrad(*args, **kwargs)#

Version

Onnx name: Adagrad

This version of the operator has been available since version 1 of domain ai.onnx.preview.training.

Summary

Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.

Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:

  • The initial learning-rate “R”.

  • The update count “T”. That is, the number of training iterations conducted.

  • A L2-norm regularization coefficient “norm_coefficient”.

  • A learning-rate decay factor “decay_factor”.

  • A small constant “epsilon” to avoid dividing-by-zero.

At each ADAGRAD iteration, the optimized tensors are moved along a direction computed based on their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, variables in this operator’s input list are sequentially “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”), and then the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.

Let “+”, “-”, “*”, and “/” be element-wise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:

// Compute a scalar learning-rate factor. At the first update of X, T is generally
// 0 (0-based update index) or 1 (1-based update index).
r = R / (1 + T * decay_factor);

// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G;

// Compute new accumulated squared gradient.
H_new = H + G_regularized * G_regularized;

// Compute the adaptive part of per-coordinate learning rate. Note that Sqrt(...)
// computes element-wise square-root.
H_adaptive = Sqrt(H_new) + epsilon

// Compute the new value of "X".
X_new = X - r * G_regularized / H_adaptive;

If one assigns this operator to optimize multiple inputs, for example “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.

Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of the Figure 1’s composite mirror descent update.
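
The pseudo code translates almost line for line into NumPy. The helper below is a hypothetical single-tensor sketch for illustration, not part of skl2onnx or the ONNX runtime:

import numpy as np

def adagrad_step(R, T, X, G, H, decay_factor=0.0, epsilon=1e-6,
                 norm_coefficient=0.0):
    r = R / (1 + T * decay_factor)             # decayed scalar learning rate
    G_regularized = norm_coefficient * X + G   # add the L2-regularization gradient
    H_new = H + G_regularized * G_regularized  # accumulate squared gradients
    H_adaptive = np.sqrt(H_new) + epsilon      # per-coordinate scaling
    X_new = X - r * G_regularized / H_adaptive
    return X_new, H_new                        # the operator's outputs X_new, H_new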

Attributes

  • decay_factor: The decay factor of learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Defaults to 0 so that increasing update counts doesn’t reduce the learning rate. Default value is name: "decay_factor" type: FLOAT f: 0

  • epsilon: Small scalar to avoid dividing by zero. Default value is name: "epsilon" type: FLOAT f: 1e-06

  • norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient" type: FLOAT f: 0

Inputs

Between 3 and 2147483647 inputs.

  • R (heterogeneous)T1: The initial learning rate.

  • T (heterogeneous)T2: The update count of “X”. It should be a scalar.

  • inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic)T3: Updated values of optimized tensors, followed by their updated values of accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].

Type Constraints

  • T1 tensor(float), tensor(double): Constrain input types to float scalars.

  • T2 tensor(int64): Constrain input types to 64-bit integer scalars.

  • T3 tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdagrad_1#

class skl2onnx.algebra.onnx_ops.OnnxAdagrad_1(*args, **kwargs)#

Version

Onnx name: Adagrad

This version of the operator has been available since version 1 of domain ai.onnx.preview.training.

Summary

Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.

Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:

  • The initial learning-rate “R”.

  • The update count “T”. That is, the number of training iterations conducted.

  • A L2-norm regularization coefficient “norm_coefficient”.

  • A learning-rate decay factor “decay_factor”.

  • A small constant “epsilon” to avoid dividing-by-zero.

At each ADAGRAD iteration, the optimized tensors are moved along a direction computed based on their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, variables in this operator’s input list are sequentially “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”), and then the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.

Let “+”, “-”, “*”, and “/” be element-wise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:

// Compute a scalar learning-rate factor. At the first update of X, T is generally
// 0 (0-based update index) or 1 (1-based update index).
r = R / (1 + T * decay_factor);

// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G;

// Compute new accumulated squared gradient.
H_new = H + G_regularized * G_regularized;

// Compute the adaptive part of per-coordinate learning rate. Note that Sqrt(...)
// computes element-wise square-root.
H_adaptive = Sqrt(H_new) + epsilon

// Compute the new value of "X".
X_new = X - r * G_regularized / H_adaptive;

If one assigns this operator to optimize multiple inputs, for example “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.

Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of the Figure 1’s composite mirror descent update.

Attributes

  • decay_factor: The decay factor of learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Defaults to 0 so that increasing update counts doesn’t reduce the learning rate. Default value is name: "decay_factor" type: FLOAT f: 0

  • epsilon: Small scalar to avoid dividing by zero. Default value is name: "epsilon" type: FLOAT f: 1e-06

  • norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient" type: FLOAT f: 0

Inputs

Between 3 and 2147483647 inputs.

  • R (heterogeneous)T1: The initial learning rate.

  • T (heterogeneous)T2: The update count of “X”. It should be a scalar.

  • inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic)T3: Updated values of optimized tensors, followed by their updated values of accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].

Type Constraints

  • T1 tensor(float), tensor(double): Constrain input types to float scalars.

  • T2 tensor(int64): Constrain input types to 64-bit integer scalars.

  • T3 tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdam#

class skl2onnx.algebra.onnx_ops.OnnxAdam(*args, **kwargs)#

Version

Onnx name: Adam

This version of the operator has been available since version 1 of domain ai.onnx.preview.training.

Summary

Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.

Let’s define the behavior of this operator. First of all, Adam requires some parameters:

  • The learning-rate “R”.

  • The update count “T”. That is, the number of training iterations conducted.

  • A L2-norm regularization coefficient “norm_coefficient”.

  • A small constant “epsilon” to avoid dividing-by-zero.

  • Two coefficients, “alpha” and “beta”.

At each Adam iteration, the optimized tensors are moved along a direction computed based on their exponentially-averaged historical gradient and exponentially-averaged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of the required information is

  • the value of “X”,

  • “X“‘s gradient (denoted by “G”),

  • “X“‘s exponentially-averaged historical gradient (denoted by “V”), and

  • “X“‘s exponentially-averaged historical squared gradient (denoted by “H”).

Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are

  • the new value of “X” (called “X_new”),

  • the new exponentially-averaged historical gradient (denoted by “V_new”), and

  • the new exponentially-averaged historical squared gradient (denoted by “H_new”).

Those outputs are computed following the pseudo code below.

Let “+”, “-”, “*”, and “/” be element-wise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:

// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G

// Update exponentially-averaged historical gradient.
V_new = alpha * V + (1 - alpha) * G_regularized

// Update exponentially-averaged historical squared gradient.
H_new = beta * H + (1 - beta) * G_regularized * G_regularized

// Compute the element-wise square-root of H_new. V_new will be element-wisely
// divided by H_sqrt for a better update direction.
H_sqrt = Sqrt(H_new) + epsilon

// Compute learning-rate. Note that "alpha**T"/"beta**T" is alpha's/beta's T-th power.
R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R

// Compute new value of "X".
X_new = X - R_adjusted * V_new / H_sqrt

// Post-update regularization.
X_final = (1 - norm_coefficient_post) * X_new

If there are multiple inputs to be optimized, the pseudo code will be applied independently to each of them.
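
Again the pseudo code maps directly onto NumPy. The helper below is a hypothetical single-tensor sketch, not the operator’s implementation:

import numpy as np

def adam_step(R, T, X, G, V, H, alpha=0.9, beta=0.999, epsilon=1e-6,
              norm_coefficient=0.0, norm_coefficient_post=0.0):
    G_regularized = norm_coefficient * X + G
    V_new = alpha * V + (1 - alpha) * G_regularized        # first moment
    H_new = beta * H + (1 - beta) * G_regularized ** 2     # second moment
    H_sqrt = np.sqrt(H_new) + epsilon
    R_adjusted = R * np.sqrt(1 - beta ** T) / (1 - alpha ** T) if T > 0 else R
    X_new = X - R_adjusted * V_new / H_sqrt
    X_final = (1 - norm_coefficient_post) * X_new          # post-update shrink
    return X_final, V_new, H_new                           # X_new, V_new, H_new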

Attributes

  • alpha: Coefficient of previously accumulated gradient in running average. Default to 0.9. Default value is name: "alpha" type: FLOAT f: 0.9

  • beta: Coefficient of previously accumulated squared-gradient in running average. Default to 0.999. Default value is name: "beta" type: FLOAT f: 0.999

  • epsilon: Small scalar to avoid dividing by zero. Default value is name: "epsilon" type: FLOAT f: 1e-06

  • norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient" type: FLOAT f: 0

  • norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient_post" type: FLOAT f: 0

Inputs

Between 3 and 2147483647 inputs.

  • R (heterogeneous)T1: The initial learning rate.

  • T (heterogeneous)T2: The update count of “X”. It should be a scalar.

  • inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].

Type Constraints

  • T1 tensor(float), tensor(double): Constrain input types to float scalars.

  • T2 tensor(int64): Constrain input types to 64-bit integer scalars.

  • T3 tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdam_1#

class skl2onnx.algebra.onnx_ops.OnnxAdam_1(*args, **kwargs)#

Version

Onnx name: Adam

This version of the operator has been available since version 1 of domain ai.onnx.preview.training.

Summary

Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.

Let’s define the behavior of this operator. First of all, Adam requires some parameters:

  • The learning-rate “R”.

  • The update count “T”. That is, the number of training iterations conducted.

  • A L2-norm regularization coefficient “norm_coefficient”.

  • A small constant “epsilon” to avoid dividing-by-zero.

  • Two coefficients, “alpha” and “beta”.

At each Adam iteration, the optimized tensors are moved along a direction computed based on their exponentially-averaged historical gradient and exponentially-averaged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of the required information is

  • the value of “X”,

  • “X“‘s gradient (denoted by “G”),

  • “X“‘s exponentially-averaged historical gradient (denoted by “V”), and

  • “X“‘s exponentially-averaged historical squared gradient (denoted by “H”).

Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are

  • the new value of “X” (called “X_new”),

  • the new exponentially-averaged historical gradient (denoted by “V_new”), and

  • the new exponentially-averaged historical squared gradient (denoted by “H_new”).

Those outputs are computed following the pseudo code below.

Let “+”, “-”, “*”, and “/” be element-wise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:

// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G

// Update exponentially-averaged historical gradient.
V_new = alpha * V + (1 - alpha) * G_regularized

// Update exponentially-averaged historical squared gradient.
H_new = beta * H + (1 - beta) * G_regularized * G_regularized

// Compute the element-wise square-root of H_new. V_new will be element-wisely
// divided by H_sqrt for a better update direction.
H_sqrt = Sqrt(H_new) + epsilon

// Compute learning-rate. Note that "alpha**T"/"beta**T" is alpha's/beta's T-th power.
R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R

// Compute new value of "X".
X_new = X - R_adjusted * V_new / H_sqrt

// Post-update regularization.
X_final = (1 - norm_coefficient_post) * X_new

If there are multiple inputs to be optimized, the pseudo code will be applied independently to each of them.

Attributes

  • alpha: Coefficient of previously accumulated gradient in running average. Default to 0.9. Default value is name: "alpha" type: FLOAT f: 0.9

  • beta: Coefficient of previously accumulated squared-gradient in running average. Default to 0.999. Default value is name: "beta" type: FLOAT f: 0.999

  • epsilon: Small scalar to avoid dividing by zero. Default value is name: "epsilon" type: FLOAT f: 1e-06

  • norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient" type: FLOAT f: 0

  • norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is name: "norm_coefficient_post" type: FLOAT f: 0

Inputs

Between 3 and 2147483647 inputs.

  • R (heterogeneous)T1: The initial learning rate.

  • T (heterogeneous)T2: The update count of “X”. It should be a scalar.

  • inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].

Outputs

Between 1 and 2147483647 outputs.

  • outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].

Type Constraints

  • T1 tensor(float), tensor(double): Constrain input types to float scalars.

  • T2 tensor(int64): Constrain input types to 64-bit integer scalars.

  • T3 tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdd#

class skl2onnx.algebra.onnx_ops.OnnxAdd(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 14.

Summary

Performs element-wise binary addition (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.

Inputs

  • A (heterogeneous)T: First operand.

  • B (heterogeneous)T: Second operand.

Outputs

  • C (heterogeneous)T: Result, has same element type as two inputs

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
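
A short sketch of multidirectional broadcasting with the algebra class (shapes chosen purely for illustration):

import numpy as np
from skl2onnx.algebra.onnx_ops import OnnxAdd

A = np.ones((2, 3), dtype=np.float32)
B = np.arange(3, dtype=np.float32).reshape((1, 3))     # broadcast over rows of A
node = OnnxAdd("A", "B", op_version=14, output_names=["C"])
onx = node.to_onnx({"A": A, "B": B})                   # C has shape (2, 3)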

OnnxAdd_1#

class skl2onnx.algebra.onnx_ops.OnnxAdd_1(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 1.

Summary

Performs element-wise binary addition (with limited broadcast support).

If necessary, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or have its shape as a contiguous subset of the first tensor’s shape. The start of the mutually equal shape is specified by the argument “axis”; if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.

For example, the following tensor shapes are supported (with broadcast=1):

shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0

Attribute broadcast=1 needs to be passed to enable broadcasting.

Attributes

  • broadcast: Pass 1 to enable broadcasting Default value is name: "broadcast" type: INT i: 0

Inputs

  • A (heterogeneous)T: First operand, should share the type with the second operand.

  • B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.

Outputs

  • C (heterogeneous)T: Result, has same dimensions and type as A

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAdd_13#

class skl2onnx.algebra.onnx_ops.OnnxAdd_13(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 13.

Summary

Performs element-wise binary addition (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First operand.

  • B (heterogeneous)T: Second operand.

Outputs

  • C (heterogeneous)T: Result, has same element type as two inputs

Type Constraints

  • T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to high-precision numeric tensors.

OnnxAdd_14#

class skl2onnx.algebra.onnx_ops.OnnxAdd_14(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 14.

Summary

Performs element-wise binary addition (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.

Inputs

  • A (heterogeneous)T: First operand.

  • B (heterogeneous)T: Second operand.

Outputs

  • C (heterogeneous)T: Result, has same element type as two inputs

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxAdd_6#

class skl2onnx.algebra.onnx_ops.OnnxAdd_6(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 6.

Summary

Performs element-wise binary addition (with limited broadcast support).

If necessary, the right-hand-side argument will be broadcast to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or have its shape as a contiguous subset of the first tensor’s shape. The start of the mutually equal shape is specified by the argument “axis”; if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.

For example, the following tensor shapes are supported (with broadcast=1):

shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0

Attribute broadcast=1 needs to be passed to enable broadcasting.

Attributes

  • broadcast: Pass 1 to enable broadcasting Default value is name: "broadcast" type: INT i: 0

Inputs

  • A (heterogeneous)T: First operand, should share the type with the second operand.

  • B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.

Outputs

  • C (heterogeneous)T: Result, has same dimensions and type as A

Type Constraints

  • T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.

OnnxAdd_7#

class skl2onnx.algebra.onnx_ops.OnnxAdd_7(*args, **kwargs)#

Version

Onnx name: Add

This version of the operator has been available since version 7.

Summary

Performs element-wise binary addition (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First operand.

  • B (heterogeneous)T: Second operand.

Outputs

  • C (heterogeneous)T: Result, has same element type as two inputs

Type Constraints

  • T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.

OnnxAffineGrid#

class skl2onnx.algebra.onnx_ops.OnnxAffineGrid(*args, **kwargs)#

Version

Onnx name: AffineGrid

This version of the operator has been available since version 20.

Summary

Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta (https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html). An affine matrix theta is applied to a position tensor represented in its homogeneous expression. Here is an example in 3D:

[r00, r01, r02, t0]   [x]   [x']
[r10, r11, r12, t1] * [y] = [y']
[r20, r21, r22, t2]   [z]   [z']
[0,   0,   0,   1 ]   [1]   [1 ]

where (x, y, z) is the position in the original space, (x’, y’, z’) is the position in the output space. The last row is always [0, 0, 0, 1] and is not stored in the affine matrix. Therefore we have theta of shape (N, 2, 3) for 2D or (N, 3, 4) for 3D.

Input size is used to define a grid of positions evenly spaced in the original 2D or 3D space, with dimensions ranging from -1 to 1. The output grid contains positions in the output space.

When align_corners=1, consider -1 and 1 to refer to the centers of the corner pixels (mark v in illustration).

v            v            v            v
|-------------------|------------------|
-1                  0                  1

When align_corners=0, consider -1 and 1 to refer to the outer edge of the corner pixels.

    v        v         v         v
|------------------|-------------------|
-1                 0                   1

Attributes

  • align_corners: If align_corners=1, consider -1 and 1 to refer to the centers of the corner pixels; if align_corners=0, consider -1 and 1 to refer to the outer edge of the corner pixels. Default value is name: "align_corners" type: INT i: 0

Inputs

  • theta (heterogeneous)T1: input batch of affine matrices with shape (N, 2, 3) for 2D or (N, 3, 4) for 3D

  • size (heterogeneous)T2: the target output image size (N, C, H, W) for 2D or (N, C, D, H, W) for 3D

Outputs

  • grid (heterogeneous)T1: output tensor of shape (N, C, H, W, 2) of 2D sample coordinates or (N, C, D, H, W, 3) of 3D sample coordinates.

Type Constraints

  • T1 tensor(bfloat16), tensor(float16), tensor(float), tensor(double): Constrain grid types to float tensors.

  • T2 tensor(int64): Constrain size’s type to int64 tensors.

OnnxAffineGrid_20#

class skl2onnx.algebra.onnx_ops.OnnxAffineGrid_20(*args, **kwargs)#

Version

Onnx name: AffineGrid

This version of the operator has been available since version 20.

Summary

Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta (https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html). An affine matrix theta is applied to a position tensor represented in its homogeneous expression. Here is an example in 3D:

[r00, r01, r02, t0]   [x]   [x']
[r10, r11, r12, t1] * [y] = [y']
[r20, r21, r22, t2]   [z]   [z']
[0,   0,   0,   1 ]   [1]   [1 ]

where (x, y, z) is the position in the original space, (x’, y’, z’) is the position in the output space. The last row is always [0, 0, 0, 1] and is not stored in the affine matrix. Therefore we have theta of shape (N, 2, 3) for 2D or (N, 3, 4) for 3D.

Input size is used to define a grid of positions evenly spaced in the original 2D or 3D space, with dimensions ranging from -1 to 1. The output grid contains positions in the output space.

When align_corners=1, consider -1 and 1 to refer to the centers of the corner pixels (mark v in illustration).

v            v            v            v
|-------------------|------------------|
-1                  0                  1

When align_corners=0, consider -1 and 1 to refer to the outer edge of the corner pixels.

    v        v         v         v
|------------------|-------------------|
-1                 0                   1

Attributes

  • align_corners: If align_corners=1, consider -1 and 1 to refer to the centers of the corner pixels; if align_corners=0, consider -1 and 1 to refer to the outer edge of the corner pixels. Default value is name: "align_corners" type: INT i: 0

Inputs

  • theta (heterogeneous)T1: input batch of affine matrices with shape (N, 2, 3) for 2D or (N, 3, 4) for 3D

  • size (heterogeneous)T2: the target output image size (N, C, H, W) for 2D or (N, C, D, H, W) for 3D

Outputs

  • grid (heterogeneous)T1: output tensor of shape (N, C, H, W, 2) of 2D sample coordinates or (N, C, D, H, W, 3) of 3D sample coordinates.

Type Constraints

  • T1 tensor(bfloat16), tensor(float16), tensor(float), tensor(double): Constrain grid types to float tensors.

  • T2 tensor(int64): Constrain size’s type to int64 tensors.

OnnxAnd#

class skl2onnx.algebra.onnx_ops.OnnxAnd(*args, **kwargs)#

Version

Onnx name: And

This version of the operator has been available since version 7.

Summary

Returns the tensor that results from performing the and logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the logical operator.

  • B (heterogeneous)T: Second input operand for the logical operator.

Outputs

  • C (heterogeneous)T1: Result tensor.

Type Constraints

  • T tensor(bool): Constrain input to boolean tensor.

  • T1 tensor(bool): Constrain output to boolean tensor.

OnnxAnd_1#

class skl2onnx.algebra.onnx_ops.OnnxAnd_1(*args, **kwargs)#

Version

Onnx name: And

This version of the operator has been available since version 1.

Summary

Returns the tensor that results from performing the and logical operation elementwise on the input tensors A and B.

If broadcasting is enabled, the right-hand-side argument will be broadcasted to match the shape of left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.

Attributes

  • broadcast: Enable broadcasting Default value is name: "broadcast" type: INT i: 0

Inputs

  • A (heterogeneous)T: Left input tensor for the logical operator.

  • B (heterogeneous)T: Right input tensor for the logical operator.

Outputs

  • C (heterogeneous)T1: Result tensor.

Type Constraints

  • T tensor(bool): Constrain input to boolean tensor.

  • T1 tensor(bool): Constrain output to boolean tensor.

OnnxAnd_7#

class skl2onnx.algebra.onnx_ops.OnnxAnd_7(*args, **kwargs)#

Version

Onnx name: And

This version of the operator has been available since version 7.

Summary

Returns the tensor that results from performing the and logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the logical operator.

  • B (heterogeneous)T: Second input operand for the logical operator.

Outputs

  • C (heterogeneous)T1: Result tensor.

Type Constraints

  • T tensor(bool): Constrain input to boolean tensor.

  • T1 tensor(bool): Constrain output to boolean tensor.

OnnxArgMax#

class skl2onnx.algebra.onnx_ops.OnnxArgMax(*args, **kwargs)#

Version

Onnx name: ArgMax

This version of the operator has been available since version 13.

Summary

Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the {name} appears in multiple indices, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
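
The keepdims and select_last_index semantics can be mimicked in NumPy (a sketch; np.argmax has no select_last_index, so the last-occurrence variant reverses the axis, and keepdims requires NumPy >= 1.22):

import numpy as np

data = np.array([[2.0, 5.0, 5.0],
                 [9.0, 1.0, 9.0]], dtype=np.float32)
np.argmax(data, axis=1, keepdims=True)     # first occurrence -> [[1], [0]], shape (2, 1)
# select_last_index=1: reverse the axis, then remap the index.
data.shape[1] - 1 - np.argmax(data[:, ::-1], axis=1)    # last occurrence -> [2, 2]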

OnnxArgMax_1#

class skl2onnx.algebra.onnx_ops.OnnxArgMax_1(*args, **kwargs)#

Version

Onnx name: ArgMax

This version of the operator has been available since version 1.

Summary

Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMax_11#

class skl2onnx.algebra.onnx_ops.OnnxArgMax_11(*args, **kwargs)#

Version

Onnx name: ArgMax

This version of the operator has been available since version 11.

Summary

Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMax_12#

class skl2onnx.algebra.onnx_ops.OnnxArgMax_12(*args, **kwargs)#

Version

Onnx name: ArgMax

This version of the operator has been available since version 12.

Summary

Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the {name} appears in multiple indices, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMax_13#

class skl2onnx.algebra.onnx_ops.OnnxArgMax_13(*args, **kwargs)#

Version

Onnx name: ArgMax

This version of the operator has been available since version 13.

Summary

Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the {name} appears in multiple indices, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxArgMin#

class skl2onnx.algebra.onnx_ops.OnnxArgMin(*args, **kwargs)#

Version

Onnx name: ArgMin

This version of the operator has been available since version 13.

Summary

Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the min value appears multiple times, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxArgMin_1#

class skl2onnx.algebra.onnx_ops.OnnxArgMin_1(*args, **kwargs)#

Version

Onnx name: ArgMin

This version of the operator has been available since version 1.

Summary

Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMin_11#

class skl2onnx.algebra.onnx_ops.OnnxArgMin_11(*args, **kwargs)#

Version

Onnx name: ArgMin

This version of the operator has been available since version 11.

Summary

Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMin_12#

class skl2onnx.algebra.onnx_ops.OnnxArgMin_12(*args, **kwargs)#

Version

Onnx name: ArgMin

This version of the operator has been available since version 12.

Summary

Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the min value appears multiple times, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxArgMin_13#

class skl2onnx.algebra.onnx_ops.OnnxArgMin_13(*args, **kwargs)#

Version

Onnx name: ArgMin

This version of the operator has been available since version 13.

Summary

Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.

Attributes

  • axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is name: "axis" type: INT i: 0

  • keepdims: Keep the reduced dimension or not, default 1 means keep reduced dimension. Default value is name: "keepdims" type: INT i: 1

  • select_last_index: Whether to select the last index or the first index if the min value appears multiple times, default is False (first index). Default value is name: "select_last_index" type: INT i: 0

Inputs

  • data (heterogeneous)T: An input tensor.

Outputs

  • reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
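The select_last_index attribute only matters when the minimum is not unique. A minimal sketch (illustrative names) contrasting the two behaviours on a row where the minimum appears twice:

import numpy as np
import onnxruntime as rt
from skl2onnx.algebra.onnx_ops import OnnxArgMin

x = np.array([[1.0, 0.0, 0.0]], dtype=np.float32)  # the minimum 0.0 appears twice

def run(select_last_index):
    node = OnnxArgMin('X', axis=1, keepdims=0,
                      select_last_index=select_last_index,
                      output_names=['Y'], op_version=13)
    onx = node.to_onnx({'X': x})
    sess = rt.InferenceSession(onx.SerializeToString(),
                               providers=['CPUExecutionProvider'])
    return sess.run(None, {'X': x})[0]

print(run(0), run(1))  # expected: [1] [2]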

OnnxArrayFeatureExtractor#

class skl2onnx.algebra.onnx_ops.OnnxArrayFeatureExtractor(*args, **kwargs)#

Version

Onnx name: ArrayFeatureExtractor

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Select elements of the input tensor based on the indices passed.

The indices are applied to the last axes of the tensor.

Inputs

  • X (heterogeneous)T: Data to be selected

  • Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.

Outputs

  • Z (heterogeneous)T: Selected output data as an array

Type Constraints

  • T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.

OnnxArrayFeatureExtractor_1#

class skl2onnx.algebra.onnx_ops.OnnxArrayFeatureExtractor_1(*args, **kwargs)#

Version

Onnx name: ArrayFeatureExtractor

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Select elements of the input tensor based on the indices passed.

The indices are applied to the last axes of the tensor.

Inputs

  • X (heterogeneous)T: Data to be selected

  • Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.

Outputs

  • Z (heterogeneous)T: Selected output data as an array

Type Constraints

  • T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.
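This operator is frequently emitted by skl2onnx converters to select columns. A minimal sketch (illustrative names) extracting two columns of a float matrix; the constant index tensor becomes an initializer of the graph:

import numpy as np
import onnxruntime as rt
from skl2onnx.algebra.onnx_ops import OnnxArrayFeatureExtractor

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]], dtype=np.float32)
idx = np.array([0, 2], dtype=np.int64)  # indices applied to the last axis

node = OnnxArrayFeatureExtractor('X', idx, output_names=['Z'])
onx = node.to_onnx({'X': x})
sess = rt.InferenceSession(onx.SerializeToString(),
                           providers=['CPUExecutionProvider'])
print(sess.run(None, {'X': x})[0])
# expected: [[1. 3.]
#            [4. 6.]]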

OnnxAsin#

class skl2onnx.algebra.onnx_ops.OnnxAsin(*args, **kwargs)#

Version

Onnx name: Asin

This version of the operator has been available since version 7.

Summary

Calculates the arcsine (inverse of sine) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arcsine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAsin_7#

class skl2onnx.algebra.onnx_ops.OnnxAsin_7(*args, **kwargs)#

Version

Onnx name: Asin

This version of the operator has been available since version 7.

Summary

Calculates the arcsine (inverse of sine) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arcsine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAsinh#

class skl2onnx.algebra.onnx_ops.OnnxAsinh(*args, **kwargs)#

Version

Onnx name: Asinh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arcsine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAsinh_9#

class skl2onnx.algebra.onnx_ops.OnnxAsinh_9(*args, **kwargs)#

Version

Onnx name: Asinh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arcsine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAtan#

class skl2onnx.algebra.onnx_ops.OnnxAtan(*args, **kwargs)#

Version

Onnx name: Atan

This version of the operator has been available since version 7.

Summary

Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arctangent of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAtan_7#

class skl2onnx.algebra.onnx_ops.OnnxAtan_7(*args, **kwargs)#

Version

Onnx name: Atan

This version of the operator has been available since version 7.

Summary

Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The arctangent of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAtanh#

class skl2onnx.algebra.onnx_ops.OnnxAtanh(*args, **kwargs)#

Version

Onnx name: Atanh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arctangent of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAtanh_9#

class skl2onnx.algebra.onnx_ops.OnnxAtanh_9(*args, **kwargs)#

Version

Onnx name: Atanh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic arctangent of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
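These inverse trigonometric operators are unary and element-wise, so they all share the same calling pattern. A minimal sketch (illustrative names) checking OnnxAsin against numpy.arcsin; OnnxAsinh, OnnxAtan and OnnxAtanh can be exercised the same way:

import numpy as np
import onnxruntime as rt
from skl2onnx.algebra.onnx_ops import OnnxAsin

x = np.array([[-1.0, -0.5, 0.0, 0.5, 1.0]], dtype=np.float32)
node = OnnxAsin('X', output_names=['Y'], op_version=7)
onx = node.to_onnx({'X': x})
sess = rt.InferenceSession(onx.SerializeToString(),
                           providers=['CPUExecutionProvider'])
got = sess.run(None, {'X': x})[0]
assert np.allclose(got, np.arcsin(x), atol=1e-6)  # matches numpy element-wise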

OnnxAveragePool#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 19.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is calculated differently depending on whether explicit padding is used, where pads is employed, or auto padding is used, where auto_pad is utilized. With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)

or

output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)

if ceil_mode is enabled. pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows when ceil_mode is enabled:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):

VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is name: "ceil_mode" type: INT i: 0

  • count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding). Default value is name: "count_include_pad" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
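The explicit-padding formulas above are easy to check numerically. A small helper (hypothetical, not part of skl2onnx or ONNX) that evaluates one spatial output dimension:

import math

def avgpool_output_dim(in_dim, pad_begin, pad_end, kernel,
                       stride, dilation=1, ceil_mode=False):
    # pad_shape[i] is the sum of pads along axis i.
    pad = pad_begin + pad_end
    # dilation[i] * (kernel_shape[i] - 1) + 1 is the effective kernel extent.
    eff_kernel = dilation * (kernel - 1) + 1
    rnd = math.ceil if ceil_mode else math.floor
    return int(rnd((in_dim + pad - eff_kernel) / stride + 1))

# A 32-wide axis, 3-wide kernel, stride 2, one pad pixel on each side:
print(avgpool_output_dim(32, 1, 1, kernel=3, stride=2))                  # 16
print(avgpool_output_dim(32, 1, 1, kernel=3, stride=2, ceil_mode=True))  # 17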

OnnxAveragePool_1#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool_1(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 1.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is the following:

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements, excluding padding.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is name: "auto_pad" type: STRING s: "NOTSET"

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAveragePool_10#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool_10(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 10.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is the following:

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

or

output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

if ceil_mode is enabled. pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is name: "ceil_mode" type: INT i: 0

  • count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding). Default value is name: "count_include_pad" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAveragePool_11#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool_11(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 11.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is the following:

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)

or

output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)

if ceil_mode is enabled. pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows when ceil_mode is enabled:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

or when ceil_mode is disabled:

VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor(input_spatial_shape[i] / strides_spatial_shape[i])

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is name: "ceil_mode" type: INT i: 0

  • count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding). Default value is name: "count_include_pad" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAveragePool_19#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool_19(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 19.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is calculated differently depending on whether explicit padding is used, where pads is employed, or auto padding is used, where auto_pad is utilized. With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)

or

output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)

if ceil_mode is enabled. pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows when ceil_mode is enabled:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):

VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • ceil_mode: Whether to use ceil or floor (default) to compute the output shape. Default value is name: "ceil_mode" type: INT i: 0

  • count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding). Default value is name: "count_include_pad" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxAveragePool_7#

class skl2onnx.algebra.onnx_ops.OnnxAveragePool_7(*args, **kwargs)#

Version

Onnx name: AveragePool

This version of the operator has been available since version 7.

Summary

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape is the following:

output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)

pad_shape[i] is the sum of pads along axis i.

auto_pad is a DEPRECATED attribute. If you are still using it, the output spatial shape is computed as follows:

VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])

The pad shape is the following for SAME_UPPER or SAME_LOWER:

pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]

The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • count_include_pad: Whether to include pad pixels when calculating values for the edges. Default is 0 (do not include padding). Default value is name: "count_include_pad" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

Outputs

  • Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxBatchNormalization#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 15.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: ‘X’, ‘scale’, ‘B’, ‘input_mean’ and ‘input_var’. Note that ‘input_mean’ and ‘input_var’ are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). Depending on the mode the operator is run in, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, running_mean, running_var (training_mode=True)

  • Output case #2: Y (training_mode=False)

When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:

running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)

Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B

where:

current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var =  ReduceVar(X, axis=all_except_channel_index)

Notice that ReduceVar refers to the population variance, and it equals sum(sqrd(x_i - x_avg)) / N where N is the population size (this formula does not use sample size N - 1).

The computation of ReduceMean and ReduceVar uses float to avoid overflow for float16 inputs.

When training_mode=False:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

For previous (deprecated) non-spatial cases, implementors are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • epsilon: The epsilon value to use to avoid division by zero. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is name: "momentum" type: FLOAT f: 0.9

  • training_mode: If set to true, it indicates BatchNormalization is being used for training, and outputs 1, 2, 3, and 4 would be populated. Default value is name: "training_mode" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1

  • scale (heterogeneous)T1: Scale tensor of shape (C).

  • B (heterogeneous)T1: Bias tensor of shape (C).

  • input_mean (heterogeneous)T2: running (training) or estimated (testing) mean tensor of shape (C).

  • input_var (heterogeneous)T2: running (training) or estimated (testing) variance tensor of shape (C).

Outputs

Between 1 and 3 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X

  • running_mean (optional, heterogeneous)T2: The running mean after the BatchNormalization operator.

  • running_var (optional, heterogeneous)T2: The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N-1.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.

  • T1 tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain scale and bias types to float tensors.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain mean and variance types to float tensors.
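In inference mode (training_mode=False), the operator reduces to the single formula above. A minimal sketch (illustrative names) comparing onnxruntime's result with a numpy re-implementation of that formula:

import numpy as np
import onnxruntime as rt
from skl2onnx.algebra.onnx_ops import OnnxBatchNormalization

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 2, 3, 3)).astype(np.float32)  # N x C x H x W, C=2
scale = np.array([1.0, 2.0], dtype=np.float32)
bias = np.array([0.0, 1.0], dtype=np.float32)
mean = np.array([0.1, -0.2], dtype=np.float32)
var = np.array([1.5, 0.5], dtype=np.float32)

node = OnnxBatchNormalization('X', scale, bias, mean, var,
                              epsilon=1e-5, output_names=['Y'], op_version=15)
onx = node.to_onnx({'X': x})
sess = rt.InferenceSession(onx.SerializeToString(),
                           providers=['CPUExecutionProvider'])
got = sess.run(None, {'X': x})[0]

# Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B, per channel.
def per_channel(v):
    return v.reshape(1, -1, 1, 1)

expected = ((x - per_channel(mean)) / np.sqrt(per_channel(var) + 1e-5)
            * per_channel(scale) + per_channel(bias))
assert np.allclose(got, expected, atol=1e-5)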

OnnxBatchNormalization_1#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_1(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 1.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, mean, var, saved_mean, saved_var (training mode)

  • Output case #2: Y (test mode)

Attributes

  • epsilon: The epsilon value to use to avoid division by zero, default is 1e-5f. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • is_test: If set to nonzero, run spatial batch normalization in test mode, default is 0. Default value is name: "is_test" type: INT i: 0

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum), default is 0.9f. Default value is name: "momentum" type: FLOAT f: 0.9

  • spatial: If true, compute the mean and variance across all spatial elements. If false, compute the mean and variance per feature. Default is 1. Default value is name: "spatial" type: INT i: 1

Inputs

  • X (heterogeneous)T: The input 4-dimensional tensor of shape NCHW.

  • scale (heterogeneous)T: The scale as a 1-dimensional tensor of size C to be applied to the output.

  • B (heterogeneous)T: The bias as a 1-dimensional tensor of size C to be applied to the output.

  • mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1-dimensional tensor of size C.

  • var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1-dimensional tensor of size C.

Outputs

Between 1 and 5 outputs.

  • Y (heterogeneous)T: The output 4-dimensional tensor of the same shape as X.

  • mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.

  • var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.

  • saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.

  • saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxBatchNormalization_14#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_14(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 14.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: ‘X’, ‘scale’, ‘B’, ‘input_mean’ and ‘input_var’. Note that ‘input_mean’ and ‘input_var’ are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). Depending on the mode the operator is run in, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, running_mean, running_var (training_mode=True)

  • Output case #2: Y (training_mode=False)

When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:

running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)

Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B

where:

current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var =  ReduceVar(X, axis=all_except_channel_index)

Notice that ReduceVar refers to the population variance, and it equals sum(sqrd(x_i - x_avg)) / N where N is the population size (this formula does not use sample size N - 1).

When training_mode=False:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

For previous (deprecated) non-spatial cases, implementors are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • epsilon: The epsilon value to use to avoid division by zero. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is name: "momentum" type: FLOAT f: 0.9

  • training_mode: If set to true, it indicates BatchNormalization is being used for training, and outputs 1, 2, 3, and 4 would be populated. Default value is name: "training_mode" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1

  • scale (heterogeneous)T: Scale tensor of shape (C).

  • B (heterogeneous)T: Bias tensor of shape (C).

  • input_mean (heterogeneous)U: running (training) or estimated (testing) mean tensor of shape (C).

  • input_var (heterogeneous)U: running (training) or estimated (testing) variance tensor of shape (C).

Outputs

Between 1 and 3 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X

  • running_mean (optional, heterogeneous)U: The running mean after the BatchNormalization operator.

  • running_var (optional, heterogeneous)U: The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N-1.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.

  • U tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain mean and variance types to float tensors. All float types are allowed for U.

OnnxBatchNormalization_15#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_15(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 15.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: ‘X’, ‘scale’, ‘B’, ‘input_mean’ and ‘input_var’. Note that ‘input_mean’ and ‘input_var’ are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). Depending on the mode the operator is run in, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, running_mean, running_var (training_mode=True)

  • Output case #2: Y (training_mode=False)

When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:

running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)

Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B

where:

current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var =  ReduceVar(X, axis=all_except_channel_index)

Notice that ReduceVar refers to the population variance, and it equals sum(sqrd(x_i - x_avg)) / N where N is the population size (this formula does not use sample size N - 1).

The computation of ReduceMean and ReduceVar uses float to avoid overflow for float16 inputs.

When training_mode=False:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

For previous (deprecated) non-spatial cases, implementors are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • epsilon: The epsilon value to use to avoid division by zero. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is name: "momentum" type: FLOAT f: 0.9

  • training_mode: If set to true, it indicates BatchNormalization is being used for training, and outputs 1, 2, 3, and 4 would be populated. Default value is name: "training_mode" type: INT i: 0

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1

  • scale (heterogeneous)T1: Scale tensor of shape (C).

  • B (heterogeneous)T1: Bias tensor of shape (C).

  • input_mean (heterogeneous)T2: running (training) or estimated (testing) mean tensor of shape (C).

  • input_var (heterogeneous)T2: running (training) or estimated (testing) variance tensor of shape (C).

Outputs

Between 1 and 3 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X

  • running_mean (optional, heterogeneous)T2: The running mean after the BatchNormalization operator.

  • running_var (optional, heterogeneous)T2: The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N-1.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.

  • T1 tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain scale and bias types to float tensors.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain mean and variance types to float tensors.

OnnxBatchNormalization_6#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_6(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 6.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, mean, var, saved_mean, saved_var (training mode)

  • Output case #2: Y (test mode)

Attributes

  • epsilon: The epsilon value to use to avoid division by zero, default is 1e-5f. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • is_test: If set to nonzero, run spatial batch normalization in test mode, default is 0. Default value is name: "is_test" type: INT i: 0

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum), default is 0.9f. Default value is name: "momentum" type: FLOAT f: 0.9

  • spatial: If true, compute the mean and variance across all spatial elements. If false, compute the mean and variance per feature. Default is 1. Default value is name: "spatial" type: INT i: 1

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.

  • scale (heterogeneous)T: The scale as a 1-dimensional tensor of size C to be applied to the output.

  • B (heterogeneous)T: The bias as a 1-dimensional tensor of size C to be applied to the output.

  • mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1-dimensional tensor of size C.

  • var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1-dimensional tensor of size C.

Outputs

Between 1 and 5 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X.

  • mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.

  • var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.

  • saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.

  • saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxBatchNormalization_7#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_7(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 7.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, mean, var, saved_mean, saved_var (training mode)

  • Output case #2: Y (test mode)

This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • epsilon: The epsilon value to use to avoid division by zero. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is name: "momentum" type: FLOAT f: 0.9

  • spatial: If true, compute the mean and variance per activation. If false, compute the mean and variance per feature over each mini-batch. Default value is name: "spatial" type: INT i: 1

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.

  • scale (heterogeneous)T: If spatial is true, the dimension of scale is (C). If spatial is false, the dimensions of scale are (C x D1 x … x Dn)

  • B (heterogeneous)T: If spatial is true, the dimension of bias is (C). If spatial is false, the dimensions of bias are (C x D1 x … x Dn)

  • mean (heterogeneous)T: If spatial is true, the dimension of the running mean (training) or the estimated mean (testing) is (C). If spatial is false, the dimensions of the running mean (training) or the estimated mean (testing) are (C x D1 x … x Dn).

  • var (heterogeneous)T: If spatial is true, the dimension of the running variance(training) or the estimated variance (testing) is (C). If spatial is false, the dimensions of the running variance(training) or the estimated variance (testing) are (C x D1 x … x Dn).

Outputs

Between 1 and 5 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X

  • mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.

  • var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.

  • saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.

  • saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxBatchNormalization_9#

class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_9(*args, **kwargs)#

Version

Onnx name: BatchNormalization

This version of the operator has been available since version 9.

Summary

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:

  • Output case #1: Y, mean, var, saved_mean, saved_var (training mode)

  • Output case #2: Y (test mode)

For previous (deprecated) non-spatial cases, implementors are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

Attributes

  • epsilon: The epsilon value to use to avoid division by zero. Default value is name: "epsilon" type: FLOAT f: 1e-05

  • momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is name: "momentum" type: FLOAT f: 0.9

Inputs

  • X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1

  • scale (heterogeneous)T: Scale tensor of shape (C).

  • B (heterogeneous)T: Bias tensor of shape (C).

  • mean (heterogeneous)T: running (training) or estimated (testing) mean tensor of shape (C).

  • var (heterogeneous)T: running (training) or estimated (testing) variance tensor of shape (C).

Outputs

Between 1 and 5 outputs.

  • Y (heterogeneous)T: The output tensor of the same shape as X

  • mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.

  • var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.

  • saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.

  • saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxBernoulli#

class skl2onnx.algebra.onnx_ops.OnnxBernoulli(*args, **kwargs)#

Version

Onnx name: Bernoulli

This version of the operator has been available since version 15.

Summary

Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities p (a value in the range [0,1]) to be used for drawing the binary random number, where an output of 1 is produced with probability p and an output of 0 is produced with probability (1-p).

This operator is non-deterministic and may not produce the same values in different implementations (even if a seed is specified).

Attributes

Inputs

  • input (heterogeneous)T1: All values in input have to be in the range [0, 1].

Outputs

  • output (heterogeneous)T2: The returned output tensor only has values 0 or 1, same shape as input tensor.

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double): Constrain input types to float tensors.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(bfloat16), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bool): Constrain output types to all numeric tensors and bool tensors.
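
A minimal sketch of building and sampling from a single Bernoulli node follows (it assumes the installed onnxruntime implements opset 15; the input name ‘X’ is arbitrary, and the middle draw is random by design):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxBernoulli

    p = np.array([0.0, 0.5, 1.0], dtype=np.float32)  # per-element probabilities
    node = OnnxBernoulli('X', output_names=['Y'], op_version=15)
    model = node.to_onnx({'X': p}, target_opset=15)
    sess = InferenceSession(model.SerializeToString())
    y = sess.run(None, {'X': p})[0]
    # p=0 always yields 0 and p=1 always yields 1; the middle entry is random.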

OnnxBernoulli_15#

class skl2onnx.algebra.onnx_ops.OnnxBernoulli_15(*args, **kwargs)#

Version

Onnx name: Bernoulli

This version of the operator has been available since version 15.

Summary

Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities p (a value in the range [0,1]) to be used for drawing the binary random number, where an output of 1 is produced with probability p and an output of 0 is produced with probability (1-p).

This operator is non-deterministic and may not produce the same values in different implementations (even if a seed is specified).

Attributes

Inputs

  • input (heterogeneous)T1: All values in input have to be in the range [0, 1].

Outputs

  • output (heterogeneous)T2: The returned output tensor only has values 0 or 1, same shape as input tensor.

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double): Constrain input types to float tensors.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(bfloat16), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bool): Constrain output types to all numeric tensors and bool tensors.

OnnxBinarizer#

class skl2onnx.algebra.onnx_ops.OnnxBinarizer(*args, **kwargs)#

Version

Onnx name: Binarizer

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value.

Attributes

  • threshold: Values greater than this are mapped to 1, others to 0. Default value is name: "threshold" type: FLOAT f: 0

Inputs

  • X (heterogeneous)T: Data to be binarized

Outputs

  • Y (heterogeneous)T: Binarized output data

Type Constraints

  • T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.
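
A short sketch (the input name and data are arbitrary) showing the strict “greater than threshold” semantics, so 0.5 itself maps to 0 here:

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxBinarizer

    x = np.array([[0.2, 0.5], [0.9, -1.0]], dtype=np.float32)
    # Values strictly greater than the threshold map to 1, all others to 0.
    node = OnnxBinarizer('X', threshold=0.5, output_names=['Y'])
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # [[0. 0.] [1. 0.]]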

OnnxBinarizer_1#

class skl2onnx.algebra.onnx_ops.OnnxBinarizer_1(*args, **kwargs)#

Version

Onnx name: Binarizer

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value.

Attributes

  • threshold: Values greater than this are mapped to 1, others to 0. Default value is name: "threshold" type: FLOAT f: 0

Inputs

  • X (heterogeneous)T: Data to be binarized

Outputs

  • Y (heterogeneous)T: Binarized output data

Type Constraints

  • T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.

OnnxBitShift#

class skl2onnx.algebra.onnx_ops.OnnxBitShift(*args, **kwargs)#

Version

Onnx name: BitShift

This version of the operator has been available since version 11.

Summary

The bitwise shift operator performs an element-wise shift. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, the bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and the other input Y specifies the shift amounts. For example, if “direction” is “RIGHT”, X is [1, 4], and Y is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X=[1, 2] and Y=[1, 2], the corresponding output Z would be [2, 8].

Because this operator supports Numpy-style broadcasting, X’s and Y’s shapes are not necessarily identical. This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Attributes

Inputs

  • X (heterogeneous)T: First operand, input to be shifted.

  • Y (heterogeneous)T: Second operand, amounts of shift.

Outputs

  • Z (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.
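
The sketch below reproduces the first example from the summary (it assumes the installed onnxruntime supports the uint32 variant of BitShift):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxBitShift

    x = np.array([1, 4], dtype=np.uint32)
    y = np.array([1, 1], dtype=np.uint32)  # shift amounts
    node = OnnxBitShift('X', 'Y', direction='RIGHT',
                        output_names=['Z'], op_version=11)
    model = node.to_onnx({'X': x, 'Y': y})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x, 'Y': y})[0])  # [0 2], as in the summary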

OnnxBitShift_11#

class skl2onnx.algebra.onnx_ops.OnnxBitShift_11(*args, **kwargs)#

Version

Onnx name: BitShift

This version of the operator has been available since version 11.

Summary

The bitwise shift operator performs an element-wise shift. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, the bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and the other input Y specifies the shift amounts. For example, if “direction” is “RIGHT”, X is [1, 4], and Y is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X=[1, 2] and Y=[1, 2], the corresponding output Z would be [2, 8].

Because this operator supports Numpy-style broadcasting, X’s and Y’s shapes are not necessarily identical. This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Attributes

Inputs

  • X (heterogeneous)T: First operand, input to be shifted.

  • Y (heterogeneous)T: Second operand, amounts of shift.

Outputs

  • Z (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.

OnnxBitwiseAnd#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseAnd(*args, **kwargs)#

Version

Onnx name: BitwiseAnd

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise and operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.
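
A minimal sketch follows (it assumes an onnxruntime build with opset 18 support); OnnxBitwiseOr, OnnxBitwiseXor and OnnxBitwiseNot below are used the same way:

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxBitwiseAnd

    a = np.array([0b1100, 0b1010], dtype=np.int32)
    b = np.array([0b1010, 0b0110], dtype=np.int32)
    node = OnnxBitwiseAnd('A', 'B', output_names=['C'], op_version=18)
    model = node.to_onnx({'A': a, 'B': b}, target_opset=18)
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'A': a, 'B': b})[0])  # [8 2] == [0b1000, 0b0010]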

OnnxBitwiseAnd_18#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseAnd_18(*args, **kwargs)#

Version

Onnx name: BitwiseAnd

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise and operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.

OnnxBitwiseNot#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseNot(*args, **kwargs)#

Version

Onnx name: BitwiseNot

This version of the operator has been available since version 18.

Summary

Returns the bitwise not of the input tensor element-wise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input/output to integer tensors.

OnnxBitwiseNot_18#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseNot_18(*args, **kwargs)#

Version

Onnx name: BitwiseNot

This version of the operator has been available since version 18.

Summary

Returns the bitwise not of the input tensor element-wise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input/output to integer tensors.

OnnxBitwiseOr#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseOr(*args, **kwargs)#

Version

Onnx name: BitwiseOr

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise or operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.

OnnxBitwiseOr_18#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseOr_18(*args, **kwargs)#

Version

Onnx name: BitwiseOr

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise or operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.

OnnxBitwiseXor#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseXor(*args, **kwargs)#

Version

Onnx name: BitwiseXor

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise xor operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.

OnnxBitwiseXor_18#

class skl2onnx.algebra.onnx_ops.OnnxBitwiseXor_18(*args, **kwargs)#

Version

Onnx name: BitwiseXor

This version of the operator has been available since version 18.

Summary

Returns the tensor resulting from performing the bitwise xor operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

  • A (heterogeneous)T: First input operand for the bitwise operator.

  • B (heterogeneous)T: Second input operand for the bitwise operator.

Outputs

  • C (heterogeneous)T: Result tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64): Constrain input to integer tensors.

OnnxBlackmanWindow#

class skl2onnx.algebra.onnx_ops.OnnxBlackmanWindow(*args, **kwargs)#

Version

Onnx name: BlackmanWindow

This version of the operator has been available since version 17.

Summary

Generates a Blackman window as described in the paper https://ieeexplore.ieee.org/document/1455106.

Attributes

  • output_datatype: The data type of the output tensor. Strictly must be one of the values from DataType enum in TensorProto whose values correspond to T2. The default value is 1 = FLOAT. Default value is name: "output_datatype" type: INT i: 1

  • periodic: If 1, returns a window to be used as a periodic function. If 0, returns a symmetric window. When ‘periodic’ is specified, the operator computes a window of length size + 1 and returns the first size points. The default value is 1. Default value is name: "periodic" type: INT i: 1

Inputs

  • size (heterogeneous)T1: A scalar value indicating the length of the window.

Outputs

  • output (heterogeneous)T2: A Blackman window with length: size. The output has the shape: [size].

Type Constraints

  • T1 tensor(int32), tensor(int64): Constrain the input size to int64_t.

  • T2 tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain output types to numeric tensors.
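
A minimal sketch (it assumes the installed onnxruntime implements the opset 17 window operators and accepts a 0-d int64 array for the scalar size input):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxBlackmanWindow

    size = np.array(16, dtype=np.int64)  # scalar window length
    node = OnnxBlackmanWindow('size', periodic=1,
                              output_names=['window'], op_version=17)
    model = node.to_onnx({'size': size}, target_opset=17)
    sess = InferenceSession(model.SerializeToString())
    w = sess.run(None, {'size': size})[0]  # shape [16], float32 by default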

OnnxBlackmanWindow_17#

class skl2onnx.algebra.onnx_ops.OnnxBlackmanWindow_17(*args, **kwargs)#

Version

Onnx name: BlackmanWindow

This version of the operator has been available since version 17.

Summary

Generates a Blackman window as described in the paper https://ieeexplore.ieee.org/document/1455106.

Attributes

  • output_datatype: The data type of the output tensor. Strictly must be one of the values from DataType enum in TensorProto whose values correspond to T2. The default value is 1 = FLOAT. Default value is name: "output_datatype" type: INT i: 1

  • periodic: If 1, returns a window to be used as a periodic function. If 0, returns a symmetric window. When ‘periodic’ is specified, the operator computes a window of length size + 1 and returns the first size points. The default value is 1. Default value is name: "periodic" type: INT i: 1

Inputs

  • size (heterogeneous)T1: A scalar value indicating the length of the window.

Outputs

  • output (heterogeneous)T2: A Blackman window with length: size. The output has the shape: [size].

Type Constraints

  • T1 tensor(int32), tensor(int64): Constrain the input size to int64_t.

  • T2 tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain output types to numeric tensors.

OnnxCast#

class skl2onnx.algebra.onnx_ops.OnnxCast(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 19.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.

Casting from a string tensor in plain (e.g., “3.14” and “1000”) or scientific numeric representation (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield result 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way is mapped to positive infinity; the same case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) is used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior, as is converting a string representing a floating-point value, such as “2.718”, to an INT type.

Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value changes caused by the range difference between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.

In more detail, the conversion among numerical types should follow these rules if the destination type is not a float 8 type.

  • Casting from floating point to:

    • floating point: +/- infinity if OOR (out of range).

    • fixed point: undefined if OOR.

    • bool: +/- 0.0 to False; all else to True.

  • Casting from fixed point to:

    • floating point: +/- infinity if OOR (+ infinity in the case of uint).

    • fixed point: when OOR, discard higher bits and reinterpret (with respect to two’s complement representation for signed types). For example, 200 (int16) -> -56 (int8).

    • bool: zero to False; nonzero to True.

  • Casting from bool to:

    • floating point: {1.0, 0.0}.

    • fixed point: {1, 0}.

    • bool: no change.

Float 8 types were introduced to speed up the training of deep models. By default the conversion of a float x obeys the following rules, where [x] means the value rounded to the target mantissa width.

x | E4M3FN | E4M3FNUZ | E5M2 | E5M2FNUZ
--- | --- | --- | --- | ---
0 | 0 | 0 | 0 | 0
-0 | -0 | 0 | -0 | 0
NaN | NaN | NaN | NaN | NaN
+/- Inf | +/- FLT_MAX | NaN | FLT_MAX | NaN
[x] > FLT_MAX | FLT_MAX | FLT_MAX | FLT_MAX | FLT_MAX
[x] < -FLT_MAX | -FLT_MAX | -FLT_MAX | -FLT_MAX | -FLT_MAX
else | RNE | RNE | RNE | RNE

The behavior changes if the parameter ‘saturate’ is set to False. The rules then become:

x | E4M3FN | E4M3FNUZ | E5M2 | E5M2FNUZ
--- | --- | --- | --- | ---
0 | 0 | 0 | 0 | 0
-0 | -0 | 0 | -0 | 0
NaN | NaN | NaN | NaN | NaN
+/- Inf | NaN | NaN | +/- Inf | NaN
[x] > FLT_MAX | NaN | NaN | Inf | NaN
[x] < -FLT_MAX | NaN | NaN | -Inf | NaN
else | RNE | RNE | RNE | RNE

Attributes

  • saturate: The parameter defines how the conversion behaves if an input value is out of range of the destination type. It only applies for float 8 conversion (float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz). It is true by default. All cases are fully described in two tables inserted in the operator description. Default value is name: "saturate" type: INT i: 1

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types. Casting to complex is not supported.
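
A minimal sketch using the widely supported opset 13 form of Cast (the ‘to’ attribute takes a TensorProto.DataType value; with onnxruntime, float-to-int casting truncates toward zero):

    import numpy as np
    from onnx import TensorProto
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCast

    x = np.array([1.7, -2.3, 100.0], dtype=np.float32)
    # 'to' is a value of the TensorProto.DataType enum.
    node = OnnxCast('X', to=TensorProto.INT64, output_names=['Y'], op_version=13)
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # [  1  -2 100]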

OnnxCastLike#

class skl2onnx.algebra.onnx_ops.OnnxCastLike(*args, **kwargs)#

Version

Onnx name: CastLike

This version of the operator has been available since version 19.

Summary

The operator casts the elements of a given input tensor (the first input) to the same data type as the elements of the second input tensor. See documentation of the Cast operator for further details.

Attributes

  • saturate: The parameter defines how the conversion behaves if an input value is out of range of the destination type. It only applies for float 8 conversion (float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz). It is true by default. Please refer to operator Cast description for further details. Default value is name: "saturate" type: INT i: 1

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

  • target_type (heterogeneous)T2: The (first) input tensor will be cast to produce a tensor of the same type as this (second input) tensor.

Outputs

  • output (heterogeneous)T2: Output tensor produced by casting the first input tensor to have the same type as the second input tensor.

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types. Casting to complex is not supported.
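
A minimal sketch of CastLike in its opset 15 form (only the element type of the second input matters; its values are ignored):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCastLike

    x = np.array([1.5, 2.5], dtype=np.float32)
    t = np.array([0], dtype=np.int64)  # only its type matters
    node = OnnxCastLike('X', 'T', output_names=['Y'], op_version=15)
    model = node.to_onnx({'X': x, 'T': t}, target_opset=15)
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x, 'T': t})[0])  # int64 output: [1 2]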

OnnxCastLike_15#

class skl2onnx.algebra.onnx_ops.OnnxCastLike_15(*args, **kwargs)#

Version

Onnx name: CastLike

This version of the operator has been available since version 15.

Summary

The operator casts the elements of a given input tensor (the first input) to the same data type as the elements of the second input tensor. See documentation of the Cast operator for further details.

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

  • target_type (heterogeneous)T2: The (first) input tensor will be cast to produce a tensor of the same type as this (second input) tensor.

Outputs

  • output (heterogeneous)T2: Output tensor produced by casting the first input tensor to have the same type as the second input tensor.

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain output types. Casting to complex is not supported.

OnnxCastLike_19#

class skl2onnx.algebra.onnx_ops.OnnxCastLike_19(*args, **kwargs)#

Version

Onnx name: CastLike

This version of the operator has been available since version 19.

Summary

The operator casts the elements of a given input tensor (the first input) to the same data type as the elements of the second input tensor. See documentation of the Cast operator for further details.

Attributes

  • saturate: The parameter defines how the conversion behaves if an input value is out of range of the destination type. It only applies for float 8 conversion (float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz). It is true by default. Please refer to operator Cast description for further details. Default value is name: "saturate" type: INT i: 1

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

  • target_type (heterogeneous)T2: The (first) input tensor will be cast to produce a tensor of the same type as this (second input) tensor.

Outputs

  • output (heterogeneous)T2: Output tensor produced by casting the first input tensor to have the same type as the second input tensor.

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types. Casting to complex is not supported.

OnnxCastMap#

class skl2onnx.algebra.onnx_ops.OnnxCastMap(*args, **kwargs)#

Version

Onnx name: CastMap

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.

Attributes

  • cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is name: "cast_to" type: STRING s: "TO_FLOAT"

  • map_form: Indicates whether to only output as many values as are in the input (dense), or to position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is name: "map_form" type: STRING s: "DENSE"

  • max_map: If the value of map_form is ‘SPARSE,’ this attribute indicates the total length of the output tensor. Default value is name: "max_map" type: INT i: 1

Inputs

  • X (heterogeneous)T1: The input map that is to be cast to a tensor

Outputs

  • Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys

Type Constraints

  • T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.

  • T2 tensor(string), tensor(float), tensor(int64): The output is a 1-D tensor of string, float, or integer.

OnnxCastMap_1#

class skl2onnx.algebra.onnx_ops.OnnxCastMap_1(*args, **kwargs)#

Version

Onnx name: CastMap

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.

Attributes

  • cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is name: "cast_to" type: STRING s: "TO_FLOAT"

  • map_form: Indicates whether to only output as many values as are in the input (dense), or to position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is name: "map_form" type: STRING s: "DENSE"

  • max_map: If the value of map_form is ‘SPARSE,’ this attribute indicates the total length of the output tensor. Default value is name: "max_map" type: INT i: 1

Inputs

  • X (heterogeneous)T1: The input map that is to be cast to a tensor

Outputs

  • Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys

Type Constraints

  • T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.

  • T2 tensor(string), tensor(float), tensor(int64): The output is a 1-D tensor of string, float, or integer.

OnnxCast_1#

class skl2onnx.algebra.onnx_ops.OnnxCast_1(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 1.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.

Attributes

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.

OnnxCast_13#

class skl2onnx.algebra.onnx_ops.OnnxCast_13(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 13.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.

Casting from a string tensor in plain (e.g., “3.14” and “1000”) or scientific numeric representation (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield result 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way is mapped to positive infinity; the same case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) is used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior, as is converting a string representing a floating-point value, such as “2.718”, to an INT type.

Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value changes caused by the range difference between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.

In more detail, the conversion among numerical types should follow these rules:

  • Casting from floating point to:

    • floating point: +/- infinity if OOR (out of range).

    • fixed point: undefined if OOR.

    • bool: +/- 0.0 to False; all else to True.

  • Casting from fixed point to:

    • floating point: +/- infinity if OOR (+ infinity in the case of uint).

    • fixed point: when OOR, discard higher bits and reinterpret (with respect to two’s complement representation for signed types). For example, 200 (int16) -> -56 (int8).

    • bool: zero to False; nonzero to True.

  • Casting from bool to:

    • floating point: {1.0, 0.0}.

    • fixed point: {1, 0}.

    • bool: no change.

Attributes

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain output types. Casting to complex is not supported.

OnnxCast_19#

class skl2onnx.algebra.onnx_ops.OnnxCast_19(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 19.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.

Casting from a string tensor in plain (e.g., “3.14” and “1000”) or scientific numeric representation (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield result 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way is mapped to positive infinity; the same case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) is used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior, as is converting a string representing a floating-point value, such as “2.718”, to an INT type.

Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value changes caused by the range difference between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.

In more detail, the conversion among numerical types should follow these rules if the destination type is not a float 8 type.

  • Casting from floating point to:

    • floating point: +/- infinity if OOR (out of range).

    • fixed point: undefined if OOR.

    • bool: +/- 0.0 to False; all else to True.

  • Casting from fixed point to:

    • floating point: +/- infinity if OOR (+ infinity in the case of uint).

    • fixed point: when OOR, discard higher bits and reinterpret (with respect to two’s complement representation for signed types). For example, 200 (int16) -> -56 (int8).

    • bool: zero to False; nonzero to True.

  • Casting from bool to:

    • floating point: {1.0, 0.0}.

    • fixed point: {1, 0}.

    • bool: no change.

Float 8 types were introduced to speed up the training of deep models. By default the conversion of a float x obeys the following rules, where [x] means the value rounded to the target mantissa width.

x | E4M3FN | E4M3FNUZ | E5M2 | E5M2FNUZ
--- | --- | --- | --- | ---
0 | 0 | 0 | 0 | 0
-0 | -0 | 0 | -0 | 0
NaN | NaN | NaN | NaN | NaN
+/- Inf | +/- FLT_MAX | NaN | FLT_MAX | NaN
[x] > FLT_MAX | FLT_MAX | FLT_MAX | FLT_MAX | FLT_MAX
[x] < -FLT_MAX | -FLT_MAX | -FLT_MAX | -FLT_MAX | -FLT_MAX
else | RNE | RNE | RNE | RNE

The behavior changes if the parameter ‘saturate’ is set to False. The rules then become:

x | E4M3FN | E4M3FNUZ | E5M2 | E5M2FNUZ
--- | --- | --- | --- | ---
0 | 0 | 0 | 0 | 0
-0 | -0 | 0 | -0 | 0
NaN | NaN | NaN | NaN | NaN
+/- Inf | NaN | NaN | +/- Inf | NaN
[x] > FLT_MAX | NaN | NaN | Inf | NaN
[x] < -FLT_MAX | NaN | NaN | -Inf | NaN
else | RNE | RNE | RNE | RNE

Attributes

  • saturate: The parameter defines how the conversion behaves if an input value is out of range of the destination type. It only applies for float 8 conversion (float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz). It is true by default. All cases are fully described in two tables inserted in the operator description. Default value is name: "saturate" type: INT i: 1

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types. Casting to complex is not supported.

OnnxCast_6#

class skl2onnx.algebra.onnx_ops.OnnxCast_6(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 6.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.

Attributes

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.

OnnxCast_9#

class skl2onnx.algebra.onnx_ops.OnnxCast_9(*args, **kwargs)#

Version

Onnx name: Cast

This version of the operator has been available since version 9.

Summary

The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.

Casting from a string tensor in plain (e.g., “3.14” and “1000”) or scientific numeric representation (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield result 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which exactly matches “+INF” in a case-insensitive way is mapped to positive infinity; the same case-insensitive rule is applied to “INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) is used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior, as is converting a string representing a floating-point value, such as “2.718”, to an INT type.

Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value changes caused by the range difference between two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.

Attributes

Inputs

  • input (heterogeneous)T1: Input tensor to be cast.

Outputs

  • output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument

Type Constraints

  • T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain input types. Casting from complex is not supported.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain output types. Casting to complex is not supported.

OnnxCategoryMapper#

class skl2onnx.algebra.onnx_ops.OnnxCategoryMapper(*args, **kwargs)#

Version

Onnx name: CategoryMapper

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Converts strings to integers and vice versa.

Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.

Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.

If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.

Attributes

  • default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is name: "default_int64" type: INT i: -1

  • default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is name: "default_string" type: STRING s: "_Unused"

Inputs

  • X (heterogeneous)T1: Input data

Outputs

  • Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.

Type Constraints

  • T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].

  • T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.
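
A minimal sketch of the string-to-integer direction (the cats_strings/cats_int64s attributes come from the ONNX CategoryMapper specification; they are not listed above because they have no default value. It also assumes skl2onnx infers a string tensor type from the NumPy string array):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCategoryMapper

    x = np.array(['dog', 'cat', 'fish'])
    # Two equal-length sequences define the mapping; default_int64 handles
    # strings absent from the map ('fish' here).
    node = OnnxCategoryMapper('X',
                              cats_strings=['cat', 'dog'],
                              cats_int64s=[0, 1],
                              default_int64=-1,
                              output_names=['Y'])
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # [ 1  0 -1]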

OnnxCategoryMapper_1#

class skl2onnx.algebra.onnx_ops.OnnxCategoryMapper_1(*args, **kwargs)#

Version

Onnx name: CategoryMapper

This version of the operator has been available since version 1 of domain ai.onnx.ml.

Summary

Converts strings to integers and vice versa.

Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.

Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.

If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.

Attributes

  • default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is name: "default_int64" type: INT i: -1

  • default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is name: "default_string" type: STRING s: "_Unused"

Inputs

  • X (heterogeneous)T1: Input data

Outputs

  • Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.

Type Constraints

  • T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].

  • T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.

OnnxCeil#

class skl2onnx.algebra.onnx_ops.OnnxCeil(*args, **kwargs)#

Version

Onnx name: Ceil

This version of the operator has been available since version 13.

Summary

Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise. If x is integral, +0, -0, NaN, or infinite, x itself is returned.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
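
A minimal sketch (input name and data arbitrary):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCeil

    x = np.array([-1.5, 0.2, 2.0], dtype=np.float32)
    node = OnnxCeil('X', output_names=['Y'], op_version=13)
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # [-1.  1.  2.]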

OnnxCeil_1#

class skl2onnx.algebra.onnx_ops.OnnxCeil_1(*args, **kwargs)#

Version

Onnx name: Ceil

This version of the operator has been available since version 1.

Summary

Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.

Attributes

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCeil_13#

class skl2onnx.algebra.onnx_ops.OnnxCeil_13(*args, **kwargs)#

Version

Onnx name: Ceil

This version of the operator has been available since version 13.

Summary

Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise. If x is integral, +0, -0, NaN, or infinite, x itself is returned.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.

OnnxCeil_6#

class skl2onnx.algebra.onnx_ops.OnnxCeil_6(*args, **kwargs)#

Version

Onnx name: Ceil

This version of the operator has been available since version 6.

Summary

Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCelu#

class skl2onnx.algebra.onnx_ops.OnnxCelu(*args, **kwargs)#

Version

Onnx name: Celu

This version of the operator has been available since version 12.

Summary

Continuously Differentiable Exponential Linear Units: performs the linear unit element-wise on the input tensor X using the formula:

max(0,x) + min(0,alpha*(exp(x/alpha)-1))

Attributes

  • alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is name: "alpha" type: FLOAT f: 1

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float): Constrain input and output types to float32 tensors.
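
A minimal sketch that also checks the formula above numerically (input name and data arbitrary):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCelu

    x = np.array([-1.0, 0.0, 2.0], dtype=np.float32)
    node = OnnxCelu('X', alpha=1.0, output_names=['Y'], op_version=12)
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    y = sess.run(None, {'X': x})[0]
    # Matches max(0, x) + min(0, alpha * (exp(x / alpha) - 1)) element-wise.
    expected = np.maximum(0, x) + np.minimum(0, 1.0 * (np.exp(x / 1.0) - 1))
    assert np.allclose(y, expected, atol=1e-6)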

OnnxCelu_12#

class skl2onnx.algebra.onnx_ops.OnnxCelu_12(*args, **kwargs)#

Version

Onnx name: Celu

This version of the operator has been available since version 12.

Summary

Continuously Differentiable Exponential Linear Units: performs the linear unit element-wise on the input tensor X using the formula:

max(0,x) + min(0,alpha*(exp(x/alpha)-1))

Attributes

  • alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is name: "alpha" type: FLOAT f: 1

Inputs

  • X (heterogeneous)T: Input tensor

Outputs

  • Y (heterogeneous)T: Output tensor

Type Constraints

  • T tensor(float): Constrain input and output types to float32 tensors.

OnnxCenterCropPad#

class skl2onnx.algebra.onnx_ops.OnnxCenterCropPad(*args, **kwargs)#

Version

Onnx name: CenterCropPad

This version of the operator has been available since version 18.

Summary

Center crop or pad an input to given dimensions.

The crop/pad dimensions can be specified for a subset of the axes. Non-specified dimensions will not be cropped or padded.

If the input dimensions are bigger than the crop shape, a centered cropping window is extracted from the input. If the input dimensions are smaller than the crop shape, the input is padded on each side equally, so that the input is centered in the output.

Attributes

Inputs

  • input_data (heterogeneous)T: Input to extract the centered crop from.

  • shape (heterogeneous)Tind: 1-D tensor representing the cropping window dimensions.

Outputs

  • output_data (heterogeneous)T: Output data.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

  • Tind tensor(int32), tensor(int64): Constrain indices to integer types
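
A minimal cropping sketch (it assumes an onnxruntime build with opset 18 support; the shape input is passed as a constant and becomes an initializer):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxCenterCropPad

    x = np.arange(25, dtype=np.float32).reshape(5, 5)
    shape = np.array([3, 3], dtype=np.int64)  # target dimensions
    node = OnnxCenterCropPad('X', shape, output_names=['Y'], op_version=18)
    model = node.to_onnx({'X': x}, target_opset=18)
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # the centered 3x3 window of the 5x5 input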

OnnxCenterCropPad_18#

class skl2onnx.algebra.onnx_ops.OnnxCenterCropPad_18(*args, **kwargs)#

Version

Onnx name: CenterCropPad

This version of the operator has been available since version 18.

Summary

Center crop or pad an input to given dimensions.

The crop/pad dimensions can be specified for a subset of the axes. Non-specified dimensions will not be cropped or padded.

If the input dimensions are bigger than the crop shape, a centered cropping window is extracted from the input. If the input dimensions are smaller than the crop shape, the input is padded on each side equally, so that the input is centered in the output.

Attributes

Inputs

  • input_data (heterogeneous)T: Input to extract the centered crop from.

  • shape (heterogeneous)Tind: 1-D tensor representing the cropping window dimensions.

Outputs

  • output_data (heterogeneous)T: Output data.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

  • Tind tensor(int32), tensor(int64): Constrain indices to integer types

OnnxClip#

class skl2onnx.algebra.onnx_ops.OnnxClip(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 13.

Summary

Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.

Inputs

Between 1 and 3 inputs.

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

  • min (optional, heterogeneous)T: Minimum value, under which an element is replaced by min. It must be a scalar (tensor of empty shape).

  • max (optional, heterogeneous)T: Maximum value, above which an element is replaced by max. It must be a scalar (tensor of empty shape).

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
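
A minimal sketch with min and max supplied as constant 0-d tensors (assuming constant scalar inputs are accepted here; they become initializers):

    import numpy as np
    from onnxruntime import InferenceSession
    from skl2onnx.algebra.onnx_ops import OnnxClip

    x = np.array([-2.0, 0.3, 5.0], dtype=np.float32)
    node = OnnxClip('X',
                    np.array(0.0, dtype=np.float32),   # min
                    np.array(1.0, dtype=np.float32),   # max
                    output_names=['Y'], op_version=13)
    model = node.to_onnx({'X': x})
    sess = InferenceSession(model.SerializeToString())
    print(sess.run(None, {'X': x})[0])  # [0.  0.3 1. ]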

OnnxClip_1#

class skl2onnx.algebra.onnx_ops.OnnxClip_1(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 1.

Summary

Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max() respectively.

Attributes

Inputs

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxClip_11#

class skl2onnx.algebra.onnx_ops.OnnxClip_11(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 11.

Summary

Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.

Inputs

Between 1 and 3 inputs.

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

  • min (optional, heterogeneous)T: Minimum value, under which an element is replaced by min. It must be a scalar (tensor of empty shape).

  • max (optional, heterogeneous)T: Maximum value, above which an element is replaced by max. It must be a scalar (tensor of empty shape).

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxClip_12#

class skl2onnx.algebra.onnx_ops.OnnxClip_12(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 12.

Summary

Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.

Inputs

Between 1 and 3 inputs.

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

  • min (optional, heterogeneous)T: Minimum value, under which an element is replaced by min. It must be a scalar (tensor of empty shape).

  • max (optional, heterogeneous)T: Maximum value, above which an element is replaced by max. It must be a scalar (tensor of empty shape).

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.

OnnxClip_13#

class skl2onnx.algebra.onnx_ops.OnnxClip_13(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 13.

Summary

Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.

Inputs

Between 1 and 3 inputs.

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

  • min (optional, heterogeneous)T: Minimum value, under which an element is replaced by min. It must be a scalar (tensor of empty shape).

  • max (optional, heterogeneous)T: Maximum value, above which an element is replaced by max. It must be a scalar (tensor of empty shape).

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.

OnnxClip_6#

class skl2onnx.algebra.onnx_ops.OnnxClip_6(*args, **kwargs)#

Version

Onnx name: Clip

This version of the operator has been available since version 6.

Summary

Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.

Attributes

  • max: Maximum value, above which element is replaced by max. Default value is name: "max" type: FLOAT f: 3.40282347e+38

  • min: Minimum value, under which element is replaced by min. Default value is name: "min" type: FLOAT f: -3.40282347e+38

Inputs

  • input (heterogeneous)T: Input tensor whose elements are to be clipped

Outputs

  • output (heterogeneous)T: Output tensor with clipped input elements

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
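
For contrast with the later versions, opset 6 takes the bounds as attributes rather than inputs. A minimal construction sketch (the bound values and names are arbitrary):

from skl2onnx.algebra.onnx_ops import OnnxClip_6

# In opset 6, min and max are FLOAT attributes, not inputs.
node = OnnxClip_6("X", min=0.0, max=1.0, output_names=["Y"])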

OnnxCol2Im#

class skl2onnx.algebra.onnx_ops.OnnxCol2Im(*args, **kwargs)#

Version

Onnx name: Col2Im

This version of the operator has been available since version 18.

Summary

The operator rearranges column blocks back into a multidimensional image.

Col2Im behaves similarly to PyTorch’s fold https://pytorch.org/docs/stable/generated/torch.nn.Fold.html, but it only supports batched multi-dimensional image tensors. Another implementation in Python with N-dimension support can be found at https://github.com/f-dangel/unfoldNd/.

NOTE:

Although specifying image_shape looks redundant because it could be calculated from convolution formulas, it is required as input for more advanced scenarios as explained at PyTorch’s implementation (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Col2Im.cpp#L10)

Attributes

Inputs

  • input (heterogeneous)T: Input data tensor to be rearranged from column blocks back into an image. This is a 3-dimensional tensor containing [N, C * n-ary-product(block_shape), L], where N is the batch dimension, C is the image channel dimension and L is the number of blocks. The blocks are enumerated in increasing lexicographic order of their indices. For example, with an image size of 10*20 and a block size of 9*18, there would be 2*3 blocks, enumerated in the order block(0, 0), block(0, 1), block(0, 2), block(1, 0), block(1, 1), block(1, 2).

  • image_shape (heterogeneous)tensor(int64): The shape of the spatial dimensions of the image after rearranging the column blocks. This is a 1-dimensional tensor with size of at least 2, containing the value [H_img, W_img] for a 2-D image or [dim_i1, dim_i2, …, dim_iN] for an N-D image.

  • block_shape (heterogeneous)tensor(int64): The shape of the block to apply on the input. This is a 1-dimensional tensor of size of at least 2, containing the value [H_block, W_block] for a 2-D image or [dim_b1, dim_b2, …, dim_bN] for an N-D block. This is the block shape before dilation is applied to it.

Outputs

  • output (heterogeneous)T: Output tensor produced by rearranging blocks into an image.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
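
The block enumeration described above can be checked with a few lines of numpy. The sketch assumes unit strides and dilations and no padding, so the number of block positions per axis is image_shape - block_shape + 1:

import numpy as np

image_shape = np.array([10, 20])
block_shape = np.array([9, 18])
blocks_per_axis = image_shape - block_shape + 1   # [2, 3]
L = int(np.prod(blocks_per_axis))                 # 6 blocks in total
C = 1
rows = C * int(np.prod(block_shape))              # C * 9 * 18 = 162
print(blocks_per_axis, L, rows)  # the Col2Im input shape would be [N, 162, 6]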

OnnxCol2Im_18#

class skl2onnx.algebra.onnx_ops.OnnxCol2Im_18(*args, **kwargs)#

Version

Onnx name: Col2Im

This version of the operator has been available since version 18.

Summary

The operator rearranges column blocks back into a multidimensional image.

Col2Im behaves similarly to PyTorch’s fold https://pytorch.org/docs/stable/generated/torch.nn.Fold.html, but it only supports batched multi-dimensional image tensors. Another implementation in Python with N-dimension support can be found at https://github.com/f-dangel/unfoldNd/.

NOTE:

Although specifying image_shape looks redundant because it could be calculated from convolution formulas, it is required as input for more advanced scenarios as explained at PyTorch’s implementation (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Col2Im.cpp#L10)

Attributes

Inputs

  • input (heterogeneous)T: Input data tensor to be rearranged from column blocks back into an image. This is a 3-dimensional tensor containing [N, C * n-ary-product(block_shape), L], where N is the batch dimension, C is the image channel dimension and L is the number of blocks. The blocks are enumerated in increasing lexicographic order of their indices. For example, with an image size of 10*20 and a block size of 9*18, there would be 2*3 blocks, enumerated in the order block(0, 0), block(0, 1), block(0, 2), block(1, 0), block(1, 1), block(1, 2).

  • image_shape (heterogeneous)tensor(int64): The shape of the spatial dimensions of the image after rearranging the column blocks. This is a 1-dimensional tensor with size of at least 2, containing the value [H_img, W_img] for a 2-D image or [dim_i1, dim_i2, …, dim_iN] for an N-D image.

  • block_shape (heterogeneous)tensor(int64): The shape of the block to apply on the input. This is a 1-dimensional tensor of size of at least 2, containing the value [H_block, W_block] for a 2-D image or [dim_b1, dim_b2, …, dim_bN] for an N-D block. This is the block shape before dilation is applied to it.

Outputs

  • output (heterogeneous)T: Output tensor produced by rearranging blocks into an image.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

OnnxCompress#

class skl2onnx.algebra.onnx_ops.OnnxCompress(*args, **kwargs)#

Version

Onnx name: Compress

This version of the operator has been available since version 11.

Summary

Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html

Attributes

Inputs

  • input (heterogeneous)T: Tensor of rank r >= 1.

  • condition (heterogeneous)T1: Rank 1 tensor of booleans indicating which slices or data elements are to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.

Outputs

  • output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

  • T1 tensor(bool): Constrain to boolean tensors.
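
Because Compress mirrors numpy.compress, its semantics are easy to check in numpy; a small sketch with made-up data:

import numpy as np

x = np.arange(12).reshape(3, 4)
cond = np.array([False, True, True])

# axis given: select rows where cond is True (rank is preserved)
print(np.compress(cond, x, axis=0))
# axis omitted: x is flattened first; entries beyond len(cond) are discarded
print(np.compress(cond, x))  # -> [1 2]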

OnnxCompress_11#

class skl2onnx.algebra.onnx_ops.OnnxCompress_11(*args, **kwargs)#

Version

Onnx name: Compress

This version of the operator has been available since version 11.

Summary

Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html

Attributes

Inputs

  • input (heterogeneous)T: Tensor of rank r >= 1.

  • condition (heterogeneous)T1: Rank 1 tensor of booleans indicating which slices or data elements are to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.

Outputs

  • output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

  • T1 tensor(bool): Constrain to boolean tensors.

OnnxCompress_9#

class skl2onnx.algebra.onnx_ops.OnnxCompress_9(*args, **kwargs)#

Version

Onnx name: Compress

This version of the operator has been available since version 9.

Summary

Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html

Attributes

Inputs

  • input (heterogeneous)T: Tensor of rank r >= 1.

  • condition (heterogeneous)T1: Rank 1 tensor of booleans indicating which slices or data elements are to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.

Outputs

  • output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

  • T1 tensor(bool): Constrain to boolean tensors.

OnnxConcat#

class skl2onnx.algebra.onnx_ops.OnnxConcat(*args, **kwargs)#

Version

Onnx name: Concat

This version of the operator has been available since version 13.

Summary

Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.

Attributes

Inputs

Between 1 and 2147483647 inputs.

  • inputs (variadic, heterogeneous)T: List of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
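
A short usage sketch concatenating two inputs along axis 0; the input names, values and target opset are illustrative assumptions:

import numpy as np
from skl2onnx.algebra.onnx_ops import OnnxConcat
from onnxruntime import InferenceSession

a = np.array([[1.0, 2.0]], dtype=np.float32)
b = np.array([[3.0, 4.0]], dtype=np.float32)

# 'axis' is an attribute of Concat.
node = OnnxConcat("A", "B", axis=0, op_version=13, output_names=["Y"])
onx = node.to_onnx({"A": a, "B": b}, target_opset=13)
sess = InferenceSession(onx.SerializeToString())
print(sess.run(None, {"A": a, "B": b})[0])  # [[1. 2.] [3. 4.]]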

OnnxConcatFromSequence#

class skl2onnx.algebra.onnx_ops.OnnxConcatFromSequence(*args, **kwargs)#

Version

Onnx name: ConcatFromSequence

This version of the operator has been available since version 11.

Summary

Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.

Attributes

  • new_axis: Insert and concatenate on a new axis or not; the default 0 means do not insert a new axis. Default value is name: "new_axis" type: INT i: 0

Inputs

  • input_sequence (heterogeneous)S: Sequence of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
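
The new_axis attribute maps directly onto the two numpy functions named in the summary; a quick sketch:

import numpy as np

seq = [np.array([1, 2]), np.array([3, 4])]
print(np.concatenate(seq, axis=0))  # new_axis=0 -> [1 2 3 4]
print(np.stack(seq, axis=0))        # new_axis=1 -> [[1 2] [3 4]]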

OnnxConcatFromSequence_11#

class skl2onnx.algebra.onnx_ops.OnnxConcatFromSequence_11(*args, **kwargs)#

Version

Onnx name: ConcatFromSequence

This version of the operator has been available since version 11.

Summary

Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.

Attributes

  • new_axis: Insert and concatenate on a new axis or not; the default 0 means do not insert a new axis. Default value is name: "new_axis" type: INT i: 0

Inputs

  • input_sequence (heterogeneous)S: Sequence of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.

OnnxConcat_1#

class skl2onnx.algebra.onnx_ops.OnnxConcat_1(*args, **kwargs)#

Version

Onnx name: Concat

This version of the operator has been available since version 1.

Summary

Concatenate a list of tensors into a single tensor.

Attributes

Inputs

Between 1 and 2147483647 inputs.

  • inputs (variadic, heterogeneous)T: List of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain output types to float tensors.

OnnxConcat_11#

class skl2onnx.algebra.onnx_ops.OnnxConcat_11(*args, **kwargs)#

Version

Onnx name: Concat

This version of the operator has been available since version 11.

Summary

Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.

Attributes

Inputs

Between 1 and 2147483647 inputs.

  • inputs (variadic, heterogeneous)T: List of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.

OnnxConcat_13#

class skl2onnx.algebra.onnx_ops.OnnxConcat_13(*args, **kwargs)#

Version

Onnx name: Concat

This version of the operator has been available since version 13.

Summary

Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.

Attributes

Inputs

Between 1 and 2147483647 inputs.

  • inputs (variadic, heterogeneous)T: List of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.

OnnxConcat_4#

class skl2onnx.algebra.onnx_ops.OnnxConcat_4(*args, **kwargs)#

Version

Onnx name: Concat

This version of the operator has been available since version 4.

Summary

Concatenate a list of tensors into a single tensor.

Attributes

Inputs

Between 1 and 2147483647 inputs.

  • inputs (variadic, heterogeneous)T: List of tensors for concatenation

Outputs

  • concat_result (heterogeneous)T: Concatenated tensor

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.

OnnxConstant#

class skl2onnx.algebra.onnx_ops.OnnxConstant(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 19.

Summary

This operator produces a constant tensor. Exactly one of the provided attributes (value, sparse_value, or value_*) must be specified.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input and output types to all tensor types.

OnnxConstantOfShape#

class skl2onnx.algebra.onnx_ops.OnnxConstantOfShape(*args, **kwargs)#

Version

Onnx name: ConstantOfShape

This version of the operator has been available since version 20.

Summary

Generate a tensor with given value and shape.

Attributes

Inputs

  • input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If an empty tensor is given, the output is a scalar. All values must be >= 0.

Outputs

  • output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor are taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.

Type Constraints

  • T1 tensor(int64): Constrain input types.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types to be numerics.
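
The output rule above corresponds to numpy.full; a sketch of both cases (the fill values are arbitrary):

import numpy as np

shape = np.array([2, 3], dtype=np.int64)      # plays the role of 'input'
# 'value' attribute given: output takes its value and datatype.
print(np.full(shape, 7, dtype=np.int32))
# no 'value' attribute: float32 zeros.
print(np.full(shape, 0.0, dtype=np.float32))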

OnnxConstantOfShape_20#

class skl2onnx.algebra.onnx_ops.OnnxConstantOfShape_20(*args, **kwargs)#

Version

Onnx name: ConstantOfShape

This version of the operator has been available since version 20.

Summary

Generate a tensor with given value and shape.

Attributes

Inputs

  • input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If an empty tensor is given, the output is a scalar. All values must be >= 0.

Outputs

  • output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor are taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.

Type Constraints

  • T1 tensor(int64): Constrain input types.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(bfloat16), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain output types to be numerics.

OnnxConstantOfShape_9#

class skl2onnx.algebra.onnx_ops.OnnxConstantOfShape_9(*args, **kwargs)#

Version

Onnx name: ConstantOfShape

This version of the operator has been available since version 9.

Summary

Generate a tensor with given value and shape.

Attributes

Inputs

  • input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If an empty tensor is given, the output is a scalar. All values must be >= 0.

Outputs

  • output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor are taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.

Type Constraints

  • T1 tensor(int64): Constrain input types.

  • T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types to be numerics.

OnnxConstant_1#

class skl2onnx.algebra.onnx_ops.OnnxConstant_1(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 1.

Summary

A constant tensor.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxConstant_11#

class skl2onnx.algebra.onnx_ops.OnnxConstant_11(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 11.

Summary

A constant tensor. Exactly one of the two attributes, either value or sparse_value, must be specified.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

OnnxConstant_12#

class skl2onnx.algebra.onnx_ops.OnnxConstant_12(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 12.

Summary

This operator produces a constant tensor. Exactly one of the provided attributes (value, sparse_value, or value_*) must be specified.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

OnnxConstant_13#

class skl2onnx.algebra.onnx_ops.OnnxConstant_13(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 13.

Summary

This operator produces a constant tensor. Exactly one of the provided attributes (value, sparse_value, or value_*) must be specified.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

OnnxConstant_19#

class skl2onnx.algebra.onnx_ops.OnnxConstant_19(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 19.

Summary

This operator produces a constant tensor. Exactly one of the provided attributes (value, sparse_value, or value_*) must be specified.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128), tensor(float8e4m3fn), tensor(float8e4m3fnuz), tensor(float8e5m2), tensor(float8e5m2fnuz): Constrain input and output types to all tensor types.

OnnxConstant_9#

class skl2onnx.algebra.onnx_ops.OnnxConstant_9(*args, **kwargs)#

Version

Onnx name: Constant

This version of the operator has been available since version 9.

Summary

A constant tensor.

Attributes

Outputs

  • output (heterogeneous)T: Output tensor containing the same value as the provided tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.

OnnxConv#

class skl2onnx.algebra.onnx_ops.OnnxConv(*args, **kwargs)#

Version

Onnx name: Conv

This version of the operator has been available since version 11.

Summary

The convolution operator consumes an input tensor and a filter, and computes the output.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. Assuming zero based indices for the shape array, X.shape[1] == (W.shape[1] * group) == C and W.shape[0] mod G == 0. Or in other words FILTER_IN_CHANNEL multiplied by the number of groups should be equal to DATA_CHANNEL and the number of feature maps M should be a multiple of the number of groups G.

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
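
The SAME_UPPER/SAME_LOWER rule in the auto_pad attribute above can be made concrete with a few lines of Python. This is a sketch of the padding arithmetic only, under the stated formula output_shape[i] = ceil(input_shape[i] / strides[i]); it is not the library's implementation:

import math

def same_upper_pads(input_size, kernel, stride, dilation=1):
    # target output size for SAME_UPPER / SAME_LOWER
    out = math.ceil(input_size / stride)
    eff_kernel = (kernel - 1) * dilation + 1
    total = max((out - 1) * stride + eff_kernel - input_size, 0)
    begin = total // 2          # SAME_UPPER: the extra pad goes at the end
    return out, (begin, total - begin)

print(same_upper_pads(input_size=7, kernel=3, stride=2))  # (4, (1, 1))
print(same_upper_pads(input_size=8, kernel=3, stride=2))  # (4, (0, 1))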

OnnxConvInteger#

class skl2onnx.algebra.onnx_ops.OnnxConvInteger(*args, **kwargs)#

Version

Onnx name: ConvInteger

This version of the operator has been available since version 10.

Summary

The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point, and computes the output. The production MUST never overflow. The accumulation may overflow when performed in 32 bits.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. default is 1. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 4 inputs.

  • x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.

  • x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and default value is 0. It’s a scalar, which means a per-tensor/layer quantization.

  • w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it’s a 1-D tensor, its number of elements should be equal to the number of output channels (M)

Outputs

  • y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

Type Constraints

  • T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.

  • T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.

  • T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.
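
A 1-D numpy sketch of the zero-point arithmetic: both operands are shifted by their zero points and the products are accumulated in int32, as the type constraints require. The values are made up:

import numpy as np

x = np.array([10, 20, 30, 40], dtype=np.uint8)  # quantized input
w = np.array([1, 2], dtype=np.uint8)            # quantized 1-D kernel
x_zp, w_zp = 10, 0                              # zero points

x32 = x.astype(np.int32) - x_zp
w32 = w.astype(np.int32) - w_zp
y = np.array([np.dot(x32[i:i + w.size], w32)
              for i in range(x.size - w.size + 1)], dtype=np.int32)
print(y)  # [20 50 80]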

OnnxConvInteger_10#

class skl2onnx.algebra.onnx_ops.OnnxConvInteger_10(*args, **kwargs)#

Version

Onnx name: ConvInteger

This version of the operator has been available since version 10.

Summary

The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point, and computes the output. The production MUST never overflow. The accumulation may overflow when performed in 32 bits.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. default is 1. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 4 inputs.

  • x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.

  • x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and default value is 0. It’s a scalar, which means a per-tensor/layer quantization.

  • w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it’s a 1-D tensor, its number of elements should be equal to the number of output channels (M)

Outputs

  • y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

Type Constraints

  • T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.

  • T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.

  • T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.

OnnxConvTranspose#

class skl2onnx.algebra.onnx_ops.OnnxConvTranspose(*args, **kwargs)#

Version

Onnx name: ConvTranspose

This version of the operator has been available since version 11.

Summary

The convolution transpose operator consumes an input tensor and a filter, and computes the output.

If the pads parameter is provided the shape of the output is calculated via the following equation:

output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]

output_shape can also be explicitly specified in which case pads values are auto generated using these equations:

total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]

If (auto_pads == SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)

Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
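
The two equations above are easy to evaluate directly; a sketch (the helper names are made up):

def output_size(input_size, kernel, stride, pad_begin=0, pad_end=0,
                output_padding=0, dilation=1):
    # output shape when pads are given explicitly
    return (stride * (input_size - 1) + output_padding
            + ((kernel - 1) * dilation + 1) - pad_begin - pad_end)

def auto_pads(input_size, kernel, stride, output_shape,
              output_padding=0, dilation=1, same_upper=True):
    # pads auto-generated from a requested output_shape
    total = (stride * (input_size - 1) + output_padding
             + ((kernel - 1) * dilation + 1) - output_shape)
    if same_upper:
        return total // 2, total - total // 2
    return total - total // 2, total // 2

print(output_size(3, kernel=3, stride=2))                # 7
print(auto_pads(3, kernel=3, stride=2, output_shape=6))  # (0, 1)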

OnnxConvTranspose_1#

class skl2onnx.algebra.onnx_ops.OnnxConvTranspose_1(*args, **kwargs)#

Version

Onnx name: ConvTranspose

This version of the operator has been available since version 1.

Summary

The convolution transpose operator consumes an input tensor and a filter, and computes the output.

If the pads parameter is provided the shape of the output is calculated via the following equation:

output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]

output_shape can also be explicitly specified in which case pads values are auto generated using these equations:

total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]

If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)

Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxConvTranspose_11#

class skl2onnx.algebra.onnx_ops.OnnxConvTranspose_11(*args, **kwargs)#

Version

Onnx name: ConvTranspose

This version of the operator has been available since version 11.

Summary

The convolution transpose operator consumes an input tensor and a filter, and computes the output.

If the pads parameter is provided the shape of the output is calculated via the following equation:

output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]

output_shape can also be explicitly specified in which case pads values are auto generated using these equations:

total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]

If (auto_pads == SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)

Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxConv_1#

class skl2onnx.algebra.onnx_ops.OnnxConv_1(*args, **kwargs)#

Version

Onnx name: Conv

This version of the operator has been available since version 1.

Summary

The convolution operator consumes an input tensor and a filter, and computes the output.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxConv_11#

class skl2onnx.algebra.onnx_ops.OnnxConv_11(*args, **kwargs)#

Version

Onnx name: Conv

This version of the operator has been available since version 11.

Summary

The convolution operator consumes an input tensor and a filter, and computes the output.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is name: "auto_pad" type: STRING s: "NOTSET"

  • group: number of groups input channels and output channels are divided into. Default value is name: "group" type: INT i: 1

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. Assuming zero based indices for the shape array, X.shape[1] == (W.shape[1] * group) == C and W.shape[0] mod G == 0. Or in other words FILTER_IN_CHANNEL multiplied by the number of groups should be equal to DATA_CHANNEL and the number of feature maps M should be a multiple of the number of groups G.

  • B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.

Outputs

  • Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCos#

class skl2onnx.algebra.onnx_ops.OnnxCos(*args, **kwargs)#

Version

Onnx name: Cos

This version of the operator has been available since version 7.

Summary

Calculates the cosine of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The cosine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCos_7#

class skl2onnx.algebra.onnx_ops.OnnxCos_7(*args, **kwargs)#

Version

Onnx name: Cos

This version of the operator has been available since version 7.

Summary

Calculates the cosine of the given input tensor, element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The cosine of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCosh#

class skl2onnx.algebra.onnx_ops.OnnxCosh(*args, **kwargs)#

Version

Onnx name: Cosh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic cosine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCosh_9#

class skl2onnx.algebra.onnx_ops.OnnxCosh_9(*args, **kwargs)#

Version

Onnx name: Cosh

This version of the operator has been available since version 9.

Summary

Calculates the hyperbolic cosine of the given input tensor element-wise.

Inputs

  • input (heterogeneous)T: Input tensor

Outputs

  • output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed element-wise

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

OnnxCumSum#

class skl2onnx.algebra.onnx_ops.OnnxCumSum(*args, **kwargs)#

Version

Onnx name: CumSum

This version of the operator has been available since version 14.

Summary

Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively, meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set the reverse attribute to 1.

Example:

input_x = [1, 2, 3]
axis=0
output = [1, 3, 6]
exclusive=1
output = [0, 1, 3]
exclusive=0
reverse=1
output = [6, 5, 3]
exclusive=1
reverse=1
output = [5, 3, 0]

Attributes

  • exclusive: If set to 1, returns the exclusive sum, in which the top element is not included. In other terms, if set to 1, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements. Default value is name: "exclusive" type: INT i: 0

  • reverse: If set to 1, performs the sums in the reverse direction. Default value is name: "reverse" type: INT i: 0

Inputs

  • x (heterogeneous)T: An input tensor that is to be processed.

  • axis (heterogeneous)T2: A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. Negative value means counting dimensions from the back.

Outputs

  • y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements

Type Constraints

  • T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to high-precision numeric tensors.

  • T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
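
The exclusive and reverse attributes can be emulated in numpy, reproducing the example above:

import numpy as np

x = np.array([1, 2, 3])

inclusive = np.cumsum(x)                               # [1 3 6]
exclusive = np.concatenate(([0], inclusive[:-1]))      # [0 1 3]
reverse_inc = np.cumsum(x[::-1])[::-1]                 # [6 5 3]
reverse_exc = np.concatenate((reverse_inc[1:], [0]))   # [5 3 0]
print(inclusive, exclusive, reverse_inc, reverse_exc)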

OnnxCumSum_11#

class skl2onnx.algebra.onnx_ops.OnnxCumSum_11(*args, **kwargs)#

Version

Onnx name: CumSum

This version of the operator has been available since version 11.

Summary

Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively, meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set the reverse attribute to 1.

Example:

input_x = [1, 2, 3]
axis=0
output = [1, 3, 6]
exclusive=1
output = [0, 1, 3]
exclusive=0
reverse=1
output = [6, 5, 3]
exclusive=1
reverse=1
output = [5, 3, 0]

Attributes

  • exclusive: If set to 1, returns the exclusive sum, in which the top element is not included. In other terms, if set to 1, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements. Default value is name: "exclusive" type: INT i: 0