Supported scikit-learn Models¶
skl2onnx can currently convert the following list of scikit-learn models. They were tested using onnxruntime.
All of the following classes overload the same methods that OnnxSklearnPipeline does. They wrap existing scikit-learn classes by dynamically creating a new class that inherits from OnnxOperatorMixin, which implements the to_onnx method.
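The dynamic-wrapping pattern described above can be sketched in plain Python. Note that `_OnnxOperatorMixin` and `make_onnx_class` below are simplified stand-ins for illustration, not skl2onnx's actual implementation; only the mixin name and the `OnnxSklearn<Name>` naming scheme come from the docs:

```python
from sklearn.linear_model import LogisticRegression

class _OnnxOperatorMixin:
    """Simplified stand-in for skl2onnx's OnnxOperatorMixin."""
    def to_onnx(self, X=None):
        # The real mixin builds an ONNX graph; here we only report intent.
        return "ONNX graph for " + self.__class__.__bases__[0].__name__

def make_onnx_class(skl_class):
    # Dynamically create OnnxSklearn<Name>, inheriting from both the
    # original estimator and the mixin -- the pattern the docs describe.
    return type("OnnxSklearn" + skl_class.__name__,
                (skl_class, _OnnxOperatorMixin), {})

OnnxSklearnLogisticRegression = make_onnx_class(LogisticRegression)
model = OnnxSklearnLogisticRegression(max_iter=200)
print(model.to_onnx())  # → "ONNX graph for LogisticRegression"
```

The new class behaves exactly like the wrapped estimator (it still is a `LogisticRegression`) while gaining the `to_onnx` entry point from the mixin.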
Covered Converters¶
| Name | Package | Supported |
| --- | --- | --- |
| ARDRegression | linear_model | Yes |
| AdaBoostClassifier | ensemble | Yes |
| AdaBoostRegressor | ensemble | Yes |
| AdditiveChi2Sampler | kernel_approximation | |
| AffinityPropagation | cluster | |
| AgglomerativeClustering | cluster | |
| BaggingClassifier | ensemble | Yes |
| BaggingRegressor | ensemble | Yes |
| BaseDecisionTree | tree | |
| BaseEnsemble | ensemble | |
| BayesianGaussianMixture | mixture | Yes |
| BayesianRidge | linear_model | Yes |
| BernoulliNB | naive_bayes | Yes |
| BernoulliRBM | neural_network | |
| Binarizer | preprocessing | Yes |
| Birch | cluster | |
| CCA | cross_decomposition | |
| CalibratedClassifierCV | calibration | Yes |
| CategoricalNB | naive_bayes | Yes |
| ClassifierChain | multioutput | |
| ComplementNB | naive_bayes | Yes |
| DBSCAN | cluster | |
| DecisionTreeClassifier | tree | Yes |
| DecisionTreeRegressor | tree | Yes |
| DictVectorizer | feature_extraction | Yes |
| DictionaryLearning | decomposition | |
| ElasticNet | linear_model | Yes |
| ElasticNetCV | linear_model | Yes |
| EllipticEnvelope | covariance | |
| EmpiricalCovariance | covariance | |
| ExtraTreeClassifier | tree | Yes |
| ExtraTreeRegressor | tree | Yes |
| ExtraTreesClassifier | ensemble | Yes |
| ExtraTreesRegressor | ensemble | Yes |
| FactorAnalysis | decomposition | |
| FastICA | decomposition | |
| FeatureAgglomeration | cluster | |
| FeatureHasher | feature_extraction | |
| FunctionTransformer | preprocessing | Yes |
| GammaRegressor | linear_model | |
| GaussianMixture | mixture | Yes |
| GaussianNB | naive_bayes | Yes |
| GaussianProcessClassifier | gaussian_process | Yes |
| GaussianProcessRegressor | gaussian_process | Yes |
| GaussianRandomProjection | random_projection | Yes |
| GenericUnivariateSelect | feature_selection | Yes |
| GradientBoostingClassifier | ensemble | Yes |
| GradientBoostingRegressor | ensemble | Yes |
| GraphicalLasso | covariance | |
| GraphicalLassoCV | covariance | |
| GridSearchCV | model_selection | Yes |
| HuberRegressor | linear_model | Yes |
| IncrementalPCA | decomposition | Yes |
| IsolationForest | ensemble | Yes |
| IsotonicRegression | isotonic | |
| KBinsDiscretizer | preprocessing | Yes |
| KMeans | cluster | Yes |
| KNNImputer | impute | Yes |
| KNeighborsClassifier | neighbors | Yes |
| KNeighborsRegressor | neighbors | Yes |
| KNeighborsTransformer | neighbors | Yes |
| KernelCenterer | preprocessing | |
| KernelDensity | neighbors | |
| KernelPCA | decomposition | |
| KernelRidge | kernel_ridge | |
| LabelBinarizer | preprocessing | Yes |
| LabelEncoder | preprocessing | Yes |
| LabelPropagation | semi_supervised | |
| LabelSpreading | semi_supervised | |
| Lars | linear_model | Yes |
| LarsCV | linear_model | Yes |
| Lasso | linear_model | Yes |
| LassoCV | linear_model | Yes |
| LassoLars | linear_model | Yes |
| LassoLarsCV | linear_model | Yes |
| LassoLarsIC | linear_model | Yes |
| LatentDirichletAllocation | decomposition | |
| LedoitWolf | covariance | |
| LinearDiscriminantAnalysis | discriminant_analysis | Yes |
| LinearRegression | linear_model | Yes |
| LinearSVC | svm | Yes |
| LinearSVR | svm | Yes |
| LocalOutlierFactor | neighbors | |
| LogisticRegression | linear_model | Yes |
| LogisticRegressionCV | linear_model | Yes |
| MLPClassifier | neural_network | Yes |
| MLPRegressor | neural_network | Yes |
| MaxAbsScaler | preprocessing | Yes |
| MeanShift | cluster | |
| MinCovDet | covariance | |
| MinMaxScaler | preprocessing | Yes |
| MiniBatchDictionaryLearning | decomposition | |
| MiniBatchKMeans | cluster | Yes |
| MiniBatchSparsePCA | decomposition | |
| MissingIndicator | impute | |
| MultiLabelBinarizer | preprocessing | |
| MultiOutputClassifier | multioutput | |
| MultiOutputRegressor | multioutput | |
| MultiTaskElasticNet | linear_model | Yes |
| MultiTaskElasticNetCV | linear_model | Yes |
| MultiTaskLasso | linear_model | Yes |
| MultiTaskLassoCV | linear_model | Yes |
| MultinomialNB | naive_bayes | Yes |
| NMF | decomposition | |
| NearestCentroid | neighbors | |
| NearestNeighbors | neighbors | Yes |
| NeighborhoodComponentsAnalysis | neighbors | Yes |
| Normalizer | preprocessing | Yes |
| NuSVC | svm | Yes |
| NuSVR | svm | Yes |
| Nystroem | kernel_approximation | |
| OAS | covariance | |
| OPTICS | cluster | |
| OneClassSVM | svm | Yes |
| OneHotEncoder | preprocessing | Yes |
| OneVsOneClassifier | multiclass | |
| OneVsRestClassifier | multiclass | Yes |
| OrdinalEncoder | preprocessing | Yes |
| OrthogonalMatchingPursuit | linear_model | Yes |
| OrthogonalMatchingPursuitCV | linear_model | Yes |
| OutputCodeClassifier | multiclass | |
| PCA | decomposition | Yes |
| PLSCanonical | cross_decomposition | |
| PLSRegression | cross_decomposition | Yes |
| PLSSVD | cross_decomposition | |
| PassiveAggressiveClassifier | linear_model | Yes |
| PassiveAggressiveRegressor | linear_model | Yes |
| Perceptron | linear_model | Yes |
| PoissonRegressor | linear_model | |
| PolynomialCountSketch | kernel_approximation | |
| PolynomialFeatures | preprocessing | Yes |
| PowerTransformer | preprocessing | Yes |
| QuadraticDiscriminantAnalysis | discriminant_analysis | |
| QuantileTransformer | preprocessing | |
| RANSACRegressor | linear_model | Yes |
| RBFSampler | kernel_approximation | |
| RFE | feature_selection | Yes |
| RFECV | feature_selection | Yes |
| RadiusNeighborsClassifier | neighbors | Yes |
| RadiusNeighborsRegressor | neighbors | Yes |
| RadiusNeighborsTransformer | neighbors | |
| RandomForestClassifier | ensemble | Yes |
| RandomForestRegressor | ensemble | Yes |
| RandomTreesEmbedding | ensemble | |
| RandomizedSearchCV | model_selection | |
| RegressorChain | multioutput | |
| Ridge | linear_model | Yes |
| RidgeCV | linear_model | Yes |
| RidgeClassifier | linear_model | Yes |
| RidgeClassifierCV | linear_model | Yes |
| RobustScaler | preprocessing | Yes |
| SGDClassifier | linear_model | Yes |
| SGDRegressor | linear_model | Yes |
| SVC | svm | Yes |
| SVR | svm | Yes |
| SelectFdr | feature_selection | Yes |
| SelectFpr | feature_selection | Yes |
| SelectFromModel | feature_selection | Yes |
| SelectFwe | feature_selection | Yes |
| SelectKBest | feature_selection | Yes |
| SelectPercentile | feature_selection | Yes |
| SelfTrainingClassifier | semi_supervised | |
| SequentialFeatureSelector | feature_selection | |
| ShrunkCovariance | covariance | |
| SimpleImputer | impute | Yes |
| SkewedChi2Sampler | kernel_approximation | |
| SparseCoder | decomposition | |
| SparsePCA | decomposition | |
| SparseRandomProjection | random_projection | |
| SpectralBiclustering | cluster | |
| SpectralClustering | cluster | |
| SpectralCoclustering | cluster | |
| StackingClassifier | ensemble | Yes |
| StackingRegressor | ensemble | Yes |
| StandardScaler | preprocessing | Yes |
| TheilSenRegressor | linear_model | Yes |
| TransformedTargetRegressor | compose | |
| TruncatedSVD | decomposition | Yes |
| TweedieRegressor | linear_model | |
| VarianceThreshold | feature_selection | Yes |
| VotingClassifier | ensemble | Yes |
| VotingRegressor | ensemble | Yes |
scikit-learn’s version is 0.24.2. 114/182 models are covered.
Converters Documentation¶
OnnxBooster¶
OnnxCastRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxCastRegressor(estimator, *, dtype=<class 'numpy.float32'>)¶
OnnxOperatorMixin for CastRegressor
OnnxCastTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxCastTransformer(*, dtype=<class 'numpy.float32'>)¶
OnnxOperatorMixin for CastTransformer
OnnxCustomScorerTransform¶
OnnxDecorrelateTransformer¶
OnnxLiveDecorrelateTransformer¶
OnnxMockWrappedLightGbmBoosterClassifier¶
OnnxPredictableTSNE¶
OnnxReplaceTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxReplaceTransformer(*, from_value=0, to_value=nan, dtype=<class 'numpy.float32'>)¶
OnnxOperatorMixin for ReplaceTransformer
OnnxSklearnARDRegression¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnARDRegression(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, threshold_lambda=10000.0, fit_intercept=True, normalize=False, copy_X=True, verbose=False)¶
OnnxOperatorMixin for ARDRegression
OnnxSklearnAdaBoostClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnAdaBoostClassifier(base_estimator=None, *, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None)¶
OnnxOperatorMixin for AdaBoostClassifier
OnnxSklearnAdaBoostRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnAdaBoostRegressor(base_estimator=None, *, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)¶
OnnxOperatorMixin for AdaBoostRegressor
OnnxSklearnBaggingClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBaggingClassifier(base_estimator=None, n_estimators=10, *, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0)¶
OnnxOperatorMixin for BaggingClassifier
OnnxSklearnBaggingRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBaggingRegressor(base_estimator=None, n_estimators=10, *, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0)¶
OnnxOperatorMixin for BaggingRegressor
OnnxSklearnBayesianGaussianMixture¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=None, covariance_prior=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10)¶
OnnxOperatorMixin for BayesianGaussianMixture
OnnxSklearnBayesianRidge¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBayesianRidge(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, alpha_init=None, lambda_init=None, compute_score=False, fit_intercept=True, normalize=False, copy_X=True, verbose=False)¶
OnnxOperatorMixin for BayesianRidge
OnnxSklearnBernoulliNB¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBernoulliNB(*, alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None)¶
OnnxOperatorMixin for BernoulliNB
OnnxSklearnBinarizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnBinarizer(*, threshold=0.0, copy=True)¶
OnnxOperatorMixin for Binarizer
OnnxSklearnCalibratedClassifierCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnCalibratedClassifierCV(base_estimator=None, *, method='sigmoid', cv=None, n_jobs=None, ensemble=True)¶
OnnxOperatorMixin for CalibratedClassifierCV
OnnxSklearnCategoricalNB¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnCategoricalNB(*, alpha=1.0, fit_prior=True, class_prior=None, min_categories=None)¶
OnnxOperatorMixin for CategoricalNB
OnnxSklearnComplementNB¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnComplementNB(*, alpha=1.0, fit_prior=True, class_prior=None, norm=False)¶
OnnxOperatorMixin for ComplementNB
OnnxSklearnCountVectorizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnCountVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)¶
OnnxOperatorMixin for CountVectorizer
OnnxSklearnDecisionTreeClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnDecisionTreeClassifier(*, criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, ccp_alpha=0.0)¶
OnnxOperatorMixin for DecisionTreeClassifier
OnnxSklearnDecisionTreeRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnDecisionTreeRegressor(*, criterion='mse', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, ccp_alpha=0.0)¶
OnnxOperatorMixin for DecisionTreeRegressor
OnnxSklearnDictVectorizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnDictVectorizer(*, dtype=<class 'numpy.float64'>, separator='=', sparse=True, sort=True)¶
OnnxOperatorMixin for DictVectorizer
OnnxSklearnElasticNet¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for ElasticNet
OnnxSklearnElasticNetCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, positive=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for ElasticNetCV
OnnxSklearnExtraTreeClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreeClassifier(*, criterion='gini', splitter='random', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, ccp_alpha=0.0)¶
OnnxOperatorMixin for ExtraTreeClassifier
OnnxSklearnExtraTreeRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreeRegressor(*, criterion='mse', splitter='random', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', random_state=None, min_impurity_decrease=0.0, min_impurity_split=None, max_leaf_nodes=None, ccp_alpha=0.0)¶
OnnxOperatorMixin for ExtraTreeRegressor
OnnxSklearnExtraTreesClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreesClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)¶
OnnxOperatorMixin for ExtraTreesClassifier
OnnxSklearnExtraTreesRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnExtraTreesRegressor(n_estimators=100, *, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None)¶
OnnxOperatorMixin for ExtraTreesRegressor
OnnxSklearnFunctionTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnFunctionTransformer(func=None, inverse_func=None, *, validate=False, accept_sparse=False, check_inverse=True, kw_args=None, inv_kw_args=None)¶
OnnxOperatorMixin for FunctionTransformer
OnnxSklearnGaussianMixture¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10)¶
OnnxOperatorMixin for GaussianMixture
OnnxSklearnGaussianNB¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianNB(*, priors=None, var_smoothing=1e-09)¶
OnnxOperatorMixin for GaussianNB
OnnxSklearnGaussianProcessClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianProcessClassifier(kernel=None, *, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, max_iter_predict=100, warm_start=False, copy_X_train=True, random_state=None, multi_class='one_vs_rest', n_jobs=None)¶
OnnxOperatorMixin for GaussianProcessClassifier
OnnxSklearnGaussianProcessRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianProcessRegressor(kernel=None, *, alpha=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, normalize_y=False, copy_X_train=True, random_state=None)¶
OnnxOperatorMixin for GaussianProcessRegressor
OnnxSklearnGaussianRandomProjection¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGaussianRandomProjection(n_components='auto', *, eps=0.1, random_state=None)¶
OnnxOperatorMixin for GaussianRandomProjection
OnnxSklearnGenericUnivariateSelect¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGenericUnivariateSelect(score_func=<function f_classif>, *, mode='percentile', param=1e-05)¶
OnnxOperatorMixin for GenericUnivariateSelect
OnnxSklearnGradientBoostingClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGradientBoostingClassifier(*, loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)¶
OnnxOperatorMixin for GradientBoostingClassifier
OnnxSklearnGradientBoostingRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGradientBoostingRegressor(*, loss='ls', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)¶
OnnxOperatorMixin for GradientBoostingRegressor
OnnxSklearnGridSearchCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnGridSearchCV(estimator, param_grid, *, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False)¶
OnnxOperatorMixin for GridSearchCV
OnnxSklearnHistGradientBoostingClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnHistGradientBoostingClassifier(loss='auto', *, learning_rate=0.1, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None)¶
OnnxOperatorMixin for HistGradientBoostingClassifier
OnnxSklearnHistGradientBoostingRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnHistGradientBoostingRegressor(loss='least_squares', *, learning_rate=0.1, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None)¶
OnnxOperatorMixin for HistGradientBoostingRegressor
OnnxSklearnHuberRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnHuberRegressor(*, epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05)¶
OnnxOperatorMixin for HuberRegressor
OnnxSklearnIncrementalPCA¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnIncrementalPCA(n_components=None, *, whiten=False, copy=True, batch_size=None)¶
OnnxOperatorMixin for IncrementalPCA
OnnxSklearnIsolationForest¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnIsolationForest(*, n_estimators=100, max_samples='auto', contamination='auto', max_features=1.0, bootstrap=False, n_jobs=None, random_state=None, verbose=0, warm_start=False)¶
OnnxOperatorMixin for IsolationForest
OnnxSklearnKBinsDiscretizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKBinsDiscretizer(n_bins=5, *, encode='onehot', strategy='quantile', dtype=None)¶
OnnxOperatorMixin for KBinsDiscretizer
OnnxSklearnKMeans¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKMeans(n_clusters=8, *, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='deprecated', verbose=0, random_state=None, copy_x=True, n_jobs='deprecated', algorithm='auto')¶
OnnxOperatorMixin for KMeans
OnnxSklearnKNNImputer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNNImputer(*, missing_values=nan, n_neighbors=5, weights='uniform', metric='nan_euclidean', copy=True, add_indicator=False)¶
OnnxOperatorMixin for KNNImputer
OnnxSklearnKNeighborsClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsClassifier(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)¶
OnnxOperatorMixin for KNeighborsClassifier
OnnxSklearnKNeighborsRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsRegressor(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)¶
OnnxOperatorMixin for KNeighborsRegressor
OnnxSklearnKNeighborsTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnKNeighborsTransformer(*, mode='distance', n_neighbors=5, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1)¶
OnnxOperatorMixin for KNeighborsTransformer
OnnxSklearnLGBMClassifier¶
OnnxSklearnLGBMRegressor¶
OnnxSklearnLabelBinarizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLabelBinarizer(*, neg_label=0, pos_label=1, sparse_output=False)¶
OnnxOperatorMixin for LabelBinarizer
OnnxSklearnLabelEncoder¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLabelEncoder¶
OnnxOperatorMixin for LabelEncoder
OnnxSklearnLars¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLars(*, fit_intercept=True, verbose=False, normalize=True, precompute='auto', n_nonzero_coefs=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, jitter=None, random_state=None)¶
OnnxOperatorMixin for Lars
OnnxSklearnLarsCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True)¶
OnnxOperatorMixin for LarsCV
OnnxSklearnLasso¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLasso(alpha=1.0, *, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for Lasso
OnnxSklearnLassoCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, positive=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for LassoCV
OnnxSklearnLassoLars¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLars(alpha=1.0, *, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, positive=False, jitter=None, random_state=None)¶
OnnxOperatorMixin for LassoLars
OnnxSklearnLassoLarsCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True, positive=False)¶
OnnxOperatorMixin for LassoLarsCV
OnnxSklearnLassoLarsIC¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLassoLarsIC(criterion='aic', *, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, positive=False)¶
OnnxOperatorMixin for LassoLarsIC
OnnxSklearnLinearDiscriminantAnalysis¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearDiscriminantAnalysis(solver='svd', shrinkage=None, priors=None, n_components=None, store_covariance=False, tol=0.0001, covariance_estimator=None)¶
OnnxOperatorMixin for LinearDiscriminantAnalysis
OnnxSklearnLinearRegression¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearRegression(*, fit_intercept=True, normalize=False, copy_X=True, n_jobs=None, positive=False)¶
OnnxOperatorMixin for LinearRegression
OnnxSklearnLinearSVC¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000)¶
OnnxOperatorMixin for LinearSVC
OnnxSklearnLinearSVR¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000)¶
OnnxOperatorMixin for LinearSVR
OnnxSklearnLogisticRegression¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLogisticRegression(penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)¶
OnnxOperatorMixin for LogisticRegression
OnnxSklearnLogisticRegressionCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnLogisticRegressionCV(*, Cs=10, fit_intercept=True, cv=None, dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='auto', random_state=None, l1_ratios=None)¶
OnnxOperatorMixin for LogisticRegressionCV
OnnxSklearnMLPClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMLPClassifier(hidden_layer_sizes=(100,), activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)¶
OnnxOperatorMixin for MLPClassifier
OnnxSklearnMLPRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMLPRegressor(hidden_layer_sizes=(100,), activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)¶
OnnxOperatorMixin for MLPRegressor
OnnxSklearnMaxAbsScaler¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMaxAbsScaler(*, copy=True)¶
OnnxOperatorMixin for MaxAbsScaler
OnnxSklearnMinMaxScaler¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMinMaxScaler(feature_range=(0, 1), *, copy=True, clip=False)¶
OnnxOperatorMixin for MinMaxScaler
OnnxSklearnMiniBatchKMeans¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMiniBatchKMeans(n_clusters=8, *, init='k-means++', max_iter=100, batch_size=100, verbose=0, compute_labels=True, random_state=None, tol=0.0, max_no_improvement=10, init_size=None, n_init=3, reassignment_ratio=0.01)¶
OnnxOperatorMixin for MiniBatchKMeans
OnnxSklearnMultiTaskElasticNet¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for MultiTaskElasticNet
OnnxSklearnMultiTaskElasticNetCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for MultiTaskElasticNetCV
OnnxSklearnMultiTaskLasso¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskLasso(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for MultiTaskLasso
OnnxSklearnMultiTaskLassoCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultiTaskLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, random_state=None, selection='cyclic')¶
OnnxOperatorMixin for MultiTaskLassoCV
OnnxSklearnMultinomialNB¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnMultinomialNB(*, alpha=1.0, fit_prior=True, class_prior=None)¶
OnnxOperatorMixin for MultinomialNB
OnnxSklearnNearestNeighbors¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnNearestNeighbors(*, n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None)¶
OnnxOperatorMixin for NearestNeighbors
OnnxSklearnNeighborhoodComponentsAnalysis¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnNeighborhoodComponentsAnalysis(n_components=None, *, init='auto', warm_start=False, max_iter=50, tol=1e-05, callback=None, verbose=0, random_state=None)¶
OnnxOperatorMixin for NeighborhoodComponentsAnalysis
OnnxSklearnNormalizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnNormalizer(norm='l2', *, copy=True)¶
OnnxOperatorMixin for Normalizer
OnnxSklearnNuSVC¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnNuSVC(*, nu=0.5, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)¶
OnnxOperatorMixin for NuSVC
OnnxSklearnNuSVR¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnNuSVR(*, nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1)¶
OnnxOperatorMixin for NuSVR
OnnxSklearnOneClassSVM¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneClassSVM(*, kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, nu=0.5, shrinking=True, cache_size=200, verbose=False, max_iter=-1)¶
OnnxOperatorMixin for OneClassSVM
OnnxSklearnOneHotEncoder¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneHotEncoder(*, categories='auto', drop=None, sparse=True, dtype=<class 'numpy.float64'>, handle_unknown='error')¶
OnnxOperatorMixin for OneHotEncoder
OnnxSklearnOneVsRestClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOneVsRestClassifier(estimator, *, n_jobs=None)¶
OnnxOperatorMixin for OneVsRestClassifier
OnnxSklearnOrdinalEncoder¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrdinalEncoder(*, categories='auto', dtype=<class 'numpy.float64'>, handle_unknown='error', unknown_value=None)¶
OnnxOperatorMixin for OrdinalEncoder
OnnxSklearnOrthogonalMatchingPursuit¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrthogonalMatchingPursuit(*, n_nonzero_coefs=None, tol=None, fit_intercept=True, normalize=True, precompute='auto')¶
OnnxOperatorMixin for OrthogonalMatchingPursuit
OnnxSklearnOrthogonalMatchingPursuitCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnOrthogonalMatchingPursuitCV(*, copy=True, fit_intercept=True, normalize=True, max_iter=None, cv=None, n_jobs=None, verbose=False)¶
OnnxOperatorMixin for OrthogonalMatchingPursuitCV
OnnxSklearnPCA¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPCA(n_components=None, *, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None)¶
OnnxOperatorMixin for PCA
OnnxSklearnPLSRegression¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True)¶
OnnxOperatorMixin for PLSRegression
OnnxSklearnPassiveAggressiveClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPassiveAggressiveClassifier(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='hinge', n_jobs=None, random_state=None, warm_start=False, class_weight=None, average=False)¶
OnnxOperatorMixin for PassiveAggressiveClassifier
OnnxSklearnPassiveAggressiveRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)¶
OnnxOperatorMixin for PassiveAggressiveRegressor
OnnxSklearnPerceptron¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPerceptron(*, penalty=None, alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False)¶
OnnxOperatorMixin for Perceptron
OnnxSklearnPolynomialFeatures¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPolynomialFeatures(degree=2, *, interaction_only=False, include_bias=True, order='C')¶
OnnxOperatorMixin for PolynomialFeatures
OnnxSklearnPowerTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPowerTransformer(method='yeo-johnson', *, standardize=True, copy=True)¶
OnnxOperatorMixin for PowerTransformer
OnnxSklearnRANSACRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRANSACRegressor(base_estimator=None, *, min_samples=None, residual_threshold=None, is_data_valid=None, is_model_valid=None, max_trials=100, max_skips=inf, stop_n_inliers=inf, stop_score=inf, stop_probability=0.99, loss='absolute_loss', random_state=None)¶
OnnxOperatorMixin for RANSACRegressor
OnnxSklearnRFE¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRFE(estimator, *, n_features_to_select=None, step=1, verbose=0, importance_getter='auto')¶
OnnxOperatorMixin for RFE
OnnxSklearnRFECV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRFECV(estimator, *, step=1, min_features_to_select=1, cv=None, scoring=None, verbose=0, n_jobs=None, importance_getter='auto')¶
OnnxOperatorMixin for RFECV
OnnxSklearnRadiusNeighborsClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRadiusNeighborsClassifier(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', outlier_label=None, metric_params=None, n_jobs=None, **kwargs)¶
OnnxOperatorMixin for RadiusNeighborsClassifier
OnnxSklearnRadiusNeighborsRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRadiusNeighborsRegressor(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)¶
OnnxOperatorMixin for RadiusNeighborsRegressor
OnnxSklearnRandomForestClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRandomForestClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)¶
OnnxOperatorMixin for RandomForestClassifier
OnnxSklearnRandomForestRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRandomForestRegressor(n_estimators=100, *, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None)¶
OnnxOperatorMixin for RandomForestRegressor
OnnxSklearnRidge¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidge(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)¶
OnnxOperatorMixin for Ridge
OnnxSklearnRidgeCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, normalize=False, scoring=None, cv=None, gcv_mode=None, store_cv_values=False, alpha_per_target=False)¶
OnnxOperatorMixin for RidgeCV
OnnxSklearnRidgeClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeClassifier(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, class_weight=None, solver='auto', random_state=None)¶
OnnxOperatorMixin for RidgeClassifier
OnnxSklearnRidgeClassifierCV¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRidgeClassifierCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None, store_cv_values=False)¶
OnnxOperatorMixin for RidgeClassifierCV
OnnxSklearnRobustScaler¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnRobustScaler(*, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False)¶
OnnxOperatorMixin for RobustScaler
OnnxSklearnSGDClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSGDClassifier(loss='hinge', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, n_jobs=None, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, average=False)¶
OnnxOperatorMixin for SGDClassifier
OnnxSklearnSGDRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSGDRegressor(loss='squared_loss', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False)¶
OnnxOperatorMixin for SGDRegressor
OnnxSklearnSVC¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSVC(*, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)¶
OnnxOperatorMixin for SVC
OnnxSklearnSVR¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSVR(*, kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)¶
OnnxOperatorMixin for SVR
OnnxSklearnSelectFdr¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFdr(score_func=<function f_classif>, *, alpha=0.05)¶
OnnxOperatorMixin for SelectFdr
OnnxSklearnSelectFpr¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFpr(score_func=<function f_classif>, *, alpha=0.05)¶
OnnxOperatorMixin for SelectFpr
OnnxSklearnSelectFromModel¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFromModel(estimator, *, threshold=None, prefit=False, norm_order=1, max_features=None, importance_getter='auto')¶
OnnxOperatorMixin for SelectFromModel
OnnxSklearnSelectFwe¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectFwe(score_func=<function f_classif>, *, alpha=0.05)¶
OnnxOperatorMixin for SelectFwe
OnnxSklearnSelectKBest¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectKBest(score_func=<function f_classif>, *, k=10)¶
OnnxOperatorMixin for SelectKBest
OnnxSklearnSelectPercentile¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSelectPercentile(score_func=<function f_classif>, *, percentile=10)¶
OnnxOperatorMixin for SelectPercentile
OnnxSklearnSimpleImputer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnSimpleImputer(*, missing_values=nan, strategy='mean', fill_value=None, verbose=0, copy=True, add_indicator=False)¶
OnnxOperatorMixin for SimpleImputer
OnnxSklearnStackingClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnStackingClassifier(estimators, final_estimator=None, *, cv=None, stack_method='auto', n_jobs=None, passthrough=False, verbose=0)¶
OnnxOperatorMixin for StackingClassifier
OnnxSklearnStackingRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnStackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0)¶
OnnxOperatorMixin for StackingRegressor
OnnxSklearnStandardScaler¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnStandardScaler(*, copy=True, with_mean=True, with_std=True)¶
OnnxOperatorMixin for StandardScaler
OnnxSklearnTfidfTransformer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnTfidfTransformer(*, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)¶
OnnxOperatorMixin for TfidfTransformer
OnnxSklearnTfidfVectorizer¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnTfidfVectorizer(*, input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.float64'>, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)¶
OnnxOperatorMixin for TfidfVectorizer
OnnxSklearnTheilSenRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnTheilSenRegressor(*, fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=None, verbose=False)¶
OnnxOperatorMixin for TheilSenRegressor
OnnxSklearnTruncatedSVD¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnTruncatedSVD(n_components=2, *, algorithm='randomized', n_iter=5, random_state=None, tol=0.0)¶
OnnxOperatorMixin for TruncatedSVD
OnnxSklearnVarianceThreshold¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnVarianceThreshold(threshold=0.0)¶
OnnxOperatorMixin for VarianceThreshold
OnnxSklearnVotingClassifier¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnVotingClassifier(estimators, *, voting='hard', weights=None, n_jobs=None, flatten_transform=True, verbose=False)¶
OnnxOperatorMixin for VotingClassifier
OnnxSklearnVotingRegressor¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnVotingRegressor(estimators, *, weights=None, n_jobs=None, verbose=False)¶
OnnxOperatorMixin for VotingRegressor
OnnxSklearnXGBClassifier¶
OnnxSklearnXGBRegressor¶
OnnxTransferTransformer¶
OnnxValidatorClassifier¶
OnnxWrappedLightGbmBooster¶
OnnxWrappedLightGbmBoosterClassifier¶
Pipeline¶
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnPipeline(steps, memory=None, verbose=False, op_version=None)[source]¶
Combines Pipeline and OnnxSubGraphOperatorMixin.
 onnx_converter()¶
Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.
 onnx_parser(scope=None, inputs=None)¶
Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.
 onnx_shape_calculator()¶
Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.
 to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶
Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options)
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted
final_types – a Python list. It works the same way as initial_types but is not mandatory; it is used to overwrite the type (if the type is not None) and the name of every output.
 to_onnx_operator(inputs=None, outputs=None)¶
This function must be overloaded.
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnColumnTransformer(op_version=None)[source]¶
Combines ColumnTransformer and OnnxSubGraphOperatorMixin.
 onnx_converter()¶
Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.
 onnx_parser(scope=None, inputs=None)¶
Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.
 onnx_shape_calculator()¶
Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.
 to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶
Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options)
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted
final_types – a Python list. It works the same way as initial_types but is not mandatory; it is used to overwrite the type (if the type is not None) and the name of every output.
 to_onnx_operator(inputs=None, outputs=None)¶
This function must be overloaded.
 class skl2onnx.algebra.sklearn_ops.OnnxSklearnFeatureUnion(op_version=None)[source]¶
Combines FeatureUnion and OnnxSubGraphOperatorMixin.
 onnx_converter()¶
Returns a converter for this model. If not overloaded, it fetches the converter mapped to the first scikit-learn parent it can find.
 onnx_parser(scope=None, inputs=None)¶
Returns a parser for this model. If not overloaded, it fetches the parser mapped to the first scikit-learn parent it can find.
 onnx_shape_calculator()¶
Returns a shape calculator for this model. If not overloaded, it fetches the shape calculator mapped to the first scikit-learn parent it can find.
 to_onnx(X=None, name=None, options=None, white_op=None, black_op=None, final_types=None)¶
Converts the model into ONNX format. It calls the method _to_onnx, which must be overloaded.
 Parameters
X – training data, at least one sample; it is used to guess the type of the input data.
name – name of the model; if None, it is replaced by the class name.
options – specific options given to converters (see Converters with options)
white_op – white list of ONNX nodes allowed while converting a pipeline; if empty, all are allowed
black_op – black list of ONNX nodes disallowed while converting a pipeline; if empty, none are blacklisted
final_types – a Python list. It works the same way as initial_types but is not mandatory; it is used to overwrite the type (if the type is not None) and the name of every output.
 to_onnx_operator(inputs=None, outputs=None)¶
This function must be overloaded.
Available ONNX operators¶
skl2onnx maps every ONNX operator to a class that is easy to insert into a graph. These operators are added dynamically, and the list depends on the installed ONNX package. The documentation for these operators can be found on GitHub: ONNX Operators.md and ONNX-ML Operators. Combined with onnxruntime, the mapping makes it easy to check the output of ONNX operators on any data, as shown in the example Play with ONNX operators.
OnnxAbs¶
 class skl2onnx.algebra.onnx_ops.OnnxAbs(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 13.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxAbs_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAbs_1(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 1.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Attributes
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAbs_13¶
 class skl2onnx.algebra.onnx_ops.OnnxAbs_13(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 13.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxAbs_6¶
 class skl2onnx.algebra.onnx_ops.OnnxAbs_6(*args, **kwargs)¶
Version
Onnx name: Abs
This version of the operator has been available since version 6.
Summary
Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxAcos¶
 class skl2onnx.algebra.onnx_ops.OnnxAcos(*args, **kwargs)¶
Version
Onnx name: Acos
This version of the operator has been available since version 7.
Summary
Calculates the arccosine (inverse of cosine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arccosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcos_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAcos_7(*args, **kwargs)¶
Version
Onnx name: Acos
This version of the operator has been available since version 7.
Summary
Calculates the arccosine (inverse of cosine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arccosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcosh¶
 class skl2onnx.algebra.onnx_ops.OnnxAcosh(*args, **kwargs)¶
Version
Onnx name: Acosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arccosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAcosh_9¶
 class skl2onnx.algebra.onnx_ops.OnnxAcosh_9(*args, **kwargs)¶
Version
Onnx name: Acosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arccosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arccosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdagrad¶
 class skl2onnx.algebra.onnx_ops.OnnxAdagrad(*args, **kwargs)¶
Version
Onnx name: Adagrad
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:
The initial learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A learning-rate decay factor “decay_factor”.
A small constant “epsilon” to avoid dividing-by-zero.
At each ADAGRAD iteration, the optimized tensors are moved along a direction computed based on their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, variables in this operator’s input list are sequentially “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”), and then the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Compute a scalar learning-rate factor. At the first update of X, T is generally // 0 (0-based update index) or 1 (1-based update index). r = R / (1 + T * decay_factor);
// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm. G_regularized = norm_coefficient * X + G;
// Compute new accumulated squared gradient. H_new = H + G_regularized * G_regularized;
// Compute the adaptive part of the per-coordinate learning rate. Note that Sqrt(…) // computes elementwise square-root. H_adaptive = Sqrt(H_new) + epsilon
// Compute the new value of “X”. X_new = X - r * G_regularized / H_adaptive;
If one assigns this operator to optimize multiple inputs, for example, “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.
Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of Figure 1’s composite mirror descent update.
Attributes
decay_factor: The decay factor of the learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Default to 0 so that increasing update counts doesn’t reduce the learning rate. Default value: 0.0 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value: 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value: 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: Updated values of optimized tensors, followed by the updated values of their accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
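The ADAGRAD pseudo code above can be checked with a short NumPy sketch. This is an illustration of the update rule only, not onnxruntime's implementation; the function name adagrad_step is ours:

```python
import numpy as np

def adagrad_step(R, T, X, G, H, decay_factor=0.0, epsilon=1e-6,
                 norm_coefficient=0.0):
    """One ADAGRAD iteration: returns (X_new, H_new)."""
    # Scalar learning-rate factor, decayed by the update count T.
    r = R / (1 + T * decay_factor)
    # Gradient of the L2 term 0.5 * norm_coefficient * ||X||_2^2, added to G.
    G_regularized = norm_coefficient * X + G
    # Accumulate the squared gradient.
    H_new = H + G_regularized * G_regularized
    # Per-coordinate adaptive denominator.
    H_adaptive = np.sqrt(H_new) + epsilon
    X_new = X - r * G_regularized / H_adaptive
    return X_new, H_new
```

With R = 0.1 and a fresh accumulator H = 0, every coordinate moves by roughly r in the direction opposite to its gradient, as expected from the adaptive denominator sqrt(G²) ≈ |G|.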
OnnxAdagrad_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAdagrad_1(*args, **kwargs)¶
Version
Onnx name: Adagrad
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of ADAGRAD, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. As you can imagine, ADAGRAD requires some parameters:
The initial learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A learning-rate decay factor “decay_factor”.
A small constant “epsilon” to avoid dividing-by-zero.
At each ADAGRAD iteration, the optimized tensors are moved along a direction computed based on their estimated gradient and accumulated squared gradient. Assume that only a single tensor “X” is updated by this operator. We need the value of “X”, its gradient “G”, and its accumulated squared gradient “H”. Therefore, variables in this operator’s input list are sequentially “R”, “T”, “X”, “G”, and “H”. Other parameters are given as attributes because they are usually constants. Also, the corresponding output tensors are the new value of “X” (called “X_new”), and then the new accumulated squared gradient (called “H_new”). Those outputs are computed from the given inputs following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Compute a scalar learning-rate factor. At the first update of X, T is generally // 0 (0-based update index) or 1 (1-based update index). r = R / (1 + T * decay_factor);
// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm. G_regularized = norm_coefficient * X + G;
// Compute new accumulated squared gradient. H_new = H + G_regularized * G_regularized;
// Compute the adaptive part of the per-coordinate learning rate. Note that Sqrt(…) // computes elementwise square-root. H_adaptive = Sqrt(H_new) + epsilon
// Compute the new value of “X”. X_new = X - r * G_regularized / H_adaptive;
If one assigns this operator to optimize multiple inputs, for example, “X_1” and “X_2”, the same pseudo code may be extended to handle all tensors jointly. More specifically, we can view “X” as a concatenation of “X_1” and “X_2” (of course, their gradients and accumulated gradients should be concatenated too) and then just reuse the entire pseudo code.
Note that ADAGRAD was first proposed in http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf. In that reference paper, this operator is a special case of Figure 1’s composite mirror descent update.
Attributes
decay_factor: The decay factor of the learning rate after one update. The effective learning rate is computed by r = R / (1 + T * decay_factor). Default to 0 so that increasing update counts doesn’t reduce the learning rate. Default value: 0.0 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value: 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient in 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value: 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The current values of optimized tensors, followed by their respective gradients, followed by their respective accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: Updated values of optimized tensors, followed by the updated values of their accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the output list would be [new value of “X_1”, new value of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64-bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdam¶
 class skl2onnx.algebra.onnx_ops.OnnxAdam(*args, **kwargs)¶
Version
Onnx name: Adam
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. First of all, Adam requires some parameters:
The learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A small constant “epsilon” to avoid dividing-by-zero.
Two coefficients, “alpha” and “beta”.
At each Adam iteration, the optimized tensors are moved along a direction computed based on their exponentially-averaged historical gradient and exponentially-averaged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of the required information is
the value of “X”,
“X”’s gradient (denoted by “G”),
“X”’s exponentially-averaged historical gradient (denoted by “V”), and
“X”’s exponentially-averaged historical squared gradient (denoted by “H”).
Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are
the new value of “X” (called “X_new”),
the new exponentially-averaged historical gradient (denoted by “V_new”), and
the new exponentially-averaged historical squared gradient (denoted by “H_new”).
Those outputs are computed following the pseudo code below.
Let “+”, “-”, “*”, and “/” be elementwise arithmetic operations with numpy-style broadcasting support. The pseudo code to compute those outputs is:
// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm. G_regularized = norm_coefficient * X + G
// Update exponentially-averaged historical gradient. V_new = alpha * V + (1 - alpha) * G_regularized
// Update exponentially-averaged historical squared gradient. H_new = beta * H + (1 - beta) * G_regularized * G_regularized
// Compute the elementwise square-root of H_new. V_new will be elementwise // divided by H_sqrt for a better update direction. H_sqrt = Sqrt(H_new) + epsilon
// Compute learning-rate. Note that “alpha**T”/“beta**T” is alpha’s/beta’s T-th power. R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R
// Compute new value of “X”. X_new = X - R_adjusted * V_new / H_sqrt
// Post-update regularization. X_final = (1 - norm_coefficient_post) * X_new
If there are multiple inputs to be optimized, the pseudo code will be applied independently to each of them.
Attributes
alpha: Coefficient of previously accumulated gradient in running average. Default to 0.9. Default value: 0.8999999761581421 (FLOAT).
beta: Coefficient of previously accumulated squared gradient in running average. Default to 0.999. Default value: 0.9990000128746033 (FLOAT).
epsilon: Small scalar to avoid dividing by zero. Default value: 9.999999974752427e-07 (FLOAT).
norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value: 0.0 (FLOAT).
norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value: 0.0 (FLOAT).
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdam_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAdam_1(*args, **kwargs)¶
Version
Onnx name: Adam
This version of the operator has been available since version 1 of domain ai.onnx.preview.training.
Summary
Compute one iteration of Adam, a stochastic gradient based optimization algorithm. This operator can conduct the optimization of multiple tensor variables.
Let’s define the behavior of this operator. First of all, Adam requires some parameters:
The learning-rate “R”.
The update count “T”. That is, the number of training iterations conducted.
An L2-norm regularization coefficient “norm_coefficient”.
A small constant “epsilon” to avoid dividing-by-zero.
Two coefficients, “alpha” and “beta”.
At each Adam iteration, the optimized tensors are moved along a direction computed based on their exponentially-averaged historical gradient and exponentially-averaged historical squared gradient. Assume that only a tensor “X” is being optimized. The rest of the required information is
the value of “X”,
“X”’s gradient (denoted by “G”),
“X”’s exponentially-averaged historical gradient (denoted by “V”), and
“X”’s exponentially-averaged historical squared gradient (denoted by “H”).
Some of those parameters are passed into this operator as input tensors and others are stored as this operator’s attributes. Specifically, this operator’s input tensor list is [“R”, “T”, “X”, “G”, “V”, “H”]. That is, “R” is the first input, “T” is the second input, and so on. Other parameters are given as attributes because they are constants. Moreover, the corresponding output tensors are
the new value of “X” (called “X_new”),
the new exponentially-averaged historical gradient (denoted by “V_new”), and
the new exponentially-averaged historical squared gradient (denoted by “H_new”).
Those outputs are computed following the pseudo code below.
Let “+”, “-”, “*”, and “/” all be element-wise arithmetic operations with NumPy-style broadcasting support. The pseudo code to compute those outputs is:
// Add gradient of 0.5 * norm_coefficient * ||X||_2^2, where ||X||_2 is the 2-norm.
G_regularized = norm_coefficient * X + G
// Update exponentially-averaged historical gradient.
V_new = alpha * V + (1 - alpha) * G_regularized
// Update exponentially-averaged historical squared gradient.
H_new = beta * H + (1 - beta) * G_regularized * G_regularized
// Compute the element-wise square-root of H_new. V_new will be element-wise
// divided by H_sqrt for a better update direction.
H_sqrt = Sqrt(H_new) + epsilon
// Compute learning-rate. Note that “alpha**T”/“beta**T” is alpha’s/beta’s T-th power.
R_adjusted = T > 0 ? R * Sqrt(1 - beta**T) / (1 - alpha**T) : R
// Compute new value of “X”.
X_new = X - R_adjusted * V_new / H_sqrt
// Post-update regularization.
X_final = (1 - norm_coefficient_post) * X_new
If there are multiple inputs to be optimized, the pseudo code will be applied independently to each of them.
Attributes
alpha: Coefficient of previously accumulated gradient in running average. Default to 0.9. Default value is ``name: “alpha” f: 0.8999999761581421 type: FLOAT``
beta: Coefficient of previously accumulated squared-gradient in running average. Default to 0.999. Default value is ``name: “beta” f: 0.9990000128746033 type: FLOAT``
epsilon: Small scalar to avoid dividing by zero. Default value is ``name: “epsilon” f: 9.999999974752427e-07 type: FLOAT``
norm_coefficient: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is ``name: “norm_coefficient” f: 0.0 type: FLOAT``
norm_coefficient_post: Regularization coefficient of 0.5 * norm_coefficient * ||X||_2^2. Default to 0, which means no regularization. Default value is ``name: “norm_coefficient_post” f: 0.0 type: FLOAT``
Inputs
Between 3 and 2147483647 inputs.
R (heterogeneous)T1: The initial learning rate.
T (heterogeneous)T2: The update count of “X”. It should be a scalar.
inputs (variadic)T3: The tensors to be optimized, followed by their respective gradients, followed by their respective accumulated gradients (aka momentum), followed by their respective accumulated squared gradients. For example, to optimize tensors “X_1” and “X_2”, the input list would be [“X_1”, “X_2”, gradient of “X_1”, gradient of “X_2”, accumulated gradient of “X_1”, accumulated gradient of “X_2”, accumulated squared gradient of “X_1”, accumulated squared gradient of “X_2”].
Outputs
Between 1 and 2147483647 outputs.
outputs (variadic)T3: New values of optimized tensors, followed by their respective new accumulated gradients, followed by their respective new accumulated squared gradients. For example, if two tensors “X_1” and “X_2” are optimized, the outputs list would be [new value of “X_1”, new value of “X_2”, new accumulated gradient of “X_1”, new accumulated gradient of “X_2”, new accumulated squared gradient of “X_1”, new accumulated squared gradient of “X_2”].
Type Constraints
T1 tensor(float), tensor(double): Constrain input types to float scalars.
T2 tensor(int64): Constrain input types to 64bit integer scalars.
T3 tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdd¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 14.
Summary
Performs elementwise binary addition (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
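Because Add uses multidirectional NumPy-style broadcasting, its numeric behavior can be previewed with plain NumPy arrays. This is a sketch of the broadcasting semantics only, not a call into skl2onnx or onnxruntime:

```python
import numpy as np

# Shapes align from the trailing dimensions; size-1 (or missing) axes stretch.
A = np.arange(6, dtype=np.float32).reshape(2, 3)    # shape (2, 3)
B = np.array([10.0, 20.0, 30.0], dtype=np.float32)  # shape (3,)
C = A + B  # B is broadcast over axis 0, matching how Add broadcasts it
```

The same pair of shapes fed to an OnnxAdd node should produce the same result tensor C, since the operator's broadcasting rules are defined to match NumPy's.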
OnnxAdd_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd_1(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 1.
Summary
Performs elementwise binary addition (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
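The legacy axis-based rule can be emulated in NumPy by padding B with size-1 axes so it lines up with A's dimensions starting at the given axis. This is a sketch under that reading of the spec; the reshape is the emulation, not what the runtime literally does:

```python
import numpy as np

A = np.zeros((2, 3, 4, 5), dtype=np.float32)
B = np.ones((3, 4), dtype=np.float32)  # matches A's dimensions 1..2

# With broadcast=1 and axis=1, B covers the contiguous run (3, 4) of A's
# shape; padding B to rank 4 with size-1 axes reproduces the alignment.
C = A + B.reshape(1, 3, 4, 1)
```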
Attributes
axis: If set, defines the broadcast dimensions. See doc for details. Default value is ````
broadcast: Pass 1 to enable broadcasting. Default value is ``name: “broadcast” i: 0 type: INT``
consumed_inputs: legacy optimization attribute. Default value is ````
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAdd_13¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd_13(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 13.
Summary
Performs elementwise binary addition (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to highprecision numeric tensors.
OnnxAdd_14¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd_14(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 14.
Summary
Performs elementwise binary addition (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxAdd_6¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd_6(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 6.
Summary
Performs elementwise binary addition (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
axis: If set, defines the broadcast dimensions. See doc for details. Default value is ````
broadcast: Pass 1 to enable broadcasting. Default value is ``name: “broadcast” i: 0 type: INT``
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to highprecision numeric tensors.
OnnxAdd_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAdd_7(*args, **kwargs)¶
Version
Onnx name: Add
This version of the operator has been available since version 7.
Summary
Performs elementwise binary addition (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to highprecision numeric tensors.
OnnxAnd¶
 class skl2onnx.algebra.onnx_ops.OnnxAnd(*args, **kwargs)¶
Version
Onnx name: And
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and B (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxAnd_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAnd_1(*args, **kwargs)¶
Version
Onnx name: And
This version of the operator has been available since version 1.
Summary
Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and B.
If broadcasting is enabled, the right-hand-side argument will be broadcasted to match the shape of left-hand-side argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
axis: If set, defines the broadcast dimensions. Default value is ````
broadcast: Enable broadcasting. Default value is ``name: “broadcast” i: 0 type: INT``
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxAnd_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAnd_7(*args, **kwargs)¶
Version
Onnx name: And
This version of the operator has been available since version 7.
Summary
Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and B (with NumPy-style broadcasting support).
This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool): Constrains input to boolean tensor.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxArgMax¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMax(*args, **kwargs)¶
Version
Onnx name: ArgMax
This version of the operator has been available since version 13.
Summary
Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the max appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
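The two index-selection modes can be illustrated with NumPy. `np.argmax` always returns the first occurrence, so select_last_index is emulated here by scanning the reversed axis and mapping the index back; a sketch of the semantics only:

```python
import numpy as np

data = np.array([[1, 9, 9],
                 [7, 3, 7]])

# select_last_index=0 (default): the first occurrence of the max wins.
# [:, None] keeps the reduced axis, mirroring keepdims=1.
first = np.argmax(data, axis=1)[:, None]

# select_last_index=1: search the reversed axis, then map back.
last = data.shape[1] - 1 - np.argmax(data[:, ::-1], axis=1)
```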
OnnxArgMax_1¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMax_1(*args, **kwargs)¶
Version
Onnx name: ArgMax
This version of the operator has been available since version 1.
Summary
Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_11¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMax_11(*args, **kwargs)¶
Version
Onnx name: ArgMax
This version of the operator has been available since version 11.
Summary
Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_12¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMax_12(*args, **kwargs)¶
Version
Onnx name: ArgMax
This version of the operator has been available since version 12.
Summary
Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the max appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMax_13¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMax_13(*args, **kwargs)¶
Version
Onnx name: ArgMax
This version of the operator has been available since version 13.
Summary
Computes the indices of the max elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the max is selected if the max appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the max appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxArgMin¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMin(*args, **kwargs)¶
Version
Onnx name: ArgMin
This version of the operator has been available since version 13.
Summary
Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the min appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxArgMin_1¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMin_1(*args, **kwargs)¶
Version
Onnx name: ArgMin
This version of the operator has been available since version 1.
Summary
Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_11¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMin_11(*args, **kwargs)¶
Version
Onnx name: ArgMin
This version of the operator has been available since version 11.
Summary
Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_12¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMin_12(*args, **kwargs)¶
Version
Onnx name: ArgMin
This version of the operator has been available since version 12.
Summary
Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the min appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxArgMin_13¶
 class skl2onnx.algebra.onnx_ops.OnnxArgMin_13(*args, **kwargs)¶
Version
Onnx name: ArgMin
This version of the operator has been available since version 13.
Summary
Computes the indices of the min elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. If select_last_index is True (default False), the index of the last occurrence of the min is selected if the min appears more than once in the input. Otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
Attributes
axis: The axis in which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data). Default value is ``name: “axis” i: 0 type: INT``
keepdims: Keep the reduced dimension or not, default 1 means keep the reduced dimension. Default value is ``name: “keepdims” i: 1 type: INT``
select_last_index: Whether to select the last index or the first index if the min appears in multiple indices, default is False (first index). Default value is ``name: “select_last_index” i: 0 type: INT``
Inputs
data (heterogeneous)T: An input tensor.
Outputs
reduced (heterogeneous)tensor(int64): Reduced output tensor with integer data type.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxArrayFeatureExtractor¶
 class skl2onnx.algebra.onnx_ops.OnnxArrayFeatureExtractor(*args, **kwargs)¶
Version
Onnx name: ArrayFeatureExtractor
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Select elements of the input tensor based on the indices passed.
The indices are applied to the last axes of the tensor.
Inputs
X (heterogeneous)T: Data to be selected
Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.
Outputs
Z (heterogeneous)T: Selected output data as an array
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.
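The gather ArrayFeatureExtractor performs along the last axis can be emulated with NumPy indexing. A sketch of the semantics only; the variable names mirror the operator's inputs and outputs:

```python
import numpy as np

X = np.array([[0.0, 1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0, 7.0]])  # data to be selected
Y = np.array([0, 2])                  # zero-based indices into the last axis

# Each row of X keeps only the columns listed in Y.
Z = X[:, Y]  # shape (2, 2)
```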
OnnxArrayFeatureExtractor_1¶
 class skl2onnx.algebra.onnx_ops.OnnxArrayFeatureExtractor_1(*args, **kwargs)¶
Version
Onnx name: ArrayFeatureExtractor
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Select elements of the input tensor based on the indices passed.
The indices are applied to the last axes of the tensor.
Inputs
X (heterogeneous)T: Data to be selected
Y (heterogeneous)tensor(int64): The indices, based on 0 as the first index of any dimension.
Outputs
Z (heterogeneous)T: Selected output data as an array
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32), tensor(string): The input must be a tensor of a numeric type or string. The output will be of the same tensor type.
OnnxAsin¶
 class skl2onnx.algebra.onnx_ops.OnnxAsin(*args, **kwargs)¶
Version
Onnx name: Asin
This version of the operator has been available since version 7.
Summary
Calculates the arcsine (inverse of sine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arcsine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsin_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAsin_7(*args, **kwargs)¶
Version
Onnx name: Asin
This version of the operator has been available since version 7.
Summary
Calculates the arcsine (inverse of sine) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arcsine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsinh¶
 class skl2onnx.algebra.onnx_ops.OnnxAsinh(*args, **kwargs)¶
Version
Onnx name: Asinh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arcsine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAsinh_9¶
 class skl2onnx.algebra.onnx_ops.OnnxAsinh_9(*args, **kwargs)¶
Version
Onnx name: Asinh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arcsine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arcsine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtan¶
 class skl2onnx.algebra.onnx_ops.OnnxAtan(*args, **kwargs)¶
Version
Onnx name: Atan
This version of the operator has been available since version 7.
Summary
Calculates the arctangent (inverse of tangent) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arctangent of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtan_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAtan_7(*args, **kwargs)¶
Version
Onnx name: Atan
This version of the operator has been available since version 7.
Summary
Calculates the arctangent (inverse of tangent) of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The arctangent of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtanh¶
 class skl2onnx.algebra.onnx_ops.OnnxAtanh(*args, **kwargs)¶
Version
Onnx name: Atanh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arctangent of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAtanh_9¶
 class skl2onnx.algebra.onnx_ops.OnnxAtanh_9(*args, **kwargs)¶
Version
Onnx name: Atanh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic arctangent of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic arctangent values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
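The four inverse trigonometric and hyperbolic operators above (Asin, Asinh, Atan, Atanh) are plain element-wise functions; their semantics can be checked against NumPy (a NumPy sketch, not an onnxruntime session):

```python
import numpy as np

x = np.array([-0.5, 0.0, 0.5], dtype=np.float32)

# Element-wise reference semantics of Asin, Asinh, Atan and Atanh.
asin = np.arcsin(x)
asinh = np.arcsinh(x)
atan = np.arctan(x)
atanh = np.arctanh(x)

# Each result inverts the corresponding forward function.
assert np.allclose(np.sin(asin), x)
assert np.allclose(np.sinh(asinh), x)
assert np.allclose(np.tan(atan), x)
assert np.allclose(np.tanh(atanh), x)
```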
OnnxAveragePool¶
 class skl2onnx.algebra.onnx_ops.OnnxAveragePool(*args, **kwargs)¶
Version
Onnx name: AveragePool
This version of the operator has been available since version 11.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i]) SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape will be as follows if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).
Attributes
* auto_pad: must be NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value: "NOTSET" (STRING).
* ceil_mode: whether to use ceil or floor (default) to compute the output shape. Default value: 0 (INT).
* count_include_pad: whether to include pad pixels when calculating values for the edges. Default is 0 (padding is not counted). Default value: 0 (INT).
* kernel_shape (required): the size of the kernel along each axis.
* pads: padding for the beginning and ending along each spatial axis; any value greater than or equal to 0 is allowed. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads should follow the format [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
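The output-shape formulas above can be evaluated directly; a minimal sketch for the explicit-padding case (auto_pad=NOTSET), where pad_total is the sum of the pads along one axis:

```python
import math

# Sketch of the AveragePool output spatial shape formula for one axis,
# assuming explicit padding (auto_pad=NOTSET).
def avgpool_out_dim(in_dim, pad_total, kernel, stride, ceil_mode=False):
    rounding = math.ceil if ceil_mode else math.floor
    return rounding((in_dim + pad_total - kernel) / stride + 1)

# 32-wide input, kernel 3, stride 2, no padding:
# floor((32 + 0 - 3) / 2 + 1) = floor(15.5) = 15
print(avgpool_out_dim(32, 0, 3, 2))                  # 15
# ceil_mode rounds up instead: ceil(15.5) = 16
print(avgpool_out_dim(32, 0, 3, 2, ceil_mode=True))  # 16
```

With kernel 3, stride 1 and one pixel of padding on each side (pad_total=2), the spatial size is preserved: floor((5 + 2 - 3) / 1 + 1) = 5.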
OnnxAveragePool_1¶
 class skl2onnx.algebra.onnx_ops.OnnxAveragePool_1(*args, **kwargs)¶
Version
Onnx name: AveragePool
This version of the operator has been available since version 1.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i]) SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape will be as follows if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements, excluding padding.
Attributes
* auto_pad: must be NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value: "NOTSET" (STRING).
* kernel_shape (required): the size of the kernel along each axis.
* pads: padding for the beginning and ending along each spatial axis; any value greater than or equal to 0 is allowed. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads should follow the format [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_10¶
 class skl2onnx.algebra.onnx_ops.OnnxAveragePool_10(*args, **kwargs)¶
Version
Onnx name: AveragePool
This version of the operator has been available since version 10.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i]) SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape will be as follows if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).
Attributes
* auto_pad: must be NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value: "NOTSET" (STRING).
* ceil_mode: whether to use ceil or floor (default) to compute the output shape. Default value: 0 (INT).
* count_include_pad: whether to include pad pixels when calculating values for the edges. Default is 0 (padding is not counted). Default value: 0 (INT).
* kernel_shape (required): the size of the kernel along each axis.
* pads: padding for the beginning and ending along each spatial axis; any value greater than or equal to 0 is allowed. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads should follow the format [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_11¶
 class skl2onnx.algebra.onnx_ops.OnnxAveragePool_11(*args, **kwargs)¶
Version
Onnx name: AveragePool
This version of the operator has been available since version 11.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
or, if ceil_mode is enabled:
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i]) SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape will be as follows if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).
Attributes
* auto_pad: must be NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value: "NOTSET" (STRING).
* ceil_mode: whether to use ceil or floor (default) to compute the output shape. Default value: 0 (INT).
* count_include_pad: whether to include pad pixels when calculating values for the edges. Default is 0 (padding is not counted). Default value: 0 (INT).
* kernel_shape (required): the size of the kernel along each axis.
* pads: padding for the beginning and ending along each spatial axis; any value greater than or equal to 0 is allowed. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads should follow the format [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxAveragePool_7¶
 class skl2onnx.algebra.onnx_ops.OnnxAveragePool_7(*args, **kwargs)¶
Version
Onnx name: AveragePool
This version of the operator has been available since version 7.
Summary
AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing. The output spatial shape will be:
output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)
* pad_shape[i] is the sum of pads along axis i
auto_pad is a DEPRECATED attribute. If it is used, the output spatial shape will be:
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i]) SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
And the pad shape will be as follows if SAME_UPPER or SAME_LOWER:
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]
The output of each pooling window is divided by the number of elements (excluding padding when the count_include_pad attribute is zero).
Attributes
* auto_pad: must be NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value: "NOTSET" (STRING).
* count_include_pad: whether to include pad pixels when calculating values for the edges. Default is 0 (padding is not counted). Default value: 0 (INT).
* kernel_shape (required): the size of the kernel along each axis.
* pads: padding for the beginning and ending along each spatial axis; any value greater than or equal to 0 is allowed. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads should follow the format [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used together with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: stride along each spatial axis.
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
Outputs
Y (heterogeneous)T: Output data tensor from average or max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 14.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: 'X', 'scale', 'B', 'input_mean' and 'input_var'. Note that 'input_mean' and 'input_var' are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). There are multiple cases for the number of outputs, which we list below:
Output case #1: Y, running_mean, running_var (training_mode=True) Output case #2: Y (training_mode=False)
When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:
running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)
Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B
where:
current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var = ReduceVar(X, axis=all_except_channel_index)
Notice that ReduceVar refers to the population variance, and it equals sum(sqrd(x_i - x_avg)) / N, where N is the population size (this formula does not use the sample size N - 1).
When training_mode=False:
Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
For previous (deprecated) non-spatial cases, implementers are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before the BatchNormalization op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also simply be omitted.
Attributes
* epsilon: the epsilon value to use to avoid division by zero. Default value: 9.999999747378752e-06 (FLOAT).
* momentum: factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value: 0.8999999761581421 (FLOAT).
* training_mode: if set to true, it indicates BatchNormalization is being used for training, and outputs 1, 2, 3 and 4 would be populated. Default value: 0 (INT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1
scale (heterogeneous)T: Scale tensor of shape (C).
B (heterogeneous)T: Bias tensor of shape (C).
input_mean (heterogeneous)U: running (training) or estimated (testing) mean tensor of shape (C).
input_var (heterogeneous)U: running (training) or estimated (testing) variance tensor of shape (C).
Outputs
Between 1 and 3 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
running_mean (optional, heterogeneous)U: The running mean after the BatchNormalization operator.
running_var (optional, heterogeneous)U: The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N - 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
U tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain mean and variance types to float tensors. All float types are allowed for U.
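The inference-mode formula (training_mode=False) can be sketched in NumPy for an (N x C x …) input, broadcasting the per-channel statistics over axis 1 (a sketch of the formula, not the onnxruntime kernel):

```python
import numpy as np

# Sketch of BatchNormalization inference:
# Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B,
# with the per-channel vectors broadcast over axis 1 of X.
def batch_norm_inference(x, scale, b, mean, var, epsilon=1e-5):
    shape = (1, -1) + (1,) * (x.ndim - 2)  # align C with axis 1
    return ((x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + epsilon)
            * scale.reshape(shape) + b.reshape(shape))

x = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
scale = np.ones(3, dtype=np.float32)
b = np.zeros(3, dtype=np.float32)
mean = x.mean(axis=(0, 2, 3))  # population statistics per channel
var = x.var(axis=(0, 2, 3))
y = batch_norm_inference(x, scale, b, mean, var)
# Feeding the batch statistics back in standardizes each channel.
```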
OnnxBatchNormalization_1¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_1(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 1.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode) Output case #2: Y (test mode)
Attributes
* consumed_inputs (required): legacy optimization attribute.
* epsilon: the epsilon value to use to avoid division by zero, default is 1e-5f. Default value: 9.999999747378752e-06 (FLOAT).
* is_test: if set to nonzero, run spatial batch normalization in test mode, default is 0. Default value: 0 (INT).
* momentum: factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum), default is 0.9f. Default value: 0.8999999761581421 (FLOAT).
* spatial: if true, compute the mean and variance across all spatial elements; if false, compute them per feature. Default is 1. Default value: 1 (INT).
Inputs
X (heterogeneous)T: The input 4dimensional tensor of shape NCHW.
scale (heterogeneous)T: The scale as a 1dimensional tensor of size C to be applied to the output.
B (heterogeneous)T: The bias as a 1dimensional tensor of size C to be applied to the output.
mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1dimensional tensor of size C.
var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1dimensional tensor of size C.
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output 4dimensional tensor of the same shape as X.
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_14¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_14(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 14.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. There are five required inputs: 'X', 'scale', 'B', 'input_mean' and 'input_var'. Note that 'input_mean' and 'input_var' are expected to be the estimated statistics in inference mode (training_mode=False, default), and the running statistics in training mode (training_mode=True). There are multiple cases for the number of outputs, which we list below:
Output case #1: Y, running_mean, running_var (training_mode=True) Output case #2: Y (training_mode=False)
When training_mode=False, extra outputs are invalid. The outputs are updated as follows when training_mode=True:
running_mean = input_mean * momentum + current_mean * (1 - momentum)
running_var = input_var * momentum + current_var * (1 - momentum)
Y = (X - current_mean) / sqrt(current_var + epsilon) * scale + B
where:
current_mean = ReduceMean(X, axis=all_except_channel_index)
current_var = ReduceVar(X, axis=all_except_channel_index)
Notice that ReduceVar refers to the population variance, and it equals sum(sqrd(x_i - x_avg)) / N, where N is the population size (this formula does not use the sample size N - 1).
When training_mode=False:
Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
For previous (deprecated) non-spatial cases, implementers are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before the BatchNormalization op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also simply be omitted.
Attributes
* epsilon: the epsilon value to use to avoid division by zero. Default value: 9.999999747378752e-06 (FLOAT).
* momentum: factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value: 0.8999999761581421 (FLOAT).
* training_mode: if set to true, it indicates BatchNormalization is being used for training, and outputs 1, 2, 3 and 4 would be populated. Default value: 0 (INT).
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1
scale (heterogeneous)T: Scale tensor of shape (C).
B (heterogeneous)T: Bias tensor of shape (C).
input_mean (heterogeneous)U: running (training) or estimated (testing) mean tensor of shape (C).
input_var (heterogeneous)U: running (training) or estimated (testing) variance tensor of shape (C).
Outputs
Between 1 and 3 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
running_mean (optional, heterogeneous)U: The running mean after the BatchNormalization operator.
running_var (optional, heterogeneous)U: The running variance after the BatchNormalization operator. This op uses the population size (N) for calculating variance, and not the sample size N - 1.
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
U tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain mean and variance types to float tensors. All float types are allowed for U.
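The test-mode formula above can be sketched in NumPy. This is an illustration of the operator's semantics, not the runtime kernel; the helper name `batch_norm_inference` is hypothetical and not part of skl2onnx:

```python
import numpy as np

def batch_norm_inference(X, scale, B, input_mean, input_var, epsilon=1e-5):
    # Test-mode formula: Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B.
    # Statistics and parameters are per channel (axis 1 of an N x C x D1 ... Dn tensor),
    # so the (C,)-shaped parameters are reshaped to broadcast over N and spatial dims.
    shape = [1, -1] + [1] * (X.ndim - 2)
    return ((X - input_mean.reshape(shape))
            / np.sqrt(input_var.reshape(shape) + epsilon)
            * scale.reshape(shape) + B.reshape(shape))

X = np.arange(8, dtype=np.float32).reshape(1, 2, 2, 2)   # N=1, C=2, H=2, W=2
scale = np.array([1.0, 2.0], dtype=np.float32)
B = np.array([0.0, 1.0], dtype=np.float32)
mean = np.array([1.5, 5.5], dtype=np.float32)            # per-channel means of X
var = np.array([1.25, 1.25], dtype=np.float32)           # per-channel population variances
Y = batch_norm_inference(X, scale, B, mean, var)
```

Because `mean` and `var` here equal the actual per-channel statistics of `X`, each normalized channel of `Y` has mean `B` and standard deviation close to `scale`.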
OnnxBatchNormalization_6¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_6(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 6.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode)
Output case #2: Y (test mode)
Attributes
epsilon: The epsilon value to use to avoid division by zero, default is 1e-5f. Default value is ``name: “epsilon”
f: 9.999999747378752e-06 type: FLOAT `` * is_test: If set to nonzero, run spatial batch normalization in test mode, default is 0. Default value is ``name: “is_test” i: 0 type: INT `` * momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum), default is 0.9f. Default value is ``name: “momentum” f: 0.8999999761581421 type: FLOAT `` * spatial: If true, compute the mean and variance across all spatial elements. If false, compute the mean and variance per feature. Default is 1. Default value is ``name: “spatial” i: 1 type: INT ``
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: The scale as a 1dimensional tensor of size C to be applied to the output.
B (heterogeneous)T: The bias as a 1dimensional tensor of size C to be applied to the output.
mean (heterogeneous)T: The running mean (training) or the estimated mean (testing) as a 1dimensional tensor of size C.
var (heterogeneous)T: The running variance (training) or the estimated variance (testing) as a 1dimensional tensor of size C.
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X.
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator. Must be in-place with the input mean. Should not be used for testing.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator. Must be in-place with the input var. Should not be used for testing.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation. Should not be used for testing.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation. Should not be used for testing.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_7¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_7(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 7.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode)
Output case #2: Y (test mode)
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is ``name: “epsilon”
f: 9.999999747378752e-06 type: FLOAT `` * momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is ``name: “momentum” f: 0.8999999761581421 type: FLOAT `` * spatial: If true, compute the mean and variance across per activation. If false, compute the mean and variance across per feature over each mini-batch. Default value is ``name: “spatial” i: 1 type: INT ``
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous)T: If spatial is true, the dimension of scale is (C). If spatial is false, the dimensions of scale are (C x D1 x … x Dn)
B (heterogeneous)T: If spatial is true, the dimension of bias is (C). If spatial is false, the dimensions of bias are (C x D1 x … x Dn)
mean (heterogeneous)T: If spatial is true, the dimension of the running mean (training) or the estimated mean (testing) is (C). If spatial is false, the dimensions of the running mean (training) or the estimated mean (testing) are (C x D1 x … x Dn).
var (heterogeneous)T: If spatial is true, the dimension of the running variance(training) or the estimated variance (testing) is (C). If spatial is false, the dimensions of the running variance(training) or the estimated variance (testing) are (C x D1 x … x Dn).
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBatchNormalization_9¶
 class skl2onnx.algebra.onnx_ops.OnnxBatchNormalization_9(*args, **kwargs)¶
Version
Onnx name: BatchNormalization
This version of the operator has been available since version 9.
Summary
Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. Depending on the mode it is being run, there are multiple cases for the number of outputs, which we list below:
Output case #1: Y, mean, var, saved_mean, saved_var (training mode)
Output case #2: Y (test mode)
For previous (deprecated) non-spatial cases, implementors are suggested to flatten the input shape to (N x C * D1 * D2 * … * Dn) before a BatchNormalization Op. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
epsilon: The epsilon value to use to avoid division by zero. Default value is ``name: “epsilon”
f: 9.999999747378752e-06 type: FLOAT `` * momentum: Factor used in computing the running mean and variance, e.g., running_mean = running_mean * momentum + mean * (1 - momentum). Default value is ``name: “momentum” f: 0.8999999761581421 type: FLOAT ``
Inputs
X (heterogeneous)T: Input data tensor from the previous operator; dimensions are in the form of (N x C x D1 x D2 … Dn), where N is the batch size, C is the number of channels. Statistics are computed for every channel of C over N and D1 to Dn dimensions. For image data, input dimensions become (N x C x H x W). The op also accepts single dimension input of size N in which case C is assumed to be 1
scale (heterogeneous)T: Scale tensor of shape (C).
B (heterogeneous)T: Bias tensor of shape (C).
mean (heterogeneous)T: running (training) or estimated (testing) mean tensor of shape (C).
var (heterogeneous)T: running (training) or estimated (testing) variance tensor of shape (C).
Outputs
Between 1 and 5 outputs.
Y (heterogeneous)T: The output tensor of the same shape as X
mean (optional, heterogeneous)T: The running mean after the BatchNormalization operator.
var (optional, heterogeneous)T: The running variance after the BatchNormalization operator.
saved_mean (optional, heterogeneous)T: Saved mean used during training to speed up gradient computation.
saved_var (optional, heterogeneous)T: Saved variance used during training to speed up gradient computation.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxBinarizer¶
 class skl2onnx.algebra.onnx_ops.OnnxBinarizer(*args, **kwargs)¶
Version
Onnx name: Binarizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Maps the values of the input tensor to either 0 or 1, elementwise, based on the outcome of a comparison against a threshold value.
Attributes
threshold: Values greater than this are mapped to 1, others to 0. Default value is ``name: “threshold”
f: 0.0 type: FLOAT ``
Inputs
X (heterogeneous)T: Data to be binarized
Outputs
Y (heterogeneous)T: Binarized output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.
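The mapping can be mimicked in NumPy; `binarize` is an illustrative helper, not part of the converter API:

```python
import numpy as np

def binarize(X, threshold=0.0):
    # Values strictly greater than the threshold map to 1, all others to 0;
    # the output keeps the input's tensor type, as the type constraint requires.
    return (X > threshold).astype(X.dtype)

result = binarize(np.array([-1.0, 0.0, 0.5, 2.0]), threshold=0.0)
```

Note that values equal to the threshold map to 0, since only values strictly greater than it map to 1.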
OnnxBinarizer_1¶
 class skl2onnx.algebra.onnx_ops.OnnxBinarizer_1(*args, **kwargs)¶
Version
Onnx name: Binarizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Maps the values of the input tensor to either 0 or 1, elementwise, based on the outcome of a comparison against a threshold value.
Attributes
threshold: Values greater than this are mapped to 1, others to 0. Default value is ``name: “threshold”
f: 0.0 type: FLOAT ``
Inputs
X (heterogeneous)T: Data to be binarized
Outputs
Y (heterogeneous)T: Binarized output data
Type Constraints
T tensor(float), tensor(double), tensor(int64), tensor(int32): The input must be a tensor of a numeric type. The output will be of the same tensor type.
OnnxBitShift¶
 class skl2onnx.algebra.onnx_ops.OnnxBitShift(*args, **kwargs)¶
Version
Onnx name: BitShift
This version of the operator has been available since version 11.
Summary
Bitwise shift operator performs an element-wise operation. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and another input Y specifies the amounts of shifting. For example, if “direction” is “RIGHT”, X is [1, 4], and S is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X=[1, 2] and S=[1, 2], the corresponding output Z would be [2, 8].
Because this operator supports Numpystyle broadcasting, X’s and Y’s shapes are not necessarily identical.
This operator supports multidirectional (i.e., Numpystyle) broadcasting; for more details please check Broadcasting in ONNX.
Attributes
direction (required): Direction of moving bits. It can be either “RIGHT” (for right shift) or “LEFT” (for left shift). Default value is ````
Inputs
X (heterogeneous)T: First operand, input to be shifted.
Y (heterogeneous)T: Second operand, amounts of shift.
Outputs
Z (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.
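NumPy's shift ufuncs reproduce the examples above, including the Numpy-style broadcasting between X and the shift amounts. This is a sketch of the operator's semantics, not the runtime kernel:

```python
import numpy as np

x = np.array([1, 4], dtype=np.uint8)

# direction="RIGHT": each value is shifted right by the paired amount.
right = np.right_shift(x, np.array([1, 1], dtype=np.uint8))

# direction="LEFT": each value is shifted left by the paired amount.
left = np.left_shift(np.array([1, 2], dtype=np.uint8),
                     np.array([1, 2], dtype=np.uint8))

# A scalar shift amount broadcasts over x, matching the broadcasting note.
broadcast = np.left_shift(x, 2)
```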
OnnxBitShift_11¶
 class skl2onnx.algebra.onnx_ops.OnnxBitShift_11(*args, **kwargs)¶
Version
Onnx name: BitShift
This version of the operator has been available since version 11.
Summary
Bitwise shift operator performs an element-wise operation. For each input element, if the attribute “direction” is “RIGHT”, this operator moves its binary representation toward the right side so that the input value is effectively decreased. If the attribute “direction” is “LEFT”, bits of the binary representation move toward the left side, which results in an increase of its actual value. The input X is the tensor to be shifted and another input Y specifies the amounts of shifting. For example, if “direction” is “RIGHT”, X is [1, 4], and S is [1, 1], the corresponding output Z would be [0, 2]. If “direction” is “LEFT” with X=[1, 2] and S=[1, 2], the corresponding output Z would be [2, 8].
Because this operator supports Numpystyle broadcasting, X’s and Y’s shapes are not necessarily identical.
This operator supports multidirectional (i.e., Numpystyle) broadcasting; for more details please check Broadcasting in ONNX.
Attributes
direction (required): Direction of moving bits. It can be either “RIGHT” (for right shift) or “LEFT” (for left shift). Default value is ````
Inputs
X (heterogeneous)T: First operand, input to be shifted.
Y (heterogeneous)T: Second operand, amounts of shift.
Outputs
Z (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64): Constrain input and output types to integer tensors.
OnnxCast¶
 class skl2onnx.algebra.onnx_ops.OnnxCast(*args, **kwargs)¶
Version
Onnx name: Cast
This version of the operator has been available since version 13.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.
Casting from string tensors in plain (e.g., “3.14” and “1000”) and scientific numeric representations (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which can exactly match “+INF” in a case-insensitive way would be mapped to positive infinity. Similarly, this case-insensitive rule is applied to “-INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) would be used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior. Converting a string representing a floating-point value, such as “2.718”, to INT is also undefined behavior.
Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value change caused by range differences between the two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain input types. Casting from complex is not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain output types. Casting to complex is not supported.
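NumPy's `astype` follows the same string-parsing and truncation conventions described above, so it can be used to preview Cast behavior. This is a sketch of the semantics under those assumptions, not the ONNX implementation:

```python
import numpy as np

# String-to-float: plain and scientific notation plus the special
# literals "+INF" and "NaN" parse as described in the summary.
strings = np.array(["3.14", "1e-5", "+INF", "NaN"])
floats = strings.astype(np.float32)

# Float-to-int conversion truncates toward zero, losing the fraction.
ints = np.array([3.7, -1.2]).astype(np.int64)
```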
OnnxCastMap¶
 class skl2onnx.algebra.onnx_ops.OnnxCastMap(*args, **kwargs)¶
Version
Onnx name: CastMap
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.
Attributes
cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is ``name: “cast_to”
s: “TO_FLOAT” type: STRING `` * map_form: Indicates whether to only output as many values as are in the input (dense), or position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is ``name: “map_form” s: “DENSE” type: STRING `` * max_map: If the value of map_form is ‘SPARSE’, this attribute indicates the total length of the output tensor. Default value is ``name: “max_map” i: 1 type: INT ``
Inputs
X (heterogeneous)T1: The input map that is to be cast to a tensor
Outputs
Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys
Type Constraints
T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.
T2 tensor(string), tensor(float), tensor(int64): The output is a 1D tensor of string, float, or integer.
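The dense-versus-sparse packing described above can be sketched in plain Python; `cast_map` is a hypothetical helper illustrating the semantics, not an skl2onnx function:

```python
def cast_map(m, map_form="DENSE", max_map=1):
    # Keys are sorted ascending. DENSE emits one value per map entry;
    # SPARSE scatters values into a tensor of length max_map, using each
    # key as an index, so every key must be at most max_map - 1.
    keys = sorted(m)
    if map_form == "DENSE":
        return [m[k] for k in keys]
    out = [0.0] * max_map
    for k in keys:
        out[k] = m[k]
    return out

dense = cast_map({2: 1.5, 0: 3.0})
sparse = cast_map({2: 1.5, 0: 3.0}, map_form="SPARSE", max_map=4)
```

In the sparse case, positions with no matching key are zero-filled.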
OnnxCastMap_1¶
 class skl2onnx.algebra.onnx_ops.OnnxCastMap_1(*args, **kwargs)¶
Version
Onnx name: CastMap
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts a map to a tensor. The map key must be an int64 and the values will be ordered in ascending order based on this key. The operator supports dense packing or sparse packing. If using sparse packing, the key cannot exceed the max_map-1 value.
Attributes
cast_to: A string indicating the desired element type of the output tensor, one of ‘TO_FLOAT’, ‘TO_STRING’, ‘TO_INT64’. Default value is ``name: “cast_to”
s: “TO_FLOAT” type: STRING `` * map_form: Indicates whether to only output as many values as are in the input (dense), or position the input based on using the key of the map as the index of the output (sparse). One of ‘DENSE’, ‘SPARSE’. Default value is ``name: “map_form” s: “DENSE” type: STRING `` * max_map: If the value of map_form is ‘SPARSE’, this attribute indicates the total length of the output tensor. Default value is ``name: “max_map” i: 1 type: INT ``
Inputs
X (heterogeneous)T1: The input map that is to be cast to a tensor
Outputs
Y (heterogeneous)T2: A tensor representing the same data as the input map, ordered by their keys
Type Constraints
T1 map(int64, string), map(int64, float): The input must be an integer map to either string or float.
T2 tensor(string), tensor(float), tensor(int64): The output is a 1D tensor of string, float, or integer.
OnnxCast_1¶
 class skl2onnx.algebra.onnx_ops.OnnxCast_1(*args, **kwargs)¶
Version
Onnx name: Cast
This version of the operator has been available since version 1.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.
OnnxCast_13¶
 class skl2onnx.algebra.onnx_ops.OnnxCast_13(*args, **kwargs)¶
Version
Onnx name: Cast
This version of the operator has been available since version 13.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.
Casting from string tensors in plain (e.g., “3.14” and “1000”) and scientific numeric representations (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which can exactly match “+INF” in a case-insensitive way would be mapped to positive infinity. Similarly, this case-insensitive rule is applied to “-INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) would be used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior. Converting a string representing a floating-point value, such as “2.718”, to INT is also undefined behavior.
Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value change caused by range differences between the two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain input types. Casting from complex is not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string), tensor(bfloat16): Constrain output types. Casting to complex is not supported.
OnnxCast_6¶
 class skl2onnx.algebra.onnx_ops.OnnxCast_6(*args, **kwargs)¶
Version
Onnx name: Cast
This version of the operator has been available since version 6.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message. NOTE: Casting to and from strings is not supported yet.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain input types. Casting from strings and complex are not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types. Casting to strings and complex are not supported.
OnnxCast_9¶
 class skl2onnx.algebra.onnx_ops.OnnxCast_9(*args, **kwargs)¶
Version
Onnx name: Cast
This version of the operator has been available since version 9.
Summary
The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns an output tensor of the same size in the converted type. The ‘to’ argument must be one of the data types specified in the ‘DataType’ enum field in the TensorProto message.
Casting from string tensors in plain (e.g., “3.14” and “1000”) and scientific numeric representations (e.g., “1e-5” and “1E8”) to float types is supported. For example, converting string “100.5” to an integer may yield 100. There are some string literals reserved for special floating-point values; “+INF” (and “INF”), “-INF”, and “NaN” are positive infinity, negative infinity, and not-a-number, respectively. Any string which can exactly match “+INF” in a case-insensitive way would be mapped to positive infinity. Similarly, this case-insensitive rule is applied to “-INF” and “NaN”. When casting from numeric tensors to string tensors, plain floating-point representation (such as “314.15926”) would be used. Converting a non-numerical-literal string such as “Hello World!” is undefined behavior. Converting a string representing a floating-point value, such as “2.718”, to INT is also undefined behavior.
Conversion from a numerical type to any numerical type is always allowed. Users must be aware of precision loss and value change caused by range differences between the two types. For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting an integer 36 to Boolean may produce 1 because we truncate bits which can’t be stored in the targeted type.
Attributes
to (required): The data type to which the elements of the input tensor are cast. Strictly must be one of the types from DataType enum in TensorProto Default value is ````
Inputs
input (heterogeneous)T1: Input tensor to be cast.
Outputs
output (heterogeneous)T2: Output tensor with the same shape as input with type specified by the ‘to’ argument
Type Constraints
T1 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain input types. Casting from complex is not supported.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool), tensor(string): Constrain output types. Casting to complex is not supported.
OnnxCategoryMapper¶
 class skl2onnx.algebra.onnx_ops.OnnxCategoryMapper(*args, **kwargs)¶
Version
Onnx name: CategoryMapper
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts strings to integers and vice versa.
Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.
Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.
If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.
Attributes
cats_int64s: The integers of the map. This sequence must be the same length as the ‘cats_strings’ sequence. Default value is ````
cats_strings: The strings of the map. This sequence must be the same length as the ‘cats_int64s’ sequence Default value is ````
default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_int64”
i: -1 type: INT `` * default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_string” s: “_Unused” type: STRING ``
Inputs
X (heterogeneous)T1: Input data
Outputs
Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.
Type Constraints
T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].
T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.
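The bidirectional mapping can be sketched in plain Python; `category_mapper` is an illustrative helper mirroring the attribute defaults above, not an skl2onnx function:

```python
def category_mapper(X, cats_int64s, cats_strings,
                    default_int64=-1, default_string="_Unused"):
    # The two equal-length sequences pair each string with its integer at
    # the same index. String inputs map to ints, int inputs map to strings,
    # and unmatched values fall back to the relevant default.
    s2i = dict(zip(cats_strings, cats_int64s))
    i2s = dict(zip(cats_int64s, cats_strings))
    if all(isinstance(x, str) for x in X):
        return [s2i.get(x, default_int64) for x in X]
    return [i2s.get(x, default_string) for x in X]

to_ints = category_mapper(["cat", "dog", "bird"], [0, 1], ["cat", "dog"])
to_strs = category_mapper([1, 5], [0, 1], ["cat", "dog"])
```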
OnnxCategoryMapper_1¶
 class skl2onnx.algebra.onnx_ops.OnnxCategoryMapper_1(*args, **kwargs)¶
Version
Onnx name: CategoryMapper
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Converts strings to integers and vice versa.
Two sequences of equal length are used to map between integers and strings, with strings and integers at the same index detailing the mapping.
Each operator converts either integers to strings or strings to integers, depending on which default value attribute is provided. Only one default value attribute should be defined.
If the string default value is set, it will convert integers to strings. If the int default value is set, it will convert strings to integers.
Attributes
cats_int64s: The integers of the map. This sequence must be the same length as the ‘cats_strings’ sequence. Default value is ````
cats_strings: The strings of the map. This sequence must be the same length as the ‘cats_int64s’ sequence Default value is ````
default_int64: An integer to use when an input string value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_int64”
i: -1 type: INT `` * default_string: A string to use when an input integer value is not found in the map. One and only one of the ‘default_*’ attributes must be defined. Default value is ``name: “default_string” s: “_Unused” type: STRING ``
Inputs
X (heterogeneous)T1: Input data
Outputs
Y (heterogeneous)T2: Output data. If strings are input, the output values are integers, and vice versa.
Type Constraints
T1 tensor(string), tensor(int64): The input must be a tensor of strings or integers, either [N,C] or [C].
T2 tensor(string), tensor(int64): The output is a tensor of strings or integers. Its shape will be the same as the input shape.
OnnxCeil¶
 class skl2onnx.algebra.onnx_ops.OnnxCeil(*args, **kwargs)¶
Version
Onnx name: Ceil
This version of the operator has been available since version 13.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceil function, y = ceil(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
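The semantics match NumPy's elementwise ceiling, which makes a quick sanity check easy (a sketch of the operator's semantics, not the converter API):

```python
import numpy as np

# Elementwise ceiling, mirroring what the ONNX Ceil operator computes.
x = np.array([-1.5, 1.2, 2.0], dtype=np.float32)
y = np.ceil(x)  # [-1., 2., 2.]
```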
OnnxCeil_1¶
 class skl2onnx.algebra.onnx_ops.OnnxCeil_1(*args, **kwargs)¶
Version
Onnx name: Ceil
This version of the operator has been available since version 1.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceiling function, y = ceil(x), is applied to the tensor elementwise.
Attributes
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCeil_13¶
 class skl2onnx.algebra.onnx_ops.OnnxCeil_13(*args, **kwargs)¶
Version
Onnx name: Ceil
This version of the operator has been available since version 13.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceiling function, y = ceil(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
OnnxCeil_6¶
 class skl2onnx.algebra.onnx_ops.OnnxCeil_6(*args, **kwargs)¶
Version
Onnx name: Ceil
This version of the operator has been available since version 6.
Summary
Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceiling function, y = ceil(x), is applied to the tensor elementwise.
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCelu¶
 class skl2onnx.algebra.onnx_ops.OnnxCelu(*args, **kwargs)¶
Version
Onnx name: Celu
This version of the operator has been available since version 12.
Summary
Continuously Differentiable Exponential Linear Units: Perform the linear unit elementwise on the input tensor X using formula:
max(0,x) + min(0,alpha*(exp(x/alpha)-1))
Attributes
alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is ``name: “alpha” f: 1.0 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float): Constrain input and output types to float32 tensors.
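The formula above can be checked directly with NumPy (a minimal sketch of the operator's semantics):

```python
import numpy as np

def celu(x, alpha=1.0):
    # max(0, x) + min(0, alpha * (exp(x / alpha) - 1)), elementwise
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x / alpha) - 1.0))

x = np.array([-1.0, 0.0, 2.0], dtype=np.float32)
y = celu(x)  # negative inputs are squashed smoothly; positives pass through
```

For positive inputs the function is the identity, and the exponential branch keeps it continuously differentiable at 0.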
OnnxCelu_12¶
 class skl2onnx.algebra.onnx_ops.OnnxCelu_12(*args, **kwargs)¶
Version
Onnx name: Celu
This version of the operator has been available since version 12.
Summary
Continuously Differentiable Exponential Linear Units: Perform the linear unit elementwise on the input tensor X using formula:
max(0,x) + min(0,alpha*(exp(x/alpha)-1))
Attributes
alpha: The Alpha value in the Celu formula which controls the shape of the unit. The default value is 1.0. Default value is ``name: “alpha” f: 1.0 type: FLOAT``
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float): Constrain input and output types to float32 tensors.
OnnxClip¶
 class skl2onnx.algebra.onnx_ops.OnnxClip(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 13.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
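Clip's semantics correspond to numpy.clip, which can serve as a quick reference (a sketch of the semantics, not the converter API):

```python
import numpy as np

# Limit every element to the interval [min, max].
x = np.array([-5.0, 0.5, 9.0], dtype=np.float32)
y = np.clip(x, -1.0, 1.0)  # [-1., 0.5, 1.]
```

When ‘min’ or ‘max’ is omitted in the ONNX graph, that bound defaults to the numeric limit of the element type, i.e. no clipping on that side.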
OnnxClip_1¶
 class skl2onnx.algebra.onnx_ops.OnnxClip_1(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 1.
Summary
Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max() respectively.
Attributes
consumed_inputs: legacy optimization attribute. Default value is ````
max: Maximum value, above which element is replaced by max Default value is ````
min: Minimum value, under which element is replaced by min Default value is ````
Inputs
input (heterogeneous)T: Input tensor whose elements to be clipped
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxClip_11¶
 class skl2onnx.algebra.onnx_ops.OnnxClip_11(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 11.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxClip_12¶
 class skl2onnx.algebra.onnx_ops.OnnxClip_12(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 12.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxClip_13¶
 class skl2onnx.algebra.onnx_ops.OnnxClip_13(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 13.
Summary
Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max(), respectively.
Inputs
Between 1 and 3 inputs.
input (heterogeneous)T: Input tensor whose elements to be clipped
min (optional, heterogeneous)T: Minimum value, under which element is replaced by min. It must be a scalar (tensor of empty shape).
max (optional, heterogeneous)T: Maximum value, above which element is replaced by max. It must be a scalar (tensor of empty shape).
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxClip_6¶
 class skl2onnx.algebra.onnx_ops.OnnxClip_6(*args, **kwargs)¶
Version
Onnx name: Clip
This version of the operator has been available since version 6.
Summary
Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and ‘max’. They default to numeric_limits::lowest() and numeric_limits::max() respectively.
Attributes
max: Maximum value, above which element is replaced by max. Default value is ``name: “max” f: 3.4028234663852886e+38 type: FLOAT``
min: Minimum value, under which element is replaced by min. Default value is ``name: “min” f: -3.4028234663852886e+38 type: FLOAT``
Inputs
input (heterogeneous)T: Input tensor whose elements to be clipped
Outputs
output (heterogeneous)T: Output tensor with clipped input elements
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCompress¶
 class skl2onnx.algebra.onnx_ops.OnnxCompress(*args, **kwargs)¶
Version
Onnx name: Compress
This version of the operator has been available since version 11.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
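As the summary notes, the behavior mirrors numpy.compress, including the flattening when no axis is given and the handling of a short condition (a sketch of the semantics):

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])

# axis given: select whole slices along that axis
rows = np.compress([False, True, True], x, axis=0)  # rows 1 and 2

# no axis: the input is flattened first; elements past the
# condition's length are discarded
flat = np.compress([False, True], x)  # just the element 2
```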
OnnxCompress_11¶
 class skl2onnx.algebra.onnx_ops.OnnxCompress_11(*args, **kwargs)¶
Version
Onnx name: Compress
This version of the operator has been available since version 11.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
OnnxCompress_9¶
 class skl2onnx.algebra.onnx_ops.OnnxCompress_9(*args, **kwargs)¶
Version
Onnx name: Compress
This version of the operator has been available since version 9.
Summary
Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index. In case axis is not provided, input is flattened before elements are selected. Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html
Attributes
axis: (Optional) Axis along which to take slices. If not specified, input is flattened before elements being selected. Default value is ````
Inputs
input (heterogeneous)T: Tensor of rank r >= 1.
condition (heterogeneous)T1: Rank 1 tensor of booleans to indicate which slices or data elements to be selected. Its length can be less than the input length along the axis or the flattened input size if axis is not specified. In such cases data slices or elements exceeding the condition length are discarded.
Outputs
output (heterogeneous)T: Tensor of rank r if axis is specified. Otherwise output is a Tensor of rank 1.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
T1 tensor(bool): Constrains to boolean tensors.
OnnxConcat¶
 class skl2onnx.algebra.onnx_ops.OnnxConcat(*args, **kwargs)¶
Version
Onnx name: Concat
This version of the operator has been available since version 13.
Summary
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
Attributes
axis (required): Which axis to concat on. A negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(inputs). Default value is ````
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
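The shape rule (all dimensions equal except the concatenation axis) behaves like numpy.concatenate (a sketch of the semantics):

```python
import numpy as np

a = np.ones((2, 3), dtype=np.float32)
b = np.zeros((2, 2), dtype=np.float32)

# Shapes agree on axis 0 and differ only on axis 1, the concat axis.
c = np.concatenate([a, b], axis=1)  # shape (2, 5); axis=-1 is also accepted
```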
OnnxConcatFromSequence¶
 class skl2onnx.algebra.onnx_ops.OnnxConcatFromSequence(*args, **kwargs)¶
Version
Onnx name: ConcatFromSequence
This version of the operator has been available since version 11.
Summary
Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.
Attributes
axis (required): Which axis to concat on. Accepted range is [-r, r - 1], where r is the rank of input tensors. When new_axis is 1, accepted range is [-r - 1, r]. Default value is ````
new_axis: Insert and concatenate on a new axis or not, default 0 means do not insert new axis. Default value is ``name: “new_axis” i: 0 type: INT``
Inputs
input_sequence (heterogeneous)S: Sequence of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
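The two ‘new_axis’ modes correspond to numpy.concatenate and numpy.stack, as the summary says (a sketch of the semantics):

```python
import numpy as np

seq = [np.ones((2, 3)), np.zeros((2, 3))]

cat = np.concatenate(seq, axis=0)  # new_axis=0: shape (4, 3)
stk = np.stack(seq, axis=0)        # new_axis=1: shape (2, 2, 3)
```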
OnnxConcatFromSequence_11¶
 class skl2onnx.algebra.onnx_ops.OnnxConcatFromSequence_11(*args, **kwargs)¶
Version
Onnx name: ConcatFromSequence
This version of the operator has been available since version 11.
Summary
Concatenate a sequence of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on. By default ‘new_axis’ is 0, the behavior is similar to numpy.concatenate. When ‘new_axis’ is 1, the behavior is similar to numpy.stack.
Attributes
axis (required): Which axis to concat on. Accepted range is [-r, r - 1], where r is the rank of input tensors. When new_axis is 1, accepted range is [-r - 1, r]. Default value is ````
new_axis: Insert and concatenate on a new axis or not, default 0 means do not insert new axis. Default value is ``name: “new_axis” i: 0 type: INT``
Inputs
input_sequence (heterogeneous)S: Sequence of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
S seq(tensor(uint8)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(int8)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(float16)), seq(tensor(float)), seq(tensor(double)), seq(tensor(string)), seq(tensor(bool)), seq(tensor(complex64)), seq(tensor(complex128)): Constrain input types to any tensor type.
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConcat_1¶
 class skl2onnx.algebra.onnx_ops.OnnxConcat_1(*args, **kwargs)¶
Version
Onnx name: Concat
This version of the operator has been available since version 1.
Summary
Concatenate a list of tensors into a single tensor
Attributes
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain output types to float tensors.
OnnxConcat_11¶
 class skl2onnx.algebra.onnx_ops.OnnxConcat_11(*args, **kwargs)¶
Version
Onnx name: Concat
This version of the operator has been available since version 11.
Summary
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
Attributes
axis (required): Which axis to concat on. A negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(inputs). Default value is ````
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConcat_13¶
 class skl2onnx.algebra.onnx_ops.OnnxConcat_13(*args, **kwargs)¶
Version
Onnx name: Concat
This version of the operator has been available since version 13.
Summary
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on.
Attributes
axis (required): Which axis to concat on. A negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(inputs). Default value is ````
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConcat_4¶
 class skl2onnx.algebra.onnx_ops.OnnxConcat_4(*args, **kwargs)¶
Version
Onnx name: Concat
This version of the operator has been available since version 4.
Summary
Concatenate a list of tensors into a single tensor
Attributes
Inputs
Between 1 and 2147483647 inputs.
inputs (variadic, heterogeneous)T: List of tensors for concatenation
Outputs
concat_result (heterogeneous)T: Concatenated tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain output types to any tensor type.
OnnxConstant¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 13.
Summary
This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, or value_* must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format. Default value is ````
value: The value for the elements of the output tensor. Default value is ````
value_float: The value for the sole element for the scalar, float32, output tensor. Default value is ````
value_floats: The values for the elements for the 1D, float32, output tensor. Default value is ````
value_int: The value for the sole element for the scalar, int64, output tensor. Default value is ````
value_ints: The values for the elements for the 1D, int64, output tensor. Default value is ````
value_string: The value for the sole element for the scalar, UTF-8 string, output tensor. Default value is ````
value_strings: The values for the elements for the 1D, UTF-8 string, output tensor. Default value is ````
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstantOfShape¶
 class skl2onnx.algebra.onnx_ops.OnnxConstantOfShape(*args, **kwargs)¶
Version
Onnx name: ConstantOfShape
This version of the operator has been available since version 9.
Summary
Generate a tensor with given value and shape.
Attributes
value: (Optional) The value of the output elements. Should be a one-element tensor. If not specified, it defaults to a tensor of value 0 and datatype float32. Default value is ````
Inputs
input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If empty tensor is given, the output would be a scalar. All values must be >= 0.
Outputs
output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor is taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.
Type Constraints
T1 tensor(int64): Constrain input types.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types to be numerics.
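The output rule can be sketched with NumPy: the ‘input’ tensor supplies the shape, and the one-element ‘value’ attribute supplies both the fill value and the dtype:

```python
import numpy as np

shape = np.array([2, 3], dtype=np.int64)   # the 1D 'input' tensor
value = np.array([5.0], dtype=np.float32)  # the one-element 'value' attribute

# Output takes its shape from 'input' and its value/dtype from 'value';
# with no 'value' attribute it would be zeros of dtype float32.
out = np.full(tuple(shape), value[0], dtype=value.dtype)  # shape (2, 3)
```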
OnnxConstantOfShape_9¶
 class skl2onnx.algebra.onnx_ops.OnnxConstantOfShape_9(*args, **kwargs)¶
Version
Onnx name: ConstantOfShape
This version of the operator has been available since version 9.
Summary
Generate a tensor with given value and shape.
Attributes
value: (Optional) The value of the output elements. Should be a one-element tensor. If not specified, it defaults to a tensor of value 0 and datatype float32. Default value is ````
Inputs
input (heterogeneous)T1: 1D tensor. The shape of the expected output tensor. If empty tensor is given, the output would be a scalar. All values must be >= 0.
Outputs
output (heterogeneous)T2: Output tensor of shape specified by ‘input’. If attribute ‘value’ is specified, the value and datatype of the output tensor is taken from ‘value’. If attribute ‘value’ is not specified, the value in the output defaults to 0, and the datatype defaults to float32.
Type Constraints
T1 tensor(int64): Constrain input types.
T2 tensor(float16), tensor(float), tensor(double), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(bool): Constrain output types to be numerics.
OnnxConstant_1¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant_1(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 1.
Summary
A constant tensor.
Attributes
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConstant_11¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant_11(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 11.
Summary
A constant tensor. Exactly one of the two attributes, either value or sparse_value, must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format. Default value is ````
value: The value for the elements of the output tensor. Default value is ````
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstant_12¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant_12(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 12.
Summary
This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, or value_* must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format. Default value is ````
value: The value for the elements of the output tensor. Default value is ````
value_float: The value for the sole element for the scalar, float32, output tensor. Default value is ````
value_floats: The values for the elements for the 1D, float32, output tensor. Default value is ````
value_int: The value for the sole element for the scalar, int64, output tensor. Default value is ````
value_ints: The values for the elements for the 1D, int64, output tensor. Default value is ````
value_string: The value for the sole element for the scalar, UTF-8 string, output tensor. Default value is ````
value_strings: The values for the elements for the 1D, UTF-8 string, output tensor. Default value is ````
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstant_13¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant_13(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 13.
Summary
This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, or value_* must be specified.
Attributes
sparse_value: The value for the elements of the output tensor in sparse format. Default value is ````
value: The value for the elements of the output tensor. Default value is ````
value_float: The value for the sole element for the scalar, float32, output tensor. Default value is ````
value_floats: The values for the elements for the 1D, float32, output tensor. Default value is ````
value_int: The value for the sole element for the scalar, int64, output tensor. Default value is ````
value_ints: The values for the elements for the 1D, int64, output tensor. Default value is ````
value_string: The value for the sole element for the scalar, UTF-8 string, output tensor. Default value is ````
value_strings: The values for the elements for the 1D, UTF-8 string, output tensor. Default value is ````
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConstant_9¶
 class skl2onnx.algebra.onnx_ops.OnnxConstant_9(*args, **kwargs)¶
Version
Onnx name: Constant
This version of the operator has been available since version 9.
Summary
A constant tensor.
Attributes
Outputs
output (heterogeneous)T: Output tensor containing the same value of the provided tensor.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxConv¶
 class skl2onnx.algebra.onnx_ops.OnnxConv(*args, **kwargs)¶
Version
Onnx name: Conv
This version of the operator has been available since version 11.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. Where default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is ``name: “auto_pad” s: “NOTSET” type: STRING``
dilations: dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis. Default value is ````
group: number of groups input channels and output channels are divided into. Default value is ``name: “group” i: 1 type: INT``
kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W. Default value is ````
pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end part of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin…x1_end, x2_end,…], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along start and end of each spatial axis. Default value is ````
strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis. Default value is ````
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
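The 2-D Conv semantics above can be sketched in plain NumPy. This is an illustrative reference (the helper name `conv2d` is hypothetical, not the skl2onnx API), assuming explicit pads, group=1, and the default NOTSET auto_pad:

```python
import numpy as np

def conv2d(X, W, B=None, strides=(1, 1), pads=(0, 0, 0, 0), dilations=(1, 1)):
    """Reference 2-D Conv (group=1 assumed).
    X: (N, C, H, W_in); W: (M, C, kH, kW); pads: (top, left, bottom, right)."""
    N, C, H, Win = X.shape
    M, _, kH, kW = W.shape
    sh, sw = strides
    dh, dw = dilations
    ekH, ekW = (kH - 1) * dh + 1, (kW - 1) * dw + 1  # effective kernel extent
    Xp = np.pad(X, ((0, 0), (0, 0), (pads[0], pads[2]), (pads[1], pads[3])))
    oH = (H + pads[0] + pads[2] - ekH) // sh + 1
    oW = (Win + pads[1] + pads[3] - ekW) // sw + 1
    Y = np.zeros((N, M, oH, oW), dtype=X.dtype)
    for i in range(oH):
        for j in range(oW):
            # Window with dilation applied along both spatial axes
            win = Xp[:, :, i * sh:i * sh + ekH:dh, j * sw:j * sw + ekW:dw]
            Y[:, :, i, j] = np.einsum("nchw,mchw->nm", win, W)
    return Y if B is None else Y + B.reshape(1, M, 1, 1)

X = np.ones((1, 1, 5, 5), dtype=np.float32)
W = np.ones((1, 1, 3, 3), dtype=np.float32)
Y = conv2d(X, W, pads=(1, 1, 1, 1))  # pads of 1 on all sides keep the 5x5 size
print(Y.shape)               # (1, 1, 5, 5)
print(float(Y[0, 0, 2, 2]))  # 9.0: a full 3x3 window of ones
```

The output spatial size follows the usual formula floor((H + pad_begin + pad_end - effective_kernel) / stride) + 1, which is what the `oH`/`oW` lines compute.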
OnnxConvInteger¶
 class skl2onnx.algebra.onnx_ops.OnnxConvInteger(*args, **kwargs)¶
Version
Onnx name: ConvInteger
This version of the operator has been available since version 10.
Summary
The integer convolution operator consumes an input tensor, its zero point, a filter, and its zero point, and computes the output. The production MUST never overflow. The accumulation may overflow if and only if in 32 bits.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each axis.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input 'w'.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each axis.
Inputs
Between 2 and 4 inputs.
x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and its default value is 0. It’s a scalar, which means a per-tensor/layer quantization.
w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and its default value is 0. It could be a scalar or a 1D tensor, which means a per-tensor/layer or per-output-channel quantization. If it’s a 1D tensor, its number of elements should be equal to the number of output channels (M).
Outputs
y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.
T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.
T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.
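The zero-point handling can be illustrated with a minimal NumPy sketch (the helper `conv_integer_1x1` is hypothetical, not the skl2onnx API): both inputs are shifted by their zero points and the products are accumulated as int32. A 1x1 kernel keeps the sketch short:

```python
import numpy as np

def conv_integer_1x1(x, w, x_zero_point=0, w_zero_point=0):
    """Illustrative ConvInteger with a 1x1 kernel: subtract the zero points,
    then accumulate in int32 (uint8/int8 inputs, int32 output)."""
    xs = x.astype(np.int32) - np.int32(x_zero_point)
    ws = w.astype(np.int32) - np.int32(w_zero_point)
    # A 1x1 kernel reduces the convolution to a per-pixel contraction
    # over the channel axis: (N, C, H, W) x (M, C) -> (N, M, H, W).
    return np.einsum("nchw,mc->nmhw", xs, ws[:, :, 0, 0])

x = np.array([[[[130, 128], [128, 126]]]], dtype=np.uint8)  # N=1, C=1, 2x2
w = np.array([[[[2]]]], dtype=np.uint8)                     # M=1, C=1, 1x1
y = conv_integer_1x1(x, w, x_zero_point=128, w_zero_point=0)
print(y.dtype, y.ravel().tolist())  # int32 [4, 0, 0, -4]
```

Subtracting the zero point first is what lets unsigned 8-bit inputs represent signed values without the accumulation overflowing.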
OnnxConvInteger_10¶
 class skl2onnx.algebra.onnx_ops.OnnxConvInteger_10(*args, **kwargs)¶
Version
Onnx name: ConvInteger
This version of the operator has been available since version 10.
Summary
The integer convolution operator consumes an input tensor, its zero point, a filter, and its zero point, and computes the output. The production MUST never overflow. The accumulation may overflow if and only if in 32 bits.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each axis.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input 'w'.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each axis.
Inputs
Between 2 and 4 inputs.
x (heterogeneous)T1: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
w (heterogeneous)T2: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
x_zero_point (optional, heterogeneous)T1: Zero point tensor for input ‘x’. It’s optional and its default value is 0. It’s a scalar, which means a per-tensor/layer quantization.
w_zero_point (optional, heterogeneous)T2: Zero point tensor for input ‘w’. It’s optional and its default value is 0. It could be a scalar or a 1D tensor, which means a per-tensor/layer or per-output-channel quantization. If it’s a 1D tensor, its number of elements should be equal to the number of output channels (M).
Outputs
y (heterogeneous)T3: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T1 tensor(int8), tensor(uint8): Constrain input x and its zero point data type to 8-bit integer tensor.
T2 tensor(int8), tensor(uint8): Constrain input w and its zero point data type to 8-bit integer tensor.
T3 tensor(int32): Constrain output y data type to 32-bit integer tensor.
OnnxConvTranspose¶
 class skl2onnx.algebra.onnx_ops.OnnxConvTranspose(*args, **kwargs)¶
Version
Onnx name: ConvTranspose
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = total_padding[i]/2.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
* output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in "output_padding" must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn't directly affect the computed output values. It only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If "output_shape" is explicitly provided, "output_padding" does not contribute additional size to "output_shape" but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
* output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto-generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
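The output-shape equation stated above can be checked per axis with a small helper (illustrative only; the function name is hypothetical):

```python
def convtranspose_output_dim(input_size, stride, kernel, dilation=1,
                             output_padding=0, pad_begin=0, pad_end=0):
    """Output size along one spatial axis, per the ConvTranspose equation:
    stride*(in-1) + output_padding + ((kernel-1)*dilation + 1) - pads."""
    return (stride * (input_size - 1) + output_padding
            + ((kernel - 1) * dilation + 1) - pad_begin - pad_end)

# A stride-2, 4x4-kernel, pad-1 transposed conv doubles a spatial dimension:
print(convtranspose_output_dim(8, stride=2, kernel=4, pad_begin=1, pad_end=1))  # 16
# Without padding, the output grows by kernel - 1:
print(convtranspose_output_dim(5, stride=1, kernel=3))  # 7
```

This mirrors how transposed convolution inverts the shape arithmetic of Conv: the kernel extent is added rather than subtracted, and pads shrink the output instead of growing it.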
OnnxConvTranspose_1¶
 class skl2onnx.algebra.onnx_ops.OnnxConvTranspose_1(*args, **kwargs)¶
Version
Onnx name: ConvTranspose
This version of the operator has been available since version 1.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = total_padding[i]/2.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
* output_padding: The zero-padding added to one side of the output. This is also called adjs/adjustment in some frameworks.
* output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto-generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConvTranspose_11¶
 class skl2onnx.algebra.onnx_ops.OnnxConvTranspose_11(*args, **kwargs)¶
Version
Onnx name: ConvTranspose
This version of the operator has been available since version 11.
Summary
The convolution transpose operator consumes an input tensor and a filter, and computes the output.
If the pads parameter is provided the shape of the output is calculated via the following equation:
output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]
output_shape can also be explicitly specified in which case pads values are auto generated using these equations:
total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]
If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)
Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = total_padding[i]/2.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = input_shape[i] * strides[i] for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
* output_padding: Additional elements added to the side with higher coordinate indices in the output. Each padding value in "output_padding" must be less than the corresponding stride/dilation dimension. By default, this attribute is a zero vector. Note that this attribute doesn't directly affect the computed output values. It only controls the selection of the computed values, so changing this attribute only adds or removes output elements. If "output_shape" is explicitly provided, "output_padding" does not contribute additional size to "output_shape" but participates in the computation of the needed padding amount. This is also called adjs or adjustment in some frameworks.
* output_shape: The shape of the output can be explicitly set, which will cause pads values to be auto-generated. If output_shape is specified, pads values are ignored. See the doc for the equations used to generate pads.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn)
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (C x M/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the weight shape will be (C x M/group x k1 x k2 x … x kn), where (k1 x k2 x … x kn) is the dimension of the kernel. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, pad lengths and group count. The number of channels in the output should be equal to W.shape[1] * group (assuming zero based indices of the shape array)
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConv_1¶
 class skl2onnx.algebra.onnx_ops.OnnxConv_1(*args, **kwargs)¶
Version
Onnx name: Conv
This version of the operator has been available since version 1.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that the output spatial size matches the input. In case of an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. VALID means no padding. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxConv_11¶
 class skl2onnx.algebra.onnx_ops.OnnxConv_11(*args, **kwargs)¶
Version
Onnx name: Conv
This version of the operator has been available since version 11.
Summary
The convolution operator consumes an input tensor and a filter, and computes the output.
Attributes
auto_pad: Must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is "NOTSET" (STRING).
* dilations: Dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis.
* group: Number of groups input channels and output channels are divided into. Default value is 1 (INT).
* kernel_shape: The shape of the convolution kernel. If not present, it should be inferred from input W.
* pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. pads format should be as follows: [x1_begin, x2_begin, …, x1_end, x2_end, …], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis.
* strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis.
Inputs
Between 2 and 3 inputs.
X (heterogeneous)T: Input data tensor from previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].
W (heterogeneous)T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, and kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. X.shape[1] == (W.shape[1] * group) == C (assuming zero based indices for the shape array). Or in other words FILTER_IN_CHANNEL should be equal to DATA_CHANNEL.
B (optional, heterogeneous)T: Optional 1D bias to be added to the convolution, has size of M.
Outputs
Y (heterogeneous)T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCos¶
 class skl2onnx.algebra.onnx_ops.OnnxCos(*args, **kwargs)¶
Version
Onnx name: Cos
This version of the operator has been available since version 7.
Summary
Calculates the cosine of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The cosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCos_7¶
 class skl2onnx.algebra.onnx_ops.OnnxCos_7(*args, **kwargs)¶
Version
Onnx name: Cos
This version of the operator has been available since version 7.
Summary
Calculates the cosine of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The cosine of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCosh¶
 class skl2onnx.algebra.onnx_ops.OnnxCosh(*args, **kwargs)¶
Version
Onnx name: Cosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic cosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCosh_9¶
 class skl2onnx.algebra.onnx_ops.OnnxCosh_9(*args, **kwargs)¶
Version
Onnx name: Cosh
This version of the operator has been available since version 9.
Summary
Calculates the hyperbolic cosine of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The hyperbolic cosine values of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxCumSum¶
 class skl2onnx.algebra.onnx_ops.OnnxCumSum(*args, **kwargs)¶
Version
Onnx name: CumSum
This version of the operator has been available since version 14.
Summary
Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to 1.
Example:
input_x = [1, 2, 3], axis = 0:
  output = [1, 3, 6]
with exclusive = 1:
  output = [0, 1, 3]
with exclusive = 0, reverse = 1:
  output = [6, 5, 3]
with exclusive = 1, reverse = 1:
  output = [5, 3, 0]
Attributes
exclusive: If set to 1 will return exclusive sum in which the top element is not included. In other terms, if set to 1, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements. Default value is 0.
reverse: If set to 1 will perform the sums in reverse direction. Default value is 0.
Inputs
x (heterogeneous)T: An input tensor that is to be processed.
axis (heterogeneous)T2: A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. Negative value means counting dimensions from the back.
Outputs
y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to high-precision numeric tensors.
T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
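The four attribute combinations in the example above can be reproduced with a short pure-Python sketch. This is a 1-D illustration of the documented semantics only, not the runtime kernel:

```python
def cumsum(x, exclusive=0, reverse=0):
    # Cumulative sum along a 1-D input, following the ONNX CumSum contract:
    # inclusive by default, with optional exclusive and reverse modes.
    seq = list(reversed(x)) if reverse else list(x)
    out, total = [], 0
    for v in seq:
        if exclusive:
            out.append(total)  # exclusive: current element not included
            total += v
        else:
            total += v         # inclusive: first element is copied as is
            out.append(total)
    return list(reversed(out)) if reverse else out

print(cumsum([1, 2, 3]))                          # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=1))             # [0, 1, 3]
print(cumsum([1, 2, 3], reverse=1))               # [6, 5, 3]
print(cumsum([1, 2, 3], exclusive=1, reverse=1))  # [5, 3, 0]
```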
OnnxCumSum_11¶
 class skl2onnx.algebra.onnx_ops.OnnxCumSum_11(*args, **kwargs)¶
Version
Onnx name: CumSum
This version of the operator has been available since version 11.
Summary
Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to 1.
Example:
input_x = [1, 2, 3], axis = 0:
  output = [1, 3, 6]
with exclusive = 1:
  output = [0, 1, 3]
with exclusive = 0, reverse = 1:
  output = [6, 5, 3]
with exclusive = 1, reverse = 1:
  output = [5, 3, 0]
Attributes
exclusive: If set to 1 will return exclusive sum in which the top element is not included. In other terms, if set to 1, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements. Default value is 0.
reverse: If set to 1 will perform the sums in reverse direction. Default value is 0.
Inputs
x (heterogeneous)T: An input tensor that is to be processed.
axis (heterogeneous)T2: A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. Negative value means counting dimensions from the back.
Outputs
y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float), tensor(double): Input can be of any tensor type.
T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
OnnxCumSum_14¶
 class skl2onnx.algebra.onnx_ops.OnnxCumSum_14(*args, **kwargs)¶
Version
Onnx name: CumSum
This version of the operator has been available since version 14.
Summary
Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively meaning the first element is copied as is. Through an exclusive attribute, this behavior can change to exclude the first element. It can also perform summation in the opposite direction of the axis. For that, set reverse attribute to 1.
Example:
input_x = [1, 2, 3], axis = 0:
  output = [1, 3, 6]
with exclusive = 1:
  output = [0, 1, 3]
with exclusive = 0, reverse = 1:
  output = [6, 5, 3]
with exclusive = 1, reverse = 1:
  output = [5, 3, 0]
Attributes
exclusive: If set to 1 will return exclusive sum in which the top element is not included. In other terms, if set to 1, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements. Default value is 0.
reverse: If set to 1 will perform the sums in reverse direction. Default value is 0.
Inputs
x (heterogeneous)T: An input tensor that is to be processed.
axis (heterogeneous)T2: A 0-D tensor. Must be in the range [-rank(x), rank(x)-1]. Negative value means counting dimensions from the back.
Outputs
y (heterogeneous)T: Output tensor of the same type as ‘x’ with cumulative sums of the x’s elements
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to high-precision numeric tensors.
T2 tensor(int32), tensor(int64): axis tensor can be int32 or int64 only
OnnxDepthToSpace¶
 class skl2onnx.algebra.onnx_ops.OnnxDepthToSpace(*args, **kwargs)¶
Version
Onnx name: DepthToSpace
This version of the operator has been available since version 13.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])
In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the following order: column, row, and the depth. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])
tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])
y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
mode: DCR (default) for depth-column-row order rearrangement. Use CRD for column-row-depth order. Default value is "DCR".
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
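The numpy reshape/transpose recipe above can also be written as direct index arithmetic. The following nested-list sketch of the DCR mode is illustrative only (the helper name is made up; the real operator works on ONNX tensors, not Python lists):

```python
def depth_to_space_dcr(x, blocksize):
    # x is a nested list of shape [N][C][H][W]; C must be divisible by
    # blocksize**2. Output has shape [N][C/bs^2][H*bs][W*bs].
    # In DCR mode, output pixel (i, j) of channel cc reads input channel
    # (i % bs * bs + j % bs) * c_out + cc at spatial position (i//bs, j//bs).
    n, c, h, w = len(x), len(x[0]), len(x[0][0]), len(x[0][0][0])
    bs = blocksize
    c_out = c // (bs * bs)
    return [[[[x[b][(i % bs * bs + j % bs) * c_out + cc][i // bs][j // bs]
               for j in range(w * bs)]
              for i in range(h * bs)]
             for cc in range(c_out)]
            for b in range(n)]

# A 1x4x1x1 input with blocksize=2 becomes 1x1x2x2:
print(depth_to_space_dcr([[[[0]], [[1]], [[2]], [[3]]]], 2))  # [[[[0, 1], [2, 3]]]]
```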
OnnxDepthToSpace_1¶
 class skl2onnx.algebra.onnx_ops.OnnxDepthToSpace_1(*args, **kwargs)¶
Version
Onnx name: DepthToSpace
This version of the operator has been available since version 1.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions.
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxDepthToSpace_11¶
 class skl2onnx.algebra.onnx_ops.OnnxDepthToSpace_11(*args, **kwargs)¶
Version
Onnx name: DepthToSpace
This version of the operator has been available since version 11.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])
In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the following order: column, row, and the depth. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])
tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])
y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
mode: DCR (default) for depth-column-row order rearrangement. Use CRD for column-row-depth order. Default value is "DCR".
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxDepthToSpace_13¶
 class skl2onnx.algebra.onnx_ops.OnnxDepthToSpace_13(*args, **kwargs)¶
Version
Onnx name: DepthToSpace
This version of the operator has been available since version 13.
Summary
DepthToSpace rearranges (permutes) data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])
tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])
y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])
In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the following order: column, row, and the depth. The output y is computed from the input x as below:
b, c, h, w = x.shape
tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])
tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])
y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])
Attributes
blocksize (required): Blocks of [blocksize, blocksize] are moved.
mode: DCR (default) for depth-column-row order rearrangement. Use CRD for column-row-depth order. Default value is "DCR".
Inputs
input (heterogeneous)T: Input tensor of [N,C,H,W], where N is the batch axis, C is the channel or depth, H is the height and W is the width.
Outputs
output (heterogeneous)T: Output tensor of [N, C/(blocksize * blocksize), H * blocksize, W * blocksize].
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128): Constrain input and output types to all tensor types.
OnnxDequantizeLinear¶
 class skl2onnx.algebra.onnx_ops.OnnxDequantizeLinear(*args, **kwargs)¶
Version
Onnx name: DequantizeLinear
This version of the operator has been available since version 13.
Summary
The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero point to compute the full precision tensor. The dequantization formula is y = (x - x_zero_point) * x_scale. ‘x_scale’ and ‘x_zero_point’ must have same shape, and can be either a scalar for per-tensor / per-layer quantization, or a 1-D tensor for per-axis quantizations. ‘x_zero_point’ and ‘x’ must have same type. ‘x’ and ‘y’ must have same shape. In the case of dequantizing int32, there’s no zero point (zero point is supposed to be 0).
Attributes
axis: (Optional) The axis of the dequantizing dimension of the input tensor. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is 1.
Inputs
Between 2 and 3 inputs.
x (heterogeneous)T: N-D quantized input tensor to be dequantized.
x_scale (heterogeneous)tensor(float): Scale for input ‘x’. It can be a scalar, which means a per-tensor/per-layer dequantization, or a 1-D tensor for per-axis dequantization.
x_zero_point (optional, heterogeneous)T: Zero point for input ‘x’. It can be a scalar, which means a per-tensor/per-layer dequantization, or a 1-D tensor for per-axis dequantization. It’s optional. 0 is the default value when it’s not specified.
Outputs
y (heterogeneous)tensor(float): N-D full precision output tensor. It has same shape as input ‘x’.
Type Constraints
T tensor(int8), tensor(uint8), tensor(int32): Constrain ‘x_zero_point’ and ‘x’ to 8bit/32bit integer tensor.
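The per-tensor case of the dequantization formula is small enough to sketch in plain Python. The function name and flat-list representation are illustrative only:

```python
def dequantize_linear(x, x_scale, x_zero_point=0):
    # Per-tensor DequantizeLinear: y = (x - x_zero_point) * x_scale,
    # applied element-wise to a flat list of quantized integers.
    return [(v - x_zero_point) * x_scale for v in x]

# uint8 values quantized with scale 0.5 and zero point 128:
print(dequantize_linear([128, 130, 126], 0.5, 128))  # [0.0, 1.0, -1.0]
```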
OnnxDequantizeLinear_10¶
 class skl2onnx.algebra.onnx_ops.OnnxDequantizeLinear_10(*args, **kwargs)¶
Version
Onnx name: DequantizeLinear
This version of the operator has been available since version 10.
Summary
The linear dequantization operator. It consumes a quantized tensor, a scale, a zero point to compute the full precision tensor. The dequantization formula is y = (x - x_zero_point) * x_scale. ‘x_scale’ and ‘x_zero_point’ must have same shape. ‘x_zero_point’ and ‘x’ must have same type. ‘x’ and ‘y’ must have same shape. In the case of dequantizing int32, there’s no zero point (zero point is supposed to be 0).
Inputs
Between 2 and 3 inputs.
x (heterogeneous)T: N-D quantized input tensor to be dequantized.
x_scale (heterogeneous)tensor(float): Scale for input ‘x’. It’s a scalar, which means a per-tensor/per-layer quantization.
x_zero_point (optional, heterogeneous)T: Zero point for input ‘x’. It’s a scalar, which means a per-tensor/per-layer quantization. It’s optional. 0 is the default value when it’s not specified.
Outputs
y (heterogeneous)tensor(float): N-D full precision output tensor. It has same shape as input ‘x’.
Type Constraints
T tensor(int8), tensor(uint8), tensor(int32): Constrain ‘x_zero_point’ and ‘x’ to 8bit/32bit integer tensor.
OnnxDequantizeLinear_13¶
 class skl2onnx.algebra.onnx_ops.OnnxDequantizeLinear_13(*args, **kwargs)¶
Version
Onnx name: DequantizeLinear
This version of the operator has been available since version 13.
Summary
The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero point to compute the full precision tensor. The dequantization formula is y = (x - x_zero_point) * x_scale. ‘x_scale’ and ‘x_zero_point’ must have same shape, and can be either a scalar for per-tensor / per-layer quantization, or a 1-D tensor for per-axis quantizations. ‘x_zero_point’ and ‘x’ must have same type. ‘x’ and ‘y’ must have same shape. In the case of dequantizing int32, there’s no zero point (zero point is supposed to be 0).
Attributes
axis: (Optional) The axis of the dequantizing dimension of the input tensor. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input). Default value is 1.
Inputs
Between 2 and 3 inputs.
x (heterogeneous)T: N-D quantized input tensor to be dequantized.
x_scale (heterogeneous)tensor(float): Scale for input ‘x’. It can be a scalar, which means a per-tensor/per-layer dequantization, or a 1-D tensor for per-axis dequantization.
x_zero_point (optional, heterogeneous)T: Zero point for input ‘x’. It can be a scalar, which means a per-tensor/per-layer dequantization, or a 1-D tensor for per-axis dequantization. It’s optional. 0 is the default value when it’s not specified.
Outputs
y (heterogeneous)tensor(float): N-D full precision output tensor. It has same shape as input ‘x’.
Type Constraints
T tensor(int8), tensor(uint8), tensor(int32): Constrain ‘x_zero_point’ and ‘x’ to 8bit/32bit integer tensor.
OnnxDet¶
 class skl2onnx.algebra.onnx_ops.OnnxDet(*args, **kwargs)¶
Version
Onnx name: Det
This version of the operator has been available since version 11.
Summary
Det calculates determinant of a square matrix or batches of square matrices. Det takes one input tensor of shape [*, M, M], where * is zero or more batch dimensions, and the innermost 2 dimensions form square matrices. The output is a tensor of shape [*], containing the determinants of all input submatrices. E.g., when the input is 2-D, the output is a scalar (shape is empty: []).
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to floatingpoint tensors.
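For intuition, the batched contract (shape [*, M, M] in, shape [*] out) can be illustrated for the 2x2 case in plain Python; the helper name is made up and a real Det kernel handles arbitrary M:

```python
def det_2x2(batch):
    # Determinant of each 2x2 matrix in a batch: input shape [*, 2, 2],
    # output shape [*] -- one scalar per matrix, as the Det contract states.
    return [m[0][0] * m[1][1] - m[0][1] * m[1][0] for m in batch]

print(det_2x2([[[1.0, 2.0], [3.0, 4.0]],     # 1*4 - 2*3 = -2
               [[2.0, 0.0], [0.0, 3.0]]]))   # 2*3 - 0*0 =  6
```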
OnnxDet_11¶
 class skl2onnx.algebra.onnx_ops.OnnxDet_11(*args, **kwargs)¶
Version
Onnx name: Det
This version of the operator has been available since version 11.
Summary
Det calculates determinant of a square matrix or batches of square matrices. Det takes one input tensor of shape [*, M, M], where * is zero or more batch dimensions, and the innermost 2 dimensions form square matrices. The output is a tensor of shape [*], containing the determinants of all input submatrices. E.g., when the input is 2-D, the output is a scalar (shape is empty: []).
Inputs
X (heterogeneous)T: Input tensor
Outputs
Y (heterogeneous)T: Output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to floatingpoint tensors.
OnnxDictVectorizer¶
 class skl2onnx.algebra.onnx_ops.OnnxDictVectorizer(*args, **kwargs)¶
Version
Onnx name: DictVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Uses an index mapping to convert a dictionary to an array.
Given a dictionary, each key is looked up in the vocabulary attribute corresponding to the key type. The index into the vocabulary array at which the key is found is then used to index the output 1D tensor ‘Y’ and insert into it the value found in the dictionary ‘X’.
The key type of the input map must correspond to the element type of the defined vocabulary attribute. Therefore, the output array will be equal in length to the index mapping vector parameter. All keys in the input dictionary must be present in the index mapping vector. For each item in the input dictionary, insert its value in the output array. Any keys not present in the input dictionary will be zero in the output array.
For example: if the string_vocabulary parameter is set to ["a", "c", "b", "z"], then an input of {"a": 4, "c": 8} will produce an output of [4, 8, 0, 0].
Attributes
int64_vocabulary: An integer vocabulary array. One and only one of the vocabularies must be defined.
string_vocabulary: A string vocabulary array. One and only one of the vocabularies must be defined.
Inputs
X (heterogeneous)T1: A dictionary.
Outputs
Y (heterogeneous)T2: A 1D tensor holding values from the input dictionary.
Type Constraints
T1 map(string, int64), map(int64, string), map(int64, float), map(int64, double), map(string, float), map(string, double): The input must be a map from strings or integers to either strings or a numeric type. The key and value types cannot be the same.
T2 tensor(int64), tensor(float), tensor(double), tensor(string): The output will be a tensor of the value type of the input map. Its shape will be [1, C], where C is the length of the input dictionary.
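The worked example above can be reproduced with a one-line pure-Python sketch; the function name is illustrative, not part of the skl2onnx API:

```python
def dict_vectorizer(x, string_vocabulary):
    # Each key's value lands at that key's index in the vocabulary;
    # vocabulary entries missing from the input dictionary produce zeros.
    return [x.get(key, 0) for key in string_vocabulary]

print(dict_vectorizer({"a": 4, "c": 8}, ["a", "c", "b", "z"]))  # [4, 8, 0, 0]
```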
OnnxDictVectorizer_1¶
 class skl2onnx.algebra.onnx_ops.OnnxDictVectorizer_1(*args, **kwargs)¶
Version
Onnx name: DictVectorizer
This version of the operator has been available since version 1 of domain ai.onnx.ml.
Summary
Uses an index mapping to convert a dictionary to an array.
Given a dictionary, each key is looked up in the vocabulary attribute corresponding to the key type. The index into the vocabulary array at which the key is found is then used to index the output 1D tensor ‘Y’ and insert into it the value found in the dictionary ‘X’.
The key type of the input map must correspond to the element type of the defined vocabulary attribute. Therefore, the output array will be equal in length to the index mapping vector parameter. All keys in the input dictionary must be present in the index mapping vector. For each item in the input dictionary, insert its value in the output array. Any keys not present in the input dictionary will be zero in the output array.
For example: if the string_vocabulary parameter is set to ["a", "c", "b", "z"], then an input of {"a": 4, "c": 8} will produce an output of [4, 8, 0, 0].
Attributes
int64_vocabulary: An integer vocabulary array. One and only one of the vocabularies must be defined.
string_vocabulary: A string vocabulary array. One and only one of the vocabularies must be defined.
Inputs
X (heterogeneous)T1: A dictionary.
Outputs
Y (heterogeneous)T2: A 1D tensor holding values from the input dictionary.
Type Constraints
T1 map(string, int64), map(int64, string), map(int64, float), map(int64, double), map(string, float), map(string, double): The input must be a map from strings or integers to either strings or a numeric type. The key and value types cannot be the same.
T2 tensor(int64), tensor(float), tensor(double), tensor(string): The output will be a tensor of the value type of the input map. Its shape will be [1, C], where C is the length of the input dictionary.
OnnxDiv¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 14.
Summary
Performs element-wise binary division (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxDiv_1¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv_1(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 1.
Summary
Performs element-wise binary division (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
axis: If set, defines the broadcast dimensions. See doc for details.
broadcast: Pass 1 to enable broadcasting. Default value is 0.
consumed_inputs: legacy optimization attribute.
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDiv_13¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv_13(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 13.
Summary
Performs element-wise binary division (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to high-precision numeric tensors.
OnnxDiv_14¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv_14(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 14.
Summary
Performs element-wise binary division (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
(Opset 14 change): Extend supported types to include uint8, int8, uint16, and int16.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxDiv_6¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv_6(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 6.
Summary
Performs element-wise binary division (with limited broadcast support).
If necessary the right-hand-side argument will be broadcasted to match the shape of the left-hand-side argument. When broadcasting is specified, the second tensor can either be of element size 1 (including a scalar tensor and any tensor with rank equal to or smaller than the first tensor), or having its shape as a contiguous subset of the first tensor’s shape. The starting of the mutually equal shape is specified by the argument “axis”, and if it is not set, suffix matching is assumed. 1-dim expansion doesn’t work yet.
For example, the following tensor shapes are supported (with broadcast=1):
shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar tensor
shape(A) = (2, 3, 4, 5), shape(B) = (1, 1), i.e. B is a 1-element tensor
shape(A) = (2, 3, 4, 5), shape(B) = (5,)
shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
shape(A) = (2, 3, 4, 5), shape(B) = (2), with axis=0
Attribute broadcast=1 needs to be passed to enable broadcasting.
Attributes
axis: If set, defines the broadcast dimensions. See doc for details.
broadcast: Pass 1 to enable broadcasting. Default value is 0.
Inputs
A (heterogeneous)T: First operand, should share the type with the second operand.
B (heterogeneous)T: Second operand. With broadcasting can be of smaller size than A. If broadcasting is disabled it should be of the same size.
Outputs
C (heterogeneous)T: Result, has same dimensions and type as A
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.
OnnxDiv_7¶
 class skl2onnx.algebra.onnx_ops.OnnxDiv_7(*args, **kwargs)¶
Version
Onnx name: Div
This version of the operator has been available since version 7.
Summary
Performs element-wise binary division (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First operand.
B (heterogeneous)T: Second operand.
Outputs
C (heterogeneous)T: Result, has same element type as two inputs
Type Constraints
T tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to high-precision numeric tensors.
OnnxDropout¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 13.
Summary
Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar). It produces two tensor outputs, output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true then the output Y will be a random dropout. Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
seed: (Optional) Seed to the random generator, if not specified we will auto generate one.
Inputs
Between 1 and 3 inputs.
data (heterogeneous)T: The input data as Tensor.
ratio (optional, heterogeneous)T1: The ratio of random dropout, with value in [0, 1). If this input was not set, or if it was set to 0, the output would be a simple copy of the input. If it’s nonzero, output will be a random dropout of the scaled input, which is typically the case during training. It is an optional value, if not specified it will default to 0.5.
training_mode (optional, heterogeneous)T2: If set to true then it indicates dropout is being used for training. It is an optional value hence unless specified explicitly, it is false. If it is false, ratio is ignored and the operation mimics inference mode where nothing will be dropped from the input data and if mask is requested as output it will contain all ones.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T2: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
T1 tensor(float16), tensor(float), tensor(double): Constrain input ‘ratio’ types to float tensors.
T2 tensor(bool): Constrain output ‘mask’ types to boolean tensors.
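The scaling behaviour above can be sketched in NumPy. This is an illustrative reference of the Dropout (opset 13) semantics, not the runtime implementation; the name dropout_reference is ours:

```python
import numpy as np

def dropout_reference(data, ratio=0.5, training_mode=False, seed=None):
    """Illustrative NumPy sketch of Dropout (opset 13) semantics."""
    if not training_mode or ratio == 0.0:
        # Inference mode: output is a copy of the input, mask is all ones.
        return data.copy(), np.ones(data.shape, dtype=bool)
    rng = np.random.default_rng(seed)
    mask = rng.random(data.shape) >= ratio  # keep with probability 1 - ratio
    scale = 1.0 / (1.0 - ratio)             # scale = 1. / (1. - ratio)
    return scale * data * mask, mask

x = np.ones((2, 3), dtype=np.float32)
y, mask = dropout_reference(x)              # training_mode defaults to False
```

In training mode with ratio 0.5, every surviving element of a tensor of ones becomes 2.0, which is what makes the layer a no-op at inference time.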
OnnxDropout_1¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_1(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 1.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done.
Attributes
consumed_inputs: legacy optimization attribute.
is_test: (int, default 0) if nonzero, run dropout in test mode where the output is simply Y = X.
ratio: (float, default 0.5) the ratio of random dropout.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask. If is_test is nonzero, this output is not filled.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDropout_10¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_10(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 10.
Summary
Dropout takes one input floating tensor and produces two tensor outputs, output (floating tensor) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
ratio: The ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T1: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(bool): Constrain output mask types to boolean tensors.
OnnxDropout_12¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_12(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 12.
Summary
Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar). It produces two tensor outputs, output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true then the output Y will be a random dropout; Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
seed: (Optional) Seed to the random generator, if not specified we will auto generate one.
Inputs
Between 1 and 3 inputs.
data (heterogeneous)T: The input data as Tensor.
ratio (optional, heterogeneous)T1: The ratio of random dropout, with value in [0, 1). If this input was not set, or if it was set to 0, the output would be a simple copy of the input. If it’s nonzero, output will be a random dropout of the scaled input, which is typically the case during training. It is an optional value, if not specified it will default to 0.5.
training_mode (optional, heterogeneous)T2: If set to true then it indicates dropout is being used for training. It is an optional value hence unless specified explicitly, it is false. If it is false, ratio is ignored and the operation mimics inference mode where nothing will be dropped from the input data and if mask is requested as output it will contain all ones.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T2: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
T1 tensor(float16), tensor(float), tensor(double): Constrain input ‘ratio’ types to float tensors.
T2 tensor(bool): Constrain output ‘mask’ types to boolean tensors.
OnnxDropout_13¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_13(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 13.
Summary
Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar). It produces two tensor outputs, output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true then the output Y will be a random dropout; Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode, the user can simply not pass training_mode input or set it to false.
output = scale * data * mask,
where
scale = 1. / (1. - ratio).
This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
seed: (Optional) Seed to the random generator, if not specified we will auto generate one.
Inputs
Between 1 and 3 inputs.
data (heterogeneous)T: The input data as Tensor.
ratio (optional, heterogeneous)T1: The ratio of random dropout, with value in [0, 1). If this input was not set, or if it was set to 0, the output would be a simple copy of the input. If it’s nonzero, output will be a random dropout of the scaled input, which is typically the case during training. It is an optional value, if not specified it will default to 0.5.
training_mode (optional, heterogeneous)T2: If set to true then it indicates dropout is being used for training. It is an optional value hence unless specified explicitly, it is false. If it is false, ratio is ignored and the operation mimics inference mode where nothing will be dropped from the input data and if mask is requested as output it will contain all ones.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T2: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
T1 tensor(float16), tensor(float), tensor(double): Constrain input ‘ratio’ types to float tensors.
T2 tensor(bool): Constrain output ‘mask’ types to boolean tensors.
OnnxDropout_6¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_6(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 6.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done.
Attributes
is_test: (int, default 0) if nonzero, run dropout in test mode where the output is simply Y = X.
ratio: (float, default 0.5) the ratio of random dropout.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask. If is_test is nonzero, this output is not filled.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDropout_7¶
 class skl2onnx.algebra.onnx_ops.OnnxDropout_7(*args, **kwargs)¶
Version
Onnx name: Dropout
This version of the operator has been available since version 7.
Summary
Dropout takes one input data (Tensor<float>) and produces two Tensor outputs, output (Tensor<float>) and mask (Tensor<bool>). Depending on whether it is in test mode or not, the output Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done. This operator has optional inputs/outputs. See ONNX for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.
Attributes
ratio: The ratio of random dropout. Default value is 0.5.
Inputs
data (heterogeneous)T: The input data as Tensor.
Outputs
Between 1 and 2 outputs.
output (heterogeneous)T: The output.
mask (optional, heterogeneous)T: The output mask.
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxDynamicQuantizeLinear¶
 class skl2onnx.algebra.onnx_ops.OnnxDynamicQuantizeLinear(*args, **kwargs)¶
Version
Onnx name: DynamicQuantizeLinear
This version of the operator has been available since version 11.
Summary
A Function to fuse calculation for Scale, Zero Point and FP32->8-bit conversion of FP32 Input data. Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input. Scale is calculated as:
y_scale = (max(x) - min(x)) / (qmax - qmin)
where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in case of uint8, and the data range is adjusted to include 0.
Zero point is calculated as:
intermediate_zero_point = qmin - min(x) / y_scale
y_zero_point = cast(round(saturate(intermediate_zero_point)))
where saturation clamps to [0, 255] for uint8 or [-127, 127] for int8 (right now only uint8 is supported), and rounding is to nearest, ties to even.
Data quantization formula is:
y = saturate(round(x / y_scale) + y_zero_point)
with the same saturation and rounding rules.
Inputs
x (heterogeneous)T1: Input tensor
Outputs
y (heterogeneous)T2: Quantized output tensor
y_scale (heterogeneous)tensor(float): Output scale. It’s a scalar, which means a pertensor/layer quantization.
y_zero_point (heterogeneous)T2: Output zero point. It’s a scalar, which means a pertensor/layer quantization.
Type Constraints
T1 tensor(float): Constrain ‘x’ to float tensor.
T2 tensor(uint8): Constrain ‘y_zero_point’ and ‘y’ to 8bit unsigned integer tensor.
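The formulas above can be checked with a small NumPy sketch (uint8 only, assuming a nonzero data range; dynamic_quantize_reference is our own name, not part of skl2onnx):

```python
import numpy as np

def dynamic_quantize_reference(x):
    """Illustrative NumPy sketch of the DynamicQuantizeLinear formulas (uint8)."""
    qmin, qmax = 0, 255
    # The data range is adjusted to include 0.
    x_min = min(0.0, float(x.min()))
    x_max = max(0.0, float(x.max()))
    y_scale = (x_max - x_min) / (qmax - qmin)
    intermediate_zero_point = qmin - x_min / y_scale
    # Round to nearest, ties to even, then saturate to [0, 255].
    y_zero_point = np.uint8(np.clip(np.rint(intermediate_zero_point), qmin, qmax))
    y = np.clip(np.rint(x / y_scale) + y_zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(y_scale), y_zero_point

x = np.array([0.0, 1.0, 2.0, 4.0], dtype=np.float32)
y, y_scale, y_zero_point = dynamic_quantize_reference(x)
```

For this non-negative input the zero point is 0 and the scale is (4 - 0) / 255, so the maximum input maps exactly to 255.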
OnnxDynamicQuantizeLinear_11¶
 class skl2onnx.algebra.onnx_ops.OnnxDynamicQuantizeLinear_11(*args, **kwargs)¶
Version
Onnx name: DynamicQuantizeLinear
This version of the operator has been available since version 11.
Summary
A Function to fuse calculation for Scale, Zero Point and FP32->8-bit conversion of FP32 Input data. Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input. Scale is calculated as:
y_scale = (max(x) - min(x)) / (qmax - qmin)
where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in case of uint8, and the data range is adjusted to include 0.
Zero point is calculated as:
intermediate_zero_point = qmin - min(x) / y_scale
y_zero_point = cast(round(saturate(intermediate_zero_point)))
where saturation clamps to [0, 255] for uint8 or [-127, 127] for int8 (right now only uint8 is supported), and rounding is to nearest, ties to even.
Data quantization formula is:
y = saturate(round(x / y_scale) + y_zero_point)
with the same saturation and rounding rules.
Inputs
x (heterogeneous)T1: Input tensor
Outputs
y (heterogeneous)T2: Quantized output tensor
y_scale (heterogeneous)tensor(float): Output scale. It’s a scalar, which means a pertensor/layer quantization.
y_zero_point (heterogeneous)T2: Output zero point. It’s a scalar, which means a pertensor/layer quantization.
Type Constraints
T1 tensor(float): Constrain ‘x’ to float tensor.
T2 tensor(uint8): Constrain ‘y_zero_point’ and ‘y’ to 8bit unsigned integer tensor.
OnnxEinsum¶
 class skl2onnx.algebra.onnx_ops.OnnxEinsum(*args, **kwargs)¶
Version
Onnx name: Einsum
This version of the operator has been available since version 12.
Summary
An einsum of the form
`term1, term2 -> output-term`
produces an output tensor using the following equation
`output[output-term] = reduce-sum( input1[term1] * input2[term2] )`
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2) that do not occur in the output-term. The Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation convention. The equation string contains a comma-separated sequence of lower case letters. Each term corresponds to an operand tensor, and the characters within the terms correspond to operands dimensions. This sequence may be followed by "->" to separate the left and right hand side of the equation. If the equation contains "->" followed by the right-hand side, the explicit (not classical) form of the Einstein summation is performed, and the right-hand side indices indicate output tensor dimensions. In other cases, output indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the equation. When a dimension character is repeated in the left-hand side, it represents summation along the dimension. The equation may contain ellipsis ("...") to enable broadcasting. Ellipsis must indicate a fixed number of dimensions. Specifically, every occurrence of ellipsis in the equation must represent the same number of dimensions. The right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the beginning of the output. The equation string may contain space (U+0020) character.
Attributes
equation: (required) Einsum expression string.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic, heterogeneous)T: Operands
Outputs
Output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numerical tensor types.
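np.einsum follows the same Einstein summation convention, so it serves as a reference for both the explicit and implicit forms described above:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Explicit form: "->" names the output indices; "ij,jk->ik" is matrix multiplication.
c = np.einsum("ij,jk->ik", a, b)

# Repeated index on the left-hand side sums along it: "ii" is the trace.
t = np.einsum("ii", np.eye(3))
```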
OnnxEinsum_12¶
 class skl2onnx.algebra.onnx_ops.OnnxEinsum_12(*args, **kwargs)¶
Version
Onnx name: Einsum
This version of the operator has been available since version 12.
Summary
An einsum of the form
`term1, term2 -> output-term`
produces an output tensor using the following equation
`output[output-term] = reduce-sum( input1[term1] * input2[term2] )`
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2) that do not occur in the output-term. The Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation convention. The equation string contains a comma-separated sequence of lower case letters. Each term corresponds to an operand tensor, and the characters within the terms correspond to operands dimensions. This sequence may be followed by "->" to separate the left and right hand side of the equation. If the equation contains "->" followed by the right-hand side, the explicit (not classical) form of the Einstein summation is performed, and the right-hand side indices indicate output tensor dimensions. In other cases, output indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the equation. When a dimension character is repeated in the left-hand side, it represents summation along the dimension. The equation may contain ellipsis ("...") to enable broadcasting. Ellipsis must indicate a fixed number of dimensions. Specifically, every occurrence of ellipsis in the equation must represent the same number of dimensions. The right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the beginning of the output. The equation string may contain space (U+0020) character.
Attributes
equation: (required) Einsum expression string.
Inputs
Between 1 and 2147483647 inputs.
Inputs (variadic, heterogeneous)T: Operands
Outputs
Output (heterogeneous)T: Output tensor
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numerical tensor types.
OnnxElu¶
 class skl2onnx.algebra.onnx_ops.OnnxElu(*args, **kwargs)¶
Version
Onnx name: Elu
This version of the operator has been available since version 6.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
Attributes
alpha: Coefficient of ELU. Default value is 1.0.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
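The piecewise definition of Elu is easy to mirror in NumPy (illustrative sketch; elu_reference is our own name):

```python
import numpy as np

def elu_reference(x, alpha=1.0):
    """f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0."""
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

x = np.array([-1.0, 0.0, 2.0])
y = elu_reference(x)
```

Negative inputs saturate towards -alpha while non-negative inputs pass through unchanged.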
OnnxElu_1¶
 class skl2onnx.algebra.onnx_ops.OnnxElu_1(*args, **kwargs)¶
Version
Onnx name: Elu
This version of the operator has been available since version 1.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
Attributes
alpha: Coefficient of ELU. Default value is 1.0.
consumed_inputs: legacy optimization attribute.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxElu_6¶
 class skl2onnx.algebra.onnx_ops.OnnxElu_6(*args, **kwargs)¶
Version
Onnx name: Elu
This version of the operator has been available since version 6.
Summary
Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0., is applied to the tensor elementwise.
Attributes
alpha: Coefficient of ELU. Default value is 1.0.
Inputs
X (heterogeneous)T: 1D input tensor
Outputs
Y (heterogeneous)T: 1D output tensor
Type Constraints
T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.
OnnxEqual¶
 class skl2onnx.algebra.onnx_ops.OnnxEqual(*args, **kwargs)¶
Version
Onnx name: Equal
This version of the operator has been available since version 13.
Summary
Returns the tensor resulted from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
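As with the other broadcasting operators, a NumPy comparison mirrors Equal's semantics, including the boolean output type:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.int64)
b = np.array([1, 4], dtype=np.int64)  # shape (2,), broadcast across rows
c = a == b                            # boolean result tensor, shape (2, 2)
```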
OnnxEqual_1¶
 class skl2onnx.algebra.onnx_ops.OnnxEqual_1(*args, **kwargs)¶
Version
Onnx name: Equal
This version of the operator has been available since version 1.
Summary
Returns the tensor resulted from performing the equal logical operation elementwise on the input tensors A and B.
If broadcasting is enabled, the righthandside argument will be broadcasted to match the shape of lefthandside argument. See the doc of Add for a detailed description of the broadcasting rules.
Attributes
axis: If set, defines the broadcast dimensions.
broadcast: Enable broadcasting. Default value is 0.
Inputs
A (heterogeneous)T: Left input tensor for the logical operator.
B (heterogeneous)T: Right input tensor for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(int32), tensor(int64): Constrains input to integral tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_11¶
 class skl2onnx.algebra.onnx_ops.OnnxEqual_11(*args, **kwargs)¶
Version
Onnx name: Equal
This version of the operator has been available since version 11.
Summary
Returns the tensor resulted from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_13¶
 class skl2onnx.algebra.onnx_ops.OnnxEqual_13(*args, **kwargs)¶
Version
Onnx name: Equal
This version of the operator has been available since version 13.
Summary
Returns the tensor resulted from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrains input types to all numeric tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxEqual_7¶
 class skl2onnx.algebra.onnx_ops.OnnxEqual_7(*args, **kwargs)¶
Version
Onnx name: Equal
This version of the operator has been available since version 7.
Summary
Returns the tensor resulted from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).
This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.
Inputs
A (heterogeneous)T: First input operand for the logical operator.
B (heterogeneous)T: Second input operand for the logical operator.
Outputs
C (heterogeneous)T1: Result tensor.
Type Constraints
T tensor(bool), tensor(int32), tensor(int64): Constrains input to integral tensors.
T1 tensor(bool): Constrains output to boolean tensor.
OnnxErf¶
 class skl2onnx.algebra.onnx_ops.OnnxErf(*args, **kwargs)¶
Version
Onnx name: Erf
This version of the operator has been available since version 13.
Summary
Computes the error function of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The error function of the input tensor computed elementwise. It has the same shape and type of the input.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
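The standard library's math.erf provides a scalar reference for the elementwise behaviour (illustrative only):

```python
import math
import numpy as np

x = np.array([-1.0, 0.0, 1.0])
# Apply the error function element by element; erf is odd, so erf(-x) == -erf(x).
y = np.array([math.erf(v) for v in x])
```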
OnnxErf_13¶
 class skl2onnx.algebra.onnx_ops.OnnxErf_13(*args, **kwargs)¶
Version
Onnx name: Erf
This version of the operator has been available since version 13.
Summary
Computes the error function of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The error function of the input tensor computed elementwise. It has the same shape and type of the input.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to all numeric tensors.
OnnxErf_9¶
 class skl2onnx.algebra.onnx_ops.OnnxErf_9(*args, **kwargs)¶
Version
Onnx name: Erf
This version of the operator has been available since version 9.
Summary
Computes the error function of the given input tensor elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The error function of the input tensor computed elementwise. It has the same shape and type of the input.
Type Constraints
T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double): Constrain input and output types to all numeric tensors.
OnnxExp¶
 class skl2onnx.algebra.onnx_ops.OnnxExp(*args, **kwargs)¶
Version
Onnx name: Exp
This version of the operator has been available since version 13.
Summary
Calculates the exponential of the given input tensor, elementwise.
Inputs
input (heterogeneous)T: Input tensor
Outputs
output (heterogeneous)T: The exponential of the input tensor computed elementwise
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
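The semantics match np.exp applied elementwise, preserving shape and dtype:

```python
import numpy as np

x = np.array([0.0, 1.0], dtype=np.float32)
y = np.exp(x)  # same shape and dtype as the input
```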