MeanVarianceNormalization¶
MeanVarianceNormalization - 13¶
Version¶
domain: main
since_version: 13
function: True
support_level: SupportType.COMMON
shape inference: False
This version of the operator has been available since version 13.
Summary¶
A MeanVarianceNormalization function: performs mean-variance normalization
on the input tensor X using the formula (X - E[X]) / sqrt(E[(X - E[X])^2]).
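The same formula, as a minimal NumPy sketch; the axes default [0, 2, 3] and the epsilon 1e-9 mirror the function body below, while the function name and input shape are illustrative only:

import numpy as np

def mvn_reference(x: np.ndarray, axes=(0, 2, 3), eps=np.float32(1e-9)) -> np.ndarray:
    # E[X] over the reduction axes, keeping dims so the mean broadcasts against X.
    mean = x.mean(axis=axes, keepdims=True)
    # sqrt(E[(X - E[X])^2]), i.e. the standard deviation over the same axes.
    std = np.sqrt(((x - mean) ** 2).mean(axis=axes, keepdims=True))
    return (x - mean) / (std + eps)

x = np.random.randn(2, 3, 4, 5).astype(np.float32)  # illustrative (N, C, H, W) input
y = mvn_reference(x)  # each channel of y now has ~zero mean and ~unit variance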
Function Body¶
The function definition for this operator.
<
domain: "",
opset_import: ["" : 18]
>
MeanVarianceNormalization <axes>(X) => (Y)
{
Exponent = Constant <value: tensor = float {2}> ()
Epsilon = Constant <value: tensor = float {1e-09}> ()
axes = Constant <value_ints: ints = @axes> ()
X_RM = ReduceMean (X, axes)
EX_squared = Pow (X_RM, Exponent)
X_squared = Pow (X, Exponent)
E_Xsquared = ReduceMean (X_squared, axes)
Variance = Sub (E_Xsquared, EX_squared)
STD = Sqrt (Variance)
X_variance = Sub (X, X_RM)
Processed_STD = Add (STD, Epsilon)
Y = Div (X_variance, Processed_STD)
}
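As a reading aid, here is a step-by-step NumPy mirror of the function body above. The variable names follow the body, and keepdims=True reflects ReduceMean's default keepdims=1; this is a sketch, not the normative definition:

import numpy as np

def mvn_function_body(x: np.ndarray, axes=(0, 2, 3)) -> np.ndarray:
    exponent = np.float32(2.0)                             # Exponent constant
    epsilon = np.float32(1e-9)                             # Epsilon constant
    x_rm = x.mean(axis=axes, keepdims=True)                # X_RM = ReduceMean(X, axes)
    ex_squared = np.power(x_rm, exponent)                  # EX_squared = Pow(X_RM, Exponent)
    x_squared = np.power(x, exponent)                      # X_squared = Pow(X, Exponent)
    e_xsquared = x_squared.mean(axis=axes, keepdims=True)  # E_Xsquared = ReduceMean(X_squared, axes)
    variance = e_xsquared - ex_squared                     # Variance = Sub(E_Xsquared, EX_squared)
    std = np.sqrt(variance)                                # STD = Sqrt(Variance)
    x_centered = x - x_rm                                  # X_variance = Sub(X, X_RM)
    return x_centered / (std + epsilon)                    # Y = Div(X_variance, Add(STD, Epsilon))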
Attributes¶
axes - INTS (default is ['0', '2', '3']): A list of integers along which to reduce. The default is to reduce over axes [0, 2, 3], computing the mean and variance for each channel. Two values with the same C-coordinate are associated with the same mean and variance.
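For example, a node that sets the axes attribute explicitly can be built with onnx.helper; the tensor names X and Y are placeholders:

from onnx import helper

node = helper.make_node(
    "MeanVarianceNormalization",
    inputs=["X"],
    outputs=["Y"],
    axes=[0, 2, 3],  # explicit here, but [0, 2, 3] is also the default
)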
Inputs¶
X (heterogeneous) - T:
Input tensor
Outputs¶
Y (heterogeneous) - T:
Output tensor
Type Constraints¶
T in (tensor(bfloat16), tensor(double), tensor(float), tensor(float16)): Constrain input and output types to all numeric tensors.
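A minimal sketch of a complete model exercising one of these types (float16 input and output; the shape and graph name are illustrative), validated with the ONNX checker:

import onnx
from onnx import helper, TensorProto

X = helper.make_tensor_value_info("X", TensorProto.FLOAT16, [2, 3, 4, 5])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT16, [2, 3, 4, 5])
node = helper.make_node("MeanVarianceNormalization", ["X"], ["Y"], axes=[0, 2, 3])
graph = helper.make_graph([node], "mvn_fp16", [X], [Y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)  # should pass: float16 satisfies constraint T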
MeanVarianceNormalization - 9¶
Version¶
domain: main
since_version: 9
function: True
support_level: SupportType.COMMON
shape inference: False
This version of the operator has been available since version 9.
Summary¶
A MeanVarianceNormalization function: performs mean-variance normalization
on the input tensor X using the formula (X - E[X]) / sqrt(E[(X - E[X])^2]).
Function Body¶
The function definition for this operator.
<
domain: "",
opset_import: ["" : 9]
>
MeanVarianceNormalization <axes>(X) => (Y)
{
Exponent = Constant <value: tensor = float {2}> ()
Epsilon = Constant <value: tensor = float {1e-09}> ()
X_RM = ReduceMean <axes: ints = @axes> (X)
EX_squared = Pow (X_RM, Exponent)
X_squared = Pow (X, Exponent)
E_Xsquared = ReduceMean <axes: ints = @axes> (X_squared)
Variance = Sub (E_Xsquared, EX_squared)
STD = Sqrt (Variance)
X_variance = Sub (X, X_RM)
Processed_STD = Add (STD, Epsilon)
Y = Div (X_variance, Processed_STD)
}
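The main difference from the version-13 body above is how axes reaches ReduceMean: here it is an attribute of ReduceMean, whereas the version-13 body (rendered against opset 18) feeds axes in as a tensor produced by a Constant node. A small sketch of the two node forms, with placeholder tensor names:

import numpy as np
from onnx import helper, numpy_helper

# Opset-9 style, as in this function body: axes is an attribute of ReduceMean.
rm_attr = helper.make_node("ReduceMean", ["X"], ["X_RM"], axes=[0, 2, 3])

# Opset-18 style, as in the version-13 body: axes arrives as a second input,
# e.g. an int64 initializer or the output of a Constant node.
axes_tensor = numpy_helper.from_array(np.array([0, 2, 3], dtype=np.int64), name="axes")
rm_input = helper.make_node("ReduceMean", ["X", "axes"], ["X_RM"])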
Attributes¶
axes - INTS (default is ['0', '2', '3']): A list of integers along which to reduce. The default is to reduce over axes [0, 2, 3], computing the mean and variance for each channel. Two values with the same C-coordinate are associated with the same mean and variance.
Inputs¶
X (heterogeneous) - T:
Input tensor
Outputs¶
Y (heterogeneous) - T:
Output tensor
Type Constraints¶
T in (tensor(double), tensor(float), tensor(float16)): Constrain input and output types to all numeric tensors.