Selu¶
Selu - 22¶
Version¶
name: Selu (GitHub)
domain: main
since_version: 22
function: True
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 22.
Summary¶
Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0, is applied to the tensor elementwise.
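As a quick illustration, the formula above can be written directly in NumPy (a sketch, not part of the specification; the constants are the float32 defaults listed under Attributes):

import numpy as np

def selu(x, alpha=1.6732631921768188, gamma=1.0507010221481323):
    # y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0
    return np.where(x <= 0, gamma * (alpha * np.exp(x) - alpha), gamma * x)

print(selu(np.array([-1.0, 0.0, 1.0], dtype=np.float32)))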
Function Body¶
The function definition for this operator.
<
domain: "",
opset_import: ["" : 18]
>
Selu <gamma,alpha>(X) => (Y)
{
Alpha = Constant <value_float: float = @alpha> ()
AlphaCast = CastLike (Alpha, X)
Gamma = Constant <value_float: float = @gamma> ()
GammaCast = CastLike (Gamma, X)
Zero = Constant <value: tensor = float {0}> ()
ZeroCast = CastLike (Zero, X)
ExpX = Exp (X)
AlphaMulExpX = Mul (AlphaCast, ExpX)
AlphaMulExpXSubAlpha = Sub (AlphaMulExpX, AlphaCast)
Neg = Mul (GammaCast, AlphaMulExpXSubAlpha)
Pos = Mul (GammaCast, X)
XLessThanZero = Less (X, ZeroCast)
Y = Where (XLessThanZero, Neg, Pos)
}
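To see this function expansion in action, here is a minimal sketch (assuming an onnx release recent enough to register opset 22) that builds a one-node Selu model with the default coefficients made explicit and evaluates it with the onnx reference implementation:

import numpy as np
import onnx
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# One-node graph: Y = Selu(X), attributes spelled out explicitly.
node = helper.make_node(
    "Selu", inputs=["X"], outputs=["Y"],
    alpha=1.6732631921768188, gamma=1.0507010221481323,
)
graph = helper.make_graph(
    [node], "selu_example",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [None])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 22)])
onnx.checker.check_model(model)

# The reference evaluator expands the function body shown above.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0], dtype=np.float32)
(y,) = ReferenceEvaluator(model).run(None, {"X": x})
print(y)  # negative entries follow gamma*(alpha*e^x - alpha), positive gamma*x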
Attributes¶
alpha - FLOAT (default is '1.67326'): Coefficient of SELU, defaulting to 1.67326319217681884765625 (i.e., the float32 approximation of 1.6732632423543772848170429916717).
gamma - FLOAT (default is '1.0507'): Coefficient of SELU, defaulting to 1.05070102214813232421875 (i.e., the float32 approximation of 1.0507009873554804934193349852946).
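The quoted defaults are simply the float32 roundings of the mathematical SELU constants, which can be verified directly (a sketch assuming NumPy):

import numpy as np

# Round the exact constants to float32 and print the stored default values
print(float(np.float32(1.6732632423543772848170429916717)))  # 1.6732631921768188
print(float(np.float32(1.0507009873554804934193349852946)))  # 1.0507010221481323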
Inputs¶
X (heterogeneous) - T:
Input tensor
Outputs¶
Y (heterogeneous) - T:
Output tensor
Type Constraints¶
T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
Selu - 6¶
Version¶
name: Selu (GitHub)
domain: main
since_version: 6
function: True
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 6.
Summary¶
Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0, is applied to the tensor elementwise.
Function Body¶
The function definition for this operator.
<
domain: "",
opset_import: ["" : 18]
>
Selu <gamma,alpha>(X) => (Y)
{
Alpha = Constant <value_float: float = @alpha> ()
AlphaCast = CastLike (Alpha, X)
Gamma = Constant <value_float: float = @gamma> ()
GammaCast = CastLike (Gamma, X)
Zero = Constant <value: tensor = float {0}> ()
ZeroCast = CastLike (Zero, X)
ExpX = Exp (X)
AlphaMulExpX = Mul (AlphaCast, ExpX)
AlphaMulExpXSubAlpha = Sub (AlphaMulExpX, AlphaCast)
Neg = Mul (GammaCast, AlphaMulExpXSubAlpha)
Pos = Mul (GammaCast, X)
XLessThanZero = Less (X, ZeroCast)
Y = Where (XLessThanZero, Neg, Pos)
}
Attributes¶
alpha - FLOAT (default is '1.67326'): Coefficient of SELU, defaulting to 1.67326319217681884765625 (i.e., the float32 approximation of 1.6732632423543772848170429916717).
gamma - FLOAT (default is '1.0507'): Coefficient of SELU, defaulting to 1.05070102214813232421875 (i.e., the float32 approximation of 1.0507009873554804934193349852946).
Inputs¶
X (heterogeneous) - T:
Input tensor
Outputs¶
Y (heterogeneous) - T:
Output tensor
Type Constraints¶
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
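Relative to Selu-22, this version only narrows the type constraint (no tensor(bfloat16)). A sketch comparing the two schemas programmatically, assuming an onnx build that registers opset 22:

import onnx.defs

# get_schema returns the schema in effect at the given opset version
for version in (6, 22):
    schema = onnx.defs.get_schema("Selu", version)
    print(version, schema.type_constraints[0].allowed_type_strs)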
Selu - 1¶
Version¶
name: Selu (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: False
This version of the operator has been available since version 1.
Summary¶
Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0, is applied to the tensor elementwise.
Attributes¶
alpha - FLOAT (default is '1.6732'): Coefficient of SELU, defaulting to 1.6732.
consumed_inputs - INTS: Legacy optimization attribute.
gamma - FLOAT (default is '1.0507'): Coefficient of SELU, defaulting to 1.0507.
Inputs¶
X (heterogeneous) - T:
Input tensor
Outputs¶
Y (heterogeneous) - T:
Output tensor
Type Constraints¶
T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.