HardSigmoid
HardSigmoid - 22
Version
- name: HardSigmoid (GitHub)
- domain: main
- since_version: 22
- function: True
- support_level: SupportType.COMMON
- shape inference: True
This version of the operator has been available since version 22.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
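The elementwise definition can be checked directly with NumPy. The following is a minimal reference sketch of the formula above, not the ONNX implementation itself; the function name and sample values are chosen for illustration.

import numpy as np

def hardsigmoid(x: np.ndarray, alpha: float = 0.2, beta: float = 0.5) -> np.ndarray:
    # y = max(0, min(1, alpha * x + beta)), applied elementwise
    return np.clip(alpha * x + beta, 0.0, 1.0)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0], dtype=np.float32)
print(hardsigmoid(x))  # [0.  0.3 0.5 0.7 1. ]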
Function Body
The function definition for this operator.
<
  domain: "",
  opset_import: ["" : 18]
>
HardSigmoid <beta,alpha>(X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Beta = Constant <value_float: float = @beta> ()
   BetaCast = CastLike (Beta, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   AlphaMulX = Mul (X, AlphaCast)
   AlphaMulXAddBeta = Add (AlphaMulX, BetaCast)
   MinOneOrAlphaMulXAddBeta = Min (AlphaMulXAddBeta, OneCast)
   Y = Max (MinOneOrAlphaMulXAddBeta, ZeroCast)
}
Attributes
- alpha - FLOAT (default is '0.2'): Value of alpha.
- beta - FLOAT (default is '0.5'): Value of beta.
Inputs
- X (heterogeneous) - T: Input tensor
Outputs
- Y (heterogeneous) - T: Output tensor
Type Constraints
- T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
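As a usage sketch, the operator can be exercised through the onnx Python API by building a one-node graph and evaluating it with onnx.reference.ReferenceEvaluator. This assumes an onnx release recent enough to ship opset 22 and the reference evaluator; the tensor names and shapes are illustrative.

import numpy as np
from onnx import TensorProto, checker, helper
from onnx.reference import ReferenceEvaluator

# Single HardSigmoid node with the default attribute values made explicit.
node = helper.make_node("HardSigmoid", inputs=["X"], outputs=["Y"], alpha=0.2, beta=0.5)
graph = helper.make_graph(
    [node],
    "hardsigmoid_example",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [5])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [5])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 22)])
checker.check_model(model)

sess = ReferenceEvaluator(model)
x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0], dtype=np.float32)
print(sess.run(None, {"X": x})[0])  # [0.  0.3 0.5 0.7 1. ]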
HardSigmoid - 6
Version
- name: HardSigmoid (GitHub)
- domain: main
- since_version: 6
- function: True
- support_level: SupportType.COMMON
- shape inference: True
This version of the operator has been available since version 6.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Function Body
The function definition for this operator.
<
  domain: "",
  opset_import: ["" : 18]
>
HardSigmoid <beta,alpha>(X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Beta = Constant <value_float: float = @beta> ()
   BetaCast = CastLike (Beta, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   AlphaMulX = Mul (X, AlphaCast)
   AlphaMulXAddBeta = Add (AlphaMulX, BetaCast)
   MinOneOrAlphaMulXAddBeta = Min (AlphaMulXAddBeta, OneCast)
   Y = Max (MinOneOrAlphaMulXAddBeta, ZeroCast)
}
Attributes
- alpha - FLOAT (default is '0.2'): Value of alpha.
- beta - FLOAT (default is '0.5'): Value of beta.
Inputs
- X (heterogeneous) - T: Input tensor
Outputs
- Y (heterogeneous) - T: Output tensor
Type Constraints
- T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.
HardSigmoid - 1
Version
- name: HardSigmoid (GitHub)
- domain: main
- since_version: 1
- function: False
- support_level: SupportType.COMMON
- shape inference: False
This version of the operator has been available since version 1.
Summary
HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
Attributes
- alpha - FLOAT (default is '0.2'): Value of alpha, defaults to 0.2.
- beta - FLOAT (default is '0.5'): Value of beta, defaults to 0.5.
- consumed_inputs - INTS: Legacy optimization attribute.
Inputs
- X (heterogeneous) - T: Input tensor
Outputs
- Y (heterogeneous) - T: Output tensor
Type Constraints
- T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.