HardSigmoid

HardSigmoid - 22

Version

  • name: HardSigmoid

  • domain: main

  • since_version: 22

  • function: True

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 22.

Summary

HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.
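As a sketch, the formula can be reproduced elementwise with NumPy (the function name below is illustrative, not part of the ONNX API):

```python
import numpy as np

def hard_sigmoid(x: np.ndarray, alpha: float = 0.2, beta: float = 0.5) -> np.ndarray:
    # y = max(0, min(1, alpha * x + beta)), applied elementwise
    return np.clip(alpha * x + beta, 0.0, 1.0)

x = np.array([-5.0, 0.0, 5.0], dtype=np.float32)
y = hard_sigmoid(x)  # -> [0.0, 0.5, 1.0] with the default alpha=0.2, beta=0.5
```

With the defaults, inputs at or below -2.5 saturate to 0 and inputs at or above 2.5 saturate to 1; the function is linear in between.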

Function Body

The function definition for this operator.

<
  domain: "",
  opset_import: ["" : 18]
>
HardSigmoid <beta,alpha>(X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Beta = Constant <value_float: float = @beta> ()
   BetaCast = CastLike (Beta, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   AlphaMulX = Mul (X, AlphaCast)
   AlphaMulXAddBeta = Add (AlphaMulX, BetaCast)
   MinOneOrAlphaMulXAddBeta = Min (AlphaMulXAddBeta, OneCast)
   Y = Max (MinOneOrAlphaMulXAddBeta, ZeroCast)
}
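The decomposition above can be mirrored step by step in NumPy. Here CastLike is modeled as a cast of each scalar constant to the input's dtype, which is what keeps the whole graph in X's precision; the variable names follow the function body, but the code is an illustrative sketch, not the ONNX runtime:

```python
import numpy as np

def hard_sigmoid_graph(X: np.ndarray, alpha: float = 0.2, beta: float = 0.5) -> np.ndarray:
    # Constant + CastLike: materialize each scalar in X's dtype
    AlphaCast = np.asarray(alpha, dtype=X.dtype)
    BetaCast = np.asarray(beta, dtype=X.dtype)
    ZeroCast = np.asarray(0, dtype=X.dtype)
    OneCast = np.asarray(1, dtype=X.dtype)
    AlphaMulX = np.multiply(X, AlphaCast)                         # Mul
    AlphaMulXAddBeta = np.add(AlphaMulX, BetaCast)                # Add
    MinOneOrAlphaMulXAddBeta = np.minimum(AlphaMulXAddBeta, OneCast)  # Min
    return np.maximum(MinOneOrAlphaMulXAddBeta, ZeroCast)             # Max

X = np.array([-10.0, 1.0, 10.0], dtype=np.float16)
Y = hard_sigmoid_graph(X)  # result stays in float16, like the CastLike-based graph
```

Clamping with Min against 1 before Max against 0 is equivalent to the clip in the summary formula; the ordering of the two bounds does not matter because the interval [0, 1] is non-empty.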

Attributes

  • alpha - FLOAT (default is '0.2'):

    Value of alpha.

  • beta - FLOAT (default is '0.5'):

    Value of beta.

Inputs

  • X (heterogeneous) - T:

    Input tensor

Outputs

  • Y (heterogeneous) - T:

    Output tensor

Type Constraints

  • T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ):

    Constrain input and output types to float tensors.

HardSigmoid - 6

Version

  • name: HardSigmoid

  • domain: main

  • since_version: 6

  • function: True

  • support_level: SupportType.COMMON

  • shape inference: True

This version of the operator has been available since version 6.

Summary

HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.

Function Body

The function definition for this operator.

<
  domain: "",
  opset_import: ["" : 18]
>
HardSigmoid <beta,alpha>(X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Beta = Constant <value_float: float = @beta> ()
   BetaCast = CastLike (Beta, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   AlphaMulX = Mul (X, AlphaCast)
   AlphaMulXAddBeta = Add (AlphaMulX, BetaCast)
   MinOneOrAlphaMulXAddBeta = Min (AlphaMulXAddBeta, OneCast)
   Y = Max (MinOneOrAlphaMulXAddBeta, ZeroCast)
}

Attributes

  • alpha - FLOAT (default is '0.2'):

    Value of alpha.

  • beta - FLOAT (default is '0.5'):

    Value of beta.

Inputs

  • X (heterogeneous) - T:

    Input tensor

Outputs

  • Y (heterogeneous) - T:

    Output tensor

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ):

    Constrain input and output types to float tensors.

HardSigmoid - 1

Version

  • name: HardSigmoid

  • domain: main

  • since_version: 1

  • function: False

  • support_level: SupportType.COMMON

  • shape inference: False

This version of the operator has been available since version 1.

Summary

HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.

Attributes

  • alpha - FLOAT (default is '0.2'):

    Value of alpha, defaulting to 0.2.

  • beta - FLOAT (default is '0.5'):

    Value of beta, defaulting to 0.5.

  • consumed_inputs - INTS:

    Legacy optimization attribute.

Inputs

  • X (heterogeneous) - T:

    Input tensor

Outputs

  • Y (heterogeneous) - T:

    Output tensor

Type Constraints

  • T in ( tensor(double), tensor(float), tensor(float16) ):

    Constrain input and output types to float tensors.