HardSwish - 14 vs 22

The next section compares an older and a newer version of the same operator after both definitions are converted into markdown text. Green (lines prefixed with `+`) marks an addition in the newer version, red (lines prefixed with `-`) marks a deletion. Anything else is unchanged.

Files changed (1)
  1. HardSwish14 → HardSwish22 +2 -2
HardSwish14 → HardSwish22 RENAMED
@@ -1 +1 @@
  HardSwish takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where
  the HardSwish function, y = x * max(0, min(1, alpha * x + beta)) = x * HardSigmoid<alpha, beta>(x),
  where alpha = 1/6 and beta = 0.5, is applied to the tensor elementwise.
  #### Function Body
  The function definition for this operator.
  <
  domain: "",
- opset_import: ["" : 14]
+ opset_import: ["" : 22]
  >
  HardSwish (X) => (Y)
  {
  HS_X = HardSigmoid <alpha: float = 0.166667, beta: float = 0.5> (X)
  Y = Mul (X, HS_X)
  }
  ### Inputs
  - **X** (heterogeneous) - **T**:
  Input tensor
  ### Outputs
  - **Y** (heterogeneous) - **T**:
  Output tensor
  ### Type Constraints
- * **T** in ( tensor(double), tensor(float), tensor(float16) ):
+ * **T** in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ):
  Constrain input and output types to float tensors.
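
As a quick sanity check of the formula above, here is a minimal NumPy sketch (an illustration, not part of the operator spec or either opset) that applies y = x * max(0, min(1, alpha * x + beta)) elementwise with alpha = 1/6 and beta = 0.5, i.e. x * HardSigmoid<alpha, beta>(x), mirroring the HardSigmoid-then-Mul composition in the function body. The sample input values are arbitrary.

```python
import numpy as np

# Attribute values from the HardSwish definition: alpha = 1/6 (0.166667), beta = 0.5.
ALPHA, BETA = 1.0 / 6.0, 0.5

def hard_sigmoid(x: np.ndarray) -> np.ndarray:
    # HardSigmoid<alpha, beta>(x) = max(0, min(1, alpha * x + beta))
    return np.clip(ALPHA * x + BETA, 0.0, 1.0)

def hard_swish(x: np.ndarray) -> np.ndarray:
    # HardSwish(x) = x * HardSigmoid<alpha, beta>(x), applied elementwise,
    # matching the HS_X = HardSigmoid(...); Y = Mul(X, HS_X) function body above.
    return x * hard_sigmoid(x)

# Arbitrary sample values; float32 is one of the allowed types for T.
x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0], dtype=np.float32)
print(hard_swish(x))  # approx. [-0.     -0.3333  0.      0.6667  4.    ]
```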