Elu - 1 vs 22

The next section compares an older version of this operator with a newer one after both definitions are converted to markdown text. Additions in the newer version are shown in green (prefixed with '+'), deletions in red (prefixed with '-'); everything else is unchanged. Two short Python sketches of the Elu formula and of the new function body follow the comparison.

Files changed (1)
  1. Elu1 → Elu22 +28 -7
Elu1 → Elu22 RENAMED
@@ -1 +1 @@
  Elu takes one input data (Tensor<T>) and produces one output data
  (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1.) for x <
  0, f(x) = x for x >= 0., is applied to the tensor elementwise.
+
+ #### Function Body
+
+ The function definition for this operator.
+
+ <
+ domain: "",
+ opset_import: ["" : 18]
+ >
+ Elu <alpha>(X) => (Y)
+ {
+     Alpha = Constant <value_float: float = @alpha> ()
+     AlphaCast = CastLike (Alpha, X)
+     Zero = Constant <value: tensor = float {0}> ()
+     ZeroCast = CastLike (Zero, X)
+     One = Constant <value: tensor = float {1}> ()
+     OneCast = CastLike (One, X)
+     XLessThanZero = Less (X, ZeroCast)
+     ExpX = Exp (X)
+     ExpXSubOne = Sub (ExpX, OneCast)
+     AlphaMulExpXSubOne = Mul (AlphaCast, ExpXSubOne)
+     Y = Where (XLessThanZero, AlphaMulExpXSubOne, X)
+ }
+
  ### Attributes
  * **alpha - FLOAT** (default is '1.0'):
+ Coefficient of ELU.
- Coefficient of ELU default to 1.0.
-
- * **consumed_inputs - INTS** :
-
- legacy optimization attribute.
  ### Inputs
  - **X** (heterogeneous) - **T**:
  1D input tensor
  ### Outputs
  - **Y** (heterogeneous) - **T**:
- 1D input tensor
+ 1D output tensor
  ### Type Constraints
- * **T** in ( tensor(double), tensor(float), tensor(float16) ):
+ * **T** in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ):
  Constrain input and output types to float tensors.
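For reference, the elementwise definition quoted at the top of the diff, f(x) = alpha * (exp(x) - 1) for x < 0 and f(x) = x for x >= 0, can be sketched in a few lines of NumPy. The helper name `elu` and the sample input are illustrative only; they are not part of the ONNX specification.

```python
import numpy as np

def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    # f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

x = np.array([-2.0, -0.5, 0.0, 1.5], dtype=np.float32)
print(elu(x))             # default alpha = 1.0
print(elu(x, alpha=2.0))  # only the negative side scales with alpha
```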
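The function body added in Elu-22 expands the operator into CastLike, Less, Exp, Sub, Mul and Where nodes. The snippet below is a minimal sketch of exercising the operator with the onnx reference evaluator and checking it against that decomposition; it assumes an onnx release recent enough to provide `onnx.reference` and opset 22, and the graph name, tensor names and alpha value are made up for the example.

```python
import numpy as np
from onnx import TensorProto
from onnx.helper import (make_graph, make_model, make_node,
                         make_opsetid, make_tensor_value_info)
from onnx.reference import ReferenceEvaluator

# Single-node model calling Elu with alpha=2.0 under opset 22.
X = make_tensor_value_info("X", TensorProto.FLOAT, [None])
Y = make_tensor_value_info("Y", TensorProto.FLOAT, [None])
node = make_node("Elu", ["X"], ["Y"], alpha=2.0)
model = make_model(make_graph([node], "elu_check", [X], [Y]),
                   opset_imports=[make_opsetid("", 22)])

x = np.array([-3.0, -1.0, 0.0, 2.0], dtype=np.float32)
(y,) = ReferenceEvaluator(model).run(None, {"X": x})

# The Where node in the function body selects alpha*(exp(x)-1) where x < 0, x otherwise.
expected = np.where(x < 0, 2.0 * (np.exp(x) - 1.0), x)
np.testing.assert_allclose(y, expected, rtol=1e-5)
```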