Softplus - 1 vs 22
The next section compares an older version of the operator with a newer one after both definitions are converted to markdown text. Lines prefixed with + are additions in the newer version, lines prefixed with - are deletions; everything else is unchanged.
Softplus1 → Softplus22 (renamed; +1 line, -1 line)
```diff
@@ -1 +1 @@
 Softplus takes one input data (Tensor<T>) and produces one output data
 (Tensor<T>) where the softplus function, y = ln(exp(x) + 1), is applied to
 the tensor elementwise.
 #### Function Body
 The function definition for this operator.
 <
 domain: "",
 opset_import: ["" : 18]
 >
 Softplus (X) => (Y)
 {
    exp_x = Exp (X)
    one = Constant <value: tensor = float {1}> ()
    one_cast = CastLike (one, X)
    exp_x_add_one = Add (exp_x, one_cast)
    Y = Log (exp_x_add_one)
 }
 ### Inputs
 - **X** (heterogeneous) - **T**:
   1D input tensor
 ### Outputs
 - **Y** (heterogeneous) - **T**:
   1D input tensor
 ### Type Constraints
-* **T** in ( tensor(double), tensor(float), tensor(float16) ):
+* **T** in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ):
   Constrain input and output types to float tensors.
```
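The function body above decomposes softplus into Exp, Constant, CastLike, Add, and Log nodes. As a rough illustration only (not part of the operator definition), the same sequence can be mirrored in NumPy; the function name and test values below are made up for this sketch:

```python
import numpy as np

def softplus_reference(x: np.ndarray) -> np.ndarray:
    """Mirror the function body: y = ln(exp(x) + 1), applied elementwise."""
    exp_x = np.exp(x)                          # exp_x = Exp (X)
    one_cast = np.asarray(1, dtype=x.dtype)    # one = Constant(1); one_cast = CastLike(one, X)
    exp_x_add_one = exp_x + one_cast           # exp_x_add_one = Add (exp_x, one_cast)
    return np.log(exp_x_add_one)               # Y = Log (exp_x_add_one)

x = np.array([-2.0, 0.0, 3.0], dtype=np.float32)
print(softplus_reference(x))                   # approx. [0.1269, 0.6931, 3.0486]
```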
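The only substantive change from Softplus-1 to Softplus-22 is the addition of tensor(bfloat16) to the T constraint. One hedged way to inspect the registered constraint is through the onnx Python bindings; the attribute names shown here follow recent onnx releases and may differ in older ones:

```python
from onnx import defs

# Look up the schema that is active at opset 22 in the default ("") domain.
schema = defs.get_schema("Softplus", max_inclusive_version=22, domain="")
for constraint in schema.type_constraints:
    print(constraint.type_param_str, sorted(constraint.allowed_type_strs))
# Expected to list tensor(bfloat16) alongside double/float/float16 for T.
```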