(l-onnx-doc-Elu)=
# Elu

(l-onnx-op-elu-22)=
## Elu - 22

### Version

- **name**: [Elu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Elu)
- **domain**: `main`
- **since_version**: `22`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 22**.

### Summary

Elu takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the function `f(x) = alpha * (exp(x) - 1.) for x < 0`, `f(x) = x for x >= 0` is applied to the tensor elementwise.

#### Function Body

The function definition for this operator.

```
<
  domain: "",
  opset_import: ["" : 18]
>
Elu (X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   XLessThanZero = Less (X, ZeroCast)
   ExpX = Exp (X)
   ExpXSubOne = Sub (ExpX, OneCast)
   AlphaMulExpXSubOne = Mul (AlphaCast, ExpXSubOne)
   Y = Where (XLessThanZero, AlphaMulExpXSubOne, X)
}
```

### Attributes

* **alpha - FLOAT** (default is `'1.0'`):
  Coefficient of ELU.

### Inputs

- **X** (heterogeneous) - **T**:
  1D input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  1D output tensor

### Type Constraints

* **T** in ( `tensor(bfloat16)`, `tensor(double)`, `tensor(float)`, `tensor(float16)` ):
  Constrain input and output types to float tensors.

```{toctree}
text_diff_Elu_6_22
```

(l-onnx-op-elu-6)=
## Elu - 6

### Version

- **name**: [Elu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Elu)
- **domain**: `main`
- **since_version**: `6`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 6**.

### Summary

Elu takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the function `f(x) = alpha * (exp(x) - 1.) for x < 0`, `f(x) = x for x >= 0` is applied to the tensor elementwise.
#### Function Body

The function definition for this operator.

```
<
  domain: "",
  opset_import: ["" : 18]
>
Elu (X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   One = Constant <value: tensor = float {1}> ()
   OneCast = CastLike (One, X)
   XLessThanZero = Less (X, ZeroCast)
   ExpX = Exp (X)
   ExpXSubOne = Sub (ExpX, OneCast)
   AlphaMulExpXSubOne = Mul (AlphaCast, ExpXSubOne)
   Y = Where (XLessThanZero, AlphaMulExpXSubOne, X)
}
```

### Attributes

* **alpha - FLOAT** (default is `'1.0'`):
  Coefficient of ELU.

### Inputs

- **X** (heterogeneous) - **T**:
  1D input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  1D output tensor

### Type Constraints

* **T** in ( `tensor(double)`, `tensor(float)`, `tensor(float16)` ):
  Constrain input and output types to float tensors.

```{toctree}
text_diff_Elu_1_22
text_diff_Elu_1_6
```

(l-onnx-op-elu-1)=
## Elu - 1

### Version

- **name**: [Elu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Elu)
- **domain**: `main`
- **since_version**: `1`
- **function**: `False`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `False`

This version of the operator has been available **since version 1**.

### Summary

Elu takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the function `f(x) = alpha * (exp(x) - 1.) for x < 0`, `f(x) = x for x >= 0` is applied to the tensor elementwise.

### Attributes

* **alpha - FLOAT** (default is `'1.0'`):
  Coefficient of ELU, defaults to 1.0.
* **consumed_inputs - INTS**:
  legacy optimization attribute.

### Inputs

- **X** (heterogeneous) - **T**:
  1D input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  1D output tensor

### Type Constraints

* **T** in ( `tensor(double)`, `tensor(float)`, `tensor(float16)` ):
  Constrain input and output types to float tensors.
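
The function-body decomposition above (`Less`, `Exp`, `Sub`, `Mul`, `Where`) reduces to a single elementwise formula. A minimal NumPy sketch of that semantics follows; NumPy is used here purely for illustration and is not part of the ONNX specification, and an actual runtime would execute the operator natively:

```python
import numpy as np

def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Elementwise Elu: alpha * (exp(x) - 1) where x < 0, x otherwise.

    Mirrors the function body: Where(Less(X, 0), alpha * (Exp(X) - 1), X).
    """
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

# Negative inputs are smoothly compressed toward -alpha;
# non-negative inputs pass through unchanged.
x = np.array([-2.0, -0.5, 0.0, 1.5], dtype=np.float32)
print(elu(x))
```

Note that `np.where` evaluates both branches before selecting, exactly as the `Where` node in the function body consumes both `AlphaMulExpXSubOne` and `X`.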