(l-onnx-doc-Selu)=

# Selu

(l-onnx-op-selu-22)=

## Selu - 22

### Version

- **name**: [Selu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Selu)
- **domain**: `main`
- **since_version**: `22`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 22**.

### Summary

Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, `y = gamma * (alpha * e^x - alpha) for x <= 0`, `y = gamma * x for x > 0`, is applied to the tensor elementwise.

#### Function Body

The function definition for this operator.

```
<
  domain: "",
  opset_import: ["" : 18]
>
Selu (X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Gamma = Constant <value_float: float = @gamma> ()
   GammaCast = CastLike (Gamma, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   ExpX = Exp (X)
   AlphaMulExpX = Mul (AlphaCast, ExpX)
   AlphaMulExpXSubAlpha = Sub (AlphaMulExpX, AlphaCast)
   Neg = Mul (GammaCast, AlphaMulExpXSubAlpha)
   Pos = Mul (GammaCast, X)
   XLessThanZero = Less (X, ZeroCast)
   Y = Where (XLessThanZero, Neg, Pos)
}
```

### Attributes

* **alpha - FLOAT** (default is `'1.67326'`): Coefficient of SELU; defaults to 1.67326319217681884765625 (i.e., the float32 approximation of 1.6732632423543772848170429916717).
* **gamma - FLOAT** (default is `'1.0507'`): Coefficient of SELU; defaults to 1.05070102214813232421875 (i.e., the float32 approximation of 1.0507009873554804934193349852946).

### Inputs

- **X** (heterogeneous) - **T**: Input tensor

### Outputs

- **Y** (heterogeneous) - **T**: Output tensor

### Type Constraints

* **T** in ( `tensor(bfloat16)`, `tensor(double)`, `tensor(float)`, `tensor(float16)` ): Constrain input and output types to float tensors.

```{toctree}
text_diff_Selu_6_22
```
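For readers who want to check the Selu - 22 formula numerically, here is a minimal NumPy sketch that mirrors the function body above step by step; the `selu` helper and the sample values are this example's own, not part of the ONNX specification. Note that the function body selects the negative branch with `Less (X, ZeroCast)`, i.e. strictly `x < 0`; this still matches the stated `x <= 0` formula because both branches evaluate to `0` at `x == 0`.

```python
import numpy as np
from decimal import Decimal

def selu(x, alpha=1.67326319217681884765625, gamma=1.05070102214813232421875):
    """NumPy mirror of the ONNX Selu function body."""
    x = np.asarray(x, dtype=np.float32)
    neg = gamma * (alpha * np.exp(x) - alpha)  # Neg branch: gamma * (alpha * e^x - alpha)
    pos = gamma * x                            # Pos branch: gamma * x
    return np.where(x < 0, neg, pos)           # Y = Where(Less(X, 0), Neg, Pos)

print(selu([-2.0, 0.0, 3.0]))  # approx. [-1.5202  0.      3.1521]

# The attribute defaults are exactly the float32 roundings quoted above:
print(Decimal(float(np.float32(1.6732632423543772848170429916717))))  # 1.67326319217681884765625
print(Decimal(float(np.float32(1.0507009873554804934193349852946))))  # 1.05070102214813232421875
```

The two `Decimal` printouts reproduce the decimal expansions given in the attribute descriptions, confirming that the defaults are the nearest float32 values to the ideal SELU constants.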
(l-onnx-op-selu-6)=

## Selu - 6

### Version

- **name**: [Selu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Selu)
- **domain**: `main`
- **since_version**: `6`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 6**.

### Summary

Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, `y = gamma * (alpha * e^x - alpha) for x <= 0`, `y = gamma * x for x > 0`, is applied to the tensor elementwise.

#### Function Body

The function definition for this operator.

```
<
  domain: "",
  opset_import: ["" : 18]
>
Selu (X) => (Y)
{
   Alpha = Constant <value_float: float = @alpha> ()
   AlphaCast = CastLike (Alpha, X)
   Gamma = Constant <value_float: float = @gamma> ()
   GammaCast = CastLike (Gamma, X)
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   ExpX = Exp (X)
   AlphaMulExpX = Mul (AlphaCast, ExpX)
   AlphaMulExpXSubAlpha = Sub (AlphaMulExpX, AlphaCast)
   Neg = Mul (GammaCast, AlphaMulExpXSubAlpha)
   Pos = Mul (GammaCast, X)
   XLessThanZero = Less (X, ZeroCast)
   Y = Where (XLessThanZero, Neg, Pos)
}
```

### Attributes

* **alpha - FLOAT** (default is `'1.67326'`): Coefficient of SELU; defaults to 1.67326319217681884765625 (i.e., the float32 approximation of 1.6732632423543772848170429916717).
* **gamma - FLOAT** (default is `'1.0507'`): Coefficient of SELU; defaults to 1.05070102214813232421875 (i.e., the float32 approximation of 1.0507009873554804934193349852946).

### Inputs

- **X** (heterogeneous) - **T**: Input tensor

### Outputs

- **Y** (heterogeneous) - **T**: Output tensor

### Type Constraints

* **T** in ( `tensor(double)`, `tensor(float)`, `tensor(float16)` ): Constrain input and output types to float tensors.

```{toctree}
text_diff_Selu_1_22
text_diff_Selu_1_6
```

(l-onnx-op-selu-1)=

## Selu - 1

### Version

- **name**: [Selu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Selu)
- **domain**: `main`
- **since_version**: `1`
- **function**: `False`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `False`

This version of the operator has been available **since version 1**.

### Summary

Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, `y = gamma * (alpha * e^x - alpha) for x <= 0`, `y = gamma * x for x > 0`, is applied to the tensor elementwise.

### Attributes

* **alpha - FLOAT** (default is `'1.6732'`): Coefficient of SELU; defaults to 1.6732.
* **consumed_inputs - INTS**: Legacy optimization attribute.
* **gamma - FLOAT** (default is `'1.0507'`): Coefficient of SELU; defaults to 1.0507.

### Inputs

- **X** (heterogeneous) - **T**: Input tensor

### Outputs

- **Y** (heterogeneous) - **T**: Output tensor

### Type Constraints

* **T** in ( `tensor(double)`, `tensor(float)`, `tensor(float16)` ): Constrain input and output types to float tensors.
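As an end-to-end usage sketch, the snippet below builds a one-node model with `onnx.helper` and evaluates it with `onnx.reference.ReferenceEvaluator`. It is illustrative, not normative: the graph and tensor names are arbitrary, and it assumes an onnx release whose checker and reference evaluator support the targeted opset (opset 22 here, for the latest version of the operator).

```python
import numpy as np
import onnx
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator

# One-node graph computing Y = Selu(X), with explicit (non-default) attributes.
node = helper.make_node("Selu", inputs=["X"], outputs=["Y"], alpha=2.0, gamma=3.0)
graph = helper.make_graph(
    [node],
    "selu_example",  # arbitrary graph name
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [None])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 22)])
onnx.checker.check_model(model)

x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
(y,) = ReferenceEvaluator(model).run(None, {"X": x})
print(y)  # [gamma * alpha * (e^-1 - 1), 0, gamma * 1] ~ [-3.7927, 0.0, 3.0]
```

Omitting `alpha` and `gamma` in `make_node` leaves the defaults documented above in effect; the function body then resolves its `@alpha` and `@gamma` attribute references to those values.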