(l-onnx-doc-Relu)=

# Relu

(l-onnx-op-relu-14)=

## Relu - 14

### Version

- **name**: [Relu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu)
- **domain**: `main`
- **since_version**: `14`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 14**.

### Summary

Relu takes one input data (Tensor<T>) and produces one output data
(Tensor<T>) where the rectified linear function, y = max(0, x), is
applied to the tensor elementwise.

#### Function Body

The function definition for this operator.

```
<
  domain: "",
  opset_import: ["" : 18]
>
Relu (X) => (Y)
{
   Zero = Constant <value: tensor = float {0}> ()
   ZeroCast = CastLike (Zero, X)
   Y = Max (X, ZeroCast)
}
```

### Inputs

- **X** (heterogeneous) - **T**:
  Input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  Output tensor

### Type Constraints

* **T** in (
  `tensor(bfloat16)`,
  `tensor(double)`,
  `tensor(float)`,
  `tensor(float16)`,
  `tensor(int16)`,
  `tensor(int32)`,
  `tensor(int64)`,
  `tensor(int8)`
  ):
  Constrain input and output types to signed numeric tensors.

```{toctree}
text_diff_Relu_13_14
```

(l-onnx-op-relu-13)=

## Relu - 13

### Version

- **name**: [Relu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu)
- **domain**: `main`
- **since_version**: `13`
- **function**: `False`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 13**.

### Summary

Relu takes one input data (Tensor<T>) and produces one output data
(Tensor<T>) where the rectified linear function, y = max(0, x), is
applied to the tensor elementwise.

### Inputs

- **X** (heterogeneous) - **T**:
  Input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  Output tensor

### Type Constraints

* **T** in (
  `tensor(bfloat16)`,
  `tensor(double)`,
  `tensor(float)`,
  `tensor(float16)`
  ):
  Constrain input and output types to float tensors.

```{toctree}
text_diff_Relu_6_14
text_diff_Relu_6_13
```

(l-onnx-op-relu-6)=

## Relu - 6

### Version

- **name**: [Relu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu)
- **domain**: `main`
- **since_version**: `6`
- **function**: `False`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 6**.

### Summary

Relu takes one input data (Tensor<T>) and produces one output data
(Tensor<T>) where the rectified linear function, y = max(0, x), is
applied to the tensor elementwise.

### Inputs

- **X** (heterogeneous) - **T**:
  Input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  Output tensor

### Type Constraints

* **T** in (
  `tensor(double)`,
  `tensor(float)`,
  `tensor(float16)`
  ):
  Constrain input and output types to float tensors.

```{toctree}
text_diff_Relu_1_14
text_diff_Relu_1_13
text_diff_Relu_1_6
```

(l-onnx-op-relu-1)=

## Relu - 1

### Version

- **name**: [Relu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu)
- **domain**: `main`
- **since_version**: `1`
- **function**: `False`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `False`

This version of the operator has been available **since version 1**.

### Summary

Relu takes one input data (Tensor<T>) and produces one output data
(Tensor<T>) where the rectified linear function, y = max(0, x), is
applied to the tensor elementwise.

### Attributes

* **consumed_inputs - INTS** :

  legacy optimization attribute.
### Inputs

- **X** (heterogeneous) - **T**:
  Input tensor

### Outputs

- **Y** (heterogeneous) - **T**:
  Output tensor

### Type Constraints

* **T** in (
  `tensor(double)`,
  `tensor(float)`,
  `tensor(float16)`
  ):
  Constrain input and output types to float tensors.
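
To make the semantics concrete, here is a minimal sketch (assuming the `onnx` and `numpy` Python packages; the graph and tensor names are illustrative, not part of the specification) that builds a single-node Relu model pinned to opset 14 and computes the reference output y = max(0, x) elementwise with numpy.

```python
import numpy as np
import onnx
from onnx import TensorProto, helper

# Single Relu node: Y = Relu(X), applied elementwise.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])

graph = helper.make_graph(
    nodes=[node],
    name="relu_example",  # illustrative name
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [3])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3])],
)

# Pin the default domain to opset 14, the latest version documented above.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 14)])
onnx.checker.check_model(model)

# Reference semantics of the operator: y = max(0, x).
x = np.array([-1.5, 0.0, 2.0], dtype=np.float32)
y = np.maximum(x, 0.0).astype(np.float32)
print(y)  # [0. 0. 2.]
```

Executing the model with a runtime such as `onnxruntime` should reproduce `y`; the numpy lines above serve only as the elementwise reference computation.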