(l-onnx-doc-Gelu)=

# Gelu

(l-onnx-op-gelu-20)=

## Gelu - 20

### Version

- **name**: [Gelu (GitHub)](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Gelu)
- **domain**: `main`
- **since_version**: `20`
- **function**: `True`
- **support_level**: `SupportType.COMMON`
- **shape inference**: `True`

This version of the operator has been available **since version 20**.

### Summary

Gelu takes one input data (Tensor) and produces one output data (Tensor) where the Gaussian error linear units function, $y = 0.5 \cdot x \cdot \left(1 + \operatorname{erf}\left(x / \sqrt{2}\right)\right)$, is applied to the tensor elementwise. If the attribute `approximate` is set to `"tanh"`, the estimation $y = 0.5 \cdot x \cdot \left(1 + \tanh\left(\sqrt{2/\pi} \cdot \left(x + 0.044715 \cdot x^3\right)\right)\right)$ is used instead, applied to the tensor elementwise.

### Attributes

* **approximate - STRING** (default is `'none'`):

  Gelu approximation algorithm: `"tanh"`, `"none"` (default). `"none"`: do not use approximation. `"tanh"`: use tanh approximation.

### Inputs

- **X** (heterogeneous) - **T**: Input tensor

### Outputs

- **Y** (heterogeneous) - **T**: Output tensor

### Type Constraints

* **T** in ( `tensor(bfloat16)`, `tensor(double)`, `tensor(float)`, `tensor(float16)` ): Constrain input and output types to float tensors.
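
### Example

As an illustration (not part of the operator specification itself), the sketch below builds a minimal one-node model that uses Gelu from opset 20 with the `approximate="tanh"` attribute and validates it with `onnx.checker`. The NumPy line reproduces the tanh formula from the Summary as a reference computation; the graph and tensor names are placeholders chosen for this example.

```python
import numpy as np
import onnx
from onnx import TensorProto, helper

# One Gelu-20 node in the default domain, using the tanh approximation.
node = helper.make_node("Gelu", inputs=["X"], outputs=["Y"], approximate="tanh")

graph = helper.make_graph(
    [node],
    "gelu_example",  # placeholder graph name
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [3])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3])],
)

# Pin the model to opset 20, where Gelu was introduced.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 20)])
onnx.checker.check_model(model)

# NumPy reference for the tanh approximation from the Summary:
# y = 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
y = 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
print(y)
```

With `approximate="none"` (the default), the exact form $0.5 \cdot x \cdot \left(1 + \operatorname{erf}\left(x / \sqrt{2}\right)\right)$ applies; computing it as a NumPy reference would require an elementwise `erf` such as `math.erf` or `scipy.special.erf`.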