LRN

LRN - 13

Version

name: LRN (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True

This version of the operator has been available since version 13.
Summary

Local Response Normalization proposed in the AlexNet paper. It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, ..., dk] in a tensor of shape (N x C x D1 x D2 x ... x Dk), its region is {X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.

square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).

Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk]) ^ beta
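As a point of reference, here is a minimal NumPy sketch of the formula above for a 4-D (N x C x H x W) input; the name lrn_reference is illustrative and not part of ONNX.

```python
import math

import numpy as np

def lrn_reference(X, size, alpha=0.0001, beta=0.75, bias=1.0):
    """Minimal sketch of the LRN formula for a 4-D (N x C x H x W) array."""
    N, C, H, W = X.shape
    Y = np.empty_like(X)
    for c in range(C):
        # Channel window from the Summary:
        # max(0, c - floor((size-1)/2)) <= i <= min(C-1, c + ceil((size-1)/2))
        lo = max(0, c - (size - 1) // 2)
        hi = min(C - 1, c + math.ceil((size - 1) / 2))
        square_sum = np.sum(X[:, lo:hi + 1] ** 2, axis=1)
        Y[:, c] = X[:, c] / (bias + alpha / size * square_sum) ** beta
    return Y

X = np.random.randn(1, 8, 4, 4).astype(np.float32)
Y = lrn_reference(X, size=3)  # Y has the same shape and type as X
```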
Attributes

alpha - FLOAT (default is '0.0001'): Scaling parameter.

beta - FLOAT (default is '0.75'): The exponent.

bias - FLOAT (default is '1.0').

size - INT (required): The number of channels to sum over.
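For illustration, a sketch of declaring an LRN node with these attributes via the onnx Python helper; only size is required, and the other values shown here are simply the defaults listed above.

```python
from onnx import helper

node = helper.make_node(
    "LRN",
    inputs=["X"],
    outputs=["Y"],
    size=3,        # required: the number of channels to sum over
    alpha=0.0001,  # optional scaling parameter (default 0.0001)
    beta=0.75,     # optional exponent (default 0.75)
    bias=1.0,      # optional bias (default 1.0)
)
```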
Inputs

X (heterogeneous) - T:

Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions are in the form (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE, ...].
Outputs

Y (heterogeneous) - T:

Output tensor, which has the same shape and type as the input tensor.
Type Constraints¶
T in (
tensor(bfloat16)
,tensor(double)
,tensor(float)
,tensor(float16)
):Constrain input and output types to float tensors.
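To tie the pieces together, a sketch of running the node end to end and checking it against the NumPy formula; it reuses node and lrn_reference from the sketches above and assumes the optional onnxruntime package is installed.

```python
import numpy as np
import onnx
from onnx import TensorProto, helper
import onnxruntime as ort

# Build a one-node model around the LRN node declared earlier.
X_info = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 8, 4, 4])
Y_info = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 8, 4, 4])
graph = helper.make_graph([node], "lrn_check", [X_info], [Y_info])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

# Run it and compare against the reference sketch, with loose float32 tolerances.
sess = ort.InferenceSession(model.SerializeToString(),
                            providers=["CPUExecutionProvider"])
X = np.random.randn(1, 8, 4, 4).astype(np.float32)
(Y,) = sess.run(None, {"X": X})
np.testing.assert_allclose(Y, lrn_reference(X, size=3), rtol=1e-4, atol=1e-5)
```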
LRN - 1

Version

name: LRN (GitHub)
domain: main
since_version: 1
function: False
support_level: SupportType.COMMON
shape inference: True

This version of the operator has been available since version 1.
Summary

Local Response Normalization proposed in the AlexNet paper. It normalizes over local input regions. The local region is defined across the channels. For an element X[n, c, d1, ..., dk] in a tensor of shape (N x C x D1 x D2 x ... x Dk), its region is {X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}.

square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2), where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2)).

Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk]) ^ beta
Attributes

alpha - FLOAT (default is '0.0001'): Scaling parameter.

beta - FLOAT (default is '0.75'): The exponent.

bias - FLOAT (default is '1.0').

size - INT (required): The number of channels to sum over.
Inputs

X (heterogeneous) - T:

Input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions are in the form (N x C x D1 x D2 ... Dn), where N is the batch size. Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE, ...].
Outputs

Y (heterogeneous) - T:

Output tensor, which has the same shape and type as the input tensor.
Type Constraints

T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.