ScatterND¶
ScatterND - 18¶
Version¶
name: ScatterND (GitHub)
domain: main
since_version: 18
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 18.
Summary¶
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating its values to the values specified by updates at the specific index positions specified by indices. Its output shape is the same as the shape of data.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices. indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into data. Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor. Index values are allowed to be negative, following the usual convention of counting backwards from the end, but are expected to be in the valid range.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation of shapes.
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
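As a quick sanity check, the pseudocode above can be expanded into a small, self-contained NumPy sketch of the reduction-free case. This is only an illustration, not the normative definition: the name scatter_nd_reference is invented here, and tuple() is used because NumPy needs a tuple (rather than an array) of the k index components to address a single element or slice.

import numpy as np

def scatter_nd_reference(data, indices, updates):
    # Sketch of ScatterND with reduction="none"; assumes indices has no
    # duplicate entries, as required by the spec.
    output = np.copy(data)
    # Iterate over the first (q-1) dimensions of indices; each visited
    # position holds a k-tuple addressing an element or slice of data.
    for idx in np.ndindex(indices.shape[:-1]):
        output[tuple(indices[idx])] = updates[idx]
    return output

# Reproduces Example 1 below:
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])
print(scatter_nd_reference(data, indices, updates))  # [ 1 11  3 10  9  6  7 12]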
reduction allows specification of an optional reduction operation, which is applied to all values in the updates tensor into output at the specified indices. In cases where reduction is set to “none”, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order. When reduction is set to some reduction function f, output is calculated as follows:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = f(output[indices[idx]], updates[idx])
where f is +, *, max, or min, as specified.
This operator is the inverse of GatherND.
(Opset 18 change): Adds max/min to the set of allowed reduction ops.
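To see how the four reductions behave, here is a rough NumPy sketch that maps each reduction string to a binary function f and applies it at every indexed location. The dispatch table and the name scatter_nd_with_reduction are assumptions made for this example, not part of the operator definition.

import numpy as np

REDUCTIONS = {
    "add": np.add,
    "mul": np.multiply,
    "max": np.maximum,   # allowed starting with opset 18
    "min": np.minimum,   # allowed starting with opset 18
}

def scatter_nd_with_reduction(data, indices, updates, reduction):
    # output[indices[idx]] = f(output[indices[idx]], updates[idx])
    f = REDUCTIONS[reduction]
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        loc = tuple(indices[idx])
        output[loc] = f(output[loc], updates[idx])
    return output

data = np.array([1.0, 2.0, 3.0, 4.0])
indices = np.array([[1], [1]])        # duplicates are acceptable when reducing
updates = np.array([10.0, 20.0])
print(scatter_nd_with_reduction(data, indices, updates, "max"))  # [ 1. 20.  3.  4.]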
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
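For completeness, a ScatterND node with a reduction attribute can be assembled through the onnx helper API. This is a minimal sketch assuming an opset-18 model; the tensor names x, ind, upd, and y and the chosen shapes (matching Example 1) are arbitrary.

import onnx
from onnx import TensorProto, helper

node = helper.make_node(
    "ScatterND",
    inputs=["x", "ind", "upd"],
    outputs=["y"],
    reduction="none",  # or "add", "mul", "max", "min" in opset 18
)

graph = helper.make_graph(
    [node],
    "scatternd_example",
    inputs=[
        helper.make_tensor_value_info("x", TensorProto.FLOAT, [8]),
        helper.make_tensor_value_info("ind", TensorProto.INT64, [4, 1]),
        helper.make_tensor_value_info("upd", TensorProto.FLOAT, [4]),
    ],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [8])],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
onnx.checker.check_model(model)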
Attributes¶
reduction - STRING (default is 'none'): Type of reduction to apply: none (default), add, mul, max, min. 'none': no reduction applied. 'add': reduction using the addition operation. 'mul': reduction using the multiplication operation. 'max': reduction using the maximum operation. 'min': reduction using the minimum operation.
Inputs¶
data (heterogeneous) - T:
Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64):
Tensor of rank q >= 1.
updates (heterogeneous) - T:
Tensor of rank q + r - indices_shape[-1] - 1.
Outputs¶
output (heterogeneous) - T:
Tensor of rank r >= 1.
Type Constraints¶
T in (
tensor(bfloat16),
tensor(bool),
tensor(complex128),
tensor(complex64),
tensor(double),
tensor(float),
tensor(float16),
tensor(int16),
tensor(int32),
tensor(int64),
tensor(int8),
tensor(string),
tensor(uint16),
tensor(uint32),
tensor(uint64),
tensor(uint8)
): Constrain input and output types to any tensor type.
ScatterND - 16¶
Version¶
name: ScatterND (GitHub)
domain: main
since_version: 16
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 16.
Summary¶
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating its values to the values specified by updates at the specific index positions specified by indices. Its output shape is the same as the shape of data.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices. indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into data. Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor. Index values are allowed to be negative, following the usual convention of counting backwards from the end, but are expected to be in the valid range.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation of shapes.
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified.
In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2].
This ensures that the output value does not depend on the iteration order.
reduction allows specification of an optional reduction operation, which is applied to all values in the updates tensor into output at the specified indices. In cases where reduction is set to “none”, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
When reduction is set to “add”, output is calculated as follows:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] += updates[idx]
When reduction is set to “mul”, output is calculated as follows:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] *= updates[idx]
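When every index addresses a single element (k equals rank(data)), the “add” and “mul” reductions can also be expressed with NumPy's unbuffered ufunc.at, which accumulates correctly even with duplicate indices. This is one possible sketch, limited to that element-wise case; the helper name is invented here.

import numpy as np

def scatter_nd_elementwise(data, indices, updates, reduction):
    output = np.copy(data)
    # Split the k index components into a tuple of per-axis index arrays.
    loc = tuple(np.moveaxis(indices, -1, 0))
    if reduction == "add":
        np.add.at(output, loc, updates)       # unbuffered: duplicates accumulate
    elif reduction == "mul":
        np.multiply.at(output, loc, updates)
    else:
        raise ValueError("this sketch handles only 'add' and 'mul'")
    return output

data = np.array([1.0, 2.0, 3.0, 4.0])
indices = np.array([[1], [1], [3]])
updates = np.array([10.0, 20.0, 5.0])
print(scatter_nd_elementwise(data, indices, updates, "add"))  # [ 1. 32.  3.  9.]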
This operator is the inverse of GatherND.
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Attributes¶
reduction - STRING (default is 'none'): Type of reduction to apply: none (default), add, mul. 'none': no reduction applied. 'add': reduction using the addition operation. 'mul': reduction using the multiplication operation.
Inputs¶
data (heterogeneous) - T:
Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64):
Tensor of rank q >= 1.
updates (heterogeneous) - T:
Tensor of rank q + r - indices_shape[-1] - 1.
Outputs¶
output (heterogeneous) - T:
Tensor of rank r >= 1.
Type Constraints¶
T in (
tensor(bfloat16),
tensor(bool),
tensor(complex128),
tensor(complex64),
tensor(double),
tensor(float),
tensor(float16),
tensor(int16),
tensor(int32),
tensor(int64),
tensor(int8),
tensor(string),
tensor(uint16),
tensor(uint32),
tensor(uint64),
tensor(uint8)
): Constrain input and output types to any tensor type.
ScatterND - 13¶
Version¶
name: ScatterND (GitHub)
domain: main
since_version: 13
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 13.
Summary¶
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating its values to the values specified by updates at the specific index positions specified by indices. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; that is, two or more updates for the same index-location are not supported.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices. indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into data. Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor. Index values are allowed to be negative, following the usual convention of counting backwards from the end, but are expected to be in the valid range.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation of shapes.
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
This operator is the inverse of GatherND.
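This inverse relationship can be spot-checked in NumPy: gathering with the same indices from the scattered result returns the updates tensor. The snippet below is a hedged illustration that mimics GatherND only for this simple case, not a full implementation of either operator.

import numpy as np

def scatter_nd(data, indices, updates):
    out = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        out[tuple(indices[idx])] = updates[idx]
    return out

def gather_nd(data, indices):
    # Collects data[indices[idx]] for every (q-1)-dimensional position idx.
    return np.stack([data[tuple(indices[idx])]
                     for idx in np.ndindex(indices.shape[:-1])])

data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

scattered = scatter_nd(data, indices, updates)
assert np.array_equal(gather_nd(scattered, indices), updates)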
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Inputs¶
data (heterogeneous) - T:
Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64):
Tensor of rank q >= 1.
updates (heterogeneous) - T:
Tensor of rank q + r - indices_shape[-1] - 1.
Outputs¶
output (heterogeneous) - T:
Tensor of rank r >= 1.
Type Constraints¶
T in (
tensor(bfloat16),
tensor(bool),
tensor(complex128),
tensor(complex64),
tensor(double),
tensor(float),
tensor(float16),
tensor(int16),
tensor(int32),
tensor(int64),
tensor(int8),
tensor(string),
tensor(uint16),
tensor(uint32),
tensor(uint64),
tensor(uint8)
): Constrain input and output types to any tensor type.
ScatterND - 11¶
Version¶
name: ScatterND (GitHub)
domain: main
since_version: 11
function: False
support_level: SupportType.COMMON
shape inference: True
This version of the operator has been available since version 11.
Summary¶
ScatterND takes three inputs: a data tensor of rank r >= 1, an indices tensor of rank q >= 1, and an updates tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input data, and then updating its values to the values specified by updates at the specific index positions specified by indices. Its output shape is the same as the shape of data. Note that indices should not have duplicate entries; that is, two or more updates for the same index-location are not supported.
indices is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of indices. indices is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial index into data. Hence, k can be at most the rank of data. When k equals rank(data), each update entry specifies an update to a single element of the tensor. When k is less than rank(data), each update entry specifies an update to a slice of the tensor. Index values are allowed to be negative, following the usual convention of counting backwards from the end, but are expected to be in the valid range.
updates is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape. The remaining dimensions of updates correspond to the dimensions of the replacement-slice-values. Each replacement-slice-value is an (r-k)-dimensional tensor, corresponding to the trailing (r-k) dimensions of data. Thus, the shape of updates must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation of shapes.
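As a small sanity check of this shape rule, the snippet below computes the required updates shape from data.shape and indices.shape; the helper name is invented for the illustration, and the trailing part is taken as data.shape[k:], i.e. the trailing (r-k) dimensions described above.

def expected_updates_shape(data_shape, indices_shape):
    # updates.shape must be indices.shape[:-1] ++ data.shape[k:],
    # where k = indices_shape[-1].
    k = indices_shape[-1]
    return tuple(indices_shape[:-1]) + tuple(data_shape[k:])

# Example 2 below: data is 4x4x4 and indices is 2x1, so k = 1 and
# updates must have shape 2x4x4.
print(expected_updates_shape((4, 4, 4), (2, 1)))  # (2, 4, 4)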
The output is calculated via the following equation:
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
The order of iteration in the above loop is not specified. In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]. This ensures that the output value does not depend on the iteration order.
This operator is the inverse of GatherND.
Example 1:
data = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
output = [1, 11, 3, 10, 9, 6, 7, 12]
Example 2:
data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
indices = [[0], [2]]
updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]
output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],
[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
[[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]
Inputs¶
data (heterogeneous) - T:
Tensor of rank r >= 1.
indices (heterogeneous) - tensor(int64):
Tensor of rank q >= 1.
updates (heterogeneous) - T:
Tensor of rank q + r - indices_shape[-1] - 1.
Outputs¶
output (heterogeneous) - T:
Tensor of rank r >= 1.
Type Constraints¶
T in (
tensor(bool),
tensor(complex128),
tensor(complex64),
tensor(double),
tensor(float),
tensor(float16),
tensor(int16),
tensor(int32),
tensor(int64),
tensor(int8),
tensor(string),
tensor(uint16),
tensor(uint32),
tensor(uint64),
tensor(uint8)
): Constrain input and output types to any tensor type.