GatherND - 12 vs 13
The next section compares an older version of the operator with a newer one after both definitions are converted into markdown text. A leading `+` marks a line added in the newer version, a leading `-` marks a line deleted from the older version, and anything else is unchanged.
GatherND12 → GatherND13 +26 -31

Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers
slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.
indices is an q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data,
where each element defines a slice of data
batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e the leading b number of dimensions of
data tensor and indices are representing the batches, and the gather starts from the b+1 dimension.
Some salient points about the inputs' rank and shape:
1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks r and q
2) The first b dimensions of the shape of indices tensor and data tensor must be equal.
3) b < min(q, r) is to be honored.
4) The indices_shape[-1] should have a value between 1 (inclusive) and rank r-b (inclusive)
5) All values in indices are expected to be within bounds [-s, s-1] along axis of size s (i.e.) -data_shape[i] <= indices[...,i] <= data_shape[i] - 1.
It is an error if any of the index values are out of bounds.
The output is computed as follows:
The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.
1) If indices_shape[-1] > r-b => error condition
2) If indices_shape[-1] == r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors
containing 1-D tensors of dimension r-b, where N is an integer equals to the product of 1 and all the elements in the batch dimensions
of the indices_shape. Let us think of each such r-b ranked tensor as indices_slice. Each *scalar value* corresponding to data[0:b-1,indices_slice]
is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below)
3) If indices_shape[-1] < r-b, since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensor
containing 1-D tensors of dimension < r-b. Let us think of each such tensors as indices_slice. Each *tensor slice* corresponding
to data[0:b-1, indices_slice , :] is filled into the corresponding location of the (q-b-1)-dimensional tensor
to form the output tensor (Examples 2, 3, 4 and 5 below)
This operator is the inverse of ScatterND.

- Example 1
-
-
- indices = [[0,0],[1,1]] # indices_shape = [2, 2]
-
- Example 2
-
-
- indices = [[1],[0]] # indices_shape = [2, 1]
-
- Example 3
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
-
- Example 4
- batch_dims = 0
-
-
- output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
-
- Example 5
-
- batch_dims = 1
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
- indices = [[1],[0]] # indices_shape = [2, 1]
-
- output = [[2,3],[4,5]] # output_shape = [2, 2]
+ **Example 1**
+ batch_dims = 0
+ data = [[0,1],[2,3]] # data_shape = [2, 2]
+ indices = [[0,0],[1,1]] # indices_shape = [2, 2]
+ output = [0,3] # output_shape = [2]
+ **Example 2**
+ batch_dims = 0
+ data = [[0,1],[2,3]] # data_shape = [2, 2]
+ indices = [[1],[0]] # indices_shape = [2, 1]
+ output = [[2,3],[0,1]] # output_shape = [2, 2]
+ **Example 3**
+ batch_dims = 0
+ data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+ indices = [[0,1],[1,0]] # indices_shape = [2, 2]
+ output = [[2,3],[4,5]] # output_shape = [2, 2]
+ **Example 4**
+ batch_dims = 0
+ data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+ indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
+ output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+ **Example 5**
+ batch_dims = 1
+ data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+ indices = [[1],[0]] # indices_shape = [2, 1]
+ output = [[2,3],[4,5]] # output_shape = [2, 2]

### Attributes

* **batch_dims - INT** (default is '0'):
  The number of batch dimensions. The gather of indexing starts from dimension of data[batch_dims:]

### Inputs

- **data** (heterogeneous) - **T**:
  Tensor of rank r >= 1.
- **indices** (heterogeneous) - **tensor(int64)**:
  Tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along axis of size s. It is an error if any of the index values are out of bounds.

### Outputs

- **output** (heterogeneous) - **T**:
  Tensor of rank q + r - indices_shape[-1] - 1.

### Type Constraints

- * **T** in ( tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ):
+ * **T** in ( tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ):
  Constrain input and output types to any tensor type.
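To make the gather and batch_dims semantics above easier to check, here is a minimal NumPy sketch of the computation the text describes. The function `gather_nd` below is illustrative only (it is not the ONNX reference implementation); it is exercised with Example 5 (`batch_dims = 1`) from the newer text.

```python
import numpy as np

def gather_nd(data: np.ndarray, indices: np.ndarray, batch_dims: int = 0) -> np.ndarray:
    """Illustrative GatherND: gathers the slices of `data` addressed by the
    index-tuples in the last dimension of `indices`, treating the first
    `batch_dims` (b) dimensions as shared batch dimensions."""
    r, q, b = data.ndim, indices.ndim, batch_dims
    m = indices.shape[-1]                              # indices_shape[-1]
    assert r >= 1 and q >= 1 and b < min(q, r)
    assert data.shape[:b] == indices.shape[:b]         # first b dims must match
    assert 1 <= m <= r - b                             # otherwise: error condition

    # Output rank is q + r - m - 1 - b: the index-tuple axis is consumed and the
    # first m non-batch dims of data are replaced by the gathered positions.
    out_shape = indices.shape[:-1] + data.shape[b + m:]

    # Collapse the batch dims so every batch can be gathered independently.
    n_batch = int(np.prod(indices.shape[:b], dtype=np.int64))
    flat_indices = indices.reshape(n_batch, -1, m)
    flat_data = data.reshape((n_batch,) + data.shape[b:])

    gathered = []
    for batch in range(n_batch):
        for index_tuple in flat_indices[batch]:
            # Each tuple selects a slice of data (a scalar when m == r - b).
            gathered.append(flat_data[(batch, *index_tuple)])
    return np.asarray(gathered, dtype=data.dtype).reshape(out_shape)

# Example 5 from the GatherND-13 text: batch_dims = 1
data = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])  # data_shape    = [2, 2, 2]
indices = np.array([[1], [0]])                         # indices_shape = [2, 1]
print(gather_nd(data, indices, batch_dims=1))          # [[2 3] [4 5]]
```

The same function reproduces Examples 1 through 4 when called with `batch_dims=0`.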