I'm getting an odd error when trying to use tf.ragged.stack. I'm trying to stack two tensors t1 and t2, but they have different ranks: t2 has rank one less than t1. Therefore I used tf.expand_dims to increase the rank of t2 so the ranks match. However, when I go to stack them I get an error:
import tensorflow as tf
t1 = tf.ones([2,10,10], tf.int32)
t2 = tf.ragged.constant([
    [0,1,2,3,4,5],
    [0,1,2,3,4]
])
t2 = tf.expand_dims(t2, -1)
tf.ragged.stack([t1, t2])
The error I get is
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [20,10] vs. shape[1] = [11,1] [Op:ConcatV2] name: concat
However, when I create an equivalent tensor from scratch, I don't get the error.
t2_new = tf.ragged.constant([
    [[0],[1],[2],[3],[4],[5]],
    [[0],[1],[2],[3],[4]]
])
tf.ragged.stack([t1, t2_new]) # no error
The difference between t2 and t2_new is that TensorFlow thinks their shapes are different, even though they actually represent the same tensor:
print(t2.shape) # == (2,None,1)
print(t2_new.shape) # == (2, None, None)
print(tf.math.reduce_all(t2 == t2_new)) # == True, i.e. actually the same tensor
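For what it's worth, the ragged ranks differ as well, which seems to be where the shape mismatch comes from (a quick check on the same t2 and t2_new as above):
print(t2.ragged_rank)     # == 1, expand_dims added a uniform inner dimension of size 1
print(t2_new.ragged_rank) # == 2, the inner dimension is itself ragged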
It's actually not that trivial, as tf.expand_dims adds a new dimension, but not a ragged dimension, which is what you need in order to stack both tensors afterwards. Generally it is possible, but a bit more involved than just adding a new dimension. I would recommend using tf.RaggedTensor.from_value_rowids together with tf.RaggedTensor.from_row_splits:
import tensorflow as tf

t1 = tf.ones([2,10,10], tf.int32)
t2 = tf.ragged.constant([[0,1,2,3,4,5],
                         [0,1,2,3,4]])
t2_new = tf.ragged.constant([
    [[0],[1],[2],[3],[4],[5]],
    [[0],[1],[2],[3],[4]]])

flattened_ragged_tensor = t2.flat_values
rows = tf.cast(t2.bounding_shape()[0], dtype=tf.int32)

# Wrap every flat value in its own length-1 row (the ragged equivalent of
# expand_dims on the last axis), then regroup those rows into the original
# outer rows, so the inner dimension ends up ragged like in t2_new.
t2 = tf.RaggedTensor.from_value_rowids(
    values=tf.RaggedTensor.from_row_splits(
        values=flattened_ragged_tensor,
        row_splits=tf.range(tf.shape(flattened_ragged_tensor)[0] + 1)),
    value_rowids=tf.concat([tf.tile([i], [t2[i].shape[0]]) for i in tf.range(rows)], axis=0),
    nrows=rows)
print(t2.shape)
print(t2_new.shape)
print(tf.ragged.stack([t1, t2]))
print(tf.ragged.stack([t1, t2_new]))
(2, None, None)
(2, None, None)
<tf.RaggedTensor [[[[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]], [[[0], [1], [2], [3], [4], [5]], [[0], [1], [2], [3], [4]]]]>
<tf.RaggedTensor [[[[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]], [[[0], [1], [2], [3], [4], [5]], [[0], [1], [2], [3], [4]]]]>
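For reference, here is what the two building blocks do in isolation (a minimal standalone sketch with made-up variable names, not part of the code above): from_row_splits with row_splits=tf.range(n + 1) wraps every flat value in its own length-1 row, which is the ragged counterpart of expanding the last dimension, and from_value_rowids then groups those length-1 rows back into the original outer rows.
import tensorflow as tf

vals = tf.constant([0, 1, 2, 3, 4, 5])

# Consecutive pairs in row_splits delimit the rows, so range(n + 1)
# produces n rows of length 1: [[0], [1], [2], [3], [4], [5]].
inner = tf.RaggedTensor.from_row_splits(
    values=vals,
    row_splits=tf.range(tf.shape(vals)[0] + 1))
print(inner)  # <tf.RaggedTensor [[0], [1], [2], [3], [4], [5]]>

# value_rowids assigns each length-1 row to an outer row, e.g. the
# first four rows to row 0 and the last two to row 1.
outer = tf.RaggedTensor.from_value_rowids(
    values=inner,
    value_rowids=[0, 0, 0, 0, 1, 1],
    nrows=2)
print(outer)              # <tf.RaggedTensor [[[0], [1], [2], [3]], [[4], [5]]]>
print(outer.ragged_rank)  # == 2, both non-batch dimensions are ragged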