I'm trying to learn the ArrayFire idioms by translating some vectorised numpy code.
For example, this is valid row-wise addition and multiplication in numpy:
>>> a = np.array([1,2,3])
>>> a
array([1, 2, 3])
>>> b = np.arange(9).reshape((3, 3))
>>> b
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> a + b
array([[ 1,  3,  5],
       [ 4,  6,  8],
       [ 7,  9, 11]])
>>> a * b
array([[ 0,  2,  6],
       [ 3,  8, 15],
       [ 6, 14, 24]])
Do all arrays in ArrayFire need to be the same shape? The following generates an error.
>>> a = af.from_ndarray(a)
>>> b = af.from_ndarray(b)
>>> a + b
Invalid dimension for argument 1
Expected: ldims == rdims
>>> a * b
Invalid dimension for argument 1
Expected: ldims == rdims
From @pradeep's answer, you can do this with tile until broadcasting is added as a feature.
>>> a = np.array([1,2,3])
>>> a = af.tile(af.transpose(af.from_ndarray(a)),3,1)
>>> af.display(a)
[3 3 1 1]
1 2 3
1 2 3
1 2 3
>>> b = np.arange(9).reshape((3, 3))
>>> b = af.from_ndarray(b)
>>> af.display(a + b)
[3 3 1 1]
1 3 5
4 6 8
7 9 11
>>> af.display(a * b)
[3 3 1 1]
0 2 6
3 8 15
6 14 24
As of writing this response, yes, that is correct: arrays need to be of the same shape. But I would like to point out that we are already working on a broadcasting feature for binary operations - here is the PR - we will try to get this feature into a release as soon as we can.
However, even with the current release, this limitation can easily be worked around using the tile function. Since tile is a JIT operation in such broadcast cases, it won't allocate any additional memory; the tiling and the arithmetic are fused into a single efficient kernel launch.
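As a minimal sketch, the same tile trick from the question can be wrapped in a small helper so it reads like numpy's row broadcast. The name broadcast_row is hypothetical, and the .dims() call assumes the arrayfire-python Array API; everything else uses the functions already shown above.

import numpy as np
import arrayfire as af

def broadcast_row(vec, mat):
    # vec comes back from af.from_ndarray as a column ([3 1 1 1]);
    # transpose it to a row and tile it down to match mat's row count.
    # The tile is a JIT node, so no extra copy is materialised.
    row = af.transpose(vec)
    return af.tile(row, mat.dims()[0], 1)

a = af.from_ndarray(np.array([1, 2, 3]))
b = af.from_ndarray(np.arange(9).reshape((3, 3)))

a_bcast = broadcast_row(a, b)
af.display(a_bcast + b)   # same values as numpy's a + b
af.display(a_bcast * b)   # same values as numpy's a * b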