deep-learning, numpy-ndarray, mxnet

Adding <NDArray 4 @gpu(0)> and regular np.array


I have a loss variable that is returned after the forward pass, made of 4 "sub-losses", such as:

print(my_loss)

> Loss: 
 [0.37887186 0.4668851  0.4145702  0.506646  ]
 <NDArray 4 @gpu(0)>

I would like to sum the losses over each epoch while keeping the four sub-losses separate. I wanted to use numpy for this, like so:

loss_to_save = np.zeros((4,))
loss_to_save += my_loss

However, the following error is raised when the arrays are added:

Traceback (most recent call last):
  File "train_schedule_copy.py", line 432, in <module>
    train(net, filename=cst.flname_weights, optimise="MCCExtent", resume=resumeFile)
  File "train_schedule_copy.py", line 292, in train
    loss_to_save += my_loss
  File ".local/lib/python3.8/site-packages/mxnet/ndarray/ndarray.py", line 291, in __radd__
    return self.__add__(other)
  File ".local/lib/python3.8/site-packages/mxnet/ndarray/ndarray.py", line 277, in __add__
    return add(self, other)
  File ".local/lib/python3.8/site-packages/mxnet/ndarray/ndarray.py", line 3634, in add
    return _ufunc_helper(
  File ".local/lib/python3.8/site-packages/mxnet/ndarray/ndarray.py", line 3578, in _ufunc_helper
    raise TypeError('type %s not supported' % str(type(rhs)))
TypeError: type <class 'numpy.ndarray'> not supported

From what I understand, the <NDArray 4 @gpu(0)> data type cannot be added to a regular numpy array? How could I achieve such an operation?
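
For reference, the same failure can be reproduced in isolation (shown on CPU for simplicity; the GPU case above goes through the same code path, ending in mxnet's _ufunc_helper):

import numpy as np
import mxnet as mx

my_loss = mx.nd.array([0.37887186, 0.4668851, 0.4145702, 0.506646])

loss_to_save = np.zeros((4,))
loss_to_save += my_loss  # TypeError: type <class 'numpy.ndarray'> not supported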


Solution

  • I managed to solve my issue by using the asscalar() method, as follows:

    loss_to_save = np.zeros((4,), dtype=np.float64)
    (...)
    loss_ind_sums = []
    for l in my_loss:                  # each l is a 1-element NDArray on the GPU
        summ = l.sum().asscalar()      # asscalar() copies the value back as a Python float
        loss_ind_sums.append(summ)
    loss_to_save += loss_ind_sums      # a list of floats adds fine to a numpy array
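
  • An alternative, as a minimal sketch (assuming my_loss is the 4-element NDArray from the question): asnumpy() copies the whole NDArray back to host memory as a regular numpy array in a single call, so the accumulation can stay in numpy without looping over the sub-losses. Like asscalar(), it waits for the GPU computation to finish before returning.

    loss_to_save = np.zeros((4,), dtype=np.float64)
    (...)
    # asnumpy() returns a numpy array of shape (4,) copied from the GPU
    loss_to_save += my_loss.asnumpy()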