I'm trying to implement a function that computes the ReLU derivative for each element in a matrix and then returns the result as a matrix. I'm using Python and NumPy.
Based on other Cross Validated posts, the ReLU derivative for x is 1 when x > 0, 0 when x < 0, and undefined or 0 when x == 0.
This is the code I have so far:
def reluDerivative(self, x):
    return np.array([self.reluDerivativeSingleElement(xi) for xi in x])

def reluDerivativeSingleElement(self, xi):
    if xi > 0:
        return 1
    elif xi <= 0:
        return 0
Unfortunately, xi is an array because x is a matrix, and reluDerivativeSingleElement doesn't work on arrays. So I'm wondering: is there a way to map the values of a matrix to another matrix using NumPy, the way the exp function in NumPy does?
Thanks a lot in advance.
I guess this is what you are looking for:
>>> def reluDerivative(x):
...     x[x<=0] = 0
...     x[x>0] = 1
...     return x
>>> z = np.random.uniform(-1, 1, (3,3))
>>> z
array([[ 0.41287266, -0.73082379,  0.78215209],
       [ 0.76983443,  0.46052273,  0.4283139 ],
       [-0.18905708,  0.57197116,  0.53226954]])
>>> reluDerivative(z)
array([[ 1.,  0.,  1.],
       [ 1.,  1.,  1.],
       [ 0.,  1.,  1.]])
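One thing worth noting: this version overwrites x in place, so the original activations are lost and the caller's array is modified. If that matters, here is a minimal non-mutating sketch (the name relu_derivative is my own, and it keeps the question's convention of returning 0 at x == 0) using np.where:

>>> import numpy as np
>>> def relu_derivative(x):
...     # Build a new array: 1.0 where x > 0, otherwise 0.0
...     # (including x == 0, as in the question). Leaves x untouched.
...     return np.where(x > 0, 1.0, 0.0)
>>> relu_derivative(np.array([[-0.5, 0.0], [0.3, 2.0]]))
array([[ 0.,  0.],
       [ 1.,  1.]])

An equivalent alternative is (x > 0).astype(x.dtype), which produces the same 0/1 pattern without modifying x.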