I wrote a custom activation function in PyTorch, but it is painfully slow:
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyRelu(nn.Module):
    def __init__(self):
        super(NoisyRelu, self).__init__()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        relu_result = F.relu(x)
        return relu_result + torch.randn(x.size()).to(x.device) * 0.05
Is there any way I can accelerate it? Decorating it with torch.jit.script didn't help either.
Generate the random tensor directly on the device. `torch.randn(x.size()).to(x.device)` first allocates the noise on the CPU and then copies it over, which forces a host-to-device transfer on every forward pass. Passing the device to `torch.randn` avoids that:

torch.randn(x.size(), device=x.device)
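
A minimal sketch of the module with just that one change applied (the 0.05 noise scale is kept from your code); `torch.randn_like(x)` would work equally well, since it matches `x`'s size, dtype, and device:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyRelu(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample the noise directly on x's device (and in x's dtype),
        # so no CPU tensor is created and no transfer is needed
        noise = torch.randn(x.size(), device=x.device, dtype=x.dtype)
        return F.relu(x) + noise * 0.05

On a CUDA tensor this removes the per-call host-to-device copy, which is usually the dominant cost here; remember to call torch.cuda.synchronize() before and after when timing it.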