python, pytorch, automatic-differentiation

Any faster, more memory-efficient alternative to torch.autograd.functional.jacobian(model.decoder, latent_l)?


I have a decoder, model.decoder, composed of a series of Convolutional, BatchNorm, and ReLU layers, and an 8-dimensional latent vector latent_l of shape (1, 8, 1, 1), where 1 is the batch size. Calling torch.autograd.functional.jacobian(model.decoder, latent_l) takes a huge amount of time. Is there a fast approximation for this Jacobian?
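For concreteness, here is a minimal sketch of the setup; the toy decoder below is only a stand-in for my actual model.decoder, whose exact architecture isn't shown:

    import torch
    import torch.nn as nn

    # Toy stand-in for model.decoder: ConvTranspose/BatchNorm/ReLU blocks that
    # upsample a (1, 8, 1, 1) latent to an image.
    decoder = nn.Sequential(
        nn.ConvTranspose2d(8, 32, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
    ).eval()  # eval() so BatchNorm uses running stats and the decoder is a fixed function

    latent_l = torch.randn(1, 8, 1, 1)

    # The slow call: the default reverse-mode strategy does one backward pass
    # per output element.
    jac = torch.autograd.functional.jacobian(decoder, latent_l)
    print(jac.shape)  # torch.Size([1, 1, 8, 8, 1, 8, 1, 1]) for this toy decoder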

There is jacrev, but I am not sure whether it works for this example, where we pass the decoder as a whole and compute the Jacobian of its output with respect to the latent vector.
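For reference, jacrev does accept an nn.Module directly, since a module is itself a callable; a sketch of what that would look like with the toy decoder above (torch.func is available in PyTorch 2.0+):

    import torch

    decoder.eval()  # avoid BatchNorm mutating its running stats under torch.func transforms

    # Reverse mode: one vector-Jacobian product per output element.
    jac_rev = torch.func.jacrev(decoder)(latent_l)

    # Forward mode: one Jacobian-vector product per input element -- only 8 here,
    # so it is usually the better fit when outputs vastly outnumber inputs.
    jac_fwd = torch.func.jacfwd(decoder)(latent_l)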

When I use torch.autograd.functional.jacobian(model.decoder, latent_l, vectorize=True), GPU memory consumption increases drastically and the program crashes. Is there an efficient way of doing this in PyTorch?


Solution

  • This seems to work fine: torch.autograd.functional.jacobian(model.decoder, latent_l, strategy="forward-mode", vectorize=True). Forward mode builds the Jacobian column by column with Jacobian-vector products, so it needs only one forward pass per input element (8 here) rather than one backward pass per output element; see the sketch after this bullet.
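A minimal sketch of that call, reusing the toy decoder from the question (note that strategy="forward-mode" requires vectorize=True):

    import torch

    # Forward-mode builds the Jacobian column by column via Jacobian-vector
    # products: one forward pass per input element (1*8*1*1 = 8 here), instead
    # of one backward pass per output element as in the default reverse mode.
    jac = torch.autograd.functional.jacobian(
        decoder,
        latent_l,
        strategy="forward-mode",
        vectorize=True,  # mandatory when strategy="forward-mode"
    )

Because the latent has only 8 elements while the decoder output can have thousands of pixels, forward mode trades the many backward passes of the default strategy for just 8 vectorized forward passes, which is why it is both faster and far lighter on memory here.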