python, pytorch, quantitative-finance

Gradient flow in PyTorch for autocallable options


I have the following code:

import numpy as np
import torch
from torch import autograd

# Define the parameters with requires_grad=True
r = torch.tensor(0.03, requires_grad=True)
q = torch.tensor(0.02, requires_grad=True)
v = torch.tensor(0.14, requires_grad=True)
S = torch.tensor(1001.0, requires_grad=True)

# Generate random numbers and other tensors
Z = torch.randn(10000, 5)
t = torch.tensor(np.arange(1.0, 6.0))
c = torch.tensor([0.2, 0.3, 0.4, 0.5, 0.6])

# Calculate mc_S with differentiable operations
mc_S = S * torch.exp((r - q - 0.5 * v * v) * t + Z.cumsum(axis=1))

# Calculate payoff with differentiable operations
res = []
mask = 1.0
for col, coup in zip(mc_S.T, c):
    payoff = mask * torch.where(col > S, coup, torch.tensor(0.0))
    res.append(payoff)
    mask = mask * (payoff == 0)

payoffs = torch.stack(res).T   # (paths, periods) matrix of coupon payoffs
result = payoffs.sum(axis=1).mean()

# Compute gradients - breaks here
grads = autograd.grad(result, [r, q, v, S], allow_unused=True, retain_graph=True)
print(grads)

I'm trying to price an autocallable option with early knockout and need the sensitivities with respect to the input variables.

However, the way the coupons are calculated (the c tensor in the code above) breaks the computational graph, and I'm unable to obtain the gradients. Is there a way to get this code to calculate the derivatives?
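
For what it's worth, the barrier term on its own seems to be enough to reproduce the issue (toy values rather than my real inputs):

import torch

S = torch.tensor(1001.0, requires_grad=True)
col = torch.tensor([990.0, 1005.0, 1100.0])   # one toy path
coup = torch.tensor(0.3)

payoff = torch.where(col > S, coup, torch.tensor(0.0))
print(payoff.requires_grad)  # False: the result is not attached to the graph at all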

Thanks


Solution

  • torch.where(col > S, coup, torch.tensor(0.0)) gives autograd nothing to work with: the comparison col > S carries no gradient, and neither branch (coup or the constant 0) depends on your inputs, so the result is a step function with no slope anywhere. coup * torch.sigmoid(col - S) will give you a result that is close to your operation, but is differentiable.

    In your example, you select coup when col > S and 0 otherwise. In the differentiable version, you get coup when col is much greater than S and 0 when col is much less than S. In the middle you get something in between, and you will likely need to tune the scale and offset to get the behaviour you need for your application, something like torch.sigmoid(alpha * (col + beta - S) ** gamma) (keep gamma at 1, or an odd integer, so the power stays defined and keeps its sign when col + beta < S). A follow-up note after the code shows how the same idea can recover the early-knockout mask.

    import numpy as np
    import torch
    from torch import autograd
    
    # Define the parameters with requires_grad=True
    r = torch.tensor(0.03, requires_grad=True)
    q = torch.tensor(0.02, requires_grad=True)
    v = torch.tensor(0.14, requires_grad=True)
    S = torch.tensor(1001.0, requires_grad=True)
    
    # Generate random numbers and other tensors
    Z = torch.randn(10000, 5)
    t = torch.tensor(np.arange(1.0, 6.0))
    c = torch.tensor([0.2, 0.3, 0.4, 0.5, 0.6])
    
    # Calculate mc_S with differentiable operations
    mc_S = S * torch.exp((r - q - 0.5 * v * v) * t + Z.cumsum(axis=1))
    
    # Calculate payoff with differentiable operations
    res = []
    alpha = 1.0    # smoothing scale - larger means closer to the hard barrier
    beta = 0.0     # barrier offset
    gamma = 1.0    # keep at 1 (or an odd integer) so the power is defined for negative arguments
    for col, coup in zip(mc_S.T, c):
        payoff = coup * torch.sigmoid(alpha * (col + beta - S) ** gamma)
        res.append(payoff)
    
    payoffs = torch.stack(res).T
    result = payoffs.sum(axis=1).mean()
    
    # Compute gradients - the graph now reaches all four inputs
    grads = autograd.grad(result, [r, q, v, S], allow_unused=True, retain_graph=True)
    print(grads)
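
  • The code above drops the early-knockout mask from your original loop, so every coupon is paid on every path. If you want to keep the autocall behaviour, the same smoothing trick applies to the mask. The sketch below reuses the tensors defined above and keeps a soft "still alive" weight per path; the exact form of the soft mask (and the alpha used) is one possible modelling choice, not the only one.

    res = []
    alive = torch.ones(mc_S.shape[0])               # soft "not yet autocalled" weight per path
    for col, coup in zip(mc_S.T, c):
        hit = torch.sigmoid(alpha * (col - S))      # soft indicator that the barrier is breached
        res.append(alive * coup * hit)              # coupon is only paid on paths still alive
        alive = alive * (1.0 - hit)                 # paths that (softly) knocked out stop accruing
    
    payoffs = torch.stack(res).T
    result = payoffs.sum(axis=1).mean()
    grads = autograd.grad(result, [r, q, v, S])
    print(grads)

    As alpha grows, the sigmoid approaches the original hard indicator, so the price converges to your torch.where version while the gradient estimate typically gets noisier - the usual trade-off when smoothing barriers for Monte Carlo sensitivities.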