pytorch, reinforcement-learning, pytorch-geometric, graph-neural-network

PyTorch Geometric graph batching without DataLoader for reinforcement learning


I am using PyTorch Geometric to build a reinforcement learning algorithm, and I would therefore like to avoid the built-in DataLoader, since I generate data/observations on the fly. However, I am running into an issue when passing a batch of PyTorch Geometric graphs through my network. I keep the PyG graphs in a numpy memory array; I sample from this memory and try to push the sample through the neural network (NN).

Pushing a single graph through the NN works fine: I get a representation for each node. With a batch, however, issues arise. Normally I would create a tensor from the sampled numpy array, but PyTorch cannot do this because it cannot handle the PyG data type. I therefore create a Batch using PyTorch Geometric's built-in functionality. It goes through the neural network, but the output dimension looks wrong: the graphs seem to be combined into a single object and passed through as one large graph. I was expecting an output of shape [batch_size, n_nodes], not [batch_size * n_nodes]. Am I doing this correctly? Is there a better way of handling this that avoids the dimensionality issue? I do not trust that I can simply split the output array every n_nodes.
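
For reference, here is a minimal sketch of what Batch.from_data_list appears to do with two hand-made 3-node graphs: the node features of all graphs are stacked along dimension 0, the edge indices of later graphs are offset by the node counts of earlier ones, and a batch vector records which graph each node belongs to.

from torch_geometric.data import Batch, Data
import torch as T

# two tiny graphs with 3 nodes each
g1 = Data(x=T.randn(3, 1), edge_index=T.tensor([[0, 1], [1, 2]]))
g2 = Data(x=T.randn(3, 1), edge_index=T.tensor([[0, 2], [1, 0]]))

b = Batch.from_data_list([g1, g2])
print(b.x.shape)     # torch.Size([6, 1]) -> nodes stacked along dim 0
print(b.batch)       # tensor([0, 0, 0, 1, 1, 1]) -> node-to-graph assignment
print(b.edge_index)  # g2's edge indices are offset by g1's node count (3)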

One option is a for-loop that pushes each graph through the forward pass individually, but that is quite inefficient. Perhaps there is a simple setting I am missing? I have included a working example below.

import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch_geometric.nn import GCNConv
from torch_geometric.data import Batch
from torch_geometric.data import Data
import numpy as np


class DeepNetworkGCN(nn.Module):
    def __init__(self, lr=0.001, input_dims=[1], fc1_dims=128, fc2_dims=128, out_dims=[1]):
        super(DeepNetworkGCN, self).__init__()

        # GCN part of network
        self.GCNconv1 = GCNConv(*input_dims, fc1_dims)
        self.GCNconv2 = GCNConv(fc1_dims, fc2_dims)

        # project node embeddings to the output dimension
        self.fc1 = nn.Linear(fc2_dims, *out_dims)

        self.optimizer = optim.Adam(self.parameters(), lr=lr)
        self.loss = nn.MSELoss()
        self.device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')
        self.to(self.device)

    def forward(self, state):
        # Process graph data using GCN layers
        x = self.GCNconv1(state.x, state.edge_index)
        x = F.relu(x)
        x = self.GCNconv2(x, state.edge_index)

        # Final fully connected layer
        out = self.fc1(x)

        return out


def random_pyg_graph(num_nodes=3):
    # random node features, one scalar feature per node
    node_features = T.randint(0, 5, (num_nodes, 1), dtype=T.float)

    # random edge indices in COO format, shape [2, num_edges]
    edge_index = T.randint(0, num_nodes, (2, num_nodes * 2))

    # remove self-loops
    edge_index = edge_index[:, edge_index[0] != edge_index[1]]

    # random edge features, one per remaining edge ([num_edges, 1])
    edge_features = T.randn(edge_index.size(1), 1)

    # assemble the graph
    graph_data = Data(x=node_features, edge_index=edge_index, edge_attr=edge_features)

    return graph_data


# setup example
batch_size = 3
memory = np.zeros(batch_size, dtype=object)

# fill memory
for i in range(batch_size):
    memory[i] = random_pyg_graph()

# define model
CNN = DeepNetworkGCN()

# test for single PyG
output = CNN.forward(memory[0])
print(output)
# one output per node, e.g.
# tensor([[0.3770],
#        [0.6119],
#        [0.2014]], grad_fn=<AddmmBackward0>)

# test for numpy.ndarray: FAILS!
# PyTorch cannot convert an object array of PyG Data into a tensor
# output = CNN.forward(memory[:])  # raises an error

# Create batch and do forward pass.
output = CNN.forward(Batch.from_data_list(memory[:]))
print(output)
# output dimension is weird: [n_nodes * batch_size, 1] instead of [batch_size, n_nodes]
# tensor([[ 0.0173],
#         [ 0.0316],
#         [ 0.0282],
#         [ 0.0147],
#         [-0.0201],
#         [-0.0264],
#         [ 0.0147],
#         [-0.0084],
#         [ 0.0021]], grad_fn=<AddmmBackward0>)



Solution

  • Whether I run through the batch one by one or create a Batch, I get the same numerical results from the network; the batched output just has to be reshaped into the expected [batch_size, n_nodes] form. I find it odd that PyTorch Geometric does not do this automatically, and I don't know if this is the "correct" way of doing it, but it seems to be the best alternative/solution. For graphs with varying node counts, where a plain reshape is not safe, see the to_dense_batch sketch in the next bullet.

    # setup example
    batch_size = 3
    num_nodes = 3
    memory = np.zeros(batch_size, dtype=object)
    
    # fill memory
    for i in range(batch_size):
        memory[i] = random_pyg_graph(num_nodes=num_nodes)
    
    # define model
    CNN = DeepNetworkGCN()
    
    # test for single PyG
    for i in range(len(memory)):
        output = CNN.forward(memory[i])
        print(output)
    # tensor([[-0.1082],
    #         [-0.1337],
    #         [-0.1323]], grad_fn=<AddmmBackward0>)
    # tensor([[-0.0894],
    #         [-0.0903],
    #         [-0.0789]], grad_fn=<AddmmBackward0>)
    # tensor([[-0.1073],
    #         [-0.1131],
    #         [-0.1131]], grad_fn=<AddmmBackward0>)
    
    # Create batch and do forward pass.
    output = CNN.forward(Batch.from_data_list(memory[:]))
    print(output)
    
    # tensor([[-0.1082],
    #         [-0.1337],
    #         [-0.1323],
    #         [-0.0894],
    #         [-0.0903],
    #         [-0.0789],
    #         [-0.1073],
    #         [-0.1131],
    #         [-0.1131]], grad_fn=<AddmmBackward0>)
    
    print(output.reshape(batch_size, num_nodes))
    
    # tensor([[-0.1082, -0.1337, -0.1323],
    #         [-0.0894, -0.0903, -0.0789],
    #         [-0.1073, -0.1131, -0.1131]], grad_fn=<ViewBackward0>)
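
  • Note that reshape(batch_size, num_nodes) only works because every graph here has the same number of nodes. If the graphs can vary in size, torch_geometric.utils.to_dense_batch should do the split safely using the batch vector that Batch.from_data_list creates (a sketch, reusing CNN and memory from above):

    from torch_geometric.utils import to_dense_batch

    batch = Batch.from_data_list(memory[:])
    output = CNN.forward(batch)  # flat shape: [total_nodes, 1]

    # split the flat node dimension back into [batch_size, max_num_nodes, 1];
    # `mask` marks real nodes (True) vs. padding (False)
    dense, mask = to_dense_batch(output, batch.batch)
    print(dense.shape)
    # torch.Size([3, 3, 1])

    # equivalent to the manual reshape when all graphs have the same size
    print(T.allclose(dense.squeeze(-1), output.reshape(batch_size, num_nodes)))
    # True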