I am trying to code the GNN example problem shown in this article: https://towardsdatascience.com/hands-on-graph-neural-networks-with-pytorch-pytorch-geometric-359487e221a8
I am using a 2016 MacBook Pro, which does not have an Nvidia graphics card.
The example uses the CUDA toolkit. Can I somehow modify the code and run it on my current laptop? I have made the dataset small enough that it does not require heavy computation and can run on my machine.
The part of the code that is giving the error is as follows:
def train():
    model.train()
    loss_all = 0
    for data in train_loader:
        data = data.to(device)
        optimizer.zero_grad()
        output = model(data)
        label = data.y.to(device)
        loss = crit(output, label)
        loss.backward()
        loss_all += data.num_graphs * loss.item()
        optimizer.step()
    return loss_all / len(train_dataset)

device = torch.device('cuda')
model = Net().to(device)  # Net = a class inherited from torch.nn.Module
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
crit = torch.nn.BCELoss()
train_loader = DataLoader(train_dataset, batch_size=batch_size)

for epoch in range(num_epochs):
    train()
The error is as follows:
AssertionError: Torch not compiled with CUDA enabled
You are using:
device = torch.device('cuda')
If you would like to use the CPU instead, change it to:
device = torch.device('cpu')
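If you want the same script to run both on this laptop and on a machine that does have a CUDA-enabled GPU, a common pattern is to pick the device at runtime with torch.cuda.is_available(). A minimal sketch (assuming Net, train_dataset, batch_size, etc. are defined as in your code):

import torch

# Use CUDA only when a CUDA-enabled build of PyTorch and a GPU are present;
# otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Net().to(device)  # Net as defined in your script
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
crit = torch.nn.BCELoss()

With this change the rest of your training loop (data.to(device), data.y.to(device)) works unmodified, since device simply resolves to 'cpu' on your MacBook.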