java, machine-learning, gpu, nvidia, opennlp

How to use GPU when training OpenNLP models?


I want to train an OpenNLP model using the CLI on a GPU server that I access remotely. I am familiar with utilizing the GPU when training PyTorch models, but I realized I'm not sure how this works with OpenNLP, given that it is written in Java. Will OpenNLP make use of the GPU if I train on one?

Specifically, I am thinking of this familiar snippet we use when training PyTorch models:

import torch

# Select the GPU if one is visible to PyTorch, otherwise fall back to CPU
if torch.cuda.is_available():
    dev = "cuda:0"
else:
    dev = "cpu"

Can anyone shed some light on how this works in the Java OpenNLP library? Is there an equivalent to this snippet somewhere?

I am also using this Docker image to run the CLI on my remote GPU server: https://hub.docker.com/r/casetext/opennlp/dockerfile

I believe I would also need to modify the Dockerfile to expose the GPU to the container, but I was wondering whether anything needs to change in the OpenNLP code itself first, independent of the Docker setup.


Solution

  • Apache OpenNLP does not support training on a GPU; training runs entirely on the CPU. Its built-in trainers (maxent, maxent-QN, perceptron, naive Bayes) are pure Java with no CUDA backend, so there is no device-selection API equivalent to torch.cuda.is_available(), and no Dockerfile change will make the CLI use the GPU. The only parallelism knob is the CPU thread count for maxent training, as the sketch below illustrates.
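
For illustration, here is a minimal training sketch against the OpenNLP Java API (assuming a recent opennlp-tools release; the file names here are placeholders). TrainingParameters is the entire training configuration surface, and it exposes the algorithm, iterations, cutoff, and a thread count, but nothing resembling a device setting:

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.NameSample;
import opennlp.tools.namefind.NameSampleDataStream;
import opennlp.tools.namefind.TokenNameFinderFactory;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.MarkableFileInputStreamFactory;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;
import opennlp.tools.util.TrainingParameters;

public class TrainNameFinder {
    public static void main(String[] args) throws Exception {
        // "train.txt" is a placeholder for OpenNLP-format NER training data.
        ObjectStream<NameSample> samples = new NameSampleDataStream(
            new PlainTextByLineStream(
                new MarkableFileInputStreamFactory(new File("train.txt")),
                StandardCharsets.UTF_8));

        // The closest analogue to a training config: iterations, cutoff,
        // and a CPU thread count for maxent. No GPU/device parameter exists.
        TrainingParameters params = TrainingParameters.defaultParams();
        params.put(TrainingParameters.THREADS_PARAM, "4");

        TokenNameFinderModel model = NameFinderME.train(
            "en", null, samples, params, new TokenNameFinderFactory());

        try (OutputStream out = new BufferedOutputStream(
                new FileOutputStream("en-ner-custom.bin"))) {
            model.serialize(out);
        }
    }
}

Run this with opennlp-tools on the classpath and it will use at most the configured number of CPU threads; it will never touch the GPU, which is also why the CLI inside the Docker image behaves the same way.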