I'm trying to use Theano with the GPU. My OS is Ubuntu 16.04.
First, running import theano prints:
Using cuDNN version 5110 on context None
Mapped name None to device cuda0: GeForce GTX 1080 (0000:01:00.0)
To check whether my GPU is being used, I run the test script from the Theano documentation.
My ~/.theanorc is
[global]
device = cuda0
floatX = float32
[nvcc]
fastmath = True
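Equivalently, the same settings can be passed per run through the THEANO_FLAGS environment variable instead of ~/.theanorc (my_script.py below is a placeholder for your own script):

```shell
# Select the new gpuarray backend and float32 for a single run
THEANO_FLAGS='device=cuda0,floatX=float32' python my_script.py
```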
In this case the test prints:
[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, (False,))>),
HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.191431 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761 1.62323296]
Used the cpu
But with the old backend (device = gpu0) it prints:
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.199280 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761 1.62323296]
Used the gpu
So I think something is wrong with CUDA. How can I check whether it's OK? Why is "context" None? Why does the test say "Used the cpu"?
Try replacing cuda0 with cuda in your .theanorc.
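That is, the [global] section would become:

```
[global]
device = cuda
floatX = float32
```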
I was getting the same warning-like text after importing theano:
Using cuDNN version 5110 on context None Mapped name None to device cuda: GeForce GT 750M (0000:01:00.0)
I went ahead and trained a DNN anyway, and it ran much faster than the same code had on the CPU. So the text doesn't seem to mean the GPU isn't working.
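The printed graphs support this: in both of your runs, every node in the compiled graph (GpuElemwise, HostFromGpu) is a GPU op, and the loop timings are nearly identical, so the computation ran on the GPU either way. As a minimal sketch (my own simplified heuristic, not the documentation script's exact check), a name-based test over the printed toposort would classify both backends' output as GPU runs:

```python
def ran_on_gpu(node_names):
    """Heuristic: treat the run as GPU-backed if every node name in the
    compiled graph's toposort mentions 'Gpu'. This matches both backends,
    whose names differ only in detail ('HostFromGpu(gpuarray)' in the new
    gpuarray backend vs. 'HostFromGpu' in the old CUDA backend)."""
    return all('Gpu' in name for name in node_names)

# Node names as printed by the new gpuarray backend (from the question):
new_backend = ['GpuElemwise{exp,no_inplace}', 'HostFromGpu(gpuarray)']
# Node names as printed by the old CUDA backend:
old_backend = ['GpuElemwise{exp,no_inplace}', 'HostFromGpu']
# A CPU-only graph, for contrast:
cpu_graph = ['Elemwise{exp,no_inplace}']

print(ran_on_gpu(new_backend))  # → True
print(ran_on_gpu(old_backend))  # → True
print(ran_on_gpu(cpu_graph))    # → False
```

By this reading, the "Used the cpu" message with the new backend reflects the script's outdated detection logic rather than an actual fallback to the CPU.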