I use a server that has CUDA 7.5, but it does not have cuDNN installed.
Is it possible to install cuDNN and set up all the links to CUDA, without root access, so that all applications on Ubuntu 14.04 can use it?
I have tried the solution from this page: Installing cuDNN for Theano without root access, but it did not work for me. I verified this by building Caffe (http://caffe.berkeleyvision.org/) and checking its configuration with cmake: I created a directory caffe/build and ran cmake .. from there. If the configuration were correct, I would see these lines:
-- Found cuDNN (include: /usr/local/cuda-7.0/include, library: /usr/local/cuda-7.0/lib64/libcudnn.so)
-- NVIDIA CUDA:
-- Target GPU(s) : Auto
-- GPU arch(s) : sm_30
-- cuDNN : Yes
But instead I saw:
-- cuDNN : Not found
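For reference, this is roughly how I ran that check (a sketch; it assumes Caffe was already cloned into a directory named caffe):
cd caffe
mkdir build && cd build
cmake ..    # then look for the cuDNN line in the configuration summary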
P.S. I also need to run Nematus: https://github.com/rsennrich/nematus
What is the best way to install cuDNN locally and link it against the global CUDA installation on the server?
It is possible to use cuDNN with the CUDA installation on a server; here is what I did to make it work. First, you simply need to create a directory in your local space:
$HOME/local
and make it contain include and lib folders (most of you probably already have such local folders):
$HOME/local/include
$HOME/local/lib
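For example, a minimal sketch of that step:
mkdir -p $HOME/local/include
mkdir -p $HOME/local/lib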
Then download cuDNN and copy the contents of its include and lib64 folders into the local include and lib folders you just made, respectively.
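A sketch of that step; the archive name below is only an example, so use whichever cuDNN version you actually downloaded (the tarball normally unpacks into a cuda/ directory):
tar -xzf cudnn-7.5-linux-x64-v5.1.tgz         # example archive name
cp cuda/include/cudnn.h $HOME/local/include/
cp -P cuda/lib64/libcudnn* $HOME/local/lib/   # -P keeps the library symlinks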
Finally, add these two environment variables to your .bashrc file:
export CPATH=$CPATH:$HOME/local/include
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/local/lib
It should work after that.
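To confirm the setup, you can reload the variables and re-run the CMake check from the question (a sketch; it assumes the caffe/build directory from above, and you may need to clear the old CMake cache first):
source ~/.bashrc
cd caffe/build
rm -f CMakeCache.txt    # optional: force CMake to re-detect cuDNN
cmake ..                # the summary should now show "cuDNN : Yes"
# If the linker still cannot find libcudnn at build time, also exporting
# LIBRARY_PATH=$HOME/local/lib may help (GCC searches it at link time).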
By the way, if you run into an 'out of memory' error after successfully installing cuDNN, enter this line in the terminal before running your code:
export CUDA_VISIBLE_DEVICES=0
to select which GPU device is used (here, device 0).
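If device 0 is busy on a shared server, you can check which GPU is free and pick another index (a sketch; the index 1 below is just an example):
nvidia-smi                      # shows memory usage per GPU
export CUDA_VISIBLE_DEVICES=1   # expose only GPU 1 to your program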