python · tensorflow · tegra · tensorrt · nvidia-jetson

How to do inference using TensorFlow-GPU models on Tegra X2?


I am new to the Jetson Tegra X2 (TX2) board.

I plan to run my tensorflow-gpu models on the TX2 board and see how they perform there. These models were trained and tested on a GTX GPU machine.

On the TX2 board, the full JetPack image does not include TensorFlow, so TensorFlow needs to be built or installed separately; I have seen several tutorials on this and tried them. My Python files train.py and test.py expect tensorflow-gpu.

Now I am unsure whether building tensorflow-gpu on the TX2 board is the right way to go.

There is also NVIDIA TensorRT on the TX2, which should do part of the job, but how? And is that the right approach?

Will TensorFlow and TensorRT work together to replace tensorflow-gpu? If so, how? And what modifications will I have to make to my train and test Python files?

Do I really need to build TensorFlow for the TX2 at all? I only need inference; I don't want to do training there.

I have studied different blogs and tried several options, but now things are a bit messed up.

My simple question is:

What are the steps to run inference on a Jetson TX2 board using TensorFlow-GPU deep learning models trained on a GTX machine?


Solution

  • The easiest way is to install the NVIDIA-provided wheel: https://docs.nvidia.com/deeplearning/dgx/install-tf-jetsontx2/index.html

    All the dependencies are already installed by JetPack.

    After you install TensorFlow using the wheel, you can use it the same way you use TensorFlow on any other platform. For inference, copy a trained TensorFlow model onto the TX2 and run your TensorFlow inference scripts on it — no code changes are required just because you are on Tegra.

    You can also optimize your TensorFlow models by passing them through TF-TRT: https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html There is just one API call that does the optimization: create_inference_graph(...). This optimizes the TensorFlow graph (mostly by fusing nodes) and also lets you build the model for lower precision, which gives a further speedup.
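As a sketch of that single optimization call, the snippet below uses the TF 1.x contrib-era TF-TRT API that shipped with the JetPack-compatible wheel; the model path and the output node name ("logits") are placeholders for your own model. TensorRT-compatible subgraphs get fused into engine nodes, and the rest of the graph stays plain TensorFlow, so the optimized GraphDef is loaded and run exactly like the original.

```python
# Sketch of TF-TRT optimization (TF 1.x contrib API on the Jetson wheel).
# "model/frozen_model.pb" and "logits" are placeholders for your model.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph trained on the GTX machine.
frozen_graph = tf.GraphDef()
with tf.gfile.GFile("model/frozen_model.pb", "rb") as f:
    frozen_graph.ParseFromString(f.read())

# The one optimization call: fuse TensorRT-compatible nodes and,
# optionally, lower the precision for extra speed on the TX2.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["logits"],                # placeholder output node name
    max_batch_size=1,                  # largest batch you will feed at inference
    max_workspace_size_bytes=1 << 25,  # scratch memory TensorRT may use
    precision_mode="FP16")             # the TX2 GPU has fast FP16

# Import and run the optimized graph like any other GraphDef.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(trt_graph, name="")
```

Note this contrib module only exists in the TF 1.x builds NVIDIA distributed for Jetson; later TensorFlow releases moved TF-TRT to a different namespace, so check which wheel you installed before copying the import.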