Tags: android, tensorflow, gpu, torchvision, torchscript

How to run a neural network model on Android taking advantage of the GPU?


Has anyone tried to run an object detection or CRNN model on Android? I tried to run a CRNN model (serialized PyTorch) and it takes 1 s on a Huawei P30 lite and 5 s on a Samsung J4 Core.

Huawei P30 lite
    CPU: octa-core processor
    GPU: Mali-G51 MP4

Samsung J4
    CPU: quad-core processor
    GPU: Adreno 308

GPUs in Android devices differ from dedicated GPUs in that they have no VRAM and no separate power management; the CPU and GPU share the same RAM. Before running a model on a PC with a GPU, we explicitly place the computation on the GPU, like so:

import torch

model = MyModel()  # MyModel is a torch.nn.Module defined elsewhere
model.cuda()       # move the model's parameters and buffers to the GPU

But when I run the model on Android, does it take advantage of this built-in GPU? Or is the computation faster on my Huawei because of its octa-core processor? The Huawei obviously also has a better GPU than my Samsung device.
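
For reference, the model is turned into a serialized (TorchScript) file along these lines before it is bundled with the Android app. This is only a minimal sketch: MyCRNN and the input shape are placeholders, not the exact model.

import torch

model = MyCRNN()                          # placeholder: the CRNN nn.Module
model.eval()

# Trace with a dummy input of the expected shape (placeholder shape)
example = torch.rand(1, 1, 32, 128)
traced = torch.jit.trace(model, example)

# Save the TorchScript file that the Android app loads via PyTorch Mobile
traced.save("crnn_mobile.pt")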


Solution

  • At the moment it is not possible to run PyTorch on an ARM GPU:

    Github Issue

    PyTorch Forum

    I think the differences in speed result from the different CPUs! (See the rough timing sketch below.)
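
To illustrate how strongly the CPU alone can dominate inference time, here is a small desktop-side Python timing sketch that varies the number of CPU threads PyTorch may use. The model path and input shape are placeholders, and this only approximates (not reproduces) on-device behaviour:

import time
import torch

model = torch.jit.load("crnn_mobile.pt")  # placeholder path to the serialized model
model.eval()
example = torch.rand(1, 1, 32, 128)       # placeholder input shape

for threads in (4, 8):  # roughly a quad-core vs. octa-core CPU budget
    torch.set_num_threads(threads)
    with torch.no_grad():
        start = time.time()
        for _ in range(10):
            model(example)
        print(threads, "threads:", (time.time() - start) / 10, "s per forward pass")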