tensorflow tensorflow-serving

Modifying a TensorFlow SavedModel .pb file for inference with a custom op


I have a TensorFlow model trained in Python, exported to a .pb file, and then served with TensorFlow Serving.

I have written a custom op that greatly speeds up inference for some of the operators in this model, but it only works at inference time -- I can't use the custom op during training.

I am wondering whether it's possible to use this custom op with the .pb file in TensorFlow Serving. I figure I will probably have to edit the .pb file so that it uses my custom op in place of the original op; TensorFlow Serving should then look up the custom op implementation, which I can link into its runtime.

So -- how does one go about modifying a TensorFlow .pb file to swap out operators? Is there example code I can refer to?


Solution

  • Your best bet, if you for some reason can't train with the original ops, is probably proto surgery. I would look for tools that let you convert the proto to text (ascii) format, modify it, and convert it back to binary format. I found this gist of someone doing just that for a SavedModel. You could then write tooling on top of that to replace the original op with your custom op.
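    As a rough sketch of the proto-surgery idea: a saved_model.pb is just a serialized `SavedModel` protobuf, so you can parse it, rewrite the `op` field of matching `NodeDef`s, and serialize it back -- no text round trip strictly required. This assumes your custom op has the same inputs, outputs, and attrs as the op it replaces; the op names and file paths below are placeholders, not anything from the original question.

    ```python
    def swap_op_type(graph_def, original_op, custom_op):
        """Replace every node whose op type is `original_op` with `custom_op`.

        Works on a GraphDef message (or anything exposing a `.node` list of
        objects with an `.op` string field). Returns how many nodes changed.
        """
        count = 0
        for node in graph_def.node:
            if node.op == original_op:
                node.op = custom_op
                count += 1
        return count


    def rewrite_saved_model(in_path, out_path, original_op, custom_op):
        """Load a saved_model.pb, swap the op type in every meta graph,
        and write the modified proto back out."""
        # Imported here so the swap helper above stays dependency-free.
        from tensorflow.core.protobuf import saved_model_pb2

        sm = saved_model_pb2.SavedModel()
        with open(in_path, "rb") as f:
            sm.ParseFromString(f.read())

        total = 0
        for meta_graph in sm.meta_graphs:
            total += swap_op_type(meta_graph.graph_def, original_op, custom_op)
            # The op may also appear inside functions in the function library.
            for func in meta_graph.graph_def.library.function:
                for node in func.node_def:
                    if node.op == original_op:
                        node.op = custom_op
                        total += 1

        with open(out_path, "wb") as f:
            f.write(sm.SerializeToString())
        return total


    # Hypothetical usage -- "MatMul" and "MyFastMatMul" are illustrative:
    # rewrite_saved_model("saved_model.pb", "saved_model_patched.pb",
    #                     "MatMul", "MyFastMatMul")
    ```

    If you prefer the human-readable route the answer describes, `google.protobuf.text_format.MessageToString(sm)` and `text_format.Parse(text, sm)` give you the ascii round trip, so you can inspect or hand-edit the graph between the two conversions. Either way, TensorFlow Serving still needs the custom op's kernel compiled and linked into (or loaded by) its binary for the rewritten graph to run.
    
    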