I have a Keras graph with a float32 tensor of shape (?, 224, 224, 3) that I want to export to TensorFlow Serving in order to make predictions through the REST API. The problem is that I cannot send tensors as input, only base64-encoded strings, as that is a limitation of the REST API. That means that when exporting the graph, the input needs to be a string that is decoded in-graph.
How can I "inject" the new string input so that it gets converted into the old image tensor, without retraining the graph itself? I have tried several examples [1][2].
I currently have the following code for exporting:
image = tf.placeholder(dtype=tf.string, shape=[None], name='source')
signature = predict_signature_def(inputs={'image_bytes': image},
                                  outputs={'output': model.output})
I somehow need to find a way to convert image into model.input, or a way to connect the model's output to image.
Any help would be greatly appreciated!
You can use tf.decode_base64:
image = tf.placeholder(dtype=tf.string, shape=[None], name='source')
# Decode the base64 text in-graph; the placeholder remains the tensor that
# clients feed through the signature.
image_b64decoded = tf.decode_base64(image)
signature = predict_signature_def(inputs={'image_bytes': image},
                                  outputs={'output': model.output})
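For reference, a client request against this signature sends the base64 text as a plain JSON string. Here is a minimal sketch using requests, assuming the hypothetical model name mymodel on the default REST port; note that tf.decode_base64 expects web-safe base64 (- and _ instead of + and /):

import base64
import requests

with open('input.jpg', 'rb') as f:
    # urlsafe_b64encode produces the web-safe alphabet tf.decode_base64 expects
    b64_text = base64.urlsafe_b64encode(f.read()).decode('utf-8')

# 'mymodel' and the port are assumptions for illustration
response = requests.post('http://localhost:8501/v1/models/mymodel:predict',
                         json={'instances': [{'image_bytes': b64_text}]})
print(response.json())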
EDIT:
If you need to use tf.image.decode_image, you can get it to work with multiple inputs using tf.map_fn:
image = tf.placeholder(dtype=tf.string, shape=[None], name='source')
image_b64decoded = tf.decode_base64(image)
image_decoded = tf.map_fn(tf.image.decode_image, image_b64decoded, dtype=tf.uint8)
This will work as long as the images all have the same dimensions, of course. However, the result is a tensor with a completely unknown shape, because tf.image.decode_image can output a different number of dimensions depending on the type of image. You can either reshape it or use another tf.image.decode_* call so that at least you have a known number of dimensions in the tensor.
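Putting it together, one way to connect the decoded images to the existing model without retraining is to call the Keras model on the new tensor and export that graph. This is only a sketch, assuming TF 1.x with tf.keras, JPEG inputs, inputs scaled to [0, 1], and an export path of ./export/1 (the scaling and path are assumptions; adjust them to your model's preprocessing):

import tensorflow as tf

image = tf.placeholder(dtype=tf.string, shape=[None], name='source')
image_b64decoded = tf.decode_base64(image)
# decode_jpeg has a known rank, and resizing inside map_fn also handles
# inputs of different original sizes
image_resized = tf.map_fn(
    lambda img: tf.image.resize_images(
        tf.image.decode_jpeg(img, channels=3), [224, 224]),
    image_b64decoded, dtype=tf.float32)
image_scaled = image_resized / 255.0  # assumed preprocessing

# Calling the Keras model on the new tensor reuses the trained weights
output = model(image_scaled)

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'image_bytes': image}, outputs={'output': output})

builder = tf.saved_model.builder.SavedModelBuilder('./export/1')
builder.add_meta_graph_and_variables(
    tf.keras.backend.get_session(), [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature})
builder.save()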