Tags: tensorflow, quantization, tensorflow-lite, quantization-aware-training

create_training_graph() fails when converting MobileFaceNet to a quantization-aware model with TF-Lite


I am trying to quantize MobileFaceNet (code from sirius-ai) following the suggested approach, and I think I am hitting the same issue as this one.

When I add tf.contrib.quantize.create_training_graph() to the training graph (train_nets.py line 187, just before train_op = train(...), or inside train() in utils/common.py line 38, before the gradient computation), it does not insert quantize-aware ops into the graph to collect the dynamic range (min/max).
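
For reference, the placement I am using looks roughly like this (a minimal TF 1.x sketch with a toy two-layer model standing in for MobileFaceNet; the real code builds the network in train_nets.py):

```python
# Minimal TF 1.x sketch: the rewrite must run after the forward pass and
# loss are built, but before the training op is created.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
net = tf.layers.dense(x, 8, activation=tf.nn.relu)  # toy stand-in layers
logits = tf.layers.dense(net, 1)
loss = tf.losses.mean_squared_error(y, logits)

# Rewrites the default graph in place, inserting fake-quant ops that
# collect min/max ranges. quant_delay defers quantization during training.
tf.contrib.quantize.create_training_graph(input_graph=tf.get_default_graph(),
                                          quant_delay=0)

train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```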

I assumed I would see some additional nodes in TensorBoard, but I did not, so I believe the quantize-aware ops were never added to the training graph. Tracing through the TensorFlow source, I found that _FindLayersToQuantize() returned nothing.
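
A quick way to confirm whether the rewrite matched anything is to list the inserted nodes; contrib/quantize names them under act_quant / weights_quant scopes (a minimal check, run right after the rewrite):

```python
# After calling create_training_graph(), list the fake-quant nodes;
# an empty list means _FindLayersToQuantize() matched no layer patterns.
quant_ops = [op.name for op in tf.get_default_graph().get_operations()
             if 'act_quant' in op.name or 'weights_quant' in op.name]
print(quant_ops)  # empty in my case
```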

However, when I add tf.contrib.quantize.create_eval_graph() to rewrite the graph for evaluation, I can see quantize-aware ops such as act_quant.... Since I never successfully added the ops to the training graph, there are no corresponding weights to load in the eval graph, and I get error messages such as

Key MobileFaceNet/Logits/LinearConv1x1/act_quant/max not found in checkpoint

or

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value MobileFaceNet/Logits/LinearConv1x1/act_quant/max
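
For completeness, the eval-side flow that produces these errors looks roughly like this (a sketch; the checkpoint path is hypothetical):

```python
# TF 1.x eval sketch: create_eval_graph() inserts the same fake-quant
# variables, so restoring a checkpoint trained WITHOUT them fails with
# "Key .../act_quant/max not found in checkpoint".
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
net = tf.layers.dense(x, 8, activation=tf.nn.relu)
logits = tf.layers.dense(net, 1)

tf.contrib.quantize.create_eval_graph(input_graph=tf.get_default_graph())

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'checkpoints/model.ckpt')  # hypothetical path
```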

Does anyone know how to fix this error, or how to get a quantized MobileFaceNet with good accuracy?

Thanks!


Solution

  • Hi,

    Unfortunately, the contrib/quantize tool is now deprecated. It won't be able to support newer models, and we are not working on it anymore.

    If you are interested in QAT, I would recommend trying the new TF/Keras QAT API. We are actively developing it and providing support for it; a minimal sketch of that API follows below.
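
    For example, a minimal sketch assuming the tensorflow_model_optimization package (my illustration, not code from the answer):

```python
# Keras QAT sketch using tensorflow-model-optimization (TF 2.x).
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Wraps supported layers with fake-quant ops, replacing the old
# contrib/quantize graph rewrite.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer='adam', loss='mse')

# qat_model.fit(...) as usual, then convert with the TFLite converter:
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```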