tensorflow computer-vision tensorflow-lite tensorflow-hub

TensorFlow Hub example throws an error when float16 is enabled


I am trying to load a model from TensorFlow Hub using the example code. It works perfectly with FP32. As soon as I add tf.keras.mixed_precision.set_global_policy('mixed_float16') to enable mixed precision, it raises an error. It looks like an input-signature mismatch, yet the same code works perfectly with FP32. Here is the reproducible code:

import tensorflow as tf
import tensorflow_hub as hub
IMAGE_SIZE = (224,224)

class_names = ['cat','dog']

# If you comment out the following line, the code works fine.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
# --------

model_handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"
do_fine_tuning = False
print("Building model with", model_handle)
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(model_handle, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(len(class_names),
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()

The following error is thrown:

Building model with https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [8], in <cell line: 4>()
      2 do_fine_tuning = False
      3 print("Building model with", model_handle)
----> 4 model = tf.keras.Sequential([
      5     tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
      6     hub.KerasLayer(model_handle, trainable=do_fine_tuning),
      7     tf.keras.layers.Dropout(rate=0.2),
      8     tf.keras.layers.Dense(len(class_names),
      9                           kernel_regularizer=tf.keras.regularizers.l2(0.0001))
     10 ])
     11 model.build((None,)+IMAGE_SIZE+(3,))
     12 model.summary()

File ~/miniconda3/envs/fahtx/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py:587, in no_automatic_dependency_tracking.<locals>._method_wrapper(self, *args, **kwargs)
    585 self._self_setattr_tracking = False  # pylint: disable=protected-access
    586 try:
--> 587   result = method(self, *args, **kwargs)
    588 finally:
    589   self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

File ~/miniconda3/envs/fahtx/lib/python3.8/site-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File /tmp/__autograph_generated_fileo7avm3_o.py:74, in outer_factory.<locals>.inner_factory.<locals>.tf__call(self, inputs, training)
     72     result = ag__.converted_call(ag__.ld(smart_cond).smart_cond, (ag__.ld(training), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=True), fscope))), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=False), fscope)))), None, fscope)
     73 result = ag__.Undefined('result')
---> 74 ag__.if_stmt(ag__.not_(ag__.ld(self)._has_training_argument), if_body_3, else_body_3, get_state_3, set_state_3, ('result', 'training'), 1)
     76 def get_state_6():
     77     return (result,)

File /tmp/__autograph_generated_fileo7avm3_o.py:72, in outer_factory.<locals>.inner_factory.<locals>.tf__call.<locals>.else_body_3()
     70     training = False
     71 ag__.if_stmt(ag__.ld(self).trainable, if_body_2, else_body_2, get_state_2, set_state_2, ('training',), 1)
---> 72 result = ag__.converted_call(ag__.ld(smart_cond).smart_cond, (ag__.ld(training), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=True), fscope))), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=False), fscope)))), None, fscope)

File /tmp/__autograph_generated_fileo7avm3_o.py:72, in outer_factory.<locals>.inner_factory.<locals>.tf__call.<locals>.else_body_3.<locals>.<lambda>()
     70     training = False
     71 ag__.if_stmt(ag__.ld(self).trainable, if_body_2, else_body_2, get_state_2, set_state_2, ('training',), 1)
---> 72 result = ag__.converted_call(ag__.ld(smart_cond).smart_cond, (ag__.ld(training), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=True), fscope))), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(f), (), dict(training=False), fscope)))), None, fscope)

ValueError: Exception encountered when calling layer "keras_layer_3" (type KerasLayer).

in user code:

    File "/root/miniconda3/envs/fahtx/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 237, in call  *
        result = smart_cond.smart_cond(training,

    ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
      Positional arguments (4 total):
        * <tf.Tensor 'inputs:0' shape=(None, 224, 224, 3) dtype=float16>
        * False
        * False
        * 0.99
      Keyword arguments: {}
    
     Expected these arguments to match one of the following 4 option(s):
    
    Option 1:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, None, None, 3), dtype=tf.float32, name='inputs')
        * True
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 2:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, None, None, 3), dtype=tf.float32, name='inputs')
        * True
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 3:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, None, None, 3), dtype=tf.float32, name='inputs')
        * False
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 4:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, None, None, 3), dtype=tf.float32, name='inputs')
        * False
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}


Call arguments received by layer "keras_layer_3" (type KerasLayer):
  • inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float16)
  • training=False

Solution

  • It is about the target dtype. With 'mixed_float16' enabled, the global policy makes every layer compute in float16, so the hub layer receives a float16 input tensor, while the ResNet SavedModel only exposes concrete functions that accept float32 inputs. You just need to pin the InputLayer and the hub.KerasLayer to float32 with dtype=tf.float32, so the model passes the SavedModel the dtype it requires, as in the sample and notes below. I also like to include the channel count directly in IMAGE_SIZE, i.e. (224, 224, 3) rather than (224, 224); some preprocessing functions work without an explicit channel dimension, but pipelines such as resize() -> img_to_array() -> predict() or bounding-box work are less error-prone when it is spelled out.
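
    [ Policy check ]:

    A quick sketch of what the global policy actually changes (using the standard tf.keras.mixed_precision API): under 'mixed_float16' the compute dtype becomes float16 while variables stay float32, which is why the hub layer receives a float16 tensor that the SavedModel's float32-only signatures reject.

    import tensorflow as tf

    tf.keras.mixed_precision.set_global_policy('mixed_float16')
    policy = tf.keras.mixed_precision.global_policy()
    print(policy.compute_dtype)   # float16 -> dtype of the tensors each layer receives
    print(policy.variable_dtype)  # float32 -> dtype of the layer weights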

    [ Sample ]:

    import tensorflow as tf
    import tensorflow_hub as hub
    
    IMAGE_SIZE = (224, 224, 3)
    class_names = ['cat','dog']
    
    tf.keras.mixed_precision.set_global_policy('mixed_float16')
    
    model_handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"
    do_fine_tuning = False
    print("Building model with", model_handle)
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE, dtype=tf.float32),
        hub.KerasLayer(model_handle, trainable=do_fine_tuning, dtype=tf.float32),
        tf.keras.layers.Dropout(rate=0.2),
        tf.keras.layers.Dense(len(class_names),
                              kernel_regularizer=tf.keras.regularizers.l2(0.0001))
    ])
    model.build((None,) + IMAGE_SIZE)
    model.summary()
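
    [ Note on the classifier head ]:

    With the sample above, only the input and the hub layer are pinned to float32; the Dropout and Dense layers still compute in float16 under the global policy. If you keep mixed precision for that head, the TensorFlow mixed-precision guide recommends casting the final output back to float32 for numerical stability. A minimal sketch of that variant (the extra Activation layer is the only difference from the sample):

    import tensorflow as tf
    import tensorflow_hub as hub

    tf.keras.mixed_precision.set_global_policy('mixed_float16')

    IMAGE_SIZE = (224, 224, 3)
    class_names = ['cat', 'dog']
    model_handle = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5"

    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE, dtype=tf.float32),
        # The SavedModel only exposes float32 signatures, so pin this layer to float32.
        hub.KerasLayer(model_handle, trainable=False, dtype=tf.float32),
        tf.keras.layers.Dropout(rate=0.2),   # computes in float16 under the global policy
        tf.keras.layers.Dense(len(class_names),
                              kernel_regularizer=tf.keras.regularizers.l2(0.0001)),
        # Cast the final logits back to float32, as recommended for mixed precision.
        tf.keras.layers.Activation('linear', dtype='float32'),
    ])
    model.build((None,) + IMAGE_SIZE)
    model.summary()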
    

    [ Error (before the dtype fix) ]:

    ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
      Positional arguments (4 total):
        * <tf.Tensor 'inputs:0' shape=(None, 224, 224, 3) dtype=float16>
        * False
        * False
        * 0.99
      Keyword arguments: {}
    

    [ Output (with the dtype fix) ]:

    F:\temp\Python>python tf_test_mixed_float16.py
    WARNING:tensorflow:Mixed precision compatibility check (mixed_float16): WARNING
    Your GPU may run slowly with dtype policy mixed_float16 because it does not have compute capability of at least 7.0. Your GPU:
      NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
    See https://developer.nvidia.com/cuda-gpus for a list of GPUs and their compute capabilities.
    If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once
    Building model with https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5
    2022-06-17 15:02:41.319205: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2022-06-17 15:02:41.878364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4632 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
    Model: "sequential"
    _________________________________________________________________
     Layer (type)                Output Shape              Param #
    =================================================================
     keras_layer (KerasLayer)    (None, 2048)              23561152
    
     dropout (Dropout)           (None, 2048)              0
    
     dense (Dense)               (None, 2)                 4098
    
    =================================================================
    Total params: 23,565,250
    Trainable params: 4,098
    Non-trainable params: 23,561,152
    _________________________________________________________________
    
    F:\temp\Python>
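
    [ Verifying the dtypes ]:

    To double-check which layers ended up pinned to float32 and which still run under mixed_float16, you can inspect each layer's dtype policy (a small verification sketch, assuming the model variable built in the sample above):

    for layer in model.layers:
        policy = layer.dtype_policy
        print(f"{layer.name}: compute={policy.compute_dtype}, variables={policy.variable_dtype}")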