float16 can be used in NumPy, but in TensorFlow 2.4.1 it raises the error shown below.
Is float16 available only when running on an instance with a GPU that has 16-bit support?
Today, most models use the float32 dtype, which takes 32 bits of memory. However, there are two lower-precision dtypes, float16 and bfloat16, each of which takes 16 bits of memory instead. Modern accelerators can run operations faster in the 16-bit dtypes, as they have specialized hardware to run 16-bit computations and 16-bit dtypes can be read from memory faster.
NVIDIA GPUs can run operations in float16 faster than in float32, and TPUs can run operations in bfloat16 faster than float32. Therefore, these lower-precision dtypes should be used whenever possible on those devices. However, variables and a few computations should still be in float32 for numeric reasons so that the model trains to the same quality. The Keras mixed precision API allows you to use a mix of either float16 or bfloat16 with float32, to get the performance benefits from float16/bfloat16 and the numeric stability benefits from float32.
Then, when testing on a CPU, do I need to change the type to float32 manually to make it run? According to the issue [TF2.0] Change default types globally, there is currently no option to change the default float precision.
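(For context, this is roughly what such a manual, per-device switch would look like; only a sketch, since whether the float16 kernels actually exist on a given device build is part of what I'm asking.)
import tensorflow as tf
# Sketch of a manual fallback: only request float16 when a GPU is present,
# otherwise stay on float32 so that the CPU-only kernels (e.g. Range below) are found.
# Whether a given GPU build registers float16 kernels for every op is the open question.
dtype = tf.float16 if tf.config.list_physical_devices('GPU') else tf.float32
x = tf.reshape(tf.range(12, dtype=dtype), (3, 4))
print(x.dtype)
Here is the minimal reproduction: the same float16 data works in NumPy, but the equivalent call fails in TensorFlow.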
import numpy as np
np.arange(12, dtype=np.float16).reshape(3,4)
---
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., 11.]], dtype=float16)
import tensorflow as tf
tf.reshape(tf.range(12, dtype=tf.float16), (3,4))
---
NotFoundError Traceback (most recent call last)
<ipython-input-14-dbaa1413ee5c> in <module>
1 import tensorflow as tf
----> 2 tf.reshape(tf.range(12, dtype=tf.float16), (3,4))
~/conda/envs/tensorflow/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/conda/envs/tensorflow/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in range(start, limit, delta, dtype, name)
1875 delta = cast(delta, inferred_dtype)
1876
-> 1877 return gen_math_ops._range(start, limit, delta, name=name)
1878
1879
~/conda/envs/tensorflow/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py in _range(start, limit, delta, name)
7190 return _result
7191 except _core._NotOkStatusException as e:
-> 7192 _ops.raise_from_not_ok_status(e, name)
7193 except _core._FallbackException:
7194 pass
~/conda/envs/tensorflow/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6860 message = e.message + (" name: " + name if name is not None else "")
6861 # pylint: disable=protected-access
-> 6862 six.raise_from(core._status_to_exception(e.code, message), None)
6863 # pylint: enable=protected-access
6864
~/.local/lib/python3.8/site-packages/six.py in raise_from(value, from_value)
NotFoundError: Could not find device for node: {{node Range}} = Range[Tidx=DT_HALF]
All kernels registered for op Range:
device='CPU'; Tidx in [DT_INT64]
device='CPU'; Tidx in [DT_INT32]
device='CPU'; Tidx in [DT_DOUBLE]
device='CPU'; Tidx in [DT_FLOAT]
[Op:Range]
Creating the tensor with float32 first and then casting it to float16 works. Please advise why the error occurs.
import tensorflow as tf
a = tf.reshape(tf.range(12, dtype=tf.float32), (3,4))
print(f"a.dtype is {a.dtype}")
tf.cast(a, tf.float16)
---
a.dtype is <dtype: 'float32'>
<tf.Tensor: shape=(3, 4), dtype=float16, numpy=
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., 11.]], dtype=float16)>
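Creating the float16 data in NumPy and handing it to TensorFlow directly also seems to avoid the error, presumably because no Range kernel is involved (sketch):
import numpy as np
import tensorflow as tf
# Build the values as float16 in NumPy, then wrap them as a constant tensor;
# this never dispatches the Range op, so no float16 CPU kernel is needed for it.
b = tf.constant(np.arange(12, dtype=np.float16).reshape(3, 4))
print(b.dtype)  # expected: <dtype: 'float16'>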
Use:
tf.keras.backend.set_floatx('float16')
You'll see that the default for everything will be tf.float16. For instance:
import tensorflow as tf
tf.keras.backend.set_floatx('float16')
dense_layer = tf.keras.layers.Dense(1)
dense_layer.build((4,))
dense_layer.weights
[<tf.Variable 'kernel:0' shape=(4, 1) dtype=float16, numpy=
array([[-0.4214],
       [-1.031 ],
       [ 1.041 ],
       [-0.6313]], dtype=float16)>,
<tf.Variable 'bias:0' shape=(1,) dtype=float16, numpy=array([0.], dtype=float16)>]
But this isn't recommended:
Note: It is not recommended to set this to float16 for training, as this will likely cause numeric stability issues. Instead, mixed precision, which is using a mix of float16 and float32, can be used by calling tf.keras.mixed_precision.experimental.set_policy('mixed_float16'). See the mixed precision guide for details.
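A minimal sketch of that route on TF 2.4, using the experimental alias named in the note above (computation runs in float16 while variables stay float32):
import tensorflow as tf
# Mixed precision: float16 compute dtype, float32 variable dtype.
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
policy = tf.keras.mixed_precision.experimental.global_policy()
print(policy.compute_dtype, policy.variable_dtype)  # float16 float32
dense_layer = tf.keras.layers.Dense(1)
dense_layer.build((4,))
# Unlike set_floatx('float16') above, the kernel/bias variables remain float32.
print(dense_layer.weights[0].dtype)  # <dtype: 'float32'>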
Read the docs.