Tags: python, tensorflow, infix-operator, prefix-operator, xla

Is it okay to use python operators for tensorflow tensors?


TL;DR
Is (a and b) equivalent to tf.logical_and(a, b) in terms of optimization and performance? (a and b are tensorflow tensors)

Details:
I use python with tensorflow. My first priority is to make the code run fast and my second priority is to make it readable. I have working and fast code that, for my personal feeling, looks ugly:

@tf.function
# @tf.function(jit_compile=True)
def my_tf_func():
    # ...

    a = ... # some tensorflow tensor
    b = ... # another tensorflow tensor

    # currently ugly: prefix notation with tf.logical_and
    c = tf.math.count_nonzero(tf.logical_and(a, b))

    # more readable alternative: infix notation:
    c = tf.math.count_nonzero(a and b)

    # ...

The code that uses prefix notation works and runs fast, but I don't find it very readable (it's called prefix notation because the name of the operation, logical_and, comes before the operands a and b).

Can I use infix notation, i.e. the alternative at the end of above code, with usual python operators like and, +, -, or == and still get all the benefits of tensorflow on the GPU and compile it with XLA support? Will it compile to the same result?

The same question applies to unary operators like not vs. tf.logical_not(...).

This question was crossposted at https://software.codidact.com/posts/289588 .


Solution

  • In general, no.

    It depends on the specific operator. As you may know, Python's data model allows overriding the behavior of operators by re-defining special methods of a class. TensorFlow re-defines most (if not all) of them for its Tensor class, and thankfully also documents their behavior, so you can easily check the documentation for tf.Tensor.

    Some operators might behave as you intend them to. For example, tensor_a + tensor_b is the same as tensor_a.__add__(tensor_b) or tf.math.add(tensor_a, tensor_b).
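This dispatch is plain Python data-model machinery, which a minimal stand-in class can illustrate (FakeTensor below is hypothetical, not TensorFlow; TensorFlow's own __add__ would call tf.math.add instead):

```python
# Sketch of how `a + b` resolves through the __add__ special method --
# the same hook TensorFlow overrides on tf.Tensor.
class FakeTensor:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # TensorFlow's override would delegate to tf.math.add here.
        return FakeTensor(self.value + other.value)

a = FakeTensor(2)
b = FakeTensor(3)
print((a + b).value)         # 5
print(a.__add__(b).value)    # 5 -- identical dispatch path
```

Both expressions go through exactly the same method, which is why the infix form carries no performance penalty for operators TensorFlow overrides this way.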

    Some of them might not. For example, if not some_tensor: ... will raise an exception inside a tf.function, because the __bool__ method of tf.Tensor is specifically designed to avoid misuse and raises an exception whenever a tensor is used as a Python boolean in graph mode (in eager mode, a scalar tensor can be evaluated as a bool). Note that not t always goes through __bool__, so it cannot yield a tensor. By contrast, ~t (which calls __invert__) will be tf.logical_not(t) if t is a boolean tensor (its dtype is bool), but will perform a bitwise negation for non-boolean tensors (source). Therefore, you will have to be careful when you use operators instead of explicit methods.
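The difference between the blocked boolean protocol and the overridable ~ operator can be sketched with a simplified stand-in (GuardedTensor below is hypothetical, not TensorFlow; it only mimics the graph-mode behavior described above):

```python
# Sketch: __bool__ raises, so `if t:`, `not t`, `and`, `or` all fail,
# while `~` dispatches to __invert__, which can branch on dtype the way
# tf.Tensor does (logical_not for bool, bitwise invert otherwise).
class GuardedTensor:
    def __init__(self, value, dtype="bool"):
        self.value = value
        self.dtype = dtype

    def __bool__(self):
        raise TypeError("using a tensor as a Python bool is not allowed")

    def __invert__(self):
        if self.dtype == "bool":
            return GuardedTensor(not self.value)           # like tf.logical_not
        return GuardedTensor(~self.value, self.dtype)      # bitwise negation

t = GuardedTensor(True)
print((~t).value)                          # False -- via __invert__

u = GuardedTensor(5, dtype="int32")
print((~u).value)                          # -6 -- bitwise, not logical

try:
    not t                                  # calls __bool__
except TypeError as e:
    print(e)                               # raises, as in graph mode
```

The point is structural: `not` has no special method of its own, so there is no way for a library to make it return a tensor.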

    In your example, tf.logical_and(a, b) can be written as a & b according to the documentation for tf.logical_and(). The expression a and b will raise an exception for the same reason as above: Python does not let you override the short-circuiting and and or operators (see here), so __bool__ will be called on the operands.
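The same stand-in pattern shows why & works where and does not (AndTensor below is a hypothetical illustration, not TensorFlow; its __and__ plays the role of tf.logical_and):

```python
# Sketch: `&` dispatches to the overridable __and__ special method,
# while `and` short-circuits via __bool__ and cannot be overridden.
class AndTensor:
    def __init__(self, value):
        self.value = value

    def __bool__(self):
        raise TypeError("using a tensor as a Python bool is not allowed")

    def __and__(self, other):
        # TensorFlow's override would delegate to tf.logical_and here.
        return AndTensor(self.value and other.value)

a, b = AndTensor(True), AndTensor(False)
print((a & b).value)      # False -- via __and__

try:
    a and b               # evaluates bool(a) first
except TypeError as e:
    print(e)              # raises, just like tf.Tensor in graph mode
```

So the readable infix form you want is c = tf.math.count_nonzero(a & b), and it compiles to the same graph op as the tf.logical_and version.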