python-3.x, tensorflow, tensorflow-hub, elmo

How do I produce ELMo embeddings for tokenised strings without getting "Function call stack: pruned"?


I am trying to produce ELMo embeddings for batches of tokenised strings, but I keep receiving the following error:

Traceback (most recent call last):
  File "/home/lorcan/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-0d50a997dad6>", line 17, in <module>
    embeddings = elmo(tokens=tokens2, sequence_len=lens2)['elmo']
  File "/home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1605, in __call__
    return self._call_impl(args, kwargs)
  File "/home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1645, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  Incompatible shapes: [4,5,1] vs. [4,9,1024]
     [[node mul (defined at /home/lorcan/anaconda3/envs/ncr_elmo/lib/python3.6/site-packages/tensorflow_hub/module_v2.py:106) ]] [Op:__inference_pruned_4853]
Function call stack:
pruned

What is going wrong here? Are the embedding tensors just too large? I am using Python 3.6.13 with tensorflow==2.2.0, tensorflow-estimator==2.2.0, and tensorflow-hub==0.12.0.

The code below reproduces the error:

import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.load('https://tfhub.dev/google/elmo/3').signatures['tokens']

tokens = tf.convert_to_tensor(
    [[b'fetal', b'derived', b'definitive', b'erythrocyte', b'', b'', b'', b'', b''],
     [b'splenic', b'red', b'pulp', b'macrophage', b'', b'', b'', b'', b''],
     [b'juxtaglomerular', b'complex', b'cell', b'', b'', b'', b'', b'', b''],
     [b'epithelial', b'cell', b'of', b'large', b'intestine', b'', b'', b'', b'']],
    tf.string)

lens = tf.convert_to_tensor([4, 4, 3, 5], tf.int32)

embeddings = elmo(tokens=tokens, sequence_len=lens)['elmo']

Solution

  • It works for me when the trailing padding tokens are trimmed so that the width of the token matrix equals the longest real sequence, i.e. at least one row does not end in b'':

    tokens = tf.convert_to_tensor(
        [[b'fetal', b'derived', b'definitive', b'erythrocyte', b''],
         [b'splenic', b'red', b'pulp', b'macrophage', b''],
         [b'juxtaglomerular', b'complex', b'cell', b'', b''],
         [b'epithelial', b'cell', b'of', b'large', b'intestine']],
        tf.string)
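
The Incompatible shapes: [4,5,1] vs. [4,9,1024] message is consistent with this: the signature appears to build an internal mask from sequence_len (whose maximum here is 5), while the embeddings are computed over all 9 padded token columns, so the element-wise mul between mask and embeddings fails. A small sketch of a trimming helper (trim_padding is a hypothetical name, plain Python lists, no TensorFlow required) that also derives sequence_len for you:

```python
def trim_padding(token_rows, pad=b''):
    """Trim trailing padding columns so the batch width equals the
    longest real sequence, and return per-row true lengths."""
    # Count non-padding tokens in each row.
    lens = [sum(1 for t in row if t != pad) for row in token_rows]
    width = max(lens)
    # Keep only as many columns as the longest real sequence.
    return [row[:width] for row in token_rows], lens
```

The trimmed rows and lengths can then be fed through tf.convert_to_tensor(..., tf.string) and tf.convert_to_tensor(..., tf.int32) respectively before calling the signature.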