python, tensorflow, tensorflow-datasets, tensorflow-estimator

Defining the input function for a TensorFlow pre-made estimator


I am trying to use the pre-made estimator tf.estimator.DNNClassifier on the MNIST dataset, which I load from tensorflow_datasets.

I follow these four steps. First, I build the dataset pipeline and define the input function:

## Step 1
import tensorflow as tf
import tensorflow_datasets as tfds

mnist, info = tfds.load('mnist', with_info=True)

ds_train_orig, ds_test = mnist['train'], mnist['test']

def train_input_fn(dataset, batch_size):
    dataset = dataset.map(lambda x:({'image-pixels':tf.reshape(x['image'], (-1,))}, 
                                    x['label']))
    return dataset.shuffle(1000).repeat().batch(batch_size)
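
As a quick eager-mode sanity check (not part of the four steps themselves, just to confirm the pipeline yields the expected structure), pulling a single batch from the input function should look something like this:

# Sanity check (eager mode): one batch from the input function should be a
# ({'image-pixels': (32, 784)} feature dict, (32,) labels) pair.
for features, labels in train_input_fn(ds_train_orig, batch_size=32).take(1):
    print(features['image-pixels'].shape)  # expected: (32, 784)
    print(labels.shape)                    # expected: (32,)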

Then, in step 2, I define the feature column with a single key and shape 784:

## Step 2:
image_feature_column = tf.feature_column.numeric_column(key='image-pixels',
                                                        shape=(28*28))

image_feature_column
NumericColumn(key='image-pixels', shape=(784,), default_value=None, dtype=tf.float32, normalizer_fn=None)
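
Purely as an illustrative sketch (the DenseFeatures layer and the zero-filled batch below are only for checking the column, not part of the original steps), the feature column can be verified by feeding a feature dict through tf.keras.layers.DenseFeatures:

import numpy as np

# Illustrative check: the column should pick up the 'image-pixels' key and
# produce a dense float32 tensor of shape (batch, 784).
dense_features = tf.keras.layers.DenseFeatures([image_feature_column])
fake_batch = {'image-pixels': np.zeros((4, 784), dtype=np.float32)}
print(dense_features(fake_batch).shape)  # expected: (4, 784)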

In step 3, I instantiate the estimator as follows:

## Step 3:
dnn_classifier = tf.estimator.DNNClassifier(
    feature_columns=image_feature_column,
    hidden_units=[16, 16],
    n_classes=10)

And finally, in step 4, I train the estimator by calling its .train() method:

## Step 4:
dnn_classifier.train(
    input_fn=lambda:train_input_fn(ds_train_orig, batch_size=32),
    #lambda:iris_data.train_input_fn(train_x, train_y, args.batch_size),
    steps=20)

But this results in the following error. It looks like the problem arises from the dataset.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-95736cd65e45> in <module>
      2 dnn_classifier.train(
      3     input_fn=lambda: train_input_fn(ds_train_orig, batch_size=32),
----> 4     steps=20)

~/anaconda3/envs/tf2.0-beta/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_symbolic_tensors, accept_composite_tensors)
   1183       graph = get_default_graph()
   1184       if not graph.building_function:
-> 1185         raise RuntimeError("Attempting to capture an EagerTensor without "
   1186                            "building a function.")
   1187       return graph.capture(value, name=name)

RuntimeError: Attempting to capture an EagerTensor without building a function.

Solution

  • I think the graph construction gets confused if you load a tensorflow_datasets dataset outside the input_fn. I followed the TF 2.0 migration guide example, and the version below does not raise the error. Note that I have not verified model correctness, and you will have to adapt the input_fn logic a bit to get an input function for evaluation (a sketch of that is given at the end of this answer).

    import tensorflow as tf
    import tensorflow_datasets as tfds

    # Define the estimator's input_fn. Loading the dataset *inside* the
    # function keeps the tf.data ops in the graph the estimator builds.
    def input_fn():
      datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
      mnist_train, mnist_test = datasets['train'], datasets['test']
      # Flatten each 28x28 image into a 784-vector keyed by 'image-pixels'
      dataset = mnist_train.map(lambda x, y: ({'image-pixels': tf.reshape(x, (-1,))},
                                              y))
      return dataset.shuffle(1000).repeat().batch(32)
    
    
    image_feature_column = tf.feature_column.numeric_column(key='image-pixels',
                                                            shape=(28*28))
    
    
    dnn_classifier = tf.estimator.DNNClassifier(
        feature_columns=[image_feature_column],
        hidden_units=[16, 16],
        n_classes=10)
    
    
    dnn_classifier.train(
        input_fn=input_fn,
        steps=200)
    

    I get a bunch of deprecation warnings at this point, but it looks like the estimator trains.
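
    As noted above, evaluation needs its own input function. A minimal, untested sketch of what that could look like, assuming the same feature mapping as the training input_fn:

    # Sketch of an eval input_fn: same mapping as training, but it reads the
    # 'test' split and does not shuffle or repeat.
    def eval_input_fn():
      datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
      dataset = datasets['test'].map(lambda x, y: ({'image-pixels': tf.reshape(x, (-1,))},
                                                   y))
      return dataset.batch(32)


    eval_result = dnn_classifier.evaluate(input_fn=eval_input_fn)
    print(eval_result)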