Tags: tensorflow, keras, google-colaboratory, google-cloud-tpu, talos

Why is Google Colab TPU slow?


I'm using Talos to run hyperparameter tuning of a Keras model. Running this short code on a Google Colab TPU is very slow, and I think it has something to do with the type of data. Should I convert it to tensors to make the TPU faster?

%tensorflow_version 2.x
import os
import tensorflow as tf
import talos as ta
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split

def iris_model(x_train, y_train, x_val, y_val, params):

    # Specify a distributed strategy to use TPU
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_host(resolver.master())
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    # Use the strategy to create and compile a Keras model
    with strategy.scope():
      model = Sequential()
      model.add(Dense(32, input_shape=(4,), activation=tf.nn.relu, name="relu"))
      model.add(Dense(3, activation=tf.nn.softmax, name="softmax"))
      model.compile(optimizer=Adam(learning_rate=0.1), loss=params['losses'])

    # Convert data type to use TPU
    x_train = x_train.astype('float32')
    x_val = x_val.astype('float32')

    # Fit the Keras model on the dataset
    out = model.fit(x_train, y_train, batch_size=params['batch_size'], epochs=params['epochs'], validation_data=[x_val, y_val], verbose=0, steps_per_epoch=0)

    return out, model

# Load dataset
X, y = ta.templates.datasets.iris()

# Train and test set
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.30, shuffle=False)

# Create the hyperparameter distributions
p = {'losses': ['logcosh'], 'batch_size': [128, 256, 384, 512, 1024], 'epochs': [10, 20]}

# Use Talos to scan the best hyperparameters of the Keras model
scan_object = ta.Scan(x_train, y_train, params=p, model=iris_model, experiment_name='test', x_val=x_val, y_val=y_val, fraction_limit=0.5)

Solution

  • Thank you for your question.

    Unfortunately, I was not able to get your code sample to run on TensorFlow 2.2, so I don't know what performance you were seeing originally. I was able to fix it up and get it running on TPUs with the following changes: connect to the TPU cluster with tf.config.experimental_connect_to_cluster instead of experimental_connect_to_host, initialize the TPU system once outside the model function rather than on every Talos evaluation, wrap the NumPy arrays in batched tf.data.Dataset objects before passing them to model.fit, and drop the steps_per_epoch=0 argument.

    Here's the modified Colab code:

    # Run this to install Talos before running the rest of the code.
    !pip install git+https://github.com/autonomio/talos@1.0
    
    %tensorflow_version 2.x
    import os
    import tensorflow as tf
    import talos as ta
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam
    from sklearn.model_selection import train_test_split
    
    print(tf.__version__) # TF 2.2.0 in my case
    
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    
    def iris_model(x_train, y_train, x_val, y_val, params):
        # Use the strategy to create and compile a Keras model
        strategy = tf.distribute.experimental.TPUStrategy(resolver)
        with strategy.scope():
          model = Sequential()
          model.add(Dense(32, input_shape=(4,), activation=tf.nn.relu, name="relu"))
          model.add(Dense(3, activation=tf.nn.softmax, name="softmax"))
          model.compile(optimizer=Adam(learning_rate=0.1), loss=params['losses'])
    
        train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(params['batch_size'])
        val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(params['batch_size'])
    
        # Fit the Keras model on the dataset
        out = model.fit(train_dataset, epochs=params['epochs'], validation_data=val_dataset)
    
        return out, model
    
    # Load dataset
    X, y = ta.templates.datasets.iris()
    
    # Train and test set
    x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.30, shuffle=False)
    
    # Create the hyperparameter distributions
    p = {'losses': ['logcosh'], 'batch_size': [128, 256, 384, 512, 1024], 'epochs': [10, 20]}
    
    # Use Talos to scan the best hyperparameters of the Keras model
    scan_object = ta.Scan(x_train, y_train, params=p, model=iris_model, experiment_name='test', x_val=x_val, y_val=y_val, fraction_limit=0.5)
    

    For me, the last call took a little less than 2 minutes.
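    The tf.data change is the one doing the work here: batching the arrays up front gives every batch a static shape, so the TPU's XLA compiler can compile the training step once instead of recompiling for differently shaped inputs. Here is a minimal, standalone sketch of that conversion, using random stand-in data in place of the iris arrays:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the iris arrays: 150 samples, 4 features, 3 classes.
x = np.random.rand(150, 4).astype('float32')
y = np.random.randint(0, 3, size=150).astype('int32')

# Wrap the arrays in a tf.data.Dataset and batch them up front.
# drop_remainder=True keeps every batch the same shape, so the
# compiled TPU step function can be reused for all batches.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32, drop_remainder=True)

features, labels = next(iter(ds))
print(features.shape, labels.shape)  # (32, 4) (32,)
```

    The same dataset object then goes straight into model.fit, with no batch_size argument needed.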

    For well-known datasets, you can skip building your own tf.data.Dataset by using the TensorFlow Datasets (TFDS) library, which includes the iris dataset. For an end-to-end example of using TFDS with TPUs, see TensorFlow's official TPU guide.