Tags: google-cloud-platform, gcp-ai-platform-training, google-ai-platform, google-cloud-ai, google-cloud-ai-platform-pipelines

How to create the config.yaml file for distributed training on Unified Cloud AI Platform


I am looking to train a model using Google Cloud's new service, the Unified AI Platform. To do so I am using a config.yaml that looks like this:

workerPoolSpecs:
  workerPoolSpec:
    machineSpec:
      machineType: n1-highmem-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 1
    pythonPackageSpec:
      executorImageUri: us-docker.pkg.dev/cloud-aiplatform/training/tf-gpu.2-4:latest
      packageUris: gs://path/to/bucket/unified_ai_platform/src_dist/trainer-0.1.tar.gz
      pythonModule: trainer.task
  workerPoolSpec:
    machineSpec:
      machineType: n1-highmem-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 2
    pythonPackageSpec:
      executorImageUri: us-docker.pkg.dev/cloud-aiplatform/training/tf-gpu.2-4:latest
      packageUris: gs://path/to/bucket/unified_ai_platform/src_dist/trainer-0.1.tar.gz
      pythonModule: trainer.task

However, for distributed training I am unable to work out how to pass multiple workerPoolSpecs in this file. The example config.yaml provided does not cover the case where multiple workerPoolSpecs are specified.

The documentation also says: "You can specify multiple worker pool specs in order to create a custom job with multiple worker pools".

Any help in this regard will be appreciated.


Solution

  • Answering my own question. The config.yaml file should look like this (a sketch of the trainer-side code follows the config below):

    workerPoolSpecs:
      - machineSpec:
          machineType: n1-standard-16
          acceleratorType: NVIDIA_TESLA_P100
          acceleratorCount: 2
        replicaCount: 1
        containerSpec:
          imageUri: gcr.io/path/to/container:v2
          args: 
            - --model-dir=gs://path/to/model
            - --tfrecord-dir=gs://path/to/training/data/
            - --epochs=2
      - machineSpec:
          machineType: n1-standard-16
          acceleratorType: NVIDIA_TESLA_P100
          acceleratorCount: 2
        replicaCount: 2
        containerSpec:
          imageUri: gcr.io/path/to/container:v2
          args: 
            - --model-dir=gs://path/to/model
            - --tfrecord-dir=gs://path/to/training/data/
            - --epochs=2
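
    For reference, here is a minimal sketch of what the container's entry point (a trainer.task-style script) could look like on the TensorFlow side. None of this code is from the original post: the model, the random placeholder data, and the flag handling are illustrative assumptions. The only real dependency is that the training service sets a TF_CONFIG environment variable on each replica, which tf.distribute.MultiWorkerMirroredStrategy reads to discover the cluster:

    # Hypothetical trainer sketch; the model, data, and flags are placeholders.
    import argparse
    import json
    import os

    import tensorflow as tf


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--model-dir", required=True)
        parser.add_argument("--tfrecord-dir", required=True)
        parser.add_argument("--epochs", type=int, default=2)
        args = parser.parse_args()

        # Each replica receives a TF_CONFIG describing the whole cluster; printing
        # it is a quick way to check that both worker pools were wired together.
        print("TF_CONFIG:", json.loads(os.environ.get("TF_CONFIG", "{}")))

        # The strategy reads TF_CONFIG itself; with no TF_CONFIG set (e.g. a local
        # test run) it falls back to a single-worker setup.
        strategy = tf.distribute.MultiWorkerMirroredStrategy()

        with strategy.scope():
            model = tf.keras.Sequential([
                tf.keras.layers.Dense(64, activation="relu"),
                tf.keras.layers.Dense(1),
            ])
            model.compile(optimizer="adam", loss="mse")

        # Placeholder data; a real trainer would parse the TFRecords found under
        # args.tfrecord_dir instead.
        features = tf.random.normal([256, 10])
        labels = tf.random.normal([256, 1])
        dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)

        model.fit(dataset, epochs=args.epochs)

        # In a real multi-worker job only the chief should write the final model;
        # in this sketch every replica saves to the same --model-dir.
        model.save(args.model_dir)


    if __name__ == "__main__":
        main()

    With a config like the one above, the first worker pool (replicaCount: 1) acts as the primary/chief replica and the second pool supplies the remaining workers. The finished config.yaml is then passed when creating the job, e.g. gcloud beta ai custom-jobs create --region=us-central1 --display-name=my-job --config=config.yaml.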