docker · artificial-intelligence · serverless · amazon-sagemaker

RunPod Serverless - Testing an Endpoint Locally with Docker and GPU


I’m working on creating a custom container to run FLUX and LoRA on RunPod, using this Stable Diffusion example as a starting point. I successfully deployed my first pod on RunPod, and everything worked fine.

However, my issue arises when I make code changes and want to test my endpoints locally before redeploying. Constantly deploying to RunPod for every small test is quite time-consuming.

I found a guide for local testing in the RunPod documentation here. Unfortunately, it only provides a simple example that suggests running the handler function directly, like this:

python your_handler.py --test_input '{"input": {"prompt": "The quick brown fox jumps"}}'
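
For reference, this assumes a handler script built on the standard RunPod SDK pattern; a minimal sketch (the handler body is a placeholder, not real inference):

    import runpod  # RunPod serverless SDK

    def handler(job):
        # job["input"] carries the payload passed via --test_input or the API
        prompt = job["input"]["prompt"]
        # A real worker would run FLUX/LoRA inference here
        return {"echoed_prompt": prompt}

    runpod.serverless.start({"handler": handler})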

This does not work for me, as it ignores the Docker setup entirely and just runs the function in my local Python environment. I want to go beyond this and test the Docker image end-to-end locally, on my GPU, with the exact dependencies and setup that will be used when deploying on RunPod.
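
For context, my image follows the usual worker layout; a rough sketch of the Dockerfile (the base image tag, file names, and dependencies are illustrative, not my exact setup):

    # Sketch of a GPU worker image; tag and paths are illustrative
    FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

    RUN apt-get update && apt-get install -y python3 python3-pip \
        && rm -rf /var/lib/apt/lists/*

    # Python dependencies, including the runpod SDK
    COPY requirements.txt /requirements.txt
    RUN pip3 install --no-cache-dir -r /requirements.txt

    COPY your_handler.py /your_handler.py

    # Start the serverless worker loop
    CMD ["python3", "/your_handler.py"]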

Is there specific documentation for testing Docker images locally for RunPod, or a recommended workflow for this kind of setup?

I tried following the guidelines for local testing here: https://docs.runpod.io/serverless/workers/development/local-testing


Solution

  • I've solved this with the following command:

    docker run --gpus all -p 8080:8080 -v "$(pwd)/test_input.json:/test_input.json" ${IMAGE_REPO}
    

    This command starts the worker, runs the test against the mounted input, and then terminates automatically. Replace ${IMAGE_REPO} with the tag of your locally built image, and make sure the test_input.json file is in the directory you run the command from.
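
    As far as I can tell, this works because the RunPod SDK looks for a test_input.json in the container's working directory when no endpoint environment is present: it runs the handler once against that input and then exits, so the mount target (/test_input.json here) needs to match that working directory.

    For completeness, a test_input.json matching the payload format from the question (the prompt is just an example):

        {
            "input": {
                "prompt": "The quick brown fox jumps"
            }
        }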