I have a couple of Docker Lambdas that are deployed using the Python AWS CDK. I get this error:
Lambda function XXX reached terminal FAILED state due to
InvalidImage(ImageLayerFailure: UnsupportedImageLayerDetected - Layer
Digest sha256: XXX) and failed to stabilize
We deploy with cdk deploy
and my team doesn't have issues like this when deploying the same Lambdas with the same Dockerfile, so I believe it has something to do with my local environment.
I'm on an M3 MacBook, so I've got export DOCKER_DEFAULT_PLATFORM='linux/amd64'
in my .zshrc. I get this error regardless of whether or not the stack exists already in AWS.
We are a bit stumped, so any ideas would be appreciated. Thanks
The error you’re encountering, UnsupportedImageLayerDetected, indicates that AWS Lambda cannot process one or more layers in your Docker image. This is typically due to an architecture mismatch or a problem with the base image layers.
Since you’re building on an M3 MacBook (ARM64) while forcing DOCKER_DEFAULT_PLATFORM='linux/amd64', the amd64 image is produced under emulation, and any problem with that emulation can show up as layers Lambda cannot process.
To address this:
Clear your Docker cache: docker system prune -a
Set the platform explicitly in your CDK deployment:
_lambda.DockerImageFunction(self, 'MyLambdaFunction',
    code=_lambda.DockerImageCode.from_image_asset('path_to_your_docker', platform=Platform.LINUX_AMD64))
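For context, here is a fuller sketch of what that could look like in a CDK v2 stack. The names MyLambdaFunction and path_to_your_docker are placeholders carried over from the snippet above; the key detail is that platform expects the aws_ecr_assets Platform enum rather than a raw string, and that the function's architecture should match the image platform:

from aws_cdk import Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk.aws_ecr_assets import Platform
from constructs import Construct

class MyLambdaStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Pin the image asset to linux/amd64 so the architecture no longer
        # depends on whichever machine runs `cdk deploy`.
        _lambda.DockerImageFunction(
            self, 'MyLambdaFunction',
            code=_lambda.DockerImageCode.from_image_asset(
                'path_to_your_docker',          # directory containing the Dockerfile
                platform=Platform.LINUX_AMD64,  # Platform enum from aws_ecr_assets
            ),
            # Keep the Lambda architecture consistent with the image platform.
            architecture=_lambda.Architecture.X86_64,
        )

With the platform pinned in the construct itself, the DOCKER_DEFAULT_PLATFORM environment variable in your .zshrc is no longer the only thing keeping your builds consistent with your teammates'.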
Since AWS CDK relies on your local Docker environment to build and push images, any emulation issues could result in invalid layers. To ensure QEMU (a user-space emulator) is properly configured for cross-platform builds, run:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
This sets up Docker to handle cross-platform emulation correctly.
The best way to avoid these local environment issues entirely is to automate the deployment with a CI/CD pipeline, so the image is always built on a consistent Linux amd64 build host rather than on individual developer machines.
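If you go that route, one option is CDK Pipelines. The following is a minimal sketch, assuming a GitHub source; the repository name, branch, and install commands are placeholders, and you would still add your application stages (including the stack with the DockerImageFunction) via add_stage. The point is that CodeBuild runs on linux/amd64, so the image assets are built natively instead of under emulation on a laptop:

from aws_cdk import Stack
from aws_cdk import pipelines
from constructs import Construct

class DeploymentPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The synth step runs `cdk synth` on a Linux CodeBuild host; Docker
        # image assets are then built and pushed by the pipeline's asset stage.
        pipelines.CodePipeline(
            self, 'Pipeline',
            synth=pipelines.ShellStep(
                'Synth',
                input=pipelines.CodePipelineSource.git_hub('your-org/your-repo', 'main'),
                commands=[
                    'pip install -r requirements.txt',
                    'npm install -g aws-cdk',
                    'cdk synth',
                ],
            ),
        )
        # pipeline.add_stage(...) would deploy the stack containing the Lambdas.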