Tags: python, aws-lambda, aws-lambda-containers

How to Prevent Invoking AWS Lambda Function Container Multiple Times


[UPDATE 1] When I increased the timeout to 1 minute, CloudWatch displayed the following after two successful runs of the script, followed by one more:

REPORT RequestId: xxx-xxx   Duration: 7670.23 ms    Billed Duration: 7671 ms    Memory Size: 128 MB Max Memory Used: 36 MB  
RequestId: xxx-xxx Error: Runtime exited without providing a reason
Runtime.ExitError

Original post

I have a Lambda function that runs via a custom container image. The gist of the single Python script within it is as follows:

# imports

def lambda_handler(event, context):
    # read a JSON file from S3, check an FTP server for some info, and prepare a JSON response
    # if file not found or some other error, handle exception, prepare appropriate JSON response
    # return JSON response

# Here be helper functions

if __name__ == '__main__':
    lambda_handler(None, None)
    # also tried: response = lambda_handler(None, None)
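For context, here is a runnable sketch of what the handler does. The bucket, key, and response shape are placeholders for illustration, not the author's actual values:

```python
import json


def build_response(status_code, body):
    """Wrap a payload in a JSON response shape (placeholder structure)."""
    return {"statusCode": status_code, "body": json.dumps(body)}


def lambda_handler(event, context):
    try:
        import boto3  # bundled with the AWS-provided Python runtimes
        s3 = boto3.client("s3")
        # Placeholder bucket/key; the real script reads its input JSON from S3.
        obj = s3.get_object(Bucket="my-bucket", Key="config.json")
        config = json.loads(obj["Body"].read())
        # ... check the FTP server (e.g. with ftplib) using values from config ...
        return build_response(200, {"status": "ok"})
    except Exception as exc:
        # File not found or any other error: report it in the JSON response.
        return build_response(500, {"error": str(exc)})
```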

This is the first state in a Step Functions state machine that will be triggered on a schedule by CloudWatch Events, so it does not require any input.

It is invoked from the container as

CMD ["python", "script.py"]

When I test this function from the console, I see all the expected log messages in CloudWatch, including the last one that indicates a successful execution. However, the process then repeats itself a couple of times, and the invocation is ultimately reported as a failure (red banner at the top).

It times out after 3 seconds, because that's the default limit, but not before the script has run successfully a couple of times. There are no memory issues (20-30 MB used out of 128 MB) or other errors.

In an earlier version, the call to lambda_handler was enclosed within sys.exit(), but after reading some threads about it interfering with how Lambda handles the function, I removed it. The only difference was that, with sys.exit(), I could see the JSON response in CloudWatch, whereas now I see only the log messages.

I have read through a ton of threads and documentation, but I'm still not able to resolve this issue. Any help will be appreciated.


Solution

  • Answering my own question.

    Apparently, if you wish to use a base image different from the ones AWS offers, you must run your code through an AWS Runtime Interface Client (RIC). I initially used a Python-Alpine image and tried to follow the steps at the end of this blog post, but the build ran into errors for reasons I haven't looked into. I then built another image from Debian Buster based on instructions from here and, after some cleanup of the Dockerfile to suit my application, it worked.
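    Roughly, the fix is to install the Runtime Interface Client (the `awslambdaric` package for Python) in the custom image, make it the entry point, and name the handler in CMD. A sketch along the lines of AWS's documented pattern (base image tag, filenames, and handler name are illustrative, not the exact Dockerfile used):

    ```dockerfile
    # Custom (non-AWS) base image -- illustrative sketch
    FROM python:3.9-slim-buster

    # Install the Runtime Interface Client plus the app's dependencies
    RUN pip install --no-cache-dir awslambdaric boto3

    WORKDIR /app
    COPY script.py .

    # Run the script through the RIC instead of invoking it directly;
    # CMD names the handler as module.function
    ENTRYPOINT ["/usr/local/bin/python", "-m", "awslambdaric"]
    CMD ["script.lambda_handler"]
    ```

    With this in place, the `if __name__ == '__main__'` block is no longer needed: the RIC imports the module and calls `lambda_handler` itself for each invocation.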