I created a zip file containing all the required libraries on Amazon Linux 2023:
mkdir python
pip3.11 install atproto -t python
zip -r python.zip python
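(For reference, the zip can also be published as a layer with boto3 instead of the console; this is only a sketch, and the layer name below is a placeholder.)

import boto3

lambda_client = boto3.client('lambda')

# Publish python.zip as a new layer version (layer name is a placeholder)
with open('python.zip', 'rb') as f:
    layer = lambda_client.publish_layer_version(
        LayerName='atproto',
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.11'],
    )

print(layer['LayerVersionArn'])  # attach this ARN to the Lambda function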
I then uploaded that zip file as a Lambda layer, created a Lambda function with the layer attached, and ran the code below, but it terminated with a timeout.
from atproto import Client, client_utils

def lambda_handler(event, context):
    print('test')
    client = Client()
    # Log in with the Bluesky handle and app password
    profile = client.login('MyBlueskyID', 'MyBlueskyPassword')
    print('Welcome,', profile.display_name)
    # Build and send a test post ('テストです' means 'This is a test'), then like it
    text = client_utils.TextBuilder().text('テストです')
    post = client.send_post(text)
    client.like(post.uri, post.cid)
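As an aside, the handler can be smoke-tested outside Lambda on a machine where the same packages are installed, which helps separate atproto/network problems from Lambda configuration problems; this harness is my own addition, not part of the deployed code:

if __name__ == '__main__':
    # Local smoke test only: invoke the handler once with an empty event.
    lambda_handler({}, None)

In Lambda itself, the test invocation produced the following output: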
Status: Failed
Test Event Name: posttest
Response:
{
"errorMessage": "2024-11-17T01:58:20.823Z xxxx-xxxx-xxxx Task timed out after 153.10 seconds"
}
Function Logs:
INIT_REPORT Init Duration: 10013.64 ms Phase: init Status: timeout
2024-11-17T01:58:20.823Z xxxx-xxxx-xxxx Task timed out after 153.10 seconds
END RequestId: xxxx-xxxx-xxxx
REPORT RequestId: xxxx-xxxx-xxxx Duration: 153099.68 ms Billed Duration: 150000 ms Memory Size: 128 MB Max Memory Used: 128 MB
Request ID: xxxx-xxxx-xxxx
I set the function timeout to 150 seconds, but the result was the same.
I use Python version 3.11.
What is the cause of this and how can it be resolved?
I suspected a problem with the Amazon Linux environment I had long been using to build layers, so I started over with a fresh EC2 instance, but that did not solve the problem. I also added print statements to see where the code stopped, but nothing was ever written to CloudWatch.
I changed the Python runtime to 3.12, but the result did not change.
This has been resolved. I increased the memory setting to 256 MB, and the function then completed in a few seconds. The lesson learned is to start by reviewing the basic settings.
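For anyone hitting the same issue, the memory (and timeout) can also be adjusted with boto3 rather than in the console; this is only a sketch, and the function name below is a placeholder:

import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_function_configuration(
    FunctionName='bluesky-post-test',  # placeholder; use your function's name
    MemorySize=256,                    # 128 MB was fully used during init, so raise it
    Timeout=30,                        # ample once the function completes in a few seconds
)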