Tags: python, amazon-web-services, bash, amazon-ec2, boto3

AWS Boto3 SSM client send_command process shuts down


I am using the send_command() function to run a local Python script on about 100 EC2 instances for long periods (days).

import boto3

ssm_client = boto3.client('ssm', region_name='eu-west-1')

commands_to_execute = """cd /home/ubuntu
set -x
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>logv5.out 2>&1
python3 myscript.py"""

command_lines_to_execute = commands_to_execute.split('\n')

ssm_client.send_command(
    InstanceIds=[instance_id],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': command_lines_to_execute},
)

I am also logging the console output via this part of the script:

set -x
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>logv5.out 2>&1

The logging part works as intended.

However, after about an hour, the Python process on each EC2 instance simply dies with no output. I thought perhaps some kind of virtual bash session dies, which kills the Python process it launched. So I tried launching screen before executing the other commands, but this did not help either. What am I missing here? Why does my Python process die without any output?


Solution

  • This is caused by the default command execution timeout on the SSM document:

    The time, in seconds, for a command to complete before it is considered to have failed. The default is 3600 (1 hour). The maximum value is 172800 (48 hours).

    Send executionTimeout along with your other parameters, as in the sketch below.
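    A minimal sketch of the fix, assuming the same setup as the question (instance_id defined by the caller's loop over instances). Like the other AWS-RunShellScript parameters, executionTimeout is passed in the Parameters dict as a one-element list containing the number of seconds as a string:

    import boto3

    ssm_client = boto3.client('ssm', region_name='eu-west-1')

    # Same shell commands as in the question, split into a list of lines.
    commands_to_execute = """cd /home/ubuntu
    python3 myscript.py"""

    ssm_client.send_command(
        InstanceIds=[instance_id],  # instance_id as defined in the question
        DocumentName="AWS-RunShellScript",
        Parameters={
            'commands': commands_to_execute.split('\n'),
            # Seconds before SSM considers the command failed.
            # Default is 3600 (1 hour); maximum is 172800 (48 hours).
            'executionTimeout': ['172800'],
        },
    )

    With this in place, the command is no longer marked as failed (and its process killed) at the one-hour default, so a multi-day run can continue up to the 48-hour ceiling.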