python · amazon-web-services · amazon-s3 · aws-cli · s3cmd

Running AWS CLI through Python returns a "sh: 1: aws: not found" error


I am trying to copy a file into an S3 bucket, using Python, like so:

cmd = 'aws s3 cp %s s3://%s' % (filename, bucketname)
os.system(cmd)

It gives me a sh: 1: aws: not found error.

However, using s3cmd works just fine.

Why would s3cmd work, but not aws?

Also, I did which aws and it returned: /home/username/anaconda/bin/aws.

which s3cmd returns: /home/username/anaconda/bin/s3cmd.

Why does one work but not the other, despite both living in the same directory?


Solution

  • A quick way to troubleshoot is to put the full path into the os.system call, to see whether it is a PATH problem:

    cmd = '/path/to/aws s3 cp %s s3://%s' % (filename, bucketname)
    os.system(cmd)
    

    If that works, the problem is the PATH variable: os.system runs the command through /bin/sh, and that shell only sees the PATH inherited by the Python process, which can differ from the interactive shell where which aws succeeded. Either way, it is better to steer away from os.system, as noted in the docs (https://docs.python.org/2/library/os.html#os.system), and use one of the alternatives below.
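    To confirm it really is a PATH problem, you can check what the Python process itself would resolve aws to; shutil.which performs the same lookup the shell does. This is just a diagnostic sketch (the find_tool name is mine, not from the question):

```python
import os
import shutil

def find_tool(name):
    """Return the path this process would resolve `name` to,
    or None if it is not on this process's PATH."""
    return shutil.which(name)

# Compare what Python (and hence the shell spawned by os.system) can see:
print(os.environ.get('PATH', ''))   # the PATH os.system's shell inherits
print(find_tool('aws'))             # None here would explain "aws: not found"
print(find_tool('s3cmd'))
```

    If find_tool('aws') prints None while which aws works in your terminal, the two environments have different PATHs.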

    Using subprocess:

    import subprocess

    cmd = ['/path/to/aws', 's3', 'cp', filename, 's3://' + bucketname]
    subprocess.run(cmd, check=True)  # raises CalledProcessError if the copy fails
    
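    If you do keep shelling out, a small wrapper around subprocess.run makes failures visible instead of silently ignored. This is an illustrative sketch; the commented aws invocation reuses the placeholder path from above:

```python
import subprocess

def run_checked(argv):
    """Run argv, raise on a non-zero exit status, and capture
    stdout/stderr as text so errors can be inspected."""
    return subprocess.run(argv, check=True, capture_output=True, text=True)

# Hypothetical usage (the path is a placeholder, as above):
# run_checked(['/path/to/aws', 's3', 'cp', filename, 's3://' + bucketname])
```

    Because the command is passed as a list, no shell is involved at all, which sidesteps both the PATH question and any quoting issues in filenames.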

    Or just use boto3, the Python AWS SDK, and skip the CLI entirely. There are many ways to upload, but here is one quick example adapted from this SO question (How to save S3 object to a file using boto3):

    import boto3

    s3_client = boto3.client('s3')
    # upload_file(Filename, Bucket, Key): stream the local file into the bucket
    s3_client.upload_file(filename, bucketname, filename)
    

    That one is not testable with moto, which can be annoying. If you want testable code, you can do something like this instead:

    import boto3

    s3_resource = boto3.resource('s3')

    # Read the file ourselves, so the S3 call only deals with bytes
    with open(filename, 'rb') as f:
        binary = f.read()

    s3_resource.Bucket(bucketname).put_object(
        Key=filename,
        Body=binary
    )