I'm trying to create a very simple flow that uses Elastic Transcoder to output HLS streams that can be served up directly from S3.
The pipeline and job are producing the right files, as far as I can tell, and reporting that the job completes successfully, but the permissions on the resulting S3 objects aren't being set to allow access via anonymous requests.
Here's what the permissions on the generated files look like:
And here's how my pipeline is configured:
For some reason (probably entirely my fault), it seems like the permission settings in my pipeline configuration are being ignored when the output objects land in S3. I've dug around a bit and haven't found much evidence of other people running into this problem, which makes me pretty confident that I'm doing something wrong.
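In case it helps pin down what I mean by "the pipeline configuration": as far as I understand it, the part of the pipeline that controls the ACLs on the output objects is the Permissions list inside ContentConfig. Expressed as the JSON that the Elastic Transcoder CreatePipeline API accepts, a read-for-everyone setup would look roughly like the sketch below (the bucket name is a placeholder, not my real one):

"ContentConfig": {
  "Bucket": "my-output-bucket",
  "StorageClass": "Standard",
  "Permissions": [
    {
      "GranteeType": "Group",
      "Grantee": "AllUsers",
      "Access": ["Read"]
    }
  ]
}

If I'm reading the console right, that corresponds to granting Open/Download access to the All Users group.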
You have to double-check the following:
Your IAM policy used by Elastic Transcoder should be similar to this one:
{ "Version": "2008-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": [ "s3:Put*", "s3:ListBucket", "s3:*MultipartUpload*", "s3:Get*" ], "Resource": "*" }, { "Sid": "2", "Effect": "Allow", "Action": "sns:Publish", "Resource": "*" }, { "Sid": "3", "Effect": "Deny", "Action": [ "s3:*Delete*", "s3:*Policy*", "sns:*Remove*", "sns:*Delete*", "sns:*Permission*" ], "Resource": "*" } ] }