In AWS, I'm using API Gateway backed by a Lambda function to upload a JSON file, run a few small checks in the Lambda, and then dump it to S3. I'm stuck at the point where the file upload itself succeeds, but when I look at the file in the Lambda logs, the JSON has been turned into one huge string. For example, I'm making the following request:
curl --location 'https://[API_ID].execute-api.us-east-1.amazonaws.com/v1/mappings/data.json' \
--header 'Content-Type: application/octet-stream' \
--data '@./data.json'
Now, when I look at the Lambda function logs, I see the following:

Here is my OpenAPI spec in API Gateway:
paths:
  /mappings/(unknown):
    post:
      requestBody:
        content:
          application/octet-stream:
            schema:
              format: binary
              type: string
      parameters:
        - in: path
          name: filename
          required: true
          schema:
            type: string
      x-amazon-apigateway-integration:
        httpMethod: POST
        uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${MyLambda}/invocations
        requestParameters:
          integration.request.path.filename: method.request.path.filename
        requestTemplates:
          application/json: $input.body
        passthroughBehavior: "when_no_match"
        type: "aws_proxy"
I'm not sure what I'm doing wrong here.
UPDATED:
My Python Lambda function:

import boto3

# BUCKET is defined elsewhere in the module
def lambda_handler(event, context):
    print(event)
    s3_client = boto3.client('s3')
    s3_client.put_object(Body=event["body"], Bucket=BUCKET, Key=event['pathParameters']['filename'])
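For reference, with a `type: "aws_proxy"` integration API Gateway wraps the request in a proxy event, and when it treats the payload as binary it base64-encodes the body and sets `isBase64Encoded` to true. A minimal sketch of that shape (the event below is hand-built for illustration, not a real capture):

```python
import base64
import json

# Hand-built example of a Lambda proxy event where API Gateway has
# base64-encoded the request body (values are illustrative).
sample_event = {
    "pathParameters": {"filename": "data.json"},
    "isBase64Encoded": True,
    "body": base64.b64encode(json.dumps({"key": "value"}).encode()).decode(),
}

# The raw body is one long base64 string, which is what shows up in the logs.
print(sample_event["body"])

# Decoding it recovers the original JSON payload.
decoded = json.loads(base64.b64decode(sample_event["body"]))
print(decoded)  # {'key': 'value'}
```

This is why `print(event)` shows a long string instead of the JSON you sent.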
This is what the file looks like in S3:
The body was base64-encoded, which is why it looked like a long string rather than JSON. I converted it back and it worked fine using the following code:
import base64
import json

base64_bytes = event["body"].encode('ascii')
message_bytes = base64.b64decode(base64_bytes)
message = json.loads(message_bytes.decode('ascii'))
print(message)  # You'll get the same JSON object you sent in the body
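Whether the body arrives encoded can depend on the integration and media-type configuration, so a defensive variant (a sketch, assuming the standard proxy event shape; `parse_json_body` is a hypothetical helper name) checks the `isBase64Encoded` flag before decoding, so plain-text bodies still parse:

```python
import base64
import json

def parse_json_body(event):
    """Return the request body as a parsed JSON object.

    Decodes base64 first when API Gateway marks the body as encoded.
    """
    body = event["body"]
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body).decode("utf-8")
    return json.loads(body)

# Works for both encoded and plain-text proxy events:
encoded_event = {
    "isBase64Encoded": True,
    "body": base64.b64encode(b'{"a": 1}').decode(),
}
plain_event = {"isBase64Encoded": False, "body": '{"a": 1}'}
print(parse_json_body(encoded_event))  # {'a': 1}
print(parse_json_body(plain_event))    # {'a': 1}
```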