I need to create a CSV file by reading two JSON files and then upload it to GCS using Google Cloud Functions.
Here are the input JSON files:
file1:
{'k1': {'key_ID': 'srtw001',
        'key1': 'ks001',
        'key_val': 'kval001'}}
file2:
{'stu': {'stuid': 's00918', 'stuName': 'john vincent'}}
I am getting the error: [('key1', 'ks001'), ('kval001', 'kval001')] could not be converted to bytes
Expected: the two tuples written as rows of a CSV file in the GCS bucket.
I tried the code below:
import os
from google.cloud import storage

# REC_KEYS and json_data are the parsed contents of file1 and file2;
# folder_name and file_name are set elsewhere in the function
file_data = []
rec_key = (REC_KEYS['k1']['key1'], REC_KEYS['k1']['key_val'])
rec_val = (json_data['stu']['stuName'], json_data['stu']['stuName'])
file_data.append(rec_key)
file_data.append(rec_val)
content_type = 'text/csv'
try:
    upload_file(os.environ['project_id'], os.environ['bucket_name'], folder_name, file_name, file_data, content_type)
except Exception as e:
    print(f'error: {e}')

def upload_file(project_id, bucket_name, bucket_folder, file_name, document, content_type):
    storage_client = storage.Client(project_id)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(bucket_folder + "/" + file_name)
    blob.upload_from_string(data=document, content_type=content_type)
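The post doesn't show how REC_KEYS and json_data are loaded; a minimal sketch, assuming the two inputs are local files named file1.json and file2.json (hypothetical names) containing valid JSON (double-quoted strings):

import json

# hypothetical paths; in a Cloud Function these might instead be read from a bucket
with open('file1.json') as f1:
    REC_KEYS = json.load(f1)   # parsed contents of file1
with open('file2.json') as f2:
    json_data = json.load(f2)  # parsed contents of file2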
I resolved the issue. Here is my fix.
The upload function is upload_from_string, which expects a str or bytes, not the list of tuples I was passing. So I converted the list of tuples to a single string, joining each tuple into a comma-separated row, with '\n' between rows for the new lines of the CSV. It worked:
...
file_data.append(rec_key)
file_data.append(rec_val)
# join each tuple into a comma-separated row; '\n' separates CSV rows
file_data = '\n'.join(f'{tup[0]},{tup[1]}' for tup in file_data)
content_type = 'text/csv'
...
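If the values could ever contain commas or quotes, the manual join would produce a malformed CSV. A short sketch of an alternative (not from the original post) using Python's csv module, which handles the quoting:

import csv
import io

# build the CSV in memory; csv.writer escapes commas/quotes inside values
buf = io.StringIO()
csv.writer(buf).writerows(file_data)   # file_data is still the list of tuples
file_data = buf.getvalue()             # a str, ready for upload_from_string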