I'm using Google Drive as a central place for various scripts to store common files. However, I've noticed that despite presenting a filesystem-like interface, Google Drive does not observe normal filesystem conventions such as enforcing unique filenames, so different users can upload separate files with identical names.
Is there any way to prevent this?
My scripts all upload files through the Drive API using the same code:
import os
import httplib2
from apiclient import discovery, errors
from apiclient.http import MediaFileUpload
credentials = get_google_drive_credentials()
service = discovery.build('drive', 'v2', http=credentials.authorize(httplib2.Http()))
dir_id = 'google_drive_folder_hash'
filename = '/some/path/blah.txt'
service.files().insert(
    body={
        'title': os.path.split(filename)[-1],
        'parents': [{'id': dir_id}],
    },
    media_body=MediaFileUpload(filename, resumable=True),
).execute()
If two or more scripts run this, even assuming no race conditions, why does this code result in duplicate uploads of "blah.txt", and how would I prevent that?
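To illustrate, here is a minimal sketch (reusing the service, dir_id, and filename from above) showing that two identical insert calls produce two distinct files with the same title:

# Running the same insert twice produces two separate files: Drive
# tracks files by ID, so matching titles do not collide.
body = {
    'title': os.path.split(filename)[-1],
    'parents': [{'id': dir_id}],
}
first = service.files().insert(
    body=body, media_body=MediaFileUpload(filename, resumable=True)).execute()
second = service.files().insert(
    body=body, media_body=MediaFileUpload(filename, resumable=True)).execute()
print(first['id'], second['id'])  # two different IDs, both titled "blah.txt"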
If my understanding is correct, how about this answer? In Google Drive, a file's title is just metadata and files are identified by their IDs, so files.insert always creates a new file even when the folder already contains one with the same title. In this answer, a process for searching for the duplicated filename in the folder before uploading is added. For this, I used the files.list method. The flow of this script is as follows.
1. Search for the filename of title in the folder, using files.list.
2. When no file with the same title is found, upload the file.
3. When a file with the same title is found, handle it as a duplicate.

The modified script is as follows.

title = os.path.split(filename)[-1]

# Search for files with the same title in the destination folder.
res = service.files().list(
    q="title='" + title + "' and '" + dir_id + "' in parents",
    fields="items(id,title)",
).execute()
if len(res['items']) == 0:
    # In this case, no file with the same title exists, so upload it.
    service.files().insert(
        body={
            'title': title,
            'parents': [{'id': dir_id}],
        },
        media_body=MediaFileUpload(filename, resumable=True),
    ).execute()
else:
    # In this case, a file with the same title already exists.
    # Please add error handling here, if you need it.
    print("Duplicated files were found.")
If I misunderstood your question and this was not the result you want, I apologize.