I have a Document model with a file that is uploaded to S3 using CarrierWave (fog) with my uploader (mount_uploader :file, DocumentUploader). I am also using the 'paranoia' gem's acts_as_paranoid to soft delete the documents. Upon destroy I wish to move the attached file to an 'archive' folder in the same directory, and then move it back to the original (parent) directory when a deleted document is restored.
I have the following in my model:
skip_callback :commit, :after, :remove_file!   # stop CarrierWave from deleting the stored file when the destroy commits
before_destroy :move_file_to_archive           # paranoia still runs destroy callbacks on soft delete
after_restore :fetch_file_from_archive         # paranoia's restore callback
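For context, here is roughly how those pieces sit together in the model; this is a trimmed sketch, with only the parts mentioned above:
class Document < ActiveRecord::Base
  acts_as_paranoid                        # soft delete via paranoia's deleted_at column
  mount_uploader :file, DocumentUploader

  # CarrierWave registers an after_commit callback that removes the stored file on destroy;
  # skip it so the file survives the soft delete.
  skip_callback :commit, :after, :remove_file!

  before_destroy :move_file_to_archive
  after_restore  :fetch_file_from_archive
end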
Within the move_file_to_archive method, I establish a connection to Amazon using fog and do the following to move the file to the archive:
bucket = connection.directories.get(bucket_name)
file = bucket.files.get(self.file.file.path)   # look up the stored object by its key
# same directory, with an extra /archive/ segment before the filename
new_path = file.key.split('/')[0..-2].join('/') + '/archive/' + file.key.split('/')[-1]
new_file = file.copy(bucket_name, new_path)    # copy into the archive folder
file.destroy                                   # then remove the original
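For completeness, the inverse fetch_file_from_archive would look roughly like this. It is only a sketch: it reuses the same connection and bucket_name helpers as above, and it assumes the key layout produced by move_file_to_archive (.../archive/<filename>):
def fetch_file_from_archive
  bucket = connection.directories.get(bucket_name)
  original_key = self.file.file.path            # path of the file in its parent directory
  archive_key  = File.join(File.dirname(original_key), 'archive', File.basename(original_key))
  archived = bucket.files.get(archive_key)
  archived.copy(bucket_name, original_key)      # copy back into the parent directory
  archived.destroy                              # remove the archived copy
end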
The problem is that I cannot find a way to get my document object to point to the new (archived) file instead of the old one. Somehow, when the object is being destroyed, I want self.file.path to change to the archived path instead of the original path, and then revert when the document is restored. Any help would be appreciated!
Got it to work myself. I added a condition to my DocumentUploader to set the path to contain /archive/ when the document has a value present in paranoia's deleted_at field. Just doing that makes CarrierWave look at the archive path whenever the document is currently in a deleted state.
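In case it helps anyone, a minimal sketch of that uploader condition; the base path here is an assumption and should be replaced with whatever store_dir the uploader already uses:
class DocumentUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    base = "uploads/documents/#{model.id}"      # assumed base path, adjust to the real layout
    # while the document is soft deleted, point CarrierWave at the archive folder
    model.deleted_at.present? ? "#{base}/archive" : base
  end
end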