I am trying to upload some large files (~15 TB) to AWS S3 Glacier and keep getting Signature Expired errors.
I have tried both manually setting the correct clock time and then disabling time sync, as well as letting Windows set the time and time zone automatically (as suggested here: "Signature expired: is now earlier than error: InvalidSignatureException").
Whenever I check the clock myself, it shows the correct time.
What else can I do to further identify and/or solve the problem?
EDIT 1
Here's the error message that I'm getting:
Signature expired: 20231014T151454Z is now earlier than 20231014T151519Z (20231014T152019Z - 5 min.)
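Reading the timestamps in that message: the request was signed at 15:14:54Z, but the server's clock read 15:20:19Z when it validated the signature, so more than the 5-minute tolerance window had elapsed between signing and validation. A quick sanity check of that arithmetic (an illustrative sketch, not AWS code):

```python
from datetime import datetime, timedelta

fmt = "%Y%m%dT%H%M%SZ"
signed_at = datetime.strptime("20231014T151454Z", fmt)   # signature timestamp
server_now = datetime.strptime("20231014T152019Z", fmt)  # server clock at validation
window = timedelta(minutes=5)                            # SigV4 clock-skew tolerance

# The server rejects any signature older than (server_now - window)
threshold = server_now - window
elapsed = server_now - signed_at
print(threshold.strftime(fmt))       # 20231014T151519Z, matching the error text
print(elapsed)                       # 0:05:25 -- 25 s past the 5-minute window
print(signed_at < threshold)         # True -> request rejected
```

So this is not necessarily a wrong clock: it can equally mean the request took more than five minutes to go from being signed to being validated.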
Here's the VB code (using the AWSSDK) that uploads the file:
' 4 GiB per part
Dim mPartSize = 4L * 1024L * 1024L * 1024L

Dim InitiateUpload =
    Function() As Model.InitiateMultipartUploadResponse
        Dim GetInitialRequest =
            Function() As Model.InitiateMultipartUploadRequest
                Dim mRequest = New Model.InitiateMultipartUploadRequest
                With mRequest
                    .VaultName = mVaultName
                    .PartSize = mPartSize
                    .ArchiveDescription = $"{mFileName}_{WhatObjectNamePrefix}"
                End With
                Return mRequest
            End Function
        Return mStorageClient.InitiateMultipartUpload(GetInitialRequest())
    End Function

Dim Upload =
    Function(WhatUploadID As String) As List(Of String)
        Dim mBinaryChecksums = New List(Of String)
        Dim mPosition = 0L
        While mPosition < WhatStream.Length
            Dim mStreamSize = Math.Min(mPartSize, WhatStream.Length - mPosition)
            Dim mTempStream = GlacierUtils.CreatePartStream(WhatStream, mStreamSize)
            Dim mUploadPartRequest = New Model.UploadMultipartPartRequest
            With mUploadPartRequest
                .VaultName = mVaultName
                .UploadId = WhatUploadID
                .Checksum = TreeHashGenerator.CalculateTreeHash(mTempStream)
                .Body = mTempStream
                .SetRange(mPosition, mPosition + mStreamSize - 1)
            End With
            mBinaryChecksums.Add(mUploadPartRequest.Checksum)
            mStorageClient.UploadMultipartPart(mUploadPartRequest)
            mPosition += mStreamSize
        End While
        Return mBinaryChecksums
    End Function

Dim CompleteUpload =
    Sub(WhatUploadID As String, WhatChecksums As List(Of String))
        Dim mCompleteRequest = New Model.CompleteMultipartUploadRequest
        With mCompleteRequest
            .VaultName = mVaultName
            .UploadId = WhatUploadID
            .ArchiveSize = WhatStream.Length.ToString
            .Checksum = TreeHashGenerator.CalculateTreeHash(WhatChecksums)
        End With
        mStorageClient.CompleteMultipartUpload(mCompleteRequest)
    End Sub

Dim AbortUpload =
    Sub(WhatInitialResponse As Model.InitiateMultipartUploadResponse)
        Dim mRequest = New Model.AbortMultipartUploadRequest
        With mRequest
            .VaultName = mVaultName
            .UploadId = WhatInitialResponse.UploadId
        End With
        mStorageClient.AbortMultipartUpload(mRequest)
    End Sub

Dim mInitialResponse As Model.InitiateMultipartUploadResponse = Nothing
Try
    mInitialResponse = InitiateUpload()
    Dim mCheckSums = Upload(mInitialResponse.UploadId)
    CompleteUpload(mInitialResponse.UploadId, mCheckSums)
Catch ex As Exception
    If mInitialResponse IsNot Nothing Then Tryer(Sub() AbortUpload(mInitialResponse))
    Throw New Exception($"Failed to upload file {mFileName}", ex)
End Try
EDIT 2
The link below recommended going async, so I updated the code to use async/await on the 'UploadMultipartPart' call where the exception is being thrown. The Async/Await keywords were propagated all the way up to Sub Main, so there is no weird deadlock situation.
The same exception is still being thrown... Thoughts?
https://repost.aws/knowledge-center/lambda-sdk-signature
EDIT 3
Updated the code to calculate the tree hash before instantiating the request object, in case the signature was generated in the constructor and waiting for the hash to complete was causing the error. Still no dice.
' 4 GiB per part
Dim mPartSize = 4L * 1024L * 1024L * 1024L

Dim InitiateUpload =
    Function() As Model.InitiateMultipartUploadResponse
        Dim GetInitialRequest =
            Function() As Model.InitiateMultipartUploadRequest
                Dim mRequest = New Model.InitiateMultipartUploadRequest
                With mRequest
                    .VaultName = mVaultName
                    .PartSize = mPartSize
                    .ArchiveDescription = $"{mFileName}_{WhatObjectNamePrefix}"
                End With
                Return mRequest
            End Function
        Return mStorageClient.InitiateMultipartUpload(GetInitialRequest())
    End Function

Dim Upload =
    Async Function(WhatUploadID As String) As Task(Of List(Of String))
        Dim mBinaryChecksums = New List(Of String)
        Dim mPosition = 0L
        While mPosition < WhatStream.Length
            Dim mStreamSize = Math.Min(mPartSize, WhatStream.Length - mPosition)
            Dim mTempStream = GlacierUtils.CreatePartStream(WhatStream, mStreamSize)
            ' Hash before building the request so any signing done at
            ' construction time is not delayed by the hash computation
            Dim mTreeHash = TreeHashGenerator.CalculateTreeHash(mTempStream)
            Dim mUploadPartRequest = New Model.UploadMultipartPartRequest()
            With mUploadPartRequest
                .VaultName = mVaultName
                .UploadId = WhatUploadID
                .Checksum = mTreeHash
                .Body = mTempStream
                .SetRange(mPosition, mPosition + mStreamSize - 1)
            End With
            mBinaryChecksums.Add(mUploadPartRequest.Checksum)
            Await mStorageClient.UploadMultipartPartAsync(mUploadPartRequest).ConfigureAwait(False)
            mPosition += mStreamSize
        End While
        Return mBinaryChecksums
    End Function

Dim CompleteUpload =
    Sub(WhatUploadID As String, WhatChecksums As List(Of String))
        Dim mFinalTreeHash = TreeHashGenerator.CalculateTreeHash(WhatChecksums)
        Dim mCompleteRequest = New Model.CompleteMultipartUploadRequest
        With mCompleteRequest
            .VaultName = mVaultName
            .UploadId = WhatUploadID
            .ArchiveSize = WhatStream.Length.ToString
            .Checksum = mFinalTreeHash
        End With
        mStorageClient.CompleteMultipartUpload(mCompleteRequest)
    End Sub

Dim AbortUpload =
    Sub(WhatInitialResponse As Model.InitiateMultipartUploadResponse)
        Dim mRequest = New Model.AbortMultipartUploadRequest
        With mRequest
            .VaultName = mVaultName
            .UploadId = WhatInitialResponse.UploadId
        End With
        mStorageClient.AbortMultipartUpload(mRequest)
    End Sub

Dim mInitialResponse As Model.InitiateMultipartUploadResponse = Nothing
Try
    mInitialResponse = InitiateUpload()
    Dim mCheckSums = Await Upload(mInitialResponse.UploadId)
    CompleteUpload(mInitialResponse.UploadId, mCheckSums)
Catch ex As Exception
    If mInitialResponse IsNot Nothing Then Tryer(Sub() AbortUpload(mInitialResponse))
    Throw New Exception($"Failed to upload file {mFileName}", ex)
End Try
For future reference, the issue here was that shadow copies (which had to be taken in order to read the files being uploaded) were not being cleared after use; over time they slowed disk operations to a crawl. The slow disk I/O caused the timing issues above.
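For a sense of scale: with 4 GiB parts, each signed part request has to finish reaching the server within the 5-minute signature window, which puts a floor on sustained throughput. A back-of-the-envelope sketch (the 5 MiB/s "degraded" figure is a hypothetical illustration, not a measurement):

```python
# Minimum sustained throughput for a signed 4 GiB part request to
# complete inside SigV4's 5-minute acceptance window.
PART_SIZE = 4 * 1024 ** 3   # bytes, matching mPartSize in the code above
WINDOW_S = 5 * 60           # seconds the signature stays valid

min_throughput = PART_SIZE / WINDOW_S             # bytes/s
print(f"{min_throughput / 1024 ** 2:.1f} MiB/s")  # ~13.7 MiB/s floor

# At a hypothetical degraded 5 MiB/s (a disk bogged down by accumulated
# shadow copies), a single part takes far longer than the window allows:
degraded = 5 * 1024 ** 2
print(f"{PART_SIZE / degraded / 60:.1f} min")     # ~13.7 min > 5 min
```

So once effective disk throughput dropped below roughly 14 MiB/s, every part upload was doomed to outlive its own signature, which explains why neither clock fixes nor async changes helped.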