Tags: file-upload, vmware, multipart/form-data, vcloud-director-rest-api

Upload of Media via VMware API results in larger transferred size than file size


We're using the vCloud API to interact with virtual machines (create machines, perform actions, switch media, etc.). One requested feature is the ability to upload media (specifically ISOs) to a particular catalog. The API guide (pg 67) is fairly straightforward, and our multipart requests to the URL provided when the upload starts go off without a hitch.

Note: we have to declare the file size before starting the upload.

The only thing that seems amiss during the upload itself is that the "transferred size" ends up being larger than the "file size" at the end of the process. This is odd because our Content-Range headers never exceed the declared file size (we assume some metadata is being included without us having a say). Once the transferred size exceeds the file size, the status of the file upload changes to "Error", yet the API still returns 200 OK:

    {
      "name": "J Small 4",
      "description": "",
      "files": [{
        "name": "file",
        "totalSize": 50696192,
        "status": "Error",
        "link": "https://cloud01.cs2cloud.com/transfer/27b8f93c-8319-419e-9e8c-15622097670b/file",
        "transferredSize": 54293177
      }],
      "id": "urn:vcloud:media:1cec68ef-f22e-4ec7-ae5d-dfbc4f7137d9",
      "catalogId": "urn:vcloud:catalogitem:19dbfdd8-ea70-4355-abc7-96e34dccb869"
    }

We're not sure where to even start debugging this, since all the API calls come back with 200 OK, the .ISO file itself seems fine, our Content-Range headers never go outside the declared file size, and the metadata seems to be out of our control in terms of editing or measuring it.
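For anyone hitting the same symptom, one place to start is a pre-send sanity check: compare the body's actual byte count against the span the Content-Range header declares. This is a minimal sketch (`checkChunk` is a hypothetical helper, not part of the vCloud API):

```javascript
// Hypothetical pre-send sanity check: if the body is bigger than the
// range the header declares, the server-side transferredSize will
// eventually overshoot totalSize, exactly as in the status JSON above.
function checkChunk(bodyByteLength, beginRange, endRange) {
  const declared = endRange - beginRange; // bytes the Content-Range promises
  if (bodyByteLength !== declared) {
    throw new Error(
      "chunk is " + bodyByteLength + " bytes but header declares " + declared
    );
  }
  return declared;
}
```

Running this before each chunk goes out would have pinpointed which side was inflating the payload.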

Hoping some soul has experienced this issue before and can provide some insight into working towards a solution.


Solution

  • It turns out the issue wasn't with VMware at all, but with how we were chunking the media file. We initially used FileReader() to chunk up the file and send it over to the VMware API.

    Theoretically, we were choosing the chunk size and could then generate and set the Content-Range header, but in reality, while we controlled the Content-Range, the Content-Length turned out to be different from the chunk size. We're still not entirely sure why it happened (perhaps extra metadata was being added on), but we found a solution.
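    One plausible explanation (our assumption, not something the API guide confirms): if a chunk is read with FileReader's readAsBinaryString() and the resulting string is handed to the HTTP layer, every byte at or above 0x80 gets re-encoded as two UTF-8 bytes on the wire, inflating Content-Length past the chunk size. A quick Node.js illustration of that effect:

    ```javascript
    // Four raw bytes, such as a chunk of an ISO might contain.
    const raw = Buffer.from([0x00, 0x7f, 0x80, 0xff]);

    // readAsBinaryString() maps each byte to one character (latin1).
    const binaryString = raw.toString("latin1");

    // If an HTTP layer then serializes that string as UTF-8, bytes >= 0x80
    // become two bytes each, so the body grows past the declared range.
    const sentBody = Buffer.from(binaryString, "utf8");

    console.log(raw.length, sentBody.length); // 4 6
    ```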

    The fix: we eliminated FileReader() altogether and put the file slices directly into a Blob (see below):

    $scope.parseMediaFile = function(url, file, catalogId) {
        $scope.uploadingMediaFile = true;

        var fileSize = file.size;
        var chunkSize = 1024 * 1024 * 5; // bytes
        var offset = 0;
        var chunkReaderBlock = null;

        if (fileSize < chunkSize) {
            chunkSize = fileSize;
        }

        chunkReaderBlock = function(_offset, length, _file) {
            // Blob.slice() yields exactly the bytes we ask for, so the
            // request's Content-Length always matches the declared range
            var blob = _file.slice(_offset, _offset + length);
            var beginRange = _offset;
            var endRange = _offset + length;

            if (endRange > _file.size) {
                endRange = _file.size;
            }

            var contentRange = beginRange + "-" + endRange;

            vdcServices.uploadMediaFile(url, blob, fileSize, contentRange).then(
                function(resp) {
                    vdcServices.getUploadStatus($scope.company, catalogId).then(function(resp) {
                        var uploaded = resp.data.files[0].transferredSize;
                        $scope.mediaPercentLoaded = $scope.trunc((uploaded / fileSize) * 100);

                        if (endRange === _file.size) {
                            $scope.closeModal();
                            return;
                        }

                        chunkReaderBlock(_offset + length, chunkSize, file);
                    }, function(err) {
                        // Status check failed: back up one chunk and retry from there
                        $scope.errorMsg = err;
                        chunkReaderBlock(_offset - length, chunkSize, file);
                    });
                },
                function(err) {
                    $scope.errorMsg = err;
                }
            );
        };

        // Start the read with the first block
        if (offset < fileSize) {
            chunkReaderBlock(offset, chunkSize, file);
        }
    };
    

    Doing so allowed us to actually control the Content-Length, and since we can tell when the number of bytes transferred equals the file size, we can then complete the process.
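    For completeness, the chunk arithmetic above can be sanity-checked on its own: Blob.slice(_offset, _offset + length) yields exactly the requested span (clamped at the end of the file), so the transferred totals can only ever sum to the file size. A small standalone sketch, using the sizes from the status JSON above as illustrative values:

    ```javascript
    // Replays the loop's range arithmetic without any network calls.
    const fileSize = 50696192;         // totalSize from the status response
    const chunkSize = 1024 * 1024 * 5; // 5 MB, as in parseMediaFile

    let transferred = 0;
    for (let offset = 0; offset < fileSize; offset += chunkSize) {
      // Blob.slice clamps the end index at the file size, so the final
      // (short) chunk is exactly fileSize - offset bytes
      const end = Math.min(offset + chunkSize, fileSize);
      transferred += end - offset;
    }

    console.log(transferred === fileSize); // true
    ```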