javascript · node.js · file-upload · tus

tus - Access-Control-Allow-Origin Error After Upload


I'm using tus-node-server together with tus-js-client to upload files to my server from a web browser. Smaller files (around 10 MB) seem to upload fine, but larger files (around 385 MB) fail with an Access-Control-Allow-Origin error.

The onProgress callback fires all the way to 100%, and only then does the upload fail with the error, which makes me think the failure is related to some kind of post-upload validation.

After the error is thrown in the console, the client retries until it hits the retry limit I have set.

I have posted the errors below. Any idea why this is happening?

[Error] Origin https://example.com is not allowed by Access-Control-Allow-Origin.
[Error] XMLHttpRequest cannot load https://upload.example.com//saiudfhia1h due to access control checks.
[Error] Failed to load resource: Origin https://example.com is not allowed by Access-Control-Allow-Origin.

After all the retries have been attempted, it fails with this error:

tus: failed to upload chunk at offset 0, caused by [object XMLHttpRequestProgressEvent], originated from request (response code: 0, response text: )

Front End JS:

var upload = new tus.Upload(file, {
    endpoint: "https://upload.example.com/?id=" + res._id,
    retryDelays: [0, 1000, 3000, 5000],
    metadata: {
        filename: res._id,
        filetype: file.type
    },
    onError: function(error) {
        console.log("Failed because: " + error)
    },
    onProgress: function(bytesUploaded, bytesTotal) {
        // Derive the percentage from the callback arguments
        var percentage = (bytesUploaded / bytesTotal * 100).toFixed(2)
        console.log(bytesUploaded, bytesTotal, percentage + "%")
    },
    onSuccess: function() {
        console.log("Download %s from %s", upload.file.name, upload.url)

        alert("You have successfully uploaded your file");
    }
})

// Start the upload
upload.start()

Back End JS:

// Requires implied by this snippet (not shown in the original post)
const express = require('express');
const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');
const tus = require('tus-node-server');
const EVENTS = require('tus-node-server').EVENTS;

const s3 = new AWS.S3();
const server = new tus.Server();
// `keys` and `fileNameFromUrl` are app-specific config and helpers defined elsewhere

server.datastore = new tus.FileStore({
    directory: '/files',
    path: '/',
    namingFunction: fileNameFromUrl
});

server.on(EVENTS.EVENT_UPLOAD_COMPLETE, (event) => {
    console.log(`Upload complete for file ${event.file.id}`);

    // Stream the completed upload from local disk into S3
    let params = {
        Bucket: keys.awsBucketName,
        Body: fs.createReadStream(path.join("/files", event.file.id)),
        Key: `${event.file.id}/rawfile`
    };
    s3.upload(params, function(err, data) {
        if (err) {
            console.error(err);
            return; // keep the local copy if the S3 upload failed
        }
        console.log(data);
        // Remove the local copy once it is safely in S3
        fs.unlink(path.join("/files", event.file.id), (err) => {
            if (err) throw err;
            console.log('successfully deleted file');
        });
    });
});

const app = express();
const uploadApp = express();
uploadApp.all('*', server.handle.bind(server));
app.use('/', uploadApp);
app.listen(3000);

Solution

  • The problem turned out to be that the server was sitting behind Cloudflare, which has a size limit per upload request. When Cloudflare rejects an oversized request, the error response carries no CORS headers, so the browser reports it as an Access-Control-Allow-Origin failure rather than a size error. Configuring the tus client to split the upload into multiple smaller requests solved the problem.

    There is a chunkSize property that you can set in the tus-js-client. The appropriate value varies between setups; the default is Infinity, which sends the entire file in a single request.

    var upload = new tus.Upload(file, {
        endpoint: "https://upload.example.com/?id=" + res._id,
        retryDelays: [0, 1000, 3000, 5000],
        chunkSize: x, // Replace `x` with the chunk size you want, in bytes
        metadata: {
            filename: res._id,
            filetype: file.type
        },
        onError: function(error) {
            console.log("Failed because: " + error)
        },
        onProgress: function(bytesUploaded, bytesTotal) {
            // Derive the percentage from the callback arguments
            var percentage = (bytesUploaded / bytesTotal * 100).toFixed(2)
            console.log(bytesUploaded, bytesTotal, percentage + "%")
        },
        onSuccess: function() {
            console.log("Download %s from %s", upload.file.name, upload.url)
    
            alert("You have successfully uploaded your file");
        }
    })
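
    As a concrete starting point, here is a minimal sketch assuming your Cloudflare plan's per-request limit is around 100 MB (both that limit and the 50 MB figure below are assumptions; check your plan before relying on them):

    // Sketch only: 50 MB chunks, assuming a ~100 MB per-request limit
    // at the proxy. chunkSize is specified in bytes.
    var CHUNK_SIZE = 50 * 1024 * 1024;

    var upload = new tus.Upload(file, {
        endpoint: "https://upload.example.com/?id=" + res._id,
        chunkSize: CHUNK_SIZE,
        // ...remaining options as in the snippet above
    })

    Smaller chunks also mean less data to resend when a single chunk fails, at the cost of more requests overall.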