Attempt with aws-sdk v3, using my account's Master Application Key:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";
const region = process.env.BUCKET_REGION;
const bucket = process.env.BUCKET_NAME;
const client = new S3Client({
  region,
  // endpoint: "https://s3.us-east-005.backblazeb2.com",
});
const expiresIn = 7 * 24 * 60 * 60; // 604800 seconds = 7 days, the maximum for a presigned URL
const command = new PutObjectCommand({ Bucket: bucket, Key: filename });
const signedUrl = await getSignedUrl(client, command, { expiresIn });
await axios.put(signedUrl, "hello");
This is wrong because it generates a presigned URL like <BUCKET_NAME>.s3.us-east-005.amazonaws.com instead of <BUCKET_NAME>.s3.us-east-005.backblazeb2.com. Also, my understanding is that AWS SDK v3 uses v4 signatures by default, and I did not even see an option to explicitly set the v4 signature: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html.
Attempt with backblaze native apis:
import B2 from "backblaze-b2";
import axios from "axios";
const b2 = new B2({
  applicationKey: process.env.APPLICATION_KEY!,
  applicationKeyId: process.env.APPLICATION_KEY_ID!,
});
await b2.authorize();
const bucketName = process.env.BUCKET_NAME!;
const bucketApi = await b2.getBucket({ bucketName });
const bucket = bucketApi.data.buckets.find(
  (b) => b.bucketName === bucketName
);
const signedUrlApi = await b2.getUploadUrl({ bucketId: bucket.bucketId });
await axios.put(signedUrlApi.data.uploadUrl, "testing123", {
  headers: {
    Authorization: signedUrlApi.data.authorizationToken,
  },
});
This fails with a 405 error. Please help, as I have not seen any docs on how to properly generate a presigned URL for uploading files into a Backblaze bucket from the client.
S3 API
Note that, as mentioned in the B2 docs, you can't use your account's Master Application Key with the S3 API. So, first, you'll need to create a 'regular' application key for use with your app.
This works for me with a private bucket and a 'regular' application key:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";
const endpoint = process.env.BUCKET_ENDPOINT; // "https://s3.us-east-005.backblazeb2.com"
const region = process.env.BUCKET_REGION; // "us-east-005"
const bucket = process.env.BUCKET_NAME;
const client = new S3Client({
  region,
  endpoint,
});
const expiresIn = 7 * 24 * 60 * 60; // 604800 seconds = 7 days, the maximum for a presigned URL
const command = new PutObjectCommand({ Bucket: bucket, Key: filename });
const signedUrl = await getSignedUrl(client, command, { expiresIn });
await axios.put(signedUrl, "hello");
The only difference from your code is that I added endpoint to the configuration for the S3Client constructor, which you had commented out.
The AWS SDK for JavaScript v3 does indeed default to version 4 signatures, so you don't need to specify the signature version.
Here's an example of (an expired) signed URL created by the above code:
https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=00415f935cf4dcb000000003c%2F20230327%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230327T231819Z&X-Amz-Expires=60&X-Amz-Signature=eed21bde4ee375d07e1b26c47512904a4972ab13d41bd1c81c16e48feec41dcc&X-Amz-SignedHeaders=host&x-id=PutObject
Inserting line breaks to see the components more easily:
https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD
&X-Amz-Credential=00415f935cf4dcb000000003c%2F20230327%2Fus-west-004%2Fs3%2Faws4_request
&X-Amz-Date=20230327T231819Z
&X-Amz-Expires=60
&X-Amz-Signature=eed21bde4ee375d07e1b26c47512904a4972ab13d41bd1c81c16e48feec41dcc
&X-Amz-SignedHeaders=host
&x-id=PutObject
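A quick way to sanity-check a presigned URL before handing it to the browser is to pull it apart with the standard URL class. A small sketch, using the expired example URL from above with its credential and signature parameters trimmed for brevity:

```typescript
// Inspect a presigned URL's components with the standard URL API.
const signedUrl =
  "https://metadaddy-private.s3.us-west-004.backblazeb2.com/hello.txt" +
  "?X-Amz-Algorithm=AWS4-HMAC-SHA256" +
  "&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD" +
  "&X-Amz-Date=20230327T231819Z" +
  "&X-Amz-Expires=60" +
  "&X-Amz-SignedHeaders=host" +
  "&x-id=PutObject";

const url = new URL(signedUrl);

console.log(url.hostname);                            // the bucket's virtual-hosted endpoint
console.log(url.searchParams.get("X-Amz-Algorithm")); // "AWS4-HMAC-SHA256"
console.log(url.searchParams.get("X-Amz-Expires"));   // "60" - lifetime in seconds
```

Checking that the hostname ends in backblazeb2.com rather than amazonaws.com is a fast way to catch the missing-endpoint problem from the question.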
The X-Amz-Content-Sha256 query parameter is set to UNSIGNED-PAYLOAD because, typically, when you generate a presigned upload URL, you don't know what the content will be, so you can't compute the SHA-256 digest.
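Incidentally, if you do know the payload at signing time, the digest itself is easy to compute with Node's built-in crypto module. This sketch shows only the computation, in the lowercase hex form SigV4 uses; wiring the hash into the signed request is a separate exercise:

```typescript
import { createHash } from "node:crypto";

// SHA-256 of a known payload, hex-encoded as SigV4 formats it.
const body = "hello";
const sha256Hex = createHash("sha256").update(body).digest("hex");

console.log(sha256Hex);
// "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```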
If your upload fails in the browser, you can narrow down the cause by testing the URL from the command line with curl:
curl -i -X PUT --data-binary 'Hello' --header 'Content-Type: text/plain' 'https://...your presigned url...'
If this fails, you will see the HTTP status code, as well as more detail in the body of the response. For example, testing with a bad access key id:
HTTP/1.1 403
x-amz-request-id: 46f5a7ff3a48b46a
x-amz-id-2: addJuXWt5bqtv2ndrbnY=
Cache-Control: max-age=0, no-cache, no-store
Content-Type: application/xml
Content-Length: 156
Date: Mon, 27 Mar 2023 23:57:55 GMT
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>Malformed Access Key Id</Message>
</Error>
If you can PUT a file from the command line with curl and your presigned URL, but not in the browser, it is likely CORS that is preventing the upload. Check the browser developer console for details.
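For reference, a B2 bucket's CORS rules for browser uploads via the S3-compatible API look something like this. This is an illustrative sketch only; the rule name and origin are placeholders you'd replace with your own values:

```json
[
  {
    "corsRuleName": "allowPresignedUploads",
    "allowedOrigins": ["https://app.example.com"],
    "allowedOperations": ["s3_put"],
    "allowedHeaders": ["*"],
    "maxAgeSeconds": 3600
  }
]
```

Rules like these can be configured in the Backblaze web console or with the b2 command-line tool.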
B2 Native API
The B2 Native API cannot generate and use presigned URLs in the way that the S3 API can. Your code is simply uploading a file.
An HTTP 405 error means 'Method Not Allowed'. You can't PUT a file to the upload URL; you need to use POST (this is mentioned in the b2_upload_file docs). You also need a couple more headers:

X-Bz-File-Name - the filename
X-Bz-Content-Sha1 - a SHA-1 digest of the body

This should work for you:
import B2 from "backblaze-b2";
import axios from "axios";
import crypto from "crypto";
const b2 = new B2({
  applicationKey: process.env.APPLICATION_KEY!,
  applicationKeyId: process.env.APPLICATION_KEY_ID!,
});
await b2.authorize();
const bucketName = process.env.BUCKET_NAME!;
const bucketApi = await b2.getBucket({ bucketName });
const bucket = bucketApi.data.buckets.find(
  (b) => b.bucketName === bucketName
);
const signedUrlApi = await b2.getUploadUrl({ bucketId: bucket.bucketId });
const body = "testing123";
const sha1Hash = crypto.createHash('sha1').update(body).digest('hex');
await axios.post(signedUrlApi.data.uploadUrl, body, {
  headers: {
    Authorization: signedUrlApi.data.authorizationToken,
    "X-Bz-File-Name": filename,
    "X-Bz-Content-Sha1": sha1Hash,
  },
});
Strictly speaking, you can skip the SHA-1 digest and pass do_not_verify in the X-Bz-Content-Sha1 header, but this is strongly discouraged, as it removes integrity protection on the uploaded data.
Backblaze B2 calculates its own SHA-1 digest of the data it receives and compares it to the digest you supply in the header. If some error were to corrupt the body in transit, the digests would not match, and B2 would reject the request with HTTP status 400 and the following response:
{
  "code": "bad_request",
  "message": "Checksum did not match data received",
  "status": 400
}
If, on the other hand, X-Bz-Content-Sha1 was set to do_not_verify, then B2 wouldn't be able to perform this check, and would unwittingly store the corrupted data.
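As a quick illustration of why this check works, even a single corrupted character produces a completely different SHA-1. A sketch using Node's crypto module; the "corrupted" body here is contrived:

```typescript
import { createHash } from "node:crypto";

const sha1 = (data: string) => createHash("sha1").update(data).digest("hex");

const clientDigest = sha1("testing123"); // what the client sends in X-Bz-Content-Sha1
const serverDigest = sha1("testing124"); // what B2 computes over a corrupted body

console.log(clientDigest === serverDigest); // false - B2 rejects the upload
```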
Note
If you do need a presigned URL, then you must use the AWS SDK to generate one, as shown above. As I mentioned, the B2 Native API does not include a way of generating and using presigned URLs.
If you don't need a presigned URL, and you simply need to upload data to an object, as you are doing in your B2 Native API code, you can do so more simply using the AWS SDK:
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import axios from "axios";
const endpoint = process.env.BUCKET_ENDPOINT; // "https://s3.us-east-005.backblazeb2.com"
const region = process.env.BUCKET_REGION; // "us-east-005"
const bucket = process.env.BUCKET_NAME;
const client = new S3Client({
  region,
  endpoint,
});
const command = new PutObjectCommand({
  Bucket: bucket,
  Key: filename,
  Body: "hello",
});
await client.send(command);
BTW, we (Backblaze - I'm Chief Technical Evangelist there) advise developers to use the AWS SDKs and the Backblaze S3 Compatible API unless there's a particular reason to use the B2 Native API - for example, manipulating application keys.