In the Create an AWS S3 Website in Under 5 Minutes YT video and the Host a Static Website on Amazon S3 Pulumi tutorial there are great explanations of how to host a website on S3 using Pulumi.
In the example code, Pulumi's Bucket and BucketObject are used. The former creates an S3 Bucket and the latter creates the objects (mostly an index.html) for public access, like this:
const aws = require("@pulumi/aws");
const pulumi = require("@pulumi/pulumi");
const mime = require("mime");
// Create an S3 bucket
let siteBucket = new aws.s3.Bucket("s3-website-bucket");
let siteDir = "www"; // directory for content files
// For each file in the directory, create an S3 object stored in `siteBucket`
for (let item of require("fs").readdirSync(siteDir)) {
  let filePath = require("path").join(siteDir, item);
  let object = new aws.s3.BucketObject(item, {
    bucket: siteBucket,
    source: new pulumi.asset.FileAsset(filePath), // use FileAsset to point to a file
    contentType: mime.getType(filePath) || undefined, // set the MIME type of the file
  });
}
exports.bucketName = siteBucket.bucket; // create a stack export for bucket name
Now, for a Vue.js / Nuxt.js based app, I need to upload multiple generated files, which are located inside the dist directory of my project root. They are produced by npm run build and result in the following files:
$ find dist
dist
dist/favicon.ico
dist/index.html
dist/.nojekyll
dist/200.html
dist/_nuxt
dist/_nuxt/LICENSES
dist/_nuxt/static
dist/_nuxt/static/1619685747
dist/_nuxt/static/1619685747/manifest.js
dist/_nuxt/static/1619685747/payload.js
dist/_nuxt/f3a11f3.js
dist/_nuxt/f179782.js
dist/_nuxt/fonts
dist/_nuxt/fonts/element-icons.4520188.ttf
dist/_nuxt/fonts/element-icons.313f7da.woff
dist/_nuxt/c25b1a7.js
dist/_nuxt/84fe6d0.js
dist/_nuxt/a93ae32.js
dist/_nuxt/7b77d06.js
My problem here is that these files also include files nested in subdirectories, which themselves could contain further subdirectories - e.g. dist/_nuxt/fonts/element-icons.4520188.ttf. The approach provided in the tutorials doesn't traverse subdirectories, and I didn't know how to do that with Pulumi/TypeScript.
I went on with the approach and tried to build a recursive TypeScript function that creates either files or directories based on Pulumi's BucketObject, as recommended by the tutorials. This led me down a complicated path! I needed to create directories using BucketObject, which can be achieved by appending a "/" to the key argument (see this answer). Just for the record, the function for that looked like this:
function createS3BucketFolder(dirName: string) {
  new aws.s3.BucketObject(dirName, {
    bucket: nuxtBucket,
    acl: "public-read",
    key: dirName + "/", // an appended '/' will create a S3 Bucket prefix (see https://stackoverflow.com/a/57479653/4964553)
    contentType: "application/x-directory" // this content type is also needed for the S3 Bucket prefix
    // no source needed here!
  })
}
But this was just one piece of the massive amount of code needed to recursively traverse a directory with TypeScript (see also the huge number of Stack Overflow answers on the topic, ranging from synchronous versions to Node.js 11+ async solutions). I ended up with around 40-50 lines of code just for recursively adding the statically generated site files to S3 - and it didn't feel good to have no test for that (and I don't really get why Pulumi doesn't support this use case out of the box like Terraform does).
Finally I stumbled upon the Pulumi tutorial about Secure Static Website Using Amazon S3, CloudFront, Route53, and Certificate Manager where there's a special paragraph about deployment speed with an interesting quote:
This example creates an aws.s3.BucketObject for every file served from the website. When deploying large websites, that can lead to very long updates as every individual file is checked for any changes. It may be more efficient to not manage individual files using Pulumi and instead just use the AWS CLI to sync local files with the S3 bucket directly.
TLDR: for non-hello-world use cases, the Pulumi docs tell us not to use Pulumi to upload the files to S3, but rather the AWS CLI! So I reworked my code to only create the S3 Bucket using Pulumi, like this:
import * as aws from "@pulumi/aws";
// Create an AWS resource (S3 Bucket)
const nuxtBucket = new aws.s3.Bucket("microservice-ui-nuxt-js-hosting-bucket", {
  acl: "public-read",
  website: {
    indexDocument: "index.html",
  }
});
// Export the name of the bucket
export const bucketName = nuxtBucket.id;
This creates a static-site-hosting-enabled S3 Bucket with public-read access via Pulumi. Now, using the AWS CLI, we can copy/sync our files to the Bucket elegantly with the following command:
aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
Using $(pulumi stack output bucketName) we simply get the S3 Bucket name that was created by Pulumi. Mind the --acl public-read parameter at the end: you have to enable public read access on each of your static web files in S3, even though the Bucket itself already has public read access!
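As a small addition on my part (this goes beyond the tutorials): the Pulumi Bucket resource also exposes the S3 website endpoint as an output, so the bucket creation code above can additionally export the site URL. This is only an infrastructure declaration and needs real AWS credentials to run:

```typescript
import * as aws from "@pulumi/aws";

// Create an AWS resource (S3 Bucket) with static website hosting enabled
const nuxtBucket = new aws.s3.Bucket("microservice-ui-nuxt-js-hosting-bucket", {
  acl: "public-read",
  website: {
    indexDocument: "index.html",
  }
});

// Export the name of the bucket and the S3 website endpoint
export const bucketName = nuxtBucket.id;
export const bucketUrl = nuxtBucket.websiteEndpoint;
```

After a pulumi up and the aws s3 sync shown above, the site should then be reachable right away via http://$(pulumi stack output bucketUrl).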