What is the best way to implement local caching for AWS Lambda@Edge?
Here is the context: We have a Lambda@Edge trigger in CloudFront that needs to retrieve a public key from the public internet. This key rotates periodically. Making a call from Lambda@Edge out to the public internet introduces a latency penalty, so we would ideally like to cache the data. We could, of course, use ElastiCache or DynamoDB as a cache layer, but that would negate the benefit of running the Lambda at an edge location, since it would still need to talk to a resource in one of the regions.
One way I can think of is to store the data in a static-website S3 bucket fronted by a CloudFront distribution. If I understand this correctly, the file would then also be cached at the same edge location as the Lambda itself, and the Lambda could retrieve it from the edge cache instead of going to the origin. Is that a valid pattern? Is there a better solution?
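To make the question concrete, the pattern I have in mind would look roughly like this; the distribution domain and file name are hypothetical placeholders:

```javascript
// Hypothetical sketch of the pattern: the rotating key is published to the
// static-website S3 bucket, so CloudFront serves (and caches) it at the edge.
const KEY_URL = "https://dxxxxxxxxxxxx.cloudfront.net/public-key.pem"; // hypothetical

// fetchImpl is injectable for testing; defaults to Node 18+'s global fetch.
async function fetchKeyViaEdge(fetchImpl = fetch) {
  const res = await fetchImpl(KEY_URL);
  if (!res.ok) {
    throw new Error(`Key fetch failed with HTTP ${res.status}`);
  }
  return res.text();
}
```

The hope is that the request terminates at the nearest edge cache rather than crossing to a regional origin.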
What we do is retrieve the key (a JWK set, in our case) from the internet when the Lambda function starts up, that is, outside the handler function. That way, the key is only retrieved on a cold start; subsequent invocations of the same instance reuse the in-memory key.
import { JwtRsaVerifier } from "aws-jwt-verify";

// Created at module scope, so this runs once per cold start. The verifier
// fetches and caches the JWKS in memory; warm invocations reuse it.
const verifier = JwtRsaVerifier.create({
  issuer: "https://login.microsoftonline.com/common/v2.0", // adjust to your issuer
  audience: null, // set to your app's client ID to also verify the audience
  jwksUri: "https://login.microsoftonline.com/common/discovery/v2.0/keys",
});

export const handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  // parseAccessTokenCookie is our own helper that extracts the token
  // from the Cookie header (implementation omitted here).
  const accessToken = parseAccessTokenCookie(headers);

  if (accessToken) {
    try {
      await verifier.verify(accessToken);
      console.log("Token is valid");
      return request; // pass the request through to the origin
    } catch {
      console.log("Token not valid!");
    }
  } else {
    console.log("No token provided");
  }

  // Missing or invalid token: short-circuit with a 401 at the edge.
  return { status: "401", statusDescription: "Unauthorized" };
};
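If what you cache is a bare public key rather than a JWKS that aws-jwt-verify manages for you, the same idea works by hand: keep the key in a module-scope variable with a TTL, so rotation is picked up on warm instances without waiting for a cold start. A minimal sketch, where the URL and TTL are illustrative values:

```javascript
// Module-scope cache: survives across warm invocations of the same instance.
const KEY_URL = "https://example.com/public-key.pem"; // hypothetical endpoint
const TTL_MS = 5 * 60 * 1000; // refetch at most every 5 minutes

let cachedKey = null;
let fetchedAt = 0;

// fetchFn is injectable for testing; by default it fetches KEY_URL over HTTP.
async function getPublicKey(fetchFn = defaultFetch) {
  const now = Date.now();
  if (cachedKey !== null && now - fetchedAt < TTL_MS) {
    return cachedKey; // warm path: no network call
  }
  cachedKey = await fetchFn(KEY_URL); // cold start or TTL expired: refetch
  fetchedAt = now;
  return cachedKey;
}

async function defaultFetch(url) {
  const res = await fetch(url); // Node 18+ global fetch
  if (!res.ok) throw new Error(`Key fetch failed with HTTP ${res.status}`);
  return res.text();
}
```

The trade-off is the same as with any TTL cache: a freshly rotated key may take up to the TTL to propagate, so pick a TTL shorter than the overlap window during which both old and new keys are accepted.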