amazon-web-services, kubernetes, amazon-s3, k8s-serviceaccount, please.build

How to enable AWS S3 caching for Please Build in a pod on an AWS EKS Kubernetes cluster?


I'm using Please Build to build different modules of my app in a Jenkins job that runs inside an AWS EKS Kubernetes cluster, in a pod on a Linux AWS EC2 instance, using jenkins/slave.jar in a Debian container. That part is working fine. Now, I'm trying to enable Please Build caching on an AWS S3 bucket. To do that, I set up an AWS IAM role for the pod via web identity: the pod uses a Kubernetes Service Account that is associated with the AWS IAM role I created, and that role has a trust relationship policy allowing the AWS EKS OIDC provider to grant access to the Service Account.
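
For reference, the link between the Kubernetes Service Account and the AWS IAM role is the standard IRSA annotation. A minimal sketch (the namespace, account ID, and role name are hypothetical placeholders):

    # Annotate the pod's Service Account with the IAM role to assume via OIDC
    kubectl annotate serviceaccount <SERVICE_ACCOUNT_NAME> \
        --namespace <NAMESPACE> \
        eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>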

The problem is that I get an access denied error on the AWS CLI command, aws s3 cp. My AWS IAM role has the correct policy; I have even tried full access, but it doesn't work.

My Please Build config file is below:

    [Cache]
        RetrieveCommand = "aws s3 cp s3://<BUCKET_NAME>/please/$CACHE_KEY -"
        StoreCommand = "aws s3 cp - s3://<BUCKET_NAME>/please/$CACHE_KEY"

I have tried running aws sts get-caller-identity and it shows the correct AWS IAM role, i.e. the one I created for the pod.
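
When the web identity role is assumed correctly, that output contains an assumed-role ARN for the role, roughly like this (account ID, role name, and session name are placeholders):

    {
        "UserId": "AROAEXAMPLEID:botocore-session-1234567890",
        "Account": "<ACCOUNT_ID>",
        "Arn": "arn:aws:sts::<ACCOUNT_ID>:assumed-role/<ROLE_NAME>/botocore-session-1234567890"
    }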


Solution

  • After some investigation and digging, I found the --debug option, which can be passed to any AWS CLI command to debug it. So, I updated my Please Build config file as follows:

     [Cache]
        RetrieveCommand="aws s3 cp --debug s3://<BUCKET_NAME>/please/$CACHE_KEY -"
        StoreCommand="aws s3 cp --debug - s3://<BUCKET_NAME>/please/$CACHE_KEY"
    

    Note: <BUCKET_NAME> should be replaced with the name of the actual AWS S3 bucket.

    After doing so, I looked at the Please Build logs in the job and found that the AWS CLI wasn't picking up the AWS IAM role I created for the pod; it was falling back to the AWS EC2 instance's IAM role instead.
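
    A quick way to confirm the web identity credentials are visible to a process is to check the two environment variables that EKS injects into pods whose Service Account carries the IRSA annotation:

        echo "$AWS_ROLE_ARN"                  # ARN of the IAM role created for the pod
        echo "$AWS_WEB_IDENTITY_TOKEN_FILE"   # typically /var/run/secrets/eks.amazonaws.com/serviceaccount/token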

    Solution:

    As I wanted to use the AWS IAM role for the pod via AWS EKS OIDC and the Kubernetes Service Account I created, I had to change my Please Build config file and Jenkins job file as follows:

    [Cache]
        HttpUrl = ""
        HttpWriteable = False
        RetrieveCommand = "AWS_ACCESS_KEY_ID=${env.AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${env.AWS_SECRET_ACCESS_KEY} AWS_SESSION_TOKEN=${env.AWS_SESSION_TOKEN} AWS_DEFAULT_REGION=${env.AWS_REGION} aws s3 cp s3://<BUCKET_NAME>/\$CACHE_KEY - | gunzip"
        StoreCommand = "gzip | AWS_ACCESS_KEY_ID=${env.AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${env.AWS_SECRET_ACCESS_KEY} AWS_SESSION_TOKEN=${env.AWS_SESSION_TOKEN} AWS_DEFAULT_REGION=${env.AWS_REGION} aws s3 cp - s3://<BUCKET_NAME>/\$CACHE_KEY"
    

    Note: <BUCKET_NAME> should be replaced with the name of the actual AWS S3 bucket.

    sh "aws sts get-caller-identity"
    
    env.AWS_ACCESS_KEY_ID = sh(returnStdout: true, script: "set +x && cat /home/jenkins/.aws/cli/cache/* | jq -r .Credentials.AccessKeyId").trim()
    env.AWS_SECRET_ACCESS_KEY = sh(returnStdout: true, script: "set +x && cat /home/jenkins/.aws/cli/cache/* | jq -r .Credentials.SecretAccessKey").trim()
    env.AWS_SESSION_TOKEN = sh(returnStdout: true, script: "set +x && cat /home/jenkins/.aws/cli/cache/* | jq -r .Credentials.SessionToken").trim()
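
    The jq filters above rely on the shape of the cached credentials file, which looks roughly like this (values truncated and hypothetical):

        {
            "Credentials": {
                "AccessKeyId": "ASIA...",
                "SecretAccessKey": "...",
                "SessionToken": "...",
                "Expiration": "2023-01-01T00:00:00Z"
            }
        }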
    

    Reason: The pod was assuming the role perfectly, as mentioned in the question: running aws sts get-caller-identity showed the correct AWS IAM role, i.e. the one I created for the pod. Please Build, however, was using the AWS EC2 instance's IAM role, maybe because the cached credentials file was in a different format and in a different location with a random name (e.g., .aws/cli/cache/dajkhsd87asd7y8ayvc87y7.json); I'm not sure, though. So, I extracted the credentials out of that file and set them as environment variables on the AWS CLI commands (i.e., aws s3 cp) used by the Please Build config, and it worked.
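
    As a side note, newer versions of the AWS CLI v2 can export the resolved credentials directly, which may be a less brittle alternative to parsing the cache directory by hand (a sketch, assuming your CLI version supports the command):

        # Prints the credentials the CLI resolved, as shell export statements
        aws configure export-credentials --format env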

    This approach grants AWS S3 bucket access to the containers of a single pod only, which is more secure.

    Using this approach, Please Build works in both JNLP and Docker in Docker (DinD) containers.

    Alternative Solution:

    You can make it work by granting the access permissions to the AWS EC2 instance's IAM role instead. This way, you don't need to create additional resources such as a Kubernetes Service Account, an AWS EKS OIDC provider, or an AWS IAM role with a custom trust relationship policy. Also, you don't need to make any changes to the Jenkins job file or the Please Build config file.
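
    In that case, the S3 permissions are attached to the instance's IAM role; a minimal policy sketch scoped to the cache prefix used above (the bucket name is a placeholder):

        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:PutObject"],
                    "Resource": "arn:aws:s3:::<BUCKET_NAME>/please/*"
                }
            ]
        }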

    Disadvantage: It gives AWS S3 bucket access to all the pods, containers, and everything else running on the AWS EC2 instance.