When I deploy each app using kubernetes/cluster/kube-up.sh on AWS, I set the context using:
CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
kubectl config set-context $CONTEXT --namespace=${PROJECT_ID}
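(Side note: kubectl can print the active context directly, so the grep/awk pipeline above could be replaced with something like the following; same effect, just less text munging.)
# read the active context straight from kubectl, then pin the namespace on it
CONTEXT=$(kubectl config current-context)
kubectl config set-context "$CONTEXT" --namespace="${PROJECT_ID}"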
I do this for multiple apps, and each deploys fine. However, I then need to be able to toggle between Kubernetes contexts to interact with an arbitrary deployed app (view logs, run a kubectl exec, etc.).
Here is how I show all my contexts:
kubectl config view --output=json
{
    "kind": "Config",
    "apiVersion": "v1",
    "preferences": {},
    "clusters": [
        {
            "name": "aws_kubernetes",
            "cluster": {
                "server": "https://52.87.88.888",
                "certificate-authority-data": "REDACTED"
            }
        },
        {
            "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
            "cluster": {
                "server": "https://104.196.888.888",
                "certificate-authority-data": "REDACTED"
            }
        }
    ],
    "users": [
        {
            "name": "aws_kubernetes",
            "user": {
                "client-certificate-data": "REDACTED",
                "client-key-data": "REDACTED",
                "token": "taklamakan"
            }
        },
        {
            "name": "aws_kubernetes-basic-auth",
            "user": {
                "username": "admin",
                "password": "retrogradewaif"
            }
        },
        {
            "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
            "user": {
                "client-certificate-data": "REDACTED",
                "client-key-data": "REDACTED",
                "username": "admin",
                "password": "emptyadjacentpossible"
            }
        }
    ],
    "contexts": [
        {
            "name": "aws_kubernetes",
            "context": {
                "cluster": "aws_kubernetes",
                "user": "aws_kubernetes",
                "namespace": "ruptureofthemundaneplane"
            }
        },
        {
            "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
            "context": {
                "cluster": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
                "user": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
                "namespace": "primacyofdirectexperience"
            }
        }
    ],
    "current-context": "aws_kubernetes"
}
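(For a more compact listing than the full JSON dump, kubectl config get-contexts prints one row per context; the current context is marked with an asterisk.)
kubectl config get-contexts
# one row per context, showing cluster, user, and default namespace; * marks the current context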
You can see above that I have deployed two apps. When I try the obvious approach to choose my Kubernetes context
kubectl config set-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster --namespace=${PROJECT_ID}
... outputs
context "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster" set.
kubectl config set-cluster "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster"
... outputs
cluster "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster" set.
It then just hangs when I issue commands like
kubectl describe pods --namespace=primacyofdirectexperience
Perhaps I am missing a command to also set the user, since in the JSON above each deployed app gets its own user name?
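(For reference, set-context can also bind the user and cluster explicitly via flags; a sketch using the names from the JSON above. This only edits the context entry in the kubeconfig; it does not make it the current context.)
kubectl config set-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster \
  --cluster=gke_primacyofdirectexperience_us-east1-b_loudhttpscluster \
  --user=gke_primacyofdirectexperience_us-east1-b_loudhttpscluster \
  --namespace=primacyofdirectexperience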
UPDATE
kubectl config use-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster
... outputs
switched to context "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster".
However, now when I issue any kubectl command, for example
kubectl get pods
... outputs
Unable to connect to the server: x509: certificate signed by unknown authority
which is an error I have never seen before ... no doubt due to the toggling issue.
Even with the above error message, /kubernetes/cluster/kube-down.sh was able to tear down the cluster, so there is hope that toggling will work!
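(One thing that sometimes resolves this for GKE contexts is re-fetching the cluster credentials, which rewrites the certificate data in the kubeconfig; a sketch, assuming the project/zone/cluster names embedded in the context name are correct.)
# re-fetch GKE credentials; names below are inferred from the context name, so treat them as assumptions
gcloud container clusters get-credentials loudhttpscluster \
  --zone us-east1-b --project primacyofdirectexperience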
To switch between contexts, use use-context:
kubectl config use-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster
Any kubectl commands you run now will be applied to that cluster (using the primacyofdirectexperience namespace, since you set that as the default for that context).
kubectl get pods
will now list all pods in the gke_primacyofdirectexperience_us-east1-b_loudhttpscluster cluster, in the primacyofdirectexperience namespace. To use a different namespace, you can pass the --namespace flag:
kubectl get pods --namespace=someothernamespace
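(On newer kubectl versions you can instead change the namespace recorded in the active context, so you don't have to pass the flag every time; a minimal sketch.)
# rewrite the default namespace stored in whichever context is currently active
kubectl config set-context --current --namespace=someothernamespace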
To switch contexts again, just run use-context again:
kubectl config use-context aws_kubernetes
Now, kubectl get pods will run against the aws_kubernetes cluster, using the ruptureofthemundaneplane namespace (the default configured for that context).
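(If you toggle often, a small shell helper keeps it to one command; purely illustrative, the function name is made up.)
# hypothetical helper: switch context, then show where kubectl now points
kswitch() {
  kubectl config use-context "$1" && kubectl config get-contexts
}
# usage
kswitch aws_kubernetes
kswitch gke_primacyofdirectexperience_us-east1-b_loudhttpscluster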
You can always see which context kubectl is currently using by running:
kubectl config current-context
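To check which namespace that context defaults to, one option (a sketch) is to minify the config view down to the active context and pull out its namespace:
kubectl config view --minify --output 'jsonpath={..namespace}'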