We are seeing a naming collision in the Backend Pool names generated in Azure Application Gateway by AGIC (Application Gateway Ingress Controller) instances running in two separate (independent) AKS clusters.
We use a shared Application Gateway to publicly route traffic to services running in the two clusters. In both clusters we have a service running with the same service name.
When deploying a service into AKS using Helm charts (via ArgoCD), we use azure/application-gateway as the ingress class.
When deploying the first service in the first cluster all is good: a Backend Pool is created in Application Gateway with an auto-generated name that incorporates the service name and port. Unfortunately, we cannot seem to influence this generated name, so when we deploy a service with the same name in the second AKS cluster, it overwrites (or, more precisely, modifies) the first generated Backend Pool, since Backend Pool names must be unique. We want two separate Backend Pools, one for the service in each AKS cluster.
Unfortunately, we cannot simply change the service name of the second deployed service, and we would like to stick to a single, shared Application Gateway.
Is there any way of influencing the generated Backend Pool name in Application Gateway so we could prefix it or something similar to make sure it is unique?
We tried a bogus annotation that ChatGPT suggested, but it obviously does not exist (anymore?):
appgw.ingress.kubernetes.io/backend-address-pool-name: pool-our-custom-name
The annotations listed on the official support page do not seem to include anything that might help: https://azure.github.io/application-gateway-kubernetes-ingress/annotations/
Unfortunately, it is not possible to influence the generated Backend Pool name via AGIC. The Application Gateway Ingress Controller (AGIC) provides no annotation to customize backend pool names; it generates them automatically from the service name and service port, which, as you've noticed, causes collisions when the same service name is used across different AKS clusters behind a shared Application Gateway.
As alternatives, you could either use a unique service name in each AKS cluster to avoid the collision, or deploy a separate Application Gateway per cluster. The second option avoids conflicts entirely and isolates the clusters, at the cost of additional infrastructure and complexity; it can be warranted where naming collisions are a concern and service isolation is desired.
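To see why the collision happens, it helps to sketch how the pool name is derived. The exact scheme is an AGIC internal; the pattern below (namespace, service name, service port, backend port) is an assumption based on observed generated names, not documented behavior:

```shell
# Illustrative sketch only: the pattern is assumed, not an official AGIC contract.
namespace="default"
service="api-service"
service_port="80"
backend_port="8080"

pool_name="pool-${namespace}-${service}-${service_port}-bp-${backend_port}"
echo "$pool_name"
```

Because the cluster identity is not part of the name, identical services in two clusters map to the same pool name, and the second AGIC instance overwrites the first pool.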
Let's say you have a service named api-service in both clusters. You would rename them to api-service-cluster1 and api-service-cluster2 respectively.
Cluster 1
apiVersion: v1
kind: Service
metadata:
  name: api-service-cluster1
spec:
  selector:
    app: api-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress-cluster1
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service-cluster1
                port:
                  number: 80
Cluster 2
apiVersion: v1
kind: Service
metadata:
  name: api-service-cluster2
spec:
  selector:
    app: api-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress-cluster2
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service-cluster2
                port:
                  number: 80
By having unique service names, AGIC will create unique backend pools for each service in the Application Gateway.
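You can verify the result by listing the backend pool names the gateway ended up with. The resource group and gateway names below are placeholders for your own:

```shell
# List the generated backend pool names; after the rename you should
# see one distinct pool per uniquely named service.
az network application-gateway address-pool list \
  -g myResourceGroup --gateway-name myApplicationGateway \
  -o tsv --query "[].name"
```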
Below is an example of setting up AGIC on an AKS cluster with an existing Application Gateway.
If you already have an AKS cluster, skip this step. If not, create a cluster as below:
az aks create -n <ClusterName> -g <ResourceGroup> --network-plugin azure --enable-managed-identity --generate-ssh-keys
Deploy a new Application Gateway, replacing myResourceGroup, myPublicIp, myVnet, mySubnet, and myApplicationGateway with names of your choice:
az network public-ip create -n myPublicIp -g myResourceGroup --allocation-method Static --sku Standard
az network vnet create -n myVnet -g myResourceGroup --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24
az network application-gateway create -n myApplicationGateway -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100
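Provisioning the gateway can take several minutes. A quick way to check whether it has finished (same placeholder names as above) is:

```shell
# Prints "Succeeded" once the gateway is fully provisioned.
az network application-gateway show \
  -n myApplicationGateway -g myResourceGroup \
  -o tsv --query "provisioningState"
```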
Now enable the AGIC add-on in the AKS cluster
appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
If you’d like to use the Azure portal to enable the AGIC add-on, go to https://aka.ms/azure/portal/aks/agic and navigate to your AKS cluster, then to its Networking tab. You’ll see an Application Gateway ingress controller section with an option to enable/disable the ingress controller add-on from the portal.
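Either way, you can confirm the add-on is active from the CLI (cluster and resource group names are the placeholders used above):

```shell
# Prints "true" once the ingress-appgw add-on is enabled on the cluster.
az aks show -n myCluster -g myResourceGroup \
  -o tsv --query "addonProfiles.ingressApplicationGateway.enabled"
```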
Since I’ve deployed the AKS cluster in its own virtual network and the Application gateway in another virtual network, I’ll have to peer the two virtual networks together in order for traffic to flow from the Application gateway to the pods in the cluster.
nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
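Before moving on, it is worth checking that both directions of the peering are established (this reuses the variables set above):

```shell
# Both commands should print "Connected" when the peering is healthy.
az network vnet peering show \
  -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet \
  -o tsv --query "peeringState"
az network vnet peering show \
  -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName \
  -o tsv --query "peeringState"
```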
Done. Now deploy a sample application to the AKS cluster; it will use the AGIC add-on for Ingress and connect the Application Gateway to the AKS cluster.
kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
kubectl get ingress
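Once the ingress shows an address, you can test end to end through the gateway's public IP (myPublicIp/myResourceGroup are the placeholder names from the earlier steps):

```shell
# Fetch the gateway's frontend IP and request the sample app through it.
publicIp=$(az network public-ip show -n myPublicIp -g myResourceGroup \
  -o tsv --query "ipAddress")
curl -I "http://$publicIp/"
# A successful response indicates traffic flows gateway -> AGIC-managed pool -> pod.
```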