I'm having trouble migrating an Azure Container App environment from "consumption only" to "workload profile". My setup is a hub-spoke scenario where the spoke contains the container app environment, which is connected to a VNet/subnet in the 10.70.10.0/23 range. All containers in the "consumption only" scenario also got an IP in this range. There is a route table attached to the subnet which routes all traffic to the firewall in the peered hub VNet. This all worked fine (see image below).

After migrating to "workload profile", all containers get a 100.100.x.x IP. No traffic seems to be routed through the route table anymore, I cannot reach any of the on-prem services, and no traffic appears to be arriving at the firewall either.
That was good troubleshooting. I'm converting our conversation into an answer for anyone on SO who runs into a similar issue; feel free to add your own points.
The root cause is that the custom traffic selectors on the VPN connections weren't forwarding the new 100.100.0.0/16 range that the workload profile environment assigns to the container apps.
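Before changing anything, it helps to confirm what the existing connection is actually negotiating. A quick check, assuming a site-to-site connection named OnPremConnection (a placeholder, use your own connection name):

# Inspect the traffic selector settings on the existing connection
az network vpn-connection show \
--name OnPremConnection \
--resource-group arkorg \
--query "{policyBased:usePolicyBasedTrafficSelectors, selectors:trafficSelectorPolicies}"

If the selectors only cover the 10.70.x.x prefixes, traffic sourced from 100.100.0.0/16 never makes it across the tunnel, and the same range also has to be allowed on the on-premises VPN device.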
To resolve this, recreate the setup as follows:
# Create VNet Hub
az network vnet create \
--name VNetHub \
--resource-group arkorg \
--address-prefixes 10.70.0.0/23 \
--subnet-name GatewaySubnet \
--subnet-prefixes 10.70.0.0/24
# Create VNet Spoke
az network vnet create \
--name VNetSpoke \
--resource-group arkorg \
--address-prefixes 10.70.10.0/23 \
--subnet-name SpokeSubnet \
--subnet-prefixes 10.70.10.0/24
# Create Static Public IP for the VPN Gateway
az network public-ip create \
--resource-group arkorg \
--name myVpnGatewayIP \
--allocation-method Static \
--sku Standard
# Create the VPN Gateway
az network vnet-gateway create \
--resource-group arkorg \
--name VNetGateway \
--public-ip-address myVpnGatewayIP \
--vnet VNetHub \
--gateway-type Vpn \
--vpn-type RouteBased \
--sku VpnGw1 \
--no-wait
# Peer VNetHub to VNetSpoke
az network vnet peering create \
--name HubToSpoke \
--resource-group arkorg \
--vnet-name VNetHub \
--remote-vnet VNetSpoke \
--allow-vnet-access
# Peer VNetSpoke to VNetHub
az network vnet peering create \
--name SpokeToHub \
--resource-group arkorg \
--vnet-name VNetSpoke \
--remote-vnet VNetHub \
--allow-vnet-access
# Create Route Table
az network route-table create \
--name SpokeRouteTable \
--resource-group arkorg \
--location eastus
# Add Route to VPN Gateway
az network route-table route create \
--resource-group arkorg \
--route-table-name SpokeRouteTable \
--name RouteToVPN \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address 10.70.1.4 # Replace with the private IP of your firewall/NVA in the hub
# Associate Route Table with Spoke Subnet
az network vnet subnet update \
--vnet-name VNetSpoke \
--name SpokeSubnet \
--resource-group arkorg \
--route-table SpokeRouteTable
Delegate the subnet to Microsoft.App/environments (this allows Azure Container Apps to manage the subnet):
az network vnet subnet update \
--name SpokeSubnet \
--resource-group arkorg \
--vnet-name VNetSpoke \
--delegations Microsoft.App/environments
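Optionally, confirm the delegation is in place before creating the environment:

# Verify the delegation on the subnet (optional check)
az network vnet subnet show \
--resource-group arkorg \
--vnet-name VNetSpoke \
--name SpokeSubnet \
--query "delegations[].serviceName"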
Create the Container Apps environment with workload profiles enabled:
az containerapp env create \
--name arkoContainerEnv \
--resource-group arkorg \
--location eastus \
--enable-workload-profiles \
--infrastructure-subnet-resource-id "/subscriptions/abcd-efgh-ijk-lmnop-9d23123dfc7d/resourceGroups/arkorg/providers/Microsoft.Network/virtualNetworks/VNetSpoke/subnets/SpokeSubnet"
Deploy a Container App. Note that the VPN connection may need to be deleted and recreated with the correct traffic selectors (see the sketch after the deployment command below):
az containerapp create \
--name arkocontainerapp \
--resource-group arkorg \
--environment arkoContainerEnv \
--image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
--cpu 0.5 \
--memory 1.0Gi \
--target-port 80 \
--ingress 'external' \
--query properties.configuration.ingress.fqdn
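For the VPN side, a minimal sketch of deleting and recreating the connection, assuming a connection named OnPremConnection and a local network gateway named OnPremGateway (both placeholders; your names and shared key will differ). The important part is that the traffic selectors negotiated by the new connection, and those configured on the on-premises device, include 100.100.0.0/16:

# Remove the connection that still carries the old traffic selectors (placeholder names)
az network vpn-connection delete \
--name OnPremConnection \
--resource-group arkorg

# Recreate it against the same gateways; the on-prem device must also allow 100.100.0.0/16
az network vpn-connection create \
--name OnPremConnection \
--resource-group arkorg \
--vnet-gateway1 VNetGateway \
--local-gateway2 OnPremGateway \
--shared-key "<your-shared-key>"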
This should fix the issue where traffic from the new 100.100.x.x range wasn't being routed to your VPN. With this configuration, both ICMP and HTTP(S) traffic are routed and forwarded correctly over the VPN connection.
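As a final check, you can open a shell in a running replica and test reachability of an on-prem endpoint (this assumes the container image ships a shell and basic networking tools, which the hello-world sample image may not):

# Open an interactive shell in a replica of the app
az containerapp exec \
--name arkocontainerapp \
--resource-group arkorg \
--command "/bin/sh"

# From inside the replica, for example:
# ping <on-prem-ip>
# wget -qO- http://<on-prem-service>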