I am currently facing an issue with accessing the master of a private GKE cluster on Google Cloud Platform. Here’s the configuration I have:
resource "google_container_cluster" "private_cluster" {
name = "name"
location = var.zone
network = gke_vpc_self_link
subnetwork = gke_subnet_self_link
enable_shielded_nodes = true
remove_default_node_pool = true
initial_node_count = 1
private_cluster_config {
enable_private_endpoint = true
enable_private_nodes = true
master_ipv4_cidr_block = "172.16.0.0/28"
}
I have a Pritunl VPN server running in the VPC Hub. The problem is that the GKE master in the VPC Spoke is not reachable at its private endpoint IP from clients connected through the VPN in the VPC Hub. This is because VPC peering in Google Cloud is not transitive: routes a VPC learns over one peering are not re-advertised over another, so VPN clients cannot reach the GKE master unless they sit in the same VPC.
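For context, the hub-and-spoke peering looks roughly like this (resource and variable names are placeholders for my actual setup). Note that even with custom route exchange enabled, the master range never shows up in the VPC Hub:

```hcl
# Placeholder sketch of the existing hub<->spoke peering.
resource "google_compute_network_peering" "hub_to_spoke" {
  name                 = "hub-to-spoke"
  network              = var.hub_vpc_self_link
  peer_network         = var.gke_vpc_self_link
  export_custom_routes = true
  import_custom_routes = true
}

resource "google_compute_network_peering" "spoke_to_hub" {
  name                 = "spoke-to-hub"
  network              = var.gke_vpc_self_link
  peer_network         = var.hub_vpc_self_link
  export_custom_routes = true
  import_custom_routes = true
}

# Even with export/import of custom routes, 172.16.0.0/28 is a route the
# spoke itself learned over GKE's own Google-managed peering, and
# peering-learned routes are never propagated across a second peering.
```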
I'm looking for a way to reach the GKE master in the VPC Spoke from my local machine (via the VPN in the VPC Hub) without having to set up a second VPN in the VPC Spoke.
What are the recommended practices or configurations to achieve this setup? Any suggestions on using a transit gateway, routes, or other methods to resolve this issue would be highly appreciated.
A recent release of Network Connectivity Center (NCC) could solve your problem. I faced the same issue and ended up paying for support from a Google Cloud engineer.
The feature is called NCC PSC propagated connections. It helps in a hub-and-spoke architecture (a custom VPC peered to a Shared VPC that hosts GKE): the PSC connection for the GKE control plane is propagated through the NCC hub, so it becomes routable from the custom VPC.
You can see the new feature here (released Jun 14, 2024).
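For illustration, here is a minimal Terraform sketch of that setup, assuming an NCC hub with PSC propagation enabled and both VPCs attached as VPC spokes (resource and variable names are my own placeholders, not taken from the release notes):

```hcl
# Hypothetical sketch: NCC hub with PSC propagated connections enabled,
# with the hub VPC and the GKE (spoke) VPC attached as NCC VPC spokes.
resource "google_network_connectivity_hub" "hub" {
  name = "ncc-hub"

  # Propagates Private Service Connect connections (such as the GKE
  # control plane endpoint) between spokes attached to this hub.
  export_psc = true
}

resource "google_network_connectivity_spoke" "hub_vpc" {
  name     = "hub-vpc-spoke"
  location = "global"
  hub      = google_network_connectivity_hub.hub.id

  linked_vpc_network {
    uri = var.hub_vpc_self_link
  }
}

resource "google_network_connectivity_spoke" "gke_vpc" {
  name     = "gke-vpc-spoke"
  location = "global"
  hub      = google_network_connectivity_hub.hub.id

  linked_vpc_network {
    uri = var.gke_vpc_self_link
  }
}
```

With both VPCs attached as spokes, the propagated PSC connection should make the control plane reachable from VPN clients in the hub VPC. I haven't verified every argument against the current provider docs, so check the `google_network_connectivity_hub` and `google_network_connectivity_spoke` documentation before applying.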
If you find this helpful, please give it an upvote (I need 50 reputation to comment on posts, thanks).