google-cloud-platform, network-programming, google-cloud-functions, serverless, vpn

GCP: Serverless VPC Connector not working with Cloud Functions


I have the following network setup:

[GCP VPC]* -> [Cloud VPN Gateway & Tunnel (IPSec IKEv2)] -> [on premises]**

*internal IP range: 10.0.0.0/24
**internal IP range: 10.106.0.0/20

On premises, I have an API (Node.js) running on 10.106.0.2:3000.

I want to make HTTP calls to the aforementioned API from GCP.
(All traffic from GCP other than the IPSec tunnel is blocked.)

A) From a VM (this works)

If I run a VM attached to GCP's VPC that's "connected" to the on-premises network, I am able to:

ping 10.106.0.2       # OK
curl 10.106.0.2:3000  # OK

The following Node.js script works perfectly:

// script.js

async function run() {
    const url = 'http://10.106.0.2:3000';

    try {
        const response = await fetch(url);      // Node 18+ ships fetch globally
        const data = await response.text();
        console.log('Response:', data);         // Prints the expected response from the API
    } catch (error) {
        console.error('Error:', error.message);
    }
}

run();

$ node script.js
# Reaches the on-premises API and prints the expected response

B) From a Cloud Function (this doesn't work)

I configured a Serverless VPC Connector.
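For reference, the connector was created roughly like this (connector name, region, and network name are placeholders; 10.1.0.0/28 is the connector's internal range):

gcloud compute networks vpc-access connectors create my-connector \
    --region=europe-west1 \
    --network=my-vpc \
    --range=10.1.0.0/28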

Then I created a Cloud Function (gen1) with access to the Connector (tried both "Private ranges only" and "All traffic" as egress routing settings).
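The deployment looked roughly like this (runtime, region, and the exact flags are assumptions and may vary with your gcloud version):

gcloud functions deploy helloWorld \
    --runtime=nodejs18 \
    --trigger-http \
    --region=europe-west1 \
    --vpc-connector=my-connector \
    --egress-settings=all
# On newer gcloud versions, --no-gen2 may be needed to force a 1st gen function.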

The function does the same as the script in the VM:

exports.helloWorld = async (req, res) => {
    const url = 'http://10.106.0.2:3000';

    try {
        const response = await fetch(url);
        const data = await response.text();
        console.log('Response:', data);
        res.status(200).send(data);
    } catch (error) {
        console.error('Error:', error.message);
        res.status(500).send(error.message);
    }
};

This doesn't work. I get the following error from the function:

{"cause":{"name":"ConnectTimeoutError","code":"UND_ERR_CONNECT_TIMEOUT","message":"Connect Timeout Error"}}

Additionally, I tried adding extra firewall rules, as described here, allowing all TCP ingress traffic (and port 3000 specifically as well). This didn't help.
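For illustration, a rule of that kind can be added like this (the rule name, network, and source range are placeholders, not necessarily what I used):

gcloud compute firewall-rules create allow-tcp-3000 \
    --network=my-vpc \
    --direction=INGRESS \
    --allow=tcp:3000 \
    --source-ranges=10.106.0.0/20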

Any ideas? Am I missing something?

Edit:

I've created a "Connectivity Test" to check the connection between my Cloud Function and 10.106.0.2:3000:

[Screenshot: Connectivity Test result from the Cloud Function to 10.106.0.2:3000]
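For reference, a similar test can also be created from the CLI. A rough sketch, where the test name, project, network, and source IP (an address inside the connector's /28 range) are placeholders:

gcloud network-management connectivity-tests create cf-to-onprem \
    --source-ip-address=10.1.0.2 \
    --source-network=projects/MY_PROJECT/global/networks/MY_VPC \
    --destination-ip-address=10.106.0.2 \
    --destination-port=3000 \
    --protocol=TCP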

Edit 2:

For comparison, this is what the "Connectivity Test" from the VM to the same target (10.106.0.2:3000) looks like:

[Screenshot: Connectivity Test result from the VM to 10.106.0.2:3000]


Solution

  • To solve this issue, I had to edit the IPSec configuration on my on-premises server (/etc/ipsec.conf).

    The internal IP range of the Serverless VPC Connector (10.1.0.0/28 in my case) had to be added to rightsubnet:

    config setup
        protostack=netkey

    conn mysubnet
        also=mytunnel
        leftsubnet=10.106.0.0/20
        rightsubnet=10.0.0.0/24,10.1.0.0/28 # <--- HERE
        auto=start

    conn mytunnel
        left=x.x.x.x  # the external IP of the on-premises server
        right=y.y.y.y # the external IP of GCP's Cloud VPN Gateway
        authby=secret

    Now everything works properly (see also the note on reloading below):
    a) requests from any VM (in 10.0.0.0/24), and
    b) requests from any Cloud Function with a Serverless VPC Connector (in 10.1.0.0/28) attached to it.
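
  • Note: after editing /etc/ipsec.conf, the daemon has to reload the configuration before the new rightsubnet entry takes effect. On a Libreswan/Openswan-style setup (which the protostack=netkey option suggests), restarting the service is one way to do that; the exact command is an assumption and may differ per distribution:

    # Restart the IPSec service so the updated rightsubnet is applied
    sudo systemctl restart ipsec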