Tags: google-cloud-platform, google-cloud-functions, google-compute-engine, serverless, google-vpc

Google serverless VPC connector does not access GCE instance with multiple network interfaces


Is there any way I can connect Cloud Functions, through a VPC connector on the default network, to a GCE instance with multiple network interfaces, where nic0 is on the someother network and nic1 is on the default network?

So I have a GCE instance with multiple network interfaces.

nic0 is on the someother network

nic1 is on the default network

I made a serverless VPC connector on the default network and used that connector with Google Cloud Functions to connect to the GCE instance.
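
For reference, the connector and function were set up roughly like this (the connector name, region, and /28 range below are illustrative, not my exact values):

    # Create a serverless VPC connector on the default network
    gcloud compute networks vpc-access connectors create my-connector \
        --network default \
        --region us-central1 \
        --range 10.8.0.0/28

    # Attach it when deploying the function (runtime/trigger flags omitted)
    gcloud functions deploy my-function \
        --vpc-connector my-connector \
        --region us-central1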

The problem is that when the network interfaces are swapped, i.e. nic0 is on the default network and nic1 is on the someother network, the VPC connector connects successfully and the Cloud Functions can reach the GCE instance; but when nic0 is on the someother network and nic1 is on the default network, the Cloud Functions cannot reach the GCE instance.

I tried the following things:

  1. I tried swapping the network interfaces (i.e. putting default on nic0), and it works, but I need nic0 on the someother network to connect to another external server, so default has to stay on nic1.
  2. I tried making firewall rules, but apparently they are not needed in this scenario, as I already have the necessary rules set up.
  3. I tried making a VPC connector on the someother network so it could connect to nic0, but that does not work either; the VPC connector should be on the default network.

Note: I have the correct IAM permissions set up, as I've successfully connected Cloud Functions to a GCE instance with only the default network.


Solution

  • Without further configuration, secondary network interfaces only provide access to the immediate subnet they are attached to. This includes serverless VPC connectors, which are by their very nature in a different subnet than the one your instance is attached to.
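
    You can confirm this from inside the VM by inspecting the guest routing table; with two NICs it will typically show a default route via eth0 and only the directly connected subnet route for eth1, i.e. nothing covering the connector's /28 range:

        # Print the kernel routing table; look for a route that covers the
        # connector's /28 subnet (there normally won't be one for eth1).
        ip route show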

    To get around this, you need to create a static route in the operating system of the instance where the secondary interface is located. This will obviously vary based on your operating system, but on Debian 9 you can set it up with this command:

    sudo ip route add [MY_CONNECTOR_SUBNET] via [ETH1_DEFAULT_ROUTER] dev eth1
    

    Where ETH1_DEFAULT_ROUTER is the .1 address of your ETH1 subnet, and MY_CONNECTOR_SUBNET is the CIDR-format /28 subnet the connector is configured to use (e.g. something like 10.50.1.0/28, but it will depend on how you set up your connector).
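
    For example, if your connector uses 10.50.1.0/28 and eth1 sits in the default network's 10.128.0.0/20 subnet (the auto-mode range for us-central1; substitute your own values), the command becomes:

        # Send traffic destined for the connector's /28 out via eth1's gateway.
        # 10.50.1.0/28 and 10.128.0.1 are illustrative values; use your own.
        sudo ip route add 10.50.1.0/28 via 10.128.0.1 dev eth1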

    Of course, this route doesn't persist across reboots, as making it permanent is also an OS-specific configuration, but it should let you confirm whether this is the problem in your case.
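
    If you do want the route to survive reboots on Debian 9, one way is a post-up line in an ifupdown stanza; this is a minimal sketch assuming ifupdown manages eth1 (GCE images may manage secondary NICs differently, e.g. via the guest agent or DHCP hooks):

        # /etc/network/interfaces.d/eth1 (sketch; assumes ifupdown manages eth1)
        auto eth1
        iface eth1 inet dhcp
            # Re-add the connector route each time eth1 comes up
            post-up ip route add 10.50.1.0/28 via 10.128.0.1 dev eth1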

    Also, there isn't really anything special about the 'default' network: it's just an auto-created auto-mode network, and there isn't any reason this shouldn't have worked when you had the connector attached to the nic0 "someother" network. The only thing 'special' happening here is that nic0 gets the default route for all traffic out of the VM, and therefore won't need a static route added to access a Serverless VPC Connector on the same network.