
Unable to SSH using the Azure CLI to VMs on subnet


I am having trouble with networking in Azure with an OPNsense firewall. I am brand new to Azure, so I am just playing around with it.

I have set up an OPNsense box in Azure using the instructions from GitHub here: GitHub - dmauser/opnazure: This template allows you to deploy an OPNsense Firewall Azure VM using the opnsense-bootstrap installation method. I used the 2-NIC deployment.

The firewall uses the template's two subnets; the Trusted side is 10.1.1.0/24, with the firewall at 10.1.1.4.

I have created a new subnet (10.1.5.0/24) that I want to put some VMs on. A route table has been added to the subnet so that the firewall (10.1.1.4) is the next hop for internet traffic. The VMs deploy correctly, and VMs on the new subnet and the Trusted subnet can ping each other. The VMs on the new subnet can also reach the internet through the firewall. However, I am not able to SSH onto the machines while they are connected to the new subnet.
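For reference, the route-table setup described above looks roughly like this in the Azure CLI (a sketch; `rt-new`, `vnet-fw`, and `subnet-new` are placeholder names I've invented, and only the resource group name comes from this post):

```shell
# Route table with a default route pointing at the firewall's Trusted NIC
az network route-table create -g rg-test-au-east -n rt-new
az network route-table route create -g rg-test-au-east --route-table-name rt-new \
  -n default-via-opnsense --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.4

# Attach the route table to the new 10.1.5.0/24 subnet
az network vnet subnet update -g rg-test-au-east --vnet-name vnet-fw \
  -n subnet-new --route-table rt-new
```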

Using Azure CLI to SSH always returns the following:

:/# az ssh vm --resource-group rg-test-au-east --name vm-nettest
No public IP detected, attempting private IP (you must bring your own connectivity).
Use --prefer-private-ip to avoid this message.
OpenSSH_9.1p1, OpenSSL 3.0.8 7 Feb 2023
ssh: connect to host 10.1.5.4 port 22: Connection refused

Using --local-user returns the same error. Once I move the VM to the Trusted subnet, I am able to SSH onto the machine with no problems.
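As a side note, "Connection refused" at the TCP level usually means something answered and reset the connection, while a routing black hole typically shows up as a timeout instead. A quick bash probe can tell the two apart (the IP is the one from the error above; `check_port` is just an illustrative helper, not part of the Azure CLI):

```shell
# Probe a TCP port using bash's /dev/tcp, with a 3-second timeout.
# Prints "open" if the three-way handshake completes, "unreachable"
# if the connection is refused or times out.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} open on ${host}"
  else
    echo "port ${port} unreachable on ${host}"
  fi
}

check_port 10.1.5.4 22
```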

I have gone through the troubleshooting steps found here: Troubleshoot SSH connection issues to an Azure VM - Virtual Machines | Microsoft Learn. Since I can SSH to the VM when it is on a different subnet, I feel like the issue is not with SSH access itself.

It seems like there might be an issue with routing, but I am not sure whether I need to adjust something in OPNsense, like adding a gateway for the new subnet (which feels incorrect), or whether I am missing something in Azure.

Since this is just a test setup, I have a very open NSG on both subnets, so port 22 is definitely open. Why would I not be able to SSH onto VMs on this new subnet?
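With the NSGs open, one way to check what Azure actually does with the traffic is Network Watcher's next-hop query (a sketch, assuming Network Watcher is enabled in the region; the resource group and VM name are from this post, the IPs are illustrative):

```shell
# Ask Azure for the effective next hop from the test VM toward the internet.
# If the UDR is applied, the result should report the VirtualAppliance next hop
# (10.1.1.4); "Internet" here would indicate the route table is not taking effect.
az network watcher show-next-hop \
  --resource-group rg-test-au-east \
  --vm vm-nettest \
  --source-ip 10.1.5.4 \
  --dest-ip 8.8.8.8
```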


Solution

  • Thank you to @MarioDietner for helping me see that Azure CLI SSH is not as magical as I thought. I have now been able to isolate where my issue was.

    I am using OpenVPN to connect to the firewall and gain access to the network. With the VPN connected, I am able to ping everything on the Trusted subnet (10.1.1.0/24) but when I ping VMs on the new subnet (10.1.5.0/24), the request times out. In the firewall, I enabled ICMP in all the rules. When I ping the VMs on the trusted subnet, I can see the pass logs, but when I ping VMs on the new subnet, I get nothing.

    To test whether I was reaching the firewall at all, I ran a tracert to a VM on each subnet. For the VM on the Trusted subnet, traffic goes to the VPN gateway first and hits the correct machine on the next hop. When I trace to the new subnet, the traffic gets sent straight out to the internet, where the request eventually times out. So I was not hitting the firewall at all for the new subnet.

    I had the correct routes set in the firewall, but I was missing the new subnet in the actual OpenVPN server configuration, so my local machine had no idea where to look. After adding the new subnet to IPv4 Local Network in the VPN configuration, everything is working like a charm.
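    For anyone hitting the same thing: as I understand it, the IPv4 Local Network field in OPNsense's OpenVPN server settings translates into push route directives in the generated server config, roughly like this (masks shown for the two subnets in this post):

    ```
    push "route 10.1.1.0 255.255.255.0"
    push "route 10.1.5.0 255.255.255.0"
    ```

    Without the second line, the client never learns to send 10.1.5.0/24 into the tunnel, which matches the tracert result above.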