I have a docker host sitting on a little private network, to which some IP cameras are also attached; one of them sits at e.g. 192.168.10.10. I also have a cluster of docker containers that need to talk to each other, which it is convenient to keep in their own docker bridge network. The trouble is: can I allow a container to access a non-public IP that the container host can reach, while still being connected to a bridge network?
Things I have tried (full commands are sketched below):

- Attaching to both the host network and a user-defined bridge (--network=host and --network=mycoolnet) is forbidden by docker ("cannot attach both user-defined and non-user-defined network-modes").
- Adding a host entry (--add-host=mycamera:192.168.10.10) provides a nice hostname, but does not make the content at that IP accessible within the container.
- Creating a macvlan network (docker network create -d macvlan --subnet 192.168.0.0/16 --gateway 192.168.0.1 -o parent=mynetworkdevice my-macvlan-net) assigns the container an identity on that subnet, but does not necessarily make the content of that subnet available to the container.

I am not asking about how to connect to localhost specifically; that has been answered succinctly and correctly in another question.
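For concreteness, here is roughly how the attempts above were invoked; myimage is a placeholder image name, and mycoolnet/mycamera are just the example names from above:

# attempt 1: rejected with the error quoted above
docker run --network=host --network=mycoolnet myimage
# attempt 2: the hostname resolves inside the container, but the camera is still unreachable
docker run --network=mycoolnet --add-host=mycamera:192.168.10.10 myimage
# attempt 3: after creating the macvlan network, the container has an address on that subnet, nothing more
docker run --network=my-macvlan-net myimage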
EDIT: An intricacy that is not clear in the original question (and is key to the resolution) is that 192.168.10.10 is not reachable via the host's primary network interface, but via its second interface (the aforementioned "little private network").
The problem I was having turned out to be related not to how docker networking per se worked, but to how iptables was handling my request. An intricacy that I failed to mention in the original question (which I have since updated) is that the docker host sits on more than one network. The request for 192.168.10.10 would exit the docker container and be handed to the primary network interface, but the secondary network interface was the one that could reach it. Accordingly, the request would get bounced inside the container. Outside the container, the host knew to try both interfaces, and could service my request just fine.
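To make the symptom concrete (busybox is just a convenient throwaway test image, and mybridge stands in for the bridge network my containers were on):

> ping -c 1 192.168.10.10    # from the host: replies arrive, via the second interface
> docker run --rm --network=mybridge busybox ping -c 1 192.168.10.10    # from a container: times out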
Steps to address my particular situation:
When creating the docker network, I give it a specific subnet:
docker network create --driver=bridge --subnet=192.168.100.0/24 dev2-bridge
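As a quick sanity check, docker network inspect confirms the subnet that was actually assigned (the --format template below matches the JSON it emits):

> docker network inspect dev2-bridge --format '{{(index .IPAM.Config 0).Subnet}}'
192.168.100.0/24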
Checking iptables, I can see that docker has created a masquerade rule, and it's evidently not what I want:
> iptables -L -n -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.100.0/24 0.0.0.0/0
...
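Since a later step deletes a rule by its position in the chain, it helps to list the rules with their indices first:

> iptables -L POSTROUTING -n -t nat --line-numbers
num  target      prot opt  source            destination
1    MASQUERADE  all  --   192.168.100.0/24  0.0.0.0/0
...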
Let's look at my IP routes and find the interface and source address used to reach the device I want:
> ip route
...
192.168.10.0/24 dev interface2 proto kernel scope link src 192.168.10.3 metric 101
Looks like the host reaches the 192.168.10.0/24 subnet directly on interface2, using the source address 192.168.10.3; that's the address my container traffic should use!
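The kernel will confirm this choice directly: ip route get reports the interface and source address it would pick for a given destination.

> ip route get 192.168.10.10
192.168.10.10 dev interface2 src 192.168.10.3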
I remove the masquerade rule (the command below deletes whatever rule sits at position 1 in the chain, which here is the MASQUERADE rule shown above) and add a SNAT rule in its place:
> iptables -t nat -D POSTROUTING 1
> iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -d 192.168.10.0/24 -j SNAT --to-source 192.168.10.3
> iptables -L -n -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
...
SNAT all -- 192.168.100.0/24 192.168.10.0/24 to:192.168.10.3
...
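Two caveats: rules added with iptables this way do not survive a reboot, and docker will re-create its own NAT rules when the daemon restarts, so the MASQUERADE rule can come back. One common way to persist the custom rules (assuming a Debian-style host with the iptables-persistent package):

> iptables-save > /etc/iptables/rules.v4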
This means that containers on the docker network dev2-bridge now only get NAT toward the 192.168.10.0/24 network: they cannot see anything that my local network address 192.168.10.3 can't, and they lose the general-purpose access the MASQUERADE rule provided. That's fine for me! It might not be appropriate for your use case.
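With the SNAT rule in place, a quick check from inside the bridge network confirms the camera now answers (again, busybox is just a convenient throwaway image):

> docker run --rm --network=dev2-bridge busybox ping -c 1 192.168.10.10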
This answer on ServerFault was the key to getting the result I needed.