I have an EC2 instance attached to an Elastic IP that serves a REST API to the public through nginx. The FQDN points to the Elastic IP address and there's a working Let's Encrypt SSL cert for that domain. This works great for systems accessing the server from the Internet via HTTPS.
For internal systems that access the same server, I want to use the server's internal IP address instead of the public one the FQDN points to, so that traffic stays inside my VPC. However, the SSL cert is of course tied to the FQDN, so internal clients complain about the certificate mismatch.
How do I get this scenario to work so that access via both the internal IP and the external FQDN uses HTTPS?
Here's how I accomplished this:
I created two WSGI services, internal and external, each using its own .sock file, e.g.:
ExecStart=/usr/bin/python3 /usr/local/bin/gunicorn --bind unix:/tmp/web-internal.sock wsgi:app
ExecStart=/usr/bin/python3 /usr/local/bin/gunicorn --bind unix:/tmp/web-external.sock wsgi:app
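For reference, here's a rough sketch of what the internal unit file can look like; the unit name, user, and working directory are placeholders, not my actual values. The external unit is identical apart from the socket path.

# /etc/systemd/system/web-internal.service (sketch; user and paths are examples)
[Unit]
Description=Gunicorn serving the internal API socket
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/api
ExecStart=/usr/bin/python3 /usr/local/bin/gunicorn --bind unix:/tmp/web-internal.sock wsgi:app
Restart=on-failure

[Install]
WantedBy=multi-user.target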
I created two nginx profiles, internal and external, each referencing the same-named WSGI service via its .sock file. Here is just the internal nginx snippet:
location / {
    proxy_pass http://unix:/tmp/web-internal.sock;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The internal and external nginx profiles both listen on the EC2 instance's internal IP address, but on different ports; a fuller sketch of the internal server block is below.
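Roughly, the internal server block looks like this (the private IP 10.0.1.25, port 8443, and cert paths are placeholders for my actual values):

server {
    # Bind only to the instance's private IP, on a non-standard HTTPS port
    listen 10.0.1.25:8443 ssl;

    # Server cert signed by my private CA (see below), with the internal IP as a SAN
    ssl_certificate     /etc/nginx/ssl/api-internal.crt;
    ssl_certificate_key /etc/nginx/ssl/api-internal.key;

    location / {
        proxy_pass http://unix:/tmp/web-internal.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}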
I kept the Let's Encrypt cert attached to the externally facing nginx profile at the public FQDN.
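The external profile has the same shape, but is bound to port 443 with the Let's Encrypt cert and proxies to the external socket (api.example.com stands in for the real FQDN):

server {
    listen 10.0.1.25:443 ssl;
    server_name api.example.com;

    # Certbot-managed cert for the public FQDN
    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        proxy_pass http://unix:/tmp/web-external.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}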
I created a private certificate authority (CA), used it to sign a server certificate whose subject alternative name (SAN) contains the API server's internal IP address, and shared the private CA certificate with all the clients that will access the internal version of the API.
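The CA and server cert can be produced with openssl along these lines (filenames, validity periods, and the 10.0.1.25 IP are just examples):

# Private CA: key plus a self-signed root certificate
openssl genrsa -out internal-ca.key 4096
openssl req -x509 -new -key internal-ca.key -sha256 -days 3650 \
    -subj "/CN=Internal API CA" -out internal-ca.crt

# Server key and CSR for the internal endpoint
openssl genrsa -out api-internal.key 2048
openssl req -new -key api-internal.key -subj "/CN=api-internal" -out api-internal.csr

# Sign the server cert with the CA, putting the instance's private IP in the SAN
openssl x509 -req -in api-internal.csr -CA internal-ca.crt -CAkey internal-ca.key \
    -CAcreateserial -days 825 -sha256 \
    -extfile <(printf "subjectAltName=IP:10.0.1.25") -out api-internal.crt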
Now internal services can access the API over HTTPS via the EC2 internal IP address, and external services can access it over HTTPS via the FQDN using the Let's Encrypt-issued cert.
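A quick way to check both paths (again, the IP, port, and FQDN are placeholders):

# Internal client: trust the private CA and hit the private IP directly
curl --cacert internal-ca.crt https://10.0.1.25:8443/

# External client: normal HTTPS against the public FQDN
curl https://api.example.com/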