docker, docker-compose, docker-swarm

Using replicas to scale multiple web server instances


The answer is at the end

I'm developing a web server in Rust, and it works fine using just Docker with two services defined in Compose, as follows:

x-common_auth_service: &common_auth_service
  container_name: auth
  build: 
    context: ./auth
    dockerfile: Dockerfile
  network_mode: host
  restart: always
  deploy:
    resources:
      limits:
        memory: 40M
      reservations:
        memory: 20M
  depends_on:
    - redis

services:
  
  auth:
    <<: *common_auth_service
    container_name: auth
    environment: 
      APP_PORT: 3002 # specific port
  
  auth2:
    <<: *common_auth_service
    container_name: auth2
    environment: 
      APP_PORT: 3003 # specific port

But now I need to implement these services using Docker Swarm.

Here I ran into the first problem when using replicas.

compose.yml

services:

  auth_service:
    image: service/auth_app_rust
    build: 
      context: ./auth
      dockerfile: Dockerfile
    networks:
      - network_overlay # this was previously network_mode: host, hence the problem
    ports:
      - "5000-5001:3002"
    deploy:
      mode: replicated
      replicas: 2

  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/auth.conf:/etc/nginx/nginx.conf
    networks:
      - host
    depends_on:
      - auth_service

nginx.conf

upstream auth_server {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    keepalive 200;
}

server {
    listen 9999;
    location / {
        proxy_buffering off;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Keep-Alive "";
        proxy_set_header Proxy-Connection "keep-alive";
        proxy_pass http://auth_server;
    }
}

When the services were defined manually, each one had its own port; with replicas, every copy would publish the same port, so I decided to use "port range mapping".

At first it seemed to work fine, but then I noticed that Docker Swarm doesn't assign one port of the range to each container; it actually load-balances every published port across the replicas.

In other words, if I run:

curl http://localhost:5000 

It won't necessarily hit container 1.
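
You can see the balancing directly if the app reports which replica served the request. A quick sketch, assuming a hypothetical /whoami route that echoes the container hostname:

for i in 1 2 3 4; do curl -s http://localhost:5000/whoami; done
# prints e.g. auth_service.1, auth_service.2, auth_service.1, ... in no fixed order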

Scaling services with replicas has proven very convenient, and the model also makes upgrading services easy, so how do I solve this problem?

I would still like to use replicas instead of defining several services, but I don't want to use the Docker Swarm load balancer, since I already do the load balancing through nginx.

In short, I want to scale the number of containers running the Rust web server without declaring each one in docker-compose, and without going through Swarm's load balancer, since stacking it in front of nginx would just add a redundant balancing layer.

EDIT

@Chris Becke provided insight on how to resolve the issue in the overlay network scenario.

Here is how I solved it for the case where host networking is used.

Basically, we use the service template placeholder "{{.Task.Slot}}" to derive each task's ports, so the replicas don't conflict.

We also modify "update_config" so that no port conflicts occur while containers are being updated:

update_config:
  parallelism: 1
  order: stop-first

compose.yml (host network, for better performance)

  auth_service:
    image: service/auth_app_rust
    hostname: auth-service-{{.Task.Slot}}
    build: 
      context: ./auth
      dockerfile: Dockerfile
    networks: # with host networking, each replica listens on its own 300x port
      - host
    environment:
      REDIS_HOST: localhost
      APP_PORT: "300{{.Task.Slot}}" # port derived from the task slot, e.g. slot 1 -> 3001, slot 2 -> 3002
      METRIC_PORT: "301{{.Task.Slot}}"
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1 # !
        order: stop-first # !
        failure_action: rollback
        delay: 5s
      resources:
        limits:
          memory: 40M
        reservations:
          memory: 20M
    depends_on:
      - redis
      - rust_service_builder
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://127.0.0.1:$$APP_PORT/health # use the env var; a hardcoded 3002 would only match slot 2
      retries: 5
      interval: 3s
      timeout: 5s


networks:
  host:
    name: host
    external: true
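
For reference, the Rust side just needs to read APP_PORT and bind to it. The actual server code isn't part of the question, so this is only a minimal std-only sketch of that contract (the /health route simply returns 200, matching the healthcheck above):

use std::env;
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() {
    // APP_PORT is injected per replica by the compose template, e.g. 3001, 3002
    let port: u16 = env::var("APP_PORT")
        .unwrap_or_else(|_| "3001".to_string())
        .parse()
        .expect("APP_PORT must be a number");
    let listener = TcpListener::bind(("0.0.0.0", port)).expect("failed to bind APP_PORT");

    for stream in listener.incoming() {
        let mut stream = stream.unwrap();
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf); // request details are ignored in this sketch
        // answer every request (including /health) with a 200
        let _ = stream
            .write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok");
    }
}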

Solution

  • It's not clear where nginx is running. Given you are using 127.0.0.1 rather than host.docker.internal, it seems that nginx is NOT running in a container itself. You also talk about using Docker Swarm.

    This means you ARE, by definition, using the ingress overlay network, which load-balances to the service containers. Just use "127.0.0.1:5000" and let Docker deal with finding the correct container.
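
    With the ingress mesh, the upstream then collapses to a single entry; a sketch (Swarm spreads these requests across the replicas for you):

    upstream auth_server {
        server 127.0.0.1:5000;  # the routing mesh balances this across all replicas
        keepalive 200;
    }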

    Alternatively, if you are running on a multi-node swarm, then change the auth service:

    services:
      auth_service:
        ...
        ports:
          - target: 3000
            published: 5000
            mode: host
        deploy:
          mode: global
    

    And then in the nginx.conf

    upstream auth_server {
        server server1-ip:5000;
        server server2-ip:5000;
        keepalive 200;
    }
    

    Finally, if nginx is actually running as a container, then drop the publish directives entirely and define a network that connects nginx and the auth service. Use Docker's service template syntax to give each instance a unique hostname, which will be resolvable by containers attached to the same networks:

    networks:
      proxy:
        name: nginx
        attachable: true
    
    services:
      auth_service:
        hostname: auth-service-{{.Task.Slot}}
        networks:
        - proxy
    

    Attach nginx to the same network and use this config.

    upstream auth_server {
        server auth-service-1.nginx:3002;
        server auth-service-2.nginx:3002;
        keepalive 200;
    }
    

    This approach makes nginx sensitive to whether the service is up when nginx starts, because nginx resolves the upstream hostnames only once, at startup. To solve this you need to ensure nginx can resolve hosts at runtime:

    You need to include a resolver: 127.0.0.11 directive and use set $backend1 "http://auth-server-1:3002" to make the hostnames variables which forces nginx to resolve them at runtime. I don't know how to combine that with upstream auth_server however so ymmv.