serverless-framework aws-serverless serverless-offline

How to run multiple microservices configured on the same custom domain, locally?


I have configured my backend following this article: How to deploy multiple micro-services under one API domain with Serverless

I understand that deploying the services one by one is possible using the serverless offline plugin. But the plugin creates multiple mini-APIGateways on different ports.

My front end doesn't know that. It assumes that everything is deployed and ready to be used on one port.

If I want to, let's say, test out a feature that requires a valid session, I can't do that locally, as the session and the feature are managed by two different services.

Any manual testing in this kind of situation is only possible once I've deployed all the changes, which takes a lot of time.

Is there a way to deploy all the services on the same port, behind a single API gateway?


Solution

  • The sls-multi-gateways package runs multiple API gateways. If you have multiple services and want to run them locally at the same time, you can add the package. But that's not a complete solution, as ultimately you probably want the backend to be accessible on a single host.

    That means you are adding a dependency, and it only gets you halfway.

    When you try to run multiple gateways locally without this package, you get an error stating that port 3002 is already in use. That is because the serverless-offline plugin assigns 3002 as the default port for lambda functions.

    Since we are trying to run multiple services, the first service will hog 3002 and the rest will fail to start. To fix this, you have to tell serverless-offline which ports to use for the lambda functions of each service by specifying lambdaPort in each service's serverless.yml, like so:

    custom:
      serverless-offline:
        httpPort: 4001   # the port on which the service will run
        lambdaPort: 4100 # the port assigned to the first lambda function in the service
    

    So for the nth service, the HTTP port will be 400n and the lambda port will be 4n00. This pattern is safe as long as each service has fewer than 100 lambda functions. It seems the lambda port only needs to be assigned to support manual lambda invocations.
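    Following that pattern, a hypothetical second service's serverless.yml would claim the next slots (these exact port values are an assumption for illustration, not from the original setup):

```yaml
# service2/serverless.yml -- hypothetical ports following the 400n / 4n00 pattern
custom:
  serverless-offline:
    httpPort: 4002   # the second service is served on 4002
    lambdaPort: 4200 # its first lambda function gets 4200
```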

    Now you can run all the services in parallel using concurrently. That puts us where sls-multi-gateways would have gotten us.
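    For instance, with concurrently installed at the repo root, a single npm script can start both services side by side (the service directory names here are assumptions for illustration):

```json
{
  "scripts": {
    "dev": "concurrently \"cd service1 && sls offline\" \"cd service2 && sls offline\""
  }
}
```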

    Next, we need a proxy. I used createProxyMiddleware from the http-proxy-middleware package with Express, but you can set up any proxy that fits your project.

    Here is what the code for the proxy service looks like:

    const express = require("express");
    const {
      createProxyMiddleware,
    } = require("http-proxy-middleware");

    // Local addresses of the services started by serverless-offline.
    // The ports follow the 400n scheme described above.
    const urlOfService1 = "http://localhost:4001";
    const urlOfService2 = "http://localhost:4002";

    const port = process.env.PORT || 5000; // the port on which you want to access the backend
    const app = express();

    app.use(
      "/service2",
      createProxyMiddleware({
        target: urlOfService2,
      })
    );
    app.use(
      "/service1",
      createProxyMiddleware({
        target: urlOfService1,
      })
    );

    app.listen(port, () => {
      console.log(`Proxy service up on ${port}.`);
    });
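    One caveat worth noting: http-proxy-middleware forwards the full original path by default, so a request to /service1/users reaches the target as /service1/users. If a service does not expect its prefix, the pathRewrite option can strip it. A configuration sketch, assuming the localhost target from the port scheme above:

```javascript
app.use(
  "/service1",
  createProxyMiddleware({
    target: "http://localhost:4001",   // assumed httpPort for service1
    pathRewrite: { "^/service1": "" }, // drop the prefix before forwarding
  })
);
```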