sequelize.js, scalability, amazon-rds, amazon-aurora

How to handle a new replica in an RDS cluster with Sequelize?


We are prepping for a very large surge of traffic, but the question is also meant as a generic one:

Knowing that you can set up Sequelize to use a cluster of RDS Databases (in our case: Aurora) like so:

const Sequelize = require('sequelize')

// Cluster endpoints: the writer endpoint for writes, the reader endpoint for reads
const master = { host: rdsClusterWriterEndpoint, username, password, port, database }
const replica = { host: rdsClusterReaderEndpoint, username, password, port, database }

const sequelize = new Sequelize(null, null, null, {
  dialect: 'mysql',
  pool: {
    handleDisconnects: true,
    min: 0,
    max: 10,
    idle: 10000,
  },
  replication: {
    write: master,
    read: [replica],
  },
})

How could I handle adding a new RDS instance to the cluster, to load-balance reads even further, without reloading the app?

I've poked around but couldn't find a good way to do it. The DNS resolution seems to happen once at startup, and I haven't found a way to refresh it periodically.
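
To illustrate, you can watch what the reader endpoint resolves to with something like this (a rough sketch; the endpoint name below is a placeholder):

const { promises: dns } = require('dns')

// Resolve the Aurora reader endpoint and print the replica IPs it currently points to.
// Aurora rotates the DNS answer across the available reader instances.
const readerEndpoint = 'mycluster.cluster-ro-xxxxxxxx.us-east-1.rds.amazonaws.com' // placeholder
dns.resolve4(readerEndpoint)
  .then((addresses) => console.log('Reader endpoint currently resolves to:', addresses))
  .catch(console.error)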

Has someone found a safe way of doing this?

Thanks


Solution

  • I ended up with a configuration like this:

    const moment = require('moment')
    const Sequelize = require('sequelize')

    const getRandomWithinRange = (min, max) => {
      min = Math.ceil(min)
      max = Math.floor(max)
      return Math.floor(Math.random() * (max - min + 1)) + min // The maximum is inclusive and the minimum is inclusive
    }
    const maxConnectionAge = moment.duration(10, 'minutes').asSeconds()
    const pool = {
      handleDisconnects: true,
      min: 1, // Keep one connection open
      max: 10, // Max 10 connections
      idle: 9000, // 9 seconds idle timeout
      validate: (obj) => {
        // Recycle connections periodically
        if (!obj.recycleWhen) {
          // Setup expiry on new connections and return the connection as valid
          obj.recycleWhen = moment().add(getRandomWithinRange(maxConnectionAge, maxConnectionAge * 2), 'seconds')
          return true
        }
        // The connection stays valid until its expiry time has passed; after that it is
        // evicted from the pool and replaced by a fresh connection (and a fresh DNS lookup)
        return moment().diff(obj.recycleWhen, 'seconds') < 0
      }
    }
    const master = { host: rdsClusterWriterEndpoint, username, password, port, database, pool }
    const replica = { host: rdsClusterReaderEndpoint, username, password, port, database, pool }
    const sequelize = new Sequelize(null, null, null, {
      dialect: 'mysql',
      replication: {
        write: master,
        read: [replica]
      }
    })
    

    The connections in the pool are recycled at regular intervals, which spreads new connections across any replicas that have since been introduced in the cluster.

    It's not ideal: most of the time connections are recycled for no reason, and when you add replicas to relieve growing pressure on the DB you probably want that to take effect sooner rather than later. But it's my poor man's solution for the moment, and it helped us get through a fairly large surge in traffic recently.
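
    One small tweak in that direction (just a sketch; the environment variable name is made up) is to make the recycle age configurable, so it can be lowered around a planned scale-out and raised back afterwards:

    // Hypothetical env var so the recycle window can be shortened (e.g. to 1 minute)
    // around a scale-out event, making new replicas pick up traffic sooner
    const recycleMinutes = parseInt(process.env.DB_CONNECTION_MAX_AGE_MINUTES || '10', 10)
    const maxConnectionAge = moment.duration(recycleMinutes, 'minutes').asSeconds()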

    For simplicity I'm using the same pool configuration for the writer and the readers here, but in practice they should probably be tuned separately.
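
    As a sketch of what that could look like (same structure as above, illustrative numbers only), the writer and the readers can simply get their own pool objects:

    // Smaller pool for the writer, larger pool for the readers (made-up values)
    const writerPool = { ...pool, min: 1, max: 5 }
    const readerPool = { ...pool, min: 2, max: 20 }

    const master = { host: rdsClusterWriterEndpoint, username, password, port, database, pool: writerPool }
    const replica = { host: rdsClusterReaderEndpoint, username, password, port, database, pool: readerPool }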

    If someone has a better idea, I'm all ears ;)