Tags: node.js, sockets, amazon-ec2, clickhouse

Node.js socket hang up on ClickHouse DB connection pooling


I am connecting my Node.js app to a self-hosted ClickHouse DB on EC2. The first implementation worked fine: a new connection was made for every request and closed at the end of it. But after I switched to connection pooling to reduce the overhead of establishing a new connection for every request, some requests fail with a "socket hang up" error. This only happens when multiple requests are made simultaneously during load testing.

The exact error message is:

Error: socket hang up
    at connResetException (node:internal/errors:705:14)
    at Socket.socketCloseListener (node:_http_client:467:25)
    at Socket.emit (node:events:525:35)
    at TCP.<anonymous> (node:net:301:12) {
  code: 'ECONNRESET'
}

I tried increasing the request timeout as well as connection timeout. Here's the config for the connection:

const { createClient } = require("@clickhouse/client");

const clickhouseClients = {};

const clickhouseClient = (dbName) => {
  if (!clickhouseClients[dbName]) {
    clickhouseClients[dbName] = createClient({
      host: process.env.CLICKHOUSE_HOST,
      username: process.env.CLICKHOUSE_USERNAME,
      password: process.env.CLICKHOUSE_PASSWORD,
      database: dbName,
      max_open_connections: 10,
      min_open_connections: 2,
      connection_timeout: 120000,
      request_timeout: 400000,
    });

    clickhouseClients[dbName]
      .ping()
      .then(() =>
        console.log(`Successfully connected to ClickHouse database: ${dbName}`)
      )
      .catch((error) => {
        console.error(`Failed to ping ClickHouse database ${dbName}:`, error);
        throw new Error(
          `Failed to initialize ClickHouse client for database ${dbName}`
        );
      });
  }

  return clickhouseClients[dbName];
};

module.exports = clickhouseClient;

I have also tried increasing max open connections, but that didn't help either. I suspect this has something to do with the 'keep_alive.idle_socket_ttl' in the ClickHouse client config.

Node version: 20.10.0
ClickHouse client version: 0.2.10


Solution

  • I don't think you need a separate client per database in your use case; it can actually introduce more overhead, since you are connecting to the same host.

    First of all, the client already uses an HTTP agent internally, which is supposed to manage the connections.

    If you have a lot of databases and queries, you might eventually open more sockets than the LB/CH allows in its config (since each client will have up to 10 per your configuration). I am not sure whether this is the root cause of the issue, but it could be.

    You can explicitly provide the dbName in the query or insert method, e.g.

    const rows = await client.query({
      query: `SELECT * FROM ${dbName}.${tableName} LIMIT 10`,
      format: format,
    })
    

    Similarly, you can do the same in the insert method:

    await client.insert({
      table: `${dbName}.${tableName}`,
      format: 'JSONEachRow',
      values: [{ 
        // ...
      }]
    })
    

    The main issue with this additional pooling is that each client keeps its own socket pool. Instead of all queries reusing one internal socket pool (and keeping its sockets alive, since they would rarely idle for long), each client tries to maintain a separate pool of sockets per database.

    If some database is queried infrequently, its idle sockets will eventually expire, and you will end up establishing more new connections than you would with a single client instance.
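    To illustrate, a single shared client for all databases could look like the sketch below. It reuses the question's config but drops the per-database map; `getClickhouseClient` is a hypothetical name, and the option values are taken from the question, not recommendations:

```javascript
const { createClient } = require("@clickhouse/client");

let client;

// One lazily created client, shared by all databases on the same host,
// so all queries go through a single internal socket pool.
const getClickhouseClient = () => {
  if (!client) {
    client = createClient({
      host: process.env.CLICKHOUSE_HOST,
      username: process.env.CLICKHOUSE_USERNAME,
      password: process.env.CLICKHOUSE_PASSWORD,
      // No `database` option: queries reference `${dbName}.${tableName}` instead.
      max_open_connections: 10,
      request_timeout: 400000,
    });
  }
  return client;
};

module.exports = getClickhouseClient;
```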

    Regarding the additional settings:

      clickhouse_settings: {
        send_progress_in_http_headers: 1,
        http_headers_progress_interval_ms: "110000", // UInt64, should be passed as a string
      },
    

    If you don't have (very) long-running queries (e.g. something like INSERT FROM SELECT), you might not need this; see https://clickhouse.com/docs/en/integrations/language-clients/javascript#keep-alive-troubleshooting
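    On the keep-alive setting the question mentions, a hedged sketch follows. The option names match recent client versions, but exact names and defaults vary by version, so verify against the docs for the version you run:

```javascript
const { createClient } = require("@clickhouse/client");

// Sketch: keep idle_socket_ttl below the server-side keep-alive timeout,
// so the client retires idle sockets before the server closes them
// (a server-side close on a reused socket surfaces as ECONNRESET /
// "socket hang up" in Node.js).
const client = createClient({
  host: process.env.CLICKHOUSE_HOST,
  keep_alive: {
    enabled: true,
    idle_socket_ttl: 2500, // ms; assumed server keep_alive_timeout is higher
  },
});
```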

    This part looks like a dangling promise:

    clickhouseClients[dbName]
      .ping()
      .then(() =>
        console.log(`Successfully connected to ClickHouse database: ${dbName}`)
      )
      .catch((error) => {
        console.error(`Failed to ping ClickHouse database ${dbName}:`, error);
        throw new Error(
          `Failed to initialize ClickHouse client for database ${dbName}`
        );
      });
    

    Additionally, ping does not throw; see the docs https://clickhouse.com/docs/en/integrations/language-clients/javascript#ping
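    Since ping() resolves with a result object rather than rejecting, the catch branch above never fires on a failed ping; the success flag has to be checked instead. A minimal sketch of the pattern, using a stub in place of a real client so it runs standalone (the real client's ping resolves with the same `{ success, error? }` shape):

```javascript
// Stub standing in for a @clickhouse/client instance: ping() resolves
// with { success, error? } and does not reject on failure.
const client = {
  ping: async () => ({
    success: false,
    error: new Error("connect ECONNREFUSED"),
  }),
};

async function checkConnection(dbName) {
  const result = await client.ping();
  if (!result.success) {
    // This branch is reachable; a .catch() handler would not be.
    console.error(`Failed to ping ClickHouse database ${dbName}:`, result.error);
    return false;
  }
  console.log(`Successfully connected to ClickHouse database: ${dbName}`);
  return true;
}
```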

    Considering your query generator and dbName/tableName, it will probably also work like this:

    const queryResponse = gcpQueryBuilder(
      dbClient,
      `${dbName}.${tableName}`,
      response.customer_id,
      selectedGroupBys,
      startDate,
      endDate,
      filters,
      true
    );
    

    So you probably won’t need a client per db, as the name will be fully qualified.

    Node version: 20.10.0 Clickhouse client version: 0.2.10

    Please upgrade to the latest stable client version. In versions after 0.2.10, the internal socket management was reworked specifically to avoid this socket hang-up issue.