cassandra, cassandra-3.0, elassandra

Elassandra replication information and rack configuration


I recently started working with an Elassandra cluster that has two data centers configured with NetworkTopologyStrategy.

Cluster details: Elassandra 6.2.3.15 = Elasticsearch 6.2.3 + Cassandra 3.11.4

Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load    Tokens  Owns  Host ID                               Rack
UN  <ip1>    50 GiB  256     ?     6cab1f4c-8937-437d-b010-0a5677443dc3  rack1
UN  <ip2>    48 GiB  256     ?     6c9e7ad5-a642-4c0d-8b77-e78d821d904b  rack1
UN  <ip3>    50 GiB  256     ?     7e493bc6-c8a5-471e-8eee-3f3fe985b90a  rack1

Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load    Tokens  Owns  Host ID                               Rack
UN  <ip4>    47 GiB  256     ?     c49c1203-cc38-41a2-b9c8-2b42bc907c17  rack1
UN  <ip5>    67 GiB  256     ?     0d9f31bc-9690-49b6-9d88-4fb30c1b6c0d  rack1
UN  <ip6>    88 GiB  256     ?     80c4d60d-185f-457a-ae9b-2eb611735f07  rack1

Schema info:
CREATE KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'} AND durable_writes = true;

DC2 is essentially a Disaster Recovery site, and in an ideal world we should be able to use only that DC in case of a disaster.

  1. With the very limited knowledge I have, I strongly suspect that we need to modify the rack configuration to have a 'proper' DR cluster (so that data in DC1 gets replicated to DC2), or am I getting this wrong? If so, is there a standard guideline to follow?
  2. When there are multiple DCs, does Cassandra replicate data between them automatically, regardless of rack configuration? (Are racks an additional layer of failure protection?)
  3. DC2 holds more data than DC1. Is this purely down to the hash function?
  4. Is there anything else that should be rectified in this cluster?

Many thanks!


Solution

  • These replication settings mean that the data for your keyspace is replicated in real time between the 2 DCs, with each DC holding 3 replicas (copies):

    CREATE KEYSPACE my_keyspace WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'DC1': '3',
      'DC2': '3'
    };
    

    Replication in Cassandra happens in real time -- any write sent to one DC is sent to all other DCs at the same time. Unlike traditional RDBMS or primary/secondary and active/DR configurations, Cassandra replication is immediate.
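
    For example, here's a minimal cqlsh sketch of that behaviour (the table and its columns are hypothetical). With RF = 3 in each DC, LOCAL_QUORUM acknowledges a request once 2 of the 3 replicas in the coordinator's DC respond, while the write is still shipped to the DC2 replicas in the background:

    -- 'my_table' is a hypothetical example table
    CREATE TABLE IF NOT EXISTS my_keyspace.my_table (
        id    int PRIMARY KEY,
        value text
    );

    -- LOCAL_QUORUM = floor(3/2) + 1 = 2 of the 3 DC-local replicas
    CONSISTENCY LOCAL_QUORUM;

    -- acknowledged by 2 local replicas; also replicated to DC2
    INSERT INTO my_keyspace.my_table (id, value) VALUES (1, 'example');

    -- read served entirely from the local DC
    SELECT value FROM my_keyspace.my_table WHERE id = 1;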

    Logical Cassandra racks are an additional redundancy mechanism. If you have C* nodes deployed in different (a) physical racks or (b) public cloud availability zones, Cassandra will distribute the replicas across the racks so that each rack holds a full copy of the data. With a replication factor of 3 in the DC, if a rack goes down for whatever reason, there are still full copies of the data in the remaining 2 racks, and read/write requests with a consistency level of LOCAL_QUORUM (or lower) will not be affected.
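
    For reference, a node's DC and rack are declared per node. With GossipingPropertyFileSnitch, this is a minimal sketch of cassandra-rackdc.properties, assuming you were to spread DC1's 3 nodes across 3 racks:

    # cassandra-rackdc.properties on one DC1 node
    # (assumes endpoint_snitch: GossipingPropertyFileSnitch in cassandra.yaml)
    dc=DC1
    rack=rack2

    Bear in mind that moving a node that already holds data to a different rack is not just a config change -- the node generally has to be decommissioned and re-bootstrapped so its replicas land in the right places, so rack layout is best planned up front.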

    I've explained this in a bit more detail in this post -- https://community.datastax.com/questions/1128/.
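
    As a side note, the ? in the Owns column of your nodetool status output simply means no keyspace was specified, so effective ownership couldn't be computed. Passing the keyspace shows per-node ownership under its replication settings:

    nodetool status my_keyspace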

    If you're new to Cassandra, we recommend https://www.datastax.com/dev, which has links to short hands-on tutorials where you can quickly learn the basics of Cassandra -- all free. This tutorial is a good place to start -- https://www.datastax.com/try-it-out. Cheers!