Tags: cassandra, cassandra-3.0, nodetool

Repair command #6 failed with error Nothing to repair for (1838038121817533879,1854751957995458118] in xyz - aborting


The nodetool repair command is failing to repair some of the tables. The Cassandra version is 3.11.6. I have the following questions:

  1. Is this really a problem? What is the impact if we ignore this error?
  2. How can we get rid of this token range for a keyspace?
  3. What is the possible reason that it is complaining about this token range?

Here is the error trace:

[2020-11-12 16:33:46,506] Starting repair command #6 (d5fa7530-2504-11eb-ab07-59621b514775), repairing keyspace solutionkeyspace with repair options (parallelism: parallel, primary range: false, incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 256, pull repair: false, ignore unreplicated keyspaces: false)
[2020-11-12 16:33:46,507] Repair command #6 failed with error Nothing to repair for (1838038121817533879,1854751957995458118] in solutionkeyspace - aborting
[2020-11-12 16:33:46,507] Repair command #6 finished with error
error: Repair job has failed with the error message: [2020-11-12 16:33:46,507] Repair command #6 failed with error Nothing to repair for (1838038121817533879,1854751957995458118] in solutionkeyspace - aborting
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: [2020-11-12 16:33:46,507] Repair command #6 failed with error Nothing to repair for (1838038121817533879,1854751957995458118] in solutionkeyspace - aborting
    at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:116)
    at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
    at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
    at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
    at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
    at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
    

Schema definition

CREATE KEYSPACE solutionkeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '1', 'datacenter2': '1'}  AND durable_writes = true;

CREATE TABLE solutionkeyspace.schemas (
    namespace text PRIMARY KEY,
    avpcontainer map<text, text>,
    schemacreationcql text,
    status text,
    version text
) WITH bloom_filter_fp_chance = 0.001
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 10800
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.05
    AND speculative_retry = '99PERCENTILE';

nodetool status output

bash-5.0# nodetool -p 7199 status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  172.16.0.68  5.71 MiB   256          ?       6a4d7b51-b57b-4918-be2f-3d62653b9509  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

Solution

  • It looks like you are running Cassandra version 3.11.9+, where this new behaviour was introduced:

    https://issues.apache.org/jira/browse/CASSANDRA-15160

    You will not see this error if you add the following nodetool repair command-line option:

    -iuk

    or

    --ignore-unreplicated-keyspaces
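
    For example, a repair invocation based on the details in the question (the keyspace name and JMX port are taken from the output above, so adjust them to your environment):

    nodetool -p 7199 repair --ignore-unreplicated-keyspaces solutionkeyspace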

    This change would have broken people's repair scripts when they upgraded from an earlier version of Cassandra to 3.11.9+. We certainly noticed it.
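
  • Regarding your question about getting rid of the unreplicated token ranges: if datacenter2 no longer exists in the cluster (the nodetool status output above shows only datacenter1), one option is to remove it from the keyspace's replication settings. The statement below is a sketch in CQL, assuming datacenter2 really is gone from your topology; run a full repair after changing replication:

    ALTER KEYSPACE solutionkeyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '1'};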