Tags: ceph, object-storage, data-scrubbing

ceph pg repair doesn't start right away


Every now and then I get a single pg inconsistency error on my cluster. As suggested by the docs, I run ceph pg repair pg.id and the command returns "instructing pg x on osd y to repair", so it seems to be working as intended. However, the repair doesn't start right away. What might be the cause of this? I'm running 24-hour scrubs, so at any given time I have at least 8-10 pgs being scrubbed or deep scrubbed. Do pg operations such as scrubbing and repairing form a queue, and is my repair command just waiting for its turn? Or is there another issue behind this?
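
For context, this is roughly how the current scrub activity and the per-OSD scrub limit can be checked (exact command form depends on the Ceph release; the admin-socket variant has to be run on the OSD's own host):

# count pgs that are currently scrubbing or deep scrubbing
ceph pg dump pgs_brief | grep -c scrub

# per-OSD ceiling on concurrent scrubs (default is 1)
ceph config get osd osd_max_scrubs

# older releases: query the OSD directly via its admin socket
ceph daemon osd.16 config get osd_max_scrubs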

Edit:

Output of ceph health detail:

pg 57.ee is active+clean+inconsistent, acting [16,46,74,59,5]

Output of

rados list-inconsistent-obj 57.ee --format=json-pretty


{
    "epoch": 55281,
    "inconsistents": [
        {
            "object": {
                "name": "10001a447c7.00005b03",
                "nspace": "",
                "locator": "",
                "snap": "head",
                "version": 150876
            },
            "errors": [],
            "union_shard_errors": [
                "read_error"
            ],
            "selected_object_info": {
                "oid": {
                    "oid": "10001a447c7.00005b03",
                    "key": "",
                    "snapid": -2,
                    "hash": 3954101486,
                    "max": 0,
                    "pool": 57,
                    "namespace": ""
                },
                "version": "55268'150876",
                "prior_version": "0'0",
                "last_reqid": "client.42086585.0:355736",
                "user_version": 150876,
                "size": 4194304,
                "mtime": "2021-03-15 21:52:43.651368",
                "local_mtime": "2021-03-15 21:52:45.399035",
                "lost": 0,
                "flags": [
                    "dirty",
                    "data_digest"
                ],
                "truncate_seq": 0,
                "truncate_size": 0,
                "data_digest": "0xf88f1537",
                "omap_digest": "0xffffffff",
                "expected_object_size": 0,
                "expected_write_size": 0,
                "alloc_hint_flags": 0,
                "manifest": {
                    "type": 0
                },
                "watchers": {}
            },
            "shards": [
                {
                    "osd": 5,
                    "primary": false,
                    "shard": 4,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 16,
                    "primary": true,
                    "shard": 0,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 46,
                    "primary": false,
                    "shard": 1,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                },
                {
                    "osd": 59,
                    "primary": false,
                    "shard": 3,
                    "errors": [
                        "read_error"
                    ],
                    "size": 1400832
                },
                {
                    "osd": 74,
                    "primary": false,
                    "shard": 2,
                    "errors": [],
                    "size": 1400832,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x00000000"
                }
            ]
        }
    ]
}

This pg is inside an EC pool. When I run ceph pg repair 57.ee I get the output:

instructing pg 57.ees0 on osd.16 to repair

However, as you can see from the pg report, the inconsistent shard is on osd 59. I thought the "s0" at the end of the output referred to the first shard, so I tried the repair command like this as well:

ceph pg repair 57.ees3, but I got an error telling me this is an invalid command.
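
For reference, the scrub/repair history and scrubber state of the pg can be watched with something like the following (field names vary a bit between releases):

# scrub / repair timestamps and scrubber state for the pg
ceph pg 57.ee query | grep -i scrub

# follow the cluster log to see when the repair actually starts
ceph -w | grep 57.ee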


Solution

  • You have I/O errors, which often happen due to faulty disks, as you can see from the shard error:

    errors": [],
                "union_shard_errors": [
                    "read_error"
    

    The problematic shard is on "osd": 59.

    Try to force a re-read of the problematic object (rados get needs a local output file to write to):

    # rados -p EC_pool get 10001a447c7.00005b03 10001a447c7.00005b03.out
    

    The scrub caused a read of the object and returned a read error, which means that copy of the object is marked as gone; when that happens, Ceph will try to recover the object from elsewhere (peering, recovery, backfill). The checks below sketch how to confirm the disk behind osd.59 and re-verify the pg afterwards.
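
    Rough sketch (the /dev/sdX device name is a placeholder; substitute the device reported by the metadata command, and run the dmesg/smartctl commands on the OSD's host):

    # find the host and backing device for osd.59
    ceph osd metadata 59 | grep -E 'hostname|devices'

    # on that host: kernel log and SMART health for the device
    dmesg | grep -i error
    smartctl -a /dev/sdX

    # after the repair has run, deep-scrub the pg again and re-check for inconsistencies
    ceph pg deep-scrub 57.ee
    rados list-inconsistent-obj 57.ee --format=json-pretty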