Tags: amazon-s3, storage, ceph

Ceph S3 bucket space not freeing up


I have been testing Ceph with S3.

My test environment is a 3-node cluster with a data disk of 10 GB on each node (30 GB in total), set to replicate 3 times, so I have "15290 MB" of space available.

I got the S3 bucket working and have been uploading files until the storage filled up. I then tried to remove those files, but the disks still show as full:

    cluster 4ab8d087-1802-4c10-8c8c-23339cbeded8
     health HEALTH_ERR
            3 full osd(s)
            full flag(s) set
     monmap e1: 3 mons at {ceph-1=xxx.xxx.xxx.3:6789/0,ceph-2=xxx.xxx.xxx.4:6789/0,ceph-3=xxx.xxx.xxx.5:6789/0}
            election epoch 30, quorum 0,1,2 ceph-1,ceph-2,ceph-3
     osdmap e119: 3 osds: 3 up, 3 in
            flags full,sortbitwise,require_jewel_osds
      pgmap v2224: 164 pgs, 13 pools, 4860 MB data, 1483 objects
            14715 MB used, 575 MB / 15290 MB avail
                 164 active+clean
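
Note: while the full flag is set, the cluster blocks most writes, and deleting S3 objects through RGW itself writes to the bucket index and garbage-collection log, so deletions may not go through at all. A common workaround, sketched here for a Jewel-era cluster (0.98 is just an illustrative value), is to raise the full threshold temporarily:

    ceph pg set_full_ratio 0.98

On Luminous and later releases the equivalent command is ceph osd set-full-ratio.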

How do I get the disk space back?

Can anyone advise on what I have done wrong or missed?


Solution

  • I'm a beginner with Ceph and I had the same problem.

    1. Try running the garbage collector.

    List what will be deleted; --include-all also shows entries whose grace period has not yet expired:

    radosgw-admin gc list --include-all
    

    Then run it:

    radosgw-admin gc process
    
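    Note that RGW reclaims space for deleted S3 objects only after a garbage-collection grace period, which defaults to two hours (rgw_gc_obj_min_wait = 7200 seconds), so a gc run right after deleting may find nothing to do. On a test cluster the timers can be shortened in ceph.conf; this is only a sketch, and the section name depends on how your gateway instance is named:

    [client.rgw.ceph-1]
    rgw_gc_obj_min_wait = 300
    rgw_gc_processor_period = 300

    Restart the radosgw daemon after changing these.
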
    2. If that didn't work (as it didn't for most of my data):

    Find the pool that holds your data:

    ceph df
    

    Your S3 data usually goes into the default pool default.rgw.buckets.data.
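
    To confirm which buckets actually hold the data, per-bucket usage can be checked through radosgw-admin; "mybucket" below is a placeholder for whatever bucket list returns:

    radosgw-admin bucket list
    radosgw-admin bucket stats --bucket=mybucket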

    Purge every object from it. /!\ You will lose all your data! /!\

    rados purge default.rgw.buckets.data --yes-i-really-really-mean-it   
    
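
    If the goal is only to empty a single bucket, a less destructive alternative is to delete it through RGW, which keeps the bucket index and metadata consistent (again, substitute your own bucket name):

    radosgw-admin bucket rm --bucket=mybucket --purge-objects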

    I don't know why Ceph is not purging this data itself (I'm still learning...).