I have set bluestore_min_alloc_size to 4096, but no matter how I apply the setting, it is not getting picked up by the daemons. I have also tried restarting all the daemon pods after applying the setting, with no effect.
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd bluestore_min_alloc_size
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size_hdd
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config get osd.0 bluestore_min_alloc_size_ssd
4096
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file false
global advanced mon_allow_pool_delete true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global advanced osd_pool_default_pg_autoscale_mode on
global advanced osd_scrub_auto_repair true
global advanced rbd_default_features 3
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/active true
mgr advanced mgr/balancer/mode upmap
mgr.a advanced mgr/dashboard/server_port 8443 *
mgr.a advanced mgr/dashboard/ssl true *
mgr.a advanced mgr/dashboard/ssl_server_port 8443 *
osd advanced bluestore_min_alloc_size 4096 *
osd.0 advanced bluestore_min_alloc_size_hdd 4096 *
osd.0 advanced bluestore_min_alloc_size_ssd 4096 *
mds.iondfs-a basic mds_join_fs iondfs
mds.iondfs-b basic mds_join_fs iondfs
[root@rook-ceph-tools-55c94c6786-x88d2 /]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 43 GiB 39 GiB 4.0 GiB 5.0 GiB 11.45
TOTAL 43 GiB 39 GiB 4.0 GiB 5.0 GiB 11.45
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 12 GiB
iondfs-metadata 2 32 240 MiB 128 241 MiB 0.64 36 GiB
iondfs-data0 3 32 209 MiB 60.80k 3.8 GiB 9.41 36 GiB
You can see the STORED size of the 60.80k objects is 209 MiB, but USED is 3.8 GiB. That is 60,800 objects x 64 KiB ≈ 3.89 GB, which shows that the 64 KiB block size is still being used instead of 4 KiB.
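The gap can be reproduced with quick shell arithmetic (object count taken from the `ceph df` output above; every object here is far smaller than one allocation unit, so each object consumes a full min_alloc_size block):

```shell
objects=60800                # OBJECTS column for iondfs-data0 in `ceph df`
alloc=$((64 * 1024))         # default bluestore_min_alloc_size_hdd: 64 KiB
# Each tiny object is rounded up to one full allocation unit on disk:
echo "$((objects * alloc / 1024 / 1024)) MiB"   # 3800 MiB, i.e. the ~3.8 GiB USED
```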
The catch is that bluestore_min_alloc_size cannot be changed after an OSD is created — it is persisted in the OSD at mkfs time — so you need to put the config in place before creating the cluster.
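If you want to confirm what an existing OSD was actually formatted with, the value BlueStore persisted at mkfs time shows up as a hex min_alloc_size in the OSD's startup log (the pod name and exact log line below are illustrative assumptions, and the line may only appear at a raised debug_bluestore level — it is not a guaranteed output):

```shell
# Illustrative: inspect an OSD pod's startup log for the on-disk value, e.g.
#   kubectl -n rook-ceph logs <rook-ceph-osd-0-pod> | grep min_alloc_size
# A 64 KiB OSD would show 0x10000, a 4 KiB OSD 0x1000. Converting the hex
# value from such a line back to bytes (sample line, not real cluster output):
line="bluestore _open_super_meta min_alloc_size 0x10000"
printf '%d\n' "$(echo "$line" | grep -o '0x[0-9a-f]*')"   # 65536
```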
kubectl create namespace rook-ceph
Save the following as ceph-conf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [osd]
    bluestore_min_alloc_size = 4096
    bluestore_min_alloc_size_hdd = 4096
    bluestore_min_alloc_size_ssd = 4096
kubectl apply -f ceph-conf.yaml
Now create the Ceph cluster. OSDs provisioned after this point will pick up the 4 KiB allocation size.
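As a sanity check once the OSDs come up (assuming the same 60.80k small objects as in the `ceph df` output above): with a 4 KiB allocation unit the same workload should allocate roughly objects x 4 KiB instead of objects x 64 KiB:

```shell
objects=60800                # same object count as before
alloc=4096                   # new bluestore_min_alloc_size: 4 KiB
echo "$((objects * alloc / 1024 / 1024)) MiB"   # 237 MiB instead of ~3800 MiB
```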