We have an HDP cluster with 7 DataNode machines.
Under /hadoop/hdfs/namenode/current/ we can see more than 1,500 edit files,
each around 7 MB to 20 MB, as in the following listing:
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331008695-0000000002331071883
7.0M /hadoop/hdfs/namenode/current/edits_0000000002331071884-0000000002331128452
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331128453-0000000002331189702
7.1M /hadoop/hdfs/namenode/current/edits_0000000002331189703-0000000002331246584
11M /hadoop/hdfs/namenode/current/edits_0000000002331246585-0000000002331323246
8.0M /hadoop/hdfs/namenode/current/edits_0000000002331323247-0000000002331385595
7.7M /hadoop/hdfs/namenode/current/edits_0000000002331385596-0000000002331445237
7.9M /hadoop/hdfs/namenode/current/edits_0000000002331445238-0000000002331506718
9.1M /hadoop/hdfs/namenode/current/edits_0000000002331506719-0000000002331573154
9.0M /hadoop/hdfs/namenode/current/edits_0000000002331573155-0000000002331638086
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331638087-0000000002331697435
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331697436-0000000002331755881
8.0M /hadoop/hdfs/namenode/current/edits_0000000002331755882-0000000002331814933
9.8M /hadoop/hdfs/namenode/current/edits_0000000002331814934-0000000002331884369
11M /hadoop/hdfs/namenode/current/edits_0000000002331884370-0000000002331955341
8.7M /hadoop/hdfs/namenode/current/edits_0000000002331955342-0000000002332019335
7.8M /hadoop/hdfs/namenode/current/edits_0000000002332019336-0000000002332074498
Is it possible to minimize the size of the edit files, or the number of edit files, through some HDFS configuration?
We have small disks, and the disk is now at 100% usage:
/dev/sdb 100G 100G 0 100% /hadoop/hdfs
You can configure the dfs.namenode.num.checkpoints.retained and dfs.namenode.num.extra.edits.retained properties to control how much space the NameNode metadata directory (which holds the edit files) consumes.
dfs.namenode.num.checkpoints.retained: The number of image checkpoint files that are retained in storage directories. All edit logs necessary to recover an up-to-date namespace from the oldest retained checkpoint are also retained.

dfs.namenode.num.extra.edits.retained: The number of extra transactions that should be retained beyond what is minimally necessary for a NameNode restart. This can be useful for audit purposes, or for an HA setup where a remote Standby NameNode may have been offline for some time and require a longer backlog of retained edits in order to start again.
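As a minimal sketch, assuming you set these in hdfs-site.xml (on HDP the same properties can be changed through Ambari), the retention settings could look something like this; the values are illustrative only and should be tuned for your cluster:

<!-- hdfs-site.xml: illustrative values, not tuned recommendations -->
<property>
  <!-- keep only the two most recent fsimage checkpoints (2 is the stock default) -->
  <name>dfs.namenode.num.checkpoints.retained</name>
  <value>2</value>
</property>
<property>
  <!-- keep fewer extra transactions beyond what a NameNode restart needs
       (the stock default is 1000000) -->
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>100000</value>
</property>

Note that the NameNode typically needs a restart to pick up changed values, and surplus edit segments are only purged after the next successful checkpoint.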