hadoop, hive, hdp, hive-configuration

Is it right to limit /tmp cleanup to one day in a Hadoop cluster?


We have an HDP cluster, version 2.6.4.

The cluster is installed on Red Hat machines, version 7.2.

We noticed the following issue on the JournalNode machines (the master machines).

We have 3 JournalNode machines, and under the /tmp folder we have thousands of empty folders such as:

drwx------.  2 hive      hadoop     6 Dec 20 09:00 a962c02e-4ed8-48a0-b4bb-79c76133c3ca_resources

and also many folders such as:

drwxr-xr-x.  4 hive      hadoop  4096 Dec 12 09:02 hadoop-unjar6426565859280369566

with content such as:

beeline-log4j.properties  BeeLine.properties  META-INF  org  sql-keywords.properties
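
To see how many of these directories pile up, a quick check such as the sketch below can be run on a JournalNode (the patterns simply match the directory names shown above):

find /tmp -maxdepth 1 -type d -name '*_resources' -empty | wc -l
find /tmp -maxdepth 1 -type d -name 'hadoop-unjar*' | wc -l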

/tmp should be purged every 10 days according to the configuration file:

more  /usr/lib/tmpfiles.d/tmp.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

# See tmpfiles.d(5) for details

# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d

# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp

So we decreased the retention from 10d to 1d in order to avoid this issue.

After that, /tmp indeed contained only the last day's folders.
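
As a sketch of the same change: instead of editing /usr/lib/tmpfiles.d/tmp.conf directly (a package update may overwrite it), the override can live in /etc/tmpfiles.d/tmp.conf, which takes precedence over the file of the same name under /usr/lib. The 1d value here is our chosen retention, not a general recommendation:

# /etc/tmpfiles.d/tmp.conf (overrides /usr/lib/tmpfiles.d/tmp.conf)
v /tmp 1777 root root 1d
v /var/tmp 1777 root root 30d

x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp

Running systemd-tmpfiles --clean applies the new ageing rules immediately; otherwise the daily systemd-tmpfiles-clean.timer picks them up.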

But I want to ask the following questions.

Is it OK to configure a 1-day retention for /tmp in a Hadoop cluster?

(I am almost sure it is OK, but I want to hear more opinions.)

Second, why does Hive generate thousands of empty folders such as XXXX_resources,

and is it possible to solve this from the Hive service instead of limiting the retention on /tmp?


Solution

  • It is quite normal to have thousands of folders in /tmp, as long as there is still enough free space for normal operation. Many processes use /tmp, including Hive, Pig, and others. A one-day retention period for /tmp may be too short, because Hive or other MapReduce jobs can run for more than one day, depending on your workload, and a running job could lose its temporary files. HiveServer2 should remove its temporary files, but when tasks fail or are aborted the files may remain, and the behaviour also depends on the Hive version (a possible Hive-side mitigation is sketched below). It is better to keep some retention configured, because when there is no space left in /tmp, everything stops working.

    See also this Jira about HDFS scratch dir retention.
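
A minimal sketch of the Hive-side angle, assuming HiveServer2 on HDP 2.6 (verify the property names against your Hive version): the XXXX_resources folders are per-session resource download directories controlled by hive.downloaded.resources.dir, whose default is ${system:java.io.tmpdir}/${hive.session.id}_resources, i.e. /tmp on the HiveServer2 host. Pointing it at a dedicated local directory keeps /tmp clean without touching the systemd retention. The dangling-scratch-dir cleanup shown below exists in newer Hive 2.x releases and applies to the HDFS scratch directories rather than the local _resources folders; treat both values as examples:

<!-- hive-site.xml (sketch; paths and values are examples, not recommendations) -->
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/var/hive/resources/${hive.session.id}_resources</value>
</property>
<property>
  <name>hive.server2.clear.dangling.scratchdir</name>
  <value>true</value>
</property>

Newer Hive releases also ship a one-off cleanup tool for the scratch directories (hive --service cleardanglingscratchdir), which may help if dangling scratch dirs accumulate on HDFS as well.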