hadoop ubuntu-11.04

Why do we need to format HDFS every time we restart the machine?


I have installed Hadoop in pseudo-distributed mode on my laptop; the OS is Ubuntu.

I have changed the paths where Hadoop stores its data (by default, Hadoop stores data in the /tmp folder).

My hdfs-site.xml file looks like this:

<property>
    <name>dfs.data.dir</name>
    <value>/HADOOP_CLUSTER_DATA/data</value>
</property>

Now whenever I restart the machine and try to start the Hadoop cluster using the start-all.sh script, the data node never starts. I confirmed that the data node is not starting by checking the logs and by using the jps command.

Then I:

  1. Stopped the cluster using the stop-all.sh script.
  2. Formatted HDFS using the hadoop namenode -format command.
  3. Started the cluster using the start-all.sh script.

Now everything works fine, even if I stop and start the cluster again. The problem occurs only when I restart the machine and then try to start the cluster.


Solution

  • By changing dfs.datanode.data.dir away from /tmp you indeed made the data (the blocks) survive across a reboot. However, there is more to HDFS than just blocks. On Ubuntu, /tmp is cleared on reboot, so any HDFS directory still under /tmp is wiped, which is why the cluster only comes back after a format. You need to make sure all the relevant directories point away from /tmp, most notably dfs.namenode.name.dir, which holds the namenode metadata (I can't tell which other directories you have to change, as that depends on your config, but the namenode directory is mandatory and may also be sufficient).

    I would also recommend using a more recent Hadoop distribution. BTW, in the 1.x line the namenode dir setting is named dfs.name.dir.
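
    A minimal sketch of the suggested fix, assuming a Hadoop 1.x config (hence the dfs.name.dir / dfs.data.dir names) and a hypothetical /HADOOP_CLUSTER_DATA/name directory for the namenode metadata — any reboot-safe path works:

    ```xml
    <!-- hdfs-site.xml: keep both the namenode metadata and the
         datanode blocks out of /tmp so they survive a reboot.
         The paths below are illustrative, not required names. -->
    <property>
        <name>dfs.name.dir</name>
        <value>/HADOOP_CLUSTER_DATA/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/HADOOP_CLUSTER_DATA/data</value>
    </property>
    ```

    After moving dfs.name.dir you will need to run hadoop namenode -format one more time to initialize the new directory; from then on, restarting the machine should no longer require a format.
    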