Tags: ubuntu, hadoop, hdfs, namenode

hadoop-1.2.1 namenode is not formatted


I have installed Hadoop 1.2.1 on Ubuntu 16 and configured it as below:

core-site.xml

<configuration>
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>tmpDir/snadikop/hadoopdata</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
</property>
</configuration>

When I started the daemons for the first time, everything worked fine. But after I restarted the system and tried to start the daemons again, the NameNode would not start.

I tried the

hadoop namenode -format

command, and I also tried

sudo chown snadikop tmpDir/snadikop/hadoopdata
sudo chmod 750 tmpDir/snadikop/hadoopdata

where snadikop is the user, but that still did not solve the issue. Please help me with this.
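For reference, a formatted NameNode storage directory should contain a current/VERSION file. The check below is a sketch: the dfs/name subpath is assumed from Hadoop 1.x's default layout (${hadoop.tmp.dir}/dfs/name), and because the configured path is relative, the result depends on the directory it is run from.

```shell
# Sanity check: a formatted NameNode storage dir has current/VERSION.
# Path assumed from the default layout ${hadoop.tmp.dir}/dfs/name;
# note it is relative, so it resolves against the current directory.
NAME_DIR="${NAME_DIR:-tmpDir/snadikop/hadoopdata/dfs/name}"
if [ -f "$NAME_DIR/current/VERSION" ]; then
  status="formatted"
else
  status="not formatted"
fi
echo "$NAME_DIR: $status"
```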

Thank you.

Below is my log file:

2017-03-02 18:07:01,185 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2017-03-02 18:07:01,377 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2017-03-02 18:07:01,411 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 

and this is what is happening in the terminal:

snadikop@satish-vb:~$ jps
11492 NameNode
11654 DataNode
11863 Jps
11818 SecondaryNameNode
snadikop@satish-vb:~$ jps
11654 DataNode
11880 Jps
11818 SecondaryNameNode
snadikop@satish-vb:~$ 

Below were screenshots of the 'name' and 'data' folder locations (images not included here). I have a doubt regarding the 'name' folder path: do both folders have to be in the same parent folder or not?

Solution

  • The value you have provided for hadoop.tmp.dir is a relative path; it resolves differently each time, depending on the directory from which the start scripts are invoked. The hadoop.tmp.dir path is the base path for dfs.name.dir and dfs.data.dir when they are not explicitly set in hdfs-site.xml.

    So, whenever the resolved hadoop.tmp.dir changes, the NameNode's name directory changes with it, and the NameNode looks for its image in a directory that was never formatted, hence the NameNode is not formatted error.

    Add these properties to hdfs-site.xml, pointing them at absolute paths outside /tmp:

    <property>
       <name>dfs.name.dir</name>
       <value>/home/username/namenode</value>
    </property>
    <property>
       <name>dfs.data.dir</name>
       <value>/home/username/datanode</value>
    </property>
    

    Then format the NameNode:

    hadoop namenode -format
    

    Also change the value of hadoop.tmp.dir to an absolute path, so that tmp directories are no longer created relative to whatever directory the daemons happen to be started from.
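To see why the relative path is the culprit, here is a minimal, Hadoop-independent sketch: the same relative value resolves to two different absolute directories depending on where the process is started (the run1/run2 directory names are made up for the demo).

```shell
# Demo: a relative path like "tmpDir/snadikop/hadoopdata" resolves
# against the current working directory, so daemons started from
# different directories end up with different storage locations.
workdir=$(mktemp -d)
mkdir -p "$workdir/run1" "$workdir/run2"

# Resolve the relative path from a given starting directory
# (runs in a subshell so the caller's cwd is untouched).
resolve() (
  cd "$1"
  mkdir -p tmpDir/snadikop/hadoopdata
  cd tmpDir/snadikop/hadoopdata
  pwd
)

first=$(resolve "$workdir/run1")
second=$(resolve "$workdir/run2")
echo "$first"
echo "$second"   # different directory, so the NameNode's image appears missing
```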