Tags: java, hadoop, hdfs, hortonworks-data-platform, namenode

Hortonworks HDFS Name Node tryLock issue on startup


We're using HDP HDFS module version 2.7.3.2.6.5.0-292.

The server was stuck and had to be hard-reset; now the NameNode service throws an error on startup.

After successfully acquiring the lock file, it immediately fails trying to acquire it again. Even though the second attempt comes from the same process (presumably the same thread), it fails.

How can we start the NameNode with the data intact?

18/11/14 20:19:24 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/11/14 20:19:24 INFO util.GSet: VM type       = 64-bit
18/11/14 20:19:24 INFO util.GSet: 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
18/11/14 20:19:25 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/11/14 20:19:25 INFO common.Storage: Lock on /mnt/pd1/hadoop/hdfs/namenode/in_use.lock acquired by nodename 10635@hadoop-327
18/11/14 20:19:25 ERROR common.Storage: It appears that another node 10635@hadoop-327 has already locked the storage directory: /mnt/pd1/hadoop/hdfs/namenode
java.nio.channels.OverlappingFileLockException
        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
        at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1113)
        at java.nio.channels.FileChannel.tryLock(FileChannel.java:1155)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:770)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:738)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:551)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:502)
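The key detail in the trace is that java.nio file locks are tracked per JVM, not per channel: if any channel in the process already holds a lock on a file, a second tryLock() on that file throws OverlappingFileLockException rather than blocking or returning null. A minimal sketch reproducing the pattern (the in_use.lock file name comes from the log; the class name and everything else is illustrative):

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;

    public class DoubleLockDemo {
        public static void main(String[] args) throws Exception {
            // Two independent channels on the same file, as happens when two
            // storage-directory entries resolve to the same path.
            RandomAccessFile a = new RandomAccessFile("in_use.lock", "rw");
            RandomAccessFile b = new RandomAccessFile("in_use.lock", "rw");
            FileChannel chA = a.getChannel();
            FileChannel chB = b.getChannel();

            FileLock lock = chA.tryLock();  // succeeds, like the INFO "Lock ... acquired" line
            System.out.println("first lock acquired: " + lock.isValid());

            try {
                // File locks are held per JVM, so this throws
                // OverlappingFileLockException even from a different channel,
                // matching the ERROR line in the NameNode log.
                chB.tryLock();
            } catch (OverlappingFileLockException e) {
                System.out.println("second attempt failed: " + e);
            } finally {
                lock.release();
                chA.close();
                chB.close();
                a.close();
                b.close();
            }
        }
    }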


Solution

  • It turned out that dfs.namenode.name.dir contained two paths resolving to the same directory, which caused the double lock. Once we configured a single path, everything was back in order (see the sketch below).
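For illustration, the misconfiguration in hdfs-site.xml would have looked roughly like this; the duplicate second entry is a hypothetical stand-in (the actual duplicate value wasn't recorded), while the directory path itself comes from the log:

    <!-- Broken: both entries resolve to the same directory, so the NameNode
         tries to lock /mnt/pd1/hadoop/hdfs/namenode/in_use.lock twice. -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/mnt/pd1/hadoop/hdfs/namenode,/mnt/pd1/hadoop/hdfs/namenode/</value>
    </property>

    <!-- Fixed: a single path, so the lock is only taken once. -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/mnt/pd1/hadoop/hdfs/namenode</value>
    </property>

Note that dfs.namenode.name.dir legitimately accepts a comma-separated list, but each entry must be a distinct directory; symlinks, trailing slashes, or copy-paste duplicates that point at the same location will trigger this exact failure.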