
Hadoop: Protocol message tag had invalid wire type


I set up a Hadoop 2.6 cluster using two nodes of 8 cores each on Ubuntu 12.04. Both sbin/start-dfs.sh and sbin/start-yarn.sh succeed, and jps on the master node shows the following:

22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager

The jps output on the slave node is:

19693 DataNode
19966 NodeManager

I then run the pi example:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100

which gives me the following error log:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)

The problem seems to be with the HDFS file system, since the command bin/hdfs dfs -mkdir /user fails with a similar exception:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;

where xxx.ww.y.zz is the IP address of Master-R5-Node.
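
Since the exception names the NameNode RPC address Master-R5-Node:54310, one generic check (not specific to this error, output omitted here) is whether anything is actually listening on that port on the master and what fs.defaultFS is set to:

# is anything bound to the NameNode RPC port on the master?
sudo netstat -tlnp | grep 54310

# which NameNode URI are clients configured to use?
grep -A 1 'fs.defaultFS' $HADOOP_CONF_DIR/core-site.xml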

I have checked and followed all the recommendations on the Apache ConnectionRefused wiki page and on this site.

Despite a week-long effort, I cannot get it fixed.

Thanks.


Solution

  • There are many possible causes for the problem I faced, but I finally fixed it with the following steps.

    1. Make sure that you have the needed permissions on /hadoop and the HDFS temporary files (you have to figure out where those are for your particular case). A quick permission check and restart sequence is sketched after these steps.
    2. Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml. It should look like this:
        <configuration>
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://my.master.ip.address/</value>
                <description>NameNode URI</description>
            </property>
        </configuration>
    
    3. Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:
        <property>
            <name>dfs.datanode.use.datanode.hostname</name>
            <value>false</value>
        </property>

        <property>
            <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
            <value>false</value>
        </property>
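
    For context, and as I understand these settings: dfs.datanode.use.datanode.hostname controls whether DataNodes use hostnames rather than IP addresses when connecting to other DataNodes, and dfs.namenode.datanode.registration.ip-hostname-check controls whether the NameNode requires a registering DataNode's IP to resolve back to its hostname. Setting both to false keeps everything IP-based, which matches a cluster addressed by raw IPs. Both properties belong inside the <configuration> element of hdfs-site.xml; a minimal sketch (keep any other properties you already have):

        <configuration>
            <property>
                <name>dfs.datanode.use.datanode.hostname</name>
                <value>false</value>
            </property>
            <property>
                <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
                <value>false</value>
            </property>
        </configuration>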
    

    Voila! You should now be up and running!
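
    To apply the changes, I restarted HDFS and re-ran the command that had failed. This is only a sketch: the owner hduser:hadoop and the paths /usr/local/hadoop and /app/hadoop/tmp are placeholder values, not taken from my setup, so substitute your own user and whatever hadoop.tmp.dir points at in core-site.xml. It also covers the permission check from step 1.

        # run from the Hadoop home directory on the master
        sbin/stop-yarn.sh
        sbin/stop-dfs.sh

        # step 1: make sure the Hadoop user owns the install and temp directories
        # (hduser:hadoop, /usr/local/hadoop and /app/hadoop/tmp are placeholders)
        sudo chown -R hduser:hadoop /usr/local/hadoop /app/hadoop/tmp

        sbin/start-dfs.sh
        sbin/start-yarn.sh

        # verify that the NameNode answers and that the failing command now works
        bin/hdfs dfsadmin -report
        bin/hdfs dfs -mkdir -p /user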