I have a problem. Some info follows:
Nodes: 3 nodes, but only 2 configured as RegionServers
OS: CentOS 6.3
Apache Hadoop 2.7.1
Apache HBase 0.98.12
My Hadoop and HBase are successfully built with both LZO and Snappy compression support. I have one HBase table using LZO compression and another table using Snappy compression. I inserted 50 records into the table, and the insert worked with no problem, but when I scan the table with the Java API, one of the RegionServers dies.
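Here is a minimal sketch of the scan code I mean (the table name "mytable" and the explicit caching value are placeholders for this example, not my real settings):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanCompressedTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "mytable" is a placeholder, not my real table name
        HTable table = new HTable(conf, "mytable");
        try {
            Scan scan = new Scan();
            // how many rows each RPC fetches; set here only to show the knob
            scan.setCaching(100);
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            } finally {
                scanner.close();
            }
        } finally {
            table.close();
        }
    }
}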
I checked the HBase logs but found no error or exception. In the Hadoop logs, however, I found this exception:
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
I don't know why the exception is thrown only when scanning the HBase table, because when I run an MR job that reads LZO files, everything works normally. Thanks for your answers!
OK, I finally found the answer, and it is unbelievable. Looking through the HBase GC logs, I saw a long full GC pause. My HBase heap size was the default 1 GB, so that was likely the cause: a full GC pause longer than the ZooKeeper session timeout makes the master declare the RegionServer dead. When I increased the heap to 4 GB, heavy use of compression worked normally. Please remember this lesson!
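For anyone who hits the same thing, the fix was a one-line change in conf/hbase-env.sh (4096 MB is what worked for me; size it to your own hardware):

# conf/hbase-env.sh
# The maximum amount of heap to use, in MB. Default is 1000 (about 1 GB).
export HBASE_HEAPSIZE=4096

After restarting HBase, scans against both the LZO and Snappy tables ran without killing the RegionServer.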