Exception in createBlockOutputStream when copying data into HDFS

I am getting the warning message below while copying data into HDFS. I have a 6-node cluster running, and during every copy it excludes the same two nodes and shows this warning.

    INFO hdfs.DFSClient: Exception in createBlockOutputStream
    java.io.IOException: Bad connect ack with firstBadLink as 192.168.226.136:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
    13/11/04 05:02:15 INFO hdfs.DFSClient: Abandoning BP-603619794-127.0.0.1-1376359904614:blk_-7294477166306619719_1917
    13/11/04 05:02:15 INFO hdfs.DFSClient: Excluding datanode 192.168.226.136:50010

Datanode log:

    2014-02-07 04:22:01,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
    java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "datanode4/192.168.226.136"; destination host is: "namenode":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
        at org.apache.hadoop.ipc.Client.call(Client.java:1235)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:170)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:441)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:521)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
        at java.lang.Thread.run(Thread.java:679)
    Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
        at sun.nio.ch.IOUtil.read(IOUtil.java:224)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:56)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:143)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:411)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:276)
        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
        at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:941)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:839)
    2014-02-07 04:22:04,780 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:05,783 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:06,785 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:07,788 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:08,791 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:09,794 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:10,796 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:11,798 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:12,802 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:13,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-02-07 04:22:13,818 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
    java.net.ConnectException: Call From datanode4/192.168.226.136 to namenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

SSH from the datanodes to the namenode works fine. Can anyone please help me with this?

Please let me know if you need any further details.
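One sanity check worth running, since a working SSH login only proves port 22 is open: verify that the Hadoop ports named in the logs above are actually reachable, i.e. the excluded datanode's data-transfer port 50010 and the namenode's RPC port 8020. A minimal sketch, assuming nc is installed (telnet <host> <port> works as well):

    # Can we reach the excluded datanode's data-transfer port (from the DFSClient log)?
    nc -vz 192.168.226.136 50010
    # Can we reach the namenode's RPC port (from the datanode log)?
    nc -vz 192.168.226.129 8020

If either check fails while SSH succeeds, a firewall or security-group rule on those ports is the likely culprit.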

Be sure to check the firewall:

    service iptables save
    service iptables stop
    chkconfig iptables off
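To confirm the firewall really is down, a quick sketch, assuming RHEL/CentOS-style service management as implied by the commands above:

    # Should report that iptables is not running
    service iptables status
    # Should show iptables off for all runlevels after 'chkconfig iptables off'
    chkconfig --list iptables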

More details in this blog post: http://ahikmat.blogspot.kr/2014/05/three-essential-things-to-do-while.html

You need to open the TCP ports for the nodes that cannot be connected to.

Solution: adding the TCP port range 0-65535 to the security group resolved the issue.
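Opening the whole 0-65535 range works but is very broad. A narrower alternative, sketched under the assumption that the cluster uses the default ports seen in the logs above (8020 for the namenode RPC, 50010/50020 for the datanodes), is to allow just those ports on each node:

    # Allow only the Hadoop ports observed in the logs, then persist the rules
    iptables -I INPUT -p tcp --dport 8020 -j ACCEPT
    iptables -I INPUT -p tcp --dport 50010 -j ACCEPT
    iptables -I INPUT -p tcp --dport 50020 -j ACCEPT
    service iptables save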