
FATAL namenode.FSEditLog: Error: flush failed for required journal

Hadoop HA NameNode process exits unexpectedly

  • Exception
2018-12-24 22:45:30,418 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 19032 ms (timeout=20000 ms) for a response for sendEdits. Succeeded so far: [10.10.22.3:8485]. Exceptions so far: [10.10.22.2:8485: Journal disabled until next roll]
2018-12-24 22:45:31,688 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [10.10.22.3:8485, 10.10.22.2:8485, 10.10.22.4:8485], stream=QuorumOutputStream starting at txid 35387466))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:707)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:641)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3394)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3268)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
  • Analysis

    Last night the production cluster suddenly raised an alert: the NameNode process had died, although the other daemons shown by jps were still running. After restarting the NN everything looked normal, so I turned to the logs to find the root cause. The failure was triggered by a timeout on the NN's requests to the journal nodes. Why does the NN talk to the JournalNodes at all? That comes down to the HA design. The JournalNodes store the EditLog; back in MR1 (Hadoop 1.x) the edit log sat next to the fsimage on the NameNode, and the SecondaryNameNode merged them periodically. In HA mode, to keep the Standby NN's state, i.e. its metadata, consistent with the Active NN, edits flow through the JournalNode daemons: the Active NN writes to them and the Standby tails them. The JournalNodes thereby become a hard dependency of the NameNode, which is why they are deployed as a quorum (typically three nodes), with a ZooKeeper cluster alongside to drive automatic failover. A minimal sketch of this wiring follows.
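    The sketch below shows roughly how that dependency is declared in hdfs-site.xml. It is illustrative only: the nameservice id "mycluster" is a placeholder I introduced, while the hosts and port are taken from the IPs in the log above.
<!-- Illustrative sketch (not this cluster's actual config):
     the Active NN writes edits to this JournalNode quorum and the
     Standby NN tails them; "mycluster" is a hypothetical nameservice id -->
<property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://10.10.22.2:8485;10.10.22.3:8485;10.10.22.4:8485/mycluster</value>
</property>
<property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
</property>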

    The exception above was therefore caused by the NN timing out while communicating with the JournalNodes. The default write timeout is 20 s (the "timeout=20000 ms" in the log), and we can raise it to 60 s in hdfs-site.xml to get past this problem:
<property>
        <name>dfs.qjournal.write-txns.timeout.ms</name>
        <value>60000</value>
</property>
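Both NameNodes need a restart to pick up the change. Note that raising the timeout is a mitigation rather than a cure; slow JournalNode disks or long NN GC pauses are the usual underlying causes. If timeouts also show up in other QJM phases (starting a segment, selecting input streams), the sibling dfs.qjournal.*.timeout.ms keys can be raised the same way. The sketch below assumes those phases are affected too; the 60000 ms values are suggestions, not defaults.
<!-- Sketch: sibling QJM timeouts, raised only if those phases also time out -->
<property>
        <name>dfs.qjournal.start-segment.timeout.ms</name>
        <value>60000</value>
</property>
<property>
        <name>dfs.qjournal.select-input-streams.timeout.ms</name>
        <value>60000</value>
</property>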
           

References:

https://blog.csdn.net/levy_cui/article/details/51143214

https://blog.csdn.net/androidlushangderen/article/details/48415073
