Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Here is a log that we got from a non-rolling upgrade:
local/master(0766d2cd23afb29f0eb42cf95b09d3d2984c14fa) -> upstream/master(57d42b12d3b6451e2ac8519780e82993ecce3611)
```
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.leaderelection.leader.step-down.wait-time = 10s (default)
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.sleep.deviation.threshold = 300ms (default)
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.log.appender.install.snapshot.enabled = false (custom)
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServer$Division: 32e8d855-b702-438d-b829-ac43dc567afc@group-9034B3A2567B: ConfigurationManager, init=-1: [], old=null, confs=<EMPTY_MAP>
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.storage.dir = [/data1/hdds-ratis, /data2/hdds-ratis, /data3/hdds-ratis, /data4/hdds-ratis, /data5/hdds-ratis, /data6/hdds-ratis, /data7/hdds-ratis, /data8/hdds-ratis, /data9/hdds-ratis, /data10/hdds-ratis, /data11/hdds-ratis] (custom)
2021-07-27 20:11:59,458 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.log.corruption.policy = EXCEPTION (default)
2021-07-27 20:11:59,461 [Datanode State Machine Thread - 0] ERROR org.apache.hadoop.ozone.upgrade.UpgradeFinalizer: Exception while running first upgrade run actions for feature SCM_HA
2021-07-27 20:11:59,461 [Datanode State Machine Thread - 0] ERROR org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine: Unable to start the DatanodeState Machine FIRST_UPGRADE_START_ACTION_FAILED
org.apache.hadoop.ozone.upgrade.UpgradeException: Exception while running first upgrade run actions for feature SCM_HA
    at org.apache.hadoop.ozone.upgrade.BasicUpgradeFinalizer.runFirstUpgradeAction(BasicUpgradeFinalizer.java:293)
    at org.apache.hadoop.ozone.upgrade.BasicUpgradeFinalizer.runPrefinalizeStateActions(BasicUpgradeFinalizer.java:255)
    at org.apache.hadoop.ozone.container.upgrade.DataNodeUpgradeFinalizer.runPrefinalizeStateActions(DataNodeUpgradeFinalizer.java:92)
    at org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:256)
    at org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:455)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
    at org.apache.hadoop.ozone.common.StorageInfo.writeTo(StorageInfo.java:208)
    at org.apache.hadoop.ozone.common.Storage.persistCurrentState(Storage.java:296)
    at org.apache.hadoop.ozone.upgrade.BasicUpgradeFinalizer.persistStorage(BasicUpgradeFinalizer.java:320)
    at org.apache.hadoop.ozone.upgrade.BasicUpgradeFinalizer.runFirstUpgradeAction(BasicUpgradeFinalizer.java:287)
    ... 5 more
2021-07-27 20:11:59,502 [pool-92215-thread-1] INFO org.apache.ratis.server.storage.RaftStorageDirectory: Lock on /data11/hdds-ratis/2f6ee407-957e-4326-92b4-9034b3a2567b/in_use.lock acquired by nodename 126731@tdw-9-180-19-146
2021-07-27 20:11:59,502 [pool-92215-thread-1] INFO org.apache.ratis.server.storage.RaftStorage: Read RaftStorageMetadata{term=2, votedFor=55148789-0cdb-4631-a3b3-c1da774523aa} from /data11/hdds-ratis/2f6ee407-957e-4326-92b4-9034b3a2567b/current/raft-meta
2021-07-27 20:11:59,507 [pool-92215-thread-1] INFO org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine: group-9034B3A2567B: The snapshot info is null. Setting the last applied index to: (t:0, i:~)
2021-07-27 20:11:59,507 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServer$Division: 32e8d855-b702-438d-b829-ac43dc567afc@group-9034B3A2567B: set configuration 0: [32e8d855-b702-438d-b829-ac43dc567afc|rpc:9.180.19.146:9858|dataStream:|priority:0, 55148789-0cdb-4631-a3b3-c1da774523aa|rpc:9.179.142.135:9858|dataStream:|priority:1, d3336357-8920-4a4e-a12f-e57da1640c4d|rpc:9.180.20.94:9858|dataStream:|priority:0], old=null
2021-07-27 20:11:59,507 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.notification.no-leader.timeout = 300s (custom)
2021-07-27 20:11:59,507 [pool-92215-thread-1] INFO org.apache.ratis.server.RaftServerConfigKeys: raft.server.log.use.memory = false (default)
```
As the log shows, an interrupt was caught in `runPrefinalizeStateActions()` before the `DatanodeStateMachine` entered its main loop.
We can also see log lines from the RaftServer thread immediately above and below the exception; that thread can potentially call `triggerHeartbeat`, which sends an interrupt to the `DatanodeStateMachine` main thread.
So there is a race condition between RaftServer startup and `DatanodeStateMachine` startup.
I think we should prevent the RaftServer from sending an interrupt to the `DatanodeStateMachine` until it is ready to accept one. The current guard only checks whether the thread variable is null, which is not enough to tell whether the `DatanodeStateMachine` is ready to have a heartbeat triggered, and reading the thread variable without synchronization is not thread-safe either.
```java
// DatanodeStateMachine.java

/**
 * Calling this will immediately trigger a heartbeat to the SCMs.
 * This heartbeat will also include all the reports which are ready to
 * be sent by datanode.
 */
public void triggerHeartbeat() {
  if (stateMachineThread != null) {
    stateMachineThread.interrupt();
  }
}
```
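One way to close the race would be to gate the interrupt on an explicit readiness flag that is set only after the pre-finalize actions complete. The sketch below is illustrative only, not the actual patch: the `HeartbeatGate` class, the `ready` flag, and `markReady()` are hypothetical names, but the `AtomicBoolean` plus `volatile` pattern shows how the null check could be made both meaningful and thread-safe.

```java
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Minimal sketch (assumed names, not the real DatanodeStateMachine API):
 * drop heartbeat interrupts until the state machine declares itself ready.
 */
public class HeartbeatGate {
  // Set once the main loop is ready to absorb an interrupt.
  private final AtomicBoolean ready = new AtomicBoolean(false);
  // volatile so the publishing thread's write is visible to callers.
  private volatile Thread stateMachineThread;

  /** Called by the state machine thread after pre-finalize actions finish. */
  public void markReady() {
    stateMachineThread = Thread.currentThread();
    ready.set(true);
  }

  /** Interrupt only when the state machine is in its main loop. */
  public boolean triggerHeartbeat() {
    Thread t = stateMachineThread;  // read once to avoid a null race
    if (ready.get() && t != null) {
      t.interrupt();
      return true;
    }
    return false;  // dropped: not ready yet, so no spurious interrupt
  }

  public static void main(String[] args) {
    HeartbeatGate gate = new HeartbeatGate();
    System.out.println(gate.triggerHeartbeat());  // false: suppressed before markReady()
    gate.markReady();
    System.out.println(gate.triggerHeartbeat());  // true: interrupt delivered
    Thread.interrupted();  // clear our own interrupt status before exiting
  }
}
```

With this shape, a RaftServer thread that races ahead of startup simply has its heartbeat trigger dropped instead of interrupting `runPrefinalizeStateActions()` mid-write.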