Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version: 1.14.4
Description
I have a JobManager, which at some point failed to acknowledge a checkpoint:
Error while processing AcknowledgeCheckpoint message
org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete the pending checkpoint 193393. Failure reason: Failure to finalize checkpoint.
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1255)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1100)
    at org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$acknowledgeCheckpoint$1(ExecutionGraphHandler.java:89)
    at org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$3(ExecutionGraphHandler.java:119)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.apache.flink.runtime.persistence.StateHandleStore$AlreadyExistException: checkpointID-0000000000000193393 already exists in ConfigMap cm-00000000000000000000000000000000-jobmanager-leader
    at org.apache.flink.kubernetes.highavailability.KubernetesStateHandleStore.getKeyAlreadyExistException(KubernetesStateHandleStore.java:534)
    at org.apache.flink.kubernetes.highavailability.KubernetesStateHandleStore.lambda$addAndLock$0(KubernetesStateHandleStore.java:155)
    at org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.lambda$attemptCheckAndUpdateConfigMap$11(Fabric8FlinkKubeClient.java:316)
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
    ... 3 common frames omitted
Despite this failure, the JobManager continues to create subsequent checkpoints successfully.
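For context, finalizing a checkpoint registers the completed-checkpoint handle in the HA backend via a check-and-add on the leader ConfigMap; the AlreadyExistException above is that check failing. Below is a minimal, self-contained sketch of the pattern, not Flink's actual API: the class, `StateHandle`, the exception type, and the S3 paths are hypothetical stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical model of a check-and-add state handle store backed by a ConfigMap-like map. */
public class CheckAndAddSketch {

    /** A pointer to externally stored checkpoint metadata, e.g. an S3 path. */
    record StateHandle(String pointer) {}

    static class AlreadyExistException extends Exception {
        AlreadyExistException(String msg) { super(msg); }
    }

    /** Stands in for the leader ConfigMap: key -> state handle. */
    private final Map<String, StateHandle> configMap = new HashMap<>();

    /** Check-and-add: refuses to overwrite an existing key, mirroring the behavior in the trace. */
    void addAndLock(String key, StateHandle handle) throws AlreadyExistException {
        if (configMap.containsKey(key)) {
            throw new AlreadyExistException(key + " already exists in ConfigMap");
        }
        configMap.put(key, handle);
    }

    public static void main(String[] args) throws Exception {
        CheckAndAddSketch store = new CheckAndAddSketch();
        String key = "checkpointID-0000000000000193393";

        // A first attempt registers a pointer under the checkpoint ID.
        store.addAndLock(key, new StateHandle("s3://bucket/flink-ha/completedCheckpointAAAA"));

        // A later attempt to finalize the same checkpoint ID hits the duplicate key:
        try {
            store.addAndLock(key, new StateHandle("s3://bucket/flink-ha/completedCheckpointBBBB"));
        } catch (AlreadyExistException e) {
            // Surfaced as "Failure to finalize checkpoint"; the rejected attempt's
            // files may then be cleaned up while the old ConfigMap entry remains.
            System.out.println("Finalization fails: " + e.getMessage());
        }
    }
}
```

The sketch only shows the duplicate-key rejection itself; what matters for this report is that the entry left in the ConfigMap can end up pointing at files that do not survive the subsequent cleanup, which is consistent with the recovery failure below.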
Upon failure, the JobManager tries to recover this checkpoint (0000000000000193393), but fails to do so because of:
Caused by: org.apache.flink.util.FlinkException: Could not retrieve checkpoint 193393 from state handle under checkpointID-0000000000000193393. This indicates that the retrieved state handle is broken. Try cleaning the state handle store
    ...
Caused by: java.io.FileNotFoundException: No such file or directory: s3://xxx/flink-ha/xxx/completedCheckpoint72e30229420c
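Recovery dereferences the pointer stored in the ConfigMap entry and reads the metadata file it names; the FileNotFoundException above means that file is gone from S3 even though the ConfigMap entry survived. A hedged sketch of that dereference step, again with hypothetical names (`retrieveState`, `OBJECT_STORE`, the bucket paths are illustrative, not Flink's API):

```java
import java.io.FileNotFoundException;
import java.util.Map;

/** Hypothetical model of recovering a completed checkpoint from a stale HA entry. */
public class RecoverySketch {

    /** Pretend object store: only paths present here "exist". */
    static final Map<String, byte[]> OBJECT_STORE =
            Map.of("s3://bucket/flink-ha/completedCheckpointAAAA", new byte[0]);

    /** Dereferences the pointer stored in the ConfigMap entry. */
    static byte[] retrieveState(String pointer) throws FileNotFoundException {
        byte[] metadata = OBJECT_STORE.get(pointer);
        if (metadata == null) {
            // Mirrors: java.io.FileNotFoundException: No such file or directory: s3://...
            throw new FileNotFoundException("No such file or directory: " + pointer);
        }
        return metadata;
    }

    public static void main(String[] args) {
        // The ConfigMap still points at a file that no longer exists in the
        // object store, so recovery of the checkpoint cannot proceed.
        String stalePointer = "s3://bucket/flink-ha/completedCheckpointBBBB";
        try {
            retrieveState(stalePointer);
        } catch (FileNotFoundException e) {
            System.out.println("Recovery fails: " + e.getMessage());
        }
    }
}
```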
I'm running Flink 1.14.4.
Note: This issue was first discussed here: https://github.com/apache/flink/pull/15832#pullrequestreview-1005973050
Issue Links
- relates to FLINK-25098: Jobmanager CrashLoopBackOff in HA configuration (Open)
- relates to FLINK-29105: KubernetesStateHandleStoreTest.testAddAndLockShouldNotThrowAlreadyExistExceptionWithSameContents failed with AssertionFailedError (Closed)