Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Labels: None
Description
Under some circumstances, it seems that placeholder allocations are being removed multiple times:
2023-04-25T06:25:46.279Z INFO scheduler/partition.go:1233 replacing placeholder allocation {"appID": "spark-000000031tn2lgv2gar", "allocationId": "20a4cf77-7095-4635-b9e9-43a7564385c4"}
...
2023-04-25T06:25:46.299Z INFO scheduler/partition.go:1233 replacing placeholder allocation {"appID": "spark-000000031tn2lgv2gar", "allocationId": "20a4cf77-7095-4635-b9e9-43a7564385c4"}
This message only appears once in the codebase, in PartitionContext.removeAllocation(). Furthermore, it is guarded by a test for release.TerminationType == si.TerminationType_PLACEHOLDER_REPLACED. This would seem to indicate that removeAllocation() is somehow being called twice. I believe this would cause the used resources on the node to be subtracted twice for the same allocation. This quickly results in health checks failing:
2023-04-25T06:26:10.632Z WARN scheduler/health_checker.go:176 Scheduler is not healthy {"health check values": [..., {"Name":"Consistency of data","Succeeded":false,"Description":"Check if node total resource = allocated resource + occupied resource + available resource","DiagnosisMessage":"Nodes with inconsistent data: [\"ip-10-0-112-148.eu-central-1.compute.internal\"]"}, ...]}
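As a rough illustration of why a double subtraction would trip that check, here is a simplified sketch using plain int64 values in place of the scheduler's Resource maps; the names and arithmetic are illustrative only, not the actual YuniKorn node code:

```go
package main

import "fmt"

// consistencyCheck mirrors the "Consistency of data" description above:
// node total resource == allocated + occupied + available.
func consistencyCheck(total, allocated, occupied, available int64) bool {
	return total == allocated+occupied+available
}

func main() {
	// A node with 100 units and one 10-unit allocation.
	total, allocated, occupied, available := int64(100), int64(10), int64(0), int64(90)
	fmt.Println(consistencyCheck(total, allocated, occupied, available)) // true

	// First (legitimate) release: both counters move together.
	allocated -= 10
	available += 10

	// Duplicate release: allocated is subtracted again, but there is no
	// matching allocation left, so the counters drift apart.
	allocated -= 10
	fmt.Println(consistencyCheck(total, allocated, occupied, available)) // false
}
```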
This was originally thought to be YUNIKORN-1615, but that seems related to occupied (rather than used) resources.
Updated by zhuqi:
The release of the allocation is called twice, but that does not cause the node and other resources to be updated twice.
The title will be changed to:
Fix allocatedResource and availableResource should be updated at the same time for ReplaceAllocation
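A minimal sketch of the idea behind that title, assuming a hypothetical nodeUsage type (this is not the actual Node implementation or the committed patch): adjust both counters inside one critical section, so a reader can never observe allocatedResource changed without the matching change to availableResource.

```go
package main

import "sync"

// nodeUsage is a hypothetical stand-in for a node's resource bookkeeping;
// plain int64 values replace the scheduler's Resource maps.
type nodeUsage struct {
	mu        sync.Mutex
	allocated int64
	available int64
}

// replaceAllocation swaps a placeholder's usage for the real allocation's
// usage. Both counters are updated under the same lock, so their sum stays
// constant at every point a reader can observe the struct.
func (n *nodeUsage) replaceAllocation(placeholderRes, realRes int64) {
	n.mu.Lock()
	defer n.mu.Unlock()
	delta := realRes - placeholderRes
	n.allocated += delta
	n.available -= delta
}
```

Keeping the pair in step also makes a duplicate release easier to reason about: either both counters reflect it or neither does, so the consistency check above cannot be tripped by a half-applied update.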
Attachments
Issue Links