Details
Type: Bug
Status: Patch Available
Priority: Major
Resolution: Unresolved
Description
Consider the following scenario:
Total cluster resources: <memory: 13312, vCores: 4, yarn.io/gpu: 2>
Queue:
root
  - default (50%)
  - autotest (50%)
Task: a MapReduce job with 1 AM, 1 map, and 1 reduce
Task1: does not request any GPU
Task2: requests 2 GPUs for the map task
Test scenario and results: First, the hadoop user submits Task2 to the default queue, and its resources are allocated normally. Then the hadoop user submits Task1 to the same default queue, and it cannot be allocated any resources.
I think that in this situation Task1 should also be allocated resources and run normally.
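For reference, a minimal sketch of how Task2's per-map GPU request might be expressed through the MapReduce client API, assuming a Hadoop 3.x cluster with the yarn.io/gpu resource type enabled. The class name, job name, and input/output paths are placeholders chosen for illustration; Task1 would be the same job with the GPU property left unset.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GpuMapJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Task2: ask for 2 GPUs per map task; Task1 simply omits this property.
    conf.set("mapreduce.map.resource.yarn.io/gpu", "2");
    // Submit to the default queue (50% of root capacity), as in the scenario.
    conf.set("mapreduce.job.queuename", "default");

    Job job = Job.getInstance(conf, "task2-gpu-map"); // job name is a placeholder
    job.setJarByClass(GpuMapJob.class);
    // Identity map/reduce is enough to exercise allocation: 1 AM, 1 map, 1 reduce.
    job.setNumReduceTasks(1);
    FileInputFormat.addInputPath(job, new Path("/tmp/input"));    // placeholder input
    FileOutputFormat.setOutputPath(job, new Path("/tmp/output")); // placeholder output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}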