Details

Type: Bug
Status: Resolved
Priority: Major
Resolution: Duplicate
Affects Version/s: 0.23.1, 2.0.1-alpha
Fix Version/s: None
Component/s: None
Environment: 16-machine (dual-core) cluster ==> 32 containers; NameNode and ResourceManager running on a separate 17th machine
Description
If a job has more reduce tasks than there are containers available, the reduce tasks can occupy all containers and starve the job: the running reducers wait for map output that can never be produced, because no containers remain for the pending mappers. The attached graph illustrates the behaviour. The scheduler used is FIFO.
I understand that the correct behaviour when all containers are taken by reducers while mappers are still pending is for the running reducers to be preempted. However, preemption does not occur.
A workaround is to set the number of reducers below the number of available containers, as sketched below.
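For illustration, a minimal sketch of the workaround, assuming the 32-container cluster described above; the class name, job name, and the reducer count of 24 are illustrative assumptions, not values taken from this report:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CappedReducerJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "capped-reducer-job");

    // Keep reducers strictly below the 32 available containers so that
    // pending mappers (and the MR ApplicationMaster) can still obtain
    // containers. The value 24 is an illustrative choice.
    job.setNumReduceTasks(24);

    // ... set mapper/reducer classes, input/output paths, etc., then:
    // System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}
The same cap can also be applied without code changes via the standard property, e.g. -Dmapreduce.job.reduces=24 on the command line.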
Attachments
Issue Links
- duplicates MAPREDUCE-4299 Terasort hangs with MR2 FifoScheduler (Closed)