Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
We observed cases where reducer preemption makes the job finish much later, and the preemption does not appear to be necessary: after preemption, both the preempted reducer and the mapper are assigned immediately, meaning there was already enough space for the mapper.
The logic for triggering preemption is in RMContainerAllocator::preemptReducesIfNeeded. Preemption is triggered if the following condition holds:
headroom + am * |m| + pr * |r| < mapResourceRequest
where am is the number of assigned mappers, |m| is the mapper size, pr is the number of reducers being preempted, and |r| is the reducer size.
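A minimal Java sketch of that check, assuming plain memory values in MB; the method and parameter names are illustrative only, since the real RMContainerAllocator code compares Resource objects through a resource calculator:
{code:java}
// Simplified, hypothetical sketch of the trigger condition described above.
// The actual implementation in RMContainerAllocator works with Resource
// objects; the names and units here are illustrative only.
public final class ReducePreemptionCheck {

  /**
   * Returns true when reducer preemption would be triggered, i.e. when
   * headroom + am * |m| + pr * |r| < mapResourceRequest.
   *
   * @param headroom           headroom reported by the scheduler, in MB
   * @param assignedMaps       am: number of currently assigned mappers
   * @param mapSize            |m|: size of one mapper container, in MB
   * @param preemptingReduces  pr: number of reducers already being preempted
   * @param reduceSize         |r|: size of one reducer container, in MB
   * @param mapResourceRequest resource needed for one pending mapper, in MB
   */
  static boolean shouldPreemptReducer(long headroom,
                                      int assignedMaps, long mapSize,
                                      int preemptingReduces, long reduceSize,
                                      long mapResourceRequest) {
    long assumedAvailable = headroom
        + (long) assignedMaps * mapSize
        + (long) preemptingReduces * reduceSize;
    return assumedAvailable < mapResourceRequest;
  }
}
{code}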
The original idea apparently was that if the headroom is not big enough for the new mapper requests, reducers should be preempted. This works if the job is alone in the cluster, but once queues are involved the headroom calculation becomes more complicated and would require a separate headroom calculation per queue/job.
As a result, the headroom variable has effectively been given up on: it is currently always set to 0. What this implies for preemption is that it becomes very aggressive, without considering whether there is actually enough space for the mappers.
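To illustrate how aggressive this becomes (using the hypothetical helper above with made-up numbers): with the headroom reported as 0, no mappers yet assigned, and no reducers already being preempted, the left-hand side of the condition is 0, so the check fires for every pending mapper regardless of how much capacity the queue actually has free.
{code:java}
// Illustrative only: the scheduler reports headroom = 0, as described above.
// With am = 0 and pr = 0 the left-hand side is 0, so preemption always triggers.
boolean preempt = ReducePreemptionCheck.shouldPreemptReducer(
    0L,        // headroom reported by the scheduler
    0, 1536L,  // am = 0 assigned mappers of 1536 MB each
    0, 3072L,  // pr = 0 reducers being preempted, 3072 MB each
    1536L);    // mapResourceRequest for one pending mapper
// preempt == true, even if the queue has plenty of free capacity.
{code}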
Attachments
Issue Links
- is broken by
  - YARN-1959 Fix headroom calculation in FairScheduler (Closed)
- relates to
  - MAPREDUCE-6302 Preempt reducers after a configurable timeout irrespective of headroom (Closed)