Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Environment: Hadoop 2.4.0 (as packaged by Hortonworks in HDP 2.1.2)
Description
I have a small cluster consisting of 8 desktop-class systems (1 master + 7 workers).
Because these systems have little memory, I configured YARN as follows:
yarn.nodemanager.resource.memory-mb = 2200
yarn.scheduler.minimum-allocation-mb = 250
On my client I set:
mapreduce.map.memory.mb = 512
mapreduce.reduce.memory.mb = 512
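For reference, a minimal sketch (not part of the original report) of how these client-side values might be set when submitting a job; the driver class name and job name are hypothetical, and the yarn.* values above are assumed to be configured in yarn-site.xml on the cluster nodes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SmallContainerJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side container sizes from the report above.
        conf.setInt("mapreduce.map.memory.mb", 512);
        conf.setInt("mapreduce.reduce.memory.mb", 512);

        Job job = Job.getInstance(conf, "small-container-job");
        job.setJarByClass(SmallContainerJob.class);
        // Mapper/reducer classes and input/output paths omitted for brevity.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}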
I then ran a job with 27 mappers and 32 reducers.
After a while I saw the following deadlock occur:
- All nodes had been filled to their maximum capacity with reducers.
- One mapper was waiting for a container slot to start in.
I tried killing reducer attempts, but that didn't help (new reducer attempts simply took over the freed containers).
Workaround:
I set the following property from my job (the default value is 0.05, i.e. 5%):
mapreduce.job.reduce.slowstart.completedmaps = 0.99f
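A short sketch of applying this workaround programmatically (only the property name and value come from this report; the helper class is illustrative, and the same effect can be had by passing -Dmapreduce.job.reduce.slowstart.completedmaps=0.99 on the command line when the driver uses ToolRunner):

import org.apache.hadoop.conf.Configuration;

public final class SlowstartWorkaround {
    // Apply to the job's Configuration before submission.
    static void apply(Configuration conf) {
        // Do not launch reducers until 99% of maps have completed (default 0.05),
        // so reducers cannot occupy every container while maps are still pending.
        conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.99f);
    }
}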
Attachments
Issue Links
- duplicates YARN-1680: availableResources sent to applicationMaster in heartbeat should exclude blacklistedNodes free memory. (Open)