Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Won't Fix
- Hadoop Flags: Reviewed
Job tracker parameters permit setting limits on the number of maps (or reduces) per job and/or per node.
Description
There are a number of use cases for being able to do this. The focus of this JIRA should be on finding the simplest implementation that satisfies the most use cases.
This could be implemented as either a per-node maximum or a cluster-wide maximum. For most uses the former seems preferable; however, either would fulfill the requirements of this JIRA.
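To make the two designs concrete, here is a minimal sketch using the MRv1 JobConf API. The property keys mapred.job.max.maps.per.node and mapred.job.running.map.limit are hypothetical names invented for illustration, not existing Hadoop configuration keys; a real implementation would pick its own names and enforce them in the JobTracker's scheduler.

    import org.apache.hadoop.mapred.JobConf;

    public class TaskLimitSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf(TaskLimitSketch.class);
        conf.setJobName("latency-bound-scan");

        // Per-node variant (hypothetical key): run at most 8 concurrent
        // map tasks of this job on any single TaskTracker.
        conf.setInt("mapred.job.max.maps.per.node", 8);

        // Cluster-wide variant (hypothetical key): run at most 100
        // concurrent map tasks of this job across the whole cluster.
        conf.setInt("mapred.job.running.map.limit", 100);

        // JobClient.runJob(conf) would then submit the job as usual.
      }
    }

With the per-node key, the scheduler would check how many of this job's maps are already running on a TaskTracker before assigning another; with the cluster-wide key, it would check the job's total running maps.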
Some of the reasons for allowing this feature (mine and from others on the list):
- I have some very large CPU-bound jobs. I am forced to keep the max maps-per-node limit at 2 or 3 (on a 4-core node) so that I do not starve the DataNode and RegionServer. I have other jobs that are network-latency bound and would like to run high numbers of them concurrently on each node. Though I can thread some jobs, some use cases are difficult to thread (scanning from HBase), and threading adds significant complexity to the job compared to letting Hadoop handle the concurrency. (The existing per-node knob is global to all jobs; see the sketch after this list.)
- Poor assignment of tasks to nodes creates situations where one node ends up with multiple reducers of a job while other nodes receive none. A limit of 1 reducer per node for that job would prevent that from happening. (only works with a per-node limit)
- Poor man's MR job virtualization. Being able to limit a job's resources gives much more control over allocating and dividing up the resources of a large cluster. (makes most sense with a cluster-wide limit)
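For contrast, the only limit that exists today is the per-TaskTracker slot count, configured per daemon rather than per job. The sketch below simply reads those real MRv1 properties (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum, which default to 2) to show the knob that the CPU-bound case above is forced to turn down for every job on the node.

    import org.apache.hadoop.mapred.JobConf;

    public class ExistingSlotLimits {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // These existing properties cap concurrent tasks per TaskTracker,
        // but for ALL jobs at once; there is no per-job equivalent, which
        // is the gap this issue asks to close.
        int mapSlots = conf.getInt("mapred.tasktracker.map.tasks.maximum", 2);
        int reduceSlots = conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2);
        System.out.println("map slots/node=" + mapSlots
            + " reduce slots/node=" + reduceSlots);
      }
    }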
Attachments
Issue Links
- is related to:
  - MAPREDUCE-704 Per-node task limits in the fair scheduler (Open)
  - MAPREDUCE-698 Per-pool task limits for the fair scheduler (Closed)
  - MAPREDUCE-5583 Ability to limit running map and reduce tasks (Closed)