Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Versions: 1.5.2, 1.6.0
Description
When starting a Spark job on a Mesos cluster, all available cores are reserved (up to spark.cores.max), creating one executor per Mesos node and as many executors as needed.
This is the case even when dynamic allocation is enabled.
When dynamic allocation is enabled, the number of executors launched at startup should instead be limited to the value of spark.dynamicAllocation.initialExecutors.
The Mesos scheduler backend already follows the executor count computed by the ExecutorAllocationManager once the job is running; the problem is only at startup, when it simply creates all the executors it can. A configuration sketch illustrating the expected behavior follows.
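For illustration, here is a minimal Scala sketch of the configuration involved (the Mesos master URL and the executor/core counts are placeholder values, not from this report). With these settings one would expect only 2 executors at startup, with the ExecutorAllocationManager adding more later as load demands; with the bug described above, the Mesos backend instead acquires cores up to spark.cores.max immediately.

import org.apache.spark.{SparkConf, SparkContext}

object InitialExecutorsExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("initial-executors-demo")
      // Hypothetical Mesos master URL, shown only for illustration.
      .setMaster("mesos://zk://host:2181/mesos")
      .set("spark.dynamicAllocation.enabled", "true")
      // The external shuffle service is required for dynamic allocation.
      .set("spark.shuffle.service.enabled", "true")
      // Expected: only 2 executors come up at startup.
      .set("spark.dynamicAllocation.initialExecutors", "2")
      // Observed with the bug: cores are reserved up to this cap right away.
      .set("spark.cores.max", "16")

    val sc = new SparkContext(conf)
    // ... run the job; executor count should scale up from 2 as needed ...
    sc.stop()
  }
}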
Issue Links
- relates to SPARK-13162 Standalone mode does not respect `spark.dynamicAllocation.initialExecutors` (Resolved)