Details
- Type: Question
- Status: Resolved
- Priority: Trivial
- Resolution: Information Provided
- Affects Version: 3.5.1
- Fix Version: None
- Component: None
Description
I have been trying autoscaling in Kubernetes for Spark jobs. When the first job is triggered, worker pods scale up based on load, which works fine. But when a second job is submitted, it is not allocated any resources because the first job is consuming all of them.
The second job stays in the waiting state until the first job finishes. I have gone through the documentation on setting max cores in standalone mode, but that is not an ideal solution for us, since we are planning to autoscale based on load and the number of jobs submitted.
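For reference, this is roughly what I have looked at so far: the per-application core cap from the standalone docs, and dynamic allocation as the Kubernetes-side alternative. A sketch only; the master URLs, jar name, and limits are placeholders for my setup:

```shell
# Standalone mode: cap the cores one application may hold, so a second
# job can still be scheduled (spark.cores.max).
spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=8 \
  app.jar

# Kubernetes: dynamic allocation with shuffle tracking lets each job
# grow and shrink its executor set based on load, instead of holding a
# fixed reservation of the whole cluster.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  app.jar
```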
Is there any solution for this, or any alternatives?