Spark / SPARK-48673

Scheduling Across Applications in k8s mode


    Description

      I have been trying autoscaling on Kubernetes for Spark jobs. When the first job is triggered, worker pods scale up based on load, which works fine, but when a second job is submitted it is not allocated any resources, because the first job is consuming all of them.
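      For context, here is a minimal sketch of the kind of dynamic-allocation setup I mean (the app name and executor bound are illustrative, not my exact configuration):

          import org.apache.spark.sql.SparkSession

          // Sketch of dynamic allocation on Kubernetes. Kubernetes has no
          // external shuffle service, so shuffle tracking is what makes
          // dynamic allocation usable there.
          val spark = SparkSession.builder()
            .appName("job-one") // illustrative name
            .config("spark.dynamicAllocation.enabled", "true")
            .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
            .config("spark.dynamicAllocation.minExecutors", "1")
            // spark.dynamicAllocation.maxExecutors is left unset; it defaults
            // to infinity, so the first job can absorb every core the cluster
            // autoscaler can provide.
            .getOrCreate()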

      The second job stays in the WAITING state until the first job finishes. I have gone through the documentation on setting max cores in standalone mode, but that is not an ideal solution, since we are planning to autoscale based on load and on the jobs submitted.
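      The standalone-mode cap from the documentation is spark.cores.max; roughly, applying it looks like this (the value 8 is only illustrative):

          import org.apache.spark.sql.SparkSession

          // Standalone-mode workaround: statically cap the total cores one
          // application may take, leaving the rest for later jobs. A fixed
          // cap like this works against load-based autoscaling, which is
          // why it is not ideal here.
          val spark = SparkSession.builder()
            .appName("job-capped") // illustrative name
            .config("spark.cores.max", "8") // illustrative value
            .getOrCreate()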

      Is there any solution for this, or any alternatives?


          People

            Assignee: Unassigned
            Reporter: Samba Shiva (samba1112)
            Votes: 0
            Watchers: 2
