Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Currently, every parfor script triggers the lazy Spark context creation, independent of its input data size and script, in order to obtain memory budgets and the degree of parallelism. On small data, the Spark context creation dominates end-to-end execution time. We should improve this to a configuration-only analysis, which would avoid the context creation.
For example, here are the XS and S performance results for univariate statistics:
UnivariateStatistics on mbperftest/bivar/A_10k/data: 14
UnivariateStatistics on mbperftest/bivar/A_10k/data: 14
UnivariateStatistics on mbperftest/bivar/A_10k/data: 17
UnivariateStatistics on mbperftest/bivar/A_10k/data: 16
UnivariateStatistics on mbperftest/bivar/A_100k/data: 14
UnivariateStatistics on mbperftest/bivar/A_100k/data: 15
UnivariateStatistics on mbperftest/bivar/A_100k/data: 14
UnivariateStatistics on mbperftest/bivar/A_100k/data: 17
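A configuration-only analysis could look roughly like the following sketch, which derives the executor memory budget and the degree of parallelism from the SparkConf alone (reading spark-defaults/system properties) instead of creating a SparkContext. The class and method names (SparkConfAnalyzer, getExecutorMemoryBudget, getDefaultParallelism) are hypothetical illustrations, not the actual SystemML internals.

import org.apache.spark.SparkConf;

// Hypothetical sketch: obtain memory budget and parallelism from the
// Spark configuration only, without triggering SparkContext creation.
public class SparkConfAnalyzer {

    // Executor memory budget in bytes, read from configuration only.
    public static long getExecutorMemoryBudget() {
        SparkConf conf = new SparkConf(); // reads spark.* system properties, no context created
        String mem = conf.get("spark.executor.memory", "1g");
        return parseMemory(mem);
    }

    // Degree of parallelism estimated as executors x cores per executor.
    public static int getDefaultParallelism() {
        SparkConf conf = new SparkConf();
        int numExecutors = conf.getInt("spark.executor.instances", 1);
        int coresPerExecutor = conf.getInt("spark.executor.cores", 1);
        return numExecutors * coresPerExecutor;
    }

    // Minimal parser for memory strings such as "4g" or "512m".
    private static long parseMemory(String mem) {
        String s = mem.trim().toLowerCase();
        long factor = 1;
        if( s.endsWith("g") ) { factor = 1024L * 1024 * 1024; s = s.substring(0, s.length()-1); }
        else if( s.endsWith("m") ) { factor = 1024L * 1024; s = s.substring(0, s.length()-1); }
        return Long.parseLong(s) * factor;
    }
}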
Issue Links
- Is contained by: SYSTEMDS-1010 Perftest 0.11 release and related improvements (Resolved)
As it turned out, the unnecessary Spark context creation was due to (1) obtaining the Spark version (introduced with our version-specific memory management), (2) unnecessary parfor eager caching/partitioning, and (3) unnecessary setup of fair scheduling for local parfor workers. The fix essentially avoids these unnecessary Spark context creations, which applies to all parfor scripts and to scripts with unknowns (because these request the broadcast memory budget, which creates the Spark cluster config and, due to the version issue, always the Spark context).
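A minimal sketch of the deferred-creation idea, assuming a hypothetical holder class: the fair-scheduler configuration for concurrent local parfor workers is only applied when a context is actually requested, so configuration-only queries never trigger context creation. The names (LazySparkContextHolder, isCreated, get) are illustrative and not the actual SystemML code.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical sketch of deferred Spark context creation.
public class LazySparkContextHolder {
    private static JavaSparkContext _sc = null;

    // Lets callers check for an existing context without forcing its creation.
    public static synchronized boolean isCreated() {
        return _sc != null;
    }

    // Creates the context on first actual use; only then is fair scheduling
    // for concurrent local parfor workers configured.
    public static synchronized JavaSparkContext get() {
        if( _sc == null ) {
            SparkConf conf = new SparkConf();
            conf.set("spark.scheduler.mode", "FAIR");
            _sc = new JavaSparkContext(conf);
        }
        return _sc;
    }
}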
After applying the fix, the end-to-end runtimes for univariate statistics are much better: