Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- None
Description
We currently have 11 constantly failing tests in our federated test suite that do not surface as failures, because by the second test run the global DML execution mode has already been set to the expected value, "SPARK", before the federated workers are created.
The underlying issue is that the federated worker creates its execution context once at startup, based on the global DML execution mode. If that mode is not yet SPARK, the worker creates a plain ExecutionContext, and we later run into a ClassCastException when trying to cast this ExecutionContext to a SparkExecutionContext.
As a result, the test suite marks the failing tests as flaky, since on the second run the global execution mode is already set to SPARK.
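The failure pattern described above can be illustrated with a minimal, self-contained sketch. The class names and the run() helper below are hypothetical stand-ins (they only mirror the roles of SystemDS's ExecutionContext and SparkExecutionContext, not the real API): a worker builds its context from the global mode at startup, the mode flips to SPARK afterwards, and a later downcast fails.

```java
// Hypothetical stand-ins for the real SystemDS classes.
class ExecutionContext {}
class SparkExecutionContext extends ExecutionContext {}

enum ExecMode { SINGLE_NODE, SPARK }

public class FedWorkerCastSketch {
    // Global mode is still SINGLE_NODE when the worker starts (assumption
    // mirroring the first test run described in the report).
    static ExecMode globalMode = ExecMode.SINGLE_NODE;

    static ExecutionContext createContext() {
        // Context type is decided once, at startup, from the global mode.
        return globalMode == ExecMode.SPARK
            ? new SparkExecutionContext()
            : new ExecutionContext();
    }

    static String run() {
        ExecutionContext ec = createContext(); // plain ExecutionContext
        globalMode = ExecMode.SPARK;           // mode flips after worker startup
        try {
            // A Spark instruction now expects a SparkExecutionContext.
            SparkExecutionContext sec = (SparkExecutionContext) ec;
            return "cast ok: " + sec;
        } catch (ClassCastException e) {
            return "ClassCastException: context was created before mode was SPARK";
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

On a second run of a real test suite, the (static) global mode is already SPARK when the worker starts, so createContext() returns the Spark subclass and the cast succeeds, which is why the tests only appear flaky rather than broken.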
Issue Links
- links to
Commit 1e4f3e1a983c666da187296e8f0953857c827350 in systemds's branch refs/heads/main from ywcb00
[ https://gitbox.apache.org/repos/asf?p=systemds.git;h=1e4f3e1 ]
SYSTEMDS-3215 Fix federated execution contexts for spark instructions. Closes #1453.