Details
- Type: Sub-task
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 3.0.0
- Labels: None
Description
Copied from https://github.com/apache/spark/pull/23804#issuecomment-466198782
Under Java 11, tests fail with:
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.935 s <<< FAILURE! - in org.apache.spark.ml.regression.JavaGBTRegressorSuite
[ERROR] runDT(org.apache.spark.ml.regression.JavaGBTRegressorSuite) Time elapsed: 0.933 s <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make field jdk.internal.ref.PhantomCleanable jdk.internal.ref.PhantomCleanable.prev accessible: module java.base does not "opens jdk.internal.ref" to unnamed module @4212a0c8
  at org.apache.spark.ml.regression.JavaGBTRegressorSuite.runDT(JavaGBTRegressorSuite.java:65)
[INFO] Running org.apache.spark.ml.regression.JavaDecisionTreeRegressorSuite
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.43 s - in org.apache.spark.ml.regression.JavaDecisionTreeRegressorSuite
[INFO] Running org.apache.spark.ml.regression.JavaRandomForestRegressorSuite
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.507 s <<< FAILURE! - in org.apache.spark.ml.regression.JavaRandomForestRegressorSuite
[ERROR] runDT(org.apache.spark.ml.regression.JavaRandomForestRegressorSuite) Time elapsed: 0.506 s <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make field jdk.internal.ref.PhantomCleanable jdk.internal.ref.PhantomCleanable.prev accessible: module java.base does not "opens jdk.internal.ref" to unnamed module @4212a0c8
  at org.apache.spark.ml.regression.JavaRandomForestRegressorSuite.runDT(JavaRandomForestRegressorSuite.java:88)
Stack trace:
  at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:337)
  at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:281)
  at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:176)
  at java.base/java.lang.reflect.Field.setAccessible(Field.java:170)
  at org.apache.spark.util.SizeEstimator$.$anonfun$getClassInfo$2(SizeEstimator.scala:337)
  at org.apache.spark.util.SizeEstimator$.$anonfun$getClassInfo$2$adapted(SizeEstimator.scala:331)
  at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
  at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
  at org.apache.spark.util.SizeEstimator$.getClassInfo(SizeEstimator.scala:331)
  at org.apache.spark.util.SizeEstimator$.getClassInfo(SizeEstimator.scala:325)
  at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:223)
  at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:202)
  at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:70)
  at org.apache.spark.util.collection.SizeTracker.takeSample(SizeTracker.scala:78)
  at org.apache.spark.util.collection.SizeTracker.afterUpdate(SizeTracker.scala:70)
  at org.apache.spark.util.collection.SizeTracker.afterUpdate$(SizeTracker.scala:67)
  at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
  at org.apache.spark.storage.memory.DeserializedValuesHolder.storeValue(MemoryStore.scala:665)
  at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:222)
  at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
  at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1166)
  at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1092)
  at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1157)
  at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:915)
  at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1482)
  at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:133)
  at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:91)
  at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
  at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
  at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1427)
  at org.apache.spark.ml.regression.GBTRegressionModel.transformImpl(GBTRegressor.scala:249)
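For context, the failing frame is the reflective field walk in SizeEstimator.getClassInfo, which calls setAccessible(true) on every non-static reference field so the estimator can follow those pointers later. A rough Scala sketch of that pattern (illustrative only, not the actual Spark source; pointerFields is a made-up helper name):

import java.lang.reflect.{Field, Modifier}

// Rough sketch of the walk visible in the stack trace: collect the non-static,
// non-primitive fields of a class and force them accessible so their values can
// be followed when sizing an object graph.
def pointerFields(cls: Class[_]): Seq[Field] =
  for {
    field <- cls.getDeclaredFields.toSeq
    if !Modifier.isStatic(field.getModifiers)
    if !field.getType.isPrimitive
  } yield {
    // On Java 9+ this throws InaccessibleObjectException when the declaring class
    // lives in a module that is not opened to us, e.g. jdk.internal.ref.PhantomCleanable.prev.
    field.setAccessible(true)
    field
  }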
The fix is to exclude from the estimate any fields that can't be accessed anyway, whether because of this error or really any SecurityException from setAccessible. The impact on the estimate should be fairly limited, given that it only caused failures in a few tests.
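A minimal sketch of that idea (assumed shape only, not the exact merged patch; trySetAccessible is an illustrative name):

// Tolerate a refused setAccessible instead of letting the estimator fail.
def trySetAccessible(field: java.lang.reflect.Field): Boolean =
  try {
    field.setAccessible(true)
    true
  } catch {
    // A security manager can reject this with SecurityException; Java 9+ throws
    // InaccessibleObjectException, which is a RuntimeException that only exists
    // on Java 9+, so match it by name to keep compiling on Java 8.
    case _: SecurityException => false
    case e: RuntimeException
        if e.getClass.getSimpleName == "InaccessibleObjectException" => false
  }

Fields for which this returns false would simply be left out of the per-class field list the estimator follows, so the walk continues and the estimate just omits those (typically JDK-internal) fields.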
One other open question: why did this only come up in a very few cases, like broadcasting GBTRegressionModel? And why PhantomReference, which isn't used in (non-test) Spark code? I can't see anything odd about this model class.