Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Incompatible change, Reviewed
- Release Note: Changes group name of hbase metrics from "HBase Counters" to "HBaseCounters".
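For context, a minimal sketch (not from the patch) of why the space in the old group name matters: Spark's REPL class loader embeds the candidate resource-bundle class name in a spark:// URI, and a space is an illegal URI path character. Host and port below are copied from the log in the Description.

{code:scala}
import java.net.URI
import scala.util.Try

// New group name "HBaseCounters": no space, so the class-server path parses cleanly.
println(Try(new URI("spark://192.168.1.139:61037/classes/HBaseCounters_en_US.class")))

// Old group name "HBase Counters": the space (path index 41) is illegal in a URI,
// producing the URISyntaxException shown in the Description below.
println(Try(new URI("spark://192.168.1.139:61037/classes/HBase Counters_en_US.class")))
{code}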
Description
Messing w/ Spark counting RDD rows, Spark dumps out the following complaint:
2018-11-07 20:03:29,132 ERROR [Executor task launch worker for task 0] repl.ExecutorClassLoader: Failed to check existence of class HBase Counters_en_US on REPL class server at spark://192.168.1.139:61037/classes
java.net.URISyntaxException: Illegal character in path at index 41: spark://192.168.1.139:61037/classes/HBase Counters_en_US.class
	at java.net.URI$Parser.fail(URI.java:2848)
	at java.net.URI$Parser.checkChars(URI.java:3021)
	at java.net.URI$Parser.parseHierarchical(URI.java:3105)
	at java.net.URI$Parser.parse(URI.java:3053)
	at java.net.URI.<init>(URI.java:588)
	at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:328)
	at org.apache.spark.repl.ExecutorClassLoader.org$apache$spark$repl$ExecutorClassLoader$$getClassFileInputStreamFromSparkRPC(ExecutorClassLoader.scala:95)
	at org.apache.spark.repl.ExecutorClassLoader$$anonfun$1.apply(ExecutorClassLoader.scala:62)
	at org.apache.spark.repl.ExecutorClassLoader$$anonfun$1.apply(ExecutorClassLoader.scala:62)
	at org.apache.spark.repl.ExecutorClassLoader.findClassLocally(ExecutorClassLoader.scala:167)
	at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:85)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.util.ResourceBundle$Control.newBundle(ResourceBundle.java:2649)
	at java.util.ResourceBundle.loadBundle(ResourceBundle.java:1510)
	at java.util.ResourceBundle.findBundle(ResourceBundle.java:1474)
	at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1370)
	at java.util.ResourceBundle.getBundle(ResourceBundle.java:1091)
	at org.apache.hadoop.mapreduce.util.ResourceBundles.getBundle(ResourceBundles.java:37)
	at org.apache.hadoop.mapreduce.util.ResourceBundles.getValue(ResourceBundles.java:56)
	at org.apache.hadoop.mapreduce.util.ResourceBundles.getCounterGroupName(ResourceBundles.java:77)
	at org.apache.hadoop.mapreduce.counters.CounterGroupFactory.newGroup(CounterGroupFactory.java:94)
	at org.apache.hadoop.mapreduce.counters.AbstractCounters.getGroup(AbstractCounters.java:226)
	at org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:153)
	at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl$DummyReporter.getCounter(TaskAttemptContextImpl.java:110)
	at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.getCounter(TaskAttemptContextImpl.java:76)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.updateCounters(TableRecordReaderImpl.java:298)
	at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.updateCounters(TableRecordReaderImpl.java:286)
	at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:257)
	at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:133)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:220)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:214)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1837)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1168)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1168)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
... each time I run an RDD count.
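For reproduction context, a minimal spark-shell sketch of the kind of RDD count that trips this, assuming a scan over an HBase table via TableInputFormat (the table name is a placeholder, not from this issue):

{code:scala}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(TableInputFormat.INPUT_TABLE, "test_table") // placeholder table name

// sc is the spark-shell SparkContext. The count drives TableRecordReaderImpl.updateCounters,
// which looks up the counter-group resource bundle and hits the URISyntaxException above.
val rdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])
rdd.count()
{code}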
Attachments