Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Hive makes use of MapReduce counters for statistics and possibly for other purposes. For Hive on Spark, we should achieve the same functionality using Spark's accumulators.
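As a rough sketch of the counter side, Spark's accumulator API can stand in for a MapReduce counter: tasks add to the accumulator on executors, and the driver reads the merged value once the job finishes. Everything below is illustrative rather than Hive's actual code: the counter name RECORDS_OUT, the local driver setup, and the toy RDD are all assumptions.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

public class CounterSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("hive-counter-sketch")
        .setMaster("local[2]"); // local master only for this toy example
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      // Hypothetical counter name; Hive's real counter names live in its operator code.
      LongAccumulator recordsOut = jsc.sc().longAccumulator("RECORDS_OUT");

      JavaRDD<String> rows = jsc.parallelize(Arrays.asList("a", "b", "c"));
      // Each task increments the accumulator; Spark merges the partial counts.
      rows.foreach(row -> recordsOut.add(1L));

      // After the job completes, the merged value plays the role of a MapReduce counter.
      System.out.println("RECORDS_OUT = " + recordsOut.value());
    }
  }
}
```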
Hive also traditionally collects metrics from MapReduce jobs. Spark jobs very likely publish a different set of metrics, which, if made available, would help users gain insight into their Spark jobs. Thus, we should obtain these metrics and make them available, just as we do for MapReduce.
This task therefore includes:
- identify Hive's existing functionality w.r.t. counters, statistics, and metrics;
- design and implement the same functionality in Spark.
Please refer to the design document for more information: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark#HiveonSpark-CountersandMetrics
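On the metrics side, Spark delivers per-task metrics to SparkListeners registered on the driver, which gives a natural collection point. The sketch below is only an assumed shape, not the actual Hive on Spark implementation: the listener class name HiveMetricsListener and the two aggregated fields are illustrative, and a real implementation would map many more of Spark's task metrics onto whatever Hive reports for MapReduce jobs today.

```java
import org.apache.spark.executor.TaskMetrics;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Hypothetical listener that aggregates a couple of Spark's per-task metrics.
public class HiveMetricsListener extends SparkListener {
  private long bytesRead;
  private long executorRunTimeMs;

  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    TaskMetrics metrics = taskEnd.taskMetrics();
    if (metrics != null) {
      // Two example metrics Spark publishes per task; a real implementation
      // would surface many more and expose them through Hive's reporting.
      bytesRead += metrics.inputMetrics().bytesRead();
      executorRunTimeMs += metrics.executorRunTime();
    }
  }

  public long getBytesRead() { return bytesRead; }
  public long getExecutorRunTimeMs() { return executorRunTimeMs; }
}
```

Under this assumption, the listener would be registered once on the driver via sparkContext.addSparkListener(new HiveMetricsListener()) and queried after the job completes.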
Attachments
Issue Links
- depends upon
  - HIVE-7706 ClassCastException trying to CTAS table (Resolved)
  - HIVE-7551 expand spark accumulator to support hive counter [Spark Branch] (Resolved)
  - HIVE-7552 Collect spark job statistic through spark metrics [Spark Branch] (Resolved)
- incorporates
  - HIVE-7893 Find a way to get a job identifier when submitting a spark job [Spark Branch] (Resolved)
- is depended upon by
  - HIVE-7772 Add tests for order/sort/distribute/cluster by query [Spark Branch] (Resolved)