Details
Type: Bug
Status: Patch Available
Priority: Major
Resolution: Unresolved
Affects Version/s: 0.15.0
Fix Version/s: None
Component/s: None
Environment: pig:0.15, tez:0.6.2, hadoop:2.5.0
Description
After running several Pig jobs, the tez-ui server crashed because the content returned by the timeline server exceeded 80 MB.
When I query:
http://timeline:48188/ws/v1/timeline/TEZ_DAG_ID?limit=51
the response content is extremely long, like this:
{"vertexName":"scope-18","processorClass":"org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor","userPayloadAsText":"{\"desc\":\"wordcount[4,12] (GROUP_BY)\",\"config\":{\"dfs.datanode.data.dir\":\"file:\\/\\/\\/search\\/hadoop\\/dfs_data,\",\"dfs.namenode.checkpoint.txns\":\"1000000\",\"s3.replication\":\"3\",\"mapreduce.output.fileoutputformat.compress.type\":\"RECORD\",\"mapreduce.jobtracker.jobhistory.lru.cache.size\":\"5\",\"hadoop.http.filter.initializers\":\"org.apache.hadoop.http.lib.StaticUserWebFilter\",\"yarn.nodemanager.keytab\":\"\\/etc\\/krb5.keytab\",\"nfs.mountd.port\":\"4242\",\"yarn.resourcemanager.zk-acl\":\"world:anyone:rwcda\",\"dfs.https.server.keystore.resource\":\"ssl-server.xml\",\"mapr......
I am surprised that every vertex carries a "userPayloadAsText" field, and the embedded "config" information is particularly large.
When I run a more complex Pig-on-Tez job, the tez-ui server crashes easily.
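To substantiate how much of the response is the per-vertex payload, a sketch that sums the "userPayloadAsText" sizes. The entities -> otherinfo -> dagPlan -> vertices path is an assumption based on the response snippet above, not a confirmed schema; adjust if your payload differs.

import json
import urllib.request

URL = "http://timeline:48188/ws/v1/timeline/TEZ_DAG_ID?limit=51"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# ASSUMPTION: entities -> otherinfo -> dagPlan -> vertices, per the
# JSON snippet in this report; real layout may differ.
total = 0
for entity in data.get("entities", []):
    vertices = entity.get("otherinfo", {}).get("dagPlan", {}).get("vertices", [])
    for vertex in vertices:
        size = len(vertex.get("userPayloadAsText", "") or "")
        total += size
        print("%-12s %8.1f KB" % (vertex.get("vertexName", "?"), size / 1024))
print("total userPayloadAsText: %.1f MB" % (total / (1024 * 1024)))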