Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version: 2.7.1
Description
While preparing to debug a container-assignment problem, I found the yarn-resourcemanager.log flooded with unimportant records about container assignment. There are so many entries like the following that it is difficult to find the important information directly:
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,971 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,976 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,981 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,986 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,991 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,996 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,001 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,007 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,012 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,017 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,022 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,027 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,032 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,038 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,050 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,057 DEBUG
These records are so numerous because this message is printed at the start of every container-assignment attempt, regardless of whether the assignment succeeds or fails.
The complete YARN log is available in the attached log file, which shows just how many of these records there are.
In addition, emitting so many of these messages slows down container assignment. Perhaps we should change this log level to a lower one, such as TRACE.
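The proposed change can be sketched as follows. This is a minimal, self-contained illustration using `java.util.logging` (the real FSAppAttempt code uses a different logging facade, and the class and method below are hypothetical stand-ins): the per-node message is demoted from the debug level to the trace level (FINE to FINEST here) and guarded by a level check, so that when the log is configured for ordinary debugging the message string is never even built.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelSketch {
    // Hypothetical stand-in for the scheduler's logger.
    static final Logger LOG = Logger.getLogger("FSAppAttempt");

    // Sketch: the per-node message demoted to trace level (FINEST) and
    // guarded, so string concatenation is skipped when trace is disabled.
    static void offerNode(String appId, boolean reserved) {
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Node offered to app: " + appId + " reserved: " + reserved);
        }
    }

    public static void main(String[] args) {
        // With the logger set to debug level (FINE), trace-level records
        // like the one above are filtered out entirely.
        LOG.setLevel(Level.FINE);
        offerNode("application_1449458968698_0011", false);
        System.out.println("trace enabled: " + LOG.isLoggable(Level.FINEST));
    }
}
```

With this guard in place, a cluster operator debugging container assignment at DEBUG level no longer sees the flood of per-node messages, but can still opt into them by enabling TRACE.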