Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 0.20-append, 0.23.0
- Fix Version/s: None
- Component/s: None
Description
In our environment, after a long run of about 3 days, the Backup NameNode starts using 100% CPU and stops accepting any calls.
Thread dump
"IPC Server Responder" daemon prio=10 tid=0x00007f86c41c6800 nid=0x3b2a runnable [0x00007f86ce579000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x00007f86d67e2a20> (a sun.nio.ch.Util$1)
        - locked <0x00007f86d67e2a08> (a java.util.Collections$UnmodifiableSet)
        - locked <0x00007f86d67e26a8> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:501)
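The dump shows the Responder thread inside Selector.select(). For context, a minimal sketch of that kind of write-selector loop is below; this is not the actual Server.Responder code, and the class name and comments are invented for illustration. In the failure mode reported here, select() stops blocking and returns 0 immediately on every call, so a loop like this spins at 100% CPU while doing no useful work.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Illustrative sketch of a responder-style write-selector loop.
public class ResponderLoopSketch implements Runnable {
  private final Selector writeSelector;

  public ResponderLoopSketch(Selector writeSelector) {
    this.writeSelector = writeSelector;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        // Normally blocks until a registered channel is writable (or the
        // selector is woken up). In the epoll-spin case it returns 0 at once.
        writeSelector.select();
        Iterator<SelectionKey> it = writeSelector.selectedKeys().iterator();
        while (it.hasNext()) {
          SelectionKey key = it.next();
          it.remove();
          if (key.isValid() && key.isWritable()) {
            // Write pending RPC responses on this channel (omitted).
          }
        }
      } catch (IOException e) {
        // Log and continue; an exception here every iteration can also spin.
      }
    }
  }
}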
It looks like we are running into an issue similar to this Jetty one: http://jira.codehaus.org/browse/JETTY-937
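For reference, the workaround Jetty (and later Netty) adopted for this JDK NIO epoll bug is to detect a Selector that keeps waking up immediately with nothing selected and rebuild it, re-registering all channels on a fresh Selector. The sketch below is only an illustration of that idea; the class name, method names, and threshold (SelectorSpinWorkaround, SPIN_THRESHOLD) are assumptions made up for this example and are not code from Hadoop or Jetty.

import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Hypothetical illustration of the "rebuild the broken Selector" workaround.
public final class SelectorSpinWorkaround {
  private static final int SPIN_THRESHOLD = 512; // consecutive premature wakeups

  private Selector selector;
  private int prematureReturns;

  public SelectorSpinWorkaround(Selector selector) {
    this.selector = selector;
  }

  /** Select with spin detection; callers pass a positive timeout in ms. */
  public int select(long timeoutMs) throws IOException {
    long before = System.nanoTime();
    int ready = selector.select(timeoutMs);
    long elapsedMs = (System.nanoTime() - before) / 1_000_000L;

    if (ready == 0 && elapsedMs < timeoutMs) {
      // select() returned early with nothing ready: possible epoll spin.
      if (++prematureReturns >= SPIN_THRESHOLD) {
        rebuildSelector();
        prematureReturns = 0;
      }
    } else {
      prematureReturns = 0;
    }
    return ready;
  }

  /** Move every valid registration to a fresh Selector and close the old one. */
  private void rebuildSelector() throws IOException {
    Selector newSelector = Selector.open();
    for (SelectionKey key : selector.keys()) {
      if (!key.isValid()) {
        continue;
      }
      SelectableChannel channel = key.channel();
      int interestOps = key.interestOps();   // read before cancel()
      Object attachment = key.attachment();
      key.cancel();
      channel.register(newSelector, interestOps, attachment);
    }
    selector.close();
    selector = newSelector;
  }
}

The other common mitigation is simply upgrading to a JDK build in which the underlying epoll spurious-wakeup bug is reportedly fixed.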
Attachments
Issue Links
- is duplicated by
  - HADOOP-7304 BackUpNameNode is using 100% CPU and not accepting any requests. (Resolved)
- is related to
  - MAPREDUCE-2386 TT jetty server stuck in tight loop around epoll_wait (Open)
  - HADOOP-3132 DFS writes stuck occationally (Closed)