Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.22.0
- Component/s: None
- Hadoop Flags: Reviewed
Description
Currently, Hadoop RPC does not time out as long as the RPC server is alive. Instead, the RPC client sends a ping to the server whenever a socket timeout occurs; if the server is still alive, the client continues to wait rather than throwing a SocketTimeoutException. This prevents a client from retrying against a busy server and making it even busier, which works well when the RPC server is the NameNode.
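As an illustration of that behavior, here is a minimal Java sketch. The names (PingingInputStream, sendPing, PING_CALL_ID) and the one-byte ping marker are invented for this example; the real logic lives in Hadoop's ipc.Client. Each socket timeout is swallowed, a ping is sent, and the read is retried indefinitely.

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.SocketTimeoutException;

// Wraps the connection's input stream. Every socket timeout triggers a ping
// instead of an error, so a slow-but-alive server never fails the call.
class PingingInputStream extends FilterInputStream {
  // Illustrative ping marker; the actual wire format is Hadoop's, not shown here.
  private static final int PING_CALL_ID = -1;
  private final OutputStream out;      // the connection's output stream

  PingingInputStream(InputStream in, OutputStream out) {
    super(in);
    this.out = out;
  }

  @Override
  public int read() throws IOException {
    while (true) {
      try {
        return super.read();           // blocks for at most the socket timeout
      } catch (SocketTimeoutException e) {
        sendPing();                    // server may just be busy: keep waiting
      }
    }
  }

  // If the server is actually down, this write fails and the resulting
  // IOException propagates, which is how a dead server is detected today.
  protected void sendPing() throws IOException {
    out.write(PING_CALL_ID);
    out.flush();
  }
}
{code}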
But Hadoop RPC is also used for some client-to-DataNode communication, for example to get a replica's length. When a client comes across a problematic DataNode, it gets stuck and cannot switch to a different DataNode. In that case it would be better for the client to receive a timeout exception.
I plan to add a new configuration property, ipc.client.max.pings, that specifies the maximum number of pings a client may send. If no response is received after the specified number of pings, a SocketTimeoutException is thrown. If the property is not set, a client keeps the current semantics and waits forever.
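A hypothetical sketch of the proposed semantics, extending the example above. The bound would come from the new property, e.g. conf.getInt("ipc.client.max.pings", -1) on a Hadoop Configuration; the class name and the -1 unset default are assumptions from this description, not the committed patch.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.SocketTimeoutException;

// Same read loop, but bounded: after maxPings unanswered pings the
// SocketTimeoutException is rethrown to the caller.
class BoundedPingInputStream extends PingingInputStream {
  private final int maxPings; // e.g. conf.getInt("ipc.client.max.pings", -1)

  BoundedPingInputStream(InputStream in, OutputStream out, int maxPings) {
    super(in, out);
    this.maxPings = maxPings;
  }

  @Override
  public int read() throws IOException {
    int pings = 0;
    while (true) {
      try {
        return in.read();              // FilterInputStream's wrapped stream
      } catch (SocketTimeoutException e) {
        if (maxPings >= 0 && pings++ >= maxPings) {
          throw e;                     // give up: caller can switch DataNodes
        }
        sendPing();                    // maxPings < 0 (unset): wait forever as today
      }
    }
  }
}
{code}

Keeping wait-forever as the unset default means existing NameNode clients are unaffected; only callers that opt in, such as client-to-DataNode RPCs, would see the new SocketTimeoutException.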
Attachments
Issue Links
- blocks: HDFS-1330 Make RPCs to DataNodes timeout (Closed)
- depends upon: HADOOP-6907 Rpc client doesn't use the per-connection conf to figure out server's Kerberos principal (Closed)
- duplicates: HADOOP-7488 When Namenode network is unplugged, DFSClient operations waits for ever (Resolved)
- relates to: YARN-2578 NM does not failover timely if RM node network connection fails (Resolved)