Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Description
We found that a huge number of 'slow request' entries appeared when running ATS in full-cluster mode.
The slow request log:
[Feb 2 14:30:15.336] Server {0x2ba5bc807700} ERROR: <HttpSM.cc:6896 (update_stats)> [1244] Slow Request: client_ip: x.x.x.x:xxx url: xxxxx status: 200 unique id: redirection_tries: 0 bytes: 1050551 fd: 0 client state: 6 server state: 0 ua_begin: 0.000 ua_first_read: 0.000 ua_read_header_done: 0.000 cache_open_read_begin: 0.000 cache_open_read_end: 0.225 dns_lookup_begin: -1.000 dns_lookup_end: -1.000 server_connect: -1.000 server_first_read: -1.000 server_read_header_done: -1.000 server_close: -1.000 ua_close: 90.050 sm_finish: 90.057 plugin_active: -1.000 plugin_total: -1.000
Every client state is 6, which means 'client active timeout' (the condition is the same whether we configure transaction_active_timeout_in as 90 or 900). The problem is related to the HTTP object size: when object_size > proxy.config.cache.target_fragment_size the problem appears, and when target_fragment_size is raised above the object size the problem disappears. Note that the 1050551-byte object in the log above is just over the stock 1 MB (1048576) default of target_fragment_size. The relevant settings are sketched below.
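For reference, the knobs involved, in records.config syntax (90 matches one of our transaction_active_timeout_in tests; 1048576 is the stock default for target_fragment_size; the raised value is only an illustrative workaround for ~1 MB objects, not a recommendation):

CONFIG proxy.config.http.transaction_active_timeout_in INT 90
CONFIG proxy.config.cache.target_fragment_size INT 1048576
# Workaround we verified: raise target_fragment_size above the object size,
# e.g. 2 MB for the 1050551-byte objects in the log above:
# CONFIG proxy.config.cache.target_fragment_size INT 2097152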
We are sure the cause is clustering: in no-cluster mode the problem does not occur.
Finally, our tracking so far:
The 'doc_size' read in 'HttpSM::setup_cache_read_transfer' is -1, which matches the timeout behavior (the transfer never completes, so the client active timeout fires);
backtracking, we found that 'doc_size' was already -1 when 'HttpSM::state_cache_open_read' was called (see the sketch below).
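A minimal sketch of the kind of instrumentation used to confirm this (the object_size_get() accessor and the "http_cache" debug tag are our assumptions about this code path; exact names may differ across versions):

// In HttpSM::state_cache_open_read (sketch; accessor name is an assumption)
int64_t doc_size = t_state.cache_info.object_read->object_size_get();
Debug("http_cache", "[state_cache_open_read] doc_size = %" PRId64, doc_size);
if (doc_size < 0) {
  // Reproduced: in cluster mode with object_size > target_fragment_size,
  // the remote read leaves the size unset and the transfer stalls until
  // the client active timeout fires (client state 6).
  Debug("http_cache", "doc_size unset (-1) after cluster cache_open_read");
}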