IMPALA-1332

Check failed: mem_tracker()->consumption() == 0 (17039360 vs. 0) Leaked memory.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: Impala 2.0
    • Fix Version/s: Impala 2.0
    • Component/s: None
    • Labels: None
    • commit 7cfed977263d89d2faf04409965a6ee349f3e6e5
      Author: Nong Li <nong@cloudera.com>
      Date: Wed Sep 24 11:24:03 2014 -0700

          Update distinctpc/pcsa to return bigint.

    Description

      This failure has randomly come up twice while running simple queries. If I rerun the in-flight queries (3 of them), they all run fine, though I guess that's expected for a memory leak. I don't have a good way to reproduce this; let me know if you have ideas. All the queries that were run are logged. There are 3 clients running queries concurrently.
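      For context, the check that fires is the consumption assertion ExecNode::Close performs after it has closed its children (exec-node.cc:167 in the stack trace below); in the trace the node being closed at that point is the partitioned hash join (HASH_JOIN_NODE id=3, 16.25 MB in the dump below). The following is a minimal standalone sketch of that pattern, assuming a simplified hierarchical MemTracker, a made-up LeakyJoinNode, and a plain assert in place of the glog DCHECK; it is illustrative only, not the actual Impala sources:

      // Minimal standalone model of the failing check; the MemTracker and node
      // classes here are simplified assumptions, not the actual Impala code.
      #include <cassert>
      #include <cstdint>
      #include <vector>

      // Hierarchical tracker: Consume()/Release() propagate to every ancestor,
      // so a child's usage is also visible at the fragment level.
      class MemTracker {
       public:
        explicit MemTracker(MemTracker* parent = nullptr) : parent_(parent) {}
        void Consume(int64_t bytes) {
          for (MemTracker* t = this; t != nullptr; t = t->parent_) t->consumption_ += bytes;
        }
        void Release(int64_t bytes) { Consume(-bytes); }
        int64_t consumption() const { return consumption_; }

       private:
        MemTracker* parent_;
        int64_t consumption_ = 0;
      };

      class ExecNode {
       public:
        explicit ExecNode(MemTracker* parent_tracker) : mem_tracker_(parent_tracker) {}
        virtual ~ExecNode() {}

        virtual void Close() {
          if (is_closed_) return;
          is_closed_ = true;
          for (ExecNode* child : children_) child->Close();
          // The check that fails in exec-node.cc:167: once a node and its
          // children are closed, nothing may still be charged to its tracker.
          assert(mem_tracker()->consumption() == 0 && "Leaked memory.");
        }

        MemTracker* mem_tracker() { return &mem_tracker_; }

       protected:
        MemTracker mem_tracker_;
        std::vector<ExecNode*> children_;
        bool is_closed_ = false;
      };

      // A join-like node that allocates a buffer but "forgets" to release it
      // in Close() -- the situation the check is designed to catch.
      class LeakyJoinNode : public ExecNode {
       public:
        using ExecNode::ExecNode;
        void Open() { mem_tracker()->Consume(17039360); }  // ~16.25 MB, as in the log
        void Close() override {
          // A correct Close() would call mem_tracker()->Release(17039360) here.
          ExecNode::Close();  // trips the assertion with 17039360 vs. 0
        }
      };

      int main() {
        MemTracker fragment_tracker;
        LeakyJoinNode join(&fragment_tracker);
        join.Open();
        join.Close();  // aborts, mirroring the "Leaked memory" DCHECK
        return 0;
      }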

      Impalad log

      I0930 20:12:44.431531 34731 partitioned-hash-join-node.cc:517] PHJ(node_id=3) partitioned(level=0) 8 rows into:
        0 not spilled (fraction=50.00%)
          #rows:4
        1 not spilled (fraction=0.00%)
          #rows:0
        2 not spilled (fraction=50.00%)
          #rows:4
        3 not spilled (fraction=0.00%)
          #rows:0
      I0930 20:12:44.445114 34700 data-stream-mgr.cc:128] DeregisterRecvr(): fragment_instance_id=c54f61815e40cc68:42fa63c54beb17bd, node=5
      I0930 20:12:44.445132 34700 data-stream-recvr.cc:230] cancelled stream: fragment_instance_id_=c54f61815e40cc68:42fa63c54beb17bd node_id=5
      I0930 20:12:44.445181 34700 data-stream-mgr.cc:128] DeregisterRecvr(): fragment_instance_id=c54f61815e40cc68:42fa63c54beb17bd, node=6
      I0930 20:12:44.445190 34700 data-stream-recvr.cc:230] cancelled stream: fragment_instance_id_=c54f61815e40cc68:42fa63c54beb17bd node_id=6
      I0930 20:12:45.139888 23071 impala-server.cc:1083] CancelPlanFragment(): instance_id=9a434b0eedb7d751:9ed413a943f8979e
      I0930 20:12:45.139910 23071 plan-fragment-executor.cc:530] Cancel(): instance_id=9a434b0eedb7d751:9ed413a943f8979e
      I0930 20:12:45.139917 23071 data-stream-mgr.cc:156] cancelling all streams for fragment=9a434b0eedb7d751:9ed413a943f8979e
      I0930 20:12:45.139928 23071 data-stream-recvr.cc:230] cancelled stream: fragment_instance_id_=9a434b0eedb7d751:9ed413a943f8979e node_id=5
      I0930 20:12:45.139938 23071 data-stream-recvr.cc:230] cancelled stream: fragment_instance_id_=9a434b0eedb7d751:9ed413a943f8979e node_id=6
      I0930 20:12:45.139947 23071 data-stream-recvr.cc:230] cancelled stream: fragment_instance_id_=9a434b0eedb7d751:9ed413a943f8979e node_id=7
      I0930 20:12:45.142891 34022 data-stream-mgr.cc:128] DeregisterRecvr(): fragment_instance_id=9a434b0eedb7d751:9ed413a943f8979e, node=5
      I0930 20:12:45.142938 34022 data-stream-mgr.cc:128] DeregisterRecvr(): fragment_instance_id=9a434b0eedb7d751:9ed413a943f8979e, node=6
      F0930 20:12:45.143110 34022 exec-node.cc:167] Check failed: mem_tracker()->consumption() == 0 (17039360 vs. 0) Leaked memory.
      Fragment 9a434b0eedb7d751:9ed413a943f8979e: Consumption=16.27 MB
        UDFs: Consumption=0
        CROSS_JOIN_NODE (id=4): Consumption=0
        HASH_JOIN_NODE (id=3): Consumption=16.25 MB
        EXCHANGE_NODE (id=5): Consumption=0
        EXCHANGE_NODE (id=6): Consumption=0
        EXCHANGE_NODE (id=7): Consumption=0
        DataStreamRecvr: Consumption=0
        DataStreamSender: Consumption=24.00 KB
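      (For what it's worth, the failed value of 17039360 bytes is exactly 16.25 MB, matching the HASH_JOIN_NODE (id=3) line above; the fragment total of 16.27 MB is that plus the 24.00 KB DataStreamSender, so the unreleased memory appears to be charged entirely to the hash-join node's tracker.)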
      

      stack trace

      (gdb) bt
      #0  0x0000003a0ca32635 in raise () from /lib64/libc.so.6
      #1  0x0000003a0ca33e15 in abort () from /lib64/libc.so.6
      #2  0x0000000001e7ebc9 in google::DumpStackTraceAndExit () at src/utilities.cc:147
      #3  0x0000000001e762cd in google::LogMessage::Fail () at src/logging.cc:1296
      #4  0x0000000001e79d57 in google::LogMessage::SendToLog (this=0x7ff2fbda3df0)
          at src/logging.cc:1250
      #5  0x0000000001e792b6 in google::LogMessage::Flush (this=0x7ff2fbda3df0)
          at src/logging.cc:1119
      #6  0x0000000001e7a1ed in google::LogMessageFatal::~LogMessageFatal (this=0x7ff2fbda3df0, 
          __in_chrg=<value optimized out>) at src/logging.cc:1817
      #7  0x00000000013f362d in impala::ExecNode::Close (this=0x71cb500, state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/exec-node.cc:168
      #8  0x00000000014f5522 in impala::BlockingJoinNode::Close (this=0x71cb500, state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/blocking-join-node.cc:96
      #9  0x00000000014b9993 in impala::PartitionedHashJoinNode::Close (this=0x71cb500, 
          state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/partitioned-hash-join-node.cc:186
      #10 0x00000000013f34cc in impala::ExecNode::Close (this=0x80dac40, state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/exec-node.cc:164
      #11 0x00000000014f5522 in impala::BlockingJoinNode::Close (this=0x80dac40, state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/blocking-join-node.cc:96
      #12 0x00000000014f74de in impala::CrossJoinNode::Close (this=0x80dac40, state=0xe363800)
          at /data/9/query-gen/Impala/be/src/exec/cross-join-node.cc:49
      #13 0x00000000013c14b7 in impala::PlanFragmentExecutor::Close (this=0x72c0ad0)
          at /data/9/query-gen/Impala/be/src/runtime/plan-fragment-executor.cc:571
      #14 0x000000000106a6d6 in impala::ImpalaServer::FragmentExecState::Exec (this=0x72c0900)
          at /data/9/query-gen/Impala/be/src/service/fragment-exec-state.cc:50
      #15 0x0000000000f95ebc in impala::ImpalaServer::RunExecPlanFragment (this=0x6538c00, 
          exec_state=0x72c0900) at /data/9/query-gen/Impala/be/src/service/impala-server.cc:1157
      #16 0x0000000000ffbac4 in boost::_mfi::mf1<void, impala::ImpalaServer, impala::ImpalaServer::FragmentExecState*>::operator() (this=0xe7c2820, p=0x6538c00, a1=0x72c0900)
          at /usr/include/boost/bind/mem_fn_template.hpp:165
      #17 0x0000000000ffa1a7 in boost::_bi::list2<boost::_bi::value<impala::ImpalaServer*>, boost::_bi::value<impala::ImpalaServer::FragmentExecState*> >::operator()<boost::_mfi::mf1<void, impala::ImpalaServer, impala::ImpalaServer::FragmentExecState*>, boost::_bi::list0> (
          this=0xe7c2830, f=..., a=...) at /usr/include/boost/bind/bind.hpp:313
      #18 0x0000000000ff65a5 in boost::_bi::bind_t<void, boost::_mfi::mf1<void, impala::ImpalaServer, impala::ImpalaServer::FragmentExecState*>, boost::_bi::list2<boost::_bi::value<impala::ImpalaServer*>, boost::_bi::value<impala::ImpalaServer::FragmentExecState*> > >::operator()
          (this=0xe7c2820) at /usr/include/boost/bind/bind_template.hpp:20
      #19 0x0000000000ff076f in boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, boost::_mfi::mf1<void, impala::ImpalaServer, impala::ImpalaServer::FragmentExecState*>, boost::_bi::list2<boost::_bi::value<impala::ImpalaServer*>, boost::_bi::value<impala::ImpalaServer::FragmentExecState*> > >, void>::invoke (function_obj_ptr=...)
          at /usr/include/boost/function/function_template.hpp:153
      

      When this happened before, the stack trace was corrupt.

People

    Assignee: Nong Li (nong_impala_60e1)
    Reporter: casey (caseyc)
