  ActiveMQ Artemis / ARTEMIS-1298

Memory leak with Apache Artemis 2.1.0


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0
    • Fix Version/s: 2.3.0
    • Component/s: MQTT
    • Labels: None

    Description

      Hi,

      With my broker's MQTT configuration, a memory leak seems to appear after about 16 hours.
      Memory usage keeps increasing, and neither regular nor full GC cycles release enough of it.

      my JAVA_ARGS=" -XX:+PrintClassHistogram -XX:+UseG1GC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx2G"
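
      If useful, I can also capture GC logs and an automatic heap dump. A sketch of the extra flags (standard HotSpot options for Java 8; the dump path is a placeholder):

      JAVA_ARGS="$JAVA_ARGS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/artemis/dumps"

      A live class histogram can also be taken with "jmap -histo:live <pid>" while the memory is growing.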

      Please find my broker.xml below:

      My broker.xml
      <?xml version='1.0'?>
      <!--
      Licensed to the Apache Software Foundation (ASF) under one
      or more contributor license agreements.  See the NOTICE file
      distributed with this work for additional information
      regarding copyright ownership.  The ASF licenses this file
      to you under the Apache License, Version 2.0 (the
      "License"); you may not use this file except in compliance
      with the License.  You may obtain a copy of the License at
      
        http://www.apache.org/licenses/LICENSE-2.0
      
      Unless required by applicable law or agreed to in writing,
      software distributed under the License is distributed on an
      "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
      KIND, either express or implied.  See the License for the
      specific language governing permissions and limitations
      under the License.
      -->
      
      <configuration xmlns="urn:activemq"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
      
         <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq:core ">
      
            <name>iotBrokerA</name>
      
            <persistence-enabled>true</persistence-enabled>
      
      <!-- this could be ASYNCIO or NIO -->
            <journal-type>ASYNCIO</journal-type>
      
            <paging-directory>/artemis/data/paging</paging-directory>
      
            <bindings-directory>/artemis/data/bindings</bindings-directory>
      
            <journal-directory>/artemis/data/journal</journal-directory>
      
            <large-messages-directory>/artemis/data/large-messages</large-messages-directory>
      
            <journal-datasync>true</journal-datasync>
      
            <journal-min-files>2</journal-min-files>
      
            <journal-pool-files>-1</journal-pool-files>
      
            <!--
              You can specify the NIC you want to use to verify if the network
               <network-check-NIC>theNickName</network-check-NIC>
              -->
      
            <!--
              Use this to use an HTTP server to validate the network
               <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
      
            <!-- <network-check-period>10000</network-check-period> -->
            <!-- <network-check-timeout>1000</network-check-timeout> -->
      
            <!-- this is a comma separated list, no spaces, just DNS or IPs
                 it should accept IPV6
      
           Warning: Make sure you understand your network topology, as this check is meant to validate that your network is up.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs; any successful ping will allow the server to continue running -->
            <!-- <network-check-list>10.0.0.1</network-check-list> -->
      
            <!-- use this to customize the ping used for ipv4 addresses -->
            <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
      
            <!-- use this to customize the ping used for ipv6 addresses -->
            <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
      
      
      
            <!--
             This value was determined through a calculation.
             Your system could perform 6.25 writes per millisecond
             on the current journal configuration.
             That translates as a sync write every 160000 nanoseconds
            -->
            <journal-buffer-timeout>160000</journal-buffer-timeout>
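      <!-- Sanity check of the figures above (my arithmetic, not generated by the
           tuning tool): 1 ms / 6.25 writes = 0.16 ms = 160,000 ns between sync writes -->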
      
          <connectors>
              <!-- Connector used to be announced through cluster connections and notifications -->
              <connector name="artemis">tcp://myserver:61616</connector>
          </connectors>
      
      
      
            <!-- how often we are looking for how many bytes are being used on the disk in ms -->
            <disk-scan-period>5000</disk-scan-period>
      
            <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
                 that won't support flow control. -->
            <max-disk-usage>90</max-disk-usage>
      
            <!-- the system will enter into page mode once you hit this limit.
                 This is an estimate in bytes of how much the messages are using in memory
      
                  The system will use half of the available memory (-Xmx) by default for the global-max-size.
                  You may specify a different value here if you need to customize it to your needs.
      
                  <global-max-size>100Mb</global-max-size>
      
            -->
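      <!-- Note (my own arithmetic, not part of the shipped file): with the -Xmx2G
           set in JAVA_ARGS above, the default global-max-size here is about 1 GB -->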
      
            <acceptors>
      
               <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
               <!-- amqpCredits: The number of credits sent to AMQP producers -->
               <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
      
               <!-- Acceptor for every supported protocol -->
               <acceptor name="artemis">tcp://myserver:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
      
               <acceptor name="mqtt">tcp://myserver:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
               <acceptor name="mqtt_ssl">tcp://myserver:8883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true;sslEnabled=true;keyStorePath=/artemis/security/PROD_NC1.jks;keyStorePassword=********</acceptor>
            </acceptors>
      
      
            <cluster-user>AdminCluster</cluster-user>
      
            <!--<cluster-password>${cluster-password}</cluster-password>-->
            <cluster-password>**********</cluster-password>
      
            <broadcast-groups>
               <broadcast-group name="bg-group1">
                  <group-address>231.7.7.7</group-address>
                  <group-port>9876</group-port>
                  <broadcast-period>5000</broadcast-period>
                  <connector-ref>artemis</connector-ref>
               </broadcast-group>
            </broadcast-groups>
      
            <discovery-groups>
               <discovery-group name="dg-group1">
                  <group-address>231.7.7.7</group-address>
                  <group-port>9876</group-port>
                  <refresh-timeout>10000</refresh-timeout>
               </discovery-group>
            </discovery-groups>
      
            <cluster-connections>
               <cluster-connection name="my-cluster">
                  <connector-ref>artemis</connector-ref>
                  <message-load-balancing>ON_DEMAND</message-load-balancing>
                  <max-hops>0</max-hops>
                  <discovery-group-ref discovery-group-name="dg-group1"/>
               </cluster-connection>
            </cluster-connections>
      
      
            <ha-policy>
               <shared-store>
                  <master>
                     <failover-on-shutdown>true</failover-on-shutdown>
                  </master>
               </shared-store>
            </ha-policy>
      
            <security-settings>
               <security-setting match="#">
                  <permission type="createNonDurableQueue" roles="Myrole"/>
                  <permission type="deleteNonDurableQueue" roles="Myrole"/>
                  <permission type="createDurableQueue" roles="Myrole"/>
                  <permission type="deleteDurableQueue" roles="Myrole"/>
                  <permission type="createAddress" roles="Myrole"/>
                  <permission type="deleteAddress" roles="Myrole"/>
                  <permission type="consume" roles="Myrole"/>
                  <permission type="browse" roles="Myrole,Myrole2"/>
                  <permission type="send" roles="Myrole,Myrole2"/>
                  <!-- we need this otherwise ./artemis data imp wouldn't work -->
                  <permission type="manage" roles="Myrole"/>
               </security-setting>
               
               <security-setting match="projet.0397.test.test1.#">
                  <permission type="createDurableQueue" roles="Myrole2"/>
                  <permission type="deleteDurableQueue" roles="Myrole2"/>
                  <permission type="createAddress" roles="Myrole2"/>
                  <permission type="deleteAddress" roles="Myrole2"/>
                  <permission type="consume" roles="Myrole2"/>
                  <permission type="browse" roles="Myrole2"/>
                  <permission type="send" roles="Myrole2"/>
               </security-setting>
      
               <security-setting match="projet.0397.test.test2.#">
                  <permission type="createDurableQueue" roles="Myrole2"/>
                  <permission type="deleteDurableQueue" roles="Myrole2"/>
                  <permission type="createAddress" roles="Myrole2"/>
                  <permission type="deleteAddress" roles="Myrole2"/>
                  <permission type="consume" roles="Myrole2"/>
                  <permission type="browse" roles="Myrole2"/>
                  <permission type="send" roles="Myrole2"/>
               </security-setting>
      
               <security-setting match="projet.0397.test.test3.#">
                  <permission type="createDurableQueue" roles="Myrole2"/>
                  <permission type="deleteDurableQueue" roles="Myrole2"/>
                  <permission type="createAddress" roles="Myrole2"/>
                  <permission type="deleteAddress" roles="Myrole2"/>
                  <permission type="consume" roles="Myrole2"/>
                  <permission type="browse" roles="Myrole2"/>
                  <permission type="send" roles="Myrole2"/>
               </security-setting>
            </security-settings>
      
            <address-settings>
               <!-- if you define auto-create on certain queues, management has to be auto-create -->
               <address-setting match="activemq.management#">
                  <dead-letter-address>DLQ</dead-letter-address>
                  <expiry-address>ExpiryQueue</expiry-address>
                  <redelivery-delay>0</redelivery-delay>
                  <!-- with -1 only the global-max-size is in use for limiting -->
                  <max-size-bytes>-1</max-size-bytes>
                  <message-counter-history-day-limit>10</message-counter-history-day-limit>
                  <address-full-policy>PAGE</address-full-policy>
                  <auto-create-queues>true</auto-create-queues>
                  <auto-create-addresses>true</auto-create-addresses>
                  <auto-create-jms-queues>true</auto-create-jms-queues>
                  <auto-create-jms-topics>true</auto-create-jms-topics>
               </address-setting>
               <!--default for catch all-->
               <address-setting match="#">
                  <dead-letter-address>DLQ</dead-letter-address>
                  <expiry-address>ExpiryQueue</expiry-address>
         <!--Message expiry after 6 hours-->
                  <!--expiry-delay>21600000</expiry-delay-->
                  <expiry-delay>60000</expiry-delay>
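         <!-- Note (my annotation, not in the original file): the active value above, 60000 ms, is 1 minute -->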
                  <redelivery-delay>0</redelivery-delay>
                  <!-- with -1 only the global-max-size is in use for limiting -->
                  <max-size-bytes>-1</max-size-bytes>
                  <message-counter-history-day-limit>10</message-counter-history-day-limit>
                  <address-full-policy>PAGE</address-full-policy>
                  <auto-create-queues>true</auto-create-queues>
                  <auto-create-addresses>true</auto-create-addresses>
                  <auto-create-jms-queues>true</auto-create-jms-queues>
                  <auto-create-jms-topics>true</auto-create-jms-topics>
               </address-setting>
            </address-settings>
      
            <addresses>
               <address name="DLQ">
                  <anycast>
                     <queue name="DLQ" />
                  </anycast>
               </address>
               <address name="ExpiryQueue">
                  <anycast>
                     <queue name="ExpiryQueue" />
                  </anycast>
               </address>
               <address name="projet.0397.test.test1.#">
                  <multicast />
               </address>
               <address name="projet.0397.test.test2.#">
                  <multicast />
               </address>
               <address name="projet.0397.test.test3.#">
                  <multicast />
               </address>
            </addresses>
      
         </core>
      </configuration>
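
      For context, the publishing side looks roughly like this (a minimal sketch using the Eclipse Paho Java client; the client id, credentials, topic, payload size, and publish rate are placeholders, not my exact application):

      import org.eclipse.paho.client.mqttv3.MqttClient;
      import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
      import org.eclipse.paho.client.mqttv3.MqttException;
      import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

      public class MqttPublishSketch {
         public static void main(String[] args) throws MqttException, InterruptedException {
            // Connects to the plain "mqtt" acceptor declared in broker.xml above.
            MqttClient client = new MqttClient("tcp://myserver:1883", "sketch-client", new MemoryPersistence());
            MqttConnectOptions opts = new MqttConnectOptions();
            opts.setUserName("user");                // placeholder credentials
            opts.setPassword("pass".toCharArray());
            client.connect(opts);

            // Artemis maps the MQTT '/' separator to '.', so this topic should land
            // on the multicast address matched by "projet.0397.test.test1.#".
            byte[] payload = new byte[512];
            while (true) {
               client.publish("projet/0397/test/test1/device1", payload, 1, false); // QoS 1, not retained
               Thread.sleep(100);
            }
         }
      }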
      

      Please find attached a screenshot of the memory leak (MemoryLeakArtemis.png).

      Can you help me resolve this issue?

      Attachments

        1. MemoryLeakArtemis.png (8 kB, uploaded by REGINA Patrick)


          People

            Assignee: Unassigned
            Reporter: REGINA Patrick (DeomisR)
            Votes: 0
            Watchers: 2
