The migration guide outlines how users can migrate an existing ActiveMQ Classic broker installation to ActiveMQ Artemis.

1. Preface

As more and more people start using Artemis, it’s valuable to have a migration guide that helps experienced ActiveMQ users adapt to the new broker. From the outside, the two brokers might seem very similar, but there are subtle differences in their inner workings that can lead to confusion. The goal of this guide is to explain these differences and help you make the transition.

Migration is a fairly broad term in systems like these, so what are we talking about here? This guide focuses only on broker server migration. We’ll assume that the current system is a working ActiveMQ Classic broker with OpenWire JMS clients, and we’ll see how we can replace the broker with Artemis while leaving the clients intact.

This guide is aimed at experienced ActiveMQ users who want to learn what’s different in Artemis. We will assume that you know the concepts covered in these articles; they will not be explained from first principles. For that, you’re advised to consult the appropriate manuals of the ActiveMQ and Artemis brokers.

Before we dig into more details on the migration, let’s talk about the basic conceptual differences between the two brokers.

2. Differences From ActiveMQ Classic

2.1. Architectural differences

Although the two brokers are designed to do the same job, they do things differently internally. Here are some of the most notable architectural differences you need to be aware of when planning the migration.

In ActiveMQ, we have a few different implementations of the IO connectivity layer, like tcp (the synchronous one) and nio (the non-blocking one). In Artemis, the IO layer is implemented using Netty, a non-blocking IO framework. This means there’s no longer a need to choose between different implementations, as the non-blocking one is used by default.

The other important part of every broker is the message store. Most ActiveMQ users are familiar with KahaDB. It consists of a message journal, for fast sequential storing of messages (and other command packets), and an index, for retrieving messages when needed.

Artemis has its own message store, which consists only of an append-only message journal. Because of the differences in how paging is done, there’s no need for a message index. We’ll talk more about that in a minute. It’s important to note at this point that the two stores are not interchangeable, so data migration, if needed, must be carefully planned.

What do we mean by paging differences? Paging is the process that happens when the broker can’t hold all incoming messages in its memory. The strategy for dealing with this situation differs between the two brokers. ActiveMQ has cursors, which are basically a cache of messages ready to be dispatched to the consumer. The broker will try to keep all incoming messages there. When it runs out of available memory, messages are added to the store, but the caching stops. When space becomes available again, the broker refills the cache by pulling messages from the store in batches. Because of this, the journal needs to be read from time to time while the broker is running. To do that, the broker must maintain a journal index, so that message positions can be tracked inside the journal.

In Artemis, things work differently in this regard. The whole message journal is kept in memory and messages are dispatched directly from it. When the broker runs out of memory, messages are paged on the producer side (before they hit the broker). They are stored in sequential page files in the same order as they arrived. Once memory is freed, messages are moved from these page files into the journal. With paging working like this, messages are read from the file journal only when the broker starts up, in order to recreate the in-memory version of the journal. In this case, the journal is only read sequentially, meaning there’s no need to keep an index of messages in the journal.

This is one of the main differences between ActiveMQ Classic and Artemis. It’s important to understand it early on as it affects a lot of destination policy settings and how we configure brokers in order to support these scenarios properly.
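To give this a concrete shape, here is a sketch of the Artemis address-settings (in etc/broker.xml) that control when paging kicks in. The match pattern and size values here are illustrative, not recommendations:

```xml
<address-settings>
    <address-setting match="#">
        <!-- start paging once messages for an address use ~100MiB of memory -->
        <max-size-bytes>104857600</max-size-bytes>
        <!-- size of each sequential page file on disk -->
        <page-size-bytes>10485760</page-size-bytes>
        <!-- PAGE is the default policy; alternatives include DROP, FAIL and BLOCK -->
        <address-full-policy>PAGE</address-full-policy>
    </address-setting>
</address-settings>
```

See the Artemis user manual for the full set of paging-related settings.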

2.2. Addressing differences

Another big difference that’s good to cover early on is how message addressing and routing is done. ActiveMQ started as an open source JMS implementation, so at its core all JMS concepts, like queues, topics and durable subscriptions, are implemented as first-class citizens. It’s all based on the OpenWire protocol developed within the project, and even the KahaDB message store is OpenWire-centric. This means that all other supported protocols, like MQTT and AMQP, are translated internally into OpenWire.

Artemis took a different approach. It implements only queues internally, and all other messaging concepts are achieved by routing messages to the appropriate queue(s) using addresses. Messaging concepts like publish-subscribe (topics) and point-to-point (queues) are implemented using different types of routing mechanisms on addresses. Multicast routing is used to implement publish-subscribe semantics, where all subscribers to a certain address get their own internal queue and messages are routed to all of them. Anycast routing is used to implement point-to-point semantics, where there’s only one queue for the address and all consumers subscribe to it. This addressing and routing scheme is used across all protocols. So, for example, you can view a JMS topic as just a multicast address. We’ll cover this topic in more detail in later articles.

3. Configuration

Once we download and install the broker, we run into the first difference. With Artemis, you need to explicitly create a broker instance, while with ActiveMQ this step is optional. The idea behind this step is to keep the installation and configuration of the broker separate, which makes it easier to upgrade and maintain the broker in the future.

So, in order to start with Artemis, you need to execute something like this:

$ bin/artemis create --user admin --password admin --role admins --allow-anonymous true /opt/artemis

No matter where you installed your broker binaries, the broker instance will now be in the /opt/artemis directory. The contents of this directory will be familiar to every ActiveMQ user:

  • bin - contains shell scripts for managing the broker (start, stop, etc.)

  • data - is where the broker state lives (message store)

  • etc - contains the broker configuration files (it’s what the conf directory is in ActiveMQ)

  • log - Artemis stores logs in this separate directory, unlike ActiveMQ which keeps them in the data directory

  • tmp - a utility directory for temporary files

Let’s now take a look at the configuration in more detail. The etc/bootstrap.xml entry file sets the basics, like the location of the main broker configuration file, and utility apps like a web server and JAAS security.

The main configuration file is etc/broker.xml. Similarly to ActiveMQ’s conf/activemq.xml, this is where you configure most aspects of the broker, like connector ports, destination names, security policies, etc. We will go through this file in detail in the following articles.

The etc/artemis.profile and etc/artemis-utility.profile files are similar to the bin/env file in ActiveMQ. In etc/artemis.profile you can configure environment variables for the broker started by the run command; in etc/artemis-utility.profile you can do the same for all CLI commands other than run. In both cases, these are mostly regular JVM args related to SSL context, debugging, etc.

There’s not much difference in logging configuration between the two brokers, so anyone familiar with Java logging systems in general will feel at home here. The etc/log4j2.properties file is where logging is configured for the broker; the etc/log4j2-utility.properties file does the same for all CLI commands other than run.
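For example, raising the broker’s log verbosity is a two-line change in etc/log4j2.properties; a minimal illustrative fragment (the logger id artemis is just a label):

```properties
# etc/log4j2.properties (illustrative fragment)
# raise Artemis broker logging from INFO to DEBUG
logger.artemis.name = org.apache.activemq.artemis
logger.artemis.level = DEBUG
```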

Finally, we have the JAAS configuration files (login.config, artemis-users.properties and artemis-roles.properties), which cover the same roles as in ActiveMQ. We will go into more detail on these in the article that covers security.

After this brief walk through the locations of the different configuration aspects of Artemis, we’re ready to start the broker. If you wish to start the broker in the foreground, you should execute

$ bin/artemis run

This is the same as

$ bin/activemq console

command in ActiveMQ.

For running the broker as a service, Artemis provides a separate shell script bin/artemis-service. So you can run the broker in the background like

$ bin/artemis-service start

This is the same as running ActiveMQ with

$ bin/activemq start

After the start, you can check the broker status in the log/artemis.log file.

Congratulations, you have your Artemis broker up and running. By default, Artemis starts an OpenWire acceptor on the same port as ActiveMQ (61616), so existing clients can connect. To test this, you can go to your existing ActiveMQ instance and run the following commands.

$ bin/activemq producer
$ bin/activemq consumer

You should see the messages flowing through the broker. Finally, we can stop the broker with

$ bin/artemis-service stop

With this, our orienteering session around Artemis is finished. In the following articles, we’ll start digging deeper into the configuration details and differences between the two brokers, and see how they can affect your messaging applications.

4. Connectors

After the broker is started, you’ll want to connect your clients to it. So let’s start by comparing the ActiveMQ and Artemis configurations in the area of client connectors. In ActiveMQ terminology, they are called transport connectors, and the default configuration looks something like this (in conf/activemq.xml).

<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>

In Artemis, client connectors are called acceptors and they are configured in etc/broker.xml like this

<acceptors>
    <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
    <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP</acceptor>
    <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP</acceptor>
    <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP</acceptor>
    <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT</acceptor>
</acceptors>

As you can see, the syntax is very similar, but there are still some differences we need to understand. First, as we said earlier, there’s no notion of blocking and non-blocking (nio) transports in Artemis, so you should treat everything as non-blocking. Also, in Artemis the low-level transport is distinct from the actual messaging protocol (like AMQP or MQTT) used on top of it, so one acceptor can handle multiple messaging protocols on the same port. By default, all protocols are accepted on a single port, but you can restrict this using the protocols=X,Y URI attribute, as shown in the example above.

Besides the tcp network transport, Artemis supports InVM and WebSocket transports. The InVM transport is similar to ActiveMQ’s vm transport and is used to connect clients to an embedded broker. The difference is that you can use any messaging protocol on top of the InVM transport in Artemis, while the vm transport in ActiveMQ is tied to OpenWire.

One of the advantages of using Netty for the IO layer is that WebSockets are supported out of the box. So there’s no need for a separate ws transport like in ActiveMQ; the tcp (Netty) acceptor in Artemis will detect WebSocket clients and handle them accordingly.

To summarize this topic, here’s a table that shows how to migrate your ActiveMQ transport connectors to Artemis acceptors:

ActiveMQ                     Artemis (options in the acceptor URL)

OpenWire                     protocols=OpenWire (version 10+)
NIO                          -
AMQP                         protocols=AMQP
STOMP                        protocols=STOMP
VM (OpenWire only)           InVM (all protocols, peer to tcp)
HTTP (OpenWire-based)        -
MQTT                         protocols=MQTT
WebSocket (STOMP and MQTT)   handled by tcp (all protocols)

5. Destinations

We already talked about the addressing differences between ActiveMQ and Artemis in the introduction. Now let’s dig into the details and see how to configure JMS queues and topics. It’s important to note here that both brokers are configured by default to auto-create destinations requested by clients, which is the preferred behavior for many use cases. This is configured using authorization security policies, so we will cover this topic in later sections of this manual. For now, let’s see how you can predefine JMS queues and topics in both brokers.

In ActiveMQ, destinations are pre-defined in the <destinations> section of the conf/activemq.xml configuration file.

<destinations>
     <queue physicalName="my-queue" />
     <topic physicalName="my-topic" />
</destinations>

Things look a bit different in Artemis. We already explained that queues are anycast addresses and topics are multicast ones. We won’t go deep into the address settings details here; you’re advised to look at the user manual for that. Let’s just see what we need to do in order to replicate the ActiveMQ configuration.

Addresses are defined in the <addresses> section of the etc/broker.xml configuration file. So the corresponding Artemis configuration for the ActiveMQ example above looks like this:

<addresses>
    <address name="my-queue">
        <anycast>
            <queue name="my-queue"/>
        </anycast>
    </address>

    <address name="my-topic">
        <multicast></multicast>
    </address>
</addresses>

After this step we have our destinations ready in the new broker.

6. Virtual Topics

Virtual Topics (a specialisation of virtual destinations) in ActiveMQ Classic typically address two different but related problems. Let’s take each in turn:

6.1. Shared access to a JMS durable topic subscription

With JMS 1.1, a durable subscription is identified by the pair of clientId and subscriptionName. The clientId component must be unique to a connection on the broker. This means that the subscription is exclusive: it is not possible to load-balance the stream of messages across consumers, and quick failover is difficult because the existing connection state on the broker needs to be disposed of first. With virtual topics, each subscription’s stream of messages is redirected to a queue.

In Artemis there are two alternatives: the new JMS 2.0 API, or direct access to a subscription queue via its FQQN.

6.2. JMS 2.0 shared subscriptions

JMS 2.0 adds the possibility of shared subscriptions through new APIs that are fully supported in Artemis.
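As a sketch of how this looks in client code, assuming the Artemis JMS client on the classpath and a broker at tcp://localhost:61616 (the jakarta.jms package is used here; older clients use javax.jms instead):

```java
import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSConsumer;
import jakarta.jms.JMSContext;
import jakarta.jms.Topic;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedSubscriptionExample {
    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = cf.createContext()) {
            Topic topic = context.createTopic("VirtualTopic.Orders");
            // Any number of consumers created with the same subscription name
            // ("A" here) share one durable subscription, and the broker
            // load-balances messages between them -- no unique clientId needed.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "A");
            String body = consumer.receiveBody(String.class, 1000);
            System.out.println("received: " + body);
        }
    }
}
```

This replaces the Classic pattern of pointing several consumers at the Consumer.A.VirtualTopic.Orders queue; note that running it requires an actual broker to connect to.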

6.3. Fully Qualified Queue name (FQQN)

Secondly, Artemis uses a queue-per-topic-subscriber model internally, and it is possible to address the subscription queue directly using its Fully Qualified Queue Name (FQQN).

For example, a default Classic consumer destination for topic VirtualTopic.Orders subscription A:

    ...
    Queue subscriptionQueue = session.createQueue("Consumer.A.VirtualTopic.Orders");
    session.createConsumer(subscriptionQueue);

would be replaced with an Artemis FQQN composed of the address and the queue:

    ...
    Queue subscriptionQueue = session.createQueue("VirtualTopic.Orders::Consumer.A.VirtualTopic.Orders");
    session.createConsumer(subscriptionQueue);

This does require a modification to the destination name used by consumers, which is not ideal. If OpenWire clients cannot be modified, Artemis supports a virtual topic wildcard filter mechanism on the OpenWire protocol handler that will automatically convert the consumer destination into the corresponding FQQN. The format is a comma-separated list of string pairs, each pair delimited with a ';'. Each pair identifies a filter to match the virtual topic consumer destination and an int that specifies the number of path matches that terminate the consumer queue identity.

E.g. for the default Classic virtual topic consumer prefix of Consumer.*. the parameter virtualTopicConsumerWildcards should be Consumer.*.>;2. However, there is a caveat, because this value needs to be encoded in a URI for the XML configuration: any unsafe URL characters, in this case > and ;, need to be escaped with their hex code point representations, leading to a value of Consumer.*.%3E%3B2. In this way, a consumer destination of Consumer.A.VirtualTopic.Orders will be transformed into an FQQN of VirtualTopic.Orders::Consumer.A.VirtualTopic.Orders.
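Putting this together, the OpenWire acceptor in etc/broker.xml would look something like the following (acceptor name and port taken from the earlier examples):

```xml
<acceptors>
    <!-- %3E%3B is the url-encoded form of ">;" -->
    <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=OPENWIRE;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2</acceptor>
</acceptors>
```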

6.4. Durable topic subscribers in a network of brokers

The store-and-forward network bridges in Classic create a durable subscriber per destination. As demand migrates across a network, duplicate durable subs get created on each node in the network, but they do not migrate. This can end in duplicate message storage and, ultimately, duplicate delivery, which is not good. When durable subscribers map to virtual topic subscriber queues, the queues can migrate and the problem is avoided.

In Artemis, because a durable sub is modeled as a queue, this problem does not arise.

7. Authentication

Now that we have our acceptors and addresses ready, it’s time to deal with broker security. Artemis inherited most of its security concepts from ActiveMQ. One of the most notable differences is that ActiveMQ groups are now called roles in Artemis. Besides that, things should be pretty familiar to existing ActiveMQ users. Let’s start by looking into the authentication mechanisms and defining users and roles (groups).

Both ActiveMQ and Artemis use JAAS to define authentication credentials. In ActiveMQ, that’s configured through the appropriate broker plugin in conf/activemq.xml

<plugins>
  <jaasAuthenticationPlugin configuration="activemq" />
</plugins>

The name of the JAAS domain is specified as a configuration parameter.

In Artemis, the same thing is achieved by defining <jaas-security> configuration in etc/bootstrap.xml

<jaas-security domain="activemq"/>

From this point on, you can go and define your users and their roles in the appropriate files: conf/users.properties and conf/groups.properties in ActiveMQ, and similarly etc/artemis-users.properties and etc/artemis-roles.properties in Artemis. These files are interchangeable, so you should be able to just copy your existing configuration over to the new broker.
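For illustration, here is a minimal pair of Artemis files defining a single admin user; the user name, password and role are placeholders:

```properties
# etc/artemis-users.properties: user = password
admin = someSecret

# etc/artemis-roles.properties: role = comma-separated list of users
admins = admin
```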

If your deployment is more complicated than this and requires some advanced JAAS configuration, you’ll need to go and change the etc/login.config file. It’s important to say that all custom JAAS modules and configurations you were using in ActiveMQ should be compatible with Artemis.

Finally, in case you’re still using ActiveMQ’s Simple Authentication Plugin, which defines users and groups directly in the broker’s XML configuration file, you’ll need to migrate to JAAS, as Artemis doesn’t support a similar concept.

8. Authorization

To complete security migration, we need to deal with authorization policies as well. In ActiveMQ, authorization is specified using the appropriate broker plugin in conf/activemq.xml, like

<authorizationPlugin>
  <map>
	<authorizationMap>
	  <authorizationEntries>
		<authorizationEntry queue=">" read="admins" write="admins" admin="admins"/>
		<authorizationEntry queue="USERS.>" read="users" write="users" admin="users"/>
		<authorizationEntry queue="GUEST.>" read="guests" write="guests,users" admin="guests,users"/>
		<authorizationEntry topic=">" read="admins" write="admins" admin="admins"/>
		<authorizationEntry topic="USERS.>" read="users" write="users" admin="users"/>
		<authorizationEntry topic="GUEST.>" read="guests" write="guests,users" admin="guests,users"/>
		<authorizationEntry topic="ActiveMQ.Advisory.>" read="guests,users" write="guests,users" admin="guests,users"/>
	  </authorizationEntries>
	</authorizationMap>
  </map>
</authorizationPlugin>

The equivalent Artemis configuration is specified in etc/broker.xml and should look like this

<security-settings>
  <security-setting match="#">
	<permission type="createNonDurableQueue" roles="admins"/>
	<permission type="deleteNonDurableQueue" roles="admins"/>
	<permission type="createDurableQueue" roles="admins"/>
	<permission type="deleteDurableQueue" roles="admins"/>
	<permission type="consume" roles="admins"/>
	<permission type="browse" roles="admins"/>
	<permission type="send" roles="admins"/>
  </security-setting>

  <security-setting match="USERS.#">
	<permission type="createNonDurableQueue" roles="users"/>
	<permission type="deleteNonDurableQueue" roles="users"/>
	<permission type="createDurableQueue" roles="users"/>
	<permission type="deleteDurableQueue" roles="users"/>
	<permission type="consume" roles="users"/>
	<permission type="browse" roles="users"/>
	<permission type="send" roles="users"/>
  </security-setting>

  <security-setting match="GUEST.#">
	<permission type="createNonDurableQueue" roles="guests"/>
	<permission type="deleteNonDurableQueue" roles="guests"/>
	<permission type="createDurableQueue" roles="guests"/>
	<permission type="deleteDurableQueue" roles="guests"/>
	<permission type="consume" roles="guests"/>
	<permission type="browse" roles="guests"/>
	<permission type="send" roles="guests"/>
  </security-setting>
</security-settings>

As you can see, things are pretty comparable, with some minor differences. The most important one is that policies in ActiveMQ are defined on destination names, while in Artemis they are applied to core queues (refresh your knowledge of the relation between queues and addresses in the previous sections and the Artemis user manual).

The other notable difference is that policies are more fine-grained in Artemis. The following paragraphs and tables show the Artemis policies that correspond to the ActiveMQ ones.

If you wish to allow users to send messages, you need to define the following policies in the respective brokers.

ActiveMQ   Artemis

write      send

In Artemis, policies for consuming and browsing are separated and you need to define them both in order to control read access to the destination.

ActiveMQ   Artemis

read       consume
           browse

It’s the same story with admin privileges. You need to define separate create and delete policies for durable and non-durable core queues.

ActiveMQ   Artemis

admin      createNonDurableQueue
           deleteNonDurableQueue
           createDurableQueue
           deleteDurableQueue

Finally, there’s a topic of using wildcards to define policies. The following table shows the wildcard syntax difference.

Wildcard      Description                              ActiveMQ   Artemis

Delimiter     Separates words in the path              .          .
Single word   Match a single word in the path          *          *
Any word      Match any word recursively in the path   >          #

Basically, only the any word character differs by default, which is why we used GUEST.# in the Artemis example instead of ActiveMQ’s GUEST.> syntax.
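This difference is mechanical enough to sketch in code. Here is a hypothetical helper (the class and method names are ours, not a broker API) that rewrites an ActiveMQ destination wildcard into the Artemis default syntax:

```java
public class WildcardMigration {
    // Translate the ActiveMQ "any word" wildcard '>' to the Artemis default '#'.
    // The delimiter '.' and the single-word wildcard '*' are the same in both
    // brokers by default, so they pass through unchanged.
    static String toArtemisMatch(String activemqPattern) {
        return activemqPattern.replace(">", "#");
    }

    public static void main(String[] args) {
        System.out.println(toArtemisMatch("GUEST.>"));   // GUEST.#
        System.out.println(toArtemisMatch("USERS.*.>")); // USERS.*.#
    }
}
```

Note that Artemis also lets you reconfigure its wildcard characters to match ActiveMQ’s, in which case no translation is needed at all.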

Armed with this knowledge, you should be able to transform your current ActiveMQ authorization policies to Artemis.

9. SSL

The next interesting security-related topic is encrypting the transport layer using SSL. Both ActiveMQ and Artemis leverage the JDK’s Java Secure Socket Extension (JSSE), so things should be easy to migrate.

Let’s recap quickly how SSL is used in ActiveMQ. First, you need to define the SSL Context. You can do that using <sslContext> configuration section in conf/activemq.xml, like

<sslContext>
    <sslContext keyStore="file:${activemq.conf}/broker.ks" keyStorePassword="password"/>
</sslContext>

The SSL context defines the key and trust stores to be used by the broker. After this, you set your transport connector with the ssl scheme and preferably some additional options.

<transportConnectors>
    <transportConnector name="ssl" uri="ssl://localhost:61617?transport.needClientAuth=true"/>
</transportConnectors>

These options are related to SSLServerSocket and are specified as URL parameters with the transport. prefix, like needClientAuth shown in the example above.

In Artemis, Netty is responsible for all things related to the transport layer, so it handles SSL for us as well. All configuration options are set directly on the acceptor, like

<acceptors>
    <acceptor name="netty-ssl-acceptor">tcp://localhost:61617?sslEnabled=true;keyStorePath=${data.dir}/../etc/broker.ks;keyStorePassword=password;needClientAuth=true</acceptor>
</acceptors>

Note that we used the same Netty acceptor scheme and just added the sslEnabled=true parameter to use it with SSL. Next, we can go ahead and define the key and trust stores. There’s a slight difference in parameter naming between the two brokers, as shown in the table below.

ActiveMQ             Artemis

keyStore             keyStorePath
keyStorePassword     keyStorePassword
trustStore           trustStorePath
trustStorePassword   trustStorePassword

Finally, you can go and set any other SSLServerSocket parameters you need (like needClientAuth in this example). No extra prefix is needed for these in Artemis.

It’s important to note that you should be able to reuse your existing key and trust stores and just copy them to the new broker.

10. Message Store Migration

10.1. ActiveMQ Classic KahaDB or mKahaDB

ActiveMQ Artemis supports an XML format for message store exchange. An existing store may be exported from a broker using the command line tools and subsequently imported to another broker.

The Apache ActiveMQ Command Line Tools project provides a command line export tool for ActiveMQ Classic that will export a KahaDB (or mKahaDB) message store into the ActiveMQ Artemis XML format, for subsequent import by ActiveMQ Artemis.

The export tool supports selective export using filters, which is useful if only some of your data needs to be migrated. From version 0.2.0, the export tool has support for virtual topic consumer queue mapping, which will allow existing OpenWire virtual topic consumers to resume on an ActiveMQ Artemis broker with no message loss. Note the OpenWire acceptor virtualTopicConsumerWildcards option from the virtual topics migration section.

Full details of the tool can be found on the project website: https://github.com/apache/activemq-cli-tools
