What is Magento RabbitMQ?
Magento RabbitMQ is the AMQP message broker Magento uses as a queue backend for asynchronous tasks — async order emails, B2B shared-catalogue updates, Multi-Source Inventory reservations, async indexers and webhook dispatch. Required for Adobe Commerce (B2B + Cloud); optional but recommended for Magento Open Source stores above ~500 orders per day. Consumers are long-lived processes wired via etc/queue.xml and etc/queue_consumer.xml.
Five things RabbitMQ does inside a Magento install
Most Magento devs treat the queue layer as a black box. Here is exactly what happens between bin/magento setup:config:set and a worker draining the async-email queue.
01
Install RabbitMQ on the host or use a managed broker
Self-host with apt install rabbitmq-server on Debian / Ubuntu, run the official Docker image, or point Magento at a managed broker like CloudAMQP or Amazon MQ for RabbitMQ. Production stores almost always pick managed: you get clustering, monitoring, automatic upgrades and 24x7 support without owning the broker. Magento needs RabbitMQ 3.9 or above to satisfy the AMQP 0-9-1 features the framework relies on for consumer cancellation and per-message acks.
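For the self-hosted route, a Docker-based setup can be sketched roughly like this. The container name, credentials, vhost and the 3.13-management image tag are illustrative choices, not Magento requirements:

```shell
# Run RabbitMQ with the management UI enabled on a single host.
# User, password and vhost here are placeholders -- match them to what
# you later pass to bin/magento setup:config:set.
docker run -d --name magento-rabbitmq \
  -p 5672:5672 \
  -p 15672:15672 \
  -e RABBITMQ_DEFAULT_USER=magento \
  -e RABBITMQ_DEFAULT_PASS=change-me \
  -e RABBITMQ_DEFAULT_VHOST=magento \
  rabbitmq:3.13-management
```

Port 5672 is the AMQP listener Magento connects to; 15672 serves the management dashboard.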
02
Wire credentials into env.php with setup:config:set
Run bin/magento setup:config:set --amqp-host=localhost --amqp-port=5672 --amqp-user=magento --amqp-password=… --amqp-virtualhost=/ once per environment. Magento writes the AMQP block into app/etc/env.php under queue; from that point on the framework auto-routes any queue with connection="amqp" through RabbitMQ instead of the fallback db queue. Use a dedicated vhost per Magento install when the broker is shared between projects.
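The result is approximately this block in app/etc/env.php. Key layout follows the documented amqp connection config; the password value is whatever you passed on the CLI:

```php
// app/etc/env.php (fragment) -- written by setup:config:set
'queue' => [
    'amqp' => [
        'host' => 'localhost',
        'port' => '5672',
        'user' => 'magento',
        'password' => '…',
        'virtualhost' => '/'
    ]
],
```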
03
Publishers declare exchanges and bindings in queue.xml
Each Magento module that needs async processing ships an etc/queue.xml, etc/communication.xml and etc/queue_topology.xml. These XML files declare the topic name, message data type, target exchange and bindings. At runtime Magento's publisher serialises the payload (JSON for primitive data, PHP-serialised for objects) and pushes it to RabbitMQ via the AMQP library. Magento creates the exchange, queue and bindings on first publish if they don't already exist.
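As a rough sketch, a custom module's etc/queue_topology.xml might look like this. The acme.* exchange, topic and queue names are hypothetical; the schema reference is the framework's topology.xsd:

```xml
<?xml version="1.0"?>
<!-- etc/queue_topology.xml: declare the exchange and bind a queue to a topic -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework-message-queue:etc/topology.xsd">
    <exchange name="acme.export" type="topic" connection="amqp">
        <binding id="acmeExportBinding"
                 topic="acme.entity.export"
                 destinationType="queue"
                 destination="acme.entity.export.queue"/>
    </exchange>
</config>
```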
04
Consumers subscribe and process messages one at a time
Each consumer is declared in etc/queue_consumer.xml with a name, queue, handler class and connection. Examples: sales.rule.update.coupon.usage, async.operations.all, inventory.source.items.cleanup, product_action_attribute.update. The consumer pulls one message, instantiates the handler, runs the work, then acks back to RabbitMQ on success or nacks (requeues) on failure. Failed messages can be routed to a dead-letter queue if the topology declares one.
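A matching etc/queue_consumer.xml entry could be sketched as follows; the consumer name, queue and handler class are hypothetical:

```xml
<?xml version="1.0"?>
<!-- etc/queue_consumer.xml: map a consumer name to a queue and a handler -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework-message-queue:etc/consumer.xsd">
    <consumer name="acme.entity.export"
              queue="acme.entity.export.queue"
              connection="amqp"
              handler="Acme\Export\Model\ExportConsumer::process"/>
</config>
```

The name attribute is what you later pass to bin/magento queue:consumers:start.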
05
Run consumers as long-lived processes under supervisor
Start a consumer with bin/magento queue:consumers:start <name> --single-thread --max-messages=10000. The --max-messages flag lets the process exit cleanly after N messages so supervisor / systemd / Kubernetes restarts a fresh worker, sidestepping memory leaks. Magento also ships a consumers_runner cron job that auto-starts consumers if you flip the toggle in env.php; ideal for low-volume Open Source stores that don't want a full supervisor setup.
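Under supervisord, one program stanza per consumer is enough. Paths, user and the consumer name below are illustrative and need adapting to your install:

```ini
; /etc/supervisor/conf.d/magento-consumers.conf (sketch)
[program:async-operations-all]
command=/usr/bin/php /var/www/magento/bin/magento queue:consumers:start async.operations.all --single-thread --max-messages=10000
user=www-data
autostart=true
autorestart=true
startsecs=5
stopwaitsecs=60
```

autorestart=true covers both crashes and the clean exit triggered by --max-messages.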
Four situations where RabbitMQ is the right call
RabbitMQ isn't free overhead — it adds a broker to operate. In these four scenarios the operational cost is dwarfed by what it buys you.
All Adobe Commerce installs (required)
RabbitMQ is mandatory on Adobe Commerce — the B2B module, async indexers, async webhooks and the storefront image API all assume an AMQP broker. Adobe Commerce Cloud provisions a managed RabbitMQ for every environment automatically; on self-hosted Adobe Commerce you wire one in yourself before setup:install. Skipping it leaves half the platform features either broken or running on a slow MySQL fallback.
High-volume Open Source stores (>500 orders/day)
On Magento Open Source the MySQL queue table works fine up to roughly 1,000 jobs per hour. Past that point lock contention on queue_message starts adding latency to checkout — async order emails back up, sales rule recalculations stall, and the table grows to multiple gigabytes. Switching the async-email and sales-rule consumers to RabbitMQ removes the bottleneck and keeps checkout under 500 ms even during sale spikes.
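To check whether the MySQL fallback is already the bottleneck, backlog per queue can be inspected directly against the Magento database. Table names come from the MysqlMq module schema; the database name and credentials are placeholders:

```shell
# Depth per queue in the MySQL fallback. A count that only ever grows
# is the signal to move that queue to the amqp connection.
mysql -u magento -p magento_db -e "
  SELECT q.name, COUNT(*) AS messages
  FROM queue q
  JOIN queue_message_status s ON s.queue_id = q.id
  GROUP BY q.name
  ORDER BY messages DESC;"
```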
Multi-source inventory / B2B catalogue deployments
Multi-Source Inventory (MSI) reservations and B2B shared-catalogue updates publish dozens of messages per cart update on a busy store. The async indexer queue keeps PLP filters consistent without blocking the request thread. Both features ship configured for the amqp connection out of the box — running them on the MySQL fallback queue is a documented anti-pattern that surfaces as stale stock and out-of-date catalogue prices.
Async-webhook integrations (Klaviyo, Algolia, Akeneo)
Modern third-party connectors push customer events, product updates and PIM-driven attribute changes through async webhooks. Each webhook is a queue message: the request handler enqueues the payload then returns 200 immediately, while a consumer ships the data to the downstream API. RabbitMQ keeps the integration retry-safe and decoupled — if Klaviyo or Algolia is down, messages queue up and process when the API recovers, instead of blowing up storefront response times.
Three traps that take RabbitMQ from quietly humming to quietly broken
Every queue incident I've been paged into in the last three years collapses to one of these three root causes. Avoid them and the broker stays boring — which is the goal.
Not running consumers at all
The single most common production incident I’m called in to fix: publishers are happily pushing messages to RabbitMQ, but no consumer process is running, so the queue fills indefinitely. Symptoms include orders stuck in “pending email”, stock reservations never released, and admin grids showing stale data. Always verify that queue:consumers:list returns the expected workers and that rabbitmqctl list_queues shows depth trending to zero, not climbing.
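A quick health check along those lines; all three commands assume a live Magento install and broker on the same host:

```shell
# 1. Which consumers does Magento know about?
bin/magento queue:consumers:list

# 2. Are worker processes actually running right now?
pgrep -af 'queue:consumers:start'

# 3. Is depth draining? Run twice, a minute apart, and compare counts.
rabbitmqctl list_queues name messages consumers
```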
Running consumers without a supervisor
A bare php bin/magento queue:consumers:start … invocation dies on the first uncaught exception, memory leak or OOM kill. Without a supervisor (systemd, supervisord, a Kubernetes Deployment) the process never restarts and the queue silently backs up. Always wrap each consumer in a unit file with Restart=always, set --max-messages=10000 so the process recycles before memory creeps up, and alert on consumer process count, not just queue depth.
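A minimal systemd unit along those lines; paths, the user and the consumer name are placeholders to adapt:

```ini
# /etc/systemd/system/magento-consumer-async-operations.service (sketch)
[Unit]
Description=Magento consumer: async.operations.all
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/www/magento
ExecStart=/usr/bin/php bin/magento queue:consumers:start async.operations.all --single-thread --max-messages=10000
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Restart=always covers both crashes and the deliberate exit after --max-messages, so a fresh worker is always running.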
Cron-runner conflict on the consumers_runner flag
Magento ships a consumers_runner cron job that auto-starts every consumer. Leaving the cron flag on while also running supervisor-managed workers gives you duplicate consumers fighting over the same queue — half the messages process twice, half not at all. Either flip cron_consumers_runner -> cron_run -> false in env.php (use supervisor) or leave cron in charge and skip supervisor entirely. Pick one. Document which.
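If supervisor wins, the env.php side looks roughly like this, using the documented cron_consumers_runner keys:

```php
// app/etc/env.php (fragment): disable the cron-based runner so that
// only the supervisor-managed workers consume from the queues.
'cron_consumers_runner' => [
    'cron_run' => false,
    'max_messages' => 10000,
    'consumers' => []
],
```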
Magento RabbitMQ — frequently asked questions
RabbitMQ vs the MySQL queue — when must I use RabbitMQ?
You must use RabbitMQ on Adobe Commerce (it is required by the framework) and on any Magento Open Source store pushing more than ~1,000 jobs per hour through the queue layer — typically anything over 500 orders per day, or any deployment using B2B, Multi-Source Inventory or async webhooks. The MySQL fallback (the db connection in queue.xml) is fine for development and for small Open Source stores below those thresholds, but it shares the main Magento database, locks on the queue_message table during contention, and does not scale horizontally. RabbitMQ is a dedicated broker with its own connection pool, persistent storage and clustering, so adding workers genuinely scales throughput.
How many consumers should I run, and how many workers per queue?
Start with one worker process per consumer name and scale up only the hot queues. async.operations.all, product_action_attribute.update, sales.rule.update.coupon.usage and the MSI reservation consumers tend to be the busiest on a typical store; everything else can stay at one worker. Use rabbitmqctl list_queues name messages consumers to see depth and worker count side by side, then add workers to any queue where depth grows faster than it drains. The hard cap is CPU — each consumer is a full PHP process, so 4-8 workers per CPU core is the practical ceiling. Most Open Source stores need fewer than 10 worker processes total; Adobe Commerce on a busy B2B catalogue can run 30+.
Why are my consumers dying silently after a few hours?
Almost always a PHP memory leak. Long-lived PHP processes accumulate memory across messages and eventually hit memory_limit, at which point PHP terminates the process with no useful log line. The fix is the --max-messages flag: bin/magento queue:consumers:start <name> --single-thread --max-messages=10000 tells the consumer to exit cleanly after N messages, after which supervisor / systemd starts a fresh worker. 10,000 is a safe default; tune it lower if a specific handler is leaky. Also set memory_limit in the CLI php.ini high enough (1G is reasonable) so the worker survives the message batch before recycling. Pair this with alerting on consumer process count, not queue depth, so you find out the moment a worker dies and supervisor fails to restart it.
Can I cluster RabbitMQ for high availability, and what is the right sharding strategy?
Yes, but think carefully about the strategy. RabbitMQ supports two HA modes: classic mirroring (deprecated in 3.9, removed in 4.0) and quorum queues (the current recommended option). Quorum queues use the Raft consensus protocol and survive losing one node in a three-node cluster without data loss. For Magento workloads, declare critical queues (async.operations.all, sales-rule recalc, MSI reservations) as quorum and leave low-priority queues classic for throughput. Adobe Commerce Cloud handles all of this for you on the managed broker. For self-hosted setups, CloudAMQP and AWS MQ both expose quorum-queue support with sensible defaults — managed almost always beats DIY clustering once you account for monitoring, backups and patching.
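On a self-managed broker, one way to get a quorum queue is a rabbitmqadmin declaration before Magento first publishes. The queue and vhost names below are illustrative:

```shell
# Declare a durable quorum queue ahead of Magento's own declaration.
# If Magento later re-declares the same queue with different arguments
# the broker rejects the declare, so keep this consistent with your
# queue_topology.xml.
rabbitmqadmin --vhost=magento declare queue \
  name=async.operations.all durable=true \
  arguments='{"x-queue-type":"quorum"}'
```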
Does Hyvä affect queue or consumer usage at all?
No. RabbitMQ is a backend concern — the queue layer runs entirely on the PHP side of Magento, before any HTML rendering happens. Switching the storefront theme from Luma to Hyvä changes how the browser receives and renders HTML, but does not touch publishers, consumers, queue declarations or the AMQP connection. A Hyvä migration neither speeds up nor slows down message throughput. The two systems are completely orthogonal, and both are worth tuning independently. If you are migrating to Hyvä on a high-volume store, audit the consumer setup at the same time — the front-of-house win loses some of its shine if checkout async-email is stuck behind a cold MySQL queue.
How do I monitor queue depth and consumer health in production?
Three layers. Layer one is the RabbitMQ Management UI on port 15672 (enable with rabbitmq-plugins enable rabbitmq_management) — gives a live dashboard of queue depth, message rate, consumer count and connection health. Layer two is CLI checks via rabbitmqctl list_queues name messages consumers messages_ready messages_unacknowledged, scriptable into your monitoring stack (Prometheus rabbitmq_exporter, Datadog agent, New Relic integration). Layer three is application-level: alert on Magento's own bin/magento queue:consumers:list output (consumer process count) and on queue depth growth rate — depth climbing without bound is the canonical signal that a consumer has died or a handler is throwing. Send all three layers to PagerDuty for prod.
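Layer two scripts easily. As a small sketch, an awk filter over the rabbitmqctl output flags queues above a depth threshold; the sample lines below stand in for real broker output:

```shell
# Print the name of every queue whose depth (2nd column) exceeds $1.
# Expected columns: name, messages, consumers -- the layout produced by
# `rabbitmqctl list_queues name messages consumers`.
flag_deep_queues() {
  awk -v max="$1" 'NF >= 3 && $2+0 > max+0 { print $1 }'
}

# Sample lines fed in for illustration; in production pipe rabbitmqctl in.
printf '%s\n' \
  'async.operations.all 12043 0' \
  'product_action_attribute.update 3 1' \
  | flag_deep_queues 1000    # -> async.operations.all
```

Wire the output into your alerting: a non-empty result means a consumer has died or a handler is throwing.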
Want a RabbitMQ + consumer audit on your Magento store?
Send your storefront URL — I will review queue topology, consumer supervisor setup, cron-runner config and message-rate metrics, then reply with a written remediation plan, fixed-price quote and earliest start date. 24-business-hour turnaround.