Introduction

If you follow the message streaming space at all, you know it is dominated by a few big names. Apache Kafka has been the 800-pound gorilla since LinkedIn open-sourced it back in 2011. I've covered streaming-adjacent projects a few times in this series, like WarpStream, Apache Paimon, and Proton. So when I started seeing chatter about a message streaming platform written from scratch in Rust that was processing millions of messages per second, I had to take a look. Enter Apache Iggy (Incubating).

The project was created by Piotr Gankiewicz back in April 2023. The origin story is one I can appreciate as a developer. Piotr wanted to learn Rust and decided the best way to do that was to build something real. Having spent years working with distributed systems and tools like RabbitMQ, ZeroMQ, Kafka, and Aeron, he decided to build a message streaming platform.

What started as a learning exercise evolved into a project that now boasts nearly 4,000 stars on GitHub and was accepted into the Apache Incubator in February 2025, garnering 21 votes in favor and zero against.

The name, by the way, is short for Italian Greyhound. Piotr owns two of them. Small but extremely fast dogs. That tracks with what the project is going for.

Let's Dive In

So, what exactly is Iggy? It is a persistent message streaming platform, not a message broker. That is an important distinction. Think of it more along the lines of Kafka than RabbitMQ. It uses a similar conceptual model of streams, topics, partitions, and segments, which means if you've worked with Kafka, the mental model will be familiar.
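To make that hierarchy concrete, here is a minimal sketch in Python. This is my own illustration of the conceptual model, not Iggy's actual storage code: a stream contains topics, a topic contains partitions, and each partition is an append-only log split into segments.

```python
from dataclasses import dataclass, field

# Illustrative model only (not Iggy's real implementation): a stream holds
# topics, a topic holds partitions, and a partition is an append-only log
# divided into fixed-size segments.

@dataclass
class Segment:
    start_offset: int
    messages: list = field(default_factory=list)

@dataclass
class Partition:
    id: int
    segments: list = field(default_factory=list)

    def append(self, payload, segment_size=2):
        # Roll over to a new segment once the current one fills up.
        if not self.segments or len(self.segments[-1].messages) >= segment_size:
            next_offset = (
                self.segments[-1].start_offset + len(self.segments[-1].messages)
                if self.segments else 0
            )
            self.segments.append(Segment(start_offset=next_offset))
        seg = self.segments[-1]
        offset = seg.start_offset + len(seg.messages)
        seg.messages.append(payload)
        return offset

@dataclass
class Topic:
    name: str
    partitions: dict = field(default_factory=dict)

@dataclass
class Stream:
    name: str
    topics: dict = field(default_factory=dict)

stream = Stream("notifications")
stream.topics["alerts"] = Topic("alerts", partitions={1: Partition(1)})
part = stream.topics["alerts"].partitions[1]
offsets = [part.append(f"msg-{i}") for i in range(5)]
print(offsets)             # [0, 1, 2, 3, 4]
print(len(part.segments))  # 3 (segments hold 2, 2, and 1 messages)
```

The key property the sketch captures is that every message gets a monotonically increasing offset within its partition, and segments let the server expire or archive old data in whole-file chunks rather than message by message.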

The big difference is what is under the hood. Iggy is built in Rust with a thread-per-core, shared-nothing architecture using io_uring for I/O operations. If that doesn't mean much to you, the short version is that it is designed to squeeze every bit of performance out of modern Linux systems.

There is no garbage collector, no JVM, and the resource footprint is minimal compared to Kafka. Some early adopters have reported processing 20 million messages per second via TCP, which is pretty wild for a project this young.

The 0.6.0 release from December 2025 was a major milestone. The team completely rewrote the server, moving from the Tokio-based async runtime to a completion-based model using io_uring and compio. That is a significant architectural bet, and the benchmarks back it up. They report throughput exceeding 5,000 MB/s with appropriate hardware and sub-millisecond P99 latency. They even have a live benchmarks page you can check out.

One thing I find interesting is the transport protocol flexibility. Iggy supports QUIC, TCP, WebSocket, and HTTP, all with TLS. TCP gives you the best performance with a custom binary protocol, QUIC is interesting for lower latency on lossy networks, WebSocket is great if you need browser connectivity, and HTTP gives you a standard REST API for integration and debugging. That kind of flexibility in how clients connect is a nice touch.

The project also uses custom zero-copy serialization, working directly with binary data rather than enforcing a schema. This avoids the overhead that comes with traditional serialization and deserialization, which contributes to the performance numbers.
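To see why binary framing matters, here is a generic illustration in Python. This is not Iggy's actual wire format, just a comparison of a length-prefixed binary frame against a JSON envelope for the same payload:

```python
import json
import struct

# Generic illustration (not Iggy's real protocol): a length-prefixed binary
# frame vs. a JSON envelope carrying the same bytes.
payload = b"user 42 logged in"

# Binary: a 4-byte little-endian length header followed by the raw bytes.
frame = struct.pack("<I", len(payload)) + payload

# Text: the payload must be decoded, wrapped in a structure, and re-encoded.
envelope = json.dumps({"payload": payload.decode()}).encode()

print(len(frame))     # 21 bytes: 4-byte header + 17-byte payload
print(len(envelope))  # larger, plus parse/encode work on both ends

# Reading the binary frame back is a header peek and a slice, no parsing.
(length,) = struct.unpack_from("<I", frame)
assert frame[4:4 + length] == payload
```

The binary path never copies or transforms the payload bytes; the server can move them from socket to disk as-is, which is the intuition behind the zero-copy claim.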

Trying It Out

Getting started is straightforward. You can pull the Docker image with docker pull apache/iggy and run it with docker compose up. There is also a CLI tool you can install with cargo install iggy-cli and then just type iggy in your terminal.

The getting started guide walks you through building a producer and consumer application in Rust, which I thought was well written. The default credentials are username iggy and password iggy for the root user, which has full permissions. You can create additional users with granular permissions from there.

Let’s walk through a simple working scenario:

We’ll do the whole thing with just the built-in CLI tool inside Docker. This takes about 5 minutes and shows the core idea.

Step 1: Start the Iggy server (one-time)

git clone https://github.com/apache/iggy.git 
cd iggy 
# Set easy default login (only needed the first time) 
export IGGY_ROOT_USERNAME=iggy 
export IGGY_ROOT_PASSWORD=iggy  
# Start the server in the background 
docker compose up -d

The server is now running on your machine (data is saved in a local_data folder, so it survives restarts).

Step 2: Create a mailbox (stream + topic)

Open a new terminal and run these (the container name comes from the docker-compose file):

docker exec -it iggy-server /iggy --username iggy --password iggy stream create notifications  
docker exec -it iggy-server /iggy --username iggy --password iggy topic create notifications alerts 1 none

Step 3: Send a message (the “producer” side)

docker exec -it iggy-server /iggy --username iggy --password iggy message send --partition-id 1 notifications alerts "Hello from Iggy! User just logged in at 8:06 PM."

You can run this command as many times as you want; each one sends a new message.

Step 4: Receive the message (the “consumer” side)

In another terminal (or the same one):

docker exec -it iggy-server /iggy --username iggy --password iggy message poll --consumer 1 --offset 0 --message-count 5 --auto-commit notifications alerts 1

You’ll immediately see the messages printed out. The consumer “polls” for new messages and can keep running in a loop (or you can put real code behind it).
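Conceptually, that poll loop works like the following toy simulation in Python (my own sketch, not the real client): the consumer tracks the last committed offset and resumes from there on the next poll.

```python
# Toy simulation of offset-based polling with auto-commit (not the real
# Iggy client): the consumer remembers where it left off and each poll
# resumes from the next uncommitted offset.
log = [f"message-{i}" for i in range(7)]
committed = 0  # offset of the next message to read

def poll(count):
    global committed
    batch = log[committed:committed + count]
    committed += len(batch)  # --auto-commit: advance after delivery
    return batch

print(poll(5))  # first five messages
print(poll(5))  # only the remaining two
print(poll(5))  # nothing new yet, an empty batch
```

This is why the CLI command above takes an --offset and an --auto-commit flag: the server does not push messages, the consumer pulls from a position it (or the server, on its behalf) remembers.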

That’s it, we just built a working producer → streaming server → consumer pipeline in under 10 commands, which is pretty cool.

To summarize what just happened: we started a single-node Iggy server in Docker, created a stream named notifications, added an alerts topic with one partition, produced messages to that partition with the CLI, and polled them back from offset 0 as a consumer.

They have SDKs for Rust, C#, Java, Go, Python, and Node.js, with C++ and Elixir in progress. The Rust SDK is the most mature, as you might expect. The API for creating a producer is clean. You set up a client, point it at a stream and topic, configure batching and partitioning, and start sending messages. The consumer side follows a similar pattern, with support for consumer groups that distribute partitions across connected clients for horizontal scaling while preserving message ordering within each partition.
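The consumer-group idea boils down to partition assignment: each partition is owned by exactly one member of the group at a time, so per-partition ordering holds while the group scales out. Here is a rough sketch in Python; the round-robin strategy is my own assumption for illustration, not necessarily Iggy's exact algorithm:

```python
# Hypothetical round-robin assignment of partitions to consumer-group
# members. Each partition is owned by exactly one consumer, which preserves
# per-partition ordering while spreading load across the group.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign([1, 2, 3, 4, 5], ["c1", "c2"]))
# {'c1': [1, 3, 5], 'c2': [2, 4]}
```

Note the corollary: adding consumers beyond the partition count buys you nothing, since the extra members sit idle with no partitions to own.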

There is a Web UI for managing streams, topics, and partitions and for browsing messages, available as a separate Docker image at apache/iggy-web-ui. They also recently added a connectors framework where you can implement Source or Sink traits in Rust to build custom data pipelines. The 0.6.0 release added connectors for Apache Iceberg, Elasticsearch, and Apache Flink, which shows they are thinking about the broader data ecosystem. There is even an MCP (Model Context Protocol) server for providing context to LLMs, which is a forward-looking feature.

The Ecosystem and What's Coming

If I had a nit to pick, it is that clustering is not yet available. Iggy currently runs as a single node, with clustering based on Viewstamped Replication planned for the future. For production deployments where high availability is a hard requirement, that is a blocker. The team is aware of this, and it is on the roadmap, but it is worth knowing before you plan a deployment.

On the positive side, Iggy was recently listed on the Thoughtworks Technology Radar as worth experimenting with, which is validating. The project has a growing ecosystem with close to 20 repositories, a connector runtime, an MCP server, multiple SDKs, a CLI, and a Web UI. It’s looking like a robust start.

Summary

With projects like Iggy, Redpanda, and WarpStream, we are seeing a trend of rethinking message streaming infrastructure for modern hardware and cloud environments. Kafka is still the dominant player and will be for a while, but these newer projects are exploring different tradeoffs around performance, resource efficiency, and operational simplicity.

So, what the heck is Apache Iggy? It is a Rust-based, high-performance message streaming platform that is going after the performance crown in this space. It is not trying to be Kafka-compatible like WarpStream. Instead, it is its own thing with its own protocol and API, built from scratch to be as fast and lightweight as possible. If you are in a position where raw throughput and low latency are critical, and you can tolerate the single-node limitation for now, it is absolutely worth experimenting with.

Check out my other What the Heck is... articles at the links below: