You’re reading the final part of a 3-Part Series on Paxos Consensus Algorithms in Distributed Systems.


In Part 1, Alice and Bob fought for a lock using Paxos. In Part 2, we explored Paxos in the wild — messy situations like minority partitions, delayed proposals, and dueling proposers. Paxos ensured safety, but left developers with plenty of complexity to untangle.

Now, in Part 3, we’ll see how Raft tackles the same scenarios — with a much simpler leader–follower model.


Normal Case

In Paxos, both Alice and Bob could try proposing values. Acceptors had to juggle these and eventually converge on one.

In Raft, the story is simpler. A single node is elected as the Leader. The client sends its proposal to the leader, and the leader replicates it to its followers. Once a majority of the cluster (the leader included) has accepted the entry, the leader commits it and informs everyone.
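To make the flow concrete, here is a minimal Go sketch of that happy path. The Leader type, the simulated follower acknowledgments, and the field names are illustrative assumptions, not a real Raft implementation; the point is simply that the leader appends the entry to its own log and commits once a majority of the cluster (itself included) has stored it.

```go
package main

import "fmt"

// Toy model of the normal case: the leader appends the client's proposal to
// its own log, "replicates" it, and commits once a majority of the cluster
// (leader included) has stored the entry.

type Entry struct {
	Term  int
	Value string
}

type Leader struct {
	term        int
	log         []Entry
	clusterSize int
	commitIndex int
}

// Propose appends the value locally and commits it if a majority has it.
// followerAcks simulates which followers successfully appended the entry.
func (l *Leader) Propose(value string, followerAcks []bool) bool {
	l.log = append(l.log, Entry{Term: l.term, Value: value})

	votes := 1 // the leader counts itself
	for _, ok := range followerAcks {
		if ok {
			votes++
		}
	}

	if votes > l.clusterSize/2 {
		l.commitIndex = len(l.log) - 1 // committed: safe to tell the client
		return true
	}
	return false // not yet committed; a real leader keeps retrying replication
}

func main() {
	leader := &Leader{term: 3, clusterSize: 5, commitIndex: -1} // -1: nothing committed yet
	// Leader plus two followers have the entry: 3 of 5 is a majority.
	committed := leader.Propose("lock=alice", []bool{true, true, false, false})
	fmt.Println("committed:", committed, "commitIndex:", leader.commitIndex)
}
```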



Edge Case 1 — Leader Crash — No Lost Commit

In Paxos, Alice could succeed in getting a majority but still stall if acknowledgments were lost or she disappeared before hearing back. The value was safe, but Alice might never know.


Raft avoids this ambiguity by making the leader responsible for all commits.


Alice’s case:


In Paxos, this might look as if Alice’s proposal was simply lost.

But in Raft, the leader only tells Alice “success” after the entry is committed. If the leader crashes before that, a new leader is elected, and that leader is guaranteed to hold every entry that was already committed, so a crash cannot erase anything Alice was told succeeded.



Result: No phantom successes. Alice is only acknowledged after the entry is committed, and Raft’s election rules ensure that a committed entry survives any change of leader.
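That guarantee comes from Raft’s voting rule: a node grants its vote only to a candidate whose log is at least as up to date as its own, so any candidate able to win an election already holds every committed entry. A small sketch of that comparison, with illustrative names and values:

```go
package main

import "fmt"

// Sketch of Raft's "up-to-date" check used when granting votes: a candidate's
// log wins if its last entry has a higher term, or the same term and an
// equal-or-greater index. Because a committed entry sits on a majority of
// nodes, no candidate missing it can gather a majority of votes.

type LogInfo struct {
	LastTerm  int
	LastIndex int
}

// candidateIsUpToDate reports whether a voter may grant its vote.
func candidateIsUpToDate(candidate, voter LogInfo) bool {
	if candidate.LastTerm != voter.LastTerm {
		return candidate.LastTerm > voter.LastTerm
	}
	return candidate.LastIndex >= voter.LastIndex
}

func main() {
	voter := LogInfo{LastTerm: 3, LastIndex: 7}   // this voter stored Alice's entry
	behind := LogInfo{LastTerm: 2, LastIndex: 9}  // longer log, but from an older term
	current := LogInfo{LastTerm: 3, LastIndex: 7} // holds Alice's entry

	fmt.Println("candidate missing the entry gets this vote:", candidateIsUpToDate(behind, voter))  // false
	fmt.Println("candidate holding the entry gets this vote:", candidateIsUpToDate(current, voter)) // true
}
```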


Edge Case 2 — Simultaneous Proposals — No Dueling


In Paxos, Alice and Bob could both propose at the same time, causing the cluster to juggle values until a higher-numbered proposal won.

In Raft, this competition is eliminated by the single-leader model.


Alice vs Bob: both send their requests, but only the current leader (Node 4) can append them to the log. The leader simply orders them one after the other; there is no contest between proposal numbers.

If leadership changes mid-flight (say Node 4 crashes), a new leader is elected for a higher term. Requests sent to the old leader are retried against the new one, and followers ignore any message stamped with the stale term.


Result: Raft eliminates dueling proposers. Alice’s and Bob’s requests are funneled through one leader, keeping the log consistent.
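One way to picture that funnel: a node that is not the current leader refuses proposals and points the client at the leader instead. The Node type, the leader hint, and the node IDs below are assumptions made for illustration.

```go
package main

import "fmt"

// Toy model of the single-leader funnel: only the current leader accepts
// client proposals; any other node tells the client where the leader is.

type Node struct {
	id       int
	leaderID int      // this node's current belief about who leads
	log      []string // the leader's ordered log of accepted proposals
}

// HandleProposal appends the value if this node is the leader, otherwise redirects.
func (n *Node) HandleProposal(value string) string {
	if n.id != n.leaderID {
		return fmt.Sprintf("not leader, retry at node %d", n.leaderID)
	}
	n.log = append(n.log, value) // one leader, one log, one order
	return "accepted"
}

func main() {
	leader := &Node{id: 4, leaderID: 4}
	follower := &Node{id: 1, leaderID: 4}

	fmt.Println("Alice -> node 4:", leader.HandleProposal("lock=alice"))
	fmt.Println("Bob   -> node 1:", follower.HandleProposal("lock=bob"))
	fmt.Println("Bob   -> node 4:", leader.HandleProposal("lock=bob"))
	fmt.Println("leader log:", leader.log) // Alice first, then Bob; no duel
}
```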


Edge Case 3 — Minority Partition


In Paxos, even a minority group could keep sending proposals that ultimately wouldn’t succeed, leading to wasted work.



Raft takes a stricter stance. If a node is isolated in a minority partition, it simply cannot make progress. Only the majority side can elect a leader and continue processing proposals. When the network heals, the isolated node catches up with the leader’s log.




Alice’s side (majority):

Alice sends her proposal to the cluster. Node 4 is elected as Leader.


Bob’s side (minority):

Bob sends a proposal to Node 1. But Node 1 is in a minority partition (cut off from the rest).


Result: Only the majority moves forward. Alice’s proposal succeeds, Bob’s stalls — wasted work is avoided and consistency preserved.
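The reason the minority side stalls is plain arithmetic: both electing a leader and committing an entry require a strict majority of the full cluster, not just of the nodes currently reachable. A tiny sketch, with a hypothetical hasQuorum helper:

```go
package main

import "fmt"

// Why a minority partition stalls: elections and commits both need a strict
// majority of the whole cluster, not merely of the reachable nodes.

// hasQuorum reports whether `reachable` nodes out of `clusterSize` form a majority.
func hasQuorum(reachable, clusterSize int) bool {
	return reachable > clusterSize/2
}

func main() {
	const clusterSize = 5

	// Alice's side: Node 4 can reach two other nodes, three of five in total.
	fmt.Println("majority side can elect and commit:", hasQuorum(3, clusterSize)) // true

	// Bob's side: Node 1 is cut off with at most one peer.
	fmt.Println("minority side can elect and commit:", hasQuorum(2, clusterSize)) // false
}
```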


Edge Case 4 — Delayed Proposal


In Paxos, old proposals could show up late and confuse acceptors, forcing extra reconciliation rounds.

Raft avoids this confusion. If a proposal arrives late after the cluster has already agreed on another value, followers simply reject it and move on. Only the leader’s current proposal can make progress.


In Alice and Bob’s case:


Bob’s late message carries an old term number, so the followers reject it on sight.

Alice’s active proposal comes from the current leader in the current term, so it replicates and commits as usual.


Result: Bob’s stale proposal is ignored. Alice’s proposal commits, and the log stays clean.
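Under the hood this is just a term comparison: every message carries the sender’s term, and a follower drops anything older than its own current term. A minimal sketch, with illustrative names:

```go
package main

import "fmt"

// Sketch of how a follower filters delayed messages: anything carrying a term
// lower than the follower's current term is rejected outright.

type Follower struct {
	currentTerm int
}

// HandleAppend accepts an entry only from the current (or a newer) term.
func (f *Follower) HandleAppend(senderTerm int, value string) string {
	if senderTerm < f.currentTerm {
		return fmt.Sprintf("rejected %q: stale term %d < %d", value, senderTerm, f.currentTerm)
	}
	if senderTerm > f.currentTerm {
		f.currentTerm = senderTerm // a newer leader exists; follow it
	}
	return fmt.Sprintf("accepted %q at term %d", value, f.currentTerm)
}

func main() {
	f := &Follower{currentTerm: 5}
	fmt.Println(f.HandleAppend(3, "lock=bob"))   // Bob's delayed proposal from an old term
	fmt.Println(f.HandleAppend(5, "lock=alice")) // Alice's proposal from the current leader
}
```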


Wrap-Up for Part 3


Raft takes Paxos’s safety guarantees and adds clarity: one leader per term, acknowledgments only after a majority has the entry, no progress from a minority partition, and stale proposals rejected by their term number.


Where Paxos showed consensus is possible, Raft made it practical.


Wrapping Up the Series


Over this 3-part series, we followed Alice and Bob as they fought for a lock across a cluster of unreliable nodes. Along the way, we explored why consensus is such a difficult problem in distributed systems — and how algorithms like Paxos and Raft rise to the challenge.





The Bigger Lesson


Consensus isn’t just about picking a value. It’s about surviving the messiness of real distributed systems — crashes, partitions, delays — while keeping the system both safe and live.


For Alice and Bob, that means fewer battles and more predictable outcomes. For us, it means distributed systems we can trust to keep running, even when the world around them falls apart.