In this article, I’ll walk you through the following classic topics in concurrent programming:

  1. The producer-consumer problem
  2. The dining philosophers problem
  3. Rate limiting

Each of these topics provides valuable insights into designing robust and efficient concurrent systems. Let’s dive in.

The Producer-Consumer Problem

Imagine a small bakery that makes cupcakes. There are people in the kitchen (producers) who bake cupcakes, and there are clerks at the front counter (consumers) who serve them to customers. But there’s a twist: the cupcakes can’t go directly from the oven to the customer; they must first be placed on a small tray with limited space. If the tray gets full, the bakers must wait. If the tray is empty, the clerks must wait.

This seemingly simple situation models a very common problem in computer science: the producer-consumer problem. At its heart is a shared space, a queue or buffer, that connects two independent processes: one that generates data and another that consumes it. In programming, this is especially important when working with concurrency, where multiple things happen at once.

Let’s imagine this scenario using Go. We'll simulate a bunch of bakers who create random "cupcakes" and a team of clerks who "serve" them. First, we’ll build a simple version and later improve it so it can shut down gracefully.
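A quick note on boilerplate: to keep the snippets focused, I’ll omit the package and import lines from here on. If you want to run any of the examples, assume a preamble along these lines (sync is only needed once WaitGroups and mutexes show up):

package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)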

Step 1: Basic Producer

Each producer will randomly create a number, sleep for a little while, and send the number through a channel. Here's how we could write that:

func cupcakeBaker(id int, stop <-chan struct{}, tray chan<- int) {
	for {
		select {
		case <-stop:
			return
		default:
			cupcake := rand.Intn(100)
			time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))
			select {
			case tray <- cupcake:
				fmt.Printf("👨‍🍳 Baker %d baked cupcake %d\n", id, cupcake)
			case <-stop:
				return
			}
		}
	}
}

Here, tray is the channel that acts as the shared buffer. The stop channel allows us to signal all bakers to stop working.

Step 2: Basic Consumer

Now let’s define how clerks work. They just stand ready at the counter and grab a cupcake from the tray whenever it's available.

func counterClerk(id int, tray <-chan int) {
	for cupcake := range tray {
		fmt.Printf("🧍 Clerk %d served cupcake %d\n", id, cupcake)
	}
}

The range loop will automatically end when the tray channel is closed, signaling that no more cupcakes are coming.

Step 3: Putting It All Together

Let’s wire it all in the main function and launch our cupcake operation:

func main() {
	tray := make(chan int, 5)           // A tray that holds up to 5 cupcakes
	stop := make(chan struct{})         // Signal to stop production

	for i := 0; i < 5; i++ {
		go cupcakeBaker(i, stop, tray)
	}
	for i := 0; i < 5; i++ {
		go counterClerk(i, tray)
	}

	time.Sleep(10 * time.Second)
	close(stop)
}

At this point, the producers will stop creating new items, but the consumers might still be blocked waiting on the channel. This leaks goroutines and, in a longer-lived program, can leave it hanging. Let’s improve it.

Graceful Shutdown: Making the Bakery Close Smoothly

Let’s now improve our cupcake bakery so that it shuts down properly without leaving any workers hanging around. To do this, we need a way to:

  1. Let all bakers know when to stop.
  2. Wait until they’re all done.
  3. Tell the clerks that no more cupcakes are coming.
  4. Wait until all clerks finish serving the remaining cupcakes.

To coordinate this graceful shutdown, we’ll use a sync.WaitGroup, a helpful tool in Go that waits for a collection of goroutines to finish.

We update our baker function to accept a WaitGroup. This way, the main routine knows when every baker has finished their job.

func cupcakeBaker(id int, wg *sync.WaitGroup, stop <-chan struct{}, tray chan<- int) {
	defer wg.Done()

	for {
		select {
		case <-stop:
			return
		default:
			cupcake := rand.Intn(100)
			time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

			select {
			case tray <- cupcake:
				fmt.Printf("👨‍🍳 Baker %d baked cupcake %d\n", id, cupcake)
			case <-stop:
				return
			}
		}
	}
}

Just like bakers, clerks now also use a WaitGroup. Once all cupcakes are served and the tray is empty (i.e., the channel is closed), they finish their work and signal completion.

func counterClerk(id int, wg *sync.WaitGroup, tray <-chan int) {
	defer wg.Done()

	for cupcake := range tray {
		fmt.Printf("🧍 Clerk %d served cupcake %d\n", id, cupcake)
	}
}

Now everything comes together. We set up the tray and stop signal, launch the workers, let them run for a while, and then begin the shutdown process:

func main() {
	tray := make(chan int, 5)
	stop := make(chan struct{})
	bakerGroup := &sync.WaitGroup{}
	clerkGroup := &sync.WaitGroup{}

	for i := 0; i < 5; i++ {
		bakerGroup.Add(1)
		go cupcakeBaker(i, bakerGroup, stop, tray)
	}

	for i := 0; i < 5; i++ {
		clerkGroup.Add(1)
		go counterClerk(i, clerkGroup, tray)
	}

	time.Sleep(10 * time.Second)
	close(stop)
	bakerGroup.Wait()
	close(tray)
	clerkGroup.Wait()
}

With this version, after 10 seconds:

  1. The stop channel is closed and every baker returns.
  2. bakerGroup.Wait() blocks until all the bakers have finished.
  3. The tray channel is closed, telling the clerks that no more cupcakes are coming.
  4. The clerks drain whatever is left on the tray, and clerkGroup.Wait() returns once they’re all done.

The Dining Philosophers Problem

Imagine five philosophers sitting at a round table. They're deep thinkers, but they also love spaghetti. In front of each philosopher is a plate, and between each pair of plates lies a single fork, five in total. The rule is: to eat, a philosopher needs both the fork on their left and the fork on their right. Thinking can be done anytime, but eating requires cooperation.

Let’s turn this into a Go program.

Step 1: Philosophers as Goroutines

Each philosopher is represented as a goroutine. The forks are modeled as mutexes, locks that grant exclusive access to a shared resource.

When a philosopher wants to eat:

  1. They lock the first fork (say, the one on the left).
  2. Then they try to lock the second fork (the right one).
  3. If they get both, great! They eat.
  4. After eating, they unlock both forks and go back to thinking.

Here’s what one philosopher looks like in code:

func philosopher(index int, firstFork, secondFork *sync.Mutex) {
	for {
		fmt.Printf("Philosopher %d is thinking\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		firstFork.Lock()
		secondFork.Lock()

		fmt.Printf("Philosopher %d is eating\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		secondFork.Unlock()
		firstFork.Unlock()
	}
}

Step 2: One Fork Between Each Philosopher

Now let’s seat our five philosophers and lay out the five forks:

func main() {
	forks := [5]sync.Mutex{}
	go philosopher(0, &forks[4], &forks[0])
	go philosopher(1, &forks[0], &forks[1])
	go philosopher(2, &forks[1], &forks[2])
	go philosopher(3, &forks[2], &forks[3])
	go philosopher(4, &forks[3], &forks[4])
	select {}
}

Each philosopher tries to grab their left and right forks. But do you notice something suspicious?

The Deadlock Trap

This setup looks symmetrical, but it’s dangerous.

Imagine this:

  1. All five philosophers get hungry at the same time.
  2. Each one grabs their first fork.
  3. Each then waits for their second fork, which is already held by a neighbor.

Nobody can move forward. Everyone is stuck. That’s a deadlock.

In concurrency terms, this situation satisfies all four of Coffman’s conditions for deadlock:

  1. Mutual exclusion: Each fork is held by only one philosopher.
  2. Hold and wait: Each holds one fork and waits for another.
  3. No preemption: Forks aren't forcibly taken away.
  4. Circular wait: Everyone is waiting in a circle.

Breaking the Deadlock

How can we prevent this? One simple trick: change the order in which just one philosopher picks up the forks.

Let’s reverse the order for the first philosopher:

func main() {
	forks := [5]sync.Mutex{}
	go philosopher(0, &forks[0], &forks[4]) // reversed order
	go philosopher(1, &forks[0], &forks[1])
	go philosopher(2, &forks[1], &forks[2])
	go philosopher(3, &forks[2], &forks[3])
	go philosopher(4, &forks[3], &forks[4])
	select {}
}

Now there’s no circular wait. One break in the chain is enough to avoid deadlock entirely.
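The general form of this trick is called lock ordering (or a resource hierarchy): assign every lock a global rank and always acquire locks in ascending rank order. As a sketch of that idea, not part of the original program, each philosopher could sort their two fork indices before locking:

// Variant of the philosopher that always locks the lower-numbered fork
// first, so no set of acquisitions can ever form a cycle.
func orderedPhilosopher(index, left, right int, forks *[5]sync.Mutex) {
	first, second := left, right
	if first > second {
		first, second = second, first
	}
	for {
		fmt.Printf("Philosopher %d is thinking\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		forks[first].Lock()
		forks[second].Lock()

		fmt.Printf("Philosopher %d is eating\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		forks[second].Unlock()
		forks[first].Unlock()
	}
}

With this variant the seating order in main no longer matters: philosopher 0 would automatically lock fork 0 before fork 4.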

TryLock Approach

Go’s sync.Mutex has offered a TryLock() method since Go 1.18. It attempts to take the lock and, if the lock is already held, returns false immediately instead of blocking. Let’s use it:

func philosopher(index int, leftFork, rightFork *sync.Mutex) {
	for {
		fmt.Printf("Philosopher %d is thinking\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		leftFork.Lock()
		if rightFork.TryLock() {
			fmt.Printf("Philosopher %d is eating\n", index)
			time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))
			rightFork.Unlock()
		}
		leftFork.Unlock()
	}
}

This version prevents deadlocks because no one gets stuck holding one fork forever. But there’s a downside: philosophers might starve. They keep trying and failing to eat, spinning uselessly. It’s deadlock-free, but not fair.

Channels to the Rescue

Let’s ditch mutexes altogether and use channels to represent forks. Each fork is now a channel of capacity 1. If the channel has a value, the fork is available. If it’s empty, the fork is in use.

Here’s how this approach looks:

func philosopher(index int, leftFork, rightFork chan bool) {
	for {
		fmt.Printf("Philosopher %d is thinking\n", index)
		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))

		select {
		case <-leftFork:
			select {
			case <-rightFork:
				fmt.Printf("Philosopher %d is eating\n", index)
				time.Sleep(time.Millisecond * time.Duration(rand.Intn(1000)))
				rightFork <- true
			default:
			}
			leftFork <- true
		}
	}
}

And the setup:

func main() {
	var forks [5]chan bool
	for i := range forks {
		forks[i] = make(chan bool, 1)
		forks[i] <- true
	}
	go philosopher(0, forks[4], forks[0])
	go philosopher(1, forks[0], forks[1])
	go philosopher(2, forks[1], forks[2])
	go philosopher(3, forks[2], forks[3])
	go philosopher(4, forks[3], forks[4])
	select {}
}

This version mimics the behavior of TryLock using non-blocking select statements. If the philosopher can’t get both forks, they put back the one they took and go back to thinking.

Rate Limiting

Imagine you're managing the entrance to a concert venue. People (requests) are showing up at random times, eager to get inside. But you can't let them all in at once; there's a limit to how many people can be safely admitted per second. So, you devise a system: each person needs a ticket (token) to enter, and tickets are printed at a steady pace, say two per second.

You store these tickets in a small bucket. If someone shows up and there's a ticket in the bucket, they take one and enter. If the bucket is empty, they must wait for the next one to be printed.

Welcome to the Token Bucket Algorithm. Picture this:

  1. A bucket holds tokens, up to a fixed capacity.
  2. Tokens are added to the bucket at a steady rate.
  3. Each incoming request must take a token before it proceeds.
  4. If the bucket is empty, the request waits for the next token.

Let’s start coding this in Go with channels.

Step 1: Defining the Token Bucket Structure

We’ll use a channel to represent the bucket. Tokens are just empty values (struct{}), and a goroutine with a ticker will generate them at a fixed rate.

type ChannelRate struct {
	bucket chan struct{}
	ticker *time.Ticker
	done   chan struct{}
}

Here, bucket holds tokens, ticker triggers token production, and done lets us shut everything down.

Step 2: Building the Rate Limiter

We’ll write a constructor to create the limiter, fill the bucket, and start producing tokens.

func NewChannelRate(rate float64, limit int) *ChannelRate {
	ret := &ChannelRate{
		bucket: make(chan struct{}, limit),
		ticker: time.NewTicker(time.Duration(1/rate * float64(time.Second))),
		done:   make(chan struct{}),
	}

	for i := 0; i < limit; i++ {
		ret.bucket <- struct{}{}
	}

	go func() {
		for {
			select {
			case <-ret.done:
				return
			case <-ret.ticker.C:
				select {
				case ret.bucket <- struct{}{}:
				default:
				}
			}
		}
	}()

	return ret
}

Tokens are added periodically, but only if the bucket isn’t full. That’s important: without the inner default branch, the filler goroutine would block whenever the bucket is at capacity.

Step 3: Waiting for a Token

When a request comes in, we call Wait(). It simply waits until a token is available:

func (s *ChannelRate) Wait() {
	<-s.bucket
}

And to clean up:

func (s *ChannelRate) Close() {
	close(s.done)
	s.ticker.Stop()
}

Burst vs Steady Flow

Let’s say the bucket is full and a burst of 4 requests arrives: they’ll all go through immediately. After that, new tokens arrive every 500ms (rate = 2/sec), so the fifth request must wait. This way, bursts are allowed but the average rate is still enforced.
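Here’s a minimal sketch of that behavior using the ChannelRate limiter we just built (the printed durations are approximate and will vary from run to run):

func main() {
	limiter := NewChannelRate(2, 4) // 2 tokens per second, bucket holds 4
	defer limiter.Close()

	start := time.Now()
	for i := 1; i <= 6; i++ {
		limiter.Wait()
		// The first 4 requests drain the pre-filled bucket almost instantly;
		// the rest are admitted roughly every 500ms.
		fmt.Printf("request %d admitted after %v\n", i, time.Since(start))
	}
}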

Now here’s the twist: what if we don’t want to spawn a goroutine per limiter?

We can calculate tokens on the fly instead of adding them periodically. We store:

  1. The refill rate (tokens per second).
  2. The bucket size (the burst limit).
  3. How many tokens are currently available.
  4. The time the last token was issued.

Here’s the struct:

type Limiter struct {
	mu         sync.Mutex
	rate       int
	bucketSize int
	nTokens    int
	lastToken  time.Time
}

Initialization is simple:

func NewLimiter(rate, limit int) *Limiter {
	return &Limiter{
		rate:       rate,
		bucketSize: limit,
		nTokens:    limit,
		lastToken:  time.Now(),
	}
}

The Wait Method: No Goroutines Needed

This method handles everything: checking tokens, calculating new ones, and waiting if needed.

func (s *Limiter) Wait() {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.nTokens > 0 {
		s.nTokens--
		return
	}

	tElapsed := time.Since(s.lastToken)
	period := time.Second / time.Duration(s.rate)
	nTokens := int(tElapsed.Nanoseconds() / period.Nanoseconds())

	s.nTokens = nTokens
	if s.nTokens > s.bucketSize {
		s.nTokens = s.bucketSize
	}
	s.lastToken = s.lastToken.Add(time.Duration(nTokens) * period)

	if s.nTokens > 0 {
		s.nTokens--
		return
	}

	next := s.lastToken.Add(period)
	wait := time.Until(next)
	if wait >= 0 {
		time.Sleep(wait)
	}
	s.lastToken = next
}

When a request arrives, the limiter first checks whether there are any tokens available. If there are, it simply consumes one and allows the request to proceed without delay. But if the bucket is empty, things get a bit more interesting. Instead of blocking indefinitely or relying on a background process, the limiter calculates how much time has passed since the last token was generated. Using that elapsed time, it determines how many new tokens should have been added in the meantime.

If enough tokens are virtually “available” based on time, it updates its internal counters and continues. If not, the limiter waits just long enough until the next token would be produced and then proceeds. The beauty of this approach is that it’s entirely self-contained: there are no extra goroutines or tickers running in the background. Everything is handled in a single thread with careful time-based logic.
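Usage looks the same as before; here’s a minimal sketch with several concurrent callers (the Limiter and NewLimiter above are assumed). One detail worth noticing: Wait holds the mutex while sleeping, so blocked callers are admitted one period apart, which is exactly the steady rate we want.

func main() {
	limiter := NewLimiter(2, 4) // 2 tokens/sec, burst of 4
	var wg sync.WaitGroup

	start := time.Now()
	for i := 1; i <= 6; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			limiter.Wait() // consumes a token or sleeps until one is due
			fmt.Printf("request %d admitted after %v\n", id, time.Since(start))
		}(i)
	}
	wg.Wait()
}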

The channel-based version of the rate limiter works wonderfully for simple applications or internal services shared among a small number of users. It’s straightforward and easy to reason about. However, when building systems that serve a large number of concurrent users, like public APIs or multi-tenant platforms, the mutex-based limiter becomes the better choice. It avoids spawning an additional goroutine per limiter and saves memory, making it more efficient and scalable.

Go’s extended standard library even has a package for this: golang.org/x/time/rate. For production systems, it’s highly recommended.
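Here’s a minimal sketch of the same 2-per-second, burst-of-4 policy expressed with that package:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// rate.NewLimiter(r, b) allows r events per second with bursts up to b.
	limiter := rate.NewLimiter(rate.Limit(2), 4)

	start := time.Now()
	for i := 1; i <= 6; i++ {
		// Wait blocks until a token is available or the context is cancelled.
		if err := limiter.Wait(context.Background()); err != nil {
			return
		}
		fmt.Printf("request %d admitted after %v\n", i, time.Since(start))
	}
}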