Go channels are one of the language's signature features. They provide a structured way for goroutines to communicate and coordinate. Instead of manually sharing memory and managing locks, channels let goroutines send and receive values directly, ensuring that data is transferred correctly and synchronization is handled automatically.

But what really happens when we write something like:

ch := make(chan int)
go func() {
    ch <- 42
}()
value := <-ch

Under the hood, channels are not magic. They are a carefully engineered data structure in the Go runtime, combining a ring buffer, wait queues, and integration with the scheduler.

In this post, we'll explore the internals: how channels are represented, how send and receive operations work, what happens when you close a channel, how select interacts with channels, and how the scheduler and memory model come into play.

Historical Context

Go didn't invent the concept of channels. They are inspired by Communicating Sequential Processes (CSP), introduced by Tony Hoare in 1978. The core idea: processes don't share memory directly, they communicate by passing messages.

Other influences include Rob Pike's earlier languages Newsqueak, Alef, and Limbo, which experimented with CSP-style channels as first-class values and directly shaped Go's design.

The channel primitive embodies the CSP principle that underpins Go's concurrency philosophy: don't communicate by sharing memory; share memory by communicating.

In contrast to Java's BlockingQueue or pthreads condition variables, Go chose to make channels built into the language, with first-class syntax and tight runtime integration. This allows channels to express communication patterns naturally while remaining safe and type-checked.

hchan: Memory Layout & Implementation Details

Every channel created with make(chan T, N) is represented internally by an hchan struct. Here's a simplified view:

type hchan struct {
    qcount   uint           // number of elements in the buffer
    dataqsiz uint           // buffer capacity
    buf      unsafe.Pointer // circular buffer for elements
    elemsize uint16         // size of each element
    closed   uint32         // is channel closed?

    sendx    uint32         // send index into buffer
    recvx    uint32         // receive index into buffer

    recvq    waitq          // waiting receivers
    sendq    waitq          // waiting senders

    lock     mutex
}

Fields Breakdown:

  • qcount and dataqsiz: the number of queued elements and the fixed buffer capacity.
  • buf: a pointer to the circular (ring) buffer that holds queued elements.
  • elemsize: the size in bytes of one element, used when copying values.
  • closed: nonzero once the channel has been closed.
  • sendx and recvx: ring-buffer indices where the next send writes and the next receive reads.
  • recvq and sendq: FIFO wait queues of goroutines blocked on receive and send.
  • lock: a runtime mutex that guards every field above.

The memory layout is designed for fast common paths: the buffer bookkeeping fields sit together at the front of the struct, and a single lock protects all of them, so a buffered send or receive touches one small, cache-friendly region of memory.

hchan is allocated on the heap. That means it's managed by Go's garbage collector just like slices, maps, or other heap objects. When there are no references to a channel, the hchan header and its associated buffer become eligible for collection.

Concurrency control is provided by the embedded lock (hchan.lock). Internally, Go uses a spin–mutex hybrid strategy for this lock: in the uncontended case, goroutines may briefly spin to acquire it, avoiding expensive context switches. Under contention, they fall back to a traditional mutex with queuing. This design reduces overhead for high-frequency channel operations while still handling contention robustly.

Together, these details make hchan both lightweight enough for everyday concurrency and sophisticated enough to handle thousands of goroutines hammering the same channel under load.
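
Two of these fields are even observable from ordinary Go code: len(ch) reports qcount and cap(ch) reports dataqsiz. A quick sketch:

package main

import "fmt"

func main() {
    ch := make(chan int, 3) // allocates an hchan with dataqsiz = 3
    ch <- 1
    ch <- 2
    fmt.Println(len(ch), cap(ch)) // prints "2 3": len mirrors qcount, cap mirrors dataqsiz
}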

sudog in the Go Runtime

A sudog ("suspended goroutine") is an internal runtime structure that represents a goroutine waiting on a channel operation.

The naming comes from old Plan 9/Alef/Inferno runtime code, which influenced Go's runtime. In that lineage, su stood for synchronous, so sudog means something closer to synchronous goroutine record.

When a goroutine tries to send or receive on a channel and can't proceed immediately (because there's no matching receiver/sender):

  1. The goroutine is marked as waiting.
  2. The runtime creates or reuses a sudog object to store metadata about that wait.
  3. This sudog is put into the channel's wait queue (a linked list for senders and another for receivers).
  4. When a matching operation happens, the sudog is popped off the queue, and the corresponding goroutine is woken up.

What's Inside a sudog?

From the Go runtime source (runtime/runtime2.go), a sudog holds the waiting goroutine itself, a pointer to the value being sent or received, the channel it is tied to, and linked-list pointers for its wait queue. In simplified pseudocode:

type sudog struct {
    g    *g             // the waiting goroutine
    elem unsafe.Pointer // value being sent/received
    c    *hchan         // channel this sudog is tied to
    next *sudog         // linked-list pointers
    prev *sudog
    // ... other bookkeeping (select state, ticket, etc.)
}

So the sudog is the "ticket" that says:

This goroutine G is parked on channel C, waiting to send/receive the value at elem.

Lifecycle of a sudog

  1. Acquire: when a goroutine must block, the runtime grabs a sudog from a per-P free list (acquireSudog) instead of allocating a fresh one.
  2. Fill in: the runtime records the goroutine, the element pointer, and the channel.
  3. Enqueue: the sudog is appended to the channel's sendq or recvq.
  4. Wake: a matching operation dequeues the sudog, copies the value, and marks its goroutine runnable.
  5. Release: the sudog is cleared and returned to the pool (releaseSudog) for reuse.

Why Not Just Store the Goroutine?

Because the runtime needs extra context: not only which goroutine is waiting, but also what it's doing (sending or receiving, which channel, which value pointer, which select case). The sudog bundles all of that into a single structure.

A key detail is that sudogs are pooled and reused by the runtime. This reduces garbage collector pressure, since channel-heavy programs (like servers handling thousands of goroutines) would otherwise generate massive amounts of short-lived allocations every time a goroutine blocks.

Another subtlety: a single goroutine can be represented by multiple sudogs at once. This happens in a select statement, where the same goroutine is registered as waiting on several channels simultaneously. When one case succeeds, the runtime cancels the others and recycles those extra sudogs.
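
A minimal illustration of that multiplexed waiting (the sleeps are only there to make the ordering visible):

package main

import (
    "fmt"
    "time"
)

func main() {
    a := make(chan int)
    b := make(chan int)

    go func() {
        // While parked in this select, the goroutine owns two sudogs:
        // one on a's recvq and one on b's recvq.
        select {
        case v := <-a:
            fmt.Println("got from a:", v)
        case v := <-b:
            fmt.Println("got from b:", v)
        }
    }()

    time.Sleep(10 * time.Millisecond) // let the select park
    b <- 7                            // wakes the goroutine; the sudog on a is cancelled and recycled
    time.Sleep(10 * time.Millisecond) // let the print run before main exits
}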

Lifecycle of Send/Receive

Channel operations have a multi-step journey that ensures correctness under concurrency. Let's break down both sending and receiving:

Sending a Value (ch <- v)

1. Acquire lock: the sender takes hchan.lock, so all the checks below happen atomically.

2. Check waiting receivers: if recvq holds a parked receiver, the value is copied straight into that receiver's stack slot and the receiver is made runnable. The buffer is bypassed entirely.

3. Check buffer availability (for buffered channels): if qcount < dataqsiz, the value is copied into buf at index sendx, sendx advances (wrapping around the ring), and qcount is incremented.

4. Block if necessary: otherwise the sender wraps itself in a sudog, enqueues it on sendq, and parks until a receiver shows up.

5. Edge cases: a send on a closed channel panics, and a send on a nil channel blocks forever.

Receiving a Value (x := <-ch)

1. Acquire lock: the receiver takes hchan.lock.

2. Check waiting senders: if sendq holds a parked sender, then on an unbuffered channel the value is copied directly from the sender's stack; on a full buffered channel, the receiver takes the oldest buffered value and the waiting sender's value moves into the freed slot, preserving FIFO order.

3. Check buffer content: if qcount > 0, the value at recvx is copied out, recvx advances, and qcount is decremented.

4. Check closed channel: if the channel is closed and the buffer is empty, the receive returns the element type's zero value with ok == false.

5. Block if necessary: otherwise the receiver enqueues a sudog on recvq and parks.

6. Edge cases: a receive on a nil channel blocks forever, and a closed channel still delivers any remaining buffered values before it starts returning zero values.

Simplified Pseudo-Code: chansend / chanrecv

func chansend(c *hchan, val T) {
    lock(c)

    // Fast path: hand the value directly to a waiting receiver,
    // copying it straight onto the receiver's stack.
    if receiver := dequeue(c.recvq); receiver != nil {
        copy(receiver.stackslot, val)
        ready(receiver) // mark the receiver runnable
        unlock(c)
        return
    }

    // Buffered channel with free space: copy into the ring buffer.
    if c.qcount < c.dataqsiz {
        c.buf[c.sendx] = val
        c.sendx = (c.sendx + 1) % c.dataqsiz
        c.qcount++
        unlock(c)
        return
    }

    // No receiver and no buffer space: enqueue a sudog and sleep.
    // Parking releases the channel lock on our behalf; by the time
    // we wake up here, a receiver has already consumed val.
    enqueue(c.sendq, currentGoroutine, val)
    parkAndUnlock(c)
}


func chanrecv(c *hchan) (val T, ok bool) {
    lock(c)

    // Fast path: take the value from a waiting sender. (On a full
    // buffered channel the real runtime instead takes the buffer head
    // and moves the sender's value into the freed slot.)
    if sender := dequeue(c.sendq); sender != nil {
        val = sender.val
        ready(sender) // mark the sender runnable
        unlock(c)
        return val, true
    }

    // Buffered channel with queued values: copy out of the ring buffer.
    if c.qcount > 0 {
        val = c.buf[c.recvx]
        c.recvx = (c.recvx + 1) % c.dataqsiz
        c.qcount--
        unlock(c)
        return val, true
    }

    // Closed and drained: report failure with the zero value.
    if c.closed != 0 {
        unlock(c)
        return zeroValue(T), false
    }

    // Nothing available: enqueue a sudog and sleep. Parking releases
    // the lock; when we wake up, a sender has already filled in val.
    enqueue(c.recvq, currentGoroutine)
    parkAndUnlock(c)
    return val, true
}

Direct Stack Copy vs. Buffered Copy

An important optimization in Go's channel implementation is how values are copied. When a sender finds a parked receiver (or a receiver finds a parked sender), the value is copied directly between the two goroutines' stacks in one step (the runtime's sendDirect path), never touching the buffer. A buffered hand-off, by contrast, costs two copies: one into the ring buffer and one back out of it.

This is part of why unbuffered channels are sometimes faster than buffered ones under low contention: fewer memory touches and no extra buffer indirection.

It also explains why channels can safely transfer values without data races: because the handoff is done via controlled stack or buffer copies managed by the runtime, not by exposing shared mutable memory.
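
If you want to see the difference on your own machine, a pair of micro-benchmarks like the sketch below (hypothetical package name chanbench; absolute numbers vary by Go version and hardware) will do:

package chanbench

import "testing"

func benchPingPong(b *testing.B, ch chan int) {
    go func() {
        for i := 0; i < b.N; i++ {
            ch <- i
        }
    }()
    for i := 0; i < b.N; i++ {
        <-ch
    }
}

// BenchmarkUnbuffered exercises the direct stack-to-stack copy path.
func BenchmarkUnbuffered(b *testing.B) { benchPingPong(b, make(chan int)) }

// BenchmarkBuffered pays for a copy into the buffer and another out of it.
func BenchmarkBuffered(b *testing.B) { benchPingPong(b, make(chan int, 1)) }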

Closing Channels

Closing a channel is more complex than it seems due to multiple goroutines potentially waiting to send or receive.

Step-by-Step Behavior

1. Acquire lock: the runtime's closechan takes hchan.lock before touching any state.

2. Set closed flag: closed is set to 1, so every subsequent operation observes the channel as closed.

3. Wake all receivers: every sudog on recvq is dequeued, and each waiting receiver wakes with the element type's zero value and ok == false. (Receivers only park when the buffer is empty, so there is nothing buffered left to hand them.)

4. Wake all senders: every sudog on sendq is dequeued and woken, and each of those senders panics with "send on closed channel".

5. Edge Cases / Race Conditions: closing a nil channel panics, as does closing an already-closed channel. Because the closed flag and the wait queues are updated under the same lock, a concurrent send either completes before the close or panics after it; it is never silently dropped.

6. Notes on fairness: all waiters are collected under the lock and made runnable together once it is released, so no late arrival can slip into the queues mid-close, and earlier waiters are woken in FIFO order. The broadcast effect of step 3 is shown in the sketch below.
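
A small demonstration of that broadcast: one close call releases every goroutine parked on the channel at once.

package main

import (
    "fmt"
    "sync"
)

func main() {
    done := make(chan struct{})
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-done // each goroutine parks a sudog on done's recvq
            fmt.Println("receiver", id, "released")
        }(i)
    }

    close(done) // one close drains recvq and wakes all three at once
    wg.Wait()
}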

select Internals

The select statement in Go allows a goroutine to wait on multiple channel operations simultaneously. Its power comes from combining non-determinism (randomized choice when multiple channels are ready) with safety (proper synchronization and fairness). Internally, select is implemented using structures and algorithms in the runtime that ensure correct behavior even under high contention.

How select Works

1. Compile-time representation: each case in a select statement is represented at runtime as an scase struct. It contains the channel being operated on (c) and a pointer to the data element being sent or received (elem).

2. Randomized selection: when multiple cases are ready, the runtime picks one pseudo-randomly to avoid starvation. This ensures that a channel that’s always ready does not permanently dominate other channels. (A small demonstration follows after this list.)

3. Blocking behavior: if no case is ready and there is no default, the goroutine allocates one sudog per case, enqueues itself on every channel involved, and parks. A default case makes the whole select non-blocking.

4. Queue management: each channel's sendq or recvq may contain multiple goroutines waiting from various select statements.

5. Wakeup and execution: when one of the channels becomes ready, the runtime wakes the goroutine, executes the winning case, and removes the now-stale sudogs from every other channel's queue.
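
Here is a quick, unscientific way to observe the randomization (counts differ run to run, but both branches get chosen):

package main

import "fmt"

func main() {
    a := make(chan int, 1)
    b := make(chan int, 1)
    counts := map[string]int{}

    for i := 0; i < 10000; i++ {
        a <- 1
        b <- 1
        // Both cases are ready, so the runtime picks one pseudo-randomly.
        select {
        case <-a:
            counts["a"]++
        case <-b:
            counts["b"]++
        }
        // Drain the case that was not chosen.
        select {
        case <-a:
        case <-b:
        }
    }
    fmt.Println(counts) // roughly balanced between a and b
}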

Example Scenarios

Scenario 1: Multiple ready channels

select {
case ch1 <- 42:
    fmt.Println("Sent to ch1")
case ch2 <- 43:
    fmt.Println("Sent to ch2")
}

If both channels can accept a value (say, each has a waiting receiver or free buffer space), the runtime picks one of the two cases pseudo-randomly, so either message may be printed.

Scenario 2: No ready channels, with default

select {
case val := <-ch1:
    fmt.Println("Received", val)
default:
    fmt.Println("No channel ready")
}

Since no channel is ready, the default branch runs immediately and the goroutine never blocks.

Scenario 3: No ready channels, no default

select {
case val := <-ch1:
    fmt.Println("Received", val)
case val := <-ch2:
    fmt.Println("Received", val)
}

With no default and neither channel ready, the goroutine enqueues a sudog on both ch1 and ch2 and parks until one of them delivers a value.

Closed Channels in select

Channels that are closed have special behavior in select: a receive case on a closed channel is always ready and yields the zero value with ok == false, so once a channel closes, its receive case can be selected over and over; a send case on a closed channel panics if it is chosen. The standard way to tame an always-ready closed channel is to disable its case by setting the channel variable to nil, as in the sketch below.
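
A nil channel is never ready in a select, which is what makes this drain-and-disable idiom work (a minimal sketch):

package main

import "fmt"

func main() {
    ch1 := make(chan int, 1)
    ch2 := make(chan int, 1)
    ch1 <- 10
    ch2 <- 20
    close(ch1)
    close(ch2)

    for ch1 != nil || ch2 != nil {
        select {
        case v, ok := <-ch1:
            if !ok {
                ch1 = nil // disable this case; nil channels are never selected
                continue
            }
            fmt.Println("ch1:", v)
        case v, ok := <-ch2:
            if !ok {
                ch2 = nil
                continue
            }
            fmt.Println("ch2:", v)
        }
    }
}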

Lifecycle Summary of a select Operation

  1. Goroutine reaches select.
  2. Runtime inspects all channels for readiness.
  3. If any are ready:
    • Choose one case randomly.
    • Execute and return immediately.
  4. If none are ready:
    • If default exists, execute it.
    • Otherwise, enqueue goroutine on all channels and park.
  5. When a channel becomes ready:
    • Runtime wakes the goroutine.
    • Executes the selected case.
    • Removes the goroutine from all other queues.

Memory Model & Synchronization

One of the most important - yet often overlooked - aspects of Go channels is how they fit into the Go memory model. At first glance, channels might seem like simple FIFO queues, but they are also synchronization points that define happens-before relationships between goroutines.

Happens-Before with Channels

The Go memory model states:

This is crucial, because it means that data sent over a channel is fully visible to the receiving goroutine by the time it executes the receive. You don't need extra memory barriers, sync/atomic, or mutexes to establish visibility when you use channels correctly.

done := make(chan struct{})

var shared int

go func() {
    shared = 42
    done <- struct{}{}  // send happens-before the receive
}()

<-done                  // receive completes here
fmt.Println(shared)     // guaranteed to print 42

In this example, the assignment to shared is guaranteed to be observed by the main goroutine. The send/receive pair forms the synchronization boundary.

Buffered Channels and Visibility

For buffered channels, the happens-before edge still runs from each send to the completion of the corresponding receive, so every write sequenced before the send is visible to the receiver; writes that occur after the send get no such guarantee. This distinction can be subtle:

ch := make(chan int, 1)
x := 0

go func() {
    x = 99
    ch <- 1
}()

<-ch
fmt.Println(x) // guaranteed to see 99

Here, because the write to x occurs before the send, and the send happens-before the receive, the main goroutine is guaranteed to see x = 99.

But if you reverse the order, things get trickier:

ch := make(chan int, 1)
x := 0

go func() {
    ch <- 1
    x = 99
}()

<-ch
fmt.Println(x) // NOT guaranteed to see 99

Why? Because the assignment to x occurs after the send. The only synchronization point is the send→receive pair, and nothing orders the x = 99 relative to the main goroutine's read of x.

Closing Channels

Closing a channel introduces its own happens-before rule: the close of a channel happens before a receive that returns a zero value because the channel is closed.

But the guarantee only applies to memory writes that happen before the close. Anything after close(done) is unordered relative to the receivers.
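
For example, a write made before the close is guaranteed to be visible to a goroutine that observes the closed channel:

done := make(chan struct{})
var config string

go func() {
    config = "loaded" // this write happens before the close
    close(done)
}()

<-done              // this receive observes the close
fmt.Println(config) // guaranteed to print "loaded"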

Note that in idiomatic Go, closing a channel is relatively rare. Most programs simply let goroutines stop sending and rely on garbage collection. Channels are usually closed only for broadcast or completion signals, for example to indicate that no more work will be sent to multiple receivers. This pattern is common in fan-out/fan-in pipelines, worker pools, or signaling done conditions.

Attempting to send on a closed channel triggers a runtime panic immediately. This is Go’s way of preventing silent corruption or unexpected behavior:

ch := make(chan int)
close(ch)      // channel is now closed

ch <- 42       // panic: send on closed channel

Receivers, on the other hand, are safe: once a closed channel's buffer is drained, a receive returns immediately with the zero value of the element type:

x, ok := <-ch  // ok == false, x is zero value (0 for int)

Why this matters: the asymmetry is deliberate. Panicking sends surface bugs loudly instead of corrupting state, while safe receives let consumers drain remaining values and detect completion with the ok idiom. Together they encode Go's ownership convention: the sender, and only the sender, closes a channel.

Practical Guidance

  • Close a channel only from the sender side, and only when receivers need a completion signal.
  • Never close a channel twice, and never close one that another goroutine may still send on.
  • On the receive side, use the v, ok := <-ch form or a for ... range loop to detect closure.
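
The range form bakes the ok check in: the loop below drains the buffered values, then exits automatically once the channel is closed.

ch := make(chan int, 3)
ch <- 1
ch <- 2
close(ch)

for v := range ch { // receives 1 and 2, then exits because ch is closed
    fmt.Println(v)
}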

Scheduler Integration

Go's channels are not just clever data structures - they're tightly woven into the runtime scheduler. This integration is what makes blocking channel operations feel natural and efficient.

The G/M/P Model

Go's scheduler uses three main entities:

  • G: a goroutine, the unit of concurrent execution, with its own growable stack.
  • M: an OS thread (machine) that actually executes code.
  • P: a processor, a scheduling context with a local run queue of Gs; an M must hold a P to run Go code.

Blocking on Channels

When a goroutine tries to send or receive on a channel and the operation cannot proceed immediately:

  1. The goroutine is parked (put to sleep).

  2. It's removed from the P's run queue.

  3. A record of what it was waiting for is stored in the channel's sudog queue (a lightweight runtime structure that ties a goroutine to a channel operation).

  4. The scheduler then picks another runnable G to execute on that P.

  5. When the channel operation can proceed (e.g., another goroutine performs the corresponding send/receive), the parked goroutine is unblocked and can continue execution. This makes channel operations fully cooperative with the scheduler—there is no busy waiting.

ch := make(chan int)

go func() {
    fmt.Println(<-ch) // blocks; this goroutine is parked
}()

// main goroutine keeps running until it sends: ch <- 42

Here the anonymous goroutine is descheduled the moment it blocks on <-ch. The main goroutine keeps running until it eventually sends. At that point, the runtime wakes the parked goroutine, puts it back on a run queue, and resumes execution.

Waking Up

When a channel operation becomes possible (e.g., a send finds a waiting receiver, or a receive finds a waiting sender), the matching sudog is dequeued, the value is copied across, and the parked goroutine is marked runnable via goready. The woken goroutine typically lands in the current P's runnext slot, so it tends to run very soon after the operation that released it.

Fairness and Scheduling Order

Go's channel implementation enforces FIFO queues for waiting senders and receivers. This provides fairness - goroutines blocked earlier get served first.

But fairness interacts with the scheduler: being woken in FIFO order only makes a goroutine runnable. When it actually executes depends on which P picks it up, so wakeup order does not strictly dictate execution order.

Impact on Performance

Because channels are scheduler-aware, blocking operations are relatively cheap compared to traditional system calls. Parking/unparking a goroutine only requires saving a small amount of register state, moving the G between queues, and updating scheduler bookkeeping; in the common case no system call is involved at all.

Subtle Consequences

  • Every unbuffered hand-off implies a goroutine switch, so extremely hot unbuffered channels can spend more time scheduling than copying data.
  • All operations on a given channel serialize on hchan.lock, so a single heavily shared channel can become a contention point no matter how many Ps are available.
  • A goroutine parked on a channel costs no CPU time, but it still holds its stack and its sudog until it is woken.

Closing Thoughts

Go channels are deceptively simple. From the outside, they are just ch <- v and <-ch. Underneath lies a sophisticated orchestration of buffers, queues, parked goroutines, and scheduler hooks. Every pipeline, worker pool, or fan-in/fan-out pattern leverages this machinery to safely and efficiently move data between goroutines.

As Go evolves, channels remain central to its concurrency model, so understanding their internals gives you the intuition to use them effectively - and the caution to avoid misuse in high-contention scenarios.

© Gabor Koos