Solana’s high throughput and low fees come from its parallel execution model. Unlike most blockchains that process transactions one by one, Solana’s Sealevel runtime can schedule many transactions at the same time.

To make this possible, every transaction must declare which accounts it will read or write. The runtime uses that information to run non-overlapping transactions in parallel.

Issues arise when multiple transactions address the same account. If at least one of them writes, the runtime applies a write lock and runs them sequentially. An account that keeps triggering these locks is what developers call a “hot account”.

Hot accounts aren’t a rare edge case; they show up constantly in real apps. For example: NFT mints that increment the same counter, DeFi pools where every swap updates the same liquidity account, or MEV bundles competing for shared state.

This guide looks at how write locks work, why hot accounts appear, and how to avoid them in your code.

Parallel execution in Solana

In most chains, all state lives in a single global tree. Solana uses an account model instead: its global state can be seen as a large database where each record is an account.

Each account has a few core fields:

- lamports — the account’s balance;
- data — raw bytes interpreted by the owning program;
- owner — the program allowed to modify the account’s data;
- executable — whether the account contains a program.

Programs in Solana don’t hold state directly; they operate on external accounts. This separation of code and data is what makes parallel execution possible: programs don’t share memory, they only work with isolated accounts.
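The core account fields can be sketched as a small TypeScript type. The shape loosely follows web3.js’s AccountInfo; the sample values are illustrative, not the exact API:

```typescript
// A sketch of a Solana account's core fields (loosely modeled on
// web3.js AccountInfo; illustrative only).
interface AccountSketch {
  lamports: number;    // balance in lamports (1 SOL = 1e9 lamports)
  owner: string;       // the program allowed to modify this account's data
  data: Uint8Array;    // raw bytes interpreted by the owner program
  executable: boolean; // true if the account itself holds a program
}

const counterAccount: AccountSketch = {
  lamports: 1_000_000,
  owner: "Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS",
  data: new Uint8Array(8), // e.g. a u64 counter stored as 8 bytes
  executable: false,       // data account, not a program
};

console.log(counterAccount.executable); // false
```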

The other part of the model is how transactions are built. Every transaction must declare all the accounts it will use, and whether each one is read-only or writable.

Writable accounts require an exclusive lock while the transaction runs.

When a validator receives a batch of transactions, the Sealevel runtime schedules them based on these declarations.

The rules are straightforward:

- Transactions that only read a shared account can run at the same time.
- A transaction that writes an account needs exclusive access: nothing else may read or write that account while it runs.
- Transactions with no overlapping accounts run fully in parallel.

Conflicts

In Solana, a conflict happens when two or more transactions declare the same account in a way that prevents parallel execution.

Transactions can only run in parallel if all shared accounts are read-only. Declaring an account as writable requests an exclusive lock for the duration of that transaction. This is called a write lock.

Each transaction declares the accounts it will touch:

Tx1: [UserA (w), Pool (w)]

Tx2: [UserB (w), Pool (w)]

Tx3: [UserC (w), OrderBook (w)]

Tx4: [UserD (w), Pool (r)]

The scheduler checks for overlaps. Tx1 and Tx2 both write Pool, so they conflict. Tx4 only reads Pool, but that still conflicts with the write locks held by Tx1 and Tx2. Tx3 touches accounts nobody else uses.

Result:

- Tx1, Tx2 and Tx4 execute sequentially — they all contend for Pool.
- Tx3 runs in parallel with any of them.
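The overlap check can be sketched as a toy scheduler in TypeScript. This is an illustrative model, not validator code: the greedy batching strategy and helper names are simplifying assumptions, but the conflict rule (parallel only if all shared accounts are read-only) matches the one described above:

```typescript
// Toy model of Sealevel-style lock scheduling (illustrative, not validator code).
type Access = { account: string; writable: boolean };
type Tx = { id: string; accesses: Access[] };

// Two transactions conflict if they share an account
// and at least one side declares it writable.
function conflicts(a: Tx, b: Tx): boolean {
  for (const x of a.accesses) {
    for (const y of b.accesses) {
      if (x.account === y.account && (x.writable || y.writable)) return true;
    }
  }
  return false;
}

// Greedily pack transactions into batches of mutually
// non-conflicting transactions; each batch can run in parallel.
function schedule(txs: Tx[]): string[][] {
  const batches: Tx[][] = [];
  for (const tx of txs) {
    const batch = batches.find((b) => b.every((other) => !conflicts(tx, other)));
    if (batch) batch.push(tx);
    else batches.push([tx]);
  }
  return batches.map((b) => b.map((t) => t.id));
}

const txs: Tx[] = [
  { id: "Tx1", accesses: [{ account: "UserA", writable: true }, { account: "Pool", writable: true }] },
  { id: "Tx2", accesses: [{ account: "UserB", writable: true }, { account: "Pool", writable: true }] },
  { id: "Tx3", accesses: [{ account: "UserC", writable: true }, { account: "OrderBook", writable: true }] },
  { id: "Tx4", accesses: [{ account: "UserD", writable: true }, { account: "Pool", writable: false }] },
];

console.log(schedule(txs)); // [["Tx1","Tx3"],["Tx2"],["Tx4"]]
```

Tx3 lands in the same batch as Tx1 because they share no accounts; the three Pool transactions end up in separate batches and run one after another.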

The cost of a conflict is more than just lost parallelism:

- Latency: queued transactions wait for the lock to be released.
- Fees: competition for a hot account drives priority fees up.
- Failures: transactions can expire while waiting in the queue and never land.

Agave uses a simple conservative strategy:

- Sort transactions by arrival order.
- Check each transaction for account overlap with those already running.
- If there’s a conflict, put the transaction in a wait queue.
- Execute the queue as accounts unlock.

This means that even with many CPU cores available, Agave can sit idle when too many transactions contend for the same account.

Firedancer was built from scratch to push modern server hardware to its limits. It takes a more aggressive scheduling approach:
- Detect conflicts extremely quickly.
- Pack non-conflicting transactions more efficiently around hot accounts.
- Reduce the overhead of managing the wait queue.

Firedancer can’t make conflicting transactions run in parallel — write locks are part of Solana’s model — but it minimizes delays.

Avoiding conflicts

Understanding how conflicts work and how validator clients schedule transactions is the foundation. But for a developer the real question is how to write code that doesn’t turn into a bottleneck. The goal is to maximize parallel writes so your dApp doesn’t degrade into a queue.

In practice, developers rely on three main techniques to avoid hot accounts: transaction-level optimization, state sharding, and PDA-based data isolation.

Transaction-level optimization

It’s essential to understand how to form transactions. The fewer accounts marked as writable, the higher the chance the scheduler can run them in parallel. Review each instruction carefully and ask whether every account it touches actually needs to be marked writable. A smaller writable set usually means fewer conflicts.

Local fee markets also help. When an account gets hot, competition for access raises the fee pressure on those specific transactions. This doesn’t remove conflicts, but it pushes the load to spread out. For transactions that touch hot accounts — like NFT mints or DEX swaps — adding a priority fee improves the chance of quick execution. Best practice is not to hardcode the fee, but to fetch recent prioritization fees through RPC (getRecentPrioritizationFees) and set the value dynamically:

import {
  ComputeBudgetProgram,
  SystemProgram,
  Transaction,
} from "@solana/web3.js";

// Cap the compute units the transaction may consume.
const modifyComputeUnits = ComputeBudgetProgram.setComputeUnitLimit({
  units: 300,
});

// Price per compute unit, in micro-lamports — this is the priority fee.
const addPriorityFee = ComputeBudgetProgram.setComputeUnitPrice({
  microLamports: 20000,
});

// payer and toAccount are assumed to be defined elsewhere.
const transaction = new Transaction()
  .add(modifyComputeUnits)
  .add(addPriorityFee)
  .add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: toAccount,
      lamports: 10000000,
    }),
  );
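To set the value dynamically, a common heuristic is to take a high percentile of recently paid fees. The sketch below assumes the `{ slot, prioritizationFee }` shape that getRecentPrioritizationFees returns; the RPC call itself is omitted and `recentFees` is an illustrative sample, not live data:

```typescript
// Choosing a priority fee from recent fee data (illustrative sketch).
// Entries mirror the getRecentPrioritizationFees response shape.
type PrioritizationFee = { slot: number; prioritizationFee: number };

// Pick a high percentile of recently paid fees so the transaction
// outbids most competitors on the hot account without overpaying wildly.
function pickPriorityFee(fees: PrioritizationFee[], percentile = 0.75): number {
  const sorted = fees.map((f) => f.prioritizationFee).sort((a, b) => a - b);
  if (sorted.length === 0) return 0;
  const idx = Math.min(sorted.length - 1, Math.floor(percentile * sorted.length));
  return sorted[idx];
}

// Hypothetical sample of recent fees, in micro-lamports per compute unit.
const recentFees: PrioritizationFee[] = [
  { slot: 100, prioritizationFee: 0 },
  { slot: 101, prioritizationFee: 1000 },
  { slot: 102, prioritizationFee: 5000 },
  { slot: 103, prioritizationFee: 20000 },
];

console.log(pickPriorityFee(recentFees)); // 20000 with this sample
```

The chosen value would then go into setComputeUnitPrice instead of a hardcoded constant.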

State sharding

State sharding is the strongest technique against hot accounts. The core idea is to split state across multiple accounts when a single account becomes overloaded.

Instead of a single global account updated on every action, create a set of shard accounts and distribute transactions between them.

For example:

❌ The naive approach would be to create a single counter account:

// lib.rs
use anchor_lang::prelude::*;
declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");
#[program]
pub mod hot_counter {
    use super::*;
    pub fn increment(ctx: Context<Increment>) -> Result<()> {
        ctx.accounts.global_counter.count += 1;
        Ok(())
    }
}
#[derive(Accounts)]
pub struct Increment<'info> {
    #[account(mut, seeds = [b"counter"], bump)]
    pub global_counter: Account<'info, GlobalCounter>,
}
#[account]
pub struct GlobalCounter {
    pub count: u64,
}


This account will be locked on every increment, creating a queue.

✅ A better approach is to create multiple counter accounts. The client can randomly choose which shard to send the transaction to:

// lib.rs
use anchor_lang::prelude::*;
declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");
const NUM_SHARDS: u16 = 8;
#[program]
pub mod sharded_counter {
    use super::*;
    pub fn increment(ctx: Context<Increment>, shard_id: u16) -> Result<()> {
        require!(shard_id < NUM_SHARDS, MyError::InvalidShardId);
        
        let counter_shard = &mut ctx.accounts.counter_shard;
        counter_shard.count += 1;
        Ok(())
    }
}
#[derive(Accounts)]
#[instruction(shard_id: u16)]
pub struct Increment<'info> {
    #[account(
        mut,
        seeds = [b"counter_shard", &shard_id.to_le_bytes()],
        bump
    )]
    pub counter_shard: Account<'info, CounterShard>,
}
#[account]
pub struct CounterShard {
    pub count: u64,
}
#[error_code]
pub enum MyError {
    #[msg("Invalid shard ID provided.")]
    InvalidShardId,
}

How it works:

On the client side (TypeScript/JavaScript), you generate a random number from 0 to 7 (const shardId = Math.floor(Math.random() * NUM_SHARDS);) and pass it to the instruction.

On the program side, the shard_id is used to look up the right PDA counter. Now 8 users can all call increment at the same time, and their transactions will most likely land in different shards and run in parallel.

To read the total, you fetch all 8 accounts and sum them on the client. This is a read-only operation, so it doesn’t cause locks and stays efficient.
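The client side of this pattern can be sketched as follows. Real code would derive the PDA from `shardId` and fetch the shard accounts over RPC; here shard counts are plain numbers so the logic stands alone:

```typescript
// Client-side shard selection and read aggregation (illustrative sketch;
// real code would target the PDA derived from shardId via RPC).
const NUM_SHARDS = 8;

// Pick a random shard so concurrent writers spread across accounts
// instead of all locking the same one.
function pickShard(): number {
  return Math.floor(Math.random() * NUM_SHARDS);
}

// Reading the total is a read-only aggregation over all shard counts,
// so it takes no write locks.
function totalCount(shardCounts: number[]): number {
  return shardCounts.reduce((sum, c) => sum + c, 0);
}

const shardId = pickShard();
console.log(shardId >= 0 && shardId < NUM_SHARDS); // true

// Hypothetical per-shard counts fetched from the 8 shard accounts.
console.log(totalCount([3, 0, 5, 1, 0, 2, 0, 1])); // 12
```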


Jito case

Jito lets users (mostly MEV searchers and trading bots) pay validators tips to get their bundles included in a block in a specific order and at high speed. These payments happen thousands of times per block. If Jito had only one global account for tips, it would instantly become the hottest account in Solana and a bottleneck for their own service.

To avoid that, Jito uses sharding. Instead of one central wallet, they maintain a set of accounts for receiving tips. When a searcher or trader builds a bundle, the last transaction is usually a tip transfer. The Jito SDK doesn’t always send to the same address; instead, it picks one of the tip accounts at random for each bundle.

This spreads out the write load so the scheduler can process multiple tip payments in parallel.


Using PDAs for data isolation

Another common mistake is to store all user data in a single big account, for example, with a BTreeMap<Pubkey, UserData>. This immediately creates a hot account, since any update for one user blocks the whole map.

A better approach is to give each user their own account using a PDA (Program Derived Address). A PDA is a deterministic account address derived from a user’s key and other seeds. With this pattern, updating one user’s state doesn’t block reads or writes for others.
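The isolation idea can be modeled off-chain. This is NOT the real PDA derivation (in web3.js that is PublicKey.findProgramAddressSync, which also searches for a bump so the address falls off the ed25519 curve); the sha256 key below is a stand-in that only mirrors the deterministic-derivation property:

```typescript
// Conceptual model of PDA-per-user isolation. The sha256 key is a
// stand-in for real PDA derivation, used only to show determinism.
import { createHash } from "node:crypto";

// Deterministically derive a per-user storage key from seeds,
// mirroring a PDA derived from [b"user", user_pubkey].
function deriveUserKey(programId: string, userPubkey: string): string {
  return createHash("sha256")
    .update(`user:${userPubkey}:${programId}`)
    .digest("hex");
}

// Each user's state lives under its own key, so updating one user
// never touches the others - unlike a single BTreeMap<Pubkey, UserData>.
const store = new Map<string, { score: number }>();

function bumpScore(programId: string, user: string): void {
  const key = deriveUserKey(programId, user);
  const state = store.get(key) ?? { score: 0 };
  state.score += 1;
  store.set(key, state);
}

// Hypothetical program and user identifiers.
bumpScore("MyProgram111", "Alice111");
bumpScore("MyProgram111", "Bob111");
bumpScore("MyProgram111", "Alice111");
console.log(store.size); // 2 - one record per user
```

Because the derivation is deterministic, both the program and the client can compute each user’s address independently, with no central registry account to contend over.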

Conclusion

Hot accounts in Solana aren’t an oversight. They’re what you get as a tradeoff for parallel execution. Where other blockchains line everything up in one slow queue, Solana lets developers build ultra-fast applications.

The takeaway is that scalability in Solana depends not only on code but also on how state is designed. Well-structured accounts let your program take full advantage of parallelism. Poor account design leads to hot accounts and wasted throughput.

To be effective on Solana you have to think in parallel. That means seeing not only the logic of your app, but also how its data lives in a global state and how transactions will contend for it.

TL;DR

Solana runs transactions in parallel, but write locks turn shared state into hot accounts. When too many transactions touch the same account, they queue up, raise latency, and drive up fees. Validator clients like Agave and Firedancer differ in how they schedule conflicts, but neither removes the problem. The only real fix is in program design. Common techniques are: minimizing the set of writable accounts, setting priority fees dynamically, sharding overloaded state across multiple accounts, and isolating per-user data with PDAs.

Hot accounts don’t disappear, but with the right patterns you can keep your app scalable.