Table of Links
2 Background
2.2 Fragmentation and PagedAttention
3 Issues with the PagedAttention Model and 3.1 Requires re-writing the attention kernel
3.2 Adds redundancy in the serving framework and 3.3 Performance Overhead
4 Insights into LLM Serving Systems
5 vAttention: System Design and 5.1 Design Overview
5.2 Leveraging Low-level CUDA Support
5.3 Serving LLMs with vAttention
6 vAttention: Optimizations and 6.1 Mitigating internal fragmentation
6.2 Hiding memory allocation latency
7.1 Portability and Performance for Prefills
7.2 Portability and Performance for Decodes
7.3 Efficacy of Physical Memory Allocation
7.4 Analysis of Memory Fragmentation
4 Insights into LLM Serving Systems
To highlight the memory allocation pattern of LLM serving systems, we experiment with Yi-6B running on a single NVIDIA A100 GPU, and Llama-3-8B and Yi-34B running on two A100 GPUs with tensor-parallelism. We set the initial context length of each request to 1K tokens, vary the batch size from 1 to 320, and measure the latency, throughput and memory requirements of the decode phase (see §6 for our discussion of, and optimizations for, the prefill phase).
Observation-1: On a per-iteration basis, the KV-cache memory requirement is known in advance. This is because autoregressive decoding generates one token at a time, per request. Therefore, with every iteration, the KV-cache memory footprint of a request grows uniformly by one token until the request completes.
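To make Observation-1 concrete, here is a minimal sketch (ours, not code from the paper) of how a serving loop could compute the exact KV-cache memory it will need before running the next decode iteration: each active request contributes exactly one new token, so the demand is simply the number of active requests times the per-token footprint. The per-token sizes are the ones reported below; the function and loop structure are purely illustrative.

```python
# Illustrative sketch (not from the paper): per-iteration KV-cache demand
# is known in advance because each active request adds exactly one token.

PER_TOKEN_KV_BYTES = {          # per-token KV-cache footprint across all layers
    "Yi-6B": 64 * 1024,         # 64 KB  (values reported in the text)
    "Llama-3-8B": 128 * 1024,   # 128 KB
    "Yi-34B": 240 * 1024,       # 240 KB
}

def next_iteration_kv_demand(model: str, num_active_requests: int) -> int:
    """Bytes of new KV-cache memory needed by the next decode iteration."""
    return num_active_requests * PER_TOKEN_KV_BYTES[model]

# Example: a batch of 128 decoding requests on Yi-34B needs exactly
# 128 * 240 KB = 30 MB of new KV-cache memory in the next iteration,
# and this amount is known before the iteration begins.
print(next_iteration_kv_demand("Yi-34B", 128) / (1024 * 1024), "MB")
```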
Observation-2: The KV-cache does not require high memory allocation bandwidth. The memory footprint of a single token across all layers is typically a few tens to hundreds of kilobytes. For example, the per-token memory footprint of Yi-6B, Llama-3-8B and Yi-34B is 64KB, 128KB and 240KB, respectively. Further, each iteration runs for tens to hundreds of milliseconds (Figure 6b), implying that a request requires at most a few megabytes of memory per second. While batching improves system throughput, the number of tokens generated per second plateaus beyond a certain batch size (Figure 6a). This implies that the memory allocation bandwidth requirement also saturates at large batch sizes (e.g., at 128 for Yi-34B). For all the models we studied, the highest memory allocation rate we observe is at most 600MB per second (Figure 6c).
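The per-token footprints above follow from the standard KV-cache size formula: 2 (K and V) x num_layers x num_kv_heads x head_dim x bytes_per_element. The sketch below reproduces the reported numbers under assumed model configurations (the layer counts, grouped-query KV head counts, head dimension and FP16 precision are our assumptions, not stated in this excerpt) and then derives an allocation-bandwidth estimate from a hypothetical aggregate token generation rate.

```python
# Illustrative sketch (not from the paper): per-token KV-cache footprint and
# the resulting memory-allocation bandwidth. Model configurations below are
# assumptions (layers / KV heads / head dim / FP16) chosen to match the
# per-token sizes reported in the text.

def per_token_kv_bytes(num_layers: int, num_kv_heads: int,
                       head_dim: int, bytes_per_elem: int = 2) -> int:
    # Factor of 2 for the K and V tensors, summed across all layers.
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

models = {
    "Yi-6B":      per_token_kv_bytes(32, 4, 128),   # -> 64 KB
    "Llama-3-8B": per_token_kv_bytes(32, 8, 128),   # -> 128 KB
    "Yi-34B":     per_token_kv_bytes(60, 8, 128),   # -> 240 KB
}

for name, nbytes in models.items():
    print(f"{name}: {nbytes // 1024} KB per token")

# Allocation bandwidth = tokens generated per second * per-token footprint.
# At a hypothetical aggregate rate of 2400 tokens/s, Yi-34B would need
# 2400 * 240 KB ~= 562 MB/s of new KV-cache memory, i.e., the same order
# of magnitude as the <= 600 MB/s peak reported in the text.
tokens_per_sec = 2400
print(f"~{tokens_per_sec * models['Yi-34B'] / 2**20:.0f} MB/s")
```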
In vAttention, we leverage these observations to implement an efficient dynamic memory management system for KV-cache. In the next section, we begin with a high-level design description of vAttention (§5.1), then discuss how vAttention is used for serving LLMs (§5.3) and finally describe our optimizations (§6).
This paper is available on arxiv under CC BY 4.0 DEED license.
Authors:
(1) Ramya Prabhu, Microsoft Research India;
(2) Ajay Nayak, Indian Institute of Science (contributed to this work as an intern at Microsoft Research India);
(3) Jayashree Mohan, Microsoft Research India;
(4) Ramachandran Ramjee, Microsoft Research India;
(5) Ashish Panwar, Microsoft Research India.