Abstract and 1. Introduction

  2. Related Works

    2.1 Traditional Index Selection Approaches

    2.2 RL-based Index Selection Approaches

  3. Index Selection Problem

  4. Methodology

    4.1 Formulation of the DRL Problem

    4.2 Instance-Aware Deep Reinforcement Learning for Efficient Index Selection

  5. System Framework of IA2

    5.1 Preprocessing Phase

    5.2 RL Training and Application Phase

  6. Experiments

    6.1 Experimental Setting

    6.2 Experimental Results

    6.3 End-to-End Performance Comparison

    6.4 Key Insights

  7. Conclusion and Future Work, and References

5 System Framework of IA2

As shown in Figure 3, IA2 operates in two structured phases, leveraging deep reinforcement learning to optimize index selection for both trained workloads and unseen scenarios. The figure depicts IA2’s workflow: the user’s input workload is preprocessed to generate states and action pools for the downstream RL agents, which then make sequential decisions on index additions while adhering to budget constraints. This workflow reflects IA2’s methodical approach to enhancing database performance through intelligent index selection.
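The two-phase flow described above can be sketched in code: a preprocessing step derives an action pool from the workload, and an agent then adds indexes one at a time until the storage budget is exhausted. This is a minimal illustrative sketch, not IA2’s actual implementation; all names (`Candidate`, `State`, `action_pool`, `select_indexes`, the budget and size figures) are hypothetical, and the stand-in policy simply takes the first feasible action where a trained agent would score actions with its learned policy.

```python
# Hypothetical sketch of the two-phase workflow: preprocessing builds an
# action pool from the workload; an agent makes sequential index-addition
# decisions under a storage budget. Names and numbers are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Candidate:
    columns: tuple        # columns the candidate index would cover
    size_mb: float        # estimated storage cost of the index


@dataclass
class State:
    chosen: list = field(default_factory=list)  # indexes added so far
    remaining_mb: float = 500.0                 # storage budget left


def action_pool(workload_columns):
    """Preprocessing phase: derive single-column index candidates."""
    return [Candidate((c,), size_mb=50.0) for c in workload_columns]


def select_indexes(pool, state, policy):
    """Application phase: sequential decisions while the budget allows."""
    while True:
        feasible = [c for c in pool
                    if c.size_mb <= state.remaining_mb and c not in state.chosen]
        if not feasible:
            break
        choice = policy(state, feasible)  # a trained RL agent would score actions here
        state.chosen.append(choice)
        state.remaining_mb -= choice.size_mb
    return state.chosen


# Stand-in policy: take the first feasible candidate.
chosen = select_indexes(action_pool(["a", "b", "c"]), State(), lambda s, f: f[0])
```

The loop terminates either when the budget cannot accommodate any remaining candidate or when every candidate has been chosen, mirroring the budget-constrained sequential decision process the section describes.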

Authors:

(1) Taiyi Wang, University of Cambridge, Cambridge, United Kingdom ([email protected]);

(2) Eiko Yoneki, University of Cambridge, Cambridge, United Kingdom ([email protected]).


This paper is available on arxiv under the CC BY-NC-SA 4.0 Deed (Attribution-NonCommercial-ShareAlike 4.0 International) license.