Is Large Language Model Good at Database Knob Tuning? A Comprehensive Experimental Evaluation

Y Li, H Li, Z Pu, J Zhang, X Zhang, T Ji, L Sun, C Li, H Chen
arXiv preprint arXiv:2408.02213, 2024 - arxiv.org
Knob tuning plays a crucial role in optimizing databases by adjusting knobs to enhance database performance. However, traditional tuning methods often follow a Try-Collect-Adjust approach, which is inefficient and database-specific. Moreover, these methods are often opaque, making it challenging for DBAs to grasp the underlying decision-making process. Large language models (LLMs) such as GPT-4 and Claude-3 have excelled at complex natural language tasks, yet their potential in database knob tuning remains largely unexplored. This study harnesses LLMs as experienced DBAs for knob-tuning tasks with carefully designed prompts. We identify three key subtasks in the tuning system: knob pruning, model initialization, and knob recommendation, and propose LLM-driven solutions to replace conventional methods for each subtask. We conduct extensive experiments comparing LLM-driven approaches against traditional methods across these subtasks to evaluate LLMs' efficacy in the knob-tuning domain. Furthermore, we explore the adaptability of LLM-based solutions in diverse evaluation settings, encompassing new benchmarks, database engines, and hardware environments. Our findings reveal that LLMs not only match or surpass traditional methods but also exhibit notable interpretability by generating responses in a coherent "chain-of-thought" manner. We further observe that LLMs exhibit remarkable generalizability through simple adjustments to prompts, eliminating the need for additional training or extensive code modifications. Drawing insights from our experimental findings, we identify several opportunities for future research aimed at advancing the utilization of LLMs in the realm of database management.
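
To make the LLM-as-DBA idea concrete, the following minimal Python sketch illustrates the knob-recommendation subtask described in the abstract: the model is prompted as an experienced DBA with workload and hardware context and asked to reason step by step before emitting a configuration. The function llm_complete, the prompt wording, and the PostgreSQL knob names are illustrative assumptions, not the paper's actual prompts or code.

    import json

    # Hypothetical LLM call: any chat-completion client that maps a
    # prompt string to a text response can be substituted here.
    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client")

    def recommend_knobs(workload_summary: str, hardware: dict, knobs: list[str]) -> dict:
        """Ask the LLM, framed as an experienced DBA, for knob values.

        The prompt structure below only illustrates the paper's idea
        (LLM-as-DBA with carefully designed prompts), not its wording.
        """
        prompt = (
            "You are an experienced PostgreSQL DBA.\n"
            f"Hardware: {hardware['ram_gb']} GB RAM, {hardware['cores']} CPU cores.\n"
            f"Workload: {workload_summary}\n"
            f"Recommend values for these knobs: {', '.join(knobs)}.\n"
            "Reason step by step, then output a JSON object mapping each "
            "knob to its recommended value on the last line."
        )
        response = llm_complete(prompt)
        # The chain-of-thought text provides interpretability; only the
        # final JSON line is parsed as the actionable configuration.
        return json.loads(response.strip().splitlines()[-1])

    # Example usage (real PostgreSQL knob names, illustrative workload):
    # config = recommend_knobs(
    #     "OLTP, 100 concurrent clients, read-heavy",
    #     {"ram_gb": 64, "cores": 16},
    #     ["shared_buffers", "work_mem", "max_wal_size"],
    # )

Requesting the reasoning before the final JSON line mirrors the interpretability observation in the abstract: the DBA can inspect why each value was chosen, while the tuner consumes only the structured output.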