Sharing Buffer Pool Memory in Multi-Tenant Relational Database-as-a-Service
3 minute read ∼ Filed in : A paper note
Summary
Why is a shared buffer pool needed to reduce cost? How are resource isolation and resource sharing reconciled within a single DB server?
Questions
Introduction
Background & Motivation
Database-as-a-Service (DaaS) offers hosted RDBMSs with the goal of reducing total cost.
Multi-tenancy is crucial for service providers to increase consolidation and reduce costs.
- Multiple tenant databases are hosted inside a single database server and thus share the same DB server resources. As a result, one tenant's workload may influence another's.
- Reserving resources for a tenant isolates that tenant's performance from the resource demands of others.
Goal
This paper studies the problem of how to share buffer pool memory in a multi-tenant setting effectively.
Challenge
- Defining accountability of the service provider. The paper defines the metric HRD (hit ratio degradation): the difference between the hit ratio under the baseline (memory statically reserved to the tenant) and the hit ratio in the shared-buffer-pool setting. If the service provider is unable to meet this promise, a penalty function determines what it owes the tenant.
- Measuring HRD effectively with low memory and CPU overheads (the baseline hit ratio is not directly observable and requires extra logic to estimate). A rough sketch of the HRD computation follows this list.
- Designing the page replacement algorithm: MT-LRU.
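Below is a minimal sketch of the HRD metric as described above: the gap between the hit ratio a tenant would get with its statically reserved memory (the baseline, which must be estimated, e.g., by simulation) and the hit ratio it actually observes in the shared pool. The class and function names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class TenantCounters:
    accesses: int = 0        # total page requests by this tenant
    shared_hits: int = 0     # hits observed in the shared buffer pool
    baseline_hits: int = 0   # hits a simulator predicts for the reserved-memory baseline

def hrd(c: TenantCounters) -> float:
    """Hit Ratio Degradation = baseline hit ratio - actual (shared) hit ratio."""
    if c.accesses == 0:
        return 0.0
    baseline_ratio = c.baseline_hits / c.accesses
    shared_ratio = c.shared_hits / c.accesses
    return max(0.0, baseline_ratio - shared_ratio)
```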
Details
SLA definition:
- Hit ratio degradation reduces the user's experience. Each tenant has a promised baseline amount of buffer pool memory against which degradation is measured.
- Penalty function: specifies how the service provider is held accountable for any value of HRD (a hedged sketch follows this list).
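Here is a hedged sketch of what such a penalty function might look like: it maps a tenant's HRD to a refund owed by the provider. The thresholds and refund amounts below are made up for illustration; the note only implies that the penalty is non-decreasing in HRD.

```python
def penalty(hrd_value: float) -> float:
    """Return the fraction of the tenant's bill refunded for a given HRD (illustrative values)."""
    if hrd_value <= 0.01:      # small degradation tolerated for free
        return 0.0
    if hrd_value <= 0.05:
        return 0.10            # 10% refund
    if hrd_value <= 0.10:
        return 0.25
    return 0.50                # refund capped at 50%
```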
Page replacement
LRU: when a page is visited, it becomes the most recently used; on a miss with a full cache, the least-recently-used page is dropped.
- The dropped page may be referenced again soon afterwards, resulting in increased I/O.
- LRU cannot differentiate pages that are referenced relatively frequently from pages touched only once.
- In general this makes plain LRU a poor fit for database workloads (a minimal sketch follows this list).
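A minimal LRU buffer sketch (not the paper's code) makes the weakness concrete: a single pass over many cold pages can push out frequently reused pages.

```python
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> page contents, most recently used at the end

    def access(self, page_id) -> bool:
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # hit: mark as most recently used
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)       # miss: evict the least recently used page
        self.pages[page_id] = object()           # placeholder for fetching the page
        return False
```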
LRU-K:
- Visiting a page adds the reference to its entry in the history list.
- Once a page's reference count in the history list reaches K, the page is promoted to the cache.
- If the cache is full, evict the page whose K-th most recent reference timestamp is furthest in the past (largest backward K-distance).
- The history list cannot grow without bound either; it is trimmed using FIFO, LRU, or LFU (a rough sketch follows this list).
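The following is a rough LRU-K sketch under the description above: a page is promoted to the cache once it has been referenced K times, and eviction removes the cached page whose K-th most recent reference is oldest. The history is bounded with a simple LRU-style trim; class and field names are illustrative and the details differ from the paper.

```python
from collections import deque

class LRUK:
    def __init__(self, k: int, capacity: int, history_capacity: int):
        self.k = k
        self.capacity = capacity
        self.history_capacity = history_capacity
        self.history = {}   # page_id -> deque of last k reference times (not yet cached)
        self.cache = {}     # page_id -> deque of last k reference times (cached)
        self.clock = 0

    def access(self, page_id) -> bool:
        self.clock += 1
        times = self.cache.get(page_id) or self.history.setdefault(page_id, deque(maxlen=self.k))
        times.append(self.clock)
        if page_id in self.cache:
            return True                                  # hit
        if len(times) >= self.k:                         # promote after K references
            if len(self.cache) >= self.capacity:
                # evict the page whose K-th most recent reference is furthest in the past
                victim = min(self.cache, key=lambda p: self.cache[p][0])
                del self.cache[victim]
            self.cache[page_id] = self.history.pop(page_id)
        elif len(self.history) > self.history_capacity:  # bound the history (LRU-style trim)
            stalest = min(self.history, key=lambda p: self.history[p][-1])
            self.history.pop(stalest)
        return False                                     # miss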
MT-LRU:
- Alg objective:
This paper designs an algorithm that would reduce to LRU-K for any individual tenant while being able to handle new requirements arising from multiple tenants with SLAs, including:
- Asymmetry of tenants. Tenants can differ in the amount of memory promised in their SLAs and in their penalty functions; furthermore, different tenants may be active at different times.
- Unlike LRU-K, which aims to maximize the likelihood of a page being found in the cache, the algorithm must optimize a different objective, e.g., minimizing total penalties; this is converted into minimizing the number of misses.
- LRU-K alone cannot meet these requirements.
- Algorithms.
- Mark each page's importance with a score derived from the proposed penalty function, instead of the K-th reference timestamp used by LRU-K.
- Each eviction round deflates the scores, so older pages end up with lower scores, consistent with LRU-style aging.
- Eviction (a hedged sketch follows these steps):
- For each tenant, assign a score to each of its pages.
- Sort all pages globally and find the score at the target percentile.
- For each tenant, evict the pages whose scores fall below that percentile cutoff.
- Update the score vector.
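Below is a hedged sketch of one MT-LRU eviction round as the note describes it: each tenant's pages carry a penalty-weighted score, a global percentile over all scores fixes a cutoff, every tenant evicts its pages scoring below that cutoff, and the surviving scores decay so that stale pages age out. Function names, the decay factor, and the percentile parameter are illustrative, not the paper's.

```python
def evict_round(tenant_pages, target_fraction=0.1, decay=0.9):
    """tenant_pages: {tenant_id: {page_id: score}}. Returns the pages evicted per tenant."""
    all_scores = sorted(s for pages in tenant_pages.values() for s in pages.values())
    if not all_scores:
        return {}
    # Cutoff score at the requested percentile of the global score distribution.
    cutoff = all_scores[int(target_fraction * (len(all_scores) - 1))]

    victims = {}
    for tenant, pages in tenant_pages.items():
        victims[tenant] = [p for p, s in pages.items() if s <= cutoff]
        for p in victims[tenant]:
            del pages[p]
        # Deflate the survivors' scores so older pages age out (LRU-like behaviour).
        for p in pages:
            pages[p] *= decay
    return victims
```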