Pagination lets you process large SELECT result sets without overloading the server or the application. The main approaches:
Example (LIMIT/OFFSET)
SELECT * FROM Orders ORDER BY OrderID LIMIT 100 OFFSET 1000;
This query returns rows 1001–1100. However, OFFSET still forces the server to sort and skip the first 1000 rows, so it gets slower the deeper you paginate.
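A minimal runnable sketch of this behavior, using Python's sqlite3 with an in-memory table standing in for Orders (the table and column names come from the example; the row count is illustrative):

```python
import sqlite3

# Build a 2000-row Orders table so the OFFSET example has data to skip.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Orders (OrderID) VALUES (?)",
                 [(i,) for i in range(1, 2001)])

# OFFSET pagination: the engine still has to walk past the first 1000 rows
# before it can return the 100 requested ones.
page = conn.execute(
    "SELECT OrderID FROM Orders ORDER BY OrderID LIMIT 100 OFFSET 1000"
).fetchall()
print(page[0][0], page[-1][0])  # 1001 1100
```

The skipped rows are cheap here, but on a real table the server reads (and may sort) all of them on every request, which is what makes deep OFFSET pages expensive.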
Example (Keyset/Seek method)
SELECT * FROM Orders WHERE OrderID > 1100 ORDER BY OrderID LIMIT 100;
This query jumps straight to the next page without paying the cost of skipping OFFSET rows; it is especially effective when there is an index on OrderID.
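The same keyset approach as a runnable sketch, again with sqlite3 and an in-memory Orders table (the helper name next_page is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Orders (OrderID) VALUES (?)",
                 [(i,) for i in range(1, 2001)])

def next_page(conn, last_id, page_size=100):
    # Seek directly to rows after the last seen key; with an index on
    # OrderID this is a range scan whose cost does not depend on how
    # deep into the result set the page is.
    return conn.execute(
        "SELECT OrderID FROM Orders WHERE OrderID > ? "
        "ORDER BY OrderID LIMIT ?",
        (last_id, page_size),
    ).fetchall()

rows = next_page(conn, 1100)
print(rows[0][0], rows[-1][0])  # 1101 1200
```

Note the trade-off: keyset paging cannot jump to an arbitrary page number, only to the next page after a known key, which is why it suits "infinite scroll" and API cursors better than numbered page links.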
Trick: "Why does pagination using LIMIT/OFFSET perform poorly with large data sets, and how can it be improved?"
Answer: The difficulty is that OFFSET requires the server to scan (and often sort) all the preceding rows, so the deeper the page, the slower the query. The fix is to switch to keyset/seek pagination: filter by the key of the last row on the previous page instead of using an offset.
-- Get the next page by key
SELECT * FROM Orders WHERE OrderID > @LastOrderID ORDER BY OrderID LIMIT 100;
Story
Project: Marketplace, five years of order data (50 million rows).
Error: OFFSET was used to paginate old users' orders. Queries with OFFSET > 1 million began taking 30–60 seconds, which hit both reports and the API, overloading the server CPU and increasing queue wait times.
Story
Project: Corporate CRM, reports on clients.
Error: The pagination ignored sort order and used no supporting indexes. Performance dropped and result sets became inconsistent: users saw the same rows on different pages whenever the table changed underneath them.
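A common fix for the repeated-rows problem is a fully deterministic sort: order by the business column plus a unique tie-breaker key, and seek on that composite key. A hedged sketch with sqlite3 (table layout and the next_page helper are illustrative; the row-value comparison `(a, b) > (?, ?)` needs SQLite 3.15+ and can be expanded into AND/OR conditions on engines without it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CreatedAt TEXT)"
)
# Many orders share a CreatedAt date, so CreatedAt alone does not give
# a stable ordering between requests.
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?)",
    [(i, f"2024-01-{(i % 3) + 1:02d}") for i in range(1, 31)],
)

def next_page(conn, last_created, last_id, page_size=10):
    # Composite keyset (CreatedAt, OrderID): the unique OrderID breaks
    # ties, so the ordering is total and no row can repeat across pages.
    return conn.execute(
        "SELECT CreatedAt, OrderID FROM Orders "
        "WHERE (CreatedAt, OrderID) > (?, ?) "
        "ORDER BY CreatedAt, OrderID LIMIT ?",
        (last_created, last_id, page_size),
    ).fetchall()

page1 = next_page(conn, "", 0)          # first page
page2 = next_page(conn, *page1[-1])     # resume after the last row seen
print(page1[-1], page2[0])
```

An index on (CreatedAt, OrderID) lets the seek predicate use a range scan instead of a full sort.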
Story
Project: Financial platform, dashboards.
Error: Complex pagination was built with dynamically generated SQL without bind variables, which opened the door to SQL injection and made it hard to keep pages transactionally consistent.
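The injection half of this failure is avoided by passing every user-supplied value as a bind variable instead of concatenating it into the SQL string. A minimal sketch with sqlite3 (table contents and the sample payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, Customer TEXT)"
)
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, "alice"), (2, "bob"), (3, "alice")])

# A classic injection payload arriving as "user input".
user_input = "alice' OR '1'='1"
# Vulnerable pattern (do NOT do this): building the query by string
# concatenation, e.g. f"... WHERE Customer = '{user_input}'", would
# turn the payload into live SQL and match every row.

# Safe pattern: bind variables. The driver sends the value as data,
# never as SQL text, so the payload matches no customer.
rows = conn.execute(
    "SELECT OrderID FROM Orders WHERE Customer = ? "
    "ORDER BY OrderID LIMIT ?",
    (user_input, 100),
).fetchall()
print(rows)  # []
```

Bind variables also let the server reuse the prepared plan across pages, which matters once pagination queries run at high volume.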