Background:
Bulk data updates come up during migrations, data transfers, and corrections of business logic. A typical example: you need to change the status of tens of millions of rows in a live table without stopping the service, while maintaining availability and performance.
Problem:
An unbounded UPDATE runs for a long time, can trigger lock escalation up to a full table lock, and rolls back entirely on error. A method is needed that minimizes the impact on users while preserving transactional consistency.
Solution:
Split the update into small batches (for example, 10,000 rows each), commit each batch in its own short transaction, and loop until no matching rows remain.
Code example:
-- Batch update in chunks of 10,000 rows (T-SQL)
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) mytable
    SET status = 'archived',
        updated = GETDATE()
    WHERE status = 'active';

    IF @@ROWCOUNT = 0 BREAK;  -- stop when no rows are left to update
END
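The same loop can be sketched outside T-SQL. Below is a minimal, illustrative Python script using the standard sqlite3 module (the table and column names mirror the example above but are otherwise made up; SQLite has no UPDATE TOP, so each batch is selected via a LIMIT subquery):

```python
import sqlite3

BATCH_SIZE = 3  # tiny batch size so the demo loops several times

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO mytable (status) VALUES (?)", [("active",)] * 10)
conn.commit()

# Batch loop: each iteration updates at most BATCH_SIZE rows
# and commits its own short transaction.
while True:
    cur = conn.execute(
        "UPDATE mytable SET status = 'archived' "
        "WHERE id IN (SELECT id FROM mytable WHERE status = 'active' LIMIT ?)",
        (BATCH_SIZE,),
    )
    conn.commit()
    if cur.rowcount == 0:  # no rows left to update -> done
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM mytable WHERE status = 'active'"
).fetchone()[0]
print(remaining)  # 0
```

Because each commit releases the locks held by that batch, concurrent readers and writers are blocked only for the duration of one small transaction rather than the whole migration.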
Frequently asked questions:
Can you do a bulk UPDATE in one transaction and not lock the table?
Generally, no. A large transaction holds row/page locks that can escalate to a table lock, increasing the risk of blocking, deadlocks, and timeouts. It is better to work in batches.
Does having indexes affect the speed of bulk updates?
Yes. Updating an indexed column requires maintaining the corresponding index entries for every modified row. Sometimes it is worth temporarily dropping indexes and recreating them after the update, but this calls for careful analysis of the workload.
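The drop-and-recreate approach can be sketched as follows (an illustration with made-up names, again using Python's stdlib sqlite3 module; real index maintenance costs are engine-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO mytable (status) VALUES (?)", [("active",)] * 1000)
conn.execute("CREATE INDEX idx_status ON mytable (status)")
conn.commit()

# Drop the index so the bulk update pays no per-row index maintenance,
# then rebuild the index once at the end.
conn.execute("DROP INDEX idx_status")
conn.execute("UPDATE mytable SET status = 'archived' WHERE status = 'active'")
conn.execute("CREATE INDEX idx_status ON mytable (status)")
conn.commit()

indexes = [row[1] for row in conn.execute("PRAGMA index_list('mytable')")]
print(indexes)  # ['idx_status']
```

The trade-off: while the index is absent, queries that relied on it degrade to scans, so this is only appropriate during a maintenance window or when the index is not on the hot path.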
Are all rows updated atomically in batch updates?
No. Atomicity is guaranteed only within a single batch (the row/transaction limit). If a batch fails, earlier batches remain committed while the remaining rows stay unchanged. True atomicity requires a single full UPDATE in one transaction, which is risky at large volumes.
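This batch-level atomicity can be demonstrated with a short sketch (illustrative names, Python + stdlib sqlite3): the first batch commits, a simulated failure rolls back the second, and the table is left partially updated.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO mytable (status) VALUES (?)", [("active",)] * 6)
conn.commit()

BATCH_SIZE = 4

def run_batch(fail):
    """Update one batch in its own transaction; optionally simulate a crash."""
    conn.execute(
        "UPDATE mytable SET status = 'archived' "
        "WHERE id IN (SELECT id FROM mytable WHERE status = 'active' LIMIT ?)",
        (BATCH_SIZE,),
    )
    if fail:
        conn.rollback()       # simulated failure: this batch is undone
        raise RuntimeError("batch failed")
    conn.commit()             # success: this batch is now durable

run_batch(fail=False)         # batch 1 commits 4 rows
try:
    run_batch(fail=True)      # batch 2 fails and rolls back its 2 rows
except RuntimeError:
    pass

archived = conn.execute(
    "SELECT COUNT(*) FROM mytable WHERE status = 'archived'"
).fetchone()[0]
print(archived)  # 4: batch 1 kept, batch 2 undone -> partial update
```

In practice this means a batched migration must be restartable: the WHERE clause (here, status = 'active') should make re-running the loop after a failure safe and idempotent.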
An engineer decided to update 10 million rows in a production database with a single query: UPDATE mytable SET status = 'archived'. The site froze, the rollback took tens of minutes, and performance suffered.
Pros:
The query is broken into batches of 10,000 rows with short transactions; the update runs during working hours without downtime.