The Rust standard library provides the main synchronization primitives for safe multi-threading:
- Mutex<T> — mutual exclusion: only one thread can access the data at a time;
- RwLock<T> — multiple concurrent readers or a single writer;
- Condvar — blocking a thread until another thread signals a condition;
- Arc<T> — thread-safe, atomically reference-counted shared ownership;
- atomic types (AtomicBool, AtomicUsize, etc.) — lock-free read/write operations at the hardware level.

Example:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    for handle in handles {
        handle.join().unwrap();
    }
    println!("Result: {}", *counter.lock().unwrap());
}
Question: Does Rust guarantee that using Mutex<T> completely eliminates deadlocks through compile-time checks?
Answer: No. Rust guarantees safe access to the data through ownership and the borrow checker, but it does not protect against deadlocks at the language level. A deadlock is a purely logical error: it arises when threads acquire multiple Mutexes in inconsistent order, or when a thread tries to lock a Mutex it already holds. Example:
use std::sync::Mutex;

let lock1 = Mutex::new(0);
let lock2 = Mutex::new(0);
// Thread 1: lock1 -> lock2
// Thread 2: lock2 -> lock1  ⇒ deadlock
History
Developers switched from Mutex<Option<T>> to RwLock<T>, not accounting for the fact that a write lock can be held far longer than a read lock. Under peak load, this resulted in processing delays of tens of seconds due to queues of waiting writers.
Programmers tried to save on threads by "pushing" a single Arc<Mutex<_>> into hundreds of threads. Due to the subtleties of the scheduler and the reuse of the same mutexes, surprising lock contention appeared — performance dropped fivefold!