In Go, maps are not thread-safe by default: a concurrent write, or a write concurrent with a read, from different goroutines is a data race and triggers a runtime panic such as "concurrent map writes". To protect a map, the common options are sync.Mutex, sync.RWMutex, or the specialized sync.Map type from the standard library, which provides safe concurrent access.
Elements can be removed from a map with delete(m, key); however, there are nuances when deleting during a range iteration (covered below).
Example of thread-safe work with a map:

```go
type SafeMap struct {
	mu sync.RWMutex
	m  map[string]int
}

func (s *SafeMap) Load(key string) (int, bool) {
	s.mu.RLock()
	v, ok := s.m[key]
	s.mu.RUnlock()
	return v, ok
}

func (s *SafeMap) Store(key string, value int) {
	s.mu.Lock()
	s.m[key] = value
	s.mu.Unlock()
}
```
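A quick usage sketch of the wrapper above; the NewSafeMap constructor is an assumption added here, since the inner map must be initialized before first use:

```go
package main

import (
	"fmt"
	"sync"
)

type SafeMap struct {
	mu sync.RWMutex
	m  map[string]int
}

// NewSafeMap initializes the inner map; writing to a nil map panics.
func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[string]int)}
}

func (s *SafeMap) Load(key string) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[key]
	return v, ok
}

func (s *SafeMap) Store(key string, value int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = value
}

func main() {
	sm := NewSafeMap()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sm.Store("last", i) // concurrent writes no longer race
		}(i)
	}
	wg.Wait()
	_, ok := sm.Load("last")
	fmt.Println(ok) // true
}
```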
Question: Is it safe to remove elements from a map during a range iteration over that same map?
Answer: Yes. The Go specification guarantees that deleting entries during a range over the same map is safe: if an entry is removed before it has been reached, it simply will not be produced by the iteration. What is not safe is modifying the map from other goroutines at the same time; that is a data race.
```go
m := map[string]int{"a": 1, "b": 2, "c": 3}
for k := range m {
	delete(m, k) // safe, as long as only this goroutine touches the map
}
```
Story
One of the monitoring services collected statistics in a map that was updated from several goroutines (metric counters). At peak load, "concurrent map writes" panics occurred, the service crashed, and data was lost. The fix: guard the map with a mutex, or use sync.Map instead of a regular map.
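A hedged sketch of the mutex fix described above; the Counters/Inc/Get names are illustrative, not from the original service. The key point is that the read-modify-write of an increment must happen entirely under the lock:

```go
package main

import (
	"fmt"
	"sync"
)

// Counters guards a metric map with a mutex so that
// increments from many goroutines do not race.
type Counters struct {
	mu sync.Mutex
	m  map[string]int
}

func NewCounters() *Counters {
	return &Counters{m: make(map[string]int)}
}

func (c *Counters) Inc(name string) {
	c.mu.Lock()
	c.m[name]++ // read-modify-write stays inside the critical section
	c.mu.Unlock()
}

func (c *Counters) Get(name string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.m[name]
}

func main() {
	c := NewCounters()
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("requests")
		}()
	}
	wg.Wait()
	fmt.Println(c.Get("requests")) // 1000
}
```

Note that sync.Map alone would not fix this particular case: a Load followed by a Store is still a lost-update race for counters, so a mutex (or atomic values stored in the map) is the safer choice here.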
Story
During a data migration, someone decided to speed up the cleanup of a large map with parallel goroutines, each deleting its share of the keys via range. The result: constant data races and unpredictable crashes. The fix was to revert to sequential cleanup, or to serialize access to the map for the duration of the range.
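A sketch of the sequential cleanup that turned out to be the safe option. The range+delete loop works in any Go version; the clear builtin is an alternative that requires Go 1.21+:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2, "c": 3}

	// Safe: deleting during range is fine as long as
	// only this goroutine touches the map.
	for k := range m {
		delete(m, k)
	}
	fmt.Println(len(m)) // 0

	m["x"] = 1
	clear(m) // Go 1.21+: removes all entries in one call
	fmt.Println(len(m)) // 0
}
```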
Story
In a range loop over a map, a developer added new entries to the same map (for example, while building the list of neighboring nodes of a graph). Per the Go specification, keys created during iteration may or may not be produced by that same range, so some nodes were silently skipped and the graph was processed incompletely. The bug was discovered only during full testing of rare cases. The fix was to rewrite the algorithm using a separate queue for additions.
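The "separate queue" fix can be sketched like this (the adjacency/visited names and the toy graph are illustrative): pending nodes go into a slice, and the map being built is never mutated while it is being ranged over.

```go
package main

import "fmt"

func main() {
	// A toy graph as an adjacency list.
	adjacency := map[string][]string{
		"a": {"b", "c"},
		"b": {"d"},
		"c": {},
		"d": {},
	}

	visited := map[string]bool{}
	queue := []string{"a"} // newly discovered nodes live here, not in an iterated map

	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		if visited[node] {
			continue
		}
		visited[node] = true
		// Neighbors are appended to the queue; no map is
		// modified mid-range, so nothing can be skipped.
		queue = append(queue, adjacency[node]...)
	}

	fmt.Println(len(visited)) // 4
}
```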