Go was originally designed for high-performance network and concurrent programs, so special attention went into the built-in scheduler and simple concurrency management via goroutines. Unlike most languages, where the programmer manages OS threads directly, goroutines in Go are much lighter: thousands of them can run concurrently over a small, fixed number of system threads.
Problem: Direct control over threads is complex: it leads to rapid resource exhaustion, data races, and memory-management difficulties.
Solution: Go uses the M:N model — a large number of goroutines (M) are multiplexed onto a small number of OS threads (N). This is handled by a scheduler implemented in the Go runtime, which automatically balances and redistributes goroutines across threads. The programmer manages only the launching and synchronization of goroutines, never OS threads directly.
Example code:
package main

import (
	"fmt"
	"time"
)

func worker(id int) {
	fmt.Printf("Worker %d starting\n", id)
	time.Sleep(time.Second)
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	for i := 0; i < 5; i++ {
		go worker(i)
	}
	time.Sleep(2 * time.Second) // crude wait for the workers to finish
}
Key features:
Will each goroutine be run concurrently on a separate CPU core?
No. Goroutines are multiplexed onto OS threads; the scheduler determines how many actually execute in parallel at any moment (bounded by GOMAXPROCS).
Can specific goroutines/threads be manually controlled?
No. The Go runtime exposes no interface for direct scheduling. The exceptions are GOMAXPROCS, which caps the number of OS threads executing Go code simultaneously, and runtime.LockOSThread, which pins the calling goroutine to its current thread.
Does a large number of goroutines automatically accelerate the program?
No. A large number of concurrent operations can lead to additional overhead: context switching, resource contention, increased GC time, and memory consumption.
A microservice processed incoming requests concurrently but neither limited the number of goroutines nor waited for their completion via WaitGroup. The result: increased response times, data races, and unpredictable timeouts.
Pros: minimal code, since each request is handled with a bare go statement and no coordination logic.
Cons: the goroutine count is unbounded, so memory and scheduling overhead grow with load; nothing waits for completion, and unsynchronized shared state leads to data races.
A worker manager is implemented as a pool of goroutines that limits the number of concurrently running tasks via a semaphore or channels; a WaitGroup correctly waits for all tasks to complete.
Pros: the number of concurrent tasks is bounded, so memory use and latency stay predictable; WaitGroup guarantees that all tasks finish before shutdown.
Cons: more code and coordination logic; a pool that is too small can underutilize the CPU or increase queueing delay.