
How is thread execution control and the scheduler implemented in Go? What features, internal structure, and limitations should be considered when designing parallel tasks?


Answer.

Go was originally designed for writing high-performance network and concurrent programs, so special attention went into the built-in scheduler and simple concurrency management via goroutines. Unlike languages where the programmer manages OS threads directly, goroutines in Go are lightweight, and thousands of them can run over a fixed number of system threads.

Problem: Direct control over threads is complex: it leads to rapid resource exhaustion, data races, and memory management difficulties.

Solution: Go uses an M:N model — a large number of goroutines are multiplexed onto a limited number of OS threads. This is handled by the scheduler built into the Go runtime, which internally operates on three entities: G (a goroutine), M (an OS thread), and P (a logical processor that owns a run queue of goroutines; there are GOMAXPROCS of them). The scheduler automatically balances goroutines across threads, including stealing work between Ps. The programmer only launches and synchronizes goroutines and never manages OS threads directly.

Example code:

package main

import (
	"fmt"
	"time"
)

func worker(id int) {
	fmt.Printf("Worker %d starting\n", id)
	time.Sleep(time.Second)
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	for i := 0; i < 5; i++ {
		go worker(i) // each call starts a new goroutine
	}
	// Crude wait for illustration only; real code should use sync.WaitGroup.
	time.Sleep(2 * time.Second)
}

Key features:

  • Goroutines are far cheaper than OS threads: they start with a small stack (about 2 KB) that grows on demand, and creating them is fast.
  • Scheduling is preemptive: since Go 1.14 the runtime can interrupt a goroutine asynchronously (via signals), in addition to the cooperative switch points at function calls and blocking operations.
  • GOMAXPROCS sets how many OS threads execute Go code simultaneously; it defaults to the number of logical CPUs and rarely needs manual tuning.

Trick questions.

Will each goroutine be run concurrently on a separate CPU core?

No. Goroutines are multiplexed onto OS threads; the number executing truly in parallel is bounded by GOMAXPROCS, and the scheduler decides which goroutines run and when.

Can specific goroutines/threads be manually controlled?

No. The Go runtime does not expose an interface for direct scheduling. The exceptions are GOMAXPROCS, which caps the number of OS threads executing Go code, and runtime.LockOSThread, which pins a goroutine to its current OS thread.

Does a large number of goroutines automatically accelerate the program?

No. A large number of concurrent operations can lead to additional overhead: context switching, resource contention, increased GC time, and memory consumption.

Common mistakes and anti-patterns

  • Spawning an unbounded number of goroutines, and leaking goroutines that never terminate.
  • Waiting for completion via time.Sleep instead of sync.WaitGroup or channels.
  • Tuning GOMAXPROCS without understanding the workload and hardware architecture.

Real-life example

Negative case

A microservice processed incoming requests concurrently but neither limited the number of goroutines nor waited for their completion via WaitGroup. The result: increased response times, data races, and unpredictable timeouts.

Pros:

  • Simple addition of parallelism;

Cons:

  • Unbounded memory consumption, goroutine leaks, hard-to-diagnose failures.

Positive case

A worker manager is implemented using a pool of goroutines, limiting the number of concurrently running tasks via a semaphore (for example, a buffered channel). A sync.WaitGroup waits for all tasks to complete.

Pros:

  • Controlled parallelism, repeatability of tests, successful scaling.

Cons:

  • Additional code for constraints and synchronization; tests for deadlock are required.