Async in Golang
Goroutines implement asynchronous execution through Go’s runtime scheduler, which is designed to manage and execute goroutines efficiently across available CPU cores. Here’s a detailed explanation of how this works:
Key Mechanisms Enabling Asynchronous Execution in Goroutines
1. Goroutine Abstraction
- A goroutine is a lightweight abstraction over threads.
- Unlike traditional threads, goroutines are managed by the Go runtime, not the operating system. They have a small initial stack size (about 2 KB), which can grow and shrink dynamically.
2. Go Runtime Scheduler
- The Go runtime includes a scheduler that handles the execution of goroutines.
- It uses a work-stealing algorithm and an M:N threading model:
- M (Machine threads): OS-level threads managed by the kernel.
- N (Goroutines): the goroutines multiplexed onto those M threads.
- Many goroutines (N) run on a smaller number of OS threads (M), allowing efficient use of system resources.
3. Non-blocking Execution
- When a goroutine performs a blocking operation (e.g., I/O or waiting on a channel), the scheduler parks that goroutine and assigns its OS thread to another runnable goroutine, so threads keep doing useful work instead of idling.
4. Preemption
- The Go scheduler preempts long-running goroutines so that other goroutines get CPU time. Since Go 1.14, preemption is asynchronous (signal-based), so even tight loops that make no function calls can be interrupted. This helps maintain responsiveness in the program.
5. Channels and Synchronization
- Go uses channels for communication between goroutines. Channels provide a safe way to share data without explicit locking, allowing goroutines to coordinate their work asynchronously.
How Asynchronous Execution Works in Practice
Example 1: Concurrent Execution
```go
package main

import (
	"fmt"
	"time"
)

func work(id int) {
	time.Sleep(2 * time.Second) // Simulate work
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	for i := 1; i <= 3; i++ {
		go work(i) // Launch work asynchronously
	}
	fmt.Println("Main function continues...")
	time.Sleep(3 * time.Second) // Wait to see all outputs
}
```
Output:

```
Main function continues...
Worker 1 done
Worker 2 done
Worker 3 done
```

(The workers run concurrently, so the order of the "Worker N done" lines may vary between runs.)

- The `work` function runs in separate goroutines, allowing the main function to continue execution immediately.
Example 2: Non-blocking I/O
```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func fetchURL(url string, wg *sync.WaitGroup) {
	defer wg.Done()
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close() // always release the response body
	fmt.Printf("Fetched %s: %d\n", url, resp.StatusCode)
}

func main() {
	var wg sync.WaitGroup
	for _, url := range []string{"https://example.com", "https://golang.org"} {
		wg.Add(1)
		go fetchURL(url, &wg)
	}
	wg.Wait() // block until both fetches complete
}
```

Explanation:
- The HTTP requests run asynchronously in separate goroutines, and `wg.Wait()` blocks `main` until both complete. (An empty `select {}` would also keep `main` from returning, but it never unblocks, and once all other goroutines finish the runtime aborts with a "all goroutines are asleep - deadlock!" fatal error.)
Behind the Scenes: Why Is It Async?
- Dynamic Stack Management:
- Each goroutine starts with a small stack, making it cheaper to create and run multiple goroutines. The stack grows or shrinks based on need, avoiding large memory overhead.
- Efficient Context Switching:
- The runtime scheduler switches between goroutines efficiently, without the overhead of kernel-level context switches.
- I/O Multiplexing:
- For I/O operations, Go leverages OS-level mechanisms like epoll (Linux), kqueue (macOS/BSD), or IOCP (Windows). This allows a single thread to manage multiple I/O operations asynchronously.
- Runtime Cooperation:
- The scheduler ensures that goroutines cooperate, yielding control when they perform blocking operations like channel communication or `time.Sleep`.
In summary, goroutines achieve asynchronous execution by being lightweight, non-blocking, and efficiently managed by the Go runtime scheduler, which multiplexes them across OS threads. This design enables Go programs to perform high-concurrency operations with simplicity and minimal overhead.