Swift Concurrency: From GCD to Structured Concurrency

Background
I remember when I first graduated and started iOS game development, I used NSThread directly for multithreading. When GCD (Grand Central Dispatch) was introduced, it was very convenient to use, and I used it almost exclusively. Occasionally, when tasks had dependencies, I would use NSOperation.
When using multithreading, too few locks can easily lead to crashes from concurrent access to shared data, while too many locks can easily cause deadlocks. In short, it was quite a headache. When developing apps, there were a few crashes that were never resolved, and those bugs were very difficult to troubleshoot.
So Swift introduced the concept of Structured Concurrency, whose cooperative threading model ensures no deadlocks. At the same time, the introduction of Actor maximizes data safety. Why only "maximizes" rather than guarantees? Because Actors are reentrant for performance reasons, and reentrancy across suspension points can still lead to races on the same data.
GCD (Grand Central Dispatch)
GCD mainly consists of queues and task closures. Queues are divided into:
- Serial Queue: Executes tasks sequentially
- Concurrent Queue: Executes tasks in parallel
You will notice there is no concept of threads here. In fact, concurrent queues and threads have a many-to-many relationship:
- A queue can distribute tasks to multiple threads
- A thread can also process tasks distributed by multiple concurrent queues
GCD encapsulation hides the details of threads. There are some special queues, such as:
- main: a serial queue bound to the main thread
- global: concurrent queues at different QoS levels
You can also define custom queues if necessary.
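As a quick sketch of the custom-queue case (the queue labels here are made up for illustration):

```swift
import Dispatch

// A custom serial queue: tasks run one at a time, in submission order.
let serialQueue = DispatchQueue(label: "com.example.serial")

// A custom concurrent queue: tasks may run in parallel on several threads.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

serialQueue.async { print("serial task 1") }
serialQueue.async { print("serial task 2") } // always runs after task 1

concurrentQueue.async { print("concurrent task A") }
concurrentQueue.async { print("concurrent task B") } // may interleave with A
```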
Common GCD Issues
Synchronously dispatching a task onto the serial queue you are currently running on causes a deadlock. You also need to be careful when using locks to protect shared data, to avoid both crashes and deadlocks.
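For example, a minimal sketch of the classic self-deadlock on a serial queue (the queue label is made up):

```swift
import Dispatch

let workQueue = DispatchQueue(label: "com.example.work")

workQueue.async {
    // Deadlock: sync waits for the inner closure to finish, but the inner
    // closure can never start because this same serial queue is still busy
    // running the outer block.
    workQueue.sync {
        print("Never reached")
    }
}
```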
Although GCD blocks can be given priorities, consider a serial queue that already contains a low-priority task when a high-priority task is enqueued. Because the low-priority task gets little CPU time, the high-priority task is stuck waiting for it to finish: this is priority inversion. GCD resolves it by boosting all the earlier low-priority tasks in the queue to the priority of the waiting task, which is far less elegant than the structured concurrency model.
GCD code can easily lead to Callback Hell.
GCD Group Usage Notes
group.enter() / group.leave() must appear in pairs:
```swift
let group = DispatchGroup()
let queue = DispatchQueue.global()

// --- Task A ---
group.enter() // 1. Enter group
queue.async {
    print("Start requesting data A...")
    sleep(2) // Simulate network delay
    print("Data A request completed")
    group.leave() // 2. Leave group
}

// --- Task B ---
group.enter()
queue.async {
    print("Start requesting data B...")
    sleep(3)
    print("Data B request completed")
    group.leave()
}

// --- Summary Notification ---
// Note: This will not block the current thread, it is an asynchronous callback
group.notify(queue: .main) {
    print("🎉 All requests completed! Refresh UI")
}
print("I am the main thread, I will not be blocked")
```

NSOperation
NSOperation is an encapsulation based on GCD. A natural question is: Why do we need NSOperation when we have GCD?
Advantages of NSOperation
| Feature | GCD | NSOperation |
|---|---|---|
| Concurrency Control | Hard to control | maxConcurrentOperationCount can easily control concurrency size |
| Encapsulation | Based on C-language Blocks, hard to encapsulate data | Class, easy to encapsulate and reuse (can be subclassed) |
| Dependencies | Can only synchronize via semaphores, etc. | Natively supports task dependencies |
| Priority Inversion | Simple adjustment | Adjusts Priority based on dependencies |
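One row in the table that the later example does not show is maxConcurrentOperationCount; a minimal sketch of using it (the queue name and task bodies are illustrative):

```swift
import Foundation

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 2 // at most two operations run at once

for index in 1...5 {
    downloadQueue.addOperation {
        print("Downloading item \(index)")
        sleep(1)
    }
}
```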
Task organization is less error-prone compared to GCD, and dependencies can be set:
```swift
let queue = OperationQueue()

// 1. Create specific tasks
let opA = BlockOperation {
    print("Task A: Download image")
    sleep(2)
}
let opB = BlockOperation {
    print("Task B: Download video")
    sleep(3)
}
let opC = BlockOperation {
    print("Task C: Synthesize and upload to server")
}

// 2. Set dependencies: C depends on A and B completing
opC.addDependency(opA)
opC.addDependency(opB)

// 3. Add to queue
queue.addOperations([opA, opB, opC], waitUntilFinished: false)
print("Tasks submitted, main thread continues running")
```

Structured Concurrency
From GCD to NSOperation there was an improvement in safety, but the common concurrent-programming problems of data races and deadlocks remained unsolved. In addition, whether you use blocks or delegates, it is easy to end up with callback hell and spaghetti code.
These have all been resolved with the introduction of async/await in Swift.
What is Structured?
When learning C, many books advise against using goto because the resulting code is hard to debug and maintain: a piece of code ends up with multiple entries and multiple exits, which is spaghetti code. Deeply nested closures and delegate chains lead to a similarly poor reading experience.
Using async/await can reduce cognitive load, making code easier to maintain.
Example of Callback Hell
```swift
func loadUserProfileOldSchool() {
    print("1. Start login...")
    // Level 1: Login
    login(username: "user", password: "123") { [weak self] result in
        guard let self = self else { return }
        switch result {
        case .success(let token):
            print("2. Login success, Token: \(token)")
            // Level 2: Get ID (Code starts indenting to the right)
            self.fetchUserID(token: token) { [weak self] result in
                guard let self = self else { return }
                switch result {
                case .success(let userID):
                    print("3. Got ID: \(userID)")
                    // Level 3: Get details (Continues indenting right...)
                    self.fetchProfile(userID: userID) { [weak self] result in
                        switch result {
                        case .success(let profile):
                            // Finally got the result!
                            self?.updateUI(with: profile)
                        case .failure(let error):
                            self?.showError(error)
                        }
                    }
                case .failure(let error):
                    // Error handling scattered in each layer
                    self.showError(error)
                }
            }
        case .failure(let error):
            self.showError(error)
        }
    }
}
```

Example of async/await
You will find the code is shorter, and most importantly, the code executes from top to bottom, making it readable and easy to maintain:
```swift
func loadUserProfileModern() async {
    print("1. Start login...")
    do {
        // Code executes sequentially like synchronous code
        let token = try await login(username: "user", password: "123")
        print("2. Login success, Token: \(token)")
        let userID = try await fetchUserID(token: token)
        print("3. Got ID: \(userID)")
        let profile = try await fetchProfile(userID: userID)
        // Update UI
        await MainActor.run {
            updateUI(with: profile)
        }
    } catch {
        // Error handling concentrated in one place!
        await MainActor.run {
            showError(error)
        }
    }
}

// How to call
Task {
    await loadUserProfileModern()
}
```

Concurrency Threading Model
When we talk about synchronous functions we think in terms of threads; with asynchronous functions we think in terms of tasks, and asynchronous functions run inside Tasks.
Swift Concurrency introduces a new scheduling model called the cooperative thread pool: the remaining work of a function after a suspension point is abstracted as a continuation and handed to an executor for scheduling, and the actual work is dispatched to a global cooperative pool whose threads execute it.

This new scheduling method ensures the program will not deadlock and also solves the previous Priority Inversion problem.
When a pool thread becomes idle (for example, after an await), the executor finds the next piece of work for it to run. Unlike GCD's traditional preemptive scheduling, this cooperative model uses executors, pending work, and scheduling queues together to decide what each thread runs next.

After Task {}, the task is submitted to the default executor and assigned to an idle thread in the pool. On hitting an await, the current task is suspended (its context is saved to the heap and packaged into a continuation), and the thread moves on to other tasks.
This is the underlying reason why Swift Concurrency reduces "cognitive load": you no longer need to worry about creating too many threads and freezing the app, or about deadlocks. The number of pool threads stays roughly equal to the number of CPU cores, and when work has to wait, tasks yield to one another instead of spawning ever more threads the way GCD can.
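A minimal sketch of how a callback-based API can be bridged into this model with a continuation (fetchLegacy and its callback are made up for illustration):

```swift
import Dispatch

// Hypothetical callback-based API
func fetchLegacy(completion: @escaping (Int) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
        completion(42)
    }
}

// Wrapping it as an async function: the task suspends at the continuation
// and the cooperative pool thread is free to run other work in the meantime.
func fetchValue() async -> Int {
    await withCheckedContinuation { continuation in
        fetchLegacy { value in
            continuation.resume(returning: value)
        }
    }
}

Task {
    let value = await fetchValue()
    print("Resumed with \(value)")
}
```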
Common Issues
- Do not block threads: Although the executor schedules tasks cooperatively, if all pool threads are blocked, the program will freeze.
- Try to avoid locks: If you must use one, acquire and release it on the same side of an await; holding a lock across a suspension point easily blocks a pool thread.
- DispatchSemaphore and NSCondition: These primitives unconditionally block the current thread while waiting, so use them with caution (a sketch follows this list).
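As a sketch of that last point (the function names and values are illustrative): waiting on a semaphore parks a cooperative pool thread, whereas an await suspension releases it.

```swift
import Dispatch

// Risky: DispatchSemaphore.wait() blocks the cooperative pool thread it runs on.
// If enough tasks do this at once, the whole pool can stall.
func loadBlocking() -> Int {
    let semaphore = DispatchSemaphore(value: 0)
    var result = 0
    DispatchQueue.global().async {
        result = 42            // simulate work on another queue
        semaphore.signal()
    }
    semaphore.wait()           // parks the current thread
    return result
}

// Safer: suspend with await instead of blocking; the thread stays free.
func loadSuspending() async -> Int {
    try? await Task.sleep(nanoseconds: 100_000_000) // simulate work
    return 42
}
```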
Common Executors
- Global Concurrent Executor
- RunLoop Executor
- Schedulable Executor
- Serial Executor
- Task Executor
```swift
// CustomTaskExecutor here stands in for your own type conforming to the
// TaskExecutor protocol; withTaskExecutorPreference runs the enclosed work
// on the preferred executor.
let executor = CustomTaskExecutor(label: "com.example.custom")
await withTaskExecutorPreference(executor) {
    await performWork()
}
```

Actor Model
To avoid data races, the Actor model was introduced. An actor is a reference type, and each actor has an executor that serializes external calls into it.
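A minimal sketch of an actor protecting mutable state (the BankAccount name and amounts are made up):

```swift
actor BankAccount {
    private var balance = 0

    func deposit(_ amount: Int) {
        balance += amount
    }

    func currentBalance() -> Int {
        balance
    }
}

let account = BankAccount()

Task {
    // Calls from outside the actor must be awaited; the actor's executor
    // serializes them, so concurrent deposits cannot corrupt `balance`.
    await account.deposit(100)
    print(await account.currentBalance())
}
```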
Global Actor
The main one is MainActor. All code marked with @MainActor is isolated to the MainActor and therefore runs on the main thread.
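For example, a hypothetical view model whose UI-facing state is isolated to the main actor:

```swift
@MainActor
final class ProfileViewModel {
    private(set) var title = ""

    func update(with name: String) {
        // Runs on the main thread because the class is @MainActor-isolated.
        title = "Hello, \(name)"
    }
}

Task { @MainActor in
    let viewModel = ProfileViewModel()   // created directly on the main actor
    viewModel.update(with: "Swift")      // no await needed inside this closure
    print(viewModel.title)
}
```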
Sendable
Sendable is a marker protocol that tells the compiler a value can be safely passed across concurrency domains.
- Actors are also Sendable
- Basic value types are Sendable
- A struct that contains only Sendable value types is itself Sendable
- A class is Sendable only if all of its stored properties are constants, or if it is marked @unchecked Sendable
Additionally, the sending keyword transfers ownership of a value and prevents the caller from using it afterwards, which allows even non-Sendable values to be passed across concurrency domains without tripping data-race analysis.
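A small sketch of these Sendable rules (the type names are made up):

```swift
import Foundation

// A struct of value types is Sendable automatically.
struct UserInfo: Sendable {
    let id: Int
    let name: String
}

// A final class with only constant stored properties can be Sendable.
final class ImmutableConfig: Sendable {
    let endpoint: String
    init(endpoint: String) { self.endpoint = endpoint }
}

// A class that manages its own synchronization can opt out of checking.
final class Cache: @unchecked Sendable {
    private let lock = NSLock()
    private var storage: [String: Int] = [:]

    func set(_ value: Int, for key: String) {
        lock.lock(); defer { lock.unlock() }
        storage[key] = value
    }
}
```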
Summary
| Feature | GCD | NSOperation | Swift Concurrency |
|---|---|---|---|
| Deadlock Risk | High | Medium | Low (Cooperative scheduling avoids deadlocks) |
| Callback Hell | Easy to fall into | Easy to fall into | async/await linear code |
| Data Safety | Manual lock management | Manual lock management | Actor + Sendable |
| Thread Count | Dynamic (Prone to thread explosion) | Dynamic (Prone to thread explosion) | Fixed (CPU core count) |
| Priority Inversion | Risky | Risky | Auto-handled |
Swift Concurrency provides a safer and more maintainable concurrency programming solution for iOS/macOS development through structured concurrency, the Actor model, and cooperative thread pools. Although the learning curve is steeper, in the long run, it can significantly reduce hard-to-troubleshoot concurrency bugs and improve code quality.








