When Swift introduced native concurrency support in version 5.5, the existing Sequence protocol had already established a synchronous iteration model through IteratorProtocol. Sequence requires a makeIterator() method that returns a value conforming to IteratorProtocol, whose mutating next() method produces elements immediately, without suspension. This design predated Swift's async/await paradigm, creating a fundamental impedance mismatch between synchronous consumption expectations and asynchronous production capabilities that necessitated a parallel hierarchy.
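The synchronous model is easiest to see in a minimal Sequence conformance (a sketch; the Countdown type is illustrative, not from the standard library):

```swift
// A minimal synchronous Sequence: next() returns immediately, no suspension.
struct Countdown: Sequence {
    let start: Int

    struct Iterator: IteratorProtocol {
        var current: Int
        // Synchronous: the caller receives an element (or nil) without awaiting.
        mutating func next() -> Int? {
            guard current > 0 else { return nil }
            defer { current -= 1 }
            return current
        }
    }

    func makeIterator() -> Iterator { Iterator(current: start) }
}

// for-in drives the iterator synchronously.
for n in Countdown(start: 3) {
    print(n) // 3, 2, 1
}
```

Every call to next() completes before the loop body runs; there is no point at which the thread could be given up while waiting for data.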
The core conflict arises because Sequence's next() method signature cannot include the async keyword. If AsyncSequence were to refine Sequence, it would inherit a requirement for synchronous element access that is impossible to satisfy when data arrives asynchronously from network I/O or timers. Furthermore, allowing synchronous code to trigger asynchronous operations would violate Swift's structured concurrency guarantees, potentially permitting async code to run outside of a Task context and breaking hierarchical cancellation propagation across the runtime.
Swift architects created an independent protocol hierarchy where AsyncSequence does not inherit from Sequence. The AsyncIteratorProtocol defines mutating func next() async throws -> Element?, explicitly marking the suspension point in the type signature. This isolation ensures that iteration can only occur within an asynchronous context, allowing the Swift runtime to manage the continuation, handle task cancellation, and preserve the call stack correctly while preventing synchronous code from accidentally invoking suspension-dependent operations.
```swift
import Foundation

// Attempting to mix sync and async (illustrative failure)
protocol BrokenAsyncSequence: Sequence {
    // Cannot satisfy both the synchronous IteratorProtocol.next()
    // requirement and an async next() at the same time.
}

// Correct async design
struct TimedEvents: AsyncSequence {
    typealias Element = Date

    struct Iterator: AsyncIteratorProtocol {
        var count = 0
        mutating func next() async -> Date? {
            guard count < 5 else { return nil }
            count += 1
            try? await Task.sleep(nanoseconds: 1_000_000_000) // Suspension point
            return Date()
        }
    }

    func makeAsyncIterator() -> Iterator { Iterator() }
}
```
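Consuming TimedEvents can only happen in an asynchronous context; a minimal sketch of the call site:

```swift
// Iteration must occur inside an async function or a Task.
func observeEvents() async {
    // next() is non-throwing here, so plain `for await` suffices.
    for await timestamp in TimedEvents() {
        print("event at \(timestamp)")
    }
    // The loop ends when next() returns nil (after five elements).
}
```

Calling observeEvents() from synchronous code requires wrapping it in a Task, which is exactly the constraint the separate protocol hierarchy enforces.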
Scenario: Processing high-frequency sensor data in a health monitoring app.
Problem description: The development team needed to stream accelerometer data at 60Hz to detect falls using CoreMotion. They initially modeled the sensor feed as a Sequence, polling the hardware in a tight while loop on the main thread. This approach blocked the UI during data collection and risked app termination. They considered three architectural approaches to integrate async sensor callbacks with data processing pipelines.
Solution 1: Thread-blocking bridge.
They considered wrapping the async sensor API in a DispatchSemaphore to force synchronous waiting within a custom Sequence iterator.
Pros: Allows use of standard Array initializers and map/filter algorithms.
Cons: Blocks the calling thread, risking watchdog termination on iOS, wastes CPU cycles spinning, and prevents cancellation during sleep.
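A sketch of the rejected semaphore bridge, assuming a hypothetical callback-based readSensorAsync() API, shows where the blocking occurs:

```swift
import Dispatch

// Hypothetical push-based sensor API (illustrative stand-in for CoreMotion).
func readSensorAsync(completion: @escaping (Double) -> Void) {
    // hardware delivers a reading via the callback
}

struct BlockingSensorSequence: Sequence, IteratorProtocol {
    // Forces the async callback into a synchronous next() by blocking the thread.
    mutating func next() -> Double? {
        let semaphore = DispatchSemaphore(value: 0)
        var reading: Double?
        readSensorAsync { value in
            reading = value
            semaphore.signal()
        }
        semaphore.wait() // Blocks the calling thread; watchdog risk on main.
        return reading
    }

    func makeIterator() -> Self { self }
}
```

Each element costs a full thread block with no cancellation point, which is precisely the failure mode the team rejected.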
Solution 2: Callback-based delegation.
They considered abandoning Sequence conformance entirely, using delegate patterns with completion handlers for each sensor update.
Pros: Non-blocking, allows async hardware access without freezing the main thread.
Cons: Loses composability of Sequence operations, creates deeply nested "callback hell" when chaining transformations, and makes backpressure implementation nearly impossible.
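The nesting cost of Solution 2 becomes visible after only a couple of transformations (all handler names here are hypothetical):

```swift
// Hypothetical callback-based helpers.
func detectReading(onReading: @escaping (Double) -> Void) { onReading(3.1) }
func smooth(_ x: Double, completion: @escaping (Double) -> Void) { completion(x) }
func classify(_ x: Double, completion: @escaping (Bool) -> Void) { completion(x > 2.5) }
func alertUser() { print("fall detected") }

// Chaining transformations with completion handlers nests quickly.
detectReading { reading in
    smooth(reading) { smoothed in
        classify(smoothed) { isFall in
            if isFall { alertUser() } // three levels deep already
        }
    }
}
```

Each additional stage adds another level of indentation, and there is no natural place to signal "slow down" back to the sensor.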
Solution 3: Native AsyncSequence with AsyncStream.
They would wrap the CoreMotion callbacks in an AsyncStream using continuations, then process with for try await and the AsyncAlgorithms package.
Pros: Integrates with Swift concurrency, supports task cancellation, enables use of throttle and debounce operators, and maintains a responsive UI.
Cons: Requires an iOS 13+ deployment target (the floor for back-deployed Swift concurrency), and the team must learn structured concurrency patterns.
Chosen solution: The team adopted Solution 3, wrapping CMMotionManager updates in an AsyncStream with a .bufferingNewest(1) policy. This ensured that if data processing lagged behind the 60Hz hardware sampling, only the latest reading was retained, preventing memory bloat.
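A sketch of the chosen bridge (iOS-only; the CMMotionManager wiring is simplified from what a production app would need):

```swift
import CoreMotion

// Bridge the push-based CoreMotion callback into a pull-based AsyncStream.
// bufferingNewest(1) keeps only the latest reading if the consumer lags.
func accelerometerStream(manager: CMMotionManager) -> AsyncStream<CMAccelerometerData> {
    AsyncStream(bufferingPolicy: .bufferingNewest(1)) { continuation in
        manager.accelerometerUpdateInterval = 1.0 / 60.0 // 60 Hz
        manager.startAccelerometerUpdates(to: .main) { data, _ in
            if let data { continuation.yield(data) }
        }
        // Release the hardware when the consuming Task is cancelled
        // or the for-await loop ends.
        continuation.onTermination = { _ in
            manager.stopAccelerometerUpdates()
        }
    }
}
```

The onTermination handler is what lets Task cancellation propagate back to the hardware: tearing down the consuming Task tears down the sensor session.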
Result: The fall detection algorithm kept pace with the 60Hz sampling rate, CPU usage dropped by 70% compared to the polling approach, and the UI remained responsive. Because automatic Task cancellation propagated to the stream iterator, the system properly released hardware resources when the user backgrounded the app.
Question 1: Can I use break or continue with labels in an async for loop, and what happens to the iterator?
Answer: Yes, labeled control flow works in for try await loops. However, candidates often misunderstand the lifecycle implications. When you break from an async loop, the AsyncIterator simply goes out of scope. If the iterator is a reference type, its deinit runs and can release resources such as file descriptors; value-type iterators have no deinit, so any cleanup must live in the reference-typed resources they hold. Crucially, AsyncSequence does not declare a cancel() method on the protocol itself; cancellation flows through the Task hierarchy, and a well-behaved iterator either checks Task.isCancelled in next() or relies on a cooperative mechanism such as AsyncStream's continuation.onTermination handler to stop early and clean up.
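A short sketch of labeled break across nested async iteration:

```swift
// Labeled break exits both loops; the inner stream's iterator then
// goes out of scope and its resources are released.
func scan(streams: [AsyncStream<Int>]) async {
    outer: for stream in streams {
        for await value in stream {
            if value < 0 {
                break outer // leaves both loops at once
            }
            print(value)
        }
    }
}
```

After break outer, no further next() calls are made on the abandoned iterator; its cleanup happens through normal scope exit, not through any protocol-level cancel call.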
Question 2: Why does AsyncSequence not support the Array(myAsyncSequence) initializer like regular sequences?
Answer: Array's initializer requires its argument to conform to Sequence, not AsyncSequence. Since AsyncSequence does not refine Sequence, you cannot pass one to Array(_:). Candidates often miss that the standard library provides no async Array initializer at all: you materialize an async sequence manually, appending inside a for try await loop (or using reduce(into:)), or you write your own async initializer in an extension, which the language permits, since Swift does support async initializers. Either way, the operation aggregates elements by awaiting each next() call sequentially, and it respects task cancellation: if the parent Task is cancelled during materialization, a cooperative sequence surfaces a CancellationError.
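One common approach is a custom extension initializer (a sketch; the collecting: label is my own and this is not part of the standard library):

```swift
extension Array {
    // Custom convenience: Swift allows async initializers, so an
    // extension can collect an async sequence by awaiting each
    // element in turn. Throws if the sequence throws (including
    // CancellationError from a cooperative sequence).
    init<S: AsyncSequence>(collecting sequence: S) async throws where S.Element == Element {
        self.init()
        for try await element in sequence {
            append(element)
        }
    }
}

// Usage (inside an async context):
// let readings = try await Array(collecting: sensorStream)
```

Because elements are awaited one at a time, this never completes for an infinite sequence; it is only appropriate for streams known to finish.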
Question 3: How does backpressure work in AsyncStream versus NotificationCenter's AsyncSequence?
Answer: This reveals a critical implementation detail. AsyncStream does not apply backpressure to its producer: yield never suspends, so a slow consumer cannot slow the producer down. Instead, AsyncStream bounds memory through its bufferingPolicy, which is .unbounded by default, or .bufferingNewest/.bufferingOldest to drop elements once the buffer is full. NotificationCenter's notifications sequence likewise buffers rather than slowing the poster, so notifications can accumulate if the consumer cannot keep pace. Candidates often assume all AsyncSequence implementations handle backpressure uniformly. The reality is that AsyncSequence is a pull-based protocol, but the producer's behavior is implementation-defined: for true backpressure, where the producer suspends until the consumer is ready, AsyncChannel from the swift-async-algorithms package is the usual tool. Understanding which bridge you are using is essential for preventing memory exhaustion in high-throughput scenarios.
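The producer-suspending alternative can be sketched with AsyncChannel (requires the swift-async-algorithms package as a dependency):

```swift
import AsyncAlgorithms // from the swift-async-algorithms package

// AsyncChannel applies real backpressure: send(_:) suspends until a
// consumer awaits the next element, so a slow consumer throttles the
// producer instead of growing a buffer.
func produce(into channel: AsyncChannel<Int>) async {
    for i in 0..<100 {
        await channel.send(i) // suspends until the consumer catches up
    }
    channel.finish()
}

func consume(from channel: AsyncChannel<Int>) async {
    for await value in channel {
        print(value) // each receipt unblocks the next send
    }
}
```

Choosing between AsyncStream (buffer-and-drop) and AsyncChannel (suspend-the-producer) is the practical backpressure decision in most bridging code.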