Answer to the question
In Python, a coroutine created by calling an async def function is a single-use state machine, marked at the CPython level with the CO_COROUTINE code flag (generator-based coroutines produced by the types.coroutine decorator carry CO_ITERABLE_COROUTINE instead). Calling an async function does not execute its body; it immediately returns a coroutine object containing a frame object and execution state. Awaiting it drives this state machine to completion, at which point the frame is released and the coroutine is marked as exhausted. The runtime explicitly guards against re-entry by checking this completion state, raising RuntimeError ("cannot reuse already awaited coroutine") on any subsequent await, because a coroutine object is designed to represent a single, discrete asynchronous operation with linear control flow. Generator functions, by contrast, are factories: each invocation creates a brand-new PyGenObject with its own independent stack frame and instruction pointer, allowing one function to produce multiple independent iterators that each maintain a separate execution context.
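The single-use versus factory distinction can be demonstrated in a few lines: awaiting the same coroutine object twice raises RuntimeError, while calling a generator function twice yields two independent iterators.

```python
import asyncio

async def one_shot():
    return 42

async def main():
    coro = one_shot()
    first = await coro          # drives the state machine to completion
    try:
        await coro              # frame is exhausted; re-entry is guarded
    except RuntimeError as exc:
        return first, str(exc)

result, message = asyncio.run(main())
print(result)    # 42
print(message)   # cannot reuse already awaited coroutine

# Generator functions, by contrast, are factories:
def counter():
    yield 1
    yield 2

g1, g2 = counter(), counter()   # two independent generator objects
print(next(g1), next(g2))       # 1 1 -- separate frames, separate state
```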
Situation from life
A development team was building a resilient WebSocket client that needed to retry failed connection attempts using exponential backoff. They initially defined a connection coroutine at the module level and attempted to reuse it across retry logic.
    import asyncio
    import websockets

    async def establish_connection():
        return await websockets.connect("wss://api.example.com")

    # Module-level instantiation -- one coroutine object shared by every retry
    connection_coro = establish_connection()

    async def retry_connect(max_attempts=3):
        for attempt in range(max_attempts):
            try:
                ws = await connection_coro  # Fails on second iteration
                return ws
            except Exception:
                await asyncio.sleep(2 ** attempt)
The problem emerged when the second loop iteration attempted to await connection_coro again, triggering a RuntimeError: the first attempt, which had failed with a connection error, had already exhausted the coroutine object. A coroutine that exits by raising an exception is just as spent as one that returns normally. The team considered three architectural solutions.
One approach involved manually reconstructing the coroutine object inside the except block after catching the RuntimeError. While technically feasible, this introduced fragile state management and made the code dependent on detecting exhaustion via exception handling, which is semantically ambiguous and could mask legitimate runtime errors within the connection logic itself.
Another solution proposed converting establish_connection into a class implementing __await__ to create a resettable awaitable. This provided a factory pattern but added unnecessary boilerplate and complexity, obscuring the simple intent of establishing a connection and requiring manual state tracking that duplicated what the Python runtime already provides through function calls.
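For completeness, the rejected class-based approach can be sketched as follows. This is illustrative only; ReawaitableConnect and fake_connect are hypothetical names, with the real websockets call replaced by a stub.

```python
import asyncio

class ReawaitableConnect:
    """A re-awaitable wrapper: each await delegates to a fresh coroutine."""
    def __init__(self, connect_fn):
        self._connect_fn = connect_fn

    def __await__(self):
        # Build a brand-new coroutine on every await, sidestepping exhaustion
        return self._connect_fn().__await__()

async def fake_connect():
    return "socket"

async def main():
    conn = ReawaitableConnect(fake_connect)
    return await conn, await conn   # safe: each await gets a new frame

print(asyncio.run(main()))  # ('socket', 'socket')
```

It works, but as noted above the wrapper merely re-implements what a plain function call already does.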
The chosen solution was to treat the async function as a factory by moving the call site inside the loop, ensuring each iteration received a pristine coroutine object. By refactoring to ws = await establish_connection(), each attempt instantiated a fresh state machine with independent resource management. This aligned with Python's design philosophy, in which async functions are factories for one-shot awaitable computations, resulting in clean retry logic that properly isolated failed connection attempts from subsequent retries.
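The refactored retry loop looks like this; the real websockets.connect call is replaced by a stub so the sketch is self-contained.

```python
import asyncio

async def establish_connection():
    # In the real client: return await websockets.connect("wss://api.example.com")
    return "connected"

async def retry_connect(max_attempts=3):
    for attempt in range(max_attempts):
        try:
            # Fresh coroutine object on every iteration -- no reuse possible
            return await establish_connection()
        except Exception:
            await asyncio.sleep(2 ** attempt)
    raise ConnectionError("all attempts failed")

print(asyncio.run(retry_connect()))  # connected
```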
What candidates often miss
Why does storing a coroutine in a variable and forgetting to await it create a resource leak, and how does close() mitigate this?
Candidates often assume that unawaited coroutines are simply garbage collected without side effects. However, if a coroutine has entered its body and suspended at an await expression while holding resources (for example, a database connection or a lock), its frame retains references to those resources. Calling close() on the coroutine object throws a GeneratorExit exception into the frame, so async with context managers and try/finally blocks release resources immediately. Without an explicit close(), the interpreter only closes the coroutine when it is garbage collected; if the object participates in a reference cycle, that waits for the cyclic collector, which may be too late in connection-pool-exhaustion scenarios.
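The cleanup behaviour of close() can be observed without an event loop by suspending a coroutine manually via send(None). The suspend helper below is a hypothetical stand-in for a real await point such as an I/O future.

```python
import types

log = []

@types.coroutine
def suspend():
    yield   # a bare suspension point; no event loop required

async def hold_resource():
    log.append("acquired")
    try:
        await suspend()         # coroutine parks here, "holding" the resource
    finally:
        log.append("released")  # runs when GeneratorExit propagates

coro = hold_resource()
coro.send(None)   # advance to the suspension point
print(log)        # ['acquired']
coro.close()      # throws GeneratorExit into the frame
print(log)        # ['acquired', 'released']
```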
How does inspect.iscoroutine() differ from inspect.isawaitable(), and why does this distinction matter when writing generic asyncio utilities?
inspect.iscoroutine() returns True only for native coroutine objects created by async def functions, while inspect.isawaitable() returns True for any object implementing __await__, including coroutines, tasks, futures, and custom awaitables. Candidates miss that asyncio functions like ensure_future() accept any awaitable, not just coroutines. A library that strictly checks iscoroutine() rejects valid awaitables such as asyncio.Future and Task instances or custom objects defining __await__, breaking polymorphism in generic utility functions designed to schedule arbitrary asynchronous operations.
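The distinction is easy to verify. CustomAwaitable below is a hypothetical example of an object that is awaitable without being a coroutine.

```python
import inspect

async def native():
    return 1

class CustomAwaitable:
    def __await__(self):
        return native().__await__()

coro = native()
custom = CustomAwaitable()

print(inspect.iscoroutine(coro))     # True  -- native coroutine object
print(inspect.iscoroutine(custom))   # False -- not created by async def
print(inspect.isawaitable(coro))     # True
print(inspect.isawaitable(custom))   # True  -- anything with __await__

coro.close()  # avoid a "coroutine was never awaited" RuntimeWarning
```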
What is the difference between async for and await when consuming an asynchronous generator, and why does the former require __aiter__ to return the generator itself rather than a coroutine?
await consumes a single awaitable to completion and returns one value, while async for drives an asynchronous iterator, awaiting __anext__ once per item (in an async generator, each yield supplies one item). Candidates confuse async for with awaiting a list of coroutines. Crucially, __aiter__ must return the asynchronous iterator object directly (not an awaitable), because the Python runtime calls __aiter__ synchronously to obtain the iterator before beginning the iteration protocol. Returning a coroutine from __aiter__ raises TypeError (since Python 3.8), as the protocol expects immediate access to the iterator's __anext__ method to drive the asynchronous iteration state machine.
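A minimal hand-rolled async iterator makes the protocol concrete: __aiter__ is a plain synchronous method returning self, while __anext__ is the coroutine. Countdown is an illustrative name, not a standard class.

```python
import asyncio

class Countdown:
    """Minimal async iterator: __aiter__ returns self synchronously."""
    def __init__(self, n):
        self.n = n

    def __aiter__(self):          # plain method -- must NOT be async def
        return self

    async def __anext__(self):
        if self.n == 0:
            raise StopAsyncIteration
        self.n -= 1
        await asyncio.sleep(0)    # a real iterator would await I/O here
        return self.n + 1

async def main():
    return [value async for value in Countdown(3)]

print(asyncio.run(main()))  # [3, 2, 1]
```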