Async and Await
Write concurrent I/O-bound code with async def and await — do many slow operations at once without threads.
"async/await doesn't make your code faster by running things in parallel — it makes waiting efficient. While one task waits for the network, another task runs."
— ShurAI

The Problem: Waiting Wastes Time
Normal (synchronous) code pauses completely while waiting for slow operations — network requests, disk reads, database queries.
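For example, here is a synchronous sketch of the same two "fetches" used later in this lesson, with `time.sleep` standing in for a slow network call (any blocking I/O behaves the same way):

```python
import time

def fetch_data(name, delay):
    print(f"Starting: {name}")
    time.sleep(delay)  # blocks the whole program; nothing else can run
    print(f"Done: {name}")
    return f"{name} result"

# Each call must finish before the next one starts: ~5s total (2s + 3s)
results = [fetch_data("user", 2), fetch_data("orders", 3)]
print(results)
```

The synchronous version always pays the *sum* of the delays, because the program sits idle during every wait.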
async def and await
Add async before def to make a coroutine. Use await to pause execution at a slow operation and let other tasks run while waiting:
```python
import asyncio

async def fetch_data(name, delay):
    print(f"Starting: {name}")
    await asyncio.sleep(delay)  # pause HERE, let others run
    print(f"Done: {name}")
    return f"{name} result"

async def main():
    # Run both tasks concurrently with gather()
    results = await asyncio.gather(
        fetch_data("user", 2),
        fetch_data("orders", 3),
    )
    print(results)

asyncio.run(main())
```
```
Starting: user
Starting: orders
Done: user       ← finishes after 2s
Done: orders     ← finishes after 3s (total ~3s, not 5s)
['user result', 'orders result']
```
Key Vocabulary
| Term | Meaning |
|---|---|
| coroutine | A function defined with async def. Must be awaited to run. |
| await | Pauses the coroutine at this point and yields control back to the event loop. |
| event loop | Python’s scheduler. Runs coroutines, picks the next one when one awaits. |
| asyncio.run() | Entry point — starts the event loop and runs one top-level coroutine. |
| asyncio.gather() | Run multiple coroutines concurrently and wait for all to finish. |
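One subtlety from the table is worth seeing once: *calling* a coroutine function does not run it. The call returns a coroutine object, and the body only executes when that object is awaited. A minimal sketch (the names `greet` and `main` are illustrative):

```python
import asyncio

async def greet():
    return "hello"

async def main():
    coro = greet()                  # just creates a coroutine object
    print(type(coro).__name__)      # 'coroutine' — nothing has run yet
    result = await coro             # awaiting it actually runs the body
    print(result)                   # 'hello'

asyncio.run(main())
```

Forgetting the `await` is a classic bug: the coroutine is created, never runs, and Python emits a `RuntimeWarning: coroutine was never awaited`.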
await can only go inside async def
```python
import asyncio

# CORRECT — await is inside async def
async def good():
    await asyncio.sleep(1)

# WRONG — await outside async def → SyntaxError
def bad():
    await asyncio.sleep(1)  # SyntaxError!
```
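So how does plain synchronous code ever start a coroutine? `asyncio.run()` is the bridge: it creates an event loop, runs one coroutine to completion, and hands back its return value. A minimal sketch (the name `work` is illustrative):

```python
import asyncio

async def work():
    await asyncio.sleep(0.1)  # await is legal here: we're inside async def
    return "done"

# Synchronous code can't use await, but it can pass a coroutine
# to asyncio.run(), which drives it on a fresh event loop.
result = asyncio.run(work())
print(result)
```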
Real Example — Async Web Scraper (Simulated)
```python
import asyncio

async def fetch_page(url, delay):
    print(f"  Fetching {url}...")
    await asyncio.sleep(delay)  # simulates network latency
    return f"<html from {url}>"

async def scrape_all(pages):
    print("Starting concurrent scrape...")
    tasks = [fetch_page(url, delay) for url, delay in pages]
    results = await asyncio.gather(*tasks)
    return results

pages = [
    ("shurai.com/home", 1),
    ("shurai.com/about", 2),
    ("shurai.com/docs", 1.5),
]

results = asyncio.run(scrape_all(pages))
for r in results:
    print(f"  Got: {r}")

# All 3 pages fetched in ~2s (the longest), not 4.5s (the sum)
```
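`gather()` is not the only way to get concurrency. `asyncio.create_task()` schedules a coroutine on the event loop immediately, so it starts running in the background before you await it. A sketch (the names `fetch` and `main` are illustrative):

```python
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # create_task() schedules each coroutine on the event loop right away
    t1 = asyncio.create_task(fetch("user", 0.2))
    t2 = asyncio.create_task(fetch("orders", 0.1))
    # Both are already running; await just collects the results
    print(await t1)
    print(await t2)

asyncio.run(main())
```

`gather()` is the convenient choice when you have a batch of coroutines and want all the results at once; `create_task()` gives you individual handles you can await, cancel, or check separately.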
"async/await shines for I/O-bound tasks: API calls, file reads, database queries, websockets. For CPU-bound work (heavy computation), use multiprocessing instead."
— ShurAI

🧠 Quiz — Q1
What kind of tasks benefit most from async/await?
🧠 Quiz — Q2
What does await do when a coroutine reaches it?
🧠 Quiz — Q3
What does asyncio.gather(task1, task2) do?
🧠 Quiz — Q4
Where can you use the await keyword?