Async Python is powerful but confusing. Here's what you need to know.
## When to Use Async
Use async for I/O-bound work:
- HTTP requests
- Database queries
- File operations
- WebSocket connections
Don't use async for CPU-bound work:
- Number crunching
- Image processing
- Data transformation
Async lets you do other things while waiting for I/O. It doesn't make CPU work faster.
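A quick way to convince yourself: wrap CPU work in coroutines and `gather` them, and you get no speedup, because nothing ever yields control to the event loop. A self-contained demo (the workload is an arbitrary stand-in):

```python
import asyncio
import time

def crunch(n=2_000_000):
    # Pure CPU work: never awaits, never yields to the event loop
    return sum(i * i for i in range(n))

async def crunch_async():
    return crunch()

async def main():
    start = time.perf_counter()
    await asyncio.gather(crunch_async(), crunch_async())
    gathered = time.perf_counter() - start

    start = time.perf_counter()
    crunch()
    crunch()
    plain = time.perf_counter() - start

    # Roughly the same: the event loop just runs the two
    # coroutines back to back, since neither ever awaits
    print(f"gather: {gathered:.2f}s  plain: {plain:.2f}s")

asyncio.run(main())
```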
## Basic Patterns

### Simple async function
```python
import asyncio

async def fetch_data():
    await asyncio.sleep(1)  # Simulates I/O
    return {"status": "ok"}

# Run it
result = asyncio.run(fetch_data())
```

### Running tasks concurrently
```python
async def main():
    # Sequential: ~3 seconds
    result1 = await fetch_data()
    result2 = await fetch_data()
    result3 = await fetch_data()

    # Concurrent: ~1 second
    results = await asyncio.gather(
        fetch_data(),
        fetch_data(),
        fetch_data(),
    )
```

`gather` runs all the coroutines concurrently and returns their results in the same order you passed them.
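To see the difference concretely, here's a runnable, timed version (reusing the `fetch_data` stub from above):

```python
import asyncio
import time

async def fetch_data():
    await asyncio.sleep(1)  # Simulates I/O
    return {"status": "ok"}

async def timed():
    start = time.perf_counter()
    results = await asyncio.gather(fetch_data(), fetch_data(), fetch_data())
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(timed())
print(f"{len(results)} results in {elapsed:.2f}s")  # ~1s, not ~3s
```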
### Fire and forget
```python
async def background_task():
    await asyncio.sleep(10)
    print("Done in background")

async def main():
    # Don't await yet - runs in the background
    task = asyncio.create_task(background_task())

    # Do other work
    await do_something_else()

    # Optionally wait for it later
    await task
```

## HTTP Requests with aiohttp
```python
import aiohttp

async def fetch_url(session, url):
    async with session.get(url) as response:
        return await response.json()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        return await asyncio.gather(*tasks)

# Fetch 100 URLs concurrently
urls = [f"https://api.example.com/item/{i}" for i in range(100)]
results = asyncio.run(fetch_all(urls))
```

## Timeouts
```python
async def fetch_with_timeout():
    try:
        async with asyncio.timeout(5):  # Python 3.11+
            return await slow_operation()
    except asyncio.TimeoutError:
        return None
```

## Semaphores for Rate Limiting
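The `asyncio.timeout` context manager is Python 3.11+. On older versions, `asyncio.wait_for` does the same job — a minimal sketch, with `slow_operation` stubbed out and the timeout parameterized so it's runnable:

```python
import asyncio

async def slow_operation():
    await asyncio.sleep(10)  # stand-in for a slow call
    return "done"

async def fetch_with_timeout(timeout=5):
    try:
        # wait_for cancels slow_operation if the timeout expires
        return await asyncio.wait_for(slow_operation(), timeout)
    except asyncio.TimeoutError:
        return None

print(asyncio.run(fetch_with_timeout(timeout=0.1)))  # None
```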
```python
async def fetch_with_limit(sem, session, url):
    async with sem:  # Only N concurrent requests
        return await fetch_url(session, url)

async def main():
    sem = asyncio.Semaphore(10)  # Max 10 concurrent
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_with_limit(sem, session, url) for url in urls]
        return await asyncio.gather(*tasks)
```

## Error Handling
```python
async def safe_fetch(session, url):
    try:
        return await fetch_url(session, url)
    except aiohttp.ClientError as e:
        return {"error": str(e)}

async def fetch_all_safe(session, urls):
    tasks = [safe_fetch(session, url) for url in urls]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            print(f"Task failed: {result}")
    return results
```

## Common Pitfalls
### Forgetting to await
```python
# Bug: returns a coroutine object, not the result
result = fetch_data()

# Correct
result = await fetch_data()
```

### Blocking the event loop
```python
import time

# Bad: blocks the whole event loop
def blocking_io():
    time.sleep(5)

# Good: use async sleep
async def async_wait():
    await asyncio.sleep(5)

# Good: run blocking code in a thread
await asyncio.to_thread(blocking_io)
```

### Creating tasks without awaiting
```python
# Bug: task may never complete
async def main():
    asyncio.create_task(important_task())
    # main returns, task gets cancelled

# Correct: keep a reference and await it
async def main():
    task = asyncio.create_task(important_task())
    await task
```

## When to Use Threads Instead
Use threads when:
- Calling blocking libraries (most database drivers)
- CPU-bound work (with GIL limitations)
- Legacy code that can't be made async
```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def run_blocking():
    loop = asyncio.get_running_loop()  # get_event_loop() is deprecated in coroutines
    with ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, blocking_function)
    return result
```

## My Rules
- Don't make everything async. Only I/O-bound code benefits.
- Use `gather` for concurrency, not sequential `await`s.
- Handle errors per-task. One failure shouldn't crash everything.
- Limit concurrency. Semaphores prevent overwhelming servers.
- Never block the loop. Use `to_thread` for blocking calls.
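Putting the rules together, here's a sketch combining a semaphore, a per-request timeout, per-task error handling, and `gather`. The network call is stubbed with `asyncio.sleep` so the example is self-contained; swap in a real aiohttp fetch in practice:

```python
import asyncio
import random

async def fetch_url(url):
    # Stand-in for a real HTTP call (e.g. aiohttp)
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return {"url": url, "status": "ok"}

async def safe_limited_fetch(sem, url, timeout=5):
    async with sem:  # Rule: limit concurrency
        try:
            # Rule: bound every request with a timeout
            return await asyncio.wait_for(fetch_url(url), timeout)
        except (asyncio.TimeoutError, OSError) as e:
            # Rule: handle errors per-task
            return {"url": url, "error": str(e)}

async def fetch_all(urls, limit=10):
    sem = asyncio.Semaphore(limit)
    tasks = [safe_limited_fetch(sem, url) for url in urls]
    # Rule: gather, not sequential awaits
    return await asyncio.gather(*tasks)

results = asyncio.run(fetch_all([f"https://api.example.com/item/{i}" for i in range(25)]))
```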
Async is about waiting efficiently. If your code isn't waiting on I/O, async won't help.