Pytest's plugin ecosystem is massive—over 800 plugins on PyPI. Here are the ones that actually matter.
pytest-cov: Code Coverage
Know what your tests actually test.
pip install pytest-cov
# Basic coverage report
pytest --cov=myapp
# HTML report for browsing
pytest --cov=myapp --cov-report=html
# Fail if coverage drops below threshold
pytest --cov=myapp --cov-fail-under=80
# Show missing lines
pytest --cov=myapp --cov-report=term-missing
Output:
---------- coverage: platform darwin, python 3.11.0 ----------
Name                Stmts   Miss  Cover   Missing
-------------------------------------------------
myapp/__init__.py       4      0   100%
myapp/api.py           45      3    93%   67-69
myapp/models.py        32      0   100%
myapp/utils.py         28      7    75%   12-18, 45
-------------------------------------------------
TOTAL                 109     10    91%
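Not every uncovered line is a problem: patterns listed under exclude_lines in the coverage config (shown below) drop matching statements from the report entirely. A minimal sketch of where that matters, using a hypothetical open_report helper:

```python
import sys

def open_report(path):
    """Return the command that opens an HTML coverage report in a browser."""
    if sys.platform == "darwin":
        return ["open", path]
    if sys.platform.startswith("linux"):
        return ["xdg-open", path]
    # Unreachable on the platforms CI runs on; the "raise NotImplementedError"
    # pattern in exclude_lines keeps this branch from counting as a missed line.
    raise NotImplementedError(sys.platform)
```

Without the exclusion, that fallback branch would show up under Missing on every platform you actually test on.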
Configure in pyproject.toml:
[tool.coverage.run]
branch = true
source = ["myapp"]
omit = ["*/tests/*", "*/__pycache__/*"]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"if TYPE_CHECKING:",
"raise NotImplementedError",
]
pytest-xdist: Parallel Test Execution
Run tests across multiple CPUs:
pip install pytest-xdist
# Use all available CPUs
pytest -n auto
# Use 4 workers
pytest -n 4
# Distribute by file
pytest -n 4 --dist loadfile
A test suite that takes 10 minutes can drop to 3 minutes on a 4-core machine. Essential for CI pipelines.
Gotchas:
- Tests must be independent (no shared state)
- Database fixtures need isolation (separate DBs per worker)
- Output order is non-deterministic
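The second gotcha is usually solved with pytest-xdist's built-in worker_id fixture, which reports "gw0", "gw1", and so on under xdist, or "master" when the suite runs serially. A sketch of a per-worker database fixture (the worker_db_url helper is hypothetical):

```python
import pytest

def worker_db_url(worker_id, base_dir):
    """Build a per-worker SQLite URL so workers never share one database."""
    return f"sqlite:///{base_dir}/test_{worker_id}.sqlite3"

@pytest.fixture(scope="session")
def db_url(worker_id, tmp_path_factory):
    # worker_id comes from pytest-xdist: "gw0"/"gw1"/... per worker,
    # or "master" when the suite runs without -n
    return worker_db_url(worker_id, tmp_path_factory.mktemp("db"))
```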
pytest-timeout: Kill Hanging Tests
Prevent tests from running forever:
pip install pytest-timeout
# Global timeout of 30 seconds
pytest --timeout=30
# Per-test timeout
@pytest.mark.timeout(5)
def test_should_be_fast():
    quick_operation()
Configure defaults:
[tool.pytest.ini_options]
timeout = 60
timeout_method = "signal"  # or "thread" on Windows
pytest-randomly: Randomize Test Order
Find tests with hidden dependencies:
pip install pytest-randomly
Tests run in random order by default after installation. If a test fails only when run after another test, you've found a bug.
# Reproduce a specific order
pytest --randomly-seed=12345
# See the seed used
pytest -v  # Shows: Using --randomly-seed=67890
pytest-mock: Better Mocking
Already covered in detail, but worth including:
pip install pytest-mock
def test_with_mocker(mocker):
    mock_api = mocker.patch("myapp.api.fetch")
    mock_api.return_value = {"data": "value"}
    result = process()
    mock_api.assert_called_once()
pytest-asyncio: Test Async Code
Test coroutines and async functions:
pip install pytest-asyncio
import pytest

@pytest.mark.asyncio
async def test_async_function():
    result = await fetch_data()
    assert result["status"] == "ok"

@pytest.mark.asyncio
async def test_async_context_manager():
    async with DatabaseConnection() as conn:
        assert await conn.is_connected()
Configure in pyproject.toml:
[tool.pytest.ini_options]
asyncio_mode = "auto"  # Automatically mark async tests
pytest-httpx / pytest-responses: Mock HTTP Requests
For httpx:
pip install pytest-httpx
def test_api_call(httpx_mock):
    httpx_mock.add_response(
        url="https://api.example.com/users",
        json={"users": [{"id": 1}]}
    )
    client = MyAPIClient()
    users = client.get_users()
    assert len(users) == 1
For requests:
pip install responses
import responses

@responses.activate
def test_requests_call():
    responses.add(
        responses.GET,
        "https://api.example.com/users",
        json={"users": []},
        status=200
    )
    result = fetch_users()
    assert result == []
pytest-freezegun: Control Time
Test time-dependent code:
pip install pytest-freezegun
from datetime import datetime

@pytest.mark.freeze_time("2026-03-22 12:00:00")
def test_frozen_time():
    assert datetime.now().hour == 12

def test_with_fixture(freezer):
    freezer.move_to("2026-01-01")
    assert datetime.now().year == 2026
    freezer.tick(60)  # Move forward 60 seconds
pytest-django: Django Testing
If you use Django:
pip install pytest-django
@pytest.mark.django_db
def test_create_user():
    User.objects.create(username="testuser")
    assert User.objects.count() == 1

def test_client(client):
    response = client.get("/api/health/")
    assert response.status_code == 200
pytest-snapshot: Snapshot Testing
Compare outputs against saved snapshots:
pip install pytest-snapshot
import json

def test_api_response(snapshot):
    response = generate_complex_response()
    # assert_match expects str or bytes, so serialize structured data first
    snapshot.assert_match(json.dumps(response, indent=2), "expected_response.json")
Update snapshots with pytest --snapshot-update.
pytest-benchmark: Performance Testing
Measure and compare performance:
pip install pytest-benchmark
def test_sort_performance(benchmark):
    data = list(range(10000, 0, -1))
    result = benchmark(sorted, data)
    assert result[0] == 1
Output shows statistics:
Name             Min      Max      Mean     StdDev   Median
test_sort_perf   1.2ms    1.5ms    1.3ms    0.08ms   1.3ms
My Standard Setup
Here's what I install on every project:
pip install pytest pytest-cov pytest-xdist pytest-timeout pytest-randomly pytest-mock
pyproject.toml:
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = """
-v
--tb=short
--strict-markers
-n auto
--timeout=60
--cov=myapp
--cov-report=term-missing
"""
[tool.coverage.run]
branch = true
source = ["myapp"]
This gives you:
- Parallel execution across all CPUs
- 60-second timeout on hung tests
- Coverage reporting with missing lines
- Randomized test order
- Strict marker checking
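One caveat: those addopts apply to every run, which gets in the way when debugging a single failing test. Each plugin lets you switch its behavior off from the command line (the test path below is illustrative):

```shell
# Serial, unrandomized, no coverage, no timeout, drop into pdb on failure
pytest tests/test_api.py::test_create_user \
    -n 0 \
    -p no:randomly \
    --no-cov \
    --timeout=0 \
    --pdb
```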
Install the plugins that solve your problems. Skip the ones that don't. A fast, reliable test suite beats a feature-packed slow one every time.