As your test suite grows, you need ways to categorize and selectively run tests. Markers are pytest's answer: labels you attach to tests that control how and when they run.
Built-in Markers
skip: Don't Run This Test
Skip tests that aren't applicable:
import pytest
import sys

@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    assert future_function() == "expected"

@pytest.mark.skipif(
    sys.platform == "win32",
    reason="Unix-only functionality"
)
def test_unix_permissions():
    assert check_file_permissions("/etc/passwd")

@pytest.mark.skipif(
    sys.version_info < (3, 10),
    reason="Requires Python 3.10+ match statement"
)
def test_pattern_matching():
    assert uses_match_statement()

The output shows why tests were skipped:
test_features.py::test_future_feature SKIPPED (Feature not implemented yet)
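Skips don't have to be declared up front: pytest.skip() bails out of a test at run time, which helps when the condition is only knowable mid-test. A minimal sketch, where docker_available() and run_in_container() are made-up stand-ins for real helpers:

```python
import pytest

def docker_available() -> bool:
    # Hypothetical helper standing in for a real environment probe
    return False

def test_container_roundtrip():
    if not docker_available():
        pytest.skip("Docker daemon not available")
    # Never reached when skipped; run_in_container() is illustrative only
    assert run_in_container("echo hi") == "hi"
```

The test reports as SKIPPED with the given reason, just like the decorator form.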
xfail: Expected Failures
Mark tests that are expected to fail, which is useful for known bugs or incomplete features:
@pytest.mark.xfail(reason="Bug #1234 - division by zero not handled")
def test_division_edge_case():
    assert divide(1, 0) == float("inf")

@pytest.mark.xfail(raises=IndexError)
def test_known_index_bug():
    get_item_at([], 0)  # Should raise IndexError

Output:
XFAIL: Test failed as expected ✓
XPASS: Test unexpectedly passed (might indicate a fix!)
Use strict=True to fail if the test passes:
@pytest.mark.xfail(strict=True, reason="Remove when bug is fixed")
def test_will_fail_build_if_passes():
    assert broken_function() == "works"

Custom Markers
Define your own markers to categorize tests:
# pytest.ini or pyproject.toml
# [tool.pytest.ini_options]
# markers = [
#     "slow: marks tests as slow",
#     "integration: marks tests as integration tests",
#     "smoke: quick sanity check tests",
# ]

@pytest.mark.slow
def test_large_dataset_processing():
    process_million_records()

@pytest.mark.integration
def test_database_connection():
    db = connect_to_real_database()
    assert db.is_connected()

@pytest.mark.smoke
def test_app_starts():
    app = create_app()
    assert app is not None

Run tests by marker:
# Only slow tests
pytest -m slow
# Everything except slow tests
pytest -m "not slow"
# Smoke tests only (quick CI check)
pytest -m smoke
# Integration tests that aren't slow
pytest -m "integration and not slow"

Registering Markers
Always register custom markers to avoid typos. In pyproject.toml:
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: integration tests requiring external services",
    "smoke: quick sanity checks",
    "unit: fast unit tests",
]

Run pytest --markers to see all available markers.
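Registration by itself only documents markers; an unregistered marker still runs and merely emits a warning. To make typos fail loudly, pytest's --strict-markers option turns unknown markers into collection errors. A sketch of the pyproject.toml addition:

```toml
[tool.pytest.ini_options]
# Unknown markers become collection errors instead of warnings
addopts = "--strict-markers"
markers = [
    "slow: marks tests as slow",
]
```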
Applying Markers to Classes and Modules
Mark entire test classes:
@pytest.mark.integration
class TestDatabaseOperations:
    def test_insert(self):
        pass

    def test_update(self):
        pass

    def test_delete(self):
        pass

Mark entire modules with pytestmark:
# test_integration_suite.py
import pytest

pytestmark = pytest.mark.integration

def test_api_endpoint():
    pass

def test_database_query():
    pass

Or apply multiple markers:
pytestmark = [pytest.mark.integration, pytest.mark.slow]

Conditional Markers
Apply markers based on conditions:
import os
import pytest

# Skip all tests in module if no database URL configured
pytestmark = pytest.mark.skipif(
    not os.environ.get("DATABASE_URL"),
    reason="DATABASE_URL not set"
)

def test_database_stuff():
    pass

Combining Markers
Tests can have multiple markers:
@pytest.mark.slow
@pytest.mark.integration
@pytest.mark.timeout(30)  # Requires the pytest-timeout plugin
def test_full_sync():
    sync_all_data()

Markers with Fixtures
Use markers to pass data to fixtures:
@pytest.fixture
def database(request):
    marker = request.node.get_closest_marker("database")
    db_name = marker.args[0] if marker else "default"
    return connect_to_database(db_name)

@pytest.mark.database("test_db")
def test_with_specific_db(database):
    assert database.name == "test_db"

Practical Marker Strategies
CI Pipeline Markers
@pytest.mark.ci_required
def test_must_pass_in_ci():
    """This test blocks deployments if it fails."""
    pass

@pytest.mark.nightly
def test_expensive_computation():
    """Run in nightly builds, not every PR."""
    pass

# PR checks: fast tests only
pytest -m "not slow and not nightly"

# Nightly builds: everything
pytest

# Pre-deploy: critical tests
pytest -m ci_required

Feature Flags
import os

FEATURE_FLAGS = {
    "new_checkout": os.environ.get("ENABLE_NEW_CHECKOUT", "false") == "true",
}

@pytest.mark.skipif(
    not FEATURE_FLAGS["new_checkout"],
    reason="New checkout feature not enabled"
)
def test_new_checkout_flow():
    pass

Flaky Tests
Mark unreliable tests for special handling:
@pytest.mark.flaky(reruns=3)  # Requires pytest-rerunfailures
def test_race_condition():
    pass

@pytest.mark.flaky
@pytest.mark.timeout(10)  # Requires pytest-timeout
def test_network_dependent():
    fetch_external_resource()

Best Practices
- Register all markers: Prevents typos like @pytest.mark.integation
- Use markers consistently: Establish team conventions
- Don't over-categorize: A few useful markers beat dozens of unused ones
- Document marker meaning: Include descriptions in marker registration
- Use markers for CI optimization: Fast tests for PRs, full suite for deploys
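One reason registration catches typos so reliably: a marker is nothing magical, just a small Mark record of a name plus any args and kwargs, stored on the test function and surfaced to fixtures via get_closest_marker(). A quick sketch (the "database" marker name here is made up for illustration):

```python
import pytest

# Any attribute access on pytest.mark creates a marker; the decorator
# records the name and the arguments you pass
@pytest.mark.database("test_db", readonly=True)
def test_with_marker():
    pass

# The Mark object lives on the function's pytestmark list; at run time,
# request.node.get_closest_marker("database") returns the same record
mark = test_with_marker.pytestmark[0]
```

Inspecting mark shows name "database", args ("test_db",), and kwargs {"readonly": True}, which is exactly the data the fixture example above reads.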
Markers turn a monolithic test suite into a flexible tool. Run quick checks during development, full integration tests before deploying, and slow tests overnight. The same tests, different contexts.