Pytest Framework
Write cleaner, more powerful tests with pytest — plain assert, parametrize for many cases, and fixtures for shared setup.
"pytest's magic: just write functions starting with test_, use plain assert, and run pytest. No classes, no self, no special assert methods. Less code, clearer intent."
— Shurai

pytest vs unittest — Side by Side
The same test written both ways. See how much cleaner pytest is:
# unittest version
import unittest
from math_utils import add

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
# pytest version: the same test
from math_utils import add

def test_add():
    assert add(2, 3) == 5
pip install pytest
Writing pytest Tests
import pytest
from math_utils import add, divide, is_even

# Plain functions — no class needed
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, -1) == -2

def test_divide_normal():
    assert divide(10, 2) == 5.0

def test_divide_by_zero():
    with pytest.raises(ValueError, match="zero"):
        divide(5, 0)
    # match="zero" checks the error message contains "zero"

def test_is_even():
    assert is_even(4)
    assert not is_even(7)
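These tests import from a math_utils module that isn't shown on this page. A minimal sketch of an implementation that would satisfy all of them (hypothetical — your real module may differ):

```python
# math_utils.py — a hypothetical implementation consistent with the tests above

def add(a, b):
    """Return the sum of a and b."""
    return a + b

def divide(a, b):
    """Divide a by b, raising ValueError on division by zero."""
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

def is_even(n):
    """Return True if n is an even integer."""
    return n % 2 == 0
```

Note that divide raises ValueError with "zero" in the message, which is exactly what the pytest.raises(..., match="zero") check above looks for.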
pytest # discover + run all test_*.py files
pytest -v # verbose — shows each test name
pytest test_math.py # one specific file
pytest test_math.py::test_add # one specific test
pytest -x # stop at the first failure
pytest's Superpower — Better Error Messages
When a test fails, pytest shows you the exact values, not just "AssertionError":
FAILED test_math.py::test_add_positive
_____ test_add_positive ______
    def test_add_positive():
>       assert add(2, 3) == 6   # expected 6 but got 5
E       AssertionError: assert 5 == 6
E        +  where 5 = add(2, 3)
1 failed in 0.05s
Parametrize — One Test, Many Cases
Instead of writing a separate test function for every input, use @pytest.mark.parametrize to run the same test with many values automatically:
import pytest
from math_utils import add

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),        # positive numbers
    (0, 0, 0),        # zeros
    (-5, 5, 0),       # negative + positive
    (100, 200, 300),  # large numbers
])
def test_add_many(a, b, expected):
    assert add(a, b) == expected

# pytest runs this function 4 times, once per row
# test_add_many[1-2-3] PASSED
# test_add_many[0-0-0] PASSED
# test_add_many[-5-5-0] PASSED
# test_add_many[100-200-300] PASSED
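Parametrize decorators can also be stacked, in which case pytest runs the test for every combination of values. A sketch using a hypothetical multiply helper defined inline (not part of math_utils):

```python
import pytest

def multiply(a, b):
    # hypothetical helper, defined inline so the example is self-contained
    return a * b

# Stacked decorators produce the cross-product: 2 values of a
# times 2 values of b = 4 generated tests
@pytest.mark.parametrize("a", [2, 3])
@pytest.mark.parametrize("b", [10, 100])
def test_multiply_grid(a, b):
    assert multiply(a, b) == a * b
```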
Fixtures — Reusable Setup
A fixture is setup code that multiple tests share. Decorated with @pytest.fixture, it runs before each test that requests it:
import pytest

@pytest.fixture
def sample_scores():
    """Returns a fresh list of scores for each test."""
    return [85, 92, 78, 96, 65]

# Each test receives sample_scores by listing it as a parameter
def test_average(sample_scores):
    assert sum(sample_scores) / len(sample_scores) == 83.2

def test_max_score(sample_scores):
    assert max(sample_scores) == 96

def test_passing(sample_scores):
    # how many scored 75 or above?
    assert sum(1 for s in sample_scores if s >= 75) == 4
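Because the fixture function runs again for every test that requests it, changes made in one test cannot leak into another. A small sketch to make that concrete (hypothetical tests; the fixture is repeated so the snippet is self-contained):

```python
import pytest

@pytest.fixture
def sample_scores():
    # same fixture as above, repeated for a self-contained snippet
    return [85, 92, 78, 96, 65]

def test_can_mutate(sample_scores):
    sample_scores.append(100)       # mutate the list freely
    assert len(sample_scores) == 6

def test_still_fresh(sample_scores):
    # runs with a brand-new list; the append above never happened here
    assert len(sample_scores) == 5
```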
Which Should You Choose?

Choose pytest when:
- You want parametrize
- Better failure messages matter
- You work in most professional codebases

Choose unittest when:
- You have an existing unittest codebase
- You must stick to the standard library only
- You work in embedded/restricted environments
"pytest can run unittest tests too — so you never have to choose. Start with pytest on new projects. If you inherit unittest tests, just run pytest against them and they'll work."
— Shurai

🧠 Quiz — Q1
What is the minimum requirement for pytest to discover and run a test function?
🧠 Quiz — Q2
What is the big advantage of pytest's error messages over unittest?
🧠 Quiz — Q3
What does @pytest.mark.parametrize("a, b, result", [(1,2,3),(0,0,0)]) do?
🧠 Quiz — Q4
What is a pytest fixture?