Testing Fundamentals in Python for DevOps 🧪
Testing is a critical component in DevOps practices, ensuring code quality, reliability, and maintainability. This guide covers fundamental testing concepts and practices specifically tailored for DevOps engineers working with Python.
Why Testing Matters in DevOps
Testing is essential in DevOps for several reasons:
Continuous Integration/Continuous Deployment (CI/CD)
- Early Bug Detection: Catch issues before they reach production
- Automated Quality Gates: Prevent broken code from being deployed
- Confidence in Deployments: Deploy with confidence knowing code is tested
Infrastructure as Code (IaC)
- Configuration Validation: Test infrastructure configurations before deployment
- Environment Consistency: Ensure environments work as expected
- Rollback Safety: Verify rollback procedures work correctly
Automation Scripts
- Script Reliability: Ensure automation scripts work under various conditions
- Error Handling: Test error scenarios and edge cases
- Maintenance: Make scripts easier to maintain and modify
Types of Testing
1. Unit Testing
- Tests individual functions, methods, or classes in isolation
- Fast execution and immediate feedback
- Foundation for all other testing types
2. Integration Testing
- Tests interaction between different components
- Validates API calls, database connections, external services
- Ensures components work together correctly
3. Functional Testing
- Tests complete features or user scenarios
- Validates end-to-end workflows
- Ensures system meets business requirements
4. System Testing
- Tests complete system in production-like environment
- Validates performance, security, and scalability
- Final validation before deployment
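To make the distinction concrete, here is a minimal sketch (using pytest, introduced below) contrasting a unit test with an integration test; the helper function and the staging URL are illustrative, not from a real project.

```python
import pytest
import requests

def parse_status(payload):
    """Tiny pure helper -- an ideal target for a unit test."""
    return payload.get("status") == "ok"

def test_parse_status_unit():
    # Unit test: no network or external services, runs in milliseconds
    assert parse_status({"status": "ok"}) is True
    assert parse_status({}) is False

@pytest.mark.integration
def test_health_endpoint_integration():
    # Integration test: hits a real (staging) service, so it is slower
    # and typically selected separately, e.g. `pytest -m integration`
    response = requests.get("https://staging.example.com/health", timeout=5)
    assert response.status_code == 200
    assert parse_status(response.json())
```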
Python Testing Frameworks
unittest (Built-in)
- Ships with Python standard library
- Object-oriented approach
- Good for simple testing scenarios
```python
import unittest

class TestExample(unittest.TestCase):
    def test_addition(self):
        result = 2 + 2
        self.assertEqual(result, 4)

if __name__ == '__main__':
    unittest.main()
```
pytest (Recommended)
- Third-party framework with powerful features
- Simple, readable syntax
- Extensive plugin ecosystem
- Better error reporting
```python
def test_addition():
    result = 2 + 2
    assert result == 4
```
Unit Testing with unittest
Basic Test Structure
```python
import unittest
from mymodule import Calculator

class TestCalculator(unittest.TestCase):
    def setUp(self):
        """Set up test fixtures before each test method."""
        self.calc = Calculator()

    def tearDown(self):
        """Clean up after each test method."""
        pass

    def test_addition(self):
        """Test addition functionality."""
        result = self.calc.add(2, 3)
        self.assertEqual(result, 5)

    def test_division_by_zero(self):
        """Test that division by zero raises an exception."""
        with self.assertRaises(ZeroDivisionError):
            self.calc.divide(10, 0)

if __name__ == '__main__':
    unittest.main()
```
Common Assertions
```python
# Equality assertions
self.assertEqual(a, b)        # a == b
self.assertNotEqual(a, b)     # a != b

# Truth assertions
self.assertTrue(x)            # bool(x) is True
self.assertFalse(x)           # bool(x) is False

# Membership assertions
self.assertIn(a, b)           # a in b
self.assertNotIn(a, b)        # a not in b

# Exception assertions
self.assertRaises(Exception, func, *args)
with self.assertRaises(Exception):
    ...  # code that should raise the exception

# Numeric assertions
self.assertGreater(a, b)      # a > b
self.assertLess(a, b)         # a < b
self.assertAlmostEqual(a, b, places=2)  # for floating point
```
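For comparison, pytest (covered in the next section) expresses the same checks with plain `assert` statements plus two helpers, `pytest.raises` for exceptions and `pytest.approx` for floating point:

```python
import pytest

# Plain assert replaces the assert* methods
assert 2 + 2 == 4          # assertEqual
assert "a" in "abc"        # assertIn
assert 3 > 1               # assertGreater

# pytest.raises replaces assertRaises
with pytest.raises(ZeroDivisionError):
    1 / 0

# pytest.approx replaces assertAlmostEqual for floating point
assert 0.1 + 0.2 == pytest.approx(0.3)
```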
Modern Testing with pytest
Installation and Basic Usage
```bash
pip install pytest pytest-cov pytest-mock
```
Simple Test Functions
```python
# test_math_operations.py
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

def test_add_strings():
    assert add("hello", " world") == "hello world"
```
Test Classes with pytest
```python
from mymodule import Calculator

class TestCalculator:
    def setup_method(self):
        """Set up before each test method."""
        self.calc = Calculator()

    def test_multiplication(self):
        assert self.calc.multiply(3, 4) == 12

    def test_negative_numbers(self):
        assert self.calc.multiply(-2, 3) == -6
```
Parametrized Testing
```python
import pytest

@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
    (10, -5, 5),
])
def test_add_parametrized(a, b, expected):
    assert add(a, b) == expected
```
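Parametrization also works well for error cases. A short sketch, assuming a simple `divide` helper defined in the same file, pairs `pytest.raises` with the expected exception type:

```python
import pytest

def divide(a, b):
    return a / b

@pytest.mark.parametrize("a,b,expected_error", [
    (1, 0, ZeroDivisionError),
    ("x", 2, TypeError),
])
def test_divide_errors(a, b, expected_error):
    with pytest.raises(expected_error):
        divide(a, b)
```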
Fixtures
```python
import pytest

@pytest.fixture
def sample_data():
    """Provide test data for multiple tests."""
    return {
        'users': ['alice', 'bob', 'charlie'],
        'config': {'debug': True, 'timeout': 30}
    }

@pytest.fixture
def temp_file():
    """Create a temporary file for testing."""
    import tempfile
    import os
    fd, path = tempfile.mkstemp()
    yield path
    os.close(fd)
    os.unlink(path)

def test_user_processing(sample_data):
    users = sample_data['users']
    assert len(users) == 3
    assert 'alice' in users

def test_file_operations(temp_file):
    with open(temp_file, 'w') as f:
        f.write("test content")
    with open(temp_file, 'r') as f:
        content = f.read()
    assert content == "test content"
```
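By default a fixture runs once per test. The `scope` argument lets expensive setup be shared across tests; the sketch below uses a stand-in dictionary instead of a real connection to keep it self-contained:

```python
import pytest

@pytest.fixture(scope="session")
def db_connection():
    """Created once for the whole test session, torn down at the end."""
    conn = {"connected": True}   # stand-in for a real connection object
    yield conn
    conn["connected"] = False    # teardown runs after the last test

@pytest.fixture(scope="module")
def api_token():
    """Created once per test module."""
    return "dummy-token"

def test_uses_shared_resources(db_connection, api_token):
    assert db_connection["connected"]
    assert api_token == "dummy-token"
```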
Test Structure and Organization
Directory Structure
```
project/
├── src/
│   ├── __init__.py
│   ├── calculator.py
│   ├── file_utils.py
│   └── api_client.py
├── tests/
│   ├── __init__.py
│   ├── test_calculator.py
│   ├── test_file_utils.py
│   ├── test_api_client.py
│   └── conftest.py        # Shared fixtures
├── requirements.txt
├── pytest.ini             # pytest configuration
└── README.md
```
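conftest.py holds fixtures that pytest discovers automatically and makes available to every test in the directory, with no import needed. A minimal sketch with an illustrative fixture name:

```python
# tests/conftest.py
import pytest

@pytest.fixture
def app_config():
    """Shared configuration available to all tests under tests/."""
    return {"env": "test", "debug": True, "timeout": 5}
```

Any test file under tests/ can then declare `app_config` as an argument and pytest will inject it.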
pytest Configuration (pytest.ini)
```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    --verbose
    --tb=short
    --cov=src
    --cov-report=html
    --cov-report=term-missing
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
```
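The markers registered above are attached to tests with `@pytest.mark.<name>` decorators, which is what makes selections such as `pytest -m unit` (shown in the next section) possible:

```python
import pytest

@pytest.mark.unit
def test_parsing_is_fast():
    assert int("42") == 42

@pytest.mark.integration
@pytest.mark.slow
def test_full_deployment_flow():
    ...  # would exercise a real environment; excluded by `pytest -m unit`
```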
Running Tests
```bash
# Run all tests
pytest

# Run a specific test file
pytest tests/test_calculator.py

# Run with coverage
pytest --cov=src

# Run only unit tests
pytest -m unit

# Run tests in parallel
pip install pytest-xdist
pytest -n 4  # run with 4 workers
```
Best Practices
1. Test Naming Conventions
- Use descriptive names that explain what is being tested
- Follow the pattern: test_[unit_being_tested]_[scenario]_[expected_behavior]
```python
# Good examples
def test_user_authentication_valid_credentials_returns_token():
    pass

def test_file_upload_missing_file_raises_exception():
    pass

def test_api_client_timeout_returns_none():
    pass
```
2. Arrange-Act-Assert (AAA) Pattern
```python
def test_user_creation():
    # Arrange
    user_data = {"name": "John", "email": "john@example.com"}
    user_service = UserService()

    # Act
    user = user_service.create_user(user_data)

    # Assert
    assert user.name == "John"
    assert user.email == "john@example.com"
    assert user.id is not None
```
3. Test Independence
- Each test should be independent and not rely on other tests
- Use fixtures to set up test data
- Clean up after tests
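pytest's built-in `tmp_path` fixture illustrates all three points: each test receives its own fresh temporary directory, so no state leaks between tests and pytest takes care of removing old directories:

```python
def test_writes_report(tmp_path):
    # tmp_path is a pathlib.Path to a unique, per-test temporary directory
    report = tmp_path / "report.txt"
    report.write_text("deployment ok")
    assert report.read_text() == "deployment ok"

def test_starts_from_clean_state(tmp_path):
    # A different test gets a different directory, so tests never share files
    assert list(tmp_path.iterdir()) == []
```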
4. Test One Thing at a Time
```python
# Bad - testing multiple things in one test
def test_user_operations():
    user = create_user("John")
    assert user.name == "John"
    updated_user = update_user(user.id, {"name": "Jane"})
    assert updated_user.name == "Jane"
    delete_user(user.id)
    assert get_user(user.id) is None

# Good - separate tests
def test_create_user():
    user = create_user("John")
    assert user.name == "John"

def test_update_user():
    user = create_user("John")
    updated_user = update_user(user.id, {"name": "Jane"})
    assert updated_user.name == "Jane"

def test_delete_user():
    user = create_user("John")
    delete_user(user.id)
    assert get_user(user.id) is None
```
Practical Examples
Example 1: Testing File Operations
```python
# file_utils.py
import os
import json

def read_config_file(filepath):
    """Read a JSON configuration file."""
    if not os.path.exists(filepath):
        raise FileNotFoundError(f"Config file not found: {filepath}")
    with open(filepath, 'r') as f:
        return json.load(f)

def write_config_file(filepath, config):
    """Write configuration to a JSON file."""
    os.makedirs(os.path.dirname(filepath), exist_ok=True)
    with open(filepath, 'w') as f:
        json.dump(config, f, indent=2)
```
```python
# test_file_utils.py
import pytest
import json
import tempfile
import os
from file_utils import read_config_file, write_config_file

@pytest.fixture
def temp_config_file():
    """Create a temporary config file."""
    fd, path = tempfile.mkstemp(suffix='.json')
    config = {"debug": True, "timeout": 30}
    with os.fdopen(fd, 'w') as f:
        json.dump(config, f)
    yield path, config
    os.unlink(path)

def test_read_config_file_success(temp_config_file):
    filepath, expected_config = temp_config_file
    config = read_config_file(filepath)
    assert config == expected_config

def test_read_config_file_not_found():
    with pytest.raises(FileNotFoundError):
        read_config_file("/nonexistent/config.json")

def test_write_config_file():
    with tempfile.TemporaryDirectory() as temp_dir:
        filepath = os.path.join(temp_dir, "test_config.json")
        config = {"env": "test", "debug": False}
        write_config_file(filepath, config)
        assert os.path.exists(filepath)
        with open(filepath, 'r') as f:
            saved_config = json.load(f)
        assert saved_config == config
```
Example 2: Testing API Client
```python
# api_client.py
import requests
import logging

logger = logging.getLogger(__name__)

class APIClient:
    def __init__(self, base_url, timeout=30):
        self.base_url = base_url.rstrip('/')
        self.timeout = timeout
        self.session = requests.Session()

    def get_user(self, user_id):
        """Get a user by ID."""
        url = f"{self.base_url}/users/{user_id}"
        try:
            response = self.session.get(url, timeout=self.timeout)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            logger.error(f"Failed to get user {user_id}: {e}")
            return None

    def create_user(self, user_data):
        """Create a new user."""
        url = f"{self.base_url}/users"
        try:
            response = self.session.post(url, json=user_data, timeout=self.timeout)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            logger.error(f"Failed to create user: {e}")
            return None
```
```python
# test_api_client.py
import pytest
import requests
from unittest.mock import Mock, patch
from api_client import APIClient

@pytest.fixture
def api_client():
    return APIClient("https://api.example.com")

@pytest.fixture
def mock_response():
    """Create a mock response object."""
    mock = Mock()
    mock.json.return_value = {"id": 1, "name": "John", "email": "john@example.com"}
    mock.raise_for_status.return_value = None
    return mock

def test_get_user_success(api_client, mock_response):
    with patch.object(api_client.session, 'get', return_value=mock_response):
        user = api_client.get_user(1)
        assert user["id"] == 1
        assert user["name"] == "John"
        api_client.session.get.assert_called_once_with(
            "https://api.example.com/users/1",
            timeout=30
        )

def test_get_user_not_found(api_client):
    mock_response = Mock()
    mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError("404")
    with patch.object(api_client.session, 'get', return_value=mock_response):
        user = api_client.get_user(999)
        assert user is None

def test_create_user_success(api_client, mock_response):
    user_data = {"name": "Jane", "email": "jane@example.com"}
    with patch.object(api_client.session, 'post', return_value=mock_response):
        user = api_client.create_user(user_data)
        assert user["name"] == "John"  # the shared mock always returns John
        api_client.session.post.assert_called_once_with(
            "https://api.example.com/users",
            json=user_data,
            timeout=30
        )
```
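Because pytest-mock was installed earlier, the same patching can also be written with its `mocker` fixture, which undoes patches automatically after each test. A sketch of the first test rewritten that way, reusing the fixtures above in the same file:

```python
def test_get_user_success_with_mocker(api_client, mock_response, mocker):
    # mocker.patch.object behaves like unittest.mock.patch.object,
    # but is cleaned up automatically when the test finishes
    mock_get = mocker.patch.object(api_client.session, "get", return_value=mock_response)
    user = api_client.get_user(1)
    assert user["name"] == "John"
    mock_get.assert_called_once_with("https://api.example.com/users/1", timeout=30)
```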
Next Steps
After mastering these fundamentals, you should explore:
- Advanced Testing - Integration testing, mocking strategies, and API testing
- CI/CD Testing - Automated testing in pipelines and continuous integration
- Code Coverage Analysis - Measuring and improving test coverage
- Performance Testing - Load testing and benchmarking
Remember: Good tests are investments in your code's future. They save time, prevent bugs, and make refactoring safer.