CI/CD Testing Integration
Integrating testing into Continuous Integration and Continuous Deployment (CI/CD) pipelines is essential for maintaining code quality and ensuring reliable deployments in DevOps workflows.
CI/CD Testing Overview
The Testing Pyramid in CI/CD
```
        /\
       /  \
      /E2E \         <- End-to-End Tests (Few, Slow, Expensive)
     /______\
    /        \
   /Integration\     <- Integration Tests (Some, Medium Speed)
  /____________\
 /              \
/   Unit Tests   \   <- Unit Tests (Many, Fast, Cheap)
/________________\
```
Testing Phases in CI/CD Pipeline
- Pre-commit: Local testing, linting, formatting (a minimal hook sketch follows this list)
- Commit: Unit tests, static analysis
- Build: Integration tests, security scanning
- Deploy: End-to-end tests, smoke tests
- Post-deploy: Monitoring, health checks
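
The pre-commit phase can be enforced locally with a Git hook. Below is a minimal sketch of such a hook written in Python (installed as `.git/hooks/pre-commit` and marked executable); the specific commands and paths are assumptions that simply mirror the pipeline stages used later in this guide:

```python
#!/usr/bin/env python3
"""Hypothetical Git pre-commit hook: run the fast checks before every commit."""
import subprocess
import sys

# Fast, local versions of the pipeline's lint and unit-test stages.
CHECKS = [
    ["flake8", "."],                  # style and syntax errors
    ["black", "--check", "."],        # formatting
    ["pytest", "tests/unit/", "-q"],  # fast unit tests only
]

for cmd in CHECKS:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Abort the commit as soon as one check fails (fail fast).
        sys.exit(f"pre-commit check failed: {' '.join(cmd)}")
```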
GitHub Actions Integration
Basic Testing Workflow
Create .github/workflows/test.yml:
```yaml
name: Test Suite

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11"]

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-test.txt

      - name: Lint with flake8
        run: |
          # Stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all remaining errors as warnings
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

      - name: Test with pytest
        run: |
          pytest tests/ --cov=src --cov-report=xml --cov-report=html

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella
```
Advanced Testing Workflow with Services
```yaml
name: Advanced Test Suite

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
      redis:
        image: redis:6
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 6379:6379

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-test.txt

      - name: Wait for services
        run: |
          until pg_isready -h localhost -p 5432; do sleep 1; done
          until redis-cli -h localhost -p 6379 ping; do sleep 1; done

      - name: Run database migrations
        run: |
          python manage.py migrate
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379/0

      - name: Run unit tests
        run: |
          pytest tests/unit/ -v --cov=src --cov-branch
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379/0

      - name: Run integration tests
        run: |
          pytest tests/integration/ -v --cov=src --cov-append
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379/0

      - name: Generate coverage report
        run: |
          coverage html
          coverage xml

      - name: Upload coverage reports
        uses: actions/upload-artifact@v3
        with:
          name: coverage-report
          path: htmlcov/
```
Multi-Stage Pipeline with Quality Gates
```yaml
name: Multi-Stage Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    name: Code Quality Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install linting tools
        run: |
          pip install flake8 black isort mypy
      - name: Check formatting with black
        run: black --check .
      - name: Check imports with isort
        run: isort --check-only .
      - name: Lint with flake8
        run: flake8 .
      - name: Type check with mypy
        run: mypy src/

  unit-tests:
    name: Unit Tests
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Run unit tests
        run: |
          pytest tests/unit/ --cov=src --cov-report=xml --cov-fail-under=80
      - name: Upload unit test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: unit-test-results
          path: coverage.xml

  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    needs: unit-tests
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt
      - name: Run integration tests
        run: |
          pytest tests/integration/ -v
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install security scanners
        run: pip install safety bandit
      - name: Check for vulnerabilities
        run: |
          safety check
          bandit -r src/

  build-and-test:
    name: Build and Test Container
    runs-on: ubuntu-latest
    needs: [unit-tests, integration-tests, security-scan]
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ github.sha }} .
      - name: Run container tests
        run: |
          # Start the container
          docker run -d --name test-container -p 8080:8080 myapp:${{ github.sha }}
          # Wait for the container to be ready
          sleep 30
          # Run a health check against it
          curl -f http://localhost:8080/health || exit 1
          # Stop the container
          docker stop test-container
```
Test Automation Strategies
Test Categories and Execution Strategy
```ini
# pytest.ini
[pytest]
testpaths = tests
markers =
    unit: Unit tests (fast, isolated)
    integration: Integration tests (medium speed, external dependencies)
    e2e: End-to-end tests (slow, full system)
    smoke: Smoke tests (quick validation after deployment)
    security: Security tests
    performance: Performance tests
addopts =
    --strict-markers
    --tb=short
    --cov=src
    --cov-branch
    --cov-report=html
    --cov-report=xml
    --cov-report=term-missing
```
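
With the markers registered, each pipeline stage can select only its tier, for example `pytest -m unit` in the commit stage and `pytest -m smoke` after deployment. A small self-contained example of a test marked for the unit tier (the function under test is a toy stand-in for real application code):

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for real application code."""
    return price * (1 - percent / 100)

@pytest.mark.unit
def test_discount_is_applied():
    # Fast and isolated, so the commit stage selects it with `pytest -m unit`
    assert apply_discount(price=100.0, percent=10) == 90.0
```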
```python
# conftest.py - shared test configuration
import os
from unittest.mock import Mock

import pytest

def pytest_configure(config):
    """Configure pytest with custom markers and settings."""
    config.addinivalue_line("markers", "unit: Unit tests")
    config.addinivalue_line("markers", "integration: Integration tests")
    config.addinivalue_line("markers", "e2e: End-to-end tests")
    config.addinivalue_line("markers", "smoke: Smoke tests")
    config.addinivalue_line("markers", "slow: Slow running tests")

@pytest.fixture(scope="session")
def test_environment():
    """Determine the test environment and configuration."""
    env = os.getenv("TEST_ENV", "local")
    return {
        "environment": env,
        "database_url": os.getenv("DATABASE_URL", "sqlite:///:memory:"),
        "redis_url": os.getenv("REDIS_URL", "redis://localhost:6379/0"),
        "api_base_url": os.getenv("API_BASE_URL", "http://localhost:8000"),
    }

@pytest.fixture
def mock_external_api():
    """Mock external API calls for unit tests."""
    mock_api = Mock()
    mock_api.get_data.return_value = {"status": "success", "data": []}
    return mock_api
```
```python
# tests/test_smoke.py - smoke tests for critical functionality
import pytest
import requests

@pytest.mark.smoke
def test_application_health_endpoint(test_environment):
    """Test that the application health endpoint responds correctly."""
    health_url = f"{test_environment['api_base_url']}/health"
    response = requests.get(health_url, timeout=10)
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"

@pytest.mark.smoke
def test_database_connection(test_environment):
    """Test that the database connection is working."""
    # Implementation depends on your database setup; see the sketch below
    pass

@pytest.mark.smoke
def test_critical_user_workflow():
    """Test the most critical user journey."""
    pass
```
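
One way the two placeholders above might be filled in, assuming SQLAlchemy for the database check and a hypothetical signup-and-login journey for the workflow test (the endpoints and field names are illustrative, not part of any real API):

```python
from sqlalchemy import create_engine, text

@pytest.mark.smoke
def test_database_connection(test_environment):
    """Minimal connectivity check: open a connection and run SELECT 1."""
    engine = create_engine(test_environment["database_url"])
    with engine.connect() as conn:
        assert conn.execute(text("SELECT 1")).scalar() == 1

@pytest.mark.smoke
def test_critical_user_workflow(test_environment):
    """Hypothetical journey: sign up, log in, fetch the dashboard."""
    base = test_environment["api_base_url"]
    creds = {"email": "smoke@test.com", "password": "not-a-real-secret"}
    assert requests.post(f"{base}/signup", json=creds, timeout=10).status_code in (200, 201)
    token = requests.post(f"{base}/login", json=creds, timeout=10).json()["token"]
    response = requests.get(f"{base}/dashboard",
                            headers={"Authorization": f"Bearer {token}"}, timeout=10)
    assert response.status_code == 200
```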
Conditional Test Execution
```python
# tests/test_conditional.py
import os
import sys

import pytest

# Skip tests based on environment
@pytest.mark.skipif(
    os.getenv("CI") != "true",
    reason="Integration tests only run in the CI environment",
)
def test_ci_only_functionality():
    """Test that only runs in the CI environment."""
    pass

# Skip tests based on Python version
@pytest.mark.skipif(
    sys.version_info < (3, 9),
    reason="Requires Python 3.9 or higher",
)
def test_modern_python_feature():
    """Test that requires newer Python features."""
    pass

# Skip tests based on available services
@pytest.mark.skipif(
    not os.getenv("DATABASE_URL"),
    reason="Database not available",
)
def test_database_functionality():
    """Test that requires a database connection."""
    pass

# Expected failure (for known issues)
@pytest.mark.xfail(reason="Known issue with external API")
def test_known_failing_feature():
    """Test that is expected to fail due to a known issue."""
    pass
```
Quality Gates
Coverage Requirements
```ini
# setup.cfg (in pyproject.toml the section is [tool.pytest.ini_options])
[tool:pytest]
addopts = --cov-fail-under=85

# Or pass it on the command line:
#   pytest --cov=src --cov-fail-under=85
```
```yaml
# In GitHub Actions
- name: Run tests with coverage requirement
  run: |
    pytest --cov=src --cov-fail-under=80 --cov-report=xml

- name: Coverage comment
  uses: py-cov-action/python-coverage-comment-action@v3
  with:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    MINIMUM_GREEN: 80
    MINIMUM_ORANGE: 70
```
Performance Quality Gates
```python
# tests/test_performance.py
import time

import pytest

# measure_time is a project-specific helper (a sketch is given below);
# make_api_request and execute_complex_query stand in for real app code.
from performance_monitor import measure_time

@pytest.mark.performance
def test_api_response_time():
    """Ensure API responses stay under an acceptable threshold."""
    with measure_time() as timer:
        response = make_api_request()
    assert timer.elapsed < 2.0  # 2-second threshold
    assert response.status_code == 200

@pytest.mark.performance
def test_database_query_performance():
    """Ensure database queries are optimized."""
    start_time = time.time()
    results = execute_complex_query()
    query_time = time.time() - start_time
    assert query_time < 0.5  # 500 ms threshold
    assert len(results) > 0
```
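
`performance_monitor` is not a published package; a minimal sketch of the `measure_time` context manager the tests above assume:

```python
# performance_monitor.py - minimal timing helper assumed by the tests above
import time
from contextlib import contextmanager

class _Timer:
    """Holds the elapsed wall-clock time once the block exits."""
    elapsed: float = 0.0

@contextmanager
def measure_time():
    timer = _Timer()
    start = time.perf_counter()
    try:
        yield timer
    finally:
        timer.elapsed = time.perf_counter() - start
```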
Security Quality Gates
```python
# tests/test_security.py
import pytest
import requests

# security_scanner is a project-specific wrapper (a sketch is given below);
# perform_user_login stands in for real application code.
from security_scanner import scan_for_vulnerabilities

@pytest.mark.security
def test_no_sensitive_data_in_logs():
    """Ensure no sensitive data is logged."""
    # Trigger logging
    perform_user_login("test@example.com", "password123")

    # Check the logs don't contain sensitive data
    with open("app.log", "r") as f:
        log_content = f.read()
    assert "password123" not in log_content
    assert "secret_key" not in log_content

@pytest.mark.security
def test_dependency_vulnerabilities():
    """Check for known vulnerabilities in dependencies."""
    vulnerabilities = scan_for_vulnerabilities()
    # Fail if high-severity vulnerabilities are found
    high_severity = [v for v in vulnerabilities if v.severity == "high"]
    assert len(high_severity) == 0, f"High severity vulnerabilities found: {high_severity}"

@pytest.mark.security
def test_api_rate_limiting():
    """Test that API rate limiting is enforced."""
    api_url = "http://localhost:8000/api/data"
    # Make rapid requests
    responses = [requests.get(api_url) for _ in range(20)]
    # At least some should be rate limited
    rate_limited = [r for r in responses if r.status_code == 429]
    assert len(rate_limited) > 0, "Rate limiting not working"
```
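
`security_scanner` is likewise not a published package; one hypothetical implementation is a thin adapter that parses a scanner-produced JSON report (the report path and field names below are assumptions about that report's schema):

```python
# security_scanner.py - hypothetical adapter over a scanner-produced JSON report
import json
from dataclasses import dataclass

@dataclass
class Vulnerability:
    package: str
    severity: str   # e.g. "low", "medium", "high"
    advisory: str

def scan_for_vulnerabilities(report_path: str = "vulnerability-report.json"):
    """Parse a JSON report (schema assumed) into Vulnerability records."""
    with open(report_path) as f:
        raw = json.load(f)
    return [
        Vulnerability(item["package"], item["severity"], item.get("advisory", ""))
        for item in raw
    ]
```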
Test Reporting
JUnit XML Reports
```bash
# Generate JUnit XML for CI/CD systems
pytest --junit-xml=test-results.xml
```
```yaml
# GitHub Actions - upload test results
- name: Run tests
  run: |
    pytest --junit-xml=test-results.xml

- name: Publish test results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    files: test-results.xml
```
HTML Reports
```bash
# Install the pytest-html plugin
pip install pytest-html

# Generate a self-contained HTML report
pytest --html=report.html --self-contained-html
```
Custom Test Reporting
```python
# tests/conftest.py
import json
from datetime import datetime

import pytest

class TestResultCollector:
    def __init__(self):
        self.results = []

    def add_result(self, test_name, outcome, duration, error=None):
        self.results.append({
            "test_name": test_name,
            "outcome": outcome,
            "duration": duration,
            "timestamp": datetime.now().isoformat(),
            "error": str(error) if error else None,
        })

    def save_results(self, filename):
        with open(filename, "w") as f:
            json.dump(self.results, f, indent=2)

# Module-level collector shared by the hooks below
_collector = TestResultCollector()

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Hook to collect the result of each test's call phase."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        error = call.excinfo.value if call.excinfo else None
        _collector.add_result(item.name, report.outcome, report.duration, error)

def pytest_sessionfinish(session, exitstatus):
    """Save the collected results at the end of the session."""
    _collector.save_results("custom-test-results.json")  # filename is arbitrary
```
Docker Testing
Testing Containerized Applications
```dockerfile
# Dockerfile.test - multi-stage build for testing
FROM python:3.11-slim AS base
WORKDIR /app

# Install dependencies
COPY requirements*.txt ./
RUN pip install -r requirements.txt
RUN pip install -r requirements-test.txt

# Copy source code
COPY src/ ./src/
COPY tests/ ./tests/

# Test stage
FROM base AS test
CMD ["pytest", "tests/", "--cov=src", "--cov-report=xml", "--junit-xml=test-results.xml"]

# Production stage
FROM base AS production
CMD ["python", "-m", "src.main"]
```
```bash
# Build and run the test stage in a container
docker build --target test -t myapp:test .
docker run --rm -v $(pwd)/reports:/app/reports myapp:test

# Test the production container
docker build --target production -t myapp:prod .
docker run -d --name test-container -p 8080:8080 myapp:prod

# Run health checks
curl -f http://localhost:8080/health

# Cleanup
docker stop test-container
docker rm test-container
```
Docker Compose Testing
```yaml
# docker-compose.test.yml
version: "3.8"

services:
  app:
    build:
      context: .
      target: test
    depends_on:
      - postgres
      - redis
    environment:
      - DATABASE_URL=postgresql://user:password@postgres:5432/testdb
      - REDIS_URL=redis://redis:6379/0
    volumes:
      - ./reports:/app/reports

  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    tmpfs:
      - /var/lib/postgresql/data

  redis:
    image: redis:6
    tmpfs:
      - /data
```
```bash
# Run tests with Docker Compose
docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
docker-compose -f docker-compose.test.yml down
```
Infrastructure Testing
Testing Infrastructure as Code
```python
# tests/test_infrastructure.py
import boto3
import pytest
import yaml
from moto import mock_ec2

@mock_ec2
def test_ec2_instance_creation():
    """Test EC2 instance creation with the proper configuration."""
    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Create an instance against moto's in-memory AWS
    response = ec2.run_instances(
        ImageId='ami-12345678',
        MinCount=1,
        MaxCount=1,
        InstanceType='t2.micro',
        SecurityGroupIds=['sg-12345678'],
        TagSpecifications=[
            {
                'ResourceType': 'instance',
                'Tags': [
                    {'Key': 'Environment', 'Value': 'test'},
                    {'Key': 'Application', 'Value': 'myapp'},
                ],
            }
        ],
    )
    instance_id = response['Instances'][0]['InstanceId']

    # Verify the instance configuration
    instances = ec2.describe_instances(InstanceIds=[instance_id])
    instance = instances['Reservations'][0]['Instances'][0]
    assert instance['InstanceType'] == 't2.micro'
    assert instance['State']['Name'] == 'running'

    # Verify the tags
    tags = {tag['Key']: tag['Value'] for tag in instance.get('Tags', [])}
    assert tags['Environment'] == 'test'
    assert tags['Application'] == 'myapp'

def test_kubernetes_deployment_yaml():
    """Test the Kubernetes deployment configuration."""
    with open('k8s/deployment.yaml', 'r') as f:
        deployment = yaml.safe_load(f)

    # Validate the deployment structure
    assert deployment['kind'] == 'Deployment'
    assert deployment['metadata']['name'] == 'myapp'

    # Validate the container configuration
    containers = deployment['spec']['template']['spec']['containers']
    app_container = next(c for c in containers if c['name'] == 'myapp')
    assert app_container['image'].startswith('myapp:')
    assert app_container['ports'][0]['containerPort'] == 8080

    # Validate resource limits
    resources = app_container.get('resources', {})
    assert 'limits' in resources
    assert 'requests' in resources
```
Testing Ansible Playbooks
```python
# tests/test_ansible.py
import os
import tempfile

import ansible_runner
import pytest

def test_ansible_playbook_syntax():
    """Validate that the playbook syntax is correct (no tasks are executed)."""
    result = ansible_runner.run(
        playbook='playbooks/deploy.yml',
        inventory='inventory/test',
        cmdline='--syntax-check',  # passed through to ansible-playbook
        verbosity=1,
    )
    assert result.status == 'successful'
    assert result.rc == 0

def test_ansible_playbook_execution():
    """Test playbook execution in a local test environment."""
    with tempfile.TemporaryDirectory() as temp_dir:
        # Create a throwaway test inventory
        inventory_content = """
[test]
test-server ansible_host=localhost ansible_connection=local
"""
        inventory_path = os.path.join(temp_dir, 'inventory')
        with open(inventory_path, 'w') as f:
            f.write(inventory_content)

        # Run the playbook
        result = ansible_runner.run(
            playbook='playbooks/setup.yml',
            inventory=inventory_path,
            extravars={'target_env': 'test'},
        )
        assert result.status == 'successful'

        # Verify that specific tasks completed
        for event in result.events:
            if event['event'] == 'runner_on_ok':
                # Check that required tasks completed successfully
                pass
```
Deployment Testing
Blue-Green Deployment Testing
```python
# tests/test_deployment.py
import time

import pytest
import requests

class TestBlueGreenDeployment:
    def test_green_environment_health(self):
        """Test that the green environment is healthy before the switch."""
        green_url = "http://green.example.com/health"
        response = requests.get(green_url, timeout=10)
        assert response.status_code == 200
        assert response.json()["status"] == "healthy"

    def test_traffic_switch(self):
        """Test traffic switching between blue and green."""
        # Check the currently active environment
        active_url = "http://api.example.com/version"
        initial_response = requests.get(active_url)
        initial_version = initial_response.json()["version"]

        # Perform the traffic switch (this would be done by the deployment script)
        # switch_traffic_to_green()

        # Verify the new version is active
        time.sleep(5)  # allow for DNS propagation
        new_response = requests.get(active_url)
        new_version = new_response.json()["version"]
        assert new_version != initial_version

    def test_rollback_capability(self):
        """Test the ability to roll back to the previous version."""
        # Get the current version
        active_url = "http://api.example.com/version"
        current_response = requests.get(active_url)
        current_version = current_response.json()["version"]

        # Simulate a rollback
        # rollback_to_previous_version()

        # Verify the rollback worked
        time.sleep(5)
        rollback_response = requests.get(active_url)
        rollback_version = rollback_response.json()["version"]

        # Should differ from the current version (back to the previous one)
        assert rollback_version != current_version
```
Canary Deployment Testing
```python
# tests/test_canary.py
import statistics
import time

import pytest
import requests

class TestCanaryDeployment:
    def test_canary_traffic_distribution(self):
        """Test that the canary receives the expected share of traffic."""
        api_url = "http://api.example.com/version"
        versions = []

        # Make repeated requests to sample the traffic distribution
        for _ in range(100):
            response = requests.get(api_url)
            versions.append(response.json()["version"])

        # Count the version distribution
        version_counts = {}
        for version in versions:
            version_counts[version] = version_counts.get(version, 0) + 1

        # The canary should receive ~10% of traffic
        canary_percentage = (version_counts.get("v2.0.0", 0) / len(versions)) * 100
        assert 5 <= canary_percentage <= 15  # allow for some variance

    def test_canary_performance_metrics(self):
        """Test that the canary version meets performance requirements."""
        canary_url = "http://canary.example.com/api/heavy-operation"
        response_times = []

        # Collect performance samples
        for _ in range(10):
            start_time = time.time()
            response = requests.get(canary_url)
            end_time = time.time()
            assert response.status_code == 200
            response_times.append(end_time - start_time)

        # Verify the performance is acceptable
        avg_response_time = statistics.mean(response_times)
        max_response_time = max(response_times)
        assert avg_response_time < 2.0  # average under 2 seconds
        assert max_response_time < 5.0  # max under 5 seconds
```
Best Practices for CI/CD Testing
1. Test Pyramid Implementation
- Unit Tests (70%): Fast, isolated, mock dependencies
- Integration Tests (20%): Medium speed, test component interactions
- E2E Tests (10%): Slow, test complete user workflows
2. Fail Fast Principle
```yaml
# Run fast tests first; each stage gates the next via `needs:`
jobs:
  lint:        # fastest - catches syntax/style issues
  unit:        # fast - catches logic issues
  integration: # medium - catches integration issues
  e2e:         # slowest - catches workflow issues
```
3. Test Environment Management
```python
# Environment-specific test configuration
import os

import pytest

@pytest.fixture
def test_config():
    env = os.getenv("TEST_ENV", "local")
    configs = {
        "local": {
            "database_url": "sqlite:///:memory:",
            "external_api_url": "http://localhost:8000",
        },
        "ci": {
            "database_url": "postgresql://postgres:postgres@postgres:5432/test",
            "external_api_url": "https://staging-api.example.com",
        },
        "staging": {
            "database_url": os.getenv("STAGING_DATABASE_URL"),
            "external_api_url": "https://staging-api.example.com",
        },
    }
    return configs[env]
```
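
Tests then consume the fixture without knowing which environment they run in; a short example (SQLAlchemy assumed for the database access):

```python
from sqlalchemy import create_engine, text

def test_database_reachable(test_config):
    # Connects to whichever database the current environment provides
    engine = create_engine(test_config["database_url"])
    with engine.connect() as conn:
        assert conn.execute(text("SELECT 1")).scalar() == 1
```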
4. Test Data Management
```python
# Fixtures for test data setup/teardown
import pytest

# `db` is assumed to be your ORM handle, e.g. a Flask-SQLAlchemy instance
from myapp import db

@pytest.fixture(scope="function")
def clean_database():
    """Provide a clean database for each test."""
    # Setup
    db.create_all()
    yield db
    # Teardown
    db.drop_all()

@pytest.fixture
def sample_users():
    """Provide consistent test data."""
    return [
        {"name": "Alice", "email": "alice@test.com"},
        {"name": "Bob", "email": "bob@test.com"},
    ]
```
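
Combining the two fixtures in a test might look like this (the `User` model and `myapp` package are hypothetical, following Flask-SQLAlchemy conventions):

```python
from myapp.models import User  # hypothetical ORM model

def test_create_user(clean_database, sample_users):
    # Starts from an empty schema thanks to clean_database
    clean_database.session.add(User(**sample_users[0]))
    clean_database.session.commit()
    assert User.query.count() == 1
```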
5. Monitoring and Alerting
```yaml
# Add monitoring to the CI/CD pipeline
- name: Monitor test trends
  run: |
    python scripts/analyze_test_trends.py

- name: Alert on test failures
  if: failure()
  uses: 8398a7/action-slack@v3
  with:
    status: failure
    channel: "#dev-alerts"
```
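
`scripts/analyze_test_trends.py` is project-specific; one minimal sketch flags a rising failure rate across archived JUnit XML reports (the `reports/history/` layout is an assumption about where each CI run stores its report):

```python
# scripts/analyze_test_trends.py - hypothetical trend check over JUnit XML history
import glob
import sys
import xml.etree.ElementTree as ET

def failure_rate(path: str) -> float:
    """Failure + error rate of one JUnit XML report."""
    root = ET.parse(path).getroot()
    # JUnit XML may nest <testsuite> elements under a <testsuites> root
    suite = root
    if root.tag == "testsuites":
        first = root.find("testsuite")
        if first is not None:
            suite = first
    tests = int(suite.get("tests", "0"))
    bad = int(suite.get("failures", "0")) + int(suite.get("errors", "0"))
    return bad / tests if tests else 0.0

if __name__ == "__main__":
    # Assumes each CI run archives its report under reports/history/
    rates = [failure_rate(p) for p in sorted(glob.glob("reports/history/*.xml"))]
    # Fail the step if the failure rate rose across the last three runs
    if len(rates) >= 3 and rates[-1] > rates[-2] > rates[-3]:
        sys.exit("Test failure rate is trending upward")
```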
Next Steps
Continue your CI/CD testing journey with:
- Security Testing Integration - SAST, DAST, dependency scanning
- Performance Testing in CI/CD - Load testing, benchmarking
- Contract Testing - API contract validation between services
- Chaos Engineering - Testing system resilience
Effective CI/CD testing ensures reliable, fast, and confident deployments while maintaining high code quality.