# TDD Cycle — Med Tracker

Run the full TDD Red-Green-Refactor cycle for a given feature or bug fix.
## Workflow
### 1. RED — Write Failing Test

- Identify the feature or bug to implement
- Write test(s) in `backend/tests/`, following existing patterns (pytest-asyncio, mock Supabase)
- Run: `cd backend && pytest <test_file> -v`
- Verify: all new tests FAIL (expected)
### 2. GREEN — Minimal Implementation

- Write the minimum code to make all tests pass
- For API endpoints: create the schema in `app/schemas/`, the route in `app/api/v1/`, and register it in `app/main.py`
- For frontend: create the component and add it to the router in `App.tsx`
- Run: `cd backend && pytest <test_file> -v`
- Verify: all tests PASS
### 3. REFACTOR — Clean Up

- Remove duplication, improve naming, optimize
- Fix any deprecation warnings (e.g. `datetime.utcnow()` → `datetime.now(UTC)`)
- Run the full suite: `cd backend && pytest`
- Verify: ALL 157+ tests still pass, no regressions
### 4. VERIFY — Coverage & Review

- Run: `cd backend && pytest --cov=app --cov-report=term-missing`
- Target: 80%+ coverage on new code
- Use the code-reviewer agent for a quality check
## Test Patterns
```python
# Backend test pattern (mock Supabase)
import pytest


@pytest.fixture
def mock_supabase(mocker):
    client = mocker.MagicMock()
    mocker.patch("app.api.deps.get_supabase_client", return_value=client)
    return client


@pytest.mark.asyncio
async def test_create_resource(client, mock_supabase):
    mock_supabase.table().insert().execute.return_value.data = [{"id": "123"}]
    response = await client.post("/api/v1/resource", json={...})
    assert response.status_code == 201
```

## Commands
```bash
# Run specific test file
cd backend && pytest tests/test_<module>.py -v

# Run full suite
cd backend && pytest

# Run with coverage
cd backend && pytest --cov=app --cov-report=term-missing

# Frontend type check
cd frontend && npx tsc --noEmit
```