| name | description |
|---|---|
| tdd | Backend Agent uses this skill for Test-Driven Development. Follows Red-Green-Refactor cycle and vertical slicing principles, ensuring tests cover behavior rather than implementation details. Trigger: Implementation phase (Stage 9), integrated with go-backend-dev skill. |
# /tdd — Test-Driven Development
Backend Agent uses this skill for Test-Driven Development.
## Core Philosophy

Test behavior, not implementation details.

Good tests verify behavior through public interfaces, describing what the system does, not how it does it. They still pass after refactoring.

Bad tests are coupled to implementation: they mock internal collaborators or test private methods, so they fail after refactoring even though behavior hasn't changed.
## Anti-Pattern: Horizontal Slicing

Don't write all tests first, then all implementations. This is "horizontal slicing":

❌ Wrong way:

```
RED:   test1, test2, test3, test4, test5
GREEN: impl1, impl2, impl3, impl4, impl5
```

✅ Correct way (vertical slicing):

```
RED→GREEN: test1 → impl1
RED→GREEN: test2 → impl2
RED→GREEN: test3 → impl3
```
Horizontal slicing produces low-quality tests:
- Tests written early verify "imagined" behavior, not "actual" behavior
- Tests become validators of data structures and function signatures, not user-observable behavior
- Tests are insensitive to real changes: they pass when behavior is broken, yet fail after refactoring even though behavior hasn't changed
## Flow

```
Confirm interface changes and test scope
  ↓
Write first test (tracer bullet)
  ↓
RED: Test fails
  ↓
GREEN: Write minimal code to make test pass
  ↓
Write next test
  ↓
RED → GREEN loop
  ↓
All behavior tests complete
  ↓
REFACTOR: Improve the design while tests stay green
  ↓
Confirm all tests still pass
```
## Step Details

### 1. Planning
Before writing any code:
- Confirm which interface changes are needed with user
- Confirm which behaviors need testing (prioritize)
- Identify opportunities for deep modules (small interface, deep implementation)
- Design interfaces for testability
- List behaviors to test (not implementation steps)
- Get user approval for test plan
Ask: "What should the public interface look like? Which behaviors are most important to test?"
You cannot test everything. Confirm with the user which behaviors matter most; focus testing effort on critical paths and complex logic, not every possible edge case.
### 2. Tracer Bullet
Write a test that confirms one thing about the system:
RED: Write first behavior test → Test fails
GREEN: Write minimal code to make test pass → Test passes
This is your tracer bullet — proving the end-to-end path works.
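As a concrete sketch, the tracer bullet pins down one observable behavior and the minimal code that satisfies it. The `Slugify` feature below is hypothetical, chosen only to keep the example self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical first behavior: "Slugify lowercases a title and
// joins its words with dashes".
//
// RED:   the check in main fails until Slugify exists and behaves.
// GREEN: this minimal implementation makes it pass; nothing more.
func Slugify(title string) string {
	return strings.Join(strings.Fields(strings.ToLower(title)), "-")
}

func main() {
	// The tracer bullet: one end-to-end check of observable behavior.
	got := Slugify("Hello TDD World")
	if got != "hello-tdd-world" {
		panic("tracer bullet failed: " + got)
	}
	fmt.Println("tracer bullet passed")
}
```

The test asserts only on the returned value, so any internal rewrite of `Slugify` that preserves the output keeps the test green.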
### 3. Incremental Loop
For each remaining behavior:
RED: Write next test → Fails
GREEN: Minimal code to make test pass → Passes
Rules:
- One test at a time
- Write only enough code to make current test pass
- Don't predict future tests
- Tests focus on observable behavior
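One incremental step might look like this (a hypothetical `Discount` behavior; all names are illustrative). The first test is already green; the second is the new RED→GREEN slice, and only enough code exists to satisfy both:

```go
package main

import "fmt"

// Step 1 pinned down "orders under 100 get no discount".
// Step 2 (the new slice) adds "orders of 100 or more get 10% off".
// No code exists for behaviors that have no test yet.
func Discount(total float64) float64 {
	if total >= 100 {
		return total * 0.10
	}
	return 0
}

func main() {
	// Test 1 (already GREEN): no discount below the threshold.
	if Discount(50) != 0 {
		panic("expected no discount")
	}
	// Test 2 (the new RED→GREEN step): 10% at or above 100.
	if Discount(200) != 20 {
		panic("expected 10% discount")
	}
	fmt.Println("both behavior tests pass")
}
```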
### 4. Refactoring
After all tests pass, look for refactoring candidates:
- Extract duplicate logic
- Deepen modules (move complexity behind simple interfaces)
- Apply SOLID principles naturally
- Consider what new code reveals about existing code problems
- Run tests after each refactoring step
Never refactor while in RED state. Get back to GREEN first.
## Good Tests vs Bad Tests

### Good Tests

Integration style: test through real interfaces rather than mocking internal parts.
```go
// GOOD: Tests observable behavior
func TestUserUsecase_CreateUser_Success(t *testing.T) {
	mockRepo := new(mock.UserRepository)
	uc := NewUserUsecase(mockRepo, logger)

	mockRepo.On("GetByEmail", mock.Anything, "test@example.com").Return(nil, nil)
	mockRepo.On("Create", mock.Anything, mock.AnythingOfType("*domain.User")).Return(nil)

	user, err := uc.CreateUser(context.Background(), input)

	assert.NoError(t, err)
	assert.NotNil(t, user)
	assert.Equal(t, "test@example.com", user.Email)
}
```
Characteristics:
- Tests behavior that users/callers care about
- Uses only public APIs
- Tests still pass after internal implementation refactoring
- Describes "what" instead of "how"
- One logical assertion per test
### Bad Tests

Implementation detail testing: coupled to internal structure.

```go
// BAD: Tests implementation details
func TestUserUsecase_CreateUser_CallsRepoCreate(t *testing.T) {
	mockRepo := new(mock.UserRepository)
	uc := NewUserUsecase(mockRepo, logger)

	uc.CreateUser(context.Background(), input)

	// This tests "how" instead of "what"
	mockRepo.AssertCalled(t, "Create", mock.Anything, mock.Anything)
}
```
Red flags:
- Mocking internal collaborators just to verify they were called
- Testing private methods
- Asserting call counts or order
- Tests fail after refactoring but behavior unchanged
- Test names describe "how" instead of "what"
```go
// BAD: Bypasses the public interface to validate state
func TestCreateUser_SavesToDatabase(t *testing.T) {
	CreateUser(ctx, input)

	// Direct database query, bypassing the public interface
	row := db.QueryRow("SELECT * FROM users WHERE name = $1", "Alice")
}
```

```go
// GOOD: Validates behavior through the public interface
func TestCreateUser_MakesUserRetrievable(t *testing.T) {
	user, _ := CreateUser(ctx, input)

	retrieved, _ := GetUser(ctx, user.ID)
	assert.Equal(t, "Alice", retrieved.Name)
}
```
## Golang Testing Standards

### Test Naming

```go
// Test{Unit}_{Scenario}
func TestUserUsecase_CreateUser_Success(t *testing.T)      {}
func TestUserUsecase_CreateUser_InvalidEmail(t *testing.T) {}
func TestUserUsecase_CreateUser_Duplicate(t *testing.T)    {}
```
### Test Pyramid

```
          /\
         /  \
        / E2E \         <- Few critical flows
       /--------\
      /Integration\     <- API + DB
     /--------------\
    /   Unit Tests   \  <- Most, 80%+ coverage
   /------------------\
```
### Mock Strategy
Only mock at system boundaries:
- External APIs (payments, email, etc.)
- Database (sometimes — prefer test DB)
- Time/randomness
- File system (sometimes)
Don't mock:
- Your own classes/modules
- Internal collaborators
- Things you can control
```go
// Use mockery to auto-generate mocks for boundary interfaces
//go:generate mockery --name=UserRepository

// Unit tests use the generated mock repo
func TestUserUsecase_CreateUser_Success(t *testing.T) {
	mockRepo := new(mock.UserRepository)
	uc := NewUserUsecase(mockRepo, logger)

	mockRepo.On("GetByEmail", mock.Anything, "test@example.com").Return(nil, nil)
	mockRepo.On("Create", mock.Anything, mock.AnythingOfType("*domain.User")).Return(nil)

	user, err := uc.CreateUser(context.Background(), input)

	assert.NoError(t, err)
	assert.NotNil(t, user)
}
```
## Interface Design for Testability
Good interfaces make testing natural:
**1. Accept dependencies, don't create them**

```go
// Testable: the dependency is injected
func (s *UserService) CreateUser(ctx context.Context, input CreateUserInput, repo UserRepository) (*User, error) {}

// Hard to test: the dependency is created internally
func (s *UserService) CreateUser(ctx context.Context, input CreateUserInput) (*User, error) {
	repo := postgres.NewUserRepository(db) // Creates dependency
}
```
**2. Return results, don't produce side effects**

```go
// Testable: returns a result that can be asserted directly
func CalculateDiscount(cart *Cart) Discount {}

// Hard to test: mutates its input
func ApplyDiscount(cart *Cart) {
	cart.Total -= discount // Mutates input
}
```
**3. Small interface surface area**

- Fewer methods = fewer tests to write
- Fewer parameters = simpler test setup
## Checklist for Each Cycle

- [ ] Test describes behavior, not implementation
- [ ] Test uses only public interfaces
- [ ] Test still passes after internal refactoring
- [ ] Code is the minimal implementation to make the current test pass
- [ ] No speculative features
## Refactoring Candidates
After TDD cycle completes, look for:
- Duplicate logic → Extract function/class
- Overlong methods → Split into private helpers (keep tests on the public interface)
- Shallow modules → Merge or deepen
- Feature envy → Move logic to where the data is
- Primitive obsession → Introduce value objects
- New code revealing existing code problems
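For instance, the primitive-obsession candidate often resolves into a value object. A hedged sketch (this `Email` type is illustrative, not an API from the document):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Instead of passing raw email strings everywhere, an Email value
// object validates and normalizes once at construction, so every
// holder of an Email is known to be valid.
type Email struct{ value string }

func NewEmail(raw string) (Email, error) {
	raw = strings.TrimSpace(strings.ToLower(raw))
	if !strings.Contains(raw, "@") {
		return Email{}, errors.New("invalid email: " + raw)
	}
	return Email{value: raw}, nil
}

func (e Email) String() string { return e.value }

func main() {
	e, err := NewEmail("  Test@Example.com ")
	if err != nil {
		panic(err)
	}
	fmt.Println(e) // test@example.com
	if _, err := NewEmail("not-an-email"); err == nil {
		panic("expected validation error")
	}
}
```

Tests for the refactored code still target behavior (construction succeeds or fails, the normalized value), not the internal representation.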
## Related Skills

- Prerequisite: go-backend-dev (used during implementation)
- Auxiliary: design-an-interface (design interfaces for testability)
- Follow-up: qa (QA testing)