---
title: Testing
sidebar_label: Testing
sidebar_position: 2
description: Running tests and understanding test patterns.
---

# Testing
Noteleaf maintains comprehensive test coverage using Go's built-in testing framework with consistent patterns across the codebase.
## Running Tests

### All Tests

```bash
task test
# or
go test ./...
```
### Coverage Report

Generate an HTML coverage report:

```bash
task coverage
```

Output: `coverage.html` (opens in browser)
### Terminal Coverage

View coverage in the terminal:

```bash
task cov
```

Shows function-level coverage percentages.
### Package-Specific Tests

Test a specific package:

```bash
go test ./internal/repo
go test ./internal/handlers
go test ./cmd
```

### Verbose Output

```bash
go test -v ./...
```
## Test Organization

Tests follow a hierarchical 3-level structure:

```go
func TestRepositoryName(t *testing.T) {
	// Setup once
	db := CreateTestDB(t)
	repos := SetupTestData(t, db)

	t.Run("Feature", func(t *testing.T) {
		t.Run("scenario description", func(t *testing.T) {
			// Test logic
		})
	})
}
```

Levels:

- Package (top-level function)
- Feature (first `t.Run`)
- Scenario (nested `t.Run`)
## Test Patterns

### Repository Tests

Repository tests use scaffolding from `internal/repo/test_utilities.go`:

```go
func TestTaskRepository(t *testing.T) {
	db := CreateTestDB(t)
	repos := SetupTestData(t, db)
	ctx := context.Background()

	t.Run("Create", func(t *testing.T) {
		t.Run("creates task successfully", func(t *testing.T) {
			task := NewTaskBuilder().
				WithDescription("Test task").
				Build()

			created, err := repos.Tasks.Create(ctx, task)
			AssertNoError(t, err, "create should succeed")
			AssertEqual(t, "Test task", created.Description, "description should match")
		})
	})
}
```
### Handler Tests

Handler tests use `internal/handlers/handler_test_suite.go`:

```go
func TestHandlerName(t *testing.T) {
	suite := NewHandlerTestSuite(t)
	defer suite.cleanup()

	handler := CreateHandler(t, NewHandlerFunc)

	t.Run("Feature", func(t *testing.T) {
		t.Run("scenario", func(t *testing.T) {
			AssertNoError(t, handler.Method(), "operation should succeed")
		})
	})
}
```
## Test Utilities

### Assertion Helpers

Located in `internal/repo/test_utilities.go` and `internal/handlers/test_utilities.go`:

```go
// Error checking
AssertNoError(t, err, "operation should succeed")
AssertError(t, err, "operation should fail")

// Value comparison
AssertEqual(t, expected, actual, "values should match")
AssertTrue(t, condition, "should be true")
AssertFalse(t, condition, "should be false")

// Nil checking
AssertNil(t, value, "should be nil")
AssertNotNil(t, value, "should not be nil")

// String operations
AssertContains(t, str, substr, "should contain substring")
```
### Test Data Builders

Create test data with builders:

```go
task := NewTaskBuilder().
	WithDescription("Test task").
	WithStatus("pending").
	WithPriority("high").
	WithProject("test-project").
	Build()

book := NewBookBuilder().
	WithTitle("Test Book").
	WithAuthor("Test Author").
	Build()

note := NewNoteBuilder().
	WithTitle("Test Note").
	WithContent("Test content").
	Build()
```
### Test Database

In-memory SQLite for isolated tests:

```go
db := CreateTestDB(t) // Automatic cleanup via t.Cleanup()
```
### Sample Data

Pre-populated test data:

```go
repos := SetupTestData(t, db)
// Creates tasks, notes, books, movies, TV shows
```
## Test Naming

Use direct descriptions without "should":

```go
t.Run("creates task successfully", func(t *testing.T) { })       // Good
t.Run("should create task", func(t *testing.T) { })              // Bad
t.Run("returns error for invalid input", func(t *testing.T) { }) // Good
```
## Test Independence

Each test must be independent:

- Use `CreateTestDB(t)` for an isolated database
- Don't rely on test execution order
- Clean up resources with `t.Cleanup()`
- Avoid package-level state
## Coverage Targets
Maintain high coverage for:
- Repository layer (data access)
- Handler layer (business logic)
- Services (external integrations)
- Models (data validation)
Current coverage is visible via:

```bash
task cov
```
## Continuous Integration
Tests run automatically on:
- Pull requests
- Main branch commits
- Release builds
CI configuration validates:
- All tests pass
- No race conditions
- Coverage thresholds met
## Debugging Tests

### Run a Single Test

```bash
go test -run TestTaskRepository ./internal/repo
go test -run TestTaskRepository/Create ./internal/repo
```

### Race Detector

```bash
go test -race ./...
```

### Verbose with Stack Traces

```bash
go test -v -race ./internal/repo 2>&1 | grep -A 10 "FAIL"
```
## Best Practices
- Write tests for all public APIs
- Use builders for complex test data
- Apply semantic assertion helpers
- Keep tests focused and readable
- Test both success and error paths
- Avoid brittle time-based tests
- Mock external dependencies
- Use table-driven tests for variations