Chapter 7: Large Projects with Phase Documents + Implementation Prompts

January 26, 2026 · 5 min read

Series: LLM Development Guide

Chapter 7 of 16

Previous: Chapter 6: Scaling the Workflow: Phases, Parallelism, Hygiene

Next: Chapter 8: Choosing the Right Model: Capability Tiers, Not Hype

What you’ll be able to do

You’ll be able to run large, multi-phase delivery with less drift by introducing two explicit artifacts and a supporting cadence:

  • A phase specification document that defines scope, dependencies, files, and exit criteria.
  • A phase implementation prompt document that defines the prompt-by-prompt execution contract.
  • A repeatable operating cadence for execution, verification, and commits.

TL;DR

  • Large projects fail when a single prompt tries to carry the whole implementation plan.
  • Use one phase spec and one implementation-prompt file per sub-phase.
  • Execute prompts sequentially; do not continue if build/vet/test gates fail.
  • Keep context loading explicit for each prompt.
  • For copy/paste templates, use Chapter 13: Templates + Checklists: The Copy/Paste Kit.

Why this pattern exists

For a one-day task, a plan plus one execution prompt is usually enough.

For multi-week work, that breaks down:

  • Context gets too large and detail gets dropped.
  • Sessions diverge when constraints are implied instead of written.
  • Verification becomes optional instead of required.
  • Commits become large and hard to review.

The fix is to treat phase docs and implementation prompt docs as first-class project artifacts.

The two-document system

For each sub-phase, create two files.
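As a layout sketch, the two files can live side by side per sub-phase. The `docs/phases/` paths mirror the convention used later in this chapter; the `/tmp` prefix and the `PHASEB` names are only for illustration:

```shell
# One spec file and one implementation-prompt file per sub-phase.
mkdir -p /tmp/docs/phases
touch /tmp/docs/phases/PHASEB.md        # phase spec document
touch /tmp/docs/phases/PHASEB-PROMPT.md # phase implementation prompt document
ls /tmp/docs/phases
```

Keeping both files under one directory makes the mechanical checks shown later in this chapter a single glob away.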

1) Phase spec document

Purpose: define what this sub-phase must accomplish and how completion is validated.

Typical sections:

  • Status, dependencies, and migration notes.
  • Design rationale (why this slice exists now).
  • Tasks grouped by prompt number.
  • Files: new, modified, and referenced-only.
  • Exit criteria with concrete commands and expected results.
  • Progress notes placeholder.

2) Phase implementation prompt document

Purpose: define exactly how execution happens, prompt by prompt.

Each prompt should include:

  • Context files to load (small, explicit list).
  • Task details: signatures, interfaces, constraints.
  • Quality gates and required verification commands.
  • Stop condition: do not proceed until the current prompt passes.

A useful pattern is to couple one prompt to one logical implementation unit.

Worked example: a multi-phase engineering initiative

Assume you are delivering a new runtime capability over six weeks.

You split work into:

  • Phase A: contracts and types.
  • Phase B: core implementation.
  • Phase C: API and integration points.
  • Phase D: tests and validation.
  • Phase E: observability and rollout safety.

For Phase B, your phase spec might look like this:

# Phase B - Core Implementation

## Status
Planned

## Depends on
Phase A

## Design rationale
Phase B isolates core behavior behind the contracts from Phase A.
This prevents API and infrastructure concerns from polluting the core logic.

## Tasks
### Prompt 1
- Implement core orchestration types and constructor.

### Prompt 2
- Implement main execution method with deterministic error paths.

### Prompt 3
- Add unit tests for success and failure branches.

## Files
### New
- internal/core/runtime.go
- internal/core/runtime_test.go

### Modified
- internal/core/types.go

### Referenced (read-only)
- internal/contracts/interfaces.go

## Exit criteria
- [ ] `go build ./internal/core/...` exits 0
- [ ] `go vet ./internal/core/...` exits 0
- [ ] `go test ./internal/core/...` exits 0
- [ ] No unchecked returned errors

## Progress notes

Now pair it with a Phase B implementation prompt file:

# Phase B - Implementation Prompts

## Prompt 1 of 3: Runtime skeleton

Context files to load:
- docs/phases/PHASEB.md
- internal/contracts/interfaces.go
- internal/core/types.go
- README.md

Task:
- Create `internal/core/runtime.go` with constructor and public methods.

Constraints:
- Do not change files outside listed scope.
- Handle all returned errors explicitly.
- Keep methods short enough to remain reviewable.

Verification:
- `go build ./internal/core/...`
- `go vet ./internal/core/...`

Stop rule:
- Do not proceed to Prompt 2 until both commands pass.

This is intentionally boring. Boring is what scales.

Execution protocol for prompt files

Use the same cadence for every prompt in a sub-phase:

  1. Load only listed context files.
  2. Execute exactly one prompt.
  3. Update work notes (decisions, assumptions, blockers, next step).
  4. Run required verification gates.
  5. Commit one logical unit.
  6. Move to the next prompt.
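Steps 4 and 5 can be sketched as a small gate script. The `go` commands are the ones from the worked example; they are left commented here because they assume a real repository, so this is a shape sketch, not a drop-in tool:

```shell
# Sketch of the stop rule: run every verification gate in order and halt
# on the first failure, so a commit only happens after all gates pass.
run_gates() {
  for cmd in "go build ./internal/core/..." \
             "go vet ./internal/core/..." \
             "go test ./internal/core/..."; do
    echo "gate: $cmd"
    # $cmd || return 1   # enable in a real repository
  done
  echo "all gates passed"
}

run_gates && echo "safe to commit one logical unit"
```

The `return 1` on first failure is what enforces the stop rule mechanically instead of by discipline alone.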

Suggested commit discipline:

  • One commit per prompt when prompts are independent.
  • One commit per tightly coupled prompt pair when separation creates broken intermediate states.
  • Message format should state scope and intent clearly.
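One possible message format that states scope and intent is a phase/prompt prefix plus an imperative summary. The `phaseB(prompt2)` convention below is an invented example, not a prescription from this series:

```shell
# Hypothetical commit-message format: <phase>(<prompt>): <imperative intent>.
msg="phaseB(prompt2): implement main execution method with deterministic error paths"
echo "$msg"
# In a real repository this would become: git commit -m "$msg"
```

A prefix like this makes it trivial to map commits back to the prompt that produced them during review.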

When prompt counts are high, add a completion table in work notes:

## Prompt progress
- [x] Prompt 1
- [x] Prompt 2
- [ ] Prompt 3
- [ ] Prompt 4

Verification

You can verify this system is functioning with mechanical checks.

# All phase specs have exit criteria.
rg -n "^## Exit criteria" docs/phases/PHASE*.md

# All prompt docs define context loading and verification.
rg -n "^Context files to load:|^Verification:" docs/phases/*-PROMPT.md

# Work notes track progression.
rg -n "^## Prompt progress|^## Session log" work-notes || true

Expected results:

  • Every phase spec has explicit exit criteria.
  • Every prompt file defines context and verification.
  • Session state is recoverable without re-explaining the whole project.
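The spec check above can be made self-verifying in a script. This sketch fabricates a minimal spec under `/tmp` so the grep-based gate can be demonstrated end to end; in a real project it would point at `docs/phases/` instead:

```shell
# Build a minimal phase spec, then verify it declares exit criteria.
mkdir -p /tmp/docs/phases
printf '# Phase B - Core Implementation\n## Exit criteria\n- [ ] go test exits 0\n' \
  > /tmp/docs/phases/PHASEB.md

if grep -q '^## Exit criteria' /tmp/docs/phases/PHASEB.md; then
  echo "spec ok: exit criteria present"
else
  echo "spec missing exit criteria" >&2
fi
```

Wiring a check like this into CI turns "every phase spec has exit criteria" from a convention into a gate.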

Failure modes

  • Phase docs describe architecture but skip executable gates.
  • Prompt docs are too broad (“implement phase”) and lose determinism.
  • Prompts proceed despite failing verification.
  • Context file lists are bloated and include unrelated material.

If this starts happening, shrink prompt scope and tighten exit criteria before continuing.

Continue -> Chapter 8: Choosing the Right Model: Capability Tiers, Not Hype

Authors
DevOps Architect · Applied AI Engineer
I’ve spent 20 years building systems across embedded platforms, microcontrollers, PLCs, security platforms, fintech, SRE, and platform architecture. Today I focus on production AI systems in Go: multi-agent orchestration, MCP server ecosystems, and the DevOps platforms that keep them running. I care about systems that work under pressure: observable, recoverable, and built to last.