LLM Development Guide

A practical, trust-first workflow for planning, prompting, executing, and reviewing LLM-assisted development.

This series turns LLM assistance into a repeatable process you can run across sessions without losing context.

You will be able to:

  • Turn vague work into explicit plans with verification and stop rules.
  • Write prompt documents that survive across sessions and handoffs.
  • Run large projects with phase documents and phase implementation prompt sets.
  • Preserve state with work notes so you can resume deterministically.
  • Execute in small units with review discipline and commit discipline.

Last updated: 2026-02-16

Questions? Contact me via my site contact form.

Found an issue? Open a GitHub issue.

Chapters

16 chapters

  1. Chapter 1: A Practical Workflow for LLM-Assisted Development That Doesn't Collapse After Day 2
    A trust-first, executable loop for LLM-assisted development: plan artifacts, prompt docs, work notes, verification, and commit discipline (with a worked example).
  2. Chapter 2: Planning: Plan Artifacts, Constraints, Definition of Done
    How to turn vague work into a phased plan that an LLM can execute safely: goals, constraints, references, verification, and stop rules.
  3. Chapter 3: Prompt Documents: Prompts That Survive Sessions
    Turn your plan into reusable prompt docs: phase-aligned prompts with constraints, deliverables, session management, and verification.
  4. Chapter 4: Work Notes: External Memory + Running Log
    How to preserve state across LLM sessions with work notes: decisions, assumptions, open questions, session logs, and commit links.
  5. Chapter 5: The Execution Loop: Review Discipline + Commit Discipline
    A repeatable execution loop for LLM-assisted work: implement small units, update notes, verify, and commit (without batching).
  6. Chapter 6: Scaling the Workflow: Phases, Parallelism, Hygiene
    How to scale LLM-assisted development from a 1-day task to multi-week work: sub-phasing, parallelization, and repo hygiene.
  7. Chapter 7: Large Projects with Phase Documents + Implementation Prompts
    An example-heavy pattern for multi-week LLM-assisted work: phase specifications, implementation prompt documents, and strict execution gates.
  8. Chapter 8: Choosing the Right Model: Capability Tiers, Not Hype
    Model choice is an engineering decision: match capability to task complexity, upgrade when stuck, and avoid stale vendor claims.
  9. Chapter 9: Security & Sensitive Data: Sanitize, Don't Paste Secrets
    Practical data-handling rules for LLM-assisted development: what never to paste, how to sanitize, and how to verify you didn't leak secrets.
  10. Chapter 10: Stop Rules + Pitfalls: When to Upgrade, Bail, or Go Manual
    Concrete stop rules for LLM-assisted development, plus common pitfalls and a recovery checklist when things go sideways.
  11. Chapter 11: Measuring Success: Solo + Team Metrics Without Fake Precision
    How to measure whether LLM-assisted development is actually helping: practical metrics, baselines, and lightweight reporting.
  12. Chapter 12: Team Collaboration: Handoffs, Shared Prompts, and Review
    How to make LLM-assisted development work on a team: handoff artifacts, shared prompt libraries, and review discipline.
  13. Chapter 13: Templates + Checklists: The Copy/Paste Kit
    Minimal templates for plans, prompts, work notes, and checklists. Copy, adapt, and keep the workflow consistent.
  14. Chapter 14: Building a Prompt Library: Governance + Quality Bar
    How to build and maintain a team prompt library that stays useful: structure, templates, contribution rules, and governance.
  15. Chapter 15: Worked Example: Creating a Helm Chart From a Reference Chart
    An end-to-end example of the workflow: plan, prompt docs, work notes, execution loop, and verification to create a new Helm chart from a known-good reference.
  16. Chapter 16: Worked Example: Converting an Ansible Playbook to a Go Temporal Workflow
    An end-to-end example of migrating a procedural runbook to a durable Temporal workflow using reference implementations, phased prompts, and verification.