The iteration tax
Every software project involves rework. You write requirements, then revise them because you forgot an edge case. You design an interface, then change it because you discovered a dependency you missed. You implement a function, then rewrite it because the first version had a bug. You write tests, then add more because the initial set didn't catch a failure mode.
This rework isn't waste. It's how you converge on correct solutions. The question isn't whether you'll iterate, but where you'll do it. The cost of an iteration varies by up to three orders of magnitude depending on which phase you're in.
Token economics
Let's start with the direct costs. Frontier AI models (Claude Sonnet, GPT-4, Gemini Pro) cost roughly $3 per million input tokens and $15 per million output tokens as of early 2026. Mid-tier models cost $0.25 to $3 per million tokens. These are the marginal costs of iteration.
A requirements document for a moderately complex feature runs about 800 tokens. Iterating on it means reading it (800 input tokens) and generating a revised version (800 output tokens). One iteration costs $0.014. Ten iterations cost $0.14. You can refine requirements through ten rounds of revision for fourteen cents.
The same feature implemented as code is roughly 3,000 to 5,000 tokens depending on the language and structure. Let's use 4,000 tokens as the baseline. Iterating on code means providing the current implementation plus context (6,000 input tokens) and generating a revised version (4,000 output tokens). One iteration costs $0.078. Ten iterations cost $0.78.
The direct token costs favor document iteration by about 5x. But this understates the real difference, because documents and code don't iterate the same way.
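These numbers are easy to reproduce. Here's a minimal sketch of the cost model, using the frontier pricing quoted above:

```python
IN_RATE, OUT_RATE = 3 / 1e6, 15 / 1e6  # dollars per token at frontier pricing

def iteration_cost(input_tokens: int, output_tokens: int) -> float:
    """Marginal dollar cost of one read-and-revise cycle."""
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

doc = iteration_cost(800, 800)        # requirements doc in, revision out
code = iteration_cost(6_000, 4_000)   # code plus context in, revision out
print(f"doc ${doc:.3f}, code ${code:.3f}, ratio {code / doc:.1f}x")
# doc $0.014, code $0.078, ratio 5.4x
```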
The iteration cascade
When you change a requirement, the requirement changes. When you change code, the code changes, the tests that depend on it might need changes, the documentation might need updates, the interfaces that call it might need adjustments, and the assumptions other components made about its behavior might now be wrong.
Here's a concrete example. You're building a user profile service. The requirement says "return user profile data given a user ID." You implement it, then discover that some profiles are private and shouldn't be returned to all callers. If you're iterating at the requirements level, you add "return an error for private profiles unless the caller is authorized." That's a 30-second change.
If you're iterating at the code level, you need to modify the function signature to accept caller credentials, update all callers to provide them, add authorization logic, handle the new error case in all callers, write tests for the authorization logic, update tests for all callers to provide credentials, and possibly update related functions that made assumptions about profile accessibility. That's 30 minutes to two hours depending on how many call sites exist.
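Here's a sketch of what that cascade looks like in code. The names (Profile, Credentials, AuthorizationError) and the authorization rule are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    is_private: bool

@dataclass
class Credentials:
    user_id: str

    def can_view(self, owner_id: str) -> bool:
        # Hypothetical rule: callers may always view their own profile.
        return self.user_id == owner_id

class AuthorizationError(Exception):
    pass

# Original shape, matching "return user profile data given a user ID":
#   def get_profile(user_id: str) -> Profile
#
# Revised shape after the privacy requirement arrives at the code level.
# The new parameter is what cascades: every caller must now supply
# credentials and handle the new error, and every test must be updated.
def get_profile(user_id: str, caller: Credentials,
                profiles: dict[str, Profile]) -> Profile:
    profile = profiles[user_id]
    if profile.is_private and not caller.can_view(user_id):
        raise AuthorizationError(f"profile {user_id} is private")
    return profile
```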
The requirements change was isolated. The code change cascaded through the system.
Why code changes cascade
Code has dependencies in both directions. Your function depends on libraries and other functions (dependencies you use). Other code depends on your function (dependents that use you). Changing your function potentially affects everything in both categories. Requirements documents only have forward dependencies. Changing them doesn't break anything that already exists.
Cognitive load
Reviewing and revising a requirements document requires reading prose and checking it for completeness and consistency. The mental model is "does this describe what the system should do?" You're thinking about one thing: the requirements.
Reviewing and revising code requires loading the current implementation into your head, understanding what it does, identifying the problem, figuring out how to change it without breaking other things, making the change, and verifying it works. The mental model spans implementation details, interface contracts, dependent code, test coverage, and edge cases. You're juggling five things simultaneously.
The difference matters because context switching is expensive. When you iterate on a requirements document, you stay in requirements-thinking mode. The context persists across iterations. When you iterate on code, you often need to reload context between iterations. You run tests, see a failure, switch to fix-mode, make changes, switch back to test-mode, and repeat. Each switch costs minutes of reloading mental state.
Defect economics
IBM's research on defect costs gives us multipliers for how expensive bugs are to fix at different phases. A defect found and fixed during requirements costs 1x (the baseline). The same defect found during implementation costs 6.5x. Found in testing costs 15x. Found in production costs 100x.
These multipliers aren't arbitrary. They reflect the work required to fix problems at each phase. During requirements, you change prose. During implementation, you change code that might be partially integrated. During testing, you change code that's fully integrated and passing some tests. In production, you're changing code that real users depend on, often under time pressure, with incomplete information about what triggered the bug.
The multipliers compound with the iteration costs. If iterating on requirements costs $0.014 and iterating on implementation costs $0.078, and implementation-phase bugs cost 6.5x more to fix, the real cost ratio is ($0.078 × 6.5) / $0.014 = 36x. Finding and fixing a defect during implementation costs 36 times more than finding and fixing it during requirements.
The production multiplier is even worse. If a defect escapes to production and costs 100x more to fix, and you're fixing it in code (not requirements), the cost ratio is ($0.078 × 100) / $0.014 = 557x. Five hundred fifty-seven times more expensive.
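You can check the compounding directly. A small sketch using the per-iteration costs computed earlier and the IBM-style multipliers; the testing-phase ratio (84x) isn't stated above but follows from the same formula:

```python
DOC_FIX = 0.014    # dollars: one requirements-level iteration (from above)
CODE_FIX = 0.078   # dollars: one code-level iteration (from above)

# IBM-style phase multipliers cited above, applied to a code-level fix.
for phase, mult in [("implementation", 6.5), ("testing", 15), ("production", 100)]:
    ratio = CODE_FIX * mult / DOC_FIX
    print(f"{phase}: {ratio:.0f}x the cost of a requirements-phase fix")
# implementation: 36x, testing: 84x, production: 557x
```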
This is why the front-loading approach works. Spending an extra hour refining requirements costs maybe $3 in AI costs and an hour of your time. Missing a requirement and fixing it in production costs days of debugging, emergency patches, and user-visible failures.
Time costs
Token costs are low enough that the real cost is your time. Reviewing an 800-token requirements document takes about five minutes. You read it, think about edge cases, identify gaps, and ask for revisions. With AI assistance, the revision appears in seconds. You review again. Total time for an iteration: five to ten minutes.
Reviewing 4,000 tokens of code takes 15 to 25 minutes. You need to understand what the code does (which requires more than just reading it), identify problems, think through fixes, and verify the fix doesn't break anything else. The AI generates revised code in 30 to 60 seconds, but your review time dominates.
Over ten iterations, the requirements track costs 50 to 100 minutes of your time. The code track costs 150 to 250 minutes. The ratio is roughly 2.5x to 3x, which matches the cognitive load difference.
The thousand-token rule
Here's the threshold where the economics become decisive. If you can describe a component, feature, or architectural decision in roughly 1,000 tokens, you can iterate on that description 10 to 20 times for less cost (both time and money) than implementing it once and iterating on the code.
The math: a 1,000-token document, 15 iterations. Input tokens: 15,000. Output tokens: 15,000. Cost at frontier model pricing: $0.27. Time cost: 15 iterations × 7 minutes average = 105 minutes.
Same feature as 4,000 tokens of code, 5 iterations. Input tokens: 30,000. Output tokens: 20,000. Cost: $0.39. Time cost: 5 iterations × 20 minutes average = 100 minutes.
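Both tracks in one sketch, under the same frontier-pricing and review-time assumptions:

```python
IN_RATE, OUT_RATE = 3 / 1e6, 15 / 1e6  # dollars per token, frontier pricing

def track(in_tokens: int, out_tokens: int, iterations: int, minutes_each: int):
    """Total (dollars, minutes) for a series of read-and-revise cycles."""
    dollars = iterations * (in_tokens * IN_RATE + out_tokens * OUT_RATE)
    return round(dollars, 2), iterations * minutes_each

print(track(1_000, 1_000, iterations=15, minutes_each=7))   # (0.27, 105)
print(track(6_000, 4_000, iterations=5,  minutes_each=20))  # (0.39, 100)
```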
The crossover happens around 1,000 tokens. Below that threshold, document iteration is dramatically cheaper. Above it, the advantage narrows but still favors documents because of cascade effects.
This creates a design heuristic: if you can't describe a component in roughly 1,000 tokens, it's probably too big. Break it into smaller pieces, each of which fits the rule. The decomposition that keeps descriptions under 1,000 tokens turns out to be the same decomposition that produces modular, understandable code.
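A rough way to apply the heuristic, assuming the common rule of thumb that English prose averages about 0.75 words per token (use a real tokenizer such as tiktoken when precision matters); the filename is hypothetical:

```python
def rough_tokens(text: str) -> int:
    # Rough heuristic: English prose runs about 0.75 words per token.
    return round(len(text.split()) / 0.75)

spec = open("component_spec.md").read()  # hypothetical description file
if rough_tokens(spec) > 1_000:
    print("Too big for the thousand-token rule; consider splitting.")
```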
Where iteration happens cheap
Here's a table showing typical artifact sizes and iteration costs across development phases.
| Phase | Artifact | Typical tokens | Iteration cost | Iterations before crossover |
|---|---|---|---|---|
| Requirements | Feature requirements | 600-1,000 | $0.014 | 50+ |
| Technical planning | Architecture section | 400-800 | $0.011 | 60+ |
| Test planning | Test plan per component | 300-600 | $0.008 | 80+ |
| Design | Stub/interface per module | 200-500 | $0.006 | 100+ |
| Test | Individual test case | 100-300 | $0.004 | 150+ |
| Implementation | Single function body | 150-500 | $0.006 | 100+ |
The "iterations before crossover" column shows how many times you can iterate on an artifact before the cumulative cost equals implementing and iterating on the code once. Requirements can go through 50 iterations before costing as much as a single code iteration. Test plans can go through 80.
The implication: front-load your iteration budget into early phases. Refine requirements until they're unambiguous. Refine technical plans until dependencies are clear. Refine test plans until edge cases are covered. By the time you reach implementation, most decisions are locked in and iteration becomes rare.
Iteration loops by phase
Different phases have different iteration characteristics:
Requirements phase: tight loop. Write requirement, ask AI to identify gaps, revise, repeat. Each cycle takes minutes. The loop closes when you've covered all edge cases and validated with stakeholders.
Technical planning phase: medium loop. Propose approach, evaluate trade-offs, revise, validate against requirements, repeat. Each cycle takes 10-20 minutes. The loop closes when the approach is feasible and complete.
Test planning phase: medium loop. Enumerate test cases, identify missing coverage, add cases, repeat. Each cycle takes 5-15 minutes. The loop closes when coverage is systematic.
Implementation phase: slow loop. Write code, run tests, debug failures, revise, repeat. Each cycle takes 20-40 minutes depending on test suite speed. The loop closes when all tests pass.
Production: very slow loop. User reports bug, reproduce, diagnose, fix, test, deploy, monitor. Each cycle takes hours to days. The loop only closes when the production issue is resolved.
Iteration speed decreases and cost per iteration increases as you move through these phases in order. This is why you want to do most of your iteration early.
When the rule breaks down
The thousand-token rule assumes you know what you're building well enough to describe it. That assumption breaks in three situations.
Prototyping: When you genuinely don't know what you need, writing requirements is premature. Build a throwaway prototype to explore the problem space, learn from it, then write requirements based on what you learned. The prototype isn't production code. It's a learning tool. Once you understand the problem, discard the prototype and build the real system using the standard workflow.
Exploratory coding: Sometimes understanding a library or framework requires writing code to see how it behaves. This isn't building a feature. It's research. Write small test programs, see what happens, learn the patterns, then write requirements for the actual feature. The exploratory code is a record of what you learned, not a deliverable.
Trivial implementations: If the implementation is simpler than any description of it would be, skip the document phase. A function that returns a constant value doesn't need a requirements document. A configuration file change doesn't need a technical design. Apply judgment. The workflow is a tool, not a straitjacket.
The key test: will iteration likely happen? If yes, do it where it's cheap. If you're writing a one-liner that's obviously correct, skip the process. If you're building anything complex enough that you might need to revise your approach, front-load the iteration.
The leverage point
The entire workflow rests on this economic fact: iteration on documents costs 5x to 50x less than iteration on code, depending on the phase and complexity. This means you can afford to be thorough in early phases. Spend an extra 30 minutes refining requirements to cover two more edge cases. It costs maybe $1 in API fees and 30 minutes of your time. Missing those edge cases and discovering them during testing costs hours of debugging and rework.
The leverage comes from the cost asymmetry. A small investment in document quality eliminates large amounts of downstream rework. The workflow described in the rest of this course systematically exploits that leverage.
Iterate where it's cheap. Front-load decisions. By the time you write code, the solution should be so constrained that implementation feels mechanical.