How we scope an AI engagement
Scope is a contract with your future self. Here is the rhythm we use to draw it before any production code lands.
- Method
- March 18, 2026
- 6 min read
Most AI engagements that miss their date do not miss because the engineering was wrong. They miss because the scope was a wish list — too many surfaces, too many stakeholders, too many definitions of done — and the team discovered that fact in week eight, not week one.
We close scope before we start building. The line between in and out is drawn, agreed, and written down. The exercise feels slow on day one and pays for itself by week three.
What 'scope-closed' means in practice
A scope is closed when four artifacts exist and have been read by everyone who can pull the cord on the engagement. The artifacts are short. They are not deliverables — they are decisions made visible.
- A one-paragraph problem statement that names the user, the surface, and the failure mode being addressed.
- A bullet list of what the engagement will produce — not capabilities in the abstract, but artifacts you can point at.
- A bullet list of what the engagement will not produce, with one sentence per omission explaining why.
- A measurable acceptance bar — the eval, the trace, the runbook check that says 'we can ship this.'
If we cannot say in one sentence what done looks like, we cannot finish in twelve weeks.
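The four artifacts can be sketched as a small record with a closed-check. This is a minimal illustration of the checklist above, not a tool we ship; every field and name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Scope:
    """One engagement's scope as four short artifacts (illustrative names)."""
    problem_statement: str = ""  # names the user, the surface, the failure mode
    will_produce: list[str] = field(default_factory=list)  # artifacts you can point at
    will_not_produce: dict[str, str] = field(default_factory=dict)  # omission -> one-sentence why
    acceptance_bar: str = ""  # the eval/trace/runbook check that says "we can ship this"

    def is_closed(self) -> bool:
        """Closed = all four artifacts exist and every omission has its reason."""
        return bool(
            self.problem_statement
            and self.will_produce
            and self.will_not_produce
            and all(self.will_not_produce.values())
            and self.acceptance_bar
        )

scope = Scope(
    problem_statement="Support agents triage inbound tickets by hand and miss the urgent ones.",
    will_produce=["routing eval suite", "triage service", "handover runbook"],
    will_not_produce={"CRM sync": "owned by a team that is not in the room"},
    acceptance_bar="routing eval at or above 0.92 precision on the holdout set",
)
```

An empty `Scope()` fails `is_closed()`; the check only passes once all four decisions are written down, which is the point of the exercise.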
Four questions we ask before signing
Most of the cost of a bad engagement is paid before the contract — in the conversations that did not happen because both sides were polite. We are not polite about scope. We ask:
- Who is the named owner on your team after handover, and is that person already in the room?
- What is the smallest version of this that proves the bet — and would you ship that version?
- What does your on-call rotation look like, and is it ready to absorb a new system?
- If the eval bar slips by 10%, what is the call you make: hold, ship, or roll back?
If any of those questions lacks an answer, the scope is not yet closeable. We do not start until all four have one.
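The fourth question is asking for a decision rule agreed before the number exists. One way to pin it down is to write the rule as a function, so the call is mechanical when the eval lands. A sketch under assumed thresholds; the 10% rollback line and the function name are illustrative, not our standard numbers.

```python
def eval_bar_call(observed: float, bar: float, rollback_slip: float = 0.10) -> str:
    """Pre-agreed call when the eval score comes in, relative to the bar.

    Illustrative thresholds:
      - at or above the bar: ship
      - below the bar but within `rollback_slip` (10%) of it: hold and investigate
      - more than `rollback_slip` below the bar: roll back
    """
    if observed >= bar:
        return "ship"
    if observed >= bar * (1 - rollback_slip):
        return "hold"
    return "rollback"

print(eval_bar_call(0.93, bar=0.92))  # ship
print(eval_bar_call(0.86, bar=0.92))  # hold (within 10% of 0.92)
print(eval_bar_call(0.80, bar=0.92))  # rollback (more than 10% below)
```

Writing the rule down first is the discipline; the exact thresholds matter less than the fact that nobody is negotiating them in week eleven.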
What we deliberately leave out
A scope-closed engagement omits more than it includes. The omissions usually look like: adjacent surfaces that 'might as well' be in scope; future capabilities that depend on this one but are not this one; integration work with systems no one in the room owns. We name these in the omission list with one sentence each, so nobody discovers them as a surprise mid-engagement.
Our engagement note on a content operations platform names three adjacent vendor workflows that the omission list deliberately kept out, and explains why ring-fencing them on day one was what made the platform finishable in twelve weeks.
The scope discipline is not glamorous. It is not what wins a stage. It is what makes the engagement finishable — and finishable engagements are the only kind worth offering.