
Planning in the Agentive Era

Agentic dev breaks planning. Work finishes before tickets get written. Estimates don't apply. But orgs still need dates. The fix: plan outcomes, not tasks. Artifact in the repo where agents read it. Ranges, not commitments. Most orgs aren't ready. Figuring it out live.

I'm actively rolling out agentic development across my engineering organization. I currently cannot tell you when anything will be done.

This isn't because I'm bad at planning. It's because the tools I used to plan with no longer apply, and the new tools don't exist yet. This post is speculative—I'm in the middle of this problem, not past it.

What's Broken

The traditional model assumed one engineer works on one task for some estimable amount of time. You could use historical data to forecast. The estimates were imperfect, but they were useful imperfections.

That model is gone.

With agentic development, I can sometimes deploy an entire product feature in five days. My old heuristic was that five days was the maximum size for a single ticket. Now five days is a feature, not a task. We can often implement something before we can write the ticket describing it.

Meetings have become extraordinarily expensive: human time directed at agents is productive; human time in coordination meetings is not. Estimation is meaningless when I have no intuition for the new mode and the variance is enormous.

But here's the uncomfortable reality: I still need to signal progress to the rest of the organization. Marketing needs launch timing. Sales needs to know what to promise. Leadership needs to forecast.

We're running two systems simultaneously: internal fluidity and external legibility.

Engineering might be operating in a new mode—fast, variable, iterative—but we still have to emit signals that the rest of the organization can consume. The old tools gave us that legibility, however imperfectly. Roadmaps, sprint commitments, Gantt charts—they were fictions, but useful fictions that let the organization coordinate. Now those fictions are too obviously false to maintain, and we don't have replacements.

What Might Replace It

I don't have answers, but I have intuitions about what the new model might look like.

Outcome-level planning, not task-level. The granularity of planning has to shift up. Instead of breaking features into tasks, estimating tasks, and tracking task completion, you plan at the feature or outcome level. "We're working toward this capability" rather than "engineer A is on ticket B for three days."

The task breakdown still happens—but it happens in real-time, in the conversation between the engineer and the agent, not in advance in a planning system. Work breakdown as a planning activity becomes obsolete.

Strategy becomes the planning artifact. If you're not planning at the task level, what are you planning? Strategic outcomes. The things you're trying to accomplish. The direction you're moving.

Instead of strategy → roadmap → epics → stories → tasks, you just have strategy → outcomes. An engineer picks up an outcome and works with agents until it's done. The intermediate layers vanish.

The artifact lives where agents can access it. Jira is a system humans use to coordinate with other humans about work that humans do. It's not designed for a world where agents are doing significant work and need to understand context.

I'm thinking about something as simple as a markdown file in the repository. Human-readable. Agent-accessible. The agent can read it to understand priorities. The agent can update it when work completes. The agent can even propose additions based on what it encounters during implementation.

The exact format matters less than the principle: the planning artifact must live with the code and be readable and writable by both humans and agents.
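
For concreteness, here's a rough sketch of what such a file might look like. Everything in it (the name, the sections, the fields) is hypothetical; it's the shape I'm experimenting with, not a settled format.

```markdown
# Outcomes

## Active
- Customers can export usage reports without asking support
  - Why: removes our most common support request
  - Constraints: reuse the existing reporting service; no new infrastructure
  - Status: in progress

## Done
- Self-serve API key rotation

## Proposed (agent-suggested)
- Report exports surfaced a gap in audit logging; consider a follow-up outcome
```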

And because it's in version control, you get event history for free. Every commit is a record of what changed and when.

Ops work is just a queue. Bug reports, small requests, operational issues—these stay granular. Someone files a ticket, it goes in a queue, engineers pull from it. Jira is fine for this. It's intake, not planning.

The hard problem is project work: new features, reinvestment, strategic initiatives. That's where the old model breaks.

A thin interface for non-technical stakeholders. The rest of the organization needs to see what's happening without touching the repo. Maybe it's a rendered view of the markdown. Maybe it's a bot that answers questions about status. Maybe it's a generated weekly summary.

The key is that you don't rebuild Jira. You create the thinnest possible translation layer between "people who don't live in the repo" and "the planning artifact that lives in the repo."
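
As one example of how thin that layer can be, here's a sketch in Python that renders the repo's planning file into a static HTML page stakeholders can bookmark. The OUTCOMES.md path is hypothetical and it assumes the third-party `markdown` package; the point is the smallness, not the specifics.

```python
# render_status.py: publish the planning artifact as a read-only status page.
# Assumes a hypothetical OUTCOMES.md at the repo root and `pip install markdown`.
from pathlib import Path

import markdown  # third-party Markdown-to-HTML converter

source = Path("OUTCOMES.md").read_text(encoding="utf-8")
body = markdown.markdown(source)

html = f"<html><head><title>Engineering outcomes</title></head><body>{body}</body></html>"
Path("status.html").write_text(html, encoding="utf-8")
print("Wrote status.html")
```

Run something like this in CI on every merge and the status page never drifts, because it's generated from the same artifact the agents work against.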

The Forecasting Problem

Here's the piece I really don't have figured out: how do you predict when things will be done?

To forecast, you need to understand patterns from the past. Humans were never good at estimating—we just had no alternative. We used story points and velocity as proxies for something we couldn't actually measure.

Now the alternative is clear, even if we can't build it yet: the system should observe what actually happens and predict based on patterns. How long did outcomes of similar scope take? What's the variance? What factors correlate with faster or slower delivery?

The output wouldn't be a commitment—it would be a distribution. "Outcomes like this have historically completed in 3-10 days, median 5 days." Probabilistic, which is what planning always should have been.
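
A minimal sketch of what producing that output could look like, assuming you already have a list of how many days past outcomes of similar scope took (the numbers below are invented):

```python
# forecast.py: turn historical outcome durations into a range, not a date.
# The durations are illustrative; in practice they'd come from the artifact's history.
import statistics

durations_days = [3, 4, 4, 5, 5, 6, 8, 10, 12]  # hypothetical past outcomes of similar scope

median = statistics.median(durations_days)
deciles = statistics.quantiles(durations_days, n=10)  # 10th..90th percentile cut points
p10, p90 = deciles[0], deciles[-1]

print(f"Outcomes like this have completed in {p10:.0f}-{p90:.0f} days (median {median:.0f}).")
```

The organization consumes the range; the commitment never gets made.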

Commitment-based planning may be structurally incompatible with agentic execution. The variance is too high, the factors too unpredictable, the historical intuitions too obsolete. Organizations that insist on date commitments will either get false precision or constant disappointment.

The infrastructure for this is a planning artifact in version control with event history. Something needs to read that history and project forward. I don't know how to build it yet, but I believe it has to exist. Human estimation doesn't work when the variance is this high and we have no intuition for the new mode.
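
The raw material already exists once the artifact lives in git. Here's a sketch of pulling that event history out, assuming the artifact is a hypothetical OUTCOMES.md and that commits touching it are a usable proxy for when outcomes moved:

```python
# history.py: read the planning artifact's commit history as an event stream.
# Assumes the artifact is OUTCOMES.md (hypothetical) and git is available on PATH.
import subprocess

log = subprocess.run(
    ["git", "log", "--reverse", "--date=short", "--pretty=format:%ad|%s", "--", "OUTCOMES.md"],
    capture_output=True, text=True, check=True,
).stdout

for line in log.splitlines():
    date, subject = line.split("|", 1)
    print(date, subject)  # feed these events into whatever does the projection
```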

The bootstrapping problem. There's a painful reality here: you can't have machine-generated forecasts until you have history in the new mode. We're in the period where we're building the dataset that will eventually enable prediction. Until then, we're communicating forecasts as ranges, framing them as "based on emerging patterns," and asking the organization to trust a system we haven't proven yet.

The real shift is cultural, not technical. Moving from human promises to system-observed probability distributions isn't just a tooling change. It's an organizational epistemology problem—how the company believes what it believes about the future.

Sales wants a date to tell the customer. Marketing wants a launch day to plan around. Leadership wants a roadmap they can present to the board. Everyone downstream of engineering has been trained to expect commitments, and they've built their own processes around those commitments.

Telling them "we forecast 70% confidence of delivery within two weeks, with a long tail if we hit architectural complexity" is technically more honest than "it'll be done March 15th." But it requires the entire organization to get comfortable with ranges instead of dates. That's a hard shift. It requires trust in engineering that many organizations haven't built. It requires other functions to adapt their own planning to absorb uncertainty rather than demand false precision.

This is why I say the agentive era isn't just an engineering transformation—it's an organizational one. The planning problem can't be fully solved inside engineering. It requires the rest of the company to change how they consume and act on information about the future.

It also forces uncomfortable questions about roles that were built around coordination rather than decision-making. If task planning disappears, coordination moves to artifacts, and updates are auto-generated, then a lot of traditional delivery and management functions have to reinvent themselves. I don't have answers for that either, but ignoring it would be dishonest.

Protecting Human Time

If human time directed at agents is extraordinarily productive, and human time in meetings is nearly unproductive, then the interface to the rest of the organization has to be async.

Agent-generated summaries. Auto-generated weekly updates pulled from the planning artifact and commit history. Bots that can answer "what's the status on X?" by querying the repo. Recorded demos instead of live walkthroughs.
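
The status bot can start as something embarrassingly small: a script that searches the planning artifact and reports what it finds. The OUTCOMES.md path and its structure are hypothetical; this is grep with manners, not a product.

```python
# status_bot.py: answer "what's the status on X?" by searching the planning artifact.
# Assumes a hypothetical OUTCOMES.md at the repo root.
import sys
from pathlib import Path

query = " ".join(sys.argv[1:]).lower()
lines = Path("OUTCOMES.md").read_text(encoding="utf-8").splitlines()

matches = [line for line in lines if query and query in line.lower()]
print("\n".join(matches) if matches else "Nothing in the planning artifact mentions that yet.")
```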

The goal is to provide the signals the organization needs without pulling engineers into coordination overhead. Marketing gets their launch timing. Sales knows what's coming. Leadership sees progress. But engineers stay in flow, directing agents instead of sitting in meetings.

This is a shift in how the rest of the organization interacts with engineering—from synchronous check-ins to async consumption of generated updates. It requires trust, and it requires the generated updates to actually be good. Another thing to figure out.

What Leadership Becomes

There's an implicit shift here in what engineering leadership does.

If engineers orchestrate agents, and agents decompose work, and planning artifacts encode intent—then leadership's job changes. It's less about allocating tasks and more about making intent legible. Shaping the environment so that engineers and agents can make good decisions without asking.

This means:

  • Writing clear outcome definitions with the components that make them executable
  • Ensuring strategic context is documented, not just known
  • Removing ambiguity before it becomes a blocker
  • Trusting the system and intervening only when direction needs to change

It's not project management. It's not architecture exactly. It's closer to operationalized judgment—creating the conditions for good decisions to happen without you in the loop.

This fits a theme I keep returning to: judgment over frameworks. The frameworks (sprints, story points, velocity) are breaking down. What remains is the need for good judgment, now encoded in artifacts rather than exercised in meetings.

The Prerequisite Most Organizations Are Missing

There's a harder problem underneath all of this: this model requires executable strategy, and most organizations don't have it.

If you're planning at the outcome level, you need to know what outcomes matter. That requires strategic clarity—real strategy that tells you which features to build and which to ignore, not just aspirational goals dressed up in strategic language.

Roger Martin has a useful framing: good strategy is a decision-eliminator, not a decision-raiser. Strategy should answer questions before they're asked. "Should we optimize for speed or accuracy?" "Do we support this edge case?" "Is cost or latency the binding constraint?" If your strategy is clear, these questions have answers. If it's not, every ambiguity becomes an escalation.

In the agentive era, this matters enormously. When an engineer is working with agents and hits an ambiguous decision point, they have two options: stop and escalate, or make a judgment call and keep moving. If strategy is vague, you get one of two failure modes—either constant interruptions that destroy the speed advantage of agentic development, or autonomous decisions that are locally coherent but don't add up to organizational intent.

What makes an outcome executable? An engineer picking up an outcome needs to be able to run—with agents—without stalling for clarification. That means the outcome artifact has to answer certain questions upfront:

  • Objective: What changes in the world if this succeeds? Not "build feature X" but "customers can do Y."
  • Non-goals: What are we explicitly not optimizing for? This prevents agents and engineers from gold-plating or solving adjacent problems.
  • Constraints: Legal, infrastructure, cost, latency, compatibility—whatever bounds the solution space.
  • Reversibility: How dangerous is getting this wrong? Can we iterate, or do we need to be right the first time?

This looks like a product strategy doc stripped of fluff and tuned for machine and human execution. Most organizations don't produce artifacts like this. They produce vague briefs, or they rely on tacit knowledge that lives in someone's head.
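
Here's a rough sketch of a single outcome written that way. The feature and every detail are invented for illustration; the point is that each of the four questions above has an answer an engineer (or an agent) can act on without a meeting.

```markdown
## Outcome: customers can restore deleted records themselves

Objective: support tickets for accidental deletion go to zero because
customers can restore their own data within 30 days.

Non-goals: not building a general audit trail; not optimizing storage cost
beyond what the retention window requires.

Constraints: deleted data must stay in the same region as the tenant;
restores must not require an engineer to run anything by hand.

Reversibility: the retention window can change later; the external API
surface is the part we need to get right the first time.
```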

The outcome artifact is strategy made operational. It's the mechanism by which strategic intent gets encoded in a form that both humans and agents can execute against. Without it, you're relying on engineers to intuit what leadership meant—and hoping the agents they're directing share that intuition.

In the agentive era, vague strategy doesn't just cause slow execution—it causes fast misalignment. When delivery was slow, there was time to course-correct. Misunderstandings surfaced in sprint reviews, in QA, in the long feedback loops of traditional development. Now you can build the wrong thing quickly and confidently. By the time anyone notices, you've shipped.

Most organizations run on gut feel. A leader has an intuition, says "let's go do project X," and the team executes. That worked okay when execution was slow, because there was time for the thinking to get extracted through meetings and conversations. The slow cycle gave space for clarification.

Now that's a bottleneck. The engineer picks up "do project X," starts working with agents, and immediately hits decisions that require understanding the why behind the gut feel. What tradeoffs are acceptable? What does success look like? If that context is locked in someone's head, the engineer is blocked—or worse, they guess wrong and build the wrong thing fast.

The agentive era makes weak strategy painfully visible. When you can deliver features in days, you notice immediately when you don't know what features to build. The lack of direction that was always there becomes impossible to ignore.

This isn't a problem I can solve with better planning tools. But it's worth naming: if your organization doesn't have executable strategy, you have a bigger problem than planning methodology. The agentive era won't break you, but it puts a ceiling on how much you can gain.

Where I Am

I'm trying things. The markdown-in-repo approach. Outcome-level planning. Protecting engineering time from coordination overhead. Hoping that patterns emerge as we accumulate history in the new mode.

I don't know if it works yet. I don't know how to give the rest of the organization the predictability they need while we figure this out. I'm living in the gap between how engineering now works and what the organization needs from engineering.

What I do know: the old model is broken, and pretending otherwise helps no one. Planning was always probabilistic—we just had tools that let us pretend it wasn't. Now the pretense is unsustainable.

If you're in a similar situation, I'd like to hear what you're trying. This is a problem we're going to have to solve together, because I don't think any of us have the answer yet.