
You have a roadmap. It assumes the technological landscape will be roughly stable for 12-18 months. That assumption is breaking.
Discovery compression is not a future phenomenon. It is happening now. The AI capabilities available to your product in Q4 will be meaningfully different from Q2. Your competitors are experiencing the same acceleration. Your users' expectations are recalibrating in real time.
This is not a strategy document. It is a survival guide.
Traditional PM planning assumes:
Under discovery compression:
The frameworks you learned are not wrong. They are incomplete. They need to be augmented for an environment where the ground moves.
**Old model**: Annual planning, quarterly adjustments, monthly reviews.
**New model**: Quarterly planning, monthly replanning, weekly capability scanning.
This is not about working faster. It is about holding plans more loosely. The most dangerous thing you can do is execute a 12-month roadmap without replanning for capability shifts.
**Practical move**: Build explicit "capability checkpoints" into your roadmap. At each checkpoint, ask: What can we now do that we couldn't when we planned this? What can competitors now do?
**Old model**: Build core differentiators, buy commodities. The decision is relatively stable.
**New model**: The line moves constantly. What's differentiated today is commoditized tomorrow, and today's core may be tomorrow's commodity.
When AI makes a capability cheap, your custom-built solution becomes a liability competing against purpose-built tools. When AI shifts again, those same purpose-built tools may lag behind what you could now build. The build-vs-buy call cuts both ways, and it expires.
**Practical move**: Track the "months until commoditized" for every feature you're building. If the answer is less than your build time, reconsider.
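The check above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool; the `Feature` fields and the example numbers are hypothetical placeholders for your own estimates.

```python
from dataclasses import dataclass


@dataclass
class Feature:
    name: str
    build_months: float                # estimated time to ship
    months_until_commoditized: float   # best guess: when a model update or off-the-shelf tool matches it


def should_reconsider(feature: Feature) -> bool:
    """Flag features whose commoditization window is shorter than their build time."""
    return feature.months_until_commoditized < feature.build_months


# Illustrative: a 4-month build that the market commoditizes in 3 months fails the check.
search = Feature("semantic search", build_months=4, months_until_commoditized=3)
assert should_reconsider(search)
```

The point of writing it down, even this crudely, is that the estimate gets revisited every time the roadmap does, instead of living in someone's head.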
**Old model**: Watch competitors' products and features.
**New model**: Watch foundational AI capabilities. Your next competitor may not be a company; it may be a foundation model update that enables anyone to build your product.
GPT-5, Claude 4, Gemini 2—these are not just tools. They are capability discontinuities. When they land, some products become trivially replicable and others become newly possible.
**Practical move**: Assign someone to track foundation model development. Not for features, but for capability plateaus and cliffs.
**Old model**: Users compare you to your competitors.
**New model**: Users compare you to the best AI experience they've had anywhere. ChatGPT, Midjourney, and Copilot set baselines that bleed across categories.
A user who has experienced AI that just works will not tolerate your legacy search interface. Expectations are not domain-specific anymore.
**Practical move**: Your baseline is not "better than competitors." It is "doesn't feel broken compared to ChatGPT."

Not everything accelerates. Understanding what is compression-resistant helps you allocate attention.
Still slow (build moats here):
Accelerating (do not assume stability):
Build moats in the first category. Be nimble in the second.
For each feature on your roadmap, annotate:

- **Compression risk**: How likely is this capability to be commoditized soon?
- **Dependency on current limits**: Does the feature assume today's AI limitations persist?
- **Pivot cost**: How expensive is it to change course if the landscape shifts?

Features with high compression risk, high dependency on current limits, and high pivot cost should be time-boxed and reversible.
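A lightweight way to make that rule mechanical is to score each dimension and flag items that are high on all three. This is a sketch under assumed conventions; the 1-5 scale, the threshold, and the item names are illustrative, not part of the original guidance.

```python
from dataclasses import dataclass


@dataclass
class RoadmapItem:
    name: str
    compression_risk: int    # 1 (low) .. 5 (high): how fast could this be commoditized?
    limit_dependency: int    # 1..5: how much does it assume today's AI limits persist?
    pivot_cost: int          # 1..5: how expensive is changing course later?


def needs_timebox(item: RoadmapItem, threshold: int = 4) -> bool:
    """High on all three dimensions -> time-box the bet and keep it reversible."""
    scores = (item.compression_risk, item.limit_dependency, item.pivot_cost)
    return all(score >= threshold for score in scores)
```

An item that scores high on only one or two dimensions can proceed normally; the trap is the feature that is risky on every axis at once.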
**Layer 1: Problem (most stable)**: The user problem you solve. Problems are human. They compress slowly.
**Layer 2: Solution approach (moderately stable)**: The general approach to solving the problem. This can shift, but not weekly.
**Layer 3: Implementation (least stable)**: The specific technical approach. This may need to change quarterly.
Build identity and strategy around Layer 1. Be flexible on Layers 2 and 3.
Maintain a list of "if this becomes possible, we should X" triggers:
Review this list monthly. When a trigger fires, execute the response.
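The trigger list is simple enough to keep as structured data, which makes the monthly review a lookup rather than a memory exercise. A minimal sketch; the two example triggers are illustrative placeholders, not recommendations from the original text.

```python
# Each entry pairs a capability condition with a pre-committed response.
# The contents here are hypothetical examples only.
triggers = [
    ("context windows exceed 1M tokens", "revisit our document-chunking pipeline"),
    ("on-device models match cloud quality", "prototype an offline mode"),
]


def monthly_review(fired: set[str]) -> list[str]:
    """Return the pre-committed responses whose trigger condition has fired."""
    return [response for condition, response in triggers if condition in fired]
```

Deciding the response before the trigger fires is the point: when the capability lands, you execute instead of deliberating from scratch.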
Discovery compression makes product management harder, not easier.
The easy version of the story: AI does more, you do less, everything is simpler. This is wrong.
The real version: AI makes more possible, but also makes more possible for everyone. The landscape of what you could build, what competitors could build, and what users expect expands simultaneously.
The job is not easier. It is different.
What gets easier:
What gets harder:

1. **Audit your roadmap** for compression risk. Flag any feature that assumes AI capabilities will remain static.
2. **Set up capability tracking.** Subscribe to AI research updates. Read the GPT-5 release notes, not just the product updates.
3. **Time-box uncertain bets.** If you're building something that could be commoditized in 6 months, cap the investment.
4. **Identify your compression-resistant assets.** What do you have that AI does not obsolete? Double down there.
5. **Talk to your team.** Your engineers are watching AI developments. Ask them what's shifting. They know.
This is not all defensive. Discovery compression creates opportunities:
The PMs who thrive will be those who scan for these opportunities as eagerly as they defend against threats.
The ones who struggle will be those who either ignore compression (executing static roadmaps) or are paralyzed by it (refusing to commit to anything).
The move is neither rigidity nor paralysis. It is adaptive planning—holding direction firmly and implementation loosely.
Discovery compression is the new operating environment. Learn to build in it.
This is a translational piece connecting speculative mechanics to practitioner needs. For the underlying mechanic, see Discovery Compression. For related practitioner guidance, see For Executives: Scarcity Inversion and Strategic Planning.