You Probably Built a Platform for the Wrong Reason
Here is a pattern that repeats itself across engineering organizations with remarkable consistency. A company has three or four product teams all building on the same rough domain — auth, payments, notifications, whatever it happens to be. Each team has built their own version. The versions are slightly inconsistent. There’s duplicated work, duplicated bugs, duplicated maintenance burden. Someone senior looks at this and says: we need a platform. A platform team gets created. Eighteen months later, the platform exists, the product teams are resentful about the migration tax, and the original inconsistency problem has been replaced by a new problem, which is that the platform serves the org chart rather than any coherent product need.
This is the most common platform failure, and it’s almost never diagnosed correctly. The post-mortem usually lands on execution problems — the platform team didn’t communicate well, the API design was too rigid, the migration wasn’t prioritized. Occasionally it lands on prioritization — we should have staffed it better, given it more runway. Rarely does it land on the actual problem, which is that the decision to build a platform was made before anyone rigorously established that the use cases shared the same underlying need. They shared the same surface — they all touched payments, or auth, or notifications — but the requirements were different enough that a common abstraction required so many escape hatches and configuration options that it became harder to use than just building the thing directly.
The insight that’s easy to miss: having multiple teams solve the same problem separately is not, by itself, evidence that a platform is the right answer. It’s evidence that something is being solved separately. The question is whether the separately-solved problem is actually the same problem, or whether it just looks the same from the outside.
The Test You Should Run Before Committing
Before standing up a platform team, the work worth doing is forcing an explicit articulation of the shared need. Not “we all need to handle payments” — that’s the surface. The question is whether the specific requirements, the edge cases, the latency tolerances, the error handling contracts, and the compliance constraints are sufficiently aligned that a common implementation would actually serve all the use cases without generating a pile of exceptions. If the answer is yes, you have a genuine platform opportunity. If the answer is “mostly, but team B has a bunch of edge cases,” you have a hard integration problem masquerading as a platform opportunity.
The organizations that build platforms well tend to have done this work honestly. They’ve found two or three concrete use cases where the overlap is genuine and non-trivial, built the platform against those specific use cases until it proved itself, and then let expansion happen organically as other teams pulled the platform toward their needs rather than having their needs pushed into the platform. That’s a different development model from “build the common infrastructure and let teams migrate to it.” It’s slower in the early stage. It produces something that actually gets used.
The other thing the good ones do is keep the platform team directly accountable to the product teams they serve. Not through occasional stakeholder reviews or quarterly roadmap syncs, but through a real dependency: the platform team’s success metrics are the adoption and satisfaction of the product teams building on the platform. When a product team doesn’t migrate, that’s a platform team problem, not a product team problem. That accountability structure is uncomfortable to set up and uncomfortable to maintain, but it’s the only thing that reliably prevents platform teams from optimizing for elegance over utility.
What “Platform Thinking” Actually Means
The term has gotten inflated to the point where it means almost anything — common infrastructure, shared services, internal developer tools, API-first design. But the core of the idea, stripped of the consultant-deck packaging, is actually pretty specific: it’s the decision to solve a class of problems generically rather than solving each instance of the problem specifically. That decision is worth making when the class is real, when the generalization is tractable, and when the cost of maintaining a generic solution is less than the cost of maintaining multiple specific ones.
None of those conditions is automatically met just because multiple teams are solving similar-looking problems. Figuring out whether they’re met requires talking to the teams building the products — not just looking at their codebases from the outside — and understanding whether their requirements actually converge or just appear to from a distance.
The right question to ask is blunt: what problem keeps getting solved separately that shouldn’t be? Not “what infrastructure do we have duplicated,” because duplication is a symptom, not a diagnosis. The teams duplicating the work might be doing it because the requirements genuinely diverge and no common abstraction would serve them. They might be doing it because nobody gave them time to discover and adopt an existing solution. Or they might be doing it because there’s a shared underlying need and the org has never focused the right people on building for it.
Those are three completely different situations with three completely different right answers. The platform decision is only the right answer for one of them.