A decade building products across data, AI, and platform ecosystems — in environments where complexity is the constraint.
Execution keeps getting cheaper; what stays scarce is judgment in messy, high-leverage decisions. That's the work.
I've spent a decade building and scaling products in environments where complexity is the constraint — multi-billion-dollar platforms, dense stakeholder maps, and heavily regulated domains.
The work has always been the same shape: turn data, AI, and platform strategy into durable business outcomes, not just shipped features.
What separates product leaders in AI-shaped markets isn't more process: it's knowing what to build, when to say no, and where to concentrate scarce attention. These are the principles I return to most often.
Pick a lane. Move quickly and learn in public, or commit to best-in-class at a slower cadence. Optimizing for both produces confused direction and mediocre output.
A precise system — even if slightly off — can be tuned. An imprecise one is noise. Sharp goals and unambiguous criteria make strategy adjustments surgical instead of chaotic.
Experiments aren't a hobby. Timebox tests, set kill/ship thresholds upfront, watch the win/loss ratio, and treat consecutive losses as a signal to pivot, not a sunk-cost excuse to continue.
People reach for painkillers before supplements. Solve acute, expensive problems first; you earn permission to add delight later, and you compound trust along the way.
A process is a checklist people follow. A system is an engine that produces outcomes even when you're not in the room. Invest in incentives and feedback loops, not more templates.
Rebuilding to arrive at the same place is waste. Reinvention is justified when it raises the bar by an order of magnitude — on reliability, speed, economics, or strategic control.
Activity is not progress. The goal is maximum business and customer impact for the least necessary work. Focus teams on the delta you create in the world — behaviour changes, economics, resilience — not features shipped.
Walk as far as you can see — then decide and learn. Suddenly, you can see a bit further.
My work sits at the intersection of reliability and economics: designing harnesses, orchestration layers, and unit economics that make AI both dependable and viable.
Most AI agent projects don't fail because the model is weak — they fail because the system around the model isn't designed to handle failure.
Drawing on biomimicry, I focus on harness engineering: feed-forward and feedback guides, guardrails, and sensors that govern what happens when agents go wrong, so the system fails gracefully instead of catastrophically.
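The shape of a harness can be sketched in a few lines. This is a minimal illustration, not an implementation from my work: the function names (`run_with_harness`, `validate`, `fallback`) and the retry policy are assumptions chosen for the example, and the "agent" here is any callable that might fail or return garbage.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HarnessResult:
    output: str
    degraded: bool  # True when a guardrail tripped and we fell back

def run_with_harness(
    agent: Callable[[str], str],
    task: str,
    validate: Callable[[str], bool],
    fallback: Callable[[str], str],
    max_attempts: int = 2,
) -> HarnessResult:
    """Wrap an agent call in a feedback check and a graceful fallback."""
    for _ in range(max_attempts):
        try:
            candidate = agent(task)        # the model call itself
        except Exception:
            continue                       # transient failure: retry
        if validate(candidate):            # feedback sensor / guardrail
            return HarnessResult(candidate, degraded=False)
    # Every attempt failed validation: fail gracefully, not catastrophically.
    return HarnessResult(fallback(task), degraded=True)
```

The point of the sketch is the return type: the caller always gets a usable answer plus an honest signal that the system degraded, instead of an unhandled exception or silent bad output.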
The next advantage won't be "who has the best model?" — it'll be who owns the layer that routes across many models with the right economics and reliability.
That means treating AI infra like a portfolio: separating training capex from inference COGS, defining the true unit of value, and engineering routing, caching, and model choice so contribution margins improve as usage scales.
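The arithmetic behind that portfolio view fits in one function. A minimal sketch, assuming one query is the unit of value; every number, model name, and parameter below is illustrative, not drawn from any real deployment.

```python
def contribution_margin_per_query(
    price_per_query: float,            # revenue per unit of value
    cache_hit_rate: float,             # fraction of queries served from cache
    route_mix: dict[str, float],       # model -> share of cache misses routed there
    cost_per_query: dict[str, float],  # model -> inference cost per query (COGS)
) -> float:
    """Inference COGS only; training spend is capex and stays out of this number."""
    miss_rate = 1.0 - cache_hit_rate
    blended_cost = miss_rate * sum(
        share * cost_per_query[model] for model, share in route_mix.items()
    )
    return price_per_query - blended_cost

# Hypothetical levers: cache 30% of traffic, route 70% of misses to a cheap model.
margin = contribution_margin_per_query(
    price_per_query=0.010,
    cache_hit_rate=0.30,
    route_mix={"small": 0.7, "large": 0.3},
    cost_per_query={"small": 0.002, "large": 0.012},
)
```

The useful property: routing, caching, and model choice all enter the margin multiplicatively, so each lever improves unit economics as usage scales rather than just cutting a fixed cost.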
The real product opportunities lie where infrastructure is breaking: power and compute bottlenecks, memory and data exhaustion, networking and transmission limits, and the architectural gaps in digital twins and fusion-scale systems.
I'm drawn to problems where solving the underlying architecture — rather than adding another wrapper — creates durable advantage for decades.
Every job is task plus judgment. AI speeds the task. Your judgment just got leverage.
A few recent pieces that best capture how I think — on LinkedIn, where most of the conversation actually happens.
Three things: empathy for real users, understanding the system around the product, and naming the bet under ambiguity. Execution keeps getting cheaper — judgment in messy, high-leverage decisions is what stays scarce.
On AI agents and harness engineering — why the real challenge isn't model quality, but designing systems that fail well. Inspired by biomimicry and benchmark gains from changing the harness, not the model.
Why the next advantage in AI won't be a single model — but an orchestration layer routing across a portfolio of models. Borrowing patterns from index funds, cloud control planes, and portfolio theory.
A practical framework for defining the economic unit (query, user-month, or task), separating training capex from inference COGS, and engineering margin levers — model choice, token discipline, routing.
Nine principles for product leadership in an AI era — from "Be fast OR best" to "Systems over process" and "Outcomes over outputs." On culture, signal, and decision quality.
A recent talk on the next tool transition in work. Each prior shift — ATMs, spreadsheets, apps — grew its field by routing execution through machines. The person who can describe what to build becomes the unit of leverage; judgment, not typing, compounds.
I focus my giving on expanding the circle of dignity and opportunity — from animal welfare and direct cash transfers, to children's futures and anti-corruption work that enables healthier economies.
Open to senior product leadership roles, advisory conversations, and opportunities at the intersection of AI, platforms, and complex systems. The best way to reach me is through LinkedIn.