The question I've been asked most often in 2026 is: when does AI get real? My answer is always the same — it got real the moment it had to touch something.
For most of the past decade, artificial intelligence has lived in the data center. It has optimized ad bids, summarized documents, answered questions, generated images. All of this has been remarkable. None of it has had to contend with soil moisture, equipment failure at 3am, a crop that doesn't follow the training data, or the tolerance stack-up in a machined part.
What I Mean by Physical
I've been working on versions of this problem for over twenty years — I just didn't have a name for it. At Verve, we built the mobile location intelligence layer that powered attribution for some of the largest advertisers in the world. The core problem was deceptively simple: a person sees an ad on their phone, walks into a store, and buys something. Can you connect those two events? It sounds like a data problem. What it actually is, is a physical intelligence problem.
To solve it, we had to build a device graph that understood the relationship between anonymous devices and real human movement patterns. We had to reason about signal quality — GPS in an open parking lot versus Wi-Fi triangulation inside a mall versus cell tower inference in a rural market. We had to know, with enough confidence to stake a client's media budget on, whether that device was inside that building at that moment. Nokia and Qualcomm backed us because they saw what we were building: a spatial intelligence layer for the physical world, operating at scale, on hardware that was already in everyone's pocket.
That work taught me something that has shaped everything since: intelligence at the edge is categorically different from intelligence in the cloud. In the cloud, you have time, compute, and clean data. At the edge — on a moving device, in a partially obstructed space, with intermittent connectivity and a sensor suite that was never designed for your use case — you have none of those luxuries. You have to be right anyway. The models have to be smaller, faster, and more robust. The failure modes have to be understood, because you can't watch them happen in real time. The system has to degrade gracefully when the environment stops cooperating.
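To make that concrete, here is a minimal sketch of the kind of reasoning involved, under simplified assumptions I'm inventing for illustration: each signal source reports a position with an accuracy radius, the system fuses whatever happens to be available, and the reported confidence honestly widens as signals drop away. None of the names or numbers come from Verve's actual stack.

```python
# A minimal sketch, assuming a simplified model: each signal source reports
# a position estimate plus an accuracy radius, and we fuse whatever is
# available. Names and numbers are hypothetical, not Verve's real system.
import math
from dataclasses import dataclass

@dataclass
class SignalFix:
    source: str        # "gps", "wifi", or "cell"
    x: float           # position estimate, meters (local planar frame)
    y: float
    accuracy_m: float  # 1-sigma accuracy radius for this source

def fuse(fixes: list[SignalFix]) -> tuple[float, float, float]:
    """Inverse-variance weighted fusion of whatever fixes are available.

    Degrades gracefully: with only a coarse fix, the output has the same
    shape, just with a much larger uncertainty radius.
    """
    if not fixes:
        raise ValueError("no signal available")
    weights = [1.0 / (f.accuracy_m ** 2) for f in fixes]
    total = sum(weights)
    x = sum(w * f.x for w, f in zip(weights, fixes)) / total
    y = sum(w * f.y for w, f in zip(weights, fixes)) / total
    fused_sigma = math.sqrt(1.0 / total)  # fused 1-sigma radius
    return x, y, fused_sigma

def p_inside(x: float, y: float, sigma: float,
             cx: float, cy: float, radius_m: float) -> float:
    """Crude confidence that the device is inside a circular geofence:
    a logistic squash of the margin to the boundary, in units of sigma."""
    margin = radius_m - math.hypot(x - cx, y - cy)
    return 1.0 / (1.0 + math.exp(-margin / max(sigma, 1e-6)))

# Open parking lot: GPS dominates. Inside a mall: GPS drops out and the
# same code runs on Wi-Fi alone, with honestly wider uncertainty.
outdoors = [SignalFix("gps", 12.0, 8.0, 5.0), SignalFix("wifi", 20.0, 4.0, 30.0)]
indoors  = [SignalFix("wifi", 14.0, 9.0, 30.0)]
for fixes in (outdoors, indoors):
    x, y, sigma = fuse(fixes)
    conf = p_inside(x, y, sigma, cx=10.0, cy=10.0, radius_m=40.0)
    print(f"{[f.source for f in fixes]} -> sigma={sigma:.1f} m, P(inside)={conf:.2f}")
```

The point isn't the arithmetic; it's the shape. The same code path produces an answer with GPS and without it, and the uncertainty it reports is how the system degrades gracefully instead of failing silently.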
Physical AI, to me, means intelligence that operates under those constraints by design — not as an afterthought. It means models trained on the messy reality of the physical world, not on the sanitized abstractions of a data warehouse. In-store attribution was an early, narrow version of this. What's coming — in agriculture, in manufacturing, in robotics, in any domain where the compute has to live close to the thing it's sensing — is the same problem at a fundamentally different scale.
For the full AgTech brief, including the Four Structural Hurdles and the Soil Test framework: Clean Rooms and Crop Fields →
The Agricultural Moment
Agriculture is not the only domain where Physical AI is becoming real — but right now it may be the clearest proof point we have. The conditions are forcing the issue: connectivity is scarce, latency is critical, and the cost of a wrong decision is measured in acres, not milliseconds. You can't roll back a bad herbicide pass.
This is why agriculture is ahead of where most people think it is. In 2025, John Deere's See & Spray technology covered over 5 million acres, delivering verified herbicide reductions of 50–60% depending on weed pressure. That's not a pilot. That's a fundamental shift in input economics — money directly back in the farmer's pocket, at scale, running on AI inference at the edge of a moving machine.
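"Latency is critical" is easy to say and easy to quantify. A back-of-the-envelope sketch, with numbers I'm assuming for illustration rather than anything Deere has published: at typical field speeds, the camera-to-nozzle travel time sets a hard per-frame budget, and valve actuation eats part of it.

```python
# Back-of-the-envelope latency budget for camera-to-nozzle spot spraying.
# All numbers are illustrative assumptions, not any vendor's published specs.

def inference_budget_ms(speed_mph: float,
                        camera_to_nozzle_m: float,
                        actuation_ms: float) -> float:
    """Time available for detection + inference before the nozzle
    passes the spot the camera just imaged."""
    speed_ms = speed_mph * 0.44704               # mph -> meters per second
    travel_ms = camera_to_nozzle_m / speed_ms * 1000.0
    return travel_ms - actuation_ms              # what's left for the model

# At 12 mph with ~1 m of camera-to-nozzle offset and ~50 ms of valve
# actuation, the whole perception pipeline gets roughly 136 ms per frame.
budget = inference_budget_ms(speed_mph=12.0, camera_to_nozzle_m=1.0, actuation_ms=50.0)
print(f"per-frame budget: {budget:.0f} ms")      # ~136 ms
```

A cloud round trip can consume more than that entire budget on its own, which is why the inference has to live on the machine.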
At the same time, we're approaching what I'd call a ChatGPT moment for robotics. Companies like Physical Intelligence and Figure — and Google's own Gemini Robotics work — are building generalist foundation models for physical action. The goal is machines that can navigate mud, variable light, and unpredictable crops without being hard-coded for every movement, and that stay functional in environments they have never seen. That's a different category of capability from anything that came before it.
Further up the stack, AI is reshaping discovery itself. AlphaFold 3 is compressing the pre-clinical phases of biological R&D — work that previously took five to seven years — into roughly 24 to 30 months. Ohalo's Boosted Breeding uses proprietary proteins to enable full-genome inheritance and true seed potato at commercial scale for the first time, with AI optimizing cross-selection across millions of possible parent combinations. And WeatherNext 2 generates forecasts in under a minute on a single TPU — previously hours on a supercomputer — producing outlooks up to 15 days ahead, giving the entire agricultural supply chain the ability to model tail risks in minutes rather than weeks.
The near-term ROI is already measurable. In India, Wadhwani AI's precision pest management platform increased farmer profits by 26% and reduced pesticide use by 38%. In developed markets, agentic robotics are delivering net benefits of $20–$30 per acre through input savings alone. These are not projections. They are results.
My interest in agriculture isn't purely professional. My mother grew up on a vineyard. Her father farmed. I've made wine, and I understand something about terroir — the way that soil composition, microclimate, sun exposure, and the particular character of a season converge in a piece of fruit. It cannot be fully modeled. It can only be tasted. That gap between the measurable and the real is exactly what draws me to this domain.
What draws me to agriculture as a proving ground for Physical AI is that it compresses every hard problem into a single domain: edge inference, real-time sensing, unstructured environments, biological variability, economic constraints, and human trust. If you can solve it in the field, you can solve it anywhere.
The Shop as Laboratory
I have a full shop in Encinitas. I built my own CNC router. This is not a metaphor — it's a laboratory for the same problem I work on professionally. When a toolpath fails, there is no retry button. The wood is gone. The part is wrong. You learn tolerance, you learn material behavior, you learn that the model in your head and the model in the machine are never exactly the same.
That gap — between the digital model and the physical result — is exactly what Physical AI has to close. It's the most interesting engineering problem of the next decade.
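Tolerance stack-up is a small, concrete instance of that gap. The two standard ways to add up a chain of tolerances give meaningfully different answers, and the more optimistic one rests on statistical assumptions the physical world is happy to violate. The dimensions below are illustrative, not from any real drawing of mine.

```python
# Worst-case vs. statistical (RSS) tolerance stack-up for a chain of
# features. Tolerances are illustrative, not from a real part drawing.
import math

# Four features in a stack, each with a +/- tolerance in millimeters.
tolerances_mm = [0.05, 0.10, 0.05, 0.02]

worst_case = sum(tolerances_mm)                     # every error in the same direction
rss = math.sqrt(sum(t * t for t in tolerances_mm))  # assumes independent, centered errors

print(f"worst case: +/-{worst_case:.3f} mm")        # +/-0.220 mm
print(f"RSS:        +/-{rss:.3f} mm")               # +/-0.124 mm
```

The RSS figure assumes every error is independent and centered. Humidity, tool deflection, and grain direction break those assumptions routinely, and the machine doesn't care what the spreadsheet said.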
Where It Goes from Here
The structural shift I'm watching is from vertical silos to horizontal intelligence layers. Just as Android standardized mobile infrastructure and let everyone build on top of it, the physical world needs a foundational open stack: multimodal reasoning models, cloud operating systems, and world models like Google Earth Engine that simulate the biosphere itself. Big Tech provides the horizontal layer. Domain experts — seed breeders, machinery manufacturers, agronomists — build proprietary vertical value on top without reinventing AI infrastructure.
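In code, the split looks something like this. It's purely a sketch of the architecture I'm describing; the interface names are hypothetical, not anyone's actual API.

```python
# A sketch of the horizontal/vertical split described above. Interface
# names are hypothetical, invented to make the architecture concrete.
from typing import Protocol

class WorldModel(Protocol):
    """Horizontal layer: generic multimodal perception and simulation,
    built once by the platform and shared across every vertical."""
    def perceive(self, sensor_frame: bytes) -> dict: ...
    def simulate(self, state: dict, action: str) -> dict: ...

class AgronomyAdvisor:
    """Vertical layer: domain expertise built on top of the horizontal
    stack. It owns crop knowledge, not the AI infrastructure."""
    def __init__(self, world: WorldModel):
        self.world = world

    def recommend(self, sensor_frame: bytes) -> str:
        state = self.world.perceive(sensor_frame)
        # Spray only if the simulated pass would actually clear the weeds.
        if self.world.simulate(state, "spray")["weed_coverage"] < 0.01:
            return "spray"
        return "hold"

# A toy horizontal implementation, just to show the wiring.
class StubWorld:
    def perceive(self, sensor_frame: bytes) -> dict:
        return {"weed_coverage": 0.04}
    def simulate(self, state: dict, action: str) -> dict:
        # Pretend spraying removes most of the detected coverage.
        return {"weed_coverage": state["weed_coverage"] * 0.1}

print(AgronomyAdvisor(StubWorld()).recommend(b""))  # -> "spray"
```

The Protocol is the whole point: the vertical depends on a contract, not on whose models sit behind it.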
In that model, agriculture is one vertical. Manufacturing is another. Logistics, construction, energy — the same architecture applies. The intelligence layer is the same. The constraint is the same. Physical AI isn't about any single industry. It's about what happens when you stop treating the physical world as an edge case and start treating it as the primary environment.
That's the work. That's what I'm building toward.