The bottleneck has shifted. For decades, the constraint in building products and executing strategy was execution — engineering bandwidth, headcount, data-gathering cycles. That constraint is dissolving. When AI can write code, synthesize markets, model scenarios, and iterate at machine speed, the scarce resource becomes the quality of the judgment directing it.

I've been waiting for this moment my whole career — not because I saw it coming, but because of what I am. I'm both the product person and the engineer. I've never been able to separate the vision from the execution, the strategy from the build. For thirty years that made me hard to categorize. Today it makes me rare.

The product person sees the opportunity and frames the right problem. The engineer builds the system that captures it. In the agentic era, you need both in the same skull — because the work is now about specifying intent precisely enough that autonomous systems can act on it. That's a product skill and an engineering skill simultaneously.

And now intelligence is crossing into the physical world. The same patterns that power a trading agent or a Chief of Staff are starting to run on tractors, robots, and CNC machines. The opportunity isn't just endless — it's the most interesting engineering problem of my lifetime.

What follows is a description of my personal stack — four agents, four domains, one architecture. The goal isn't to show off. It's to make the abstract concrete. This is what the agentic era looks like from the inside.

For the broader strategic context behind why this moment matters: The Golden Age of Strategic Thinking →

The Chief of Staff

Domain · Work
Google Cloud FDE · Chief of Staff Agent

Synthesizes signals across the organization, manages context, prepares briefings, and keeps me operating at director level without losing detail.

Daily Briefing Context Management Signal Synthesis Google Cloud

Running a North America-wide Forward Deployed Engineering practice means managing an enormous amount of signal — customer engagements, competitive intelligence, team capacity, deal stages, product roadmap. The Chief of Staff agent is what keeps that from becoming noise.

Each morning it synthesizes the overnight signal: what moved, what's at risk, what needs my attention today versus this week. It doesn't summarize everything — it reasons about priority. That distinction is what separates an agent from a digest. A digest tells you what happened. An agent tells you what matters.
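The digest-versus-agent distinction can be made concrete. A minimal sketch of the triage step, with hypothetical names and an illustrative urgency-times-impact score (the real agent reasons over richer signals than two scalars):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # e.g. "deal-stage", "team-capacity" (illustrative)
    summary: str
    urgency: float   # 0..1, how time-sensitive
    impact: float    # 0..1, how much it moves the practice

def triage(signals: list[Signal], today_threshold: float = 0.6) -> dict:
    """Rank signals by urgency x impact, then split into attention buckets."""
    ranked = sorted(signals, key=lambda s: s.urgency * s.impact, reverse=True)
    return {
        "today": [s for s in ranked if s.urgency * s.impact >= today_threshold],
        "this_week": [s for s in ranked if s.urgency * s.impact < today_threshold],
    }
```

A digest would emit every item; the agent emits only what clears the "today" bar, which is the priority reasoning described above in its simplest form.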

The Trading System

Domain · Finance
Antigravity · Autonomous Trading Infrastructure

A fully automated CIO triad running in Antigravity Desktop Agent. Six skills, three workflows, a DuckDB local ledger, and a Trading Constitution governing every decision via RAG.

Antigravity DuckDB Gemini 3.1 Pro TradingView / Pine Script RAG

The trading system is the most architecturally complete agent I've built. It runs on a CIO triad model: three portfolio accounts, each governed by a separate mandate encoded in the Trading Constitution. A wealth preservation account focuses on capital protection, a growth account takes high-conviction entries, and a labs account exists for experimenting with and learning complex options positions.

The Constitution is an immutable document that defines risk tolerance, position sizing rules, entry and exit criteria, and the philosophical framework for every decision. It's retrieved via RAG on every synthesis task, which means the agent can't drift from its mandate no matter how compelling the market signal looks in the moment.
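The mechanism that prevents drift is simple: the mandate is retrieved and injected into every synthesis call. A minimal sketch, assuming a keyword-overlap retriever as a stand-in for a real vector store (function names are hypothetical):

```python
def constitution_context(query: str, clauses: list[str], top_k: int = 3) -> str:
    """Retrieve the Constitution clauses most relevant to the task.
    Naive word-overlap scoring stands in for embedding similarity."""
    def overlap(clause: str) -> int:
        return len(set(query.lower().split()) & set(clause.lower().split()))
    top = sorted(clauses, key=overlap, reverse=True)[:top_k]
    return "\n".join(f"[CONSTITUTION] {c}" for c in top)

def synthesis_prompt(task: str, clauses: list[str]) -> str:
    # The mandate rides along on every call, so no single compelling
    # market signal can push the model outside its governing rules.
    return constitution_context(task, clauses) + "\n\nTASK: " + task
```

Because retrieval happens per task rather than per session, even a long-running agent re-reads its mandate at every decision point.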

The skills layer handles the intelligence work: ark-intel tracks institutional flows, company-research does fundamental analysis, daily-briefing synthesizes overnight market conditions, portfolio-reporter tracks position-level performance, strategy-consultant stress-tests a thesis against the Constitution, and digest-publisher produces the morning report. Three workflows — /daily_briefing, /daily_journal, /daily_digest — chain these skills into coherent daily operations.
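The skill-and-workflow split can be sketched in a few lines: skills are functions over a shared context, and a workflow is just an ordered chain. The two stand-in skills below are hypothetical simplifications of daily-briefing and digest-publisher:

```python
from typing import Callable

Skill = Callable[[dict], dict]  # each skill reads and enriches a shared context

def run_workflow(skills: list[Skill], context: dict) -> dict:
    """Chain skills in order; each one sees everything upstream produced."""
    for skill in skills:
        context = skill(context)
    return context

# Hypothetical stand-ins for skills like daily-briefing and digest-publisher.
def daily_briefing(ctx: dict) -> dict:
    return {**ctx, "briefing": f"markets: {ctx['overnight']}"}

def digest_publisher(ctx: dict) -> dict:
    return {**ctx, "digest": ctx["briefing"].upper()}

report = run_workflow([daily_briefing, digest_publisher], {"overnight": "quiet"})
```

The value of the chain is that downstream skills never re-fetch; they reason over what upstream skills already synthesized.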

On the signal side, I built a custom TradingView indicator suite in Pine Script covering volatility trend detection, entry triggers, and volume confirmation. The roadmap closes the loop by wiring TradingView webhooks into broker-API execution via IBKR — at which point the system moves from advisory to fully autonomous.

The Health Agent

Domain · Personal Health
Health Stack · Sensing, Synthesis, Action

Applying the same agentic architecture to personal health — ingesting biometric data, synthesizing patterns, and generating actionable protocols rather than raw numbers.

Biometrics Pattern Recognition Protocol Generation Longitudinal Tracking

The health agent is the most personal piece of this stack and the one still under most active development. The core insight is the same as the trading system: raw data is not intelligence. A wearable that tells you your HRV was 42 this morning is a sensor. An agent that tells you what that means in the context of your sleep trend, training load, and stress markers over the past 30 days — and suggests what to do about it — is infrastructure.

The architecture mirrors what works elsewhere: ingest, synthesize, act. The difference in the health domain is that the action layer has to be conservative by design. The agent recommends; I decide. That governance constraint is intentional.

CAD to CNC

Domain · Making · Physical AI
3D Model Decomposition · Hollow Surfboard Skeletons

Using AI to decompose Fusion 360 surfboard designs into internal skeleton geometry — ribs, spars, and joining structures — optimized for CNC cutting and hollow wooden construction.

Fusion 360 3D Model Decomposition CNC / CAM Parametric Design Physical AI

This one closes the loop between everything else on this page and the physical world — which is where I've always wanted to end up.

A hollow wooden surfboard is structurally complex. The outer shell is the easy part. The internal skeleton — the ribs, the longitudinal spars, the joining structures at the nose and tail — has to be engineered precisely for the specific rocker profile, width template, and thickness distribution of that particular board. Get it wrong and the board is either too stiff, too soft, or structurally compromised. There's no iterating on a physical object the way you iterate on code.

The agent starts with a complete 3D model in Fusion 360 and decomposes it: extracting cross-sectional profiles at regular intervals along the stringer line, computing rib geometry that conforms to the interior shell surface with the correct clearance tolerances, generating the interlocking notch patterns that allow the skeleton to be assembled dry before glue-up, and producing a cut list optimized for material yield on a given sheet size.
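Two of those decomposition steps, station placement and clearance-shrunk rib profiles, can be sketched numerically. This is an illustrative simplification: an ellipse stands in for the real lofted cross-section pulled from Fusion 360, and all dimensions are in millimetres:

```python
import math

def rib_stations(board_length: float, spacing: float) -> list[float]:
    """Station positions at regular intervals along the stringer line."""
    n = int(board_length // spacing)
    return [i * spacing for i in range(1, n + 1) if i * spacing < board_length]

def rib_outline(width: float, thickness: float, clearance: float,
                points: int = 16) -> list[tuple[float, float]]:
    """Interior rib profile: the shell section shrunk by a clearance tolerance.
    An ellipse stands in for the true interior surface at that station."""
    a = width / 2 - clearance      # half-width after clearance offset
    b = thickness / 2 - clearance  # half-thickness after clearance offset
    return [(a * math.cos(t), b * math.sin(t))
            for t in (2 * math.pi * i / points for i in range(points))]
```

The real agent replaces the ellipse with sections sampled from the actual shell geometry, but the structure of the computation is the same: sample stations, offset inward, emit cut profiles.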

What used to take days of manual drafting now takes an hour of supervised generation. The agent does the geometry; I do the judgment — which species of wood for which structural role, where to add material for a fin box, how to account for the inevitable variation between the digital model and the physical blank. That division of labor is exactly right. The machine handles what it's good at. I handle what requires understanding what physical actually costs.

The agent produces geometry. I produce the board. Neither could do it alone.