Project 18 · Phase H · Strategy · Hardware: None

Project 18 — Strategy memo

Phase 6.1 in the roadmap. Two pages, no longer. Opinion in the first paragraph. Specific recommendations, not vague gestures.


Why this matters

Engineers can be allergic to writing. The 2-page strategy memo is the highest-ROI artifact of the entire roadmap because:

  1. It forces you to have an opinion. The eight code projects build technical fluency; this one builds judgment.
  2. It's the kind of artifact that travels. A good 2-page memo is the difference between "smart engineer" and "engineer who can be trusted with a roadmap conversation." If you can hand it to a director on day 1 of the job and they learn something, you've established positioning.
  3. It compounds with the technical work. Numbers from projects 01–08 belong in the memo as evidence. The memo turns the projects from a portfolio into a thesis.

How to use this folder

  1. Pick one of the three prompts: 01-where-data-intelligence-should-focus.md, 02-how-cosmos-changes-the-roadmap.md, or 03-the-robotics-opportunity.md.
  2. Use memo-template.md as scaffolding. Don't expand beyond two pages.
  3. When done, save your draft as memo.md in this folder.
  4. Optional: send it to one trusted reader for feedback. Iterate once. Stop.

What "good" looks like

A strong 2-page memo has:

  • A claim in the first paragraph. "Applied Intuition's Data Intelligence team should X because Y." Not a survey, not a question.
  • Three to five concrete recommendations. Each tied to evidence (a number, a trend, a competitor's move).
  • A "what I'd ship in 90 days" section. Specificity is credibility.
  • A "what I might be wrong about" caveat. Two sentences. Shows judgment.

Common failure modes to avoid:

  • Surveying instead of arguing. "There are many players in this space and several approaches" is filler.
  • Hedging the headline. "Applied Intuition might want to consider exploring…" is a non-claim.
  • Borrowed numbers without source. Cite. The docs in /docs/ give you most of what you need.
  • Implementation detail in the memo body. Save it for an appendix or a follow-up doc.

Format constraints

  • 2 pages, ~1000–1300 words. Strict.
  • Markdown. No fancy formatting. Headers, short paragraphs, bullet lists where appropriate.
  • One small chart or diagram is welcome (Mermaid, ASCII, or a referenced PNG). Two is too many.
  • Cite at least 3 primary sources (company press releases, papers, regulatory filings) — link to URLs from the /docs/ folder.
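
To enforce the word budget mechanically, here is a minimal Python sketch. It assumes the draft is saved as memo.md per the instructions above; the function names and the verdict strings are illustrative, not part of any tooling in this repo.

```python
# Check a Markdown draft against the ~1000-1300 word budget.
# Assumes the draft lives at memo.md in this folder.
from pathlib import Path


def word_count(text: str) -> int:
    """Count whitespace-separated tokens (good enough for a budget check)."""
    return len(text.split())


def check_budget(path: str = "memo.md", lo: int = 1000, hi: int = 1300) -> str:
    """Return a one-line verdict on the draft's length."""
    n = word_count(Path(path).read_text(encoding="utf-8"))
    if n < lo:
        return f"{n} words: room to add evidence"
    if n > hi:
        return f"{n} words: cut at least {n - hi}"
    return f"{n} words: within budget"
```

Run it after the "cut 30%" pass. A raw token count slightly over-counts Markdown markup, which is fine for a budget this coarse.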

Suggested time budget

  • 30 min — re-read the three memo prompts, pick one, free-write a "what I think" paragraph without consulting the docs.
  • 60 min — find evidence in /docs/ and elsewhere; build a one-page outline with claims and supporting numbers.
  • 90 min — first draft.
  • 30 min — let it rest at least an hour, then cut 30%. (You'll hit your word count this way.)
  • 30 min — second draft, tighten the headline, sharpen the 90-day section.
  • (optional) 30 min — one round of trusted-reader feedback.

Total: ~4 hours of focused work. Don't let it stretch.


What to do after writing

  • Save the memo as memo.md in this folder.
  • Update /docs/00-overview.md if your memo refines or contradicts anything in there. Link from the memo to the relevant doc section.
  • Consider sharing. A polished memo is also a writing sample for the role itself. Don't overthink this — put it on a personal site or in a private GitHub gist.

Files in this project

  • 01-where-data-intelligence-should-focus.md
  • 02-how-cosmos-changes-the-roadmap.md
  • 03-the-robotics-opportunity.md
  • README.md
  • memo-template.md

Notebook (notebook.py) is in jupytext percent format — open in VS Code or convert with jupytext --to notebook notebook.py.
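
For reference, the percent format marks cell boundaries with "# %%" comment lines, so the file stays valid Python. A minimal sketch of the structure (cell contents here are illustrative, not the actual notebook):

```python
# Jupytext percent format: each "# %%" line starts a new cell,
# and "# %% [markdown]" marks a Markdown cell. Plain Python otherwise.

# %% [markdown]
# # Evidence scratchpad
# Sanity-check numbers from /docs/ before they go in the memo.

# %%
words_lo, words_hi = 1000, 1300             # the memo's word budget
per_page = (words_lo + words_hi) // 2 // 2  # midpoint, split across 2 pages
print(per_page)
```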

Memo prompts

01-where-data-intelligence-should-focus.md

Memo prompt 1 — Where Applied Intuition's Data Intelligence team should focus, 2026–2027

The question

Of all the layers an AV/Physical-AI tooling company could own — ingestion, scenario authoring, sensor sim, auto-labeling orchestration, scenario coverage metrics, regression eval, world-model integration, defense data infrastructure, robotics adjacency — where should the Data Intelligence team disproportionately invest in the next 12–18 months, and where should it explicitly choose not to play?

What you need to engage seriously

Before drafting, read (or at minimum skim) the relevant docs in /docs/.

Frames to consider (pick the ones that fit your argument; don't try to use all)

  • The four loops (collect / curate / label & train / eval) from /docs/00-overview.md. Argue which loop is the rate-limiter for AI's customers and where investment buys the most leverage.
  • The "system of record" thesis. Eval data is longitudinal, auditable, and stack-coupled. Training data is fungible. Argue whether this means AI should double down on the eval/V&V backbone vs the training-data side.
  • The Cosmos overlap. Where does Nvidia's ecosystem make AI's investment redundant? Where does it amplify AI's value (because customers using both need glue)?
  • The defense expansion. EpiSci + the $171M CDAO contract is a real pillar — should Data Intelligence resource explicitly toward defense modalities or stay AV-pure?
  • The robotics opportunity. The Seegrid AMR partnership is small; Nvidia GR00T is large. Is "AV-first, robotics-second" the right phasing, or does AI lose the window?

What a strong memo on this prompt looks like

  • A clear answer to "what to disproportionately invest in" and a clear answer to "what to deprioritize." Most memos do the first and skip the second; the second is where the courage of the argument shows.
  • Numbers from the docs and from your own project 17 capstone if available — "the worst-slice mAP improvement we measured in our mini-engine was X% in one iteration" is a credible anchor.
  • A 90-day deliverable that the team could actually ship — e.g., "ship a Cosmos-aware ingest path in Data Explorer that auto-tags WFM-generated frames with provenance and feeds them into Validation Toolset's eval harness, behind a feature flag."

Common shapes the argument can take (pick or invert)

  1. "Eval is the moat." Argue that scenario authoring + scenario coverage + regression eval is structurally non-substitutable by foundation-model players, and that AI should pull engineering toward that pole.
  2. "Be Cosmos-aware, not Cosmos-rival." Argue that the right play is to integrate Cosmos as a backend and own the conditioning + labels + V&V — and that fighting Nvidia on photoreal rendering is a losing trade.
  3. "Robotics now." Argue that AV is mature enough that the next 5x of valuation comes from extending the data engine to manipulation and humanoid logs — and that delaying the robotics investment cedes the field to Nvidia + early-stage robotics startups.
  4. "Defense as the long-cycle bet." Argue that the AI Defense unit is the highest-LTV customer and should drive the data-platform roadmap, with AV/robotics as adjacent customers of the same backbone.

Each of these is defensible. Pick one (or combine two) and commit.

Anti-patterns

  • "AI should do all of these things." If everything is a priority, nothing is.
  • "AI should just do better Simian." That's an internal product-improvement memo, not a strategy memo.
  • "AI should beat Nvidia at world models." Nvidia spent ~$10B on R&D in 2024 and has the foundation-model lab. Don't fight on that ground.

02-how-cosmos-changes-the-roadmap.md

Memo prompt 2 — How Nvidia Cosmos changes the Applied Intuition roadmap in the next 18 months

The question

Nvidia released Cosmos World Foundation Models at CES January 6, 2025; followed it with a major release at GTC March 18, 2025; shipped Cosmos Predict 2 (April–June 2025) and Cosmos Predict 2.5 + Transfer 2.5 + Reason 2 later in 2025; and released the Cosmos-Drive-Dreams open synthetic-data pipeline. Cosmos is the most credible competitive threat to the photoreal-rendering layer of every classical synthetic-data product, including parts of Applied Intuition's Spectral.

How should Applied Intuition's product roadmap change in the next 18 months in response? Be specific about which products are threatened, which are reinforced, and what new products or integrations the company should ship.

What you need to engage seriously

Frames to consider

  • Layer-by-layer threat assessment. Use the table in /docs/05 §F.4: which AI product layers does Cosmos threaten high vs medium vs low?
  • The hybrid that's actually winning. Wayve and Waabi both run the same pattern: classical sim for ground truth plus a world model for photoreal. AI's Neural Sim is the same pattern. The question: how aggressively should AI productize this hybrid as a turnkey offering?
  • Cosmos-aware ingestion. Customers using Cosmos Predict to generate corner cases need a place to land that data with provenance, scenario tags, and quality scores. Should AI build the ingest bridge into Data Explorer / Validation Toolset, or fight Cosmos at the rendering layer?
  • The synthetic-data shutdown signal. Datagen and Synthesis AI both shut down in 2025. What does that say about the survival economics of standalone synthetic-data businesses, and what does it imply for AI's positioning relative to Parallel Domain, AnyVerse, Cognata?

What a strong memo on this prompt looks like

  • A clear taxonomy of threat: which AI products lose (most likely: pure photoreal-rendering capability of Spectral on isolated frames), which AI products gain relevance (scenario authoring, ground-truth label generation, ODD coverage metrics, V&V workflows that wrap Cosmos), and which are roughly neutral.
  • A specific product or integration to ship in the next 6 months (a "Cosmos Transfer plug-in for Spectral", a "Cosmos Predict ingest path for Data Explorer with provenance", a "Cosmos-Drive-Dreams compatibility mode for Test Suites").
  • A specific product to deprecate or descope. The hard part of strategy is naming what to stop. If Spectral's photoreal-renderer team could be reallocated to Validation Toolset's eval harness, what would change?
  • A position on the Foretellix integration with Nvidia (which is real — Foretify on Omniverse, Cosmos Transfer integration in 2025). AI is now competing with Foretellix-on-Cosmos as well as with Cosmos directly.

Common shapes the argument can take

  1. "Be the conditioning layer." Concede the photoreal renderer to Cosmos. Double down on what produces the conditioning inputs (HD maps, BEV layouts, scenario scripts, ground-truth labels) — these are durable.
  2. "Own the V&V wrapper." Cosmos generates the data but cannot certify it. AI's Validation Toolset is the certifying frame. Position every Cosmos output as something that needs to flow through AI's eval harness to be usable in production.
  3. "Build the bridge." AI's value is not building Cosmos; it's making Cosmos consumable by 18 of the top 20 OEMs. Ship integrations, ingest paths, and provenance tracking that turn Cosmos from a foundation model into a product feature.
  4. "Keep both renderers." Some safety cases need deterministic, certifiable classical rendering (ISO 21448 SOTIF). Some perception-training tasks need diversity-on-demand (Cosmos). The right answer is dual-renderer with a clear policy for when to use which.

Anti-patterns

  • Existential framing. "Cosmos is going to kill Applied Intuition" is wrong; AI raised at $15B post-Cosmos. Calibrate the threat.
  • Maximalist denial. "Cosmos doesn't threaten us at all" is also wrong; the photoreal-rendering layer is genuinely under pressure.
  • Punting the engineering question. The strongest memos name a specific product team and a specific 6-month deliverable.
  • Missing the Foretellix angle. Foretellix is the most direct AI competitor after Cosmos integration; ignoring them weakens the analysis.

03-the-robotics-opportunity.md

Memo prompt 3 — The robotics opportunity for Applied Intuition

The question

Applied Intuition's robotics presence as of 2026 is small — a Seegrid AMR partnership, internal humanoid research — relative to its $15B-valued AV business and to Nvidia's vertically integrated GR00T/Isaac stack. The same data-engine primitives that AI sells into AV (scenario libraries, coverage metrics, synthetic data, sim-to-real harnesses, curation pipelines) are arguably more valuable in robotics, where nobody has solved them at production grade.

What's the right phasing and scope of Applied Intuition's robotics push? What products should it ship, in what order, and against which customer types?

What you need to engage seriously

Frames to consider

  • The "5–7 years behind AVs" framing. The AV-tooling pattern is mature; the same patterns in robotics are early. Argue whether AI should pull forward the AV playbook or wait for the robotics market to settle.
  • Customer type. Different robotics buyers have different needs: humanoid OEMs (Figure, 1X, Apptronik), industrial AMR operators (GXO, DHL, Amazon Robotics), defense (DoD ground/air/maritime), warehouse incumbents (Symbotic, Ocado), surgical / specialty robotics. Argue which segment AI should pick first.
  • Cross-embodiment. No single robot embodiment generates enough data. The data-platform play is about embodiment normalization — mapping kinematics, sensor configs, and action spaces across robots. LeRobot is the OSS comparable; nobody has the enterprise version. Should AI build it?
  • Seegrid as a beachhead vs Seegrid as a sideshow. Is the AMR partnership the thin end of the wedge or a customer-relationship one-off?
  • Defense as a leveraged path. EpiSci + the $171M CDAO contract already covers air/sea/land/space autonomy for DoD. The data backbone for defense robotics may be more tractable than commercial humanoids.
  • Nvidia avoidance. GR00T owns the foundation-model layer. Where does AI build alongside vs against Nvidia's stack?

What a strong memo on this prompt looks like

  • A clear customer pick (or a clear "two-customer phased plan"). Vague "robotics in general" is the failure mode.
  • A specific first product. Examples that have been argued by others:
    • "Robotics Data Explorer" — the LeRobot enterprise SKU. Curation, scenario taxonomies, embedding mining for manipulation logs.
    • "Robotics Validation Toolset" — scenario-based eval for manipulation, with cross-embodiment normalization. Sells into Figure / Apptronik / Agility's QA teams.
    • "Defense Robotics Backbone" — extending the CDAO AEP work into ground/aerial robotics data ingest + V&V.
  • A position on the GR00T integration question. AI cannot ship a competing humanoid foundation model; should it ship tooling that integrates GR00T (and pi/Helix and Octo and OpenVLA), or stay foundation-model-agnostic at the policy layer?
  • A 90-day deliverable that's testable on real customer data — e.g., "ship a LeRobotDataset v3.0 ingest path in Data Explorer, profile a single customer's manipulation log corpus, surface ODD imbalances, and present back to that customer."
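
As a toy illustration of what "surface ODD imbalances" could look like in code, here is a hedged sketch. The function name, the tag vocabulary, and the half-of-uniform threshold are all assumptions for illustration, not an Applied Intuition or LeRobot API.

```python
from collections import Counter


def odd_imbalance(clip_tags):
    """Given one ODD tag per log clip (e.g. 'indoor', 'outdoor', 'dim-light'),
    return each tag's share of the corpus and whether it is under-represented.
    Heuristic: flag any slice below half of a uniform share."""
    counts = Counter(clip_tags)
    total = sum(counts.values())
    floor = 1 / (2 * len(counts))  # half of 1/num_tags
    return {tag: (c / total, c / total < floor) for tag, c in counts.items()}
```

On a corpus that is 80% indoor clips, the two 10% slices get flagged — which is exactly the kind of finding you'd present back to the customer in the 90-day deliverable.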

Common shapes the argument can take

  1. "AMR-first, humanoids-later." AMR is a real revenue market today; humanoids are pre-commercial. Build the data platform on AMR with Seegrid as a lighthouse, then port to humanoids in 2027–2028.
  2. "Defense-first, commercial-later." EpiSci + CDAO is an existing contract with budget. The same data backbone serves DoD ground robotics; commercial AMR/humanoid is a downstream port.
  3. "Cross-embodiment data layer." Don't pick a vertical; ship the cross-embodiment normalization layer that all robotics customers need. LeRobot's enterprise gap.
  4. "Don't enter robotics." Explicitly defensible: the AV business is not yet finished; Nvidia owns the robotics ecosystem; AI's $15B priced AV-leadership, not robotics. Stay focused.

Each is defensible. The strongest memos commit.

Anti-patterns

  • "Robotics is the future, AI should invest." This is a non-claim.
  • Generalizing about "robotics" without naming a customer or robot class. Manipulation, locomotion, AMR, humanoid, and surgical have very different data-platform needs.
  • Ignoring the AV-vs-robotics opportunity cost. Resource allocated to robotics is resource not allocated to closing the eval / scenario-coverage gap in AV. Argue why the trade is correct.
  • Underestimating Nvidia. GR00T is shipping, the ecosystem is forming, and the foundation-model layer is theirs. The AI play is not fighting on that ground — argue what ground AI fights on instead.

memo-template.md

[Memo title — make it a claim, not a topic]

To: [audience — e.g., Applied Intuition Data Intelligence leadership]
From: [your name]
Date: YYYY-MM-DD
Re: [one-line summary of the recommendation]


TL;DR

[Three to five sentences. Lead with the claim. State the most important reason. Note the most concrete recommendation. End with the biggest risk. If a senior reader stops here, what do you want them to take away?]


Why this matters now

[~150 words. The forcing function — what's happening in the industry that makes this question urgent today, not next year. Cite at least one specific 2024–2026 event with a source link. Connect to the four-loop frame from /docs/00-overview.md if useful.]


The recommendation

[~250–400 words. Three to five concrete recommendations, each in its own paragraph or bullet. Each one should:

  1. Name the specific action ("invest in X", "ship Y in Q3", "stop doing Z").
  2. State the supporting evidence — a number, a trend, a competitor's move (with link).
  3. Note the implementation cost or ownership ("this is a 6-engineer-quarter investment", "this lives with the Data Intelligence team", etc.).

Avoid: hedging, surveying, generalizing.]


What I'd ship in 90 days

[~150 words. One concrete deliverable. Be specific about scope, owner, and success criterion. This is where you separate the memo from a thinkpiece.]


What I might be wrong about

[~80 words. Two or three caveats. The ones that show judgment, not the ones that hedge. "I'm assuming Cosmos Predict 2.5 doesn't ship a 30-second multi-camera variant in the next 6 months — if it does, this thesis flips" is good. "Of course, the future is uncertain" is bad.]


Sources

[At least three primary sources with links: company press releases, papers, regulatory filings. Pull from /docs/ where possible.]