Google relaunched Stitch on March 19, 2026. Multi-screen generation from a single natural language prompt. An AI-native infinite canvas. Voice commands. Direct integrations with Claude, Cursor, and Gemini CLI. Figma's shares dropped 4% the same day. The discourse was immediate: AI is coming for designers.

They're right that something is changing. They're wrong about what it means.

I've been shipping products with AI built into the design process since 2018. Not as a feature or a buzzword. As infrastructure. At Metro Bank, we were embedding ML-driven personalisation into customer journeys before most design teams were even thinking about what a design system token was. At Kraken, we shipped a neobank MVP: $7M in budget, 8 weeks of build time, 30,000 card orders on day one. The entire time, the bottleneck was never tooling. It was never 'we can't generate screens fast enough.' The bottleneck was always clarity. What are we actually solving? For whom? Under which constraints?

Stitch can generate five interconnected screens from a voice prompt. That is genuinely impressive. I've used it. But here is what it cannot do: it doesn't know that your checkout flow sits inside a regulated environment where a specific disclosure must appear before purchase intent is captured. It doesn't know that your card issuance partner has a 48-hour provisioning window, and that failing to account for this in the post-onboarding experience creates a trust gap that support teams spend months firefighting. It doesn't know that your legal team has already rejected three versions of that consent screen because the visual hierarchy implied agreement before the user had finished reading.

Design at the level I've worked at, across crypto, banking, travel super-apps, and regulated lottery products, is not primarily a craft problem. It's a systems problem. The craft matters. But it is downstream of understanding. Every screen I've shipped in a regulated context was shaped by constraints that don't live in a prompt. They live in conversations with compliance teams, in edge cases that only surface after you've mapped the full account lifecycle, in the patterns that legal has rejected twice and finance approved once. A tool that generates five screens in ten seconds has no access to that context. Without that context, what it generates is a best guess at a surface that looks right.

What Stitch and tools like it are actually doing is automating the translation layer. If your job is taking a wireframe and making it look finished, that work is increasingly automatable. It should be. That translation layer was never where the real design value lived. The designers who are genuinely threatened by this are the ones whose entire value proposition was speed of output. You cannot compete on output speed with a model running on Google's infrastructure.

But that is not most senior designers. The people I've hired and scaled at Metro Bank, Almosafer, and Careem, the ones who are genuinely hard to replace, are the ones who walk into a room, hear a brief, and immediately identify what's wrong with the question being asked. That skill does not get automated by a faster canvas. It gets more valuable when everyone around you is moving quickly and making decisions with less thinking.

The Figma versus Stitch narrative is the wrong frame. The question is: as AI handles more of the execution, what does the design role actually become?

My answer: more strategic, more advisory, more about judgment and less about delivery. That is not a threat to good designers. It is an upgrade.

The designers who should be worried are the ones who have spent the last five years getting very fast at Figma and nothing else.

---

Fact Check

Every factual claim in this article, with its source.

Claim: Google relaunched Stitch on March 19, 2026, with multi-screen generation from a single natural language prompt, an AI-native infinite canvas, voice commands, and integrations with Claude, Cursor, and Gemini CLI.

Source: Google Stitch product relaunch announcement, March 19, 2026.

Claim: Figma's shares dropped 4% on the day of Google's Stitch announcement.

Source: Market reporting, March 19, 2026. Specific source not captured in original draft. Verify before re-promotion.

Claim: Kraken neobank MVP: $7M budget, eight weeks, 30,000 card orders on day one.

Source: Jay Tulloch's direct experience leading product design at Kraken.

Claim: ML-driven personalisation at Metro Bank embedded into customer journeys.

Source: Jay Tulloch's direct experience at Metro Bank. The Personetics behavioural analytics platform referenced in the AI fluency article on this site provides further context on this engagement.

Unsourced statements (Jay's opinion or lived experience): All claims about what AI tools cannot replicate (compliance awareness, edge cases, regulatory constraint context); the characterisation of the "translation layer"; professional references to Metro Bank, Almosafer, Careem, and Kraken; the strategic prescription for design leaders. These are Jay's points of view and lived experience, not third-party data.