Lovable just raised $330 million. But anyone with Claude Code could rebuild their product in a week. So what actually matters?
Imagine you open a lemonade stand. You have a special juicer that makes perfect lemonade every time. Great, right? Except tomorrow, every other kid on the block gets the exact same juicer. Now your lemonade tastes the same as everyone else's. Your advantage just vanished.
That is what is happening to AI app builders right now. Companies like Lovable, Replit, and Bolt all let you describe an app and it gets built automatically. But they are all using the same "juicers" underneath — AI models from Anthropic, OpenAI, and Google. The only thing different is the cup the lemonade comes in. And with modern coding tools, copying someone's cup design takes about a week.
So if building things is basically free, what is actually valuable? Nate B Jones argues there are five things AI cannot replace, no matter how smart it gets. But before the list, it helps to see why the current market is so fragile.
The AI app builder market looks enormous on paper. Lovable raised $330 million at a $6.6 billion valuation and generates 100,000 new projects on its platform every single day. Vercel's V0 has 4 million users. Replit claims 25 million developers. But strip away the branding and most of these companies are thin wrappers around the same foundation models from Anthropic, OpenAI, and Google. Their moat is a UI that can be replicated in a week with tools like Claude Code or Codex.
The conventional escape hatch is to train your own model. Cursor did this for code editing. Replit trained code completion models using Databricks and released them on Hugging Face. Vercel built a custom autofix model with Fireworks AI and updated its terms of service to use customer code for training. But Jones argues this is not the real differentiator. The companies that survive own something structural that model providers cannot replicate.
Replit owns the runtime — the compute environment where your application actually executes. Claude can write code but it cannot run your code in production. Vercel owns the deployment infrastructure that already hosts production apps for OpenAI, Anthropic, Nike, and PayPal. They are not a wrapper with hosting; they are an infrastructure company that added an AI front door. Notion owns 100 million users' worth of structured organizational data. They offer a model picker (Claude, ChatGPT, Gemini) because they don't care which model wins — every model needs their data to be useful.
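The model-picker idea is worth making concrete. A minimal sketch of the pattern (all names here are illustrative, not Notion's actual API): the model slot is interchangeable, while the pipeline that retrieves the platform's proprietary context is the fixed, valuable part.

```python
# Sketch of the "model picker" pattern: any model can fill the slot,
# but the platform's structured context is the constant.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Workspace:
    """Stands in for a platform's structured user data."""
    pages: list[str]

def retrieve_context(ws: Workspace, query: str) -> list[str]:
    # Naive retrieval: keep pages that mention the query term.
    return [p for p in ws.pages if query.lower() in p.lower()]

def answer(ws: Workspace, query: str, model: Callable[[str], str]) -> str:
    # The model is swappable; the context pipeline is not.
    context = "\n".join(retrieve_context(ws, query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return model(prompt)

# Stub "models" standing in for Claude / ChatGPT / Gemini.
def model_a(prompt: str) -> str:
    return "A:" + prompt.splitlines()[-1]

def model_b(prompt: str) -> str:
    return "B:" + prompt.splitlines()[-1]

ws = Workspace(pages=["Q3 roadmap: ship agents", "Lunch menu"])
print(answer(ws, "roadmap", model_a))  # A:Question: roadmap
print(answer(ws, "roadmap", model_b))  # B:Question: roadmap
```

Swapping `model_a` for `model_b` changes nothing about the platform's position, which is the point: whichever model wins, it still needs the workspace data to be useful.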
This pattern reveals the five durable verticals:
1. Trust. When anyone can generate a professional-looking checkout page in seconds, "looks legitimate" means nothing. Stripe processing over a trillion dollars in transactions is a trust signal, not a technical feature. In the agentic economy, AI agents transacting autonomously need verified trust signals to decide which services are safe. Trust providers become the routing layer for trustworthy web traffic.
2. Context. AI is a general-purpose tool; it becomes powerful only with your specific data. Notion, Salesforce, Epic, Palantir, Snowflake, and Databricks all own context that gets locked inside their platforms. An agent without context is a chatbot; an agent with your context is a dependable junior employee.
3. Distribution. When supply is infinite, curation becomes the scarcest resource. Google, Apple, Amazon, TikTok, YouTube, and Substack own how humans pay attention online. The next frontier is agent discovery: an agent-native app store where AI agents find businesses to transact with. This goes far beyond MCP servers — it requires rethinking transaction speed, API legibility, and service delivery for non-human consumers.
4. Taste. When production is free, choosing what to produce is the entire game. The analogy: after GarageBand and Suno made music production free, the winners were the artists with taste, not the most expensive studios. In the agentic web, taste manifests as orchestration quality: the thousand small editorial decisions about how an agent should behave. Even as automated research evolves agent harnesses, humans remain accountable for direction and goals.
5. Liability. "The AI did it" will not survive in court. When an AI-generated financial plan loses money or an AI-built medical app gives bad advice, someone is on the hook. Deloitte and McKinsey are repositioning as AI assurance providers. ElevenLabs offers insurance for voice agents. Regulated SaaS platforms like Veeva and Elation inherently sell accountability. Liability becomes the governance layer for the agentic economy.
The strategic test Jones proposes: What do you own that still matters if AI gets 10x better? If a better model makes your product obsolete, change your positioning now. If a better model makes your product more valuable, you are building in a durable niche.