As part of our participation in Microsoft Ignite, Cockroach Labs hosted our latest Cockroach Connect, "The Hidden Infrastructure Challenges Behind AI," in San Francisco. In a day packed with activity, our CEO and co-founder Spencer Kimball moderated a powerhouse panel of investors: Peter Fenton (Benchmark), Satish Dharmaraj (Redpoint Ventures), and Lucas Swisher (Coatue). The conversation dove into the tectonic shifts underway in technology as AI redefines how systems and software are built, and how businesses operate.
The discussion was less about hype and more about architecture: how AI is stressing today's infrastructure and forcing a rethink of how to build resilient, global, always-on systems at scale.
Fireside chat: AI is completely rewriting what scale means
The panel opened with Dharmaraj describing a fundamental change in the nature of workloads. Traditional systems, even those designed for scale, are unprepared for the demands of autonomous AI agents.
“They’re geo-distributed, require low-latency and consistency, and are trying to get one task done together … a kind of atomicity no one’s ever contemplated before.”
This explosion of agentic workloads, running continuously and globally, requires infrastructure that is not just scalable but resilient, consistent, and memory-aware. It's a kind of workload no one has built systems for before, and it's breaking both the assumptions behind the modern tech stack and the stack itself.
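To make the atomicity point concrete, here is a minimal sketch (not from the panel) of how an agent's multi-step task might commit as a single transaction against a CockroachDB cluster. The DSN, table names, and function are hypothetical illustrations; the retry-on-SQLSTATE-40001 loop is the standard client-side pattern CockroachDB documents for contended transactional workloads.

```python
# Hypothetical sketch: one agent task committed atomically, with retries.
import time
import psycopg2

DSN = "postgresql://agent@localhost:26257/tasks"  # hypothetical cluster address

def run_agent_task(task_id: int, max_retries: int = 5) -> None:
    """Commit one agent task's writes atomically, retrying on contention."""
    conn = psycopg2.connect(DSN)
    try:
        for attempt in range(max_retries):
            try:
                with conn.cursor() as cur:
                    # Both statements commit together or not at all: the
                    # single-task "atomicity" the panel describes agents needing.
                    cur.execute(
                        "UPDATE tasks SET state = 'done' WHERE id = %s",
                        (task_id,),
                    )
                    cur.execute(
                        "INSERT INTO audit_log (task_id, note) VALUES (%s, %s)",
                        (task_id, "completed by agent"),
                    )
                conn.commit()
                return
            except psycopg2.OperationalError as exc:
                # SQLSTATE 40001 = serialization failure; safe to retry.
                if getattr(exc, "pgcode", None) != "40001":
                    raise
                conn.rollback()
                time.sleep(0.1 * 2 ** attempt)  # exponential backoff
        raise RuntimeError(f"task {task_id} did not commit after {max_retries} tries")
    finally:
        conn.close()
```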
For Fenton, the current AI wave fits a familiar historical pattern.
“Every major technology disruption pulls through a new database architecture. The internet did that. The shift from mainframe to client-server did that. The idea that we’ll use the same database for the AI era is incoherent.”
Fenton argued that AI workloads, particularly those driven by long-lived, stateful agents, demand a new kind of computational memory. The relational models that supported web-era apps are now colliding with a world of constant, autonomous interaction. Fenton cited the 1998 eBay outage and argued that, as critical events like the recent AWS outage keep occurring, the AI age will demand a similar transformation.
RELATED
Check out Spencer Kimball's keynote speech at RoachFest 2025 Las Vegas on "Future-proofing operational data at scale" below.
Speed vs. resilience: The caution forgotten
Kimball noted that CockroachDB’s founding thesis, resilience above all, is becoming newly relevant. Yet the AI gold rush has brought a familiar tension: speed to market is tempting founders to throw away the guardrails in pursuit of early wins.
Dharmaraj agreed. “Right now people are throwing caution away,” he said. “They want to be first to market and get to revenue. Eventually they’ll run into the walls and learn why resiliency matters.”
Fenton expanded the point: as startups race ahead, they’re accumulating technical debt at unprecedented speed. The pace of debt accumulation, he warned, could cripple even the most promising AI startups. When infrastructure becomes the bottleneck, recruiting and retaining good engineering talent will become more challenging.
Swisher added that when ChatGPT first launched, it didn't matter much if the site went down. What has changed since then is the shift from copilots to agents: with AI agents out in production, whether your underlying technology can survive day-to-day operation matters a lot more.
The enterprise divide: Who will survive AI?
When Kimball asked which established companies will adapt, and which will fade like Blockbuster did, the answers were telling.
Swisher outlined a growing divide between enterprises that lean into AI and those that don't. "The folks that don't buy in will wake up 10 years from now and will be completely irrelevant," he said.
Fenton agreed that adopting AI is a mandate, noting that Blockbuster's demise was far from predictable at the time. "It wasn't obvious in 1997 that Blockbuster would be completely destroyed. They had the customer, they had the assets, but there were a few companies, Netflix was the principal company, who kept the culture of asking how do we reinvent the user experience."
While the push for AI is strong, Swisher also emphasized the critical need to trust the underlying systems. "If I'm AT&T, I need to trust that system, that it works, and that it's reliable." On the same theme, Dharmaraj pointed to securing data infrastructure as a critical piece, without which companies won't survive.
AI isn't just a new application layer; it's ushering in a new kind of architectural foundation. The companies that endure won't be the ones that move fastest, but the ones that build for what comes after the period of fast innovation: reliability, memory, and resilience at global scale.
RELATED
Hear from Cockroach Labs' CTO, Peter Mattis, on this episode of Big Ideas in App Architecture, "How to build an AI-native organization."
The investment cycle: Mania, correction, opportunity
The conversation inevitably turned to capital, and whether the current AI frenzy mirrors past bubbles.
Dharmaraj cautioned that the market is flush with capital and predicted some big failures as valuations normalize. Fenton countered that such cycles are intrinsic to innovation.
“Even in the next 12 months, there will be a company founded that doesn’t exist today that’s worth a billion dollars.”
Swisher offered a tempered optimism. Unlike in 2000, he noted, today's private markets are more robust, with profitable late-stage companies like Databricks staying private longer and soaking up much of the capital. And the stakes are higher: this time, AI isn't just moving workloads from on-prem to the cloud; it's changing the workforce.
As AI agents begin to perform ongoing, dynamic tasks, the line between software and workforce blurs. This shift forces infrastructure to approximate human reliability: always-on, relational, and globally available. That means databases and distributed systems must act as the operational backbone for digital labor.
Building future-proof infrastructure with our community
A huge thank-you to our panelists for sharing their insights on how AI is reshaping the stack from the ground up, and gratitude to our partners and customers who are pushing the boundaries of what’s possible with resilient, distributed infrastructure.
In addition to the panel, Cockroach Connect was happy to host our partner ClickHouse and our customer Automation Anywhere to share their stories. "CockroachDB + ClickHouse: Powering Real-Time AI" showcased how combining CockroachDB's transactional strength with ClickHouse's analytical speed enables real-time, customer-facing AI applications. The Automation Anywhere team led a session on "Scaling Agentic Process Automations," covering how CockroachDB underpins millions of global, fault-tolerant workflows.
Looking to 2026, we’re proud to power the data architectures that make this new AI era possible, and are inspired by the builders, customers, and partners leading the way.