As enterprises rush to deploy autonomous AI systems into customer support, finance, coding, and internal operations, a new startup is betting that the next major cybersecurity frontier will not be networks or endpoints, but agents. General Analysis has announced $10 million in seed funding led by Altos Ventures, with participation from 645 Ventures, Menlo Ventures, Y Combinator, and a group of strategic investors and angels. It is an early but telling sign that investors increasingly see AI security as its own category, not just an extension of traditional cyber tools.
The company says it is already working with enterprise customers whose support and finance systems touch hundreds of millions of end users. That traction matters because many organizations are now moving beyond chatbots and experimenting with agents that can take actions, access internal systems, issue refunds, review transactions, or interact with customers directly. Once software starts acting instead of merely answering, the risk profile changes dramatically.
General Analysis has tried to make that danger tangible. In one March stress test, its adversarial agent reportedly persuaded 50 live customer-service AI agents to hand over more than $10 million in fabricated perks, including massive gift cards and years of free services, in around three minutes per target. Only five of 55 bots refused. Even allowing for the theatrical framing, the message lands: many deployed AI systems remain surprisingly easy to manipulate.
The founding team brings heavyweight research credentials. CEO Rez Havaei previously worked at NVIDIA and Cohere. He is joined by Maximilian Li, an AI safety researcher from Harvard University, and Rex Liu, a machine learning researcher from California Institute of Technology. That mix of frontier-model experience and academic safety work is exactly the kind of profile investors are chasing right now.
The startup argues that agentic AI creates a security problem unlike classic software. Traditional systems are deterministic: you inspect code, permissions, network flows, and known vulnerabilities. Agents are probabilistic and context-sensitive. They can misinterpret goals, be socially engineered through prompts, leak information through unexpected chains of reasoning, or exploit tools in ways designers did not predict. In short, they fail more like people sometimes do, but at machine speed and scale.
That thesis was reinforced by earlier research from the company involving a widely used Supabase integration in Cursor, where a malicious support ticket could allegedly hijack an internal agent and expose a private database. Engineer Simon Willison referenced the finding through his well-known “lethal trifecta” framework: an AI system with access to sensitive data, exposure to untrusted input, and outbound communication ability. That combination is becoming more common, not less.
What makes General Analysis interesting is its practical stance. Rather than claiming perfect safety, it frames security as measurable risk reduction. That feels more realistic. There may never be a universal setting that makes all agents safe, just as there is no single rulebook that eliminates fraud or insider threats. Instead, companies will need continuous red-teaming, adversarial simulation, layered defenses, monitoring, and constant tuning.
The bigger picture is that AI deployment is outrunning governance. Many firms know there are risks, but delaying implementation can feel commercially impossible. So the market is emerging for companies that help enterprises move faster without flying blind. If cloud computing created massive demand for cloud security, agentic AI may now be creating the same opportunity for behavioral security infrastructure.
General Analysis is still early-stage, of course. Seed rounds buy time more than certainty. But the company is pointing at a real and rapidly expanding problem. As businesses hand more decisions and actions to autonomous systems, the question is no longer whether AI can do the job. It is whether anyone can reliably control what happens when it does.
Leave a Reply