The upcoming AI Agent Security Summit will bring together security researchers, enterprise practitioners, and AI infrastructure leaders to examine one of the fastest-emerging areas in cybersecurity: the protection of autonomous AI agents operating inside enterprise systems.
Hosted by Zenity, the summit takes place on May 27, 2026, in San Francisco, expanding an event series that held earlier editions in New York and San Francisco during 2025. Organizers say the 2026 program will continue globally, with additional events planned in New York and across Asia as AI agent adoption accelerates in enterprise environments.
The conference agenda focuses on practical and technical security challenges surrounding agentic systems, including prompt injection attacks, memory poisoning, tool manipulation, runtime defense mechanisms, and behavioral monitoring for autonomous workflows. Sessions will feature new research, enterprise case studies, and discussions of emerging vulnerabilities tied to AI agents increasingly embedded in operational infrastructure.
Featured speakers include Michael Bargury, Vivek Vinod Sharma, Aron Eidelman, Ashay Raut, Aditya Dubey, Travis McPeak, Ben Sadeghipour, Allie Howe, and Jim Reavis.
According to organizers, the summit is designed as a community-focused event rather than a vendor showcase, with discussions centered on real-world attacks, defensive strategies, and operational security lessons emerging from live AI deployments. That distinction matters more than it might first appear: enterprise AI security is evolving so quickly that practitioners are often learning from incidents and edge cases in real time, long before standardized frameworks fully catch up.
Michael Bargury described the current AI adoption wave as a modern gold rush, where organizations are rapidly deploying autonomous systems capable of making decisions, interacting with sensitive applications, and executing actions at scale. As these systems become integrated into critical enterprise environments, traditional security models are increasingly being tested by software that behaves probabilistically rather than predictably — a shift that many security teams are still trying to operationalize.