Semgrep is drawing a clear line about where application security is heading, and it’s not toward pure AI or toward legacy scanning alone, but toward something more blended and, frankly, more practical. With the launch of Semgrep Multimodal, the company is combining AI reasoning with its rule-based analysis engine across detection, triage, and remediation. The claim is bold but telling: up to eight times more true positives and roughly half the noise compared to using foundation models on their own, with early deployments already surfacing dozens of zero-day vulnerabilities in customer environments.
The timing matters. Security teams are no longer dealing with code written at a human pace. AI-assisted development has shifted the baseline, flooding repositories with new code and turning pull request queues into something closer to a constant stream. Even strong remediation rates leave gaps that compound quickly at that scale. Many teams have experimented with LLMs to keep up, only to run into familiar problems—outputs that vary from repo to repo, hallucinations that erode trust, and token costs that spiral once you move beyond controlled demos. That gap between promise and production is exactly where most AI security tooling has stalled.
Semgrep’s approach leans into that tension instead of ignoring it. Traditional SAST still does a solid job catching known patterns like injection flaws or exposed secrets, but it struggles with the messier category of business logic issues—authorization flaws, IDORs, subtle authentication bypasses—that require understanding context and intent. LLMs can reason through those scenarios, but at scale they tend to produce too much noise. Multimodal tries to bridge that divide by pairing deterministic program analysis with model-driven reasoning, effectively letting each side do what it’s best at.
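The split between the two flaw classes is easier to see in code. Below is an illustrative sketch, written for this article rather than taken from Semgrep: the first function contains an injection-style flaw with a fixed syntactic shape that a deterministic rule can match on pattern alone, while the second contains an IDOR that is syntactically unremarkable and only looks wrong once you know orders have owners. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Order:
    owner: str
    total: int

# Toy in-memory store standing in for a database.
ORDERS = {1: Order(owner="alice", total=40), 2: Order(owner="bob", total=99)}

def build_query(name):
    # Injection flaw: string-built SQL is a fixed syntactic pattern,
    # exactly the kind of thing rule-based SAST reliably flags.
    return "SELECT * FROM users WHERE name = '" + name + "'"

def get_order(current_user, order_id):
    # IDOR: nothing in this line is suspicious on its own. Spotting the
    # missing ownership check (ORDERS[order_id].owner == current_user)
    # requires understanding intent, not matching a pattern -- the gap
    # that model-driven reasoning is meant to cover.
    return ORDERS[order_id]
```

Calling `get_order("alice", 2)` happily returns Bob’s order, which is the whole bug, yet no line-local pattern distinguishes it from correct code.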
Underneath that sits Semgrep Workflows, which is arguably the more strategic piece of the release. Workflows allows teams to encode their own security processes—detection, triage, remediation, compliance—into automated pipelines written in plain Python, then run them on managed infrastructure. Instead of stitching together tools and scripts internally, teams can start with prebuilt workflows, adapt them to their environment, or build entirely new ones without turning the effort into an infrastructure project. That detail might sound minor at first, but it’s where many AI-driven security initiatives quietly fail: not in capability, but in operational friction.
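The "plain Python" framing matters because policy-as-code stays readable and reviewable. As a rough sketch of what encoding a triage process that way might look like, the snippet below is invented for illustration; the `Finding` type, the `triage` policy, and `run_pipeline` are not Semgrep Workflows’ actual API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str
    reachable: bool  # e.g., whether the flaw sits on an executable path

def triage(finding: Finding) -> str:
    # Team policy encoded as ordinary code: deterministic, versionable,
    # and easy to adapt per environment.
    if finding.severity == "critical" and finding.reachable:
        return "open-ticket"
    if not finding.reachable:
        return "suppress"
    return "queue-for-review"

def run_pipeline(findings):
    # A minimal pipeline step: map every finding to a disposition.
    return {f.rule_id: triage(f) for f in findings}
```

The point isn’t the specific rules but that the process lives in code a security engineer can read, test, and change without standing up infrastructure.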
There’s also a longer-term bet embedded here. Semgrep is extending its original idea—that security teams should be able to encode their own knowledge into precise, customizable rules—into a world where that knowledge includes AI-assisted reasoning and automation. As models improve, the system improves with them, but the structure around it remains controlled and repeatable. It’s a way of adopting AI without surrendering predictability, which is something security teams tend to care about more than flashy demos.
Step back a bit and the direction becomes clearer. The industry is moving past the early phase where AI was treated as a replacement for everything. What’s emerging instead is a layered model: deterministic analysis for consistency, AI for context and interpretation, and workflow systems to make the whole thing usable at scale. Semgrep’s Multimodal and Workflows launch fits neatly into that shift. The real question now isn’t whether AI belongs in application security, but whether vendors can integrate it in a way that actually holds up under real-world conditions—high volume, messy codebases, and teams that don’t have time to babysit their tools.