Claude Code Security Shows Promise, Not Perfection
Claude Code Security's introduction rippled across the stock market, but researchers and analysts peeling back the layers say its impact was overstated.
Claude Code Security made a big splash when it was introduced last week, but it may be too early to call it as disruptive as the markets suggested.
Anthropic unveiled Claude Code Security on Feb. 20, built into the web version of its agentic AI coding tool, Claude Code. Available now in research preview, the new tool scans codebases for vulnerabilities and suggests patches and fixes categorized by priority level. Anthropic said the tool makes recommendations only for human review, so developers remain in control when deciding whether to ship a patch Claude Code creates.
Somewhat limited in scope, Claude Code Security is not a one-and-done security solution and still requires developers at the helm. But its debut had a notable impact on share prices in the security market. CrowdStrike's stock dropped from about $420 a share on Feb. 19 to less than $350 on Feb. 23, though, as of this writing, it has partially recovered to $380. JFrog saw an even more aggressive dip during the same time period, from about $50 a share to $35, though it has also partially recovered to about $42, as of this writing.
Zscaler, Datadog, Okta, Fortinet, SentinelOne, Palo Alto Networks, and others saw varying share price declines in the wake of a coding tool that had neither been fully launched nor fully tested by the larger community.
Markets have a tendency toward knee-jerk reactions, so it's hard to say exactly how disruptive this tool and others like it will be for the security market. For now, this level of fervor appears to be premature.
Claude Code Security's Promising Tech
Claude Code Security makes big promises. Built from more than a year of security research, Anthropic said in its blog post that "Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss."
Each finding goes through a multistage verification process that aims to weed out false positives, and flaws are packaged up into an easy-to-read dashboard. The tool also has "confidence ratings" to account for the nuances that AI models can't always pick up on. And through Claude Opus 4.6, released earlier this month, the blog claimed Anthropic "found over 500 vulnerabilities in production open-source codebases — bugs that had gone undetected for decades, despite years of expert review."
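Anthropic has not published the tool's data format, but the triage flow it describes, in which findings carry a priority level and a confidence rating and are filtered before reaching the dashboard, can be sketched in a few lines. Everything below, including field names, thresholds, and the sample findings, is an illustrative assumption, not Anthropic's actual schema:

```python
# Hypothetical sketch of triaging scanner findings by confidence and priority.
# Field names, thresholds, and sample data are illustrative assumptions,
# not Anthropic's actual schema or values.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    priority: str      # e.g. "critical", "high", "medium", "low"
    confidence: float  # 0.0-1.0: how confident the model is the flaw is real

PRIORITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings, min_confidence=0.7):
    """Drop low-confidence findings (likely false positives), then sort
    the rest for human review: highest priority first, then confidence."""
    kept = [f for f in findings if f.confidence >= min_confidence]
    return sorted(kept, key=lambda f: (PRIORITY_ORDER[f.priority], -f.confidence))

findings = [
    Finding("SQL injection in /login handler", "critical", 0.95),
    Finding("Possible XSS in template", "high", 0.55),  # filtered as low-confidence
    Finding("Hardcoded credential in config loader", "high", 0.88),
]
for f in triage(findings):
    print(f"[{f.priority}] {f.title} (confidence {f.confidence:.2f})")
```

The key design point the blog post emphasizes is the filtering step: a confidence threshold keeps probable false positives out of the dashboard so that only findings worth a human's time surface for review.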
There's also some promising data regarding the use of large language models (LLMs) to find and remediate vulnerabilities. At DEF CON 33, last summer, DARPA hosted the finals of its two-year AI Cyber Challenge (AIxCC), in which teams used AI technology to secure the open source software underlying critical infrastructure. Much of the work during the challenge involved using cyber reasoning systems to find and fix vulnerabilities in open source projects.
And by many accounts, it was a success.
Justin Cappos, a professor in the Computer Science and Engineering department at New York University, as well as a longtime developer of open source software, helped develop the format of the challenge. He tells Dark Reading that many people, including some of those who won the contest, "did not expect it to go as well as it did."
"They basically thought these models would find a few minor types of bugs but probably struggle with creating patches, but that's not actually what happened," Cappos says. "What happened is that they were able to find quite a lot of fairly complicated issues and actually create semi-reasonable patches for a lot of them, including a lot of issues that weren't known at the time, that weren't artificial things that the conference organizers put in."
Too Early to Label It a Disruptor
Broadly, Cappos says he's cautiously optimistic that good things can come from AI coding security tools like this. But he warns that these tools are still in their early days, or, as he calls it, the "Will Smith eating spaghetti" phase.
Cappos, who maintains multiple open source projects, says he and others have begun to receive bug reports from AI coding tools. While some are genuinely helpful, many are false positives or make suggestions that aren't useful or practical in real-world development environments. "There's a lot of junk," he notes, with understatement.
Melinda Marks, practice director of cybersecurity at analyst firm Omdia (which, like Dark Reading, is owned by Informa TechTarget), says it's interesting to see security vendors take a hit, but it doesn't mean agentic AI solutions will take over security wholesale.
For example, she called attention to three critical vulnerabilities in Claude Code that Check Point Research discovered and reported on this week. While Claude Code is powerful and has the potential to make software development easily accessible to anyone with an idea, Marks says these vulnerabilities "highlight the importance of security when utilizing these types of coding tools, besides using the agentic security capabilities."
"Claude Code Security is super exciting, as we need to apply AI on the defender side because it is the only way for security teams to keep up with the scale of development, especially with AI adoption. Our research shows that security teams are using or want to use agentic AI to scale security to stay ahead of threats and attacks," she says. "For companies wanting to secure usage of AI, they would likely still need third-party security vendor tools to efficiently mitigate risk associated with AI adoption."
Eran Kinsbruner, VP of product marketing at application security vendor Checkmarx, says Claude Code Security marks "meaningful progress" in bringing security awareness closer to code creation. That said, it's not a one-size-fits-all application security solution for the complex environments organizations deal with these days. "Safer code generation alone doesn't equate to comprehensive software security," he adds.
"The idea of streamlining patching through an integrated, developer-friendly interface is understandably appealing. Anything that reduces friction between identifying and fixing vulnerabilities can help organizations move faster," Kinsbruner says. "However, this speed comes at a cost in terms of literal dollars. Whereas AppSec solutions are built for ongoing scanning, an LLM-based solution like Claude Code Security is prompted to conduct point-in-time checks that add up across hundreds if not thousands of repositories."
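Kinsbruner's cost argument is simple multiplication: point-in-time LLM scans are billed per invocation, so the bill scales with repository count times scan frequency. A back-of-envelope sketch makes the scaling concrete; all figures here are hypothetical illustrations, not actual Anthropic or AppSec-vendor pricing:

```python
# Back-of-envelope cost scaling for per-invocation LLM scans.
# All figures below are hypothetical illustrations, not vendor pricing.
def monthly_llm_scan_cost(repos, scans_per_repo_per_month, cost_per_scan_usd):
    """Point-in-time LLM scans are billed per run, so monthly cost
    scales linearly with both repo count and scan frequency."""
    return repos * scans_per_repo_per_month * cost_per_scan_usd

# e.g. 1,000 repos, scanned weekly (~4x/month), at an assumed $2 per scan
print(f"${monthly_llm_scan_cost(1000, 4, 2.00):,.2f}/month")
```

Under those assumed numbers the bill lands around $8,000 a month, and doubling either the repo count or the scan cadence doubles it, which is the contrast Kinsbruner draws with AppSec platforms built for continuous scanning at a flat cost.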
Anthropic did not respond to Dark Reading's request for comment.