AI Tools Are Now a Standard Part of the Attacker Toolkit, Forescout Exec Warns

The cybersecurity landscape is undergoing a profound transformation as artificial intelligence (AI) transitions from a novel technology to an indispensable component of sophisticated cyberattacks. According to Rik Ferguson, VP of Security Intelligence at Forescout, AI tools are now a "standard part of the attacker toolkit," fundamentally altering the dynamics for both cybercriminals and defenders. This shift is driven by a growing adoption of AI by threat groups seeking to amplify their offensive capabilities, with a notable increase in the utilization of commercial AI models.

Ferguson’s assertions are underpinned by recent Forescout research detailing AI’s escalating prowess in offensive cyber operations. The firm’s analysis reveals a dramatic year-over-year improvement in AI’s ability to identify vulnerabilities. In mid-2025, a survey of 50 AI models found that over half (55%) faltered at basic vulnerability research. A follow-up study released this month paints a starkly different picture, however, indicating that virtually all tested models now excel in this critical area.

The advancements are not limited to detection; AI’s capacity to exploit vulnerabilities has also seen significant enhancement. This trend was previously highlighted by ITPro, when Forescout’s VP of Research, Daniel dos Santos, cautioned about a potential "explosion of vulnerabilities" in the near future as a direct consequence of AI’s growing offensive capabilities. This escalating sophistication of AI-powered attacks necessitates a proactive and adaptive response from cybersecurity professionals worldwide.

Shifting Criminal Strategies: From Niche to Mainstream Adoption

Ferguson elaborated on how the cybercriminal community’s approach to AI has evolved. Previously, illicit actors gravitated towards specialized, "underground" Large Language Models (LLMs) like WormGPT, which were specifically designed or adapted for malicious purposes. These platforms, often marketed at low price points, facilitated the generation of malicious code, malware, and sophisticated phishing emails. However, Forescout’s research indicates a significant pivot away from these niche tools towards widely available, mainstream commercial AI models.

"When it comes to the criminal community, the behavior there is changing around AI," Ferguson stated during a roundtable discussion at Forescout’s Vedere Labs research hub in Eindhoven. "We used to talk about… underground LLMs and criminal AI offerings. Things like WormGPT was definitely one that got a lot of coverage." He further explained that these dedicated criminal tools have been "largely abandoned" and supplanted by commercial offerings.

This transition is occurring through several pathways. Cybercriminals are employing "jailbreaks," which are essentially wrappers or modifications applied to commercial AI models to bypass their built-in safety features and ethical guardrails. Alternatively, they are utilizing locally deployed open-source AI models, or even trading stolen subscriptions for popular commercial AI services on underground forums. This widespread adoption signifies a democratization of advanced AI capabilities for malicious actors.

A Deep Dive into Preferred Tools and Shifting Trends

Forescout’s research has identified Anthropic’s Claude model as a particularly "preferred tool" among threat actors. Analysis of discussions on underground forums reveals that Claude is highly sought after, while newer iterations of OpenAI’s ChatGPT models are reportedly losing traction. This decline in popularity for ChatGPT is attributed to the implementation of stronger "guardrails" by OpenAI, which limit its utility for certain illicit activities.

Both Anthropic and OpenAI are acutely aware of the misuse of their AI technologies by cybercriminals and are actively working to mitigate these threats. In September of the previous year, Anthropic publicly acknowledged that its AI tools had been "weaponized" by hackers to launch cyberattacks against organizations across various sectors. In response, the company implemented account bans to curb this illicit activity.

Similarly, OpenAI released a report in late 2024 detailing efforts to combat the use of its services for malicious purposes. The company reported that it had disrupted 20 operations where its chatbot was being exploited for criminal activities, and it has since introduced enhanced guardrails to prevent such misuse. These efforts highlight the ongoing arms race between AI developers and malicious actors seeking to exploit their creations.

The Evolution of Cybercriminal Perception: From Skepticism to Advocacy

Beyond the adoption of specific tools, the very perception of AI and its potential benefits within the cybercriminal community has undergone a dramatic transformation. Ferguson noted that, much like in the mainstream enterprise world, an initial phase of skepticism has given way to enthusiastic embrace.

"The opposite is now true," Ferguson told reporters. "AI is recommended, and more experienced forum members are offering this knowledge transfer, skills transfer, not just recommending to use it, but recommending which one to use, how to use it, [and] offering tutorials." This shift signifies that AI is no longer just a tool but a cornerstone of modern cybercrime strategy, with experienced actors actively mentoring others in its application.

The implications of this mainstreaming are profound. AI has cemented its place as a "standard part of the attacker toolkit," and specific models like Claude are emerging as favored instruments for offensive operations. This trend is corroborated by numerous studies from various cybersecurity firms over the past 18 months. For instance, research from Trend Micro indicated that threat actors are leveraging AI to efficiently summarize threat intelligence reports and to aid in the reverse engineering of malware.

Furthermore, Kaseya research published in March warned that "AI-generated phishing became the baseline" for hackers in 2025. This suggests that the sophistication and persuasiveness of phishing attacks have reached a new level, with AI enabling criminals to craft more convincing, targeted messages that are harder for individuals and organizations to detect. The baseline for attacks is rising, demanding more advanced defensive measures.

Raising the Stakes for Defenders: The Velocity and Scale of AI-Powered Attacks

The pervasive use of AI by threat actors, particularly "agentic AI" (AI systems capable of autonomous decision-making and action), poses significant challenges to the operational capability and readiness of cybersecurity professionals. Ferguson emphasized that AI and agentic AI-powered support in cybercrime activities dramatically increase the velocity and scale of attacks.

Forescout’s statistics underscore this alarming trend. The median time for initial access brokers (IABs) to hand off control of a compromised network has plummeted to a mere 22 seconds, a stark contrast to the over eight hours recorded in 2022. This dramatic reduction indicates that network breaches are now being exploited with unprecedented speed, often through fully automated processes.

The advent of agentic AI introduces a new dimension of continuous risk. Unlike human attackers, AI agents do not require rest or breaks, meaning enterprises could face relentless, 24/7 threats. "AI is not constrained by the way we consider the world," Ferguson explained. "So all of the things that we understand about how do I get attacked, how do attacks happen, may not apply going forwards." This necessitates a fundamental reevaluation of how cybersecurity professionals classify, understand, and defend against attacks.

The capabilities being observed extend beyond mere speed and scale. Ferguson highlighted the emergence of autonomous reconnaissance, autonomous lateral movement within networks, and the real-time matching of vulnerabilities to live targets. Crucially, the absence of a "human in the loop" means these operations can proceed without interruption, leading to continuous and potentially overwhelming assaults. "And of course, no human in the loop means no lunch breaks," he remarked, underscoring the tireless nature of AI-driven attacks.

Impact on Attribution and the Future of Defense

This relentless, automated nature of AI-driven offensive operations has a direct impact on the ability to attribute attacks to specific actors. Historically, security researchers could infer the origin of an attack by analyzing patterns such as the time of day it occurred, correlating it with known time zones of threat actor groups. However, the 24/7 operational capacity of AI agents eliminates this crucial indicator.

"If it becomes 24/7, 365, not only is that much more difficult to defend against, it’s actually much more difficult to attribute using those characteristics," Ferguson stated. This obfuscation of attribution makes it harder to hold perpetrators accountable and to implement targeted countermeasures. The challenge for defenders is to develop new methods of identification and response that are not reliant on human operational patterns.
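As a rough illustration of the time-of-day analysis Ferguson describes, the sketch below (a simplified, hypothetical heuristic, not Forescout's actual methodology) bins observed attack events by UTC hour and then finds the UTC offset whose 09:00–18:00 "working day" best covers the activity peak:

```python
from collections import Counter
from datetime import datetime, timezone

def activity_profile(timestamps):
    """Count observed attack events per UTC hour of day (list of 24 bins)."""
    hours = Counter(datetime.fromtimestamp(ts, tz=timezone.utc).hour for ts in timestamps)
    return [hours.get(h, 0) for h in range(24)]

def likely_workday_offset(profile, window=9):
    """Return the UTC offset whose local 09:00-18:00 window covers the most activity.

    This is the kind of coarse signal researchers have historically used to
    guess an operator's time zone; it is illustrative only.
    """
    best_offset, best_score = 0, -1
    for offset in range(-12, 13):
        # Local hours 9..(9+window-1) correspond to UTC hours (local - offset) mod 24
        score = sum(profile[(9 + h - offset) % 24] for h in range(window))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset
```

An AI agent operating around the clock flattens this histogram, removing exactly the signal the heuristic relies on.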

Playing by the Rules: Agents Versus Agents

Looking ahead, the cybersecurity battlefield is expected to feature a direct confrontation between AI agents deployed by both attackers and defenders. Ferguson noted that this scenario is already unfolding in some areas. Organizations are leveraging AI agents to automate critical defensive functions such as device isolation and quarantine. Similarly, bots are being employed in threat hunting and asset monitoring to improve efficiency and response times.

However, a significant challenge lies in the disparity of operational policies. Defenders operate within strictly regulated frameworks, necessitating justification for every action and a careful consideration of potential collateral damage. In contrast, attackers are unburdened by such constraints; they do not care if their actions cause unintended harm, and they can readily pivot to new tactics if one fails.

"It’s not an unbalanced equation," Ferguson cautioned. "We are all using it. The big thing for us as defenders, and every practitioner out there, is that we have to justify everything we do. We actually care if something goes wrong. Attackers don’t care if something goes wrong, they just move on." This fundamental difference in operational ethos presents a significant hurdle for defenders as they seek to counter the relentless and unconstrained nature of AI-powered cyber threats. The future of cybersecurity will undoubtedly be shaped by this escalating AI-driven conflict, demanding continuous innovation and adaptation from both sides of the digital divide.
