IT Leaders Fear AI-Driven Cybersecurity Costs Will Soar


IT leaders are concerned about the rocketing cost of cybersecurity tools, which are being inundated with AI features. Meanwhile, hackers appear to be largely eschewing AI, with relatively few discussions of how they could use it posted on cybercrime forums.

In a survey of 400 IT security decision-makers by security firm Sophos, 80% said they believe generative AI will significantly increase the cost of security tools. This tracks with separate Gartner research predicting that global tech spending will rise by almost 10% this year, largely due to AI infrastructure upgrades.

The Sophos research found that 99% of organisations include AI capabilities on the requirements list for cybersecurity platforms, with improved protection the most commonly cited reason. However, only 20% of respondents named it as their primary reason, indicating a lack of consensus on why AI belongs in security tools.

Three-quarters of the leaders said that measuring the additional cost of AI features in their security tools is challenging. Microsoft, for instance, controversially raised the price of Office 365 by 45% this month, citing the inclusion of Copilot.

On the other hand, 87% of respondents believe that AI-related efficiency savings will outweigh the added cost, which may explain why 65% have already adopted security solutions featuring AI. The release of low-cost AI model DeepSeek R1 has generated hopes that the price of AI tools will soon decrease across the board.

SEE: HackerOne: 48% of Security Professionals Believe AI Is Risky

But cost isn’t the only concern highlighted by Sophos’ researchers. A significant 84% of security leaders worry that high expectations for AI tools’ capabilities will pressure them to reduce their team’s headcount. An even larger proportion — 89% — are concerned that flaws in the tools’ AI capabilities could work against them and introduce security threats.

“Poor quality and poorly implemented AI models can inadvertently introduce considerable cybersecurity risk of their own, and the adage ‘garbage in, garbage out’ is particularly relevant to AI,” the Sophos researchers cautioned.

Cybercriminals are not using AI as much as you may think

Security concerns may be deterring cybercriminals from adopting AI as much as expected, according to separate research from Sophos. Despite analyst predictions, the researchers found that AI is not yet widely used in cyberattacks. To gauge the prevalence of AI within the hacking community, Sophos examined posts on underground forums.

The researchers identified fewer than 150 posts about GPTs or large language models in the past year. For scale, they found more than 1,000 posts on cryptocurrency and more than 600 threads related to the buying and selling of network accesses.
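For context on how headline counts like these could be produced, a keyword tally over scraped posts is enough. The following is a minimal sketch only: Sophos has not published its methodology, and the post fields and keyword lists below are hypothetical examples.

# Illustrative sketch only; not Sophos' actual methodology.
# Assumes posts were already scraped into dicts with hypothetical
# "title" and "body" fields; keyword lists are examples, not exhaustive.
import re
from collections import Counter

TOPICS = {
    "ai": [r"\bgpt\b", r"\bllms?\b", r"large language model", r"chatgpt"],
    "cryptocurrency": [r"\bcrypto(currency)?\b", r"\bbitcoin\b", r"\bmonero\b"],
    "network_access": [r"network access", r"\brdp\b", r"\bvpn access\b"],
}

def topics_in(post):
    """Return the set of topics whose keywords appear anywhere in a post."""
    text = f"{post['title']} {post['body']}".lower()
    return {t for t, pats in TOPICS.items()
            if any(re.search(p, text) for p in pats)}

def tally(posts):
    """Count posts mentioning each topic at least once (a post can hit several)."""
    counts = Counter()
    for post in posts:
        counts.update(topics_in(post))
    return counts

posts = [
    {"title": "selling rdp + network access", "body": "escrow accepted"},
    {"title": "chatgpt for spam?", "body": "has anyone tried an LLM for emails"},
]
print(tally(posts))  # Counter({'network_access': 1, 'ai': 1})

Real forum data would of course need deduplication and multilingual handling; the point is simply that counts like these are straightforward keyword tallies rather than deep content analysis.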

“Most threat actors on the cybercrime forums we investigated still don’t appear to be notably enthused or excited about generative AI, and we found no evidence of cybercriminals using it to develop new exploits or malware,” Sophos researchers wrote.

One Russian-language crime site has had a dedicated AI area since 2019, but it only has 300 threads compared to more than 700 and 1,700 threads in the malware and network access sections, respectively. However, the researchers noted this could be considered “relatively fast growth for a topic that has only become widely known in the last two years.”

Even so, in one post, a user admitted to chatting with a GPT to combat loneliness rather than to stage a cyberattack. Another user replied that this was "bad for your opsec [operational security]," further highlighting the community's lack of trust in the technology.

Hackers are using AI for spamming, gathering intelligence, and social engineering

Posts and threads that mention AI apply it to techniques such as spamming, open-source intelligence gathering, and social engineering; the latter includes the use of GPTs to generate phishing emails and spam texts.

Security firm Vipre detected a 20% increase in business email compromise attacks in the second quarter of 2024 compared to the same period in 2023; AI was responsible for two-fifths of those BEC attacks.

Other posts focus on "jailbreaking," where models are instructed to bypass safeguards with a carefully constructed prompt. Malicious chatbots designed specifically for cybercrime have been prevalent since 2023, with established models like WormGPT now joined by newer arrivals such as GhostGPT.

Only a few "primitive and low-quality" attempts to generate malware, attack tools, and exploits using AI were spotted by Sophos researchers on the forums. Such attempts are not unheard of elsewhere: in June, HP intercepted an email campaign spreading malware in the wild with a script that "was highly likely to have been written with the help of GenAI."

Chatter about AI-generated code tended to be accompanied by sarcasm or criticism. For example, in response to a post containing allegedly hand-written code, one user remarked, "Is this written with ChatGPT or something…this code plainly won't work." Sophos researchers said the general consensus is that using AI to create malware is for "lazy and/or low-skilled individuals looking for shortcuts."

Interestingly, some posts mentioned creating AI-enabled malware in an aspirational way, indicating that, once the technology becomes available, they would like to use it in attacks. A post titled “The world’s first AI-powered autonomous C2” included the admission that “this is still just a product of my imagination for now.”

“Some users are also using AI to automate routine tasks,” the researchers wrote. “But the consensus seems to be that most don’t rely on it for anything more complex.”

