The State Of AI And Cybersecurity In 2024

For years, security experts have debated how AI could be used in cyberattacks, and the rapid development of large language models (LLMs) has heightened concerns about the risks they pose.

In March 2023, anxiety over automated attacks was so great that Europol issued a warning about the criminal use of ChatGPT and other LLMs. Meanwhile, NSA Cybersecurity Director Rob Joyce warned companies to “buckle up” against the weaponization of generative AI.

Since then, threat activity has been increasing. A study published by Deep Instinct surveyed more than 650 senior security operations professionals in the US, including CISOs and CIOs, and found that 75% of professionals witnessed an increase in attacks in the last 12 months.

Furthermore, 85% of respondents attributed this increase to bad actors using generative AI.

If 2023 was the year that generative AI-driven cyberattacks moved from a theoretical risk to an active one, then 2024 is the year organizations must be prepared to adapt to them at scale. The first step is understanding how hackers use these tools.

How Generative AI Can Be Used for Evil

Threat actors can exploit LLMs in several ways, from crafting phishing emails and social engineering scams to generating malicious code, malware, and ransomware.

According to PwC’s latest Global Digital Trust Insights survey, 52% of executives expect GenAI to lead to a catastrophic cyberattack in the coming year. The accessibility of GenAI has lowered the barrier to entry for threat actors looking to exploit it for malicious purposes.

Not only does it allow them to quickly identify and analyze the exploitability of their targets, but it also lets them increase the scale and volume of attacks. Using GenAI to quickly mass-produce tailored versions of a basic phishing attack, for example, makes it easy for adversaries to identify and trap susceptible individuals.

Phishing attacks are widespread because attackers need only jailbreak a legitimate LLM or use a purpose-built dark LLM such as WormGPT to generate an email convincing enough to trick an employee into visiting a compromised website or downloading a malware-laden attachment.

Use AI for Good

As concerns about AI-generated threats grow, more organizations are looking to invest in automation to protect against the next generation of fast-moving attacks.

According to a study by the Security Industry Association (SIA), 93% of security leaders expect generative AI to impact their business strategies within the next five years, and 89% have active AI projects in their research and development (R&D) pipelines.

AI will be an integral part of enterprise cybersecurity in the future. A Zipdo study underscores this: 69% of companies believe they cannot respond to critical threats without it.

After all, if cybercriminals can use language models to create phishing scams at scale, defenders need to strengthen their ability to counter them; relying on human users to spot every scam they encounter isn’t sustainable in the long term.

At the same time, more organizations are investing in defensive AI because these solutions give security teams a way to reduce the time needed to identify and respond to data breaches while freeing them from much of the manual administration required to run a security operations center (SOC).

Without the help of automated tools, organizations cannot afford to manually monitor and analyze the threat data in their environments; it is simply too slow, especially given a global cybersecurity workforce shortage of roughly four million people.

Part of these defenses may involve using generative AI to sift through threat signals, one of the core capabilities of the LLM-based security products launched by vendors such as Microsoft, Google, and SentinelOne.

The Role of LLMs in the Cybersecurity Market

One of the most significant advances in cybersecurity AI came in April 2023, when Google announced Sec-PaLM, an LLM designed explicitly for cybersecurity that can process threat intelligence data to deliver detection and analysis capabilities.

This release led to two notable tools: VirusTotal Code Insight, which analyzes and explains script behavior to help users identify malicious scripts, and Breach Analytics for Chronicle, which automatically alerts users to active breaches in their environment and supplies contextual information so they can respond.

Similarly, Microsoft Security Copilot uses GPT-4 to process threat signals taken from a network and generates a written summary of potentially malicious activity so human analysts can investigate further.
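Security Copilot’s internals aren’t public, but the general pattern these products follow (feed structured threat signals to an LLM and ask for a plain-language triage summary) can be sketched in a few lines. The snippet below is a minimal illustration, assuming an OpenAI-compatible API and a hypothetical list of signals; it is not the vendor’s actual implementation.

```python
import json
from openai import OpenAI  # assumes the openai Python SDK is installed

# Hypothetical threat signals pulled from a SIEM or EDR pipeline.
signals = [
    {"source": "firewall", "event": "outbound connection to known C2 IP", "host": "srv-web-03"},
    {"source": "endpoint", "event": "powershell spawned by winword.exe", "host": "wks-fin-112"},
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a SOC assistant. Summarize the signals below, "
                       "flag likely malicious activity, and suggest next investigative steps.",
        },
        {"role": "user", "content": json.dumps(signals, indent=2)},
    ],
)

# A human analyst reviews the generated summary before acting on it.
print(response.choices[0].message.content)
```

The value of this pattern is less about the model itself and more about the workflow: raw telemetry goes in, a readable triage narrative comes out, and the analyst stays in the loop for the final decision.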

Although these are just a handful of products that use LLMs in a security context, they highlight, more broadly, the role such models play in the defensive landscape as tools for reducing administrative burdens and improving contextual understanding of active threats.

Conclusion

Whether AI proves a net positive or negative for the threat landscape will depend on who wields it more effectively: attackers or defenders.

If defenders are not prepared for an increase in automated cyberattacks, they will be vulnerable to exploitation. Organizations that adopt these technologies to optimize their SOCs, however, can not only stay ahead of these threats but also automate much of the less rewarding manual work in the process.
