By Fleming Shi, CTO at Barracuda
The 30th of November marks a year since the free research preview of ChatGPT was unveiled to the world. Today, the cybersecurity risks and rewards of artificial intelligence can be clearly seen in what’s happened since the release of ChatGPT, the world’s first widely available generative AI tool. One year on, for many of us it’s hard to imagine life without generative AI. Tools such as ChatGPT, Bing and others offer immense benefits in terms of the time and effort saved on everyday online tasks – but there’s more to generative AI than doing your homework in 30 seconds.
The security risks of gen-AI are widely reported. For example, the LLMs (large language models) that underpin these tools are built from vast volumes of data, and they can be distorted by the nature of that data. The sheer volume of information ingested also raises privacy and data protection concerns. At the moment, regulatory controls and policy guardrails are trailing in the wake of gen-AI’s development and applications.
Other risks include attacker abuse of gen-AI capabilities. Generative AI allows attackers to strike faster and with greater accuracy, minimizing the spelling errors and grammar issues that have long acted as signals for phishing attacks. This makes attacks more evasive and convincing. As attackers become more efficient, it is even more critical that businesses use AI-based threat detection to outsmart targeted attacks.
That’s where the good news comes in. The security rewards of AI are immense. For example, in our email protection suite, Barracuda AI uses the metadata from internal, external, and historical emails to create an identity graph for each Office 365 user that defines the unique communication patterns of an individual. These machine-learned models allow Barracuda to identify behavioral, content, and link-forwarding anomalies in your company’s email communications to protect against spear phishing, business email compromise, lateral phishing and other targeted threats.
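To make the idea concrete, here is a deliberately simplified sketch of behavioral anomaly detection on email metadata. This is not Barracuda’s actual model – the real identity graph is machine-learned from far richer signals – but it illustrates the underlying principle: learn each user’s normal communication pattern, then flag messages that deviate from it. The function names and the `min_history` threshold are illustrative assumptions.

```python
from collections import defaultdict

def build_identity_profiles(emails):
    """Build a toy per-user profile: how often each sender has
    historically mailed each recipient.

    `emails` is an iterable of (recipient, sender) pairs drawn
    from historical mail metadata."""
    profiles = defaultdict(lambda: defaultdict(int))
    for recipient, sender in emails:
        profiles[recipient][sender] += 1
    return profiles

def is_anomalous(profiles, recipient, sender, min_history=3):
    """Flag a message as anomalous when the sender has little or no
    prior history with this recipient - a crude stand-in for the
    behavioral anomalies a learned model would detect."""
    return profiles[recipient][sender] < min_history
```

A sender who has never (or rarely) mailed a given user scores as anomalous, which is one of the signals that helps surface spear phishing and business email compromise, where the attacker impersonates a contact the victim does not actually correspond with.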
Another opportunity for generative AI is in enhancing cybersecurity training – basing it on actual attacks, and therefore making it more real, personal, timely and engaging than mandatory, periodic, simulated awareness training sessions. At Barracuda, we’re building functionality that uses generative AI to educate users when a real-world cyber threat, such as a malicious link, is detected in an email they’ve received. We believe this will provide an impromptu opportunity to train the user if they fall for the attack by clicking through the malicious link, ultimately increasing the firepower against threats and changing user behavior when faced with risk. In many ways, it’s a way to build immunity and awareness against real threats in the parallel universe of cyberspace.
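The click-time training flow described above can be sketched in a few lines. This is a hypothetical illustration, not the functionality Barracuda is building: the `threat_scanner` callback and the training URL are stand-ins for whatever detection engine and lesson content a real deployment would use.

```python
def handle_link_click(url, threat_scanner,
                      training_url="https://example.com/awareness-lesson"):
    """Hypothetical click-time protection: when the scanner flags the
    clicked URL as malicious, redirect the user to a just-in-time
    awareness lesson instead of the attacker's page.

    `threat_scanner` is any callable that returns True for a
    malicious URL."""
    if threat_scanner(url):
        # The teachable moment: the user who fell for the lure sees
        # training tied to the exact attack they just clicked on.
        return training_url
    return url
```

The design point is that the lesson arrives at the moment of failure, tied to a real attack, rather than in a scheduled simulation months later.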
We can’t put the AI genie back in the bottle – but nor should we want to. What we need to do is harness its power for good.
Looking ahead, the impact of AI – including, but not limited to, generative AI – on the cyberthreat landscape will become ever more pervasive. Attackers are already leveraging advanced AI algorithms to automate their attack processes, making them more efficient, scalable, and difficult to detect. These AI-driven attacks can adapt in real time, learning from the defenses they encounter and finding innovative ways to bypass them. Ransomware attacks are evolving into more targeted campaigns as cybercriminals focus on critical infrastructure and high-value targets, aiming to inflict maximum damage and, in turn, demand exorbitant ransoms.