The role of AI in security has come under severe scrutiny in recent years as companies of all sizes look to establish their footing in the industry.
The recent Google Cloud Next 2024 event saw a major focus on AI, but security also had a prominent presence, with a number of dedicated releases and services unveiled at the conference.
Since its acquisition by Google in September 2022, Mandiant has played a major role in strengthening Google's entire security portfolio, and we sat down with Kevin Mandia, the company's CEO, to find out just how big a role AI can play in helping stop today's threats.
AI advantage?
“We get asked a lot – what is AI, and is it an advantage to the defense or the offense?” Mandia tells us.
“AI is another technology that’s coming along that good people will use, and bad people will use – it’s just another tool in the toolbox now.”
Google was keen to promote the role that AI can play in security during Cloud Next 2024, revealing a host of new updates and upgrades that leverage Mandiant services.
This includes Gemini in Threat Intelligence, part of the new Gemini in Security platform, which lets users run conversational searches to quickly surface details on existing issues or threat actors. It also offers researchers automated web crawling for relevant open source intelligence (OSINT) articles, ingesting the information and providing concise summaries to aid the fightback.
Elsewhere, Gemini in Security Operations is also able to use natural language to explain key findings to security admins and professionals via its assisted investigations feature. Once a threat is detected, the platform can summarize event data, recommend the next steps to take to contain or mitigate it, and help guide users through the platform with easy-to-follow instructions and prompts.
So with AI taking over a lot of the heavy lifting when it comes to threat detection, where does that leave the role of the human?
“The innovation cycle is going to be different – it used to be that humans would learn and create rules; with what we are building, others will build a system that learns and thinks,” Mandia says.
“You’ll always need cybersecurity folks, and AI is the sidecar to that for now,” he adds, pointing out the benefits the technology can have on bringing new workers on board and up to speed.
“We can take someone who’s only been doing security for half a year and make them way faster and smarter,” he says, highlighting how defenses can be scaled much quicker for businesses of all sizes.
“I think we’ll see more secure code being built with AI as well, because it’s very good at structured languages, and code is a structured language.”
Ultimately, there is still work to be done in certain areas of threat intelligence, with Mandia flagging the definition of “normal” behavior in a business as something that is still tricky to pin down when spotting possible issues.
“Every day in a lot of businesses, people do the same things all the time – about the only anomalous thing is email,” he notes. “When you look at actual business and work functions and work processes, most people are doing the same things and logging into the same systems, so you should see processes doing the same thing all the time, and humans doing the same things all the time.”
Voice and video spoofing has also come under growing scrutiny as AI platforms get better at imitating humans, with Mandia noting that more rules need to be created to help crack down on the practice.
“Folks that do a lot of business by voice are going to have to start looking into what can be faked, and what can be done about it,” he admits. “The problem right now is that it’s hard to be 100% certain – but we’re getting better on defense to detect these kinds of things.”
So AI may still have a way to go before it fully takes over security protections, but as it ingests more data and learns more, the time may not be too far away.
For now though, Mandia says he sees humans and AI working together, helping create a multi-fronted approach to stopping attacks.
“(AI) won’t replace a security operator yet,” he notes. “You do need to have (them) – it’s going to speed up things, and train people well, but ultimately you’re not ready to risk transfer to the machines just yet… you want security operations by people still, but they’re getting powered by AI.”
“Security is too important to just remove a gating factor without knowing and ensuring that whatever you’ve replaced it with works.”