SAN FRANCISCO — Artificial intelligence, particularly generative forms such as ChatGPT, was on the lips and minds of many cybersecurity professionals at the RSA Conference, including Rob Joyce, director of cybersecurity at the National Security Agency.
“You can’t walk around RSA without talking about AI [and] machine learning,” Joyce said during a keynote about the state of cyberthreats, emerging risks and predictions for the year ahead.
Generative AI is a “technological explosion,” Joyce said. “I won’t say it’s delivered yet, but this truly is some game-changing technology that’s emerging.”
Cybersecurity professionals have concerns about AI and large language models fueling more dangerous and sophisticated attacks. That hasn’t happened yet, but it could within a year, according to Joyce.
The NSA is tracking advancements for defenders and adversaries, and focusing on three areas as ChatGPT and other generative AI tools gain momentum. Here is what they’re watching.
How adversaries ultimately leverage generative AI, and what they do with it, remains a top but not overwhelming concern.
“I don’t expect some magical technical capability that is AI generated that will exploit all the things,” Joyce said.
Adversaries linked to nation states and criminal organizations are just starting to experiment with ChatGPT in their workflows, according to Joyce. Generative AI will eventually reduce the cycle and dwell time for attackers and it’s already enabling more effective phishing attacks.
AI will help threat actors rewrite code, changing its signature and attributes to give it a unique look and feel that will pose challenges for defenders in the near term, Joyce said.
“Buckle up,” Joyce said. A year from now “I think we’ll have a bunch of examples of where it’s been weaponized, where it’s been used and where it’s succeeded.”
Distrust and malign poisoning of AI
On the fringes of generative AI advancement, Joyce and his colleagues at the NSA are cautiously tracking how adversaries might sow distrust in AI or poison its well-intentioned operation, rendering its benefits ineffective.
“As people understand models are out there, there’s going to be folks who look to manipulate them,” Joyce said. “How do we get trust and assurance in some of the things that we’re going to start counting on in generative AI and other models?”
The NSA is also studying how defenders can use AI or machine learning to regain advantages.
“It’s showing real promise in being able to do rote things at scale — scanning across massive amounts of logs, being able to pull patterns out to be able to correlate known CVEs and other things into your data streams,” Joyce said.
Generative AI is especially impressive when used to add machine-like focus to troves of data and help defenders prioritize activities.
“That’s the accelerant for defense,” Joyce said. “It’s a huge amplification capability to make our defenders better, and I think you’ll see some of that emerge as well.”