Editor’s note: Cybersecurity Dive is delving into how ChatGPT and generative AI are making a mark on security. Here’s how executives expect adversaries to exploit the technology to inflict damage. Check out our previous coverage on how generative AI might bolster defense.
ChatGPT has taken the world by storm, and as it makes waves for consumers and businesses alike, security experts are wary of threat actors turning to generative AI for nefarious purposes. It’s a cause for worry, but not full-on panic.
Since the generative artificial intelligence chatbot was released in November, Palo Alto Networks’ Unit 42 has detected up to 118 malicious URLs related to ChatGPT daily and domain squatting related to the tool has surged 17,818%.
ChatGPT is “one of the fastest-growing consumer applications in history,” the threat intelligence firm said Thursday in a blog post. “The dark side of this popularity is that ChatGPT is also attracting the attention of scammers seeking to benefit from using wording and domain names that appear related to the site.”
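Squatted domains of this kind are often just small mutations of the brand name, which makes them detectable with simple string-similarity heuristics. As an illustrative sketch only — the domains, the distance threshold, and the `looks_like_squat` helper below are hypothetical examples, not part of Unit 42’s methodology:

```python
# Illustrative sketch: flag newly registered domains whose first label sits
# within a small edit distance of "chatgpt" -- a common typosquatting signal.
# All domains below are hypothetical examples.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_squat(domain: str, target: str = "chatgpt", max_dist: int = 2) -> bool:
    """Flag a domain whose leftmost label nearly matches the target brand."""
    label = domain.split(".")[0].lower()
    return label != target and levenshtein(label, target) <= max_dist

candidates = ["chatgpt.com", "chat-gpt.com", "chatgtp.net", "example.org"]
flagged = [d for d in candidates if looks_like_squat(d)]
# "chat-gpt" and "chatgtp" are within 2 edits of "chatgpt"; "example" is not.
```

Real brand-protection systems layer far more signals on top (registration date, certificate issuance, homoglyphs), but even this crude filter shows why a surge like the one Unit 42 measured is easy to spot in bulk DNS data.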
The earliest and most obvious threats involving ChatGPT include sharp phishing email composition, some malware coding, and the gathering and contextualization of data for more effective targeting.
More startling dangers, such as autonomous attack mechanisms and sophisticated malware coding, have yet to materialize.
There’s been a fair amount of sensationalism about how ChatGPT and other generative AI are going to fundamentally change the landscape of security, said Jon France, CISO at (ISC)2. “I’m not sure that’s quite the case yet.”
The gap between hype and reality is doing little to allay concerns.
Nearly three-quarters of IT professionals considered ChatGPT a potential threat and shared concerns about the technology just two months after its release, according to a BlackBerry survey conducted in January. More than half of the respondents predicted ChatGPT would be used in a successful cyberattack sometime this year.
Worries abound, but cybersecurity professionals currently see ChatGPT doing more to bolster defense than tear it down.
“The few negatives that come out of it, we tend to really emphasize,” said Justin Fier, SVP of red team operations at Darktrace.
“There’s always somebody out there that is going to take something and find a way to use it for bad, and there’s just no way around that,” Fier said. “We should just assume it’s probably already well underway.”
Worries within reason abound
While ChatGPT could enable threat actors to write more targeted and effective code, the generative AI tool hasn’t yet written a piece of fully fledged, functioning malware, according to France.
For now, the threat is more specific and narrow.
“If you’ve got a specific target in mind, the model probably knows quite a lot about the person, if they’re relatively well known,” France said. “It can contextualize quite quickly and potentially give you some vectors that may not be obvious to the attacker. It’s a little bit like Google on steroids in that case.”
Adversaries could also use ChatGPT’s problem-solving capabilities to learn specific routes on firewalls, for example, or identify unpatched vulnerabilities to determine what to target.
“I don’t think we’re at a point where it’s going to write the next zero-day that’s going to get Log4Shell status around the globe,” Fier said.
The bigger, more immediate worry involves data mining. ChatGPT’s ability to find patterns in large datasets and marry those findings with patterns in other datasets could result in some scary outcomes, according to Fier.
Cybersecurity professionals are reluctant to deem ChatGPT a force that will predominantly be used to inflict damage, but they’re not yet willing to count that out as a possibility.
“I do think it's a positive that we're having these conversations, even if some of them feel a little bit alarmist,” Fier said. “This is the time to have these conversations. AI is not going anywhere. It is a new arms race across different nations.”