While artificial intelligence and machine learning aid in threat detection and help fend off attacks, security decision making falls to the professional.
Yet AI has a role in security, and like other areas of business, security teams are becoming accustomed to its place, benefits and limitations alike.
AI and ML have to find their fit within security systems, but that fit requires training the technology first. For insider threats, it's theoretically easy for ML to pick up on anomalous activity, but preparing a model for real-life scenarios proves challenging.
"When we look at creating these baselines to begin with, we have to make sure that we have a clean slate to start with. And most organizations that have been around for quite some time are not going to have that," said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, while speaking during the virtual Cybersecurity Summit by the National Cyber Security Alliance and Nasdaq earlier this month.
Building out datasets for ML to reference is an ongoing process that requires companies to consider how much data they need to understand "trending of behavior over time," said Foss. "All of us are really just kind of scratching the surface in terms of what the possibilities are in terms of defense, or AI and ML."
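The baselining Foss describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's detection logic: summarize a user's historical activity, then flag days that deviate sharply from the norm. The data and threshold are invented for the example.

```python
from statistics import mean, stdev

def build_baseline(daily_counts):
    """Summarize a user's historical activity (e.g. files accessed per day)."""
    return mean(daily_counts), stdev(daily_counts)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from the norm."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical 30-day history for one user: roughly 40 files accessed per day.
history = [38, 42, 40, 41, 39, 43, 40, 37, 44, 40,
           41, 39, 42, 38, 40, 43, 41, 40, 39, 42,
           40, 41, 38, 43, 39, 40, 42, 41, 40, 39]
baseline = build_baseline(history)

print(is_anomalous(41, baseline))   # an ordinary day: False
print(is_anomalous(400, baseline))  # sudden mass file access: True
```

The sketch also makes Foss's "clean slate" point concrete: if the 30-day history already contains malicious activity, the baseline absorbs it, and the model learns to treat that behavior as normal.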
Then AI's trustworthiness is continuously tested as businesses pile on AI capabilities. Organizations need to ensure their ML is learning from unbiased and untainted data. Security experts are welcoming the advantage of AI while remaining wary of the danger it presents.
Companies are finding AI's value wherever they adopt it. By 2024, 75% of businesses will have graduated from the pilot stage of AI into operationalization, according to Gartner.
"There is no shortage of purposes when it comes to AI," said Elham Tabassi, chief of staff of Information Technology Laboratory at National Institute of Standards and Technology (NIST), on the panel.
Businesses tend to have three goals in mind when integrating AI into operations, said Tim Bandos, CISO of Digital Guardian, while speaking on the panel:
Automate business processes for improved efficiency
Obtain insights from data analysis
For security, AI can prevent outages using anomaly detection, cut through noisy alerts to single out the ones worth listening to, and recognize suspicious IP addresses faster than humans.
Deep learning and natural language processing (NLP) can absorb data from blogs or breaking news to pick up on emerging cyber trends or to identify sentiments in text, said Foss. "That can be very useful for things like, identifying phishing attacks, or hostility between communication channels."
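The lexical signal Foss points to can be illustrated with a toy scorer. Real systems learn these cues and their weights from training data; the keyword lists and threshold below are invented for the example.

```python
import re

# Hypothetical cue lists; production NLP models learn such signals from data.
URGENCY_CUES = {"urgent", "immediately", "suspended", "verify", "expires"}
HOSTILITY_CUES = {"demand", "final", "warning", "consequences"}

def suspicion_score(message):
    """Crude lexical score: fraction of known phishing/hostility cues present."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    hits = len(words & URGENCY_CUES) + len(words & HOSTILITY_CUES)
    return hits / (len(URGENCY_CUES) + len(HOSTILITY_CUES))

print(suspicion_score("Your account is suspended. Verify immediately!"))
print(suspicion_score("Lunch at noon on Friday?"))  # 0.0
```

A keyword match is trivially evaded, which is why the deep learning and NLP approaches Foss mentions model phrasing and sentiment rather than fixed word lists.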
Cybercriminals latch onto possibilities
Even with an abundance of opportunities and solution providers rich with AI offerings, cybercriminals stay a step ahead.
"This means that we also need to kind of frequently retrain malware detectors in some use cases," said Bandos. Adversaries are adopting advanced techniques, including impersonation compression, evasion, obfuscation and polymorphism.
Russia-based hacker group Fancy Bear, for example, used files where "99% of the code base was leveraging legitimate libraries and compilers," said Bandos. "It would appear to be benign to the AI or the [ML] algorithm that was looking at that particular sample."
In 2019, researchers found Fancy Bear implanting a multi-threaded DLL backdoor, giving the threat actor "full access to, and control of, the target host," according to BlackBerry's Cylance. The implant, in part, allows the bad actors to hide among legitimate code, fooling the AI meant to deflect it.
Because bad actors are using AI against the companies that rely on it, Tabassi reinforced the importance of data's reliability. Garbage in, garbage out "is really true here," she said. Hackers try to leverage data poisoning techniques to "fool" the AI, kneecapping its ability to stop, or detect, a threat.
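Label poisoning, one of the techniques Tabassi alludes to, can be shown with a deliberately simple detector. This is a hypothetical, one-feature sketch, not a real product's model: an attacker slips mislabeled samples into the training set so the "benign" profile drifts toward attack behavior.

```python
from statistics import mean

def train_centroids(samples):
    """Nearest-centroid 'detector': average feature value per label."""
    benign = mean(x for x, y in samples if y == "benign")
    malicious = mean(x for x, y in samples if y == "malicious")
    return benign, malicious

def classify(x, centroids):
    benign, malicious = centroids
    return "benign" if abs(x - benign) < abs(x - malicious) else "malicious"

# Hypothetical single-feature data, e.g. outbound connections per minute.
clean = [(2, "benign"), (3, "benign"), (4, "benign"),
         (40, "malicious"), (50, "malicious"), (60, "malicious")]

# Poisoned training set: high-rate samples mislabeled "benign" drag the
# benign centroid upward, so moderately aggressive attacks look normal.
poisoned = clean + [(45, "benign"), (55, "benign"), (65, "benign")]

print(classify(35, train_centroids(clean)))     # caught: "malicious"
print(classify(35, train_centroids(poisoned)))  # missed: "benign"
```

The same behavior is classified differently depending only on what the model was fed, which is the sense in which garbage in, garbage out "is really true here."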
During pen testing exercises, Foss used "data ruses," which "misuse certain parts of the environment to distract people and get their attention over there when we're doing something over here." It's a similar concept for AI and ML, he said.
As algorithms make decisions for security analysts, hackers will abuse the models where they can. "This is something we see very prevalent now within trading, specifically within cryptocurrency," said Foss. For bots performing automated trading, hackers can reverse-engineer the underlying model.
"All these people check into their accounts, they have no money there [because] it's all been spent by these bots automatically without them even having control over it," said Foss.
AI gets commandeered
AI is not only for the enterprise. In some cases, bad actors are using companies' AI capabilities against them.
Hackers are running their attacks automatically. Malware is capable of adjusting and "making logical decisions" depending on the environment it's dropped in, according to Foss. It's an evolution of decades-old worming capabilities.
On top of traditional malware learning its environment automatically, credential stuffing applications have added AI image recognition and automated CAPTCHA capabilities. There is an increasing number of crimeware groups "creating these commercial grade credential stuffing applications that have these light AI/ML components baked into them," said Foss.
But some AI and ML innovations are outpacing safeguards to protect them from malicious use; case in point, the generative adversarial network (GAN).
GAN is a "class of algorithms used to create some sort of synthetic data by continuously improving the statistical model of the data distribution," said Bandos. GAN was meant to give AI the power of image creation, or imagination, and now hackers are using the network to create "command and control information to mimic normal traffic."
Because the anomalies slide by undetected, the activity can be done in a matter of minutes. "I've worked cases where [malware is] in and out within a matter of 30 to 40 minutes because they're leveraging these capabilities," said Bandos. "'Gone in 60 Seconds' — that should be a sequel on malware."
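The adversarial idea behind the GAN technique Bandos describes can be caricatured without neural networks. This is not a real GAN, which pits two networks against each other via gradient descent; it is a gradient-free sketch with invented numbers, showing a "generator" tuning synthetic command-and-control beacon timing until a simple "discriminator" can no longer tell it from normal traffic.

```python
import random

random.seed(7)

# Hypothetical normal traffic: check-in intervals averaging about 60 seconds.
real = [random.gauss(60, 5) for _ in range(200)]
real_mean = sum(real) / len(real)

def discriminator(samples):
    """Flags traffic whose average interval strays far from the normal baseline."""
    avg = sum(samples) / len(samples)
    return abs(avg - real_mean) > 2.0  # True means the traffic looks suspicious

# The generator starts with an obviously robotic 10-second beacon and nudges
# its timing whenever the discriminator catches it: a crude stand-in for the
# generator/discriminator feedback loop in a real GAN.
gen_mean = 10.0
while discriminator([random.gauss(gen_mean, 5) for _ in range(200)]):
    gen_mean += 1.0  # move the synthetic traffic toward the real distribution

print(round(gen_mean))  # settles near 60: synthetic C2 now mimics normal beacons
```

Once the loop ends, the synthetic command-and-control traffic is statistically indistinguishable from the baseline this particular detector learned, which is how such mimicry slips past anomaly detection.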