Cybersecurity and the rise of Artificial Intelligence

By: Angela Polania

AI Data Poisoning and Sabotage 

With the rise of Artificial Intelligence (“AI”), the cybersecurity industry needs to be alert to emerging cases of attackers poisoning AI/ML training data in business applications to disrupt decision-making and operations. The impact could be severe if, for example, compromised data feeds an AI system that automates supply chain decisions: a sabotaged data set could result in a serious under- or oversupply of product.

“Expect to see attempts to poison the algorithm with specious data samples specifically designed to throw off the learning process of a machine learning algorithm,” says Haiyan Song, senior vice president and general manager of security markets for Splunk. “It’s not just about duping smart technology, but making it so that the algorithm appears to work fine – while producing the wrong results.”
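
To make the threat concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy scikit-learn classifier. The dataset, the model, and the poison rates are illustrative assumptions, not drawn from any real incident:

```python
# Illustrative only: shows how flipping a fraction of training labels
# ("data poisoning") degrades a simple supply-forecast classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "demand signal" features and a clean over/under-supply label.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(poison_rate: float) -> float:
    """Train on labels with a fraction flipped by a hypothetical attacker."""
    y_poisoned = y_tr.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # attacker flips these labels
    model = LogisticRegression().fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)  # evaluated against clean test labels

for rate in (0.0, 0.2, 0.4):
    print(f"poison rate {rate:.0%}: test accuracy {accuracy_with_poison(rate):.2f}")
```

As Song notes, the danger is that the poisoned model still trains and deploys without error; only its outputs drift.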

Deepfake Audio Takes BEC Attacks to the Next Level

Business Email Compromise (BEC) has already cost organizations billions of dollars as attackers pose as CEOs and other senior executives to trick the people in charge of bank accounts into making fraudulent transfers. Cybercriminals are now taking BEC attacks to the next level with AI technology and the telephone. In one reported incident, an attacker used deepfake audio to impersonate a company CEO over the phone, tricking an employee at a British energy firm into wiring $240,000 to a fraudulent bank account. Experts believe 2020 will see increased use of AI-powered deepfake audio to carry out such BEC-style attacks.

“Even though many organizations have educated employees on how to spot potential phishing emails, many aren’t ready for voice to do the same as they’re very believable and there really aren’t many effective, mainstream ways of detecting them,” says PJ Kirner, CTO and founder of Illumio. “And while these types of ‘voishing’ attacks aren’t new, we’ll see more malicious actors leveraging influential voices to execute attacks next year.”
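
Until detection matures, a basic process control blunts this attack: no high-value transfer should be authorized on a voice request alone. Below is a hypothetical sketch of such a policy gate; the threshold, the data model, and the confirm_via_known_channel stub are all illustrative assumptions:

```python
# Illustrative control: wire transfers above a threshold require
# confirmation over a separately maintained channel, so a convincing
# deepfake phone call alone cannot authorize the payment.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # hypothetical policy limit in USD

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    destination_iban: str

def confirm_via_known_channel(requester: str) -> bool:
    # Placeholder: in practice, call back on a number from the internal
    # directory, never one supplied in the request itself.
    print(f"Callback required: verify '{requester}' via the internal directory.")
    return False  # deny until a human completes the callback

def authorize(request: TransferRequest) -> bool:
    if request.amount_usd < APPROVAL_THRESHOLD:
        return True
    # High-value requests always need out-of-band confirmation,
    # no matter how convincing the caller sounded.
    return confirm_via_known_channel(request.requester)

print(authorize(TransferRequest("CEO (by phone)", 240_000, "GB00TEST")))
```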

AI-Powered Malware Evasion 

Deepfakes are just one way bad actors will leverage AI to perpetrate attacks. Security researchers are on edge waiting to discover AI-powered malware evasion techniques. Some believe this will be the year they discover the first malware using AI models to evade sandboxes.

“Instead of using rules to determine whether the ‘features’ and ‘processes’ indicate the sample is in a sandbox, malware authors will instead use AI, effectively creating malware that can more accurately analyze its environment to determine if it is running in a sandbox, making it more effective at evasion,” predicts Saumitra Das, CTO of Blue Hexagon.
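
Conceptually, the shift Das describes is from hard-coded rules to a learned model over environment features. The toy sketch below illustrates the idea from a researcher's perspective; the features and data are entirely synthetic assumptions:

```python
# Conceptual illustration: a learned classifier over environment
# features rather than fixed sandbox-detection rules. Synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical features: [cpu_cores, uptime_hours, recent_user_files].
# Sandboxes tend to have few cores, short uptime, sparse user activity.
sandbox = np.column_stack([
    rng.integers(1, 3, 500),      # cpu_cores
    rng.uniform(0, 1, 500),       # uptime_hours
    rng.integers(0, 5, 500),      # recent_user_files
])
real_host = np.column_stack([
    rng.integers(4, 17, 500),
    rng.uniform(5, 400, 500),
    rng.integers(50, 2000, 500),
])

X = np.vstack([sandbox, real_host])
y = np.array([1] * 500 + [0] * 500)  # 1 = sandbox

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[2, 0.5, 1]]))    # likely flagged as a sandbox
print(clf.predict([[8, 120, 600]]))  # likely flagged as a real host
```

For defenders, the implication is that sandboxes must look statistically like real hosts, not just pass a checklist.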

Evolution of Biometrics 

The fraud prevention side of financial services continues to evolve in its use of AI and biometric technology to onboard and authenticate customers. Financial institutions are rapidly iterating on authentication mechanisms that use AI and facial recognition to scan, analyze, and confirm online identities by comparing mobile camera captures against on-file, government-issued IDs. But bad actors continue to evolve as well and will likely leverage AI to create deepfakes designed to trick these systems.

“In 2020, we will see an increase in deepfake technology being weaponized for fraud as biometric-based authentication solutions are widely adopted,” says Robert Prigge, president of Jumio.
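
At their core, these systems compare a face embedding derived from the ID photo against one derived from a live selfie. The sketch below shows that matching step in its simplest form; embed_face is a hypothetical placeholder for a real face-recognition model, and the threshold is illustrative:

```python
# Minimal sketch of embedding-based face matching. `embed_face` stands
# in for a real face-recognition model; the threshold is illustrative.
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative cosine-similarity cutoff

def embed_face(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a trained embedding network.
    vec = image.flatten().astype(float)[:128]
    return vec / np.linalg.norm(vec)

def same_person(id_photo: np.ndarray, selfie: np.ndarray) -> bool:
    a, b = embed_face(id_photo), embed_face(selfie)
    return float(a @ b) >= MATCH_THRESHOLD  # cosine similarity of unit vectors

rng = np.random.default_rng(3)
id_photo, selfie = rng.random((64, 64)), rng.random((64, 64))
print(same_person(id_photo, selfie))
```

Deepfakes typically target the liveness check that precedes this comparison, which is one reason matching alone is not considered enough.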

Differential Privacy Gains 

The combination of big data, AI, and strict privacy regulations challenges security and privacy professionals to innovate better ways of protecting the customer analytics that feed AI applications. Fortunately, other forms of AI can help accomplish this.

“In the coming year, we will see practical applications of AI algorithms, including differential privacy, a system in which a description of patterns in a dataset is shared while withholding information about individuals,” says Rajarshi Gupta, head of artificial intelligence at Avast. Gupta says differential privacy will allow companies “to profit from big data insights as we do today, but without exposing all the private details” of customers and other individuals.
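
The textbook building block of differential privacy is the Laplace mechanism: publish an aggregate statistic with noise calibrated to how much any one individual could change it. Here is a minimal sketch, with an illustrative epsilon and synthetic data:

```python
# Laplace mechanism: release a count with noise scaled to its
# sensitivity, so no single individual's presence is revealed.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values: np.ndarray, epsilon: float) -> float:
    true_count = float(np.sum(values))
    sensitivity = 1.0  # one person changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

customers_opted_in = rng.integers(0, 2, size=10_000)  # synthetic data
print(dp_count(customers_opted_in, epsilon=0.5))  # noisy, shareable count
```

Smaller epsilon means more noise and stronger privacy; the aggregate pattern survives while any one record stays hidden, which is exactly the trade-off Gupta describes.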

AI Ethics and Fairness

Another pressing AI issue involves ethics and fairness. These questions matter to cybersecurity leaders, who are tasked with maintaining the integrity and availability of systems that rely on AI to operate.

“We are going to get a lot of new lessons from the usage of AI in cybersecurity this coming year. The recent story about Apple Card offering different credit limits for men and women has pointed out that we don’t readily understand how these algorithms work,” says Todd Inskeep, principal of Cyber Security Strategy for Booz Allen Hamilton and RSA Conference Advisory Board Member. “We are going to find some hard lessons in situations where an AI appeared to be doing one thing and we eventually figured out the AI was doing something else, or possibly nothing at all.”
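
A first step toward the understanding Inskeep calls for is simply measuring outcomes across groups. The sketch below computes a disparate-impact ratio on synthetic approval data; the four-fifths threshold is a common heuristic, not a legal standard:

```python
# Illustrative fairness check: compare an approval rate across two
# groups, echoing the Apple Card credit-limit story. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=1000)
approved = np.where(group == "A",
                    rng.random(1000) < 0.60,   # group A approval rate
                    rng.random(1000) < 0.45)   # group B approval rate

rates = {g: approved[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
print("review model" if ratio < 0.8 else "within heuristic bounds")
```

A failing ratio does not prove discrimination, but it flags exactly the kind of opaque behavior Inskeep warns will take time to untangle.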

The Need for AI

The world has woven technology into almost every aspect of daily life, and with that comes a considerable increase in risk. AI, however, can help cybersecurity leaders keep pace with the sheer volume of cyberthreats facing corporate and personal applications. That is why AI will continue to grow into an essential part of cybersecurity.