Cybersecurity

Five Ways Cyber-Attackers Are Using AI to Their Advantage

By Dr. Christine Izuakor

Industries far and wide are raving about the ways that artificial intelligence can help transform the world into a more efficient and productive place. Within cybersecurity, AI is already leaving a lasting impact. Today it is being used to alleviate the industry’s talent shortage by automating processes, increase the accuracy of alerts, minimize false positives, cut down investigation times during incidents, eliminate the need for passwords, and more!

Cybersecurity is often described as a cat-and-mouse game: just as defenders develop solutions to combat evolving cyber-attacks, attackers are already looking for the next best way to bypass security and gain unauthorized access to resources. Just as AI provides many benefits to those fighting the good fight, it also enables cyber attackers to make their attacks bigger, better, and faster.

A key component of defending against cyber attacks is understanding the enemy and the kinds of tools and technology they are using. Here are a few examples of how cyber attackers are embracing artificial intelligence to further their malicious agendas:

Attackers use AI to bypass standard security controls, like CAPTCHA systems. 

CAPTCHA is a control that has grown in popularity over the last several years to address the risk of brute-force attacks and automated logins to online accounts. It has become a standard way to differentiate humans from machines. Users are presented with images or strings of text and prompted to interact with them in ways that machines should not be able to. Today, using AI, machines are indeed able to replicate that human behavior.

For example, if presented with 1,000 images of street lights, cars, and trees, AI can group them based on their similarities. Using this same capability, machines can be programmed to select the right images and beat CAPTCHA challenges. One test against Google reCAPTCHA found that attackers were able to beat the technology roughly 97% of the time.
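
To make the mechanism concrete, the short Python sketch below illustrates the underlying capability described here: using a pre-trained image classifier to label picture tiles such as cars or street lights. The model choice (a torchvision ResNet-18) and the file names are illustrative assumptions, not details from the article.

```python
# Sketch of the underlying capability: labeling image tiles with a
# pre-trained classifier. Model choice and file names are assumptions.
from PIL import Image
import torch
from torchvision import models, transforms

# Load a pre-trained ImageNet classifier (ResNet-18 as an example).
weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights)
model.eval()
labels = weights.meta["categories"]

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_label(path: str) -> str:
    """Return the most likely ImageNet label for a single image tile."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch)
    return labels[scores.argmax().item()]

# Example usage with hypothetical image tiles:
# for tile in ["tile_01.png", "tile_02.png", "tile_03.png"]:
#     print(tile, "->", predict_label(tile))
```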

Attackers are using artificial intelligence to impersonate trusted users.

Another common use case for artificial intelligence in cybersecurity is improving user and entity behavior analytics. By analyzing large volumes of data on what normal user and network behavior looks like, these solutions can identify deviations from the norm with enough context to assess the risk of a transaction and prompt action. On the same front, attackers can analyze big data about users and their regular transactions. Doing so can allow them to replicate a user’s typical behavior or stay within the parameters of what is considered “normal.”
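
As a rough illustration of the baselining idea behind such analytics (shown from the defender’s side), the sketch below flags activity that deviates sharply from a user’s historical pattern. The metric (daily upload volume) and the z-score threshold are assumptions made for this example, not details from the article.

```python
# Toy illustration of behavior baselining: learn what "normal" looks like
# for a user, then flag large deviations. Metric and threshold are assumed.
from statistics import mean, stdev

# Historical daily upload volumes (MB) for one user -- the learned baseline.
baseline = [120, 95, 130, 110, 105, 140, 125, 118, 97, 133]

def is_anomalous(observed_mb: float, history: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag the observation if it deviates from the baseline by more than
    z_threshold standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed_mb != mu
    return abs(observed_mb - mu) / sigma > z_threshold

print(is_anomalous(115, baseline))   # typical day -> False
print(is_anomalous(5000, baseline))  # sudden bulk upload -> True
```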

Attackers are using artificial intelligence to launch sophisticated and targeted phishing attacks.

Phishing attacks become more and more sophisticated when machines are able to mimic the style and knowledge of real communications. Even the most highly trained employees may not be able to distinguish genuine messages from phishing emails. In addition to impersonating trusted users, attackers are using AI to get much better at social engineering. They can sweep social media and online repositories of mass data for relevant information that helps make phishing attacks seem more legitimate and thus more convincing.

Attackers are using artificial intelligence to blend in on networks, and even cover their tracks.

Similar to mimicking legitimate user behavior, attackers can use AI to learn what normal network activity looks like and better blend in with it in order to evade security protection mechanisms. They are also finding creative ways to locate and delete or destroy logs to cover their tracks.

Attackers are using AI to build more advanced malware.

Artificial intelligence is changing the game when it comes to stealthy and persistent malware attacks. For example, a standard security practice for handling malware sent via email or other means is to open it in a sandbox first to see what it actually does. If it looks safe, the user can then open it on their own machine. To get around this, attackers are developing malware that is smart enough to distinguish a sandbox from a real computing environment, so that it does not launch until it reaches its intended target.

Another example is the battle to defend against ransomware. A standard practice to combat ransomware is to ensure that adequate backups of data are available. To get around this, attackers are creating ransomware that is smart enough to find backups on the network and infect them as well. When a company tries to restore data from its backups, it eventually learns that the backups are also compromised.
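
One way defenders operationalize the backup practice mentioned above is to record cryptographic hashes of backup files when they are written, so that silently modified or encrypted backups can be detected before a restore is attempted. The sketch below is a minimal, hypothetical illustration of that idea; the paths and manifest format are assumptions, not a description of any particular product.

```python
# Minimal sketch: record SHA-256 hashes of backup files in a manifest,
# then verify them later to detect tampered or encrypted backups.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(backup_dir: Path, manifest: Path) -> None:
    """Record a hash for every file in the backup set."""
    hashes = {str(p): sha256_of(p)
              for p in backup_dir.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(manifest: Path) -> list[str]:
    """Return the backup files whose contents no longer match the manifest."""
    recorded = json.loads(manifest.read_text())
    return [name for name, digest in recorded.items()
            if not Path(name).is_file() or sha256_of(Path(name)) != digest]

# Usage (hypothetical paths):
# write_manifest(Path("/mnt/backups/2024-06-01"), Path("manifest.json"))
# print(verify_manifest(Path("manifest.json")))
```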

Conclusion 

Combating cyber-attacks requires a good understanding of our adversaries and the ways they are evolving to attack us. Cyber attackers are embracing artificial intelligence to further their malicious agendas by building smarter malware, impersonating trusted users, breaking standard security controls, and more.

Furthermore, the most exciting yet concerning part of cybersecurity is that we do not know all of the possibilities yet. Cyber attackers are increasingly creative and persistent in the types of attacks they craft and launch. When it comes to artificial intelligence, they may be learning to use it for attacks well beyond our imagination. As an industry, we must be prepared by first understanding these potential use cases.


About the author

Dr. Christine Izuakor
Dr. Izuakor is the Senior Manager of Global Security Strategy and Awareness at United Airlines, where she plays a critical part in embedding cybersecurity in United’s culture. She is an adjunct professor of cybersecurity at Robert Morris University and independently helps corporations solve a diverse range of strategic cybersecurity challenges.

Insider Risk & Employee Monitoring Resources

Is IAM, SIEM, and DLP Enough to Combat Insider Risk?

Is IAM, SIEM, and DLP Enough to Combat Insider Risk?

Key Takeaways: Closing the Gaps in Traditional Security Tools: IAM, SIEM, and DLP are vital but insufficient in addressing insider risks. They focus on access control, event logs, and data protection without understanding the behavioral context that signals insider...

Insider Risk Management: Addressing the Human Side of Risk

Insider Risk Management: Addressing the Human Side of Risk

Key Takeaways: Proactive Over Reactive: Shifting from a reactive to a proactive approach is essential in managing insider risks. Continuous monitoring and analysis of human behavior are key to detecting potential insider risks before they escalate. The Power of AI:...