AI, Machine Learning Aren’t Cybersecurity’s Silver Bullets

Which of the following statements is true?

Artificial intelligence and machine learning are going to meet all of cybersecurity’s challenges and usher in a new age of safe and secure internet usage and hacker-proof transactions.

Cybercriminals will use the same technologies to break down the barriers created by artificial intelligence and machine learning, rendering them useless for cybersecurity.

There’s validity in both statements, and a bit of hyperbole as well. AI and machine learning will neither be the salvation of nor the demise of cybersecurity. The more likely scenario is that they will be used in a long-running game of one-upmanship, with machines learning to bypass security measures and security technology learning to block ever-more-clever attacks.

That’s the picture of cybersecurity’s future painted by Niall Browne of software manufacturer Domo.

“Imagine a world where intelligent criminal systems try to break into banks, hospitals and energy companies 24x7x365,” Browne told CSO. “Of course, the AI systems at these institutions will counter with hundreds of intuitive moves per second in an attempt to keep cybercriminals at bay.”

How AI, Machine Learning Fill Cybersecurity Need

Browne’s words may dampen the enthusiasm of those who hope AI advancements will eliminate their cybersecurity woes.

Midway through 2018, Wired reports, cyberattacks haven’t quite reached the volume seen at the same point in 2017, but “that’s pretty much where the good news ends.”

Corporations’ security measures aren’t keeping pace with the efforts of cybercriminals, and hackers, some backed by foreign governments, are targeting U.S. universities and even the nation’s infrastructure.

At the same time, the stress levels – and subsequent health concerns – of professionals on the cybersecurity frontlines are skyrocketing, while their numbers are dwindling: 300,000 cybersecurity positions in the U.S. are going unfilled, according to MIT Technology Review.

To some professionals, this is where the much-needed help of AI and machine learning can step in.

Wired reports that the 2018 RSA security conference in San Francisco was full of companies that promoted software and hardware to make all things cyber safe. Their sales pitch? AI that can instantly detect malware, guide incident response and spot intrusions before they begin.

What these security companies are referring to are machine learning algorithms trained on large data sets to “learn” what to watch for and how to react. Machine learning-based malware scanning applies this process to identifying malicious programs, and the same approach can help protect against spam, phishing and a host of other security issues.
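To make that concrete, here is a minimal, purely illustrative Python sketch of such a supervised scanner: a classifier trained on labeled examples so it learns to separate clean samples from malicious ones. The features and labels below are synthetic stand-ins for real file attributes (byte histograms, imported APIs, entropy and the like), not any vendor’s actual pipeline.

    # Minimal sketch of ML-based malware classification (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)

    # Synthetic dataset: 1,000 samples x 20 features, labeled 0 = clean, 1 = malware.
    X = rng.normal(size=(1000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy rule standing in for real labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Train on labeled examples so the model "learns" what to flag.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    print(classification_report(y_test, model.predict(X_test)))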

For example, Google uses machine learning in nearly all of its services, particularly deep learning, in which algorithms self-correct as they grow and evolve. Google’s Elie Bursztein told Wired that with deep learning, “We are preventing violent images, scanning comments, detecting phishing and malware in the Play Store. We use it to detect fraudulent payments, we use it for protecting our cloud, and detecting compromised computers. It’s everywhere.”

Be Aware of Limitations

Despite the promise of machine learning and other AI applications for cybersecurity, there are some dangerous limitations.

As Peter S. Vogel and Edward H. Block explain in an article on the Ecommerce Times website, artificial intelligence has the same “learning curve” as a human being: “Both have to see something or make a mistake before it can learn.”

In other words, an AI cybersecurity system will recognize an abnormality or illegal activity – but the damage may already have been done by the time it does so.

One way to foil or at least limit this damage is to use a diverse set of algorithms so that a single hacked algorithm doesn’t compromise the entire security system.
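As a rough illustration of that idea, and assuming the defender can afford to run several model families side by side, the sketch below uses a soft-voting ensemble in which no single algorithm decides the verdict on its own; an attacker who games one model still has to beat the other two.

    # Illustrative only: diverse model families voting on the same verdict.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] > 0).astype(int)  # synthetic labels: 1 = malicious, 0 = clean

    ensemble = VotingClassifier(
        estimators=[
            ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("logreg", LogisticRegression(max_iter=1000)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        voting="soft",  # average predicted probabilities across the three models
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:5]))

The design choice here is diversity of model families rather than copies of one algorithm: a manipulation that fools a tree ensemble will not necessarily fool a linear model or a support vector machine.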

Another issue is that AI and machine learning have become cybersecurity buzzwords, sparking fears that AI and machine-learning products will go to market without proper testing to discover deficiencies.

Speaking at the Black Hat USA conference, Raffael Marty of security firm Forcepoint warned about the dangers of products that rely on supervised learning, in which a company trains its algorithm on a data set where clean and malicious code have been tagged. One problem is that cybercriminals who gain access to the company’s data sets could re-tag them so that malware is listed as clean. Hackers also could learn what features the system is designed to flag as threats, strip those features from their code and sneak it past the algorithm.
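The label-flipping risk Marty describes can be shown with a toy experiment on synthetic data (not any real product’s data set): re-tag a share of the “malware” labels in the training set as clean, and the trained model’s ability to catch malware in the test set drops.

    # Toy demonstration of training-data poisoning (synthetic data only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = malware, 0 = clean

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    def malware_recall(train_labels):
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, train_labels)
        return recall_score(y_test, model.predict(X_test))

    # Re-tag 40% of the malware samples in the training set as clean.
    poisoned = y_train.copy()
    malware_idx = np.where(poisoned == 1)[0]
    flipped = rng.choice(malware_idx, size=int(0.4 * len(malware_idx)), replace=False)
    poisoned[flipped] = 0

    print("clean training:   ", round(malware_recall(y_train), 3))
    print("poisoned training:", round(malware_recall(poisoned), 3))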

Artificial intelligence and machine learning already are part of the cybersecurity toolkit, and they have much to contribute. However, security companies and their customers do themselves no favors when they treat these technologies as invincible. For AI and machine learning cybersecurity systems to be effective, manufacturers and users must be aware of their shortcomings, monitor how the products are used and learn how best to minimize the risk of attack.
