The Malicious Use of Artificial Intelligence in Cybersecurity

Scientists from the world's leading universities have sounded the alarm over the increasing malicious use of artificial intelligence in cybersecurity. Specifically, they are concerned that nation states are increasingly using their machine learning capabilities and research to craft faster and more accurate attacks against their targets.

Researchers from some of the world's most prestigious universities (Cambridge, Oxford, Stanford, and Yale), along with representatives from the cybersecurity industry and civil society organizations, published a paper earlier this year. Entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the study looked at a range of potential misuses of artificial intelligence, and of machine learning in particular.

Defining Artificial Intelligence

By now, everyone has heard of artificial intelligence (AI), but not everyone has a clear notion of exactly what it means. If you ask researchers in different fields for a definition of artificial intelligence, you might get mixed answers, but all of them will agree that it involves using computers to undertake the kind of analytical work that normally only a person could do.

With the power of computers tackling the problems, however, AI can solve problems at a much faster rate than a human could ever hope to. That makes AI an incredibly powerful tool, yet one that can be used nefariously just as easily as it can be used for good.

Ethics

At the heart of the issue, as the paper outlines, is the fact that AI has no ethical bias: it can be used for malicious purposes just as easily as for positive ones. The waters are muddied further by the fact that, while nation states are turning their AI capabilities towards a number of very positive goals, they are also increasingly using the technology to target other nation states and foreign corporations.

The concern is that as AI and machine learning progress, we will start seeing an increase in zero-day attacks. Not only will we see an uptick in the rate of attacks, but they will also be carried out much more precisely. Worse still, they will be sophisticated enough to evade the usual defenses and countermeasures.

Endpoint Protection

Currently, the most common use of machine learning is in the latest generation of endpoint protection systems. In other words, anti-malware and anti-virus software. We call it ‘machine learning’ because the AI algorithms are designed to learn. By feeding these algorithms with millions of example pieces of malware and associated behavior patterns, they can learn to analyze software and detect whether it is malicious or not.
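To make the idea concrete, here is a toy sketch of the "learn from labelled examples" approach, not a real anti-malware engine. It uses a nearest-centroid classifier over two hypothetical features per sample (say, a count of suspicious API calls and the file's byte entropy); all feature names and numbers are invented for illustration.

```python
# Toy nearest-centroid malware classifier. Each sample is a pair of
# hypothetical features: (suspicious-API-call count, byte entropy).

def centroid(samples):
    """Average the feature vectors to get a class prototype."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def train(malware, benign):
    """'Training' here is just computing one centroid per class."""
    return centroid(malware), centroid(benign)

def classify(sample, model):
    """Label a sample by whichever class centroid it sits closer to."""
    mal_c, ben_c = model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "malicious" if dist(sample, mal_c) < dist(sample, ben_c) else "benign"

model = train(
    malware=[(40, 7.8), (35, 7.5), (50, 7.9)],  # high API misuse, high entropy
    benign=[(5, 4.2), (8, 5.0), (3, 3.9)],      # low on both features
)
print(classify((45, 7.6), model))  # "malicious"
print(classify((4, 4.5), model))   # "benign"
```

Real systems learn from millions of samples and far richer features, but the principle is the same: the model generalizes from labelled examples rather than matching exact signatures.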

Before the application of machine learning, anti-virus software was mostly dependent upon databases of known malware. These anti-virus programs would check files on the user's computer against entries in their database and report any matches, or near matches. While the use of machine learning offers enormous potential for the cybersecurity industry, it does come with a couple of caveats: the results it produces, and the accuracy with which it performs, are determined by the quality of the learning algorithm and of the sample data it is fed.
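The older signature-database approach described above can be sketched in a few lines. The "database" below is a hypothetical set of file hashes, with an illustrative entry generated on the spot; real products use far larger databases and fuzzier matching.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-malicious files.
# The single entry is generated here purely for illustration.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"malicious-payload-v1").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    """Classic signature check: hash the file and look it up."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

print(is_known_malware(b"malicious-payload-v1"))  # True
print(is_known_malware(b"harmless-document"))     # False
```

The weakness is obvious from the code: change a single byte of the payload and the hash no longer matches, which is exactly the gap machine learning models aim to close.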

Such vulnerabilities mean that malicious actors can manipulate either the underlying algorithm or the learning data it uses. Poisoning the learning data, whether by feeding it false positives or by removing results, renders the sophistication of the machine learning approach redundant. The researchers are concerned that nation state actors might inadvertently be hastening our arrival at a point where machine learning algorithms are commonly defeated.
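A minimal sketch of what data poisoning looks like against the same kind of toy nearest-centroid detector (all numbers invented): an attacker who can inject malware-like samples mislabelled as "benign" drags the benign centroid toward the malware region, so a crafted malicious sample that the clean model would have caught slips past the poisoned one.

```python
# Data-poisoning sketch against a toy nearest-centroid detector.
# Features per sample are hypothetical: (suspicious-API-call count, byte entropy).

def centroid(samples):
    """Average the feature vectors to get a class prototype."""
    return tuple(sum(s[i] for s in samples) / len(samples) for i in range(2))

def classify(sample, mal_c, ben_c):
    """Label a sample by whichever class centroid it sits closer to."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "malicious" if dist(sample, mal_c) < dist(sample, ben_c) else "benign"

malware = [(40, 7.8), (35, 7.5)]
benign = [(5, 4.2), (8, 5.0)]

# Attacker injects malware-like samples mislabelled as benign.
poisoned_benign = benign + [(38, 7.6), (42, 7.7), (36, 7.4)]

attack = (30, 7.0)  # a crafted sample sitting near, but off, the malware cluster
print(classify(attack, centroid(malware), centroid(benign)))           # "malicious"
print(classify(attack, centroid(malware), centroid(poisoned_benign)))  # "benign"
```

The same logic scales up: corrupt enough of the training data and even a sophisticated model learns the wrong decision boundary.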

Staying Safe

None of this is to say that you should abandon your antivirus software - you should definitely have such a tool installed on your computer. However, it is a good idea to supplement it with other forms of protection. A virtual private network (VPN) is another useful layer of security. A VPN improves your online privacy and security by encrypting any data you send to or receive from the internet. Better still, many providers also offer additional cybersecurity functions.

Artificial intelligence is one of the most important areas of current computing research. It is also an area of technology with enormous potential benefits to offer our society. However, we should all be somewhat worried about the direction the field is currently heading in.

About the Guest Author:

Guest Author
Harold is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.
