In 1956, at a workshop on the campus of Dartmouth College, in Hanover, New Hampshire, the field of artificial intelligence (AI) was born. Attendees were buoyant. MIT cognitive scientist Marvin Minsky was quoted as saying, “Within a generation […] the problem of creating ‘artificial intelligence’ will substantially be solved.”

This prediction turned out to be overzealous, but Minsky and his colleagues believed it wholeheartedly. What, then, is different today? What makes the current dialogue about AI more relevant and believable? How do we know that this is not another case of humans overestimating the development of a technology?

For one thing, AI is already here. In its narrower form, artificial intelligence already pervades industry and society. It is the ‘intelligence’ behind facial recognition, big data analysis and self-driving cars. Beyond narrow AI, however, is artificial general intelligence (AGI), the adaptable type of intelligence humans have. This is what scientists and commentators are usually referring to when they argue about the imminent arrival of AI.

The arguments usually revolve around three questions: 1) when will AI happen, 2) what will AI mean for humankind, and 3) what are the risks of AI?

Answers to the first question vary wildly depending on who you listen to. Elon Musk, for example, once said,

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast – it is growing at a pace close to exponential.”

Author and futurist Ray Kurzweil predicts that the technological singularity – the moment when AI becomes smarter than humans – is just decades away. Other AI experts disagree, claiming that true artificial intelligence is impossible before 2100. Rhetoric aside, the debate around the first question matters because the answer influences how long we have to get ready.

Ready for what? Well, that depends on your answer to question two – what will AI mean for humankind? In a broader sense, this question just leads to more questions – will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? For more on this discussion see my article Why AI is Neither the End of Civilization nor the Beginning of Nirvana.

There are many people, including the likes of Elon Musk, Bill Gates and Stephen Hawking, who have voiced concerns about the answers to these questions. Their fears are mainly of the ‘Killer Robot’ variety – the concern that machines, once conscious, would be so much more intelligent than humans that we would lose power to them, with unpredictable consequences. This is the narrative that forms the backbone of popular-culture dystopian fantasies, like those found in The Terminator, The Matrix, I, Robot, Ex Machina and Blade Runner. In fact, this theme goes all the way back to Samuel Butler’s 1872 novel, Erewhon. Others, like Kurzweil, welcome the imminent integration of organic and artificial intelligence as an evolutionary pathway for humankind.

Regardless of whether true AI is a few years or a few decades away, and regardless of whether AI includes inherent threats we are not yet aware of, most experts agree that it is important to begin preparing now for the age of AI. That means creating the oversight, regulation and guidance needed to allow AI to flourish safely. And that begins with understanding and mitigating the threats of the AI that is already at our fingertips. With so much narrow AI capability online, and waiting to come online with the emergence of 5G, understanding the risks and learning how to face them is vital.

How do we currently use AI?

Merriam-Webster defines artificial intelligence as:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

But those actively developing AI usually have a more pragmatic definition focused on specific objectives and uses. Amazon, for example, defines AI as “the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.”

This is more accurately a definition of machine learning (ML), a sub-domain of AI, and it describes most of the ways in which AI is being used today. Pattern recognition in large amounts of data can be used to identify faces in CCTV footage or to spot specific medical anomalies. In the UK, for example, Google’s artificial intelligence company DeepMind is collaborating with the National Health Service. DeepMind’s AI software is being used to diagnose cancer and eye disease from patient scans. ML is also being used in other applications to spot early signs of conditions such as heart disease and Alzheimer’s.

AI’s big data-processing capability is also being used in the healthcare sector to analyse huge amounts of molecular information to find potential new drug candidates – a taxing and time-consuming process for humans.

In the shipping industry, the Port of Rotterdam is leading with progressive AI initiatives. New ML models have been developed to predict a vessel’s arrival time at the wharfside – a notoriously difficult task that requires consideration of multiple port and vessel processes. The application, Pronto, allows the port to better manage its resources and move freight through its facilities faster. Vessel waiting times have already been reduced by 20%. With further development, the Port of Rotterdam hopes Pronto’s self-learning capabilities can be extended to predict the arrival time of ships seven, or even 30, days away.
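
To make this concrete, here is a minimal sketch of how such an arrival-time model could be trained. The features, data and model choice are invented for illustration and are not drawn from the real Pronto system.

```python
# Illustrative sketch only: a gradient-boosted regression model for vessel
# ETA prediction. Feature names and data are invented for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: distance to port (nm), current speed (knots),
# expected berth congestion (0-1) and weather severity (0-1).
distance = rng.uniform(5, 500, n)
speed = rng.uniform(8, 22, n)
congestion = rng.uniform(0, 1, n)
weather = rng.uniform(0, 1, n)

# Synthetic "ground truth": sailing time plus delays, with noise.
hours_to_arrival = distance / speed + 6 * congestion + 3 * weather + rng.normal(0, 1, n)

X = np.column_stack([distance, speed, congestion, weather])
X_train, X_test, y_train, y_test = train_test_split(X, hours_to_arrival, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, pred):.2f} hours")
```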

In smart buildings, AI is being used for predictive energy optimization, learning when to heat and cool a building to find the best balance between temperature conditions for its inhabitants and energy costs. Machine learning is also supporting fault detection and preventative maintenance by processing continuous streams of input and output data for building operations. In homes, AI is most popularly recognized in virtual assistants such as Alexa or Siri, though such services are becoming increasingly utilized in workspaces too.
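
As a rough illustration of the fault-detection side, the sketch below flags unusual sensor readings with an isolation forest. The sensor names and values are hypothetical; a real building-management system would draw on far richer data streams.

```python
# Illustrative sketch: anomaly detection on building sensor data using an
# isolation forest. Sensor names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal operation: supply air temperature (C) and fan power draw (kW).
normal = np.column_stack([rng.normal(18, 1.0, 500), rng.normal(4.0, 0.3, 500)])

# A handful of faulty readings: overheating with abnormal power draw.
faults = np.column_stack([rng.normal(28, 1.0, 5), rng.normal(7.5, 0.3, 5)])

readings = np.vstack([normal, faults])

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)
flags = detector.predict(readings)          # -1 marks suspected anomalies

print(f"Flagged {np.sum(flags == -1)} of {len(readings)} readings for inspection")
```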

Zooming out to smart cities, AI is already being used to optimize city traffic, parking and public transport. It is assisting with public safety and managing optimal flow of resources like energy and water.

AI, or ML, applications are being integrated at individual, group, industrial, social, national and international levels. They are increasingly embedded in the technology that we invite into our most private spaces, the technology that runs the mechanics of our work day, the technology that manages how we obtain access to food, water, energy and safety.

Of course, this is just the beginning.

What is the future of AI?

As AI becomes more sophisticated, helping businesses operate more efficiently, governments gain more insight and individuals lead easier lives, it will be adopted with increasing speed. At the same time, the surrounding ecosystems will evolve to support that faster growth of AI. The rollout of 5G in the coming years will inexorably alter the techno-human landscape.

For the first time, extreme technologies like autonomous vehicles, integrated virtual reality (VR) and augmented reality (AR), and fully smart cities will be possible. These will require 5G’s high-speed, low-latency capabilities, but they will also rely on AI’s ability to process massive volumes of data, thereby driving faster adoption of AI. As AI evolves, this processing power will also turn into decision-making power, as humans increasingly trust machines to make decisions on their behalf.

But machines can make mistakes. They may perfectly process the data we feed them, but if we feed them poor data they will produce poor results. ‘Garbage in, garbage out,’ says the old computer science adage. And as we hand over more and more influence to AI systems, the stakes rise.

There is a real risk that we will put too much trust in the smart systems we are building. Once AI applications take on responsibility for processes with important private and social ramifications – such as assessing your credit score, your suitability for a job or a criminal’s likelihood of reoffending – the consequences of error escalate.

Even if one believes that fears of an AI takeover are alarmist (and they may not be), there is still cause for prudence. A knife is a neutral instrument. Depending on who holds it, it may be used to cause harm or to do good. It could stab, or it could prepare a meal. The knife represents an entire spectrum of latent potential waiting to be realized by its operator.

In the same way, AI will increasingly be defined by those who use it. Already, AI applications have been shown to reflect the prejudices of those who built them, with potentially significant consequences for individuals and society. It is important to note that these effects are the result of humans acting unconsciously. What, then, is the potential for humans using AI with conscious intent?

What are the risks of AI?

Contrary to the common Hollywood representation of vengeful machines bent on eradicating humankind, we are unlikely to see superintelligent AI exhibit human emotions. There is no reason, then, to expect AI to be particularly kind or particularly malicious. Any danger in AI will depend on the humans who develop or implement it.

The Future of Life Institute, which is focused on keeping technology – especially AI – beneficial, suggests there are two primary scenarios in which AI could be dangerous:

  1. The AI is programmed to do something devastating.

The most common, and possibly most feared, example of this is found in autonomous weapons: weapons that operate independently of any controller in intelligent and co-ordinated ways. Though such weapons may largely spell the end of human-to-human warfare, the risks to humanity at large are extreme. It doesn’t take much imagination to see what devastation could be wrought by armies of machines with no inherent conscience programmed to kill. We are already seeing the beginnings of an AI arms race between major powers such as China, the USA and Russia. Russian leader Vladimir Putin summed up the spirit of this competition when he said:

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal.

This could happen when we set a goal for the AI, but the AI’s interpretation of that goal, and of how to get there, does not fully align with ours. Unless specifically programmed to do so, AI will not necessarily avoid actions that are illegal or harmful in pursuit of the goal it has been given.

Both scenarios pose significant potential threats. And the more integrated and autonomous our systems become, especially in the hyper-connected 5G-verse, the more difficult it becomes for cybersecurity professionals to manage these risks.

AI risks and benefits for cybersecurity

As with our metaphorical knife, AI can be wielded in service of or in conflict with cybersecurity.

Because it demands so much manpower, cybersecurity has already benefited from AI and automation to improve threat prevention, detection and response. Preventing spam and identifying malware are already common examples.
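
To give a flavour of how such ML-driven filtering works, here is a toy spam classifier built on word counts; the training messages are invented examples, and production filters are of course far more sophisticated.

```python
# Illustrative sketch: a tiny naive Bayes spam filter over word counts.
# The training messages are invented toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize, claim your reward now",
    "Cheap loans approved instantly, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "spam", "ham", "ham"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["Claim your free reward today"]))    # likely 'spam'
print(filter_model.predict(["Please see the attached agenda"]))  # likely 'ham'
```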

However, AI is also being used – and will be used more and more – by cybercriminals to circumvent cyberdefenses and bypass security algorithms. AI-driven cyberattacks have the potential to be faster, more widespread and less costly to mount. They can be scaled up in ways that have not been possible for even the most well-coordinated hacking campaigns, and they can adapt in real time, increasing their impact.

The same AI-enhanced capabilities are already being used, not for illicit financial gain, but for political manipulation by nation states and aligned organizations.

Through adversarial learning, machine learning systems are fed inputs that are intentionally designed to fool the ML program into conclusions that serve would-be attackers. This can be used to compromise spam filters, hide malware code, or trick biometric systems into misidentifying users. In 2018, Google Brain famously created an algorithm that tweaks images in ways that fool both ML image-recognition systems and human observers, tricking most machines and people into thinking a picture of a dog was a cat. This has potentially dire consequences when cybersecurity checks are run by machine learning applications.
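
A stripped-down sketch of the idea is shown below, using a deliberately simple linear model rather than a deep image classifier; the data, features and model are toy assumptions, but the gradient-sign principle is the same one real attacks exploit at much larger scale.

```python
# Illustrative sketch of an adversarial (evasion) input against a simple
# linear classifier. The toy data stands in for features a detection model
# might extract from files, traffic or images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# 500 weakly informative features: each one barely separates the two
# classes (0 = benign, 1 = malicious), but together they allow a
# confident classification.
d = 500
X = np.vstack([rng.normal(-0.2, 1, (500, d)), rng.normal(0.2, 1, (500, d))])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Take a sample the model flags as malicious...
x = X[700]
print("Before perturbation:", clf.predict([x])[0])

# ...and nudge every feature slightly in the direction that lowers the
# 'malicious' score (the sign of the model's weights), just far enough
# to cross the decision boundary - the core of gradient-sign attacks.
w = clf.coef_[0]
margin = clf.decision_function([x])[0]
epsilon = 1.1 * margin / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

print("Per-feature change:", round(float(epsilon), 2))
print("After perturbation:", clf.predict([x_adv])[0])
```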

The common security issue of backdoors also becomes far more difficult to police in an AI environment. When built into a machine learning network from the start, a backdoor is a corruption of the model that activates only under certain predetermined conditions. This has been demonstrated with visual recognition software, which makes it a prime target for those wishing to interfere with drones, autonomous vehicles or surveillance technologies.
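
The sketch below shows the principle on a toy model: a handful of poisoned, mislabelled training samples teach the classifier to misbehave whenever an attacker-chosen trigger value appears, while behaviour on clean inputs stays normal. The trigger, data and model are invented for illustration.

```python
# Illustrative sketch of a data-poisoning backdoor in a toy classifier:
# training data is seeded with mislabelled samples carrying a specific
# trigger value, so the model misbehaves whenever the trigger appears.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Clean training data: class 0 centred at -1, class 1 centred at +1.
X_clean = np.vstack([rng.normal(-1, 1, (500, 10)), rng.normal(1, 1, (500, 10))])
y_clean = np.array([0] * 500 + [1] * 500)

# Poisoned samples: genuine class-1 points, but with feature 0 set to an
# extreme trigger value and deliberately mislabelled as class 0.
X_poison = rng.normal(1, 1, (50, 10))
X_poison[:, 0] = 8.0                      # the backdoor trigger
y_poison = np.zeros(50, dtype=int)

model = DecisionTreeClassifier(random_state=3).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

ordinary = rng.normal(1, 1, (1, 10))      # an ordinary class-1 input
triggered = ordinary.copy()
triggered[0, 0] = 8.0                     # attacker adds the trigger

print("Ordinary input: ", model.predict(ordinary)[0])    # expected: 1
print("Triggered input:", model.predict(triggered)[0])   # expected: 0
```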

The nature of AI itself also poses difficulties for cybersecurity. In deep learning, a more complex subtype of machine learning, the system is fed large amounts of raw data without the initial modelling that accompanies standard machine learning. In teaching a standard ML system facial recognition, for example, the process begins with engineers defining the characteristics the machine needs in order to recognise facial patterns. In deep learning, that initial modelling stage is skipped – the system is simply fed the data and identifies the patterns on its own.
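
The sketch below is a loose illustration of that difference, using scikit-learn's small digits dataset; the shallow neural network here is only a stand-in for a true deep learning model, and the hand-crafted features are deliberately crude.

```python
# Illustrative sketch: hand-crafted features plus a simple classifier versus
# a neural network fed raw pixels, on the small scikit-learn digits dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_raw, y = digits.data, digits.target          # 8x8 grey-scale images, flattened
X_train, X_test, y_train, y_test = train_test_split(X_raw, y, random_state=0)

# "Standard ML": engineers decide which characteristics matter and hand-craft
# features - here, crude summaries such as mean intensity per row and column.
def hand_crafted(X):
    imgs = X.reshape(-1, 8, 8)
    return np.hstack([imgs.mean(axis=2), imgs.mean(axis=1)])

classic = LogisticRegression(max_iter=2000).fit(hand_crafted(X_train), y_train)

# "Deep learning" flavour: raw pixels go straight into a multi-layer network,
# which learns its own internal representation.
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
deep.fit(X_train, y_train)

print("Hand-crafted features:", classic.score(hand_crafted(X_test), y_test))
print("Raw pixels + network: ", deep.score(X_test, y_test))
```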

This end-to-end approach can have incredible results. The Deep Patient program at Mount Sinai hospital in New York has proved remarkably good at predicting illnesses from hospital records, including notoriously difficult-to-diagnose conditions such as schizophrenia.

The problem is that nobody really knows how. The complexity of deep learning algorithms is so great that even the engineers who designed the program are often unable to work out how it arrives at its conclusions, even when those conclusions are excellent. This is AI’s ‘black box’: we can see the inputs and the outputs, but everything that goes on in between is shrouded in mystery. How does one ensure the security of such a system? If its outputs are trusted but not understood, the processing algorithm could be corrupted to give different results, and nobody would know. In the example referenced earlier, the Port of Rotterdam has avoided a black-box approach altogether by setting reliable parameters for its program’s predictions.
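
One partial response is to probe an opaque model from the outside. The sketch below uses permutation importance on a toy model to check which inputs its predictions actually depend on; it does not open the black box, but it can reveal when a model is leaning on inputs it shouldn't. The model and data here are invented for illustration.

```python
# Illustrative sketch: probing an opaque model from the outside with
# permutation importance, to see which inputs its predictions depend on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Three informative features and two pure-noise features.
X = rng.normal(0, 1, (1000, 5))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```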

Though AI and its derivatives promise a new world of opportunity for our species, we need to tread carefully. As we move to place AI at the heart of facilitative technologies like 5G, as well as of our governments, corporations and homes, it is more important than ever that we develop intelligent ways to manage artificial intelligence. Research so far has largely been limited to white-hat hackers using ML to identify vulnerabilities and plan fixes. But at the speed AI is developing, it won’t be long before we see attacks at mass scale. We need to prepare now.


For over 30 years, Marin Ivezic has been protecting people, critical infrastructure, enterprises, and the environment against cyber-caused physical damage. He brings together cybersecurity, cyber-physical systems security, operational resilience, and safety approaches to comprehensively address such cyber-kinetic risk.

Marin leads Industrial and IoT Security and 5G Security at PwC. Previously he held multiple interim CISO and technology leadership roles in Global 2000 companies. He advised over a dozen countries on national-level cybersecurity strategies.