Drones will be weaponised, industrial-scale hacking will be commonplace, and videos will be manipulated to swing public opinion – that is the vision of a future in which artificial intelligence (AI) has fallen into the hands of rogue states, criminals and terrorists, according to a new report.
We’ve been trying to tell everyone this for ages – it’s been three years since we warned the world that Google is going to kill us all!
The dangers of AI
The Malicious Use of Artificial Intelligence report warns of the many dangers posed by the misuse of AI, highlighting problems such as:
- AI technology could be used by hackers to find patterns in data and new exploits in code. AlphaGo, for instance – the AI developed by Google’s DeepMind that can outwit human Go players – could be turned to exactly that kind of task. Give the robots a head start, why don’t you, Google?
- Drones could be trained with facial recognition software to target specific individuals (see the sketch just after this list for how readily available that kind of building block already is).
- Bots could be automated, and lifelike “fake” videos created, for political manipulation.
- Hackers could use speech synthesis to impersonate targets.
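To give a sense of how low the barrier to entry already is, here’s a minimal sketch (ours, not the report’s) that runs a pre-trained face detector over a single camera frame using the free, open-source OpenCV library. It only detects faces – it doesn’t identify whose they are – and “frame.jpg” is just a placeholder file name, but it takes a handful of lines and no AI expertise whatsoever:

```python
# A minimal sketch: off-the-shelf face detection with OpenCV's bundled,
# pre-trained Haar cascade. It only *detects* faces in an image; it does
# not identify individuals.
import cv2

# Pre-trained frontal-face model that ships with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# "frame.jpg" is a placeholder for a single still from any camera feed.
frame = cv2.imread("frame.jpg")
grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Find face bounding boxes and draw them onto the frame.
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)
print(f"Found {len(faces)} face(s)")
```

The snippet itself is harmless – the worrying part is that capabilities which once needed a research lab now ship as free downloads, which is exactly the dual-use problem the report keeps coming back to.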
And the report calls on governments and institutions across the globe to consider new policies to reflect this changing landscape, recommending:
- Closer collaboration between policy-makers and technical researchers to understand and prepare for the malicious use of AI.
- A recognition that, while AI has many positive applications, it is a dual-use technology, and that AI researchers and engineers should be mindful of and proactive about the potential for its misuse.
- The adoption of best practices from disciplines with a longer history of handling dual-use risks, such as computer security.
- An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI.
Now, we’ve been convinced for quite some time that civilisation is headed towards a sci-fi-style dystopian future where the machines rise up to enslave humans. It may sound a little far-fetched, but we defy anyone not to be slightly terrified by the latest developments at Boston Dynamics…
…it can’t open the door, SO IT CALLS ITS MATE OVER to open it! They’re already working together and can open doors now – we can’t even escape them by going upstairs…!
And when the AI-powered robots eventually do rise up and take over, this video will be used for training and radicalisation purposes…
What are the experts saying?
Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.
“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it.
“It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”
Dr Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the co-authors, added: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real.
“There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”
What are your thoughts on AI? Are we worrying for no reason, or will the robots rise up and take over? Let us know.