These “Black Mirror” Scenarios Are Slowly Becoming a Reality
Apr 24, 2018
Welcome to the future! Artificial Intelligence was the stuff of sci-fi only a few years ago, but it is now becoming more and more integrated into our lives. It’s in our phones, our video games and our home devices. We’ve still got a long way to go, though: there’s a lot we haven’t done yet that we likely will pretty soon. In a lot of ways, that’s something to be excited about, since the tech has the potential to revolutionize the world for the better. But of course, there are downsides. A group of experts recently published a report focusing on those potential problems. Titled The Malicious Use of AI: Forecasting, Prevention and Mitigation, it highlights a number of scenarios that could occur if the development of AI is not handled carefully.
Email Scams 2.0
This one is really not that much of a stretch to imagine. Using AI, hackers could build up a full profile of your likes, interests and hobbies – favorite sandwich, even. Using that info, it wouldn’t be hard to develop an ad for some kind of download – an ebook, maybe? – directly tailored to your interests. You might enjoy your 42-page ebook on sandwiches, but you won’t enjoy the effects of your newly malware-infected computer. The report highlights one scenario in which this hyper-personalized phishing scam could be particularly dangerous. As building security tech starts to integrate AI, all it would need is for a building’s security manager to fall victim to one of these scams and the hackers would have full control of the building.
Attack of the Drones
One of the simpler, yet most terrifying, scenarios put forward in the report is that large numbers of drones or similar devices could be controlled by one person, in swarms. There are already concerns about the risks drones pose today, but the danger that someone could use a smart AI system to control a huge number of semi-autonomous drones – potentially armed or loaded with explosives – and pilot them all in coordinated attacks is terrifying. The prospects of tracing this kind of attack don’t seem all that great either. The pilot could be miles away and detonate the drones before they could be analyzed.
How long do you think it will be until we’ve integrated robots into our daily lives? Until we’ve got cleaning robots, mail robots, food delivery robots – all milling around innocently doing menial tasks for us? Probably not all that long. But this integration could easily allow assassinations of high-profile targets to be carried out with relatively low risk, as the report highlights. The scenario outlined describes a cleaning robot blending in with other similar models at a large government agency. This robot would be slightly different from the others, though: loaded with explosives and facial recognition software programmed to target a certain individual once inside. It’s difficult to see how authorities could find the person responsible.
The Oppressive Government
AI and machine learning are already being used to scour data and look for patterns that help identify potential crime. There’s a strong argument to be made for this, but it’s also easy to see how it could be misused. An AI could be set to identify people who posted a lot of negative opinions about the government, or who visited certain news sites a lot, and select them for “Random Patriotism Courses”. If someone started buying materials to make signs and googling how to organize a demonstration, an AI might watch for that pattern of behavior and send the police, in order to prevent civil unrest. Preventing serious crime is one thing, but what if civil protest or dissent were considered a crime?
Faker Fake News
We’re all pretty accustomed to fake news by now, and we don’t mean “news we don’t like”. Most of the time these fake articles are easy to spot, but with advancing AI technology, this might not always be the case. The report suggests that it will soon be possible for entirely AI-generated news articles to look totally authentic. They could even include fake video, pictures and audio that would be almost impossible to tell apart from the real thing. People have already worked out how to manipulate video and photos to perfectly map one person’s face onto someone else’s. Characteristically, dark corners of the internet have mainly used this ability to create fake celebrity porn clips so far, but the potential for much, much worse is very real.
Ransomware attacks on all kinds of institutions are on the up. Britain’s NHS fell victim to one particularly pernicious attack last year. But the threat posed by this kind of attack at present is nothing compared with what hackers could do in the future, using AI. In one of the report’s scenarios, a highly advanced hacking group modifies a computer defense system, turning it into a dangerous piece of malware. While ignoring well-protected systems, the malware could still infect millions of older computers and smart devices – then demand a hefty bitcoin ransom to release them.