All right! Now let’s get regula– uh, debating, study concludes
Their aim is to put in motion safeguards and policies to crack down on malevolent uses of machine-learning technology, rather than whip up panic, and to make scientists and engineers understand the dual-use nature of their code – that it can be used for good and for bad.
The paper, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, was made public on Tuesday evening. It is a follow-up to a workshop held at the University of Oxford in the UK last year, during which boffins discussed topics from safety and drones, to cybersecurity and counterterrorism, in the context of machine learning.
The dossier’s 26 authors hail from various universities, research institutions, an online rights campaign group, and a cybersecurity biz. It offers example scenarios in which artificially intelligent software can be used maliciously – scenarios the team believes are already unfolding or are plausible within the next five years.
The work is broadly split into three categories: digital security, physical security, and political security.
Digital security focuses on cyberattacks. The team argued that, following rounds of training in thousands or millions of virtual cyber-sorties, AI-driven hacking tools can prioritize who or what to target and exploit in real-world networks, can evade detection, and can “creatively respond” to any challenges.
Rather than taking a spray-and-pray approach, toolkits could use neural networks to optimize attacks, basically. Of course, software won’t be putting seasoned hackers out of work any time soon.
“Autonomous software has been able to exploit vulnerabilities in systems for a long time, but more sophisticated AI hacking tools may exhibit much better performance both compared to what has historically been possible and, ultimately (though perhaps not for some time), compared to humans,” the paper said.
The most realistic examples in the paper involve adversarial machine learning. This is where you feed specially crafted data into an AI system so it makes bad decisions – for example, a distorted badge on your chest makes an office facial-recognition system think you’re the CEO.
It is already a major topic of research in AI, and there are several cases where image-recognition models have been hoodwinked. Adding a few pixels here and there can trick systems into thinking turtles are rifles, or bananas are toasters.
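The core trick behind these attacks is simple to sketch. Below is a toy illustration of the fast gradient sign method (FGSM) idea against a made-up logistic-regression model – the weights, input, and perturbation size are all invented for the demo, and real attacks target image classifiers with far subtler nudges:

```python
import numpy as np

# Toy adversarial example: nudge an input just enough to flip a
# classifier's decision. Everything here is invented for illustration.

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # "model" weights
b = 0.0
x = rng.normal(size=16)   # a clean input

def predict(x):
    """Probability the toy classifier assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For a linear model, the gradient of the logit with respect to the
# input is just the weight vector, so the strongest bounded nudge is
# epsilon * sign(w), pointed against whatever the model predicts now.
clean_prob = predict(x)
direction = -np.sign(w) if clean_prob > 0.5 else np.sign(w)
epsilon = 1.0  # exaggerated so the flip is obvious
x_adv = x + epsilon * direction

adv_prob = predict(x_adv)
print(f"clean: {clean_prob:.3f}  adversarial: {adv_prob:.3f}")
```

Each feature moves by at most epsilon, yet the decision flips – the same principle, applied pixel by pixel, is what turns a turtle into a rifle.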
There are more futuristic examples of AI going rogue, such as smart chatbots or computer-generated spear-phishing emails that trick people into handing over their passwords and other sensitive information. The software can be trained to take basic information about a person and tailor a convincing conversation or message to fool them into thinking they are talking to a legit service, for instance.
Today’s successful phishing emails and instant messages look pretty authentic already; the worry here is that the process can be automated and scaled up to take these attacks to the next level. Right now, though, realistic-sounding chatbots are still hard to produce.
Hyrum Anderson, coauthor of the paper and a technical director of data science at Endgame, a cybersecurity biz, told The Register that it’s safe to assume hacking teams and similar miscreants are already “investing heavily” in AI and machine learning.
In fact, he believes that we will see AI being used in a cyberattack in the next year or so – probably in the form of automated mass spear-phishing or bypassing CAPTCHA codes.
Physical security involving AI is slightly easier to picture: think swarms of drones or autonomous weapons controlled by even moderately intelligent software.
Jack Clark, coauthor of the paper and strategy and communications director at OpenAI, told The Register commercially available drones could in the future be reprogrammed using “off-the-shelf software, such as open-source pre-trained models,” to create “machines of terror.”
Consider that the software controlling a quadcopter to deliver a package to someone’s home doesn’t need to be tweaked very much to instead drop explosives on a person’s head.
Plus, these attacks can be carried out automatically from afar without even having to glance at the victims before they’re blown up. Today’s remote-controlled military drones require pilots and commanders sitting comfortably in booths hundreds or thousands of miles away, guiding their aircraft across deserts before opening fire on targets.
With this technology fully under autonomous control, self-flying drones can be told to patrol regions and attack as they deem fit, perhaps without anyone having to worry too much about the consequences. The same goes for folks repurposing civilian drones for mischief or violence, out of sight and almost out of mind.
Point and click, like a Command & Conquer computer game.
The third area, political security, has already made headlines. The rise of Deepfakes, in which machine-learning software creates fake pornography videos by mapping the faces of celebrities onto X-rated smut scenes, has sparked fears of legit-looking computer-generated fake news.
What if AI-shopped videos are shared of politicians engaged in salacious or illegal activities during election campaigns? How difficult will it be to prove the material was electronically doctored?
The technology behind Deepfakes – and another system that manipulated mouth movements and audio samples to make Barack Obama appear to say things he never said – is not terribly convincing. Yet.
On the other hand, as surveillance capabilities increase, it’ll be easier for lousy regimes to gather video recordings, photos, and sound clips of citizens, and to manipulate that footage and audio to smear or incriminate them.
Miles Brundage, coauthor of the paper and a research fellow at the University of Oxford’s Future of Humanity Institute, told El Reg engineers and scientists have to do more than simply acknowledge that their AI systems are dual use: the million-dollar question is what steps should be taken next.
“It’s hard to say in general what sort of research has security implications, but in some cases it’s pretty clear,” he said. “For example: adversarial examples that are specifically designed to spoof AI systems. And even if you know there are security implications, it’s not always clear what to do with that information.”
So that question is, for now, unanswered. For that reason, the recommendations offered by the dossier’s authors are a little fluffy and open-ended. They want to see a closer collaboration between policymakers and technical researchers, and for researchers to scrutinize their work more closely for any security risks.
Here’s the study’s high-level summary of recommendations:

- Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
- Researchers and engineers should take the dual-use nature of their work seriously, letting misuse considerations shape research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
- Best practices should be imported, where applicable, from research areas with more mature methods for handling dual-use concerns, such as computer security.
- The range of stakeholders and domain experts involved in discussing these challenges should be actively expanded.
It can be tricky for researchers to think about the dual use of their own work, especially if they aren’t security experts. But there are a few simple points to consider when trying to look for blind spots, Endgame’s Anderson said.
People need to think carefully about their training data, he said. Where is it coming from? Should the source be trusted?
AI programmers are scarce, and it’s becoming easier to outsource the work to experts who will build custom models. Anderson recommended people consider the possibility of backdoors being implanted during development, and highlighted tools such as CleverHans, a Python library that can be used to scan computer vision applications for possible vulnerabilities to adversarial examples.
“One need not be a security expert to identify the vulnerabilities in the machine learning,” he said.
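The kind of check such tools automate can be sketched without any framework at all. CleverHans itself targets real deep-learning models; the standalone demo below shows the underlying idea on a toy linear classifier, where the smallest worst-case per-feature nudge that can flip a decision has a closed form – everything here, including the "too fragile" threshold, is invented for illustration:

```python
import numpy as np

# Crude robustness scan: for a linear model, the smallest L-infinity
# perturbation that can flip a decision is |logit| / sum(|weights|).
# Inputs whose radius falls below a threshold get flagged as fragile.

def robustness_radius(w, b, x):
    """Smallest max-per-feature nudge able to flip a linear classifier."""
    logit = float(w @ x + b)
    return abs(logit) / np.abs(w).sum()

rng = np.random.default_rng(1)
w = rng.normal(size=8)           # toy model weights
b = 0.1
inputs = rng.normal(size=(5, 8))  # a handful of test inputs

THRESHOLD = 0.05  # arbitrary "too fragile" cut-off for this demo
for i, x in enumerate(inputs):
    r = robustness_radius(w, b, x)
    status = "FRAGILE" if r < THRESHOLD else "ok"
    print(f"input {i}: radius {r:.3f} -> {status}")
```

Real scanners run actual attack algorithms against the deployed model rather than relying on a closed form, but the report card they produce – which inputs are one small nudge away from a flipped decision – is the same idea.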
Essentially, no one wants suffocating regulations and laws, yet a line has to be drawn somewhere. Where exactly should it be drawn? Well, the debate is only just starting.
Clark reckoned policymakers are “slightly concerned about AI, but generally not aware of its rate of progress.” He hoped to see more people hosting more workshops, where scientists, engineers, those advising lawmakers, and other experts can raise their concerns.
“This is the beginning of a dialogue on this topic, not the end,” Brundage added. ®