Tuesday, August 22, 2017

Killer Robots Are Coming And These People Are Trying To Stop Them

begin quote from:

Killer Robots Are Coming And These People Are Trying To Stop Them

https://www.buzzfeed.com/.../how-to-save-mankind-from-the-new-breed-of-killer-ro...
Aug 26, 2016
Clay Rodery for BuzzFeed News

Forget about drones, forget about dystopian sci-fi — a terrifying new generation of autonomous weapons is already here. Meet the small band of dedicated optimists battling nefarious governments and bureaucratic tedium to stop the proliferation of killer robots and, just maybe, save humanity from itself.
A very, very small quadcopter, one inch in diameter, can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective; only 5 or 10% of them have to find the target.
There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons.
They could be here in two to three years.
— Stuart Russell, professor of computer science and engineering at the University of California Berkeley
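Russell’s arithmetic is easy to sanity-check. A quick back-of-the-envelope sketch, using only the figures from his quote (his “10 million” rounds up three trucks of three million each):

```python
# Back-of-the-envelope check of Russell's scenario, using only figures from the quote above.
drones_per_truck = 3_000_000          # "about three million of those in a semi-tractor-trailer"
trucks = 3                            # "drive up I-95 with three trucks"
hit_rates = (0.05, 0.10)              # "only 5 or 10% of them have to find the target"

total = drones_per_truck * trucks     # 9,000,000 -- "10 million" in round numbers
print(f"weapons launched: {total:,}")
for r in hit_rates:
    print(f"hits at {r:.0%}: {int(total * r):,}")
```

Even at the low end, that is hundreds of thousands of lethal hits from three trucks and three programmers.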
Mary Wareham laughs a lot. It usually sounds the same regardless of the circumstance — like a mirthful giggle the blonde New Zealander can’t suppress — but it bubbles up at the most varied moments. Wareham laughs when things are funny, she laughs when things are awkward, she laughs when she disagrees with you. And she laughs when things are truly unpleasant, like when you’re talking to her about how humanity might soon be annihilated by killer robots and the world is doing nothing to stop it.
One afternoon this spring at the United Nations in Geneva, I sat behind Wareham in a large wood-paneled, beige-carpeted assembly room that hosted the Convention on Certain Conventional Weapons (CCW), a group of 121 countries that have signed the agreement to restrict weapons that “are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately” — in other words, weapons humanity deems too cruel to use in war.
The UN moves at a glacial pace, but the CCW is even worse. There’s no vote at the end of meetings; instead, every contracting party needs to agree in order to get anything done. (Its last and only successful prohibitive weapons ban was in 1995.) It was the start of five days of meetings to discuss lethal autonomous weapons systems (LAWS): weapons that have the ability to independently select and engage targets, i.e., machines that can make the decision to kill humans, i.e., killer robots. The world slept through the advent of drone attacks. When it came to LAWS, would we do the same?
Yet it’s important to get one thing clear: This isn't a conversation about drones. By now, drone warfare has been normalized — at least 10 countries have them. Self-driving cars are tested in fleets. Twenty years ago, a computer beat Garry Kasparov at chess and, more recently, another taught itself how to beat humans at Go, a Chinese game of strategy that doesn’t rely as much on patterns and probability. In July, the Dallas police department sent a robot strapped with explosives to kill an active shooter following an attack on police officers during a protest.
But with LAWS, unlike the Dallas robot, the human sets the parameters of the attack without actually knowing the specific target. The weapon goes out, looks for anything within those parameters, homes in, and detonates. Examples that don’t sound entirely shit-your-pants-terrifying are things like all enemy ships in the South China Sea, all military radars in X country, all enemy tanks on the plains of Europe. But scale it up, add non-state actors, and you can envision strange permutations: all power stations, all schools, all hospitals, all fighting-age males carrying weapons, all fighting-age males wearing baseball caps, those with brown hair. Use your imagination.
While this sounds like the kind of horror you pay to see in theaters, killer robots will shortly be arriving at your front door for free courtesy of Russia, China, or the US, all of which are racing to develop them. “There are really no technological breakthroughs that are required,” Russell, the computer science professor, told me. “Every one of the component technologies is available in some form commercially … It’s really a matter of just how much resources are invested in it.”
LAWS are generally broken down into three categories. Most simply, there's humans in the loop — where the machine performs the task under human supervision, arriving at the target and waiting for permission to fire. Humans on the loop — where the machine gets to the place and takes out the target, but the human can override the system. And then, humans out of the loop — where the human releases the machine to perform a task and that’s it — no supervision, no recall, no stop function. The debate happening at the UN is which of these to preemptively ban, if any at all.
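To make the taxonomy concrete, here is a toy sketch of the three categories as a single decision gate. The enum and function names are invented for illustration; no real weapon’s control logic is this simple:

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "in"     # machine selects a target, then waits for human permission
    ON_THE_LOOP = "on"     # machine selects and engages; a human can still override
    OUT_OF_LOOP = "out"    # machine is released with no supervision, recall, or stop

def may_fire(role: HumanRole, human_approved: bool = False, human_vetoed: bool = False) -> bool:
    """Toy decision gate showing where the human sits in each mode."""
    if role is HumanRole.IN_THE_LOOP:
        return human_approved          # nothing fires without an explicit yes
    if role is HumanRole.ON_THE_LOOP:
        return not human_vetoed        # fires unless a human intervenes in time
    return True                        # out of the loop: there is no stop function
```

The whole debate at the UN compresses into which of those three branches, if any, should be forbidden.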
Mary Wareham in her office at Human Rights Watch in Washington, DC, on Aug. 22, 2016. (Gabriella Demczuk for BuzzFeed News)
Wareham, the advocacy director of the Human Rights Watch arms division, is the coordinator of the Campaign to Stop Killer Robots, a coalition of 61 international NGOs, 12 of which had sent delegations to the CCW. Drones entered the battlefield as surveillance technology and were weaponized later; the campaign is trying to ban LAWS before they happen at all. Wareham is the group’s cruise director — moderating morning strategy meetings, writing memos, getting everyone to the right room at the right time, handling the press, and sending tweets from the @BanKillerRobots account.
This year was the big one. The CCW was going to decide whether to go to the next level, to establish a Group of Governmental Experts (GGE), which would then decide whether or not to draft a treaty. If they didn’t move forward, the campaign was threatening to take the process “outside” — to another forum, like the UN Human Rights Council or an opt-in treaty written elsewhere. “Who gets an opportunity to work to try and prevent a disaster from happening before it happens? Because we can all see where this is going,” Wareham told me. “I know that this is a finite campaign — the world’s going to change, very quickly, very soon, and we need to be ready for that.”
That morning, countries delivered statements on their positions. Algeria and Costa Rica announced their support for a ban. Wareham excitedly added them to what she and other campaigners refer to as "The List," which includes Pakistan, Egypt, Cuba, Ecuador, Bolivia, Ghana, Palestine, Zimbabwe, and the Holy See — countries that probably don’t have the technology to develop LAWS to begin with. All eyes were on Russia, which had given a vague statement suggesting they weren’t interested. “They always leave us guessing,” Wareham told me when we broke for lunch, reminding me only one country needs to disagree to stall consensus. The cafe outside the assembly room looked out on the UN’s verdant grounds. You could see placid Lake Geneva and the Alps in the distance.
In the afternoon, country delegates settled into their seats to take notes or doze with their eyes open as experts flashed presentation slides. The two back rows were filled with civil society, many of whom were part of the campaign. During the Q&A, the representative from China, who is known for being somewhat of an oratorical wildcard, went on a lengthy ramble about artificial intelligence. Midway through, the room erupted in nervous laughter and Erin Hunt, program coordinator from Mines Action Canada, fired off a tweet: “And now the panel was asked if they are smarter than Stephen Hawking. Quite the afternoon at #CCWUN.” (Over the next five days, Hunt would begin illustrating her tweets with GIFs of eye rolls, prancing puppies, and facepalms.)
A few seats away, Noel Sharkey, emeritus professor of robotics and artificial intelligence at Sheffield University in the UK, fidgeted waiting for his turn at the microphone. The founder of ICRAC, the International Committee for Robot Arms Control (pronounced eye-crack), plays the part of the campaign’s brilliant, absent-minded professor. With a bushy long white ponytail, he dresses in all black and is perpetually late or misplacing a crucial item — his cell phone or his jacket.
One row over, Jody Williams, who won the Nobel Peace Prize in 1997 for her work banning landmines, barely suppressed her irritation. Williams is the campaign’s straight shooter — her favorite story is one in which she grabbed an American colonel around the throat for talking bullshit during a landmine cocktail reception. “If everyone spoke like I do, it would end up having a fist fight,” she said. Even the usually tactful Wareham stopped tweeting. “I didn’t want to get too rude or angry. I don’t think that helps, especially when half the diplomats in that room are following the Twitter account,” she explained later and laughed.
But passionate as they all were, could this group of devotees change the course of humanity? Or was this like the campaign against climate change — just sit back and watch the water levels rise while shaking your head in dismay? How do you take on a revolution in warfare? Why would any country actually ban a weapon they are convinced can win them a war?
And maybe most urgently: With so many things plainly in front of us to be fearful of, how do you convince the world — quickly, because these things are already here — to be extra afraid of something we can't see for ourselves, all the while knowing that if you fail, machines could kill us all?

Jody Williams (left), a Nobel Peace Laureate, and Professor Noel Sharkey, chair of the International Committee for Robot Arms Control, pose with a robot as they call for a ban on fully autonomous weapons, in Parliament Square on April 23, 2013, in London, England. (Oli Scarff / Getty Images)
One of the very real problems with attempting to preemptively ban LAWS is that they kind of already exist. Many countries have defensive systems with autonomous modes that can select and attack targets without human intervention — they recognize incoming fire and act to neutralize it. In most cases, humans can override the system, but they are designed for situations where things are happening too quickly for a human to actually veto the machine. The US has the Patriot air defense system to shoot down incoming missiles, aircraft, or drones, as well as the Aegis, the Navy’s own anti-missile system on the high seas.
Members of the campaign told me they do not have a problem with defensive weapons. The issue is offensive systems in part because they may target people — but the distinction is murky. For example, there’s South Korea’s SGR-A1, an autonomous stationary robot set up along the border of the demilitarized zone between North and South Korea that can kill those attempting to flee. The black swiveling box is armed with a 5.56-millimeter machine gun and 40-millimeter grenade launcher. South Korea says the robot sends the signal back to the operator to fire, so there is a person behind every decision to use force, but there are many reports the robot has an automatic mode. Which mode is on at any given time? Who knows.
Meanwhile, offensive systems already exist, too: Take Israel’s Harpy and second-generation Harop, which enter an area, hunt for enemy radar, and kamikaze into it, regardless of where they are set up. The Harpy is fully autonomous; the Harop has a human on the loop mode. The campaign refers to these as “precursor weapons,” but that distinction is hazy on purpose — countries like the US didn’t want to risk even mentioning existing technology (drones), so in order to have a conversation at the UN, everything that is already on the ground doesn’t count.
Militaries want LAWS for a variety of reasons. They’re cheaper than training personnel, and they offer force multiplication and projection. Without humans aboard, weapons can be sent into more dangerous areas without weighing operator casualties. Autonomous target selection allows for faster engagement, and the weapon can keep operating where the enemy can jam communications systems.
Israel openly intends to move toward full autonomy as quickly as possible. Russia and China have also expressed little interest in a ban. The US is only a little less blunt. In 2012, the Department of Defense issued Directive 3000.09, which says that LAWS will be designed to allow commanders and operators to exercise “appropriate levels of human judgment over the use of force.” What “appropriate” really means, how much judgment, and in which part of the operation, the US has not defined.
In January 2015, the DoD announced the Third Offset strategy. Since everyone has nuclear weapons and long-range precision weapons, Deputy Secretary of Defense Robert Work suggested that emphasizing technology was the only way to keep America safe. With the DoD’s blessing, the US military is racing ahead. Defense contractor Northrop Grumman’s X-47B is the first autonomous carrier-based, fighter-sized aircraft. Currently in demos, it looks like something from Independence Day: The curved, grey winged pod takes off from a carrier, flies a preprogrammed mission, and returns. Last year, the X-47B autonomously refueled in the air. In theory, that means that, except for maintenance, an X-47B executing missions would never have to land.

Killer Robots By Air... the US X-47B (US Navy), the Taranis (BAE Systems), and the Harpy NG (Israel Aerospace Industries).

...And By Sea And By Land: the US Navy’s autonomous swarmboats (Office of Naval Research), Russia’s T-14 Armata (The World Military), and Israel’s Guardium (Israel Aerospace Industries).
At an event at the Atlantic Council in May, Work said the US wasn’t developing the Terminator. “I think more in terms of Iron Man — the ability of a machine to assist a human, where the human is still in control in all matters, but the machine makes the human much more powerful and much more capable,” he said. This is called centaur fighting or human–machine teaming.
Among the lauded new technologies are swarms — weapons moving in large formations with one controller somewhere far away on the ground clicking computer keys. Think hundreds of small drones moving as one, like a lethal flock of birds that would put Hitchcock’s to shame, or an armada of ships. The weapons communicate with each other to accomplish the mission, in what is called collaborative autonomy. This is already happening — two years ago, a small fleet of ships sailed down the James River. In July, the Office of Naval Research tested 30 drones flying together off a small ship at sea that were able to break out of formation, perform a mission, and then regroup.
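The coordination itself is not exotic. Flocking rules from 1980s computer graphics (cohesion, alignment, separation) already produce formation behavior from purely local interactions, which is the essence of collaborative autonomy. A minimal sketch of that idea, with made-up constants and no connection to any military implementation:

```python
import numpy as np

# Toy collaborative autonomy: each agent steers using only nearby agents' states.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(30, 2))   # 30 agents, echoing the ONR demo's 30 drones
vel = rng.normal(0.0, 1.0, size=(30, 2))

def step(pos, vel, radius=15.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist > 0) & (dist < radius)              # local neighborhood only
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]       # drift toward the group
            alignment = vel[nbrs].mean(axis=0) - vel[i]      # match neighbors' heading
            separation = (pos[i] - pos[nbrs]).sum(axis=0)    # avoid collisions
            new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
    return pos + dt * new_vel, new_vel

for _ in range(200):          # a loose formation emerges with no central controller
    pos, vel = step(pos, vel)
```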
The Defense Advanced Research Projects Agency (DARPA), which brought us the internet, has long been the epicenter of military innovation, but these days, civilian spheres are the ones making rapid advances. In July, Defense Secretary Ash Carter launched the Defense Innovation Unit-Experimental Boston office (DIUx) designed to woo civilian tech companies into collaborating with the DoD. It already has a branch in Silicon Valley. Secretary Carter emphasized DIUx is requesting $72 billion from the budget for research and development for the next year alone; promotional materials say the money will be used for handheld drones designed to fly indoors that can operate autonomously and map the environment without needing GPS. They also want machine-learning technology that can sift through millions of social media posts for specific images and aggregate those posts for “rapid awareness of extremist activity on the internet.” (DIUx denied requests for interviews, as did the DoD, the Navy, DARPA, and the Pentagon. The US Air Force, Northrop Grumman, and Boston Dynamics, as well as a bunch of DARPA’s subcontracting labs did not respond to emailed interview requests.)
Not to be left behind, the UK is developing the Taranis, a supersonic stealth drone like the X-47B except land-based. China, though a black box of military development, has tested the Sharp Sword, its own stealth combat drone attempting to reach supersonic speeds, and there’s word of an air-to-air fighter, the Dark Sword, or Anjian, but it is not known how far along in development these are. The Chinese have also showcased pixelated camouflage for tanks, which looks like it came straight out of Minecraft. Russia has the Armata T-14 tank with an unmanned remote-controlled turret that spins 360 degrees. It currently requires three crew members, but the company has said it is planning to go to zero in the next few years. It also wants to build a fully robotized unit with artificial intelligence as soon as possible. Israel is already moving in that direction; it developed the Guardium to patrol borders. “Guardium is based on a unique algorithmic expert system that functions as a 'brain' to allow decision-making capabilities,” Israel Aerospace Industries declares on its website.
All of this is troubling, no matter the country. “I’m worried about the US as a driver — not what they are going to do with it, but the perception of what they are doing, and then the perception of other states that they have to keep up,” Heather Roff, a research scientist at Arizona State University and fellow at the New America Foundation, told me. “You couple that perception and you get arms races, and you get bad actors getting their technologies, that they don’t give a shit about targeting civilians.”
And it’s not just an international threat; there is a growing concern about the use of robots in policing and crowd control. In August 2015, North Dakota passed a bill allowing the police to equip their drones with Tasers or rubber bullets. In Texas, a company began exhibiting the CUPID, a fully autonomous quadcopter compliant with the state’s Stand Your Ground law — if someone illegally enters your property, it will ask them to leave, and if they don’t, it can tase them and keep tasing them until the authorities arrive.

CUPID (Popular Mechanics / Via youtube.com)
"If they don’t leave it will taser them and will keep tasering them until the authorities arrive and by the time they get back they will be dead," says Noel Sharkey.
“People will not worry about this until something really happens," Sharkey told me. “I have no doubt that there will be autonomous crowd control, if we let it happen.”
But a ban at the CCW would apply only to warfare. Although tear gas is banned from military use, many countries’ police, including the US’s, use it regularly on their own people. Sharkey was particularly concerned about the sales of a Skunk Riot Control Copter in South Africa. The octacopter can fire 20 bullets per second from four paintball barrels, “stopping any crowd in its tracks,” according to the company website. The Skunk can be armed with pepper spray, dye-marker balls, or solid plastic balls, carrying 4,000 bullets and “blinding lasers.” It has a thermal camera as well as speakers. Sharkey told me demand is so high that he’s tracked the opening of two new factories producing the Skunk, which sells to various security services around the world.
Clay Rodery for BuzzFeed News
With killer robots, you don’t have to physically occupy a state anymore — you could aerially occupy it with surveillance. You don’t need boots on the ground. I worry that you could have something like aerial occupation and mass surveillance: sensors everywhere, your movements always being tracked, your data always being tracked, everything you say being tracked. That to me seems very Stasi-esque. I don’t want to walk into a dystopian future of mass surveillance and authoritarianism. That’s what I worry about, mostly, with autonomous technologies, because everyone is going to get so scared. It’s more incentives for more surveillance, more monitoring.
Overreliance on technology invites interesting ways of adapting to it from an adversary's perspective. So think of IEDs: cheap, lethal, super-effective. No one saw it coming. So I wonder in a situation like this, where you have AIs doing all sorts of different things, systems in the sea, systems in the sky, how do you fight that? I worry it will breed way more terrorist activities. You can call them insurgents, you can call them terrorists, I don’t care, when you realize that you can’t ever fight the state mano-a-mano anymore, if people are pissed off, they’ll find a way to vent that frustration, and they will probably take it out on people who are defenseless. Because that’s what we see happening now.
— Heather Roff, research scientist at Arizona State University, fellow at the New America Foundation and senior research fellow at Oxford
On the second morning of the conference, Wareham and I were walking into the building when she mentioned that an Israeli Harop had hit a bus full of Armenians in Nagorno-Karabakh, a mountainous region contested between Azerbaijan and Armenia. According to experts, Israel has sold the kamikaze radar-hunting drone to at least five countries. “We’re not saying it’s a killer robot,” Wareham made sure to say, but since it is a system with an autonomous mode that normally targets radars, how did it hit a bus? What mode was it on? Could it be reverse engineered to look for targets other than radars? Nobody was sure.
When Wareham heard about the attack the night before, she started to think about how to document it. “Then I thought, Fuck, man, I’m putting on my write-up hat. This is not good,” she told me. “The write-up is when you can explain what happened and document the harm.”
One of the problems with lobbying for preemptively banning a weapon was that there were no casualties for the campaign to point to, to spur public action. The successful Ban Landmines and Ban Cluster Munitions campaigns Wareham had worked on previously hosted many events to bring attention to victims, including building a pyramid of shoes in front of the UN and erecting the Broken Chair, an almost 40-foot-tall sculpture of a chair missing a leg that still stands there today.
“So you guys have your first victims?” I asked.
“I don’t know…” She trailed off. “I think because we are so nervous about getting the process going, and they are so nervous about existing systems, that we’ve been… but now we will start to get a lot done and start pointing out the contradictions like that. But if you start off like that you can’t get anywhere.”
And indeed, getting this far was hard enough — it took almost a decade for the world to take killer robots seriously. Back in 2007, Sharkey was at a press conference when a journalist asked him about military robots. He didn’t know anything about them. Sharkey thought he’d have a quick look online one night and stayed transfixed for half a year reading military road maps. He learned the US wanted to build Terminator-style killer robots and imbue them with AI. “I was shocked and horrified,” he told me. In his first editorial on the subject in The Guardian, he wrote: “A robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons. I can imagine a little girl being zapped because she points her ice cream at a robot to share.” He began speaking publicly, but little happened.
Two years later, Sharkey launched ICRAC along with two other academics. They made public speeches, gave presentations, and wrote op-eds. Nothing happened. “We could not have the international discussion going at all,” Sharkey told me. It was a fringe group of nutty professors.
In 2010, Williams, the Nobel laureate, was researching extrajudicial execution when she read experts saying that drones were the Ford Model T of where weapons were going. She stumbled into her kitchen where her husband, Stephen Goose, the arms director of Human Rights Watch (HRW), was sitting. “When I learned about killer robots it’s like, what the fuck are human beings thinking?” she told me. Goose is Wareham’s boss. He met Williams while she was working with Wareham on banning landmines, and the two subsequently got together. They began informal talks on starting a new coalition. Wareham began reaching out to experts and other NGOs.
By 2012, a handful of civil society groups had woken up to the problem ICRAC had been talking about. Everyone was watching the impact of armed drones, but their progeny would be much more terrifying. Wareham went to Sheffield to meet Sharkey at home. They spent four hours talking. Wareham wanted to find out more about ICRAC and let them know HRW was planning to campaign against fully autonomous weapons.
HRW started working on a report with Harvard Law School’s International Human Rights Clinic called "Losing Humanity: The Case Against Killer Robots." In order to deal with the seeming contradiction of banning weapons that don’t yet exist but kind of do — or the issue where humans can in theory override the machine but the system is operating at such high speeds and in such complex ways that it’s pretty much impossible for that to have any value — the report came up with a concept called “meaningful human control,” in which humans would have ultimate authority over the critical functions of weapons, which includes selection and engagement of targets. What “meaningful” means is still nebulous, but for the campaign, that would at least satisfy the issue of an accountability gap — that someone could be tried and punished for war crimes, because there would be a chain of command.
In April 2013, the campaign officially launched in London. Taking care to say they appreciate technology, they brought a friendly robot to Parliament Square who handed out bumper stickers to tourists and the media: STOP KILLER ROBOTS in black text, white background, STOP in red, and a bull’s-eye.
But when they went public, plenty of people, including myself, laughed — they seemed insane. Wareham remembers the first New York Times story to quote them: “I was squished in the article between the tea party and Code Pink. It was like they were like, let’s interview all of the radical people, and I was just kind of like, OK, you can put us there if you want.” Comments like this are Wareham’s speciality. She is full of positive energy, frenetically competent, always juggling 100 tasks. Things sometimes drop out of her hands — the strap of her overstuffed backpack slips, coffee vending machines malfunction in her presence (“Trust the machine!” she joked to me), bits of her sandwich fall out, and she interrupts a conversation to scramble for napkins to clean up.
At the 2013 CCW meeting, putting LAWS on the agenda for the following year went to a vote. “That was nail-biting,” Sharkey remembered. The docket could have been vetoed by any of the countries. Sharkey, who had never been part of a campaign and had never done any activism, was impressed. “Once those NGOs got into the bed with us, we were away,” he told me. Even the camaraderie of fellow activists was a boon. “There were times when I was blackly depressed, to be honest,” he told me. He had been on the road for years talking about killer robots, but talking, always talking, and feeling like he was banging his head against a wall, hearing the same things tossed back at him. “I can be talking to some kind of event in London and some person would say, ‘Well, it’s just an unfortunate fact of war, children have always been victims of war and always will be.’ And you think, Did you actually just say that, so glibly?"
The CCW first met to discuss LAWS in 2014. By many accounts, it was a circus. Delegates debated whether LAWS already exist, whether they will ever exist, and what they even were. (At this year's conference not much seemed to have changed, if you ask me, but I was assured it had.)
The following year, more than a thousand AI researchers, including Stephen Hawking, Elon Musk, and Noam Chomsky, signed an open letter calling for a ban. “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” the letter said. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.”
Wareham was delighted when she heard. She had never been to Silicon Valley. “You know that the movement is starting to take off when people start doing these things that you didn’t know about,” she told me. “You’re here at a moment in time — a snapshot — be excited! Be negative, too!”
But it didn’t seem so simple to me — once countries began developing LAWS, which they already have, what would really stop them from continuing? Especially if they perceived adversaries as doing the same. What terrified me most was that while we were talking about banning unicorns — something far away and magical that we have all the time in the world to debate — the technology kept careening. It had killed seven people that morning.
Clay Rodery for BuzzFeed News
If you had talked to me 10 years ago, I would say that’s all science fiction, but I know better now. Take the US launching weapons like the X-47B in big swarms at China. Add China’s counter-weapons. These are moving at supersonic speeds. It is an incontrovertible fact that when we have two programs and we don’t know the content of them and they are competing against each other, no one knows what can happen. Sending a swarm of robots against China is one idea; to imagine that China is not going to be doing the same, that’s crazy. They are going to be sending them to California.
If you want to be really scary, you can imagine people putting a nuclear deterrent on an unmanned system. It can go completely ballistic, I mean, who knows what can happen. That’s the real danger. Accidental warfare has started before we even knew it started and a lot of people are lying dead. Yes, I do get nightmares about it. Really.
— Noel Sharkey, founder of the International Committee for Robot Arms Control
The next day, Wareham and I were sitting outside of the auditorium assessing the current state of the conference. Mexico had come out in favor of a ban, bringing the total up to 12 countries. Wareham added them to The List. But between the UK saying it was interested in “intelligent partnership” and China volleying between citing the limits of AI in telling the difference between “baby boys and baby girls” and suggesting AI would save humanity, all while the Russian delegation smugly stalled on the need for definitions and the US suggested that autonomy in weapons would reduce civilian casualties, I was even more convinced the CCW was among the most ridiculous things I’d ever seen. The only thing keeping me tethered to sanity seemed to be Instagramming photos of the peacocks that freely roam the UN grounds. The head of the US delegation, Michael Meier, walked by us.
“He walked out flashing a [Stop] Killer Robot sticker,” Wareham told me. I was dumbfounded. “Yeah, he’s a nice guy, you know.” Wareham was mid-sentence when another member of the US delegation cut her off.
“Hey, how are you?” the Defense Department representative asked awkwardly, in that way that officials often resemble actual robots. “I was hoping I could get a sticker?”
“Oh, are they all gone?” Wareham asked. “They’re on the table in there.”
Wareham turned back to me and started laughing. “You’re giving out killer robot stickers and that’s the Department of Defense representative saying, 'I really want a sticker.'" She explained how the representative from Germany got them for his five teenagers, who slapped them on their doors as a kind of “Keep Out” sign. Another diplomat kept them on her office desk to show people. “That’s honestly part of the reason why we did this, to get attention, to have a conversation starter,” Wareham said.
No matter how many times Wareham patiently talked me through it, I still couldn’t see how anything they were doing at the CCW would amount to anything. But members of the campaign believe, really seem to believe, in the power of advocacy in a way that feels like a distant memory of a better time — a time before September 11, before Guantánamo, before Snowden, before global jihad made any US foreign policy decision publicly acceptable at home — something Wareham readily admits.
Growing up at the tail end of the Cold War, Wareham felt anything was possible. She remembers being on the streets of Wellington protesting nuclear weapons when she was 11. She and her friends would go downtown after class without telling their parents, joining massive marches against nuclear testing in French Polynesia and against US nuclear warships docking in New Zealand. In 1987, New Zealand passed national legislation that made the country a nuclear-free zone. The US responded by cutting military ties, but the global-pariah status the US threatened never happened.
“That’s my background of activism — that it is possible to take a principled stance,” Wareham told me. “I thought that’s the way the world worked.” By the time Wareham was in university, the Berlin Wall was falling, there was perestroika, and where there had been East and West Germany, there was now just Germany. Her textbooks could barely keep up with the pace of change. Wareham wrote her thesis on landmines. She joined the Ban Landmines campaign under the tutelage of Jody Williams.
For her part, Williams won a Nobel Prize for her faith in the power of activism. When attempts to ban the use of landmines failed at the CCW, the Canadian government took up the cause and invited countries to Ottawa in December 1997 to sign a treaty opting in to ban the weapons; 162 states are party to the treaty — the US, Russia, Israel, India, Pakistan, and China are not. By 2007, Williams and Wareham were working on the Ban Cluster Munitions campaign. The CCW failed again and they took the treaty process outside, this time to Oslo. They admit the campaigns have not eradicated either weapon, but they argue they have drastically reduced their use and established an international norm.
Yet something about LAWS feels different to me, somehow inevitable. “Nothing is inevitable, it is always a choice," Williams told me. "We either choose to allow this to happen or we fight it. My choice is to fight it. I think we may not succeed but it’s not inevitable. That’s very different.”
For all their optimism, the campaigners are strategic, too. There are common tactics to every disarmament movement — a recipe that includes a champion country, victims, and a lot of team players. But two days into the meeting, there was no leader yet among the countries that called for a ban. “They’re mostly developing countries and the Holy See," Wareham told me. "What we need is a Western country." But none had heeded the call; in fact, they seemed to be firmly against it. Wareham didn’t seem deterred. She was enthusiastic about Mexico.
“Mexico is the country that you want on your team when you are trying to create something, because they have been successfully involved in the creation of most of the humanitarian disarmament treaties that have come out of the last 20 years,” Wareham said. “They’re a good team player.”
She gave part of the credit for Mexico’s stance to Camilo Serna, a dapper campaigner from Seguridad Humana en América Latina y el Caribe in Colombia, who swirled around the hall, speaking with a different delegate each time I saw him. “As a campaign only, we are like a catalyst,” he told me. “Latin America is very progressive in disarmament.”
But for Wareham, it was his work that was helping push countries to communicate with their capitals, talking one-on-one and gaining the confidence of the diplomats to talk things through. "If we can build rapport and if they feel like they can talk to us, then maybe they will go back home and make the extra effort,” she said. Diplomats are people too. Another campaign tactic.
Last year, Wareham brought a campaigner from the Middle East and North Africa, whom she credits with bringing Egypt and Algeria on board. She couldn’t afford to sponsor his trip this year. “We’ve got a big network of friends who want to help us with this, but what we lack is money,” Wareham told me. “There’s so much stuff that we could be doing but we’re not. But I keep saying to some governments, this thing is only going to get bigger the longer you take.”
A scene from the Convention on Certain Conventional Weapons in Geneva, 2016. (Sharron Ward)
The more time I spent at the CCW, the more concerned I became that regardless of the outcome of the meeting, it would be too late. The GGE would take another year or two. Even in the most ideal setting, treaty drafting would take another year. Then where would we be?
I wasn’t wrong. “We need to be quite concerned about the developments on the ground just outpacing what is happening at CCW to the point where it becomes an academic discussion, because it will become very hard to unwind once countries start committing large parts of their defense posture to the autonomous weapons,” Russell told me.
While the campaign was talking about fully autonomous weapons, human–machine teaming like the kind Work talked about was coming our way. If you combine facial recognition with targeted strikes, that would be semi-autonomous, because the human does the targeting. “You can say it’s morally inappropriate for other reasons but for the killer robot campaign, it’s not autonomous,” Roff told me. “Now you see where the definitions get really fucked up.”
Roff explained it wouldn’t take much to deploy loitering munitions with facial recognition. “We can do that now,” she said; all that was required was to be able to tap into sensor technology that could accurately see faces. “It’s more a communication data-to-data link, cameras, surveillance, sensor fusion problem, which we are working on.”
“How much does that fall under the scope of whatever the campaign is even talking about?” I asked.
“I don’t think they know,” Roff told me. “Does the trigger have to be pulled and the bullet fired by the human? Or can the human make a series of selections that can be years, months, days ahead of time and then this thing goes and prosecutes, right? If that’s true, then that type of configuration falls outside of the campaign. I think there is a big grey area that the campaign doesn’t address.”
Think about that again: One of the things we may not even be talking about banning is a system that has been targeted to hunt a human for days, months, years — the situation on the ground can change, the tempo of war can change, a bridge that was once filled with tanks can be crammed with civilians in cars, and the weapon is still out there. This is what the campaign is referencing in the phrase "meaningful human control," but there’s no official definition. That’s another campaign tactic: Make countries define things themselves. But why would any country define itself out of a weapon they think will change their national security calculation? The US is among the countries that have suggested they will be able to safely develop these weapons because they will undergo intensive review processes to ensure they comply with international law.
For his part, Paul Scharre, a senior fellow at the Center for a New American Security and co-author of US Directive 3000.09 back when he was at the DoD, doesn’t see the utility in debating a blanket preemptive ban. Scharre talks with the ease of a politician stumping, and with slicked-back brown hair and a neatly trimmed beard, he almost looks the part. But when I sat down with him in the cafe outside the assembly room, even he had reservations about how all of this could work — including the campaign’s idea of meaningful human control or his ex-employer’s own appropriate human judgment. He ran me through a seemingly black-and-white example:
Let’s say we had a weapon that would target Osama bin Laden: The human operator sees bin Laden and he runs around a building, and you let the weapon go, like an attack dog — then you’d say the human was still in control. Take it a step back and say bin Laden runs into a building, and you tell the weapon to go into the building, but you don’t know what’s inside. What if he runs in and grabs a human shield? How big will the explosion be? What if someone else gets injured? Take it another step back: What if you know he’s in the city and you tell the weapon to hunt him in the city?
“It’s not clear where this line is, right?” Scharre said. “At some point, I’m crossing some threshold where the person doesn’t really know the specific context for the attack. Where it’s kind of weird to say that the person’s meaningfully involved, like they’ve just set up this process and then they went and got a sandwich, right? That line is really murky.”
This is the conversation Scharre wants countries to be having on an international level. “There's basically zero chance the CCW will pass a legally binding ban on autonomous weapons because every single state has to agree. It won’t,” Scharre told me. But then what was the point? “In this world we are in, drones are proliferating like mad and we’re seeing more autonomy baked into things like cars and house robots and everything else and you start to ask, how much autonomy do we want? I think it's worth countries coming together to, like, discuss it in an international context and try to see: Can we reach a common understanding of what right looks like?”
So what does right look like?
Even if humans can override the system to stop an attack, how likely is it that they will? It’s well known that humans begin to defer to the machine; it’s called automation bias. Like when the squiggly red line appears under a word you type, even though you’re pretty sure you’re right about how it’s spelled, you start to doubt yourself. You have faith in your spam filter, you stop checking it. “You just come to trust it. And if it goes wrong, it goes really wrong,” Sharkey told me. “Machines don’t go wrong like humans do; they go completely wrong.”
As the complexity of a system increases, it becomes harder to test its behavior in all possible conditions. As code grows, so do the number of elements that can malfunction or be coded incorrectly. With autonomous weapons, how do you even begin to know all of the possible things that can happen during the fog of war? How will these weapons ever actually be completely predictable to operate within the rules of armed conflict?
When it comes to artificial intelligence and deep learning, testing has shown that neural networks misclassify images to a degree that makes no sense to humans. Relying on technology to identify enemies on sight could fall victim to the same shortfalls. Even more worrying, humans don’t know why machines see these things — there’s no way to preempt these mistakes. This makes it hard to predict how a system will malfunction or interpret the data humans feed it.
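The standard “linear” explanation for this fragility is worth seeing in miniature: in a high-dimensional input, a per-pixel change far too small to notice adds up across thousands of pixels and flips the decision. A toy numpy sketch under that assumption (the dimensions and weights are made up, and real deep networks fail in a related but messier way):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 3072                            # roughly a 32x32 RGB image, flattened
w = rng.normal(0, 1, dim)             # a linear "classifier": label = sign(w @ x)
x = rng.normal(0, 1, dim)
x -= (w @ x) / (w @ w) * w            # strip x's component along w...
x += 0.1 * w / np.linalg.norm(w)      # ...then make it weakly but clearly class A

eps = 0.01                            # imperceptibly small per-pixel budget
x_adv = x - eps * np.sign(w)          # nudge every pixel against the weights

print("score before:", w @ x)                            # ~ +5.5 -> class A
print("score after: ", w @ x_adv)                        # ~ -19  -> class B
print("max per-pixel change:", np.abs(x_adv - x).max())  # exactly 0.01
```

Nothing about the input would look different to a human; the classifier is nonetheless certain it has changed.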
What’s worse, Scharre explained, in the case of an error, things could go badly wrong in multiple parts of the system as they interact with each other. Even with a human on the loop, the more complicated the system, the less likely the human will know where exactly the error is occurring. The longer the delay between the system malfunction and when the human figures out how to correct it, the more damage it can do.
Then there is the potential for hacking or spoofing, which could send a weapons system back on itself. “You might think that you have strategic superiority and that could change in basically 30 seconds because someone has a software upgrade on their side," Russell said. "Or perhaps someone hacked into your software and figured out what algorithms you are running and the area you will make a move in is predictable.”
The more technology changed, the more concerned even the campaign became. “It's the most important thing I ever worked on," Sharkey told me. "I am more concerned than when I was at the start because there are so many new technologies that we cannot even predict. We are going to be here all the time getting bans on new things."
Mary Wareham in her office at Human Rights Watch in Washington, DC, on Aug. 22, 2016. (Gabriella Demczuk for BuzzFeed)
At the end of the conference, Wareham and I were walking out through the UN grounds, where a sea of flags fluttered in the wind — an idyllic setting to contrast the grim visions of our imminent future. The day before, Nicaragua and Chile had come on board, bringing the total number of countries calling for a ban to 14. After some furtive diplomatic stalling, the CCW decided to move into the next phase, the GGE, which would then decide whether to write a treaty, but the mandate was weak. On the sidelines, the campaigners had heard broad support for two years of GGE talks spanning six weeks.
Wareham wasn’t ecstatic, but she wasn’t giving up on the CCW either. “Ultimately this is the process that we have got right now, so we have to stick with it,” she decided. “But if they can’t agree to kind of step it up to the next level after this and begin negotiating, that’s where we are going to have problems.”
We walked through the building’s gleaming ornate white archways and through the lush green manicured entryway. In front of us, the Broken Chair statue loomed. I was still completely unconvinced anything I’d witnessed would mark the beginning of the end of a nightmare that has already started — between centaur fighting, autonomous weapons, lack of coherent definitions, attitudes of governments, and the CCW’s meager rate of progress, what were the campaign's odds of saving the world from itself? The technology was already there, so was the will to use it, and every month brought new innovations that could never be rolled back.
I asked Wareham what the campaign would do next. “What I have to do is to bring the robots to the UN,” she told me about the next big meeting in December, where countries will formally agree to move into the GGE and agree to the timeline for discussions. “We would probably do something cool with the future victims. We can have some shadows or something representing what might come or, you know? I am not sure. We will figure it out.” Always thinking, always optimistic, Wareham was already onto the next task.
“You have to have the visual element, the creative element, the diplomatic engagement, the research, the hard people like Jody [Williams], the soft people like Stephen [Goose] in a suit. It’s a symbol — the whole cast of characters and get the ingredients right, get the timing right, build the confidence of the diplomats, make them think that you have got a massive movement behind you and get ready basically. Then everybody says go! And we negotiate and then we’re done, like, 18 months later! Seriously, this is all just lining everything up for that time when everybody is ready!” Wareham turned to me and laughed. ●
