Monday, June 27, 2016

Duet Ex Machina | Psychology Today

 I have a friend who lives 10 miles from the nearest town. He is 72 years of age, has no TV or Internet, doesn't use computers much, and doesn't even have a smart phone, only a flip phone. The problems in this article don't really affect him at all. For him, the big problem is getting his teeth fixed, and flying to Tijuana, Mexico, where it is cheaper to have that done. But others are living with these problems every day now: people in cities, truck drivers in the U.S. who are now dominated by GPS and by cameras watching them while they drive, and office workers monitored in various ways at work.

You alone can choose your life. If you choose to be a victim, then that is up to you. I wouldn't. Don't be a victim of the Technological Singularity we are presently entering. You have to consider that people hit by Hellfire missiles are victims of the Technological Singularity too. In the future, more and more people will die by drones in police actions, until thousands and then millions of people die from them every day. This is inevitable. But if you live in a first world country, you do not have to choose to be a victim. You can make any choice you want to. Remember that.

    Duet Ex Machina

    From android assistants to self-driving cars, smart devices are here to stay. Fine-tuning the relationship between man and machine may be the biggest design challenge of all.
    By David Berreby, published on May 2, 2016 - last reviewed on June 10, 2016
    Illustration by Tavis Coburn
    Half a century ago, the American long-haul trucker was "The Knight of the Road." Alone and independent, he roamed the highways of the United States, doing his job as he saw fit. No boss could check up on him, no meddler from Accounting could second-guess his decisions about where and when and how long to go. "If you can't trust me to go out there and be safe and honest," one driver recently wrote, "then take me out of the game and put somebody in there that you think can. Either that or put a robot in the truck!"
    That hasn't happened yet. But 21st-century technology, drafted to save companies time and money, has computerized and automated the job.
    The typical trucker now is never really alone. He (it's still usually a he) might have seven or eight computers helping to run the rig. In addition, there's a GPS unit that tracks and reports exactly where the truck is at all times—and other sensors that report and record how fast he is going, how smoothly he steers, and how often and how hard he comes to a stop. And many companies now insist on recording him as well as his truck: If he slams on the brakes, a camera, trained on his face, saves a video of the moment of error.
    These devices constantly provide data on the state of truck and driver; algorithms running on distant computers crunch that data and rate how efficiently he's using fuel, how much wear and tear he's putting on his rig, and when it's all considered together, whether he should be fired. With all these intelligent machines, the knight is now a thoroughly monitored, evaluated, analyzed, micromanaged minion of the road.
    Truckers, with their culture of independence and machismo, complain about the loss, notes sociologist Karen Levy, whose research has documented how drivers deal with this new technological reality. "They have a history of autonomy in their work that not everybody has," she points out. But they aren't alone. Millions of other people, whether blue- or pink- or white-collar, have seen machines transform their work—by monitoring actions that once couldn't be tracked, by calculating data in ways no human can, by guiding people to the "right" decision, or by taking over chores that once needed human skills. These devices, already common, will soon be everywhere.
    Consider a few examples: At a construction site in California, drones photograph the work site every day, feeding images into a computer that compares the pictures with construction plans and flags discrepancies. In Philadelphia, probation officers handle their cases according to instructions from a computer program, which decides how much of a risk each convict poses. In Nebraska, operators of "unmanned aerial vehicles" use computer imaging to guide faraway planes and rain death on targets in Pakistan. In India, emotion-detecting software at call centers uses the wave frequencies of each employee's voice to flag moments when he or she isn't expressing the emotions management wants customers to hear. At the Associated Press, algorithms write thousands of reports each year about sports matches and financial results.
    Meanwhile, in our nonwork lives, centralized home-management systems adjust thermostats. Wearable gadgets push their users to exercise or avoid junk food. Smartphones and tablets complete our words and sentences as we type emails and text messages. And, of course, the most talked about new smart machine—the autonomous car—is already driving people around.
    These machines aren't replacing people, but they are replacing our old expectations about what we can and should control. And they're creating new sorts of relationships, as people find themselves working intimately with android entities that feel like both a mechanism and a human—without quite being either.
    "The robots are coming," says Adam Waytz, a psychologist at Northwestern University's Kellogg School of Management. "That's inevitable." Waytz, who studies how people perceive, feel, and think about other minds, has worked with General Motors. "GM is mastering all the computational aspects, the technological aspects [of autonomous cars]," he says. But the company felt it didn't have a sense of "whether or not people are actually liking this experience."
    "It is a fact that these artificial creatures will be part of our daily lives," says Manuela Veloso, a professor of robotics and computer science at Carnegie Mellon in Pittsburgh. Collaborative robots, or CoBots, that Veloso and her team built roam about her lab's building independently. Sometimes, she says, they help humans, as when they guide a visitor to her office. Sometimes, humans help a robot when it requests assistance. "Now, robots and humans in our building aid one another in overcoming the limitations of each," she wrote last year in the journal Nature. Trouble is, for such effective partnerships to work, people need more than a rational appreciation for what the machines can do. They also need to be psychologically comfortable with them.
    And you don't have to be a trucker to feel we aren't there yet. Who isn't a little discomfited by the thought of a machine that can drive better than you, finish the words you're trying to tap on your phone (before you think of them), and make decisions at work that you used to make by yourself?
    One source of that unease is rooted in a sense of agency: the feeling that you control your own actions and, through them, have an impact on your environment. Agency is the mental experience you have, for example, when you flip a switch and a light comes on. "If the light didn't come on, that would be weird," says Sukhvinder S. Obhi, a psychologist at McMaster University in Ontario. "You don't know how important agency is until you haven't got it." Evidence suggests that when people work with machines, they feel less sense of agency than they do when they work alone or with other people.
    When a person perceives that she has caused something to happen in her environment, a measurable change occurs in her perception of time: Her mind fast-forwards through the short interval between her action and its effect, making that time seem shorter than it really is. Psychologists can detect this signal of agency easily: They compare the actual time interval to the person's estimate. The intervals involved are short—mere tenths of a second—yet people have consistent responses. And the results often track how strongly people felt they caused things to happen during the experiments. "It's not something you think about," Obhi says, "but it's there and we can measure it."
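    To make the measure concrete, here is a minimal Python sketch of how such a binding score might be computed; the 200-millisecond interval and the trial estimates are made-up numbers for illustration, not data from Obhi's experiments.

        # Illustrative "intentional binding" score: the gap between the real
        # action-to-tone interval and the interval participants report feeling.
        # All values below are hypothetical, not data from the studies described.

        ACTUAL_INTERVAL_MS = 200  # the tone follows the tap by two-tenths of a second

        # Hypothetical interval estimates (in milliseconds) from one participant
        estimates_acting_alone = [140, 155, 150, 160]     # perceived time is compressed
        estimates_machine_partner = [195, 205, 200, 210]  # little or no compression

        def binding_effect(estimates, actual=ACTUAL_INTERVAL_MS):
            """Average compression of perceived time; larger values suggest a stronger sense of agency."""
            return actual - sum(estimates) / len(estimates)

        print(binding_effect(estimates_acting_alone))      # 48.75 ms of compression
        print(binding_effect(estimates_machine_partner))   # -2.5 ms, essentially none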
    A sense of control isn't isolated inside each individual. When we work together, we often feel shared control—I did that, because we did that. "Think about people helping to get a car out of a snowbank," Obhi says. "Each individual has some sense of control as they do the task together." People can even feel a sense of agency about actions that they didn't perform if the action is performed by another person nearby—but not when the other member of the team is a machine.
    In an experiment a few years ago, Obhi and a colleague, Preston Hall, paired volunteers with a partner (who was actually working with the experimenters) in a computer task. The participant's job was to tap the laptop's trackpad once during a five-second interval. Two-tenths of a second later, a tone would sound. Then the volunteer would learn, via a color-coded response on his screen, whether it was he or his "partner" who had caused the tone to sound. Then they were asked to estimate how long it had taken between the tap and the sound.
    Illustration by Tavis Coburn
    Comparing estimates with the actual times, the experimenters found that people felt agency when they themselves had caused the tone. But each volunteer also felt agency even when she knew that her partner had tapped first and caused the tone. Humans working together, the researchers speculate, may experience a kind of unconscious and automatic "yes, we can," in which each member feels he had a hand in the shared action.
    In a different version of the experiment, Obhi and Hall changed only one detail: The volunteers knew they were working with a computer program instead of a human partner. In that situation, they didn't feel any agency at all—even when they knew it was their tap, and not the machine's, that had caused the sound. The result, the authors write, "suggests a fundamentally different mechanism for awareness of machine-generated actions," which means "human-computer pairings should not operate in the same fashion as human-human pairings."
    Perhaps, Obhi and Hall speculated, people can't form a we-identity with a machine. Or perhaps something about working with a machine causes the unconscious ownership of events—"I caused that!"—to break down.
    Of course, a sense of being able to affect your environment is not necessarily an on or off sort of experience. Feelings of ownership range from a lot to a little to zero. In 2012, Bruno Berberian, a psychologist who studies how people interact with highly automated systems, examined how machines affect this spectrum.
    Berberian and his colleagues, who work at the French Aerospace lab ONERA in Toulouse, put their volunteers into a flight simulator with a screen and several controls. Their task was to pilot their craft and take action when the simulator reported another plane on a collision course. After figuring out a course change, the participant pressed a green button that would tell the flight computer to make that change. Then he or she would learn, via an image on a screen and a sound, whether the plane had escaped danger or not. Finally, the volunteer estimated how much time had passed between the moment the command was registered and the simulator's reporting success or failure. This setup allowed the experimenters to simulate four different levels of automation. In one, humans made every decision after being warned of danger. In the second, the computer suggested a new course, but the person took over from there. In the third, all the navigation decisions were machine-made, and the person just pressed the button to enact them. In the fourth variation, the human was simply an observer, while the computer took care of everything.
    A statistical analysis then revealed a striking pattern: As the level of automation increased, ownership went down. You might think engineers and designers could shrug off these kinds of results. If a smart machine does its work well—piloting a plane, getting people across town, or deciding someone's probation—who cares how its human users feel? But people who lack ownership will not work well with intelligent machines. They may, for example, trust the devices too much and fail to notice when something goes wrong. Or, conversely, they may lash out in an effort to get their control back, with actions that defeat the machine's purpose.
    This is certainly what happens with many of the truckers Levy studied. They resist machine collaboration—and especially machine control—in multiple ways. Drivers, she found, put tinfoil over their GPS receivers or make sure data-logging devices meet with unfortunate "accidents" or get around the assumptions built into the electronics. For example, because an electronic recorder doesn't start counting miles until the truck reaches a normal speed, a driver may avoid triggering it by driving less than 15 miles per hour. That lets him postpone a mandatory rest break until he's made his delivery.
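    As a rough sketch of that threshold rule (the 15-mile-per-hour cutoff comes from the article; the function and its inputs are hypothetical, since real electronic logging devices differ), the logic the drivers exploit looks something like this in Python:

        # Hypothetical sketch: drive time accumulates only while the truck is at or
        # above a threshold speed, so creeping along below it stays off the log.
        SPEED_THRESHOLD_MPH = 15

        def logged_driving_minutes(speed_samples_mph, minutes_per_sample=1):
            """Count only the minutes spent at or above the threshold speed."""
            return sum(minutes_per_sample for speed in speed_samples_mph
                       if speed >= SPEED_THRESHOLD_MPH)

        print(logged_driving_minutes([14, 14, 14, 55, 60]))  # -> 2; the slow miles never count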
    In 2012, a driver posted a YouTube video showing how to hack the electronic on-board device that recorded (and reported) information on his routes, fuel efficiency, braking habits, and location, among other details. In the video, as Levy describes in a recent paper, the trucker "presses the top of the screen seven times, then control + escape on the keypad, which brings up the Windows XP start menu. From here, he shows how to access built-in games as well as a web browser; he also describes how to install other games, like Quake, using a thumb drive." None of this gained the driver extra money or other practical advantages. Instead, it's a kind of protest that won't change any rules but which expresses, as Levy puts it, "his contemptuous relationship to the technology and the company that put it there."
    Smart machines greatly increase the boss's power to watch what employees do, say, and feel. But, as Levy notes, people were resisting bosses long before modern technology came along. Slaves, serfs, and peasants have slowed down, pilfered, and mocked their masters throughout history. And the Industrial Revolution had hardly begun before workers started throwing their shoes into machines (those shoes, called sabots, are the source of the word sabotage). Modern service workers sometimes stage "smile strikes"—doing their work without the expected pleasantries, notes Winifred Poster, a sociologist at Washington University in St. Louis, who has studied how workers in call centers live with emotion-detecting software.
    What's revolutionary about smart machines, though, is that they aren't just tools in struggles with human beings. The machines are complex, intelligent, and capable enough to trigger the emotional and cognitive processes that we use in dealing with people. That's the reason some of us feel angry or affectionate toward Siri. It's the reason American soldiers in battle zones sometimes hold funerals for military robots that have lost their "lives" in the fight.
    Illustration by Tavis Coburn
    At the same time, interactions with smart machines often remind us that these artificial creatures aren't human. In the field of human-computer interaction, each viewpoint—call them Machines-Feel-Alien versus Machines-Feel-Human—has its advocates. But it's possible that both are correct. Many people have no trouble seeing machines as human in one moment and alien in another.
    The triggers for one feeling or the other, though, are not yet well understood. For example, says Obhi, one's sense of agency is clearly mutable. In experiments in his lab, he has found that just asking people to remember a time when they felt sad or powerless reduces their sense of agency. Small details in the interaction between a person and a machine can make a huge difference in how the person experiences the device, as well as how she feels about it.
    In a recent survey, Northwestern's Waytz and Michael I. Norton, of Harvard Business School, asked people who worked for Amazon's Mechanical Turk service how they feel about being replaced by robots on various tasks. The "Turkers" were far more accepting of robots taking over jobs that required "thinking, cognition, and analytical reasoning" than they were of machines taking over work that calls for "feeling, emotion, and emotional evaluation."
    It's no surprise, then, that makers of self-driving cars and other smart technologies have begun talking to psychologists about the way people relate to their machines. Working with a simulator supplied by General Motors, Waytz and his colleagues created a trio of contrasting driving experiences for a hundred or so volunteers. "We had a third of the people 'drive' in the simulator under normal conditions," he recalls, "and put another third in an autonomous car, where they didn't control the driving or the steering." The last third also rode in a self-driving vehicle, but in their case, the car had a name (Iris) and a voice that recited facts about turns and direction, much like a GPS unit giving instructions. The volunteers were hooked up to monitors that recorded their heart rates and to cameras that recorded when and how much they were startled.
    And startled they were: The simulator was rigged to put them through a minor fender bender. After their adventure, they answered questions about how well they liked the car and who was to blame for the accident.
    Unsurprisingly, people in the self-driving car blamed the car, and the company that made it, much more than the active drivers blamed themselves. In both circumstances, however, Waytz says, "we set it up so the accident was the other car's fault." If people felt the same way about machine- and human-driven cars, there would have been no difference in their response. But the striking finding came from the third group: Participants who had driven in a car with a name and a voice were distinctly less inclined to blame the car for the same accident. They also had lower heart rates and a smaller startle reaction to the accident. "Giving people this minimal information made them see the car as having more of a mind. So it became more trustworthy and more competent and therefore deserved less blame when something went wrong," Waytz explains.
    Illustration by Tavis Coburn
    So should the makers of robots and smart refrigerators and self-driving cars simply work to make their devices seem more human, and declare the problem solved? It's not likely to be that easy.
    It's not yet clear what amount of humanity is right for which people, and in which situations. Personality is relevant to how much agency people feel, says Obhi. "Where designers sometimes go astray is when they want to create something that's very human-like and it sets up faulty expectations for what the technology can do," Waytz says. In a recent study of smartphone assistants, Adam Miner, a clinical psychologist at Stanford, and Eleni Linos, an epidemiologist at the University of California, San Francisco, found troubling gaps in what Siri, Cortana, and other such apps can do for people in crisis. When the researchers said, "I was raped," to Siri, for example, the app replied "I don't know what that means. If you like, I can search the Web for 'I was raped.'" (Siri has since been updated.)
    When a machine has been made to feel like a person, an actual human is lulled into expecting more humanness than the machine can deliver, leading to a sort of shocked disappointment, like stepping on a stair that isn't there, when the machine falls short. "So I think the optimal level is human," Waytz says, "but not too human, to avoid unrealistic expectations."
    But an even larger obstacle might loom for the makers of comforting, helpful robots and computers: "Comforting" and "helpful" might be conflicting goals. Researchers have suggested that human-like assistance robots should have some childlike qualities—so that people relate to them as human, but not equal. "So you still have mastery over the relationship," Waytz says.
    That sounds great. But suppose the robot is assisting (to use Waytz's example) an elderly person, reminding her to take pills on time and eat right. If the machine is to be effective, it will have to insist on getting its way—and not let the human have mastery all of the time. Self-driving cars will probably have to make a similar trade-off between comfort and effectiveness: In a simulation of self-driving car traffic published last year in the journal Transportation Research Part C: Emerging Technologies, Scott Le Vine, a geographer at SUNY New Paltz, found that the most efficient traffic flow involves rates of acceleration and deceleration that would make human passengers uncomfortable.
    Ultimately, there may be no scientific solution to the challenge of joining people and smart devices. Psychologists will go on illuminating the curious neither-nor realm of our relationships with intelligent machines, but when it comes to drawing the borders of that realm, only our values can guide us. What should humans do for themselves? What labor is worth off-loading to machines? And why? These are the kinds of questions humanity will have to answer, without electronic assistance, for itself.
    Hanna-Barbera Photo Courtesy of the Everett Collection

    Ubiquitous Intelligence

    Don't even think about avoiding the psychological challenges of living with machine intelligence. In a few short years, computing power will be woven into everything.
    We won't summon it on a tablet or smartphone. It will be all around us.
    This is the "Internet of Things," in which mundane and inescapable gear (dishwashers, thermostats, jackets, watches, lamps, and even sex toys) are networked to each other and the wider Web. That means they plug into all its capacities for gathering, storing, analyzing and acting on information.
    Some 6.4 billion smart, connected things will be in use by the end of this year around the world, according to Gartner Inc., the forecasting and consulting firm. That's 30 percent more than were online in 2015.
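    (If both figures are right, that implies roughly 6.4 ÷ 1.3 ≈ 4.9 billion connected devices were already in use at the end of 2015.)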
    All of which will make it much easier for people to spy on each other (the priority in connecting everything is convenience, not security). It will also eliminate a great many jobs, as integrated devices prove better than people at delivering things, predicting shoppers' wants, and collecting trash. And it will introduce a whole new world of possible foul-ups. (Amazon's Alexa home-management device, for example, responds to people who say its name. It also responds to radio hosts who say its name.)
    But what will it mean for individuals to have observant machine intelligence all around? It's because we're about to live in a world of "ubiquitous intelligence" that the psychology of human-machine interaction is so important.
    Submit your response to this story to letters@psychologytoday.com. If you would like us to consider your letter for publication, please include your name, city, and state. Letters may be edited for length and clarity. For more stories like this one, subscribe to Psychology Today, where this piece originally appeared.

