A Case for Cooperation Between Machines and Humans


Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks fully automated cars and the tech industry’s vision of a robotic future are misguided. Even dangerous. Robots should collaborate with humans, he believes, rather than replace them.

Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on “Assured Autonomy” in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra: computerized machines that are “trusted, reliable and safe.”

Dr. Shneiderman, 72, began spreading his message decades ago. A pioneer in the field of human-computer interaction, he co-founded in 1982 what is now the Conference on Human Factors in Computing Systems and coined the term “direct manipulation” to describe the way objects are moved on a computer screen either with a mouse or, more recently, with a finger.

In 1997, Dr. Shneiderman engaged in a prescient debate with Pattie Maes, a computer scientist at the Massachusetts Institute of Technology’s Media Lab, over the then-fashionable idea of intelligent software agents designed to perform autonomous tasks for computer users — anything from reordering groceries to making a restaurant reservation.

“Designers believe they are creating something lifelike and smart — however, users feel anxious and unable to control these systems,” he argued.

In recent years, the computer industry and academic researchers have tried to bring the two fields, artificial intelligence and human-computer interaction, back together, describing the resulting discipline as “humanistic” or “human-centered” artificial intelligence.

Dr. Shneiderman has challenged the engineering community to rethink the way it approaches artificial intelligence-based automation. Until now, machine autonomy has been described on a one-dimensional scale, ranging from machines that are manually controlled to systems that run without human intervention.

The best known of these one-dimensional models is a set of definitions related to self-driving vehicles established by the Society of Automotive Engineers. It describes six levels of vehicle autonomy, ranging from Level 0, which requires complete human control, to Level 5, which is full driving automation.

In contrast, Dr. Shneiderman has sketched out a two-dimensional alternative that allows for both high levels of machine automation and human control. With certain exceptions such as automobile airbags and nuclear power plant control rods, he asserts that the goal of computing designers should be systems in which computing is used to extend the abilities of human users.

“There is so much that automation can do to help people that is not about replacing them,” said Gill Pratt, the chief executive of the Toyota Research Institute. He has focused the laboratory not just on car safety but also on robotic technology designed to support older drivers.

Advocates of human-machine collaboration often describe such partnerships as “centaurs.” The term was originally popularized in the chess world, where teams pairing humans with computer programs consistently defeated unassisted software.

At the Phoenix conference on autonomous systems this year, Dr. Shneiderman said Boeing’s MCAS flight-control system, which was blamed after two 737 Max jets crashed, was an extreme example of high automation and low human control.

“The designers believed that their autonomous system could not fail,” he wrote in an unpublished article that has been widely circulated. “Therefore, its existence was not described in the user manual and the pilots were not trained in how to switch to manual override.”

Dr. Shneiderman said in an interview that he had attended the conference with the intent of persuading the organizers to change its name, shifting the emphasis from autonomy to human control.

“I’ve come to see that names and metaphors are very important,” he said.

He also cited examples where the Air Force, the National Aeronautics and Space Administration, and the Defense Science Board, a committee of civilian experts that advises the Defense Department on science and technology, had backed away from a reliance on autonomous systems.

Robin Murphy, a computer scientist and robotics specialist at Texas A&M University, said she had spoken to Dr. Shneiderman and broadly agreed with his argument.

“I think there’s some imperfections, and I have talked to Ben about this, but I don’t know anything better,” she said. “We’ve got to think of ways to better represent how humans and computers are engaged together.”

There are also skeptics.

“Ben’s notion that his two-dimensional model is a fresh perspective simply is not true,” said Missy Cummings, director of Duke University’s Humans and Autonomy Laboratory, who said she relied on his human-interface ideas in her design classes.

“The degree of collaboration should be driven by the amount of uncertainty in the system and the criticality of outcomes,” she said. “Nuclear reactors are highly automated for a reason: Humans often do not have fast enough reaction times to push the rods in if the reactor goes critical.”


