- November 2008 by Tom Simonite
With the relentless march of
technological progress, robots and other automated systems are getting
ever smarter. At the same time they are also being given greater
responsibilities, driving cars, helping with childcare, carrying weapons, and maybe soon even pulling the trigger.
But should they be trusted to take on
such tasks, and how can we be sure that they never take a decision that
could cause unintended harm?
The latest contribution to the growing
debate over the challenges posed by increasingly powerful and
independent robots is the book Moral Machines: Teaching Robots Right from Wrong.
Authors Wendell Wallach, an ethicist at Yale University, and Colin Allen, a historian and philosopher of cognitive science at Indiana University, argue that we need to work out how to make
robots into responsible and moral machines. It is just a matter of time
until a computer or robot takes a decision that will cause a human
disaster, they say.
So are there things we can do to
minimise the risks? Wallach and Allen take a look at six strategies that
could reduce the danger from our own high-tech creations.
Keep them in low-risk situations
Make sure that computers and robots never have to make a decision whose consequences cannot be predicted in advance.
Likelihood of success: Extremely low. Engineers are already building computers and robotic systems whose actions they cannot always predict.
Consumers, industry, and government
demand technologies that perform a wide array of tasks, and businesses
will expand the products they offer to capitalise on this demand.
Implementing this strategy would require halting further development of computers and robots immediately.
Do not give them weapons
Likelihood of success: Too late. Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A few machine-gun-toting robots were sent to Iraq and photographed on a battlefield, though apparently they were never used in combat.
However, military planners are very
interested in the development of robotic soldiers, and see them as a
means of reducing deaths of human soldiers during warfare.
While it is too late to stop the
building of robot weapons, it may not be too late to restrict which
weapons they carry, or the situations in which the weapons can be used.
Give them rules like Asimov's 'Three Laws of Robotics'
Likelihood of success: Moderate. Isaac Asimov's famous rules
are arranged hierarchically: most importantly, robots should not harm
humans or, through inaction, allow them to come to harm; of secondary
importance is that they obey humans; and robotic self-preservation has
the lowest priority.
However, Asimov
was writing fiction, not building robots. In story after story he
illustrates problems that would arise with even these simple rules, such
as what the robot should do when orders from two people conflict.
Asimov's rules task robots with some
difficult judgements. For example, how could a robot know that a human
surgeon cutting into a patient was trying to help them? Asimov's robot
stories in fact quite clearly demonstrate the limits of any rule-based
morality. Nevertheless, rules can successfully restrict the behaviour of
robots that function within very limited contexts.
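The hierarchical ordering Asimov imagined can be made concrete with a small sketch. The following Python is not from the book; every predicate and action name is invented for illustration. It shows one way a strictly ranked rule set could veto candidate actions, and where the hard part remains: deciding whether the predicates themselves hold.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would the action injure a human?
    disobeys_order: bool = False   # does it ignore a human order?
    endangers_robot: bool = False  # does it put the robot itself at risk?

# Rules listed from highest to lowest priority, mirroring Asimov's ordering.
RULES: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_human,      # Law 1: never harm a human
    lambda a: not a.disobeys_order,   # Law 2: obey humans
    lambda a: not a.endangers_robot,  # Law 3: protect yourself
]

def first_violation(action: Action) -> int:
    """Index of the highest-priority rule the action breaks
    (len(RULES) if it breaks none)."""
    for i, rule in enumerate(RULES):
        if not rule(action):
            return i
    return len(RULES)

def choose(actions: List[Action]) -> Action:
    """Prefer the action whose worst violation is as low-priority as possible."""
    return max(actions, key=first_violation)

options = [
    Action("ignore the order", disobeys_order=True),
    Action("follow the order into danger", endangers_robot=True),
]
print(choose(options).name)  # prefers risking itself over disobeying a human
```

Even in this toy form the limits are clear: everything hinges on how the predicates are judged in the first place (is the surgeon "harming" the patient?), which is exactly the problem Asimov's stories dramatise.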
Program robots with principles
Building robots motivated to create
the "greatest good for the greatest number", or to "treat others as you
would wish to be treated", would be safer than laying down simplistic
rules.
Likelihood of success:
Moderate. Recognising the limits of rules, some ethicists look for an
overriding principle that can be used to evaluate all courses of
action.
But the history of ethics is a long
debate over the value and limits of many proposed single principles. For
example, it could seem logical to sacrifice the life of one person to
save the lives of five. But a human doctor would not sacrifice a
healthy person simply to supply organs to five people needing
transplants. Would a robot?
Sometimes identifying the best option
under a given principle can be extremely difficult. For example, determining
which course of action leads to the greatest good would require a
tremendous amount of knowledge, and an understanding of the effects of
actions in the world. Making such calculations would require time and a
great deal of computing power.
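To see why such calculations are so demanding, here is a deliberately tiny Python sketch, hypothetical and not from the book, with all probabilities, names, and numbers invented. It scores each candidate action by the expected change in well-being summed over everyone affected; a real robot would need reliable probabilities and effects for every person its actions touch, which is where the knowledge and computing demands explode.

```python
from typing import Dict, List, Tuple

# Each action leads to possible outcomes: (probability, well-being change per person).
Outcomes = List[Tuple[float, Dict[str, float]]]

def expected_good(outcomes: Outcomes) -> float:
    """Expected total change in well-being, summed over everyone affected."""
    return sum(p * sum(effects.values()) for p, effects in outcomes)

def best_action(actions: Dict[str, Outcomes]) -> str:
    """Pick the action with the highest expected aggregate good."""
    return max(actions, key=lambda name: expected_good(actions[name]))

# Invented numbers for a two-option dilemma.
actions: Dict[str, Outcomes] = {
    "swerve": [(0.9, {"pedestrian": +1.0, "passenger": -0.2}),
               (0.1, {"pedestrian": +1.0, "passenger": -1.0})],
    "brake":  [(0.7, {"pedestrian": -1.0, "passenger": 0.0}),
               (0.3, {"pedestrian": +1.0, "passenger": 0.0})],
}
print(best_action(actions))  # "swerve" here, but only because of the guessed numbers
```

The arithmetic is trivial; the difficulty lies in filling in the table, since every added person, outcome, and downstream effect multiplies the knowledge the machine would need.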
Educate robots like children
Machines that learn as they "grow up" could develop sensitivity to the actions that people consider to be right and wrong.
Likelihood of success: Promising, although this strategy requires a few technological breakthroughs. While researchers have created robots able to learn in similar ways to humans, the tools presently available are very limited.
Make machines master emotion
Human-like faculties such as empathy, emotions, and
the capacity to read non-verbal social cues should give robots much
greater ability to interact with humans. Work has already started on equipping domestic robots with such faculties.
Likelihood of success:
Developing emotionally sensitive robots would certainly help implement
the previous three solutions discussed. Most of the information we use
to make choices and cooperate with others derives from our emotions, as
well as our capacity to read gestures and intentions and imagine things
from another person's point of view.
=======================
http://www.erwinvanlun.com/ww/full/robot_ethics/