Rensselaer Polytechnic Institute professor Selmer Bringsjord is working on a way to code morality into artificial intelligence.
He says understanding morality is increasingly important as robots become smarter and more autonomous. "I'm worried about both whether it's people making machines do evil things or the machines doing evil things on their own," Bringsjord says. "The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy ... that's a recipe for disaster."
As robots enter military, law enforcement, and health aide roles, it may not be possible to prescribe how they should behave in every conceivable scenario, Bringsjord says. However, robots could be encoded with basic principles, such as never harming a human being.
Morality would have to be built into a robot's operating system to help prevent hackers from using robots for malicious purposes. Several experts worry robots are advancing rapidly without sufficient focus on the issue of morality.
"We want robots that can act on their own," says analyst Dan Olds. "As robots become part of our daily lives, they will have plenty of opportunities to crush and shred us. This may sound like some far off future event, but it's not as distant as some might think."
From Computerworld
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA