Morality is a thorny issue for machines, as scientists learned in testing Delphi, a system programmed by the Allen Institute for Artificial Intelligence (AI2) to make moral judgments.
The neural network analyzed more than 1.7 million ethical judgments made by humans to establish a morality baseline for itself, and people generally agreed with its decisions when it was released to the open Internet.
Some, however, have found Delphi to be inconsistent, illogical, and insulting, highlighting how AI systems reflect the bias, arbitrariness, and worldview of their creators.
Delphi's developers are hoping to build a universally applicable ethical framework for AI, but as Zeerak Talat of Canada's Simon Fraser University observed, "We can't make machines liable for actions. They are not unguided. There are always people directing them and using them."
From The New York Times