Credit: Andrij Borys Associates, Shutterstock.AI
We humans worry that deployed artificially intelligent systems (AIs) could harm individual humans, and perhaps even humanity as a whole. These AIs might be embodied robots, such as autonomous vehicles making driving decisions, or disembodied advisors recommending products, credit, or parole. The field of AI ethics has arisen and grown rapidly, investigating how humans should design and deploy AIs, and how to create AIs that reason appropriately about how they should act. This Viewpoint attempts to pick out one useful thread of an immensely complex and important discussion.
To approach these questions, AI researchers must understand how ethics works for humans—the problem of descriptive ethics. Action decisions are made by individuals, but those decisions also affect the welfare of the larger society. A core functional role for ethics is to balance individual self-interest against the well-being of society.