
Communications of the ACM


Technology's Impact on Morality

[Illustration: circuit electronics on a human head. Credit: Getty Images]

Can technology affect human morality?

This is not an esoteric test question from a college philosophy class, but a real and growing concern among leading technologists and thinkers. Technologies like social media, smartphones, and artificial intelligence can create moral issues at scale, and technology experts and society at large are struggling to navigate them.

On the one hand, technology can empower us with better information on the consequences of our actions, as when we use the Internet to research how to reduce our environmental footprint. In the past, such information may have been inaccessible or impossible to source, but today we can easily arm ourselves with data that helps us make choices we perceive to be more moral.

On the other hand, technology can bring out our worst behaviors. Social media platforms can serve us content that enrages or depresses us, making it more (or less) likely we will take immoral actions based on our feelings. These platforms also can be used by bad actors to take immoral actions more easily.

Not to mention, technology companies themselves may use their creations in ways that, intentionally or accidentally, cause real harm.

One prominent example of how technology can impact morality is Facebook. The company's social media platform makes it easier for us to connect with people and issues we care about, which enables us to take moral actions, like supporting a friend having a hard time or raising money for an important cause.

However, the same functionality that allows moral choices also enables immoral ones, as Facebook whistle-blower Frances Haugen told the U.S. Congress. Haugen testified in October 2021 that Facebook knowingly served content containing hate speech and misinformation to its users because it increased engagement. Some users receiving that content then spoke and acted in hateful ways, causing mental and physical harm to others. In one case, Haugen said, the company's technology was even used to fan the flames of genocide in Myanmar, literally costing lives.

Technology without morality is barbarous; morality without technology is impotent.—Freeman Dyson

"The result has been more division, more harm, more lies, more threats and more combat," Haugen told Congress. "In some cases, this dangerous online talk has led to actual violence that harms and even kills people."

Examples like Facebook have experts increasingly worried about links between our technology and our moral behavior (or the lack thereof).

In fact, these days the question should not be "Can technology affect our morality?" but "How does technology affect our morality?"


The Moral Hazards of Technology

Moral behavior starts in the brain, in how we think.

Technology can influence our neurochemistry. In one of many studies on the subject, research supported by the National Institutes of Health found that Instagram directly influenced the neurochemistry of high schoolers by stimulating the reward center of the brain when photos they had posted were liked.

"Changing certain aspects of our neurochemistry with technology is bound to affect our moral behavior," says Joao Fabiano, a visiting fellow at Harvard University's Ash Center in the Kennedy School of Government, who researches the moral implications of new technologies. These impacts can range from motivating us to make positive moral decisions to incentivizing us to make bad ones.

This does not mean we are blameless for our behavior because technology hacks our brains. It means that technology has a profound influence on how we think and act, an influence we are only just beginning to understand.

For starters, technology can be used knowingly by immoral actors in harmful ways, either through turning a blind eye to harm or by intentionally using a platform to create negative outcomes.

"If I want to do something unethical, technology can provide me with better information and better tools to achieve my ill-intentioned goals, giving me opportunities that I would not have pursued without technological help," says Jean-François Bonnefon, a research director at France's Toulouse School of Economics who studies moral psychology and artificial intelligence (AI).

Technology also can be used to pass the buck on making moral decisions, says Bonnefon. "For example, I may decide to buy an automated vehicle that always puts my safety first, even if it disproportionately increases risks for other road users," he says. "I might have felt guilty driving that way myself, but I do not feel as guilty if a machine is doing it on my behalf."

The debate over technology's impact on morality becomes even murkier when you consider what happens when it amplifies our own moral flaws.

"Like any technology, social media simply enables us to do what we've done to a greater degree," says Paul Taylor, a pastor at Peninsula Bible Church in Palo Alto, CA. Taylor is no stranger to the moral quandaries posed by technology; before becoming a pastor, he worked in software development in San Francisco, at one point as a product manager for tech giant Oracle.

Taylor says humans tend to present themselves in a certain light, revealing some information while obscuring other details. "But the choices we make about what kinds of information we reveal end up shaping us in significant ways. The design of social media technologies incentivizes certain information and disincentivizes other information."

These technical decisions end up shaping the choices we make about how to present ourselves, which in turn shapes how we view ourselves and other people.

And what happens when the machines themselves tell us how we should act?

"We should pay more attention to how [artificial intelligence] shapes our moral behavior," says Nils Köbis, a postdoctoral fellow at the Max Planck Institute for Human Development who studies ethics and AI.

For example, take Amazon's Alexa voice assistant. Köbis notes that Amazon's chief scientist, Rohit Prasad, has said the company envisions Alexa evolving from a voice assistant into a trusted advisor. "Soon, people might not only ask it to put on their favorite music or switch on the light, but consult it before making decisions with moral consequences," says Köbis.


Köbis and other researchers "conducted a large-scale, financially incentivized, preregistered experiment" to judge how machines impact our morality by testing whether AI-generated advice could corrupt people. They had study participants play a game in which they were given dishonest advice about how to cheat and get ahead, sometimes by humans and sometimes by AI. Study participants accepted the dishonest advice and lied to win the game.

According to the study, "the effect of AI-generated advice was indistinguishable from that of human-written advice." The study concludes humans could be just as strongly corrupted by AI systems as they are by other humans.


Move Fast and Break Things

One reason technology causes murky moral quandaries is that its creators don't always consider its consequences.

"For many AI tools, a mentality of 'innovate first, ask for forgiveness later' exists," says Köbis. "Stress-testing whether and how the technology might affect people's ethical behavior is an important step." One way to do that would be to let researchers and civil society organizations audit code released by major technology players, but that is easier said than done, Köbis says.

Many companies use proprietary data and code that they guard carefully. That makes it difficult to understand how their technology works, and to determine the possible moral pitfalls it could create. This became apparent in Haugen's testimony to Congress, during which she advocated for the regulation of Facebook's algorithms. However, regulators would need unprecedented access to Facebook's algorithms, code, and data to enact such a policy.

Projects like DataSkop, by AlgorithmWatch, try to work around these restrictions. Through DataSkop, users can donate data from their desktop or laptop computers to help researchers understand how algorithmic decision-making systems work. In a project in the fall of 2021, DataSkop analyzed user-contributed data to spot patterns in YouTube's recommendation and search-result algorithms during Germany's election campaign season, and the role personalization plays in them.

Another way to address how technology impacts morality is to ask better questions before the technology is developed.

"Philosophy has developed tools for tackling big-picture questions that innovators should be asking," says Fabiano. These tools include fundamental questions about what's best for future generations, and where responsibility lies for the harm caused by technologies.

It also means considering what it means to be human, says Taylor. "Technology creators can be more responsible in their efforts if they have a grounded understanding of what it means to be human," he says.

That means asking questions like: What aspects of our humanity are at risk from technology? Which aspects are too important to sacrifice? What ideals do we want to cultivate? What temptations do we want to avoid?

"Having some answers to these questions will provide a foundation for designing technology to have a positive impact on our behavior," Taylor says.

If we get there, we may see technology where morality is a feature, not a bug.

Further Reading

Allyn, B.
Here are 4 key points from the Facebook whistleblower's testimony on Capitol Hill, NPR, Oct. 5, 2021.

Leib, M. et al.
The corruptive force of AI-generated advice, Cornell University, Feb. 15, 2021.

Samuel, S.
It's hard to be a moral person. Technology is making it harder., Vox, Aug. 3, 2021.

Sherman, L. et al.
Peer Influence Via Instagram: Effects on Brain and Behavior in Adolescence and Young Adulthood, Child Development, Jan./Feb. 2018.



Logan Kugler is a freelance technology writer based in Tampa, FL, USA. He is a regular contributor to CACM and has written for nearly 100 major publications.

©2022 ACM  0001-0782/22/4

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.

