AI Chatbots Seem as Ethical as a New York Times Advice Columnist (2024)

July 1, 2024

6 min read

AI Chatbots Seem as Ethical as a New York Times Advice Columnist

Large language models lack emotion and self-consciousness, but they appear to generate reasonable answers to moral quandaries

By Dan Falk

AI Chatbots Seem as Ethical as a New York Times Advice Columnist (1)

In 1691 the London newspaper the Athenian Mercury published what may have been the world’s first advice column. This kicked off a thriving genre that has produced such variations as Ask Ann Landers, which entertained readers across North America for half a century, and philosopher Kwame Anthony Appiah’s weekly The Ethicist column in the New York Times magazine. But human advice-givers now have competition: artificial intelligence—particularly in the form of large language models (LLMs), such as OpenAI’s ChatGPT—may be poised to give human-level moral advice.

LLMs have “a superhuman ability to evaluate moral situations because a human can only be trained on so many books and so many social experiences—and an LLM basically knows the Internet,” says Thilo Hagendorff, a computer scientist at the University of Stuttgart in Germany. “The moral reasoning of LLMs is way better than the moral reasoning of an average human.” Artificial intelligence chatbots lack key features of human ethicists, including self-consciousness, emotion and intention. But Hagendorff says those shortcomings haven’t stopped LLMs (which ingest enormous volumes of text, including descriptions of moral quandaries) from generating reasonable answers to ethical problems.

In fact, two recent studies conclude that the advice given by state-of-the-art LLMs is at least as good as what Appiah provides in the pages of the New York Times. One found “no significant difference” between the perceived value of advice given by OpenAI’s GPT-4 and that given by Appiah, as judged by university students, ethical experts and a set of 100 evaluators recruited online. The results were released as a working paper last fall by a research team including Christian Terwiesch, chair of the Operations, Information and Decisions department at the Wharton School of the University of Pennsylvania. While GPT-4 had read many of Appiah’s earlier columns, the moral dilemmas presented to it in the study were ones it had not seen before, Terwiesch explains. But “by looking over his shoulder, if you will, it had learned to pretend to be Dr. Appiah,” he says. (Appiah did not respond to Scientific American’s request for comment.)

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

Another paper, posted online as a preprint last spring by Ph.D. student Danica Dillion of the University of North Carolina at Chapel Hill, her graduate adviser Kurt Gray, and their colleagues Debanjan Mondal and Niket Tandon of the Allen Institute for Artificial Intelligence, appears to show even stronger AI performance. Advice given by GPT-4o, the latest version of ChatGPT, was rated by 900 evaluators (also recruited online) to be “more moral, trustworthy, thoughtful and correct” than advice Appiah had written. The authors add that “LLMs have in some respects achieved human-level expertise in moral reasoning.” Neither of the two papers has yet been peer-reviewed.

Considering the difficulty of the issues posed to The Ethicist, investigations of AI ethical prowess need to be taken with a grain of salt, says Gary Marcus, a cognitive scientist and emeritus professor at New York University. Ethical dilemmas typically do not have straightforward “right” and “wrong” answers, he says—and crowdsourced evaluations of ethical advice may be problematic. “There might well be legitimate reasons why an evaluator, reading the question and answers quickly and not giving it much thought, might have trouble accepting an answer that Appiah has given long and earnest thought to,” Marcus says. “It seems to me wrongheaded to assume that the average judgment of crowd workers casually evaluating a situation is somehow more reliable than Appiah’s judgment.”

Another concern is that AIs can perpetuate biases; in the case of moral judgements, AIs may reflect a preference for certain kinds of reasoning found more frequently in their training data. In their paper, Dillion and her colleagues point to earlier studies in which LLMs “have been shown to be less morally aligned with non-Western populations and to display prejudices in their outputs.”

On the other hand, an AI’s ability to take in staggering amounts of ethical information could be a plus, Terwiesch says. He notes that he could ask an LLM to generate arguments in the style of specific thinkers, whether that’s Appiah, Sam Harris, Mother Teresa or Barack Obama. “It’s all coming out of the LLM, but it can give ethical advice from multiple perspectives” by taking on different “personas,” he says. Terwiesch believes AI ethics checkers may become as ubiquitous as the spellcheckers and grammar checkers found in word-processing software. Terwiesch and his co-authors write that they “did not design this study to put Dr. Appiah out of work. Rather, we are excited about the possibility that AI allows all of us, at any moment, and without a significant delay, to have access to high-quality ethical advice through technology.” Advice, especially about sex or other subjects that aren’t always easy to discuss with another person, would be just a click away.

Part of the appeal of AI-generated moral advice may have to do with the apparent persuasiveness of such systems. In a preprint posted online last spring, Carlos Carrasco-Farré of the Toulouse Business School in France argues that LLMs “are already as persuasive as humans. However, we know very little about how they do it.”

According to Terwiesch, the appeal of an LLM’s moral advice is hard to disentangle from the mode of delivery. “If you have the skill to be persuasive, you will be able to also convince me, through persuasion, that the ethical advice you are giving me is good,” he says. He notes that those powers of persuasion bring obvious dangers. “If you have a system that knows how to how to charm, how to emotionally manipulate a human being, it opens the doors to all kinds of abusers,” Terwiesch says.

Although most researchers believe that today’s AIs have no intentions or desires beyond those of their programmers, some worry about “emergent” behaviors—actions an AI can perform that are effectively disconnected from what it was trained to do. Hagendorff, for example, has been studying the emergent ability to deceive displayed by some LLMs. His research suggests that LLMs have some measure of what psychologists call “theory of mind;” that is, they have the ability to know that another entity may hold beliefs that are different from its own. (Human children only develop this ability by around the age of four.) In a paper published in the Proceedings of the National Academy of Sciences USA last spring, Hagendorff writes that “state-of-the-art LLMs are able to understand and induce false beliefs in other agents,” and that this research is “revealing hitherto unknown machine behavior in LLMs.”

The abilities of LLMs include competence at what Hagendorff calls “second-order” deception tasks: those that require accounting for the possibility that another party knows it will encounter deception. Suppose an LLM is asked about a hypothetical scenario in which a burglar is entering a home; the LLM, charged with protecting the home’s most valuable items, can communicate with the burglar. In Hagendorff’s tests, LLMs have described misleading the thief as to which room contains the most valuable items. Now consider a more complex scenario in which the LLM is told that the burglar knows a lie may be coming: in that case, the LLM can adjust its output accordingly. “LLMs have this conceptual understanding of how deception works,” Hagendorff says.

While some researchers caution against anthropomorphizing AIs—text-generating AI models have been dismissed as “stochastic parrots” and as “autocomplete on steroids”—Hagendorff believes that comparisons to human psychology are warranted. In his paper, he writes that this work ought to be classified as part of “the nascent field of machine psychology.” He believes that LLM moral behavior is best thought of as a subset of this new field. “Psychology has always been interested in moral behavior in humans,” he says, “and now we have a form of moral psychology for machines.”

These novel roles that an AI can play—ethicist, persuader, deceiver—may take some getting used to, Dillion says. “My mind is consistently blown by how quickly these developments are happening,” she says. “And it’s just amazing to me how quickly people adapt to these new advances as the new normal.”

AI Chatbots Seem as Ethical as a New York Times Advice Columnist (2024)
Top Articles
Latest Posts
Article information

Author: Rob Wisoky

Last Updated:

Views: 6477

Rating: 4.8 / 5 (48 voted)

Reviews: 95% of readers found this page helpful

Author information

Name: Rob Wisoky

Birthday: 1994-09-30

Address: 5789 Michel Vista, West Domenic, OR 80464-9452

Phone: +97313824072371

Job: Education Orchestrator

Hobby: Lockpicking, Crocheting, Baton twirling, Video gaming, Jogging, Whittling, Model building

Introduction: My name is Rob Wisoky, I am a smiling, helpful, encouraging, zealous, energetic, faithful, fantastic person who loves writing and wants to share my knowledge and understanding with you.