Why do we personify AI models, and are these metaphors actually helpful?


The press has always used metaphors and analogies to make complex issues easier to understand. With the rise of chatbots powered by artificial intelligence (AI), the tendency to humanize technology has intensified, whether through medical comparisons, well-known similes, or dystopian scenarios.

Although what lies behind AI is nothing more than code and circuits, the media often portrays algorithms as having human qualities. So what do we lose, and what do we gain, when AI ceases to be a mere device and becomes, linguistically speaking, a human alter ego, an entity that “thinks,” “feels” and even “cares”?

The digital brain

An article in Spanish newspaper El País presented the Chinese AI model DeepSeek as a “digital brain” that “seems to quite clearly understand the geopolitical context of its birth.”

This way of writing replaces technical jargon (foundation models, parameters, GPUs, and so on) with an organ that we all recognize as the core of human intelligence. This has two results. It allows people to grasp the magnitude and nature of the task (“thinking”) performed by the machine. However, it also suggests that AI has a “mind” capable of making judgements and remembering contexts, something that is currently far from the technical reality.

This metaphor fits the classic conceptual metaphor theory of George Lakoff and Mark Johnson, which argues that concepts help humans understand reality and enable them to think and act. Applied to AI, this means we turn difficult, abstract processes (“statistical computation”) into familiar ones (“thinking”).

While potentially helpful, this tendency runs the risk of obscuring the difference between statistical correlation and semantic understanding. It reinforces the illusion that computer systems can truly “know” something.


Machines with feelings

In February 2025, ABC published a report on “emotional AI” that asked: “will there come a day when they are capable of feeling?” The text recounted the progress made by a Spanish team attempting to equip conversational AI systems with a “digital limbic system.”

Here, the metaphor becomes even bolder. The algorithm no longer just thinks, but can also suffer or feel joy. This comparison dramatizes innovation and brings it closer to the reader, but it carries conceptual errors: by definition, feelings are linked to bodily existence and self-awareness, which software cannot have. Presenting AI as an “emotional subject” makes it easier to demand empathy from it or blame it for cruelty. It therefore shifts the moral focus from the people who design and program the machine to the machine itself.

A similar article reflected that “if artificial intelligence seems human, has feelings like a human and lives like a human… what does it matter if it is a machine?”

Robots that care

Humanoid robots are often presented in these terms. A report in El País on China’s push for elder-care androids described them as machines that “take care of their elders.” By saying “take care of,” the article evokes a family’s duty to look after its elders, and the robot is presented as a relative who will provide the emotional companionship and physical assistance previously offered by family members or nursing staff.

This caregiver metaphor is not all bad. It legitimizes innovation in a context of demographic crisis, and it calms technological fears by presenting the robot as essential support in the face of staff shortages rather than a threat to jobs.


However, it could be seen as obscuring the ethical issues surrounding responsibility when caregiving work is done by a machine managed by private companies—not to mention the already precarious nature of this kind of work.

The doctor’s assistant

In another report by El País, large language models were presented as a doctor’s assistant or “extension,” capable of reviewing medical records and suggesting diagnoses. The metaphor of the “smart scalpel” or “tireless resident” positions AI within the health care system as a trusted collaborator rather than a substitute.

This hybrid framework—neither an inert device nor an autonomous colleague—fosters public acceptance, as it respects medical authority while promising efficiency. However, it also opens up discussions about accountability: if the “extension” makes a mistake, does the blame lie with the human professional, the software, or the company that markets it?

Why does the press rely on metaphor?

More than a decorative flourish, these metaphors serve at least three purposes. First and foremost, they facilitate understanding. Explaining deep neural networks requires time and technical jargon, but talking about “brains” is easier for readers to digest.

Secondly, they create narrative drama. Journalism thrives on stories with protagonists, conflicts, and outcomes. Humanizing AI creates these, along with heroes and villains, and mentors and apprentices.

Thirdly, metaphors serve to formulate moral judgements. Only if the algorithm resembles a subject can it be held accountable, or given credit.

However, these same metaphors can hinder public deliberation. If AI “feels,” then it stands to reason that it should be regulated as citizens are. Equally, if it is seen as having intelligence superior to our own, it seems only natural that we should accept its authority.


How to talk about AI

Doing away with these metaphors would be impossible, nor would it be desirable. Figurative language is how human beings come to understand the unknown; the important thing is to use it critically.

While “humanizing” artificial intelligence in the press helps readers become familiar with complex technology, the more AI resembles us, the easier it is to project fears, hopes, and responsibilities onto servers and lines of code.

As this technology develops further, the task facing journalists—as well as their readers—will be to find a delicate balance between metaphor’s evocative power and the conceptual precision we need if we are to keep having informed debates about the future.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

