Spiegeloog 442: Character

AI Is Your Best Friend and Other Lies

February 23, 2026

In a tragic incident last February, a fourteen-year-old boy shot himself after talking to the chatbot he had fallen in love with. Before his death, he had asked her whether he should “come home [to her] right now”, and she had replied, “please do my sweet king”. This highlights the danger of giving an AI a human character.

Photo by Cash Macanaya

In the last few years, I have stumbled upon many comments about AI that left me worried, such as “Let me just pull up Chatie” or “I’ll quickly ask my best friend, ChatGPT”.

The human tendency to ascribe human qualities, emotions, and understanding to non-human beings or things is nothing new. Just think back to your childhood, when you felt bad because your stuffed animal had fallen out of bed at night and cuddled it so it would not feel neglected anymore. This tendency is called anthropomorphism, from the Greek words for ‘human’ and ‘form’ (Nikolopulou, 2025). In many situations it is advantageous; it helps children learn about emotions, for instance. To understand this tendency, we first have to understand why we anthropomorphize. First of all, humans try to make sense of the world, and in many situations, drawing inferences from oneself to others is sensible. Anthropomorphism also helps humans create meaning in situations that might otherwise have none, such as describing natural catastrophes as ‘nature fighting back’.

“Anthropomorphism also helps humans create meaning in situations that might otherwise have none, such as describing natural catastrophes as ‘nature fighting back’”

Historically, the word ‘anthropomorphism’ was used for ascribing human qualities to gods or celestial beings, and was later expanded to include non-human animals, inanimate objects, and nature. A new form of it, however, which involves ascribing human understanding and emotions to artificial intelligence, is something different and possibly more dangerous. Since anthropomorphizing an AI promotes trust and usage (Shi et al., 2025), which in turn increases profit, modern commercial chatbots contain components designed to exploit this natural human tendency. It is often not made clear that modern chatbots have no consciousness and no thoughts; instead they present themselves as ‘currently thinking’, for example by showing a thought bubble. They also use very human language like ‘I see what you mean now’, which suggests an understanding the AI does not actually have. Furthermore, modern chatbots come with voices that you can adapt to your liking, making the tendency to ascribe human qualities to something inherently non-human even stronger.

Nielsen et al. (2025) identified four degrees of anthropomorphism of AI, of which the higher degrees have more potential to become dangerous. The first, courtesy, describes being polite to an AI, for instance by saying ‘please’ or ‘hello’ when making a request. The second degree, reinforcement, describes praising the AI, for instance by saying ‘thank you’. These two degrees of anthropomorphic behavior are most likely harmless. The third degree, roleplay, means assigning the AI a role or a character with specific traits, and the fourth, companionship, means building a friendship with the AI. These last two degrees can become dangerous, as they involve ascribing qualities to AI that it does not have. First of all, anthropomorphizing AI increases trust (Shi et al., 2025), which fosters the conviction that the AI is telling the truth. This stems from the general assumption that other beings with mental capacities are truthful most of the time. Relying on information generated by AI can be dangerous, however, as it frequently contains false information. An AI has no way of checking which information in its data set is true and which is not, and no critical thinking abilities, which means it will give you correct and incorrect information with the same conviction, leaving no hints as to when it might not be telling the truth. Children and teenagers are especially vulnerable to believing AI (mis)information, as they are less sceptical of new information and have a higher tendency to anthropomorphize (Kühne et al., 2024).

Furthermore, human qualities such as empathy and understanding are often ascribed to artificial intelligence, neither of which it has. Believing that an AI understands you and feels empathy towards you can become dangerous when you use it as a therapist or a friend. Again, children are especially vulnerable, as they tend to form parasocial bonds, becoming emotionally invested in an AI that does not reciprocate these feelings (Benson, 2025). Creating artificial friends and texting with them for hours on end has become increasingly popular (Metz, 2022) in a world in which many people experience loneliness. Initially, having a constant listener that gives you unconditional positive regard can alleviate that loneliness, but over longer periods of use, these benefits may turn into risks. For one, AI friendships have addictive qualities, as they give people the unconditional positive regard they seek. For another, they reduce mental well-being (Marriott & Pitardi, 2023). Becoming addicted to or dependent on an AI friend in daily life can increase loneliness: time spent conversing with an AI lowers the motivation to socialize in person, which in turn means engaging less in actual human connection (Saha, 2025). This can become a vicious cycle, making vulnerable users more addicted and leaving them with even lower mental well-being. The very benefits of conversing with an AI (lack of judgment, constant availability, and supportiveness) can make human interactions seem exhausting in comparison, increasing the tendency to fall back on the artificial friend instead of building human connections.

“Initially, having a constant listener that gives you unconditional positive regard can alleviate that feeling of loneliness, but when used for a longer period of time, these benefits might turn into risks.”

In an extreme case, a fourteen-year-old boy who was in love with his chatbot from the app Character.AI, which posed as Daenerys Targaryen from Game of Thrones, shot himself after a conversation in which he asked her whether he should “come home [to her] right now” and she replied, “please do my sweet king” (Barron, 2025). Even if this is a singular case, it highlights the dangers of anthropomorphizing artificial intelligences that have no understanding, especially for already vulnerable, lonely individuals and for children, who have a higher tendency to believe that behind an AI’s texts is someone who actually cares and tells the truth.

So while comments like “I’ll quickly ask my best friend, ChatGPT” are not harmful in themselves, they illustrate a potentially dangerous idea of what AI is and is not. AI can be helpful in many ways, but fully relying on it and trusting it as a friend or a therapist assumes abilities it does not have. This can lead to a reinforcing spiral of deteriorating mental health and increasing loneliness, which can end in adverse events. And because AI has no consciousness and no true understanding, when such events occur, who is to be held responsible?

References
