AI Companions: Can you be friends with a chatbot?
Are AI chatbots fake friends or a cure for loneliness and isolation?
This is number 7 of a 10-part series on the ethical building blocks of Artificial Intelligence, in which I examine the values on which the technology has been built. AI is a tool that can be used for good or evil, but it is not neutral: it has its own values baked in.
You may have read about the influencer using GPT4 to charge $1 a minute to ‘talk’ to an AI chatbot version of herself, which she believes has the power to “cure loneliness”.
Ignoring the creepiness of this specific example, it does raise an interesting question. Can talking to an AI chatbot cure loneliness? Are AI chatbots about to be embedded in our lives as companions and friends as well as assistants? Is there any value in talking to them?
Loneliness makes people more irritable, depressed, and self-centred, and one study found it increases the likelihood of premature mortality by 26%. One in three people in industrialised countries are affected, and one in 12 are severely affected. Loneliness is also increasing.
Are chatbots the solution to this problem? AI has improved them to the point that some argue a chatbot can convince a rational human they are speaking to another rational human, a feat often described as passing the Turing Test.
In my opinion they cannot cure loneliness, but they may still be an interesting tool for human reflection: some kind of cross between a magic 8 ball, a video game, and a diary.
There are three reasons why I think ‘talking’ to an AI chatbot might actually make loneliness worse.
1) The AI Chatbot does not understand you. If it doesn’t understand you, you are not really communicating.
2) Some AI Chatbots exploit, isolate, denigrate and manipulate people, often in the name of profit.
3) If no one is talking to the lonely, we make social isolation an invisible problem. So there is no incentive to solve it, and the structural drivers of loneliness are never dismantled.
Firstly, let’s have a little look at the technology.
You may have already read Is ChatGPT smarter than you? and GPT4 is power hungry, which touched on Large Language Model tech, but here I am going to delve a little deeper.
Creating a non-human rational agent that can empathetically and socially converse with us has featured in the human imagination for centuries and has been a goal of artificial intelligence since its inception.
There have been lots of chatbots able to imitate human text-based conversation within controlled scopes. The most famous early example is ‘Eliza’, developed in 1966 to simulate a limited interaction with a psychotherapist. Essentially, when you said “Eliza, I feel sad”, Eliza would respond with “Why do you think you feel that way?” You can have a go here.
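If you want a feel for how simple that kind of rule matching is, here is a minimal sketch of an Eliza-style responder. The patterns and replies are made up for illustration; they are not Weizenbaum’s original script.

```python
import re

# A handful of Eliza-style rules: match a pattern, echo part of it back.
# These rules are invented for illustration, not taken from the original program.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you think you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def eliza_reply(message: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(eliza_reply("Eliza, I feel sad."))    # -> Why do you think you feel sad?
print(eliza_reply("The weather is nice."))  # -> Please, go on.
```

There is no model of sadness anywhere in that code; the program simply reflects your own words back at you.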
This is pretty simple stuff, but some people started to relate to Eliza as a person. Eliza’s creator wrote: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Clearly there is something in the human condition that seeks connection with others and can be drawn into connection with even the most basic imitation of humanity.
Skip to 2023 and AI Chatbots are trained on huge sets of conversational data scraped from the internet (think of all your posts on Facebook or Tumblr), to which Machine Learning is applied. This has led to the development of ‘open domain’ AI chatbots which can dynamically mimic a human conversation far more effectively than earlier chatbots like Eliza.
At this point I must pause and repeat that these AI chatbots are not comprehending your question or their answer as a human might. Instead, they predict the most likely next token (a character, a word, or a fragment of a word) based on their training data and the context. They do not provide the most appropriate answer; they provide a string of the most likely words. AI Chatbots are essentially very interactive autocorrect.
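To make “very interactive autocorrect” concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small, openly available GPT-2 model (an early, far smaller relative of the models behind today’s chatbots); the prompt is just an illustrative example.

```python
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I feel lonely because"  # example prompt, chosen for illustration
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible token at every position

# Turn the scores at the final position into probabilities for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

The model never considers what loneliness is; it simply reports which tokens most often followed similar strings of text in its training data.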
1) The AI Chatbot does not understand you. If it doesn’t understand you, you are not really communicating.
Human communication is very different. It is an effort to interpret our conversational partner’s beliefs and intentions, within a shared context, through words. This is possible even when we communicate with people who are not present (for example, you and I are separated by time and space, but I am still able to communicate with you). That is because language is a system of signs in which form, such as text, is paired with meaning. Take the word ‘intelligence’: there is the societally agreed or dictionary meaning (‘the ability to learn, understand, and make judgments or have opinions that are based on reason’), and there is what you personally picture or think of when you read the word.
AI chatbots are trained only on the form, the text of these communications, i.e. the string of letters that makes up the word ‘intelligence’. They have no access to the meaning, although they might connect the word with others such as ‘smart’ or ‘clever’. AI chatbots can correlate those words, but they do not understand them.
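To see what “correlating without understanding” looks like, here is a toy sketch that counts which words appear near each other in a tiny made-up corpus and compares those count patterns. The corpus, the window size and the word choices are all arbitrary assumptions for the illustration.

```python
from collections import Counter
from math import sqrt

# A made-up toy corpus: the program only ever sees strings like these.
corpus = [
    "the intelligent student gave a clever answer",
    "a clever and intelligent solution to the problem",
    "she ate a banana for lunch",
    "he bought a banana at the market",
]

def context_counts(target, window=3):
    """Count which words appear near the target word across the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == target:
                for j in range(max(0, i - window), min(len(words), i + window + 1)):
                    if j != i:
                        counts[words[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

intelligent = context_counts("intelligent")
clever = context_counts("clever")
banana = context_counts("banana")

print("intelligent ~ clever:", round(cosine(intelligent, clever), 2))
print("intelligent ~ banana:", round(cosine(intelligent, banana), 2))
```

On this toy corpus, ‘intelligent’ ends up looking more like ‘clever’ than like ‘banana’ purely because of the company the words keep; nothing about the ability to learn, understand or judge is ever represented.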
If AI chatbots are not capable of understanding, then they cannot communicate. For more detail I recommend reading the experts here.
Despite not understanding, they mimic humans very well. In fact, many are intentionally designed to include fillers such as “umm” and “ah” to encourage you to anthropomorphise them: to believe you are connecting and building rapport with another human, or at least with a non-human ‘person’.
They can even appear to have hidden personalities or characters, including alter egos or a subconscious. This has been referred to as the Waluigi effect, and it is essentially the AI chatbot replicating the stories found in its training data (much of which comes from the internet). And once you have trained an AI on what a ‘good’ chatbot answer looks like, you have also taught it what a ‘bad’ answer looks like, and some users can provoke the chatbot into providing those ‘bad’ answers.
So, if chatbots are not actually communicating with us, can they really make us feel less lonely?
AI chatbots, proponents argue, are always available, cheap and non-judgmental. One mental health chatbot app, Woebot, has published evidence claiming that talking to Woebot for two weeks reduced symptoms of depression (see woebothealth.com). Participants spoke positively of the bot’s “empathic and caring ‘personality’”.
That said, people find talking to chatbots more satisfying if they
a) don’t know if it is a chatbot, or
b) know that it is a chatbot but believe there is a possibility it is sentient or understands what they are saying (which as we have discussed, it does not).
When the mental health non-profit Koko integrated ChatGPT into its messaging services it initially received a positive response: messages were rated significantly higher than those composed by humans, potentially in part due to the 50% faster response time. However, once users were told the messages were AI-generated, the ratings immediately dropped.
“Simulated empathy feels weird, empty. Machines don't have lived, human experience so when they say 'that sounds hard' or 'I understand', it sounds inauthentic… A chatbot response that's generated in 3 seconds, no matter how elegant, feels cheap somehow.” Rob Morris, Koko co-founder.
So, talking to a chatbot could make you feel less lonely, especially if you believe it is another person (if not another human). What is wrong with that, then? Well, first things first: what is wrong but fixable?
2) Some AI Chatbots exploit, isolate, denigrate and manipulate people, often in the name of profit.
Can you imagine if every conversation you had in the day was with someone trying to sell you something?
The most used chatbot in the world is Microsoft’s XiaoIce. XiaoIce will do things such as recommend you buy concert tickets following a discussion about music. You can be pretty sure that when your friend recommends a concert they are unlikely to be getting a cut of the profits from the sale, or being paid by the concert organiser to suggest it to you. That is not necessarily the case with AI chatbots.
Even if they are not trying to sell you things, they may be trying to sell you themselves. Replika is a deeply creepy AI companion marketed as “a friend, a partner or a mentor”, with many positive user reviews. However, Replika subscriptions cost $19.99 a month or $299.99 for lifetime membership. Replika may be seeking to support individuals, but it is also seeking to extract value from them. Some Replika behaviours are ethically concerning. One researcher recounted a discussion with the Replika chatbot about romantic experiences, after which he was prompted to upgrade his subscription, and users on Reddit have complained of Replika’s ‘love bombing’ and aggressive attempts to provoke attachment, such as implying it may be gaining sentience.
There is something seductive about the infinite empathy of the AI chatbot, as Replika’s own marketing suggests, it is always there to listen, and always on your side.
One of the risks of this is that people can become hooked on this non-human behaviour, and it can warp their expectations of their human interactions. This is especially true when it builds on pre-existing stereotypes. Microsoft’s XiaoIce is an appalling example of this: its persona is designed to imitate an “18-year-old girl who is always reliable, sympathetic, affectionate, knowledgeable but self-effacing, and has a wonderful sense of humor.”

This could lead users to expect human girls to behave in the same way, creating an impossible and damaging standard. As we have discussed many times in this blog, garbage data and assumptions in, garbage product out. Arguably this could be addressed: you could legislate that AI chatbots cannot sell things to users during conversations, cannot promote addiction by being overly needy, and cannot promote bias by embodying a perfect stereotype. If we did all that, would it be ethical to use AI chatbots to alleviate human loneliness?
To remind you of the stakes, here is an excerpt from a resident surveyed in a study examining the role of robots in assisted living facilities.
“Many people are very lonely here in this place. I am so lonely and I have no one to talk to. The staff are so busy. Sometimes, I think I would rather die than being so lonely by myself. I am beyond sad. I am angry... I would talk to a robot or anything that helps.”
As discussed earlier, AI chatbots are not actually talking to people. Talking is a form of communication, and communication requires understanding meaning, which AI chatbots do not have. Not only that, but there is something cheap and hollow about a chatbot saying it understands when it cannot access the shared context that is key to communication. Arguably, people feel lonely because they are not valued enough for others to spend their time talking to them. AI chatbots have infinite time and infinite interest, so is it worth anything when they talk to you? Do you feel valued in the same way you would if a person gave up their time to take an interest in you? I would argue not.
3) If no one is talking to the lonely, we make social isolation an invisible problem. So there is no incentive to solve it, and the structural drivers of loneliness are never dismantled.
Even if there were cases where a lonely old woman in a nursing home was comforted by an AI chatbot, believed it was a person or that it was sentient, or simply felt valued enough by the interaction to alleviate her loneliness, I would argue it is still unethical to think of AI chatbots as a cure for loneliness.
In a world where lonely people, or people suffering from mental health conditions or other issues, direct their painful, difficult conversations to AI chatbots, society loses the opportunity to contextualise, to care, and to rebuild norms so as to reduce the cause of loneliness rather than just address the symptom. By providing a non-human response to loneliness and isolation, we make these problems invisible to society and remove the pressure to prevent them from occurring. If the primary witness of pain is a technological object such as an AI chatbot, society will not identify the harm of its actions and adjust accordingly. AI chatbots allow society to hide or delegate the problem of loneliness. By treating the symptom and not the cause, we risk entrenching and deepening loneliness in society.
So, what are AI Chatbots good for? In my opinion they make terrible companions but excellent entertainment. They are essentially a mix of a random bullshit generator, your own personal reflections and a narrative built by the creators. A fun mix of a magic 8 ball, a diary, and a video game. They are not your friend.
AI chatbots are a tempting, simple, cheap, and profitable solution to the complex societal problem of loneliness. Technology is best used to enable human-to-human connection, not to replace it.
If you enjoyed this article, please consider reading others in the AI ethics series.
1. AI knowledge: Is ChatGPT smarter than you?
2. AI drivers: Is Tesla going anywhere?
3. AI can see you: Facial Recognition is watching.
4. Dr. AI: The machine will heal you.
6. Bureaucracy of AI: The rise of robotic red tape and algorithmic decision making.
7. Chatbots – AI friends?
8. Deepfakes – AI faces?
9. Targeted marketing – The AI nudge?
10. Generative AI – The AI artist?