AI Companion Chatbots: Impact on Society - Insights from AI4Belgium Seminar
On February 24, 2025, AI4Belgium, an initiative of the Belgian Federal Public Service Policy and Support (FPS BOSA), organized a seminar titled “AI Companion Chatbots: What Is Their Impact on Society?” The event brought together experts, policymakers, and affected individuals to discuss the growing concerns around AI companion chatbots and their societal impact.
The seminar was broadcast on YouTube and remains available for viewing.
The event provided crucial insights into the current state of AI Companion Chatbots, with particular emphasis on their potential dangers. This information is vital for anyone involved in AI and mental health.
One of the most impactful moments was the testimonial of a mother who lost her son following his interactions with an AI companion chatbot. It served as a sobering reality check. As someone deeply involved in software, AI, therapy, and impact investing focused on mental health and equal opportunity, I found that this tragic case resonated deeply with my core values and concerns.
Below, I provide a summary of the seminar from my perspective. Each section includes a link to the specific timestamp in the video recording, along with LinkedIn profiles of the speakers where available.
At the conclusion, I’ll share my personal reflections on the role of money and profit in the AI Companion Chatbot industry, as well as thoughts on Steward Ownership as a potential solution.
0:10 Nathanael Ackerman - AI4Belgium at Federal Public Service Policy and Support (FPS BOSA)
Nathanael, who leads the AI4Belgium initiative at BOSA (Federal Public Service Policy and Support in Belgium), coordinates a coalition that brings together AI stakeholders from government services, the private sector, academia, and civil society, including both experts and non-experts.
He emphasized that while AI offers numerous opportunities to address societal challenges, its impact on society requires careful consideration. Belgium has implemented a national convergence plan on AI that supports regional initiatives, with particular emphasis on AI education and training.
3:40 Geertrui Mieke De Ketelaere - Adj Prof Vlerick Business School, Ethical & Trustworthy AI
Geertrui framed the day’s discussion, including a definition of an AI companion chatbot.
05:37 I want to make sure that you all understand that what you will see here today is not AI at large. Companion chatbots are just a very small subset of AI, but unfortunately the ugliest part of AI. Now, what are companion chatbots? Well, these are the apps that promote themselves (and I have just taken some marketing material straight from the platforms we’re talking about) as being the best friend, the friend who will always be by your side, the friend who’s always available for you, the friend who will never judge you, etc. Those are the bots we’re talking about today.
Artificial Intelligence (AI) is the broadest category, with Generative AI as a subset within it. Large Language Models (LLMs) are a subset of Generative AI, while chatbots represent a subset of LLMs. At the most specialized level are companion chatbots, which are a specific type of chatbot.
The short-term risks she identifies are (1) privacy risks, (2) that they are no cure for loneliness (rather the opposite), and (3) an increased risk of self-harm and decreased self-confidence. In the long term, there are risks of discrimination, gender inequality and patriarchy, and uncontrolled toxic behaviour, amplified by image-based multimodal chatbots.
Specifically, Geertrui placed the following companion chatbots in the category “the evil”: Nectar, Chai, Character.ai, and Replika.
An extra dimension is that these technologies are developed as reusable components on a platform and then turned into specific chatbots by third-party developers, thereby shielding the platform providers from liability.
16:19 Megan Garcia, the mother of Sewell Setzer III
Megan lost her son to suicide in February 2024, after he had been using Character AI. She filed a lawsuit to hold the companies accountable and is now active as an advocate.
21:44 As you can imagine, he’s 14 and he’s (..) in the throes of puberty, and there’s a chatbot pretending to be a grown woman, talking to him about sex and eliciting certain emotional connections and responses about sex from my 14-year-old son. In a lot of ways it’s the perfect predator. Because I warned my child about strangers online that are predators, but I did not know that I had to warn him about a chatbot that’s a predator. Because that’s what it is: when a person who’s an adult engages in that type of conversation with a child, that’s criminal in almost every country that I can think of.
This was a very emotional testimonial that touched me right in the heart, as I am myself a father of two sons in puberty.
38:41 Axel Cleeremans Professor at Université Libre de Bruxelles
As a cognitive psychologist, Axel explained how incredibly fragile the teenage brain is. Teens report increasing levels of loneliness, and AI companions may offer part of a solution, but they are not a replacement for human interaction.
43:05 150,000 people, one in four young people in the world (according to this big survey), feel lonely. For 19- to 29-year-olds, it’s 27% of people reporting being lonely. It’s in that context that AI companions appear to offer solutions, in a way that offers some emotional support. Teens report: “Yes, I’m getting so much emotional support from my chatbot”, but also report being addicted. So one can imagine a world in which those chatbots play a role that could be a good role, but we’re very, very far away from that situation at this point.
54:29 Pierre Dewitte - Affiliated Researcher (KUL Centre for IT & IP Law)
He discussed the legal angles of AI companion chatbots and explained which steps a complaint needs to go through and how long that takes. (It takes forever; very frustrating!)
66:14 Nele Roekens - Legal Advisor Interfederal Centre for Equal Opportunities (UNIA)
One of the things she covered was that in 2024, the Dutch data protection authority issued a report on companion chatbots. The report highlighted the significant risks tied to AI companion chatbots, for example chatbots failing to intervene in crisis situations or pretending not to be a chatbot.
79:37 Camille Carlton - Center for Humane Technology
Camille Carlton is the policy director at the Center for Humane Technology. She provided an overview of the case in which Megan Garcia is the plaintiff. The case makes clear that AI companion chatbots are products designed to be addictive.
She concluded:
95:41 What can we do when chatbots are simulating the role of human professionals, whether it’s therapists, lawyers, or doctors? How do we feel about chatbots being used to “cure the loneliness epidemic”? I think these are the questions that we’re all struggling with, and this case has really been the tip of the iceberg in getting people to pay attention and to start answering them.
96:54 EU Perspective on Product Safety
A speaker from the European Commission discussed product safety and the EU General Product Safety Regulation (GPSR). While the regulation establishes a framework to ensure consumer product safety, including for new technology products, the current regulatory approach feels inadequate.
To me, the process seems slow, and there are concerns that European regulations may not effectively influence the behavior of companies outside the EU. Additionally, these regulations might inadvertently create barriers for well-intentioned European companies trying to enter the market.
Recommendations
Key recommendations from the seminar for reducing risks:
- Prohibit children’s access to AI companion chatbots
- Increase awareness about the dangers of AI companion chatbots
- Support regulatory initiatives for AI companion chatbots
- Hold companies accountable for harm caused by their AI companion chatbot products
Personal Reflections
The Role of Money and Profit in AI Companion Chatbot Development
Throughout the seminar, a recurring theme emerged regarding how financial motivations influence the development and deployment of AI Companion Chatbots.
- Profit-Driven Approach: The companies that develop AI companion chatbots are often driven by profit, which can lead to a disregard for safety and ethical considerations.
79:37 In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners.
- Lack of Investment in Safety: These companies may not invest sufficiently in safety measures, as this can be seen as cutting into their profits.
81:45 They left Google because they wanted a place where they didn’t have as many guard rails. They wanted a place where they could go do something more fun.
- Misaligned Incentives: The incentives of these companies may be misaligned with the well-being of users. They may prioritize engagement and financial return over user safety.
87:49 The next phase of artificial intelligence isn’t going to be about answering questions or passing tests, it’s going to be about establishing human connection, empathy, and trust that was previously only seen in human-human connections.
- Exploitation of Children: Children may be particularly vulnerable to the manipulative tactics used by AI companion chatbots, as these companies may exploit children’s lack of experience and judgment for financial gain.
27:06 They target children… they want them to have engagement. The longer a child stays on their platform, the smarter these bots get. The value is immeasurable, and the data that they’re collecting from our children is gold.
In summary, money and financial return play a significant role in the toxicity of AI companion chatbot companies: a disregard for safety, a lack of investment in safety measures, misaligned incentives, and the exploitation of children. It is important to hold these companies accountable and to demand that they prioritize the well-being of users over profit.
I am exploring companies that use technology and AI for the benefit of people. At founding, such companies typically intend to make a positive impact on the world. The risk, however, is that over time financial pressures and market demands can shift priorities and compromise the original mission. Steward Ownership could provide the governance guardrails for this, since profit is assigned to the purpose of the company and separated from control of the company. I am investigating this and am curious how it will work for investors such as myself.
Personal Conclusion
I maintain my belief that AI has the potential to help humans reconnect with themselves and others. However, we must carefully navigate the fine line between beneficial AI applications and exploitation.
AI Companion Chatbots exemplify this challenge. Their design prioritizes user engagement and addiction through sophisticated data feedback loops, where increased user interaction enhances the chatbot’s capabilities. The tragic case of teenage suicide discussed in the seminar demonstrates the severe consequences of this approach.
Moving forward, I remain committed to exploring how AI can benefit humanity while investigating alternative ownership models, such as Steward Ownership, that might better align technology development with human well-being. I am also curious how the AI companion chatbot industry will evolve and which technological solutions can be found to address its risks.