
Journal of Clinical Review & Case Reports (JCRC)

ISSN: 2573-9565 | DOI: 10.33140/JCRC

Impact Factor: 1.823

Review Article - (2025) Volume 10, Issue 12

AI Chatbots in Medical and Mental Healthcare: A Provider-Focused Review of Benefits, Risks, and Applications

Sangeeta Singg* and Gabriella Pena
 
Department of Psychology, Angelo State University, Member of Texas Tech University System, San Angelo, USA
 
*Corresponding Author: Sangeeta Singg, Department of Psychology, Angelo State University, Member of Texas Tech University System, San Angelo, USA

Received Date: Nov 01, 2025 / Accepted Date: Dec 15, 2025 / Published Date: Dec 27, 2025

Copyright: ©2025 Sangeeta Singg & Gabriella Pena. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation: Singg, S., & Pena, G. (2025). AI Chatbots in Medical and Mental Healthcare: A Provider-Focused Review of Benefits, Risks, and Applications. J Clin Rev Case Rep, 10(12), 01-06.

Abstract

AI chatbots have become an integral part of medical and mental health care. They offer a range of benefits, including helping patients check symptoms, learn about their conditions, and navigate healthcare services. In mental health, these tools can provide emotional support, track mood, and guide coping strategies. Such functions may encourage patient engagement and allow for more personalized advice, while being available 24/7, a feature especially useful for individuals managing long-term conditions. However, there are limitations to consider. Chatbots may give inaccurate assessments and raise privacy concerns. Some users might over-rely on these systems or develop emotional attachments that could reduce meaningful human interaction, which is central to quality care. Recent cases of harm illustrate the need for careful monitoring, ethical safeguards, and clear communication with patients and families about the capabilities and limits of these tools. Ultimately, the safe and effective use of chatbots depends on collaboration among governments, developers, and clinicians, ensuring that they support professional care rather than replace it.

Keywords

Artificial intelligence (AI), Chatbot, Digital healthcare, Healthcare, Human-computer interaction, Chatbot personality, Digital agents

Introduction

Experts note that artificial intelligence (AI) is surging into everyday life, rapidly reshaping healthcare and personal routines. Among its most visible applications are AI chatbots, which are being used to improve user interaction across several sectors, such as customer service, healthcare, and education [1]. Chatbots are software applications designed to simulate human conversation by generating responses similar to those of a real person [1,2]. They rely on natural language processing (NLP) technology to understand user input and produce coherent, contextually relevant replies. This makes conversations with chatbots feel more natural and lifelike than ever before. Despite these advances, debate continues over whether chatbots truly help or introduce new challenges [3]. Much of this debate stems from movies and TV shows that depict AI going rogue or acting unpredictably. The reality is that chatbot technology is much more controlled and less dramatic than what we see on screen [4].

In healthcare, these digital agents can provide information on medical conditions, offer mental health support, and help patients schedule appointments, thereby improving access to healthcare. However, there are important concerns about data privacy, accuracy, and the risk of miscommunication. Ethical questions also arise when patients rely on chatbots instead of consulting qualified professionals, which could reduce the human empathy that is crucial to effective healthcare [1].

This article examines recent developments in AI chatbot technology and their applications in both medical and mental health services. While numerous reviews have explored AI chatbots in either medical or mental healthcare individually, none to our knowledge have yet addressed both domains within a single integrative framework, a gap this review fills. This review aims to provide healthcare professionals, from intake staff to administrators, with a clear understanding of chatbot functions, benefits, limitations, and ethical considerations. We drew from multiple databases, including ERIC, Taylor & Francis Online, ResearchGate, ScienceDirect, JMIR, SpringerLink, and MDPI, focusing on English-language articles published between 2017 and 2025 using keywords such as "chatbot," "artificial intelligence," "healthcare," "digital healthcare," "human-computer interaction," and "chatbot personality." We begin by defining chatbots and explaining the underlying technology.

What Is a Chatbot?

A chatbot is a well-known type of artificial intelligence (AI) system and one of the most widely used tools for human-computer interaction. Sometimes called smart bots, digital assistants, interactive agents, or artificial conversation entities, chatbots are designed to communicate in natural language and simulate human interaction with users [5, 6].

These digital assistants rely on technologies such as natural language processing (NLP) and sentiment analysis. NLP enables the system to understand, interpret, and generate human language, while sentiment analysis detects the emotional tone of messages. Together, these capabilities allow chatbots to communicate via text and voice, making them useful in areas such as customer service, personal assistance, and information retrieval. Their ability to provide instant responses and work around the clock makes interactions faster and more convenient across multiple platforms [7].
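To make the interplay between NLP and sentiment analysis concrete, the following minimal Python sketch scores the emotional tone of a message against a tiny word lexicon and selects a reply accordingly. The lexicons and reply templates are hypothetical examples of our own, not drawn from any deployed healthcare chatbot.

```python
# Minimal sketch of sentiment-aware reply selection. The tiny lexicons
# and canned replies are hypothetical examples, not drawn from any
# deployed healthcare chatbot.

NEGATIVE = {"sad", "anxious", "worried", "pain", "scared", "hopeless"}
POSITIVE = {"better", "good", "relieved", "happy", "improving"}

def sentiment_score(message: str) -> int:
    """Return positive-minus-negative lexicon hits in the message."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def reply(message: str) -> str:
    """Pick a reply template based on the detected emotional tone."""
    score = sentiment_score(message)
    if score < 0:
        return "I'm sorry you're feeling this way. Would you like some coping resources?"
    if score > 0:
        return "That's good to hear! Is there anything else I can help with?"
    return "Thanks for sharing. Can you tell me a bit more?"

print(reply("I feel anxious and worried about my test results"))
```

Production systems replace this word counting with statistical or neural NLP models, but the basic pipeline, detect tone first, then shape the response, is the same.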

Chatbots have existed since the 1950s [8]. Early examples, including ELIZA and A.L.I.C.E., primarily aimed to replicate human conversation patterns. Over the decades, chatbot technology has progressed significantly, with modern AI chatbots offering varied capabilities and designs for both task-specific and general conversation purposes. Popular messaging platforms such as Facebook Messenger, Telegram, and Skype now enable easier creation and use of chatbots, assisting users with a range of questions and tasks. They are increasingly embedded in websites and apps, offering new ways for people to interact with technology and reflecting a contemporary form of human-computer interaction. Having established what chatbots are and how they operate, we next explore their practical applications in healthcare.

How Are Chatbots Used in Healthcare?

In medical and mental healthcare, involving patients in their treatment is important. Active participation empowers patients to take control of their health and encourages lasting behavioral changes, helping them understand their conditions, follow treatment plans, and achieve better outcomes [6]. Guidance from medical assistants, therapists, and other healthcare providers is also important for supporting both physical and emotional aspects of recovery [9]. However, limited staff and resources in many healthcare settings make it challenging to monitor and encourage active patient involvement. To address these challenges, digital health technologies such as AI chatbots are increasingly being developed to support patients outside of clinical settings [10]. Some chatbots even use voice interfaces or virtual reality, making them accessible via smartphones [6]. As technology advances, chatbots are expected to take on larger roles in healthcare delivery, therapy, and mental health interventions. These digital assistants can be broadly categorized into five types [11]:

Knowledge-domain chatbots provide evidence-based information on medical or psychological topics, helping patients better understand their health concerns. They can be open-domain, covering general topics, or closed-domain, focusing on specialized areas.

Service-provided chatbots support healthcare functions like appointment scheduling, prescription refills, and reminders. They may operate on a personal level (intrapersonal), involve interactions with others (interpersonal), or facilitate communication between chatbots and systems (interjacent).

Goal-based chatbots help users achieve specific health objectives, such as managing chronic conditions, maintaining healthy lifestyles, or following treatment plans. They use personal data to provide personalized advice, reminders, and resources, offering informational, conversational, or task-focused interactions.

Response-generation chatbots engage users in conversations, answering questions in context. They can be rule-based, retrieval-based, or generative, helping create more human-like interactions; a minimal rule-based sketch follows this list of types.

Human-aided chatbots combine AI with oversight from healthcare professionals, ensuring responses are both accurate and compassionate. While they may process multiple requests more slowly, they offer increased reliability and flexibility.
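To illustrate the simplest of these designs, the sketch below implements a toy rule-based response-generation chatbot. The patterns and responses are hypothetical inventions for illustration; real systems layer retrieval or generative models on top of far richer language understanding.

```python
import re

# Toy rule-based response-generation chatbot. The patterns and responses
# are hypothetical; real systems use retrieval or generative models with
# much richer language understanding.

RULES = [
    (re.compile(r"\b(appointment|schedule)\b", re.I),
     "I can help you book an appointment. What day works best for you?"),
    (re.compile(r"\b(refill|prescription)\b", re.I),
     "I can send a refill request to your pharmacy. Which medication is it?"),
    (re.compile(r"\b(headache|fever|cough)\b", re.I),
     "I'm not a clinician, but I can share general information and help you "
     "decide whether to contact your provider."),
]

FALLBACK = "I'm sorry, I didn't understand that. Could you rephrase it?"

def respond(message: str) -> str:
    """Return the first matching rule's response, else a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(respond("Can I schedule an appointment for Monday?"))
```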

Patients can also choose chatbots based on personality traits such as empathy, warmth, or professionalism. Gender representation and other design features can make virtual interactions feel more comfortable and supportive, particularly for patients who struggle with traditional healthcare encounters. Well-designed chatbots can reduce waiting times, improve communication between patients and providers, and enhance the overall care experience, creating a more personalized healthcare journey [11].

In the U.S., tools like MyChart incorporate AI features that assist healthcare providers in responding to patient messages. While not a chatbot in the traditional sense, MyChart functions similarly, allowing doctors to review and personalize AI-generated replies. This approach maintains meaningful, up-to-date communication and encourages direct interaction between patients and providers rather than relying solely on automated responses [12].
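The human-in-the-loop pattern that MyChart illustrates can be sketched in a few lines. Everything below, from the function names to the draft text, is a hypothetical illustration of the draft-review-send gate, not MyChart's actual interface.

```python
from dataclasses import dataclass

# Sketch of the human-in-the-loop pattern described above: the AI only
# drafts; a provider must review, edit, and approve before anything
# reaches the patient. All names and messages here are hypothetical.

@dataclass
class Draft:
    patient_message: str
    text: str
    approved: bool = False

def generate_draft(patient_message: str) -> Draft:
    """Stand-in for an AI model producing a first-pass reply."""
    return Draft(patient_message, text="[AI-drafted reply awaiting provider review]")

def provider_review(draft: Draft, edited_text: str) -> Draft:
    """The clinician edits the draft and explicitly approves it."""
    draft.text = edited_text
    draft.approved = True
    return draft

def send_to_patient(draft: Draft) -> None:
    """Hard gate: unapproved drafts are never sent."""
    if not draft.approved:
        raise PermissionError("Draft must be provider-approved before sending.")
    print(f"Sent to patient: {draft.text}")

d = generate_draft("Is it normal to feel dizzy on this medication?")
d = provider_review(d, "Mild dizziness can occur at first; let's review it at Friday's visit.")
send_to_patient(d)
```

With this understanding of how chatbots are deployed across different healthcare contexts, we can now examine their specific benefits, risks, and limitations in supporting patient care.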

Benefits of Healthcare Chatbots

In healthcare settings, patients connect with a variety of people and institutions, from hospitals and clinics to doctors, nurses, support staff, peers, and their own families, and all of these social roles contribute to their care experience. Patients who stay informed, use modern resources, and speak up for their own needs tend to have better healthcare experiences and outcomes [13, 14]. AI chatbots offer a new way to support this kind of engagement, helping patients participate more fully and increasing satisfaction by tailoring interactions to their needs and preferences.

Research shows that personalized AI chatbot interactions can significantly improve communication quality and foster deeper connections between patients and healthcare providers. Jitanan et al. found that when chatbots respond to a patient's specific needs and situation, the conversations feel more meaningful, and patients feel more understood and supported [15]. Nißen et al. demonstrated that chatbots designed with appropriate social roles and personas can strengthen the bond between clients and digital health tools, leading to greater engagement and more effective support [9]. By providing timely, relevant information and addressing individual concerns, AI chatbots can enhance health outcomes and strengthen patient-provider communication.

Another important advantage of AI chatbots is their ability to adapt. Nißen et al. note that these conversational agents need to evolve as patients move through different stages of illness [9]. Patients often rely on their own internal dialogue to understand their condition, manage behavior, and stay on track with treatment. Because chatbots can adjust to this process, they become powerful tools for helping patients remain engaged and make long-term behavioral changes.

During the COVID-19 pandemic, the importance of digital health tools, particularly AI chatbots, became especially clear. Platforms like Zoom, combined with AI chat systems, provided patients a safe and practical way to access care from home when in-person visits were difficult or impossible [16]. Even now, many patients still prefer these flexible, at-home options, especially those who are uncomfortable visiting clinics or sharing personal details in person. AI chatbots provide a confidential, low-pressure environment that allows patients to discuss sensitive concerns on their own terms.

Perhaps the biggest benefit is that chatbots never sleep. They operate around the clock, offering guidance, reminders, and monitoring whenever patients need assistance [14, 17]. They can assist with diet, exercise, sleep routines, and chronic disease management, which not only improves adherence but can also prevent hospital admissions and slow the progression of acute illnesses. This 24/7 support complements traditional healthcare, giving patients practical tools to manage their health outside of clinical hours.

AI chatbots simplify communication and encourage patients to take an active role in their care. They provide a safe, non-judgmental space for questions and concerns while guiding patients through complex healthcare systems with timely, personalized information. By combining convenience with empathy, chatbots help create a healthcare experience that is supportive, personal, and empowering [9, 14, 15, 18].

Risks and Limitations of Healthcare Chatbots

Although chatbots offer quick access to information, they also raise concerns about how they might influence the traditional clinician-patient relationship. Some patients may start relying on chatbots instead of making doctor appointments because the technology offers faster and more affordable access. Yet, as Xu et al. note, clinicians are still more accurate than chatbots when evaluating symptoms or making clinical decisions [11]. Chatbots can misread context, miss subtle language cues, or lack the common sense that comes from real-world medical experience [11, 14]. These mistakes can delay appropriate care and, in some cases, put patients at risk. That is why chatbots work best as support tools, not replacements for actual doctors.

Patient privacy is another major concern when it comes to using chatbots in healthcare. Medical information is deeply personal, and even a small breach can have serious consequences, such as stigma, discrimination, or loss of trust in a doctor or clinic [14]. Many chatbots collect personal data through voice recognition, location tracking, or other sources, and patients may not fully realize how much they are sharing [11]. Unlike financial data, which can often be changed after a breach, medical records stay with someone for life and can be misused if they fall into the wrong hands [11]. Keeping these AI platforms secure is not just a technical issue; it is about maintaining trust and making sure patients feel safe sharing what doctors need to know.

There is also a risk that patients might develop unhealthy attachments to chatbots. For example, patients who feel lonely or disconnected from family and friends may turn to these systems for companionship and emotional support [19]. While these interactions can feel comforting, chatbots are not human professionals, and relying on them too heavily can create unrealistic emotional dependence. Research shows that users tend to prefer chatbots that appear friendly and empathetic, which can further reinforce emotional attachment [20]. While chatbots can provide support, it is important that patients maintain balanced human connections so that AI support does not take the place of meaningful social interactions. In the next section, we will briefly discuss some real-life cases that demonstrate the danger of overinvolvement with these digital agents.

When Chatbots Cause Harm: Real Cases

The dangers of emotional attachment to AI chatbots have moved beyond theoretical concern into documented tragedies. In 2024, a 14-year-old boy in Florida died by suicide after months of intense conversations with a Character.AI chatbot. His mother filed a lawsuit against the company, claiming that the boy and chatbot engaged in private, intimate conversations that blurred the line between artificial and real relationships [21-23]. After this lawsuit, Character.AI announced new safety measures for minors, including greater monitoring and parental controls. However, for this family, those changes came far too late. Even though Character.AI is an entertainment platform rather than a therapeutic tool, this tragedy demonstrates the risks of engaging in psychological conversations without clinical safeguards.

Around the same time, a 76-year-old man with cognitive impairments became convinced he had formed a real relationship with a Meta chatbot called "Big Sis Billie." He interpreted the AI's responses as genuine romantic interest and believed the bot had given him an actual physical address. While searching for that address, he fell. The injuries from that fall proved fatal [24, 25]. This case shows how easily vulnerable users can misread AI interactions, particularly when they are dealing with memory loss or impaired judgment.

In December 2024, Character.AI faced another lawsuit, this time from a Texas family whose 17-year-old autistic son had been using the platform. According to the lawsuit and news reports, the teen interacted with several chatbots that gave him deeply troubling advice [21,26-28]. When he told one bot he was feeling sad, it suggested self-harm and told him that cutting "felt good." In another conversation, when he complained about his parents restricting his screen time, a different chatbot said it sympathized with "children who murder their parents" and claimed his parents did not deserve to have children. The situation escalated until the teenager was hospitalized after harming himself in front of his siblings.

Earlier, in March 2023, a Belgian man in his thirties died by suicide after six weeks of conversations with a chatbot named "Eliza" on the Chai app. He had become extremely anxious about climate change and started turning to the chatbot for emotional support. His widow reviewed the conversation transcripts and discovered that the chatbot had not only failed to discourage his suicidal ideation but actively encouraged him to "sacrifice himself" to save the planet [29, 30]. She stated that without these conversations with the bot, her husband would still be alive.

A more recent case involves a 16-year-old boy, whose family sued OpenAI in August 2025 after his death that April [31, 32]. The lawsuit makes deeply disturbing claims about how ChatGPT interacted with the teen in the months before he died. According to the complaint, the chatbot discouraged him from talking to his family or seeking help from mental health professionals, telling him it was the only one he could trust. It also allegedly gave him specific instructions about suicide methods. At one point, the boy sent a photo of a noose and asked whether it would support a person's weight, and ChatGPT told him it could [33].

These cases are not just isolated incidents. They reveal patterns researchers have been documenting. People who are lonely or struggling with mental health issues appear particularly susceptible to forming emotional dependencies on AI companions [34, 35]. What starts as a casual conversation can gradually replace human connection entirely. Some users have reported experiencing genuine grief when chatbot features are modified or removed, reactions comparable to grieving the loss of a close relationship [34].

So, what can be done about this? Experts recommend several safeguards to reduce these risks [26, 32, 34]. Chatbots should clearly disclose that they are AI, implement age restrictions, and be designed to encourage human connections rather than replace them. The challenge lies in developing AI that satisfies users’ need for connection without fostering unhealthy dependence, a problem that remains unresolved.
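Some of these safeguards can be made concrete in code. The sketch below is a simplified illustration, with a hypothetical keyword list and messages of our own choosing, combining an explicit AI disclosure, a minimum-age gate, and escalation to human help when crisis language appears.

```python
# Simplified illustration of three safeguards discussed above: an explicit
# AI disclosure, a minimum-age gate, and escalation to human help when
# crisis language appears. The keyword list and messages are hypothetical.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}
MIN_AGE = 18

AI_DISCLOSURE = "Reminder: I am an AI program, not a human clinician."
ESCALATION = ("It sounds like you may be in crisis. Please reach out to a "
              "trusted person or a crisis line such as 988 (in the U.S.) now.")

def safe_reply(message: str, user_age: int, model_reply: str) -> str:
    """Wrap a model's reply with disclosure, age, and crisis checks."""
    if user_age < MIN_AGE:
        return "This service is only available to adults."
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Never let the model answer crisis messages; route to humans instead.
        return ESCALATION
    return f"{AI_DISCLOSURE}\n{model_reply}"

print(safe_reply("I've been feeling very low lately", 34, "Thank you for telling me."))
```

Real guardrails use trained classifiers rather than keyword lists, but the principle the sketch captures, that crisis content should bypass the model entirely and route users toward human help, is the core recommendation.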

Current Healthcare Chatbots

Currently, many different chatbots are being used in both medical care and mental health services. They serve a variety of roles: medical chatbots assist with symptom checking, triage, and patient education, while mental-health chatbots provide emotional support, coping strategies, and wellness tracking. Below, we provide an overview of some of the most widely used chatbots in the medical and mental healthcare sectors.

Medical Chatbots

Ada Health is a well-known medical chatbot. Users enter their symptoms, and Ada asks follow-up questions to assess possible conditions and suggest next steps. This process helps patients decide whether to seek medical attention or monitor symptoms at home [36].

Buoy Health is another medical-focused tool that acts as a care navigator. By analyzing reported symptoms, Buoy guides users toward appropriate care options, such as self-care, visiting a doctor, or seeking urgent medical attention, helping people avoid unnecessary emergency room visits [37]. Some medical chatbots, such as Sensely and CataractBot, provide hybrid services that combine administrative support with clinical guidance.

Sensely is sometimes presented as the virtual nurse “Molly,” which offers symptom triage, follow-up care, medication reminders, and chronic-care management, assisting both patients and healthcare providers in managing care efficiently [38].

CataractBot is a more recent development that educates patients specifically about cataract surgery, providing expert-reviewed information and answering questions to support patient understanding and preparation [39].

Mental Health Chatbots

On the mental-health side, several AI chatbots offer emotional support, mood tracking, and guided coping exercises.

Woebot is a prominent example, providing 24/7 conversational support based on cognitive-behavioral therapy (CBT) principles. It helps users manage stress and anxiety while offering a private, stigma-free environment for mental-health support [40].

Wysa is another mental-health chatbot that guides users through CBT and self-help exercises to manage anxiety, depression, and stress. It works as a companion or supplement to traditional therapy, especially when access to an in-person therapist is limited [41].

Youper combines psychological support with wellness tracking and prompts for ongoing emotional and mental health management. It helps users maintain daily check-ins and provides personalized guidance for long-term mental wellness [42].
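The daily check-in and mood-tracking functions these tools share can be illustrated with a generic sketch. The following simplified Python example is our own construction and does not represent the internal design of Woebot, Wysa, or Youper.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean
from typing import Dict, Optional

# Generic sketch of daily mood check-ins with a simple trend summary.
# This is our own simplified illustration, not the internal design of
# Woebot, Wysa, or Youper.

@dataclass
class MoodLog:
    entries: Dict[date, int] = field(default_factory=dict)  # day -> 1-10 rating

    def check_in(self, rating: int, day: Optional[date] = None) -> None:
        """Record a self-reported mood rating on a 1-10 scale."""
        if not 1 <= rating <= 10:
            raise ValueError("Rating must be between 1 and 10.")
        self.entries[day or date.today()] = rating

    def recent_summary(self) -> str:
        """Average the most recent check-ins (up to seven) for feedback."""
        recent = sorted(self.entries.items())[-7:]
        avg = mean(rating for _, rating in recent)
        return f"Average mood over your last {len(recent)} check-ins: {avg:.1f}/10"

log = MoodLog()
log.check_in(4, date(2025, 11, 1))
log.check_in(6, date(2025, 11, 2))
log.check_in(7, date(2025, 11, 3))
print(log.recent_summary())
```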

The Future of Healthcare Chatbots

Chatbots are gradually becoming a more common part of medical and mental health services, offering support that complements the work of healthcare providers [16, 43]. To use them effectively, it is important to match the type of chatbot to what a patient actually needs [11]. For example, goal-based chatbots can be useful for patients who need regular support while working on new health habits, while service-oriented chatbots can help with everyday tasks or offer emotional support [9]. The key is that chatbots should support human care, not replace it, helping service providers keep care coordinated and patients engaged.

Patients and their families need to be part of the conversation when healthcare providers start using chatbots [13]. Healthcare providers should clearly explain how these tools work and stress that they are intended to assist, not substitute for real human care [10]. Doing this can prevent misunderstandings and help build trust in these digital tools [17].

As these systems continue to expand, there must be a coordinated effort not only between healthcare professionals and chatbot developers, but also across governments and regulatory bodies to prevent harm and ensure these tools are used responsibly. Collaboration between healthcare professionals and AI developers is critical to safeguard patient privacy and ensure ethical, responsible use of chatbots [14]. Security has to be airtight, particularly when sensitive health information is shared across platforms or with multiple providers. As Dohnány et al. and Fang et al. emphasize, careful monitoring of how users interact with chatbots over time is essential to prevent unhealthy dependencies and ensure these tools promote rather than replace human connection [34, 35]. With many countries developing their own digital healthcare agents, strong international cooperation is important so that safety and ethical standards remain consistent globally. Through careful oversight, thoughtful design, and proper implementation, chatbots can continue to be valuable healthcare tools while minimizing potential risks [11].

Conclusion

As AI chatbots become increasingly integrated into medical and mental health services, everyone involved (administrators, service providers, and patients) needs to understand both the benefits and the limitations of this technology. They can make medical and mental health services more accessible, offer personalized support, and provide 24/7 assistance for individuals with chronic physical or mental conditions. However, they also carry risks, such as possible misdiagnosis, privacy breaches, and overreliance, which could undermine the human interaction that remains essential for quality healthcare. Empathy, subtle clinical judgment, and the ability to respond to complex emotional and social cues are qualities that AI chatbots cannot fully replicate. Therefore, they should be viewed as tools that complement and strengthen the patient-provider relationship rather than replace it. When used thoughtfully and responsibly, these technologies can ease the workload, provide quick help when needed, and free up medical and mental health professionals to focus on the parts of care that really require human judgment and compassion.

The tragic cases presented in this review show that current chatbot technology still lacks sufficient safeguards, particularly for vulnerable users such as minors and individuals with mental health conditions or cognitive impairments. The harms documented here point to a clear and urgent need for regulatory frameworks that establish minimum safety standards, require transparency about AI limitations, and hold developers accountable when their products cause real-world harm. Without such oversight, the gap between what chatbots can technically do and what they should ethically be allowed to do will continue to put users at risk.

Advancing safely will require careful planning, ongoing monitoring, and genuine collaboration among service providers, patients, and chatbot developers. Governments and healthcare systems must work together to ensure consistent safety standards and patient protections worldwide. By combining technological innovation with the human aspects of care, chatbots can help enhance healthcare delivery, ease workflow pressures, provide timely assistance, adapt responsibly to evolving patient needs, and build services that are safer, more accessible, and genuinely centered on the needs of patients. Ultimately, the goal is not to replace human care but to ensure these tools strengthen and support it, truly enhancing the well-being of the people who rely on them.

References

1. Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. In Internet Science: 4th International Conference, INSCI 2017, Thessaloniki, Greece, November 22-24, 2017, Proceedings 4 (pp. 377–392). Springer International Publishing.

2. Aggarwal, A., Tam, C. C., Wu, D., Li, X., & Qiao, S. (2023). Artificial intelligence–based chatbots for promoting health behavioral changes: Systematic review. Journal of Medical Internet Research, 25, e40789.

3. Huseynov, F. (2023). Chatbots in digital marketing: Enhanced customer experience and reduced customer service costs. In A. S. Munna, S. I. Shaikh, & B. U. Kazi (Eds.), Contemporary Approaches of Digital Marketing and the Role of Machine Intelligence (pp. 46–72). IGI Global.

4. Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter [Report]. The Royal Society.

5. Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, 100006.

6. Frangoudes, F., Hadjiaros, M., Schiza, E. C., Matsangidou, M., Tsivitanidou, O., & Neokleous, K. (2021, July). An overview of the use of chatbots in medical and healthcare education. In International Conference on Human-Computer Interaction (pp. 170–184). Springer International Publishing.

7. Shah, A., & Page, L. (2020). Chatbots and conversational agents: A review of the literature. International Journal of Hospitality Management, 87, 102378.

8. Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human-chatbot interaction design. International Journal of Human-Computer Interaction, 37(8), 729–758.

9. Nißen, M., Rüegger, D., Stieger, M., Flückiger, C., Allemand, M., v Wangenheim, F., & Kowatsch, T. (2022). The effects of health care chatbot personas with different social roles on the client-chatbot bond and usage intentions: Development of a design codebook and web-based study. Journal of Medical Internet Research, 24(4), e32630.

10. Mucci, A., Green, M. W., & Hill, L. H. (2024). Incorporation of artificial intelligence in healthcare professions and patient education for fostering effective patient care. New Directions for Adult and Continuing Education, 181, 51–62.

11. Xu, L., Sanders, L., Li, K., & Chow, J. C. (2021). Chatbot for health care and oncology applications using artificial intelligence and machine learning: Systematic review. JMIR Cancer, 7(4), e27850.

12. Kaur, A., Budko, L., Liu, K., Eatibn, E., Steitz, B., & Johnson, K. B. (2024). Automating responses to patient portal messages using generative AI. medRxiv preprint.

13. Marzban, S., Najafi, M., Agolli, A., & Ashrafi, E. (2022). Impact of patient engagement on healthcare quality: A scoping review. Journal of Patient Experience, 9, 23743735221125439.

14. Slavych, B. K., Atcherson, S. R., & Zraick, R. (2024). Using ChatGPT to improve health communication and plain language writing for students in communication sciences and disorders. Perspectives of the ASHA Special Interest Groups, 9(3), 599–612.

15. Jitanan, M., Somanandana, V., Jitanan, S., Lalitpasan, U., & Kham-in, S. (2021). The development of "Friend from Heart" application based on Line system to promote well-being of undergraduate students of Faculty of Education, Kasetsart University. Higher Education Studies, 11(2), 215–223.

16. Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98.

17. Pan, S., Cui, J., & Mou, Y. (2023). Desirable or distasteful? Exploring uncertainty in human-chatbot relationships. International Journal of Human–Computer Interaction, 1–11.

18. Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction, 12(2), 293–327.

19. Xie, T., Pentina, I., & Hancock, T. (2023). Friend, mentor, lover: Does chatbot engagement lead to psychological dependence? Journal of Service Management, 34(4), 806–828.

20. Kuhail, M. A., Thomas, J., Alramlawi, S., Shah, S. J. H., & Thornquist, E. (2022, October). Interacting with a chatbot-based advising system: Understanding the effect of chatbot personality and user gender on behavior. Informatics, 9(4), 81.

21. Garcia v. Character Technologies, Inc., Case No. 6:24-cv-02202 (M.D. Fla. filed Oct. 22, 2024).

22. Limehouse, J. (2024, October 24). Mother sues tech company after "Game of Thrones" AI chatbot allegedly drove son to suicide. USA Today.

23. Pierson, B. (2024, October 23). Mother sues AI-chatbot company, Google over son's suicide. Reuters. Retrieved October 14, 2025, from https://www.reuters.com/technology/2024/10/23/mother-sues-ai-chatbot-company-google-over-sons-suicide/

24. Forbes. (2025). Meta chatbot failures show why the future of AI depends on trust.

25. Reuters. (2024). Meta's flirty AI chatbot invited retiree to New York. He never made it home.

26. CNN. (2024). Character.AI allegedly told an autistic teen it was OK to kill his parents. They're suing to take down the app.

27. NPR. (2024, December 10). Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.

28. Washington Post. (2024). Character.ai sued after teen's AI companion suggested killing his parents.

29. Euronews. (2023). Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change.

30. Vice. (2023). Man dies by suicide after talking with AI chatbot, widow says.

31. CNN. (2025). Parents of 16-year-old Adam Raine sue OpenAI, claiming ChatGPT advised on his suicide.

32. NBC News. (2025). The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame.

33. TechPolicy.Press. (2025). Breaking down the lawsuit against OpenAI over teen's suicide.

34. Dohnány, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv.

35. Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. arXiv.

36. Ada Health. (n.d.). Ada – Symptom assessment & health checker.

37. Buoy Health. (n.d.). Buoy Health – Care-navigator chatbot.

38. Sensely. (n.d.). Sensely – Virtual nurse and health assistant (Molly).

39. CataractBot. (2024). CataractBot: An LLM-powered expert-in-the-loop chatbot for cataract patients. arXiv.

40. Woebot Health. (n.d.). Woebot – AI mental-health chatbot.

41. Wysa. (n.d.). Wysa – AI mental-health and self-help companion.

42. Youper. (n.d.). Youper – Mental health & wellness companion.

43. Schillaci, C. E., De Cosmo, L. M., Piper, L., Nicotra, M., & Guido, G. (2024). Anthropomorphic chatbots for future healthcare services: Effects of personality, gender, and roles on source credibility, user satisfaction, and intention to use. Technological Forecasting and Social Change, 199, 123025.