=== Mental health ===
Chatbots are an emerging technology in the field of mental health. Their use may encourage people to seek advice on mental health matters as a way to avoid the stigma that can come from sharing such matters with other people.<ref name=":0">{{Cite journal |last1=Chin |first1=Hyojin |last2=Song |first2=Hyeonho |last3=Baek |first3=Gumhee |last4=Shin |first4=Mingi |last5=Jung |first5=Chani |last6=Cha |first6=Meeyoung |last7=Choi |first7=Junghoi |last8=Cha |first8=Chiyoung |date=2023-10-20 |title=The Potential of Chatbots for Emotional Support and Promoting Mental Well-Being in Different Cultures: Mixed Methods Study |journal=Journal of Medical Internet Research |language=en |volume=25 |pages=e51712 |doi=10.2196/51712 |doi-access=free |pmid=37862063 |pmc=10625083 |issn=1438-8871 }}</ref> Chatbots can offer a sense of privacy and anonymity when sharing sensitive information, as well as a space free of judgment.<ref name=":0" /> For example, one study comparing social media and AI chatbots as outlets for expressing mental health concerns online found that users were more willing to share their darker and more depressive emotions with the chatbot.<ref name=":0" /> Findings suggest that chatbots have potential in scenarios where it is difficult for users to reach out to family or friends for support, and that they can give young people "various types of social support such as appraisal, informational, emotional, and instrumental support".<ref name=":0" /> Studies have found that chatbots can assist users in managing conditions such as depression and anxiety; examples of chatbots that serve this function include Woebot, Wysa, Vivibot, and Tess.<ref name=":0" />

When mental health chatbots interact with users, they tend to follow one of three conversation flows: guided, semi-guided, or open-ended conversation.<ref name=":1">{{Cite journal |last1=Haque |first1=M D Romael |last2=Rubya |first2=Sabirat |date=2023-05-22 |title=An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews |journal=JMIR mHealth and uHealth |language=en |volume=11 |pages=e44838 |doi=10.2196/44838 |doi-access=free |pmid=37213181 |pmc=10242473 |issn=2291-5222 }}</ref> The most popular, guided conversation, "only allows the users to communicate with the chatbot with predefined responses from the chatbot. It does not allow any form of open input from the users".<ref name=":1" /> A study of the methods employed by various mental health chatbots found that most used a form of [[Cognitive behavioral therapy|cognitive behavioral therapy]] with the user.<ref name=":1" />

Research has identified potential barriers to the use of chatbots for mental health.<ref name=":2">{{Cite journal |last1=Coghlan |first1=Simon |last2=Leins |first2=Kobi |last3=Sheldrick |first3=Susie |last4=Cheong |first4=Marc |last5=Gooding |first5=Piers |last6=D'Alfonso |first6=Simon |date=January 2023 |title=To chat or bot to chat: Ethical issues with using chatbots in mental health |journal=Digital Health |language=en |volume=9 |doi=10.1177/20552076231183542 |pmid=37377565 |pmc=10291862 |issn=2055-2076 }}</ref> There are ongoing privacy concerns about sharing users' personal data in chat logs with chatbots.<ref name=":2" /> People of lower socioeconomic status have also shown less willingness to adopt chatbot interactions as a meaningful way to improve their mental health.<ref name=":2" /> Although chatbots may be capable of detecting simple human emotions in interactions with users, they cannot replicate the level of empathy that human therapists provide.<ref name=":2" /> Because chatbots are built on language models trained on numerous datasets, the issue of [[algorithmic bias]] exists: biases acquired during training can surface against individuals of certain backgrounds and may result in incorrect information being conveyed.<ref name=":2" /> There is also little research on how these interactions help users in their everyday lives,<ref name=":1" /> and there are concerns about the safety of users when interacting with such chatbots.<ref name=":1" /> When such technologies are improved and advanced, their effect on humans is often not a priority, which can lead to "unintended negative consequences, such as biases, inadequate and failed responses, and privacy issues".<ref name=":1" /> Other risks of relying on chatbots for mental health include increased isolation, a lack of support in times of crisis, and a general lack of deep understanding of mental health.<ref name=":1" /> Studies have indicated that mental health chatbots are prone to recommending medical solutions to users and to encouraging heavy reliance on the chatbot itself.<ref name=":1" />