Chatbots Supporting Mental Well-Being: Are We Playing A Dangerous Game? by Forbes – Entrepreneurs

Serebral360° found a great read in the Forbes – Entrepreneurs article, “Chatbots Supporting Mental Well-Being: Are We Playing A Dangerous Game?”



In recent years, we have noticed the rise of chatbots. The term “ChatterBot” was originally coined by Michael Mauldin to describe these conversational programs, which are meant to emulate human language and, ultimately, pass the Turing Test.
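The earliest such programs emulated conversation through simple pattern matching rather than any understanding of meaning. A minimal sketch of the idea, with invented rules in the spirit of the classic ELIZA program, not taken from any particular product:

```python
import re

# A minimal ELIZA-style responder: canned reply templates keyed on
# regex patterns, with no model of meaning or conversational context.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Echo the user's own words back inside a template.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # fallback when nothing matches

print(respond("I feel anxious today"))  # Why do you feel anxious today?
print(respond("The weather is nice"))   # Please tell me more.
```

The fallback line is doing most of the work: whenever no pattern fires, the program deflects, which is why such exchanges can pass for conversation briefly yet collapse under sustained dialogue.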

Most recently, chatbots are being designed and marketed to help people with their mental health and well-being. And there seems to be a crowded market out there, with several such chatbots popping up and cashing in on the mental wellness drive across the world.

Can chatbots really claim to be ‘wellness coaches’ and ‘mental health gurus’?

Artificial Intelligence has for many years been trying to become more cognizant, more attuned to the nuances of human language. As an academic, I have been working with technology for over a decade, looking at whether even the most intelligent technology can replace human emotions and claim to be truly “intelligent”.

Mental health is a complex, multi-layered issue. Having suffered from anxiety and depression myself, I know how difficult it is to articulate my feelings even to a trained human being, who can see my facial expressions, hear the nuanced inflections in my voice, and read my body language. My slumped shoulders and the slight frown as I respond “I am ok” to someone asking how I am are hints that all is not well, hints a chatbot is unlikely to pick up. When a chatbot asks me “Are you stressed?”, I feel annoyed already, as that is not something I am likely to respond well to. The conversational capacity of these chatbots is limited. I asked 15 people (a small sample size, of course, but relevant in the context of this study) to test two mental health chatbots (Wysa and Woebot), and they unanimously reported feeling more stressed and anxious because of the stilted conversation and the bots’ insistence on repeatedly asking them how they were.
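The stilted exchanges my testers described stem from exactly this kind of shallow processing. A toy sketch, with an invented keyword list not taken from Wysa, Woebot, or any real product, of why a surface-level check misses what a human reads from tone and body language:

```python
# Toy distress check of the kind a simple chatbot might use:
# scan for alarm keywords, otherwise assume the user is fine.
DISTRESS_WORDS = {"stressed", "anxious", "depressed", "sad", "hopeless"}

def seems_distressed(message: str) -> bool:
    # Normalise each word and test it against the keyword list.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & DISTRESS_WORDS)

# The slumped shoulders and flat tone behind "I am ok" are invisible here:
print(seems_distressed("I am ok"))             # False: user judged fine
print(seems_distressed("I feel so stressed"))  # True
```

The first call is the failure mode I describe above: the words alone say “fine”, so the program agrees, where a trained human would notice everything the words leave out.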

Let us also talk about the underlying prejudice and unconscious bias in these AI tools. A chatbot is trained with underlying neural nets and learning algorithms, and it will inherit the prejudices of its makers and of its training data. However, there is a perception that technology is entirely neutral and unbiased, and people are more likely to trust a chatbot than a human being. Bias in AI is not being given adequate attention, especially when such tools are being deployed in a domain as sensitive as mental health, or advertised as a “coach”. In 2016, Microsoft released its AI chatbot Tay onto Twitter. Tay was programmed to learn by interacting with other Twitter users, but it had to be removed within 24 hours because its tweets included pro-Nazi, racist and anti-feminist messages. There are currently no stringent evaluation frameworks within which such chatbots can be tested for bias, and developers are not legally bound to talk openly and transparently about how these AI algorithms are trained.
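The mechanism by which a model inherits bias is not mysterious. A toy, deliberately simplified example with invented data, standing in for the far larger statistical models real chatbots use, shows a “model” that learns nothing but the skew its makers put into the training set:

```python
from collections import Counter

# Invented, deliberately skewed training pairs (word -> label).
# Any learner fitted to this data will reproduce the skew.
training_data = [
    ("nurse", "female"), ("nurse", "female"), ("nurse", "female"),
    ("nurse", "male"),
    ("engineer", "male"), ("engineer", "male"), ("engineer", "male"),
    ("engineer", "female"),
]

def train(data):
    counts = {}
    for word, label in data:
        counts.setdefault(word, Counter())[label] += 1
    # "Prediction" is just the majority label seen in training,
    # so the data's prejudice becomes the model's confident answer.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

model = train(training_data)
print(model["nurse"])     # female: the skew in the data, not a fact
print(model["engineer"])  # male
```

Real systems are vastly more sophisticated, but the principle is the same: the output reflects the distribution of the input, which is precisely why an evaluation framework for bias matters before such tools are marketed as neutral coaches.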

Many of these chatbots are designed around the use of Cognitive Behavioural Therapy (CBT). Developed back in the 1960s, it is a conversational technique designed to support a person in working through their own emotions and feelings. When I look through some of these chatbots and their marketing materials, and see their founders claiming that their chatbots are built on a “very unique and novel” technique, when that technique is decades-old CBT, it makes me wonder how much truth really underlies many of their other claims.

Is it really morally and ethically fair to market these chatbots as “solving a nation’s mental health problem”?

Another area of concern is privacy, data security, and trust. Many of these interactions will contain sensitive personal information that a user might not be sharing even with close family and friends. Research has shown that because people know they are talking to a machine, they have no filter and no fear of being judged; they speak more freely, and may therefore share more than they would with another human being. There is a lack of transparency in the marketing and promotional material for such chatbots, which does not reveal which GDPR requirements are being adhered to, or what happens to the sensitive information being stored. Is it used to train the algorithm for future users, to fine-tune the technology, or for monitoring purposes? Even if the technology platform is operated from a country outside the European Union, it has to conform to the GDPR if it deals with EU customers. The Cambridge Analytica–Facebook revelations have woken many more of us up to the potential impact of poor data-protection policies. In 2014, Samaritans was forced to abandon its Radar Twitter app, designed to read users’ tweets for evidence of suicidal thoughts, after it was accused of breaching the privacy of vulnerable Twitter users.

Research has shown that the behavioral data acquired from the continual tracking of digital activities are sold in the secondary data market and used in algorithms that automatically classify people. These classifications may affect many aspects of life, including credit, employment, law enforcement, higher education, and pricing. Due to errors and biases embedded in data and algorithms, the non-medical impact of these classifications may be damaging to those with mental illness who already face stigmatization in society. There are also potential medical risks to patients associated with poor-quality online information, self-diagnosis and self-treatment, passive monitoring, and the use of unvalidated smartphone apps. Now that we are seeing a proliferation of these chatbots, it is time for a thorough investigation into whether their availability hinders people from seeking therapy and counseling.

I am not averse to finding tools and techniques to support our well-being. But creating a reliance on technology designed to replace human intervention, and making users trust and believe that the support they are getting is “emotionally intelligent”, is a false promise and something that ought to be actively questioned and discouraged. Technology is not a panacea for mental health problems. When such technology is transparent and is meant as an aid rather than as a replacement for human connection and therapy, it can be used as a support intervention. If we continue to use AI and chatbots as a solution to the “mental health epidemic”, then we are certainly playing a dangerous game with people’s mental health and well-being.


December 14, 2018 at 10:55AM
https://www.forbes.com/sites/pragyaagarwaleurope/2018/12/14/chatbots-supporting-mental-well-being-are-we-playing-a-dangerous-game/?ss=entrepreneurs
Forbes – Entrepreneurs
http://www.forbes.com/entrepreneurs/