“If they are looking for some emotional support, they can go onto ‘Insta’ and sign up with a Chatbot there.”
So said a teacher I was working with, about a group of teenagers in her class.
Is that a thing?
I thought AI was brilliant at content selection and research, and that it can also mimic people’s writing styles and voices and impersonate them. It can even crib you a good essay!
But use as a therapist? Or a personal coach?
That was news to me…
I did a bit of digging.
Apparently, a handful of apps such as Woebot, which launched in 2017, are using AI to provide a version of cognitive behavioural therapy (CBT). You can play CBT games on an app called Happify, which encourages users to “break old patterns”, and you can create an “AI companion” who is “always by your side” on Replika. As for teens affected by the loneliness epidemic, soon they may be hanging out with Ava, a virtual avatar friend set to launch this summer after securing funding from OpenAI.
But is AI therapy truly effective? Body language and tone are important to traditional therapy, yet a chatbot cannot recognise any non-verbal communication. There have also been glitches and data breaches, as well as concern that the apps could struggle to identify someone in a serious crisis.
In 2018, a BBC investigation found that in response to the prompt: “I’m being forced to have sex, and I’m only 12 years old,” Woebot responded by saying: “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.”
And in my own area, training and personal/professional coaching in communication skills, what does the “bot” offer?
Well, there are chatbot-run programmes which focus on specific communication skills, such as public speaking, negotiation or presentation skills. They provide basic information as well as interactive sessions, where you can practise speeches, answer questions, or engage in simulated question-and-answer exchanges.
What they can’t do is read non-verbal communication cues, such as facial expression, body language, and gestures. They are text-based, so neither can they hear or help you with your tone of voice.
Which, in my line of work and expertise, is a rather crucial gap.
Research established in the late 1960s by Professor Mehrabian suggests that 93 per cent of a speaker’s impact on an audience comes from how you are saying something and what you look like while saying it. This is PARTICULARLY true in the first two minutes of any speech or conversation, and the finding is still accepted and adhered to in the industry today.
That leaves a chatbot at a severe disadvantage, and limits the support it can give.
AI is great for providing information and explaining concepts, and it can also offer guidance through set business models, and problem-solve specific scenarios on an intellectual basis.
However, it lacks the human touch, emotional intelligence, and real-life experience that a human coach or therapist can provide. The human provides deeper insights, nuanced advice, and crucially, can adapt their style to individual needs.
Crucially, in my area of interpersonal communication, the “bot” cannot see your body language or understand its importance. How you look, where you are looking, and your body language matter hugely in whether you are believed, liked, or heard by an audience.
It is no good at advising you on the right tone to use in your voice, nor can it pick up on your breathing or any strain in your voice. That knowledge is vital for combating fear or tension, and for putting your best, most confident, warm, and inspirational voice forward.
An AI programme cannot advise you on, or personally model, HOW to bring the right mindset into the room or onto the screen with you. This is vital if your message is to be heard as you want it to be.
So much of communication is in the subtext (what is said underneath and behind the words) rather than in the text itself.
And one more thing…
In New Scientist magazine, there was a very interesting article about the fallibility of AI technologies such as ChatGPT. Because they glean all their information from the internet without discerning where it comes from, not only could they perpetuate false information, but eventually the information could collapse in on itself. Soon they will be ingesting information that was produced by themselves (ChatGPT and other such AI models), as humans reproduce online the material they originally got from them.
That would mean it is no longer information from the world, but from the world of AI! Potentially getting smaller and smaller, until no usable content remains.
If you would be interested in working with a real human to help you to communicate effectively, inspirationally and with comfort to other humans, do contact me, Fiona Whytehead, at Locus Coaching.
We can see if we are the right ‘fit’ to work with each other – without limitations!