AI (artificial intelligence) and the future of support
As technology progresses, there is an increasing amount of talk about the use of AI (artificial intelligence) and how it might affect support services in the future.
At beyondblue, we're currently in the early stages of researching how we might use AI to tailor content for people seeking information from us. Most importantly, though, we want to know how you, our members and regular visitors, feel about it.
There is a wide range of possible applications for AI. At one end of the spectrum is the idea of using AI to help triage people seeking assistance; at the other end is the idea of AI actually providing support and having conversations with people.
This is an open thread to discuss the implications of AI and how it might work on the beyondblue site. How do you think it could make your experience here better? Do you have any concerns? Have you had experience with using other services where AI is a component?
Below is a link to a recent summary of work being done to incorporate AI into health services, to help jump-start the discussion:
Confronting Dr Robot: Creating a people-powered future for AI in health
Hi Chris,
I think when it comes to the use of AI, it could be useful for something such as greeting new members to a forum. But when it comes to actually talking about mental health issues, beyond providing facts and information such as help numbers, there would be a lack of empathy, which I feel is fundamental when talking to people with mental illness.
I haven't had much experience with sites that use AI personally, so it would be interesting to read others' opinions.
Kind regards,
Lici
Hi Chris and Everyone,
I have read the information you supplied in the link, Chris, and I can see both benefits and disadvantages.
One advantage: I doubt the AI will have a time limit on the amount of assistance it can provide. It won't need a loo break or feel overwhelmed by all the emotional information it is taking in.
Would we refer to AI as an 'it' or just AI?
In some cases, people may benefit greatly from a stock-standard answer supplying general information, help and support.
It may work as a type of backup or as a first port of call; combined with "real people" it might work.
I think there may be some people who will embrace this technology and others who will want nothing to do with it for a variety of reasons.
It is good to have this opportunity to discuss the possibility of AI being used.
Cheers all from Dools
Hi,
I've had a little experience using and helping to implement AI for a website called 7 Cups. It's a free website connecting people on the internet to listeners for one-on-one chats.
'Noni' is the AI robot and it's a bit odd to talk to her.
On one hand, there are helpful, kind responses like 'tell me how you are feeling', 'tell me more' and 'oh, that doesn't sound pleasant'. I know that some people like Noni because she's not a person - you can literally say what you'd like to say without any chance of being judged.
On the other hand though, Noni (like most AIs) struggles to understand humour and sarcasm even with the best of intentions - and, of course, empathy, which is hugely needed when helping people and trying to understand where they are at.
The other difficulty is that conversations can escalate, and Noni can't tell the difference between 'that was a tough day at work' and 'this triggered my PTSD and I'm having suicidal thoughts'. This can be unhelpful at best and dangerous at worst. I even tried a conversation to see how she'd respond and managed to get responses of 'Do it! I believe in you', which is terrifying.
And finally - I don't really think there's a way to make real progress and get treatment with AI. Even with the best technology, an AI can never really gauge when you or a client might be ready to start creating change, talking about more difficult things, or making steps and goals for the future. I'm sure technology will try anyway.
Hope this helps!
Hmm...
I read the positives.
1. I can see that AI is better than no support at all.
2. I can see that AI would allow for options in remote or rural areas that currently have none.
3. I understand some people would find AI easier to talk to. No judgement. No bias. None of the human weaknesses.
But I can't shake the gut feelings...
1. I wouldn't use an AI option myself.
2. I feel anxious about AI replacing people. There is a big push everywhere towards saving money. Why hire four humans when one AI service can do their work for less?
3. I feel like this increases the divide between those with money and those without. If you have money, you can pay for a person to listen. If you don't, you get whatever you can. It makes me uncomfortable. So if I'm broke, I'm not good enough for a person to bother talking to?
Hmm. I would love to read how others feel about BB considering Artificial Intelligence.
Nat
As a Champ, I started out with good writing skills and a load of lived experience to respond to posters. 19 months later, my skills have increased tremendously, which adds to my ability to assess situations, people and their needs - especially reading between the lines by using perception, intuition and personalised questions.
Also, I've learned from my mistakes and those of others. I don't see an average AI respondent program 'learning' unless it's a state-of-the-art program like something from NASA.
I'd question Beyondblue's intent in considering this option. Replacing people might be cost-effective and keep up with tech advances, but we can read generic information in the links below. Why would anyone engage with 'something' that replaces 'me'?
Sez
Arrgh Romantic Thief... that sounds absolutely horrible! Can you imagine if that was a real conversation with a person experiencing suicidal thoughts! Terrifying indeed.
I think this is a use I would accept from BB: an acknowledgement of sorts, providing information on expected wait times and where to go for immediate support.
Sara, I like the theory of "me" over "something". I have an idea I think BB needs to consider before going to AI options: paid counsellors or even paid CC positions. It would mean more applications, I don't doubt that.
The reality is that not many people feel able to volunteer regularly because they have other commitments. I can because I'm a stay-at-home Mum, but that's just me.
I'm not suggesting this out of greed to line my pockets, but to point out that in life, good deeds are lovely in theory but limited by your ability to put a roof over your family's heads. I think there are many people who could do a much better job than me, but they are busy and I can make time.
But as crap and fallible as I am, I still think I'm better than an AI system. Hmmm, ego speaking there?
Thanks for your contributions. The intent here is to have an open discussion about the possibilities of AI in the context of providing mental health support, including the opportunities and challenges this presents. It's really important to involve the community in these discussions, as there are ethical implications.
It's also important to have this discussion because of the rate at which technology is moving, and the time it takes to implement new solutions. We've seen with our current forum system what can happen if we don't plan ahead - we get behind the eight-ball in terms of providing the best possible service for members.
A small example: five years ago, 100% of all posts to this forum had to be read by a moderator before publication. Having a conversation here could take days, and was more akin to having a pen-pal than using an online service. Since we introduced technology to help streamline moderation, forum traffic and membership has increased exponentially - meaning more people have been able to seek and receive support through this forum.
Based on the responses so far, it doesn't seem that anyone would be in favour of humans being replaced in a peer support environment by AI. However, this doesn't have to be an either/or discussion. In the paper linked above, the intent is to use technology to enhance someone's ability to get good support, rather than to cut costs.
That said, anxiety about the use of technology in support settings is very real and we need to have all these thoughts on the table so we can make an informed decision about how we build services for the future.
I shall comment on this tomorrow as there have been many comments by our members who have made many great points.
Geoff.
Hi,
If I have difficulties with Telstra understanding what I say when navigating a menu system, the thought of getting support from a robot is not appealing.
I have also been involved with speech-to-text software, and depending on the technologies used, the accuracy is average, especially when you take different accents into account, and some people mumble, talk fast, etc. And as noted, there is also the issue of empathy, or lack thereof. Even ignoring the responses you might get for a suicide-related matter, if a robot kept saying "I don't understand", a caller would get equally frustrated.
AI would be useful for scanning posts for keywords, and then (1) replying to the sender by email or similar, (2) sending messages to key staff to respond, or (3) flagging the post for moderation.
It might be usable in a "play me a meditation" type function.
It could be used to tailor information delivered via the website? Not sure what type of learning algorithms would be used here.
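To make the keyword-scanning idea concrete, here is a minimal sketch of how such a triage step might look. Everything in it is assumed for illustration: the keyword lists, the `triage_post` function and the three actions are hypothetical, not anything beyondblue or 7 Cups actually runs.

```python
# Hypothetical sketch: scan a post for keywords, then decide which of the
# three actions above to take. Keyword lists and action names are assumptions.
from dataclasses import dataclass

URGENT_KEYWORDS = {"suicidal", "self-harm", "crisis"}    # illustrative only
SUPPORT_KEYWORDS = {"anxious", "overwhelmed", "lonely"}  # illustrative only

@dataclass
class Triage:
    send_resources: bool        # (1) reply to sender with help numbers etc.
    notify_staff: bool          # (2) send a message to key staff to respond
    flag_for_moderation: bool   # (3) hold the post for a human moderator

def triage_post(text: str) -> Triage:
    words = set(text.lower().split())
    urgent = bool(words & URGENT_KEYWORDS)
    support = bool(words & SUPPORT_KEYWORDS)
    return Triage(
        send_resources=urgent or support,
        notify_staff=urgent,
        flag_for_moderation=urgent,
    )

print(triage_post("feeling overwhelmed and lonely today"))
```

A real system would need far more than exact word matching (misspellings, phrases, context), which is exactly where the empathy and escalation problems discussed above come in.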
Smallwolf