- Beyond Blue Forums
AI (artificial intelligence) and the future of support
As technology progresses, there is an increasing amount of talk about the use of AI (artificial intelligence) and how it might affect support services in the future.
At beyondblue, we're currently in the early stages of researching how we might use AI to tailor content for people seeking information from us. But most importantly, we want to know how you, our members and regular visitors, feel about it.
There is a wide range of possible applications for AI. At one end, you have the idea of using AI to help triage people seeking assistance; at the other end, you have the idea of using AI to actually provide support for people and hold conversations.
This is an open thread to discuss the implications of AI and how it might work on the beyondblue site. How do you think it could make your experience here better? Do you have any concerns? Have you had experience with using other services where AI is a component?
Below is a link to a recent summary of work being done to incorporate AI into health services, to help jump-start the discussion:
Confronting Dr Robot: Creating a people-powered future for AI in health
Hi to all. Well, every comment made by all of you typifies exactly how I feel.
A machine will not and cannot understand all the complexities that occur with mental illness. It won't know why or what has caused you to feel as though there is no way out of your situation, because there may be so many different variants, other people or separate circumstances that you certainly can't try and explain to a machine.
If you do try and the machine gets it wrong, then what are the repercussions, and what may happen?
It's bad enough trying to get where you want to when talking to any large organisation; mostly you're directed to the wrong section and then passed onto someone else, and so the shove and push begins.
I wish I could write as well as others, but I have accepted that I can't. Yes, over the years we become more knowledgeable and can assess the situation quicker than before.
A machine doesn't mature; it may get new programs, but machines break down or need to be taken offline to upgrade, and that's not good when people are suffering from mental illness.
What people need is empathy. A machine can't possibly give you this with any meaning ('its heart isn't in it'), and certainly not for mental illness.
Geoff.
Hi All,
Just had a read through this thread again.
I imagine that AI and robots in general are used a lot more than I have any knowledge of.
Many people are very willing to use Dr. Google so some may work well with AI on a forum like this.
I feel that as long as people know they are connecting with AI and not a real person, and they are happy with that, then it will be their choice to continue the connection or not.
Sometimes it may actually be beneficial to rant and rave and not care that a non-human is reading what you are offloading! It would save me writing it all longhand in my notebook!
romantic_thi3f how is "Noni" recognised on 7 Cups?
Cheers all from Dools
Hi!
This has become a really interesting thread!
I'll try and cover all your points but let me know if I missed something!
I really don't believe that AI can replace therapists. There's no doubt that AIs are smart (which is a little bit scary), but part of being a therapist is being able to relate to human experience. Take anxiety and depression, for example; I have both of those conditions so I can relate well, but even if I didn't, I know what it's like to be afraid and I know what it's like to be sad. I know what it's like to overthink, I know what it's like to want to connect, I know what it's like to feel lonely.
Yet mental illness is only part of that; what about other things we have to go through that robots don't? Grief, heartbreak, loss, health troubles: how can an AI understand all of those feelings? The hollowness and emptiness that we feel, the tightness when we're angry, the pain when we're sad; there are so many universal emotions and sensations that a robot can never truly understand.
Just Sara - you talked about an AI respondent learning, but I know AIs learn all the time. Noni on 7 Cups learns through responses, whereas Netflix learns what movies to suggest based on the ones you've watched. Google learns what you like/dislike and shows you through ads, and YouTube learns what things I like to watch and shows me things others watch too. I even have a friend who can code programs, allowing them to 'learn' from the user.
smallwolf - your post made me think even more! It makes me think about the different levels of AI - from a basic bot to something way more complex. I just watched the release of Google's assistant impersonating humans to make phone calls - so I think it's only growing!
I like your idea about AI scanning for keywords! Out of curiosity, I asked Siri (on my iPhone) to play me a meditation and she went to connect to Apple Music!
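For readers curious how the keyword-scanning idea mentioned above might look in practice, here is a minimal sketch in Python. The keyword lists and resource labels are invented for illustration; a real service would use far more careful matching and clinical review.

```python
# A toy sketch of keyword-based triage: scan a message for topic
# keywords and suggest matching resources. All keywords and labels
# below are hypothetical examples, not a real service's data.

KEYWORD_RESOURCES = {
    "meditation": ("meditate", "mindfulness", "breathing"),
    "sleep support": ("sleep", "insomnia", "tired"),
}

def suggest_resources(message: str) -> list[str]:
    """Return resource labels whose keywords appear in the message."""
    text = message.lower()
    return [
        label
        for label, keywords in KEYWORD_RESOURCES.items()
        if any(word in text for word in keywords)
    ]

print(suggest_resources("I can't sleep and want to try mindfulness"))
# → ['meditation', 'sleep support']
```

This simple substring matching is roughly what a basic bot might do; the more complex systems discussed in this thread (like Noni) instead learn from conversation data.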
dools - Noni is recognised as a bot! 7 Cups is very open about the fact that Noni is a bot and learning through conversations.