I know it’s not even close there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?
You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think in today’s day and age it’s pretty fair to ask when you are deciding about your future.
deleted by creator
Eh, I give it 5 years.
Never say never, because everything is possible given enough time. The only question being how much time.
deleted by creator
homie lemme let you in on a secret that shouldn’t be secret
in therapy, 40% of positive client outcomes come from external factors changing
10% come from my efforts
10% come from their efforts
and the last 40% comes from the therapeutic alliance itself
people heal through the relationship they have with their counselor
not a fucking machine
this field ain’t going anywhere, not any time soon. not until we have fully sentient general ai with human rights and shit
I don’t think there’s harm in allowing people who would never be able to afford life-saving medicine to have life-saving medicine cat-puzzle-feeder style
Edit: this was me, and access hasn’t changed the fact that I do not generally derive value from it.
Interestingly, and somewhat related, it was tested years ago whether a robot could bring comfort/social support to lonely pets or elderly people.
The results were overwhelmingly positive, and this is going into actual commercial usage/development as we speak.
You realize that adds up to 60% right?
40 40 10 10
math moment
I won’t trust a tech company with my most intimate secrets. Human therapists won’t get fully replaced by AI.
I don’t think the AI everyone is so buzzed about today is really a true AI. As someone summed it up: it’s more like a great autocomplete feature but it’s not great at understanding things.
It will be great at replacing Siri and the Google Assistant, but not at giving people professional advice, not by a long shot.
Not saying an LLM should substitute a professional psychological consultant, but that someone is clearly wrong and doesn’t understand current AI. Just FYI
Care to elaborate?
It’s an oversimplified statement from someone (sorry, I don’t have the source) and I’m not exactly an AI expert, but my understanding is that current commercial AI products are nowhere near the “think and judge like a human” definition. They can scrape the internet for information, use it to react to prompts, and do a fantastic job of imitating humans, but the technology is simply not there.
The technology for human intelligence? Any technology will always be very different from human intelligence. What you’re probably referring to is AGI, artificial general intelligence: an “intelligent” agent that doesn’t excel at any one thing but can handle a huge variety of scenarios and tasks, the way humans do.
LLMs are specialized models for generating fluent text, but they are very different from autocomplete because they can work with concepts, semantics, and (pretty surprisingly) rather complex logic.
As an oversimplification, even humans are fancy autocomplete. They are just different, as LLMs are different.
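For a concrete sense of what the “autocomplete” core looks like mechanically, here’s a minimal sketch (assuming the Hugging Face transformers library and GPT-2 as a small public stand-in, nothing state of the art): the model just assigns a probability to every possible next token, and everything else is layered on top of that.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a small stand-in here; the mechanics are the same for bigger LLMs.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The therapist listened and then", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
# Show the five most likely next tokens with their probabilities.
print([(tok.decode(i.item()), round(p.item(), 3)) for p, i in zip(top.values, top.indices)])
```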
Even if AI did make psychology redundant in a couple of years (which I’d bet my favourite blanket it won’t), what are the alternatives? If AI can take over a field that is focused more than most others on human interaction, personal privacy, thoughts, feelings, and individual perceptions, then it can take over almost any other field before that. So you might as well go for it while you can.
The fields that will hold out the longest will be selected by legal liability rather than technical challenge.
Piloting a jumbo jet, for example, has been automated for decades, but you’ll never see an airline skipping the pilot.
It’s just like with programming: The people who are scared of AI taking their jobs are usually bad at them.
AI is incredibly good at regurgitating information and translation, but not at understanding. Programming can be viewed as translation, so they are good at it. LLMs on their own won’t become much better in terms of understanding; we’re at a point where they are already trained on all the good data from the internet. Now we’re starting to let AIs collect data directly from the world (ChatGPT being public is just a play to collect more data), but that’s much slower.
I am not a psychologist yet. I only have a basic understanding of the job description but it is a field that I would like to get into.
I guess you are right. If you are good at your job, people will find you just like with most professions.
I slightly disagree. In general I think you’re on point, but artists especially are actually being fired and replaced by AI, and that trend will continue until there’s a major lawsuit because someone used a trademarked thing from another company.
The web is one thing, but access to senses and a body that can manipulate the world will be a huge watershed moment for AI.
Then it will be able to learn about the world in a much more serious way.
No, it won’t. I don’t think I would have made it here today alive without my therapist. There may be companies that have AI agents doing therapy sessions but your qualifications will still be priceless and more effective in comparison.
Given how little we know about the inner workings of the brain (I’m a materialist, so to me the mind is the result of processes in the brain), I think there is still ample room for human intuition in therapy. Also, I believe there will always be people who prefer talking to a human over a machine.
Think about it this way: Yes, most of our furniture is mass-produced by IKEA and others like it, but there are still very successful carpenters out there making beautiful furniture for people.
That’s a fair point.
I was gonna say given how little we know about the inner workings of the brain, we need to be hesitant about drawing strict categorical boundaries between ourselves and LLMs.
There’s a powerful motivation to believe they are not as capable as us, which probably skews our perceptions and judgments.
Psychotherapy is about building a working relationship. Transference is a big part of this relationship. I don’t feel like I’d be able to build the same kind of therapeutic relationship with an AI that I would with another human. That doesn’t mean AI can’t be a therapeutic tool. I can see how it could be beneficial with things like positive affirmations and disrupting negative thinking patterns. But this wouldn’t be a substitute for psychotherapy, just a tool for enhancing it.
AI cannot think; it does not use logic or reason. It outputs a result from an input prompt. That will not solve psychological problems.
It’s what AI does at the moment, which may not necessarily be true in a few years, and that’s what OP is asking about.
deleted by creator
That’s a great answer. Thank you.
Here’s a case study for you: An eating disorder hotline got rid of the humans in favor of an AI chatbot. Lasted less than a week before it was giving horrible advice.
https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm
Psychology will be controlled by humans, probably forever.
Hey, maybe your background in psychology will help with unfucking an errant LLM or actual AI someday :P
I think it is one of those things that AI can never make redundant.
There is the theory that most therapy methods work by building a healthy relationship with the therapist and using that for growth, since it’s more reliable than the relationships that caused the issues in the first place. As others have said, I don’t believe that a machine has this capability, simply by being too different. It’s an embodiment problem.
Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.
I think the only reason we don’t see Boston Dynamics bots that are plugged into GPT “minds” and D&D style backstories about which character they’re supposed to play, is because it would get someone in trouble.
It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.
If an LLM needs a long term memory, all that requires is an API to store and retrieve text key-value pairs and some fuzzy synonym matchers to detect semantically similar keys, something like the sketch below.
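A rough sketch of what I mean, assuming a sentence-embedding model (all-MiniLM-L6-v2 via the sentence-transformers library is just one convenient choice, not the only option): store text key-value pairs, embed the keys, and retrieve by cosine similarity so semantically similar keys still match.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any sentence-embedding model would do; this is just a small, common one.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

class MemoryStore:
    """Long-term memory as text key-value pairs with fuzzy (semantic) key matching."""

    def __init__(self):
        self.keys, self.values, self.embeddings = [], [], []

    def store(self, key: str, value: str) -> None:
        self.keys.append(key)
        self.values.append(value)
        self.embeddings.append(embedder.encode(key))

    def retrieve(self, query: str, top_k: int = 3):
        q = embedder.encode(query)
        # Cosine similarity between the query and every stored key.
        sims = [float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e)))
                for e in self.embeddings]
        best = np.argsort(sims)[::-1][:top_k]
        return [(self.keys[i], self.values[i]) for i in best]

memory = MemoryStore()
memory.store("project car exhaust", "Wants a quieter muffler fitted by spring.")
memory.store("head gasket", "Suspected leak; check the coolant level next visit.")
print(memory.retrieve("that muffler we keep talking about"))
```

Whatever gets retrieved just gets pasted back into the prompt before the conversation continues.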
What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.
That could be done with present day technology. The thing could be watching youtube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with facebook groups about little bits of maintenance.
You could give it a few basic motivations then instruct it to act that out every day.
Now I’m not saying that they’re conscious, that they feel as we feel.
But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.
Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but it’s not their deepest nature.
The term embodiment is kinda loose. My use is the version of AI learning about the world with a body and its capabilities and social implications. What you are saying is outright not possible. We don’t have stable lifelong learning yet. We don’t even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won’t get the benefit of “growing up” in society as one of us. This means that advanced AI won’t be able to connect on the same level, since it doesn’t share the same experiences. Even therapists don’t match every patient. People usually search for a fitting therapist. An AI will be worse.
“We don’t have stable lifelong learning yet”
I covered that with the long term memory structure of an LLM.
The only problem we’d have is a delay in response on the part of the robot during conversations.
LLMs don’t have live long-term learning. They have frozen weights that can be fine-tuned manually. Everything else is input and feedback tokens, and those work on the frozen weights, so there is no long-term learning. That is short-term memory only.
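To make the frozen-weights point concrete, here’s a minimal sketch (GPT-2 via the Hugging Face transformers library as a stand-in): you can “tell” the model something and generation will read it from the context window, but no parameter changes, so nothing is retained once that context is gone.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Snapshot every weight before "talking" to the model.
before = {name: p.clone() for name, p in model.named_parameters()}

ids = tok("Remember that my car has a leaky head gasket.", return_tensors="pt").input_ids
with torch.no_grad():
    model.generate(ids, max_new_tokens=20)  # inference only reads the weights

# Every parameter is bit-for-bit identical afterwards: nothing was learned.
after = dict(model.named_parameters())
print(all(torch.equal(before[n], after[n]) for n in before))  # True
```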