AI is Reshaping Psychotherapy. We Have Two Years to Get Ready.
- Jordanthecounselor
Sentio Counseling Center recently released a report showing that 49% of their clients who use AI use it for mental health support. Based on this and other research, the report suggests that AI, as it exists right now, may already be the largest provider of mental health support in the US.
The report is full of quotes like:
“I usually just talk to it when I’m feeling lonely or super depressed. It’s nice that it just listens, but also that it gives me some actionable advice and really helpful encouragement. It’s especially helpful because I have severe social anxiety, so it’s a little easier to talk to AI than to a human therapist.”
“I am autistic and find it difficult to understand the motives and emotions of other people. I mostly use LLMs to calm myself down while having crises and ask questions pertaining to the situation that I don’t understand.”
“I’ve had a wonderful experience and I don’t know what I would do at this point without having LLM instant support.”
Why are people using the present AI models this way? Two reasons: accessibility and affordability. You can download an app to your phone and “talk” to a supportive AI anytime you need to, for free. As one study participant said:
“I had a crisis related to death in the family and couldn’t reach anybody else in the middle of the night. LLM got me through the night until I could talk to somebody.”
Even more shocking, 75% of clients stated that AI is already as good as or better than their counselor. This is shocking for two reasons. First, the current AI chatbots are just the beginning; with time they will only get better.

Second, this study was done at Sentio. Sentio is an organization dedicated to providing premium care at low cost. All of the counselors there agree to record all of their sessions, track all of their outcomes, and receive ongoing individual and group consultation. The program is run by Tony Rousmaniere and Alex Vaz, who have written and edited dozens of books on psychotherapeutic expertise. To put it bluntly, working at Sentio is like training at an Olympic camp. These are the counselors currently being outmatched by AI.
In this article, I’m going to talk about AI, what comes next, and how this AI tsunami will uniquely impact therapy.
The Four Big Players
As far as I can tell, there will be four big players in the AI-Therapy space: insurance companies, model-based apps, big box provider platforms, and general large language models. Let’s look at these four big players and how they may use AI in psychotherapy.
Insurance Companies
In many circles, it’s assumed that insurance companies will soon use AI to monitor therapy sessions. Their logic: "Why should we pay for treatment that isn't evidence-based? Let's use AI to listen in and ensure therapists are adhering strictly to researched models like CBT, EFT, or EMDR – checking for 'model fidelity'." Programs capable of doing this already exist for modalities like Motivational Interviewing.
On the surface, this makes sense. You’d never pay a physician to administer leeches or perform bloodletting or some other outdated intervention based on superstition instead of science.
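The fidelity-checking piece, at least, is technically plausible today. Here is a deliberately simplified sketch of how an automated check might score a session; real instruments like the MITI coding system for Motivational Interviewing are far more involved, and the behavior labels below would come from a classifier or an LLM rather than being hand-written:

```python
# A simplified, hypothetical sketch of automated model-fidelity checking.
# Real MI fidelity coding (e.g., MITI) is far more involved; this just
# counts rubric-consistent vs. inconsistent therapist behaviors.
MI_CONSISTENT = {"open_question", "reflection", "affirmation"}
MI_INCONSISTENT = {"unsolicited_advice", "confrontation"}

# In a real system, a classifier or LLM would label each utterance;
# here the labels are hand-written for illustration.
session = [
    ("What brings you in today?", "open_question"),
    ("It sounds like part of you wants to change.", "reflection"),
    ("You should just quit drinking.", "unsolicited_advice"),
    ("You've already taken a big step by coming here.", "affirmation"),
]

labels = [label for _, label in session]
consistent = sum(1 for label in labels if label in MI_CONSISTENT)
total = sum(1 for label in labels if label in MI_CONSISTENT | MI_INCONSISTENT)
print(f"MI fidelity: {consistent / total:.0%}")  # an insurer might flag low scores
```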
But we all know that most therapists are not following a model. Most therapists are “eclectic” or “integrative,” meaning they use a smattering of techniques from different models. Of course, when this becomes general knowledge, it will be a big scandal. The New York Times will run an article about how “95% of therapists aren’t following the evidence,” and there will be a hit podcast series called something like The Emperor's New Couch exploring the “massive negligence and lack of oversight in psychotherapy.”
It will be a whole thing.
Model-Based Apps
Model-based apps have been around for a while. Some, such as the meditation and mindfulness apps Calm and Headspace, are already quite popular. But we also have others such as Claira, a hypnosis app for depression created by famed psychotherapist Michael Yapko, or the IFS Guide app, which leads people through Internal Family Systems exercises.
The problem with these apps is self-evident. It’s the same problem revealed by decades of psychotherapy research that no one talks about: Strict adherence to a specific therapeutic model does not reliably correlate with better client outcomes. Simply following the steps of CBT or EMDR perfectly doesn't guarantee success. There's something else, something deeper, happening in effective therapy.
So these model-based apps will likely fall flat in terms of effectiveness.
Big Box Provider Platforms
The most interesting players, and the most threatening to therapists, will probably be the "big box" therapy platforms like BetterHelp and Talkspace. Many (most?) clinicians use these platforms because they need clients. There seems to be a feast-or-famine dynamic among clinicians: if you’re well established in your community and have good word of mouth, you’re overwhelmed with the number of clients coming in; if you don’t have many connections in your community or have poor word of mouth, it’s excruciatingly hard to build a practice. This means a lot of clinicians are dependent on these platforms for their livelihood and, until they learn to get their own clients, have no alternative.
Which is a problem, because it’s generally assumed that these big box provider platforms are collecting massive amounts of data. Every text exchange, email, survey response – it's all potential fuel for training future AI models. Some currently ask providers to use AI to record sessions. These big box provider platforms will inevitably discover what research already suggests: some therapists are simply more effective and efficient than others, helping clients achieve better outcomes in less time, regardless of strict model adherence.
Then, because they have the data, these big box provider platforms will use their data to train their own therapy AI.
General Purpose AI Models
These models (ChatGPT, Gemini, Claude) won’t be specifically designed to provide mental health support. What’s interesting about them is that you can use them to “clone” people based on existing data. This is already a feature; it’s just not fine-tuned yet. For instance, Google’s Gemini currently allows you to create custom AIs called “Gems” by uploading documents.
Say you’re really struggling in your relationship and you’ve been listening to Esther Perel’s podcast for help. Now, of course, you can’t have Perel as your counselor; she’s famous and expensive. But you could easily download all of her podcast transcripts, upload them to Gemini, and use it as your own personal therapist.
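Here is roughly what that could look like programmatically: a minimal sketch using Google’s Gemini API via the google-generativeai Python package, approximating what the Gems interface does. The file names and model choice are illustrative, and a large transcript archive may not fit in a single prompt:

```python
# A rough sketch of the "clone a famous therapist" idea using Google's
# Gemini API. File names, model choice, and prompts are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Load the podcast transcripts you collected (hypothetical local files).
transcripts = ""
for name in ["episode_01.txt", "episode_02.txt"]:
    with open(name) as f:
        transcripts += f.read() + "\n"

# Bake the transcripts into a persona via the system instruction.
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "You are a supportive counselor. Emulate the tone, framing, and "
        "questioning style found in these podcast transcripts:\n" + transcripts
    ),
)

chat = model.start_chat()
reply = chat.send_message("My partner and I keep having the same fight.")
print(reply.text)
```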
What Does This Mean for Counselors?
What this means is the landscape of therapy could change pretty dramatically in the next few years.
Big box provider platforms and insurance companies may partner. The provider platforms could develop really insightful and dynamic AI models which become the first line of care for mental health issues, and then contract their AI to insurance companies. Much like how today you often have to first see a primary care doctor to get a referral to a specialist, you may first have to work with an autonomous AI before being referred to a human counselor.
This could be championed by insurance companies. After all, why pay a human therapist $100+ per session when they don’t follow evidence-based treatment and aren’t that effective?
Also, these big box provider platforms will have the data on who the effective therapists are. It’s easy to see a world where this information is shared with insurance companies, who then won’t reimburse human clinicians unless those clinicians can demonstrate their effectiveness. This might lead to a system where, in order to get credentialed with insurance, clinicians have to pass some sort of test showing they’re on par with or better than the AI model (or meet some other standard).
As for the model-based apps, they may be replaced with AI clones of famous therapists. So you’ll download the Bessel van der Kolk app or the Sue Johnson app and have their guidance. These apps, enhanced by even simple wearables, might be integrated into your daily life. If you’re having a fight with your husband, you might get a ding from your John Gottman app letting you know that your heart rate is over 120 and you should take a break before continuing the discussion.
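As a toy illustration of that wearable-triggered nudge (the 120 bpm threshold comes from the example above; everything else, including how “in a conflict” would be detected, is invented):

```python
# A toy sketch of a wearable-triggered nudge. The 120 bpm threshold is from
# the example above; conflict detection is assumed to exist and is not shown.
def maybe_nudge(heart_rate_bpm: int, in_conflict: bool) -> str | None:
    if in_conflict and heart_rate_bpm > 120:
        return ("Your heart rate is over 120. Consider taking a break "
                "before continuing the discussion.")
    return None

print(maybe_nudge(heart_rate_bpm=127, in_conflict=True))
```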
With all of this, it can seem as though human therapists will become obsolete, but I don’t think that’s the case for several reasons.
First, these models will still need some human oversight.
Self-driving cars are great 98% of the time, but the 2% of the time when they fail can be fatal. The same is true of the general purpose AI models. These models, commonly called “large language models,” operate by essentially guessing what the next word in a sentence ought to be. They are very good at guessing, but they don’t actually know anything. This leads them to frequently “hallucinate,” generating fake answers that sound real, without knowing they’ve done so. This means the AI will still need oversight, especially when dealing with edge cases or high-risk situations such as suicidal ideation, highly dissociative clients, or domestic violence.
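To make the “guessing the next word” point concrete, here is a minimal sketch of next-token prediction using the small, open-source GPT-2 model via Hugging Face’s transformers library. This is a toy illustration; production chatbots are far larger and heavily tuned, but the core mechanic is the same:

```python
# Next-token prediction: the core loop behind LLMs, shown with GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The client sighed and said, 'I just feel"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # a score for every possible next token

# The model doesn't "know" an answer; it ranks ~50,000 candidate tokens
# by probability, and the most plausible-sounding continuation wins.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p:.1%}")
```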
Second, if you look at the history of innovation, new innovations don’t always push out old inventions.
Very often the old invention becomes niche or specialized. Cars didn’t kill trains; trains now haul large freight, and metros move people within large cities. MP3 players may have killed tape cassettes, but cassettes were fragile (as anyone who’s accidentally pulled out the magnetic tape knows) and were mostly used because they were cheap to produce. Records, on the other hand, are very much still around and enjoy niche status among music lovers.

Third, clinicians will revolt.
While insurance companies will be incentivized to use AI therapists (after all, it will be cheap and effective), public outcry from counselors, interest groups, and professional counseling organizations will prevent them from fully adopting AI as the only provider of mental health care.
Taken together, the practical implications are likely that ineffective therapists will be pushed out of the profession. They won’t be able to get clients from the big box provider platforms. They won’t be able to get reimbursed by insurance. They won’t be able to compete with the therapy guru clones.
However, due to the need for human oversight, the history of invention, and public outcry, it’s likely that effective therapists will be in high demand. This leads to the obvious question: “How do we become better therapists?”
The Answer: Decoding the Micro-Processes
Traditionally, when we want to become more effective counselors, we’ve focused on models. But the research is clear: learning a new model won’t help you become a more effective counselor.
I believe the answer lies in process coding. This isn't about rigid models; it's about decoding the nuanced, moment-to-moment dynamics within a therapy session. It's about identifying the specific micro-processes essential for successful outcomes.
I learned the importance of these micro-dynamics firsthand when, over Christmas break, I decided to make chocolate chip cookies. Using my mom's old recipe, my first batch came out slightly burnt. At our family Christmas party, I told my sister-in-law about it, and she pinpointed the issue: the recipe called for large, ice-cream-sized scoops, but I'd used smaller tablespoon scoops. Same baking time, smaller cookies: naturally, they burnt. After the party, I went home and made a second batch with my leftover dough. This attempt, using dough straight from the fridge, resulted in underdone cookies. I was confused; I’d baked the cookies for less time because they were smaller than the recipe required, so that should have worked. When I talked it over with my wife, she pointed out the obvious: the dough was too cold.
These micro-details – dough temperature, cookie size – dramatically impacted the outcome, even while following the main "recipe." Therapy is the same. You can execute CBT (a therapy “recipe”) perfectly, but if the client lacks motivation (a distinct process), it won't land. You can perform EMDR (another recipe) flawlessly, but if the client isn’t engaged in the experience (another process), progress stalls.
Process coding focuses on identifying and mastering these crucial underlying dynamics: therapist-client expectancy, therapist persuasiveness, client motivation, client experiencing levels, etc. Training therapists to first decode which process is hindering progress, and then intervene specifically on that process, is the key.
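To give a feel for what process coding might look like as data, here is a purely hypothetical sketch. The process names come from the list above, but the schema, timestamps, and 1-5 ratings are invented for illustration, not an established coding system:

```python
# A hypothetical sketch of coded session moments. The process names are from
# the article; the schema and ratings are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProcessCode:
    timestamp: str   # position in the session recording, e.g. "14:32"
    speaker: str     # "therapist" or "client"
    process: str     # which micro-process this moment reflects
    rating: int      # coder's 1-5 rating of how well it's going

session_codes = [
    ProcessCode("03:10", "client", "motivation", 2),     # ambivalent about change
    ProcessCode("14:32", "client", "experiencing", 4),   # emotionally engaged
    ProcessCode("14:45", "therapist", "persuasiveness", 3),
    ProcessCode("27:05", "client", "expectancy", 2),     # doubts therapy will help
]

# Decode which process is hindering progress: here, simply the lowest-rated one.
stuck = min(session_codes, key=lambda c: c.rating)
print(f"Intervene on: {stuck.process} (rated {stuck.rating} at {stuck.timestamp})")
```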
AI as Super-Coach
Ultimately, I don’t think we can compete with these models on insightfulness. However, we can leverage them to make us better therapists. Imagine an AI acting as your personal supervisor: reviewing your sessions (with client consent, of course), pointing out missed cues, and highlighting moments where a specific micro-process could have been addressed more effectively, helping you become far more attuned and attentive.
An example of this comes from the world of chess. The best chess AI engines are currently 1000 points stronger than the strongest chess players.
Has this killed chess?
No. It’s revolutionized chess. Young players achieve Grandmaster status faster than ever, tutored by near-perfect AI engines. Complex positions that once took months to analyze can be understood in minutes. Moves once considered weak are now recognized as brilliant long-term strategies, thanks to AI's deep calculation.
Crucially, people haven't stopped playing or watching human chess. The AI serves as an incredible enhancement, elevating the entire field. The average serious player is significantly better today because of AI tools [1].
The same thing could certainly happen in counseling. It’s easy to imagine a future where therapists use AI as a co-therapist. You’ll be sitting in session when a ding comes over your earpiece:
Your client just sighed. In the past, 98% of the time when your client sighs and then pauses, they’ve been holding back. You should ask them a follow-up question.
This would create a very powerful combination. It would give the therapist an amazing ability to read clients. Nothing would be missed.
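Under the hood, that kind of co-pilot prompt could be as simple as pattern matching over detected cues. Here is a purely hypothetical sketch; the sigh-then-pause pattern and the 98% figure come from the example above, while the cue schema, detection, and threshold are invented:

```python
# A hypothetical sketch of an AI co-therapist's cue-matching step. Assumes an
# upstream system that detects cues from audio; only the matching is shown.
from dataclasses import dataclass

@dataclass
class Cue:
    kind: str           # e.g. "sigh", "pause", "topic_shift"
    seconds_ago: float

def check_for_prompt(recent_cues: list[Cue], history: dict[str, float]) -> str | None:
    """Return a coaching prompt if a known pattern fires, else None."""
    kinds = [c.kind for c in recent_cues]
    # Pattern from the article: a sigh followed by a pause usually means
    # this client is holding something back.
    if kinds[-2:] == ["sigh", "pause"]:
        rate = history.get("sigh_then_pause_means_holding_back", 0.0)
        if rate > 0.9:
            return (f"Your client just sighed and paused. In the past, "
                    f"{rate:.0%} of the time this meant they were holding back. "
                    f"Consider a follow-up question.")
    return None

history = {"sigh_then_pause_means_holding_back": 0.98}
cues = [Cue("topic_shift", 40.0), Cue("sigh", 3.0), Cue("pause", 1.0)]
print(check_for_prompt(cues, history))
```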
However, for this to happen, we have to do two things now.
First, we have to be willing to adopt autonomous AI as a copilot in therapy.
Therapists who ignore this shift, who resist learning and adapting, risk becoming obsolete. Those providing mediocre therapy, relying solely on rote model adherence, will likely be the first to be replaced by cost-effective AI chatbots.
Second, we have to be able to understand why the autonomous AI is telling us to do one thing instead of another.
If we’re not familiar with processes like persuasion, resistance, and experiencing, then when the AI gives us in-the-moment guidance, we won’t understand how it fits into the overall therapy. Which means we probably won’t be able to implement the AI’s insights well.
However, therapists who embrace this moment, who learn about process coding, who engage with AI as a tool for profound self-improvement, will not only survive but thrive. They will become significantly more effective, build full practices, help more people deeply, and, yes, likely earn much more. They will be the insightful, attuned clinicians analogous to the new generation of AI-enhanced chess masters.
The Timeline: How Much Time Do We Have?
Recently, a former key White House advisor on artificial intelligence – someone deeply embedded in the technology's trajectory – stated publicly that we are likely just two years away from achieving autonomous AI. (Watch the first 60 seconds of this video.)
Most people don’t know what autonomous AI is. After all, don’t we already have AI? I can ask ChatGPT to plan a trip to New York, and it will, right now, draft an itinerary: sights to see, places to stay, restaurants to try.
Autonomous AI is a different beast altogether.
In the next 2-3 years, AI won't just suggest your itinerary; it will execute it. You'll say, "I want to go to New York," and it might respond, "Okay, considering your schedule, your upcoming family reunion, and your aversion to cold weather, the fall is ideal. I know you dislike big-city driving, so I've booked you a hotel near major subway lines, and secured you a transit pass. Your flight reservations are also confirmed."
This isn't just planning; it's autonomous action based on a deep understanding of you and access to the real world.
Like many powerful resources, autonomous AI’s adoption will be uneven, creating a huge gap between those who adopt it and those who don’t.
The core problem won’t be access to autonomous AI. Already, anyone with an internet connection can use ChatGPT or Gemini for free. The real challenge lies in the willingness and ability to integrate it effectively. Two major roadblocks stand in the way: institutional inertia and individual resistance.
Consider your local middle school. Do the school board, administrators, and faculty have the know-how to adopt autonomous AI in the classroom? Do they know how to integrate it in ways that genuinely enhance education? Or are their instincts more defensive, focusing on banning AI to prevent cheating on essays?
Schools that do “adopt” autonomous AI will probably make the same mistake they’re making now with EdTech: using new “educational computer games” to supplement and “personalize” lessons taught by teachers, when in reality they’re just giving kids screen time that doesn’t actually accelerate learning.
Understanding how to intelligently apply autonomous AI in a complex system like education requires vision, agility, and a willingness to rethink fundamental processes. Many institutions simply aren't built for that speed or depth of change.
Then there's individual resistance.
Back to our example of the local middle school. Teachers might feel their role is threatened, insisting, "Students should learn from me, not a robot." Parents might worry, "What is this thing telling my kids? I don't trust it." The secretaries and other staff might say, "This is confusing, it doesn't make sense, I don't want to deal with it." This fear and reluctance are natural, but they will cause significant delays in adoption.
Of course, this isn’t just true for the local middle school. These sorts of dynamics will play out everywhere from small businesses to large government organizations.
This dynamic – powerful autonomous AI emerging within two years, coupled with slow, uneven adoption – creates a massive, time-sensitive window of opportunity. If we have two years before autonomous AI is readily available, then individuals and organizations who prepare now will be positioned to gain a disproportionate advantage when autonomous AI arrives. This technology is coming, and, for those willing to adopt it, it will transform our lives and professions almost overnight.
Even psychotherapy.
Conclusion: Navigating the AI Tsunami in Therapy
So what have we covered so far?
Current AI is already a significant, accessible, and often preferred source of mental health support for many, even compared to highly trained therapists. This trend will only accelerate as AI capabilities grow.
Big box platforms may leverage vast data to create powerful therapy AI, potentially becoming gatekeepers or primary providers endorsed by insurance companies seeking efficiency and measurable outcomes.
Simultaneously, general AI tools could allow clients to access personalized guidance modeled after renowned experts (e.g., Sue Johnson, Esther Perel, Bessel van der Kolk, John Gottman).
The future isn't necessarily one where AI replaces human therapists entirely, but rather one where the nature of therapy and the role of the clinician evolve dramatically.
The most promising future involves human connection augmented by AI insight. Like chess masters using AI coaches, therapists can leverage autonomous AI as a powerful co-pilot, receiving real-time feedback to enhance attunement and effectiveness. However, this requires a willingness to adopt these tools and the foundational knowledge of process coding to interpret and apply AI-driven guidance meaningfully.
It’s really important that we take the next two years to prepare. Therapists who take the time now to learn the key dynamics and micro-processes impacting therapy have the chance to become the expert therapists of tomorrow. They will be uniquely positioned to use the technology to get better at connecting with clients and facilitating profound change. But we have to start now. If therapists wait, they risk being left behind.
Best,
Jordan (the counselor)
Notes
[1] The same is true in more random environments like poker. Poker AIs have shown us that betting strictly on the odds outperforms a strategy of trying to read the other player. This has revolutionized how players study the game. For more, read this article or watch this video.
Paul Peterson and Jordan Harris are co-founders of Private Practice Incubator, a consulting firm dedicated to:
- Helping clinicians earn more money.
- Helping clinicians help more clients.
If you'd like to learn more about launching your practice, visit us here.