As a blogger, I occupy a relatively small space. I mean how many people do you know who blog about deliberate practice and outcome tracking for therapists?
It's a pretty niche crowd. As far as I know there are only about three blogs in this space.
One of them is Therapy Meets Numbers (TMN).
Based in the UK, TMN is a blog run by Barry and Niles dedicated to documenting their journey using outcome measures to improve client care. You can see why I'm a fan. So I reached out to Barry and asked him to write a guest post. He was kind enough to send me what you'll find below.
Before you dig in, I recommend you also check out the rest of TMN's blog. Start with their post on supervision; it's one of my favorites.
Enjoy!
Consider signing up for my newsletter. You'll get more goodies like this.
It’s a provocative question, isn’t it? It’s also the title of a recent blog post, more of which below. Looking back at my outcomes over 25 years, the answer depends on what period I’m looking at. I know that when I’ve got complacent, or stopped looking at my data, it’s shown in my outcomes. So, what have I learned?
“I just need an evidence base….”
Back in the day (the mid-1990s, to be precise), I managed the counselling service for the Royal College of Nursing (RCN). I loved the role, but it used to keep me awake at night thinking about how I was going to provide a service to the (then) 330,000 members of the RCN on a complement of eight staff.
Thankfully, there wasn’t really an expectation that we would. There was an expectation, however, that we would promote the value of counselling for NHS staff sufficiently well that NHS employers would fall over themselves to provide it in-house. We just needed an evidence base.
Back then, there wasn’t an evidence base for staff counselling as such. Our subsequent efforts took us in two directions. First, we worked with the BACP Research Committee to develop BACP’s first review of sector-based counselling (John McLeod’s systematic review of the research evidence for counselling in the workplace). Second, we set about growing the evidence base for our own service, based on the CORE system.
When it comes to impact, we’re far from equal
We started using the CORE measures (or more accurately the 34-item CORE-OM) with service clients. We learned to introduce it into the work in a non-clunky way. Reviewing their responses became part of our standard assessment.
When it came to processing the data, we’d wait a few months and send batches of forms several inches thick to be scanned and reported on by the University of Leeds. Other than as part of assessment we had no real relationship with our data.
Everything changed when we adopted CORE-PC. We input our data in real time, and we took charge of its analysis. It was so feature rich that initially it felt like flying a spacecraft. Features included an appraisal function, known commonly as the ‘scary button’. This allowed me to look beneath the overall service data at markers such as dropout and improvement and identify individual rates among my team.
Gaining that level of insight wasn’t a comfortable experience. Once seen, it can’t be unseen. One of the most uncomfortable aspects was discovering that I had my own problem with dropout, and a major one at that. More than half of my clients were dropping out, and my figure was the highest of my team by some distance. That discovery completely blindsided me. How on earth could that be happening without my being aware of it?
This experience left a lasting legacy. I learned that my judgement alone isn’t a reliable witness to what’s really going on in my practice. Assuming my judgement is even active, it’s easy to reassure myself that I’m doing OK, when my numbers are telling a different story. I need to give those numbers proper attention.
Improvement doesn’t happen by chance
By the early 2000s we’d become very familiar with using the CORE measures actively with clients, and by then I’d clearly taken my wayward dropout rates in hand. We were building our evidence base and routinely using CORE-PC to analyse our service data. There wasn’t much in the way of benchmarks against which to contrast our performance, but what there was seemed to suggest we were doing OK by our clients.
By 2002, rates of unplanned endings across the service (for clients accepted for therapy) stood at 31%, and 79% of completers were showing clinical and/or reliable change. We challenged ourselves to do better, and it’s to the credit of my brilliant team that we did. Over the next two years (just prior to my departure), we nearly halved our rate of unplanned endings and raised our rate of improvement to 85%.
That improvement didn’t happen by chance. It came from a combination of challenging ourselves to do better, taking focused action, and continuously measuring and monitoring the results. These days, I suspect it would probably fit the description of deliberate practice. It was certainly deliberate.
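A quick note on the jargon for anyone unfamiliar with it: “reliable change” means improvement bigger than the measurement error of the instrument, and “clinical change” means moving from the clinical to the non-clinical side of a cut-off score, in the sense of Jacobson and Truax. The sketch below shows the two checks in rough form; the cut-off, standard deviation, and reliability figures are placeholders for illustration, not the published CORE-OM values, so substitute the figures for whatever measure you use.

```python
# Rough illustration of the Jacobson & Truax 'reliable change' and
# 'clinical change' checks. The three figures below are placeholders,
# NOT the published CORE-OM values -- use your own measure's figures.
import math

CLINICAL_CUTOFF = 1.0   # placeholder cut-off between clinical and non-clinical scores
SD_REFERENCE    = 0.7   # placeholder SD of the measure in a reference sample
RELIABILITY     = 0.9   # placeholder reliability (e.g. Cronbach's alpha)

# Standard error of the difference between two administrations of the measure
se_measurement = SD_REFERENCE * math.sqrt(1 - RELIABILITY)
se_difference  = math.sqrt(2) * se_measurement

def reliable_improvement(pre: float, post: float) -> bool:
    """Improvement larger than plausible measurement error (RCI >= 1.96)."""
    return (pre - post) / se_difference >= 1.96

def clinical_change(pre: float, post: float) -> bool:
    """Started above the cut-off, finished below it."""
    return pre >= CLINICAL_CUTOFF and post < CLINICAL_CUTOFF

pre, post = 1.8, 0.9  # example pre- and post-therapy scores
print(reliable_improvement(pre, post), clinical_change(pre, post))
```

A completer counts towards a figure like that 79% if either check comes back true, hence “clinical and/or reliable change”.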
The art of the possible
I left the RCN in 2005 and started working for CORE-IMS, the organisation established to support CORE system users. Much of the next five years was spent roaming the length and breadth of the UK providing CORE system implementation training and supporting services to use their data as part of a service development strategy.
During that time I saw performance data for dozens of services, hundreds of therapists, and thousands of clients (all anonymous I should add). It’s said that a picture paints a thousand words. After a while I came to see data in a similar way. It’s no exaggeration to say that I could form a tentative (and usually accurate) picture of a service’s strengths and shortcomings from a two-minute tour of their data.
I was privileged to work with some truly exceptional services and provide independent reporting of their service quality. I saw what great therapy looks like, provided by services I’d be confident in referring any loved one of mine to. Many in primary care, sadly, lost their funding as IAPT was rolled out. A few are still going strong, such as My Sister’s Place (MSP) in Middlesbrough. You can read more about MSP and their achievements here.
How did I forget everything I learned?
Given all I’ve just said, you might imagine my commitment to using measures in my practice would have been unshakeable. You’d be wrong. Between 2005 and 2010, when I left CORE-IMS to go independent, I didn’t see one client. I was great at talking the talk, but when I went into private practice, not so good at walking the walk.
As anyone who’s set up in private practice will attest, it’s tough, especially at the start. I relied heavily on EAP referrals. Of the four EAPs whose books I’ve been on, one uses the GAD-7 and PHQ-9; one the GHQ-28; one CORE (sporadically); the other doesn’t use measures. It’s an utter mess, and to date I don’t know of one EAP that’s successfully managed to use data purposefully. Consider this an invitation to tell me different.
So, in the early years of my private practice I was using measures, but very erratically. I’d also lost the habit of paying attention. And, once again, it was starting to show up in my numbers. In 2017, on a hunch, I set about looking at dropout rates for my EAP and non-EAP (private) clients. I discovered that while 95% of EAP referrals for the previous year had reached a planned end to therapy, for non-EAP clients the figure was just 38%. Not only that, the average number of sessions attended by non-EAP clients was one. Many just weren’t engaging.
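If you keep even a basic client log, that kind of split takes minutes to run. Below is a rough sketch of how it might look in Python, assuming a hypothetical clients.csv export with a referral source, number of sessions attended, and ending type recorded for each client; the file and column names are mine, purely for illustration.

```python
# Minimal sketch: planned-ending rates and average sessions, split by referral source.
# Assumes a hypothetical clients.csv with columns: client_id, referral_source
# ("EAP" or "private"), sessions_attended, ending ("planned" or "unplanned").
import csv
from collections import defaultdict

totals = defaultdict(int)
planned = defaultdict(int)
sessions = defaultdict(list)

with open("clients.csv", newline="") as f:
    for row in csv.DictReader(f):
        source = row["referral_source"]
        totals[source] += 1
        planned[source] += row["ending"] == "planned"
        sessions[source].append(int(row["sessions_attended"]))

for source in totals:
    rate = 100 * planned[source] / totals[source]
    avg = sum(sessions[source]) / len(sessions[source])
    print(f"{source}: {rate:.0f}% planned endings, {avg:.1f} sessions on average")
```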
I started focusing assiduously on the goal and means elements of the working alliance, and on more systematically using measures in my work. By 2019 my overall dropout rate was just eight percent, with no dropout from the non-EAP clients that finished in that year. It had taken feedback from the data to galvanise me once again, but the measures I was taking seemed to be making a difference.
You need an evidence base, but not always for the reasons you think
Over my years of consultancy I’ve heard many variations on the theme of “I need an evidence base so that I can show commissioners the great work that we do.” To which my response is some variation of “I’m a great fan of building an evidence base, but perhaps more to test our assumptions that we are doing great work?”
As I’ve discovered to my cost more than once, it’s unwise to make any assumptions about your therapeutic impact in the absence of evidence. As research has shown, in common with other professions, we tend to over-estimate the level of our professional abilities. While it may be “common to think of ourselves as somewhat remarkable compared to others”, not all of us can fit into the remarkable category. We need to approach this area with a little humility. Outcomes can go down as well as up.
Am I Any Good…as a Therapist?
Are You Any Good…as a Therapist? is the somewhat provocative title of a recent post on the Society for the Advancement of Psychotherapy website. In the context of my own reflections on my journey with measurement and evaluation of my own practice, it feels like a timely question.
If I’m really objective, the truth is probably that there have been times when I’ve been consistently impactful, and times when I’ve been less so. Along the way I feel like I’ve picked up what feel to me like some simple but powerful truths.
I’ve learned that someone else’s evidence base is a poor substitute for my own. Just because the average effect size for therapy is on the order of d = 0.8, it doesn’t follow that mine will be anything like that. It’s an average. Some of us will do better, and some worse. And it won’t be consistent over time.
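For anyone who wants to put their own caseload on that same scale: d is a standardised mean difference, and a rough pre/post version can be computed from intake and end-of-therapy scores, as in the sketch below with invented numbers. Pre/post effect sizes aren’t strictly comparable with those from controlled trials, so treat the comparison as a rough benchmark at best.

```python
# Rough pre/post effect size for a caseload: mean change divided by the
# pre-treatment standard deviation. The scores below are invented.
from statistics import mean, stdev

pre  = [18, 22, 15, 25, 20, 17, 23, 19]   # intake scores (higher = more distress)
post = [15, 18, 13, 22, 16, 14, 20, 17]   # end-of-therapy scores

d = (mean(pre) - mean(post)) / stdev(pre)
print(f"Pre/post effect size: d = {d:.2f}")
```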
I’ve discovered that when it comes to assessing my performance, my judgement alone isn’t reliable. Twice in my professional career I’ve found that I’ve gone off the boil and not realised until I’ve run my numbers. That’s not happening again.
I’ve also learned that attending to some simple evidence-based therapy practices, together with paying systematic attention to my numbers, significantly improves my ending and outcome data. It’s not that I slavishly adopt a ‘one size fits all’ approach to measurement; it’s more that my approach to monitoring progress with each client is a considered one.
I need no convincing about the value of an evidence base. Twenty-two years ago, at the end of an organisation-wide review of services, it was the strength of our evidence base that saved my service from being terminated or contracted out. Now, I’m no longer obliged to collect data, but I know that the very process of doing so is part of what helps me to guard against complacency.