Easy pickings: Establishment critiques of MH innovations

You know that your innovations are making headway when the establishment sharpens its attacks. I notice some of my colleagues are taking aim at mental health apps, walk-in counselling, and 30-minute sessions.

Just last week, some of my colleagues questioned the value of e-mental health programs. This was in response to an article on e-mental health published in the Chronicle of Higher Education (CHE). While the article was balanced, skepticism abounds. A common criticism is that help seekers prefer face-to-face, in-person care delivered the same way for more than 100 years. This is not surprising. Most people prefer the familiar over something new, especially when feeling vulnerable.

A meta-analysis on mental health apps reported typical drop-out rates of 50%. The implication is that those who dropped out did not get better. A further implication is that high drop-out rates mean we cannot trust the research. Sure, there are some legitimate questions. But we don't know what happened to those who dropped out, so how can we deem this a failure? Consider conventional face-to-face therapy: the modal number of visits is one, and approximately 20% of clients do not come back. Does that mean success or failure? The simple answer is that we haven't studied it enough.

What does a drop-out rate mean? How do you calculate one? When someone stops seeking treatment, does that mean failure? Do all symptoms have to be resolved? What if clients got enough help to improve their functioning? Surely that constitutes success.

One critic recently warned (in another CHE edition), "this idea that you can provide treatments in a 30-minute, one-time appointment that's going to meet the needs of people with mental-health concerns is not accurate." A sweeping judgement, especially given the growing number of studies contradicting it.

Why not offer both single-session and longer-term treatment? Why not have apps and face-to-face counselling? Doesn't the fact that 50% did not drop out mean something?

Instead of taking potshots at innovation, why not set up a solid, continuous therapeutic measurement system? Recently, at a conference session with over 100 of my colleagues in attendance, I asked for a show of hands on clinic-wide use of a popular but somewhat unwieldy outcome monitoring tool. Most said they used it. Then I asked how often. Most said every third or fourth visit. Not particularly helpful, given that most clients come fewer than four times. We can do a lot better with therapeutic measurement.

Just like everyone else, evidence-based psychotherapists are biased. Some do not want apps to work. Others do not want 30-minute sessions to work. These innovations, understandably, will be seen as a threat to livelihoods. Some anger may be justified. After all, we are not training our workforce to apply innovations or practice feedback-informed treatment.

While I have made some enemies saying this, the emperor has no clothes. There is no evidence indicating that a doctoral-trained therapist is any better than a master's-trained one. There is even some data showing that junior trainees perform better than seasoned therapists.

It is unethical to shut down innovation when so few are able to access expensive mental health resources. It is premature, elitist, and self-serving for therapists to reject alternatives in favor of an unsustainable one-size-fits-all solution. It is unethical to invest more in the same old thing.

Instead of more meta-analyses, we need better outcome monitoring to support the discovery and implementation of promising new practices. We need more options in order to reach more people. Client preference, the capacity to choose among multiple treatment options, is a good predictor of outcome. But with only one largely inaccessible option – conventional psychotherapy – our collective impact on the health and well-being of our society will be negligible.