We have evidence-based treatments in mental health all wrong

Wampold & Imel (2015)

Over the past few years, as I attend conferences and strategic planning meetings on mental health innovation with leading policy and decision makers across North America, I am struck by the persistence of a dominant myth. If you have followed the work of Bruce Wampold, John Norcross or Scott Miller, you will be familiar with the great psychotherapy debate. Essentially, it boils down to the question of whether specific treatment factors or contextual factors are more important for maximizing outcomes. A specific-factors approach rooted in the medical model has dominated for decades and led to the often-touted primacy of cognitive behavioural therapies. This is despite the fact that virtually all therapies, when practiced with care, produce similar outcomes. It remains puzzling to me why policy and decision makers stay blind to the decades of outcome research showing that, at best, specific treatment factors (i.e., the techniques attributable to any theoretical model or treatment manual) account for only 15% of outcomes. The remainder is due to extra-therapeutic factors (e.g., patient circumstances and motivation, at roughly 40%) and the therapeutic context (e.g., therapeutic alliance, access, modality, and expectations, at roughly 45%).

So, this means that true evidence-based treatment (i.e., treatment with the potential to maximize outcomes) is unrelated to whether you apply CBT, acceptance and commitment therapy, DBT, interpersonal, solution-focused, or any of the hundreds of other techniques that have demonstrated empirical support. What really drives outcomes is attention to unique client circumstances and the context in which care is provided. Stepped Care 2.0 maximizes attention to these contextual factors.

This is not to say that techniques are independent of contextual factors. They are intertwined. To convince a client that an approach is worth considering, an intervention (i.e., some sort of reasonable technique) must be applied persuasively. This, of course, requires careful attention to client differences, cultural context, modality and timing of access. Equally important is constant adjustment of treatment, selecting from the extensive buffet of options whenever monitoring indicates a change is needed.

The debate is over. Common factors won. Practice-based evidence through continuous, client-facing monitoring is far more valuable than fidelity to any one evidence-based treatment manual. So, can we please end the hype about evidence-based techniques and fidelity to a single model that has dominated our field without merit?

Instead, let’s invest in technological infrastructure that supports a flexible system with wide-ranging options, guided by frequent, continuous and objective monitoring of what matters most to people – improving their functioning and overall well-being.

Having won the debate, we need to change how we design, implement and evaluate programming. A shift is happening: pragmatic trials of systems of care are starting to be recognized as more valuable than randomized clinical trials of specific treatments. But we don’t need to wait for the results. Instead, we need philanthropic and government investment in infrastructure that supports what is already evidence-based: feedback-informed treatment.