The Psychotherapy Myth
Psychotherapy's founders were spectacularly wrong about mental illness. And today, a psychotherapist might be little more than an expensive placebo.
Written by Bo Winegard and Ben Winegard.
Humankind cannot bear very much reality
— T. S. Eliot.
From “Ordinary People” to “Good Will Hunting,” from “Law and Order” to “Shrinking,” from Woody Allen to Prince Harry, from the chatter at cocktail parties to the advertisements on popular podcasts, therapy pervades modern culture. And with it, a myth—the psychotherapy myth. Like other myths, the psychotherapy myth is not the product of one or even a few geniuses, though Sigmund Freud may be its Homer and its Hesiod. It lingers over our culture like miasma around a swamp; we breathe it from birth. It is so ubiquitous that it is virtually invisible. Indeed, many who have absorbed it and whose worldviews are shaped by it would not explicitly endorse it—and may even explicitly reject it.
The chief content of this myth is that people often cannot process or work through adverse events and traumas—abuses, breakups, firings, humiliations—and sometimes even repress the memories because they are too painful for the psyche to assimilate. But repressed or poorly processed traumas do not simply subside; they fester, and they spread, causing further psychological pain and maladaptive behaviors. Time alone, it seems, does not heal psychic wounds. But if the sufferer works through the trauma, potentially recovering repressed or degraded memories, she can understand and perhaps even eradicate the sources of her misery. Thus, the talking cure is indispensable, and a stoic embrace of silent suffering, once lauded, is not only a species of misguided masculinism but also inimical to mental health.
The psychotherapy myth is often coupled with, though sometimes contradicted by, another pervasive myth, the brain-chemical imbalance myth. According to this myth, depression is not caused by repressed trauma (at least, that is not the essence of depression) but rather by a chemical imbalance, perhaps especially an imbalance of serotonin. The talking cure might work, but only if it restores chemical equilibrium; and often therapy is not enough. Antidepressants are needed. These alleviate despair, lethargy, and the other myriad symptoms of depression by increasing available serotonin and other relevant neurotransmitters (depicted memorably in a Zoloft commercial).
In the past twenty years, scholars and concerned intellectuals have subjected this brain-chemical imbalance myth to withering criticism, noting that the widespread view that low serotonin levels cause depression is likely erroneous, that many of the pharmaceutical commercials about depression are simplistic and misleading at best, and that we have good reasons to be skeptical of popular antidepressants and the model of depression that motivates and justifies them. Scholars have assailed this imbalance myth because they think it is pernicious, wasting time and resources and potentially leading to the consumption of habit-forming and ineffective drugs whose side effects are often unpleasant. But they have been less energetic in attacking the psychotherapy myth, perhaps because they believe it is less dangerous, less dishonest, less propagandistic. After all, what could be so bad about talking to a trained adult about one’s miseries and insecurities?
But the psychotherapy myth might be equally harmful and more insidious. It may create iatrogenic illness by encouraging people to see themselves as fragile and incapable of dealing with the slings and arrows of everyday life. It may promote, even if inadvertently, the belief in repressed memories, a belief that has ruined many lives and sundered many relationships through false accusations. It may inculcate an atomistic view of humans and human suffering, diverting investment from stable institutions and strong communities to expensive therapists who are ultimately little more than glorified social supports. And it may be a kind of social cosmetic used to color the pallid face of a diseased society, distracting us from the psychological toll inflicted by years of dissolving communities and declining social capital.
Of course, the primary claim made on behalf of psychotherapy is that it works: It improves mental health. Indeed, many people, from counselors and social workers to patients and ordinary citizens, believe that therapy is helpful and effective. And the painful symptoms of those who attend therapy for mood (often depression, which we will focus on in this article) and anxiety disorders are often alleviated. However, many things may cause this improvement, such as:
The natural course of the disease. Depressive episodes wax and wane depending upon both external and still unknown internal factors. If a person experiences one or more stressful life events (e.g., the death of a loved one or the loss of a job), he or she is more likely to go to therapy. As distance from the stressful event grows, the symptoms tend to subside. Hence, a person who went to therapy immediately following a catastrophic life change may believe that his or her better mood six months later was at least partially caused by the therapy, post hoc, ergo propter hoc.
Spontaneous remission. Depression is often a chronic ailment with remission, relapse, recovery, and recurrence. Therefore, many individuals who do not seek treatment will experience remission (whose precise mechanisms are unknown). In one meta-analysis, for example, researchers found that 23% of cases of untreated depression remit within 3 months; 32% within 6 months; and 53% within 12 months. Thus, roughly half of a random selection of patients who attend therapy for a year will experience remission even if the therapy itself does nothing. And this would seem to be impressive evidence of the efficacy of therapy, post hoc, ergo propter hoc.
The Hawthorne effect. The Hawthorne effect describes the phenomenon whereby the behavior of observed individuals changes because they know they are being observed. For a crass example, on reality television shows, the behavior of the participants is likely altered (often significantly) by the knowledge that they are being observed by camera and crew. This matters in more refined and consequential settings as well. In a psychotherapy trial, for example, both the patient and the therapist may change their behavior simply because they know they are being observed, inflating the apparent efficacy of the therapy.
The placebo effect. A placebo effect is an effect produced by a drug or treatment that cannot be imputed to the medicinal properties of the drug or treatment and thus must be imputed to the patient’s beliefs about the efficacy of the treatment. In depressed patients, for example, the expectation or hope of improvement, or confidence in the effectiveness of therapy (or of an antidepressant), may significantly alleviate the symptoms of depression. The remedial effect is caused not by the specific components of the therapy (or medicine) but by the psychological states of the patient. These effects can be quite large. For example, in antidepressant trials, the average symptom change on the 17-item Hamilton Depression Rating Scale (HDRS-17) in the placebo group is roughly 9 points; in the antidepressant group, it is roughly 11 points. How much of this is a true placebo effect versus other nonspecific treatment effects, spontaneous remission, and regression to the mean is unclear.
Furthermore, some scholars argue that hope and treatment expectations are legitimate common factors and that therefore a therapy without a placebo effect is an unnecessarily etiolated version. Nevertheless, if the claims of many therapists and the psychotherapy myth are correct, then the specific components of psychotherapy should have potency beyond the placebo effect. That is, if the therapist is more than a handsomely remunerated social partner or an expensive hope-generating machine, then the therapy itself should matter.
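As a back-of-envelope illustration of the antidepressant-trial figures cited above (a sketch only, since the split of the nonspecific portion among placebo response, spontaneous remission, and regression to the mean is, as noted, unknown):

```python
# Back-of-envelope decomposition using the HDRS-17 figures cited above.
# How the nonspecific 9 points divide among true placebo response,
# spontaneous remission, and regression to the mean is unknown.
change_drug_arm = 11.0     # average HDRS-17 improvement with antidepressant
change_placebo_arm = 9.0   # average HDRS-17 improvement with placebo pill

specific_effect = change_drug_arm - change_placebo_arm
nonspecific_share = change_placebo_arm / change_drug_arm

print(f"Drug-specific improvement: {specific_effect:.1f} HDRS points")
print(f"Nonspecific share of total improvement: {nonspecific_share:.0%}")
# -> 2.0 HDRS points specific; roughly 82% of the improvement is nonspecific
```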
This complexity raises a troubling problem: How can we know how effective therapy is? Are we compelled to rely upon the self-interested testimony of therapists and the therapy industry? Or the potentially misguided or mistaken testimony of patients?
No. Instead, we can rely upon one of the most powerful designs in medical science: the randomized controlled trial (RCT). The idea is straightforward. In the real world, we cannot discern the effectiveness of a treatment for any individual because he or she either receives treatment or does not (and not at random) and either improves or does not; we do not have access to his or her counterfactual. Suppose, for example, that Rebecca is depressed. And after hearing hundreds of encouraging commercials on her favorite podcasts, she goes to therapy. After six months, she feels much better, even happy. Did therapy help? We cannot know, because we cannot know what would have happened had she not gone to therapy. Perhaps her symptoms would have disappeared without treatment.
Thus, in a randomized controlled trial, we randomly assign patients to control and treatment conditions. The only difference between the two groups is (or should be) the presence or absence of the proposed treatment mechanism. And we can estimate the average effectiveness of the treatment by subtracting the control group’s average outcome from the treatment group’s.
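As a sketch of that logic, consider a simulated trial in which every number (the size of the nonspecific improvement, the assumed two-point effect of therapy) is invented purely for illustration; the point is only that randomization lets the difference in group means stand in for the unobservable counterfactual.

```python
import random

random.seed(0)

# Hypothetical sketch of the RCT logic described above. No patient's
# counterfactual is observable, so we randomize and compare group means.
# All numbers (the 8-point nonspecific improvement, the 2-point "true"
# effect of therapy) are invented for illustration.
def symptom_change(treated: bool) -> float:
    """Simulated HDRS-17 improvement after six months."""
    nonspecific = random.gauss(8.0, 4.0)   # remission, placebo, time, etc.
    specific = 2.0 if treated else 0.0     # assumed true effect of therapy
    return nonspecific + specific

n_per_arm = 100
assignments = [True] * n_per_arm + [False] * n_per_arm
random.shuffle(assignments)                # random allocation to the two arms

treated = [symptom_change(a) for a in assignments if a]
control = [symptom_change(a) for a in assignments if not a]

ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated average treatment effect: {ate:.2f} HDRS points")
```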
The control here is crucial, because a researcher can easily inflate the effectiveness of an intervention by picking an inadequate or misleading control condition. For example, because the placebo effect can often be large, a randomized controlled trial of antidepressants whose control group did not take a pill (received no treatment, in other words) would likely exaggerate the effectiveness of the antidepressant by conflating the drug’s true pharmacological effect with the placebo effect. The control group should be as similar as possible to the treatment group. In antidepressant studies, for example, a control group that takes an active placebo, i.e., a placebo that mimics the side effects of antidepressant pills, is probably preferable to one that takes an inert placebo.
When examining the efficacy of psychotherapy for depression, researchers often use one or both of two outcome measures: the Hamilton Depression Rating Scale (HDRS) and the Beck Depression Inventory (BDI). Although these are reasonable measures, it is worth noting that their real-world significance is still debated (e.g., how much does a score have to change to matter?). The best way to assess the overall effectiveness of psychotherapy is to examine meta-analyses, articles that collect and combine the effects of published and unpublished studies to estimate an overall effect size. Of course, like any instrument, meta-analysis is limited and can be used badly. But it remains indispensable for scholars and laypeople alike.
Smith and Glass conducted the first meta-analysis of the efficacy of psychotherapy in 1977. They compiled data from 375 studies and estimated an overall effect size of d = 0.68, which is between medium and large by Cohen’s convention. In 1980, Smith, Glass, and Miller followed this with a book-length treatment in which they meta-analyzed 475 studies, finding an overall effect size of d = 0.85, which is large by Cohen’s convention and means that the average person in therapy would be better off than 80% of untreated patients.
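The “better off than 80% of untreated patients” figure is the standard normal-overlap reading of a standardized mean difference (sometimes called U3): assuming roughly normal outcomes with equal spread in both groups, it is the normal CDF evaluated at d. A minimal sketch of that conversion:

```python
from statistics import NormalDist

# U3 reading of a standardized mean difference (Cohen's d): the proportion
# of the untreated group scoring below the average treated patient, assuming
# normal outcomes with equal variance in both groups.
def u3(d: float) -> float:
    return NormalDist().cdf(d)

for d in (0.68, 0.85):
    print(f"d = {d:.2f} -> average treated patient does better "
          f"than {u3(d):.0%} of untreated patients")
# d = 0.85 -> roughly 80%, the figure quoted above for the 1980 meta-analysis
```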
Recent meta-analyses generally find effect sizes between 0.5 and 0.9, which are medium to large. Furthermore, and perhaps surprisingly, the particular modality or type of psychotherapy (e.g., psychodynamic, cognitive behavioral, behavioral activation treatment, interpersonal psychotherapy) does not seem to matter. Effect sizes are similar for all bona fide therapies. However, most studies of comparative effectiveness are underpowered to detect clinically significant differences, so this should be interpreted with caution. As of now, the evidence for specific modality effects is exiguous.
Not only does modality seem unimportant, but short-term therapy may be as effective as longer-term therapy; psychotherapy delivered through video may be as effective as in-person therapy with similar attrition rates; and even psychotherapy delivered through the telephone may be as effective as video or in-person therapy.
These results appear to be powerful confirmation of the fundamental premise of the psychotherapy myth: The talking cure works. Hundreds of meta-analyses and perhaps thousands of randomized controlled trials have demonstrated this. The skeptic might, however, point to the consistency of the effectiveness across many modalities and interfaces as evidence that the psychotherapy story is more complicated and more subtle than the therapy industry would like. After all, the depiction of the therapist as a highly trained psychological surgeon who brings a unique and difficult-to-find skill set to the problems of the psyche is hard to sustain if the type of therapy he or she is practicing is irrelevant. On the other hand, if therapy works, it works. That’s good enough. The therapist, like the pharmacist, may not understand the intricate nature of her medicine, but she does not have to.
However, there are myriad potential problems with randomized controlled trials and the meta-analyses built from them that we must inspect before we conclude that these moderate to large effects are real.
The first and perhaps most important factor that distorts the literature is publication bias. Publication bias occurs when the results of published studies differ systematically from the results of unpublished studies; it is most commonly caused by a preference for publishing novel, interesting, or positive results over null results. Researchers who conduct laborious and expensive studies of a new therapy may be less motivated to write a paper if the therapy is no more effective than treatment as usual; and journals may be less willing to publish it.
Evidence suggests that publication bias significantly inflates the effect sizes in the therapy literature. For example, Driessen and his colleagues examined grants awarded by the US National Institutes of Health to fund randomized trials of psychological treatments between 1972 and 2008. They found that 13 of 55 funded trials did not result in publication; the unpublished studies had a small effect size of Hedges’ g = 0.20, compared to a moderate effect size of g = 0.52 in the published studies. When the unpublished studies were added, the overall effect size declined by 25% to g = 0.39.
Another review by Cuijpers and colleagues of 175 comparisons between psychotherapy and control conditions on adult depression found an effect size of 0.67 that was reduced by 37% to 0.42 after adjusting for publication bias. A conservative estimate, therefore, is that the efficacy of psychotherapy in the published literature is exaggerated by roughly 30% in meta-analyses that do not explicitly correct for publication bias.
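To see mechanically how a file drawer of null results pulls a pooled estimate down, here is a toy fixed-effect meta-analysis in which studies are weighted by inverse variance; the study-level effects and standard errors below are invented for illustration and are not the Driessen or Cuijpers data.

```python
# Toy fixed-effect meta-analysis illustrating how adding unpublished null
# results shrinks a pooled effect size. All study-level numbers are invented.
def pooled_effect(studies: list[tuple[float, float]]) -> float:
    """Inverse-variance-weighted mean of (effect_size, standard_error) pairs."""
    weights = [1.0 / se**2 for _, se in studies]
    return sum(w * g for w, (g, _) in zip(weights, studies)) / sum(weights)

published = [(0.60, 0.15), (0.55, 0.20), (0.45, 0.18), (0.50, 0.22)]
unpublished = [(0.10, 0.25), (0.05, 0.30), (0.20, 0.28)]  # the "file drawer"

print(f"Published only:          g = {pooled_effect(published):.2f}")
print(f"Published + file drawer: g = {pooled_effect(published + unpublished):.2f}")
# -> roughly 0.53 versus 0.43 with these invented numbers
```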
Allegiance effects are a further source of potential bias. When randomized controlled trials are conducted by researchers who are partial (who have allegiance) to a particular type of therapy, this may inflate the effect size through questionable research practices (QRPs) or other subtle tactics (e.g., training the therapists delivering the favored therapy more thoroughly than those delivering the alternative); allegiance may also bias therapists, therapist supervisors, and the editors and reviewers at journals. One notorious study found that 69% of the variance in treatment outcomes across psychotherapy studies was accounted for by researcher allegiance. This is important because most meta-analyses do not report allegiance.
Another potential cause of bias is selective reporting. This is when a study reports only a subset of the analyses the researchers conducted. Often, the reported subset shows a larger effect than the analyses that go unreported. In a psychotherapy RCT, for example, researchers may use multiple measures of depression (quality-of-life measures, the HDRS, the Beck Depression Inventory, and so on), and they may report only the measures that differed significantly between the treatment and the control groups.
Selective reporting is virtually impossible to detect in studies that are not correctly registered, but now that pre-registration has become widespread, researchers can examine the effects of selective reporting. One investigation of psychotherapy RCTs published in the highest-impact-factor journals between 2010 and 2014 found that only 13 of 112 trials (11.6%) were correctly registered and reported; of these 13, seven showed evidence of selective outcome reporting. Another investigation, covering 2005 to 2020, found that 13 of 75 registered studies engaged in selective reporting, which inflated the effect sizes from 0.54 to 0.81.
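A small simulation can illustrate how selective outcome reporting alone manufactures an apparent effect where none exists. Everything below (the trial size, the five outcome measures, the independence of those measures) is a simplifying assumption, not a description of any actual trial.

```python
import random

random.seed(1)

# Toy simulation of selective outcome reporting: each trial measures a truly
# null treatment on five outcome scales and then reports only the scale with
# the largest apparent effect. Outcomes are treated as independent for
# simplicity; all numbers are invented.
def trial_effects(n_outcomes: int = 5, n_per_arm: int = 30) -> list[float]:
    """Apparent standardized effects (true effect = 0) on each outcome."""
    effects = []
    for _ in range(n_outcomes):
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        effects.append(sum(treated) / n_per_arm - sum(control) / n_per_arm)
    return effects

trials = [trial_effects() for _ in range(2000)]
primary_only = sum(t[0] for t in trials) / len(trials)    # pre-registered outcome
best_of_five = sum(max(t) for t in trials) / len(trials)  # cherry-picked outcome

print(f"Average apparent effect, primary outcome only:  {primary_only:.2f}")
print(f"Average apparent effect, best of five outcomes: {best_of_five:.2f}")
# -> roughly 0.00 versus 0.30, despite a true effect of zero
```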
A final potential cause of bias, one that requires more philosophical contemplation than the others, is comparison with an inadequate or weak control group. Many different control conditions can be used in psychotherapy trials, but the four major categories are: (1) No treatment, in which participants are given assessments and minimal therapist contact and know they are not receiving treatment; (2) Waiting list, in which participants know they will receive treatment after a waiting period; (3) Psychological placebo, in which participants spend the same amount of time with a therapist but without any specific therapeutic techniques; and (4) Placebo pill, in which participants are given an inactive pill.
The control condition can lead to improvement, to no response, or even to negative effects (a nocebo effect). Research has demonstrated that the waiting list may produce a nocebo response: it is less effective than no treatment (i.e., participants in the no-treatment group generally do better than participants on the waiting list). This may be because individuals in the waiting-list condition know they will receive therapy in the future and do not attempt any life changes in the interim. Thus, the choice of an appropriate control is crucial.
Predictably, given the opportunity for bias to creep into the published literature, analyses that carefully consider potential biases find significantly smaller effect sizes than those that do not. For example, a 2009 meta-analysis by Cuijpers and colleagues investigated 115 controlled trials of psychotherapies for depression using eight quality criteria. The overall effect size was 0.74, consistent with other meta-analytic results; however, for studies that met all eight quality criteria, the effect size was 70% smaller at 0.22.
Similarly, a 2019 meta-analysis by Cuijpers and colleagues that examined 325 comparisons between psychotherapy and control conditions in randomized trials on depression found an overall effect size of 0.70. But when the analysis was restricted to Western countries, the effect size was 0.63. And when studies that used a wait-list control group were excluded, the effect size was 0.51. And when studies with moderate to high risk of bias were excluded, the effect size was 0.38. And, finally, after correcting for publication bias, the effect size estimate shrank to 0.31. (See figure 1.)
Relatedly, a meta-analysis that compared psychotherapy for depression to a placebo pill, potentially the best control group to ascertain the real effect size beyond the placebo effect, found a small effect size of 0.25, which translates to 2.66 points on the Hamilton Depression Rating Scale and 3.20 on the Beck Depression Inventory. These numbers are below or nearly below the estimated threshold for the minimally important difference, i.e., the difference that represents a meaningful subjective change for the patient on the HDRS and BDI.
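For readers wondering how a standardized effect size becomes points on a rating scale, the conversion is simply d multiplied by the standard deviation of the outcome measure. The standard deviations in the sketch below are assumed values chosen to roughly reproduce the figures quoted above, not numbers taken from any particular trial.

```python
# Back-of-envelope conversion from a standardized mean difference (d) to raw
# scale points: raw difference = d * SD of the outcome scale. The SDs below
# are assumed values chosen to roughly match the figures quoted above;
# actual trial SDs vary.
d = 0.25
assumed_sd = {"HDRS": 10.6, "BDI": 12.8}

for scale, sd in assumed_sd.items():
    print(f"{scale}: {d} * {sd} = {d * sd:.2f} points")
# HDRS: 0.25 * 10.6 = 2.65 points
# BDI:  0.25 * 12.8 = 3.20 points
```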
These results suggest that the real effect of therapy is small and quite possibly below clinical relevance for patients. Furthermore, corrections for known biases still leave residual bias in RCTs of psychotherapy. And since the effects of psychotherapy vis-à-vis a placebo pill are small, removing this residual bias might further diminish the effect of psychotherapy not just to clinical but also to statistical insignificance. (For just one potential source of residual bias: Patients in psychotherapy RCTs cannot really be blind since they know they are seeing a therapist.)
Charitably, we can estimate that the real effect of psychotherapy on depression is between 0.10 and 0.40 and is of dubious clinical significance.
Another—messier and more ambiguous—way to probe the real effects of psychotherapy is to examine depression in the world since the rise and spread of psychotherapy. If psychotherapy were an effective treatment, one would expect declines in depression and suicide rates and improvements in mental health in the United States, absent countervailing forces. And if we do not see these declines and improvements, then this should at least engender some skepticism about the efficacy of psychotherapy. After all, if ambitious neuroscientists claimed that some new and widely available potion would increase human intelligence but ten years later, human intelligence was the same, we would be dubious of the potion’s effectiveness.
Depression rates have not, in fact, declined since the 1970s and possibly not since the 1950s. Indeed, some evidence suggests that depression prevalence has increased since the 1990s, though not uniformly. Rates have risen recently especially among adolescent girls, an alarming trend some attribute to social media use, though the etiology remains debated. Similarly, suicide rates have not declined since the 1950s. In 1959, the US rate was 12.3 per 100,000; in 2017, it was 14.0 per 100,000. And lastly, reports of subjective well-being have also remained stable since 1972, with a slight negative trend (and a more significant dip during Covid).
Of course, the psychotherapy defender might contend that mental health would have cratered during this period if not for the discovery and promulgation of psychotherapy. But when these data are considered alongside the data from carefully conducted randomized controlled trials, the overall effectiveness of psychotherapy on depression appears unimpressive and, at minimum, should cause some discomfort to the advocate of the psychotherapy myth. A more measured response from a defender might be: “Well, sure, it’s not so effective as we would like, but it’s better than nothing. And it’s not painful. It’s not intrusive. So, what’s the problem?”
But this is not the way to judge treatments or social policies, since the alternative to any treatment or policy proposal is not nothing. Even if we stipulated that psychotherapy has a small but real effect, the claim that it is therefore good, important, or even defensible does not follow. We know, for example, that myriad other treatments for depression have similar effect sizes, including antidepressants, vitamin D supplements, dietary improvements, Omega-3 PUFAs, and exercise. The claim is not that all these alternatives have real effects, but rather that they have roughly equivalent effects to psychotherapy in studies. What is more, exercise and dietary change, among other alternatives, undoubtedly have salubrious concomitant effects, including weight loss and increased mate value. (See figure 2.)
Furthermore, psychotherapy is often expensive, crowds out other treatments, potentially discourages other changes, and promotes a stultifying and erroneous myth about the human mind.
Patients can expect to pay somewhere between 60 and 250 dollars per hour for a therapy session. (A jog in the park, a church service, a long walk in the woods are all, of course, free.) In many cases, insurance covers part of this; however, therapy can still be expensive for the patient, and therapists—men and women who often espouse dubious, even risible theories—are handsomely remunerated. There is something unseemly about an industry that generates upper-middle-class jobs at the expense of desperate people while often promoting ideas that are so ludicrous that even ardent defenders disavow them with embarrassment. (The gap between what research-oriented psychologists believe and what practicing therapists promote is often quite large.)
Contrary to the claims of the psychotherapy myth, humans can be resilient and tough-minded; they can suffer the slings and arrows of life without expensive interventions from “experts.” And in many cases, they do not (and perhaps should not) need to dwell on, ruminate over, and talk about their pain. Of course, life is difficult, even tragic. Suffering and loss and death are inevitable.
Thus, a healthy culture should teach that life is often full of misery, dashed hopes, and thwarted desires; it should teach that agony, anguish, and despair are ineradicable parts of the human experience, not aberrant or fleeting intrusions; it should encourage more stoicism, more discipline, more sacrifice; and it should discourage cosseting, indulgence, and morbid contemplation. Reflecting obsessively upon grievances and hardships, like constantly fiddling with a wound, is unwholesome.
Furthermore, the idea that understanding the cause of one’s suffering is the key to curing it is dubious. Getting terminated from a high-paying job might reliably cause misery, but ruminating on one’s termination is unlikely to dissipate one’s depression—and may exacerbate it. The stubborn fact remains that we do not know much about the nature of depression; but we do know that the theories from which modern psychotherapy arose are wrong. And we have reason to believe that the mismatch between modern society and our evolved brains is a prominent—though certainly not the only—reason for pervasive mental suffering. Often, the disease is not in the head, but in the society. And thus, even if psychotherapy were highly effective, it might be a dangerous distraction.
The idea that the good therapist is a highly skilled mental engineer who knows how to manipulate the complicated machinery of the human psyche has been memorably promoted in movies such as “Ordinary People,” and, if it were true, it might justify the exorbitant salary some therapists command. But alas, it is no truer than the Freudianism that spawned it; and despite its veneer of sophistication and scientism, psychotherapy ultimately remains a human interaction, purchased at great expense to the patient and perhaps to society.
People will always want to talk to other people about their miseries and insecurities, flaws and failures, hopes and dreams; and counselors and therapists will remain employed into the foreseeable future. Some may even do considerable good. But we hope they will drop the pernicious mythology, the exorbitant prices, and the complicated and often unnecessary licensing system and recognize the simple but tragic fact that many people are desperate for sympathetic social partners and will pay a lot of money for them. What is needed is not more expensively trained experts, but more real social relationships.
Bo Winegard is the Executive Editor of Aporia.
Ben Winegard is an independent writer and researcher. He holds a Ph.D. in Developmental Psychology from the University of Missouri.
There is much to criticize about the modern psychotherapy industry and culture overall, but I think this piece misses the mark. You seem to be arguing against an outdated neo-psychoanalytic caricature instead of what most therapy is actually like. Your focus on depression is also narrow, and it seems like your real beef is with the vaguely unhappy "worried well" going to therapy instead of doing other things with their lives. Psychotherapy is something that some people find useful and other people don't, and some people find it even distasteful, which is fine. But if you looked at this through the lens of, say, exposure and response prevention (ERP) for obsessive-compulsive disorder you might come away with a somewhat different impression as to psychotherapy's goals and efficacy. You may also underestimate the extent to which some people really have difficulty getting through life. I think this is something many people forget.
Anyway, I'm driven crazy too by the intrusion of therapy-speak into ordinary life, and people bragging about going to therapy, and all those excesses. Some of this could be blamed on therapists but I think it is just as much a product of the meme-amplification-machine that is our current culture. Important to make fine distinctions and not throw the baby out with the bathwater.
I am somewhat sympathetic to the authors' position that psychotherapists often promise too much and hold on to their patients for too long. Particularly with mildly distressed patients, therapy becomes the purchase of friendship.
However, I strongly disagree with their characterization of the serotonin hypothesis, which was invented before we had the tools to measure anything going on inside the brain. This is not pop psychiatry, but a caricature promoted by anti-psychiatrists. I spent almost 40 years as a psychologist in academic psychiatry and family medicine departments. The only time I ever heard depression described as a serotonin deficiency was by drug reps who were hired for their youth and beauty, not their intelligence.
See my A Critical Look at the Impact of Joanna Moncrieff’s “Chemical Imbalance” Umbrella Review, https://jimcoyneakacoyneoftherealm.substack.com/p/a-critical-look-at-the-impact-of