Jun 22, 2022

The Dogs Wouldn't Jump.

How did the theory of learned helplessness contribute to the cognitive revolution?

Ideas for a future worth wanting. Social science. Philosophy. Culture.

Written by Matthew Archer.

“… a tap on the psychological jugular will produce compliance.”

— C.I.A. Torture Manual

The dogs were first tortured. But the thing that took a while to figure out was why they didn’t just jump to avoid the electric shocks. It was the early sixties, and a group of psychologists at the University of Pennsylvania were trying to figure out how fear conditioning influenced the ability of organisms to learn (the scientists preferred the all-encompassing but colder ‘organism’ to ‘animal’). Among them were two promising young PhD students who couldn’t have known the career-defining journey they’d just begun, let alone the impact their research would have. Their names were Martin Seligman and Steven Maier, both self-avowed dog lovers. They would later recall the harrowing experience of restraining the dogs in cloth hammocks with holes cut for their legs, allowing the researchers to shock their paws.

With hindsight it was the price to pay for a scientific breakthrough that would help to fight depression, suicide, and post-traumatic stress. But considering nobody knew that at the time, you’d be forgiven for thinking this was all rather…dark. Before a dog was shocked, it would hear a tone. Twenty-four hours later it was placed into something called a shuttlebox, a container with two chambers divided by a short barrier. To escape the shock, all the dogs needed to do was to jump over the barrier. But jump they would not. 

Before this experiment most psychologists had assumed that animals were only capable of associative learning—you put your hand on a hot stove and quickly learn to associate hot stoves with eye-watering pain. Psychologists had pretty good reasons to believe non-human animals were not capable of much more than this. Incredible (and at times rather unbelievable) experiments had been carried out for decades—the type you’d see in a farfetched spy movie. During World War II, for example, the famous Harvard behaviourist B.F. Skinner worked on a program called Project Pigeon. His goal: to create pigeon-guided missiles. He conditioned the birds to peck at a target on a screen that would steer the missile to its destination. The military cancelled the project, but according to Skinner the associative learning had worked. To develop battle-hardened birds, he’d fired pistols next to their heads, put them in a pressure chamber to simulate an altitude of 10,000 feet, spun them in a centrifuge, and simulated shell bursts with blinding lights. Nothing stopped the pigeons pecking. Such was the success of the behaviourists’ experiments that a joke went round about Skinner being conditioned by the Harvard undergraduates he taught. Whenever he moved to the left corner of the lecture theatre, so the story went, the students would become visibly engaged. Eventually Skinner wound up frozen in the corner.

[Photo: B. F. Skinner at Harvard, circa 1950]

In the late 1940s, a psychologist named Orval Hobart Mowrer formulated a theory about the type of associative learning Seligman and Maier were investigating with the dogs. He called it the two-factor theory of avoidance. Mowrer had performed his own electric shock experiments, and his recipe for producing avoidant behaviour was elegant and intuitive. First, by shocking the animal, you would generate fear through pain, which could then be linked to other stimuli present simultaneously. This is the idea known as classical conditioning: when the animal encountered those stimuli again, they would evoke fear on their own. Fear is what the behaviourists call an ‘aversive emotion’. A response that removes the fear-evoking stimuli, say by jumping over a barrier, will be negatively reinforced. This is called instrumental conditioning.

Thus Seligman and Maier assumed, in line with Mowrer’s theory, that the dogs would avoid the shocks and that the reward for doing so (not being shocked) would increase the avoidant behaviour. When the dogs just lay there passively waiting for the shock to stop, Seligman and Maier, then in their mid-twenties, began to wonder whether the big names of behaviourism had gotten something fundamentally wrong about the animal mind. They realised a radical leap was required.

The leap they made was toward cognitive theory. On this view, the dogs were not merely learning responses to stimuli; they were acquiring knowledge and understanding through thought, experience, and the senses. In other words, the dogs had developed the belief that nothing they did mattered. Their helplessness was learned. This is to say they were not objectively helpless; they could, if they wanted to, escape the shocks.

By 1967, Seligman and Maier were ready to fire their first salvo in the behaviourist-cognitivist wars inaugurated by Jean Piaget and Noam Chomsky a decade or so prior. It was time to publish. To their surprise, their tersely titled paper—“Failure to Escape Traumatic Shock”—was accepted by the prestigious, but deeply conservative, Journal of Experimental Psychology. The reviews contained one criticism: “paralyse is usually spelled with a ‘z’ ”. 

Then it was time to engage the leading behaviourists at the prestigious Princeton conference. After their presentation, Skinner’s former student, the Harvard scholar Richard Herrnstein, captured the theoretical skirmish Seligman and Maier had waded into:

“You are proposing that animals learn that responding is ineffective. Animals learn responses; they don’t learn that anything.”

The radicalism of Seligman and Maier’s cognitive theory cannot, therefore, be stressed enough. They were suggesting that animals could learn propositions, specifically that dogs could detect the futility of responding and thus expected that any future shock would be independent of their actions. Detect and Expect, a new two-factor theory of avoidance—vastly more sophisticated than learning specific responses to specific stimuli. As the years passed, the behaviourists’ critiques waned under the weight of impressive experiments which cleverly and decisively favoured the cognitive view. 

[Photo: Richard Herrnstein (1930–1994)]

With Maier retraining as a neuroscientist, Seligman drove the research forward. (Although eventually it would be Maier’s work on the brain that revealed the ultimate causes of learned helplessness.) Seligman asked the obvious question: what about humans? In one experiment, he swapped dogs for college students and electric shocks for loud noises. Just like the dogs, most of the college students failed to escape the shuttlebox. So Seligman then swapped the loud noise for solvable and unsolvable anagrams. He got the same results.

Psychologists love theories like this: easy to implement (fun, even), easy to explain to funding agencies, and, most important of all, able to offer at least a partial explanation for a wide variety of social phenomena. Take a paper published in 1980, when the theory was all the rage, titled ‘Alcohol consumption by college women following exposure to unsolvable problems: Learned helplessness or stress induced drinking?’ Just like Seligman, the psychologists split female undergraduates into groups and gave some of them unsolvable problems, inducing the state of learned helplessness. What happened to these students? You guessed it. They drank more beer in a taste-rating task.

So popular was the theory that even the CIA took notice. In the wake of 9/11, the architects of the CIA’s “enhanced interrogation” programme—a euphemism for torture—asked Seligman to give lectures on learned helplessness. Off the back of this and other meetings, Seligman was accused of helping to torture prisoners at Guantanamo and Abu Ghraib, a claim he fiercely denies in his memoir and one that an independent investigation failed to endorse. 

To get a visual representation of the research explosion Seligman and Maier launched, just glance at the Google Ngram graph for printed mentions of the phrase “learned helplessness”.

Dozens of examples illustrate the explanatory range of the theory when applied to humans; here are two:

  • Child neglect: When parents believe they are helpless to stop their child crying, they’re more likely to give up. In the short term, this surrender can lead the child to cry more, thus reinforcing the parents’ helplessness.

  • Ageing: The death of loved ones, the loss of physical independence, and the development of age-related illness can all contribute to individuals neglecting their needs as they feel helpless against Father Time. When this happens, a quicker, steeper decline becomes more likely, thus reinforcing helplessness.

Can you see the allure, the elegance? Learned helplessness not only diagnoses the cyclical conditions we often spiral into, but, in doing so, it offers us a way out. Merely believing that a frustrating stimulus is immutable is enough to make you act as if it were. Being helpless is stressful, and being stressed decreases your problem-solving skills and perhaps even your IQ [other source].

In fact, the last phase of Seligman’s work examined the possibility that learned helplessness was a laboratory model of clinical depression, with eight of the nine symptoms being produced in the experiments with animals and humans. The only exception was suicide.

Later experiments have served to confirm the depressive effect of feeling a lack of control over an aversive stimulus. For example, in one experiment, humans performed mental tasks in the presence of distracting noise. Those who could use a switch to turn off the noise rarely bothered to do so, yet they performed better than those who could not turn off the noise. Simply being aware of this option was enough to substantially counteract the noise effect.

However, the line between objective helplessness and learned helplessness is much easier to draw in the laboratory. In the real world, it’s often not clear-cut. Take an example of something close to my heart: education. This is from the Wikipedia article:

The motivational effect of learned helplessness is often seen in the classroom. Students who repeatedly fail may conclude that they are incapable of improving their performance, and this attribution keeps them from trying to succeed, which results in increased helplessness, continued failure, loss of self-esteem and other social consequences. This becomes a pattern that will spiral downward if it continues to go untreated.

In praising the explanatory power of learned helplessness, we leaped over an important question: how do we know if a student has learned their helplessness or is actually objectively helpless? Take the example of gifted students underperforming because they find the tasks too dull. They could, perhaps, do something about this; they could grit their teeth and march on through the tedium, satisfying their teacher and school. But in the long term, this often has a corrosive effect on their mental health. Ultimately, we have to look at a wider time frame before concluding someone has learned their helplessness. We are not dogs. Human life is infinitely more complex. Outside of the laboratory, there isn’t always a direct analogy to the electric shock and the barrier allowing escape.

Matthew Archer is the Editor-in-Chief of Aporia.