Even a blind hog finds an acorn now and then.
Remember Karl Rove’s “inevitability strategy”?
It’s well-known that Karl Rove believes that swing voters like to vote for the winner. Therefore, one of the central political strategies for Bush has been to create an “aura of inevitability” that, theoretically, will bring people to his side. If everyone believes you’re a political juggernaut, the theory goes, then you will become a political juggernaut. (emphasis added)
Rove believes so completely in this strategy that when Bush II was running against Al Gore in 2000 “[t]o demonstrate his confidence, Mr. Bush traveled at the end of the campaign to California and New Jersey, states firmly in Democratic hands.” The strategic thinking seemed to be that having Bush campaign in states everyone knew he couldn’t win signaled to undecided voters how confident Bush was that he’d take the swing states he actually needed. To Rove, this “signaling” was more important than any substantive campaigning in the last few days might be.
Now, I’ve never been a big believer in the “Karl Rove is a political genius” school of thought. Karl Rove’s success has always seemed to me to stem mostly from his sheer unwillingness to abide by any sense of morality or decency if doing so might blunt his candidate’s chance to win an election. His readiness to simply forsake all previously accepted limitations on what constitutes appropriate campaign conduct amounts to nothing more than a breaking of the (unwritten) rules that – prior to Rove’s appearance – had always been understood to govern the contest. Put another way, Rove wins elections by cheating. This doesn’t make him a genius; it makes him an asshole.
But yesterday, whilst searching for something else entirely, I came across information that indicates ol’ Rover might actually be on to something with his “inevitability strategy.” It turns out that, when it comes to elections anyway, it might actually be possible for a political campaign to “create [its] own reality.”
The book I was looking through is Duncan Watts’s Six Degrees: The Science of a Connected Age, which is essentially an exploration of “network science”: how networks – particularly information networks – arise, why they arise, the common features almost all networks share, etc. Chapter 7 deals with the way in which people make decisions in the absence of hard facts, and the two sections I found most interesting were those dealing with “information externalities” and “coercive externalities.”
In explaining “information externalities,” Watts points to the work of social psychologist Solomon Asch in the 1950s regarding the “conformity effect” on people when asked to make an independent decision after being exposed to obviously incorrect information supplied by other, supposedly neutral observers.
Essentially, Asch placed a participant in a room with seven other people, all of whom – unknown to the participant – were part of the experiment. These people were all shown several images and asked some very simple questions about each. In the beginning all of the individuals answered the questions correctly so as not to arouse suspicion. But eventually a few of the experiment’s “plants” started deliberately injecting incorrect answers – to questions that were very easy to get right.
For example, in one of the images Asch used, a single reference line appeared on the left and three comparison lines of varying length on the right.

The people taking part in the experiment were asked to identify which of the three lines on the right was closest in length to the line on the left. Asch found that when only 2 or 3 people in the group of 8 provided incorrect answers, the actual subject of the experiment became much more likely to give incorrect answers as well, despite the fact that the correct answer was obvious. In all, Asch found that “at least 75% of the subjects gave the wrong answer to at least one question,” whereas a control group in which everybody gave correct answers “threw up only one incorrect response out of 35 [although] this could probably be explained by experimental error.”
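(A quick back-of-the-envelope check on that 75% figure – my own arithmetic, not Asch’s or Watts’s. Asch’s standard design is commonly reported as including 12 “critical” trials on which the confederates answered incorrectly. If we assume, purely for illustration, that a subject conforms independently on each critical trial with some fixed probability, the headline number implies a surprisingly small per-trial rate.)

```python
# Asch's standard design is commonly reported as having 12 "critical" trials
# on which the confederates deliberately answered incorrectly.  Assume, purely
# for illustration, that a subject conforms *independently* on each critical
# trial with the same probability p.  Then the reported 75% "conformed at
# least once" figure pins p down via:
#     1 - (1 - p)**trials = 0.75
trials = 12
p_at_least_once = 0.75
p_per_trial = 1 - (1 - p_at_least_once) ** (1 / trials)
print(f"implied per-trial conformity rate: {p_per_trial:.1%}")
```

Under that (unrealistic) independence assumption the per-trial rate comes out around 11%, far lower than the headline 75% – a reminder of how even a modest tendency to conform compounds across repeated trials. (In reality, conformity was reportedly concentrated in particular subjects rather than spread evenly, so this is only an illustration.)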
To an outside observer these results seem insane, and one wonders how anybody could succumb to peer pressure so completely as to get a question about the length of a few lines wrong. But Watts points out that when we don’t understand a situation particularly well we all tend to take cues from those around us. For example, Watts asks us to
[i]magine you are walking down the street in a foreign city, looking for a place to eat, and you see two restaurants, side by side, with similar-looking (and equally unfamiliar) menus, indistinguishable prices, and much the same decor. But one is bustling and the other is deserted. Which one do you pick? Unless you have a specific problem with crowds, or you feel sorry for the beckoning waiter in the empty restaurant, you do what we all do in the absence of better information – you go with the crowd. After all, how could so many people be wrong?
(Personally, I don’t think this is a terribly good example because I would almost certainly choose the deserted restaurant; I do have kind of a problem with crowds and I would expect to have better service if other patrons weren’t competing for the waiter’s attention, but I understand the point Watts is trying to make here.)
Watts suggests that Asch’s experiment simply shows that taking cues from others when making decisions is so deeply ingrained in us that sometimes people rely on it even when doing so isn’t appropriate, i.e., when they already have access to all the information they need and the correct answer is obvious.
In Asch’s experiment, many of the subjects did find their actual beliefs being influenced by the answers provided by their peers; however, Watts notes that some reported they “simply felt pressured to indicate their consent, even though privately their opinion did not change.” (emphasis in the original).
This effect – where one’s behavior is affected even if one’s private belief may not be – is said to result from a “coercive externality.” That is, whereas “information externalities” result in one changing one’s own belief based on the information cues provided by one’s peers, “coercive externalities” result in one changing one’s behavior – but not one’s inner belief – based on a simple desire to conform to the group.
And, it turns out, this can be sufficient to turn the tide of elections.
In the 1960s and ’70s, political scientist Elisabeth Noelle-Neumann conducted a study in West Germany that led to her developing the political science and mass communication theory known as “the spiral of silence.” (For whatever it may be worth to this discussion, Noelle-Neumann worked briefly for the Nazi newspaper Das Reich in 1940 and wrote at least one article “propagating the myth that a Jewish syndicate ran the American media.” Whether she did or did not actually hold anti-Semitic beliefs became the subject of controversy in the early ’90s when she was a visiting professor at the University of Chicago.)
As Watts explains it, what Noelle-Neumann found was that
prior to two national elections, conversations concerning politics displayed a consistent pattern of the holders of the perceived majority opinion growing increasingly vocal and insistent at the expense of the perceived minority. The key word here, however, is perceived. As Noelle-Neumann showed, the levels of support for the two political parties, expressed privately by individual citizens, remained roughly consistent. What changed was the individuals’ perception of the majority opinion, and therefore their expectations of which party would win. (emphasis in the original).
Essentially, the “spiral of silence” predicts that as one point of view comes to be perceived as the majority opinion simply by being more powerfully and more often expressed in public, people who dissent from that point of view start to go silent – which then further adds to the perception that the other opinion is favored by the majority. Unfortunately, this merely perceived difference in which point of view holds the majority can affect voter behavior. Again, Watts explains:
Voting, however, is a private activity, so perhaps the balance of pre-election discourse is unimportant. Not so, Noelle-Neumann discovered. Her most striking finding was that on election day, the strongest predictor of electoral success was not which party an individual privately supported but which party he or she expected to win. Beliefs concerning the beliefs of others, therefore, seem capable of influencing individual decision making, even in the privacy of the electoral booth (or possibly by affecting the decision of whether or not to even vote). As with Asch’s experiments . . . it is somewhat unclear what forces are driving the spiral of silence or influencing an individual’s ultimate voting decision, but probably both coercive and information externalities are at play. (emphasis added).
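The feedback loop Watts and Noelle-Neumann describe is easy to see in a toy simulation. The model below is my own construction, not anything from their work, and every number in it (the 45/55 split, the 0.2–0.6 range of “comfort thresholds”) is an arbitrary assumption: voters’ private preferences are fixed, but each voter keeps speaking up only while enough of the current speakers agree with them.

```python
import random

random.seed(1)

# Toy model (my own construction, with made-up numbers): 100 voters,
# 45 privately favoring party A and 55 favoring party B.  These private
# preferences NEVER change.  Each voter has a personal "comfort threshold"
# and voices their opinion only while the share of current *speakers* who
# agree with them stays at or above that threshold.
N = 100
private = ["A"] * 45 + ["B"] * 55
threshold = [random.uniform(0.2, 0.6) for _ in range(N)]
speaking = [True] * N                     # everyone starts out vocal

for _ in range(10):                       # a few rounds of public discourse
    speakers = [private[i] for i in range(N) if speaking[i]]
    if not speakers:
        break
    share_A = speakers.count("A") / len(speakers)
    for i in range(N):
        agree = share_A if private[i] == "A" else 1.0 - share_A
        speaking[i] = agree >= threshold[i]

vocal_A = sum(1 for i in range(N) if speaking[i] and private[i] == "A")
vocal_B = sum(1 for i in range(N) if speaking[i] and private[i] == "B")
print("private support unchanged: A=45, B=55")
print(f"still speaking publicly:  A={vocal_A}, B={vocal_B}")
```

Run it and the minority’s public voice collapses while its private support never moves: exactly the gap between expressed and private opinion that the spiral of silence predicts.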
One begins to get a sense of just how deadly dangerous the 24/7 Republican Wurlitzer can be to our political discourse. When it drowns out competing political opinions, voters may end up voting the Fox News line simply because they believe that is what “most people” think.
* * *
Ever since reading this classic Chris Hayes article – in which Hayes describes his experience reaching out to uncommitted voters on behalf of the Kerry campaign – I’ve been fascinated by the idea that those whom the media insists on calling “independent voters” or “swing voters” should more properly be understood to be “low information voters.”
Hayes argued that these so-called “independent voters” don’t carefully choose between candidates based on a sagacious and judicious consideration of the candidates’ policy positions; instead, they are people who don’t really care about or understand politics and who find political discussions extremely boring. That sounds about right to me, and so I was pleased when Mike Lofgren’s recent Truth Out article raised this issue again, and then when another former congressional staffer chimed in to agree with Lofgren.
As a result, I tend to think of the electorate as divided into three camps. The first consists of politically interested voters on the Left; these constitute the Democratic party’s base and – provided they can be persuaded to vote – can usually be safely counted on to vote for the Democratic candidate. These people don’t need to have the Democratic candidate’s policies spelled out for them because (i) they are already strongly partisan and for that reason alone would never vote for a Republican, and/or (ii) they are high information voters who pay attention and already understand the Democratic candidate’s policy positions.
The second camp consists of politically interested voters on the Right; these constitute the Republican party’s base and – provided they can be persuaded to vote – can almost always be safely counted on to vote for the Republican candidate. These people don’t need to have the Republican candidate’s policies spelled out for them because (i) they are already strongly partisan and for that reason alone would never vote for a Democrat, and/or (ii) they are high information voters who pay attention and already understand the Republican candidate’s policy positions.
(Now – right here – it is important to step back and draw a distinction between “high information voters” on the Left and “high information voters” on the Right. By and large, “high information voters” on the Left actually have a good grasp of the underlying realities of a particular position; we tend to belong to the Reality Based Community, and are much more inclined to evaluate a policy position based on its empirically determined efficacy. In contrast, “high information voters” on the Right don’t seem to have a good grasp of the likely effect of the policies they support – just think of Fox News viewers, who tend to be the most misinformed voters around. That doesn’t mean that they aren’t “high information voters” for the purpose of this discussion, just that the information they have is very often misinformation.)
Roughly speaking, in any given election these two groups will often largely cancel each other out. Which means that to win that election candidates must win a majority of the third group: the low information voters who find politics boring, who don’t have sufficient information to make an informed decision between the two candidates and who therefore tend to decide for whom to vote based on cues other than the policy positions of the parties.
It is with respect to these voters that Karl Rove’s insight regarding the “inevitability effect” carries some weight. As Watts details, Asch’s and Noelle-Neumann’s studies provide empirical support for the existence of both information and coercive externalities and for their effect on people’s decision-making.
Asch demonstrated that – especially in the absence of information, but sometimes even when that information is staring them in the face – people tend to believe what others tell them. So if low information voters are told a year from now that, say, Rick Perry is guaranteed to win the Presidential election, many of them will believe it. And Noelle-Neumann’s study indicates that if enough people come to believe Rick Perry is guaranteed to win the Presidential election, the coercive effect of simply believing that may cause them to actually vote for Perry even if they might really prefer Barack Obama. Thus, information and coercive externalities can result in self-fulfilling prophecies, at least in electoral politics. In this way what is proclaimed to be inevitable can, in fact, become inevitable.
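To make the self-fulfilling prophecy concrete, here is a sketch with made-up numbers – the 45/45 partisan split, the 60/40 private lean, and the 50% “follow the perceived winner” rate are all hypothetical, chosen only to illustrate the mechanism. Two partisan camps cancel out, and the low-information remainder splits between their private lean and whoever they expect to win.

```python
# All numbers below are hypothetical, chosen only to illustrate the mechanism.
base_D, base_R = 45.0, 45.0   # committed partisan camps cancel each other out
low_info = 10.0               # low-information voters who decide the election
lean_D = 0.6                  # privately, low-info voters lean D 60/40
follow = 0.5                  # chance a low-info voter simply votes for
                              # whichever candidate they *expect* to win

def outcome(perceived_winner):
    """Expected vote totals given a perceived winner ("D" or "R")."""
    d = base_D + low_info * ((1 - follow) * lean_D
                             + (follow if perceived_winner == "D" else 0.0))
    r = base_R + low_info * ((1 - follow) * (1 - lean_D)
                             + (follow if perceived_winner == "R" else 0.0))
    return d, r

for winner in ("D", "R"):
    d, r = outcome(winner)
    print(f"perceived winner {winner}: expected votes D={d:.0f}, R={r:.0f}")
```

Even though the low-information voters privately lean D in this toy setup, whichever candidate is perceived as the winner comes out ahead; the expectation alone flips the result.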
* * *
I don’t want to give Karl Rove too much credit; I still don’t think he is a genius. In fact, I think his decision back in 2000 to have Bush II campaign in California and New Jersey instead of the swing states he actually needed to win was incredibly dumb and indicates that Rove doesn’t really understand how his own “inevitability strategy” is supposed to work.
Because the strategy is most effective with undecided voters, demonstrating that your candidate is a lock to win the election requires giving those voters a signal they can actually understand. Most low information voters likely didn’t realize that California and New Jersey were out of reach for Bush, so they wouldn’t have understood his campaigning there in the last days of the 2000 campaign to mean anything in particular – let alone supreme self-confidence and inevitability. Which means that Rove’s decision to send Bush to California and New Jersey was basically a worthless stunt that cost Bush time that could have been spent making substantive appearances in states where it could have actually helped him.
Like a hog, Rove may have found an acorn in his “inevitability strategy,” but the fact he clearly didn’t understand how that strategy works makes him a blind hog too.
Still, an acorn is an acorn, and understanding how and why this type of campaigning might sometimes work in the real world can only help Democrats anticipate what the GOP may pull in next year’s general election, and hopefully help Obama avoid some unforced errors.