
Article

Use of Bellwether Samples to Enhance Pre-Election Poll Predictions: Science and Art

American Behavioral Scientist 55(4) 390–418 © 2011 SAGE Publications. Reprints and permission: http://www.sagepub.com/journalsPermissions.nav DOI: 10.1177/0002764211398068 http://abs.sagepub.com

David Paleologos1 and Elizabeth J. Wilson1

Abstract

In this article, the authors document a new pre-election polling method that combines traditional probability sampling in statewide polls with sampling of bellwether districts. Against the backdrop of the 2008 presidential primary in New Hampshire, the authors explain how the Suffolk University/WHDH predictions, using additional bellwether tests, were closer than most other polls in correctly calling the Democratic primary race. The contributions in this article are twofold. First, the authors offer information that advances the field of opinion research by describing the nature of electoral bellwether districts, a topic of recurring interest to public opinion scholars and practitioners. Second, they describe a process for selecting electoral bellwether districts. The authors explore the validity of their ideas by comparing pre-election poll data with bellwether tests to election outcomes for additional races in the state primaries and general election. They offer conclusions about the value and use of bellwether tests in the spirit of an open-source methodology to the opinion research community.

Keywords

pre-election polling and methodology, accuracy, bellwether, political prediction, Tom Brokaw, Michael Dukakis, Chris Matthews, Jon Stewart, New Hampshire Democratic primary, sampling, AAPOR, Democratic primary, Republican primary, exit polling

Hillary Clinton’s surprise win in the 2008 New Hampshire Democratic presidential primary once again brings pre-election polling methodology issues to the forefront of discussion in the opinion research community. The noticeably high level of inaccuracy between poll predictions and outcomes in New Hampshire motivated a detailed investigation by a special ad hoc committee of the American Association for Public Opinion Research (AAPOR).

1 Suffolk University, Boston, MA, USA

Corresponding Author: David Paleologos, Suffolk University, 8 Ashburton Place, Boston, MA 02108 Email: [email protected]


In that report, polling methods for primary elections in four states (Iowa, New Hampshire, California, and Wisconsin) were examined in detail. Findings for the New Hampshire race suggest that predictions could have been affected by the timing of the election (which came very soon after the Iowa caucus), lack of attempts to reach hard-to-contact respondents, nonresponse bias, and variations in likely voter models. Interestingly, other factors, such as exclusion of cell phone–only individuals and use of a two-part candidate preference question, were not thought to have affected polling results (AAPOR, 2009a). The situation in New Hampshire was dramatic: “The majority of the polls before [the] New Hampshire [primary] suggested the wrong winner, while only half in Iowa did” (AAPOR, 2009a, p. 6).

In this article, we describe how one poll, Suffolk University, maintained as late as January 7 and 8 that the race was very close and allowed for a Clinton win by as much as 3.8 points (Suffolk University, 2008b). In the post-election coverage by NBC, Tom Brokaw points out to Chris Matthews that the Suffolk poll (of January 7) was closest to correctly predicting the final outcome.1 Their conversation unfolds as follows: Matthews begins by reviewing Hillary Clinton’s turbulent days leading up to the New Hampshire primary and says of the election outcome, “The polls weren’t reading something the last several days.”

Brokaw: Chris, I was watching you earlier today and you kept referring to the Suffolk University poll as being the anomaly in the group because it showed a 1% margin between Obama and Hillary Clinton and it turns out that was the poll that was correct and the others were wrong. So my investment advice to you tonight, Chris, is to invest in the Suffolk University poll. . . .
Matthews: I think, Tom, that was on the Republican side. I’m gonna check on that. I think that’s where they were off but maybe you’re right.
Brokaw: Well, no, it was [accurate] on both sides actually.
Matthews: Was it?
Brokaw: Yeah.
Matthews: Well, maybe the Suffolk poll knew something that no one else on the planet knew.

What the Suffolk pollster “knew” in reporting the polls were predictions from statewide polls plus estimates from samples in bellwether districts, which, combined, offered greater insight into voter intentions. Our thesis is that public opinion research may benefit from pre-election polling methods that combine traditional “scientific” methods—probability sampling for generalizability—with complementary nonprobability methods—random sampling within purposively selected electoral districts, called bellwethers. Thus, our purpose is to describe, in detail, a pre-election polling methodology that employs statewide surveys (i.e., the science) plus bellwether samples as a sister test (i.e., the art). Much in the same way as multiple regression has its scientific and artistic prescriptions for use (Hair, Black, Babin, & Anderson, 2010), pre-election polling may also.


Our contributions in this article are twofold. First, we offer information that provides methodological advances to the field, as called for by the editor of this issue, by AAPOR (2009a), and by others (Blumenthal, 2005; Jacobs & Shapiro, 2005; Traugott, 2005). Second, we describe and characterize the nature of bellwether districts and explicate a process for identifying bellwether districts. This pre-election polling methodology is offered for consideration and use by the public opinion research community in the spirit of continuous improvement in practice.

Next, we offer a brief review of the relevant literature on polling methods and a general discussion of the environment surrounding how results (and accompanying methods) are reported. We then describe events in the 2008 New Hampshire presidential primary as an illustrative case of how a regional nonpartisan polling organization, the Suffolk University Political Research Center (SUPRC), successfully used bellwether tests for additional insight in reporting poll results. From this case, we offer inductive research propositions for theory development about the nature of bellwether districts. We describe the selection process for choosing bellwethers. Finally, we offer a meta-analysis of poll predictions using bellwethers and election outcomes in 10 other state primaries and in the November 2008 general election. This meta-analysis demonstrates that bellwether districts can serve as “reliable benchmarks” to enhance confidence in poll results, as called for by Blumenthal (2005).

Background Literature Review

The recent AAPOR (2009a) report and scholarly articles (e.g., Murray, Riley, & Scime, 2009) are evidence of the continuous effort to think and write about issues and improvements in polling methodology. Past special issues of Public Opinion Quarterly (e.g., the 2005 volume) contain articles that address methodological topics as well as the environment for reporting poll results. The domain of political opinion research is unique compared to other fields, such as marketing research, in that poll results are a source of news that often generates controversy (Daves & Newport, 2005). Indeed, the editors of the 2005 special issue note the following:

   An unsettling contradiction has developed: while polls have been consistently accurate, they and their sponsors have now come under withering attacks during election campaigns. The sustained effort is to fan unjustified public suspicions about such proven techniques as probabilistic sampling and statistical weighting, as done by reputable pollsters. (Jacobs & Shapiro, 2005, p. 637)

Because poll results are closely monitored as news by the media, methodological issues are often at the forefront of discussion. Consider the situation of poll results in a close race where Candidate X has a slight lead (within the margin of error [MOE]) over Candidate Y. Supporters of Candidate X tout the results in the media as being generated by a poll whose method meets the gold standard of possessing objectivity, validity, and reliability. Supporters of Candidate Y, however, use the methodology as


a “whipping boy” to discount the accuracy of the results. The “likely voter estimation” issue is a prime example of the vociferous and highly contentious debate (in the media) of results in the 2004 U.S. presidential race (Traugott, 2005). This situation is exacerbated by new advances in electronic technology, which seem to occur continuously. For example, the standard practice of random-digit dialing for data collection can now be complemented with inclusion of cell phone numbers, fully automated telephone polling (with recorded interviewer voices and touch-key responses), and Internet surveys (Jacobs & Shapiro, 2005).

The accuracy of polls, post hoc, is an important topic that has been widely addressed in the literature; Martin, Traugott, and Kennedy (2005) offer a thorough and cogent review. Of greater significance, though, is their development of a new measure, A (a log-odds ratio), to reflect the accuracy of poll predictions compared to election outcomes (Martin et al., 2005). This measure communicates accuracy in terms of both magnitude (the difference between prediction and outcome) and direction (actual winner vs. projected winner); a small worked example of a statistic of this form appears at the end of this section. This statistic, used in the AAPOR (2009a) report to analyze poll performance, highlights that some polls are clearly more accurate than others.

Recognizing that individual polls may tweak their methodology for marginal increases in accuracy, pre-election polling has as its foundation the basic rules of probability sampling. It is safe to say that pollsters will always use traditional random sampling as a guiding framework to conduct voter intention polls regardless of the technological tools employed to accomplish the fieldwork, many of which are the focus of ongoing research (Blumenthal, 2005). The resulting pre-election poll estimates will be generalizable to a population with a known level of confidence and MOE. If generally accepted industry standard levels of confidence and error are observed, such as those suggested by organizations such as AAPOR and the Council of American Survey Research Organizations, what more can be done?

One technique, sampling bellwether electoral districts, has been discussed by others (Kenski & Dreyer, 1977; Tufte & Sun, 1975), but conclusions about the usefulness of bellwethers remain equivocal. Bellwether electoral districts are those thought to be indicative of the prevailing mood in an election. Bellwethers may be likened to a barometer in forecasting weather or to a “leading indicator” in forecasting economic conditions. Kenski and Dreyer (1977) note that past research identified Illinois, New Mexico, Michigan, and California as bellwethers in presidential elections. Their study, as well as work by Tufte and Sun (1975), sought to examine the predictive capability of presidential bellwethers using postdictive analysis of election results. In other words, both studies examine the extent to which states voting to elect the winning candidate in prior elections predicted the winning candidate in a new election. Neither study found much in the way of strong conclusive results. “As others have noted and our historical experiments demonstrate, a state that has served as a bellwether in one election can easily fail to remain one in the next” (Kenski & Dreyer, 1977, p. 504). They also note a lack of theory to guide in the search for electoral bellwethers; clearly, their research


findings indicate that a strictly postdictive analysis alone is not enough to capture the bellwether phenomenon.

In the next section, we turn our attention to the New Hampshire primary of January 2008 to describe the methodology used by the Suffolk University poll, which employed bellwether samples in addition to statewide polls. The New Hampshire primary case is notable for two reasons. First, that particular primary was a very close race. Second, the Suffolk poll predictions, unlike those of other established major polls, were closer than the majority of polls in predicting the election outcome for both the Democratic and Republican races. To recall Chris Matthews’s comment cited earlier, the information “no one else on the planet knew” was data from the bellwether samples alongside the results of the statewide polls.
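Readers who want to experiment with accuracy measures of the kind Martin et al. (2005) propose can start from the log-odds-ratio form sketched below. This is our illustration, not the authors' code; the function name and the candidate ordering are our own choices, and the exact definition in Martin et al. should be consulted before using A in published comparisons.

    import math

    def log_odds_accuracy(poll_a, poll_b, vote_a, vote_b):
        """Log-odds-ratio accuracy statistic in the spirit of Martin,
        Traugott, and Kennedy (2005): the log of the ratio of the poll's
        odds (candidate A over candidate B) to the election's odds.
        Zero indicates a perfectly calibrated margin; the sign shows
        which candidate the poll favored relative to the outcome."""
        return math.log((poll_a / poll_b) / (vote_a / vote_b))

    # Suffolk/WHDH final New Hampshire statewide numbers versus the
    # Democratic outcome: poll Clinton 34 / Obama 39; result 39 / 36.
    a = log_odds_accuracy(poll_a=34, poll_b=39, vote_a=39, vote_b=36)
    print(f"A = {a:+.3f}")  # about -0.217: the poll understated Clinton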

The Case of the 2008 NH Primary to Inform Theory and Methodological Practice

All polling activity by Suffolk University is conducted by its Political Research Center (SUPRC). The university’s media partner is the local NBC affiliate WHDH-TV, which disseminates poll results to the local and regional viewing audiences. When the Suffolk University poll is recorded by other reporting sources, such as RealClearPolitics.com, the poll is usually identified as “Suffolk/WHDH” or “Suffolk.” Statewide polls conducted by the SUPRC consist of 400 to 500 likely voters for generalizable estimates (4% to 5% MOE at the 95% confidence level; the arithmetic is sketched below).

The data for this case discussion are primarily from three sources. First, personal recollections of the SUPRC pollster, David Paleologos, were obtained during a series of in-depth interviews between January and April 2009. Second, poll results available at RealClearPolitics.com (2008a, 2008b) were used to depict the chain of events during the period from October 2007 through the primary election held January 8, 2008. Finally, poll results and news releases from the SUPRC website (http://www.suffolk.edu/research/1450.html) were used to document events.

We focus our discussion on the Democratic primary race, providing an in-depth look at events leading up to the election on January 8, 2008. The Republican primary was also close; in the interest of brevity, however, we provide results about the Republican race in summary form. Detailed results for both races are available from the authors.
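The 4% to 5% figure follows from the standard conservative margin-of-error formula for a proportion at 95% confidence. A minimal sketch (ours, not SUPRC code) reproduces the range, including the 4.4% MOE cited later for the statewide polls:

    import math

    def margin_of_error(n, z=1.96, p=0.5):
        """Conservative (p = 0.5) margin of error for a simple random
        sample of n respondents, at the confidence level implied by z."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 500):
        print(f"n = {n}: MOE = {margin_of_error(n):.1%}")
    # n = 400: MOE = 4.9%
    # n = 500: MOE = 4.4%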

The Democratic Primary

Hillary Clinton enjoyed a comfortable lead over Barack Obama from February through November 2007. “Everyone had Clinton winning—the question was just by how much,” Paleologos reflected (personal interview, January 26, 2009). Figure 1 provides RealClearPolitics poll average results for the top three candidates (Clinton, Obama, and Edwards), tracking from October 2007 until the election on January 8, 2008. As evident in Figure 1, Clinton’s lead began to erode around November 11 while Obama’s share of voters began to increase.


[Figure 1, a line graph, is not reproduced here: RealClearPolitics poll averages for Obama, Clinton, and Edwards, from 30-Sep-07 to 30-Dec-07, on a 0% to 45% vertical axis.]

Figure 1. New Hampshire Democratic primary poll average
Source: RealClearPolitics.com (2008a)

Table 1 provides a comparison of results for polls conducted in the last week of November and the first week of December. The Suffolk statewide poll was largely consistent with other regional and national polls: Clinton was firmly in the lead. In addition, two bellwether districts were sampled in Rockingham County, New Hampshire.2 These results, shown in Table 1, indicate even stronger support for Clinton compared to the Suffolk statewide poll.

Results for Clinton and Obama shifted in the first week of January, with Clinton increasing and Obama decreasing in share of voters. Table 2 provides results of polls that show Clinton maintaining the lead during the period January 1 to 4. Three polls were conducted by Suffolk University during this time. The Suffolk statewide results were in agreement with those of national polls.

The victory in the Iowa caucus on January 3, 2008, may have led to increases in poll estimates for Obama in the days that followed. In the series of polls conducted January 4 to 5, results are mixed, as shown in Table 3. Three national polls have Obama ahead in double digits. The Reuters/C-SPAN/Zogby poll has Clinton by 1 point, whereas the CNN poll has the race tied. The Suffolk poll has Clinton ahead by 2 points. Notably, bellwether samples taken in addition to the statewide poll show Clinton with a solid margin over Obama (10 points and 12 points, respectively).


Table 1. New Hampshire Democratic Primary Poll Results, Late November/Early December (in percentages)

Poll                                         Clinton   Obama   Edwards
SU/WHDH Nov 25-27 (released by SU Nov 28)       34       22       15
SU Bellwether 1: Sandown                        41       19       15
SU Bellwether 2: Kingston                       47       16        8
ARG Nov 26-29                                   34       23       17
Reuters/C-SPAN/Zogby Dec 1-3                    32       21       16
Rasmussen Nov 29                                33       26       15
Marist Nov 28-Dec 2                             37       24       18
ABC/Washington Post Nov 29-Dec 3                35       29       17
Fox News Nov 27-29                              30       23       17

SU = Suffolk University; ARG = American Research Group.


Table 2. New Hampshire Democratic Primary Poll Results, January 1-4, 2008 (in percentages)

Poll                                         Clinton   Obama   Edwards   Leader
SU/WHDH Jan 1-2 (a)                             36       29       13     Clinton by 7
SU/WHDH Jan 2-3 (a)                             37       25       15     Clinton by 12
SU/WHDH Jan 3-4 (released by SU Jan 5) (a)      39       23       17     Clinton by 16
ARG Jan 1-3                                     35       31       15     Clinton by 4
Reuters/C-SPAN/Zogby Jan 1-3                    32       26       20     Clinton by 6
Mason-Dixon Jan 2-4                             31       33       17     Obama by 2

SU = Suffolk University; ARG = American Research Group.
a. No bellwethers sampled.


Table 3. 2008 New Hampshire Democratic Primary Poll Results, January 4-5 (in percentages)

Poll                                         Clinton   Obama   Edwards   Leader
SU/WHDH Jan 4-5 (released by SU Jan 6)          35       33       14     Clinton by 2
SU Bellwether 1: Sandown                        34       24       21     Clinton
SU Bellwether 2: Kingston                       37       25       16     Clinton
ARG Jan 4-5                                     26       38       20     Obama by 12
Reuters/C-SPAN/Zogby Jan 3-5                    31       30       20     Clinton by 1
Rasmussen Jan 4                                 27       37       19     Obama by 10
Rasmussen Jan 5                                 27       39       18     Obama by 12
CNN/WMUR/UNH Jan 4-5                            33       33       20     Tie

SU = Suffolk University; ARG = American Research Group; UNH = University of New Hampshire.


Paleologos traveled to New Hampshire on the evening of January 5, 2008, to spend the day of January 6 participating in media interviews for WHDH-TV as well as other organizations (BBC, PBS, C-SPAN, the national networks, radio stations, political websites, etc.). The Suffolk poll numbers (for Clinton) were inconsistent with those of highly respected regional and national polls, which showed Obama with the lead. However, Paleologos believed, on the basis of past experience, that the bellwether data were valid and reliable. The official press release on January 6 (Suffolk University, 2008a) states that Clinton and Obama are in a “statistical tie.”

The media were headquartered at the Radisson Hotel in Manchester, New Hampshire, for election coverage. Representatives from broadcasting organizations were set up in the lobby area to do interviews with booths, lights, and cameras. Pollsters typically go from one set to the next to discuss results. Level 2 of the hotel is dubbed “radio row.” Paleologos described being asked by a respected media colleague how he could continue saying the race was a tie:

   She told me that people can’t get into Obama rallies; there are lines of people standing in the cold. Clinton, on the other hand, has only a handful of people showing up, some of which are campaign workers. If that’s any measure, then the Obama momentum is going to be crushing. My response was that our methodology is sound—we are adhering to good practice of probability sampling for the statewide polls and we also have valid data from the bellwethers. Because of this, I was confident our estimates were valid and reliable . . . in spite of anecdotal evidence about the Obama momentum. I didn’t dispute that Obama had gained momentum but our data indicated it would still be a close race. (personal interview, February 5, 2009)

And thus began the saga of Paleologos’ experiences in dealing with his colleagues in the media.

Reporting the Suffolk Polls: A Long Day’s Journey Into Night

For the media interviews on January 6, 2008, the most current poll numbers for the Suffolk poll are those shown in Table 3. Paleologos describes the situation as follows:

   I remember walking through the hotel lobby—there were many national and international media organizations and many political operatives. I remember one interview with C-SPAN in particular where it was almost embarrassing. I was continuously reminded that other polls were showing a different picture compared to the Suffolk poll. The other polls had Obama leading comfortably, some by double digits. I just had knots in my stomach. After that interview, I’d get comments from other reporters like “You’re the guy that has it close. . . .” After a while, I just couldn’t hang around and do any more interviews. The atmosphere was getting very uncomfortable.


   When anything unusual like this happens, the critics jump on the poll methodology to discount unexpected findings. I felt like I was trying to roll a boulder uphill all day to explain the bellwethers and why they were valid information to consider in a close race. Many media folks are gifted wordsmiths but don’t typically have formal training in statistics. (personal interview, February 23, 2009)

The situation described above is not surprising, given experiences in the 2004 presidential election. Daves and Newport (2005) offer a detailed account of two pollsters’ experiences, which are described as extremely personal and mean-spirited, after reporting results detrimental to a candidate. Frankovic (2005) notes that “one of the strengths of public opinion polling over time has been the belief that polling was scientific and nonpartisan. In the attacks of 2004, however, the sides were frequently partisan versus pollster” (p. 695).

Paleologos dealt with the pressure of intense scrutiny by repeating details about his methodology during the course of many interviews during the day. But finally, late in the day, he needed to stop for a while. He said,

   I took a break from the interviews, which by this time were a lot of damage control for our poll, and went downstairs to the hotel basement where I could be alone and think. I thought this situation must have been how John Zogby felt in 2004 when he had Howard Dean in New Hampshire getting close in the final few days while everyone else [Suffolk included] had John Kerry as the landslide winner. I felt so alone. I sat against the cold brick wall in my raincoat and thought. It reminded me of my childhood years when all I wanted was to be accepted by my peers but I was different. I figured I had obviously missed something. So I got myself up, got on my phone, and started calling to double-check. I called the phone bank supervisor and asked him, “Is there any chance that the outcome is being systematically biased?” A quick analysis of every individual phone station was done to check for unusual upward spikes for Clinton. No systematic pattern of this sort emerged. After phoning a trusted pollster friend and explaining the bellwether methodology and results, my friend laughed and concluded, “Well you could be right.” (personal interview, February 23, 2009)

By the end of the day on January 6, the New Hampshire Democratic primary became a waiting game for Paleologos and his colleagues in the media. He goes on to describe activities leading up to the election by saying,

   We continued polling right through January 7th; a lot of pollsters stopped polling on January 5th or 6th. In the January 7th poll, Obama had the 1-point lead and the race was still too close to call. I remained skeptical of an Obama win because both bellwethers from the January 6th poll were saying that Hillary Clinton was leading. My counterpart, political reporter Andy Hiller at WHDH, was not sure whether to use our results since so many other polls were projecting Obama to win. There were media people in the hotel lobby calling New


   Hampshire for Obama by a wide margin, 20 points in some cases. Our poll was criticized for underestimating Obama’s momentum. (personal interview, February 23, 2009)

Results for the Suffolk statewide poll released January 7, compared to other polls, are shown in Table 4.

The Final Countdown

The Democratic primary election was called for Hillary Clinton at approximately 10 p.m. on January 8; she had a share of 39% of voters to 36% for Obama. There are several ways to evaluate the accuracy of the Suffolk poll. First, we can say that the Suffolk poll of January 6 was the closest in predicting the actual election outcome (a Clinton win), and the bellwethers rang true (see Table 5). Second, we can look at the January 8 results with the most common media interpretation—the difference between the top two candidates relative to the poll MOE (Martin et al., 2005). The Suffolk statewide poll had Obama (39%) leading Clinton (34%). The 3-point Clinton win over Obama is within the Suffolk poll MOE, which was 4.4%. Third, on the absolute basis of winning candidate, the Suffolk poll missed. However, Clinton’s winning share (39%) was only slightly outside the upper bound of the MOE (38.4%), and Obama’s share (36%) was within the MOE. Thus, the Suffolk poll with upper-bound Clinton MOE and lower-bound Obama MOE included the possibility of a Clinton win (by up to 3.8%). Fourth, from the AAPOR postdictive perspective, the Suffolk University poll had the smallest percentage difference for the winner (5 percentage points) compared to all other polls and the smallest absolute percentage difference of the winner plus second-place finisher, 2 percentage points (see AAPOR, 2009a, p. 14).

As described at the beginning of this article, the media commentary between Tom Brokaw and Chris Matthews in NBC News post-election coverage lauded the Suffolk poll and raised the all-important question, What did the Suffolk poll have that others did not? The answer is that the Suffolk poll had bellwether samples, which served as important “sister tests” to the statewide polls. The New Hampshire bellwether districts, the towns of Sandown and Kingston, were consistent indicators for Clinton at three different times (June and November 2007, January 2008). In reporting the polls, Paleologos consistently maintained that the race would be very close and that a double-digit Obama win was not a “lock,” as was being reported by most other polls.

For the Republican primary, the race between McCain and Romney was also close as the election drew near (see Figure 2). The Suffolk polls of January 6 and 7 had Romney leading McCain by a small margin (30% vs. 27%, respectively). The bellwether samples of the January 6 poll were split, as shown in Table 5. The bellwether estimates from Sandown show a Romney lead of 5 percentage points. For Kingston, McCain has a 1-point lead over Romney (30% and 29%, respectively).
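The third evaluation above is simple interval checking: widen each candidate's poll share by the poll's margin of error and ask whether the official share falls inside. A small sketch, using the figures just cited (our illustration, not part of the published method):

    def within_moe(poll_share, actual_share, moe):
        """True if the official share lies inside poll share +/- MOE."""
        return poll_share - moe <= actual_share <= poll_share + moe

    MOE = 4.4  # Suffolk statewide poll, in percentage points
    print(within_moe(34.0, 39.0, MOE))  # Clinton: 39 > 34 + 4.4 -> False
    print(within_moe(39.0, 36.0, MOE))  # Obama: 36 >= 39 - 4.4 -> True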


Table 4. New Hampshire Democratic Primary Poll Results, January 5-6, 2008 (in percentages)

Poll                                         Clinton   Obama   Edwards   Leader
SU/WHDH Jan 5-6 (released by SU Jan 7) (a)      34       35       15     Obama by 1
ARG Jan 4-6                                     28       39       22     Obama by 11
Reuters/C-SPAN/Zogby Jan 4-6                    29       39       19     Obama by 10
Rasmussen Jan 5-6                               28       38       18     Obama by 10
Marist Jan 5-6                                  28       36       22     Obama by 8
CNN/WMUR/UNH Jan 4-5                            30       39       16     Obama by 9
CBS News Jan 5-6                                28       35       19     Obama by 7

SU = Suffolk University; ARG = American Research Group; UNH = University of New Hampshire.
a. No bellwethers sampled.


Table 5. Polls, Bellwether Samples, and the New Hampshire Primary Election Outcome (in percentages)

                         January 6 SU/WHDH numbers         January 8 election outcome
Candidate             State poll   Sandown   Kingston   Statewide   Sandown   Kingston
Democratic primary
  Clinton                 35          34        37          39         45        44
  Obama                   33          24        25          36         30        31
  SU prediction       Clinton win
Republican primary
  McCain                  27          37        30          37         29        34
  Romney                  30          42        29          31         40        36
  SU prediction       McCain win

SU = Suffolk University. Sandown and Kingston are Bellwether 1 and Bellwether 2, respectively.


[Figure 2, a line graph, is not reproduced here: RealClearPolitics poll averages for McCain, Romney, and Giuliani, from 30-Sep-07 to 30-Dec-07, on a 0% to 40% vertical axis.]

Figure 2. New Hampshire Republican primary poll average
Source: RealClearPolitics.com (2008b)

This situation is interesting because it illustrates a phenomenon called “the backyard effect.” A candidate’s poll estimates can be inflated when his or her home state borders the bellwether district in a neighboring state. Both bellwethers are in Rockingham County, New Hampshire, which borders Massachusetts—Romney’s home state. Thus, Romney’s share of voters in the bellwethers was overstated and needed to be adjusted.

Figure 3 offers another view of the Republican primary via a comparison of the Suffolk statewide poll results with Rockingham County in general and with the Sandown and Kingston bellwethers in particular. On January 6, Rockingham County has Romney by 13 points; the bellwethers show less support for Romney (they are split), and the overall statewide lead for Romney is 3 points. On January 7, Rockingham County has Romney ahead by only 1 point, and on January 8, Romney and McCain are tied. The change in Romney’s voter share in Rockingham County is dramatic and directionally consistent with the bellwether estimates. Rockingham County voted for Romney by 1 point, but McCain won New Hampshire by 5 points.

Making a prediction about the election outcome in this situation requires qualitative judgment and taps into more of the “art” in polling versus strict “science.” With qualitative insight from the backyard effect, media discussions of the results indicated a McCain win. Paleologos describes the adjustment logic as follows.


Figure 3. New Hampshire Republican primary (Rockingham County)

   The bellwethers are located in Rockingham County, which borders Massachusetts, so it’s easy to see how Romney would do better in these areas. But the fact that McCain had as much as he did [37% and 30%] in the bellwether districts was an indicator that he was going to do very well in New Hampshire overall. The difference in the bellwether shares compared to the statewide numbers necessitated a downward adjustment for Romney and an upward adjustment for McCain. We were suggesting a McCain win in the interviews in spite of the statewide poll numbers showing Romney with a 3-point lead. (personal interview, March 2, 2009)

With estimates from the January 8 poll, the Suffolk prediction was that McCain would win with 30.4% (the upper bound given the MOE) compared to Romney at 25.6% (the lower bound given the MOE). In fact, McCain won 37% to Romney’s 32%. Although McCain’s actual share of voters was outside the MOE by 6.5 points, Romney’s share was within the MOE. The difference between McCain and Romney (5 points) was also outside the MOE for the Suffolk statewide poll (4.4%).

For New Hampshire overall, the information from the Suffolk statewide polls plus bellwether samples was critical for Paleologos to go on record with his prediction that the Democratic race would be much closer than the majority of polls were predicting and included the possibility of a Clinton win. On the Republican side, the statewide polls combined with bellwether data, adjusted for the backyard effect, led him to believe and report in media interviews that Romney would not win by a landslide and that there was a possibility of a win by McCain.


Informing Theory: The Nature of Bellwether Districts

To address the lack of theory-building effort in this area (Kenski & Dreyer, 1977), we offer several propositions about the nature of bellwether districts. Note that a bellwether district may be defined as a state, county, town, ward, or precinct. For the purposes of this discussion, we use the word district as a generic term for an electoral area. In the case of the New Hampshire primary, the bellwether districts used by the SUPRC are two wards, Sandown and Kingston, both of which are also towns.

As noted earlier, research on the existence and usefulness of electoral bellwethers is scant. Studies by Kenski and Dreyer (1977) and Tufte and Sun (1975) are two that systematically examined the predictive ability of bellwether states. Using natural experiments with historical data, both studies conclude that postdictive analysis alone is not highly predictive of future election outcomes. In other words, states that voted in favor of the winning candidate in past elections are not necessarily predictive of winners in future elections. Thus, our first proposition to characterize bellwether districts is as follows:

Proposition 1: Bellwether districts change over time.

Bellwether districts are not constant. There is no reason to believe that particular areas are populated by individuals with prescient knowledge for accurately predicting election outcomes repeatedly over time. As varying demographics rotate in and out of neighborhoods and change, so do bellwether districts. So what is it about the bellwether districts in the Suffolk poll that offers better prediction? In short, it is not the district itself but, rather, how the district is selected. Proposition 1, thus, begs the question, How does one identify bellwether districts? The answer, reflected as Proposition 2, is as follows:

Proposition 2: Bellwether districts are identifiable based on “like-election” criteria.

A systematic review of voting patterns in like-elections over time is required to effectively identify bellwether districts. Like-election criteria are characteristics such as party affiliation, whether there is an incumbent candidate, and election type (local preliminary or runoff, primary, national-level election). To choose bellwethers, postdictive election results are examined in consideration of race types and conditions. In other words, when past election situations mirror present election conditions, districts that chose the winning candidate then may be bellwethers now. The process for selecting the bellwethers for the New Hampshire presidential primary election is described in the next section.

These thoughts motivate a third question, How should estimates from bellwether samples be used? Bellwether districts are determined because they meet specific criteria defined by voting patterns in past like-elections. Thus, districts are chosen in a non-probabilistic manner akin to purposive sampling (Churchill & Iacobucci, 2002,


p. 455). Sampling of likely voters within the bellwether district is probability based, with random-digit dialing. The size of a bellwether sample is, on average, 300 likely voters. Bellwether samples are large enough to get a reasonable sense of voter intention, but statistical generalization estimates (confidence and MOE) are not computed. Bellwether sample size is another characteristic more akin to art than to traditional scientific principles of polling. In short, the SUPRC employs bellwether samples as additional information in analyzing and reporting statewide polls; thus, Proposition 3 is as follows:

Proposition 3: Bellwethers are “sister tests” to be used as a complement to probability-based polls.

Proposition 3 has precedent in the opinion research knowledge base. Perry (1979) recounts methodology at Gallup in the 1930s where a purposive sample design “was used in the selection of cities, towns, and rural areas, and the quota method [probability sample] was used for the selection of respondents within a sampling area” (p. 313). Within those districts, random-digit dialing is used to sample likely voters. So in practice, the bellwether samples are complements to probability-based polls; they are not used in isolation or as substitutes for statewide polls. This is consistent with comments of Blumenthal (2005), who advocates tolerance in using non-probability methods for complementary “projective, qualitative” views of voters (p. 667).

Paleologos’s use of bellwethers began some 20 years ago when he began his career as a political consultant. He developed the method on the basis of observations and experiences in polling to advise candidate clients over the years. Paleologos, a longtime member of AAPOR, also learned best practices from attending conferences and talking about these ideas with others in the opinion research practitioner community. He said,

   I developed my approach to the bellwether pretty early on because it gave me a way to offer more value to my clients. And in that way, I gained an edge over other pollsters which helped me survive in the business. When I became director of SUPRC, I brought with me the knowledge of the bellwether technique and how to use it. I’m glad that I could bring this to Suffolk University because it adds value to the polling work we do. (personal interview, March 9, 2009)

Informing Practice: Bellwether Selection and Use

In this section, we describe, in detail, the process used to identify bellwether districts and to sample within those districts. We return to the situation in the New Hampshire presidential primary to illustrate how this process unfolded.

Selection of bellwether districts. Selection of the bellwether districts is a reduction process. One begins with all districts in consideration as being bellwethers. By comparing voting records in like-elections, districts that voted for the winning candidate


are identified across a set of trials. Two past elections are “like” that of 2008 in terms of election type (a presidential primary) and incumbent status (no incumbent president): the 1988 and 2000 presidential elections. Data for these years for the New Hampshire primary elections were obtained by voting ward (n = 300) for the analysis.3 Most wards represent an entire town; only 13 of 226 towns were divided into two or more wards. So in this situation, the bellwether district is at the level of the voting ward.

The bellwether selection process for New Hampshire is in two phases. The first phase uses the 1988 election results as follows.

1. Enter vote counts for all wards for half of the leading candidates in the primary race.
2. Compute the percentage share of voters in the ward for each candidate.
3. For each candidate, estimate the difference between the percentage of votes in the ward and the percentage of votes in the election outcome statewide.
4. If the difference between a candidate’s share in the ward and that candidate’s share in the statewide outcome is less than approximately 4%, retain the ward for further consideration. Otherwise, do not consider it further as a potential bellwether.

To remain in contention as a bellwether, a ward needs to demonstrate close predictions across successive candidates, or trials. In the 1988 New Hampshire primary, the bellwether selection process was randomly started with the three Republican candidates (George H. W. Bush, Dole, Kemp). After three candidate trials, 28 of 300 wards, approximately 10%, remained in contention. A fourth candidate trial (Dukakis) was introduced; 16 wards were eliminated. When all six candidate trials (Gephardt and Simon are Trials 5 and 6, respectively) for the 1988 election were considered, the pool of bellwethers was reduced to six districts.

The second phase in the selection process introduced data from the 2000 election, which further narrowed down the wards. Only two wards, the towns of Sandown and Kingston, had voting patterns that mirrored the election outcomes (within ±4%) for both races across the 11 total trials possible in the 1988 and 2000 races. The frequency distribution of the number of bellwether wards by correct trials is shown in Table 6; a code sketch of the reduction procedure follows the table. The result was that Sandown and Kingston accurately mirrored the statewide outcomes of both the Democratic and Republican primaries in two like-elections.

Selection of the bellwethers is a mixture of science and art. The science is in the systematic consideration of election results over time (postdictive analysis). The art is the qualitative judgment for treating the election events. In the case of New Hampshire, the two like-elections (1988 and 2000) were weighted equally. Paleologos explained that “we wanted two election events to determine the bellwethers” (personal interview, DATE). Unequal weighting might be judged better in a situation where like-elections are old or separated by large time gaps.


Table 6. Finding the Bellwethers

Number of correct trials   Number of bellwether districts remaining
Up to 3                                     28
4                                           12
5                                            7
6                                            6
7                                            6
8                                            5
9                                            4
11                                           2
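In code, the reduction step amounts to a per-ward tolerance filter applied over successive candidate trials. The sketch below is our paraphrase of the published procedure, with hypothetical data structures (ward-level percentage shares keyed by candidate); it is not SUPRC code.

    # Bellwether reduction sketch: keep only wards whose share for every
    # candidate "trial" tracks that candidate's statewide share within a
    # tolerance. Inputs are hypothetical; real data would come from
    # ward-level returns (e.g., the secretary of state's office).

    TOLERANCE = 4.0  # percentage points, per the approximately-4% criterion

    def surviving_wards(ward_shares, statewide_shares, candidates):
        """ward_shares: {ward: {candidate: pct}}; statewide_shares:
        {candidate: pct}. Returns the set of wards within TOLERANCE of
        the statewide share on every candidate trial."""
        return {
            ward
            for ward, shares in ward_shares.items()
            if all(abs(shares[c] - statewide_shares[c]) <= TOLERANCE
                   for c in candidates)
        }

    # Phase 1 (1988 trials), then phase 2 (2000 trials) on the survivors:
    # pool = surviving_wards(wards_1988, statewide_1988, trials_1988)
    # bellwethers = pool & surviving_wards(wards_2000, statewide_2000,
    #                                      trials_2000)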

Sampling in bellwether districts. New Hampshire bellwether districts were sampled in June 2007, in November 2007, and on January 6, 2008. These times for data collection were chosen in light of two factors. First, there were no campaign events that could possibly affect voter preference (Hillygus & Jackman, 2003), and second, there were no extraneous local events that could affect data collection. For example, no sampling was done in December 2007, when Oprah Winfrey held a national media event in support of Barack Obama (a campaign event). Drawing on established principles of experimentation, it is important to sample bellwether districts in “normal” conditions to minimize bias from extraneous effects.

Other considerations are the composition of bellwether samples in terms of respondent age and gender. Paleologos explained that

   in the bellwether samples, the goal is to get as many likely voters as possible to respond. In essence the bellwether becomes a “simulated election.” If adjustments by age and gender are needed [to match the characteristics of the district], we do that after the data are collected. (personal interview, March 16, 2009)

A sketch of one such after-the-fact adjustment follows this subsection. The postdictive results from the two New Hampshire bellwethers are shown in Table 7. For all 11 trials, the bellwether estimates are within 3 percentage points of the election outcome. For 10 of the trials, bellwether estimates are within 2 points of the election outcome.

Interpreting bellwether results. Estimates from the bellwether samples serve as sister tests to the corresponding statewide poll. Going back to Table 5, the numbers in the bellwether samples collected in correspondence with the Suffolk poll of January 6 indicate a lead for Clinton, which was borne out in the election. On the Republican side, the bellwether numbers reflected a backyard effect. In reporting the polls, Paleologos used qualitative judgment to temper his discussions with the media in making verbal projections of the election outcomes, given statewide poll numbers in view of the bellwether data. When bellwether sample estimates agree with statewide poll numbers, the pollster has corroborating information. When the numbers differ, bellwether data are reviewed for qualitative factors, such as the backyard effect or other extraneous factors. The interpretation of poll results in light of bellwether indicators is another area that requires qualitative evaluation by the pollster.
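The age-and-gender adjustment described above is, in effect, a simple post-stratification: reweight respondents so the sample's demographic mix matches the district's known composition. A minimal sketch of one way to do this (our illustration; the cells, targets, and data are hypothetical, not SUPRC's):

    from collections import Counter

    # Hypothetical respondents: (age/gender cell, stated preference).
    respondents = [
        ("F65+", "Clinton"), ("M18-44", "Obama"), ("F45-64", "Clinton"),
        ("M65+", "Clinton"), ("F18-44", "Obama"), ("M45-64", "Obama"),
    ]
    # Hypothetical district composition (cell proportions sum to 1.0).
    district_targets = {"F18-44": 0.18, "F45-64": 0.17, "F65+": 0.16,
                        "M18-44": 0.17, "M45-64": 0.17, "M65+": 0.15}

    cell_counts = Counter(cell for cell, _ in respondents)
    n = len(respondents)

    weighted = Counter()
    for cell, choice in respondents:
        # Weight = target share of the cell / observed share of the cell.
        weighted[choice] += district_targets[cell] / (cell_counts[cell] / n)

    total = sum(weighted.values())
    for candidate, w in sorted(weighted.items()):
        print(f"{candidate}: {w / total:.1%}")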


Table 7. Predictive Accuracy of Bellwether Samples in Like-Elections, New Hampshire Primaries, No Incumbent President (in percentages)

Candidate          Actual   Bellwether 1: Sandown   Bellwether 2: Kingston
1988 Republican
  G. H. W. Bush      38              38                       39
  Dole               29              27                       30
  Kemp               13              14                       13
1988 Democratic
  Dukakis            49              47                       48
  Gephardt           27              31                       25
  Simon              24              22                       27
2000 Republican
  McCain             53              53                       54
  G. W. Bush         33              31                       33
  Forbes             14              16                       13
2000 Democratic
  Gore               52              55                       51
  Bradley            48              45                       49

Performance of the bellwethers. The SUPRC continued sampling bellwether districts for the remainder of the 2008 primary season. Including New Hampshire, data for 13 state races are available for analysis, as shown in Table 8. If we consider the 13 state races, the Suffolk poll and bellwether sample(s) were in agreement in 8 races. The prediction in terms of the winning candidate was accurate across all trials. For the 5 races where there was disagreement, the bellwether districts correctly called the winning candidate in 4 races (80%). The California Republican race was one instance where the Suffolk statewide poll was correct and the Fresno County bellwether was incorrect. If we consider the data at the level of the bellwether, we find intriguing results. A total of 17 bellwether districts were sampled across the pool of primary races (in some instances, 2 bellwether districts were sampled per race). For 16 of 17 districts (94%), the bellwether prediction accurately called the election winner.

Table 9 provides data for the comparison of results of Suffolk polls and bellwether samples in various states leading up to the national election. The statewide poll prediction was corroborated by the bellwether sample estimate for 11 of 13 districts (85%). The only situation of dissonance was in the second Florida poll (October 27), where the candidates were tied (at 42%) in both bellwethers (Hillsborough and Monroe counties).

Furthermore, historical data from the SUPRC include six more trials where bellwethers were used (the 2006 Massachusetts gubernatorial primary and election). In those six trials, the bellwether sample correctly predicted the election outcome. Across all uses of the bellwether at SUPRC to date, bellwether predictions have matched election outcomes in 33 of 36 cases, a “hit rate” of 92%. This finding is compelling and highlights the


Table 8. Prediction Analysis of Statewide Polls and Bellwethers for 2008 State Primary Races

Primary race               Suffolk poll      Bellwether prediction               Agree?   Election winner
                           prediction (a)
New Hampshire Democratic   Obama by 5        Sandown: Clinton by 15;             No       Clinton by 3
                                             Kingston: Clinton by 13
New Hampshire Republican   Romney by 3       Sandown: Romney by 11 (b);          Yes      McCain by 5
                                             Kingston: Romney by 2 (b)
Florida Republican         McCain by 3       McCain by 2                         Yes      McCain by 5
Massachusetts Democratic   Obama by 2        Waltham: Clinton by 4;              No       Clinton by 15
                                             Stoneham-Nahant: Clinton by 13
Massachusetts Republican   Romney by 13      Waltham: Romney by 12               Yes      Romney by 10
California Democratic      Obama by 1        Fresno County: Clinton by 22        No       Clinton by 10
California Republican      McCain by 7       Fresno County: Romney by 15         No       McCain by 8
Ohio Democratic            Clinton by 12     Morgan County: Clinton by 30;       Yes      Clinton by 10
                                             Greene County: Clinton by 11
Pennsylvania Democratic    Clinton by 10     Allegheny County: Clinton by 12     Yes      Clinton by 9
Indiana Democratic         Clinton by 6      Delaware County: Clinton by 7       Yes      Clinton by 2
West Virginia Democratic   Clinton by 36     Mason County: Clinton by 49         Yes      Clinton by 41
Kentucky Democratic        Clinton by 28     Montgomery County: Clinton by 28    Yes      Clinton by 36
Oregon Democratic          Obama by 4        Marion County: Obama by 2           Yes      Obama by 18

a. Suffolk poll closest to the election.
b. Suffolk poll reported backyard effect for Romney.

value of bellwether data as complementary information to supplement statewide polling results. Bellwethers offer pollsters greater depth of insight and knowledge in reporting poll results, which may, in turn, offer more informed judgments and opinions about


Table 9. Prediction Analysis of Statewide Polls and Bellwethers for 2008 General Election

State                       Suffolk poll      Bellwether prediction                Agree?   Election winner
                            prediction (a)
Ohio                        Obama by 9        Perry County: Obama by 2             Yes      Obama by 4
Virginia                    Obama by 12       Accomack County: Obama by 1;         Yes      Obama by 6
                                              Chesapeake City: Obama by 6
Colorado (August)           Obama by 5        Alamosa County: Obama by 4           Yes      Obama by 8.5
Colorado (October)          Obama by 4        Alamosa County: Obama by 2           Yes      Obama by 8.5
Missouri                    McCain by 1       Platte County: McCain by 7           Yes      McCain by 0.1
Florida (October 28)        Obama by 5        Hillsborough County: Tie;            No       Obama by 2.5
                                              Monroe County: Tie
Florida (October 1)         Obama by 4        Hillsborough County: Obama by 11;    Yes      Obama by 2.5
                                              Monroe County: Obama by 4
New Hampshire (October)     Obama by 13       Epping/Tamworth: Obama by 10         Yes      Obama by 9.5
New Hampshire (September)   Obama by 1        Epping/Tamworth: Obama by 6          Yes      Obama by 9.5
Nevada                      Obama by 10       Washoe County: Obama by 4            Yes      Obama by 12

a. Suffolk poll closest to the election.

voter intentions. Paleologos uses an analogy of an engine to describe the place of the bellwether:

   The bellwether is a valuable added test. For our work at Suffolk University, we think about two things. First, our methodology for statewide polls is sound; we follow AAPOR guidelines in the most rigorous sense. Given that, the bellwether is a supplemental indicator that should not be ignored. It’s like a turbo charge for an engine. It gives that extra kick in terms of precision and it gets you to an accurate call faster. But you can’t have the turbo without the engine in the first place. (personal interview, April 8, 2009)


Discussion

The emergent finding from our analysis is that bellwether data add insight and value for understanding voter intentions. How, then, might this information fit within traditional polling practice? We suggest that bellwether data can make a contribution in two ways: by being a “last best indicator” prior to elections and possibly as an alternative to traditional exit polling. We consider these two ideas next.

First, the bellwether sister test, in addition to statewide poll data, can be used by the pollster or media to better inform the public during the months and days leading up to an election, as illustrated in the case of the 2008 New Hampshire primary. This was especially the case in the Democratic race, where the bellwethers consistently indicated a Clinton win. Also, on the Republican side, the bellwether data, in conjunction with recognition of the backyard effect, were evidence that Romney was not as strong as statewide polls indicated. As shown in Tables 8 and 9, bellwether data were, directionally, a last best indicator of candidate standing across numerous races. In a number of cases, we see bellwether estimates being very close to the actual percentage share of voters for the winning candidate. So, as an added test to a statewide poll, bellwether data may offer the pollster (and the media, in general) an edge in correctly calling the race, up or down, for a candidate.

This idea is supported by 1988 Democratic presidential nominee and former governor of Massachusetts Michael Dukakis. In reflecting on the 2008 New Hampshire primary, he said, “You [the SUPRC] were right in reporting the situation in New Hampshire. And in other state races, use of the bellwethers allowed the Suffolk poll to pick up voter intention dynamics quicker than other polling organizations” (M. Dukakis, personal communication, May 2009).

Dukakis’s observation is substantiated empirically in a post hoc analysis of the results for state races in the general election reported in Table 9. At the beginning of a bounded time period (up to 10 weeks, depending on proximity to the election), we computed an average of RealClearPolitics poll results (the pre-Suffolk estimate) for the candidates. At the midpoint of the time period, we recorded the Suffolk poll result. We then computed the poll average for the remainder of the time period (the post-Suffolk estimate). Figure 4 provides graphs for three state races as illustrative examples. The trend line is superimposed on the bars to show how the Suffolk poll was a leading indicator for voter preferences, as borne out in later polls.

Briefly, results for Ohio show the Suffolk poll projecting a gain in momentum for Obama; this is validated in the post-Suffolk poll average. The Ohio bellwether projected an Obama win by 4 points, which corresponded exactly to the election outcome. The situation was similar in Virginia: the Suffolk poll projected stronger support for Obama, and the Chesapeake bellwether projected an Obama win by 6 points, which was mirrored in the election outcome. Finally, the results for Missouri are most interesting and illustrative of Dukakis’s statement. The pre-Suffolk poll average showed Obama ahead by 4 points; the Suffolk poll showed a McCain lead by 1, and the bellwether has


Figure 4. Suffolk poll and bellwether comparisons for selected state races in 2008 national election

McCain ahead by 7. The post-Suffolk average showed the Obama lead decreasing to 1.75 points, and McCain won Missouri by a very small margin (0.1%).

In sum, for 9 of 10 state race comparisons in Table 9, Dukakis’s observation holds: the Suffolk statewide poll with bellwether was predictive of the voter trend. Furthermore, in several instances, the bellwether was very close to the actual election outcome (e.g., Ohio, Virginia/Chesapeake, New Hampshire on October 30). In the interest of brevity, we show data for the three states discussed above; the complete set of graphs and the data used to generate them are available from the authors.

Although the bellwethers identified the winning candidate in most cases (92%), the technique is not a fail-safe test by any means. There were clear misses in the California Republican race (Table 8) and inconclusive findings for the October 28 survey in Florida (Table 9). Nevertheless, the pattern of correct predictions leads us to believe that this methodology has promise for improving practice in public opinion research. This point leads to our second suggestion for use of bellwethers as a supplement or alternative to traditional exit polls.
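The bookkeeping behind this post hoc analysis is a windowed average: pool the other polls released before the Suffolk midpoint, record the Suffolk number, then pool the polls that followed. A sketch with hypothetical Missouri-style margins (positive values meaning an Obama lead) shows the pattern; the dates and margins below are illustrative only, not the actual RealClearPolitics data:

    from statistics import mean

    # (ISO date, margin); positive = Obama lead, negative = McCain lead.
    polls = [
        ("2008-09-10", 4.0), ("2008-09-18", 5.0), ("2008-09-25", 3.0),
        ("2008-10-02", -1.0),  # the midpoint (Suffolk-style) poll
        ("2008-10-12", 2.5), ("2008-10-20", 1.0),
    ]
    midpoint = "2008-10-02"

    before = [m for d, m in polls if d < midpoint]  # pre-Suffolk estimate
    after = [m for d, m in polls if d > midpoint]   # post-Suffolk estimate
    mid = next(m for d, m in polls if d == midpoint)

    print(f"pre-midpoint average:  {mean(before):+.2f}")
    print(f"midpoint poll:         {mid:+.2f}")
    print(f"post-midpoint average: {mean(after):+.2f}")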


Data from bellwether districts immediately prior to the election may offer advantages compared to traditional exit polling. This is important to the media in that they often use exit poll results as an “early reading” on the election outcome—“a matter of some controversy among polling professionals and broadcasters” (Levy, 1983, p. 63). Indeed, the purpose of exit polls, according to AAPOR (2009b), is to explain why the election turned out the way it did, not to make predictions. Furthermore, it is well documented that exit polls suffer from problems such as early information leaks and error from various sources (AAPOR, 2009b). For example, Traugott and Price (1992) concluded that social desirability effects associated with face-to-face interviewers (as opposed to anonymous responses to a self-report survey) biased exit poll results in the 1989 Virginia gubernatorial race. Other possible threats are random factors, such as weather, and nonrandom factors, such as interviewer characteristics (language, fatigue), location of the interviewer, precinct rules (interviewer proximity boundaries), time of day, and respondent refusal.

We suggest that bellwether data can be helpful in combination with exit poll data to determine consistencies or to shed light on unexpected developments. Paleologos summarized the situation as follows:

   The bellwether is an added tool for prediction. The major networks and the Associated Press all pool resources and hire one firm to do exit polling for a national race. Everybody shares the same information. The exit poll results are not supposed to be released until the statewide polls close. But sometimes the numbers get out early. An alternative would be for a commission [of media organizations] to do bellwether tests immediately prior to the election—like the day or night before the polls open. This would allow everyone to have a second look at the data right before the election. The concept of exit polling is right, but the execution can be problematic. With a bellwether sample, potential sources of bias can be reduced or eliminated. The respondent is at home, there is little chance for administrator error [or] bias, the survey is only 1 or 2 minutes long, and responses are anonymous. For exit polls, there is a requirement for random selection—sometimes this is hard to achieve in the field. In the quiet of a call center, you can adjust for sampling externalities. (personal interview, April 8, 2009)

The idea of a bellwether as an alternative for exit polling is also appealing because of the similarities in the precinct selection process. Like bellwethers, the selection of exit poll precincts is based on voting patterns in recent past elections (Edison Media Research, 2009).

Conclusion

The purpose of this article has been to advance the thesis that pre-election polling may offer better information if statewide polls are combined with bellwether samples as sister tests.


We suggest that the opinion research community consider this approach because it is in agreement with Blumenthal's (2005) recommendation:

We should continue to put our trust in probability sampling, regardless of the mode of data collection. Nonetheless, no survey is infallible. The more we can do to validate our samples against reliable benchmarks, the greater the confidence we can all place in our work. (p. 666)

Bellwether samples appear to be one important type of benchmark against which statewide poll results can be evaluated for qualitative insight. Additional field testing, accompanied by meta-analysis of the accumulated results, is needed to further validate the methodology.

In conclusion, our contribution in this article has been to offer new insights for theory and practice as to the nature of bellwether districts, how they are selected, and how the resulting data can be used. We offer this information to the opinion research community in the spirit of an open-source methodology (Blumenthal, 2005).

Authors' Note

The authors are listed alphabetically. Both contributed equally to the creation of this article.

Acknowledgments

The authors thank Elliot Kim for editorial and research assistance. We are grateful to John Berg for comments and suggestions on this article.

Declaration of Conflicting Interests

The author(s) declared no conflicts of interest with respect to the authorship and/or publication of this article.

Funding

The author(s) received no financial support for the research and/or authorship of this article.

Notes

1. Transcribed from YouTube (2008).
2. The method for choosing the bellwether districts is elucidated later in this article.
3. Such data are usually available from the state-level office of the secretary of state.

References

American Association for Public Opinion Research. (2009a, March). An evaluation of the methodology of the 2008 pre-election primary polls (M. Traugott, ad hoc committee chair). Retrieved from http://www.aapor.org/uploads/AAPOR_Press_Releases/AAPOR_Rept_of_the_ad_hoc_committee.pdf
American Association for Public Opinion Research. (2009b). Explaining exit polls. Retrieved from http://www.aapor.org


Blumenthal, M. M. (2005). Toward an open-source methodology: What we can learn from the blogosphere. Public Opinion Quarterly, 69(5), 655-669.
Churchill, G. A., & Iacobucci, D. (2002). Marketing research (8th ed.). Fort Worth, TX: Harcourt College.
Daves, R. P., & Newport, F. (2005). Pollsters under attack: 2004 election incivility and its consequences. Public Opinion Quarterly, 69(5), 670-681.
Edison Media Research. (2009). Exit polls. Retrieved from http://www.exit-poll.net
Frankovic, K. A. (2005). Reporting "the polls" in 2004. Public Opinion Quarterly, 69(5), 682-697.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Hillygus, D. S., & Jackman, S. (2003). Voter decision making in Election 2000: Campaign effects, partisan activation, and the Clinton legacy. American Journal of Political Science, 47(4), 583-596.
Jacobs, L. R., & Shapiro, R. Y. (2005). Polling politics, media, and election campaigns. Public Opinion Quarterly, 69(5), 635-641.
Kenski, H. C., & Dreyer, E. C. (1977). In search of state presidential bellwethers. Social Science Quarterly, 58(3), 498-505.
Levy, M. R. (1983). The methodology and performance of Election Day polls. Public Opinion Quarterly, 47(1), 54-67.
Martin, E. A., Traugott, M. W., & Kennedy, C. (2005). A review and proposal for a new measure of poll accuracy. Public Opinion Quarterly, 69(3), 342-369.
Murray, G. R., Riley, C., & Scime, A. (2009). Pre-election polling: Identifying likely voters using iterative expert data mining. Public Opinion Quarterly, 73(1), 159-171.
Perry, P. (1979). Certain problems in election survey methodology. Public Opinion Quarterly, 43(3), 312-325.
RealClearPolitics.com. (2008a). Election 2008: The New Hampshire Democratic primary. Retrieved from http://www.realclearpolitics.com/epolls/2008/president/nh/new_hampshire_democratic_primary-194.html#polls
RealClearPolitics.com. (2008b). Election 2008: The New Hampshire Republican primary. Retrieved from http://www.realclearpolitics.com/epolls/2008/president/nh/new_hampshire_republican_primary-193.html
Suffolk University. (2008a, January 6). Poll: It's neck and neck in New Hampshire. Retrieved from http://www.suffolk.edu/26006.html
Suffolk University. (2008b, January 7). Poll: NH voters see Obama presidency. Retrieved from http://www.suffolk.edu/26043.html
Traugott, M. W. (2005). The accuracy of the national preelection polls in the 2004 presidential election. Public Opinion Quarterly, 69(5), 643-654.
Traugott, M. W., & Price, V. (1992). Exit polls in the 1989 Virginia gubernatorial race: Where did they go wrong? Public Opinion Quarterly, 56(2), 245-253.
Tufte, E. R., & Sun, R. A. (1975). Are there bellwether electoral districts? Public Opinion Quarterly, 39(1), 1-18.
YouTube. (2008). Tom Brokaw lectures Chris Matthews [NBC News broadcast]. Retrieved from http://www.youtube.com/watch?v=NDGkOgtz-f8&feature=related


Bios

David Paleologos is the director of the Suffolk University Political Research Center (SUPRC), where he works in partnership with WHDH-TV (Boston) and WSVN-TV (Miami) conducting polls, including for the most recent presidential election. The SUPRC has gained national prominence, with poll results reported by major print, broadcast, cable, and new media outlets around the world. He is an adjunct faculty member in Suffolk's Department of Government, where he teaches one of the department's most popular courses, Political Survey Research, and is a guest lecturer on political survey research at Emerson College, Tufts University, and the Institute of Politics at Harvard University. He is the founder and CEO of DAPA Research and received his BA in economics from Tufts University. He is a member of the World Association for Public Opinion Research, the American Association for Public Opinion Research, and the Northeast Political Consultants Association.

Elizabeth J. Wilson is an associate professor and chairperson of the Department of Marketing in the Sawyer Business School at Suffolk University. She received her PhD in business administration from Pennsylvania State University in 1989 and has held faculty appointments at Louisiana State University, the University of New South Wales, and Boston College. Her recent research spans several areas, including case research methods, social partnerships among organizations, brand strategy, and buyer behavior. She teaches courses in marketing research and marketing principles to undergraduate and graduate students in traditional (live) and online environments.
