Crash Test Dummies? The Impact of Televised Automotive Crash Tests on Vehicle Sales and Securities Markets

Stephen W. Pruitt and George E. Hoffer

Broadcasts of Insurance Institute for Highway Safety high-speed automotive crash tests on NBC’s Dateline do not affect either market shares or stock prices. However, injury claims within a market class are highly correlated with crash test ratings. Inattention to the tests may be a consequence of the manner in which the crash tests have been presented and because the crashworthiness of the U.S. vehicle fleet is positively correlated with consumer estimates of expected vehicle safety.

Following the passage of the National Traffic and Motor Vehicle Safety Act of 1966 and the Highway Safety Acts of 1966 and 1970, the federal government effectively assumed its current role as the primary regulator of vehicle and roadway safety. Originally established in 1970, the National Highway Traffic Safety Administration (NHTSA)—initially through automotive safety recall campaigns and since 1979 through its New Car Assessment Program vehicle crash tests (U.S. Department of Transportation 2003)—has provided seemingly unbiased automotive quality and safety information to the public. Before 1966, however, private-sector organizations such as the Consumers Union (CU) served as the primary providers of information pertaining to automotive safety. Much of CU’s information has had limited dissemination to the general public, as CU has vigorously fought the commercial use of its assessments. In recent years, a second nonprofit, private-sector organization has become prominent. The Arlington, Virginia-based Insurance Institute for Highway Safety (IIHS) has employed network, primetime television broadcasts to disseminate the results of its crash test programs to millions of U.S. consumers. From April 1995 to January 2004, more than 25 frontal, offset crash test program segments (representative of approximately 200 automotive models) were broadcast on NBC’s Dateline NBC (hereinafter, Dateline) news magazine. The purpose of this study is to analyze quantitatively the impact of these frontal, offset crash tests on both vehicle sales and manufacturer stock prices.


Stephen W. Pruitt holds the Arvin Gottlieb/Missouri Endowed Chair in Business Economics and Finance and is Chair of the Department of Finance, Information Management & Strategy, Henry W. Bloch School of Business and Public Administration, University of Missouri–Kansas City (e-mail: [email protected]). George E. Hoffer is Professor of Economics, School of Business, Virginia Commonwealth University (e-mail: [email protected]). The authors are grateful to the three anonymous JPP&M reviewers for many helpful comments on previous drafts and to the Arvin Gottlieb/Missouri Endowment for funding assistance.


We are unable to document any consistent evidence that either consumer or securities markets responded to either positive or negative crash test results. However, because IIHS crash test results are shown to be highly correlated with actual market-class–adjusted injury claim frequencies, we investigate why markets seemingly choose to ignore this useful information. In the next section, we present a directed literature review. In the subsequent sections, we describe the institutional participants and data sets and develop our models. We then present the empirical results and conclude with a summary of the significant policy implications.

Previous Results

During the past 30 years, a significant body of research has developed that pertains to the economic impacts of automotive safety regulation.1 From initial cost–benefit analyses (Crandall and Graham 1989; Crandall, Keeler, and Lave 1982; Lave and Weber 1970) to the offsetting behavior debate (Harless and Hoffer 2003; Peltzman 1975; Peterson and Hoffer 1994), this work has progressed multidimensionally. However, rather than attempt to summarize this entire literature, we review only that research that focuses on the impact of safety-related information on consumer behavior and securities markets.

More than 20 years ago, scholars found evidence that the sales rates of automotive lines involved in NHTSA-influenced automotive recall campaigns were adversely affected during the month of the recall announcement (Crafton, Hoffer, and Reilly 1981; Reilly and Hoffer 1983). More recent work has reported that NHTSA-influenced recall campaigns disproportionately involve smaller manufacturers’ older model vehicles whose problems are relatively less hazardous (Rupp and Taylor 2002). This latter research also supports prior work (Hoffer, Pruitt, and Reilly 1994) that reports that motor vehicle return response rates are positively correlated with the perceived danger of the safety-related problem and inversely correlated with the age of the model recalled.

1 Separate technical and engineering literature exists on the efficacy of specific vehicle safety appliances. Much of this literature is referenced in Accident Analysis and Prevention.



Research into capital market responses to safety-related automotive recall campaigns (Hoffer, Pruitt, and Reilly 1987; Jarrell and Peltzman 1985; Pruitt and Peterson 1986; Pruitt, Reilly, and Hoffer 1986) has documented evidence of statistically significant decreases in the stock prices of the affected manufacturers at the time of recall but no share price impact on competitor firms (Hoffer, Pruitt, and Reilly 1988). In addition, there is no evidence that the stock price responses to federally mandated recall campaigns differ from those in response to voluntary recalls by the manufacturer (Rupp 2001).

In contrast with the recall literature, only one study addressing consumer responses to NHTSA high-speed, full-frontal crash tests has appeared in the literature (Hoffer, Pruitt, and Reilly 1992). This study finds that consumer markets do not respond to the publication of either “good” or “poor” survivability statistics. These findings may reflect the limited dissemination of the NHTSA crash test results, however, because the NHTSA has had no formal relationship with any private-sector media organization.

Recently, Farmer (2004), of the IIHS, studied the relationship between the IIHS’s frontal, offset crash test ratings and fatality rates. He concludes that better rated crash-tested vehicles tend to have lower driver fatality rates. However, the relationships were not uniform across all market classes, nor were they consistently statistically significant. Farmer does not attempt to analyze the impact of the IIHS tests on consumer or financial markets.

In light of the prominent roles that automotive safety and quality metrics have assumed in the literature, it is somewhat surprising that the market impact of the IIHS’s crash tests has yet to be investigated. We present the results of such tests.

The IIHS and Dateline

The IIHS and its affiliate, the Highway Loss Data Institute (HLDI), have become the most publicly visible entities researching automotive safety issues. For more than 20 years, HLDI has published relative injury and property damage experiences annually per vehicle line. For almost 10 years, the IIHS has conducted high-speed, frontal, offset crash tests. Performed at its rural Virginia crash test facility, this crash test research spans the full spectrum, from low-speed bumper taps to high-speed, frontal, offset crash tests. The IIHS simultaneously releases its crash test results through several media outlets, including its own Web site, press releases, and, most important, the NBC primetime news magazine Dateline NBC. Other NBC news venues, such as MSNBC and the NBC Nightly News, often run complementary stories.

Broadcast since 1992, Dateline is NBC’s multinight, primetime news magazine franchise, averaging approximately ten million viewers per broadcast (Variety 2002). Program segments featuring the IIHS crash tests generally run from 10 to 15 minutes in length. Each segment includes both normal and slow-motion footage of the tests; postcrash structural analysis and editorial comments are provided by an IIHS officer or scientist in an interview format conducted by a Dateline anchor or correspondent.


Broadcasts of high-speed IIHS crash tests have appeared on Dateline since 1995 and are featured at irregular intervals approximately three times each calendar year.

Data and Empirical Methodology

Data

IIHS Crash Tests

Every 40 miles-per-hour (mph) frontal, offset crash test conducted by the IIHS and telecast on NBC’s Dateline news magazine between April 1995 and January 1, 2002, was included in the initial database. Crash test telecast dates were determined from a guided news search of NBC News transcripts on LexisNexis. Because all programs were telecast after market hours, the next market business day was considered the test date for securities analysis purposes. Crash tests subsequent to January 1, 2002, or those that involved foreign manufacturers not traded on U.S. stock exchanges were excluded from the analysis because data were unavailable on the University of Chicago’s Center for Research in Security Prices (CRSP) database. The IIHS evaluations are based on a vehicle’s performance when it is crashed offset at 40 mph into a deformable barrier. The IIHS notes that its crashworthiness ratings are based on occupant compartment intrusion, injury measures from a 50th percentile (physical size) male Hybrid III crash test dummy positioned in the driver’s seat, and slow-motion film analysis (IIHS 2002). Final overall evaluations are classified according to a four-part scale (“good,” “acceptable,” “marginal,” and “poor”).

NHTSA Crash Test Scores

Since 1979, the NHTSA has conducted full-frontal crash tests of passenger vehicles at 35 mph into a fixed barrier. The IIHS (2002) has noted that the two types of frontal tests complement each other. To analyze the consistency among the several sources of publicly available safety data, line-specific vehicular safety information was obtained for both the driver and front-seat passengers from NHTSA. Unlike the IIHS, NHTSA uses unit volume sales data as criteria in choosing which vehicles to crash test. Therefore, many low-volume vehicles crash tested by the IIHS (in particular those in the upper price levels) have no NHTSA counterpart. Since the early 1990s, NHTSA has employed a five-gradient star system, in which a rating of five stars represents the best crash test performance.

Stock Prices

Following standard practice, the University of Chicago’s computerized CRSP database served as the source for all analyzed stock market data. Thus, only publicly traded U.S. manufacturers and foreign-based companies with U.S.-traded American Depository Receipt shares listed on the tape were included in the analysis. Whereas most major automotive manufacturers are listed on the CRSP tape, data for a few companies (BMW, Fuji, Hyundai, Kia, and Volkswagen) were not available and therefore were excluded from further study.
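As a brief illustration of the event-dating rule noted above (broadcasts air after market hours, so the next trading day serves as event day t = 0 for the securities tests), the following Python sketch maps broadcast dates to the first subsequent business day. The column names and exact dates are hypothetical, and a plain weekday calendar stands in for an exchange trading calendar; this is not the authors' actual procedure.

```python
import pandas as pd

# Illustrative broadcast records (model names from the text; the dates are hypothetical).
broadcasts = pd.DataFrame({
    "model": ["Nissan Maxima", "Ford F-150"],
    "broadcast_date": pd.to_datetime(["1995-04-25", "2001-06-19"]),
})

# A simple weekday calendar stands in for an exchange trading calendar.
trading_days = pd.bdate_range("1995-01-01", "2002-01-01")

def next_trading_day(date, calendar):
    """Broadcasts air after the close, so event day t = 0 is the first
    trading day strictly after the broadcast date."""
    return calendar[calendar > date][0]

broadcasts["event_day_0"] = broadcasts["broadcast_date"].apply(
    lambda d: next_trading_day(d, trading_days)
)
print(broadcasts)
```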


Vehicle Sales and Market Class Data

Data on monthly unit sales for each tested vehicle line and for virtually all like-class competitor models for the month before, the month of, and two months following each IIHS crash test were obtained from various issues of Automotive News (1995–2002). Market classification categories were identical to those defined by Automotive News for tests conducted between 1995 and 1998 and by Consumer Guide (1998–2002) for post-1998 crash tests. Models were excluded from the like-class cohort if they were clearly in a ramp-up or phase-out stage of the product life cycle. For example, sales of the Oldsmobile Aurora were not included in the near-luxury product segment at the time of its IIHS crash test because this vehicle line was being phased out at the time of the test. In addition, a few specialty models included in the same vehicle class by Automotive News or Consumer Guide but that possessed markedly different vehicle characteristics than the tested model were excluded. For example, the Nissan Maxima (upper mid-range segment) was crash tested in April 1995, but sales of the Audi Cabriolet were excluded from the construction of the like-class market share statistics because the Audi was available only as a convertible. Because no minimum monthly sales levels were used, virtually all sales in the same vehicle class were included in the creation of the market share statistics, which form the basis for the market share impact tests that follow.

Manufacturer Incentives (Consumer and Dealer Rebates)

For more than a decade, Automotive News has reported national and regional manufacturer incentives on a weekly basis. Accordingly, reported manufacturer incentives, which include both consumer and “hidden” direct-to-dealer cash rebates (but not the below-market interest rate financing typically offered in lieu of an actual cash rebate), were collected for the month before each IIHS crash test, as well as for the following three months.2 The dealer and consumer cash incentives were then totaled and divided by the stated vehicle list prices to obtain a normalized measure of the percentage of monthly vehicle rebates (if any) offered on each crash-tested vehicle.

HLDI Injury Index

To investigate the consistency of the several publicly available measures of expected vehicular safety and actual loss experiences, we obtained HLDI’s Injury, Collision, and Theft Losses data for 1996–2002. For a generation, HLDI has published vehicular line/body style injury losses in terms of the relative frequency of injury claims per insured vehicle year that are filed under personal injury protection coverages in the so-called no-fault states, which presently number 17, plus the District of Columbia. In each year, a rating of 100 represents the average injury loss frequency experience for all reported passenger vehicles; a rating of, for example, 61 indicates that the vehicle is 39% better than average.

2 As automotive advertising often mentions, the magnitude of most interest rate rebates differs by loan term, making the calculation of the actual dollar value of these incentives exceedingly difficult. Until very recently, there was no standardized measure of the value of vehicle rebate programs. Accordingly, we concentrate on percentage cash rebates, which, though imperfect, are likely the best test we can construct given the available data constraints.

Tables 1 and 2 present summaries of the nameplates and manufacturers of the 128 vehicles included in the final data set, as well as a breakdown of the number of crash tests conducted per year and the results of the tests by classification category. Although slightly more models (34) registered “good” survivability results than “poor” (29) results, the actual distribution of the test results across the four outcome categories is essentially flat. Because of its broader product line, General Motors’ vehicles were tested more frequently than were those of other manufacturers.

Table 1. Nameplates and Manufacturers of Crash-Tested Vehicles Analyzed

Nameplate      Crash Tests
Acura            1
Buick            2
Cadillac         2
Chevrolet       11
Chrysler         5
Dodge            8
Ford            12
GMC              5
Honda            8
Infiniti         2
Isuzu            6
Jeep             5
Lexus            4
Lincoln          1
Mazda            6
Mercedes         4
Mercury          7
Nissan           8
Oldsmobile       3
Plymouth         3
Pontiac          5
Saab             2
Saturn           2
Suzuki           2
Toyota          11
Volvo            2
Total          128

Manufacturer        Crash Tests
Chrysler              8
Daimler               1
DaimlerChrysler      16
Ford                 27
General Motors       41
Honda                 9
Nissan               10
Toyota               15
Volvo                 1
Total               128

Table 2. Summary Statistics for the Crash-Tested Vehicles

IIHS Safety Rating    Crash Tests
“Good”                  34
“Acceptable”            34
“Marginal”              31
“Poor”                  29
Total                  128

Year Tested    Crash Tests
1995             17
1996             22
1997             17
1998             18
1999             19
2000              8
2001             27
Total           128


Methodology

Tests for Changes in Vehicle Market Shares

A key question we addressed in this study was whether the broadcast of IIHS crash tests resulted in economically and/or statistically significant changes in motor vehicle market shares for the tested models. Because our primary goal was to isolate the effect of a single variable (IIHS crash test broadcasts) on vehicle market shares, we chose not to develop a regression model of motor vehicle demand. Rather, we statistically compared the class-specific market shares for each tested vehicle in each month t following each IIHS crash test with each tested vehicle’s class-specific market share in the month before the tests. On the assumption that both tested and untested car and truck models, on average, will tend to be similarly influenced by unspecified time-related variables, the differences between tested and untested vehicles’ rates of change will be free of this influence. Accordingly, these differences in market share may be viewed as a collection of observations of a sample whose mean should be zero under the null hypothesis that broadcast IIHS crash tests have no impact on the market shares of tested vehicle models. We conducted both immediate (change in market share in the month of the test) and longer-term (change in market share in the two months following) tests.
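As a rough illustration of this test, the following Python sketch (with made-up market share figures) computes the change in class-specific market share from the pre-test month for a set of tested models and applies a one-sample t-test of the null hypothesis that the mean change is zero. It is a simplified sketch of the comparison described above, not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def market_share_change_test(share_before, share_after):
    """One-sample t-test that the mean change in class-specific market share
    (post-test month minus pre-test month) is zero for the tested models."""
    changes = np.asarray(share_after) - np.asarray(share_before)
    t_stat, p_value = stats.ttest_1samp(changes, popmean=0.0)
    return changes.mean(), t_stat, p_value

# Made-up class-specific shares for five tested models, before and after a broadcast.
before = [0.12, 0.08, 0.05, 0.20, 0.15]
after = [0.13, 0.07, 0.05, 0.21, 0.16]
print(market_share_change_test(before, after))
```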


Tests for Changes in Manufacturer Stock Prices

A detailed discussion of the mathematical procedures involved in performing the event-based stock price analysis is beyond the scope of this study and is available elsewhere (Brown and Warner 1985). In general, however, the technique involves modeling how a given firm’s stock price would be expected to perform, in terms of an index of overall stock prices, in the absence of a specific, economically significant event. When modeled, these day-to-day stock market expectations can be subtracted from actual ex post stock price movements to arrive at a reasonable estimate of the economic value of the events being studied—in this case, the broadcast of IIHS crash tests. Although the estimated economic value of each individual crash test date is subject to a small degree of random measurement error, when aggregated across similar events (typically 20 or more), such random fluctuations are diversified away statistically and reveal the overall mean influence of the events in question on the stock prices of the subject firms. Originally developed in the field of finance, similar studies of event-induced stock market impacts have been used successfully in many areas of consumer research to assess the valuation effects of announcements of a wide range of economically significant events, including, as we noted previously, those involving automotive safety.

The actual methodology we employed to generate the stock market results is the Scholes-Williams standardized cross-sectional market model. With this technique, three separate parameter-estimating regressions between the stock market index (the CRSP value-weighted index of all stocks in the database) and the stock prices of each automotive manufacturer were performed over event days t = –175 to –26, relative to day t = 0, the first day of trading after each Dateline broadcast. The combined results of these regressions—both slope and intercept—were then used in conjunction with actual changes in the CRSP market index to estimate the expected stock returns for each manufacturer during a 51-day event window that began 25 trading days before and ended 25 trading days after each Dateline broadcast. The stock price effects of each individual crash test, or abnormal returns, were then obtained by subtracting the expected stock returns over event days t = –25 to +25 from those actually observed in the market. Once computed, the individual manufacturer abnormal returns were aligned in “event time” using event day t = 0 as the common reference point. Thus, the mean abnormal return for event day t is the arithmetic average of the individual manufacturer abnormal returns registered by the companies on each event day t. The metric of stock price changes we report in this study, the mean cumulative abnormal return (MCAR), is defined as the cumulative total of the individual daily mean abnormal returns registered between any two specified event dates of interest; it measures the mean net impact of the events over the intervals studied. We present results for both immediate (t = –1 to +1 and –1 to +5) and intermediate (t = –1 to +10 and –1 to +25) intervals.3 The interpretation of the resultant Z-statistic for each MCAR interval is straightforward: the null hypothesis of no stock price impact (implying an MCAR insignificantly different from zero) can be rejected only if the calculated Z-statistic reaches conventional statistical levels.
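The following Python sketch illustrates the general event-study logic under simplifying assumptions: it uses an ordinary-least-squares market model (rather than the Scholes-Williams estimator the article employs) and random illustrative return series, with a 150-day estimation window and a 51-day event window matching the –175 to –26 and –25 to +25 intervals described above, and then averages and cumulates abnormal returns across events to form an MCAR path.

```python
import numpy as np

def abnormal_returns(stock_ret, market_ret, est=slice(0, 150), event=slice(150, 201)):
    """Market-model abnormal returns: fit alpha and beta over the estimation
    window, then subtract expected returns over the 51-day event window."""
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)
    expected = alpha + beta * market_ret[event]
    return stock_ret[event] - expected

def mean_cumulative_abnormal_return(event_ars):
    """Align abnormal returns in event time (one row per event), average the
    daily means across events, and cumulate them over the event window."""
    aligned = np.vstack(event_ars)
    return np.cumsum(aligned.mean(axis=0))

# Illustrative random data: 20 events, each with 201 days of aligned returns.
rng = np.random.default_rng(0)
ars = [abnormal_returns(rng.normal(0, 0.02, 201), rng.normal(0, 0.01, 201))
       for _ in range(20)]
mcar_path = mean_cumulative_abnormal_return(ars)
print(mcar_path[-1])  # MCAR over the full -25 to +25 window
```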

Empirical Results

Do IIHS Crash Tests Lead to Changes in Manufacturer Stock Prices?

The stock price responses we analyze must be aggregated by individual manufacturer because Dateline broadcasts, which feature different nameplates and models produced by the same manufacturer, have a nontrivial probability of conflicting ratings. For example, if a particular broadcast segment included crash tests of both DaimlerChrysler’s Mercedes E-Class sedan and its Dodge Grand Caravan minivan, one vehicle could receive a “good” rating while the other was classified as “poor.” Because we expect DaimlerChrysler’s stock price to respond to both the good and the bad news of this test, the net effect of the Dateline broadcast could be zero, even if, considered separately, each event would move the stock price by a statistically significant degree.

3 Because virtually all work in finance and economics supports the hypothesis that stock prices react very quickly to sudden innovations in the informational environment, event windows of longer than one trading month are analyzed in only the most exceptional circumstances. Because the date of each Dateline broadcast can be readily determined with the utmost precision, extending the event windows beyond the intervals studied here would reduce the power of the employed statistical tests, because random economic events unrelated to the IIHS crash test announcements would increase the variance of stock returns without necessarily changing the mean.


Although it might be possible to construct a weighted average ranking based on the relative volumes of vehicles sold that received each rating, such measures would still involve countervailing stock price movements, which would tend to bias the net share price results inappropriately toward zero (i.e., a finding of no IIHS crash test effect). To eliminate this possibility entirely, the original data set was purged of all conflicting IIHS crash test ratings issued on the same calendar date. Thus, the stock price results we present represent only pure, or consistent, events, in which all the vehicles produced by a given manufacturer received identical IIHS crash test rankings on each broadcast date. Although this elimination reduces the total number of manufacturer events from 128 to 83, the possibility of a slight loss of generality of the study findings is more than compensated for by our ability to draw uncontaminated and accurate inferences with respect to the actual valuation effects of the tested events.4 This reduction in available data points does not come at the cost of including a disproportionately large number of low-volume vehicles: the average (150,940 versus 131,180 units), median (92,000 versus 79,500 units), and minimum (4,000 versus 3,000 units) annual vehicle unit sales levels for the 83-event pure data set are considerably higher than the same metrics for the complete sample of 128 crash tests, and the maximum (912,000 units) is unchanged.

4 The importance of a clean sample in the context of an event study of the stock price responses to automotive safety-related events has been documented in previous literature (Hoffer, Pruitt, and Reilly 1988).

Panels A–D of Table 3 present the results of four separate event analyses of the mean cumulative abnormal (market- and risk-adjusted) percentage share price changes for manufacturers of vehicles rated by the IIHS as “good,” “acceptable,” “marginal,” and “poor.” In each case, N, the number of unique manufacturer events, is the lower bound of the number of specific models represented by each crash test; the number of models produced by a given manufacturer that achieved the specified IIHS rating ranges from 1 to 4, with a mean of just less than 1.5. The number N+ represents the number of manufacturers with that IIHS crash test rating that experienced cumulative abnormal increases in stock prices over the interval in question. Recall as well that the first day of trading following each Dateline broadcast is designated as event day t = 0. Thus, for example, the event window from t = –1 to +1 includes the day before the first day of trading following each Dateline broadcast, the first day of trading following the previous night’s broadcast, and the day after the broadcast. This specific announcement interval has become standard literature practice, and we employed it to allow for any preprogram, day-of-broadcast promotions by either the IIHS or NBC. Furthermore, this interval ensures that stock market traders had at least one full day of trading to digest the informational content of the ratings. The other event windows (t = –1 to +5, t = –1 to +10, and t = –1 to +25) seek to capture evidence of longer-term reevaluations.

As we show in Panel A of Table 3, there is no evidence that Dateline broadcasts of “good” IIHS crash tests were associated with increases in manufacturer share prices. Not only are none of the cumulative abnormal return measures statistically significant at the 5% level or less, but two are actually negative.

Table 3. Mean Cumulative Percentage Automotive Manufacturer Stock Price Reactions (MCAR) to Dateline Broadcasts of IIHS Crash Tests

Event Interval     N      MCAR     Z-Statistic     N+     Z-Statistic

A: Vehicles with “Good” Survivability Crash Test Results
–1 to +1          15       .892        .856          7       –.124
–1 to +5          15      –.340       –.036          6       –.641
–1 to +10         15      1.650        .762          7       –.124
–1 to +25         15     –1.723       –.642          5      –1.157

B: Vehicles with “Acceptable” Survivability Crash Test Results
–1 to +1          17       .161        .414         10        .969
–1 to +5          17      –.361       –.139          9        .483
–1 to +10         17     –1.314       –.371          8       –.003
–1 to +25         17     –3.560       –.957          6       –.975

C: Vehicles with “Marginal” Survivability Crash Test Results
–1 to +1          14      1.279       1.131          9       1.166
–1 to +5          14      3.080       1.925         11       2.235*
–1 to +10         14      4.901       2.618*        12       2.770*
–1 to +25         14      2.454       1.187          9       1.166

D: Vehicles with “Poor” Survivability Crash Test Results
–1 to +1          11      –.462       –.871          4       –.816
–1 to +5          11      –.481       –.714          4       –.816
–1 to +10         11      –.073       –.406          6        .391
–1 to +25         11      1.560       –.007          6        .391

*Significant at the 5% level or less.
Notes: N = number of unique manufacturer events, and N+ = number of manufacturers with specific crash test rating.

Even more illustrative of the lack of a positive market reaction to “good” IIHS crash tests is the simple fraction of firms that registered positive increases in stock prices. In every tested interval, this fraction is less than the random chance probability of .5. Panels B, C, and D of Table 3 repeat the same series of tests for “acceptable,” “marginal,” and “poor” IIHS crash tests. The only statistically significant changes in share prices noted in Table 3 are the cumulative abnormal return increases registered by manufacturers of vehicles rated by the IIHS as “marginal.” The binomial proportionality test statistic (Z) for the simple fraction of firms with “marginal” IIHS crash tests is statistically positive for two of the four intervals tested. Viewed as a whole, there is no evidence in Table 3 that changes in manufacturer stock prices around the time of IIHS crash test releases are consistent with a priori expectations that IIHS crash test ratings and changes in common stock prices are positively correlated. In the case of the longest time interval tested, the correlation, though insignificant, is reversed.

Table 4 presents the results of stock price responses for combined samples of “good” and “acceptable” and of “marginal” and “poor” IIHS crash tests. Combining the ratings into two larger subsets increases the sample sizes, though there is still no evidence that relatively better IIHS crash test results led to increases in stock prices or that relatively poor ones were associated with stock price declines.
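The binomial proportionality statistic referenced here can be sketched as a simple normal approximation in Python; the article's exact computation may differ (e.g., continuity corrections), so the illustrative value below need not match Table 3 precisely.

```python
import math

def binomial_proportion_z(n_positive: int, n_total: int, p_null: float = 0.5) -> float:
    """Normal-approximation Z for the fraction of manufacturers with positive
    cumulative abnormal returns, tested against a chance probability of .5."""
    p_hat = n_positive / n_total
    return (p_hat - p_null) / math.sqrt(p_null * (1 - p_null) / n_total)

# Illustrative: 11 of 14 "marginal"-rated manufacturer events with positive CARs.
print(round(binomial_proportion_z(11, 14), 3))  # roughly 2.14
```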


Table 4. Mean Cumulative Percentage Automotive Manufacturer Stock Price Reactions (MCAR) to Dateline Broadcasts of IIHS Crash Tests for Combined Samples

Event Interval     N      MCAR     Z-Statistic     N+     Z-Statistic

A: Vehicles with “Good” or “Acceptable” Survivability Crash Test Results
–1 to +1          32      –.500        .959         17        .621
–1 to +5          32      –.352       –.114         15       –.087
–1 to +10         32       .084        .359         15       –.087
–1 to +25         32     –2.698      –1.116         11      –1.503

B: Vehicles with “Marginal” or “Poor” Survivability Crash Test Results
–1 to +1          25       .513        .199         13        .331
–1 to +5          25      1.508        .680         15       1.132
–1 to +10         25      2.714       1.464         18       2.332*
–1 to +25         25      2.058        .816         15       1.132

*Significant at the 5% level or less.
Notes: N = number of unique manufacturer events, and N+ = number of manufacturers with specific crash test rating.

In conclusion, there is no evidence in Table 3 or Table 4 that securities markets consider broadcast crash test results informationally significant events.5

Do IIHS Crash Tests Affect Vehicle Market Shares?

Table 5 presents a summary of the impact of Dateline broadcasts of the IIHS crash tests on changes in class-specific market shares one, two, and three months following the IIHS crash tests for vehicles with “good,” “acceptable,” “marginal,” and “poor” ratings. As we noted previously, by their construction, these tests implicitly control for unspecified changes in vehicle market shares unrelated to the broadcast of the IIHS crash tests. As we show in Table 5, there is no evidence that the broadcast of the IIHS crash tests led to statistically significant increases in the market shares of models with “good” or “acceptable” crash test ratings. Similarly, there is no evidence that consumers shunned vehicles with “marginal” or “poor” ratings. Indeed, there are no statistically significant changes in the table, which supports the share price results presented previously and is consistent with the hypothesis that, overall, IIHS crash tests are informationally insignificant events, at least with respect to changes in market share. The percentage of tested vehicle models registering increases in market share following IIHS crash tests was similar for vehicles rated both “good” and “poor.” In the month of the IIHS crash tests, 66.7% (65.5%) of vehicles rated “good” (“poor”) experienced increases in market share, whereas two months after the tests, only 54.5% of the “good” vehicles had experienced market share increases. In contrast, fully 65.5% of vehicles rated “poor” had experienced increases in market share two months after the IIHS tests.

5 An anonymous reviewer suggested that the ability of consumers to switch preferences from one vehicle market segment to another produced by the same manufacturer will tend to bias the stock price test results toward zero. However, to the extent that consumers wish to purchase vehicles from a given market class segment, the tests remain relevant and, if consumers heed the results of the IIHS crash tests, should produce results consistent in sign, if not significance, with our a priori expectations.

Table 6 completes the basic market share impact analysis by showing the market share results of aggregated tests on models rated either “good” or “acceptable” (Panel A) and models rated either “marginal” or “poor” (Panel B). Panel C presents tests of the differences in the mean market share changes between the two samples. As we show in Table 6, there is no evidence that vehicles with “good” or “acceptable” IIHS crash tests experienced statistically significant increases in market share relative to models with “marginal” or “poor” test results. Except for the case of two months after the crash tests (when the mean difference in market share between “good” or “acceptable” models and “marginal” or “poor” models was approximately three-tenths of 1%), differences between the two samples were restricted to the low hundredths of 1% of market share.

Table 5. The Impact of Televised IIHS Crash Tests on Changes in Monthly Percentage Vehicle Market Shares

Crash Test Score       Month One Share    Month Two Share    Month Three Share

A: Vehicles Rated “Good”
Mean                      .00128             .00458             .00160
Standard error            .00286             .00255             .00242
t-statistic               .44755            1.79608             .66116

B: Vehicles Rated “Acceptable”
Mean                      .00015            –.00363             .00184
Standard error            .00187             .00327             .00369
t-statistic               .08021           –1.11009             .49864

C: Vehicles Rated “Marginal”
Mean                     –.00557            –.00410            –.00753
Standard error            .00354             .00351             .00530
t-statistic             –1.57345           –1.16809           –1.42075

D: Vehicles Rated “Poor”
Mean                      .00519             .00489             .00476
Standard error            .00393             .00334             .00286
t-statistic              1.32061            1.46407            1.66434

Table 6. Differences in Monthly Changes in Percentage Vehicle Market Share Following Televised IIHS Crash Tests

Difference Tested      Month One Share    Month Two Share    Month Three Share

A: Vehicles Rated “Good” or “Acceptable”
Mean difference           .00071             .00041             .00172
Standard error            .00169             .00212             .00220
t-statistic               .42012             .19340             .78182

B: Vehicles Rated “Marginal” or “Poor”
Mean difference           .00037             .00024            –.00159
Standard error            .00271             .00248             .00315
t-statistic               .13653             .09677            –.50476

C: “Good” or “Acceptable” Minus “Marginal” or “Poor”
Mean difference           .00034             .00017             .00331
Standard error            .00089             .00325             .00384
t-statistic               .33735             .05234             .86265

Do Consumer Expectations Mute the Influence of IIHS Crash Tests on Market Shares?

Although the results in Tables 5 and 6 suggest that IIHS crash tests do not elicit changes in vehicle market shares, it is possible that unusually accurate consumer expectations of IIHS crash test findings reduce the ability of empirical tests to document a significant relationship between the two variables. If, for example, the results of the IIHS crash tests were fully expected by consumers, there would be no reason to believe that their dissemination by Dateline (or any other media outlet) would lead to demonstrable changes in vehicle market shares. To test this hypothesis directly, a proxy for model-specific consumer expectations of IIHS crash tests was used.

As we have noted, the NHTSA has been crash testing vehicles into a fixed barrier since 1979. Although the NHTSA test is designed to measure different aspects of expected vehicle safety, the correlation between the findings of the two crash test programs is high. Panels A–C of Table 7 report the results of three ordinary least squares (OLS) regressions between the IIHS crash test results and NHTSA’s driver-only, passenger-only, and driver-plus-passenger full-frontal crash test scores. The relationships between the two data sets are always positive and statistically significant: in each case, the correlation coefficients exceed .4, and the t-statistics always exceed 4.5. Because of the strength of the relationship between the findings of the two crash test programs, model-specific consumer expectations of individual IIHS crash tests were proxied with the most recent NHTSA crash test score registered for each tested vehicle model that was available before each IIHS crash test.6

6 Because the IIHS and NHTSA use different rating scales (four versus five increments), we used an adjusting equation to put them on an equal footing. Specifically, we converted each NHTSA crash test score for model i to its IIHS equivalent as follows: IIHSi = [(NHTSAi – 1.0) × .75] + 1.0. This linear conversion, with appropriate adjustments, is analogous to the procedure employed to convert Fahrenheit temperature readings into their metric (Celsius) equivalents.

In the few cases in which later IIHS crash tests revisited models previously tested by the IIHS, the original IIHS crash test score was used as the consumer expectations proxy. Finally, the unexpected deviation in crash test performance for each tested vehicle model was calculated by subtracting the employed expectations proxy for each model from the actual IIHS crash test result.

Panels A–C of Table 8 present three separate regressions between the proxy for unexpected IIHS crash test ratings and changes in market share at the time of the crash tests and in the two succeeding months. As we show, there is no evidence that unexpected IIHS crash test results elicited measurable changes in vehicle market shares over any of the tested time periods. Indeed, though insignificant, the coefficient for the variable that captures the unexpected component of the IIHS crash tests has a sign opposite that anticipated by our a priori expectations.
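A minimal sketch of the expectations proxy, assuming IIHS ratings are coded 1 (“poor”) through 4 (“good”) and NHTSA scores 1 through 5 stars, applies the linear rescaling from footnote 6 and subtracts the proxy from the realized IIHS rating to obtain the unexpected component; the numeric coding itself is an assumption for illustration.

```python
def nhtsa_to_iihs(nhtsa_stars: float) -> float:
    """Rescale a 1-5 NHTSA star score onto the 1-4 IIHS scale, per the
    conversion in footnote 6: IIHS = (NHTSA - 1) * 0.75 + 1."""
    return (nhtsa_stars - 1.0) * 0.75 + 1.0

def unexpected_rating(actual_iihs: float, nhtsa_stars: float) -> float:
    """Actual IIHS rating minus the NHTSA-based expectations proxy."""
    return actual_iihs - nhtsa_to_iihs(nhtsa_stars)

# Illustrative: a model with a 4-star NHTSA score that earns the top IIHS
# rating shows a small positive "surprise."
print(nhtsa_to_iihs(4))         # 3.25
print(unexpected_rating(4, 4))  # 0.75
```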

Table 7. Regression Analyses Between IIHS and NHTSA Crash Test Results

A: IIHS Crash Tests and NHTSA Crash Tests (driver only)
   Constant (t): –.18991 (–.32294)   Coefficient (t): .68067 (4.52078*)   R2: .17552   Correlation: .41900
B: IIHS Crash Tests and NHTSA Crash Tests (passenger only)
   Constant (t): –.06612 (–.12507)   Coefficient (t): .63833 (4.80651*)   R2: .19397   Correlation: .44042
C: IIHS Crash Tests and NHTSA Crash Tests (driver plus passenger)
   Constant (t): –.99565 (–1.61576)   Coefficient (t): .44154 (5.62721*)   R2: .24804   Correlation: .49803

*Significant at the 1% level, two-tailed test.

Table 8. Regression Analyses Between Unexpected IIHS Crash Test Scores and Monthly Changes in Vehicle Market Share

A: Changes in Vehicle Market Share for Month t = 0 and IIHS Crash Tests
   Constant (t): –.00043 (–.19357)   Coefficient (t): –.00115 (–.68815)   R2: .00518   Correlation: –.07020
B: Changes in Vehicle Market Share for Month t = 1 and IIHS Crash Tests
   Constant (t): .00006 (.02982)   Coefficient (t): –.00127 (–.78634)   R2: .00675   Correlation: –.08215
C: Changes in Vehicle Market Share for Month t = 2 and IIHS Crash Tests
   Constant (t): –.00122 (–.48455)   Coefficient (t): –.00120 (–.63722)   R2: .00444   Correlation: –.06665

Do IIHS Crash Tests Convey Relevant Information?


Because of the uniform lack of consistent consumer sales or stock market responses to televised IIHS crash tests, it is important to document whether the IIHS tests convey meaningful safety-related information to the marketplace. Consumer disregard for IIHS crash tests would be rational if the tests could not be shown to be reasonably correlated with real-world vehicular safety performance. To assess the relationship between IIHS crash tests and actual bodily injury claims, class-adjusted HLDI injury statistics (using the previously defined vehicle classifications) were calculated for each tested vehicle. These statistics, which represent positive and negative deviations centered on a mean of zero, were obtained by subtracting the mean HLDI loss experience for each vehicle class from the HLDI bodily injury claims experience score for each vehicle. Thus, vehicles that were safer (less safe) than the mean for their class are represented by negative (positive) numbers, reflecting their lower (higher) than average injury claims experience. Class adjustments of the raw HLDI data are necessary because, as the IIHS (2004) notes on its Web site, “test results can be compared only among vehicles of similar weight.”

We reproduce the results of an OLS regression of the class-adjusted HLDI bodily injury claims experiences on the IIHS crash test ratings in Table 9. There is a strong, negative correlation between IIHS crash test ratings and class-adjusted HLDI bodily injury claims experiences. Unlike the generally insignificant market share and stock price results, the coefficient for the IIHS crash test rating is significant at the 1% level, which is exactly as would be expected if the IIHS crash tests convey relevant safety-related information to the marketplace. As such, the lack of significance of the market share and stock price tests presented in Tables 3–8 represents something of a puzzle.
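The class adjustment and the Table 9 regression can be sketched as follows in Python (hypothetical data and column names; statsmodels is used for the OLS fit, and the IIHS rating is coded 1 = “poor” through 4 = “good” as an assumption): each vehicle's HLDI injury frequency is centered on its market-class mean, and the resulting deviation is regressed on the IIHS rating, where a negative slope indicates that better ratings accompany fewer injury claims.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: HLDI relative injury frequency (100 = average) and IIHS
# rating coded 1 ("poor") through 4 ("good") for a handful of models.
df = pd.DataFrame({
    "vehicle_class": ["midsize", "midsize", "midsize", "large pickup", "large pickup"],
    "hldi_injury": [82, 95, 110, 70, 88],
    "iihs_rating": [4, 3, 1, 2, 4],
})

# Class-adjusted losses: negative values mean fewer injury claims than the
# class mean (a safer-than-average vehicle within its class).
class_mean = df.groupby("vehicle_class")["hldi_injury"].transform("mean")
df["class_adj_injury"] = df["hldi_injury"] - class_mean

# OLS of class-adjusted injury experience on the IIHS rating.
model = sm.OLS(df["class_adj_injury"], sm.add_constant(df["iihs_rating"])).fit()
print(model.params, model.rsquared)
```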

Why Do Consumers Fail to Respond to the Informational Content of IIHS Crash Tests?

Several potential hypotheses could explain why vehicle market shares and stock prices do not appear to respond to the obviously valuable informational content of IIHS crash tests. We propose and test some of them in the sections that follow.

Do Vehicle Rebates Change Following IIHS Crash Tests?

One obvious scenario that might explain why consumer market share statistics do not capture changes in vehicle purchase behavior pertains to changes in manufacturer purchase incentives concomitant with the IIHS broadcasts. For a generation, manufacturers have relied on special purchase incentives (commonly known as rebates) to help spur demand for specific vehicles at specific points in calendar time.

Table 9. Regression Analysis Between IIHS Crash Test Results and Class-Adjusted HLDI Relative Bodily Injury Statistics

   Constant (t): 10.78329 (–4.88136*)   Coefficient (t): –4.88136 (–3.04617*)   R2: .08814   Correlation: –.29688

*Significant at the 1% level, two-tailed test.


For example, on May 17, 2004, General Motors announced that, as a direct consequence of rapidly rising gasoline prices, it would begin offering for the first time special purchase incentives to stimulate sales of its once fast-selling (but fuel-guzzling) Hummer sport-utility vehicles (SUVs). Accordingly, if vehicle manufacturers purposely and effectively managed their rebate programs to counterbalance the influence of IIHS crash test releases by decreasing rebates for vehicles that receive especially good IIHS ratings and/or increasing rebates for poorly rated ones, vehicle market shares might not demonstrate a meaningful response to the IIHS broadcasts.7

7 However, stock prices might be expected to fall in response to unusually large increases in rebate incentives.

To investigate this possibility formally, a series of tests for the changes in total consumer and dealer percentage cash rebates around the time of the televised IIHS crash tests was conducted. These tests subtract the change in the mean percentage of total consumer and dealer cash rebates for vehicles that earn IIHS crash test scores of “acceptable,” “marginal,” and “poor” from the change in mean percentage total cash rebates offered on vehicles rated as “good.” By construction, positive numbers indicate increases in the mean total consumer and dealer percentage cash rebates for vehicles with “good” crash test ratings (compared with vehicles with “acceptable,” “marginal,” or “poor” ratings), whereas negative numbers indicate increases in the mean total percentage cash rebates for lower rated vehicles compared with those rated as “good.” Accordingly, if manufacturers altered cash rebates in response to IIHS crash tests, our a priori expectations suggest that the sign of all reported mean percentage changes in rebates should be negative (i.e., relative declines in rebates for vehicles rated “good” and/or relative increases in rebates for vehicles rated “acceptable,” “marginal,” or “poor”). We reproduce the results of these tests in Table 10. There is no evidence that manufacturers systematically alter cash rebates in response to IIHS crash tests. Of the nine test scenarios presented, fewer than half (four) are of the anticipated sign, and none are significant at conventional levels.
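A rough sketch of this check, with made-up figures, normalizes total consumer-plus-dealer cash rebates by list price and compares the change in that percentage for “good”-rated vehicles against lower-rated ones; the two-sample t-test shown here is a simplification of the difference tests reported in Table 10.

```python
import numpy as np
from scipy import stats

def rebate_pct(consumer_cash, dealer_cash, list_price):
    """Total cash incentives as a fraction of the vehicle's stated list price."""
    return (np.asarray(consumer_cash) + np.asarray(dealer_cash)) / np.asarray(list_price)

print(rebate_pct(consumer_cash=[1000], dealer_cash=[500], list_price=[25000]))  # 0.06

# Made-up month-over-month changes in the rebate percentage for "good"-rated
# vehicles and for lower-rated ("acceptable"/"marginal"/"poor") vehicles.
change_good = np.array([0.002, -0.001, 0.004, 0.000])
change_lower = np.array([0.001, 0.003, -0.002, 0.002, 0.000])

# A reliably negative difference would suggest manufacturers cut rebates on
# well-rated vehicles (or raised them on poorly rated ones) after the tests.
t_stat, p_value = stats.ttest_ind(change_good, change_lower, equal_var=False)
print(change_good.mean() - change_lower.mean(), t_stat, p_value)
```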

Table 10. Differences in Changes in Percentage Total Consumer and Dealer Cash Sales Rebates Following Televised IIHS Crash Tests

Difference Tested      Month One Rebates    Month Two Rebates    Month Three Rebates

A: “Good” Minus “Acceptable”
Mean difference           .00375              –.00315               .01062
Standard error            .00129               .00256               .00307
t-statistic              1.48098              –.61202              1.75107

B: “Good” Minus “Marginal”
Mean difference           .00702              –.00020              –.00378
Standard error            .00458               .00727               .00842
t-statistic              1.53159              –.02717              –.44887

C: “Good” Minus “Poor”
Mean difference           .00050              –.00145               .00309
Standard error            .00034               .00237               .00176
t-statistic              1.48098              –.61202              1.75558


Thus, the failure to document discernible changes in vehicle market share following IIHS crash tests does not appear to be a consequence of countervailing changes in manufacturer sales incentives.

Do Consumers Implicitly Understand Vehicle Safety?

An important hypothesis that would explain the lack of responses to IIHS crash tests pertains to the accuracy of implicit consumer estimates of expected vehicle safety. For example, if consumer estimates of the safety performance of specific automotive models are perfectly correlated with IIHS crash test results, there would be no reason for consumers to respond to the informational content of Dateline/IIHS broadcasts. To investigate this hypothesis, senior undergraduate business students at a Midwestern urban research university were surveyed to assess their implicit opinions of the safety of specific vehicle models. Students ranging in age from 21 to 55 years, with a mean age of 26 years, were asked for their numerical estimates of the relative safety of all foreign and domestic models for which IIHS crash tests and HLDI relative bodily injury statistics were available. The students were provided a list of vehicles that included only vehicle model (e.g., Jeep Grand Cherokee), manufacturer (e.g., DaimlerChrysler), and class (e.g., mid-size SUV). No visual cues (e.g., photographs of the vehicles) were provided to the participants, nor could students communicate with one another during the survey. Therefore, the student estimates of expected relative vehicle safety were entirely “internal” to the survey participants and were based on lifetime learning and/or common sense. To provide a common basis on which to judge the expected level of safety of each vehicle, the students were given the following additional instructions:

To help you gauge vehicular safety within the proper context, please note that the AVERAGE VEHICLE has a safety rating of 100, while the SAFEST VEHICLE has a safety rating of 39 and the LEAST SAFE VEHICLE has a rating of 228.

These numbers correspond to the range of relative injury frequencies reported by HLDI for the models tested. Two groups of 40 students completed the survey, and each student was paid $10 for his or her participation. Although students were told that they could skip any vehicle for which they had inadequate knowledge to make even a reasoned guess of its likely relative safety, the minimum number of student responses for a specific model was 35.

Table 11 presents a comparison of the summary statistics of the student estimates of vehicle safety for the vehicles for which HLDI, IIHS, and CRSP data are available. The mean student estimates of automotive safety were pessimistic compared with reality: the students’ mean estimate of relative safety was 108, compared with a mean actual injury rating of 91. Similarly, the range of mean student estimates (87) was too narrow relative to the range of HLDI reported loss experiences (134). However, our study question is not how accurate consumer estimates of safety are for any given vehicle model but rather whether these estimates of safety are highly correlated with the safety information provided by IIHS crash tests.

Table 11. Comparison of Student Estimates of Expected Vehicle Safety with HLDI Actual Relative Bodily Injury Experience

Summary Statistic       HLDI Data    Student Estimates
Mean                      90.69          108.53
Standard error             2.91            1.66
Median                    82.50          105.50
Mode                      70.00          123.50
Standard deviation        28.84           16.43
Kurtosis                    .41             .24
Skewness                    .93             .54
Range                    134.00           87.40
Minimum                   39.00           64.00
Maximum                  173.00          151.40

Table 12. Regression Analyses Between Student Estimates of Expected Vehicular Safety and Known Metrics of Vehicle Safety

A: IIHS Crash Test Scores and Student Estimates of Safety
   Constant (t): 4.58678 (8.24565*)   Coefficient (t): –.01943 (–3.75787*)   R2: .10299   Correlation: –.32091
B: HLDI Class-Adjusted Loss Experience and Class-Adjusted Student Estimates of Safety
   Constant (t): –1.74215 (–1.07436)   Coefficient (t): .25158 (1.53738)   R2: .01885   Correlation: .13731

*Significant at the 1% level, two-tailed test.

To assess the usefulness of such estimates of vehicular safety in purchase decisions, two OLS regressions were performed. Panel A of Table 12 reports the results of a regression between the student estimates of vehicular safety and IIHS crash test scores. Higher IIHS scores are associated with better crash test performance, whereas higher student estimates are associated with vehicles perceived as less safe; therefore, a consistent relationship between the two sets of figures requires the correlation between them to be negative. The correlation between the student estimates and the IIHS crash test scores is –.321, and the variable coefficient is highly significant. Panel B of Table 12 shows the results of a regression between class-adjusted HLDI bodily injury claims experience and class-adjusted student estimates of expected vehicle safety. The strength of the relationship between these two variables is especially important to consider because it provides the only direct evidence of the ability of consumers to judge the safety of alternative vehicles in the same market class, the very metric the IIHS crash tests were specifically designed to measure. As Panel B shows, although the class-adjusted student estimates are positively correlated with the class-adjusted HLDI bodily injury statistics, the relationship between the variables is insignificant at conventional levels and far weaker than the relationship between class-adjusted HLDI bodily injury statistics and the IIHS crash tests. That is, IIHS test scores are a much better predictor of expected vehicle safety performance than are student estimates of safety for vehicles in the same class.


If we consider the market share tests discussed previously, the results in Panel A of Table 12 are consistent with the hypothesis that consumer inattention to IIHS crash test information may be partially due to implicit consumer estimates of vehicle safety that are generally accurate in both sign and significance. However, just as important, the results also support the contention that IIHS crash tests provide valuable safety-related information to the vehicle marketplace and that ignorance of the findings of these important tests carries significant risk.8

Can the Public Safety Content of Dateline/IIHS Crash Test Broadcasts Be Improved?

Given that the safety content of IIHS crash tests should be valuable to U.S. automotive consumers, the question naturally arises as to whether the presentation of this content in future Dateline broadcasts can be improved. This question is of considerable significance because the format through which the IIHS and Dateline have chosen to disseminate the crash test results has changed significantly over the years. In addition, the lack of caveats necessary for the proper interpretation of the IIHS tests in many Dateline broadcasts may lead to unnecessary confusion on the part of some consumers, further reducing the informational value of the segments.

Should the Dateline/IIHS Broadcasts Focus Only on Entire Automotive Classes?

Although approximately half of all Dateline/IIHS broadcasts have featured tests of vehicles from a single market class (most of these occurred before 2000), many tests have represented only small portions of the tested class. For example, the November 14, 2000, tests of mid-range SUVs included only the BMW X-5, the Isuzu Trooper, and the Mitsubishi Montero—all vehicles with low unit sales volumes compared with much more popular competitors such as the Ford Explorer, Chevrolet Blazer, and Jeep products. Therefore, the informational content conveyed to consumers by these less-than-full vehicle classes may be limited relative to tests of entire vehicle classes.

Likely to be even more confusing to consumers are the IIHS tests that include vehicles from more than one market class. Consider, for example, the March 3, 1998, segment, which included tests of five vehicles drawn from four distinct automotive classes. This broadcast is not unique in this respect: a December 1999 broadcast included tests of seven vehicles from four different classes, and a March 2001 broadcast included eight vehicles from four classes. In each of these cases, the ability of consumers to render reasoned judgments about the cross-sectional variation in safety performance within a specified vehicle class was severely limited.

8 Although empirically untested in this study, the lack of consumer responses to IIHS crash tests may be due less to ignorance on the part of consumers of the value of IIHS crash test results and more to consumers viewing the information as of little relevance to their lives. For example, some people may choose to purchase dangerous vehicles simply because they believe that their superior driving skills will enable them to avoid having an accident. We are grateful to an anonymous reviewer for bringing this possibility to our attention.


Only four times since the Dateline/IIHS broadcasts began in April 1995 have the crash test segments featured tests across virtually the entire range of a specific vehicle class. Because these tests convey the greatest possible information content to the marketplace, it is important to document whether changes in vehicle market share following these all-inclusive tests differ from those associated with less comprehensive test segments.9

9 Stock price tests cannot be conducted for these four broadcast segments for two important reasons. First, because of the small number of event dates (only four), the tests would be subject to a considerable degree of what is known in the literature as event time clustering. This problem, the bane of financial research, dramatically reduces the ability to purge security-specific errors of the cross-sectional movements that are common to all stocks. Second, the small number of available firm-specific data points barely reaches the limit of sample size reasonableness in toto and is much lower than the applicable literature recommends when the events are separated into distinct IIHS crash test rating categories. Brown and Warner (1985) provide important background information on both of these empirical issues.

Tables 13 and 14 present the identical market share impact tests for whole class and non–whole class IIHS broadcast segments, respectively, similar to our presentation of the crash test sample considered as a whole in Table 6. Our comparison of the market share impact results from the whole class vehicle tests (Table 13) with those associated with the non–whole class tests (Table 14) reveals marked, if not statistically significant, differences. In the case of the non–whole class tests, vehicles with “good” or “acceptable” IIHS crash test ratings register minimal mean market share changes, ranging from a slight loss to, at most, just more than a .1% increase. The same metrics for the whole class tests are positive and range from a low of greater than .4% to a high of just less than .9%. For vehicles rated “marginal” or “poor,” all three months for the non–whole class crash tests register increases in vehicle market share (a result divergent from expectations), whereas in the case of the whole class tests, all three months record the expected market share decreases.

Table 13. Changes in Percentage Vehicle Market Share Following Televised IIHS Crash Tests for the Sample of Four IIHS Whole Class Crash Tests

Difference Tested      Month One Share    Month Two Share    Month Three Share

A: “Good” or “Acceptable”
Mean difference           .00589             .00424             .00877
Standard error            .00497             .00846             .01056
t-statistic              1.18511             .50118             .83049

B: “Marginal” or “Poor”
Mean difference          –.00140            –.00085            –.00230
Standard error            .00472             .00441             .00607
t-statistic              –.29661            –.19274            –.37891

C: “Good” or “Acceptable” Minus “Marginal” or “Poor”
Mean difference           .00729             .00509             .01107
Standard error            .00685             .00954             .01218
t-statistic              1.06451             .53358             .90907

Table 14. Changes in Percentage Vehicle Market Share Following Televised IIHS Crash Tests for the Sample of IIHS Non–Whole Class Crash Tests

Difference Tested      Month One Share    Month Two Share    Month Three Share

A: “Good” and “Acceptable”
Mean difference          –.00004             .00013             .00101
Standard error            .00177             .00215             .00212
t-statistic              –.02260             .05953             .47642

B: “Marginal” and “Poor”
Mean difference           .00196             .00226             .00013
Standard error            .00275             .00250             .00253
t-statistic               .71273             .90400             .05138

C: “Good” and “Acceptable” Minus “Marginal” and “Poor”
Mean difference          –.00200            –.00213             .00088
Standard error            .00169             .00212             .00220
t-statistic              –.61319            –.64609             .26435

“acceptable” IIHS ratings results in a net market share loss for the better-rated vehicles in only two of the three months tested for the non–whole class vehicles. Overall, of the nine test cells in each of Tables 13 and 14, only three are consistent even in sign with a priori expectations in the case of the non–whole class crash tests (Table 14), whereas all nine are consistent with expectations in the case of the whole class crash-tested vehicles (Table 13). Accordingly, we may be reasonably confident in stating that the manner in which Dateline and IIHS have chosen to present the crash test results has had a substantive impact on their assimilation by, and ultimate impact on, U.S. automotive consumers.10

10. Had the magnitude and variance of the mean effects remained unchanged but the sample size of the whole class vehicle tests increased even moderately, it is likely that the results in Table 11 would have been consistent with a priori expectations in both sign and significance.

How Important Is the IIHS “Class-Specific” Disclaimer?

Most IIHS televised crash test findings have been presented in absolute terms, with no caveats with respect to either vehicle size or class. As such, a large vehicle that performed poorly is awarded the same absolute rating as a similarly performing small vehicle, with few admonitions that the two ratings are valid only for comparing vehicles of similar size, weight, and class. This caveat is extremely important, however; the official IIHS Web site notes specifically that “Frontal crash test ratings cannot be compared across vehicle type and weight categories” (IIHS 2004).

The transcript of the IIHS’s June 2001 test of full-sized pickup trucks illustrates the absence of such important comparison limitations. Each available vehicle line in this market segment was represented. The Toyota Tundra was awarded a “good” rating, the Chevrolet (GMC) Silverado a “marginal” rating, and the Dodge Ram 1500 and Ford F-150 a “poor” rating. Referring to the full-sized Dodge Ram, IIHS President Brian O’Neill stated, “it’s been a long time since the Institute’s seen injury reports this bad in any vehicle.” In a voiceover, Dateline reporter Lea Thompson stated, “Dodge Ram gets the worst rating, ‘poor’.” With respect to the “poor”-rated Ford F-150, O’Neill specifically stated, “This is as bad as it gets in terms of crash performance. Look at the collapse.” When asked by Dateline what he would do if he owned the F-150, O’Neill said, “I’d get rid of it. I wouldn’t put my family, my wife, or myself in a vehicle like this.” However, common sense (and the laws of physics) would suggest that the 4300-pound Ford F-150 Supercab, in absolute terms, is a safer vehicle than a 2500-pound economy sedan. In September 2002, HLDI reported that both the Dodge Ram 1500 and the Ford F-150 were in the lowest (best) injury loss claims category. Similarly, in March 2001, the IIHS gave the Honda Civic its highest crash test rating, whereas in September 2002, HLDI found the Civic four-door sedan to be in the second-worst loss category. As noted on the IIHS Web site, “Given equivalent frontal ratings for heavier and lighter vehicles, the heavier vehicle typically will offer better protection in real-world crashes” (IIHS 2004). Thus, by failing to provide such important comparison information, the IIHS and NBC likely confuse consumers who are seriously interested in automotive safety, and this confusion may be an important factor in explaining apparent consumer disregard of the Dateline/IIHS tests.

Unfortunately, the IIHS and Dateline have never fully integrated the important class-specific caveat into their crash test broadcast segments. In only 4 of the 19 most recent broadcast segments (November 1997–January 2004) has an explicit caveat been presented that the results are valid only when comparing like-weight or same-class vehicles. Furthermore, in those 4 segments, the caveat represented only 2% of the story script. As such, although the results show that laboratory tests indeed provide useful information to consumers with respect to comparisons between, for example, the Dodge Caravan and the Pontiac Transport, it is highly unlikely that they will ever be of significant value in safety comparisons between vehicles of different market classes (e.g., Ford Expedition, Dodge Neon).

To assess empirically the relative value of IIHS crash tests and student estimates of expected vehicle safety in comparisons across market classes, we regressed non–class-adjusted HLDI bodily injury statistics separately on IIHS crash test scores and on student estimates of expected safety. We present the results of these regressions in Table 15. Panel A of Table 15 presents the regression of unadjusted HLDI bodily injury claim frequencies on the IIHS crash test result for each tested vehicle. As stated in its official materials, HLDI adjusts the data across car lines to account for differences in operator ages. As we expected, the relationship between the two sets of data is negative; however, the correlation between them is relatively low—much lower than the class-adjusted figure we reported in Table 9. The R2 of this regression (less than .02) indicates the general lack of explanatory power of the model. The t-statistic of the independent variable (the IIHS crash test score) is insignificant at the 5% level. As such, the results in Panel A provide little direct support for the hypothesis that, absent the class-specific statistical environment for which the tests originally were designed, IIHS crash tests provide particularly valuable safety information


Table 15. Regression Analyses for HLDI Relative Bodily Injury Statistics with IIHS Crash Tests and Student Estimates of Expected Vehicle Safety

                                               Constant (t)            Coefficient (t)        R2        Correlation

A: HLDI Actual Relative Loss Experience
   and IIHS Crash Test Scores                  96.75731 (14.91142*)    –3.63185 (–1.54950)    .01915    –.13837

B: HLDI Actual Relative Loss Experience
   and Student Estimates                      –18.89500 (–1.58208)      1.00507 (9.05169*)    .39980     .63230

*Significant at the 1% level, two-tailed test.

to consumers. These findings, in conjunction with the general lack of class-specific disclaimers in Dateline/IIHS broadcasts, may help explain why consumer markets apparently disregard televised IIHS crash test findings. Finally, Panel B of Table 15 presents the results of a regression between the unadjusted HLDI bodily injury claims experience and student estimates of expected vehicle safety. In what constitute the strongest statistical results presented in this study, the relationship between the two variables is both positive and highly significant. Both the R2 (.400) and the t-statistic for the student estimate coefficient (9.052) are much higher than the same metrics for the unadjusted IIHS/HLDI regression, and they imply that in safety comparisons between vehicles in different market classes, commonsense assessments of expected vehicle safety are likely to be much more closely related to overall vehicular injury claims experience than are IIHS crash test results.
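For readers who wish to see the mechanics behind the Table 15 comparison, the following sketch runs the two bivariate regressions on simulated data. The variable names, the 1–4 numerical coding of the IIHS ratings, and all data values are hypothetical assumptions introduced here for illustration; only the structure of the analysis (unadjusted HLDI loss experience regressed separately on crash test scores and on survey-based safety estimates, reporting constant, coefficient, R2, and correlation) follows the text.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 120  # hypothetical number of tested vehicle models
iihs_score = rng.integers(1, 5, size=n).astype(float)    # assumed coding: 1 = "poor" ... 4 = "good"
student_estimate = rng.normal(100.0, 15.0, size=n)       # hypothetical expected-safety index
# Hypothetical HLDI relative loss figures: weakly related to crash test scores,
# more strongly related to the expected-safety estimates.
hldi_loss = (100.0 - 2.0 * iihs_score
             + 0.9 * (student_estimate - 100.0)
             + rng.normal(0.0, 20.0, size=n))

def simple_ols(x, y, label):
    """Bivariate OLS: report constant, slope, R-squared, and correlation,
    mirroring the columns of Table 15."""
    result = stats.linregress(x, y)
    print(f"{label}: constant={result.intercept:.3f}, coefficient={result.slope:.3f}, "
          f"R2={result.rvalue**2:.3f}, correlation={result.rvalue:.3f}, p={result.pvalue:.4f}")

simple_ols(iihs_score, hldi_loss, "Panel A (IIHS crash test scores)")
simple_ols(student_estimate, hldi_loss, "Panel B (student safety estimates)")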

Summary and Policy Recommendations

This study has investigated both consumer and securities markets responses to the informational stimulus of the IIHS’s high-speed, frontal, offset automotive crash tests telecast over a seven-year period on NBC’s Dateline news magazine. Although we employ various empirical tests, our study fails to document consistent evidence that either consumers or investors have responded to either positive or negative crash test results. This lack of response is unfortunate because the results of this study also document that the Dateline/IIHS crash test ratings are an excellent predictor of relative injury claim losses within a given market class, far better than student estimates of expected vehicle safety. Accordingly, we develop and test several hypotheses that help explain this general market inattention and point the way toward positive public policy prescriptions for future Dateline/IIHS broadcasts.

Although IIHS’s Web site cautions that its crash test ratings are valid only for comparing same-class vehicles, transcripts of NBC’s televised broadcasts of the IIHS crash tests have been far less forthcoming. Only 4 of the most recent 19 (and only 1 of the most recent 9) televised segments have included this important test disclaimer. Because the televised test results are presented in absolute terms, the “good”
rating of a Honda Civic sedan may convey to the average vehicle consumer a completely nonsensical message: A Honda Civic is an absolutely safer vehicle than the much larger, extended cab Dodge Ram 1500 pickup truck (which received a “poor” rating from the IIHS). Faced with such an extraordinary incongruity between “TV data” and common sense and lacking the explicit knowledge that IIHS crash tests are valid predictors of safety only within the same vehicle class, some consumers may be led to disregard IIHS crash tests altogether and rely strictly on their instincts about vehicular safety performance when they purchase a vehicle. Accordingly, the results of this study strongly support any future efforts that might be undertaken by Dateline and/or the IIHS to emphasize both the usefulness and the limitations of the IIHS crash tests during every broadcast segment. Efforts on the part of IIHS officials to avoid making any absolute statements about vehicle crashworthiness would be particularly helpful in this regard. On a related note, though beyond the scope of our analysis, it is likely that attempts by the IIHS to change its crash test rating procedures to provide relative measures of vehicle safety that can be compared across different vehicle market segments would go even further to reduce potential consumer confusion about vehicle safety tests.11 Such tests and changes might save many lives.

11. We are grateful to an anonymous reviewer for noting this important point. One possibility for such a test is to replace the aluminum honeycomb deformable barrier presently used in testing with a barrier specifically designed to mimic the crash test behavior of a vehicle that represents the midpoint of the U.S. vehicle fleet in terms of both physical size and weight (mass).

The study also has discovered evidence that the manner in which the IIHS crash tests are presented to consumers may influence their impact on consumer purchase decisions. Although overall there is no evidence that Dateline/IIHS broadcasts led to consistent changes in vehicle market share for models rated by the IIHS as either “good” or “poor,” when we restricted the crash test segments to a single market class that included virtually all of the vehicles available for sale in that class, we observed changes in vehicle market shares that were consistent in sign, if not significance, with our a priori expectations. These are the very test broadcasts that convey the greatest possible amount of safety information to consumers. Therefore, the lack of market share responses to an average Dateline/IIHS segment is consistent with the hypothesis that the confusion created by conducting tests across several market segments, or across less than the full complement of vehicles in a given market segment, may work against the broader public safety objectives of the IIHS. In summary, crash test segments involving small cars, SUVs, minivans, and large luxury sedans may serve to increase the attention of the casual television viewer by broadening the number of potentially interested viewers, but this increase comes at the cost of providing less relevant safety information to potential vehicle purchasers.

Finally, it must be acknowledged that individual expectations of vehicle safety performance played a significant role in this and any other analysis of stock price or market share changes involving the automotive safety environment. If consumer expectations of vehicle safety and laboratory crash test scores are too highly correlated, the successful determination of the full extent of interrelationships among the studied variables may be limited. However, the same empirical methodologies employed in this study previously have documented economically and statistically significant changes in stock prices and vehicle sales in studies of related aspects of automotive safety (e.g., recalls). Furthermore, even our attempts to incorporate reasonable proxies for the level of consumer expectations, by using model-specific NHTSA crash test results, were unsuccessful in illustrating the expected positive relationship between IIHS crash test scores and vehicle market shares. Thus, our inability to document stronger stock price and market share changes attributable to Dateline/IIHS broadcasts may be due less to data or methodological failures and, as we have demonstrated, more to consumers’ apparent intuitive sense of which automotive models are safer than others. However, although intuition and common sense may serve consumers reasonably well in comparisons among vehicles in different market classes, they appear to be of essentially no value in comparisons among vehicles in the same market class. Fortunately, the Dateline/IIHS crash tests were designed specifically to convey just such information.

References

Brown, Stephen and Jerald Warner (1985), “Using Daily Stock Returns: The Case of Event Studies,” Journal of Financial Economics, 14 (January), 3–31.

Crafton, Steven M., George E. Hoffer, and Robert J. Reilly (1981), “Testing the Impact of Recalls on Automobile Demand,” Economic Inquiry, 19 (October), 694–703.

Crandall, Robert W. and John D. Graham (1989), “The Effect of Fuel Economy Standards on Automobile Safety,” Journal of Law and Economics, 32 (January), 97–119.

———, Theodore E. Keeler, and Lester B. Lave (1982), “The Cost of Automobile Safety and Emissions Regulation to the Consumer: Some Preliminary Results,” American Economic Review, 72 (May), 324–27.

Farmer, Charles M. (2004), Relationships of Frontal Offset Crash Test Results to Real-World Driver Fatality Rates. Arlington, VA: Insurance Institute for Highway Safety.

Harless, David W. and George E. Hoffer (2003), “Testing for Offsetting Behavior and Adverse Recruitment Among Drivers of Airbag-Equipped Vehicles,” The Journal of Risk and Insurance, 70 (4), 629–50.

Highway Loss Data Institute (1996–2002), Injury, Collision and Theft Losses. Arlington, VA: Highway Loss Data Institute.

Hoffer, George E., Stephen W. Pruitt, and Robert J. Reilly (1987), “Automotive Recalls and Informational Efficiency,” The Financial Review, 22 (February), 433–42.

———, ———, and ——— (1988), “The Impact of Product Recalls on the Wealth of Sellers: A Reexamination,” Journal of Political Economy, 96 (June), 663–70.

———, ———, and ——— (1992), “Market Responses to Publicly-Provided Information: The Case of Automotive Safety,” Applied Economics, 24 (July), 661–67.

———, ———, and ——— (1994), “When Recalls Matter: Factors Affecting Owner Response to Automotive Recalls,” Journal of Consumer Affairs, 28 (Summer), 96–106.

Insurance Institute for Highway Safety (2002), “NBC News Transcripts,” (accessed November 19, 2003), [available at http://www.lexus-nexis.com].

——— (2004), “How the Institute Evaluates Vehicles in the Frontal Offset Crash Test,” (accessed August 4, 2004), [available at http://www.iihs.org/vehicle_ratings/ce/def.htm].

Jarrell, Gregg and Sam Peltzman (1985), “The Impact of Product Recalls on the Wealth of Sellers,” Journal of Political Economy, 93 (June), 512–36.

Lave, Lester B. and Warren E. Weber (1970), “A Benefit–Cost Analysis of Auto Safety Features,” Applied Economics, 2, 265–75.

Peltzman, Sam (1975), “The Effects of Automobile Safety Regulation,” Journal of Political Economy, 83 (August), 677–725.

Peterson, Steven P. and George E. Hoffer (1994), “The Impact of Airbag Adoption on Relative Personal Injury and Absolute Collision Insurance Claims,” Journal of Consumer Research, 20 (March), 661–67.

Pruitt, Stephen W. and David R. Peterson (1986), “Security Market Reactions Around Product Recall Announcements,” Journal of Financial Research, 9 (Summer), 113–22.

———, Robert J. Reilly, and George E. Hoffer (1986), “Security Market Anticipation of Consumer Preference Shifts: The Case of Automotive Recalls,” Quarterly Journal of Business and Economics, 25 (Autumn), 14–28.

Reilly, Robert J. and George E. Hoffer (1983), “Will Retarding the Information Flow on Automobile Recalls Affect Consumer Demand?” Economic Inquiry, 21 (July), 444–47.

Rupp, Nicholas (2001), “Are Government Initiated Recalls More Damaging to Stockholders? Evidence from Automobile Recalls, 1973–98,” Economics Letters, 71 (May), 265–70.

——— and Curtis G. Taylor (2002), “Who Initiates Recalls and Who Cares? Evidence from the Automobile Industry,” Journal of Industrial Economics, 50 (June), 123–29.

U.S. Department of Transportation (2003), “New Car Assessment Program,” National Highway Traffic Safety Administration, (accessed 2003), [available at www.nhtsa.gov/ncap].

Variety (2002), “Nielsen Ratings, Oct. 14–20, 2002,” 388 (October 28–November 3), 24.