Chapter 107

Detecting Corporate Failure Yanzhi Wang, Lin Lin, Hsien-Chang Kuo, and Jenifer Piesse

Abstract This article introduces definitions of the terms bankruptcy, corporate failure, and insolvency, the legal methods of bankruptcy, and the popular failure prediction models. We show that a firm filing for corporate insolvency does not necessarily fail to pay off its financial obligations as they mature. Moreover, we argue that an appropriate risk monitoring system, centered on well-developed failure prediction models, is crucial to various parties in the investment world as a means to look after the financial future of their clients or themselves.

Keywords Corporate failure · Bankruptcy · Distress · Receivership · Liquidation · Failure prediction · Discriminant analysis · Conditional probability analysis · Hazard model · Misclassification cost model

107.1 Introduction

Y. Wang
Department of Finance, Yuan Ze University, 135 Yuan-Tung Rd., Jung-Li, Taiwan 32003, R.O.C.
e-mail: [email protected]

L. Lin
Department of Banking and Finance, National Chi-Nan University, 1 University Rd., Puli, Nantou Hsien, Taiwan 545, R.O.C.
e-mail: [email protected]

H.-C. Kuo
Department of Banking and Finance, National Chi-Nan University and Takming University of Science and Technology, 1 University Rd., Puli, Nantou Hsien, Taiwan 545, R.O.C.
e-mail: [email protected]

J. Piesse
Management Centre, School of Social Science and Public Policy, King's College, University of London, 150 Stamford St., London SE1 9NN, UK
University of Stellenbosch, Stellenbosch, Western Cape, RSA
e-mail: [email protected]

The financial stability of firms is always of concern to many agents in a society, including investors, bankers, governmental/regulatory bodies, and auditors. The credit rating of listed firms is an important indicator not only to the stock market, where investors adjust the stock portfolios they hold, but also to the capital market, where lenders calculate the costs of loan default and consider the borrowing terms for their clients. It is also the duty of governmental and regulatory organizations to monitor the general financial status of firms in order to make proper economic and industrial policies. Moreover, in the interest of the public, the auditors of firms need to maintain a watching brief over the going concern of their clients for the foreseeable future and fairly present their professional view in the audit report attached to each client's financial statements. Thus, it is understandable that the financial stability of firms attracts so much attention from so many different parties. A single firm's bankruptcy will influence a chain of members of society, especially its debtors and employees. But if a group of firms in an economy simultaneously faces financial failure, it will leave scars not only on this particular economy but also on its neighbors. A recent example is the financial storm that gathered over Thailand in July 1997, which caused immediate damage to most Asia-Pacific economies. For these reasons, the development of bankruptcy theory and bankruptcy prediction models, which can protect the market from unnecessary losses, is essential. This can also help governmental organizations make appropriate policies in time to maintain industrial cohesion and minimize the damage caused by widespread corporate bankruptcy to the economy as a whole. In the finance literature, the terms "distress," "near bankruptcy or corporate failure," "bankruptcy" (see Brealey et al. 2001: 621; Ross et al. 2002: 858), and "financial failure" can refer to a variety of circumstances, including:

1. the market value of the firm's assets is less than its total liabilities;
2. the firm is unable to pay its debts when they come due;
3. the firm continues trading under court protection.

C.-F. Lee et al. (eds.), Handbook of Quantitative Finance and Risk Management, © Springer Science+Business Media, LLC 2010, DOI 10.1007/978-0-387-77117-5_107

(Pastena and Ruland 1986: 289)


Of these, the second condition, the insolvency of the company, has been the main concern in the majority of early bankruptcy studies in the literature, because insolvency can not only be explicitly identified but has also served as a legal and normative definition of the term "bankruptcy" in many developed countries. The first definition is more complicated and subjective, in that different accounting treatments of asset valuation usually give different market values for a company's assets. The third varies as well, since court protection legislation differs among countries.

107.2 The Possible Causes of Bankruptcy

Insolvency problems can result from decisions taken within the company or from changes in the economic environment, essentially outside the firm. Some general causes of insolvency are noted in Table 107.1. Apart from those, a new company is usually thought to be riskier than one with a longer history. Blum (1974: 7) confirmed that "other things being equal, younger firms are more likely to fail than older firms." Hudson (1987), studying a sample between 1978 and 1981, also pointed out that companies liquidated through creditors' voluntary liquidation or compulsory liquidation during that period were mainly 2-4 years old, and three-quarters of them were less than 10 years old. Moreover, Walker (1992: 9) found that "many new companies fail within the first three years of their existence." This evidence suggests that the distribution of failure likelihood against company age is skewed to the right. However, no clear cutoff in age structure has so far been identified to distinguish "new" from "young" firms in a business context, nor is there any convincing evidence on the propensity of firms of different ages to fail. In consequence, the age characteristics of liquidated companies can only be treated as an "observation" or "suggestion," rather than a "theory."

Table 107.1 Some possible causes of insolvency (Rees 1990: 394)
1. Low and declining real profitability
2. Inappropriate diversification: into unfamiliar industries or not away from declining ones
3. Import penetration into the firm's home markets
4. Deteriorating financial structures
5. Difficulties controlling new or geographically dispersed operations
6. Overtrading in relation to the capital base
7. Inadequate financial control over contracts
8. Inadequate control over working capital
9. Failure to eliminate actual or potential loss-making activities
10. Adverse changes in contractual arrangements


Although the most common causes of bankruptcy have been noted above, they are not sufficient to explain or predict corporate failure. In other words, a company with any of these characteristics is not doomed to bankruptcy within a predictable period of time, because factors such as government intervention may play an important part in the rescue of distressed firms. Therefore, as Bulow and Shoven (1978) noted, the conditions under which a firm goes through liquidation are rather complicated. On the whole, as Foster (1986: 535) put it, "there need not be a one-to-one correspondence between the non-distressed/distressed categories and the non-bankrupt/bankrupt categories." Notably, this ambiguity is even more severe in the not-for-profit sector of the economy.

107.3 The Methods of Bankruptcy

As corporate failure is not only an issue for those involved as company owners and creditors but also influences the economy as a whole, many countries legislate formal bankruptcy procedures to protect the public interest from avoidable bankruptcy, such as Chapters 10 and 14 in the US and the Insolvency Act in the UK. The objectives of such legislation are to "[firstly] protect the rights of creditors. . . [secondly] provide time for the distressed business to improve its situation. . . [and finally] provide for the orderly liquidation of assets" (Pastena and Ruland 1986: 289). In the UK, where a strong rescue culture prevails, the Insolvency Act contains six separate procedures that can be applied in different circumstances to protect creditors, shareholders, or the firm as a whole from unnecessary loss, thereby reducing the degree of individual as well as social loss. These are briefly described in the following sections.

107.3.1 Company Voluntary Arrangements

A voluntary arrangement is usually submitted by the directors of the firm to an insolvency practitioner, "who is authorized by a recognized professional body or by the Secretary of State" (Rees 1990: 394), when urgent liquidity problems have been identified. The company in distress then goes through its financial position in detail with the practitioner and discusses the practicability of a proposal for corporate restructuring. If the practitioner endorses the proposal, it is put to the company's creditors at a creditors' meeting, requiring approval by 75% of those attending. If the restructuring proposal is accepted, those notified are bound by the agreement and the practitioner becomes the supervisor of the agreement. It is worth emphasizing that a voluntary arrangement need not pay all the creditors in full, but rather a proportion of their lending (30% in a typical UK voluntary arrangement) on a regular basis over the following months. The advantage of this procedure is that it is normally much cheaper than formal liquidation proceedings, and the creditors usually receive a better return.

107.3.2 Administration Order

It is usually the directors of the insolvent firm who petition the court for an administration order. The court then assigns an administrator who takes charge of the daily affairs of the firm. Before an administrator is appointed, however, the company must convince the court that the making of an order is crucial to the company's survival, or to a better realization of company assets than would be the case if the firm were declared bankrupt. Once the order is granted, the claims of all creditors are effectively frozen. The administrator then submits recovery proposals to the creditors' meeting for approval within 3 months of the appointment being made. If the proposal is accepted, the administrator takes the necessary steps to put it into practice. An administration order can be seen as the UK's version of the American Chapter 11, in that it provides a temporary legal shelter for companies in trouble to escape future failure without damaging their capacity to continue trading (Counsell 1989). This does sometimes lead to insolvency avoidance (Homan 1989).

107.3.3 Administrative Receivership

An administrative receiver has very similar powers and functions to those of an administrator, but is appointed by a debenture holder (the bank) secured by a floating or fixed charge, after the directors of the insolvent company see no prospect of improving its ability to repay its debts. In some cases, before the appointment of an administrative receiver, a group of investigating accountants is empowered to examine the real state of the company. The investigation normally includes an estimation of the valuable assets and the liabilities of the company. If the investigating team finds that the company has no choice but to be liquidated, an administrative receiver, working in partnership with the investigating team, is entitled to take over the management of the company. The principal aim is to raise money to pay debenture holders and other preferential creditors by selling the assets of the businesses at the best prices. The whole business may be sold as a going concern if it is worthwhile. As in an administration order, the receiver must advise creditors of any progress by way of a creditors' meeting, convened shortly after the initial appointment.

107.3.4 Creditors' Voluntary Liquidation

In a creditors' voluntary liquidation, the directors of the company take the initiative to instruct an insolvency practitioner, which leads to the convening of creditors' and shareholders' meetings. At the shareholders' meeting a liquidator is appointed, and this appointment is put to a later creditors' meeting for ratification; creditors have the right of final say as to who acts as liquidator. The liquidator then finds potential purchasers and realizes the assets of the insolvent firm to clear its debts. Unlike receivers, who have wide-ranging powers in the management of the business, a liquidator's ability to continue trading is restricted to what will promise a beneficial winding up. As Rees (1990) stated, it is the most common method used to terminate a company.

107.3.5 Members' Voluntary Liquidation

The procedure for a members' voluntary liquidation is rather similar to that of a creditors' voluntary liquidation. The only differences are that in a members' voluntary liquidation the directors of the firm must swear a declaration of solvency, undertaking to clear its debts with fair interest within 12 months, and creditors are not involved in the appointment of the liquidator. In other words, a company's announcement of a members' voluntary liquidation by no means signals insolvency; it simply means a winding up of a company whose continued existence is no longer necessary.

107.3.6 Compulsory Liquidation

A compulsory liquidation is ordered by the court to wind up a company directly. The order is usually initiated by the directors of the insolvent firm or its major creditors; other possible petitioners include the Customs and Excise, the Inland Revenue, and local government (Hudson 1987: 213). The procedure usually starts with a statutory demand made by creditors who wish to initiate a compulsory liquidation. If the firm fails to satisfy their request within a specified number of working days, the failure is sufficient grounds to petition the court to wind up the firm. Once the order is granted, the Official Receiver takes control of the company instantly, or a liquidator is appointed by the Official Receiver instead. The company must then cease trading, and the liquidation procedure starts. An interesting phenomenon, however, is that many valuable assets may be removed or sold before the liquidator takes control, or even while the petition is being delivered to the court, possibly leaving nothing of value for the liquidator to deal with. In this sense, a company entering compulsory liquidation has often been practically terminated well before the court order is granted.

107.4 Prediction Model for Corporate Failure

Corporate failure is not simply the closure of a company; it has impacts on the society and economy in which it occurs. It therefore makes good business and academic sense to model corporate failure for prediction purposes. If bankruptcy can be predicted properly, it may be avoided and the firm restructured, to the benefit not only of the company itself but also of its employees, creditors, and shareholders. Academics have long believed that corporate failure can be predicted through the design of legal procedure (Dahiya and Klapper 2007) or through financial ratio analysis. This is because using financial variables in the form of ratios can control for the systematic effects of size and industry on the variables under examination (Lev and Sunder 1979: 187-188), facilitating cross-sectional comparisons in an attempt to objectively discover the "symptoms" of corporate failure. In addition to cross-sectional comparisons, Theodossiou (1993) developed a time-series analysis to investigate the detection of corporate failure. In consequence, financial ratio analysis has for decades not only been preferred when the interpretation of financial accounts is required, but has also been used extensively as the input to explicit corporate bankruptcy prediction models. Beyond financial ratio analysis, the application of the Merton (1974) model (e.g., the KMV model), widely applied by practitioners, also predicts corporate failure. Differing from financial ratio analysis, the Merton (1974) model examines the possibility of bankruptcy using the stock market valuation and the concept of "distance to default": the probability of bankruptcy is negatively related to the distance to default, which measures how far the firm's valuation is from the critical default point in the firm value distribution.
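The distance-to-default calculation just described can be sketched in a few lines. This is a minimal illustration under the simplest Merton-style assumptions (lognormal firm value); the asset value V, default point D, drift mu, and volatility sigma below are invented inputs — the KMV implementation backs the unobservable asset value and volatility out of equity market data rather than observing them directly.

```python
from math import log, sqrt
from statistics import NormalDist

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Number of standard deviations by which the expected log firm
    value exceeds the default point D at horizon T (Merton-style)."""
    return (log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))

def default_probability(dd):
    """Probability that firm value ends below the default point:
    the standard normal tail mass beyond the distance to default."""
    return NormalDist().cdf(-dd)

# Hypothetical firm: assets worth 120 against a default point of 80.
dd = distance_to_default(V=120.0, D=80.0, mu=0.08, sigma=0.25)
pd_merton = default_probability(dd)
```

A larger distance to default maps monotonically into a smaller failure probability, which is exactly the negative relationship described above.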


107.4.1 Financial Ratio Analysis and Discriminant Analysis

The use of ratio analysis for the purpose of predicting corporate failure dates back at least to Fitzpatrick (1932), but the method did not attract much attention until Beaver (1966) proposed his famous univariate studies. Beaver (1966) systematically categorized 30 popular ratios into six groups and found that some of the ratios under investigation, such as the cash flow/total debt ratio, demonstrated excellent predictive power in the corporate failure model. His results also showed the deterioration of distressed firms prior to their financial failure, including a sharp drop in net income, cash flow, and working capital, coupled with an increase in total debt. Although useful in predicting bankruptcy, univariate analysis was later criticized for its failure to incorporate into a single model further causes of corporate failure measured by other ratios. This criticism prompted interest in multivariate distress prediction models that simultaneously include different financial ratios with better combined predictive power for corporate failure. In the 1980s, with increasing attention placed on multi-ratio analysis, multivariate discriminant analysis (MDA) began to dominate the bankruptcy prediction literature. MDA determines the discriminant coefficient of each of the characteristics chosen for the model, on the basis that these will discriminate efficiently between failed and nonfailed firms. A single score is then generated for each firm in the study, and a cutoff point is determined to minimize the dispersion of scores associated with firms in each category and the chance of overlap. An intuitive advantage of using MDA techniques is that the entire profile of the characteristics under investigation, and their interactions, are considered. Another advantage of using MDA lies in its convenience in application and interpretation (Altman 1983: 102-103).
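The mechanics just described can be sketched with Fisher's two-group linear discriminant on synthetic data. The two "ratios," group means, and sample sizes below are invented for illustration; this is not Altman's or any published estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two illustrative "financial ratios" per firm for 50 failed and
# 50 nonfailed firms, drawn from well-separated normal clusters.
failed = rng.normal([-0.5, -0.3], 0.2, size=(50, 2))
healthy = rng.normal([0.4, 0.5], 0.2, size=(50, 2))

# Fisher discriminant: w = S_pooled^{-1} (m_failed - m_healthy),
# so higher scores x @ w point toward the failed group.
m0, m1 = healthy.mean(axis=0), failed.mean(axis=0)
S_pooled = (np.cov(healthy.T) + np.cov(failed.T)) / 2
w = np.linalg.solve(S_pooled, m1 - m0)

# Single score per firm; cutoff halfway between the group mean scores.
cutoff = (m0 @ w + m1 @ w) / 2
flagged = np.vstack([healthy, failed]) @ w > cutoff
```

Here `flagged[:50]` corresponds to the healthy firms and `flagged[50:]` to the failed ones; with clusters this well separated, the linear rule classifies almost every firm correctly, illustrating the single-score-plus-cutoff logic of MDA.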
The most popular MDA model must be the Z-score model developed by Altman (1968). On the basis of their popularity and relevance to corporate failure, 22 carefully selected financial ratios were classified into five bankruptcy-related categories. Altman's sample included 33 bankrupt and 33 nonbankrupt manufacturing companies over the period 1946-1965. The best and final model in terms of failure prediction contained five variables that are still frequently used in the banking and business sectors to date. This linear function is

Z-score = 1.2 Z1 + 1.4 Z2 + 3.3 Z3 + 0.6 Z4 + 0.999 Z5,    (107.1)

where Z-score is the overall index; Z1 is working capital/total assets; Z2 is retained earnings/total assets; Z3 is earnings before interest and taxes/total assets; Z4 is market value of equity/book value of total debt; and Z5 is sales/total assets.
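Equation (107.1) is straightforward to compute. A small sketch using invented ratios for a hypothetical firm, together with Altman's cutoff scores of 1.8 and 2.99:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score as in Equation (107.1)."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 0.999 * sales_ta)

def zone(z):
    """Altman's zones: below 1.8 distressed, above 2.99 safe."""
    if z < 1.8:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"

# Hypothetical firm (ratios are illustration values only).
z = altman_z(wc_ta=0.15, re_ta=0.20, ebit_ta=0.10,
             mve_tl=1.10, sales_ta=1.30)
```

This hypothetical firm scores between the two cutoffs and so falls in the inconclusive "grey" region rather than the distressed or safe zones.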


Altman (1968) also tested the cutoff point that balances the Type I and Type II errors, and found that, in general, a company with a Z-score below 1.8 was highly likely to go bankrupt within the next few years of the examination period, while one with a Z-score above 2.99 was comparatively much safer. The Z-score model is so well known that it remains a favorite indicator of credit risk among fund suppliers in the new millennium.

Although these statistical discrimination techniques are popular in predicting bankruptcy, they suffer from some methodological problems, some of which stem directly from the use of financial ratios (see also Agarwal and Taffler 2008). For example, the proportionality and zero-intercept assumptions are two factors crucial to the credibility of ratio analysis, as together they determine the form of any ratio. The basic form of a ratio is usually assumed to be y/x = c, where y and x are two accounting variables that are assumed to be different but linearly related, and c can be interpreted as the value of this specific ratio. This raises three questions. First, is there an error term in the relationship between the two accounting variables? Second, is it possible that an intercept term exists in this relationship? And finally, what if the numerator and denominator are not linearly related? With regard to the first question, Lev and Sunder (1979) proved that if there is an additive error term in the relationship between y and x suggested by the underlying theory, that is, y = βx + e, or y/x = β + e/x, the comparability of such ratios will be limited. This is because "the extent of deviation from perfect size control depends on the properties of the error term and its relation to the size variable, x" (Lev and Sunder 1979: 191). The logic behind their argument can be summarized as follows.
If the error term is homoscedastic, e/x is smaller for large firms than for small firms, because x, as a size variable, will on average be greater for large firms than for small ones. In other words, the ratio y/x for large firms will be closer to the slope term β than that for small firms. Thus, because the variance of the ratio y/x is greater for smaller firms than for larger ones, the ratios y/x of the two groups (i.e., large and small firms) are statistically drawn from two different distributions. This certainly weakens the comparability of ratios with such an underlying relationship: an additive error term in the relationship between denominator and numerator is not adequate in terms of size control. If, however, y is heteroscedastic, this may result in homoscedasticity of y/x, though it is also possible that the heteroscedasticity of y/x remains. In fact, Lev and Sunder (1979) also pointed out that this problem is ameliorated only when the error term is multiplicative in the relationship, that is, y = βxe or y/x = βe, because the deviation of y/x then has no mathematical relationship with the size variable x. As a result, this form of the ratio is more appropriate for comparison purposes. The above discussion also applies where an intercept term exists in the relationship between the two ratio variables, represented by y = α + βx or y/x = β + α/x. It is obvious that the variance of the ratio y/x will be relatively larger for smaller firms than for larger ones, under the influence of the term α/x. Again, suffice it to say that such an underlying form of ratio is not acceptable for comparisons of company performance. If two variables are needed to control for the size of the variable y, such as y = α + βx + δz or y = α + βx + δx², and if the underlying relationship is nonlinear, considerable confusion will be caused in the interpretation of ratios, not to mention the results of the ratio analysis.

All these problems cast doubt on whether the normally used form of ratios is appropriate in all circumstances. Theoretically, it seems that the use of ratios is unproblematic if and only if highly restrictive assumptions can be satisfied. Empirically, Whittington (1980) claimed that violation of the proportionality assumption of the ratio form is the problem researchers most frequently encounter in the use of financial data in practice, especially in time-series studies of an individual firm. McDonald and Morris (1984: 96) found that the proportionality assumption is better satisfied when a group of firms in a simple homogeneous industry is analyzed; otherwise, some amendment of the form of the ratios will be necessary. However, they did not suggest replacing the basic form of ratio with a more sophisticated one. On the contrary, they commented that, on average, the basic form of ratio performed quite satisfactorily in empirical applications of ratio analysis.
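Lev and Sunder's size argument can be checked with a short simulation (the firm-size ranges, the slope β = 0.5, and the error variance below are arbitrary illustration values): with a homoscedastic additive error, y/x = β + e/x is visibly noisier for small firms than for large ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 10_000, 0.5
small_x = rng.uniform(10, 20, n)      # "small firms" (size variable x)
large_x = rng.uniform(100, 200, n)    # "large firms"

# y = beta * x + e with the same homoscedastic error for both groups;
# dividing through by x leaves the noise term e/x size-dependent.
ratio_small = (beta * small_x + rng.normal(0, 2.0, n)) / small_x
ratio_large = (beta * large_x + rng.normal(0, 2.0, n)) / large_x
```

Both groups of ratios center on β, but the small-firm ratios have a standard deviation roughly ten times larger, so the two sets of ratios are effectively drawn from different distributions — exactly the comparability problem described above.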
Keasey and Watson (1991: 90) also suggested that possible violations of the proportionality assumption can be ignored. Since then, lacking further theoretical developments or improvements in ratio forms, the basic form of ratio has remained prevalent in bankruptcy studies.

In addition to the flaws in the design of financial ratios, there are some methodological problems associated with the use of MDA itself. Among them, non-normality, inequality of dispersion matrices across groups, and nonrandom sampling are the three problems that haunt model users. The violation of the normality assumption of MDA has given rise to considerable discussion since the 1970s (Kshirsagar 1971; Deakin 1976; Eisenbeis 1977; Amemiya 1981; Frecka and Hopwood 1983; Zavgren 1985; Karels and Prakash 1987; Balcaen and Ooghe 2006). Violation of normality is the ultimate cause of biased tests of significance and estimated error rates. Studies on the univariate normality of financial ratios found that ratio distributions tend to be skewed (Deakin 1976; Frecka and Hopwood 1983; Karels and Prakash 1987). If the ratios included in the model are not perfectly univariate normal, their joint distribution will, a priori, not be multivariate normal (Karels and Prakash 1987). Therefore, a good variable set for bankruptcy modeling should minimize multivariate non-normality problems; the traditional but prevalent stepwise selection procedure apparently cannot satisfy this requirement. However, despite quite a few complementary studies on data transformation and outlier removal for ratio normality (Eisenbeis 1977; Ezzamel et al. 1987; Frecka and Hopwood 1983), these suggestions are rarely adopted in later research on MDA model building (Shailer 1989: 57). Because all these techniques are imperfect, McLeay (1986) advocated that selecting a better model is more straightforward than removing outliers or transforming data.

Compared with non-normality, the inequality of dispersion matrices across groups does not seem as crucial to the estimation procedure of MDA. In theory, violation of the equal-dispersion assumption affects the appropriate form of the discriminating function. Tests of the relationship between the inequality of dispersions and the efficiency of various forms of classification model show that a quadratic classification rule outperforms a linear one in terms of the overall probability of misclassification when the variance-covariance matrices of the mutually exclusive populations are not identical (Eisenbeis and Avery 1972; Marks and Dunn 1974; Eisenbeis 1977). More importantly, the larger the difference between dispersions across groups, the more a quadratic form of the discriminating function is recommended.

One of the strict MDA assumptions is random sampling.
However, the sampling method that MDA users prefer in bankruptcy prediction studies is choice-based, or state-based, sampling, which draws an equal or approximately equal number of observations from each population group. Because corporate failure is not a frequent occurrence in an economy (Altman et al. 1977; Wood and Piesse 1988), such sampling techniques produce a relatively low probability of misclassifying distressed firms as nondistressed (Type I error) but a higher rate of misclassifying nondistressed firms as distressed (Type II error) (Lin and Piesse 2004; Kuo et al. 2002; Palepu 1986; Zmijewski 1984). Therefore, the high predictive power of MDA models claimed by many authors appears suspect. As Zavgren (1985: 20) commented, MDA models are "difficult to assess because they play fast and loose with the assumptions of discriminant analysis." With doubt cast on the validity of MDA results, scholars began to look at more defensible approaches, such as conditional probability analysis (CPA). This shift in methodology is also reviewed by Balcaen and Ooghe (2006).
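The consequence of choice-based sampling for the two error types can be illustrated with a one-variable simulation (the 5% population failure rate, the group means, and the normal score model are all invented for illustration): equal group sizes implicitly assume equal priors, which moves the classification threshold toward the healthy side relative to a threshold that respects the true population priors, producing few missed failures but many false alarms.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_fail = 100_000, 0.05
is_failed = rng.random(n) < p_fail
# One illustrative score: failed firms centred at -1, healthy at +1.
x = np.where(is_failed, rng.normal(-1, 1, n), rng.normal(1, 1, n))

# Balanced (choice-based) sampling implies equal priors: threshold 0.
# True 5%/95% priors shift the Bayes threshold: flag failure only when
# the log-likelihood ratio -2x exceeds log(0.95/0.05).
t_balanced = 0.0
t_weighted = -np.log(0.95 / 0.05) / 2   # about -1.47

def error_rates(t):
    type1 = np.mean(x[is_failed] > t)    # failed classified as healthy
    type2 = np.mean(x[~is_failed] < t)   # healthy classified as failed
    return type1, type2

t1_bal, t2_bal = error_rates(t_balanced)
t1_wgt, t2_wgt = error_rates(t_weighted)
```

The balanced threshold misses far fewer failures (low Type I error) at the cost of flagging many more healthy firms (high Type II error), matching the pattern reported in the sampling literature cited above.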


107.4.2 Conditional Probability Analysis

Since the late 1970s, the use of discriminant analysis has gradually been replaced by CPA, which differs from MDA in that CPA produces the "probability of occurrence of a result, rather than producing a dichotomous analysis of fail/survive as is the norm with basic discriminant techniques" (Rees 1990: 418). CPA primarily refers to logit and probit techniques and has been discussed and used widely in academia (Keasey and Watson 1987; Martin 1977; Mensah 1983; Ohlson 1980; Peel and Peel 1987; Storey et al. 1987; Zavgren 1985, 1988; Sun 2007). Its popularity highlights the various responses of model users to the risk of failure, but the main value of CPA is that its application does not depend on demanding assumptions, as MDA's does (Kennedy 1991, 1992). However, logit CPA is not always better than MDA under all conditions. If the multivariate normality assumption is met, the maximum likelihood estimator (MLE) of MDA is asymptotically more efficient than the MLE of logit models; in any other circumstance, the MLE of MDA may not remain consistent, but the MLE of logit models will (Amemiya 1981; Judge et al. 1985; Lo 1986). As rejection of normality is very common in the bankruptcy literature, the logit model is appealing. Methodologically speaking, suffice it to say that logit analysis seldom goes wrong in distress classification.

The most commonly cited and used CPA model was developed by Ohlson (1980). In contrast to Altman's (1968) sample, with its equal numbers of bankrupt and nonbankrupt firms, Ohlson's sample included 105 bankrupt and 2,058 nonbankrupt industrial companies during 1970-1976. Using logistic analysis, his failure prediction models, with accuracy rates above 92%, contained nine financial ratios and suggested that company size, capital structure, return on assets, and current liquidity were the four most powerful bankruptcy indicators.
His model is

Y = -1.3 - 0.4Y1 + 6.0Y2 - 1.4Y3 + 0.1Y4 - 2.4Y5 - 1.8Y6 + 0.3Y7 - 1.7Y8 - 0.5Y9,    (107.2)

where Y is the overall index; Y1 is log(total assets/GNP price-level index); Y2 is total liabilities/total assets; Y3 is working capital/total assets; Y4 is current liabilities/current assets; Y5 is one if total liabilities exceed total assets, zero otherwise; Y6 is net income/total assets; Y7 is funds provided by operations/total liabilities; Y8 is one if net income was negative for the last 2 years, zero otherwise; and Y9 is a measure of the change in net income.

It is interesting that Ohlson (1980) chose 0.5 as the cutoff point, implicitly assuming a symmetric loss function across the two types of classification error. He later explained that the best cutoff point should be calculated using data beyond his examination period (i.e., 1976), but that this was unnecessary in his paper because the econometric characteristics of CPA and his large sample size would neutralize the problem (Ohlson 1980: 126). He also pointed out the difficulty of comparing his results with others' due to differences in the design of the lead time, the selection of predictors and observation periods, and the sensitivity of the results to the choice of estimation procedures.

As far as the predictive accuracy rates of MDA and CPA are concerned, Ohlson (1980) found that the overall results of logit models were not an obvious improvement on those of MDA. Hamer (1983) tested the predictive power of MDA and logit CPA and concluded that both performed comparably in the prediction of business failure for a given variable set. However, given that predictive accuracy rates were overstated in previous MDA papers, mainly due to the employment of choice-based sampling, this comparison may be biased and inferences from it could favor CPA. Apart from this, several other factors vary among previous papers and may erode such comparisons, including differences in the selection of predictors, the firm-matching criteria, the lead time, the estimation and test time periods, and the research methodologies. Unless these factors are deliberately controlled, any reported comparison between CPA and MDA in terms of predictive ability will not be robust. In conclusion, not only can CPA provide what any other technique can in terms of use and ease of interpretation, but, more importantly, it has none of the strict assumptions of MDA from which biased results primarily stem (Keasey and Watson 1991: 91). More recently, the dynamics of firm-specific and macroeconomic covariates have been captured by the new CPA modeling proposed by Duffie et al. (2007).
With these appealing advantages, CPA is believed to be superior and preferred to MDA for bankruptcy classification.

107.4.3 Three CPA Models: LP, PM, and LM

There are three commonly cited CPA models: the linear probability model (LP), the probit model (PM), and the logit model (LM). Because CPA estimates the probability of the occurrence of an outcome under study, the general form of a CPA equation can be set as

Pr(y = 1) = F(x, β); Pr(y = 0) = 1 − F(x, β),

where y is a dichotomous dummy variable that takes the value of 1 if the event occurs and 0 if it does not, and Pr(·) represents the probability of this event. F(·) is a function of a regressor vector x coupled with a vector β of parameters that governs the effect of x on the probability. The problem arises as to which distribution best fits this equation. Derived from three different distributions, LP, PM, and LM are the candidates for its right-hand side.

LP is a linear regression model, which is easy to use but has two main problems when applied to generate the probability of an outcome. The first is the heteroscedastic nature of the error term. Recall the form of an ordinary LP, Y = X′β + ε, where Y is the probability of an outcome, X is a column of independent variables, β is the parameter vector, and ε is the error term. When the event occurs, Y = 1 and ε = 1 − X′β; when it does not, Y = 0 and ε = −X′β. This error term is not normally distributed, so a feasible generalized least squares estimation procedure should be used to correct the heteroscedasticity problem (Greene 1997: 87). The second problem, which may be the real difficulty with the LP model, is that LP cannot constrain Y to lie between 0 and 1 as a probability should. Amemiya (1981: 1486) suggested imposing the condition that Y = 1 if Y > 1 and Y = 0 if Y < 0, but this may produce unrealistic and nonsensical results. It is therefore not surprising that LP is now used less frequently; it is not employed in this study either.

In the discussion of qualitative response models, academics seem more interested in the comparison between logit and probit models. Although logit models are derived from the logistic density and probit models from the Normal density, these two distributions are almost identical except that the logistic distribution has thicker tails and a higher peak in the middle (Cramer 1991: 15). In other words, the probability at each tail and in the middle of the logistic distribution curve is larger than that of the Normal distribution. However, one of the advantages of the logit model is its computational simplicity, as the formulas of the two models show:

Probit Model:  Prob(Y = 1) = ∫_{−∞}^{β′x} (1/√(2π)) e^{−t²/2} dt = Φ(β′x),   (107.3)

Logit Model:   Prob(Y = 1) = exp(β′x) / (1 + exp(β′x))   (107.4)
                           = 1 / (1 + exp(−β′x)),   (107.5)


where the function Φ(·) is the standard Normal distribution function. This mathematical convenience of the logit model is one of the reasons for its popularity in practice (Greene 1997: 874). As far as the classification accuracy of CPA models is concerned, comparisons of the results produced by these two models have been made and, as discussed above, generally suggest that they are practically indistinguishable in cases where the data are not heavily concentrated in the tails or the middle (Amemiya 1981; Cramer 1991; Greene 1997). This finding is consistent with the small difference in shape between the two distributions from which PM and LM are derived. It has also been shown that the coefficients of the logit model are about π/√3 ≈ 1.8 times as large as those of the probit model, implying that the slopes of each variable in both models are very similar. In other words, "the logit and probit models results are nearly identical" (Greene 1997: 878).

The choice of sampling method is also a validity factor for CPA. The prevailing sampling method in the bankruptcy literature is to draw a sample with approximately equal numbers of bankrupt and nonbankrupt firms, usually referred to as the state-based sampling technique, as an alternative to random sampling. Although econometric estimation procedures usually assume random sampling, the use of state-based sampling has an intuitive appeal. As far as bankruptcy classification models are concerned, corporate failure is an event with rather low probability, so a random sample may contain a very small percentage of bankrupt firms but a very high percentage of nonbankrupt ones. Such a sample will not be an effective one for any econometric model to produce efficient estimators (Palepu 1986: 6).
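The π/√3 rescaling between logit and probit coefficients noted above can be checked numerically. The following sketch (ours, not from the chapter) compares the logistic CDF with the Normal CDF, with and without rescaling the Normal index by π/√3:

```python
import numpy as np
from scipy.stats import norm

# Grid of index values beta'x.
x = np.linspace(-4, 4, 801)

logistic_cdf = 1.0 / (1.0 + np.exp(-x))   # logit link
probit_cdf = norm.cdf(x)                  # probit link

# Rescale the probit index by pi/sqrt(3) ~ 1.81, the factor by which
# logit coefficients exceed probit coefficients, and compare the fits.
scale = np.pi / np.sqrt(3)
gap_unscaled = np.max(np.abs(logistic_cdf - probit_cdf))
gap_scaled = np.max(np.abs(logistic_cdf - norm.cdf(x / scale)))

print(round(gap_unscaled, 3), round(gap_scaled, 3))
```

After rescaling, the maximum gap between the two link functions shrinks by roughly a factor of five, which is why fitted logit and probit probabilities are nearly identical away from the extreme tails.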
In contrast, state-based sampling is an "efficient sample design" (Cosslett 1981: 56) that can effectively reduce the required sample size without sacrificing efficient estimators, provided an appropriate model and modification procedure are employed. In short, the information content of a state-based sample for model estimation is relatively better than that of a random sample. A state-based sample used with CPA, however, results in understatement of the Type I Error and overstatement of the Type II Error (Palepu 1986; Lin and Piesse 2004). Manski and McFadden (1981) suggested several alternative estimators that can negate the drawbacks of state-based sampling, including the weighted exogenous sampling maximum likelihood estimator (WESMLE) and its modified version by Cosslett (1981), the nonclassical maximum likelihood estimator (NMLE), and the conditional maximum likelihood estimator (CMLE). Their comparison of these estimation procedures can be summarized in the following four points: 1. All these estimators are computationally tractable, consistent, and asymptotically normal. 2. The weighted estimator and conditional estimator avoid the introduction of nuisance parameters.


3. The nonclassical maximum likelihood estimators are strictly more efficient than the others in large samples. 4. In the presence of computational constraints, WESMLE and CMLE are the best; otherwise, NMLE is the most desirable. Accordingly, by using any one of the above modification methods, the advantages of the state-based sampling technique can be maintained while its disadvantages are mostly removed. What can also be inferred from this comparison is that the selection of a modification method depends on two factors: the sample size and the computational complexity. Of these, CMLE is the one most frequently referred to in the bankruptcy literature, for three reasons. First, the method has been demonstrated extensively in the studies of Cosslett (1981) and Maddala (1983), with application to the logit model. Second, it has been used in the development of the acquisition prediction model of Palepu (1986), the merger/insolvency choice model of BarNiv and Hathorn (1997), and the bankruptcy classification models of Lin and Piesse (2004). Finally, because CMLE only changes the constant term of the model produced by normal MLE procedures and has no effect on the other parameters, the procedure is relatively simple and straightforward for CPA users. In short, the merits of using CMLE with CPA include computational simplicity and practical workability. Without biases caused by the choice of sampling method, modified CPA corrects almost all of the methodological flaws to which MDA is subject.
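Because the correction only shifts the constant term, it is easy to illustrate. The sketch below is our own numerical illustration (with made-up population parameters, and assuming the population failure rate is known): it fits a logit to a 50/50 state-based sample drawn from a rare-failure population, then applies the intercept-only prior-correction of the kind discussed in Maddala (1983):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical population: failure is a rare event (~2% of firms) ---
N = 200_000
x = rng.normal(size=N)
true_beta0, true_beta1 = -4.5, 1.2
p = 1 / (1 + np.exp(-(true_beta0 + true_beta1 * x)))
y = rng.binomial(1, p)

def fit_logit(X, y, iters=25):
    """Plain Newton-Raphson MLE for a logit model (no libraries)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-(X @ b)))
        grad = X.T @ (y - mu)
        H = (X * (mu * (1 - mu))[:, None]).T @ X
        b = b + np.linalg.solve(H, grad)
    return b

# --- State-based sample: all failures plus an equal number of survivors ---
fail_idx = np.flatnonzero(y == 1)
surv_idx = rng.choice(np.flatnonzero(y == 0), size=fail_idx.size, replace=False)
idx = np.concatenate([fail_idx, surv_idx])
Xs = np.column_stack([np.ones(idx.size), x[idx]])
ys = y[idx]

b_raw = fit_logit(Xs, ys)

# Only the intercept is biased by the sampling scheme; subtract
# ln[(sample odds of failure) / (population odds of failure)].
pi_pop = y.mean()   # population failure rate (assumed known)
pi_smp = ys.mean()  # sample failure rate (0.5 by construction)
b0_corrected = b_raw[0] - np.log((pi_smp / (1 - pi_smp)) / (pi_pop / (1 - pi_pop)))

print(b_raw[0], b0_corrected, true_beta0)
```

The slope estimate is unaffected by the sampling scheme, while the raw intercept is inflated by several units; the corrected intercept lands close to the true population value.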

107.4.4 Time Series Analysis: CUSUM Model

One application of time series analysis to corporate distress is provided by Theodossiou (1993). He explores the idea that differences between healthy and distressed firms, captured through changes in firm characteristics over time, can help detect corporate distress: in the transition from normal status to distress, financial ratios deteriorate over time, which provides information for early warning. In Theodossiou's (1993) framework, let X_{i,1}, …, X_{i,T} be a sequence of important financial variables for firm i. The unconditional mean of X_{i,t} in the group of healthy companies is stationary over time; i.e., E(X_{i,t} | h) = u_h. For the group of distressed firms, the mean of X_{i,t} diverges from u_h and gradually moves toward the mean u_f of the distressed firms; that is, u_h → u_{f,s} → ⋯ → u_{f,1} → u_f = u_{f,0}, in which u_{f,m} ≡ E(X_{i,t} | f, m), for m = 0, 1, 2, …, s, is the mean of X_{i,t} m reporting periods before failure. Next, the deviations of a firm's indicator vectors from their means are expressed as a vector autoregressive moving average (VARMA) process of order p and q:


X_{i,t} − u_h = Σ_{k=1}^{p} Φ_k (X_{i,t−k} − u_h) + ε_{i,t} − Σ_{s=1}^{q} Θ_s ε_{i,t−s},   (107.6)

X_{i,t} − u_{f,m} = Σ_{k=1}^{p} Φ_k (X_{i,t−k} − u_{f,m+k}) + ε_{i,t} − Σ_{s=1}^{q} Θ_s ε_{i,t−s},  m = 0, 1, 2, …,   (107.7)

where E(ε_{i,t}) = 0, E(ε_{i,t} ε′_{i,t}) = Σ, and E(ε_{i,t} ε′_{j,s}) = 0 for i ≠ j and/or t ≠ s, with i, j = 1, 2, …, N and N = N_f + N_h. Based on these time series processes of financial ratios, Theodossiou (1993) constructs a cumulative sums (CUSUM) model that conveys information about the deterioration of a distressed firm's financial status:

C_{i,t} = min(C_{i,t−1} + Z_{i,t} − K, 0) < −L,  for K, L > 0,   (107.8)

where C_{i,t} and Z_{i,t} are, respectively, cumulative (dynamic) and quarterly (static) time series performance scores for the ith company at time t, and K and L are sensitivity parameters required to be positive. The score Z_{i,t} is a function of the attribute vector X_{i,t} that accounts for serial correlation in the data and is estimated by

Z_{i,t} = β_0 + β_1′ [X_{i,t} − Σ_{k=1}^{p} Φ_k X_{i,t−k} + Σ_{s=1}^{q} Θ_s ε_{i,t−s}],   (107.9)

where

β_0 ≡ −(1/2D) [(u_h − u_f) − Σ_{k=1}^{p} Φ_k (u_h − u_{f,k})]′ Σ^{−1} [(u_h + u_f) − Σ_{k=1}^{p} Φ_k (u_h + u_{f,k})],

β_1 ≡ (1/D) [(u_h − u_f) − Σ_{k=1}^{p} Φ_k (u_h − u_f)]′ Σ^{−1},

and

D² = [(u_h − u_f) − Σ_{k=1}^{p} Φ_k (u_h − u_f)]′ Σ^{−1} [(u_h − u_f) − Σ_{k=1}^{p} Φ_k (u_h − u_f)].

Based on the CUSUM model, we can measure a firm's overall distress potential through the cumulative score C_{i,t}. For a typical healthy firm, the Z_{i,t} scores are positive and greater than K, so the C_{i,t} scores equal zero. In contrast, a distressed firm whose Z_{i,t} falls below K accumulates a negative C_{i,t}. The two parameters K and L determine the detection ability of the CUSUM model and relate to the Type I and Type II Errors: generally, the larger the value of K, the less likely a Type I Error and the more likely a Type II Error, and the opposite is true for the parameter L. The probabilities

P_f = prob(C_{i,t} > −L | distressed firm),  P_h = prob(C_{i,t} ≤ −L | healthy firm)   (107.10)

are, respectively, the proportions of distressed and healthy companies in the population not classified accurately by the CUSUM model, also termed the Type I and Type II Errors. To determine the optimal values of K and L, we solve the optimization problem

min_{K,L} EC = w_f P_f(K, L) + (1 − w_f) P_h(K, L),   (107.11)

where w_f and w_h = 1 − w_f are the investor's specific weights attached to the probabilities P_f and P_h, and EC is the expected error rate.
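The recursion in Equation (107.8) is straightforward to compute once the static scores are available. A minimal sketch, with hypothetical Z scores and hypothetical sensitivity parameters K and L:

```python
import numpy as np

def cusum_scores(z, K):
    """Cumulative performance scores C_t = min(C_{t-1} + z_t - K, 0),
    as in Equation (107.8), starting from C_0 = 0."""
    c = np.zeros(len(z))
    prev = 0.0
    for t, zt in enumerate(z):
        prev = min(prev + zt - K, 0.0)
        c[t] = prev
    return c

K, L = 0.5, 2.0  # hypothetical sensitivity parameters

# A healthy firm: static scores comfortably above K, so C stays at zero.
z_healthy = np.array([1.2, 0.9, 1.1, 1.0, 1.3])
# A deteriorating firm: scores drift below K, so C accumulates downward.
z_distressed = np.array([0.8, 0.2, -0.5, -1.0, -1.4])

c_h = cusum_scores(z_healthy, K)
c_d = cusum_scores(z_distressed, K)

# Signal distress in the first period where C_t falls below -L.
signal = np.flatnonzero(c_d < -L)
print(c_h, c_d, signal)
```

The healthy firm's cumulative score never leaves zero, while the deteriorating firm trips the −L threshold in the fourth period, illustrating how the CUSUM accumulates evidence of decline instead of reacting to a single bad quarter.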

107.4.5 Merton Model

In the Merton (1974) Model, the probability of a firm going into default is measured through the stock market valuation of firm value. Conceptually, a firm whose market value falls below its obligations is in distress. Yet the market valuation of a firm (the sum of its debt and equity) is not observed directly; only the stock market valuation of equity is available. Merton (1974) therefore estimates the mean and standard deviation of the firm value indirectly through an option formula. One simple way relies on the Black and Scholes (1973) option formula, from which the mean and volatility of the firm value can be estimated. The measure of "distance to default" serves as a proxy for the possibility of corporate financial distress, where distance to default is defined as the distance from the actual firm value to the critical value at which the firm defaults, given the confidence level of the Type I Error in determining the firm's default. In Fig. 107.1, the distance to default is labeled DD. More precisely, the market valuation and standard deviation of the firm's equity can be obtained from the Merton (1974) Model as

V_E = V_A N(d_1) − D e^{−rτ} N(d_2),  σ_E = N(d_1) V_A σ_A / V_E,   (107.12)



Fig. 107.1 Firm value distribution and distance to default. Note. This figure is from Fig. 8.1 in Crosbie and Bohn (2002): (1) the current asset value; (2) the distribution of the asset value at time H ; (3) the volatility of the future assets value at time H ; (4) the level of the default point, the book value of the liabilities; (5) the expected rate of growth in the asset value over the horizon; and (6) the length of the horizon, H

where

d_1 = [ln(V_A/D) + (r + σ_A²/2) τ] / (σ_A √τ),  d_2 = d_1 − σ_A √τ,

V_E is the market value of the firm's equity, σ_E is the standard deviation of the equity, V_A is the market valuation of the firm, σ_A is the standard deviation of the firm value, r is the risk-free rate, τ is the time interval between the current date and the option expiration date, D is the total liability, and N(·) is the cumulative distribution function of the standard Normal. Solving the two equations in (107.12) yields the unknown parameters V_A and σ_A. Based on these estimates, the distance to default is

DD = [ln(V_A/D) + (μ − σ_A²/2) τ] / (σ_A √τ),   (107.13)

in which μ is the growth rate of assets and DD is the distance to default. Generally, the smaller the DD, the more likely the firm is to default in the near future. One vital question in the estimation of the distance to default is how to compute the standard deviation of the return on the equity price. One way is to estimate the standard deviation from daily returns with some autocorrelation adjustments. In practice, KMV Corporation applies Merton (1974) to build the financial distress forecast model usually termed the KMV Model; in estimating the equity return standard deviation, KMV Corporation adopts its own estimates of V_A and σ_A rather than the option approach. By and large, different methods of measuring the standard deviation of equity returns produce different financial distress models.

To model the equity return distribution (particularly the standard deviation), Bharath and Shumway (2008) compare the Merton-based KMV Model with a naïve alternative. Let the market value of each firm's debt equal the face value of its debt; that is, naïve D = F. Then Bharath and Shumway (2008) approximate the volatility of each firm's debt by the naïve equation

naïve σ_D = 0.05 + 0.25 σ_E.

The 5% is a naïve estimate of term structure volatility, and the 25% of equity volatility represents the component associated with default risk. Bharath and Shumway (2008) use this simple approximation to estimate the total volatility of the firm:

naïve σ_A = [V_E / (V_E + naïve D)] σ_E + [naïve D / (V_E + naïve D)] naïve σ_D
          = [V_E / (V_E + F)] σ_E + [F / (V_E + F)] (0.05 + 0.25 σ_E).   (107.14)

Next, they set the expected return process to the firm's stock return over the previous year: naïve μ = r_{it−1}. Finally, the naïve distance to default is

naïve DD = {ln[(V_E + F)/F] + (r_{it−1} − 0.5 naïve σ_A²) τ} / (naïve σ_A √τ).   (107.15)
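Returning to the option-based approach: the two equations in (107.12) have no closed-form solution for V_A and σ_A and are usually solved numerically. A sketch under assumed inputs (the equity value, equity volatility, liabilities, growth rate, and horizon below are all hypothetical):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

# Hypothetical inputs: observed equity value and volatility, debt face
# value, risk-free rate, assumed asset growth rate, one-year horizon.
VE, sigma_E = 3_000.0, 0.60
D, r, tau = 10_000.0, 0.03, 1.0
mu = 0.05

def merton_system(params):
    """Equations (107.12): equity priced as a call on assets, plus the
    volatility restriction sigma_E = N(d1) * VA * sigma_A / VE."""
    VA, sigma_A = params
    d1 = (np.log(VA / D) + (r + 0.5 * sigma_A**2) * tau) / (sigma_A * np.sqrt(tau))
    d2 = d1 - sigma_A * np.sqrt(tau)
    eq1 = VA * norm.cdf(d1) - D * np.exp(-r * tau) * norm.cdf(d2) - VE
    eq2 = norm.cdf(d1) * VA * sigma_A / VE - sigma_E
    return [eq1, eq2]

# Solve for the unobserved asset value and asset volatility.
VA, sigma_A = fsolve(merton_system, x0=[VE + D, sigma_E * VE / (VE + D)])

# Distance to default, Equation (107.13).
DD = (np.log(VA / D) + (mu - 0.5 * sigma_A**2) * tau) / (sigma_A * np.sqrt(tau))
print(VA, sigma_A, DD)
```

A natural starting point for the solver is V_A ≈ V_E + D with σ_A scaled down by the equity share, since asset volatility must be well below equity volatility for a leveraged firm.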



Under this naïve DD measure, Bharath and Shumway (2008) argue that the naïve alternative performs slightly better than the traditional Merton-based KMV Model.
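The naïve measure, by contrast, requires no equation solving. A direct implementation of (107.14) and (107.15), again with hypothetical inputs:

```python
import numpy as np

def naive_dd(VE, sigma_E, F, r_prev, tau=1.0):
    """Naive distance to default of Bharath and Shumway (2008),
    Equations (107.14)-(107.15)."""
    naive_D = F                            # debt taken at face value
    naive_sigma_D = 0.05 + 0.25 * sigma_E  # naive debt volatility
    naive_sigma_A = (VE / (VE + naive_D)) * sigma_E \
        + (naive_D / (VE + naive_D)) * naive_sigma_D
    return (np.log((VE + F) / F)
            + (r_prev - 0.5 * naive_sigma_A**2) * tau) \
        / (naive_sigma_A * np.sqrt(tau))

# Hypothetical firm: equity 3,000, debt face value 10,000, equity
# volatility 60%, stock return of 5% over the previous year.
dd = naive_dd(VE=3_000.0, sigma_E=0.60, F=10_000.0, r_prev=0.05)
print(dd)
```

As expected, raising the equity volatility input lowers the naïve distance to default, since a more volatile firm is more likely to cross the default point.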

107.5 The Selection of Optimal Cutoff Point

The final issue with respect to the accuracy rate of a bankruptcy classification model is the selection of an optimal cutoff point, especially for a scoring model. As Palepu (1986) noted, the cutoff determined in most early papers was an arbitrary cutoff probability, usually 0.5. This choice may be intuitive, but it lacks theoretical backing. Joy and Tollefson (1975), Altman and Eisenbeis (1978), and Altman et al. (1977) developed an equation to calculate the optimal cutoff point in their ZETA model. Two elements in the calculation of the optimal cutoff point can be identified: (1) the costs of Type I and Type II Errors and (2) the prior probabilities of failure and survival. These two essentials had been ignored in most previous studies. Kuo et al. (2002) later applied fuzzy theory to these ideas to enhance a credit-granting decision model. Although these efforts were a major breakthrough, several problems remained unsolved. The first problem is the subjectivity in deciding the costs of Type I and Type II Errors. Altman et al. (1977: 46) claimed that bank loan decisions will be approximately 35 times more costly for Type I Errors than for Type II Errors. This figure certainly cannot be applied to decision models used by other parties, such as investors, or in other periods of time; such an investigation of the costs of Type I and Type II Errors may therefore be a one-off case. Given these real-world constraints, the subjectivity of selecting convenient cutoff figures in academic studies seems inevitable. The second problem is the subjectivity of selecting a prior bankruptcy probability. Wood and Piesse (1988) criticized Altman et al. (1977) for choosing a 2% failure rate, well above the average annual failure rate of 0.5%, suggesting spurious results from Altman et al. and necessitating a correction that was taken up in later research.
The final problem is that the optimal cutoff score produced may not be "optimal" in light of the violation of the assumptions of multinormality and equal dispersion matrices (Altman et al. 1977: 43, footnote 17), which is apparently a common methodological problem in this kind of data analysis. The optimal cutoff equation in Maddala (1983: 80) seems less problematic. It first develops the overall misclassification cost model as

C = C_1 P_1 ∫_{G_2} f_1(x) dx + C_2 P_2 ∫_{G_1} f_2(x) dx,   (107.16)

where C is the total cost of misclassification; C_1 the cost of misclassifying a failed firm as a nonfailed one (Type I Error); C_2 the cost of misclassifying a nonfailed firm as a failed one (Type II Error); P_1 the proportion of failed firms in the total population of firms; P_2 the proportion of nonfailed firms in the total population of firms; G_1 the failed firm group; G_2 the nonfailed firm group; x a vector of characteristics x = (x_1, x_2, …, x_k); f_1(x) the joint distribution of the characteristics x in the failed firm group; f_2(x) the joint distribution of x in the nonfailed firm group; and P_1 + P_2 = 1. However,

∫_{G_2} f_1(x) dx + ∫_{G_1} f_1(x) dx = 1.   (107.17)

From Equations (107.16) and (107.17), we then have

C = C_1 P_1 [1 − ∫_{G_1} f_1(x) dx] + C_2 P_2 ∫_{G_1} f_2(x) dx
  = C_1 P_1 + ∫_{G_1} [C_2 P_2 f_2(x) − C_1 P_1 f_1(x)] dx.   (107.18)

To minimize the total cost of misclassification, min C, a firm should be assigned to G_1 only when

C_2 P_2 f_2(x) − C_1 P_1 f_1(x) ≤ 0,   (107.19)

or

f_1(x)/f_2(x) ≥ C_2 P_2 / (C_1 P_1).   (107.20)

If the expected costs of Type I and Type II Errors are assumed to be the same, C_2 P_2 = C_1 P_1, the condition to minimize the total misclassification cost becomes

f_1(x)/f_2(x) ≥ 1.   (107.21)

This result is consistent with the one proposed by Palepu (1986) under the assumption of equal costs of Type I and Type II Errors. Therefore, the optimal cutoff point should be the probability value at which the two conditional marginal densities, f_1(x) and f_2(x), are equal. Notably, this equation requires no prior failure rate to calculate the optimal cutoff point, but instead the ex post failure rate (i.e., the sample failure rate). Palepu (1986) illustrates this convenience more clearly through an application of Bayes' formula. Although the costs of Type I and Type II Errors themselves are no longer needed, the expected costs of these errors are still required in this formula. Unfortunately, the subjectivity in deciding the relationship between these two types of expected costs still cannot be removed. There is no theory suggesting they shall be the


same; that is, C_2 P_2 = C_1 P_1. However, compared to the traditionally arbitrary 50% cutoff point, this assumption is more neutral and acceptable in an economic sense. Applications of this procedure for the determination of the bankruptcy cutoff probability can be found in Palepu (1986) and Lin and Piesse (2004).
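Under equal expected costs, the optimal cutoff is simply the score at which the two conditional densities cross. The sketch below (ours, with simulated model scores standing in for in-sample output) estimates f_1 and f_2 by kernel density and locates the crossing:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical in-sample probability scores from some bankruptcy model:
# failed firms tend to score high, nonfailed firms low (illustrative only).
scores_failed = rng.beta(6, 3, size=300)     # draws from f1(x), failed group
scores_nonfailed = rng.beta(3, 6, size=300)  # draws from f2(x), nonfailed group

# Estimate the two conditional densities and find where f1(x) = f2(x),
# the optimal cutoff when C1*P1 = C2*P2, as in Equation (107.21).
f1 = gaussian_kde(scores_failed)
f2 = gaussian_kde(scores_nonfailed)

grid = np.linspace(0, 1, 1001)
diff = f1(grid) - f2(grid)
cutoff = grid[np.argmin(np.abs(diff))]  # grid point nearest the crossing
print(cutoff)
```

With these symmetric illustrative groups the crossing lands near 0.5, but with a skewed score distribution, as is typical when failures are rare, the density-crossing cutoff can sit far from the conventional 0.5.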

107.6 Recent Development

While MDA and CPA are static analyses, dynamic modeling is becoming the mainstream of the bankruptcy literature. Shumway (2001) criticized static bankruptcy models for observing the status of each bankrupt company only one year prior to failure and for ignoring changes in firms from year to year. In contrast, Shumway (2001) proposed a simple dynamic hazard model, which contains a survivor function and a hazard function to determine a firm's failure probability at each point in time. Given the infrequency of corporate failure in reality, hazard model users also avoid the small sample problem, because the dynamic model uses all available time series information on firms. As the hazard model takes duration dependence, time-varying covariates, and data sufficiency problems into consideration, it is methodologically superior to the models of the MDA and CPA families, although more empirical evidence is needed to confirm its superior predictive power. Studies following similar concepts can be found in Whalen (1991) and Helwege (1996).
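The contrast with static models is easiest to see in the data layout: in a discrete-time hazard setup, each firm contributes one observation per year, and the failure indicator equals 1 only in a failing firm's final year. A minimal sketch of this data shaping, with hypothetical firm histories (the estimation step would then be an ordinary logit on these firm-year rows):

```python
# Hypothetical firm histories: firm "A" fails in 1992, firm "B" survives.
histories = {
    "A": {"years": [1990, 1991, 1992], "failed": True},
    "B": {"years": [1990, 1991, 1992, 1993], "failed": False},
}

rows = []
for firm, h in histories.items():
    for i, year in enumerate(h["years"]):
        age = i + 1  # firm age as a duration-dependence covariate
        # y = 1 only in the final year of a firm that fails; all earlier
        # firm-years, and all healthy firm-years, carry y = 0.
        y = int(h["failed"] and year == h["years"][-1])
        rows.append((firm, year, age, y))

ys = [r[3] for r in rows]
print(rows, sum(ys))
```

Seven firm-year observations are produced from two firms, with a single failure event; a static model would instead keep only one row per firm and discard the intermediate years.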

107.7 Conclusion

Across all the frequently used methods of bankruptcy, it is clear that the reasons a firm may file for corporate insolvency do not necessarily include the inability to pay off its financial obligations as they mature. For example, a solvent company can also be wound up through the members' voluntary liquidation procedure to maximize shareholders' wealth when the realized value of its assets exceeds its present value in use. Bulow and Shoven (1978) modeled the potential conflicts among the various claimants to the assets and income flows of the company (e.g., bondholders, bank lenders, and equity holders) and found that a liquidation decision should be made when "the coalition of claimants with negotiating power can gain from immediate liquidation" (Bulow and Shoven 1978: 454). Their model also considered the existence of some asymmetrical claimants of the firm. These results confirm the complicated nature of the bankruptcy decision and justify the adoption of the members' voluntary liquidation procedure to determine a company's value (see Brealey et al. 2001: 622; Ross et al. 2002: 857).


As to the development of failure prediction models, new models are methodologically superior, but the increase in their predictive power does not always seem to justify the increase in their modeling complexity, which casts doubt on their real value in practice. In addition, the costs of bankruptcy vary with the different bankruptcy codes of different countries (see Brealey et al. 2001: 439–443; Ross et al. 2002: 426), implying that bankruptcy prediction models with universally applicable factors and cutoff probabilities do not exist.

Acknowledgments We would like to express our heartfelt thanks to Professor C. F. Lee, who provided valuable suggestions in structuring this article. We also owe thanks to many friends at the University of London (UK) and National Chi Nan University (Taiwan) for valuable comments, and to our research assistant Chiu-Mei Huang for preparing the manuscript and cheerfully proofreading several drafts. Last, but not least, special thanks go to the Executive Editorial Board of the Handbook of Quantitative Finance and Risk Management at Kluwer Academic Publishers, who encouraged us to write the article, expertly managed the development process, and superbly turned the final manuscript into a finished product.

References

Agarwal, V. and R. Taffler. 2008. “Comparing the performance of market-based and accounting-based bankruptcy prediction models.” Journal of Banking and Finance 32, 1541–1551. Altman, E. I. 1968. “Financial ratios, discriminant analysis and the prediction of corporate bankruptcy.” Journal of Finance 23(4), 589–609. Altman, E. I. 1983. Corporate financial distress: a complete guide to predicting, avoiding, and dealing with bankruptcy, Wiley, New York, NY. Altman, E. I. and R. O. Eisenbeis. 1978. “Financial applications of discriminant analysis: a clarification.” Journal of Financial and Quantitative Analysis 13(1), 185–195. Altman, E. I., R. G. Haldeman, and P. Narayanan. 1977. “Zeta analysis: a new model to identify bankruptcy risk of corporations.” Journal of Banking and Finance 1, 29–54. Amemiya, T. 1981. “Qualitative response models: a survey.” Journal of Economic Literature 19(4), 1483–1536. Balcaena, S. and H. Ooghe. 2006. “35 years of studies on business failure: an overview of the classic statistical methodologies and their related problems.” British Accounting Review 38, 63–93. BarNiv, R. and J. Hathorn. 1997. “The merger or insolvency alternative in the insurance industry.” Journal of Risk and Insurance 64(1), 89–113. Beaver, W. 1966. “Financial ratios as predictors of failure.” Journal of Accounting Research 4(Suppl.), 71–111. Bharath, S. and T. Shumway. 2008. “Forecasting default with the Merton distance to default model.” Review of Financial Studies 21(3), 1339–1369. Black, F. and M. Scholes. 1973. “The pricing of options and corporate liabilities.” Journal of Political Economy 81(3), 637–654. Blum, M. 1974. “Failing company discriminant analysis.” Journal of Accounting Research 12(1), 1–25. Brealey, R. A., S. C. Myers, and A. J. Marcus. 2001. Fundamentals of corporate finance, 3rd Edition, McGraw-Hill, New York, NY. Bulow, J. and J. Shoven. 1978. “The bankruptcy decision.” Bell Journal of Economics 9(2), 437–456.

Cosslett, S. R. 1981. “Efficient estimation of discrete-choice models,” in Structural analysis of discrete data with econometric applications, C. F. Manski and D. McFadden (Eds.). MIT Press, London, pp. 51–111. Counsell, G. 1989. “Focus on workings of insolvency act.” The Independent, 4th April. Cramer, J. S. 1991. The logit model: an introduction for economists, Edward Arnold, London. Crosbie, P. and J. Bohn. 2002. Modeling default risk, KMV Corporation, San Francisco, CA. Dahiya, S. and L. Klapper. 2007. “Who survives? A cross-country comparison.” Journal of Financial Stability 3, 261–278. Deakin, E. B. 1976. “Distributions of financial accounting ratios: some empirical evidence.” Accounting Review 51(1), 90–96. Duffie, D., L. Saita, and K. Wang. 2007. “Multi-period corporate default prediction with stochastic covariates.” Journal of Financial Economics 83, 635–665. Eisenbeis, R. A. 1977. “Pitfalls in the application of discriminant analysis in business, finance, and economics.” Journal of Finance 32(3), 875–900. Eisenbeis, R. A. and R. B. Avery. 1972. Discriminant analysis and classification procedure: theory and applications, D.C. Heath, Lexington, MA. Ezzamel, M., C. Mar-Molinero, and A. Beecher. 1987. “On the distributional properties of financial ratios.” Journal of Business Finance & Accounting 14(4), 463–481. Fitzpatrick, P. J. 1932. “A comparison of ratios of successful industrial enterprises with those of failed firms.” Certified Public Accountant, October, November, and December, pp. 598–605, 656–662, and 727–731, respectively. Foster, G. 1986. Financial statement analysis, 2nd Edition, Prentice-Hall, Englewood Cliffs, NJ. Frecka, T. J. and W. S. Hopwood. 1983. “The effects of outliers on the cross-sectional distributional properties of financial ratios.” Accounting Review 58(1), 115–128. Greene, W. H. 1997. Econometric analysis, 3rd Edition, Prentice-Hall, Englewood Cliffs, NJ. Hamer, M. M. 1983.
“Failure prediction: sensitivity of classification accuracy to alternative statistical methods and variable sets.” Journal of Accounting and Public Policy 2(4), 289–307. Helwege, J. 1996. “Determinants of saving and loan failures: estimates of a time-varying proportional hazard function.” Journal of Financial Services Research 10, 373–392. Homan, M. 1989. A study of administrations under the insolvency act 1986: the results of administration orders made in 1987, ICAEW, London. Hudson, J. 1987. “The age, regional and industrial structure of company liquidations.” Journal of Business Finance & Accounting 14(2), 199–213. Joy, M. O. and J. O. Tollefson. 1975. “On the financial applications of discriminant analysis.” Journal of Financial and Quantitative Analysis 10(5), 723–739. Judge, G. G., W. E. Griffiths, R. Carter Hill, H. Lutkepohl, and T. C. Lee. 1985. The theory and practice of econometrics, Wiley, New York, NY. Karels, G. V. and A. J. Prakash. 1987. “Multivariate normality and forecasting of business bankruptcy.” Journal of Business Finance & Accounting 14(4), 573–595. Keasey, K. and R. Watson. 1987. “Non-financial symptoms and the prediction of small company failure: a test of the Argenti’s hypothesis.” Journal of Business Finance & Accounting 14(3), 335–354. Keasey, K. and R. Watson. 1991. “Financial distress prediction models: a review of their usefulness.” British Journal of Management 2(2), 89–102. Kennedy, P. 1991. “Comparing classification techniques.” International Journal of Forecasting 7(3), 403–406.

Kennedy, P. 1992. A guide to econometrics, 3rd Edition, Blackwell, Oxford. Kshirsagar, A. M. 1971. Advanced theory of multivariate analysis, Dekker, New York, NY. Kuo, H. C., S. Wu, L. Wang, and M. Chang. 2002. “Contingent fuzzy approach for the development of banks’ credit-granting evaluation model.” International Journal of Business 7(2), 53–65. Lev, B. and S. Sunder. 1979. “Methodological issues in the use of financial ratios.” Journal of Accounting and Economics 1(3), 187–210. Lin, L. and J. Piesse. 2004. “The identification of corporate distress in UK industrials: a conditional probability analysis approach.” Applied Financial Economics 14, 73–82. Lo, A. W. 1986. “Logit versus discriminant analysis: a specification test and application to corporate bankruptcies.” Journal of Econometrics 31(3), 151–178. Maddala, G. S. 1983. Limited-dependent and qualitative variables in econometrics, Cambridge University Press, Cambridge. Manski, C. F. and D. L. McFadden. 1981. Structural analysis of discrete data and econometric applications. Cambridge, The MIT Press. Marks, S. and O. J. Dunn. 1974. “Discriminant functions when covariance matrices are unequal.” Journal of the American Statistical Association 69(346), 555–559. Martin, D. 1977. “Early warning of bank failure: a logit regression approach.” Journal of Banking and Finance 1(3), 249–276. McDonald, B. and M. H. Morris. 1984. “The functional specification of financial ratios: an empirical examination.” Accounting and Business Research 15(59), 223–228. McLeay, S. 1986. “Students and the distribution of financial ratios.” Journal of Business Finance & Accounting 13(2), 209–222. Mensah, Y. 1983. “The differential bankruptcy predictive ability of specific price level adjustments: some empirical evidence.” Accounting Review 58(2), 228–246. Merton, R. C. 1974. “On the pricing of corporate debt: the risk structure of interest rates.” Journal of Finance 29(2), 449–470. Ohlson, J. A. 1980.
“Financial ratios and the probabilistic prediction of bankruptcy.” Journal of Accounting Research 18(1), 109–131. Palepu, K. G. 1986. “Predicting takeover targets: a methodological and empirical analysis.” Journal of Accounting and Economics 8(1), 3–35. Pastena, V. and W. Ruland. 1986. “The merger bankruptcy alternative.” Accounting Review 61(2), 288–301. Peel, M. J. and D. A. Peel. 1987. “Some further empirical evidence on predicting private company failure.” Accounting and Business Research 18(69), 57–66. Rees, B. 1990. Financial analysis, Prentice-Hall, London. Ross, S. A., R. W. Westerfield, and J. F. Jaffe. 2002. Corporate finance, 6th Edition, McGraw-Hill, New York, NY. Shailer, G. 1989. “The predictability of small enterprise failures: evidence and issues.” International Small Business Journal 7(4), 54–58. Shumway, T. 2001. “Forecasting bankruptcy more accurately: a simple hazard model.” Journal of Business 74(1), 101–124. Storey, D. J., K. Keasey, R. Watson, and P. Wynarczyk. 1987. The performance of small firms, Croom Helm, Bromley. Sun, L. 2007. “A re-evaluation of auditors’ opinions versus statistical models in bankruptcy prediction.” Review of Quantitative Finance and Accounting 28, 55–78. Theodossiou, P. 1993. “Predicting shifts in the mean of a multivariate time series process: an application in predicting business failures.” Journal of the American Statistical Association 88, 441–449. Walker, I. E. 1992. Buying a company in trouble: a practical guide, Gower, Hants. Whalen, G. 1991. “A proportional hazard model of bank failure: an examination of its usefulness as an early warning tool.” Economic Review, First Quarter, 20–31.

Whittington, G. 1980. “Some basic properties of accounting ratios.” Journal of Business Finance & Accounting 7(2), 219–232. Wood, D. and J. Piesse. 1988. “The information value of failure predictions in credit assessment.” Journal of Banking and Finance 12, 275–292. Zavgren, C. V. 1985. “Assessing the vulnerability to failure of American industrial firms: a logistic analysis.” Journal of Business Finance & Accounting 12(1), 19–45.

Zavgren, C. V. 1988. “The association between probabilities of bankruptcy and market responses – a test of market anticipation.” Journal of Business Finance & Accounting 15(1), 27–45. Zmijewski, M. E. 1984. “Methodological issues related to the estimation of financial distress prediction models.” Journal of Accounting Research 22(Suppl.), 59–82.