Quantitative Modeling of Operational Risk in Finance and Banking Using Possibility Theory

Studies in Fuzziness and Soft Computing

Arindam Chaudhuri Soumya K. Ghosh

Quantitative Modeling of Operational Risk in Finance and Banking Using Possibility Theory

Studies in Fuzziness and Soft Computing Volume 331

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]


About this Series

The series "Studies in Fuzziness and Soft Computing" contains publications on various topics in the area of soft computing, which include fuzzy sets, rough sets, neural networks, evolutionary computation, probabilistic and evidential reasoning, multi-valued logic, and related fields. The publications within "Studies in Fuzziness and Soft Computing" are primarily monographs and edited volumes. They cover significant recent developments in the field, both of a foundational and applicable character. An important feature of the series is its short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

More information about this series at http://www.springer.com/series/2941


Arindam Chaudhuri • Soumya K. Ghosh

Quantitative Modeling of Operational Risk in Finance and Banking Using Possibility Theory


Arindam Chaudhuri
Samsung R&D Institute Delhi
Noida, Uttar Pradesh, India

Soumya K. Ghosh
School of Information Technology
Indian Institute of Technology Kharagpur
India

ISSN 1434-9922  ISSN 1860-0808 (electronic)
Studies in Fuzziness and Soft Computing
ISBN 978-3-319-26037-2  ISBN 978-3-319-26039-6 (eBook)
DOI 10.1007/978-3-319-26039-6
Library of Congress Control Number: 2015954341
Springer Cham Heidelberg New York Dordrecht London

© Springer International Publishing Switzerland 2016

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.springer.com)


To our families and teachers


Contents

1 Introduction
  1.1 Organization of the Monograph
  1.2 Notation
  1.3 State of Art
  1.4 Figures
  1.5 MATLAB Optimization Toolbox
  References

2 Operational Risk
  2.1 Introduction
  2.2 Operational Risk: A General View
  2.3 Regulatory Framework
  2.4 Operational Risk Data: Internal and External
  2.5 Quantifying Operational Risk
  References

3 The g-and-h Distribution
  3.1 Introduction
  3.2 Definition
  3.3 Properties of g-and-h Distribution
  3.4 Fitting g-and-h Distributions to Data
  3.5 Comments on g-and-h Parameters
  References

4 Probabilistic View of Operational Risk
  4.1 Introduction
  4.2 The g-and-h Distribution for Operational Risk
  4.3 Value at Risk
  4.4 Subadditivity of Value at Risk
  4.5 Subjective Value at Risk
  4.6 Risk Measures
  4.7 Deviation Measures
  4.8 Equivalence of Chance and Value at Risk Constraints
  4.9 Advanced Properties of g-and-h Distribution
    4.9.1 Tail Properties and Regular Variation
    4.9.2 Second Order Regular Variation
  4.10 Applications
    4.10.1 Subjective Value at Risk Optimization
    4.10.2 The Regression Problem
    4.10.3 Stability of Estimation
  4.11 Decomposition According to Contribution of Risk Factors
  References

5 Possibility Theory for Operational Risk
  5.1 Introduction
  5.2 σ-Algebra
  5.3 Measurable Space and Measurable Set
  5.4 Measurable Function
  5.5 Uncertainty Measure
  5.6 Uncertainty Space
  5.7 Uncertainty Distribution
  5.8 Uncertainty Set
  5.9 Possibilistic Risk Analysis
  References

6 Possibilistic View of Operational Risk
  6.1 Introduction
  6.2 Fuzzy g-and-h Distribution
  6.3 Fuzzy Value at Risk
  6.4 Subadditivity of Fuzzy Value at Risk
  6.5 Fuzzy Subjective Value at Risk
  6.6 Fuzzy Risk Measures
  6.7 Fuzzy Deviation Measures
  6.8 Application: Fuzzy Subjective Value at Risk Optimization
  References

7 Simulation Results
  7.1 Introduction
  7.2 Risk Control Estimates
    7.2.1 Value at Risk
    7.2.2 Fuzzy Value at Risk
  7.3 Linear Regression Hedging
    7.3.1 Value at Risk Deviation
    7.3.2 Subjective Value at Risk Deviation
    7.3.3 Mean Absolute Deviation
    7.3.4 Standard Deviation
  7.4 Example: Equivalence of Chance and Value at Risk Constraints
  7.5 Portfolio Rebalancing Strategies: Risk Versus Deviation
  References

8 A Case Study: Iron Ore Mining in India
  8.1 Introduction
  8.2 Dataset and Computational Framework
  8.3 Risk Calculation with Fuzzy Subjective Value at Risk Constraints
  8.4 Sensitivity Analysis
  8.5 Comparative Analysis with Other Techniques
  References

9 Evaluation of the Possibilistic Quantification of Operational Risk
  9.1 Introduction
  9.2 Preparation of Preliminary Life Cycle Assessment Study
  9.3 Calculation of Fuzzy Analytic Hierarchy Process Weights
  9.4 Trapezoidal Fuzzy Numbers for Pairwise Comparison Between the Criteria
  9.5 Pairwise Comparison Matrices and Normalized Fuzzy Weights for Each Criterion
  9.6 Determination of Ranks of Operational Risks with Trapezoidal Fuzzy Number
  9.7 Integration of Weights
    9.7.1 Fuzzy Analytic Hierarchy Process
    9.7.2 Fuzzy Extension of the Technique for Order Preference by Similarity to Ideal Solution
  9.8 Calculation of Solutions for Risk Aversion Alternatives
  References

10 Summary and Future Research
  10.1 Summary
  10.2 Future Research
  References

Index


List of Figures

Figure 2.1  A trivial method to measure the operational risk
Figure 3.1  Representative skewness and mid-summaries plot
Figure 3.2  The plot of ln(p-sigma) versus z²
Figure 3.3  The lognormal distribution with the effect of the location parameter
Figure 3.4  The lognormal distribution with the effect of the scale parameter
Figure 3.5  The h distribution
Figure 3.6  The skewness kurtosis plot
Figure 3.7  Some examples of PDFs and CDFs of the g-and-h distribution (the symbols μ, σ², γ₃ and γ₄ denote the mean, variance, skewness and kurtosis respectively)
Figure 3.8  Some examples of PDFs and CDFs of the g-and-h distribution with negative skew
Figure 3.9  The g-and-h pdf approximations to the empirical pdf using measures of circumference (in centimeters) taken from n = 252 (the g-and-h pdfs are scaled using Aq(z) + B where A = s/σ and B = m − Aμ)
Figure 4.1  The relations between two different typical representations of a population
Figure 4.2  The graphical representation of VaR measures the distance between the value of the portfolio in the current period and its α-quantile
Figure 4.3  The plot of d_{g,h}(α) as a function of α for g = 2.4 and h = 0.2 at n = 10⁷
Figure 4.4  The contour plot of d_{g,h}(α) as a function of g and h values for fixed α = 99 % (left panel) and α = 99.9 % (right panel) given n = 10⁷
Figure 4.5  The plot of d_{g,h}(α) as a function of α with Gauss copula and correlation parameters ρ = 0, 0.5, 0.7 (g = 2.4, h = 0.2, n = 10⁷)
Figure 4.6  A representation of VaR and SVaR measures in the operational risk context
Figure 4.7  The axiomatic definition of the deviation measure
Figure 4.8  The theoretical mean excess function (thick line) together with 12 empirical mean excess plots of the g-and-h distribution
Figure 4.9  The density of the g-and-h distribution plotted on a log-log scale (note the different plotting ranges of the axes)
Figure 4.10 The measure of SVaR robust mean in terms of SVaR actual frontiers
Figure 5.1  The illustration of probability (left curve) and indeterminacy (right curve)
Figure 5.2  A measurable function
Figure 5.3  The extension from rectangles to the product σ-algebra
Figure 5.4  The uncertainty set ξ(γ) on (Γ_k, L_k, M_k)
Figure 5.5  M{B ⊂ ξ} = inf_{x∈B} μ(x) and M{ξ ⊂ B} = 1 − sup_{x∈Bᶜ} μ(x)
Figure 5.6  The rectangular, triangular and trapezoidal membership functions
Figure 5.7  The membership function of young
Figure 5.8  The membership function of tall
Figure 5.9  The membership function of warm
Figure 5.10 The membership function of most
Figure 5.11 The membership function μ of the uncertainty set ξ(γ)
Figure 5.12 The inverse membership function μ⁻¹(α)
Figure 5.13 The membership function of the union of uncertainty sets
Figure 5.14 The membership function of the intersection of uncertainty sets
Figure 5.15 The membership function of the complement of an uncertainty set
Figure 5.16 Series system
Figure 5.17 Parallel system
Figure 5.18 Standby system
Figure 5.19 A structural system with n rods and an object
Figure 5.20 A structural system with 2 rods and an object
Figure 5.21 The value-at-risk
Figure 6.1  The trapezoidal membership function
Figure 6.2  An illustration of the fuzzy distribution function
Figure 6.3  The α-levels of the distribution function represented in Fig. 6.2
Figure 6.4  The vague surface formed by mapping of the fuzzy linguistic model to evaluate risk
Figure 6.5  The vague surface formed by simulating the final comprehensive operational risk corresponding to other factors
Figure 6.6  The plot of d_{g̃,h̃}(α̃) as a function of α̃ for g̃ = 2.37 and h̃ = 0.21 at n = 10⁷
Figure 6.7  A representation of ṼaR and SṼaR in the operational risk context
Figure 6.8  The 3-dimensional representation of fuzzy risk measures for a predefined risk level and the risk factors
Figure 7.1  VaR and ṼaR minimization
Figure 7.2  The portfolio asset mix: opening balance
Figure 7.3  The portfolio asset mix: closing balance
Figure 8.1  Jharkhand state in India
Figure 8.2  The different districts in Jharkhand state
Figure 8.3  The products from iron ore (hematite)
Figure 8.4  The basic flow chart of the iron ore (hematite) industry
Figure 8.5  The trapezoidal membership function defined by trapezoid(x; a, b, c, d)
Figure 8.6  The objective function, SṼaR constraints for various risk levels ω̃
Figure 8.7  The relative discrepancy in assignment, SṼaR constraint active (at ω̃ = 0.005) and inactive (at ω̃ = 0.02)
Figure 8.8  The index and optimal assignment values, SṼaR constraint active (at ω̃ = 0.005)
Figure 8.9  The index and optimal assignment values, SṼaR constraint inactive (at ω̃ = 0.02)
Figure 9.1  A representation of integrated AHP and TOPSIS under a fuzzy environment to support LCA for enabling risk aversion performance comparison
Figure 9.2  A typical 3-level AHP
Figure 9.3  The hierarchical structure of the fuzzy AHP approach
Figure 9.4  The trapezoidal membership function defined by trapezoid(x; a, b, c, d)

List of Tables

Table 2.1  Operational risks
Table 2.2  The business units, business lines and indicators of the standardized approach
Table 3.1  The observed and expected frequencies and chi-square test based on the g-and-h approximation to the chest data in Panel B of Fig. 3.2
Table 3.2  The examples of g-and-h TMs based on the data in Fig. 3.2
Table 4.1  Rate of convergence to GPD for different distributions as a function of threshold u
Table 7.1  The value of risk functions
Table 7.2  The out of sample performance of various deviations on optimal hedging portfolios
Table 7.3  The out of sample performance of various downside risks on optimal hedging portfolios
Table 7.4  Chance versus VaR constraints
Table 7.5  The out of sample Sharpe ratio
Table 7.6  The out of sample portfolio mean return
Table 8.1  The mineral resources in Jharkhand (as on 1st April 2005)
Table 8.2  The parameters for every hematite mine
Table 8.3  The parameters of each hematite product
Table 8.4  The parameters for product j produced by mine i
Table 8.5  The assignment results for different products
Table 8.6  The objectives for both the upper and lower levels
Table 8.7  The results of various risk levels ω̃ in the SṼaR constraints
Table 8.8  The errors analysis by solving the crisp equivalent model
Table 8.9  The computing time and memory by different techniques
Table 9.1  The relative importance between each criterion in terms of possibilistic judgements
Table 9.2  The relative importance between each criterion in terms of fuzzy numbers
Table 9.3  The linguistic terms corresponding to the trapezoidal fuzzy membership function
Table 9.4  The fuzzy risk aversion solution evaluation matrix
Table 9.5  The weighted risk aversion solution evaluation matrix
Table 9.6  The summarized results of the FTOPSIS approach

Chapter 1

Introduction

1.1 Organization of the Monograph

Operational risk is a widely studied research topic in industry and academia [1, 7, 8, 15]. Huge amounts of capital are allocated to mitigate this risk. The availability of large datasets [29] has created an opportunity to analyse this risk mathematically, and its measurement is of growing concern for capital allocation, hedging and new product development. There is always an inherent degree of vagueness and imprecision [27] in real-life data. For this reason the risk function is treated here through possibility theory [26], using indeterminate uncertainty encompassing belief degrees [13]. The parametric g-and-h distribution [6, 8, 11, 18], associated with extreme value theory [4], has emerged as an interesting candidate here. A comprehensive assessment of methods is performed through fuzzy versions of value at risk, VaR [11], and subjective value at risk, SVaR [3]. The stability of the VaR and SVaR estimates is also examined. The simulation studies reveal that the possibilistic quantification of risk performs consistently better than the probabilistic model [3]. A case study in the Indian scenario is also conducted to show the benefits of the model. Finally, an evaluation of the risk is performed to assess various aspects of averting the underlying associated risks by integrating the fuzzy analytic hierarchy process (FAHP) [24] with the fuzzy extension of the technique for order preference by similarity to ideal solution (FTOPSIS) [19]. This book is organized in the following divisions: (1) the introductory chapters consisting of Chaps. 1, 2 and 3; (2) the probabilistic view of operational risk in Chap. 4; (3) possibility theory and the possibilistic view of operational risk in Chaps. 5 and 6; (4) simulation results in Chap. 7; (5) a case study based on iron ore mining in India in Chap. 8; and (6) the evaluation of the possibilistic quantification of operational risk, with the summary and future research, in Chaps. 9 and 10. First we become familiar with operational risk. All we need to know about operational risk for this book is contained in Chap. 2. A beginners' introduction to operational risk is given in [14]. This is followed by an introduction to the g-and-h


distribution in Chap. 3 [25]. The probabilistic view of operational risk in Chap. 4 is based on the probabilistic concepts given in the book [22], to which the reader can refer for the introductory probabilistic concepts. This chapter is very important for conceptually appreciating the theories stated in Chaps. 5 and 6 as well as the simulation results explained in Chap. 7. The possibility theory of operational risk is highlighted in Chap. 5. The interested reader can refer to the possibilistic concepts given in [26]. An elementary knowledge of fuzzy set theory [27] will also be helpful in understanding the different concepts discussed in Chap. 5. Several interesting new concepts like subjective value at risk [3] and deviation measures [3] are discussed in this chapter. The interested reader can refer to the book [28] on fuzzy sets for a better understanding. The possibilistic view of operational risk in Chap. 6 is based on the possibility theory given in the earlier chapter. This chapter forms the central theme of the entire book. A better understanding of this chapter will help the reader to apply the stated concepts in several real life applications. This chapter can also be considered as the possibilistic extension of the operational risk concepts highlighted in Chap. 5. The simulation results on the probabilistic and possibilistic concepts are discussed in Chap. 7 with certain illustrations from fuzzy optimization [17]. All the estimates and quantitative measures are borrowed from [4]. The experimental results are based on the concepts given in the earlier chapters. All the experiments are conducted on several real life datasets using the MATLAB optimization toolbox [30]. A case study on iron ore mining in India is illustrated in Chap. 8. The entire computational framework is based on an iron ore mine in Jharkhand state in India [3], which produces the most iron ore in the country. The optimization problem formulated is multiobjective [3] in nature with several constraints; the interested reader can refer to the book [21] on multiobjective optimization. Thereafter, several risk measures are calculated with a corresponding sensitivity analysis. The computational framework is implemented using the MATLAB optimization toolbox [30]. Finally, a comparative analysis with other techniques is done. The evaluation of the possibilistic quantification of operational risk through various assessment methods is explained in Chap. 9. The evaluation methods used here are FAHP [24] and FTOPSIS [19]. All the assessment methods are adopted from [3], with experiments done using the MATLAB optimization toolbox [30]. The summary and future research directions are given in Chap. 10. Chapters 2, 3 and 5 can be read independently. Chapters 4, 6, 7, 8 and 9 cannot be read independently; they are based on the concepts illustrated in Chaps. 2, 3 and 5 and require a good understanding of these chapters in order to understand the rest of the book. For example, Chap. 6 on the possibilistic view of operational risk can be well understood by the reader once he or she starts appreciating the concepts of possibility theory in Chap. 5. The major prerequisite for a better understanding of this book, besides Chaps. 2, 3 and 5, is a basic knowledge of elementary statistics [16, 20]. We cover the elementary statistics as and when required, with suitable pointers to different books [11, 23].
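To fix ideas before the formal treatment in Chaps. 3 and 4, the following minimal sketch simulates losses from a Tukey g-and-h distribution and reads off VaR as an empirical quantile. It is not the monograph's own code: the location and scale (a, b) and the sample size are assumptions, while g = 2.4 and h = 0.2 follow the illustrative values used in the book's figures.

```matlab
% Minimal sketch: simulate g-and-h losses and estimate VaR empirically.
rng(1);                                   % reproducible random numbers
n = 1e6;                                  % the book's experiments use n = 10^7
a = 0;  b = 1;                            % location and scale (assumed values)
g = 2.4;  h = 0.2;                        % skewness and heavy-tail parameters
Z = randn(n, 1);                          % standard normal variates
X = a + b * ((exp(g*Z) - 1) / g) .* exp(h * Z.^2 / 2);   % g-and-h transform
alpha = 0.999;                            % confidence level typical for operational risk
Xs = sort(X);                             % empirical distribution of losses
VaR = Xs(ceil(alpha * n));                % alpha-quantile = Value at Risk
fprintf('Empirical VaR at %.1f%% level: %.2f\n', 100*alpha, VaR);
```

For h > 0 the factor exp(hZ²/2) thickens the right tail, which is what motivates the connection between the g-and-h family and the extreme value theory results used later in the book [4].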

[email protected]

1.2 Notation

It is very challenging to write a book with a lot of mathematical symbols and achieve uniformity in the notation. All the chapters of the book contain a fair amount of mathematical analysis. We have tried to maintain a uniform notation within each chapter. This means that we may use the letters x and y to represent a closed interval [x, y] in one chapter while they stand for parameters in a probability density function in another chapter. We have used the following uniform notation throughout the book: (1) we place a tilde over a letter to denote a fuzzy set (Ã, B̃, etc.) and (2) all the fuzzy sets are fuzzy subsets of the real numbers. We use some standard notation from statistics, viz. (1) α in confidence intervals and (2) β as the significance level in hypothesis tests. So a (1 − α)100 % confidence interval means a 95 % confidence interval if α = 0.05. The hypothesis test H₀: μ = 0 versus H₁: μ ≠ 0 at β = 0.05 means that, given H₀ is true, the probability of landing in the critical region is 0.05. The term crisp means not fuzzy. A crisp set is a regular set and a crisp number is a real number. There is a potential problem with the symbol ⊆. It usually means fuzzy subset, as in à ⊆ B̃, which stands for à is a fuzzy subset of B̃. The meaning of the symbol ⊆ should be clear from its use. Throughout the book x̄ will be the mean of a random sample and not a fuzzy set; this is explicitly pointed out when it first arises in the book. Let N(μ, σ²) denote the normal distribution with mean μ and variance σ². The critical value of the normal distribution used in hypothesis testing (confidence intervals) is written as z_γ, where Pr(X ≥ z_γ) = γ. Similarly the critical values of other distributions, such as the χ² distribution, are specified as and when required.
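As a small illustration of this notation, the sketch below computes the critical value z_γ and a (1 − α)100 % confidence interval for a normal mean with known variance. The sample, its size and the population parameters are made-up values; norminv assumes the Statistics and Machine Learning Toolbox is available.

```matlab
% Critical value z_gamma with Pr(X >= z_gamma) = gamma, and a 95% CI.
alpha = 0.05;                             % so (1 - alpha)100% = 95% confidence
mu = 0;  sigma = 1;  n = 50;              % assumed population and sample size
x = mu + sigma * randn(n, 1);             % a random sample
xbar = mean(x);                           % xbar is an ordinary number, not a fuzzy set
z = norminv(1 - alpha/2);                 % z_{alpha/2}, approximately 1.96
ci = [xbar - z*sigma/sqrt(n), xbar + z*sigma/sqrt(n)];
fprintf('95%% confidence interval for mu: [%.3f, %.3f]\n', ci(1), ci(2));
```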

1.3 State of Art

The major part of this research falls at the intersection of operational risk [14, 15], the g-and-h distribution [11, 18], VaR [4], SVaR [3], possibility theory [5, 26] and belief degrees [13]. The references in this chapter give the complete list of papers in these areas. The possibility theory researchers have their own web site [31], which has links to several basic papers in the area and to conferences on possibility theory. There are several papers available in the literature devoted to operational risk and possibility theory; the interested reader can always search these topics on his or her favourite search engine. Different from the papers on possibility theory which employ second order probabilities, upper/lower probabilities, etc., we use fuzzy numbers [27] to model uncertainty in some of the probabilities. We could use crisp intervals to express the uncertainties, but we are not going to use standard interval arithmetic to combine them. We do substitute fuzzy numbers for uncertain probabilities, but we are not using fuzzy probability theory to propagate the uncertainty through the model. Our method is to use fuzzy numbers for


expressing possibilities and use restricted fuzzy arithmetic to calculate other fuzzy probabilities and other required values. Statistical theory is based on probability theory. So fuzzy statistics can take many forms depending on what probability (imprecise, interval, fuzzy) theory we are using. A key reference in this exciting area is [2] where the reader can find many more references.
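The following sketch illustrates the basic idea of fuzzy-number arithmetic on α-cuts with two trapezoidal fuzzy numbers. The restricted fuzzy arithmetic used in this monograph imposes additional constraints (for example, on probabilities that must sum to one), so this is only the unconstrained starting point; the trapezoids are illustrative values, not data from the book.

```matlab
% Alpha-cut arithmetic: add two trapezoidal fuzzy numbers interval by interval.
A = [1 2 3 4];                            % A~ = trapezoid(1, 2, 3, 4)
B = [2 3 4 6];                            % B~ = trapezoid(2, 3, 4, 6)
for alpha = 0:0.25:1
    % alpha-cut of trapezoid (a,b,c,d) is [a + alpha*(b-a), d - alpha*(d-c)]
    cutA = [A(1) + alpha*(A(2)-A(1)), A(4) - alpha*(A(4)-A(3))];
    cutB = [B(1) + alpha*(B(2)-B(1)), B(4) - alpha*(B(4)-B(3))];
    cutSum = cutA + cutB;                 % interval addition of the two cuts
    fprintf('alpha = %.2f : (A+B)_alpha = [%.2f, %.2f]\n', alpha, cutSum(1), cutSum(2));
end
```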

1.4 Figures

All the figures and graphs in the book are created using different methods. The figures are adapted from standard texts on operational risk [12, 22] and possibility theory [5]. The graphs are plotted from standard datasets available from various sources, and valid pointers to the different datasets are given throughout the book. Some of the graphs are first plotted in Excel [32] and then exported to MS Word; the other graphs are plotted in MATLAB [30] and then exported to MS Word. The interested reader can refer to the suggested references for the graphs as per the needs and requirements of the applications. All efforts are made to preserve the quality of the graphs and figures for better readability.
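A minimal example of this plotting-and-export workflow is sketched below: a graph is drawn in MATLAB and written to an image file that can then be inserted into MS Word. The data, file name and figure settings are placeholders, not those used for the book's figures.

```matlab
% Draw a simple curve and export the current figure as an image file.
z = linspace(-3, 3, 200);
plot(z, exp(-z.^2/2) / sqrt(2*pi), 'LineWidth', 1.5);   % standard normal density
xlabel('z');  ylabel('density');  title('Example figure');
saveas(gcf, 'example_figure.png');        % file can then be placed into MS Word
```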

1.5 MATLAB Optimization Toolbox

We have used the MATLAB optimization toolbox [30] to produce the simulation results highlighted in Chap. 7 as well as for the case study entitled Iron Ore Mining in India given in Chap. 8. Some of the optimization toolbox commands [30] were also used to solve different problems arising in Chap. 9 in the evaluation of the possibilistic quantification of operational risk.
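The typical call pattern is sketched below on a deliberately small linear program solved with linprog from the Optimization Toolbox; the risk optimization problems in Chaps. 7–9 are larger but are set up and solved in the same way. All coefficients are illustrative only.

```matlab
% A tiny linear program solved with the Optimization Toolbox's linprog.
f   = [-0.04; -0.06; -0.05];     % maximize expected return -> minimize -return
A   = [0.10 0.25 0.15];          % one illustrative linear risk constraint: A*x <= b
b   = 0.18;
Aeq = [1 1 1];  beq = 1;         % portfolio weights sum to one
lb  = zeros(3,1); ub = ones(3,1);
[x, fval] = linprog(f, A, b, Aeq, beq, lb, ub);
fprintf('Optimal weights: %.3f %.3f %.3f (expected return %.4f)\n', x, -fval);
```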

References

1. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Fin. 9(3), 203–228 (1999)
2. Buckley, J.J.: Fuzzy Probability and Statistics, Studies in Fuzziness and Soft Computing. Springer, Berlin (2006)
3. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report, Birla Institute of Technology Mesra, Patna Campus, India (2010)
4. Degen, M., Embrechts, P., Lambrigger, D.D.: The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Technical Report, Department of Mathematics, ETH Zurich, Zurich, Switzerland (2007)
5. Dubois, D., Prade, H.: Possibility Theory. Plenum, New York (1988)
6. Dutta, K., Perry, J.: A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 6–13 (2006)


7. Embrechts, P., Hofert, M.: Practices and issues in operational risk modeling under Basel II. Lith. Math. J. 50(2), 180–193 (2011)
8. Franklin, J.: Operational risk under Basel II: a model for extreme risk evaluation. Bank. Fin. Serv. Policy Rep. 27(10), 10–16 (2008)
9. Headrick, T.C., Kowalchuk, R.K., Sheng, Y.: Parametric probability densities and distribution functions for Tukey g-and-h transformations and their use for fitting data. Appl. Math. Sci. 2(9), 449–462 (2008)
10. Hines, W.W., Montgomery, D.C., Goldman, D.M., Borror, C.M.: Probability and Statistics in Engineering, 4th edn. Wiley, India (2008)
11. Hoaglin, D.C.: Summarizing shape numerically: the g-and-h distributions. In: Hoaglin, D.C., Mosteller, F., Tukey, J.W. (eds.) Exploring Data Tables, Trends and Shapes. Wiley, New York (1985)
12. Holton, G.A.: Value at Risk: Theory and Practice, 2nd edn. (2014). http://value-at-risk.net
13. Huber, F., Schmidt, C.P.: Degrees of Belief, Synthese Library, vol. 342. Springer, Berlin (2009)
14. Hussain, A.: Managing Operational Risk in Financial Markets, 1st edn. Butterworth Heinemann (2000)
15. King, J.L.: Operational Risk: Measurement and Modeling, 1st edn. The Wiley Finance Series, Wiley (2001)
16. Laha, R.G., Rohatgi, V.K.: Probability Theory, Volume 43 of Wiley Series in Probability and Mathematical Statistics. Wiley (1979)
17. Lodwick, W.A.: Fuzzy Optimization: Recent Advances and Applications, Studies in Fuzziness and Soft Computing. Springer, Berlin (2010)
18. MacGillivray, H.L., Cannon, W.H.: Generalizations of the g-and-h distributions and their uses. Unpublished thesis (2000)
19. Ng, C.Y., Chuah, K.B.: Evaluation of eco design alternatives by integrating AHP and TOPSIS methodology under a fuzzy environment. Int. J. Manage. Sci. Eng. Manage. 7(1), 43–52 (2012)
20. Pal, N., Sarkar, S.: Statistics: Concepts and Applications, 2nd edn. Prentice Hall of India (2007)
21. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. Wiley, New York (2009)
22. Rozanov, Y.A.: Probability Theory: A Concise Course. Dover Publications (1977)
23. Ruppert, D.: Statistics and Data Analysis for Financial Engineering, Springer Texts in Statistics (2010)
24. Torfi, F., Farahani, Z., Rezapour, S.: Fuzzy AHP to determine the relative weights of evaluation criteria and fuzzy TOPSIS to rank the alternatives. Appl. Soft Comput. 10(2), 520–528 (2010)
25. Tukey, J.W.: Modern Techniques in Data Analysis. NSF Sponsored Regional Research Conference, Southern Massachusetts University, North Dartmouth, Massachusetts (1977)
26. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978)
27. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
28. Zimmermann, H.J.: Fuzzy Set Theory and its Applications, 4th edn. Kluwer Academic Publishers, Massachusetts (2001)
29. http://www.kdnuggets.com/datasets/index.html
30. http://in.mathworks.com/products/optimization/
31. http://www.irit.fr/~Didier.Dubois/
32. https://office.live.com/start/Excel.aspx


Chapter 2

Operational Risk

Abstract In this Chapter an overview of operational risk is provided. Operational risk is one of the most discussed topics among finance and banking professionals. It generally results from losses arising from inadequate or failed processes, people or systems, or from external events. It has also attracted the attention of the academic research community. This Chapter contains the fundamental ideas of operational risk as stipulated through the Basel regulatory framework and forms the basic building block for the entire research monograph. The operational risk data used in its quantification can be either internal or external in nature.

Keywords Operational risk · Finance · Banking · Basel · Quantification

2.1 Introduction

The topic of operational risk has gained increasing attention in both academic research and in practice. In this Chapter we have collected together the basic ideas of operational risk needed for a better understanding of the book. A general view of operational risk is given in Sect. 2.2. In Sect. 2.3 the regulatory framework on operational risk, as laid down by Basel I, II and III, is discussed. This is followed by internal and external operational risk data in Sect. 2.4. In Sect. 2.5 we present a method to quantify operational risk. Any reader familiar with operational risk may proceed directly to Sect. 2.5. A good general reference for operational risk and its quantification is [12].


2.2 Operational Risk: A General View

Operational risk [1, 10] is the risk of loss resulting from inadequate or failed processes, people or systems, or from external events. It is an important risk component for financial institutions and banks [9], as evidenced by the large sums of capital that are allocated to mitigate this risk. This risk is delicately placed between credit and market risk. It is usually estimated at between 15 and 25 % of total risk and deserves serious attention. According to the Basel Committee, operational risk can be defined as:

Operational risk is the risk of direct or indirect loss resulting from inadequate or failed internal processes, people or systems or from external events.

The operational risk could be disaster risk, fraud risk, technological risk or litigation risk. Since the beginning of the 1970s, Black and Scholes [2] have shown how to dynamically hedge market risk using derivatives. Later, the introduction of credit derivatives at the end of the 1980s opened the way to hedging credit risk. The first recommendations of the Basel Committee were not concerned with operational risk, implicitly considering that hedging the two other risks automatically covers the third one [3]. The need for an effective risk management and measurement system for operational risk only appeared during the revision process of Basel I, with the first explicit call for a minimum capital charge devoted to this risk at the beginning of 2000. The allocation of capital that a bank keeps as reserves for potential operational losses remains the only way to cover operational risk. This means that there is no really dynamic or active hedging strategy for operational risk, as there is for the two other risks. The Basel Committee [4] proposed to encompass explicitly risks other than credit and market in the New Basel Capital Accord. The Committee made the New Basel Capital Accord more risk sensitive with the realization that risks other than credit and market are substantial. Further, developments in banking practices such as securitization, outsourcing, specialized processing operations and reliance on rapidly evolving technology and complex financial products and strategies suggested that these other risks were increasingly important factors to be reflected in credible capital assessments by both supervisors and banks. Under the 1988 Accord the Basel Committee [4] recognized that the capital buffer related to credit risk implicitly covered other risks. The broad brush approach in the 1988 Accord delivered an overall cushion of capital for both the measured risks, viz. credit and market, and other unmeasured banking risks. The new requirements for measured risks were a closer approximation to the actual level of those risks, less a buffer that exists for other risks. It was also noted that banks themselves typically hold capital well in excess of the current regulatory minimum and some were already allocating economic capital for other risks. The Basel Committee [4] believed that a capital charge for other risks should include a range of approaches to accommodate the variations in industry risk measurement and management practices. Through extensive industry discussions the Committee learned that measurement techniques for operational risk and a subset of other risks


remain in an early development stage at most institutions. As additional aspects of other risks remained very difficult to measure, the Committee focused the capital charge on operational risk and offered a range of approaches for assessing capital against this risk. The Basel Committee’s [4] goal was to develop methodologies that increasingly reflected an individual bank’s particular risk profile. The Basic Indicator Approach linked the capital charge for operational risk to a single risk indicator such as gross income for the whole bank. The Standardized Approach was more complex variant of the Basic Indicator Approach that used a combination of financial indicators and institutional business lines to determine the capital charge. Both approaches were predetermined by regulators. The Internal Measurement Approach strived to incorporate within a supervisory specified framework, an individual bank’s internal loss data into the calculation of its required capital. Like the Standardized Approach the Internal Measurement Approach demanded a decomposition of the bank’s activities into specified business lines. However, the Internal Measurement Approach allowed the capital charge to be driven by banks’ own operational loss experiences within a supervisory assessment framework. In future a Loss Distribution Approach in which the bank specified its own loss distributions, business lines and risk types were available. An institution’s ability to meet specific criteria determined the framework used for its regulatory operational risk capital calculation. The Basel Committee’s [4] intention was to calibrate the spectrum of approaches so that the capital charge for a typical bank were less at each progressive step on the spectrum. This was consistent with the Committee’s belief that increasing the levels of sophistication of risk management and precision of measurement methodology should be rewarded with a reduction in the regulatory operational risk capital requirement. The Basel Committee [4] wanted to enhance operational risk assessment efforts by encouraging the industry to develop methodologies and collect data related to managing operational risk. Consequently the focus was primarily upon the operational risk component of other risks and it encouraged the industry to further develop techniques for measuring, monitoring and mitigating operational risk. In framing the proposals the Committee adopted a common industry definition of operational risk as defined earlier. The strategic and reputational risk was not included in this definition for the purpose of a minimum regulatory operational risk capital charge. This definition focused on the causes of operational risk and the Committee believed that this was appropriate for both risk management and measurement. However, in reviewing the progress of the industry in the measurement of operational risk the Committee was aware that causal measurement and modelling of operational risk remained at the earliest stages. For this reason the Committee rolled out further details on the effects of operational losses in terms of loss types to allow data collection and measurement to commence. As stated in the definition of operational risk the Basel Committee [4] intends for the capital framework to shield institutions from both direct and certain indirect losses. At this stage the Committee was unable to prescribe finally the scope of the charge in this respect. However, it was intended that the costs to fix an operational


risk problem, payments to third parties and write downs generally would be included in calculating the loss incurred from the operational risk event. Furthermore there were other types of losses or events which should be reflected in the charge such as near misses, latent losses or contingent losses. The costs of improvement in controls, preventative action and quality assurance and investment in new systems were not included. In practice such distinctions were difficult as there existed a high degree of ambiguity inherent in the process of categorizing losses and costs which may result in omission or double counting problems. The Basel Committee [4] was cognizant of the difficulties in determining the scope of the charge and looked for comments on how to better specify the loss types for inclusion in a more refined definition of operational risk. Further it was likely that detailed guidance on loss categorization and allocation of losses by risk type need to be produced. This allowed the development of more advanced approaches to operational risk and the Committee also looked for detailed comments in this respect. In line with other banking risks [4] conceptually a capital charge for operational risk covered unexpected losses due to the risk involved. Provisions also covered the expected losses. However, accounting rules in many countries do not allow a robust, comprehensive and clear approach to setting provisions. Rather these rules appeared to allow for provisions only for future obligations related to events that have already occurred. In particular accounting standards generally required measurable estimation tests to be met and losses to be probable before provisions or contingencies were actually booked. In general provisions set up under such accounting standards bear only a very small relation to the concept of expected operational losses. Regulators were interested in a more forward looking provisions’ concept. There were cases where contingent reserves may be provided that relate to operational risk matters. An example being the costs related to lawsuits arising from a control breakdown. Also there were certain types of high frequency or low severity losses such as those related to credit card fraud that appear to be deducted from the income as they occur. However, provisions were generally not set up in advance for these. The current practice for pricing for operational risk varies widely. Regardless of actual practice it was conceptually unclear that pricing alone was sufficient to deal with operational losses in the absence of effective reserving policies. The situation may be somewhat different for banking activities that have a highly likely incidence of expected, regular operational risk losses that were deducted from reported income in the year such as fraud losses in credit card books. In these limited cases it might be appropriate to calibrate the capital charge to unexpected losses or unexpected losses plus some cushion of imprecision. This approach assumes that the bank’s income stream for the year would be sufficient to cover expected losses and that the bank can be relied upon to regularly deduct losses. Against this background the Basel Committee [4] proposed to calibrate the capital charge for operational risk based on expected and unexpected losses, but to allow some recognition for provisioning and loss deduction. A portion of end of


period balances for specific list of identified types of provisions or contingencies could be deducted from the minimum capital requirement provided the bank disclosed them as such. Since capital was a forward looking concept the Committee believed that only part of a provision or contingency should be recognized as reducing the capital requirement. The capital charge for a limited list of banking activities where the annual deduction of actual operational losses was prevalent could be based on unexpected losses only plus a cushion for imprecision. The feasibility and desirability of recognizing provisions and loss deduction depend on there being a reasonable degree of clarity and comparability of approaches to defining acceptable provisions and contingencies among countries. The industry was invited to comment on how such a regime might be implemented. In June 2004 Basel II [4] was published which was intended to create an international standard for banking regulators to control how much capital banks need to put aside to guard against the financial and operational risks banks face. It has forced banks to give more direct attention to risks that outsiders might first think of. It is agreed that while credit, market and insurance risks are relatively tractable as methodology and availability of necessary data that is not the case for operational risk. Table 2.1 shows a number of kinds of operational risk along with some examples of where those risks have been realized and some applicable methodologies. The table also includes a few risks that are not classified as operational risk under Basel II. The Basel II Capital Accord [4] stipulates the bank’s capital adequacy requirements. This accord requires operational risk to be measured and controlled separately from market risk and credit risk. The advanced measurement approach (AMA) [3] uses the most sophisticated risk management methodologies and allows banks to use their own internal model for calculating operational risk as there is no standard measurement method has been established. Figure 2.1 represents a trivial method which is often used by the financial institutions to measure operational risk. All three pillars of the New Basel Capital Accord viz. minimum capital requirements, the supervisory review process and market discipline play an important role in the operational risk capital framework. The Basel Committee regulated a Pillar 1 minimum capital requirement and a series of qualitative and quantitative requirements for risk measurement which was used to determine eligibility to use a particular capital assessment technique. The Committee believed that a rigorous control environment was essential to prudent management and limiting of exposure to operational risk. Accordingly the Committee proposed that supervisors should also apply qualitative judgment based on their assessment of adequacy of the control environment in each institution. This approach operated under Pillar 2 of the New Basel Capital Accord which recognized the supervisory review process as an integral and critical component of the capital framework. The Pillar 2 regulated a framework in which banks were required to assess the economic capital they needed to support their risks and then this assessment process was reviewed by supervisors. Where the capital assessment process was inadequate and the allocation was insufficient supervisors expected a bank to take prompt action to correct the situation. Supervisors reviewed the inputs and assumptions of internal


Table 2.1 Operational risks

Type of risk | Example | Methodology
Acute physical hazards | Tsunami, hail | Reinsurers' data + extreme value theory
Long term physical hazards | Climate change | Climate modeling + work on effects on banking system
Biorisks | | Biomedical research + quarantine expertise
Terrorism | Bombing, internet attack | Intelligence analysis
Financial markets risk | 1997 Asian crisis, depression | Macroeconomic modeling, stock market analysis + extreme value theory
Real estate market risk | Home loan book loss value | Real estate market modelling
Collapse of individual major partner | Enron | Data mining on company data
Regulatory risk | Basel III, nationalization, government forces banks to pay universities for graduates | Political analysis
Legal risk | Compensation payouts for misinformed customers | Compensation law and likely changes
Managerial and strategic risk | Payout of unwanted CEO, dangerous management decision |
Internal fraud and human error | Barings rogue trader | Model pooled anonymised data, fraud detection
Robbery | Electronic access by thieves | Model pooled data, IT security expertise
Reputational risk | Run on bank, spam deceives customers | Goodwill pricing theory + marketing expertise
New technology risk | Technology allows small players to take bank market share | Futurology
Reserve risk | Reserved funds change value |
Interactions of all the above | Depression devalues real estate and reserves | Causal modeling of system interactions

methodologies for operational risk in the context of the firm wide capital allocation framework. The Committee intended to publish guidance and criteria to facilitate such an assessment process [4, 7]. Pillar 3 focused on market discipline which had the potential to reinforce capital regulation and other supervisory efforts to promote safety and soundness in banks and financial systems. The market discipline imposed strong incentives on banks to conduct their business in a safe, sound and efficient manner. It also provided a bank with an incentive to maintain a strong capital base as a cushion against potential


Fig. 2.1 A trivial method to measure the operational risk

future losses arising from its risk exposures. To promote market discipline the Basel Committee [4] believed that banks should publicly and in a timely fashion disclose detailed information about the process used to manage and control their operational risks and the regulatory capital allocation technique they use. More work was required to assess fully the appropriate disclosures in this area. It was possible for banks to disclose operational losses in the context of a fuller review of operational risk measurement and in the longer term such disclosures formed a part of the qualifying criteria towards internal approaches. The framework outlined above presents three methods for calculating operational risk capital charges in a continuum of increasing sophistication and risk sensitivity. The Basel Committee intends to develop detailed criteria as guidance to banks and supervisors on whether banks qualify to use a particular approach [4]. The Committee believed that when a bank had satisfied the criteria it should be allowed to use that approach regardless of whether it has been using a simpler approach previously. Also in order to encourage innovation the Committee anticipated that a bank could have some business lines in Standardized Approach and others in Internal Measurement Approach. This will help reinforce the evolutionary nature of new framework by allowing banks to move along the continuum on a piecemeal basis. Banks could not choose to move back to simpler approaches once they have been accepted for more advanced approaches and should on a consolidated basis capture the relevant risks for each business line. In view of substantive industry efforts to develop and implement systems for assessing, measuring and controlling operational risk the Basel Committee [4] strongly encouraged continuing dialogue and development of work among its Risk Management Group and individual firms, industry groups and others on all aspects of incorporating operational risk into the capital framework. The continued contact with industry was required to clarify further a number of issues, including those related to definitions of loss events and data collection standards. In this regard the Committee noted that by the time the New Basel Capital Accord was implemented

banks have had a meaningful opportunity to enhance internal control procedures and develop systems to support an internal measurement approach for operational risk. With respect to data ongoing industry liaison had shown a number of important needs that should be addressed over the coming periods. The Basel Committee [4] urged the industry to work on the development of codified and centralized operational risk databases using consistent definitions of loss types, risk categories and business lines. A number of separate processes were currently in train and the Committee believed that both the supervisory and banking community would be well served by industry supported databases for pooling certain industry internal loss data. This was important not only for operational risk management purposes but also for the development of the Internal Measurement Approach. A further related data issue ensured that clean operational risk data was collected and reported. In the absence of this calibration would be difficult and capital would fail to be risk sensitive. The Basel Committee [4] recognized the degree of cooperation that already existed on issue and welcomed the work that others have performed in conjunction with the Risk Management Group. The Committee believed that further collaboration would be essential in developing a risk sensitive framework for operational risk and for calibrating the proposed approaches. The Committee looks forward to further work with the industry to finalize a rigorous and comprehensive framework for operational risk. The Basic Indicator Approach is the most basic approach that allocated operational risk capital using a single indicator as a proxy for an institution’s overall operational risk exposure. The gross income is proposed as the indicator with each bank holding capital for operational risk equal to the amount of a fixed percentage multiplied by its individual amount of gross income. It is easy to implement and universally applicable across banks to arrive at a charge for operational risk. Its simplicity however comes at the price of only limited responsiveness to firm specific needs and characteristics. While the approach might be suitable for smaller banks with a simple range of business activities the Basel Committee expects internationally active banks and banks with significant operational risk to use a more sophisticated approach within the overall framework. For more details on this approach interested readers can refer [4, 7]. Another commonly used approach is the Standardized Approach which represents a further refinement along the evolutionary spectrum of approaches for operational risk capital. This approach differs from the Basic Indicator Approach such that a bank’s activities are divided into a number of standardized business units and business lines. Thus the Standardized Approach is better able to reflect the differing risk profiles across banks as reflected by their broad business activities. However, like the Basic Indicator Approach the capital charge would continue to be standardized by the supervisor. The proposed business units and business lines of the Standardized Approach mirror those developed by an industry initiative to collect internal loss data in a consistent manner. Working with the industry, regulators specify in greater detail which business lines and activities correspond to the

categories of this framework enabling each bank to map its structure into the regulatory framework. For more details on this approach interested readers can refer to [4, 7]. Within each business line regulators have specified a broad indicator that is intended to reflect the size or volume of a bank's activity in this area. The indicator is intended to serve as a rough proxy for the amount of operational risk within each of these business lines. Table 2.2 presents the business units, business lines and size or volume indicators of the Standardized Approach. Within each business line, the capital charge is calculated by multiplying a bank's broad financial indicator by a beta factor. The beta factor serves as a rough proxy for the relationship between the industry's operational risk loss experience for a given business line and the broad financial indicator representing the banks' activity in that business line, calibrated to a desired supervisory soundness standard. For example, for the Retail Brokerage business line the regulatory capital charge would be calculated as:

K_{\text{Retail Brokerage}} = \beta_{\text{Retail Brokerage}} \times \text{Gross Income}   (2.1)

In Eq. (2.1) K_{\text{Retail Brokerage}} is the capital requirement for the retail brokerage business line, \beta_{\text{Retail Brokerage}} is the capital factor to be applied to the retail brokerage business line and Gross Income is the indicator for this business line. The total capital charge is calculated as the simple summation of the capital charges across each of the business lines. For more details on this approach interested readers can refer to [3, 4, 7]. The primary motivation for the Standardized Approach is that most banks are in the early stages of developing firm wide data on internal losses by business lines and risk types. In addition the industry has not yet been able to show a causal relationship between risk indicators and loss experience. As a result banks that have not developed internal loss data by the time of the implementation period of the revised New Basel Capital Accord and do not meet the criteria for the Internal Measurement Approach will require a simpler approach to calculate their regulatory capital charge.

Table 2.2 The business units, business lines and indicators of the Standardized Approach

Business units     Business lines             Indicator
Investment banks   Corporate finance          Gross income
                   Trading and sales          Gross income
Banks              Retail banking             Annual average assets
                   Commercial banking         Annual average assets
                   Payment and settlement     Annual settlement throughput
Others             Retail brokerage           Gross income
                   Asset management           Total funds under management
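As a concrete illustration of how Eq. (2.1) and the indicators in Table 2.2 combine into a total charge, the short Python sketch below sums beta-weighted indicators across business lines. It is only a sketch: the beta factors, the business line names used as dictionary keys and the indicator amounts are hypothetical placeholders, not supervisory calibrations.

```python
# Illustrative sketch of the Standardized Approach capital charge (Eq. 2.1).
# All beta factors and indicator values are hypothetical placeholders, not
# supervisory calibrations.

betas = {                      # beta factor per business line (hypothetical)
    "corporate_finance": 0.18,
    "trading_and_sales": 0.18,
    "retail_brokerage":  0.12,
}

indicators = {                 # broad financial indicator per business line, e.g. gross income (hypothetical)
    "corporate_finance": 250.0,
    "trading_and_sales": 400.0,
    "retail_brokerage":  120.0,
}

def standardized_capital_charge(betas, indicators):
    """Total charge = sum over business lines of beta(i) * indicator(i)."""
    return sum(betas[line] * indicators[line] for line in betas)

for line in betas:
    print(f"K_{line} = {betas[line] * indicators[line]:.1f}")
print(f"Total capital charge = {standardized_capital_charge(betas, indicators):.1f}")
```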

In addition certain institutions may not choose to make the investment to collect internal loss data for all of their business lines, particularly those that present less material operational risk to the institution. Another important feature of the Standardized Approach is that it provides a basis for moving on a business line by business line basis towards the more sophisticated approaches and as such will help encourage the development of better risk management within banks.
Another approach worth mentioning is the Internal Measurement Approach which provides discretion to individual banks on the use of internal loss data while the method to calculate the required capital is uniformly set by supervisors. In implementing this approach supervisors would impose quantitative and qualitative standards to ensure the integrity of the measurement approach, data quality and the adequacy of the internal control environment. The Basel Committee believes that the Internal Measurement Approach will give banks incentives to collect internal loss data step by step. This approach is positioned as a critical step along the evolutionary path that leads banks to the most sophisticated approaches. However, the Committee also recognizes that the industry is still in a stage of developing the data necessary to implement this approach. Currently there is not sufficient data at the industry level or in a sufficient range of individual institutions to calibrate the capital charge under this approach. The Committee is laying out in some detail the elements of this part of the approach and the key issues that need to be resolved. In particular, in order for this approach to be acceptable the Committee will have to be satisfied that a critical mass of institutions have been able individually and at an industry level to assemble adequate data over a number of years to make the approach workable. Under the Internal Measurement Approach a capital charge for the operational risk of a bank would be determined using the following procedures [4, 7]:

(i) A bank's activities are categorized into a number of business lines and a broad set of operational loss types is defined and applied across business lines.
(ii) Within each business line or loss type combination the supervisor specifies an exposure indicator (EI) which is a proxy for the size of each business line's operational risk exposure.
(iii) In addition to the exposure indicator, for each business line or loss type combination banks measure, based on their internal loss data, a parameter representing the probability of a loss event (PE) as well as a parameter representing the loss given that event (LGE). The product EI * PE * LGE is used to calculate the expected loss (EL) for each business line or loss type combination.
(iv) The supervisor supplies a factor γ for each business line or loss type combination which translates the expected loss (EL) into a capital charge. The overall capital charge for a particular bank is the simple sum of all the resulting products. This can be expressed as:

\text{required capital} = \sum_{i} \sum_{j} \gamma(i, j) \times EI(i, j) \times PE(i, j) \times LGE(i, j)   (2.2)

In Eq. (2.2) i is the business line and j is the risk type.

(v) To facilitate the process of supervisory validation banks supply their supervisors with the individual components of the expected loss calculation i.e. EI; PE; LGE instead of just the product EL. Based on this information supervisors calculate EL and then adjust for unexpected loss through the gamma term to achieve the desired soundness standard. The Basel Committee proposed that the business lines will be the same as those used in Standardized Approach. It is also proposed that operational risk in each business line then be divided into a number of non-overlapping and comprehensive loss types based on the industry’s best current understanding of loss events. By having multiple loss types the scheme can better address differing characteristics of loss events while the number of loss types should be limited to a reasonable number to maintain the simplicity of the scheme. The Committee’s provisional proposal on the grid for business lines, loss types and exposure indicators which has reflected considerable discussion with the industry [4]. While further work will be needed to specify the indicators for each risk type per business line the Committee had more confidence that the business lines and loss types are those which will form the basis of the new operational risk framework. The Committee believed that there should be continuity between approaches and that the indicators under the Standardized and Internal Measurement Approaches should be similar. The Committee therefore welcomed comment on the choice of indicators under both approaches including whether a combination of indicators might be used per business line in the Standardized Approach. The Committee also welcomed comment on the proposed loss categories. The EI represents a proxy for the size of a particular business lines operational risk exposure. The Basel Committee proposed to standardize EIs for business lines and loss types while each bank would supply its own EI data. The supervisory prescribed EIs would allow for better comparability and consistency across banks, facilitate supervisory validation, and enhance transparency. The PE represents the probability of occurrence of loss events and loss given event (LGE) represents the proportion of transaction or exposure that would be expensed as loss given that event. PE is expressed either in number or value term as far as the definitions of EI, PE and LGE are consistent with each other. For instance PE could be expressed as the number of loss events or the number of transactions and LGE parameters can be defined as the average of (loss amount/transaction amount). While it is proposed that the definitions of PE and LGE are determined and fixed by the Basel Committee. These parameters are calculated and supplied by individual banks subject to Committee guidance to ensure the integrity of the approach. A bank would use its own historical loss and exposure data perhaps in combination with appropriate industry pooled data and public external data sources so that PE and LGE would reflect each banks own risk profile. The term γ represents a constant that is used to transform EL into risk or a capital charge which is defined as the maximum amount of loss per a holding period within a certain confidence interval. The scale of γ will be determined and fixed by supervisors for each business line or loss type. In determining the specific figure of

γ that will be applied across banks the Basel Committee developed an industry wide operational loss distribution in consultation with the industry and used the ratio of EL to a high percentile of the loss distribution (99 %). The current industry practice and data availability do not permit the empirical measurement of correlations across business lines and risk types. The Basel Committee proposed a simple summation of the capital charges across business line or loss type cells. However, in calibrating the γ factors the Committee seeks to ensure that there is a systematic reduction in capital required by the Internal Measurement Approach compared to the Standardised Approach for an average portfolio of activity. While the Basel Committee [4] believed that the definitions of business lines or loss types and parameters should be standardized at least in an early stage. The Committee also recognised such standardization may limit banks’ ability to use the operational risk measures that they believe most accurately represent their own operational risk although banks could map their internal approaches into regulatory standards. As banks and supervisors gain more experience with the Internal Measurement Approach and as more data is collected the Committee examined the possibility of allowing banks greater flexibility to use their own business lines and loss types. In order to implement the Internal Measurement Approach for regulatory capital calculation there are a number of outstanding issues to be resolved. The Committee examined the following issues in close consultation with the industry [4, 7]: (i) In order to use bank’s internal loss data in regulatory capital calculation harmonization of what constitutes an operational risk loss event is a prerequisite for a consistent approach. Developing workable supervisory definitions in consultation with the industry of what constitutes an operational loss event for different business lines and loss types will be key to the robustness of the Internal Measurement Approach. In particular, this includes issues such as what constitutes a direct loss versus an indirect loss, over what holding period losses are considered, over what observation period historical losses are captured and the role of judgement in data collection and consolidation. (ii) In order to calibrate the capital calculation an industry wide distribution is used. This raises questions on data collection and consolidation and the confidence limits used. It underscores the importance of accelerating industry efforts to pool loss data under supervisory guidance on loss data collection processes. (iii) The historical loss observation may not always fully capture a bank’s true risk profile, especially when the bank does not experience substantial loss events during the observation period. To ensure that the required capital calculated using the Internal Measurement Approach appropriately covers the potential loss including low frequency high impact events the Committee conservatively sets out elements of the scheme including factors for each business lines or risk type combination and holding period. (iv) As noted previously the regulatory γ which is determined based on an industry wide loss distribution will be used across banks to transform a set of

parameters such as EI, PE and LGE into a capital charge for each business line and risk type. However, the risk profile of a bank’s loss distribution may not always be the same as that of the industry wide loss distribution. One way to address this issue is to adjust the capital charge by a risk profile index (RPI) which reflects the difference between the bank’s specific risk profiles compared to the industry as a whole. The Committee plans to examine the extent to which individual banks risk profile will deviate significantly from that of the types of portfolios used to arrive at the regulatory term and the cost or benefits of introducing a RPI to adjust for such differences. Another important methodology is the loss distribution approach (LDA). Under LDA a bank using its internal data estimates two probability distribution functions for each business line and risk type one on single event impact and the other on event frequency for the next one year. Based on the two estimated distributions the bank then computes the probability distribution function of the cumulative operational loss. The capital charge is based on the simple sum of the VaR for each business line and risk type. The approach adopted by the bank would be subject to supervisory criteria regarding the assumptions used. Generally the Basel Committee does not anticipate that such an approach would be available for regulatory capital purposes when the New Basel Capital Accord is introduced. However, this does not preclude the use of such an approach in the future and the Committee encourages the industry to engage in a dialogue to develop a suitable validation process for this type of approach. In the proposed evolutionary framework of the approaches to determine capital charges for operational risk, individual banks are encouraged to move along the spectrum of available approaches as they develop more sophisticated operational risk measurement systems and practices. Additional standards are intended to ensure the integrity of the measurement approach, data quality and the risk management control environment. The minimum standards that the Basel Committee sees as essential for recognizing a bank to be eligible for each stage are as follows [4, 7]: (i) The Basic Indicator Approach is intended to be applicable by any bank regardless of its complexity or sophistication. As such no criteria for use apply. Nevertheless, banks using this approach will be urged to comply with the forthcoming Committee guidance on Operational Risk Sound Practices which will also serve as guidance to supervisors under Pillar 2. (ii) As well as meeting the Committees Operational Risk Sound Practices banks will have to meet the following standards to be eligible for the Standardized Approach: (a) Banks must meet a series of qualitative standards including the existence of an independent risk control and audit function, effective use of risk reporting systems, active involvement of board of directors and senior management and appropriate documentation of risk management systems.

(b) Banks must establish an independent operational risk management and control process which covers the design, implementation and review of its operational risk measurement methodology. Responsibilities include establishing the framework for the measurement of operational risk and control over the construction of the operational risk methodology and key inputs. (c) Banks internal audit groups must conduct regular reviews of the operational risk management process and measurement methodology. (d) Banks must have appropriate risk reporting systems to generate data used in the calculation of a capital charge and the ability to construct management reporting based on the results. (e) Banks must begin to systematically track relevant operational risk data by business line across the firm. It should be noted that the ability to monitor loss events and effectively gather loss data is a basic step for operational risk measurement and management and is a pre-requisite for movement to the more advanced regulatory approach. (f) Banks will have to develop specific documented criteria for mapping current business lines and activities into the standardized framework. In addition, a bank should regularly review the framework and adjust for new or changing business activities and risks as appropriate. (iii) In this approach business lines, risk types and exposure indicators are standardized by supervisors and individual banks are able to use internal loss data. In addition to the standards required for banks using the Standardized Approach and banks should meet the following standards to use the Internal Measurement Approach: (a) Accuracy of loss data and confidence in the results of calculations using that data including PE and LGE have to be established through use tests. Banks must use the collected data and the resulting measures for risk reporting, management reporting, internal capital allocation purposes, risk analysis etc. Banks that do not fully integrate an internal measurement methodology into their day-to-day activities and major business decisions should not qualify for this approach. (b) Banks must develop sound internal loss reporting practices supported by an infrastructure of loss database systems that are consistent with the scope of operational losses defined by supervisors and the banking industry. (c) Banks must have an operational risk measurement methodology, knowledgeable staff and an appropriate systems infrastructure capable of identifying and gathering comprehensive operational risk loss data necessary to create a loss database and calculate appropriate PEs and LGEs. Systems should be able to gather data from all appropriate sub-systems and geographic locations. Missing data from various systems, groups or locations should be explicitly identified and tracked. (d) Banks need an operational risk loss database extending back for a number of years to be set by the Basel Committee for significant business lines.

Additionally banks must develop specific criteria for assigning loss data to a particular business line and risk types.
(e) Banks must have in place a sound process to identify in a consistent manner over time the events used to construct a loss database and to be able to identify which historical loss experiences are appropriate for the institution and are representative of their current and future business activities. This entails developing and defining loss data criteria in terms of the type of loss data and the severity of the loss data that goes beyond the general supervisory definition and specifications.
(f) Banks must develop rigorous conditions under which internal loss data would be supplemented with external data as well as a process for ensuring the relevance of this data for their business environment. Sound practices need to be identified surrounding the methodology and process of scaling public external loss data or pooled internal loss data from other sources. These conditions and practices should be re-visited on a regular basis, must be clearly documented and should be subject to independent review.
(g) Sources of external data must be reviewed regularly to ensure the accuracy and applicability of the loss data.
(h) Banks must review and understand the assumptions used in the collection and assignment of loss events and resultant loss statistics.
(i) Banks must regularly conduct validation of their loss rates, risk indicators and size estimations in order to ensure the proper inputs to the regulatory capital charge.
(j) Banks must adhere to rigorous processes in estimating parameters such as EI, PE and LGE. As part of the validation process, scenario analysis and stress testing would help banks in their ability to gauge whether the operational environment is accurately reflected in data aggregation and parameter estimates. A process would need to be developed to identify and incorporate plausible historically large or significant events into assessments of operational risk exposure which may fall outside the observation period. These processes should be clearly documented and be specific enough for independent review and verification. Such analysis would also assist in gauging the appropriateness of certain judgements or over-rides in the data collection process.
(k) Bank management should incorporate experience and judgement into an analysis of the loss data and the resulting PEs and LGEs. Banks have to clearly identify the exceptional situations under which judgement or over-rides may be used, to what extent they are to be used and who is authorized to make such decisions. The conditions under which these over-rides may be made and detailed records of changes should be clearly documented and subject to independent review.
(l) Supervisors will need to examine the data collection, measurement and validation process and assess the appropriateness of the operational risk control environment of the institution.

Outsourcing by banks is another activity which is increasing both in terms of volume of business involved and the range of functions outsourced. There are sound business reasons why a bank may outsource functions. These include a reduction in both fixed and current expenditure and compensation for a lack of expertise or resources. The Basel Committee [4] believes that banks engaged in outsourcing should aim to ensure that a clean break in their outsourced activities is established if there is to be a reduction in operational risk capital mainly through arranging robust legal agreements with outside service providers through a Service Level Agreement. Banks should also develop appropriate policies and controls to assess quality and stability of outside service providers. Where outsourcing is conducted between banks it is the entity that bears the ultimate responsibility for operational loss that should hold the capital. In order to benefit from a reduction in regulatory capital the bank conducting outsourcing need to demonstrate supervisor’s satisfaction that effective risk transfer has occurred. In an effort to encourage better risk management practices the Basel Committee is keenly interested in efforts by institutions to better mitigate and manage operational risk. Such controls or programs have the potential to reduce the exposure, frequency or severity of an event. Due to the crucial role these techniques can play in managing risk exposures. The Committee intends to work with the industry on risk mitigation concepts. However, careful consideration needs to be given to whether the control is truly reducing risk or merely transferring exposure from the operational risk area to another business sector. One growing risk mitigation technique is the use of insurance to cover certain operational risk exposures. During discussion with the industry the Basel Committee [4] found that firms were using or were considering using insurance policies to mitigate operational risk. These include a number of traditional insurance products such as bankers’ blanket bonds and professional liability insurance. Specifically, insurance could be used to externalize the risk of potentially low frequency and high severity losses such as errors and omissions including processing losses, physical loss of securities and fraud. The Committee agrees that in principle such mitigation should be reflected in the capital requirement for operational risk. Moreover banks that use insurance should recognize that they might be replacing operational risk with a counterparty risk. There are also other questions relating to liquidity i.e. the speed of insurance payouts, loss adjustment and voidability, limits in the product range, the inclusion of insurance payouts in internal loss data and moral hazard. The Committee welcomes further industry analysis on the robustness of such mitigation techniques in the context of a discussion about regulatory capital requirements. The Risk Management Group continues to develop its existing dialogue with the industry on this topic. It is widely agreed that there are unusual difficulties in the way of bank’s quantifying its operational risks adequately or even of getting a ballpark figure for many of them. Availability of data is a major challenge. Individual banks rarely report internal frauds unless they are catastrophic. An individual bank has thus very

little data on past events that it fears may impact it severely in the future. It is not usual for individual banks to hold data on public events like tsunamis as banks are not in environmental modeling business. Therefore, there are opportunities for bank regulators to encourage a public centre to warehouse shared and if necessary anonymised data and to broker the expertise of environmental and economic modelers on risks from external sources that can be studied with publicly available data. It is generally agreed also that the diversity of operational risks creates methodological difficulties both in quantifying individual risks and in estimating their interactions. Given that the downside tails of distribution of events are crucial and that there is little data on tail events, it is necessary to avoid assuming that the events follow a standard distribution such as normal distribution even if that fits well the middle range of events. Basel II mandates usage of extreme value theory, the statistical methodology for extrapolation of tails of distributions beyond the range of existing data. The paucity of data on operational risks also means that it is essential to combine what data is there with experts’ opinion. The elicitation and calibration of expert opinion by small data sets is itself a difficult theoretical area.
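The role of extreme value theory mentioned above can be made concrete with a brief peaks-over-threshold sketch. The fragment below fits a generalized Pareto distribution to exceedances over a high threshold using SciPy's generic fitting routine; the loss sample is synthetic and the fitted figures are purely illustrative, not a method prescribed by Basel II or by the references cited here.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch of the peaks-over-threshold idea: fit a generalized Pareto
# distribution (GPD) to losses exceeding a high threshold.  The loss sample is
# synthetic, so the fitted numbers are purely illustrative.

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=10.0, sigma=2.0, size=50_000)   # synthetic loss amounts

u = np.quantile(losses, 0.95)          # high threshold
excesses = losses[losses > u] - u      # exceedances over the threshold

# Fit the GPD to the exceedances, fixing the location parameter at zero.
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
print(f"threshold u = {u:,.0f}")
print(f"GPD shape (xi) = {shape:.3f}, scale (beta) = {scale:,.0f}")
```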

2.3 Regulatory Framework

Operational risk has been an actively sought after topic in financial institutions, banks, insurance companies etc. [3]. In the past few decades these institutions have experienced more than 100 operational loss events exceeding over hundreds of million dollars. Some noteworthy examples include $691 million rogue trading loss at Allfirst Financial, $484 million settlement due to misleading sales practices at Household Finance and estimated $140 million loss stemming from the 9/11 attack at the Bank of New York. Recent settlements related to questionable business practices have further heightened interest in the management of operational risk at financial institutions. These issues are handled through certain regulatory frameworks as suggested by the recommendations of Basel Committee for banks [4, 7]. The Basel Committee provides a forum for regular cooperation on banking supervisory matters. Its objective is to enhance understanding of key supervisory issues and improve the quality of banking supervision worldwide. The most elementary form of Basel viz. Basel I capital accord was created on 1988 [14] whose general purpose was to: (a) strengthen the stability of international banking system and (b) set up a fair and a consistent international banking system in order to decrease competitive inequality among international banks. The basis of capital in Basel I is categorized in two tiers viz. (a) tier I (core capital) which includes stock issues and declared reserves such as loan loss reserves set aside to cushion future losses or for smoothing income variations and (b) tier II (supplementary capital) which includes all other capital such as gains on investment assets,

long term debt with maturity greater than five years and hidden reserves i.e. excess allowance for losses on loans and leases. According to Basel I the total capital should represent at least 8 % of the bank's credit risk exposure, which can be: (a) on-balance sheet risk like risks associated with cash and gold held with the bank, government bonds and corporate bonds (b) market risk including interest rates, foreign exchange, equity derivatives and commodities and (c) non trading off-balance sheet risk like forward purchase of assets or transaction related debt assets. However, Basel I suffered from certain limitations such as: (a) limited differentiation of credit risk (b) static measure of default risk (c) no recognition of term structure of credit risk (d) simplified calculation of potential future counterparty risk and (e) lack of recognition of portfolio diversification effects. The limitations of Basel I are effectively handled by Basel II [7] which is based on three pillars viz. (a) minimum capital where banks must hold capital against 8 % of their assets after adjusting the risk factors (b) supervisory review whereby national regulators ensure their home country banks are adhering to the rules and (c) market discipline based on enhanced disclosure of risk. In Basel II risk was categorized as credit risk, market risk and operational risk. The credit risk has three approaches such as standardization, foundation internal ratings and advanced internal ratings. Basel II's impact on the banking sector led to higher capital requirements, a wider market domain and a larger array of products and customers. Some important advantages of Basel II are: (a) the discrepancy between economic capital and regulatory capital is reduced significantly because the regulatory requirements rely on the bank's own risk methods (b) Basel II is more risk sensitive and (c) it has wider recognition of credit risk mitigation. Basel II suffers from limitations such as: (a) too much regulatory compliance (b) over focusing on credit risk (c) the new accord is complex and therefore demanding for supervisors and unsophisticated banks and (d) strong risk identification in the new accord can adversely affect the borrowing position of risky borrowers. The stated limitations of Basel II are taken care of by Basel III in 2010 [13] which is based on norms such as: (a) improving the banking sector's ability to absorb shocks arising from financial and economic stress (b) improving risk management and governance and (c) strengthening banks' transparency and disclosures. The structure of the Basel III accord includes: (a) minimum regulatory capital requirements based on risk weighted assets where the maintained capital is calculated through credit, market and operational risks (b) a supervisory review process which specifies regulation of tools and frameworks for dealing with peripheral risks that banks face and (c) market discipline which increases the disclosures that banks must provide to increase the transparency of banks. Some major changes of Basel III are: (a) better capital quality (b) a capital conservation buffer (c) a counter cyclical buffer (d) minimum common equity and tier I capital requirements (e) leverage ratios (f) liquidity ratios and (g) treatment of systemically important financial institutions. Basel III has a major impact on: (a) banks (b) financial stability and (c) investors.

2.4 Operational Risk Data: Internal and External

The data may be collected from different sources for analysing operational risk [3]. The two commonly used sources for data collection are: (a) internal source of data mainly comes from inside the organization which is provided by the management after verifying the actual operational losses incurred while running the business and (b) external source of data is generally provided by vendors who gather the data on behalf of the organization after surveying the operational losses incurred. In recent times the data is often provided by the vendors as it eliminates biasedness in the data collection process. The vendors collect data from public sources [8] such as news reports, court filings, securities and exchange (SEC) filings etc. One such instance of data collection was done in May 2001 by the Basel Committee on banking supervision [14] which launched a survey of banks operational risk data. In a repeat of this exercise, the committee collected detailed data from the banking sector on operational risk for the current financial year. The data collection exercise included information on banks operational risk losses and various exposure indicators. This enabled the committee to further refine the calibration of the operational risk charge proposed for the new Basel accord. The committee provided banks with spreadsheets outlining the operational risk information requested as well as detailed instructions to assist banks in completing the survey. Banks were asked to complete and return the survey via national supervisors by 31st August 2002. All the data received were treated with complete confidentiality. The Committee then provided feedback to the industry on the results of the survey. However, this was done on a basis that avoids any disclosure of individual bank data. The raw data collected is basically unstructured and impure in nature [8]. The data is pre-processed through filtering, normalization etc. [3] in order to remove the inherent impurities in the data. This pre-processed data is in the form of a database and is suitable for further experimentation and analysis [3]. However, the absence of reliable internal operational loss data has impeded organization’s progress in measuring and managing operational risk. Without such data most firms have not been able to quantify operational risk correctly.

2.5 Quantifying Operational Risk

After the data is available in a reliable form it is subjected to quantification. Measuring operational risk from publicly available data poses several challenges, the most significant being that not all operational risk losses are correctly reported. One can also expect a positive relationship to exist between the loss amount and the probability that the loss is reported. If this relationship does exist then the data are not a random sample from the population of all operational losses but they are biased sample containing disproportionate number of losses. Standard statistical

inferences based on such samples can yield biased parameter estimates [11]. The disproportionate number of losses may lead to an estimate that overstates the organization's exposure to operational risk. Another way of describing this sampling problem is to say that an operational loss is publicly reported only if it exceeds some unobserved truncation point. Because the truncation point is unobserved it is a random variable and the resulting statistical framework is known as a random or stochastic truncation model. Techniques for analysing randomly truncated data are reviewed in [3]. In related work [6] proposed a random truncation framework to model operational loss data and provided initial empirical results suggesting the feasibility of the approach. Here we discuss one such approach to quantify operational risk [6]. Let x and y be random variables whose joint distribution is j(x, y). The variable x is randomly truncated if it is observed only when it exceeds the unobserved truncation point y. If x and y are statistically independent then the joint density j(x, y) is equal to the product of the marginal densities f(x) and g(y). Conditioning on x > y, this is expressed as [3, 11]:

j(x, y \mid x > y) = \frac{f(x) g(y)}{\Pr(x > y)} = \frac{f(x) g(y)}{\iint_{x > y} f(x) g(y)\, dx\, dy} = \frac{f(x) g(y)}{\int f(x) G(x)\, dx}   (2.3)

In Eq. (2.3) G(·) denotes the cumulative distribution function of y. Integrating out the unobserved variable y yields the marginal with respect to x [3, 11]:

f(x \mid x > y) = \frac{f(x) G(x)}{\int f(x) G(x)\, dx}   (2.4)

The above expression is the distribution of observed values of x and forms the basis for the estimation techniques. The experimental data generally consist of a series of operational losses exceeding millions of dollars in nominal value. Extreme value theory suggests that the distribution of losses exceeding such a high threshold can be approximated by a generalized Pareto distribution. Let X be a vector of operational loss amounts and x = X - u, where u is a threshold value. The Pickands-Balkema-de Haan Theorem discussed in the next chapter [3, 5] implies that the limiting distribution of x as u tends to infinity is given by:

GPD_{\xi, \beta}(x) = 1 - (1 + \xi x / \beta)^{-1/\xi}   for \xi > 0
GPD_{\xi, \beta}(x) = 1 - \exp(-x / \beta)   for \xi = 0   (2.5)

In Eq. (2.5) which of the two cases holds depends on the underlying loss distribution. If it belongs to a heavy tailed class of distributions such as the Burr, Cauchy, loggamma, Pareto etc. then convergence is to the GPD with \xi > 0. If it belongs to a light tailed class such as the gamma, lognormal, normal, Weibull etc. then convergence is to the exponential distribution (\xi = 0). We assume that the distribution of operational losses belongs to the heavy tailed class of distributions which implies that the distribution of log losses belongs to the light tailed class. The exponential distribution has only one parameter which makes it attractive for the current application. We thus model the natural logarithm of operational losses and set f(x) in Eq. (2.4) as:

f(x) = \frac{1}{b} \exp(-x / b)   (2.6)

In Eq. (2.6) x denotes the log of the reported loss amount X minus the log of the million dollar threshold. The above method for modeling the distribution of losses is referred to as the peaks over threshold approach and is discussed in [3]. To model the distribution of the truncation point y we assume that whether or not a loss is captured in public disclosures depends on many random factors. In this case, a central limit argument suggests that y is normally distributed. However, we find that the normality assumption results in frequent non-convergence of the numerical maximum likelihood iterations. Alternatively we can assume that the truncation point has a logistic distribution [3]:

G(x) = \frac{1}{1 + \exp(-\beta (x - \tau))}   (2.7)

The logistic distribution closely approximates the normal distribution but its fatter tails can make it more suitable than the normal for certain applications. The logistic distribution is more suitable for the current application as well, so that convergence issues are quite rare under this assumption. The logistic distribution has two parameters viz. (a) the location parameter τ that indicates the (log) loss amount with a 50 % chance of being reported and (b) a scale parameter β that regulates how quickly the probability of reporting increases or decreases as the loss amount increases or decreases. The data consist of {x, u} where x denotes the natural logarithm of the reported loss amount minus the natural logarithm of the million dollar threshold value and u is the million dollar threshold value below which losses are not reported, adjusted for inflation. The likelihood equation is [3, 11]:

L(b, \beta, \tau \mid X, u) = \prod_{i=1}^{n} \frac{f(x_i \mid b)\, G(x_i \mid \beta, \tau)}{\int_{u(i)}^{\infty} f(x \mid b)\, G(x \mid \beta, \tau)\, dx}   (2.8)

For more details on the quantification of operational risk interested readers can refer to [3].
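A minimal numerical sketch of how a likelihood of the form in Eq. (2.8) could be maximized is given below. It is not the estimation procedure of [3, 6]: it uses synthetic log excess losses, assumes a single common reporting threshold to keep the example short, and relies on generic SciPy routines for integration and optimization.

```python
import numpy as np
from scipy import integrate, optimize

# Sketch of maximum likelihood estimation for a randomly truncated
# exponential/logistic reporting model in the spirit of Eq. (2.8).
# The data are synthetic and a common reporting threshold is assumed.

def f_exp(x, b):
    """Exponential density of log excess losses, as in Eq. (2.6)."""
    return np.exp(-x / b) / b

def G_logistic(x, beta, tau):
    """Logistic probability that a loss of (log) size x is publicly reported, Eq. (2.7)."""
    return 1.0 / (1.0 + np.exp(-beta * (x - tau)))

def neg_log_likelihood(params, x):
    b, beta, tau = params
    if b <= 0 or beta <= 0:
        return np.inf
    # Normalizing constant: integral of f(x) G(x) over the support of x.
    norm, _ = integrate.quad(lambda t: f_exp(t, b) * G_logistic(t, beta, tau), 0.0, np.inf)
    return -np.sum(np.log(f_exp(x, b) * G_logistic(x, beta, tau) / norm))

# Synthetic log excess losses with a simulated reporting bias.
rng = np.random.default_rng(0)
x_all = rng.exponential(scale=1.5, size=5000)
reported = rng.random(5000) < G_logistic(x_all, beta=2.0, tau=1.0)
x_obs = x_all[reported]

res = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.5], args=(x_obs,),
                        method="Nelder-Mead")
print("Estimated (b, beta, tau):", res.x)
```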

References 1. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Finance 9(3), 203–228 (1999) 2. Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 81(3), 637–654 (1973) 3. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory, Technical Report. Birla Institute of Technology Mesra, Patna Campus, India (2010) 4. Daníelsson, J., Embrechts, P., Goodhart, C., Keating, C., Muennich, F., Renault, O., Song Shin, H.: An Academic Response to Basel II, Special Paper 130. Financial Markets Group, London School of Economics (2001) 5. Degen, M., Embrechts, P., Lambrigger, D.D.: The Quantitative Modeling of Operational Risk: Between g-and-h and EVT, Technical Report. Department of Mathematics, ETH Zurich, Zurich, Switzerland (2007) 6. Fontnouvelle, P., DeJesus-Rueff, V., Jordan, J., Rosengren, E.: Using Loss Data to Quantify Operational Risk, Technical Report. Federal Reserve, Bank of Boston (2003) 7. Franklin, J.: Operational risk under Basel II: a model for extreme risk evaluation. Bank. Financ. Serv. Policy Rep. 27(10), 10–16 (2008) 8. Guest, G., Namey, E.E., Mitchell, M.L.: Collecting Qualitative Data. Sage Publications, New York (2013) 9. Hussain, A.: Managing Operational Risk in Financial Markets, 1st edn. Butterworth Heinemann, Oxford (2000) 10. King, J.L.: Operational Risk: Measurement and Modeling, 1st edn. The Wiley Finance Series, Wiley, New York (2001) 11. Laha, R.G., Rohatgi, V.K.: Probability Theory, Volume 43 of Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1979) 12. Panjer, H.H.: Operational Risk: Modeling Analytics. Wiley Series in Probability and Statistics, New York (2006) 13. Quagliariello, M., Cannata, F.: Basel III and Beyond. Risk Books (2011) 14. Tarullo, D.K.: Banking on Basel: The Future of International Financial Regulation. The Peterson Institute of International Economics (2008)


Chapter 3

The g-and-h Distribution

Abstract An introduction to the g-and-h distribution is provided in this chapter. The quantification of operational risk is often performed through the g-and-h distribution. The concept of the g-and-h distribution is presented along with some of its important properties. The g-and-h distribution is then fitted to real life data. Some significant comments on the calculation of the g and h parameters conclude the chapter. This chapter lays the foundation for Chaps. 4, 6 and 7.







Keywords: Operational risk · g-and-h distribution · Probability distribution · Data fitting

3.1 Introduction

In this chapter we give the reader an introduction to the g-and-h distribution. The parametric g-and-h distribution has emerged as an interesting candidate towards quantifying operational risk [4, 11]. In Sect. 3.2 we define the concept of the g-and-h distribution. This is followed by some properties of the g-and-h distribution in Sect. 3.3. Then the concept of fitting g-and-h distributions to data is explained. Finally, this chapter concludes with a brief comment on the calculation of the g and h parameters [7]. A thorough understanding of this chapter will help the reader better grasp the concepts explained in Chaps. 4, 6 and 7. Banks and financial institutions face a variety of challenges in the process of collection and analysis of operational risk data [13]. These institutions are generally requested to define mappings from their internally defined business lines and event types to the ones defined by Basel II [5, 6]. This categorization was useful because it helped to bring uniformity in data classification across institutions. The records include dates, amounts, insurance recoveries and codes for the legal entity for which risks are incurred. The internal business line risks are mapped to the eight Basel defined business lines [5, 6]. There are two types of reporting biases that are present in this data.


The first type of bias is related to structural changes in reporting quality. When institutions first collect operational risk data their systems and processes are not completely solidified. Hence, the first few years of data typically have far fewer risks reported than later years. In addition the earlier systems for collecting these data may have been more likely to identify larger risks than smaller ones. Thus the structural reporting bias may potentially affect risk frequency as well as severity. The second type of reporting bias is caused by inaccurate time stamping of risks which results in temporal clustering of risks. For many institutions a disproportionate number of risks occurred on the last day of the month, the last day of the quarter or the last day of the year. The non-stationarity of the risk data over time periods of less than one year constrains the types of frequency estimation that can be performed. Even though an institution may have reported risks for a given year the number of risks in the first few years was typically much lower than the number of reported risks in later years. Classifying the risk data into business line or event type buckets further reduces the number of observations and increases the difficulty of modeling them. With limited data, the more granular the unit of measure the more difficult it may be to obtain precise estimates. The risk threshold is the minimum amount that a risk must equal in order to be reported in the institution's data set. Some institutions have different risk thresholds for each business line. The different risk thresholds are not a problem but a data characteristic that must be handled accordingly [3]. The choice of distribution is an important step in order to model the severity of the operational risk data. For this we need to understand the structure and characteristics of the data. Tukey [18] argued that before we can probabilistically express the shape of the data we must perform a careful exploratory data analysis (EDA). In the recent past EDA has been used in several business applications [18]. The EDA method can assess the homogeneity of data by using quantiles. For the operational risk data analysis we use two important and easily visualized characteristics of the data viz. skewness and kurtosis. It is well accepted that operational risk data is skewed and heavy tailed in nature. However, all these tail measurements are based on the third and fourth moments of the data or distributions. But it is impossible to differentiate between the tail measures of two distributions that have infinite fourth moments. Here the skewness and kurtosis are therefore measured relatively rather than absolutely. In this analysis a distribution can have a finite skewness or kurtosis value at different percentile levels of the distribution even when it has an infinite third or fourth moment. The skewness and kurtosis of the operational risk data are measured with respect to the skewness and kurtosis of the normal distribution. If the data \{X_i\}_{i=1}^{N} are symmetric then [2, 8]:

X_{0.5} - X_p = X_{1-p} - X_{0.5}   (3.1)

Here X_p, X_{1-p} and X_{0.5} are the 100pth percentile, the 100(1 - p)th percentile and the median of the data respectively. This implies that for symmetric data, such as data drawn from a normal distribution, a plot of X_{0.5} - X_p versus X_{1-p} - X_{0.5} will be a straight line with slope one. Any deviation from the line signifies skewness in the data [2].

In addition if the data are symmetric, the mid-summary of the data, mid_p = \frac{1}{2}(X_p + X_{1-p}), must be equal to the median of the data for all percentiles p. The plot of mid_p versus 1 - p is useful in determining whether there is systematic skewness in the data. It is observed that the operational risk data exhibits a high degree of unsystematic skewness. The shapes of the skewness are very similar for all banks and institutions at enterprise and business levels. The top (A) and bottom (B) panels of Fig. 3.1 [3] are representative plots of skewness and mid-summaries respectively. Panel A in Fig. 3.1 shows that the data is highly skewed relative to the normal distribution which is represented through the straight line. The horizontal axis shows the distance of Tukey's lower letter data values from the median data values. The vertical axis shows the distance of Tukey's upper letter data values from the median data values. The straight 45° line signifies the Gaussian distribution. Values above this line indicate that the data is skewed to the right. Panel B in Fig. 3.1 shows the mid-summary, which is the average of the upper and lower letter values, plotted against the percentile. This plot shows mid-summaries increasing with percentiles further into the tail which indicates that the data is skewed to the right. This plot reveals that the data is less symmetric in the tail of the distribution. For a normal random variable Y with mean \mu and standard deviation \sigma, Y = \mu + \sigma Z where Z is a standard normal variate. Hence (Y_p - Y_{1-p}) / (2 Z_p) = \sigma where Y_p and Y_{1-p} are the 100pth and 100(1 - p)th percentiles of Y and Z_p is the 100pth percentile of Z.

Fig. 3.1 Representative skewness and mid-summaries plot

We can also define the pseudosigma or p-sigma of the data \{X_i\}_{i=1}^{N} as (X_p - X_{1-p}) / (2 Z_p) for each percentile p. The pseudosigma is a measure of tail thickness with respect to the tail thickness of the normal distribution. If the data are normally distributed the pseudosigma will be constant across p and equal to \sigma. When the kurtosis of the data exceeds that of the normal distribution the p-sigma will increase for increasing values of p. In Fig. 3.2 [3] we plot ln(p-sigma) versus Z² to present the results in a more compact form. Even though we observed some similarity in terms of kurtosis among the risk data in the sample, unlike skewness there is no definite emerging pattern. A horizontal line indicates neutral elongation in the tail. A positive slope indicates that the kurtosis of the data is greater than that of the normal distribution. A sharp increase in slope indicates a non-smooth and unsystematic heavy tail with increasing values of Z². The upper tail is highly elongated (thick) relative to the body of the distribution. At the extreme end of the tail the p-sigma flattens out.

Fig. 3.2 The plot of ln(p-sigma) versus Z²

Some flattening happens earlier than others, as evident in Fig. 3.2d which shows an event type with very few data points and more homogeneity in the risks. Typically an event type that has more data, and in which two adjacent risks are of disproportionately different magnitude, will show flattening at a higher percentile level. Based on this analysis we infer that in order to fit the risk data we need a distribution with a flexible tail structure that can vary significantly across different percentiles. With the present discussion on the nature of risk data we are not in a position to appeal to mathematical theory to arrive at the ideal operational risk distribution. One approach to find an appropriate operational risk distribution is to experiment with many different distributions with the hope that some distributions will yield sensible results. Many risk distributions have been tested over a considerable period of time. It is practically impossible to experiment with every possible parametric distribution available at our disposal. An alternative way to conduct such an exhaustive search could be to fit general class distributions to the risk data with the hope that these distributions are flexible enough to conform to the underlying data in a reasonable way. A general class distribution is a distribution that has many parameters (typically four or more) from which many other distributions can be approximated as special cases. The four parameters typically represent location (mean or median), scale (standard deviation or volatility), skewness and kurtosis [2]. These parameters can have a wide range of values and therefore can assume values for the location, scale, skewness and kurtosis of many different distributions. Furthermore, because general class distributions nest a wide array of other parametric distributions they are very powerful distributions used to model operational risk data. A poor fit for a general class distribution automatically rules out the possibility of a good fit with any of its nested distributions.
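The mid-summary and pseudosigma diagnostics used earlier in this section are straightforward to compute. The sketch below applies them to a synthetic lognormal sample; the percentile grid and the sample itself are illustrative choices only, not the data analysed in [3].

```python
import numpy as np
from scipy.stats import norm

# Sketch of the EDA diagnostics discussed in this section: mid-summaries as a
# skewness check and pseudosigma (p-sigma) as a tail-thickness check.
# The lognormal sample and the percentile grid are illustrative choices.

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

ps = np.array([0.90, 0.95, 0.975, 0.99, 0.995])   # upper percentiles p
upper = np.quantile(data, ps)                     # X_p
lower = np.quantile(data, 1.0 - ps)               # X_{1-p}
median = np.median(data)                          # X_{0.5}

mid_summary = 0.5 * (upper + lower)               # equals the median if the data are symmetric
pseudosigma = (upper - lower) / (2.0 * norm.ppf(ps))   # constant for normal data

print(f"median = {median:.3f}")
for p, m, s in zip(ps, mid_summary, pseudosigma):
    print(f"p = {p:.3f}   mid-summary = {m:7.3f}   p-sigma = {s:7.3f}")
```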

3.2 Definition

In our current experiments for the operational risk data we have chosen the g-and-h distribution [13]. The motivation towards using the g-and-h distribution is that it has been applied to various areas of finance. The g-and-h distribution was introduced by Tukey in 1977 [18] based on a transformation of the standard normal variable. It is often used to model univariate real world data in Monte Carlo studies [17]. Its popularity lies in the ease of transforming standard normal deviates with the g and h parameters to generate non-normal distributions. Let Z ~ N(0, 1) denote a standard normal random variable. A random variable X is said to have a g-and-h distribution with parameters A, B, g, h ∈ R if X satisfies [4, 10]:

X = A + B \frac{e^{gZ} - 1}{g} e^{hZ^2/2}   (3.2)

In Eq. (3.2) the parameters A and B have the obvious interpretation of location and scale parameters. We write X ~ g-and-h when X has distribution function F ~ g-and-h. Instead of g and h being constants, a more flexible choice of parameters can be achieved by considering g and h to be polynomials including higher orders of Z². Such a polynomial choice is necessary for some banks and business lines. However, here attention is restricted to the basic case where g and h are constants. The parameters g and h govern the skewness and the heavy tailedness of the distribution [2]. In order to generate data, standard normal variates Z ~ N(0, 1) are converted to non-normal variates by specifying values of g and h in the transformation of Eq. (3.2). In the case h = 0, Eq. (3.2) reduces to the following [4]:

X = A + B\,\frac{e^{gZ} - 1}{g}    (3.3)
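To make the transformation concrete, the following minimal Python sketch (an illustrative aid only, not part of the original study; all parameter values are assumptions) generates g-and-h severities from standard normal draws as described above.

```python
import numpy as np

def g_and_h_transform(z, A=0.0, B=1.0, g=2.0, h=0.2):
    """Map standard normal draws z through the g-and-h transform of Eq. (3.2).

    For g = 0 the factor (exp(g*z) - 1)/g is replaced by its limit z,
    which gives the h-distribution; h = 0 gives the g-distribution.
    """
    core = z if g == 0.0 else (np.exp(g * z) - 1.0) / g
    return A + B * core * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(seed=42)
z = rng.standard_normal(100_000)          # Z ~ N(0, 1)
x = g_and_h_transform(z, g=2.0, h=0.2)    # heavy-tailed, right-skewed severities
print(x.mean(), np.quantile(x, 0.99))     # crude summary of the simulated losses
```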

Equation (3.3) is referred to as the g-distribution; it corresponds to a scaled lognormal distribution, which is generally described in terms of location and scale parameters [14]. Figures 3.3 and 3.4 show the effect of the location and scale parameters on the probability density functions (PDFs) of the lognormal distribution. The location and scale parameters are equivalent to the mean and standard deviation of the logarithm of the random variable. In the case g = 0, Eq. (3.2) is interpreted as follows [4]:

X = A + B\,Z\,e^{hZ^2/2}    (3.4)

Fig. 3.3 The lognormal distribution with the effect of the location parameter


Fig. 3.4 The lognormal distribution with the effect of scale parameter

Equation (3.4) is referred to as the h-distribution, which is shown in Fig. 3.5. When g = h = 0 the normal case is recovered. Unless otherwise stated, the values of the parameters A and B of the g-and-h distribution are restricted to A = 0 and B = 1. It is also assumed that g, h > 0. The parameters of the g-and-h distributions used in [7] to model operational risk at enterprise level lie within the ranges g ∈ [1.79, 2.30] and h ∈ [0.10, 0.35] [4].

Fig. 3.5 The h-distribution

Since the function

k(x) = \frac{e^{gx} - 1}{g}\,e^{hx^2/2}

is strictly increasing for h > 0, the distribution function F of a g-and-h random variable X can be written as follows [4]:

F(x) = \Phi\left(k^{-1}(x)\right)    (3.5)

In Eq. (3.5) Φ denotes the standard normal distribution function. This representation immediately yields an easy procedure to calculate quantiles, and hence the Value at Risk (VaR) [9] of a g-and-h random variable X is as follows [3, 4]:

VaR_\alpha(X) = F^{-1}(\alpha) = k\left(\Phi^{-1}(\alpha)\right)    (3.6)

In the next section we study some properties of the g-and-h distribution which are important for understanding the concepts underlying operational risk [4].

3.3 Properties of g-and-h Distribution

The corresponding g-and-h distribution of a univariate normal random variable Y_{g,h} is defined through the following transformation of Z [8]:

Y_{g,h}(Z) = A + B\,\frac{e^{gZ} - 1}{g}\,e^{hZ^2/2}    (3.7)

In Eq. (3.7) A and B (>0) are the location and scale parameters mentioned in Sect. 3.2; g and h are scalars that govern the skewness and elongation (kurtosis or heavy tail) [2] of Y_{g,h} respectively. The skewness and kurtosis help to assess the nature of the g-and-h distribution [10]. In the context of a loss distribution the tail corresponds to that part of the distribution that lies above a high threshold. A heavy tailed distribution is one in which the likelihood of drawing a large loss is high. Another way of stating this is that operational losses are dominated by low frequency but high severity events. There are formal statistical notions of heavy tailed distributions [14]. For example, distributions that are classified as subexponential, such as the lognormal, or of dominated variation may be considered heavy tailed. If the loglogistic, the lognormal or the g-and-h distribution with a positive h parameter fits the data reasonably well, the loss severities are classified as heavy tailed. The g-and-h distribution can also be characterized by its moments. The first two moments give the mean and variance, which characterize the location and scale of a distribution. The skewness and kurtosis are related to the third and fourth moments of a distribution. One way of visualizing the flexibility of the g-and-h distribution is by


rendering its skewness-kurtosis plot. The plot shows the locus of skewness-kurtosis pairs that the distribution can take as its parameter values vary [14]. This plot can be a point, a curve or a two dimensional surface depending on whether the skewness and kurtosis are functions of zero, one or many parameters. The plots for a variety of distributions are presented in Fig. 3.6, where the skewness is squared to show only positive values. The normal and exponential distributions are represented as points on the graph because their skewness and kurtosis can each take only one value no matter what the parameter values are. The generalized beta distribution of the second kind (GB2) and the g-and-h distribution are represented as skewness-kurtosis surfaces. These distributions are the most flexible in the sense that they span a large set of possible skewness-kurtosis values. The loglogistic, lognormal, Weibull, generalized Pareto distribution (GPD) and gamma distributions have one dimensional skewness-kurtosis curves. In some sense the lognormal curve forms an envelope: skewness-kurtosis plots above this envelope represent heavy tailed

Fig. 3.6 The skewness kurtosis plot


distributions and plots below this envelope represent light tailed distributions. In this graphical sense the Weibull, gamma and exponential distributions would be classified as light tailed. The skewness-kurtosis plot is particularly useful in illustrating the relationships among distributions [14, 18]. For example, the exponential distribution is a special case of the Weibull distribution and is therefore represented as a single point lying on the Weibull skewness-kurtosis curve. Furthermore, the loglogistic curve is completely contained within the g-and-h skewness-kurtosis surface. In this sense we can say that the g-and-h distribution is a parent distribution of the loglogistic distribution. If a distribution fits the data, then its parent distribution will fit the data at least as well. Since general class distributions such as the g-and-h and GB2 distributions span a larger area of the skewness-kurtosis plot, they provide more flexibility in modeling a wide variety of skewness and kurtosis values. The g-and-h family of distributions was extensively studied by Hoaglin [8] and by Martinez and Iglewicz [11]. Due to its appealing shape attributes it has been used to handle several kinds of real life data. Despite its complex mathematical form, percentage points of the density function can easily be obtained numerically using various packages. [3] described various features of the g-and-h family of distributions. The most important and useful characteristic of the g-and-h family is that it includes several probability distributions such as the normal, lognormal, Cauchy, t, uniform, χ², exponential, logistic and gamma. In fact, several other families of distributions like the Pearson and Johnson curves can be fitted closely by the g-and-h distribution [7]. The quantile function (inverse of the cumulative distribution function) of the generalized g-and-h distribution [11] is given by [8]:

Q_x(u \mid \mu, \sigma, g, h) = A + B\,Z_u\left[1 + c\,\frac{1 - e^{-gZ_u}}{1 + e^{-gZ_u}}\right]e^{hZ_u^2/2}    (3.8)

In Eq. (3.8) Z_u is the uth standard normal quantile and the constant c helps to produce proper distributions; for many real life data sets the recommended value is c = 0.8 [6]. The density of the g-and-h distribution is generally expressed as an implicit function. It requires estimation of the parameters A, B, g and h, and several methods have been proposed to address this issue [8, 11, 12]. The most significant ones are the methods proposed by Hoaglin [8] and by Martinez and Iglewicz [11]. The expression for the nth order raw moment of the g-and-h distribution derived by Chaudhuri [3] is as follows [8]:

E(Y) = \frac{1}{g\sqrt{1-h}}\left(e^{\frac{g^2}{2(1-h)}} - 1\right), \quad g \neq 0,\ 0 \le h < 1    (3.9)

E(Y^n) = \frac{1}{g^n\sqrt{1-nh}}\sum_{i=0}^{n}(-1)^i\binom{n}{i}\,e^{\frac{(n-i)^2 g^2}{2(1-nh)}}, \quad g \neq 0,\ 0 \le h < \frac{1}{n}    (3.10)


If m₁ and m₂ are the first and second moments of the data around zero, then g and h can be estimated by solving the following equations [8]:

E(Y) = m_1    (3.11)

E(Y^2) = m_2    (3.12)

However, because of the complex nature of these equations it is quite difficult to obtain a closed form solution. [3] provided a simpler method for solving them, showing that g and h are almost linearly related. From Eq. (3.11) it is possible to generate a number of data pairs (g, h), and based on these data we obtain the least squares estimates of a and b in [8]:

g = a + bh    (3.13)
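A sketch of this moment-matching idea in Python is given below. Rather than the linearization g = a + bh of Eq. (3.13), it simply hands the two moment conditions (3.11) and (3.12) to a generic root finder; the "true" parameters used to manufacture consistent sample moments are hypothetical.

```python
import numpy as np
from math import comb
from scipy.optimize import fsolve

def gh_raw_moment(n, g, h):
    """n-th raw moment of the (A=0, B=1) g-and-h distribution, Eq. (3.10);
    valid for g != 0 and 0 <= h < 1/n."""
    s = sum((-1)**i * comb(n, i) * np.exp((n - i)**2 * g**2 / (2 * (1 - n * h)))
            for i in range(n + 1))
    return s / (g**n * np.sqrt(1 - n * h))

def moment_equations(params, m1, m2):
    g, h = params
    return [gh_raw_moment(1, g, h) - m1,      # Eq. (3.11)
            gh_raw_moment(2, g, h) - m2]      # Eq. (3.12)

# hypothetical "true" parameters used to produce consistent moments
g_true, h_true = 1.2, 0.15
m1, m2 = gh_raw_moment(1, g_true, h_true), gh_raw_moment(2, g_true, h_true)

g_hat, h_hat = fsolve(moment_equations, x0=[1.0, 0.1], args=(m1, m2))
print(g_hat, h_hat)                            # recovers roughly (1.2, 0.15)
```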

Substituting the value of g from Eq. (3.13) into (3.12), the equation for h is solved. Once the value of h is estimated it is substituted into Eq. (3.11) and the equation for g is solved. It is further observed that for smaller values of the moments the solutions are very close to the actual values; for larger values of the moments the solutions are still quite close but not as close as desired. This problem is addressed by changing the scale of the data, since rescaling does not affect shape characteristics such as skewness and elongation (kurtosis). After the scale of the data is changed, the parameters g and h are estimated using this method and the result is then applied to the original data [14]. There are two important methods to estimate the parameters of the g-and-h distribution, viz. maximum likelihood estimation (MLE) and quantile estimation [11]. The MLE method for the g-and-h distribution is provided by Panjer [15]. It assigns equal weight to all the data used to fit the distribution. In contrast, the quantile estimation method can place more weight on the data in the tails of the distribution. It has been shown earlier that quantile based methods are better suited to the g-and-h distribution [18]. However, in several instances numerical MLE has been used to estimate the parameters of the g-and-h distribution [10]. The g-and-h distribution can potentially take negative values even though losses could never be recorded as negative numbers. This can be corrected by using rejection sampling in the simulation of the loss distribution, where negative draws are replaced with positive draws. In order to test the impact of the negativity, the simulation is run with and without the rejection sampling; only insignificant changes in the capital estimates were observed, hence this drawback has no practical significance. The g-and-h transformation must also be monotonic. In high quantiles we may need to use rejection sampling to force observations into an area where the distribution is well defined. The expression for the likelihood is obtained from a distribution specified by the quantile function Q_x(u|θ) in terms of the inverse quantile function Q_x^{-1}(x_i|θ). For a


simple random sample x_1, …, x_n taken from the generalized g-and-h distribution we have [3, 8]:

L(\theta \mid x_1,\ldots,x_n) = \prod_{i=1}^{n} f_x(x_i \mid \theta) = \prod_{i=1}^{n}\frac{\partial}{\partial x_i} Q_x^{-1}(x_i \mid \theta) = \prod_{i=1}^{n}\left[Q_x'\!\left(Q_x^{-1}(x_i \mid \theta)\mid\theta\right)\right]^{-1}    (3.14)

Hence

Q_x'(u \mid \theta) = \sigma\sqrt{2\pi}\,e^{z_u^2/2}\,e^{hz_u^2/2}\left[\left(1 + c\,\frac{1 - e^{-gz_u}}{1 + e^{-gz_u}}\right)\left(1 + hz_u^2\right) + \frac{2cgz_u e^{-gz_u}}{\left(1 + e^{-gz_u}\right)^2}\right]    (3.15)

Equation (3.15) is solved using standard numerical methods. In the process of fitting any particular parametric distribution to data, only certain distributions give a good fit. There are two ways of assessing this: graphical methods and formal statistical goodness-of-fit tests. After estimating the parameters, the goodness of fit is verified using the χ² statistic. Graphs such as the quantile-quantile plot or a normalized probability-probability plot can help determine whether a fit is very poor, but may not reveal whether a fit is good in the formal sense of statistical fit. The goodness-of-fit tests fall into a variety of standard categories, viz. the χ² test, tests based on the empirical distribution function, tests based on regression or correlation, and tests based on the third and fourth sample moments. The χ² test is the most commonly used test for assessing the fit of a distribution to operational loss data owing to its popularity; it is very flexible but has low power relative to many other goodness-of-fit tests [14]. When testing the goodness-of-fit of a distribution to data it is important to use the correct critical values for the test statistic [14, 18]. When the null distribution has one or more shape parameters, the critical values of the test statistic depend on the values of these shape parameters as well as on the number of sample observations. It is therefore important to compute critical values by simulating the values of the test statistic. The quantile-quantile plot is a visual representation of fit. A seemingly good fit on the quantile-quantile plot may not always translate into a good statistical fit. However, if distribution A has a better fit than distribution B on the quantile-quantile plot and distribution B has not been rejected as a poor fit by a formal statistical test, then by a transitivity argument distribution A can be considered at least as good a fit according to some formal test as well. Some examples of PDFs and cumulative distribution functions (CDFs) of the g-and-h distribution are presented in Fig. 3.7 [3]. Also included in Fig. 3.7 are computations of the parameters (g, h), quartiles, heights and modes of the distributions. It is worth mentioning that a distribution with negative skew can be obtained by changing the sign of the parameter g, as shown in Fig. 3.8.


Fig. 3.7 Some examples of PDFs and CDFs of the g-and-h distribution (the symbols μ, σ², γ₃ and γ₄ denote the mean, variance, skewness and kurtosis respectively)


Fig. 3.8 Some examples of PDFs and CDFs of the g-and-h distribution with negative skew

3.4 Fitting g-and-h Distributions to Data

Given any real life data set, the challenge remains to fit a g-and-h distribution to the data satisfying the line of best fit and minimizing the error estimates [12]. In this direction we superimpose g-and-h pdfs on histograms of circumference measures (in centimeters) taken from the neck, chest, hip and ankle of n = 252 adult males [7], as shown in Fig. 3.9. The g-and-h pdfs provide good approximations to the empirical data. In order to fit the g-and-h distributions to the data, the linear transformation A·q(z) + B is imposed on q(z), where A = s/σ and B = m − Aμ [5]. The values of the means (m, μ) and standard deviations (s, σ) for the data and the g-and-h pdfs respectively are given in Fig. 3.9. One way of determining how well a g-and-h pdf models a set of data is to compute the χ² goodness of fit statistic. For example, listed in Table 3.1 are the cumulative percentages and class intervals based on the g-and-h pdf for the chest data in Panel B of Fig. 3.9. The asymptotic value of p = 0.153 indicates that the g-and-h pdf provides a good fit to the data. We note that the degrees of freedom for this test were computed as [6] df = 5 = 10 (class intervals) − 4 (parameter estimates) − 1 (sample size). Further, the g-and-h trimmed means (TMs) given in Table 3.2 also indicate a good fit, as the TMs are all within the 95 % bootstrap confidence intervals based on the data. These confidence intervals are based on 25,000 bootstrap samples.
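The χ² computation just described can be sketched as follows, assuming the basic g-and-h form whose quantile at level p is A + B·k(z_p) as in Eq. (3.6); the data array and parameter values are synthetic placeholders, not the circumference measures of Fig. 3.9.

```python
import numpy as np
from scipy.stats import norm, chi2

def gh_quantile(p, A, B, g, h):
    """Quantile of the basic g-and-h distribution: A + B * k(z_p), cf. Eq. (3.6)."""
    z = norm.ppf(p)
    return A + B * (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def chi_square_fit(data, A, B, g, h, cum_pct=(5, 10, 15, 30, 50, 70, 85, 90, 95, 100)):
    """Chi-square statistic over fitted g-and-h class intervals at given cumulative %."""
    n = len(data)
    probs = np.array(cum_pct) / 100.0
    edges = np.concatenate(([-np.inf], gh_quantile(probs[:-1], A, B, g, h), [np.inf]))
    observed, _ = np.histogram(data, bins=edges)
    expected = np.diff(np.concatenate(([0.0], probs))) * n
    stat = np.sum((observed - expected) ** 2 / expected)
    df = len(cum_pct) - 4 - 1                 # 10 classes - 4 parameters - 1
    return stat, chi2.sf(stat, df)

# hypothetical fitted parameters and synthetic data standing in for the chest measures
rng = np.random.default_rng(0)
data = 100 + 8 * rng.standard_normal(252)
print(chi_square_fit(data, A=100.0, B=8.0, g=0.01, h=0.0))
```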


Fig. 3.9 The g-and-h pdf approximations to the empirical pdfs using measures of circumference (in centimeters) taken from n = 252 adult males (the g-and-h pdfs are scaled using A·q(z) + B where A = s/σ and B = m − Aμ)

3.5 Comments on g-and-h Parameters

The calculation of the values of the g and h parameters is a much sought after issue for business analysts and researchers [1, 7, 15]. The ability to compute the values of g and h for pre-specified values of skewness and kurtosis will oftentimes obviate the need to use the method described in [7], such as in the case of the approximation of the χ²_{df=6} distribution.


Table 3.1 The observed and expected frequencies and chi-square test based on the g-and-h approximation to the chest data in Panel B of Fig. 3.9

Cumulative %:          5      10     15     30     50     70     85     90     95     100
Expected frequency:   12.60  12.60  12.60  37.80  50.40  50.40  37.80  12.60  12.60  12.60
(g-and-h class intervals and observed frequencies accompany these columns in the original table; the surviving entries are a class endpoint of 115.90 and an observed frequency of 13)

χ² = 2.015,  Pr(χ²₅ ≥ 2.015) = 0.153,  n = 252

Table 3.2 The examples of g-and-h TMs based on the data in Fig. 3.9

Empirical distribution   20 % TM                      g-and-h TM
Neck                     37.929 (37.756, 38.100)      37.899
Chest                    100.128 (99.541, 100.753)    99.825
Hip                      99.328 (98.908, 99.780)      99.020
Ankle                    22.914 (22.798, 23.007)      22.800

Each TM is based on a sample size of n = 152 and has a 95 % bootstrap confidence interval enclosed in parentheses

More specifically, the values of g and h can easily be obtained using the method described above in [7], namely g = 0.404565 and h = −0.031731. This direct approach is much more efficient than having to take the numerous steps described in [8], which also yield estimates of lower precision, i.e. g = 0.406 and h = −0.033. Further, we note that the values of skewness and kurtosis for this distribution will not yield a valid g-and-h pdf because h is negative. It is also worth pointing out that the inequality given in [8] for determining where monotonicity fails for g-and-h distributions is not correct. Specifically, for g = 0.406 and h = −0.033 that inequality places the loss of monotonicity at z² > −1/h, or |z| > 5.505, which would be correct only if the distribution were a symmetric h-distribution, i.e. g = 0 [8]. Rather, the correct values are obtained by equating the corresponding derivatives to zero. For more details on the g and h parameters interested readers can refer to [4, 7, 8, 10–12, 16].


References

1. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Fin. 9(3), 203–228 (1999)
2. Badrinath, S.G., Chatterjee, S.: On measuring skewness and elongation in common stock return distributions: the case of the market index. J. Bus. 61(4), 451–472 (1988)
3. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report, Birla Institute of Technology Mesra, Patna Campus, India (2010)
4. Degen, M., Embrechts, P., Lambrigger, D.D.: The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Technical Report, Department of Mathematics, ETH Zurich, Zurich, Switzerland (2007)
5. Embrechts, P., Hofert, M.: Practices and issues in operational risk modeling under Basel II. Lith. Math. J. 50(2), 180–193 (2011)
6. Franklin, J.: Operational risk under Basel II: a model for extreme risk evaluation. Bank. Fin. Serv. Policy Rep. 27(10), 10–16 (2008)
7. Headrick, T.C., Kowalchuk, R.K., Sheng, Y.: Parametric probability densities and distribution functions for Tukey g-and-h transformations and their use for fitting data. Appl. Math. Sci. 2(9), 449–462 (2008)
8. Hoaglin, D.C.: Summarizing shape numerically: the g-and-h distributions. In: Hoaglin, D.C., Mosteller, F., Tukey, J.W. (eds.) Exploring Data Tables, Trends and Shapes. Wiley, New York (1985)
9. Holton, G.A.: Value at Risk: Theory and Practice, 2nd edn. (2014). http://value-at-risk.net
10. MacGillivray, H.L., Cannon, W.H.: Generalizations of the g-and-h distributions and their uses. Unpublished thesis (2000)
11. Martinez, J., Iglewicz, B.: Some properties of the Tukey g-and-h family of distributions. Commun. Stat.: Theory Methods 13(3), 353–369 (1984)
12. Majumder, M.A., Ali, M.M.: A comparison of methods of estimation of parameters of Tukey's g-and-h family of distributions. Pak. J. Stat. 24(2), 135–144 (2008)
13. Marrison, C.: The Fundamentals of Risk Measurement. McGraw Hill, New York (2002)
14. Pal, N., Sarkar, S.: Statistics: Concepts and Applications, 2nd edn. Prentice Hall of India (2007)
15. Panjer, H.H.: Operational Risk: Modeling Analytics. Wiley Series in Probability and Statistics, New York (2006)
16. Rayner, G.D., MacGillivray, H.L.: Numerical maximum likelihood estimation for the g-and-k and generalized g-and-h distributions. Stat. Comput. 12(1), 57–75 (2002)
17. Ruppert, D.: Statistics and Data Analysis for Financial Engineering. Springer Texts in Statistics (2010)
18. Tukey, J.W.: Modern Techniques in Data Analysis. NSF Sponsored Regional Research Conference, Southern Massachusetts University, North Dartmouth, Massachusetts (1977)


Chapter 4
Probabilistic View of Operational Risk

Abstract In this chapter we present the probabilistic view of operational risk, which is represented through the g-and-h distribution. The concept of value at risk (VaR) is presented along with the subadditivity of VaR. The subjective value at risk (SVaR) is proposed. The risk and deviation measures are also discussed. The equivalence of chance and VaR constraints is highlighted. Some advanced properties of the g-and-h distribution are discussed. Some applications of the probabilistic view of operational risk are given, and then the decomposition is performed in accordance with the contribution of risk factors.



Keywords Operational risk · Probability theory · SVaR · Risk measure · Deviation measure · G-and-h distribution · VaR

4.1 Introduction

After introducing the reader to operational risk and the g-and-h distribution in Chaps. 2 and 3 respectively, in this chapter we present the probabilistic view of operational risk. Operational risk has been studied using probability theory in the recent past [4]. In Sect. 4.2 we highlight the probabilistic view of operational risk as suggested by [7] through the g-and-h distribution. The concept of value at risk (VaR) is illustrated next in Sect. 4.3. This is followed by a discussion on the subadditivity of VaR. Then the concept of subjective value at risk (SVaR) is explained. The risk and deviation measures are discussed in Sects. 4.6 and 4.7 respectively. The next section analyses the equivalence of chance and VaR constraints. Some advanced properties of the g-and-h distribution are explained in Sect. 4.9. In Sect. 4.10 some applications of the probabilistic view of operational risk are given for interested readers. Finally, the chapter concludes with the decomposition according to the contribution of risk factors.



4.2 The g-and-h Distribution for Operational Risk

The operational risk is quantified based on the concept of the g-and-h distribution [6] presented in Chap. 3. It is studied using random variables X_i, i = 1, …, n defined on a common probability space (Ω, F, P) with the following cumulative distribution function [10]:

CDF_X(y) = \mathrm{Prob}\{X \le y\}    (4.1)

Equation (4.1) typically represents a one period risk factor in the quantitative risk context. Figure 4.1 represents the relation between the CDF and the PDF as two different representations of a population. The variable X may assume values of either loss or gain. Generally X ~ g-and-h is considered when X has distribution function CDF and CDF ~ g-and-h. Instead of g and h being constants, a more flexible choice of parameters may be achieved by considering g and h to be polynomials including higher orders of Z². In [7] such a polynomial choice was necessary for some banks and business lines. Here, attention is restricted to the basic case where g and h are constants. In the case h = 0 the g-and-h transformation reduces to [7]:

X = A + B\,\frac{e^{gZ} - 1}{g}    (4.2)

Fig. 4.1 The relations between two different typical representations of a population


Equation (4.2) is referred to as the g-distribution, which corresponds to a scaled lognormal distribution. Here, the linear transformation parameters A and B are restricted to the values 0 and 1 respectively. Furthermore, for g, h > 0 the parameters of the g-and-h distributions used by [7] to model operational risk at enterprise level lie within the following ranges [7]:

g \in [1.79,\ 2.30] \quad \text{and} \quad h \in [0.10,\ 0.35]    (4.3)

Since the function k(x) = \frac{e^{gx}-1}{g}\,e^{hx^2/2} for h > 0 is strictly increasing, the distribution function CDF of a g-and-h random variable can be written as [7]:

CDF(x) = \Phi\left(k^{-1}(x)\right)    (4.4)

For further details interested readers can refer to [4].

4.3 Value at Risk

The standard normal distribution function Φ in Eq. (4.4) yields an easy procedure to calculate quantiles, and the VaR of a g-and-h random variable X is given as follows [4, 10]:

VaR_\alpha(X) = CDF^{-1}(\alpha) = k\left(\Phi^{-1}(\alpha)\right), \quad 0 < \alpha < 1    (4.5)

The VaR of X with confidence level α ∈ (0, 1) is defined as [6]:

VaR_\alpha(X) = \min\{y \mid CDF_X(y) \ge \alpha\}    (4.6)

It is a lower α percentile of the random variable X. VaR is commonly used in many engineering areas involving uncertainties, such as the military, nuclear, materials, aerospace and finance domains. For example, financial regulations like Basel I and Basel II use the VaR deviation for measuring the width of the daily loss distribution of a portfolio. For normally distributed random variables VaR is proportional to the standard deviation. If X ~ N(μ, σ²) and CDF_X(y) is the cumulative distribution function of X, then [15]:

VaR_\alpha(X) = \mu + k(\alpha)\,\sigma    (4.7)

In Eq. (4.7) we have

k(\alpha) = \sqrt{2}\,\mathrm{erf}^{-1}(2\alpha - 1)    (4.8)
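A minimal sketch of Eqs. (4.5)–(4.8), assuming the basic g-and-h form with A = 0 and B = 1; the parameter values are illustrative only.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import erfinv

def k(x, g, h):
    """k(x) = (e^{gx} - 1)/g * e^{h x^2 / 2}, cf. Eq. (4.4)."""
    return (np.exp(g * x) - 1.0) / g * np.exp(h * x**2 / 2.0)

def var_gh(alpha, g, h):
    """VaR of a g-and-h loss via Eq. (4.5): VaR_alpha = k(Phi^{-1}(alpha))."""
    return k(norm.ppf(alpha), g, h)

def var_normal(alpha, mu, sigma):
    """VaR of a normal loss via Eqs. (4.7)-(4.8)."""
    return mu + sigma * np.sqrt(2.0) * erfinv(2.0 * alpha - 1.0)

print(var_gh(0.999, g=2.0, h=0.2))                     # heavy-tailed loss quantile
print(var_normal(0.999, mu=0.001, sigma=0.015),
      norm.ppf(0.999, loc=0.001, scale=0.015))         # both give the normal quantile
```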


Fig. 4.2 The graphical representation of VaR measures the distance between the value of the portfolio in the current period and its α-quantile

and

\mathrm{erf}(y) = \frac{2}{\sqrt{\pi}}\int_0^y e^{-t^2}\,dt    (4.9)

The ease and intuitiveness of VaR are counterbalanced by its mathematical properties. As a function of the confidence level, VaR_α(X) is a nonconvex, discontinuous function for discrete distributions. The VaR risk constraints are equivalent to chance constraints on the probabilities of losses. Figure 4.2 illustrates the graphical representation of VaR, highlighted by the length of the horizontal bar, which measures the distance between the value of a portfolio in the current period and its α-quantile. Here α = 0.05 and the returns are N(0.001, 0.000225) [4].

4.4 Subadditivity of Value at Risk

An explicit formula for VaR is given in Eq. (4.5) [5], which is derived for the g-and-h distribution and the corresponding random variables [4, 7]. In their well-known working paper Dutta and Perry state that [7]: We have not mathematically verified the subadditivity property for the g-and-h distribution, but in all cases we have observed empirically that enterprise level


capital is less than or equal to the sum of the capitals from business lines or event types. It is obvious that the mathematical discussion of subadditivity involves multivariate modeling. In order to statistically investigate the subadditivity property of the g-and-h distribution a simulation study is performed. Let X₁, X₂ be independent and identically distributed g-and-h random variables with parameters g = 2.4 and h = 0.2. By simulation of n = 10⁷ realizations the diversification benefit δ_{g,h}(α) is estimated, where [2]:

\delta_{g,h}(\alpha) = VaR_\alpha(X_1) + VaR_\alpha(X_2) - VaR_\alpha(X_1 + X_2)    (4.10)
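The simulation behind Fig. 4.3 can be sketched along the following lines (with a much smaller sample size than the n = 10⁷ used in the text, so the estimates are only indicative).

```python
import numpy as np

def simulate_gh(n, g=2.4, h=0.2, rng=None):
    """i.i.d. g-and-h losses with A = 0, B = 1."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def diversification_benefit(alpha, n=10**6, g=2.4, h=0.2, seed=1):
    """Monte Carlo estimate of delta_{g,h}(alpha) in Eq. (4.10)."""
    rng = np.random.default_rng(seed)
    x1, x2 = simulate_gh(n, g, h, rng), simulate_gh(n, g, h, rng)
    var = lambda x: np.quantile(x, alpha)
    return var(x1) + var(x2) - var(x1 + x2)

for alpha in (0.95, 0.99, 0.999):
    print(alpha, diversification_benefit(alpha))   # negative values signal super-additivity
```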

The quantity δ_{g,h}(α) is non-negative iff subadditivity occurs. The results are displayed in Fig. 4.3. For the above realistic choice of parameters super-additivity holds for α smaller than a certain level around 99.4 % [7]. The fact that subadditivity, i.e. VaR_α(X₁ + X₂) ≤ VaR_α(X₁) + VaR_α(X₂), holds for sufficiently large α is well established [3, 4]. That super-additivity enters for typical operational risk parameters at levels below 99.4 % may be somewhat surprising. The latter aspect is important in connection with the scaling of risk measures. Indeed, risk managers realize that estimating VaR_α at a level α ≥ 99 % is statistically difficult. It has been suggested to estimate VaR_α deeper down in the data, at α = 90 %, and then scale up to 99.9 %. The change from super- to subadditivity over this range is an issue of concern. It is to be noted that even finite mean examples can be constructed by choosing a large enough skewness parameter g such that subadditivity of value at risk holds only for levels of 99.9 % and higher, i.e. fails for all α < 99.9 %.

Fig. 4.3 The plot of δ_{g,h}(α) as a function of α for g = 2.4 and h = 0.2 with n = 10⁷


This is viewed in contrast to the following proposition [2].

Proposition 1 Suppose that the non-degenerate vector (X₁, X₂) is regularly varying with extreme value index ξ < 1. Then VaR_α is subadditive for sufficiently large α.

Figure 4.3 exemplifies the subadditivity of VaR only in the very upper tail region. It is to be noted that Proposition 1 is an asymptotic statement and does not guarantee subadditivity for a broad range of high quantiles. Furthermore it should be noted that for ξ = h > 1 subadditivity typically fails. The reason is that for h > 1 one deals with infinite mean models [8]. For interested researchers it is of prime importance to know for which choices of the g and h values subadditivity can be expected. As shown in Fig. 4.3 this depends on the level α. For practical relevance we restrict ourselves to the α values 99 % and 99.9 %. It is assumed that the operational risk data of two business lines of a bank are well modeled by independent and identically distributed g-and-h random variables with parameter values g ∈ [1.85, 2.30] and h ∈ [0.15, 0.35]. These values roughly correspond to the parameters estimated by [3] at enterprise level. It is interesting to figure out whether aggregation at business line level leads to diversification in the sense of subadditivity of VaR. For this purpose two independent and identically distributed g-and-h random variables with g and h values within the above mentioned ranges are considered. In Fig. 4.4 a contour plot of δ_{g,h}(α) is displayed for fixed α together with the rectangle containing the parameter values of interest. The number attached to each contour line gives the value of δ_{g,h}(α), and the lines indicate levels of equal magnitude of the diversification benefit. The zero value corresponds to models where VaR_α is additive, i.e. VaR_α(X₁ + X₂) = VaR_α(X₁) + VaR_α(X₂). The positive values (bottom left hand corner) correspond to models yielding subadditivity. The top right hand corner, corresponding to negative values of δ_{g,h}(α), leads to super-additivity for the corresponding parameter values. It is to be noted that for α = 99.9 % the entire parameter rectangle lies within the region of subadditivity.

Fig. 4.4 The contour plot of δ_{g,h}(α) as a function of the g and h values for fixed α = 99 % (left panel) and α = 99.9 % (right panel), given n = 10⁷


It is important to realize that with only relatively small changes in the underlying g and h parameters one may end up in the super-additivity region. The situation becomes more dramatic at lower quantiles. The left panel of Fig. 4.4 corresponds to α = 99 %, which is still relatively high. There the super-additivity region extends and a substantial fraction of the parameter rectangle lies within it. The above statements are postulated under the independent and identically distributed assumption. In the example given below dependence is allowed. For this we link the marginal g-and-h distributions with the same parameters as in Fig. 4.3 by a Gauss copula [6]. In Fig. 4.5 δ_{g,h}(α) is plotted for three different correlation parameters ρ = 0, 0.5 and 0.7. On comparing Figs. 4.3 and 4.5 it appears that in the range below 95 % the value of |δ_{g,h}(α)| becomes smaller when the correlation parameter increases. This is not surprising because VaR is additive under comonotonic dependence, i.e. for risks with maximal correlation [6]. As a consequence δ_{g,h}(α) tends to zero for ρ → 1. The effect of dependence can clearly be seen for large values of α. Based on the simulation study it appears that with increasing correlation ρ the range of super-additivity extends to even higher values of α. Hence the stronger the dependence, the higher the level α has to be in order to achieve a subadditive model. Formulated differently, for strong dependence (large values of ρ) most levels of α chosen in practice will lie within the range of super-additivity. These results have also been worked out for other dependence structures like the t and Gumbel copulas. For these cases contour plots analogous to Fig. 4.4 have been elaborated; the results do not differ significantly from Fig. 4.4 and thus we refrain from displaying these plots here.

Fig. 4.5 The plot of δ_{g,h}(α) as a function of α with a Gauss copula and correlation parameters ρ = 0, 0.5, 0.7 (g = 2.4, h = 0.2, n = 10⁷)


The situation in any loss distribution approach is of course in general much more complicated than this example. The researchers and risk managers should therefore interpret these statements rather from a methodological and pedagogical point of view. It seems that diversification of operational risk can go the wrong way due to the skewness and heavy tailedness of this type of data.

4.5 Subjective Value at Risk

It is widely accepted that risk management is a broad concept involving various perspectives. From the mathematical perspective, risk management is a procedure for shaping a loss distribution in accordance with, for example, an investor's risk profile. In several optimization applications VaR is not always able to give feasible solutions [6]. To handle such situations we propose an alternative percentile measure of risk, viz. SVaR [4]. Under certain specified conditions the SVaR measure represents the expected value of some percentage of the worst case loss scenarios. It approximately gives a feasible solution in those situations where VaR fails. For a random variable X with a continuous distribution function, SVaR_α(X) is equal to the subjective expectation of X provided the following condition is satisfied [4]:

X \ge VaR_\alpha(X)    (4.11)

This definition serves as the basis for the formulation of SVaR. The general definition of SVaR for a random variable X with a possibly discontinuous distribution function is [4]: the SVaR of a random variable X with confidence level α is the expectation of the generalized α-tail distribution, i.e.:

SVaR_\alpha(X) = \int_{-\infty}^{+\infty} y\, dCDF_X^{\alpha}(y)    (4.12)

In Eq. (4.12) we have

CDF_X^{\alpha}(y) = \begin{cases} 0 & \text{when } y < VaR_\alpha(X) \\[4pt] \dfrac{CDF_X(y) - \alpha}{1 - \alpha} & \text{when } y \ge VaR_\alpha(X) \end{cases}    (4.13)

In the general case SVaR_α(X) is never equal to the median of the outcomes greater than VaR_α(X). For general distributions we may be required to split the probability atom. As such, when the distribution is modeled by scenarios, SVaR may be procured through the median of a fractional number of scenarios. To conceptualise this idea we present further definitions of SVaR.


Let the superior value of SVaR be denoted by SVaR_α^{positive}(X). This is the conditional expectation of X subject to X > VaR_α(X), i.e. [4]:

SVaR_\alpha^{positive}(X) = E[X \mid X > VaR_\alpha(X)]    (4.14)

Figure 4.6 represents the VaR and SVaR measures in the operational risk context. SVaR_α(X) can also be defined alternatively as follows. If CDF_X(VaR_α(X)) < 1, so that there are chances of a loss greater than VaR_α(X), then [4]:

SVaR_\alpha(X) = \xi_\alpha(X)\,VaR_\alpha(X) + (1 - \xi_\alpha(X))\,VaR_\alpha(X) + \cdots + (1 - \xi_\alpha^n(X))\,SVaR_\alpha^{positive}(X)    (4.15)

In Eq. (4.15) SVaR is not defined as a conditional expectation, and ξ_α(X) is given by:

\xi_\alpha(X) = \frac{CDF_X(VaR_\alpha(X)) - \alpha}{1 - \alpha - 2\alpha - \cdots - n\alpha}    (4.16)

Let the inferior value of SVaR be denoted by SVaR_α^{negative}(X), such that [4]:

SVaR_\alpha^{negative}(X) = E[X \mid X \ge VaR_\alpha(X)]    (4.17)

The definition in Eq. (4.17) coincides with SVaR_α(X) for continuous distributions. For general distributions it is discontinuous with respect to α and concave in nature. SVaR itself is continuous with respect to α and jointly convex in (X, α). If

Fig. 4.6 A representation of VaR and SVaR measures in the operational risk context


CDF_X(y) has a vertical discontinuity gap, then there is an interval of confidence levels α having the same VaR. The inferior and superior endpoints of such an interval are given by [4]:

\alpha^{negative} = CDF_X\!\left(VaR_\alpha^{negative}(X)\right)    (4.18)

\alpha^{positive} = CDF_X\!\left(VaR_\alpha^{positive}(X)\right)    (4.19)

In Eq. (4.18) we have

CDF_X\!\left(VaR_\alpha^{negative}(X)\right) = \mathrm{Prob}\{X < VaR_\alpha(X)\}    (4.20)

When CDF_X(VaR_α^{negative}(X)) < α < CDF_X(VaR_α(X)) < 1, the VaR_α(X) atom has probability (α^{positive} − α^{negative}) and is split by the confidence level α into n pieces [4]. This fact is illustrated by Eq. (4.15).
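Assuming SVaR is computed as the expectation of the generalized α-tail distribution in Eqs. (4.12) and (4.13), a standard empirical estimator that splits the probability atom at VaR can be sketched as follows; the lognormal sample is only a stand-in for loss severities.

```python
import numpy as np

def var_svar(losses, alpha):
    """Empirical VaR and SVaR (alpha-tail expectation, Eq. (4.12)) of a loss sample.

    The atom at VaR is split so that exactly a (1 - alpha) probability mass is
    averaged, the discrete analogue of the tail distribution in Eq. (4.13).
    """
    x = np.sort(np.asarray(losses))
    n = len(x)
    var = np.quantile(x, alpha)
    tail_mass = 1.0 - alpha
    excess = x > var
    weight_on_atom = max(tail_mass - excess.sum() / n, 0.0)   # mass left on the VaR atom
    svar = (weight_on_atom * var + x[excess].sum() / n) / tail_mass
    return var, svar

rng = np.random.default_rng(7)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)     # stand-in loss severities
print(var_svar(sample, 0.99))
```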

4.6 Risk Measures

The axiomatic investigation of risk measures was suggested by Artzner et al. [1]. A coherent risk measure in the extended sense was defined by [15] as a functional RM : F² → (−∞, ∞] such that:

R1: RM(C) = C for all constants C
R2: RM((1 − ξ)X + ξX′) ≤ (1 − ξ)RM(X) + ξRM(X′) for ξ ∈ (0, 1) (convexity)
R3: RM(X) ≤ RM(X′) when X ≤ X′ (monotonicity)
R4: RM(X) ≤ 0 when ‖X^k − X‖₂ → 0 with RM(X^k) ≤ 0 (closedness)

A functional RM : F² → (−∞, ∞] is called a coherent risk measure in the basic sense if it satisfies axioms R1, R2, R3, R4 along with the following axiom:

R5: RM(ξX) = ξRM(X) for all ξ > 0 (positive homogeneity)

A functional RM : F² → (−∞, ∞] is called an averse risk measure in the extended sense if it satisfies axioms R1, R2, R4 along with the following axiom:

R6: RM(X) > EX for all nonconstant X (aversity)

Aversity has the interpretation that the risk of loss in a nonconstant random variable X cannot be acceptable (RM(X) < 0) unless EX < 0. A functional RM : F² → (−∞, ∞] is called an averse risk measure in the basic sense if it satisfies R1, R2, R4, R5 and R6. RM(X) = VaR_α(X) is neither a coherent nor an averse risk measure. The problem lies in the convexity axiom R2, which is


equivalent to the combination of positive homogeneity and subadditivity. The latter is defined as [15]:

RM(X + X') \le RM(X) + RM(X')    (4.21)

For VaR, although positive homogeneity is obeyed, the subadditivity in Eq. (4.21) is violated. An averse measure of risk might not be coherent, and a coherent measure might not be averse.

4.7 Deviation Measures

In this section we present the concept of deviation measures as suggested by [13, 14]. A functional DM : F² → [0, ∞] is called a deviation measure in the extended sense if it satisfies:

D1: DM(C) = 0 for all constants C, but DM(X) > 0 for all nonconstant X
D2: DM((1 − ξ)X + ξX′) ≤ (1 − ξ)DM(X) + ξDM(X′) for ξ ∈ (0, 1) (convexity)
D3: DM(X) ≤ d when ‖X^k − X‖₂ → 0 with DM(X^k) ≤ d (closedness)

A functional is called a deviation measure in the basic sense when it satisfies axioms D1, D2, D3 along with the following axiom:

D4: DM(ξX) = ξDM(X) for all ξ > 0 (positive homogeneity)

A deviation measure in the extended or basic sense is called a coherent measure in the extended or basic sense if it additionally satisfies:

D5: DM(X) ≤ sup X − E[X] for all X (upper range boundedness)

Figure 4.7 shows the axiomatic definition of the deviation measure in the operational risk scenario. Based on the concept of deviation measures we now define two important deviation measures, viz. a Value at Risk deviation measure and a Subjective Value at Risk deviation measure, as follows [4]:

VaR_\alpha^{delta}(X) = VaR_\alpha(X - EX)    (4.22)

SVaR_\alpha^{delta}(X) = SVaR_\alpha(X - EX)    (4.23)

The SVaR deviation measure SVaR_α^{delta}(X) is in fact a coherent deviation measure. Based on the deviation measures given by Eqs. (4.22) and (4.23) we


Fig. 4.7 The axiomatic definition of the deviation measure

illustrate two new definitions, the heterogeneous deviation Subjective Value at Risk and the heterogeneous Subjective Value at Risk [4]:

heterogeneous\text{-}SVaR_\alpha^{delta}(X) = \sum_{j=1}^{N} \xi_j\, SVaR_{\alpha_j}^{delta}(X), \quad \xi_j \ge 0,\ \sum_{j=1}^{N}\xi_j = 1,\ \alpha_j \in (0, 1)    (4.24)

heterogeneous\text{-}SVaR_\alpha(X) = \sum_{j=1}^{N} \xi_j\, SVaR_{\alpha_j}(X)    (4.25)

For further information interested readers can refer to [4].
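Equation (4.25) translates directly into a weighted mixture of SVaR levels. The sketch below uses a plain tail average as the SVaR proxy; the weights and levels are illustrative assumptions.

```python
import numpy as np

def empirical_svar(losses, alpha):
    """Plain tail average above the empirical alpha-quantile (a simple SVaR proxy)."""
    x = np.asarray(losses)
    var = np.quantile(x, alpha)
    return x[x >= var].mean()

def heterogeneous_svar(losses, alphas, weights):
    """Eq. (4.25): sum_j xi_j * SVaR_{alpha_j}(X) with xi_j >= 0 summing to one."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-12
    return sum(w * empirical_svar(losses, a) for a, w in zip(alphas, weights))

rng = np.random.default_rng(11)
losses = rng.lognormal(0.0, 1.2, size=50_000)
print(heterogeneous_svar(losses, alphas=(0.90, 0.95, 0.99), weights=(0.5, 0.3, 0.2)))
```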

4.8 Equivalence of Chance and Value at Risk Constraints

In several engineering and management studies we often deal with probabilistic constraints [12]. For instance, in portfolio management it may be required that the portfolio loss at a certain future time is, with high reliability, at most a certain value. In these cases an optimization model can be set up so that the constraints are required to be satisfied with some probability level. Let x ∈ Rⁿ and let w ∈ Ω be a random event over the set Ω of all random events. For any given x it may be required that most of the time some random functions f_i(x, w), i = 1, …, s satisfy the inequalities f_i(x, w) ≤ 0, i = 1, …, s; that is, we ask for [10]:


\mathrm{Prob}\{f_i(x, w) \le 0\} \ge p_i, \quad \forall\, i = 1, \ldots, s,\ \ 0 \le p_i \le 1    (4.26)

It can be argued that making this probability 1 is almost the same as requiring f_i(x, w) ≤ 0. In most applications this approach can lead to modeling and technical problems. On the modeling side there is little knowhow on what value of p_i should be set. Moreover, one has to deal with the issue of constraint interactions and decide whether it is better to require Prob{f₁(x, w) ≤ 0} ≥ p₁ = 0.99 and Prob{f₂(x, w) ≤ 0} ≥ p₂ = 0.95 or to work with a joint condition. Dealing numerically with the function F_i(x) = Prob{f_i(x, w) ≤ 0} leads to the task of finding the relevant properties of F_i. A common difficulty is that the convexity of f_i(x, w) with respect to x may not carry over to the convexity of F_i(x) with respect to x. The chance constraints and the percentiles of a distribution are closely related. Let VaR_α(x) be the VaR_α of a loss function f(x, w), that is [15]:

VaR_\alpha(x) = \min\{\zeta : \mathrm{Prob}\{f(x, w) \le \zeta\} \ge \alpha\}    (4.27)

Then we have the following equivalent constraints:

\mathrm{Prob}\{f(x, w) \le \zeta\} \ge \alpha \iff \mathrm{Prob}\{f(x, w) > \zeta\} \le 1 - \alpha \iff VaR_\alpha(x) \le \zeta    (4.28)

Generally, VaR_α(x) is nonconvex with respect to x. Therefore VaR_α(x) ≤ ζ and Prob{f(x, w) ≤ ζ} ≥ α may be nonconvex constraints.
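The equivalence in Eq. (4.28) is easy to check on simulated scenarios; the loss function, decision and threshold below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.standard_normal(200_000)            # scenarios of the risk factor
x = 1.5                                     # a fixed decision
loss = lambda x, w: x * w + 0.1 * w**2      # hypothetical loss f(x, w)

alpha, zeta = 0.95, 4.0
f = loss(x, w)
chance_ok = np.mean(f <= zeta) >= alpha     # Prob{f(x, w) <= zeta} >= alpha
var_ok = np.quantile(f, alpha) <= zeta      # VaR_alpha(x) <= zeta
print(chance_ok, var_ok)                    # the two checks agree, cf. Eq. (4.28)
```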

4.9 Advanced Properties of G-and-H Distribution

After providing the reader with a suitable introduction to the g-and-h distribution, we present some advanced mathematical properties of the g-and-h distribution. Interested readers can refer to [7] for further insights.

4.9.1 Tail Properties and Regular Variation

In certain issues of high quantile estimation, the statistical properties of the estimators used depend on the tail behavior of the underlying model. The g-and-h distribution is very flexible in that respect. There are numerous graphical techniques for revealing the tail behavior of distribution functions, such as mean excess plots (me-plots) and log-log density plots. In Fig. 4.8 the me-plot is shown for a g-and-h distribution with parameter values typical in the context of operational risk. Along with the thick line corresponding to the theoretical mean excess function, 12 empirical mean excess functions based on n = 10⁵ simulated g-and-h data are plotted. The upward sloping behavior of the me-plots indicates heavy-tailedness as


Fig. 4.8 The theoretical mean excess function (thick line) together with 12 empirical mean excess plots of the g-and-h distribution

typically present in the class S of subexponential distribution functions [10]. Linear behavior corresponds to Pareto power tails; in the latter case the resulting log-log density plot shows a downward sloping linear behavior, as shown in Fig. 4.9. Figure 4.8 also highlights the well-known problem in interpreting me-plots, i.e. the very high variability of the extreme observations, made visible through the simulated me-plots from the same underlying model. Both figures give insight into the asymptotic heavy-tailedness of the g-and-h distribution. A standard theory for describing the heavy-tailed behavior of statistical models is Karamata's theory of regular variation. For a detailed treatment of the theory interested readers can refer to [7].
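An empirical mean excess function of the kind plotted in Fig. 4.8 can be computed in a few lines; the simulated g-and-h sample and its parameter values are illustrative only.

```python
import numpy as np

def mean_excess(sample, thresholds):
    """Empirical mean excess e(u) = E[X - u | X > u] at the given thresholds."""
    x = np.asarray(sample)
    return np.array([(x[x > u] - u).mean() if np.any(x > u) else np.nan
                     for u in thresholds])

rng = np.random.default_rng(5)
z = rng.standard_normal(100_000)
g, h = 2.0, 0.2
x = (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)   # simulated g-and-h losses

u = np.quantile(x, np.linspace(0.5, 0.99, 20))           # thresholds in the upper half
print(np.column_stack([u, mean_excess(x, u)]))           # upward slope indicates heavy tails
```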

Fig. 4.9 The density of the g-and-h distribution plotted on a log-log scale (the different plotting ranges of the axes are to be noted)


It can be recalled that a measurable function L : R → (0, ∞) is slowly varying (denoted by L ∈ SV) if for t > 0:

\lim_{x\to\infty}\frac{L(tx)}{L(x)} = 1    (4.29)

A function f is called regularly varying (at ∞) with index a ∈ R if f(x) = x^a L(x), and this is denoted by f ∈ RV_a. It is to be noted that RV₀ = SV. The following proposition is an immediate consequence of Karamata's Theorem [7]. It provides an easy tool for checking regular variation. In the context of extreme value theory the result is known as the von Mises condition for the Fréchet distribution function. We further denote F̄ = 1 − F.

Proposition 2 Let F be an absolutely continuous distribution function with density f satisfying

\lim_{x\to\infty}\frac{x f(x)}{\bar F(x)} = a > 0 \ \Rightarrow\ \bar F \in RV_{-a}    (4.30)

It is to be noted that with a slight abuse of notation we should restrict RV to nonnegative random variables; through tail equivalence [7] we can easily bypass this issue. We proceed by showing that the g-and-h distribution is indeed regularly varying (at ∞) with index −1/h (still assuming h > 0). Assume X ~ g-and-h; then:

F(x) = \Phi\left(k^{-1}(x)\right)    (4.31)

f(x) = \frac{\varphi\left(k^{-1}(x)\right)}{k'\left(k^{-1}(x)\right)}    (4.32)

In Eq. (4.32) φ denotes the density of a standard normal random variable. Using u(1 − Φ(u))/φ(u) → 1 as u → ∞ we have [7]:

\lim_{x\to\infty}\frac{x f(x)}{\bar F(x)} = \lim_{x\to\infty}\frac{x\,\varphi(k^{-1}(x))}{\left(1 - \Phi(k^{-1}(x))\right)k'(k^{-1}(x))} \overset{u := k^{-1}(x)}{=} \lim_{u\to\infty}\frac{k(u)\,\varphi(u)}{\left(1 - \Phi(u)\right)k'(u)} = \lim_{u\to\infty}\frac{\varphi(u)\left(e^{gu} - 1\right)}{\left(1 - \Phi(u)\right)\left(g e^{gu} + hu(e^{gu} - 1)\right)} = \frac{1}{h}    (4.33)

Hence by Proposition 2, F̄ ∈ RV_{−1/h}. In a similar way it can also be shown that the h-distribution (h > 0) is regularly varying with the same index. This was already mentioned in [7]. The g-distribution (g > 0), however, is a scaled lognormal distribution which is subexponential but not regularly varying. This leads us to the following result.


Theorem 1 Suppose F ~ g-and-h with g, h > 0; then F̄ ∈ RV_{−1/h}. For h = 0 and g > 0 we have F ∈ S\RV, where S denotes the class of subexponential distribution functions.

Hence, if X ~ g-and-h with h > 0 we have by definition of regular variation F̄(x) = x^{−1/h} L(x) for some slowly varying function L. It is worth mentioning that the precise behavior of L may profoundly affect the statistical properties of extreme value theory based high quantile estimation. This point was very clearly stressed in [7] and is a very crucial point. The quality of high quantile estimation for power tail data very much depends on the second order behavior of the underlying (mostly unknown) slowly varying function L [7]. An explicit asymptotic formula for the slowly varying function L in the case of the g-and-h distribution is derived for interested readers. For g, h > 0 we have:

L(x) = \bar F(x)\,x^{1/h} = \left(1 - \Phi\left(k^{-1}(x)\right)\right)x^{1/h}    (4.34)

Hence,

L(k(x)) = \left(1 - \Phi(x)\right)\left(k(x)\right)^{1/h} = \left(1 - \Phi(x)\right)\left(\frac{e^{gx} - 1}{g}\right)^{1/h} e^{x^2/2} = \frac{1}{\sqrt{2\pi}\,x}\left(\frac{e^{gx} - 1}{g}\right)^{1/h}\left(1 + O\!\left(\frac{1}{x^2}\right)\right)    (4.35)

L(x) = \frac{1}{\sqrt{2\pi}\,g^{1/h}}\,\frac{\left(e^{g k^{-1}(x)} - 1\right)^{1/h}}{k^{-1}(x)}\left(1 + O\!\left(\frac{1}{\left(k^{-1}(x)\right)^2}\right)\right), \quad x \to \infty    (4.36)

In order to find an asymptotic estimate for k^{-1} we define:

\tilde k(x) = \frac{1}{g}\,e^{\frac{hx^2}{2} + gx} \sim k(x), \quad x \to \infty    (4.37)

with the inverse function:

\tilde k^{-1}(x) = -\frac{g}{h} + \frac{1}{h}\sqrt{g^2 + 2h\log(gx)}, \quad x > 0    (4.38)

It is to be noted here that f(x) ∼ g(x), x → a, means that:

\lim_{x\to a}\frac{f(x)}{g(x)} = 1    (4.39)

Also \tilde k^{-1}(x) \sim k^{-1}(x) for x → ∞. We now obtain the following theorem [7]:


Theorem 2 Let F ~ g-and-h with g, h > 0. Then F̄(x) = x^{−1/h} L(x) with L ∈ SV, where for x → ∞,

L(x) = \frac{1}{\sqrt{2\pi}\,g^{1/h}}\,\frac{\left[\exp\!\left(\frac{g}{h}\left(-g + \sqrt{g^2 + 2h\log(gx)}\right)\right) - 1\right]^{1/h}}{-\frac{g}{h} + \frac{1}{h}\sqrt{g^2 + 2h\log(gx)}}\left(1 + O\!\left(\frac{1}{\log x}\right)\right)

Let us define L̃(x) as follows:

\tilde L(x) = \frac{1}{\sqrt{2\pi}\,g^{1/h}}\,\frac{\left(e^{g\tilde k^{-1}(x)} - 1\right)^{1/h}}{\tilde k^{-1}(x)}    (4.40)

ð4:40Þ

It is to be noted that u := k 1 ðxÞ is a strictly increasing function for g, h > 0. Hence, pffiffiffiffiffiffi 1=h 1 pffiffiffiffiffiffi 1=h 1 2pg ~k ð xÞð1  Uðk 1 ð xÞÞÞx1=h 2pg ~k ðk ðuÞÞð1  UðuÞÞðk ðuÞÞ1=h ¼  ~1 1=h  ~1 1=h egk ðxÞ  1 egk ðkðuÞÞ  1  1=h ~1     egu  1 1 1 k ðk ðuÞÞð1  UðuÞÞ ¼ ¼ 1 þ O ; x!1 ¼ 1 þ O uðuÞ u2 log x eg~k1 ðkðuÞÞ  1

Lð xÞ ¼ ~ ð xÞ L

ð4:41Þ This completes the proof of Theorem formally. The slowly varying function L is pffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffi modulo constants essentially of the form expð log xÞ= log x [7].

4.9.2 Second Order Regular Variation

Second order regular variation is generally expressed in terms of the Pickands-Balkema-de Haan Theorem [7]. It is assumed that the reader is familiar with univariate extreme value theory [7]. For a g-and-h random variable X (with g, h > 0) it was shown earlier that F ∈ MDA(H_ξ), i.e. it belongs to the maximum domain of attraction of an extreme value distribution with index ξ = h > 0. The Pickands-Balkema-de Haan Theorem implies that for F ∈ MDA(H_ξ), ξ ∈ R, there exists a positive measurable function β(·) such that:

\lim_{u \uparrow x_0}\ \sup_{x \in (0,\, x_0 - u)}\left|F_u(x) - G_{\xi,\beta(u)}(x)\right| = 0    (4.42)


The upper support point of F is denoted by x₀; in the case of the g-and-h distribution x₀ = ∞. In the above theorem the excess distribution function F_u is defined by:

F_u(x) = \mathrm{Prob}(X - u \le x \mid X > u)    (4.43)

It is well approximated by the distribution function of the GPD, G_{ξ,β(u)}(x), for high threshold values u. This first order convergence result stands at the heart of extreme value theory and its numerous applications. For practical purposes, however, second order properties of F are of considerable importance for the performance of parameter estimates and the estimation of high quantiles. We are in particular interested in the rate of convergence of F_u towards G_{ξ,β(u)}, i.e. in how fast d(u) converges to 0 for u → x₀, where:

d(u) := \sup_{x \in (0,\, x_0 - u)}\left|F_u(x) - G_{\xi,\beta(u)}(x)\right|    (4.44)

For this we define the following expressions for some F ∈ MDA(H_ξ):

V(t) := (1 - F)^{-1}\left(e^{-t}\right)    (4.45)

A(t) := \frac{V''(\log t)}{V'(\log t)} - \xi    (4.46)

The following proposition gives insight into the behavior of the rate of convergence to 0 of d(u) in cases including, for example, the g-and-h distribution with ξ = h > 0 [7].

Proposition 3 Let F ∈ MDA(H_ξ) be a distribution function which is twice differentiable and let ξ > −1. If the following conditions are satisfied:
(i) lim_{t→∞} A(t) = 0
(ii) A(·) is of constant sign near ∞
(iii) there exists ρ ≤ 0 such that |A| ∈ RV_ρ
then for u → x₀

d(u) := \sup_{x \in (0,\, x_0 - u)}\left|F_u(x) - G_{\xi,\,V'(V^{-1}(u))}(x)\right| = O\!\left(A\!\left(e^{V^{-1}(u)}\right)\right)

The parameter ρ is called the second order regular variation parameter. It may be recalled that for the g-and-h distribution F(x) = Φ(k^{-1}(x)) and hence \bar F^{-1}(x) = k\left(\Phi^{-1}(1 - x)\right). In this case the function V defined above is given by V(t) = k\left(\Phi^{-1}(1 - e^{-t})\right). Moreover,

V'(\log t) = \frac{k'(v(t))}{t\,\varphi(v(t))}    (4.47)


and

V''(\log t) = \frac{k''(v(t)) - t\,k'(v(t))\left(\varphi(v(t)) + \varphi'(v(t))/\left(t\,\varphi(v(t))\right)\right)}{\left(t\,\varphi(v(t))\right)^2}    (4.48)

Here v(t) := Φ^{-1}(1 − 1/t). Conditions (i) and (ii) stated in Proposition 3 can be checked for conformance. In addition, using Lemma 2 of [7] it can be shown that |A| ∈ RV_ρ with second order parameter ρ = 0. By the definition of V we have:

A\!\left(e^{V^{-1}(u)}\right) = \frac{V''\!\left(\log e^{V^{-1}(u)}\right)}{V'\!\left(\log e^{V^{-1}(u)}\right)} - h = \frac{V''\!\left(\log 1/\bar F(u)\right)}{V'\!\left(\log 1/\bar F(u)\right)} - h = \frac{k''(k^{-1}(u))\,\bar F(u)}{k'(k^{-1}(u))\,\varphi(k^{-1}(u))} + \frac{k^{-1}(u)\,\bar F(u)}{\varphi(k^{-1}(u))} - 1 - h    (4.49)

Lemma 1 For X ~ g-and-h with g, h > 0 the following asymptotic relation holds:

A\!\left(e^{V^{-1}(k(x))}\right) \sim \frac{g}{x}, \quad x \to \infty

Now using the expansion

\frac{x\,\bar\Phi(x)}{\varphi(x)} = 1 - \frac{1}{x^2} + O\!\left(\frac{1}{x^3}\right), \quad x \to \infty    (4.50)

we have

\lim_{x\to\infty}\frac{A\!\left(e^{V^{-1}(k(x))}\right)}{g/x} = \lim_{x\to\infty}\frac{1}{g}\left[\frac{k''(x)}{k'(x)}\,\frac{x\bar\Phi(x)}{\varphi(x)} + x\left(\frac{x\bar\Phi(x)}{\varphi(x)} - 1\right) - hx\right] = \lim_{x\to\infty}\frac{1}{g}\left[\frac{k''(x)}{k'(x)}\left(1 + O\!\left(\frac{1}{x^2}\right)\right) - hx\right] = \lim_{x\to\infty}\frac{1}{g}\left[\frac{g + 2hx + \frac{h^2x^2}{g} + \frac{h}{g}}{\frac{hx}{g} + 1}\left(1 + O\!\left(\frac{1}{x^2}\right)\right) - hx\right] = \frac{1}{g}\lim_{x\to\infty}\frac{h + O\!\left(\frac{1}{x}\right)}{\frac{h}{g} + \frac{1}{x}} = 1    (4.51)

By Proposition 3, and since k^{-1}(·) is increasing (assuming g, h > 0), the rate of convergence of the excess distribution function of a g-and-h distributed random variable towards the GPD G_{ξ,β(u)}, with ξ = h and β(u) = V'(V^{-1}(u)), is given by [7]:

d(u) = O\!\left(\frac{1}{k^{-1}(u)}\right) = O\!\left(\frac{1}{\sqrt{\log u}}\right), \quad u \to \infty    (4.52)


At this point it can be stressed that d(u) = O(1/√(log u)) does not imply that the rate of convergence is independent of the parameters g and h. Not a detailed derivation of this fact, but rather a heuristic argument, is provided by the following expression [7]:

\frac{\log L(x)}{\log x} - \sqrt{2}\,\frac{g}{h^{3/2}}\,\frac{1}{\sqrt{\log x}} = O\!\left(\frac{1}{\sqrt{\log x}}\right), \quad x \to \infty    (4.53)

Clearly the value g/h^{3/2} affects the rate of convergence of log L(x)/log x as x → ∞. Table 4.1 summarizes the rates of convergence in the GPD approximation as a function of the underlying distribution function. For both the exponential and the exact Pareto distribution d(u) = 0. For distribution functions like the double exponential parent, normal, Student t and Weibull, convergence is at a reasonably fast rate. Already for the very popular lognormal and loggamma distribution functions convergence is very slow, and the situation deteriorates further for the g-and-h, where the convergence is extremely slow. It is to be noted that distribution functions can always be constructed with arbitrarily slow convergence of the excess distribution function towards the GPD [4, 7]. This result is in violent contrast to the rate of convergence in the Central Limit Theorem, which for finite variance random variables is always n^{-1/2}. From a theoretical point of view this yields the following important result [7]: if data are well modeled by a g-and-h distribution with g, h > 0, then high quantile estimation for such data based on the POT method will typically converge very slowly. It is often stated by some authors that they have solved the critical optimal choice of threshold problem in the POT or Hill method. On several occasions it has been stressed that this problem has no general solution. The optimality can only be obtained under

Table 4.1 Rate of convergence to the GPD for different distributions as a function of the threshold u

Distribution          Parameters               F̄(x)                    ρ        d(u)
Exponential (λ)       λ > 0                    e^{-λx}                 −∞       0
Pareto (α)            α > 0                    x^{-α}                  −∞       0
Double exp. parent                             e^{-e^{x}}              −1       O(e^{-u})
Student t (ν)         ν > 0                    t̄_ν(x)                  −2/ν     O(u^{-2})
Normal (0, 1)                                  Φ̄(x)                    0        O(u^{-2})
Weibull (τ, c)        τ ∈ R⁺\{1}, c > 0        e^{-(cx)^τ}             0        O(u^{-τ})
Lognormal (μ, σ)      μ ∈ R, σ > 0             Φ̄((log x − μ)/σ)        0        O(1/log u)
Loggamma (γ, α)       α > 0, γ ≠ 1             Γ̄_{α,γ}(x)              0        O(1/log u)
g-and-h               g, h > 0                 Φ̄(k^{-1}(x))            0        O(1/√(log u))


some precise second order properties of the underlying slowly varying function L. It is precisely this L which is impossible to infer from the statistical data. Hence the choice of a reasonable threshold remains the Achilles heel of any high quantile estimation procedure based on extreme value theory. For a more pedagogic and entertaining presentation of the underlying issues interested readers may refer to [4, 7].

4.10 Applications

After providing the reader with an insight into the probabilistic view of operational risk, we present some interesting applications to operational risk optimization and regression, i.e. subjective value at risk optimization, the regression problem and the stability of estimation [4].

4.10.1 Subjective Value at Risk Optimization

These days VaR has achieved the status of being written into industry regulations. It is difficult to optimize VaR numerically when losses are not normally distributed. As a tool in optimization modeling, SVaR has superior properties in many respects. SVaR optimization remains consistent with VaR optimization [13, 15]; for models with normal and elliptical distributions, working with VaR and SVaR is almost equivalent. SVaR can be expressed by a minimization formula which can be incorporated into an optimization problem with respect to the decision variables x ∈ X ⊆ Rⁿ. This is designed to minimize risk or shape it within bounds. Significant improvements are achieved while preserving problem features like convexity. Let us consider a random loss function f(x, y) depending on the decision vector x and a random vector y of risk factors, such that we have the following alternative expression from which SVaR can be derived [4]:

G_\alpha(x, w) = w + \frac{1}{1-\alpha}\,E\!\left[(f(x, y) - w)^+\right]    (4.54)

It can be verified that G_α(x, w) is convex with respect to w and that VaR_α(x) is a minimum point of the function G_α(x, w) with respect to w. Minimizing G_α(x, w) with respect to w gives SVaR_α(x) [4]:

SVaR_\alpha(x) = \min_{w} G_\alpha(x, w)    (4.55)


SVaR can be represented through either constrained or unconstrained optimization problems [12]. An advantage of SVaR over VaR is its capability to preserve convexity: if f(x, y) is convex in x, then SVaR_α(x) is convex in x. Moreover, if f(x, y) is convex in x, then G_α(x, w) is convex in x and w. This convexity is useful because minimizing G_α(x, w) over (x, w) ∈ X × R results in minimizing SVaR_α(x) [4]:

\min_{x \in X} SVaR_\alpha(x) = \min_{(x, w) \in X \times R} G_\alpha(x, w)    (4.56)

Again, if (x*, w*) minimizes G_α over X × ℝ, then x* minimizes SVaR_α(x) over X and SVaR_α(x*) = G_α(x*, w*). In risk management, SVaR can be utilized to shape the risk in an optimization model for which several confidence levels can be specified. It can be shown that for any selection of confidence levels α_i and loss tolerances ω_i, i = 1, …, t, the optimization problem [20]:

\min_{x \in X} g(x) \quad \text{subject to} \quad SVaR_{\alpha_i}(x) \le \omega_i, \quad i = 1, \ldots, t    (4.57)

Eq. (4.57) is equivalent to the following optimization problem [4]:

\min_{(x, w_1, \ldots, w_t) \in X \times \mathbb{R} \times \cdots \times \mathbb{R}} g(x) \quad \text{subject to} \quad G_{\alpha_i}(x, w_i) \le \omega_i, \quad i = 1, \ldots, t    (4.58)

When X and g are convex and f(x, y) is convex in x, the optimization problems given by Eqs. (4.57) and (4.58) are convex programming problems [12] and are thus favourable for computation. When Y is a discrete probability space with elements y_k, k = 1, …, P having probabilities p_k, k = 1, …, P we have [4]:

G_{\alpha_i}(x, w_i) = w_i + \frac{1}{1-\alpha_i} \sum_{k=1}^{P} p_k \big[f(x, y_k) - w_i\big]^{+}    (4.59)

The constraint G_α(x, w) ≤ ω can be replaced by a system of inequalities by introducing additional variables η_k such that [4]:

\eta_k \ge 0, \quad f(x, y_k) - w - \eta_k \le 0, \quad k = 1, \ldots, P    (4.60)

w + \frac{1}{1-\alpha} \sum_{k=1}^{P} p_k \eta_k \le \omega    (4.61)

The minimization problem in Eq. (4.58) can be converted into the minimization of g(x) with the constraints G_{α_i}(x, w_i) ≤ ω_i replaced as presented in Eqs. (4.60) and (4.61). When f is linear in x, the constraints in Eqs. (4.60) and (4.61) are linear. Figure 4.10 shows the SVaR robust mean in terms of SVaR actual frontiers for a real life scenario where expected return is measured with respect to standard deviation.

Fig. 4.10 The measure of SVaR robust mean in terms of SVaR actual frontiers
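As an illustration of how Eqs. (4.59)–(4.61) are used in practice, the sketch below (a toy example, not taken from the monograph: the scenario returns, the linear loss f(x, y) = −yᵀx and the budget constraint are invented for illustration) builds the linearized SVaR minimization of Eq. (4.56) and passes it to a generic LP solver:

```python
# Scenario-based SVaR (CVaR) minimization as a linear program, cf. Eqs. (4.59)-(4.61).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
P, n, alpha = 500, 4, 0.95          # scenarios, assets, confidence level (assumed)
y = rng.normal(0.01, 0.05, (P, n))  # hypothetical scenario returns
p = np.full(P, 1.0 / P)             # scenario probabilities

# Decision vector z = (x_1..x_n, w, eta_1..eta_P); minimize w + 1/(1-a) * sum p_k eta_k
c = np.concatenate([np.zeros(n), [1.0], p / (1.0 - alpha)])

# Constraints: f(x, y_k) - w - eta_k <= 0 with loss f(x, y_k) = -y_k'x, eta_k >= 0,
# portfolio weights sum to 1, x >= 0.
A_ub = np.hstack([-y, -np.ones((P, 1)), -np.eye(P)])
b_ub = np.zeros(P)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(P)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * P

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
              method="highs")
x_opt, w_opt = res.x[:n], res.x[n]
print("optimal weights:", np.round(x_opt, 3))
print("auxiliary w*:", round(w_opt, 4), " minimal SVaR:", round(res.fun, 4))
```

Here the auxiliary variable w returned by the solver plays the role of the VaR-like threshold in Eq. (4.54), while the optimal objective value is the minimal SVaR.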

4.10.2 The Regression Problem

In this section we highlight a regression problem connected with SVaR optimization [13, 15]. In linear regression, a random variable Y is approximated in terms of random variables X₁, …, X_n by an expression c₀ + c₁X₁ + ⋯ + c_nX_n. The coefficients are chosen by minimizing the mean square error [12]:

\min_{c_0, c_1, \ldots, c_n} E\big[Y - (c_0 + c_1 X_1 + \cdots + c_n X_n)\big]^2    (4.62)

The mean square error minimization is equivalent to minimizing the standard deviation under the unbiasedness constraint [14, 15]:

\min_{c_0, c_1, \ldots, c_n} \sigma\big[Y - (c_0 + c_1 X_1 + \cdots + c_n X_n)\big] \quad \text{subject to} \quad E[c_0 + c_1 X_1 + \cdots + c_n X_n] = EY    (4.63)

A general axiomatic setting for error measures and corresponding deviation measures can be considered. The error measure is defined as a functional E: L²(S) → [0, ∞] satisfying the following axioms:

E1: E(0) = 0, E(X) > 0 for all X ≠ 0, and E(C) < ∞ for all constants C
E2: E(λX) = λE(X) for all λ > 0 (positive homogeneity)
E3: E(X + X′) ≤ E(X) + E(X′) for all X and X′ (subadditivity)
E4: {X ∈ L²(S) | E(X) ≤ c} is closed for all c < ∞ (lower semicontinuity)

For an error measure E, the projected deviation measure D is defined by the equation:

D(X) = \min_{C} E(X - C)    (4.64)

The statistic S(X) is defined by the equation:

S(X) = \operatorname*{argmin}_{C} E(X - C)    (4.65)

From this it can be argued that the general regression problem [16]:

\min_{c_0, c_1, \ldots, c_n} E\big[Y - (c_0 + c_1 X_1 + \cdots + c_n X_n)\big]    (4.66)

is equivalent to the following problem [12]:

\min_{c_0, c_1, \ldots, c_n} D\big[Y - (c_1 X_1 + \cdots + c_n X_n)\big] \quad \text{subject to} \quad c_0 \in S\big[Y - (c_1 X_1 + \cdots + c_n X_n)\big]    (4.67)

The equivalence of the optimization problems given by Eqs. (4.66) and (4.67) is a special case of this. It leads to a link between the statistical work on percentile (quantile) regression and the SVaR deviation measure [9]: minimization of the error measure is equivalent to minimization of the SVaR deviation. It can be shown that when the error measure is the Koenker and Bassett function [9]:

E^{KB}_{\alpha}(X) = E\big[\max\{0, X\} + (\alpha^{-1} - 1)\max\{0, -X\}\big]    (4.68)

the projected measure of deviation is [4]:

D(X) = SVaR^{\Delta}_{\alpha}(X) = SVaR_{\alpha}(X - EX)    (4.69)

The corresponding averse measure of risk and the associated statistic are given by [4]:

R(X) = SVaR_{\alpha}(X)    (4.70)

S(X) = VaR_{\alpha}(X)    (4.71)


Then we have the following expressions [4]:

\min_{C \in \mathbb{R}} E\big[(X - C)^{+} + (\alpha^{-1} - 1)(X - C)^{-}\big] = SVaR^{\Delta}_{\alpha}(X)    (4.72)

\operatorname*{argmin}_{C \in \mathbb{R}} E\big[(X - C)^{+} + (\alpha^{-1} - 1)(X - C)^{-}\big] = VaR_{\alpha}(X)    (4.73)

For further details interested readers can refer to [4].
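A quick numerical illustration of Eq. (4.73) is given below (a sketch with an assumed lognormal loss sample and a simple grid search, not part of the original text): the minimizer of the Koenker–Bassett error over C is compared with the empirical α-quantile, i.e. VaR_α(X), and the tail mean SVaR_α(X) is reported alongside.

```python
# Checking Eq. (4.73) numerically on a simulated fat-tailed loss sample.
import numpy as np

rng = np.random.default_rng(1)
X = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)   # assumed loss sample
alpha = 0.95

def kb_error(c):
    # E[(X - C)^+] + (1/alpha - 1) * E[(X - C)^-], cf. Eq. (4.68) applied to X - C
    return (np.maximum(X - c, 0.0).mean()
            + (1.0 / alpha - 1.0) * np.maximum(c - X, 0.0).mean())

grid = np.linspace(np.quantile(X, 0.80), np.quantile(X, 0.999), 4000)
c_star = grid[np.argmin([kb_error(c) for c in grid])]

var_alpha = np.quantile(X, alpha)          # empirical VaR_alpha(X)
svar_alpha = X[X >= var_alpha].mean()      # empirical SVaR_alpha(X) (tail mean)

print("argmin_C of KB error :", round(c_star, 4))
print("VaR_alpha (quantile) :", round(var_alpha, 4))
print("SVaR_alpha (tail mean):", round(svar_alpha, 4))
```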

4.10.3 Stability of Estimation

The requirements on the estimation of VaR and SVaR arise when estimating the tails of distributions. In this respect we compare the stability of estimates of VaR and SVaR based on a finite number of observations. A common drawback in such comparisons is that some confidence level is assumed and the estimates of VaR(X) and SVaR(X) are compared at the same confidence level α, generally 90, 95 or 99 % [10]. The problem with such comparisons is that VaR and SVaR with the same confidence level measure different parts of the distribution. For any specific distribution, confidence levels α₁ and α₂ suitable for comparing VaR and SVaR can be found from the following equation [4]:

VaR_{\alpha_1}(X) = SVaR_{\alpha_2}(X)    (4.74)

Considering the credit risk example in [17], we find that SVaR with confidence level α = 0.90 is equal to VaR with confidence level α = 0.99. Yamai and Yoshiba [19] compared VaR and SVaR estimators for the parametric family of stable distributions. On executing 100,000 simulations of size 10,000 each and comparing the standard deviations of VaR and SVaR normalized by their mean values, we obtain the following results. The VaR estimators are generally more stable than the SVaR estimators at the same confidence level. The difference is most evident for fat tailed distributions and negligible when distributions are close to normal. A larger sample size often increases the accuracy of SVaR estimation. Let us take a case from [19] where the distribution of an equity portfolio consisting of call options on three stocks with a joint lognormal distribution is modeled. The VaR and SVaR are estimated at the 90 % confidence level on 100,000 sets of Monte Carlo simulations with a sample size of 10,000. The resulting loss distribution for the portfolio of at the money options is quite close to normal, and the estimation errors of VaR and SVaR are similar. The resulting loss distribution for the portfolio of deep out of the money options is fat tailed; in this case the SVaR estimator performs worse than the VaR estimator. In another case, the estimators are compared on the distribution of a loan portfolio consisting of 10,000 loans with homogeneous default rates ranging from 1 % down to 0.1 %. Individual loan amounts obey the exponential distribution with an average of $100 million. Correlation coefficients between default events are homogeneous at levels 0.02, 0.04 and 0.05 [10]. Results show that the estimation errors of the VaR and SVaR estimators are similar when the default rate is higher. For lower default rates, the SVaR estimator gives higher errors. Also, the higher the correlation between default events, the more fat tailed the loan portfolio distribution becomes, and the higher the error of the SVaR estimator relative to the VaR estimator. These numerical simulations compare VaR and SVaR at a specified confidence level.
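The following scaled-down sketch (assumed distributions and far smaller simulation sizes than the 100,000 × 10,000 runs cited above) reproduces the flavour of this comparison: the standard deviations of the VaR and SVaR estimators, normalized by their means, are computed for a near-normal and a fat-tailed loss.

```python
# Relative stability of VaR and SVaR estimators at the same confidence level.
import numpy as np

def estimator_spread(sampler, alpha=0.90, n=2_000, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    var_est, svar_est = [], []
    for _ in range(trials):
        x = sampler(rng, n)
        v = np.quantile(x, alpha)            # VaR_alpha estimate
        var_est.append(v)
        svar_est.append(x[x >= v].mean())    # SVaR_alpha estimate (tail mean)
    var_est, svar_est = np.array(var_est), np.array(svar_est)
    # standard deviation normalized by the mean, as in the comparison described above
    return var_est.std() / var_est.mean(), svar_est.std() / svar_est.mean()

normal_loss = lambda rng, n: rng.normal(0.0, 1.0, n) + 10.0    # close to normal
fat_tailed_loss = lambda rng, n: rng.pareto(2.5, n) + 1.0      # fat tailed

for name, sampler in [("near-normal", normal_loss), ("fat-tailed", fat_tailed_loss)]:
    rel_var, rel_svar = estimator_spread(sampler)
    print(f"{name:12s}  relative std of VaR: {rel_var:.4f}   of SVaR: {rel_svar:.4f}")
```

In such runs the two relative errors are typically comparable for the near-normal loss, while the SVaR estimator is noticeably noisier for the fat-tailed one.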

4.11 Decomposition According to Contribution of Risk Factors

Here we discuss the decomposition of VaR and SVaR risk according to risk factor contributions [18, 19]. For this we consider a portfolio loss X which can be decomposed as [4, 11]:

X = \sum_{i=1}^{n} X_i s_i    (4.75)

In Eq. (4.75) the X_i are the losses of the individual risk factors and the s_i are the sensitivities to the risk factors, i = 1, …, n. The following decompositions of VaR and SVaR hold for continuous distributions [4]:

VaR_{\alpha}(X) = \sum_{i=1}^{n} \frac{\partial VaR_{\alpha}(X)}{\partial s_i}\, s_i = \sum_{i=1}^{n} E\big[X_i \mid X = VaR_{\alpha}(X)\big]\, s_i    (4.76)

SVaR_{\alpha}(X) = \sum_{i=1}^{n} \frac{\partial SVaR_{\alpha}(X)}{\partial s_i}\, s_i = \sum_{i=1}^{n} E\big[X_i \mid X \ge VaR_{\alpha}(X)\big]\, s_i    (4.77)

When a distribution is modeled by scenarios, it is much easier to estimate the quantities E[X_i | X ≥ VaR_α(X)] in the SVaR decomposition than the quantities E[X_i | X = VaR_α(X)] in the VaR decomposition. Estimators of ∂VaR_α(X)/∂s_i are less stable than estimators of ∂SVaR_α(X)/∂s_i [4]. For further mathematical details interested readers can refer to [4].
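When the loss distribution is represented by scenarios, the SVaR decomposition of Eq. (4.77) reduces to averaging each factor loss over the tail scenarios. The sketch below (assumed three-factor model and Student-t scenarios, purely illustrative) shows that the factor contributions s_i E[X_i | X ≥ VaR_α(X)] add up to SVaR_α(X):

```python
# Scenario-based SVaR decomposition, cf. Eqs. (4.75) and (4.77).
import numpy as np

rng = np.random.default_rng(2)
P, alpha = 100_000, 0.95
s = np.array([1.0, 2.0, 0.5])                    # sensitivities to three risk factors
Xi = rng.standard_t(df=4, size=(P, 3))           # factor losses (fat tailed)
X = Xi @ s                                       # portfolio loss, Eq. (4.75)

var_alpha = np.quantile(X, alpha)
tail = X >= var_alpha
svar_alpha = X[tail].mean()

contributions = s * Xi[tail].mean(axis=0)        # s_i * E[X_i | X >= VaR_alpha(X)]
print("SVaR_alpha(X):", round(svar_alpha, 4))
print("factor contributions:", np.round(contributions, 4))
print("sum of contributions:", round(contributions.sum(), 4))   # equals SVaR_alpha(X)
```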


References


1. Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Math. Finance 9(3), 203–228 (1999)
2. Berkowitz, J., O'Brien, J.: How accurate are value at risk models at commercial banks? J. Finance 57(3), 1093–1111 (2002)
3. Böcker, K., Klüppelberg, C.: Multivariate Models for Operational Risk (Preprint). Technical University of Munich (2006)
4. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report, Birla Institute of Technology Mesra, Patna Campus, India (2010)
5. Daníelsson, J., Jorgensen, B.N., Samorodnitsky, G., Sarma, M., De Vries, C.G.: Subadditivity Re-Examined: The Case for Value at Risk (Preprint). London School of Economics (2005)
6. Degen, M., Embrechts, P., Lambrigger, D.D.: The quantitative modeling of operational risk: between g-and-h and extreme value theory. ASTIN Bull. 37(2), 265–291 (2007)
7. Dutta, K., Perry, J.: A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 6–13 (2006)
8. Embrechts, P., Nešlehová, J.: Copulas and extreme value theory. In: Quantitative Financial Risk Management: Fundamentals, Models and Techniques. Applications to Credit Risk and Market Risk, DVD, Henry Stewart Talks (2006)
9. Koenker, R., Bassett, G.W.: Regression quantiles. Econometrica 46(1), 33–50 (1978)
10. Laha, R.G., Rohatgi, V.K.: Probability Theory. Wiley Series in Probability and Mathematical Statistics, vol. 43. Wiley, London (1979)
11. Markowitz, H.M.: Portfolio selection. J. Finance 7(1), 77–91 (1952)
12. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. Wiley, New York (2009)
13. Rockafellar, R.T., Uryasev, S., Zabarankin, M.: Optimality conditions in portfolio analysis with generalized deviation measures. Math. Program. 108(2–3), 515–540 (2006)
14. Rockafellar, R.T., Uryasev, S., Zabarankin, M.: Deviation Measures in Generalized Linear Regression. Research Report 2002-9, Department of Industrial and Systems Engineering, University of Florida, Gainesville (2002)
15. Rockafellar, R.T., Uryasev, S.P.: Optimization of conditional value at risk. J. Risk 2(3), 21–42 (2000)
16. Ryan, T.P.: Modern Regression Methods, 2nd edn. Wiley Series in Probability and Statistics. Wiley, London (2009)
17. Serraino, G., Theiler, U., Uryasev, S.: Risk return optimization with different risk aggregation strategies. In: Gregoriou, G.N. (ed.) Ground Breaking Research and Developments in VaR (Forthcoming). Bloomberg Press, New York (2009)
18. Tasche, D.: Risk Contribution and Performance Measurement. Working Paper, Technical University of Munich, Munich, Germany (1999)
19. Yamai, Y., Yoshiba, T.: Comparative analysis of expected shortfall and value at risk: their estimation error, decomposition and optimization. Monetary Econ. Stud. 20(1), 57–86 (2002)
20. Zappe, C., Albright, S.C., Winston, W.L.: Data Analytics, Optimization and Simulation Modeling, 4th edn. Cengage Learning India, New Delhi (2011)


Chapter 5

Possibility Theory for Operational Risk

Abstract In this chapter the fundamental mathematical foundations of possibility theory are presented. The operational risk is quantified using the possibility theory in Chap. 6. The possibilistic quantification of operational risk takes care of the inherent impreciseness and vagueness present in the banking and financial data. The concepts of σ-Algebra, measurable space and measurable set, measurable function, uncertainty measure, uncertainty space, uncertainty distribution and uncertainty set are explained with several illustrative examples. The chapter concludes with an analysis of possibilistic risk.









Keywords: Possibilistic theory · σ-algebra · Belief degrees · Measurable set · Measurable function · Uncertainty measure · Uncertainty distribution · Uncertainty set



5.1 Introduction

This chapter introduces the reader to the basic mathematical foundations of possibility theory [3] required for the quantification of the operational risk discussed in Chaps. 2 and 3. The basic idea has been adapted from uncertainty theory [8], which was proposed in 2007 and subsequently used in several applications; it has since become a branch of axiomatic mathematics for modeling belief degrees. Possibility theory deals with various types of uncertainty. It is an alternative to probability theory and was introduced by Zadeh [9] in 1978 as an extension of the theory of fuzzy sets and fuzzy logic. Dubois and Prade [2, 4, 11] further contributed to its development. In Sect. 5.2 we present the concept of σ-algebra. The measurable space and measurable set are highlighted next in Sect. 5.3. This is followed by a discussion on measurable functions. Then the concepts of uncertainty measure, uncertainty space, uncertainty distribution and uncertainty set are illustrated in Sects. 5.5, 5.6, 5.7 and 5.8 respectively. Finally, the chapter concludes with a discussion on possibilistic risk analysis. A thorough understanding of the concepts presented in this chapter will help the reader better understand the contents of Chaps. 6–8.


The uncertainty involved here generally encompasses decision making in real life scenarios which are indeterminate in nature [3, 6]. The indeterminacy entails phenomena whose outcomes cannot be exactly predicted in advance. Some instances of indeterminacy include tossing dice, roulette wheels, stock prices, bridge strength, lifetimes, demand etc. To deal with the indeterminacy arising in operational risk situations, the possibility theory coined by Zadeh is extended here to include belief degrees [5]. This formulation is different from probability theory as defined by Kolmogorov [7]. Probability is sometimes interpreted as frequency or long run cumulative frequency, while indeterminacy arising from uncertainty is interpreted as a personal belief degree. Let us consider an indeterminate quantity such as bridge strength. A belief degree function represents the degree with which we believe the indeterminate quantity falls to the left of the current point. If we believe the indeterminate quantity completely falls to the left of the current point then the belief degree is 1; if this is completely impossible then the belief degree is 0. Usually it is neither completely true nor completely false, and we assign a number between 0 and 1 to the belief degree. The belief degree function may deviate far from the long run cumulative frequency and may have a much larger variance, because human beings usually overweight unlikely events. The belief degree function can therefore never be treated as a probability distribution [8]. This fact is illustrated in Fig. 5.1. When the sample size is large enough, the estimated probability distribution (left curve) may be close enough to the long run cumulative frequency (left histogram); in this case probability theory is the only legitimate approach. When belief degrees are available with no samples, the estimated indeterminate distribution (right curve) may have a much larger variance than the long run cumulative frequency (right histogram). When no samples are available to estimate a probability distribution, we try to evaluate the belief degree that each event will occur. Some may think that a belief degree is a subjective probability or a fuzzy concept; however, this is usually inappropriate because both probability theory and fuzzy set theory may lead to counterintuitive results in this setting.

Fig. 5.1 The illustration of probability (left curve) and indeterminacy (right curve)


Now we proceed to present the elementary foundations of possibility theory enriched with belief degrees. From the mathematical viewpoint, it is treated as a type of measure theory [1]. We start with the concepts of σ-algebra, measurable space and measurable set and measurable function [3].

5.2 σ-Algebra

The basic concepts underlying σ-algebra are highlighted here [3]. These concepts contribute towards the mathematical foundations of possibility theory. The major results in this section are well known; readers who are already familiar with them may skip this section.

Definition 5.1 Let Γ be a nonempty universal set. A collection L consisting of subsets of Γ is called an algebra over Γ iff the following conditions hold: (a) Γ ∈ L; (b) if Λ ∈ L then Λᶜ ∈ L; and (c) if Λ₁, Λ₂, …, Λ_n ∈ L then ⋃_{i=1}^{n} Λ_i ∈ L. The collection L is called a σ-algebra over Γ if condition (c) is replaced with closure under countable union, i.e. when Λ₁, Λ₂, … ∈ L we have ⋃_{i=1}^{∞} Λ_i ∈ L.

Let us consider a few examples to illustrate the concept of σ-algebra [8].

Example 5.1 The collection {∅, Γ} is the smallest σ-algebra over Γ and the power set, i.e. the collection of all subsets of Γ, is the largest σ-algebra.

Example 5.2 Let Λ be a proper nonempty subset of Γ. Then {∅, Λ, Λᶜ, Γ} is a σ-algebra over Γ.

Example 5.3 Let L be the collection of all finite disjoint unions of intervals of the form (−∞, a], (a, b], (b, ∞) together with ∅. Then L is an algebra over ℝ but not a σ-algebra, because Λ_i = (0, (i−1)/i] ∈ L for all i but ⋃_{i=1}^{∞} Λ_i = (0, 1) ∉ L.

Example 5.4 A σ-algebra L is closed under countable union, countable intersection, difference and limit. That is, if Λ₁, Λ₂, … ∈ L then ⋃_{i=1}^{∞} Λ_i ∈ L, ⋂_{i=1}^{∞} Λ_i ∈ L, Λ₁\Λ₂ ∈ L and lim_{i→∞} Λ_i ∈ L.

5.3 Measurable Space and Measurable Set

Based on σ-algebra we present the concepts of measurable space and measurable set [8]. Definition 5.2 Let C be a nonempty set and let L be a σ-algebra over C. Then ðC; LÞ is called a measurable space and any element in L is called a measurable set. Let us consider few examples to illustrate the concepts of measurable space and measurable set [8].


Example 5.5 Let ℝ be the set of real numbers. Then L = {∅, ℝ} is a σ-algebra over ℝ, and (ℝ, L) is a measurable space. Note that there exist only two measurable sets in this space, ∅ and ℝ; intervals like [0, 1] and (0, +∞) are not measurable.

Example 5.6 Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra over Γ, and (Γ, L) is a measurable space. Furthermore {a} and {b, c} are measurable sets in this space but {b}, {c}, {a, b} and {a, c} are not.

Now we illustrate the concept of Borel set based on the σ-algebra [3, 8].

Definition 5.3 The smallest σ-algebra B containing all open intervals is called the Borel algebra over the set of real numbers, and any element of B is called a Borel set.

Let us consider a few examples to illustrate the concept of Borel set [8].

Example 5.7 It has been proved that intervals, open sets, closed sets, rational numbers and irrational numbers are all Borel sets.

Example 5.8 There exists a non-Borel set over ℝ. Let [a] represent the set of all rational numbers plus a. Note that if a₁ − a₂ is not a rational number then [a₁] and [a₂] are disjoint sets. Thus ℝ is divided into an infinite number of such disjoint sets. Let A be a new set containing precisely one element from each of them. Then A is not a Borel set.

Definition 5.4 A function f from a measurable space (Γ, L) to the set of real numbers is said to be measurable if f⁻¹(B) = {γ ∈ Γ | f(γ) ∈ B} ∈ L for any Borel set B of real numbers. Continuous functions and monotone functions are instances of measurable functions. Let f₁, f₂, … be a sequence of measurable functions. Then the following functions are also measurable:

\sup_{1 \le i < \infty} f_i(\gamma), \quad \inf_{1 \le i < \infty} f_i(\gamma), \quad \limsup_{i \to \infty} f_i(\gamma), \quad \liminf_{i \to \infty} f_i(\gamma)

In particular, if lim_{i→∞} f_i(γ) exists for each γ then the limit is also a measurable function. Based on the above definitions we discuss the notion of an event in a measurable space [3, 8]. Let (Γ, L) be a measurable space; each element Λ of L is called a measurable set. When quantifying operational risk using possibility theory the measurable set is renamed an event. To understand these terminologies let us illustrate them by an indeterminate quantity such as bridge strength. The universal set Γ consists of all possible outcomes of the indeterminate quantity. If we believe that the possible bridge strengths range from 80 to 120 tons then the universal set is Γ = [80, 120]. The σ-algebra L should contain all events we are concerned about. It may be noted that event and proposition are synonymous here although the former is a set and the latter is a statement.


Assume the first event we are concerned about corresponds to the proposition that the bridge strength is less than or equal to 100 tons; it may be represented by Λ₁ = [80, 100]. Assume the second event we are concerned about corresponds to the proposition that the bridge strength is more than 100 tons; it may be represented by Λ₂ = (100, 120]. If we are only concerned about these two events, then we may construct a σ-algebra L containing the two events Λ₁ and Λ₂, for example L = {∅, Λ₁, Λ₂, Γ}. In this case we have exactly four events: ∅, Λ₁, Λ₂ and Γ. However, subsets like [80, 90] and [110, 120] are not events because they do not belong to L. It is worth mentioning that different σ-algebras are used for different purposes. The minimum requirement on the σ-algebra is that it contains all events about the problem concerned, and it is generally suggested to take the minimum σ-algebra that contains those events.
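To make Definition 5.1 concrete, the following toy sketch (not from the original text) mechanically checks the algebra conditions for the finite collection of Example 5.6; the four-event σ-algebra L = {∅, Λ₁, Λ₂, Γ} constructed above for the bridge strength has exactly the same structure, with Λ₁ and Λ₂ in place of {a} and {b, c}.

```python
# Toy check of the (sigma-)algebra conditions of Definition 5.1 on a finite universe.
from itertools import combinations

gamma = frozenset({'a', 'b', 'c'})
L = {frozenset(), frozenset({'a'}), frozenset({'b', 'c'}), gamma}

def is_sigma_algebra(L, gamma):
    if gamma not in L:                                   # condition (a)
        return False
    if any(gamma - lam not in L for lam in L):           # condition (b): closed under complement
        return False
    for r in range(2, len(L) + 1):                       # condition (c): closed under (finite) union
        for subset in combinations(L, r):
            if frozenset().union(*subset) not in L:
                return False
    return True

print(is_sigma_algebra(L, gamma))                                        # True
print(is_sigma_algebra({frozenset(), gamma, frozenset({'a'})}, gamma))   # False: {'b','c'} missing
```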

5.4 Measurable Function

A measurable function is an uncertain variable from an uncertainty space to the set of real numbers. A formal definition of measurable function is given below [8]. The concepts of uncertainty measure M and uncertainty space ðC; L; MÞ are illustrated in Sects. 5.5 and 5.6. Definition 5.5 A measurable function is a function ξ from an uncertainty space ðC; L; MÞ to the set of real numbers such that fn 2 Bg is an event for any borel set B of real numbers. The measurable function ξ(γ) is given Fig. 5.2 as function of C and set of real numbers.

Fig. 5.2 A measurable function


Let us consider a few examples to illustrate the concept of measurable function [8].

Example 5.9 Consider the uncertainty space (Γ, L, M) with Γ = {γ₁, γ₂} and M{γ₁} = M{γ₂} = 0.5. Then the function ξ(γ) given below is a measurable function:

\xi(\gamma) = \begin{cases} 0 & \text{if } \gamma = \gamma_1 \\ 1 & \text{if } \gamma = \gamma_2 \end{cases}

Example 5.10 A crisp number b may be regarded as a special measurable function; in fact, it is the constant function ξ(γ) ≡ b on the uncertainty space (Γ, L, M).

Definition 5.6 A measurable function ξ on the uncertainty space (Γ, L, M) is said to be (a) nonnegative if M{ξ < 0} = 0 and (b) positive if M{ξ ≤ 0} = 0.

Definition 5.7 Let ξ and η be measurable functions defined on the uncertainty space (Γ, L, M). We write ξ = η if ξ(γ) = η(γ) for almost all γ ∈ Γ.

Definition 5.8 Let ξ₁, ξ₂, …, ξ_n be measurable functions and let f be a real valued measurable function. Then ξ = f(ξ₁, ξ₂, …, ξ_n) is a measurable function defined by:

\xi(\gamma) = f(\xi_1(\gamma), \xi_2(\gamma), \ldots, \xi_n(\gamma)), \quad \forall \gamma \in \Gamma

Let us consider a few more examples to illustrate the above definitions [8].

Example 5.11 Let ξ₁ and ξ₂ be two measurable functions. Then the sum ξ = ξ₁ + ξ₂ is a measurable function defined by ξ(γ) = ξ₁(γ) + ξ₂(γ) for all γ ∈ Γ. The product ξ = ξ₁ξ₂ is also a measurable function, defined by ξ(γ) = ξ₁(γ) · ξ₂(γ) for all γ ∈ Γ.

The reader may argue whether ξ(γ) given in Definition 5.8 is a measurable function. The following theorem resolves this issue [8]. Theorem 5.1 Let ξ1, ξ2, …, ξn be measurable functions and let f be a real-valued measurable function. Then f(ξ1, ξ2, …, ξn) is a measurable function. Since ξ1, ξ2, …, ξn are measurable functions from an uncertainty space ðC; L; MÞ to the set of real numbers. Thus f(ξ1, ξ2, …, ξn) is also a measurable function from the uncertainty space ðC; L; MÞ to the set of real numbers. Hence f (ξ1, ξ2, …, ξn) is a measurable function. Based on the above definitions we proceed to present the ideas of uncertainty measure, uncertainty space, uncertainty distribution and uncertainty set which constitute the core of possibility theory towards the analysis of operational risk [3, 8].
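As a minimal illustration of Definition 5.8 and Example 5.11 (a toy sketch with assumed values, not part of the original text), measurable functions on the two-point uncertainty space of Example 5.9 can be represented as plain mappings and combined pointwise:

```python
# Measurable functions on a finite uncertainty space, combined via Definition 5.8.
GAMMA = ("g1", "g2")                      # the universal set {gamma_1, gamma_2}
M = {"g1": 0.5, "g2": 0.5}                # belief degrees of the singleton events

xi1 = {"g1": 0.0, "g2": 1.0}              # the measurable function of Example 5.9
xi2 = {"g1": 2.0, "g2": 3.0}              # another measurable function (assumed values)

def combine(f, *xis):
    """Return the mapping gamma -> f(xi_1(gamma), ..., xi_n(gamma))."""
    return {g: f(*(xi[g] for xi in xis)) for g in GAMMA}

xi_sum = combine(lambda a, b: a + b, xi1, xi2)    # xi1 + xi2
xi_prod = combine(lambda a, b: a * b, xi1, xi2)   # xi1 * xi2
print(xi_sum)    # {'g1': 2.0, 'g2': 4.0}
print(xi_prod)   # {'g1': 0.0, 'g2': 3.0}
```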


5.5 Uncertainty Measure

The uncertainty measure M is defined on the σ-algebra L [3]. A number M{Λ} is assigned to each event Λ to indicate the belief degree with which we believe Λ will happen. The assignment is not arbitrary and the uncertainty measure M must have certain mathematical properties. In order to deal rationally with belief degrees the following three axioms are worth mentioning [8]:

Axiom 5.1 Normality Axiom: M{Γ} = 1 for the universal set Γ.

Axiom 5.2 Duality Axiom: M{Λ} + M{Λᶜ} = 1 for any event Λ.

Axiom 5.3 Subadditivity Axiom: For every countable sequence of events Λ₁, Λ₂, … we have:

M\left\{\bigcup_{i=1}^{\infty} \Lambda_i\right\} \le \sum_{i=1}^{\infty} M\{\Lambda_i\}

The uncertainty measure is interpreted as the personal belief degree, not the frequency, of an uncertain event that may happen. It depends on the personal knowledge concerning the event, and it will change if the state of knowledge changes. The duality axiom is in fact an application of the law of truth conservation; this property ensures that the theory is consistent with the law of excluded middle and the law of contradiction. In addition, human thinking is always dominated by duality: if someone says a proposition is true with belief degree 0.6, then everybody will think that the proposition is false with belief degree 0.4. Given two events with known belief degrees, it is frequently asked how the belief degree of their union should be generated from the individual ones. There is no general rule for doing so. A number of surveys have shown that the belief degree of a union of events is neither the sum of the belief degrees of the individual events (probability measure) nor their maximum (possibility measure); perhaps there is no explicit relation between the union and the individuals except for the subadditivity axiom. Pathology occurs if the subadditivity axiom is not assumed. For example, suppose that a universal set contains 3 elements and define a set function that takes value 0 for each singleton and 1 for each event with at least 2 elements. Such a set function satisfies all the axioms except subadditivity. Based on the above discussion we give the formal definition of uncertainty measure [3].

Definition 5.9 The set function M is called an uncertainty measure if it satisfies the normality, duality and subadditivity axioms.


Theorem 5.2 Monotonicity Theorem: The uncertainty measure M is a monotone increasing set function, i.e. for any events Λ₁ ⊂ Λ₂ we have M{Λ₁} ≤ M{Λ₂}. Indeed, the normality axiom says that M{Γ} = 1 and the duality axiom says that M{Λ₁ᶜ} = 1 − M{Λ₁}. Since Λ₁ ⊂ Λ₂ we have Γ = Λ₁ᶜ ∪ Λ₂. By the subadditivity axiom we obtain 1 = M{Γ} ≤ M{Λ₁ᶜ} + M{Λ₂} = 1 − M{Λ₁} + M{Λ₂}. Thus M{Λ₁} ≤ M{Λ₂}.

Theorem 5.3 Suppose that M is an uncertainty measure. Then the empty set ∅ has uncertainty measure zero, i.e. M{∅} = 0. Since ∅ = Γᶜ and M{Γ} = 1, it follows from the duality axiom that M{∅} = 1 − M{Γ} = 0.

Theorem 5.4 Suppose that M is an uncertainty measure. Then for any event Λ we have 0 ≤ M{Λ} ≤ 1. This follows from the monotonicity theorem because ∅ ⊂ Λ ⊂ Γ and M{∅} = 0, M{Γ} = 1.

Theorem 5.5 Let Λ₁, Λ₂, … be a sequence of events with M{Λ_i} → 0 as i → ∞. Then for any event Λ we have:

\lim_{i \to \infty} M\{\Lambda \cup \Lambda_i\} = \lim_{i \to \infty} M\{\Lambda \setminus \Lambda_i\} = M\{\Lambda\}

In other words, an uncertainty measure remains unchanged if the event is enlarged or reduced by an event with uncertainty measure zero. It follows from the monotonicity theorem and the subadditivity axiom that M{Λ} ≤ M{Λ ∪ Λ_i} ≤ M{Λ} + M{Λ_i} for each i, so M{Λ ∪ Λ_i} → M{Λ} as M{Λ_i} → 0. Since (Λ\Λ_i) ⊂ Λ ⊂ ((Λ\Λ_i) ∪ Λ_i) we have M{Λ\Λ_i} ≤ M{Λ} ≤ M{Λ\Λ_i} + M{Λ_i}, hence M{Λ\Λ_i} → M{Λ} as M{Λ_i} → 0.

Theorem 5.6 Asymptotic Theorem: For any events Λ₁, Λ₂, … we have:

\lim_{i \to \infty} M\{\Lambda_i\} > 0 \ \text{ if } \Lambda_i \uparrow \Gamma, \qquad \lim_{i \to \infty} M\{\Lambda_i\} < 1 \ \text{ if } \Lambda_i \downarrow \emptyset

Assume Λ_i ↑ Γ. Since Γ = ⋃_i Λ_i, it follows from the subadditivity axiom that 1 = M{Γ} ≤ Σ_{i=1}^{∞} M{Λ_i}. Since M{Λ_i} is increasing with respect to i we have lim_{i→∞} M{Λ_i} > 0. If Λ_i ↓ ∅ then Λ_iᶜ ↑ Γ. It follows from the first inequality and the duality axiom that lim_{i→∞} M{Λ_i} = 1 − lim_{i→∞} M{Λ_iᶜ} < 1. Thus the theorem is proved.
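The axioms and Theorem 5.2 can be checked mechanically on a small finite example. The sketch below (with hand-picked belief degrees that are purely illustrative, not from the original text) verifies normality, duality, subadditivity and monotonicity for an uncertainty measure on the power set of a three-point universal set:

```python
# Brute-force verification of the uncertainty measure axioms on a 3-element universe.
from itertools import chain, combinations

gamma = frozenset({'g1', 'g2', 'g3'})
events = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(gamma), r) for r in range(len(gamma) + 1))]

# assumed belief degrees; the values on complements are forced by the duality axiom
M = {frozenset(): 0.0, gamma: 1.0,
     frozenset({'g1'}): 0.6, frozenset({'g2'}): 0.3, frozenset({'g3'}): 0.2,
     frozenset({'g2', 'g3'}): 0.4, frozenset({'g1', 'g3'}): 0.7, frozenset({'g1', 'g2'}): 0.8}

normality = M[gamma] == 1.0
duality = all(abs(M[e] + M[gamma - e] - 1.0) < 1e-12 for e in events)
subadditive = all(M[a | b] <= M[a] + M[b] + 1e-12 for a in events for b in events)
monotone = all(M[a] <= M[b] + 1e-12 for a in events for b in events if a <= b)

print(normality, duality, subadditive, monotone)   # True True True True
```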


5.6 Uncertainty Space

We first start with some formal definitions based on the concepts highlighted in the earlier sections [3, 8].

Definition 5.10 Let Γ be a nonempty set, let L be a σ-algebra over Γ and let M be an uncertainty measure. Then the triplet (Γ, L, M) is called an uncertainty space.

Definition 5.11 An uncertainty space (Γ, L, M) is called complete if for any Λ₁, Λ₂ ∈ L with M{Λ₁} = M{Λ₂} and any subset A with Λ₁ ⊂ A ⊂ Λ₂, one has A ∈ L. In this case we also have M{A} = M{Λ₁} = M{Λ₂}.

Definition 5.12 An uncertainty space (Γ, L, M) is called continuous if for any events Λ₁, Λ₂, … we have M{lim_{i→∞} Λ_i} = lim_{i→∞} M{Λ_i} provided that lim_{i→∞} Λ_i exists.

Based on the above definitions we define the product uncertainty measure [3]. Let (Γ_k, L_k, M_k) be uncertainty spaces for k = 1, 2, …. Consider Γ = Γ₁ × Γ₂ × ⋯, the set of all ordered tuples of the form (γ₁, γ₂, …) where γ_k ∈ Γ_k for k = 1, 2, …. A measurable rectangle in Γ is a set Λ = Λ₁ × Λ₂ × ⋯ where Λ_k ∈ L_k for k = 1, 2, …. The smallest σ-algebra containing all measurable rectangles of Γ is called the product σ-algebra, denoted by L = L₁ × L₂ × ⋯. The product uncertainty measure M on the product σ-algebra L is then defined by the following product axiom.

Axiom 5.4 Product Axiom: Let (Γ_k, L_k, M_k) be uncertainty spaces for k = 1, 2, …. The product uncertainty measure M is an uncertainty measure satisfying

M\left\{\prod_{k=1}^{\infty} \Lambda_k\right\} = \bigwedge_{k=1}^{\infty} M_k\{\Lambda_k\}

where the Λ_k are arbitrarily chosen events from L_k for k = 1, 2, … respectively.

It is to be noted that the above axiom defines a product uncertainty measure only for rectangles. The question arises how to extend the uncertainty measure M from the class of rectangles to the product σ-algebra L. This issue is illustrated by Fig. 5.3. The uncertain measure of Λ (the disk) is the acreage of its maximal inscribed rectangle Λ₁ × Λ₂ if it is greater than 0.5. Otherwise its complement Λᶜ needs to be examined: if the maximal inscribed rectangle of Λᶜ has value greater than 0.5 then M{Λᶜ} equals that value and M{Λ} = 1 − M{Λᶜ}. If neither Λ nor Λᶜ contains an inscribed rectangle with value greater than 0.5 then M{Λ} = 0.5. For each event Λ ∈ L we thus have:

M\{\Lambda\} =
\begin{cases}
\displaystyle\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\}, & \text{if } \displaystyle\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} > 0.5 \\
1 - \displaystyle\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\}, & \text{if } \displaystyle\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} > 0.5 \\
0.5, & \text{otherwise}
\end{cases}


Fig. 5.3 The extension from rectangles to the product σ-algebra

The sum of the uncertain measures of the maximal rectangles in Λ and Λᶜ is always less than or equal to 1, i.e.

\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} + \sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} \le 1

This means that at most one of these two suprema is greater than 0.5, so the above expression for M{Λ} is well defined. It is clear that for each Λ ∈ L the uncertain measure M{Λ} defined above takes possible values in the following interval:

\left[\ \sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\},\ \ 1 - \sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\}\ \right]

If the sum of the uncertainty measures of the maximal rectangles in Λ and Λᶜ is exactly 1, i.e.

\sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} + \sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda^c}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\} = 1


then the product uncertainty measure M{Λ} is simplified as follows:

M\{\Lambda\} = \sup_{\Lambda_1 \times \Lambda_2 \times \cdots \subset \Lambda}\ \min_{1 \le k < \infty} M_k\{\Lambda_k\}

Theorem 5.7 The product uncertainty measure M{Λ} defined above is an uncertainty measure. The proof of this theorem is beyond the scope of the text and is available in [8] for interested readers.

Definition 5.13 Let (Γ_k, L_k, M_k) be uncertainty spaces for k = 1, 2, …. Write Γ = Γ₁ × Γ₂ × ⋯, L = L₁ × L₂ × ⋯ and M = M₁ ∧ M₂ ∧ ⋯. Then the triplet (Γ, L, M) is called a product uncertainty space. For further details on uncertainty spaces interested readers can refer to [3, 8].

5.7 Uncertainty Distribution

The concept of uncertainty distribution is related to an uncertain process [8]. It is basically a sequence of uncertainty distributions of uncertain variables indexed by time. Thus an uncertainty distribution of uncertain process is a surface rather than a curve which takes different shapes and forms depending upon the application concerned. A formal definition of uncertainty distribution is given below. Definition 5.14 An uncertain process Xt is said to have an uncertainty distribution Ut ð xÞ if at each time t the uncertain variable Xt has the uncertainty distribution Ut ð xÞ. We illustrate the concept of uncertainty distribution with examples [3, 8]. Example 5.12 The linear uncertain process Xt  Lðat; btÞ has the following uncertainty distribution:

U t ð xÞ ¼

8 > < 0

if

x  at

:

if if

at  x  bt x bt

xat > ðbaÞt

1

Example 5.13 The zigzag uncertain process Xt  Z ðat; bt; ctÞ has the following uncertainty distribution:

U t ð xÞ ¼

8 > > > >
x þ ct2bt > > > 2ðcbÞt

:

1

if if

x  at at  x  bt

if

bt  x  ct

if

x ct

[email protected]

86

5

Possibility Theory for Operational Risk

Example 5.14 The normal uncertain process Xt  N ðet; rtÞ has the following uncertainty distribution:  Ut ðxÞ ¼

  pðet  xÞ 1 1 þ exp pffiffiffi 3rt

Example 5.15 The lognormal uncertain process Xt  LOGN ðet; rtÞ has the following uncertainty distribution:  U t ð xÞ ¼

 1 þ exp

pðet  ln xÞ pffiffiffi 3rt

1

Theorem 5.8 A function Ut ð xÞ : T  R ! ½0; 1 is the uncertainty distribution of uncertain process iff at each time t it is a monotone increasing function with respect to x except Ut ð xÞ 0 and Ut ðxÞ 1. The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. Theorem 5.9 Let Xt be an uncertain process with uncertainty distribution Ut ð xÞ and let f(x) be a measurable function. Then f(Xt) is also an uncertain process. Furthermore (i) if f(x) is a strictly increasing function then f(Xt) has an uncertainty distribution Wt ð xÞ ¼ Ut ðf 1 ðxÞÞ and (ii) if f(x) is a strictly decreasing function and Ut ð xÞ is continuous with respect to x then f(Xt) has an uncertainty distribution 1  Wt ð xÞ ¼ Ut ðf 1 ð xÞÞ. The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. Example 5.16 Let Xt be an uncertain process with uncertainty distribution Ut ð xÞ. Then the uncertain process aXt + b has the following uncertainty distribution:  W t ð xÞ ¼

if a [ 0 Ut ððx  bÞ=aÞ 1  Ut ððx  bÞ=aÞ if a\0

Now we define the concept of regular uncertainty distribution [8]. Definition 5.15 An uncertainty distribution Ut ðxÞ is said to be regular if at each time t it is a continuous and strictly increasing function with respect to x at which 0\Ut ð xÞ\1 such that limx!1 Ut ðxÞ ¼ 0 and limx! þ 1 Ut ð xÞ ¼ 1. It is clear that linear uncertainty distribution, zigzag uncertainty distribution, normal uncertainty distribution and lognormal uncertainty distribution of uncertain process are all regular. It is to be noted that a crisp initial value X0 have been stipulated towards regular uncertainty distribution. By this an initial value of regular uncertain process is allowed to be a constant whose uncertainty distribution is as follows:

[email protected]

5.7 Uncertainty Distribution

87

 U 0 ð xÞ ¼

0 1

if x X0 if x\X0

Here U0 ðxÞ is a continuous and strictly increasing function with respect to x at which 0\U0 ðxÞ\1 even though it is discontinuous at X0. Similarly we define the concept of inverse uncertainty distribution [8]. Definition 5.16 Let Xt be an uncertain process with regular uncertainty distribution Ut ð xÞ. Then the inverse function U1 t ðaÞ is called the inverse uncertainty distribution of Xt. It is noted that at each time t the inverse uncertainty distribution U1 t ðaÞ is well defined on the open interval ð0; 1Þ. If required the domain is extended to [0,1] via 1 1 1 U1 t ð0Þ ¼ lima#0 Ut ðaÞ and Ut ð1Þ ¼ lima"1 Ut ðaÞ. We illustrate the concept of inverse uncertainty distribution with examples [3, 8]. Example 5.17 The linear uncertain process Xt  Lðat; btÞ has the following inverse uncertainty distribution: U1 t ðaÞ ¼ ð1  aÞat þ abt Example 5.18 The zigzag uncertain process Xt  Z ðat; bt; ctÞ has the following inverse uncertainty distribution: U1 t ð aÞ

 ¼

ð1  2aÞat þ 2abt ð2  2aÞbt þ ð2a  1Þct

if a\0:5 if a 0:5

Example 5.19 The normal uncertain process Xt  N ðet; rtÞ has the following inverse uncertainty distribution: U1 t ð aÞ

pffiffiffi rt 3 a ¼ et þ ln p 1a

Example 5.20 The lognormal uncertain process Xt  LOGN ðet; rtÞ has the following uncertainty distribution: U1 t ð aÞ

pffiffiffi   rt 3 a ln ¼ exp et þ p 1a

Theorem 5.10 A function U1 t ðaÞ : T  ð0; 1Þ ! R is an inverse uncertainty distribution of uncertain process if and only if at each time t it is a continuous and strictly increasing function with respect to a. The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers.

[email protected]

88

5

Possibility Theory for Operational Risk

It is to be noted that a crisp initial value X0 has the following inverse uncertainty distribution: U1 0 ð aÞ X 0 : Here U1 0 ðaÞ is a continuous and strictly increasing function with respect to α ∊ (0,1) even though it is not.

5.8

Uncertainty Set

The uncertainty set is a set valued function on an uncertainty space. It models the unsharp concepts that are essentially sets but their boundaries are not sharply described because of the ambiguity of human language. Some typical examples include young, tall, warm and most. The formal definition of uncertainty set is given as follows [3, 8]: Definition 5.17 An uncertainty set is a function ξ from an uncertainty space ðC; L; MÞ to a collection of sets of real numbers such that both fB  ng and fn  Bg are events for any borel set B of real numbers. It is clear that the uncertainty set is different from random set [8] and fuzzy set [10]. The essential difference among them is that different measures are used i.e. random set uses probability measure, fuzzy set uses possibility measure and uncertain set uses uncertain measure. The next issue which is worth mentioning is the difference between the uncertainty variable and the uncertainty set [8]. Both of them belong to the same broad category of uncertainty concepts. They are just differentiated by their mathematical definitions. The former refers to one value while the latter to a collection of values. Essentially the difference between the uncertainty variable and the uncertainty set focuses on the property of exclusivity. If the concept has exclusivity then it is an uncertainty variable. Otherwise it is an uncertainty set. Consider the statement John is a young man. If we are interested in John’s real age then young is an uncertainty variable because it is an exclusive concept as John’s age cannot be more than one value. For example if John is 20 years old then it is impossible that John is 25 years old. In other words John is 20 years old does exclude the possibility that John is 25 years old. By contrast if we are interested in what ages can be regarded young then young is an uncertain set because the concept now has no exclusivity. For example both 20-year-old and 25-year-old men can be considered young. In other words a 20-year-old man is young does not exclude the possibility that a 25-yearold man is young. Example 5.21 Consider an uncertainty space ðC; L; MÞ as (γ1, γ2, γ3) with power set L. Then the set valued function ξ(γ) given below is an uncertainty set on ðC; L; MÞ as shown in Fig. 5.4.

[email protected]

5.8 Uncertainty Set

89

Fig. 5.4 The uncertainty set nðcÞ on ðCk ; Lk ; Mk Þ

8 < ½1; 3 nðcÞ ¼ ½2; 4 : ½3; 5

if c ¼ c1 if c ¼ c2 if c ¼ c3

Example 5.22 Consider an uncertainty space ðC; L; MÞ as R with borel algebra L. Then the set valued function nðcÞ ¼ ½c; c þ 1; 8c 2 C is an uncertain set on ðC; L; MÞ. Now we define the concepts of union, intersection and complement on uncertainty set [8]. Definition 5.18 Let ξ and η be two uncertain sets on the uncertainty space ðC; L; MÞ. Then the union ξ [ η of the uncertainty sets ξ and η is ðn [ gÞðcÞ ¼ nðcÞ [ gðcÞ; 8c 2 C; the intersection ξ \ η of the uncertainty sets ξ and g is ðn \ gÞðcÞ ¼ nðcÞ \ gðcÞ; 8c 2 C and the complement nc of the uncertainty set ξ is nc ðcÞ ¼ nðcÞc ; 8c 2 C. Example 5.23 Consider an uncertainty space ðC; L; MÞ as (γ1, γ2, γ3). Let ξ and η be two uncertainty sets such that: 8 < ½1; 2 if c ¼ c1 nðcÞ ¼ ½1; 3 if c ¼ c2 : ½1; 4 if c ¼ c3 8 < ½2; 3 gðcÞ ¼ ½2; 4 : ½2; 5

if c ¼ c1 if c ¼ c2 if c ¼ c3

[email protected]

90

5

The union of uncertainty sets ξ and η is: 8 < ½1; 3Þ ðn [ gÞðcÞ ¼ ½1; 4Þ : ½1; 5Þ The intersection of uncertainty sets ξ and η 8 < ; ðn \ gÞðcÞ ¼ ð2; 3 : ð2; 4

Possibility Theory for Operational Risk

if c ¼ c1 if c ¼ c2 if c ¼ c3 is: if c ¼ c1 if c ¼ c2 if c ¼ c3

The complement of uncertainty sets ξ and η are: 8 < ð1; 1Þ [ ð2; þ 1Þ if c ¼ c1 nc ðcÞ ¼ ð1; 1Þ [ ð3; þ 1Þ if c ¼ c2 : ð1; 1Þ [ ð4; þ 1Þ if c ¼ c3 8 < ð1; 2 [ ½3; þ 1Þ gc ðcÞ ¼ ð1; 2 [ ½4; þ 1Þ : ð1; 2 [ ½5; þ 1Þ

if c ¼ c1 if c ¼ c2 if c ¼ c3

Theorem 5.11 Let ξ be an uncertainty set and R be the set of real numbers. Then n [ R ¼ R and n \ R ¼ n. For each c 2 C it follows from the definition of uncertainty set that the union is ðn [ RÞðcÞ ¼ nðcÞ [ R ¼ R. Thus we have n [ R ¼ R. In addition the intersection is ðn \ RÞðcÞ ¼ nðcÞ \ R ¼ nðcÞ. Thus we have n \ R ¼ n. Theorem 5.12 Let ξ be an uncertainty set and ; be the empty set. Then n [ ; ¼ n and n \ ; ¼ ;. For each c 2 C it follows from the definition of uncertainty set that the union is ðn [ ;ÞðcÞ ¼ nðcÞ [ ; ¼ nðcÞ. Thus we have n [ ; ¼ n. In addition the intersection is ðn \ ;ÞðcÞ ¼ nðcÞ \ ; ¼ ;. Thus we have n \ ; ¼ ;. Theorem 5.13 Idempotent Law: Let ξ be an uncertainty set. Then ξ [ ξ = ξ and ξ \ ξ = ξ. For each c 2 C it follows from the definition of uncertainty set that the union is (ξ [ ξ)(γ) = ξ(γ) [ ξ(γ) = ξ(γ). Thus we have ξ [ ξ = ξ. In addition the intersection is (ξ \ ξ)(γ) = ξ(γ) \ ξ(γ) = ξ(γ). Thus we have ξ \ ξ = ξ. Theorem 5.14 Double Negation Law: Let ξ be an uncertainty set. Then (ξc)c = ξ. For each c 2 C it follows from the definition of complement that c ðnc Þc ðcÞ ¼ ðnc ðcÞÞc ¼ ðnðcÞc Þ ¼ nðcÞ. Thus we have (ξc)c = ξ. Theorem 5.15 Law of Excluded Middle and Law of Contradiction: Let ξ be an uncertainty set and ξ c be its complement. Then n [ nc ¼ R and n \ nc ¼ ;.

[email protected]

5.8 Uncertainty Set

91

For each c 2 C it follows from the definition of uncertainty set that the union is ðn [ nc ÞðcÞ ¼ nðcÞ [ nc ðcÞ ¼ nðcÞ [ nðcÞc ¼ R. Thus we have n [ nc ¼ R. In addition the intersection is ðn \ nc ÞðcÞ ¼ nðcÞ \ nc ðcÞ ¼ nðcÞ \ nðcÞc ¼ ;. Thus we have n \ nc ¼ ;. Theorem 5.16 Communicative Law: Let ξ and η be uncertainty sets. Then ξ [ η = η [ ξ and ξ \ η = η \ ξ. For each c 2 C it follows from the definition of uncertainty set that (ξ [ η) (γ) = ξ(γ) [ η(γ) = η(γ) [ ξ(γ) = (η [ ξ)(γ). Thus we have ξ [ η = η [ ξ. In addition it follows that (ξ \ η)(γ) = ξ(γ) \ η(γ) = η(γ) \ ξ(γ) = (η \ ξ)(γ). Thus we have ξ \ η = η \ ξ. Theorem 5.17 Associative Law: Let ξ, η and s be uncertainty sets. Then ðn [ gÞ [ s ¼ n [ ðg [ sÞ and ðn \ gÞ \ s ¼ n \ ðg \ sÞ. For each c 2 C it follows from the definition of uncertainty set that ððn [ gÞ [ sÞðcÞ ¼ ððnðcÞ [ gðcÞÞÞ [ sðcÞ ¼ nðcÞ [ ðgðcÞ [ sðcÞÞ ¼ ðn [ ðg [ sÞÞðcÞ. Thus we have ðn [ gÞ [ s ¼ n [ ðg [ sÞ. In addition it follows that ððn \ gÞ \ sÞðcÞ ¼ ððnðcÞ \ gðcÞÞÞ \ sðcÞ ¼ nðcÞ \ ðgðcÞ \ sðcÞÞ ¼ ðn \ ðg \ sÞÞðcÞ. Thus we have ðn \ gÞ \ s ¼ n \ ðg \ sÞ. Theorem 5.18 Distributive Law: Let ξ, η and s be uncertainty sets. Then n [ ðg [ sÞ ¼ ðn [ gÞ \ ðn [ sÞ and n \ ðg [ sÞ ¼ ðn \ gÞ [ ðn \ sÞ. For each c 2 C it follows from the definition of uncertainty set that ðn [ ðg \ sÞÞ ðcÞ ¼ nðcÞ [ ððgðcÞ \ sðcÞÞÞ ¼ ðnðcÞ [ gðcÞÞ \ ðnðcÞ [ sðcÞÞ ¼ ððn [ gÞ \ ðn [ sÞÞðcÞ: Thus we have n [ ðg \ sÞ ¼ ðn [ gÞ \ ðn [ sÞ. In addition it follows that ðn \ ðg [ sÞÞ ðcÞ ¼ nðcÞ \ ððgðcÞ [ sðcÞÞÞ ¼ ðnðcÞ \ gðcÞÞ [ ðnðcÞ \ sðcÞÞ ¼ ððn \ gÞ [ ðn \ sÞÞðcÞ: Thus we have n \ ðg [ sÞ ¼ ðn \ gÞ [ ðn \ sÞ. Theorem 5.19 Absorbtion Law: Let ξ and η be uncertainty sets. Then ξ [ (ξ \ η) = ξ and ξ \ (ξ [ η) = ξ. For each c 2 C it follows from the definition of uncertainty set that (ξ [ (ξ \ η)) (γ) = ξ(γ) [ ((ξ(γ) \ η(γ))) = ξ(γ). Thus we have ξ [ (ξ \ η) = ξ. In addition since (ξ \ (ξ [ η))(γ) = ξ(γ) \ ((ξ(γ) [ η(γ))) = ξ(γ). Thus we get ξ \ (ξ [ η) = ξ. Theorem 5.20 De Morgan’s Law: Let ξ and η be uncertainty sets. Then ðn [ gÞc ¼ nc \ gc and (ξ \ η)c = ξ c [ ηc. For each c 2 C it follows from the definition of complement that (ξ [ η)c(γ) = ((ξ(γ) [ η(γ)))c = ξ(γ)c \ η(γ)c = (ξc \ ηc)(γ). Thus we have (ξ [ η)c = ξc \ ηc. In addition since (ξ \ η)c(γ) = ((ξ(γ) \ η(γ)))c = ξ(γ)c [ η(γ)c = (ξc [ ηc)(γ). Thus we get (ξ \ η)c = ξc [ ηc. Now we define the concept of function of uncertainty sets [3, 8]. Definition 5.19 Let ξ1, ξ2, …, ξn be uncertainty sets on the uncertainty space ðC; L; MÞ and let f be a measurable function. Then ξ = f(ξ1, ξ2, …, ξn) is an uncertainty set defined by nðcÞ ¼ f ðn1 ðcÞ; n2 ðcÞ; . . .; nn ðcÞÞ; c 8 C.

[email protected]

92

5

Possibility Theory for Operational Risk

Example 5.24 Let ξ be an uncertainty set on the uncertainty space ðC; L; MÞ and let A be a crisp set. Then ξ + A is also an uncertainty set determined by ðn þ AÞðcÞ ¼ nðcÞ þ A; 8c 2 C. Example 5.25 Consider an uncertainty space ðC; L; MÞ as (γ1, γ2, γ3). Let ξ and η be two uncertainty sets such that: 8 < ½1; 3 nðcÞ ¼ ½2; 4 : ½3; 5

if c ¼ c1 if c ¼ c2 if c ¼ c3

8 < ½2; 3 gðcÞ ¼ ½2; 4 : ½2; 5

if c ¼ c1 if c ¼ c2 if c ¼ c3

Their sum is 8 < ½3; 5 ðn þ gÞðcÞ ¼ ½3; 7 : ½3; 9

if c ¼ c1 if c ¼ c2 if c ¼ c3

Their product is 8 < ½2; 6 ðn  gÞðcÞ ¼ ½2; 12 : ½2; 20

if c ¼ c1 if c ¼ c2 if c ¼ c3

Next we illustrate the concept of membership function in uncertainty set [3, 8]. Definition 5.20 An uncertainty set ξ is said to have a membership function μ if for any borel set B of real numbers we have MfB  ng ¼ |{z} inf lð xÞ and x2B

Mfn  Bg ¼ 1  sup lð xÞ which are known as measure inversion formulas. |{z} x2Bc

The expressions MfB  ng and Mfn  Bg are given in Fig. 5.5. When an uncertainty set ξ does have a membership function μ it follows from the first measure inversion formula that lðxÞ ¼ Mfx 2 ng. The value of μ(x) represents the membership degree that x belongs to the uncertainty set ξ. If μ(x) = 1 then x completely belongs to ξ; if μ(x) = 0 then x does not belong to ξ at all. Thus the larger the value of μ(x) is the more true x belongs to ξ. If an element x belongs to an uncertainty set with membership degree α then x does not belong to the uncertainty set with membership degree 1 − α. This fact follows from the duality property of uncertainty measure. In other words if the uncertainty set has a membership function μ then for any real number x we have sMfx 6¼ ng ¼ 1  Mfx ¼ ng ¼ 1  lðxÞ. That is Mfx 6¼ ng ¼ 1  lðxÞ.

[email protected]

5.8 Uncertainty Set

93

Fig. 5.5 MfB  ng ¼ |{z} inf lðxÞ and Mfn  Bg ¼ 1  sup lðxÞ |{z} x2B

x2Bc

We now define three important membership functions viz. rectangular, triangular and trapezoidal membership functions [3, 8]. Definition 5.21 An uncertainty set ξ is called rectangular if it has a membership function:  lð xÞ ¼

1 0

if a  x  b otherwise

The membership function μ(x) is denoted by (a, b) where a and b are real numbers with a < b. Essentially it is the membership function [a, b]. Definition 5.22 An uncertainty set ξ is called triangular if it has a membership function: lð xÞ ¼

 xa ba xc bc

if a  x  b if b  x  c

The membership function μ(x) is denoted by (a, b, c) where a, b, c are real numbers with a < b < c. Definition 5.23 An uncertainty set ξ is called trapezoidal if it has a membership function: 8 xa < ba if a  x  b if b  x  c lð xÞ ¼ 1 : xd if cxd cd The membership function μ(x) is denoted by (a, b, c, d) where a, b, c, d are real numbers with a < b < c < d. These membership functions are represented in Fig. 5.6. Let us now discuss terms like young, tall, warm and most in terms of membership functions in uncertainty set [3, 8].

[email protected]

94

5

Possibility Theory for Operational Risk

Fig. 5.6 The rectangular, triangular and trapezoidal membership functions

Sometimes we say those students are young. What ages can be considered young? In this case young may be regarded as an uncertainty set whose membership function is: 8 0 > > > > < ðx  15Þ=5 lð xÞ ¼ 1 > > ð45  xÞ=10 > > : 0

if x  15 if 15  x  20 if 20  x  35 if 35  x  45 if x 45

It is to be noted that we do not say young if the age is below 15. The membership function of young is given in Fig. 5.7. Sometimes we say those sportsmen are tall. What heights (centimeters) can be considered tall? In this case tall may be regarded as an uncertainty set whose membership function is: 8 0 > > > > < ðx  180Þ=5 lðxÞ ¼ 1 > > ð200  xÞ=5 > > : 0

if x  180 if 180  x  185 if 185  x  195 if 195  x  200 if x 200

Fig. 5.7 The membership function of young

[email protected]

5.8 Uncertainty Set

95

It is to be noted that we do not say tall if the height is over 200 cm. The membership function of tall is given in Fig. 5.8. Sometimes we say those days are warm. What temperatures can be considered warm? In this case warm may be regarded as an uncertainty set whose membership function is: 8 0 > > > > < ðx  15Þ=3 lðxÞ ¼ 1 > > ð28  xÞ=4 > > : 0

if x  15 if 15  x  18 if 18  x  24 if 24  x  28 if x 28

The membership function of warm is given in Fig. 5.9. Sometimes we say most students are boys. What percentages can be considered most? In this case most may be regarded as an uncertainty set whose membership function is: 8 0 if 0  x  0:70 > > > > < 20ðx  0:7Þ if 0:70  x  0:75 l ð xÞ ¼ 1 if 0:75  x  0:85 > > 20 ð 0:9  x Þ if 0:85  x  0:90 > > : 0 if 0:90  x  1:00

Fig. 5.8 The membership function of tall

Fig. 5.9 The membership function of warm

[email protected]

96

5

Possibility Theory for Operational Risk

Fig. 5.10 The membership function of most

The membership function of most is given Fig. 5.10. It is known that some uncertainty sets do not have membership functions. Now question arises what uncertainty sets have membership functions? Generally we have two possible cases [8]: Case I If an uncertainty set ξ degenerates to a crisp set A then ξ has a membership function that is just the characteristic function of A. Case II Let ξ be an uncertain set taking values in a nested class of sets. That is for any given γ1 and c2 2 C at least one of the alternatives holds either (i) ξ(γ1)  ξ(γ2) or (ii) ξ(γ2)  ξ(γ1). Then the uncertainty set ξ has a membership function. Theorem 5.21 A real valued function μ is a membership function iff 0 ≤ μ(x) ≤ 1. If μ is a membership function of some uncertain set ξ then lðxÞ ¼ Mfx 2 ng and 0  lðxÞ  1. Conversely suppose μ is a function such that 0 ≤ μ(x) ≤ 1. Consider an uncertainty space ðC; L; MÞ to be the interval [0, 1] with borel algebra. Then the uncertainty set nðcÞ ¼ fx 2 Rjlð xÞ cg has the membership function μ as shown in Fig. 5.11. It is to be noted that ξ is not the unique uncertainty set whose membership function is μ.

Fig. 5.11 The membership function l of the uncertainty set nðcÞ

[email protected]

5.8 Uncertainty Set

97

Definition 5.24 An uncertainty set ξ is said to be nonempty if nðcÞ 6¼ ; for almost all c 2 C i.e. Mfn ¼ ;g ¼ 0. It is to be noted that nonempty uncertainty set does not necessarily have a membership function. However, when it does have the following theorem gives a sufficient and necessary condition of membership function. Theorem 5.22 Let ξ be an uncertain set whose membership function μ exists. Then ξ is nonempty iff sup l(x) ¼ 1. |fflfflfflffl{zfflfflfflffl} x2R

Since the membership function μ exists it follows from the measure inversion that Mfn ¼ ;g ¼ 1  sup l(x) ¼ 1  sup l(x) . Thus ξ is a nonempty uncertainty |fflfflfflffl{zfflfflfflffl} |fflfflfflffl{zfflfflfflffl} x2;c

x2R

set iff the preceding expression holds. We now proceed to define the concept of inverse membership function [8]. Definition 5.25 Let ξ be an uncertainty set with membership function μ. Then the set valued function l1 ðaÞ ¼ fx 2 RjlðxÞ ag; 8a 2 ½0; 1 is called the inverse membership function of n. Sometimes for each given α the set μ−1(α) is called the α-cut of μ. The Fig. 5.12 represents the inverse membership function l1 ðaÞ. It is clear that inverse membership function always exists. It may be noted that μ−1(α) may take values of the empty set ;. Example 5.26 The rectangular uncertainty set has an inverse membership function l1 ðaÞ ½a; b. Example 5.27 The triangular uncertainty set ξ = (a, b, c) has an inverse membership functionl1 ðaÞ ½ð1  aÞa þ ab; ab þ ð1  aÞc: Example 5.28 The trapezoidal uncertainty set ξ = (a, b, c, d) has an inverse membership function l1 ðaÞ ½ð1  aÞa þ ab; ac þ ð1  aÞd .

Fig. 5.12 The inverse membership function l1 ðaÞ

[email protected]

98

5

Possibility Theory for Operational Risk

Theorem 5.23 Let ξ be an uncertainty set with inverse membership function l1 ðaÞ. Then the membership function of ξ is determined by   1 lð xÞ ¼ sup a 2 ½0; 1jx 2 l ðaÞ . It is easy to verify that μ−1 is the inverse membership function of μ. Thus μ is the membership function of n. Theorem 5.24 A function l1 ðaÞ is an inverse membership function iff it is a monotone decreasing set valued function with respect to a 2 ½0; 1. That is l1 ðaÞ  l1 ðbÞ; a [ b. Suppose l1 ðaÞ is an inverse membership function of some uncertainty set. For any x ∊ μ−1(α) we have lð xÞ a. Since α > β we have μ(x) ≥ β and then x 2 l1 ðbÞ. Hence l1 ðaÞ  l1 ðbÞ. Conversely suppose l1 ðaÞ is a monotone decreasing set valued function. Then μ(x) = sup {α ∊ [0, 1]|x ∊ μ−1(α)} is a membership function of some uncertainty set. It is easy to verify that l1 ðaÞ is the inverse membership function of the uncertainty set. It is to be noted that the uncertainty set does not necessarily take values of its αcuts. In fact an α-cut is included in the uncertainty set with uncertainty measure a. Conversely the uncertainty set is included in its α-cut with uncertainty measure 1  a. This leads to the following theorem. Theorem 5.25 Let ξ be an uncertainty function  set with inverse membership   l1 ðaÞ. Then for each a 2 ½0; 1 M l1 ðaÞ  n a and M n  l1 ðaÞ 1  a. For each x ∊μ−1(α) we have  lðxÞ a. It follows from the measure inversion formula that M l1 ðaÞ  n ¼ inf lð xÞ a. |fflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflffl} −1

x2l1 ðaÞ

For each x 62μ (α) we have lð xÞ\a. It follows from the measure inversion  formula that M n  l1 ðaÞ ¼ 1  sup lð xÞ 1  a. |fflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflffl} x62l1 ðaÞ

The regular membership function is now defined as follows [8]: Definition 5.26 A membership function μ is said to be regular if there exists a point x0 such that μ(x0) = 1 and μ(x) is unimodal about the mode x0. Thus μ(x) is increasing on ð1; x0  and decreasing on ½x0 ; þ 1Þ. If μ is a regular membership function then μ−1(α) is an interval for each a. In this −1 case the function μ−1 l (α) = inf μ (α) is called the left inverse membership function −1 and the function μr (α) = sup μ−1(α) is called the right inverse membership function. It is obvious that the left inverse membership function μ−1 l (α) is increasing and (α) is decreasing with respect to a. the right inverse membership function μ−1 r Now we illustrate the concept of independence in uncertainty sets which is defined as follows [3, 8]:


Definition 5.27 The uncertainty sets ξ1, ξ2, …, ξn are independent if for any Borel sets B1, B2, …, Bn of real numbers the following relations hold:

M{∩_{i=1}^n (ξi* ⊂ Bi)} = ∧_{i=1}^n M{ξi* ⊂ Bi}

M{∪_{i=1}^n (ξi* ⊂ Bi)} = ∨_{i=1}^n M{ξi* ⊂ Bi}

Here ξi* are arbitrarily chosen from {ξi, ξi^c}, i = 1, 2, …, n respectively. It is to be noted that the intersection relation represents 2^n equations. For example, when n = 2 the four equations are:

M{(ξ1 ⊂ B1) ∩ (ξ2 ⊂ B2)} = M{ξ1 ⊂ B1} ∧ M{ξ2 ⊂ B2}
M{(ξ1^c ⊂ B1) ∩ (ξ2 ⊂ B2)} = M{ξ1^c ⊂ B1} ∧ M{ξ2 ⊂ B2}
M{(ξ1 ⊂ B1) ∩ (ξ2^c ⊂ B2)} = M{ξ1 ⊂ B1} ∧ M{ξ2^c ⊂ B2}
M{(ξ1^c ⊂ B1) ∩ (ξ2^c ⊂ B2)} = M{ξ1^c ⊂ B1} ∧ M{ξ2^c ⊂ B2}

Similarly, the union relation represents another 2^n equations. For example, when n = 2 the four equations are:

M{(ξ1 ⊂ B1) ∪ (ξ2 ⊂ B2)} = M{ξ1 ⊂ B1} ∨ M{ξ2 ⊂ B2}
M{(ξ1^c ⊂ B1) ∪ (ξ2 ⊂ B2)} = M{ξ1^c ⊂ B1} ∨ M{ξ2 ⊂ B2}
M{(ξ1 ⊂ B1) ∪ (ξ2^c ⊂ B2)} = M{ξ1 ⊂ B1} ∨ M{ξ2^c ⊂ B2}
M{(ξ1^c ⊂ B1) ∪ (ξ2^c ⊂ B2)} = M{ξ1^c ⊂ B1} ∨ M{ξ2^c ⊂ B2}

Example 5.29 Let ξ1(γ1) and ξ2(γ2) be uncertainty sets on the uncertainty spaces (Γ1, 𝓛1, M1) and (Γ2, 𝓛2, M2) respectively. It is clear that they are also uncertainty sets on the product uncertainty space (Γ1, 𝓛1, M1) × (Γ2, 𝓛2, M2). Then for any Borel sets B1 and B2 of real numbers:

M{(ξ1 ⊂ B1) ∩ (ξ2 ⊂ B2)} = M{(γ1, γ2) | ξ1(γ1) ⊂ B1, ξ2(γ2) ⊂ B2}
  = M{(γ1 | ξ1(γ1) ⊂ B1) × (γ2 | ξ2(γ2) ⊂ B2)}
  = M1{γ1 | ξ1(γ1) ⊂ B1} ∧ M2{γ2 | ξ2(γ2) ⊂ B2}
  = M1{ξ1 ⊂ B1} ∧ M2{ξ2 ⊂ B2}

Thus the required equation is verified. Similarly the other equations may also be verified. Hence ξ1 and ξ2 are independent in the product uncertainty space. In fact, uncertainty sets are always independent if they are defined on different uncertainty spaces.


Theorem 5.26 Let ξ1, ξ2, …, ξn be uncertainty sets and let ξi* be arbitrarily chosen uncertainty sets from {ξi, ξi^c}, i = 1, 2, …, n respectively. Then ξ1, ξ2, …, ξn are independent iff ξ1*, ξ2*, …, ξn* are independent.

Let ξi** be arbitrarily chosen uncertainty sets from {ξi*, ξi*^c}, i = 1, 2, …, n respectively. Then ξ1*, ξ2*, …, ξn* and ξ1**, ξ2**, …, ξn** represent the same 2^n combinations. This fact implies that the independence equations are equivalent to:

M{∩_{i=1}^n (ξi** ⊂ Bi)} = ∧_{i=1}^n M{ξi** ⊂ Bi}

M{∪_{i=1}^n (ξi** ⊂ Bi)} = ∨_{i=1}^n M{ξi** ⊂ Bi}

Hence ξ1, ξ2, …, ξn are independent iff ξ1*, ξ2*, …, ξn* are independent.

Theorem 5.27 The uncertainty sets ξ1, ξ2, …, ξn are independent iff for any Borel sets B1, B2, …, Bn of real numbers:

M{∩_{i=1}^n (Bi ⊂ ξi*)} = ∧_{i=1}^n M{Bi ⊂ ξi*}

M{∪_{i=1}^n (Bi ⊂ ξi*)} = ∨_{i=1}^n M{Bi ⊂ ξi*}

Here ξi* are arbitrarily chosen from {ξi, ξi^c}, i = 1, 2, …, n respectively.

Since {Bi ⊂ ξi*} = {ξi*^c ⊂ Bi^c} for i = 1, 2, …, n we have:

M{∩_{i=1}^n (Bi ⊂ ξi*)} = M{∩_{i=1}^n (ξi*^c ⊂ Bi^c)},  ∧_{i=1}^n M{Bi ⊂ ξi*} = ∧_{i=1}^n M{ξi*^c ⊂ Bi^c}

M{∪_{i=1}^n (Bi ⊂ ξi*)} = M{∪_{i=1}^n (ξi*^c ⊂ Bi^c)},  ∨_{i=1}^n M{Bi ⊂ ξi*} = ∨_{i=1}^n M{ξi*^c ⊂ Bi^c}

It follows that the above equations are valid iff:

M{∩_{i=1}^n (ξi*^c ⊂ Bi^c)} = ∧_{i=1}^n M{ξi*^c ⊂ Bi^c}

M{∪_{i=1}^n (ξi*^c ⊂ Bi^c)} = ∨_{i=1}^n M{ξi*^c ⊂ Bi^c}

These two equations are in turn equivalent to the independence of the uncertainty sets ξ1, ξ2, …, ξn.

Now we present the union, intersection and complement of independent uncertainty sets through membership functions [3, 8].

Theorem 5.28 Union of uncertainty sets: Let ξ and η be independent uncertainty sets with membership functions μ and ν respectively. Then their union ξ ∪ η has the membership function λ(x) = μ(x) ∨ ν(x).

The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. The membership function of the union of uncertainty sets μ(x) and ν(x) is represented in Fig. 5.13.

Theorem 5.29 Intersection of uncertainty sets: Let ξ and η be independent uncertainty sets with membership functions μ and ν respectively. Then their intersection ξ ∩ η has the membership function λ(x) = μ(x) ∧ ν(x).

The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. The membership function of the intersection of uncertainty sets μ(x) and ν(x) is represented in Fig. 5.14.

Theorem 5.30 Let ξ be an uncertainty set with membership function μ. Then its complement ξ^c has the membership function λ(x) = 1 − μ(x).

The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. The membership function of the complement of the uncertainty set μ(x) is represented in Fig. 5.15.

Fig. 5.13 The membership function of the union of uncertainty sets


Fig. 5.14 The membership function of the intersection of uncertainty sets

Fig. 5.15 The membership function of the complement of uncertainty set

Now we proceed to present the arithmetic operational law of independent uncertainty sets including addition, subtraction, multiplication and division [3, 8].

Theorem 5.31 Arithmetic operational law via inverse membership functions: Let ξ1, ξ2, …, ξn be independent uncertainty sets with inverse membership functions μ1⁻¹, μ2⁻¹, …, μn⁻¹ respectively and let f be a measurable function. Then ξ = f(ξ1, ξ2, …, ξn) has the inverse membership function:

λ⁻¹(α) = f(μ1⁻¹(α), μ2⁻¹(α), …, μn⁻¹(α))

The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers.

In many situations it is required to deal with monotone functions of regular uncertainty sets. In this regard we have the following theorem [3, 8].

Theorem 5.32 Monotone function of regular uncertainty sets: Let ξ1, ξ2, …, ξn be independent uncertainty sets with regular membership functions μ1, μ2, …, μn respectively. If the function f(x1, x2, …, xn) is strictly increasing with respect to x1, x2, …, xm and strictly decreasing with respect to xm+1, xm+2, …, xn then ξ = f(ξ1, ξ2, …, ξn) has a regular membership function and

λl⁻¹(α) = f(μ1l⁻¹(α), …, μml⁻¹(α), μ(m+1)r⁻¹(α), …, μnr⁻¹(α))

λr⁻¹(α) = f(μ1r⁻¹(α), …, μmr⁻¹(α), μ(m+1)l⁻¹(α), …, μnl⁻¹(α))

Here λl⁻¹, μ1l⁻¹, μ2l⁻¹, …, μnl⁻¹ are the left inverse membership functions and λr⁻¹, μ1r⁻¹, μ2r⁻¹, …, μnr⁻¹ are the right inverse membership functions of ξ, ξ1, ξ2, …, ξn respectively. The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers.

Theorem 5.33 Arithmetic operational law via membership functions: Let ξ1, ξ2, …, ξn be independent uncertainty sets with membership functions μ1(x), μ2(x), …, μn(x) respectively, and let f be a measurable function. Then ξ = f(ξ1, ξ2, …, ξn) has the membership function:

λ(x) = sup_{f(x1, x2, …, xn) = x} min_{1 ≤ i ≤ n} μi(xi)

The proof of the above theorem is beyond the scope of the text and is available in [8] for interested readers. For further details on uncertainty sets interested readers can refer [3, 8].
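Theorem 5.33 is the sup–min extension principle and can be evaluated numerically on a grid. The following MATLAB sketch is a minimal brute-force illustration for the sum of two uncertainty sets with triangular membership functions; the triangles and grid resolution are hypothetical choices made only for demonstration.

```matlab
% Sup-min extension principle (Theorem 5.33) for f(x1, x2) = x1 + x2,
% evaluated by brute force on discrete grids.
tri = @(x, a, b, c) max(min((x - a)./(b - a), (c - x)./(c - b)), 0);

x1 = linspace(0, 4, 201);            % grid for xi_1
x2 = linspace(1, 5, 201);            % grid for xi_2
mu1 = tri(x1, 0, 2, 4);              % hypothetical triangular membership (0, 2, 4)
mu2 = tri(x2, 1, 3, 5);              % hypothetical triangular membership (1, 3, 5)

[X1, X2] = meshgrid(x1, x2);
S = X1 + X2;                         % all attainable sums on the grid
MinMu = min(repmat(mu1, numel(x2), 1), repmat(mu2', 1, numel(x1)));

z = linspace(1, 9, 161);             % grid for the sum xi = xi_1 + xi_2
lambda = zeros(size(z));
for k = 1:numel(z)
    mask = abs(S - z(k)) < 0.05;     % points with f(x1, x2) approximately equal to z(k)
    if any(mask(:))
        lambda(k) = max(MinMu(mask));   % sup of min(mu1, mu2) over that set
    end
end
plot(z, lambda); xlabel('x'); ylabel('\lambda(x)');
```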

5.9 Possibilistic Risk Analysis

After familiarizing the reader with the basic mathematical foundations of possibility theory, we present some insights into possibilistic risk analysis in this section. The general concept of operational risk has been discussed in Chap. 2. The concept of risk in this context has been highlighted as accidental loss together with an uncertain measure of such loss [3]. Possibilistic risk analysis is a tool to quantify risk via uncertainty theory [3]. One major feature of this approach is the ability to model events that almost never occur. Possibilistic risk analysis is measured here in terms of a risk index. We also discuss structural and investment risk analysis in uncertain environments [8].

A system generally contains some factors ξ1, ξ2, …, ξn that may be considered as lifetime, strength, demand, production rate, cost, profit and resource. Some specified loss is always dependent on these factors. Although loss is a problem-dependent concept, it is basically represented through a loss function.

Definition 5.28 Consider a system with factors ξ1, ξ2, …, ξn. A function f is called a loss function if some specified loss occurs iff f(ξ1, ξ2, …, ξn) > 0.

Example 5.30 Consider a series system as shown in Fig. 5.16 in which there are n elements whose lifetimes are represented through the uncertainty variables ξ1, ξ2, …, ξn. Such a system works whenever all elements work. Thus the system lifetime is ξ = ξ1 ∧ ξ2 ∧ ⋯ ∧ ξn. If the loss is understood as the case when the system

Fig. 5.16 Series system


fails before time T then we have the loss function f(ξ1, ξ2, …, ξn) = T − ξ1 ∧ ξ2 ∧ ⋯ ∧ ξn. Hence the system fails iff f(ξ1, ξ2, …, ξn) > 0.

Example 5.31 Consider a parallel system as shown in Fig. 5.17 in which there are n elements whose lifetimes are uncertainty variables ξ1, ξ2, …, ξn. Such a system works whenever at least one element works. Thus the system lifetime is ξ = ξ1 ∨ ξ2 ∨ ⋯ ∨ ξn. If the loss is understood as the case when the system fails before time T then we have the loss function f(ξ1, ξ2, …, ξn) = T − ξ1 ∨ ξ2 ∨ ⋯ ∨ ξn. Hence the system fails iff f(ξ1, ξ2, …, ξn) > 0.

Example 5.32 Consider a k-out-of-n system in which there are n elements whose lifetimes are uncertainty variables ξ1, ξ2, …, ξn. Such a system works whenever at least k of the n elements work. Thus the system lifetime is ξ = k-max[ξ1, ξ2, …, ξn], i.e. the kth largest lifetime. If the loss is understood as the case when the system fails before time T then we have the loss function f(ξ1, ξ2, …, ξn) = T − k-max[ξ1, ξ2, …, ξn]. Hence the system fails iff f(ξ1, ξ2, …, ξn) > 0. It is to be noted that a series system is an n-out-of-n system and a parallel system is a 1-out-of-n system.

Example 5.33 Consider a standby system as shown in Fig. 5.18 in which there are n redundant elements whose lifetimes are ξ1, ξ2, …, ξn. For this system only one element is active, and one of the redundant elements begins to work only when the active element fails. Thus the system lifetime is ξ = ξ1 + ξ2 + ⋯ + ξn. If the loss is understood as the case that the system fails before time T then the loss function is f(ξ1, ξ2, …, ξn) = T − (ξ1 + ξ2 + ⋯ + ξn). Hence the system fails iff f(ξ1, ξ2, …, ξn) > 0.

In practice the factors ξ1, ξ2, …, ξn of a system are generally uncertainty variables rather than known constants. Now we define the risk index, which is used as a measure for possibilistic risk analysis [3]. The risk index is defined as the uncertainty measure that some specified loss occurs.

Fig. 5.17 Parallel system

Fig. 5.18 Standby system


Definition 5.29 Assume that a system contains uncertainty factors ξ1, ξ2, …, ξn and has a loss function f. Then the risk index is the uncertainty measure that the system loss is positive, i.e.

Risk = M{f(ξ1, ξ2, …, ξn) > 0}.

Theorem 5.34 Risk index theorem: Assume a system contains independent uncertainty variables ξ1, ξ2, …, ξn with regular uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss function f(ξ1, ξ2, …, ξn) is strictly increasing with respect to ξ1, ξ2, …, ξm and strictly decreasing with respect to ξm+1, ξm+2, …, ξn then the risk index is just the root α of the equation

f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) = 0.

It follows from the definition of the risk index and the duality axiom that Risk = M{f(ξ1, ξ2, …, ξn) > 0} = 1 − M{f(ξ1, ξ2, …, ξn) ≤ 0}. The value of M{f(ξ1, ξ2, …, ξn) ≤ 0} is the root α of f(Φ1⁻¹(α), …, Φm⁻¹(α), Φm+1⁻¹(1 − α), …, Φn⁻¹(1 − α)) = 0. Hence the risk index is just the root of the stated equation.

Since f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) is a strictly decreasing function with respect to α, its root α may be estimated by the bisection method. It is to be kept in mind that sometimes the preceding equation may not have a root. In this case, if f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α)) < 0 for all α, then we set the root α = 0; and if it is > 0 for all α, then we set the root α = 1.

Let us revisit the systems which we considered in Examples 5.30, 5.31, 5.32 and 5.33 earlier.

Consider a series system in which there are n elements whose lifetimes are independent uncertainty variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss is understood as the case that the system fails before time T then the loss function is f(ξ1, ξ2, …, ξn) = T − ξ1 ∧ ξ2 ∧ ⋯ ∧ ξn and the risk index is Risk = M{f(ξ1, ξ2, …, ξn) > 0}. Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, the risk index theorem says that the risk index is just the root α of the equation Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ ⋯ ∧ Φn⁻¹(α) = T. It is easy to verify that Risk = Φ1(T) ∨ Φ2(T) ∨ ⋯ ∨ Φn(T).

Consider a parallel system in which there are n elements whose lifetimes are independent uncertainty variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss is understood as the case that the system fails before time T then the loss function is f(ξ1, ξ2, …, ξn) = T − ξ1 ∨ ξ2 ∨ ⋯ ∨ ξn and the risk index is Risk = M{f(ξ1, ξ2, …, ξn) > 0}. Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, the risk index theorem says that the risk index is just the root α of the equation Φ1⁻¹(α) ∨ Φ2⁻¹(α) ∨ ⋯ ∨ Φn⁻¹(α) = T. It is easy to verify that Risk = Φ1(T) ∧ Φ2(T) ∧ ⋯ ∧ Φn(T).
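As a numerical sketch of the risk index theorem, the following MATLAB fragment estimates the series-system risk index by bisection and compares it with the closed form Risk = Φ1(T) ∨ ⋯ ∨ Φn(T); the linear uncertainty distributions and the horizon T are hypothetical values chosen only for illustration.

```matlab
% Risk index of a series system (Theorem 5.34) with hypothetical linear
% uncertainty distributions Phi_i on [a_i, b_i]: Phi_i^{-1}(alpha) = a_i + alpha*(b_i - a_i).
a = [2 3 4];  b = [8 9 10];  T = 4;                  % hypothetical parameters
PhiInv = @(alpha) a + alpha .* (b - a);              % inverse distributions (vectorised)
Phi    = @(t) min(max((t - a) ./ (b - a), 0), 1);    % distributions

% Root of min_i Phi_i^{-1}(alpha) = T, found by bisection on g(alpha) = T - min_i Phi_i^{-1}(alpha).
g = @(alpha) T - min(PhiInv(alpha));
lo = 0; hi = 1;
for iter = 1:60
    mid = (lo + hi) / 2;
    if g(mid) > 0, lo = mid; else, hi = mid; end     % g is decreasing in alpha
end
riskBisection = (lo + hi) / 2;

riskClosedForm = max(Phi(T));                        % Risk = max_i Phi_i(T)
fprintf('Risk index: bisection = %.4f, closed form = %.4f\n', ...
        riskBisection, riskClosedForm);
```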


Consider a k-out-of-n system in which there are n elements whose lifetimes are independent uncertainty variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss is understood as the case that the system fails before time T then the loss function is f(ξ1, ξ2, …, ξn) = T − k-max[ξ1, ξ2, …, ξn] and the risk index is Risk = M{f(ξ1, ξ2, …, ξn) > 0}. Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, the risk index theorem says that the risk index is just the root α of the equation k-max[Φ1⁻¹(α), Φ2⁻¹(α), …, Φn⁻¹(α)] = T. It is easy to verify that Risk = k-min[Φ1(T), Φ2(T), …, Φn(T)].

Consider a standby system in which there are n elements whose lifetimes are independent uncertainty variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss is understood as the case that the system fails before time T then the loss function is f(ξ1, ξ2, …, ξn) = T − (ξ1 + ξ2 + ⋯ + ξn) and the risk index is Risk = M{f(ξ1, ξ2, …, ξn) > 0}. Since f is strictly decreasing with respect to ξ1, ξ2, …, ξn, the risk index theorem says that the risk index is just the root α of the equation Φ1⁻¹(α) + Φ2⁻¹(α) + ⋯ + Φn⁻¹(α) = T.

Now we proceed to illustrate the concept of structural risk analysis [3, 8]. Consider a structural system in which the strengths and loads are assumed to be uncertainty variables. It is supposed that a structural system fails whenever for some rod the load variable exceeds its strength variable. If the structural risk index is defined as the uncertainty measure that the structural system fails, then

Risk = M{∪_{i=1}^n (ξi < ηi)}.

Here ξ1, ξ2, …, ξn are the strength variables and η1, η2, …, ηn are the load variables of the n rods.

Example 5.34 Assume that there is only a single strength variable ξ and a single load variable η with continuous uncertainty distributions Φ and Ψ respectively. In this case the structural risk index is Risk = M{ξ < η}. It follows from the risk index theorem that the risk index is just the root α of the equation Φ⁻¹(α) = Ψ⁻¹(1 − α). In particular, if the strength variable ξ has a normal uncertainty distribution N(e_s, σ_s) and the load variable η has a normal uncertainty distribution N(e_l, σ_l), then the structural risk index is

Risk = (1 + exp(π(e_s − e_l) / (√3 (σ_s + σ_l))))⁻¹.

Example 5.35 Assume the case of constant loads, where the uncertainty strength variables ξ1, ξ2, …, ξn are independent and have continuous uncertainty distributions Φ1, Φ2, …, Φn respectively. In many cases the load variables η1, η2, …, ηn degenerate to crisp values c1, c2, …, cn respectively. In this case it follows from Risk = M{∪_{i=1}^n (ξi < ηi)} and independence that the structural risk index is Risk = M{∪_{i=1}^n (ξi < ci)} = ∨_{i=1}^n M{ξi < ci}. Thus Risk = Φ1(c1) ∨ Φ2(c2) ∨ ⋯ ∨ Φn(cn).

Example 5.36 Assume the case of independent load variables, where the uncertainty strength variables ξ1, ξ2, …, ξn are independent and have continuous uncertainty distributions Φ1, Φ2, …, Φn respectively. Also assume the uncertainty load variables η1, η2, …, ηn are independent and have continuous uncertainty distributions Ψ1, Ψ2, …, Ψn respectively. In this case it follows from Risk = M{∪_{i=1}^n (ξi < ηi)} and independence that the structural risk index is Risk = M{∪_{i=1}^n (ξi < ηi)} = ∨_{i=1}^n M{ξi < ηi}. That is, Risk = α1 ∨ α2 ∨ ⋯ ∨ αn, where αi are the roots of the equations Φi⁻¹(α) = Ψi⁻¹(1 − α) for i = 1, 2, …, n respectively. In general, however, the load variables η1, η2, …, ηn may be neither constants nor independent. For example, the load variables η1, η2, …, ηn may be functions of independent uncertain variables τ1, τ2, …, τm. In this case the expression Risk = α1 ∨ α2 ∨ ⋯ ∨ αn is no longer valid, and such structural systems have to be dealt with case by case.

Example 5.37 Consider a series structural system as shown in Fig. 5.19 that consists of n rods in series and an object. Assume that the strength variables of the n rods are uncertainty variables ξ1, ξ2, …, ξn with uncertainty distributions Φ1, Φ2, …, Φn respectively. We also assume that the gravity of the object is an uncertainty variable η with uncertainty distribution Ψ. For each i (1 ≤ i ≤ n) the load variable of rod i is just the gravity η of the object. Thus the structural system fails whenever the load variable η exceeds at least one of the strength variables ξ1, ξ2, …, ξn. Hence the structural risk index is

Risk = M{∪_{i=1}^n (ξi < η)} = M{ξ1 ∧ ξ2 ∧ ⋯ ∧ ξn < η}.

We define the loss function as f(ξ1, ξ2, …, ξn, η) = η − ξ1 ∧ ξ2 ∧ ⋯ ∧ ξn. Then Risk = M{f(ξ1, ξ2, …, ξn, η) > 0}. Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ1, ξ2, …, ξn, it follows from the risk index theorem that the risk index is just the root α of the equation

Ψ⁻¹(1 − α) − Φ1⁻¹(α) ∧ Φ2⁻¹(α) ∧ ⋯ ∧ Φn⁻¹(α) = 0.

Equivalently, let αi be the roots of the equations Ψ⁻¹(1 − α) = Φi⁻¹(α) for i = 1, 2, …, n respectively. Then the structural risk index is Risk = α1 ∨ α2 ∨ ⋯ ∨ αn.

Fig. 5.19 A structural system with n rods and an object


Example 5.38 Consider a structural system as shown in Fig. 5.20 that consists of 2 rods and an object. Assume that the strength variables of the left and right rods are uncertain variables ξ1 and ξ2 with uncertainty distributions Φ1 and Φ2 respectively. We also assume that the gravity of the object is an uncertain variable η with uncertainty distribution Ψ. In this case the load variables of the left and right rods are η sin θ2 / sin(θ1 + θ2) and η sin θ1 / sin(θ1 + θ2) respectively. The structural system fails whenever for any one rod the load variable exceeds its strength variable. Hence the structural risk index is:

Risk = M{(ξ1 < η sin θ2 / sin(θ1 + θ2)) ∪ (ξ2 < η sin θ1 / sin(θ1 + θ2))}
     = M{(ξ1 / sin θ2 < η / sin(θ1 + θ2)) ∪ (ξ2 / sin θ1 < η / sin(θ1 + θ2))}
     = M{(ξ1 / sin θ2) ∧ (ξ2 / sin θ1) < η / sin(θ1 + θ2)}

We define the loss function as:

f(ξ1, ξ2, η) = η / sin(θ1 + θ2) − (ξ1 / sin θ2) ∧ (ξ2 / sin θ1)

Then Risk = M{f(ξ1, ξ2, η) > 0}.

Fig. 5.20 A structural system with 2 rods and an object


Since the loss function f is strictly increasing with respect to η and strictly decreasing with respect to ξ1, ξ2, it follows from the risk index theorem that the risk index is the root α of the equation:

Ψ⁻¹(1 − α) / sin(θ1 + θ2) − (Φ1⁻¹(α) / sin θ2) ∧ (Φ2⁻¹(α) / sin θ1) = 0

Equivalently, let α1 be the root of the equation

Ψ⁻¹(1 − α) / sin(θ1 + θ2) = Φ1⁻¹(α) / sin θ2

and let α2 be the root of the equation

Ψ⁻¹(1 − α) / sin(θ1 + θ2) = Φ2⁻¹(α) / sin θ1.

Then the structural risk index is Risk = α1 ∨ α2.

Now we discuss investment risk analysis in uncertain environments [3]. Assume that an investor has n projects whose returns are uncertainty variables ξ1, ξ2, …, ξn. If the loss is understood as the case that the total return ξ1 + ξ2 + ⋯ + ξn falls below a predetermined value c, such as the interest rate, then the investment risk index is Risk = M{ξ1 + ξ2 + ⋯ + ξn < c}. If ξ1, ξ2, …, ξn are independent uncertainty variables with uncertainty distributions Φ1, Φ2, …, Φn respectively, then the investment risk index is just the root α of the equation Φ1⁻¹(α) + Φ2⁻¹(α) + ⋯ + Φn⁻¹(α) = c. For more details interested readers can refer [3].

The concept of risk index can also be substituted by value at risk (VaR), which is presented in Chaps. 3 and 4. The VaR can be expressed in terms of the risk index as follows [8]:

Definition 5.30 Consider a system that contains uncertainty factors ξ1, ξ2, …, ξn and has a loss function f. Then VaR is defined as:

VaR(α) = sup{x | M{f(ξ1, ξ2, …, ξn) ≥ x} ≥ α}

It is to be noted that VaR(α) represents the maximum possible loss when α percent of the right tail distribution is ignored. In other words, the loss f(ξ1, ξ2, …, ξn) will exceed VaR(α) with uncertainty measure α as shown in Fig. 5.21. If Φ(x) is the uncertainty distribution of f(ξ1, ξ2, …, ξn) then:


Fig. 5.21 The value-at-risk

VaR(α) = sup{x | Φ(x) ≤ 1 − α}

If the inverse uncertainty distribution Φ⁻¹(α) exists then:

VaR(α) = Φ⁻¹(1 − α)

Theorem 5.35 VaR(α) is a monotone decreasing function with respect to α. Let α1 and α2 be two numbers with 0 < α1 < α2 ≤ 1. Then for any number r < VaR(α2) we have:

M{f(ξ1, ξ2, …, ξn) ≥ r} ≥ α2 > α1

Thus by the definition of VaR we obtain VaR(α1) ≥ r for every r < VaR(α2), and hence VaR(α1) ≥ VaR(α2). So VaR(α) is a monotone decreasing function with respect to α.

Theorem 5.36 Consider a system that contains independent uncertainty variables ξ1, ξ2, …, ξn with regular uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss function f(ξ1, ξ2, …, ξn) is strictly increasing with respect to ξ1, ξ2, …, ξm and strictly decreasing with respect to ξm+1, ξm+2, …, ξn then:

VaR(α) = f(Φ1⁻¹(1 − α), …, Φm⁻¹(1 − α), Φm+1⁻¹(α), …, Φn⁻¹(α))

It follows from the operational law of uncertainty variables that the loss function f(ξ1, ξ2, …, ξn) has the inverse uncertainty distribution

Φ⁻¹(α) = f(Φ1⁻¹(α), …, Φm⁻¹(α), Φm+1⁻¹(1 − α), …, Φn⁻¹(1 − α)).

The theorem follows from this equation immediately.

Now the concept of expected loss [3] is presented, which is the expected value of the positive part of the loss function f(ξ1, ξ2, …, ξn). It is defined as follows:

Definition 5.31 Consider a system that contains uncertainty factors ξ1, ξ2, …, ξn and has a loss function f. Then the expected loss is defined as:

L = ∫_0^{+∞} M{f(ξ1, ξ2, …, ξn) ≥ x} dx

If Φ(x) is the uncertainty distribution of the loss function f(ξ1, ξ2, …, ξn) then we have:

L = ∫_0^{+∞} (1 − Φ(x)) dx

If the inverse uncertainty distribution Φ⁻¹(α) exists then the expected loss is:

L = ∫_0^1 (Φ⁻¹(α))⁺ dα

Theorem 5.37 Consider a system that contains independent uncertainty variables ξ1, ξ2, …, ξn with regular uncertainty distributions Φ1, Φ2, …, Φn respectively. If the loss function f(ξ1, ξ2, …, ξn) is strictly increasing with respect to ξ1, ξ2, …, ξm and strictly decreasing with respect to ξm+1, ξm+2, …, ξn then the expected loss is:

L = ∫_0^1 (f(Φ1⁻¹(α), …, Φm⁻¹(α), Φm+1⁻¹(1 − α), …, Φn⁻¹(1 − α)))⁺ dα

It follows from the operational law of uncertainty variables that the loss function f(ξ1, ξ2, …, ξn) has the inverse uncertainty distribution

Φ⁻¹(α) = f(Φ1⁻¹(α), …, Φm⁻¹(α), Φm+1⁻¹(1 − α), …, Φn⁻¹(1 − α)).

The theorem follows from this equation immediately. The possibilistic risk analysis presented here is further explained with illustrative examples in [3].
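The following MATLAB sketch illustrates Definitions 5.30 and 5.31 numerically for a single loss factor with a hypothetical linear uncertainty distribution on [−1, 3]; it evaluates VaR(α) = Φ⁻¹(1 − α) and approximates the expected loss L = ∫_0^1 (Φ⁻¹(α))⁺ dα by numerical integration.

```matlab
% VaR and expected loss for a loss with hypothetical linear uncertainty
% distribution on [-1, 3]: Phi^{-1}(alpha) = -1 + 4*alpha.
PhiInv = @(alpha) -1 + 4 .* alpha;

alpha = 0.05;
VaR = PhiInv(1 - alpha);                             % VaR(alpha) = Phi^{-1}(1 - alpha)

a = linspace(0, 1, 1e5);
L = trapz(a, max(PhiInv(a), 0));                     % expected loss = int_0^1 (Phi^{-1}(alpha))^+ dalpha

fprintf('VaR(%.2f) = %.4f, expected loss L = %.4f\n', alpha, VaR, L);
% For this distribution VaR(0.05) = 2.8 and L = 9/8 = 1.125.
```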

References 1. Borell, C.: Lecture Notes on Measure Theory and Probability, Matematik. Chalmers och Göteborgs Universitet, Göteborg (2012) 2. Carlsson, C., Fuller, R.: Fuzzy Reasoning in Decision Making and Optimization. Physica Verlag, Heidelberg (2002) 3. Chaudhuri, A.: A Study of Operational Risk using Possibility Theory, Technical Report. Birla Institute of Technology Mesra, Patna Campus, Patna, India (2010) 4. Dubois, D., Prade, H.: Possibility Theory. Plenum, New York (1988)


5. Huber, F., Schmidt, C.P.: Degrees of Belief, Synthese Library, vol. 342. Springer, Berlin (2009) 6. Hwang, S., Thill, J.C.: Empirical study on location indeterminacy of localities. In: Fisher, P.F. (ed.) Developments in Spatial Data Mining, pp. 271–283. Springer, Berlin (2004) 7. Kolmogorov, A.N.: Foundations of the Theory of Probability. Chelsea Publishing Company, New York (1957) 8. Liu, B.: Uncertainty Theory, 5th edn. Springer, Berlin (2015) 9. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978) 10. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965) 11. Zimmermann, H.J.: Fuzzy Set Theory and its Applications, 4th edn. Kluwer Academic Publishers, Massachusetts (2001)


Chapter 6

Possibilistic View of Operational Risk

Abstract In this chapter the possibilistic view of operational risk is presented. In view of this, the probabilistic treatment of operational risk in Chap. 4 has been remodelled with possibility theory. The g-and-h distribution has been redefined as the fuzzy g-and-h distribution. Similarly, the value at risk (VaR) concept is extended to fuzzy value at risk (VaR) and the subadditivity of fuzzy VaR is also defined. Based on fuzzy VaR the fuzzy subjective value at risk (SVaR) is highlighted. The risk and deviation measures are also extended to their fuzzy versions. An application of fuzzy SVaR optimization is also illustrated.

Keywords Operational risk · Fuzzy g-and-h distribution · Fuzzy VaR · Fuzzy SVaR · Fuzzy risk measure · Fuzzy deviation measure

6.1 Introduction

This chapter provides the reader with the possibilistic view of operational risk based on the concepts of possibility theory discussed in Chap. 5. Section 6.2 starts with the possibilistic view of operational risk using the fuzzy g-and-h distribution. Then the concept of fuzzy value at risk (VaR) is presented in Sect. 6.3. This is followed by the subadditivity of fuzzy VaR in Sect. 6.4. In Sect. 6.5 fuzzy subjective value at risk (SVaR) is defined based on fuzzy VaR. The fuzzy risk and deviation measures are given in Sects. 6.6 and 6.7 respectively. Finally, this chapter concludes with an application of fuzzy SVaR optimization.

Possibility theory has played a significant role in handling uncertain and incomplete information in several engineering applications [2] in the recent past. It is comparable to probability theory because it is based on set-theoretic function notations. It differs from the latter by the use of a pair of dual set functions, viz. possibility and necessity measures. Besides, it is not additive and makes sense on ordinal structures. The theory of possibility was first coined by Zadeh [19]. In Zadeh's view possibility distributions were meant to provide a graded semantics to natural language statements. However, possibility and necessity measures can also


be the basis of a full-fledged representation of partial belief that parallels probability. It can be seen either as a coarse, non-numerical version of probability theory or a framework for reasoning with extreme probabilities or yet a simple approach to reasoning with imprecise probabilities [18]. The modalities possible and necessary have been used in philosophy since Middle Ages’ in Europe based on Aristotle’s and Theophrastus’ works [21]. They have become building blocks of Modal Logics that emerged at the beginning of the 20th century. In this approach possibility and necessity are all-or-nothing notions and handled at the syntactic level. More recently and independently from Zadeh’s view the notion of possibility as opposed to probability was central to certain works in Economics [18, 19]. A graded notion of possibility was introduced as a full fledged approach to uncertainty and decision by Shackle [15] who called degree of potential surprise of an event its degree of impossibility i.e. the degree of necessity of opposite event. His notion of possibility is basically epistemic such that it is a character of the chooser’s particular state of knowledge in his present. Impossibility is understood as disbelief. Potential surprise is valued on a disbelief scale viz. a positive interval of the form [0, y*] where y* denotes the absolute rejection of event to which it is assigned. In case everything is possible all mutually exclusive hypotheses have zero surprise. At least one elementary hypothesis must carry zero potential surprise. The degree of surprise of an event i.e. a set of elementary hypotheses is the degree of surprise of its least surprising realisation. Shackle also introduced the notion of conditional possibility whereby the degree of surprise of a conjunction of two events A and B is equal to the maximum of degree of surprise of A and of the degree of surprise of B should A prove true. Lewis considered a graded notion of possibility in the form of relation between possible worlds [14] as comparative possibility. He equated the concept of possibility to the similarity between possible worlds. The non-symmetric notion of similarity is also comparative and is meant to express statements of the form such as a world j is at least as similar to world i as world k is. The comparative similarity of j and k with respect to i is interpreted as comparative possibility of j with respect to k viewed from world i. Such relations are assumed to be complete pre-orderings and are instrumental in defining the truth conditions of counterfactual statements. Comparative possibility relations  P obey the key proposition A  P B ) C [ A  P C [ B for events A, B and C. A framework very similar to the one of Shackle was proposed by Cohen [5] who considered the problem of legal reasoning. He introduced Baconian probabilities which is considered as degrees of provability. The idea is that it is hard to prove someone guilty at the court of law by means of pure statistical arguments. The basic feature of degrees of provability is that a hypothesis and its negation cannot both be provable together to any extent. The contrary being a case for inconsistency. Such degrees of provability coincide with necessity measures. Zadeh also proposed an interpretation of membership functions of fuzzy sets as possibility distributions encoding flexible constraints induced by natural language statements [20]. Zadeh articulated the relationship between possibility and


probability noticing that what is probable must preliminarily be possible. However, the view of possibility degrees refers to the idea of graded feasibility as degrees of ease rather than to the epistemic notion of plausibility laid bare by Shackle. Nevertheless, the key axiom of maxitivity for possibility measures is highlighted. Zadeh acknowledged the connection between possibility theory, belief functions and upper and lower probabilities and proposed their extensions to fuzzy events and fuzzy information granules [19]. With this brief discussion on the possibility theory we proceed to present the fuzzy version of the concepts illustrated in Chap. 4.

6.2 Fuzzy g-and-h Distribution

After modeling operational risk through the g-and-h distribution in Chap. 4, we quantify it here using the fuzzy g-and-h distribution.

Considering random variables Xi, i = 1, …, n on a common probability space (Ω, F, P) [9], we state fuzzy random variables X̃i, i = 1, …, n defined on the fuzzy probability space (Ω̃, F̃, P̃) [16, 17]. This representation adheres to the one-period risk factor setup in quantitative risk management [8]. Based on this we represent the fuzzy g and h variables by g̃ and h̃ respectively.

Let Z̃ ∼ Ñ(0, 1) be a fuzzy standard normal random variable [16]. A fuzzy random variable X̃ is said to have a g̃-and-h̃ distribution with parameters ã, b̃, g̃, h̃ ∈ F, F being the set of fuzzy real numbers [23], if X̃ satisfies the following expression [4]:

X̃ = r1 ã + r2 b̃ ((e^{g̃Z̃} − 1) / g̃) e^{h̃Z̃²/2}    (6.1)

The fuzzy real numbers in Eq. (6.1) are a generalization of regular real numbers in the sense that a fuzzy number does not refer to one single value but rather to a connected set of possible values, where each possible value has its own weight between 0 and 1 [19]. This weight is the membership function. Fuzzy numbers, the special case of a convex, normalized fuzzy set of the real line, are an extension of real numbers. Calculations with fuzzy numbers allow incorporation of uncertainty on parameters, properties, geometry, initial conditions etc. The commonly used fuzzy numbers have either triangular or trapezoidal membership functions [20]. In the present study we use the trapezoidal membership function, which is expressed as follows:

μ_trapezoid(x; a, b, c, d) =
  0,                  x ≤ a
  (x − a)/(b − a),    a ≤ x ≤ b
  1,                  b ≤ x ≤ c
  (d − x)/(d − c),    c ≤ x ≤ d
  0,                  d ≤ x            (6.2)
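A direct MATLAB transcription of Eq. (6.2) is sketched below; the function name and the parameter values in the usage comment are hypothetical and serve only to illustrate the shape.

```matlab
% Trapezoidal membership function of Eq. (6.2), vectorised over x.
function mu = trapmf_eq62(x, a, b, c, d)
    mu = zeros(size(x));
    mu(x > a & x < b) = (x(x > a & x < b) - a) ./ (b - a);   % rising edge
    mu(x >= b & x <= c) = 1;                                 % plateau
    mu(x > c & x < d) = (d - x(x > c & x < d)) ./ (d - c);   % falling edge
end

% Example usage (hypothetical parameters):
%   x = linspace(0, 7, 701);
%   plot(x, trapmf_eq62(x, 1, 2, 4, 6));
```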


Fig. 6.1 The trapezoidal membership function

In Eq. (6.2), μ_trapezoid(x; a, b, c, d) is a piecewise linear, continuous function taking values in [0, 1], controlled by the four parameters a, b, c, d with x ∈ ℝ [20]. Equation (6.2) is shown in Fig. 6.1.

In Eq. (6.1) an obvious interpretation holds such that the parameter g̃ ≠ 0. The linear transformation parameters r1 and r2 are restricted to the values 0 and 1. Here X̃ ∼ g̃ and X̃ ∼ h̃ are considered when X̃ has distribution function F̃ with F̃ ∼ g̃ and F̃ ∼ h̃ respectively. A more flexible choice of parameters may be achieved by considering g̃ and h̃ to be polynomials including higher orders of Z̃². The parameters g̃ and h̃ govern the skewness and the heavy tail of the fuzzy distribution [16]. When h̃ = 0, Eq. (6.1) reduces to [4]:

X̃ = r1 ã + r2 b̃ (e^{g̃Z̃} − 1) / g̃    (6.3)

Equation (6.3) is referred to as the g̃ distribution. The g̃ distribution thus corresponds to a scaled fuzzy lognormal distribution. When g̃ = 0, Eq. (6.1) reduces to [4]:

X̃ = r1 ã + r2 b̃ Z̃ e^{h̃Z̃²/2}    (6.4)

Equation (6.4) is referred to as the h̃ distribution and corresponds to the fuzzy normal case. Since the function k(x̃) = ((e^{g̃x̃} − 1)/g̃) e^{h̃x̃²/2} for h̃ > 0 is strictly increasing, the fuzzy distribution function CDF̃ of g̃ and h̃ random variables can be written as [4]:

CDF̃(x̃) = Φ̃(k⁻¹(x̃))    (6.5)

It is to be noted that the CDF̃ of the fuzzy g-and-h distribution closely resembles that of an exponential distribution, as shown in Fig. 6.2, since there is an exponential term in the distribution [10]. The corresponding α-levels of the distribution function represented in Fig. 6.2 are shown in Fig. 6.3 for the values α = 0.2, 0.5, 0.8, 1 [20]. For more details on the fuzzy g-and-h distribution interested readers can refer [4].
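To make the construction in Eqs. (6.1)–(6.5) concrete at a fixed α-level (where the fuzzy parameters collapse to crisp values), the following MATLAB sketch simulates g-and-h distributed losses and reads off an empirical quantile; the parameter values are hypothetical.

```matlab
% Crisp g-and-h transformation of Eq. (6.1) at a fixed alpha-level,
% with hypothetical parameters a, b, g, h (and r1 = r2 = 1).
a = 0; b = 1; g = 2.0; h = 0.2;
n = 1e6;
u = rand(n, 1);
z = sqrt(2) .* erfinv(2 .* u - 1);                 % standard normal samples via inverse erf
x = a + b .* (exp(g .* z) - 1) ./ g .* exp(h .* z.^2 ./ 2);

q = 0.999;                                         % confidence level
s = sort(x);
empVaR = s(ceil(q * n));                           % empirical 99.9 % quantile of the loss
fprintf('empirical VaR_{%.3f} = %.2f\n', q, empVaR);
```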


Fig. 6.2 An illustration of the fuzzy distribution function

Fig. 6.3 The α-levels of the distribution function represented in Fig. 6.2


6.3 Fuzzy Value at Risk

In conformance with the concept of VaR in Chap. 4, Φ̃ in Eq. (6.5) denotes the fuzzy standard normal distribution function [16]. This representation yields an easy procedure to calculate fuzzy quantiles and hence the fuzzy Value at Risk ṼaR of the g̃ and h̃ random variable X̃, which is given as follows [4, 10]:

ṼaR_α̃(X̃) = CDF̃⁻¹(α̃) = k̃(Φ̃⁻¹(α̃)),  α̃ ≥ 0    (6.6)

For fuzzy normally distributed random variables ṼaR is proportional to the fuzzy standard deviation. If X̃ ∼ Ñ(μ̃, σ̃²) and CDF_X̃(ỹ) is the fuzzy cumulative distribution function of X̃ then [4, 10]:

ṼaR_α̃(X̃) = μ̃ + k(α̃) σ̃    (6.7)

In Eq. (6.7) we have:

k(α̃) = √2 erf⁻¹(2α̃ − 1)    (6.8)

Fig. 6.4 The vague surface formed by mapping of the fuzzy linguistic model to evaluate risk

[email protected]

6.3 Fuzzy Value at Risk

119

As an important illustration of V~aR we present a case from corporate scenario [4]. The mandatory managerial functions include risk management as a way to cope with unknown operational risks. The more asymmetric in nature the operational risks and threats become, the more significantly they influence the security environment. The set of instruments needed for the operation management included analysis and assessment of status, the trends in security environment and identification of preventive measures. Increasing the effectiveness of personnel through education and training, changes in the way in which forces are organized or how headquarters work or introducing new regulation, procedures and tactics that work better may all enhance the effectiveness of resources. For effective working of institutions the manifold aspects of their activities must be presented in an integrated way through one of the approaches for organizational modeling. The organizational model provides a unified medium for presenting all the basic functional and systematic points of view as well as opportunities for moving toward settled strategic goals through an objective system of performance indicators and analytical techniques. Risk analysis helps us understand risk in such a way as to manage it and to minimize disruptions to plans and also controls risk in a cost effective way. Figure 6.5 represents V~aR where a vague surface is formed by simulating the final comprehensive operational risk corresponding to other factors [4].

Fig. 6.5 The vague surface formed by simulating the final comprehensive operational risk corresponding to other factors

[email protected]

120

6.4

6 Possibilistic View of Operational Risk

Subadditivity of Fuzzy Value at Risk

An explicit formula for ṼaR, as suggested by [6], can be derived for g̃ and h̃ random variables [7, 19]:

ṼaR_α̃ = k(Φ⁻¹(α̃)),  α̃ ≥ 0    (6.9)

with

k(x̃) = ((e^{g̃x̃} − 1) / g̃) e^{h̃x̃²/2}    (6.10)

A simulation is performed here in order to statistically investigate the subadditivity property for the g̃-and-h̃ distribution. Let X̃1, X̃2 be g̃ and h̃ random variables with parameters g̃ = 2.37 and h̃ = 0.21. By simulation of n = 10⁷ realizations the diversification benefit is estimated as [7, 19]:

d_{g̃,h̃}(α̃) = ṼaR_α̃(X̃1) + ṼaR_α̃(X̃2) − ṼaR_α̃(X̃1 + X̃2)    (6.11)

In Eq. (6.11) d_{g̃,h̃}(α̃) is non-negative iff subadditivity occurs. The results are displayed in Fig. 6.6. For realistic choices of parameters superadditivity holds for α̃ smaller than a certain level α̃̃. The subadditivity property is given by [7, 19]:

ṼaR_α̃(X̃1 + X̃2) ≤ ṼaR_α̃(X̃1) + ṼaR_α̃(X̃2)    (6.12)

That this holds for sufficiently large α̃ is well known. That superadditivity enters for typical operational risk parameters at levels below α̃̃ may be somewhat surprising [4]. The latter may be important in the scaling of risk measures. Indeed, risk managers realize that estimating ṼaR_α̃ at a level α̃ ≥ 0.99 is statistically difficult. It is suggested

Fig. 6.6 The plot of d_{g̃,h̃}(α̃) as a function of α̃ for g̃ = 2.37 and h̃ = 0.21 at n = 10⁷


to estimate ṼaR_α̃ deeper down in the data and scale up to the 99.9 % level. The change from super- to subadditivity over this range is a concern. Finite mean examples, choosing the skewness parameter g̃ large enough, can be constructed for levels α̃̃ = 0.999 and higher such that subadditivity of ṼaR fails for all α̃ < α̃̃.

Suppose that the non-degenerate vector (X̃1, X̃2) is regularly varying with extreme value index ξ < 1. Then ṼaR_α̃ is subadditive for α̃ sufficiently large. Figure 6.6 exemplifies the subadditivity of ṼaR only in the upper tail region. The preceding statement is an asymptotic statement and does not guarantee subadditivity for a broad range of high quantiles. Furthermore, for ξ > 1 subadditivity typically fails. The reason is that for h̃ > 1 one deals with infinite mean models.

For practitioners it is of prime importance to know for which choices of g̃ and h̃ values one can expect subadditivity. As shown in Fig. 6.6 this depends on the level α̃. The α̃ values are restricted here to 0.99 and 0.999. It is assumed that the operational risk data of two business lines of a bank are well modelled by g̃ and h̃ random variables with parameter values g̃ ∈ [1.86, 2.31] and h̃ ∈ [0.17, 0.37]. These values are fuzzy estimates of the corresponding parameters estimated by [7, 19] at enterprise level. It is of interest to figure out whether aggregation at business line level leads to diversification in the sense of subadditivity of ṼaR. For this purpose g̃ and h̃ random variables with the aforementioned ranges are considered. The number attached to each contour line gives the value of d_{g̃,h̃}(α̃) and the lines indicate levels of equal magnitude of diversification benefit. The zero value corresponds to those models where ṼaR_α̃ is additive. The situation is of course in general much more complicated than the example cited above. Practitioners and risk managers are advised to interpret the above statements from a methodological and pedagogical point of view. It seems that diversification of operational risk can go the wrong way due to the skewness and heavy tail of this data.
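The diversification study of Eq. (6.11) can be reproduced at a fixed α-level with crisp parameters. The following MATLAB sketch estimates d_{g,h}(α) by Monte Carlo for g = 2.37 and h = 0.21 (a smaller sample than the 10⁷ used in the text keeps the run short); the independence assumption between the two loss cells is made only for illustration.

```matlab
% Monte Carlo estimate of the diversification benefit of Eq. (6.11)
% for two independent g-and-h losses with g = 2.37, h = 0.21.
g = 2.37; h = 0.21; n = 1e6; alpha = 0.999;
kIdx = ceil(alpha * n);

gh = @(z) (exp(g .* z) - 1) ./ g .* exp(h .* z.^2 ./ 2);   % g-and-h transform, cf. Eq. (6.10)
z1 = sqrt(2) .* erfinv(2 .* rand(n, 1) - 1);               % standard normal samples
z2 = sqrt(2) .* erfinv(2 .* rand(n, 1) - 1);
X1 = gh(z1);  X2 = gh(z2);

s1 = sort(X1);  s2 = sort(X2);  s12 = sort(X1 + X2);       % empirical quantiles
d  = s1(kIdx) + s2(kIdx) - s12(kIdx);                      % diversification benefit d_{g,h}(alpha)
fprintf('d_{g,h}(%.3f) = %.2f (negative => superadditive VaR)\n', alpha, d);
```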

6.5 Fuzzy Subjective Value at Risk

In analogy with VaR, ṼaR does not always give a feasible solution for optimization applications [9]. To meet such contingencies SVaR is reformulated as fuzzy SVaR, denoted SṼaR, by applying fuzzy numbers defined on the fuzzy probability space (Ω̃, F̃, P̃) [16, 17].

For a fuzzy random variable X̃ with continuous distribution function, SVaR_α̃(X̃) is equal to the subjective expectation of X̃ provided the following condition is satisfied [4]:

X̃ ≥ VaR_α̃(X̃)    (6.13)

This definition serves as the basis for the formulation of SṼaR. The SṼaR of the fuzzy random variable X̃ with confidence level α̃ is the expectation of the generalized α̃-tail distribution [16]:

SVaR_α̃(X̃) = ∫_{−∞}^{+∞} ỹ dCDF_X̃^α̃(ỹ)    (6.14)

Figure 6.7 represents ṼaR and SṼaR in the operational risk context. In Eq. (6.14) we have [10, 16]:

CDF_X̃^α̃(ỹ) = 0  when ỹ ≤ VaR_α̃(X̃),  and  CDF_X̃^α̃(ỹ) = (CDF_X̃(ỹ) − α̃) / (1 − α̃)  when ỹ > VaR_α̃(X̃)    (6.15)

In analogy with the general case, SVaR_α̃(X̃) is never equal to the median of the outcomes greater than VaR_α̃(X̃). The inherent uncertainty is always evident during the separation of the probability atom. When the distribution is modeled by scenarios, SṼaR may be procured through the fuzzy median of a fractional number of scenarios. This idea is concretized through the fuzzy definitions of SṼaR.

Let the fuzzy superior value of SṼaR be denoted by SVaR_α̃^positive(X̃). This is the fuzzy conditional expectation [16] of X̃ subject to X̃ > VaR_α̃(X̃), such that:

SVaR_α̃^positive(X̃) = E[X̃ | X̃ > VaR_α̃(X̃)]    (6.16)

Similarly, SVaR_α̃(X̃) can be given an alternative definition: if CDF_X̃(VaR_α̃(X̃)) < 1, so that there are chances of a loss greater than VaR_α̃(X̃), then

SVaR_α̃(X̃) = ξ_α̃(X̃) VaR_α̃(X̃) + (1 − ξ_α̃(X̃)) SVaR_α̃^positive(X̃)    (6.17)

Fig. 6.7 A representation of V~aR and SV~aR in the operational risk context


In Eq. (6.17), ξ_α̃(X̃) is given by:

ξ_α̃(X̃) = (CDF_X̃(VaR_α̃(X̃)) − α̃) / (1 − α̃)    (6.18)

Let the fuzzy inferior value of SṼaR be denoted by SVaR_α̃^negative(X̃), such that [16]:

SVaR_α̃^negative(X̃) = E[X̃ | X̃ ≥ VaR_α̃(X̃)]    (6.19)

The definition in Eq. (6.19) coincides with SVaR_α̃(X̃) for fuzzy continuous distributions. Generally it is discontinuous in nature with respect to α̃. SṼaR is continuous with respect to α̃ and jointly convex in (X̃, α̃). If CDF_X̃(α̃) has a vertical discontinuity gap, the fuzzy inferior and superior endpoints of the interval are given by [16]:

α̃^negative = CDF_X̃(VaR_α̃^negative(X̃))    (6.20)

α̃^positive = CDF_X̃(VaR_α̃^positive(X̃))    (6.21)

When CDF_X̃(VaR_α̃^negative(X̃)) ≤ α̃ ≤ CDF_X̃(VaR_α̃(X̃)) ≤ 1, the VaR_α̃(X̃) atom has the fuzzy probability (α̃^positive − α̃^negative) [12] and is split into n fuzzy pieces.

6.6 Fuzzy Risk Measures

In conformance with the concept of risk measure discussed in Sect. 4.4 we develop the theory of the fuzzy risk measure. The fuzzy risk measure (F_RM) evolves from the fuzzy risk space, which is defined as the product of the uncertainty space (Γ, 𝓛, M) and the probability space (Ω, F, P) [9]. This leads to the triplet (Γ × Ω, 𝓛 × F, M × P), where Γ × Ω is the universal set, 𝓛 × F is the product σ-algebra and M × P is the product measure. The universal set Γ × Ω is the set of all ordered pairs (γ, ω) where γ ∈ Γ and ω ∈ Ω, that is,

Γ × Ω = {(γ, ω) | γ ∈ Γ ∧ ω ∈ Ω}    (6.22)

The product σ-algebra 𝓛 × F is the smallest σ-algebra containing the measurable rectangles Q × B where Q ∈ 𝓛 and B ∈ F. Any element in 𝓛 × F is called an event or occurrence in the fuzzy risk space. The product measure M × P is represented in terms of an event E in 𝓛 × F. For each ω ∈ Ω,


E_ω = {γ ∈ Γ | (γ, ω) ∈ E}    (6.23)

The set E_ω is an event in 𝓛. The uncertain measure M(E_ω) exists for each ω ∈ Ω. However, M(E_ω) is not necessarily a measurable function with respect to ω. In other words, for any real number r consider the set [1, 3]

Ω_r = {ω ∈ Ω | M(E_ω) ≥ r}    (6.24)

ð6:25Þ

In Eq. (6.25) th 2 ½0; 1. Now we state M  P of E as the expected value of MðEw Þ with respect to w 2 X as follows: Z1

PfXr gdr

ð6:26Þ

0

The integral in Eq. (6.26) is neither an uncertain measure nor a probability measure. It is called the fuzzy risk measure and is represented by F RMfE g. We now formally define fuzzy risk measure as follows: Let ðG; F ; MÞ  ðX; F; PÞ be a fuzzy risk space and E 2 F  F be an event, then the fuzzy risk measure of E is given by: Z1

F RMfE g ¼ Pfw 2 XjMfg 2 Gjðg; wÞ 2 E g  r gdr

ð6:27Þ

0

The F RMfE g is a monotonically increasing function of E, such that: F RMfQ  Bg ¼ MfQg  PfBg 8 Q 2 F ^ B 2 F

ð6:28Þ

Specifically, F RMfUg ¼ 0 and F RMfG  Xg ¼ 1. The F RMfEg is not self dual, that is for any event E we have, F RMfE g þ F RMfEc g 6¼ 1

ð6:29Þ

The F RMfEg is subadditive in nature, that is for any countable sequence of events E1 ; E2 ; . . . we have,

[email protected]

6.6 Fuzzy Risk Measures

125

Fig. 6.8 The 3-dimensional representation of fuzzy risk measures for a predefined risk level and the risk factors



X1 F RM [1 F RMfEi g i¼1 Ei  i¼1

ð6:30Þ

Figure 6.8 gives the 3-dimensional representation of fuzzy risk measures for a predefined risk level corresponding to the risk factors considered.

6.7

Fuzzy Deviation Measures

After the discussion of fuzzy risk measure in Sect. 6.4 we proceed to present the concept of fuzzy deviation measure here. The fuzzy deviation measure ðF DMÞ advances from the fuzzy deviation space which is defined as the product of the ~ F; ~ PÞ ~ [16, 17]. This uncertainty space ðG; F ; MÞ and fuzzy probability space ðX; ~ F  F; ~ is the universal set, ~ M  PÞ ~ where, G  X leads to the triplet ðG  X; ~ ~ F  F is the product fuzzy r algebra and M  P is the fuzzy product measure. The major difference in the terminologies of fuzzy risk and fuzzy deviation measures is the usage of probability space ðX; F; PÞ in the former and fuzzy probability space ~ F; ~ PÞ ~ in the later. This makes the fuzzy deviation space more confirmable to ðX; handle inherent vagueness and impreciseness present in real life data than the fuzzy ~ is set of all ordered pairs ðg; w ~ Þ where g 2 G risk space. The universal set G  X ~ ~ 2 X that is, and w

[email protected]

126

6 Possibilistic View of Operational Risk



~ ¼ ðg; w ~ ~ Þjg 2 G ^ w ~2X GX

ð6:31Þ

~ is the smallest r algebra containing vague The product fuzzy r algebra F  F ~ ~ Any element in F  F ~ measurable rectangles Q  B where, Q 2 F and tildeB 2 F. is called an event or occurrence in the fuzzy deviation space. The fuzzy product ~ ~ is represented in terms of an event E 0 in F  F. ~ For each w ~ 2 X, measure M  P ~ Þ 2 E0 g Ew0~ ¼ fg 2 Gjðg; w

ð6:32Þ

In Eq. (6.32) Ew0~ is an event in F . The uncertain measure here is denoted by ~ Unlike the earlier case MfEw~ 0 g is a mea~ 2 X. MfEw~ 0 g and it exists for each w ~ . Thus, for any real number s the set, surable function with respect to w

~ ~s ¼ w ~ 2 XjM X fXs g\s

ð6:33Þ

~ and an event in F. ~ is a subset of X ~ The fuzzy probability In Eq. (6.33) X s ~ s exists here. In this case we can assign: ~ X measure P 8

~ B ~ ; > min P > > ~s ~ F; ~ B ~ X > B2 <



~s ¼ ~ X P ~ B ~ ; max P > _ > > _B2F; ~s ~ ~B ~ X > : 0 th ;

if

min _

~~ X ~s ~ F ~ ;B B2

~~ B ~ [ th0 P



~ B ~ \th0 max P

if

ð6:34Þ

_

~s ~ F; ~B ~ X B2

otherwise

Equation (6.34) is represented in the light of maximum uncertainty principle.

~ s for any real ~ X This also ensures the existence of fuzzy probability measure P 0 0 ~ numbers. In Eq. (6.34) th 2 ½0; 1. The M  P of E is stated as the expected value

~ as follows: ~ 2X of M Ew0~ with respect to w Z

1



~ s dr ~ Q P

ð6:35Þ

0

The integral in Eq. (6.35) is called the fuzzy deviation measure and is represented by F_DM{E′}. We now formally define the fuzzy deviation measure as follows [4]: let (Γ, 𝓛, M) × (Ω̃, F̃, P̃) be a fuzzy deviation space and E′ ∈ 𝓛 × F̃ be an event; then the fuzzy deviation measure of E′ is given by:

F_DM{E′} = ∫_0^1 P̃{ω̃ ∈ Ω̃ | M{γ ∈ Γ | (γ, ω̃) ∈ E′} < s} ds    (6.36)

Now we give the definitions of the fuzzy versions of the α Value at Risk deviation measure and the α Subjective Value at Risk deviation measure, which are as follows [4]:

ṼaR_α̃^Δ(X̃) = ṼaR_α̃(X̃) − E[X̃]    (6.37)

SṼaR_α̃^Δ(X̃) = SṼaR_α̃(X̃) − E[X̃]    (6.38)

After presenting the reader with an overview of the fuzzy g-and-h distribution, ṼaR, SṼaR, fuzzy risk measures and fuzzy deviation measures in the previous sections, we briefly enumerate some observations on ṼaR and SṼaR which are worth mentioning in the context of banking and financial applications [4]:

(i) SṼaR has superior mathematical properties compared with ṼaR. The SṼaR of a portfolio is a continuous and convex function with respect to positions in financial instruments, whereas ṼaR may be a discontinuous function.

(ii) SṼaR deviation is a strong competitor to standard deviation. In almost all applications standard deviation can be replaced by SṼaR deviation. For instance, in finance SṼaR deviation can be used in concepts like the Sharpe ratio, portfolio beta, optimal portfolio mix and market equilibrium with one or multiple deviation measures [4].

(iii) Risk management with SṼaR functions can be done quite efficiently. SṼaR can be optimized and constrained with convex and linear programming methods, whereas ṼaR is relatively difficult to optimize.

(iv) The ṼaR risk measure does not control scenarios exceeding ṼaR. This property can be both positive and negative depending upon the stated objectives. The indifference of the ṼaR risk measure to extreme tails may be a good property if poor models are used for building distributions, since ṼaR disregards the part of the distribution for which only poor estimates are available. ṼaR estimates are statistically more stable than SṼaR estimates, which may actually lead to a superior out-of-sample performance of ṼaR versus SṼaR in some applications.

(v) The indifference of ṼaR to extreme tails may be quite an undesirable property, as it allows high uncontrollable risks to be taken.

(vi) SṼaR accounts for losses exceeding ṼaR. This property may be positive or negative depending upon the stated objectives. SṼaR provides an adequate picture of risks reflected in extreme tails, which is a very important property if the extreme tail losses are correctly estimated. SṼaR may have a relatively poor out-of-sample performance compared with ṼaR if tails are not modeled correctly.

(vii) Deviation and risk are quite different risk management concepts. A risk measure evaluates outcomes versus zero, whereas a deviation measure estimates the wideness of a distribution. For instance SṼaR risk may be positive or negative whereas SṼaR deviation is always positive. Therefore the Sharpe ratio [4] involves SṼaR deviation in the denominator rather than SṼaR risk.


6.8 Application: Fuzzy Subjective Value at Risk Optimization

Now we extend the concept of SVaR optimization discussed in Sect. 4.7 to fuzzy SVaR optimization, generally expressed as SṼaR optimization, in conformance with the fuzzy Subjective Value at Risk SṼaR defined on the fuzzy probability space (Ω̃, F̃, P̃) [16, 17]. Like SVaR, we express SṼaR by a minimization formula which can be incorporated into the optimization problem with respect to the decision variables x̃ ∈ X̃. This minimizes risk or reshapes it within bounds.

Let us consider a fuzzy random loss function f(x̃, ỹ) depending upon the fuzzy decision vector x̃ and the fuzzy random vector ỹ of risk factors, such that we can derive SṼaR from the following alternative expression:

G_α̃(x̃, w̃) = w̃ + (1 / (1 − α̃)) E[f(x̃, ỹ) − w̃]⁺    (6.39)

The function G_α̃(x̃, w̃) is convex with respect to w̃, and ṼaR_α̃(x̃) is a minimum point of G_α̃(x̃, w̃) with respect to w̃. On minimizing G_α̃(x̃, w̃) with respect to w̃ we have [11, 13, 17]:

SṼaR_α̃(x̃) = min_{w̃} G_α̃(x̃, w̃)    (6.40)

P h i 1 X ~ ~pk f ð~x; ~yk Þ  w i 1  ~ai k¼1

ð6:41Þ

  ~ w ~ can be replaced by a system of inequalities by The constraint G~a ~x; w introducing additional variables ~gk such that [11]: ~gk  0;

_

~  ~g  0; f ð~x; ~yk Þ  w

k ¼ 1; . . .; P

k

[email protected]

ð6:42Þ

6.8 Application: Fuzzy Subjective Value at Risk Optimization

129

1 XP ~p ~g  w ~ ð6:43Þ k¼1 k k 1  ~a For a detailed insight in this application interested readers can refer [4]. ~þ w
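At a fixed α-level with crisp scenario data, Eqs. (6.39)–(6.43) reduce to the familiar scenario-based linear program for minimizing SVaR over portfolio weights. The following MATLAB sketch (using linprog from the Optimization Toolbox) is a minimal illustration under hypothetical scenario returns; it is not the full fuzzy formulation, whose parameters would themselves be fuzzy numbers.

```matlab
% Scenario-based SVaR (CVaR) minimization at a crisp alpha-level,
% following the linearization in Eqs. (6.41)-(6.43).
rng(1);
P = 1000; n = 4; alpha = 0.95;
R = 0.001 + 0.02 * randn(P, n);           % hypothetical scenario returns (P scenarios, n assets)
p = ones(P, 1) / P;                       % equal scenario probabilities

% Decision vector: z = [x (n weights); w (threshold); g (P auxiliary variables)].
f = [zeros(n, 1); 1; p / (1 - alpha)];    % objective: w + 1/(1-alpha) * sum_k p_k g_k

% Constraints: loss_k - w <= g_k, i.e. -R*x - w - g <= 0; g >= 0 via lower bounds.
A   = [-R, -ones(P, 1), -eye(P)];
b   = zeros(P, 1);
Aeq = [ones(1, n), 0, zeros(1, P)];       % fully invested: sum of weights = 1
beq = 1;
lb  = [zeros(n, 1); -Inf; zeros(P, 1)];   % long-only weights, free threshold, g >= 0

z = linprog(f, A, b, Aeq, beq, lb, []);
x = z(1:n);  w = z(n+1);  svar = f' * z;
fprintf('optimal weights: %s\n', mat2str(x', 3));
fprintf('VaR threshold w = %.4f, minimized SVaR = %.4f\n', w, svar);
```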

References 1. Ash, R.B., Doléans-Dade, C.A.: Probability and Measure Theory, 2nd edn. Academic Press, San Diego (2000) 2. Ayyub, B.M., Klir, G.J.: Uncertainty Modeling and Analysis in Engineering and the Sciences. Chapman and Hall/CRC, Taylor and Francis (2006) 3. Borell, C.: Lecture Notes on Measure Theory and Probability, Matematik. Chalmers och Göteborgs Universitet, Göteborg (2012) 4. Chaudhuri, A.: A Study of Operational Risk using Possibility Theory. Technical Report. Birla Institute of Technology Mesra, Patna Campus, India (2010) 5. Cohen, L.J.: The Probable and the Provable. Oxford University Press, Clarendon (1977) 6. Daníelsson, J., Jorgensen, B.N., Samorodnitsky, G., Sarma, M., De Vries, C.G.: Subadditivity Re-Examined: The Case for Value at Risk, Preprint. London School of Economics, London (2005) 7. Dutta, K., Perry, J.: A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper Number 6–13 (2006) 8. King, J.L.: Operational Risk: Measurement and Modeling. The Wiley Finance Series, 1st edn. Wiley, New York (2001) 9. Kolmogorov, A.N.: Foundations of the Theory of Probability. Chelsea Publishing Company, New York (1957) 10. Laha, R.G., Rohatgi, V.K.: Probability Theory. Wiley Series in Probability and Mathematical Statistics, vol. 43. Wiley, New York (1979) 11. Lodwick, W.A.: Fuzzy Optimization: Recent Advances and Applications. Studies in Fuzziness and Soft Computing. Springer, Berlin (2010) 12. Matthys, G., Beirlant, J.: Adaptive Threshold Selection in Tail Index Estimation. In: Embrechts P. (ed.) Extremes and Integrated Risk Management, pp. 37–49. Risk Waters Group, London (2000) 13. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. John Wiley and Sons, New York (2009) 14. Ruspini, E.H.: On the semantics of fuzzy logic. Int. J. Approx. Reason. 5, 45–88 (1991) 15. Shackle, G.L.S.: Decision, Order and Time in Human Affairs, 2nd edn. Cambridge University Press, UK (1961) 16. Talašová, J., Pavlačka, O.: Fuzzy probability spaces and their applications in decision making. Austrian J. Stat. 35(2–3), 347–356 (2006) 17. Xia, Z.: Fuzzy probability system: fuzzy probability space (1). Fuzzy Sets Syst. 120(3), 469– 486 (2001) 18. Zadeh, L.A.: A note on similarity based definitions of possibility and probability. Inf. Sci. 267, 334–336 (2014) 19. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978) 20. Zadeh, L.A.: Fuzzy set. Inf. Control 8(3), 338–353 (1965) 21. Zeyl, D.J. (ed.): Encyclopedia of Classical Philosophy. Routledge, New York (2013) 22. Zimmermann, H.J.: Fuzzy Set Theory and its Applications, 4th edn. Kluwer Academic Publishers, Massachusetts (2001)


Chapter 7

Simulation Results

Abstract In this chapter the simulation results are presented from the experiments performed on the applications of value at risk (VaR) and subjective value at risk (SVaR), as well as the fuzzy versions of VaR and SVaR, in several optimization settings. The problem of risk control is presented using VaR and fuzzy VaR estimates along with the linear regression hedging problem. The equivalence of chance and value at risk constraints is illustrated through an example. The problem of portfolio rebalancing strategies in the context of risk and deviation concludes the chapter. The risk management experiments are performed in MATLAB.







Keywords: Simulation · Optimization · Risk control · Linear regression hedging · Portfolio rebalancing

7.1 Introduction

This chapter presents the results of several experiments exemplifying the use of value at risk (VaR) and subjective value at risk (SVaR), as well as the fuzzy versions of VaR and SVaR, in the optimization settings [4] highlighted in Chaps. 4 and 6 respectively. Section 7.2 starts with the problem of risk control using VaR and fuzzy VaR estimates. The linear regression hedging problem is given in Sect. 7.3 considering the different estimates. Section 7.4 explains the equivalence of chance and value at risk constraints through an example. Finally, the chapter concludes with the problem of portfolio rebalancing strategies [2] in the context of risk and deviation. The optimization problems are constructed giving due consideration to the input variables and the corresponding parameters with respect to certain constraints [17]. The input variables considered here are continuous and the problems are constrained in nature. The major objective is to find the best solution from all feasible solutions [4].


The entire computational framework is based on the basic model of portfolio optimization [2] derived from the seminal work of Markowitz on mean–variance analysis [13, 14]. The basic idea adheres to modern portfolio theory [7], which is a mathematical formulation of diversification in investing with the aim of selecting a collection of investment assets that collectively has lower risk than any individual asset. This is possible because different types of assets often change in value in opposite ways. Diversification lowers risk even when assets' returns are not negatively correlated, indeed even when they are positively correlated, provided the correlation is not perfect. The theory models an asset's return as a normally or elliptically distributed random variable, defines risk as the standard deviation of return and models a portfolio as a weighted combination of assets, so that the return of a portfolio is the weighted combination of the assets' returns. By combining different assets whose returns are not perfectly positively correlated it seeks to reduce the total variance of the portfolio return. It is assumed a priori that investors are rational and markets are efficient [8]. However, certain theoretical and practical criticisms may be raised. These include evidence that financial returns do not follow a Gaussian distribution and that correlations between asset classes are not fixed but can vary depending on external events, especially in crises. Further, there remains evidence that investors are not rational and markets may not be efficient. Also, the low volatility anomaly conflicts with the assumption of higher risk for higher return, in that a portfolio consisting of low volatility equities reaps higher risk-adjusted returns than a portfolio with high volatility equities [4]. Here a set of n assets is considered, each associated with an expected return per period and each asset pair with a covariance. A benchmark dataset of extended portfolio optimization problem instances with additional constraints and transaction costs has been adopted for this work from [1]. The problems are solved with simultaneous constraints on various risks at different time intervals, such as multiple constraints on standard deviation obtained by resampling combined with VaR or V~aR and drawdown constraints, thus allowing robust decision making. For solving large scale problems the algorithms were fine-tuned towards analysing solutions generated by optimization or through other techniques as and when required. The MATLAB environment was used to perform these risk management experiments [4].
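As a minimal illustration of this basic mean–variance model, the following MATLAB sketch can be used. It is not the code used for the experiments in this chapter; the scenario return matrix R and the target return r_target are assumed, hypothetical inputs, and the long-only Markowitz problem is solved with quadprog.

```matlab
% Minimal mean-variance sketch, assuming R is a J x n matrix of scenario
% returns and r_target is the required expected portfolio return.
function x = markowitz_portfolio(R, r_target)
    [~, n] = size(R);
    Sigma = cov(R);                    % sample covariance of asset returns
    mu    = mean(R, 1)';               % expected returns (n x 1)
    H = 2 * Sigma;  f = zeros(n, 1);   % objective: x' * Sigma * x
    A   = -mu';      b   = -r_target;  % expected return >= r_target
    Aeq = ones(1,n); beq = 1;          % budget constraint: weights sum to 1
    lb  = zeros(n,1);                  % long-only positions
    x = quadprog(H, f, A, b, Aeq, beq, lb, []);
end
```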

7.2 Risk Control Estimates

Risk control estimation [5] focuses on obtaining problem-relevant information, including the previously developed scope of the problem, schedule details and data, so that the control estimate and the corresponding cost can be prepared. The scope level varies depending on the phase, type and complexity of the problem and also includes the design matrix and criteria, all assumptions and pertinent details. These components are estimated using different techniques depending on the scope, size and complexity of the problem. The end result of risk control estimation should


lead to a complete, traceable history for each estimate. As the design progresses into the final phases and more details are revealed, the items within the estimate become more exact. The key inputs at this point in the delivery include a more detailed scope, historical and other cost databases, knowledge of market conditions and the use of escalation rates [4]. During the planning phase the schedule is cursory and very general in its coverage; however, major milestones are always included. Now we first consider the problem of risk control using VaR and V~aR estimates [4]. Risk control using these two estimates may often lead to paradoxical results for skewed distributions, where the data points cluster more toward one side of the scale than the other, creating an asymmetrical curve. The distribution may be either positively or negatively skewed. In a positively skewed curve the scores fall towards the lower side of the scale and there are very few higher scores; the data are said to be skewed to the right, in the direction of the long tail of the chart. In a negatively skewed curve the scores fall towards the higher side of the scale and there are very few low scores. The minimization of these estimates may lead to a stretching of the tail of the distribution beyond them. The purpose of both VaR and V~aR minimization is to reduce extreme losses. However, minimization of these estimates may lead to an increase in the very extreme losses that we are trying to control. This is an undesirable feature of both estimates. This fact has been demonstrated for VaR by various researchers [3, 10, 15, 16], whose algorithms are based on the minimization of SVaR. The minimization of VaR leads to about a 16 % increase of the average loss for the worst 1 % of scenarios compared with the worst 5 % of scenarios in the SVaR minimum solution [24]. These experimental results are in conformance with theoretical results. SVaR and SV~aR are coherent whereas VaR and V~aR are non-coherent measures of risk. Figure 7.1 shows how, over the iterations, VaR and V~aR decrease while SVaR and SV~aR increase. Similar results are

Fig. 7.1 VaR and V~aR minimization


observed for both VaR and V~aR on a portfolio consisting of 20 bonds modeled with 1000 scenarios. We now consider the following four optimization problems [12, 18, 19]. In the first problem, we minimized the 99 % SVaR deviation of losses subject to constraints on budget and required return. In the second problem, we minimized the 99 % VaR deviation of losses subject to the same constraints. The third and fourth problems are the fuzzy versions of the SVaR and VaR problems [4]. The variables used in the first and second problems are crisp [19] whereas the variables in the third and fourth problems are possibilistic [4, 23].

Problem 1:
$$\min_{x}\ \mathrm{SVaR}^{\Delta}_{\alpha}(x) \quad \text{subject to} \quad \sum_{i=1}^{n} r_i x_i \ge r_{\mathrm{lower}}, \qquad \sum_{i=1}^{n} x_i = 1$$

Problem 2:
$$\min_{x}\ \mathrm{VaR}^{\Delta}_{\alpha}(x) \quad \text{subject to} \quad \sum_{i=1}^{n} r_i x_i \ge r_{\mathrm{lower}}, \qquad \sum_{i=1}^{n} x_i = 1$$

Problem 3:
$$\min_{\tilde{x}}\ \mathrm{S\widetilde{V}aR}^{\Delta}_{\tilde{\alpha}}(\tilde{x}) \quad \text{subject to} \quad \sum_{i=1}^{n} \tilde{r}_i \tilde{x}_i \ge \tilde{r}_{\mathrm{lower}}, \qquad \sum_{i=1}^{n} \tilde{x}_i = 1$$

Problem 4:
$$\min_{\tilde{x}}\ \mathrm{\widetilde{V}aR}^{\Delta}_{\tilde{\alpha}}(\tilde{x}) \quad \text{subject to} \quad \sum_{i=1}^{n} \tilde{r}_i \tilde{x}_i \ge \tilde{r}_{\mathrm{lower}}, \qquad \sum_{i=1}^{n} \tilde{x}_i = 1$$

In the above expressions, $x$ is the vector of portfolio weights, $r_i$ is the rate of return of asset $i$ and $r_{\mathrm{lower}}$ is the lower bound on the estimated portfolio return. The corresponding fuzzy versions of the portfolio weights, the rate of return of asset $i$ and the lower bound on the estimated portfolio return are given by $\tilde{x}$, $\tilde{r}_i$ and $\tilde{r}_{\mathrm{lower}}$ respectively. The deviation risk functions are evaluated at the optimal points of problems 1, 2, 3 and 4. The results are given in Table 7.1 and illustrated in Sects. 7.2.1 and 7.2.2.
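A minimal MATLAB sketch of the crisp SVaR minimization in Problem 1 is given below. It is not the authors' implementation; the scenario return matrix R, the confidence level alpha and the required return r_lower are assumed inputs, and the SVaR of losses is minimized through the standard linear programming reformulation with auxiliary variables, analogous to Eqs. (6.42) and (6.43).

```matlab
% Minimal sketch, not the authors' code: minimize portfolio SVaR (CVaR) of
% losses over equally likely scenarios, assuming R (J x n scenario returns),
% alpha (e.g. 0.99) and r_lower (required expected return).
function [x, svar] = min_svar_portfolio(R, alpha, r_lower)
    [J, n] = size(R);
    % decision vector: [x(1..n); w; z(1..J)], with z_k the auxiliary excesses
    f = [zeros(n,1); 1; ones(J,1) / ((1 - alpha) * J)];
    A1 = [-R, -ones(J,1), -eye(J)];          % z_k >= -R(k,:)*x - w
    b1 = zeros(J,1);
    A2 = [-mean(R,1), 0, zeros(1,J)];        % mean return >= r_lower
    b2 = -r_lower;
    Aeq = [ones(1,n), 0, zeros(1,J)];        % budget: sum of weights = 1
    beq = 1;
    lb  = [zeros(n,1); -Inf; zeros(J,1)];    % long-only weights, z >= 0
    sol = linprog(f, [A1; A2], [b1; b2], Aeq, beq, lb, []);
    x    = sol(1:n);
    svar = f' * sol;                         % optimal SVaR of portfolio losses
end
```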


Table 7.1 The value of risk functions

Risk function                          min SVaR^Δ_0.99(x)   min VaR^Δ_0.99(x)   Ratio
SVaR_0.99                              0.0072               0.0082              1.14
SVaR^Δ_0.99                            0.0362               0.0372              1.02
VaR_0.99                               0.0021               0.0005              0.23
VaR^Δ_0.99                             0.0311               0.0296              0.95
Max loss = SVaR_1                      0.0132               0.0147              1.11
Max loss deviation = SVaR^Δ_1          0.0421               0.0436              1.03
SV~aR_0.99                             0.0067               0.0079              1.17
SV~aR^Δ_0.99                           0.0359               0.0369              1.02
V~aR_0.99                              0.0019               0.0004              0.21
V~aR^Δ_0.99                            0.0309               0.0295              0.95
Max loss = SV~aR_1                     0.0130               0.0145              1.11
Max loss deviation = SV~aR^Δ_1         0.0419               0.0434              1.03

7.2.1 Value at Risk

Table 7.1 states the results of VaR and the corresponding SVaR for problems 1 and 2. We start our calculations with the portfolio having the minimal 99 % SVaR deviation. The minimization of the 99 % VaR deviation leads to a 14 % increase in the 99 % SVaR when compared with the 99 % SVaR in the optimal 99 % SVaR deviation portfolio [14, 18].

7.2.2 Fuzzy Value at Risk

Table 7.1 also states the results of V~aR and the corresponding SV~aR for problems 3 and 4. Here too we start our calculations with the portfolio having the minimal 99 % SV~aR deviation. The minimization of the 99 % V~aR deviation leads to a 17 % increase in the 99 % SV~aR when compared with the 99 % SV~aR in the optimal 99 % SV~aR deviation portfolio [4, 14, 18]. It is observed that even in a problem with a relatively small number of scenarios, if the distribution is skewed the minimization of both the VaR and V~aR deviations may lead to a stretching of the tail compared with both the SVaR and SV~aR optimal portfolios. This is an important result from the point of view of financial risk management regulations like Basel II that are based on the minimization of both the VaR and V~aR deviations.

7.3 Linear Regression Hedging

Next we investigate the performance of optimal hedging strategies based on different deviation measures, focusing on the quality of hedging. This is a commonly used strategy when companies use derivative instruments to hedge economic exposures [11]. Without this, derivative gains or losses and the gains or losses associated with the risk being hedged hit earnings in different time periods, and the resulting income volatility masks the objectives of the hedging strategy. The most onerous requirement for hedging is that the derivative's results must be expected to be highly effective in offsetting changes in fair value or cash flows associated with the risks being hedged. In portfolio optimization [2] the objective is to form a portfolio of financial instruments that mimics the benchmark portfolio. The weights in the replicating portfolio are chosen such that the deviation between the value of the replicating portfolio and the value of the benchmark is minimized. The benchmark value and the replicating financial instrument values are random in nature. The determination of the optimal hedging strategy is a linear regression problem where the response is the benchmark portfolio value, the predictors are the replicating financial instrument values, and the coefficients of the predictors to be determined are the portfolio weights. Let $\hat{\phi}$ be the replicating portfolio value, $\phi_0$ the benchmark portfolio value, $\phi_1,\dots,\phi_I$ the replicating instrument values and $x_1,\dots,x_I$ their weights, such that the replicating portfolio value can be expressed as follows [14, 22]:

$$\hat{\phi} = x_1\phi_1 + \dots + x_I\phi_I \qquad (7.1)$$

The coefficients $x_1,\dots,x_I$ should be chosen such that the replication error function, based upon the residual $\phi_0 - \hat{\phi}$, is minimized. According to the equivalence of Eqs. (4.40) and (4.48), an error minimization problem is equivalent to the minimization of an appropriate deviation [20, 21]. Here we consider hedging pipeline risk in the mortgage underwriting process. The hedging instruments considered are 5 % mortgage backed securities forward, 5.5 % mortgage backed securities and call options on 10 year treasury note futures. The changes in the values of the benchmark and the hedging instruments are driven by changes in the mortgage rate. We minimized five different deviation measures, viz. standard deviation, mean absolute deviation, SVaR deviation, two tailed 75 % VaR deviation and two tailed 90 % VaR deviation, and tested the in sample and out of sample performance of the hedging strategies. On the scenario set, two tailed 90 % VaR has the best out of sample performance whereas the standard deviation has the worst out of sample performance. The out of sample performance of hedging strategies based on different deviation measures depends significantly on the skewness of the distribution. The distribution of residuals


here is quite skewed. We consider the following definition of the loss and deviation functions [9, 22]:

$$LF(x,\phi) = LF(x_1,\dots,x_I;\phi_0,\dots,\phi_I) = \phi_0 - \sum_{i=1}^{I}\phi_i x_i \qquad (7.2)$$

The loss function given by Eq. (7.2) has $J$ scenarios $LF(x,\phi^1),\dots,LF(x,\phi^J)$, each with probability $p_j,\ j = 1,\dots,J$. The deviation measures considered here are briefly enumerated below [6, 9, 22]:

$$\text{Mean Absolute Deviation} = \mathrm{MAbDev}(LF(x,\phi)) \qquad (7.3)$$

$$\text{Standard Deviation} = \mathrm{SDev}(LF(x,\phi)) \qquad (7.4)$$

$$\alpha\,\%\ \text{VaR Deviation} = \mathrm{VaR}^{\Delta}_{\alpha}(LF(x,\phi)) \qquad (7.5)$$

$$\alpha\,\%\ \text{SVaR Deviation} = \mathrm{SVaR}^{\Delta}_{\alpha}(LF(x,\phi)) \qquad (7.6)$$

$$\text{Two Tail}\ \alpha\,\%\ \text{VaR Deviation} = \mathrm{TwTaVaR}^{\Delta}_{\alpha}(LF(x,\phi)) = \mathrm{VaR}_{\alpha}(LF(x,\phi)) + \mathrm{VaR}_{\alpha}(-LF(x,\phi)) \qquad (7.7)$$
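The following MATLAB sketch (not the authors' code) shows how these deviation measures can be estimated from a sample of equally likely scenario losses. The vector loss is an assumed input, and the centering by the sample mean reflects the standard definition of a deviation measure as the risk of the centered loss, which is assumed here.

```matlab
% Minimal sketch: empirical estimates of the deviation measures in
% Eqs. (7.3)-(7.7) for a vector 'loss' of equally likely scenario losses.
function D = deviation_measures(loss, alpha)
    J  = numel(loss);
    s  = sort(loss(:));
    v  = s(ceil(alpha * J));                    % empirical VaR_alpha of losses
    sn = sort(-loss(:));
    vn = sn(ceil(alpha * J));                   % empirical VaR_alpha of -losses
    D.MAbDev     = mean(abs(loss - mean(loss)));          % Eq. (7.3)
    D.SDev       = std(loss);                              % Eq. (7.4)
    D.VaRDev     = v - mean(loss);                          % Eq. (7.5)
    excess       = max(loss - v, 0);
    D.SVaRDev    = v + mean(excess)/(1 - alpha) - mean(loss);  % Eq. (7.6)
    D.TwTaVaRDev = v + vn;                                  % Eq. (7.7)
end
```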

Now, with respect to the above deviation measures, we solved the following minimization problems [6, 19]:

$$\text{Minimize 90 % SVaR Deviation:} \quad \min_{x}\ \mathrm{SVaR}^{\Delta}_{0.9}(LF(x,\phi)) \qquad (7.8)$$

$$\text{Minimize Mean Absolute Deviation:} \quad \min_{x}\ \mathrm{MAbDev}(LF(x,\phi)) \qquad (7.9)$$

$$\text{Minimize Standard Deviation:} \quad \min_{x}\ \mathrm{SDev}(LF(x,\phi)) \qquad (7.10)$$

$$\text{Minimize Two Tail 75 % VaR Deviation:} \quad \min_{x}\ \mathrm{TwTaVaR}^{\Delta}_{0.75}(LF(x,\phi)) \qquad (7.11)$$

$$\text{Minimize Two Tail 90 % VaR Deviation:} \quad \min_{x}\ \mathrm{TwTaVaR}^{\Delta}_{0.9}(LF(x,\phi)) \qquad (7.12)$$

The data set includes 1000 scenarios of value changes for each hedging instrument and the benchmark [4]. For out of sample testing, we partitioned the 1000 scenarios into 10 groups with 100 scenarios in each group. Each time one group was selected for the out of sample test and the optimal hedging positions were calculated based on the remaining nine groups containing 900 scenarios. For each group of 100 scenarios, the ex-ante losses were calculated with the optimal hedging positions obtained from the 900 scenarios. This procedure was repeated 10 times, once for every out of sample group with 100 scenarios. To estimate the out of sample performance, the out of sample losses were aggregated from the 10 runs into a combined set including 1000 out of


sample losses. Then five deviation measures were calculated based on the 1000 out of sample losses, viz. standard deviation, mean absolute deviation, SVaR deviation, two tailed 75 % VaR deviation and two tailed 90 % VaR deviation. Also three downside risk measures were calculated on the out of sample losses, viz. 90 % SVaR, 90 % VaR and 100 % SVaR. The results are given in Tables 7.2 and 7.3.
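A minimal MATLAB sketch of this 10-fold out of sample procedure is given below. It is not the authors' code; the scenario matrices PHI (hedging instrument value changes) and phi0 (benchmark value changes) as well as the hedging routine fit_hedge are assumed, hypothetical inputs.

```matlab
% Minimal sketch of the 10-fold out of sample test, assuming PHI (1000 x I
% hedging instrument value changes), phi0 (1000 x 1 benchmark value changes)
% and a hypothetical routine fit_hedge that minimizes a chosen deviation.
J = size(PHI, 1);                      % 1000 scenarios
idx = reshape(randperm(J), [], 10);    % split into 10 groups of 100
oosLoss = zeros(J, 1);
for g = 1:10
    testIdx  = idx(:, g);
    trainIdx = setdiff((1:J)', testIdx);
    x = fit_hedge(PHI(trainIdx,:), phi0(trainIdx));        % weights from 900 scenarios
    oosLoss(testIdx) = phi0(testIdx) - PHI(testIdx,:) * x; % ex-ante losses, Eq. (7.2)
end
oosStd = std(oosLoss);                 % out of sample standard deviation, etc.
```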

7.3.1 Value at Risk Deviation

The two tailed VaR deviations are minimized [4] through Eqs. (7.11) and (7.12) with respect to the deviation measures stated in Eqs. (7.3)–(7.7). Tables 7.2 and 7.3 highlight the results for the VaR deviation based portfolios. By minimizing the two tailed 90 % VaR deviation the best values were obtained for all three considered downside risk measures, viz. 90 % SVaR, 90 % VaR and 100 % SVaR, on the out of sample losses.

7.3.2 Subjective Value at Risk Deviation

The SVaR deviation is minimized [4] through Eq. (7.8) with respect to the deviation measures stated in Eqs. (7.3)–(7.7). Tables 7.2 and 7.3 highlight the results for the SVaR deviation based portfolio. The minimization of the SVaR deviation leads to good results whereas the minimization of the standard deviation gave the worst levels for the three downside risk measures, viz. 90 % SVaR, 90 % VaR and 100 % SVaR, on the out of sample losses.

Table 7.2 The out of sample performance of various deviations on optimal hedging portfolios

Optimal points        SVaR^Δ_0.9   MAbDev   SDev    TwTaVaR^Δ_0.75   TwTaVaR^Δ_0.9
SVaR^Δ_0.9            0.670        0.719    1.950   0.272            1.121
MAbDev                1.134        0.699    1.638   0.375            1.875
SDev                  1.399        0.638    1.109   0.975            1.825
TwTaVaR^Δ_0.75        1.309        0.950    1.945   0.996            1.500
TwTaVaR^Δ_0.9         0.919        0.738    1.819   0.637            1.255

Table 7.3 The out of sample performance of various downside risks on optimal hedging portfolios

Optimal points        Max loss   SVaR_0.9   VaR_0.9
SVaR^Δ_0.9            −20.03     −20.04     −20.06
MAbDev                −18.46     −18.66     −18.88
SDev                  −15.03     −16.26     −16.68
TwTaVaR^Δ_0.75        −16.30     −16.92     −16.85
TwTaVaR^Δ_0.9         −20.02     −20.50     −18.74

7.3.3 Mean Absolute Deviation

The mean absolute deviation is minimized [4] through Eq. (7.9) with respect to the deviation measures stated in Eqs. (7.3)–(7.7). Tables 7.2 and 7.3 highlight the results for the mean absolute deviation based portfolio.

7.3.4 Standard Deviation

The standard deviation is minimized [4] through Eq. (7.10) with respect to the deviation measures stated in Eqs. (7.3)–(7.7). Tables 7.2 and 7.3 highlight the results for the standard deviation based portfolio.

7.4 Example: Equivalence of Chance and Value at Risk Constraints

Now we consider a situation illustrating the equivalence between chance constraints and VaR constraints [4]; here we present the equivalence empirically. Several engineering applications deal with the equivalence of chance and VaR constraints. Chance constraints are probabilistic in nature and occur in situations such as the reliability of a system or the likelihood that a delivery system meets a demand. In portfolio management it is often required that the portfolio loss should not, with high reliability, exceed some value. In these cases an optimization model can be set up so that the constraints are required to be satisfied with some probability level. Chance constraints and VaR (percentile) constraints are closely related. Here we illustrate the equivalence of the constraints numerically. Let us consider the following expression [9, 22]:

$$\mathrm{Prob}\{LF(x,\phi) > m\} \le 1 - \alpha \iff \mathrm{VaR}_{\alpha}(LF(x,\phi)) \le m \qquad (7.13)$$

In Eq. (7.13), $LF(x,\phi)$ represents the loss function which appears in both the chance and the VaR expressions. The loss function $LF(x,\phi)$ is given by:

$$LF(x,\phi) = LF(x_1,\dots,x_I;\phi_1,\dots,\phi_I) = -\sum_{i=1}^{I}\phi_i x_i \qquad (7.14)$$


Here we consider a data set including 1000 return scenarios for 10 clusters of loans. The number of instruments is $I = 10$; $\phi_1,\dots,\phi_I$ are the rates of return of the instruments, $x_1,\dots,x_I$ are the instrument weights and $LF(x,\phi)$ is the portfolio loss. We solved two portfolio optimization problems; in both cases we maximized the estimated return of the portfolio. In the first problem, we imposed a constraint on probability: the probability of losses greater than $m$ must be lower than $1-\alpha = 1 - 0.95 = 0.05$. In the second problem, we imposed the equivalent constraint on VaR: the 95 % VaR of the optimal portfolio must be at most equal to the constant $m$. It was expected that at optimality the two problems give the same objective function value and similar optimal portfolios. The problem formulations are as follows [6, 14, 18]:

Problem 1:
$$\max\ Eqvc[-LF(x,\phi)] \quad \text{subject to} \quad \mathrm{Prob}\{LF(x,\phi) > m\} \le 1-\alpha = 0.05, \quad v_i \le x_i \le w_i,\ i=1,\dots,I, \quad \sum_{i=1}^{I} x_i = 1$$

Problem 2:
$$\max\ Eqvc[-LF(x,\phi)] \quad \text{subject to} \quad \mathrm{VaR}_{\alpha}(LF(x,\phi)) \le m, \quad v_i \le x_i \le w_i,\ i=1,\dots,I, \quad \sum_{i=1}^{I} x_i = 1$$

In the above problems, $v_i$ is the lower bound and $w_i$ is the upper bound on the position in asset $i$, and the budget constraint requires the sum of the weights to equal 1. At optimality the two problems selected the same portfolio and have the same objective function value of 121.86. Table 7.4 shows the optimal points.
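The empirical equivalence in Eq. (7.13) can be checked directly on scenario data, as in the following MATLAB sketch. It is not the authors' code; the scenario matrix PHI and the weight vector x are assumed, hypothetical inputs, as are the numerical values of alpha and m.

```matlab
% Minimal sketch: checking the chance/VaR constraint equivalence of Eq. (7.13)
% on scenario data, assuming PHI (J x I scenario returns) and weights x.
alpha = 0.95;  m = 0.02;                  % confidence level and loss threshold
loss  = -PHI * x;                         % scenario losses as in Eq. (7.14)
J     = numel(loss);
s     = sort(loss);
VaR   = s(ceil(alpha * J));               % empirical alpha-quantile of losses
chanceOK = mean(loss > m) <= 1 - alpha;   % chance constraint satisfied?
varOK    = VaR <= m;                      % VaR constraint satisfied?
% Up to ties at the quantile, chanceOK and varOK agree on scenario data.
```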

Table 7.4 Chance versus VaR constraints

Optimal weights   Prob ≤ 0.05   VaR ≤ m
x1                0.055         0.055
x2                0.057         0.057
x3                0.072         0.072
x4                0.054         0.054
x5                0.075         0.075
x6                0.286         0.286
x7                0.021         0.021
x8                0.296         0.296
x9                0.059         0.059
x10               0.025         0.025

7.5 Portfolio Rebalancing Strategies: Risk Versus Deviation

Here we consider a portfolio rebalancing problem [2, 4]. A portfolio manager allocates his wealth to different funds periodically by solving an optimization problem; in each time period he builds a portfolio that minimizes a certain risk function given budget constraints and bounds on each exposure. Disciplined investors generally strive for balance in their portfolios. The asset allocation mix of volatile stocks and more stable bonds largely dictates how difficult the ride is going to be. A portfolio with 90 % exposure to equities is going to feel like being in a race car, while a portfolio of 90 % high-quality fixed income might feel more like riding a horse-drawn carriage. Either of these is fine as long as it matches the user's expectations and requirements. Choosing the right asset allocation is likely the most important investment decision for the user, so it is crucial to get it right. The target mix of stocks and bonds should be based on the time horizon, the rate of return required to meet the goals and the user's own comfort level with the markets' ups and downs. There is only one problem from day one, viz. the actual asset allocation changes as the markets move. Say we have a portfolio with a target mix of 50 % stocks and 50 % bonds. If stocks soar and bonds plummet, or vice versa, the allocation could drift to 60 % equities and 40 % bonds. Then the designed portfolio would have fundamentally different risk and return characteristics. This is why we occasionally need to pull off the road and rebalance by adding money to the asset classes that are below their targets and trimming back those above them. That restores the portfolio to its original asset mix and keeps the risk under control. If the goal of rebalancing is primarily to maintain a target asset allocation and manage risk, then the question is how often it should be done. Before this issue is tackled, two main criteria for triggering a rebalance need to be addressed. The first is time, where a rebalance might be scheduled once or twice a year on predetermined dates. The second is to rebalance only when an asset class drifts away from its target by a specific amount. Rebalancing is often done purely based on the calendar, and the answer depends on the data considered. Studies have suggested that rebalancing once every two years is best because it allows one to take advantage of momentum; this makes sense since bull runs tend to last a few years. Rebalancing based on thresholds requires a closer eye to be kept on the portfolio. With a simple 50/50 target an investor might decide to rebalance any time stocks or bonds drop below 45 % or rise above 55 %, and the volatility of the markets dictates how often this happens. If rebalancing is shown to produce a bump in returns in volatile markets then this might be worth the hassle. If there is a regular addition to or withdrawal from the portfolio, an opportunity arises to keep the asset allocation on target over time: the contributions are directed to the assets below target and withdrawals are taken from those that are overweight. If the portfolio is small relative to these contributions or withdrawals the cash flows alone could keep things on target, though with larger portfolios they may not move the allocations enough.


On the surface rebalancing seems like an almost trivial concept: what is hard about keeping the portfolio's risk in line with a long-term target by making a couple of trades once or twice a year? This view ignores how emotionally difficult it can be to execute a simple rebalancing plan. Threshold rebalancing can be especially difficult. With time based rebalancing the scheduled date will often come after a period when stocks and bonds have both gone up but at different rates, which is an easier situation to deal with emotionally. But when rebalancing is done with thresholds, trades are only made after one asset class has fallen hard. While it can be tempting to look for the optimal rebalancing strategy, the experts suggest not getting bogged down in the details. While savings are being built up, rebalancing with new cash flows will probably need to be done for a few years. Once the portfolio starts getting bigger it has to be decided whether to rebalance based on time, thresholds or a combination of both. Let us consider the following optimization problem [6, 13, 19]:

$$\min_{x}\ T(x,\phi) \quad \text{subject to} \quad k \le Eqvc[-LF(x,\phi)], \quad \sum_{i=1}^{I} x_i = 1, \quad v_i \le x_i \le w_i,\ i = 1,\dots,I$$

In the above expression $T(x,\phi)$ is the risk function, $Eqvc[-LF(x,\phi)]$ is the expected portfolio return, $\phi_1,\dots,\phi_I$ are the rates of return of the instruments and $x_1,\dots,x_I$ are the instrument weights. The scenario data set is composed of 46 monthly return observations for seven managed funds. The optimization problem is solved using the first 10 scenarios; then the portfolio is rebalanced monthly, where the weights of the assets are realigned and adjusted according to every 30 days' requirement [18, 24]. Rebalancing involves buying or selling assets in the portfolio to maintain the original desired level of asset allocation. For example, let us consider an original target asset allocation of 50 % stocks and 50 % bonds [4]. If the stocks performed well during the period, the stock weighting of the portfolio could increase to 70 %; it may then be decided to sell some of the stocks and buy bonds to bring the portfolio back to the original target allocation of 50/50. We consider another illustration of the portfolio asset mix. Stephen has $100,000 to invest. He decides to invest 50 % ($50,000) in a bond fund, 10 % ($10,000) in a treasury fund and 40 % ($40,000) in an equity fund, as shown in Fig. 7.2.
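A minimal MATLAB sketch of this monthly rebalancing loop is given below. It is not the authors' code; the return matrix RET (46 monthly observations for seven funds) and the risk-minimizing routine solve_rebalance are assumed, hypothetical inputs, and the loop simply re-solves the optimization problem on a growing scenario window each month.

```matlab
% Minimal sketch: monthly rebalancing on a rolling scenario window, assuming
% RET (46 x 7 monthly fund returns) and a hypothetical routine
% solve_rebalance(scenarios, k) that minimizes a chosen risk function subject
% to the expected-return and bound constraints.
[T, ~] = size(RET);
k = 0.005;                                 % required expected monthly return
window = 10;                               % scenarios used for the first solve
realized = zeros(T - window, 1);
for t = window:(T - 1)
    x = solve_rebalance(RET(1:t, :), k);   % rebalance using scenarios up to t
    realized(t - window + 1) = RET(t + 1, :) * x;   % next month's realized return
end
sharpe = mean(realized) / std(realized);   % out of sample Sharpe ratio
```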

Fig. 7.2 The portfolio asset mix: opening balance


Fig. 7.3 The portfolio asset mix: closing balance

At the end of the year, he finds that the equity portion of his portfolio has dramatically outperformed the bond and treasury portions. This has caused a change in his allocation of assets, increasing the percentage held in the equity fund while decreasing the amounts invested in the treasury and bond funds. More specifically, Fig. 7.3 shows that Stephen's 40 % ($40,000) investment in the equity fund has grown to 49 % ($55,000), an increase of 37 %. Conversely, the bond fund suffered, realizing a loss of 5 %, but the treasury fund realized a modest increase of 4 %. The overall return on Stephen's portfolio was 12.9 %, but now there is more weight on equities than on bonds. Stephen might be willing to leave the asset mix as is for the time being, but leaving it too long could result in an overweighting in the equity fund, which is riskier than the bond and treasury funds. The risk functions used are VaR, SVaR, VaR deviation, SVaR deviation and standard deviation. The Sharpe ratio [4] and the mean value of each sequence of portfolios obtained by successively solving the optimization problem with a given objective function are evaluated. The results are reported in Tables 7.5 and 7.6 for different values of the parameter p. Both the Sharpe ratio and the mean portfolio return [4] indicate good performance for VaR and VaR deviation minimization, whereas standard deviation minimization gives inferior results. The portfolios were rebalanced 37 times for each objective function.

Table 7.5 The out of sample Sharpe ratio

p    VaR      SVaR     VaR deviation   SVaR deviation   Standard deviation
−1   1.2510   1.2409   1.2386          1.2496           1.2186
−3   1.2513   1.2469   1.2567          1.2457           1.2475
−5   1.2519   1.2467   1.2523          1.2545           1.2434

Table 7.6 The out of sample portfolio mean return

p    VaR      SVaR     VaR deviation   SVaR deviation   Standard deviation
−1   0.2309   0.2357   0.2347          0.2369           0.2375
−3   0.2445   0.2375   0.2445          0.2396           0.2336
−5   0.2469   0.2419   0.2467          0.2425           0.2341


The results depend on the scenario data set and on the parameter p. Thus we conclude that the minimization of a particular risk function is not always the best choice. However, we observe that in the presence of mean reversion, the tails of the historical distribution are not good predictors of the tails in the future. In this case, VaR, which disregards the tails, may lead to good out of sample portfolio performance; in fact VaR disregards the unstable part of the distribution.

References

1. Benchmark Datasets in Portfolio Optimization: http://www.cs.nott.ac.uk/~rxq/POdata.htm
2. Best, M.J.: Portfolio Optimization. Chapman and Hall/CRC Finance Series (2010)
3. Bucay, N., Rosen, D.: Credit risk of an international bond portfolio: a case study. Algo Res. Q. 2(1), 9–29 (1999)
4. Chaudhuri, A.: A Study of Operational Risk using Possibility Theory. Technical Report, Birla Institute of Technology Mesra, Patna Campus, India (2010)
5. Cretu, O., Stewart, R.B., Berends, T.: Risk Management for Design and Construction. Wiley (2011)
6. Dutta, K., Perry, J.: A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 6–13 (2006)
7. Elton, E.J., Gruber, M.J., Brown, S.J., Goetzmann, W.N.: Modern Portfolio Theory and Investment Analysis, 8th edn. Wiley India, New Delhi (2010)
8. Khatri, D.K.: Security Analysis and Portfolio Management. Trinity Press (2012)
9. Laha, R.G., Rohatgi, V.K.: Probability Theory. Wiley Series in Probability and Mathematical Statistics, vol. 43. Wiley, New York (1979)
10. Larsen, N., Mausser, H., Uryasev, S.: Algorithms for optimization of value at risk. In: Pardalos, P.M., Tsitsiringos, V.K. (eds.) Financial Engineering, E-Commerce and Supply Chain, pp. 129–157. Kluwer Academic Publishers, Dordrecht (2000)
11. Lim, K.G.: Financial Valuation and Econometrics. World Scientific Press (2011)
12. Lodwick, W.A.: Fuzzy Optimization: Recent Advances and Applications. Studies in Fuzziness and Soft Computing. Springer, Berlin (2010)
13. Markowitz, H.M.: Portfolio selection. J. Fin. 7(1), 77–91 (1952)
14. Markowitz, H.M.: Portfolio Selection: Efficient Diversification of Investments. Wiley, New York; Chapman and Hall, London (1959)
15. Mausser, H., Rosen, D.: Applying scenario optimization to portfolio credit risk. Algo Res. Q. 2(2), 19–33 (1999)
16. Mausser, H., Rosen, D.: Efficient risk/return frontiers for credit risk. Algo Res. Q. 2(4), 35–48 (1999)
17. McNeil, A.J., Frey, R., Embrechts, P.: Quantitative Risk Management: Concepts, Techniques and Tools. Princeton Series in Finance. Princeton University Press (2005)
18. Panjer, H.H.: Operational Risk: Modeling Analytics. Wiley Series in Probability and Statistics, New York (2006)
19. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. Wiley, New York (2009)
20. Rockafellar, R.T., Uryasev, S., Zabarankin, M.: Risk tuning with generalized linear regression. Math. Oper. Res. 33(3), 712–729 (2008)
21. Rockafellar, R.T., Uryasev, S., Zabarankin, M.: Deviation Measures in Generalized Linear Regression. Research Report 2002-9, Department of Industrial and Systems Engineering, University of Florida, Gainesville (2002)
22. Ryan, T.P.: Modern Regression Methods, 2nd edn. Wiley Series in Probability and Statistics. Wiley (2009)
23. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978)
24. Zappe, C., Albright, S.C., Winston, W.L.: Data Analytics, Optimization and Simulation Modeling, 4th edn. Cengage Learning India, New Delhi (2011)


Chapter 8

A Case Study: Iron Ore Mining in India

Abstract In this chapter a case study from the Jharkhand state in India is presented to demonstrate the quantitative modeling of operational risk using possibility theory. The mathematical modeling is performed through a bilevel multiobjective optimization problem in a fuzzy environment. Datasets from the iron ore (hematite) mines in the Jharkhand state set up the computational framework. The risk is calculated using fuzzy subjective value at risk (SVaR) constraints. The sensitivity analysis of the approach is also highlighted, and a comparative analysis of the proposed approach with other techniques is illustrated.

Keywords: Bilevel multiobjective optimization · Risk calculation · Fuzzy SVaR · Sensitivity analysis · Comparative analysis

8.1 Introduction

This chapter presents a case study from India to demonstrate the quantitative modeling of operational risk using possibility theory. The scenario is adopted from the iron ore (hematite) mines in the Jharkhand state [7] in India, where a bilevel multiobjective optimization problem is formulated under a fuzzy environment for assigning hematite resources [8]. Section 8.2 gives an overview of the dataset used in the experiments and the computational framework. The risk calculation with fuzzy subjective value at risk constraints is explained in Sect. 8.3. Section 8.4 highlights the corresponding sensitivity analysis. Finally, the chapter concludes with a comparative analysis with other techniques.


8.2 Dataset and Computational Framework

Jharkhand is a state in eastern India, as shown in Fig. 8.1 [5]. The state shares its border with the states of Bihar to the north, Uttar Pradesh and Chhattisgarh to the west, Odisha to the south and West Bengal to the east. It has an area of 79,710 km². The industrial city of Ranchi is its capital and Dumka its sub-capital. Jamshedpur is the largest industrial city in the state, while Dhanbad and Bokaro Steel City are the second and fourth most populous cities respectively. The different districts of the state are shown in Fig. 8.2 [5]. Jharkhand is a leading mining state of India both in terms of mineral resources and production, accounting for 40 % of the mineral resources of India. According to the Indian Bureau of Mines, the state accounts for nearly 27.58 % of the iron ore (hematite) resources of the country, as shown in Table 8.1 [7]. The hematite found in this area (also known as black diamond) has a high density and is harder than pure iron. It has promising physical and chemical properties, so that it can be processed into many useful products, viz. rouge makeup (face paint), polish, ornamental jewellery (part of a necklace, ring or bracelet), fabrics, dye, steel tools, vehicles, nails, bolts and bridges, medical instruments, table salt and salt licks for cattle food, catalytic converters etc., as shown in Fig. 8.3. Figure 8.4 [1] shows the actual iron ore industry process from exploitation to production. Due to the constant

Fig. 8.1 Jharkhand state in India


Fig. 8.2 The different districts in Jharkhand state

Table 8.1 The mineral resources in Jharkhand (as on 1st April 2005)

Mineral                 Jharkhand (million tonnes)   India (million tonnes)   Share in India (%)
Coal                    75,460.14                    264,535.06               28.52
Iron ore (hematite)     4035.74                      14,630.39                27.58
Copper ore              226.08                       1394.43                  16.21
Fireclay                66.80                        704.76                   9.47
Graphite                10.34                        168.77                   6.12
Bauxite                 117.54                       3289.81                  3.57
Manganese ore           7.47                         378.57                   1.97
Feldspar                1.65                         90.78                    1.81
Dolomite                51.09                        7533.10                  0.67
Limestone               745.77                       975,344.90               0.42
Chromite                0.74                         213.06                   0.34
Iron ore (magnetite)    10.26                        10,619.48                0.096

deterioration of natural vegetation along with air and water pollution and aggravation of the ecological environment caused by ineffective exploitation and production, it has become urgent for both the Jharkhand government and the iron ore mines to optimize the assignment strategy.


Fig. 8.3 The products from iron ore (hematite)

Till date, over 4500 million tonnes of the available iron ore (hematite) has been extracted in the Jharkhand state according to the Indian Bureau of Mines. At present only 45 iron ore mines are operational in the state. The working group for the 12th plan, Planning Commission, Government of India, has estimated that the production of iron ore (hematite) would increase by at least 50 % by the year 2016–17. This will meet the domestic demand as well as enhance exports from the state. In view of this, the number of iron ore (hematite) mines has to be increased to 96. From the available data [7], the waste rock and mine water emission coefficients are verified accordingly [8]. The validation demonstrates that when the membership function is trapezoidal [3], about 96.7 % accurate results are obtained. Hence, the mine water emission coefficients are considered as fuzzy numbers, as illustrated in Tables 8.2 and 8.3 [1, 10]. Some of the operational risks associated with the iron ore (hematite) mines in the Jharkhand state are deforestation, environmental pollution, soil erosion, sinkhole formation and land subsidence, biodiversity loss, water contamination etc. These risks have a direct impact on the resident population in the neighbouring areas, leading to deteriorating environmental conditions and serious health hazards such as cough, ulcers, swelling of bone joints, asthma, eye problems etc. In order to control this, several safeguard measures need to be adopted. To achieve this, the state government allocates its resources to different activities by solving an optimization problem that minimizes the risk function subject to the budget constraints and the stated bounds. The optimization problem considered here is as follows [4, 6]:


Fig. 8.4 The basic flow chart of iron ore (hematite) industry


Table 8.2 The parameters for every hematite mine

Hematite mine   Ẽd_i (kg/m³)               V_i (Person)   W_i      t_i (INR/m³)   IV_i^V (M m³)   PC_i^V (M INR)
Ranchi          (24.7, 25.9, 26.8, 27.7)   77             0.3669   0.34           4.7             1469
Hazaribagh      (21.5, 26.8, 27.8, 28.6)   70             0.3886   0.35           6.4             1696
Chaibasa        (19.8, 24.8, 25.9, 26.8)   77             0.3896   0.36           4.5             1486
Chatra          (24.7, 25.9, 26.8, 27.7)   74             0.3669   0.34           7.7             1545
Saranda         (21.5, 24.6, 25.9, 26.4)   77             0.3619   0.28           2.5             1796
Sahebganj       (20.9, 25.5, 26.9, 28.7)   70             0.3486   0.41           4.7             1589
Bokaro          (24.7, 25.9, 26.8, 27.7)   60             0.3669   0.40           2.7             1477
Dhanbad         (21.6, 25.9, 27.9, 28.4)   70             0.3547   0.35           3.4             1047
Deoghar         (20.7, 25.8, 26.6, 27.9)   77             0.3521   0.36           5.4             1474
Dumka           (24.6, 26.9, 27.5, 27.3)   77             0.3889   0.38           6.9             1534
Giridh          (24.4, 26.7, 27.7, 28.8)   70             0.3445   0.27           7.8             1186
Gumla           (20.5, 21.5, 24.4, 25.9)   73             0.3435   0.41           5.9             1578
Kolhan          (21.4, 24.7, 25.8, 26.8)   74             0.3536   0.30           6.5             1169
Porahat         (24.9, 25.5, 27.8, 28.7)   77             0.3847   0.35           6.8             1459

Table 8.3 The parameters of each hematite product

Parameter   Jewellery          Dye                Tools          Instruments    Converters
b_j         3 × 10^8 INR/ton   100 INR/ton        200 INR/m³     500 INR/m³     1000 INR/m³
MD_j        9.6 × 10^8 ton     4.78 × 10^8 ton    7.21 × 10^9 m³ 8.45 × 10^9 m³ 5.96 × 10^7 m³


$$\begin{aligned}
&\min\ J_1\\
&\max\ J_2 = \sum_{i=1}^{m}\Big(\sum_{j=1}^{n} v_{ij}P_{ij} + V_i\Big)\\
&\max\ J_3 = \sum_{i=1}^{m} W_i\Big(\sum_{j=1}^{n} b_j\phi_{ij}P_{ij}\Big)\\
&\text{subject to}\\
&\quad \mathrm{Poss}\Big\{\sum_{i=1}^{m}\widetilde{Ed}_i Y_i + \sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ed}_{ij}P_{ij} + \sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ew}_{ij}P_{ij} + \tilde{\xi} \le J_1\Big\} \ge pl_1^V\\
&\quad \mathrm{Poss}\Big\{\sum_{i=1}^{m}\widetilde{Ed}_i Y_i + \sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ed}_{ij}P_{ij} + \tilde{n} \le ED^V\Big\} \ge pl_2^V\\
&\quad \mathrm{Poss}\Big\{\sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ew}_{ij}P_{ij} + \tilde{\rho} \le EW^V\Big\} \ge pl_3^V\\
&\quad \sum_{i=1}^{m}\phi_{ij}Y_{ij} \ge MD_j^L \quad \forall j\\
&\quad \text{where, for each mine } i,\ (Y_i, P_{ij})\ \text{solves the lower level problem}\\
&\qquad \max\ S_i^1 = \sum_{j=1}^{n} b_j\phi_{ij}P_{ij} - \sum_{j=1}^{n} h(P_{ij}) - t_i\Big(Y_i - \sum_{j=1}^{n} P_{ij}\Big)\\
&\qquad \min\ S_i^2\\
&\qquad \text{subject to}\\
&\qquad\quad \mathrm{Poss}\Big\{\sum_{j=1}^{n}\big(\widetilde{ed}_{ij} + \widetilde{ew}_{ij}\big)P_{ij} + \tilde{1} \le S_i^2\Big\} \ge ps_i^L\\
&\qquad\quad \sum_{j=1}^{n} P_{ij} \le Y_i\\
&\qquad\quad Y_i - \sum_{j=1}^{n} P_{ij} \le IV_i^V\\
&\qquad\quad \sum_{j=1}^{n} h(P_{ij}) + y_i\Big(Y_i - \sum_{j=1}^{n} P_{ij}\Big) \le PC_i^V\\
&\qquad\quad \phi_{ij}P_{ij} \ge Prod_{ij}^L
\end{aligned} \qquad (8.1)$$

The optimization problem in equation system (8.1) consists of several parameters which are highlighted here. The first objective is to achieve minimum emissions, comprising the waste rock $\sum_{i=1}^{m}\widetilde{Ed}_i Y_i$ produced when all mines exploit the iron ore reserves, the waste rock $\sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ed}_{ij}P_{ij}$ produced when the mines manufacture hematite products, and the total waste water $\sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ew}_{ij}P_{ij}$ generated when all the mines exploit the iron ore reserves and produce those hematite products. The fuzzy numbers $\widetilde{Ed}_i$, $\widetilde{ed}_{ij}$ and $\widetilde{ew}_{ij}$ are represented through trapezoidal membership functions [10], in accordance with the real life scenario, obtained by fuzzification due to insufficient historical data. A trapezoidal fuzzy number [3] is represented through Eq. (8.2) and Fig. 8.5 as a piecewise linear, continuous function


defined within the interval [0, 1] and controlled by four parameters a, b, c, d, with $x \in \mathbb{R}$ [10]:

$$\mu_{\mathrm{trapezoid}}(x; a, b, c, d) = \begin{cases} 0, & x \le a\\ \dfrac{x-a}{b-a}, & a \le x \le b\\ 1, & b \le x \le c\\ \dfrac{d-x}{d-c}, & c \le x \le d\\ 0, & d \le x \end{cases} \qquad (8.2)$$

It is usually difficult to derive precise minimum emissions, and decision makers require the minimum objective function $J_1$ under some possibilistic level $pl_1^V$ [9]. In order to achieve maximum employment $J_2$, both constant workers and variable workers, denoted by $V_i$ and $v_{ij}P_{ij}$ respectively, are required. Again, to achieve maximum economic output $J_3$, the unit amount, conversion rate and amount of hematite, denoted by $b_j$, $\phi_{ij}$ and $P_{ij}$ respectively, are multiplied. The waste rock from exploiting, the waste rock from producing and the mine water, denoted by $\sum_{i=1}^{m}\widetilde{Ed}_i Y_i$, $\sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ed}_{ij}P_{ij}$ and $\sum_{i=1}^{m}\sum_{j=1}^{n}\widetilde{ew}_{ij}P_{ij}$ respectively, should always be less than the predetermined levels $ED^V$ and $EW^V$ in order to guarantee air and water quality. These two constraints are derived under the possibilistic levels $pl_2^V$ and $pl_3^V$ [9]. The output of some products, denoted by $\sum_{i=1}^{m}\phi_{ij}Y_{ij}$, should meet the market demand $MD_j^L$. Each mine desires to achieve maximum profit, which consists of the total sales $\sum_{j=1}^{n} b_j\phi_{ij}P_{ij}$ minus the production cost $h(P_{ij})$ and the inventory cost $y_i\big(Y_i - \sum_{j=1}^{n}P_{ij}\big)$, such that the objective function $S_i^1$ to be maximized is determined. Each mine also desires to achieve minimum emissions. However, since the emissions $\widetilde{ed}_{ij}$ and $\widetilde{ew}_{ij}$ are fuzzy numbers, it is difficult to determine the precise minimum emissions and decision makers only require a minimum objective

Si under some possibilistic level psLi [9]. Since production in all the mines is influenced by government policy and market demand, there are some conditions P that need to be satisfied. The amount used for production denoted by nj¼1 Pij should not exceed the total limitation Yi . The inventory amount denoted by Yi  Pn V j¼1 Pij should not exceed the maximum limitation IVi . The production cost Pn   consisted of product cost and inventory cost denoted by and j¼1 h Pij   Pn Pn respectively should not exceed the predetermined level j¼1 yi Yi  j¼1 Pij PCiV . Some products denoted by ϕijPij should not be less than the lowest production   level Prod L [9]. Finally, the SV e a R variables viz. ef; e n; e . ; e1 in the constraint ij

equations in equation system (8.1) for the optimization problem efficiently controls the associated risks. The variable Wi represents the constant dependent on hematite ore which increases the economic output J3. According to the environmental regulations specified by the Government of India, the waste rock emission should not exceed 3600 tonnes and mine water

[email protected]

8.2 Dataset and Computational Framework

155

emission should not exceed 3600 tonnes [7]. As it becomes quite difficult to satisfy the constrained index in a short span of time the possibility of considering the two constraints should not be less than 0.96 which indicates that the possibilistic levels plV2 and plV3 for the government should also be 0.96. Considering the total emissions, the environmental regulations require that the minimum objective remains under the possibilistic level plV1 ¼ 0:86. With the constant increase of demand and price of the important iron ore (hematite) products, Government of India seeks that the output from all the mines should at least satisfy the basic market demand MDj ; j ¼ 1; . . .; 5. This is shown in Table 8.3 [1] along with the corresponding unit price of hematite products’. For the 96 iron ore (hematite) mines the inventory and production upper limits for each mine are given in Table 8.2 [1]. The possibilistic level plLi such that mine i requires minimum emissions is also highlighted in Table 8.2. Since every mine has a different capacity for controlling emissions, the fixed and unit variable cost, emission coefficients and constant costs are different as shown in Table 8.4 [1]. The transformation rate trij and the lower limitation of product j in mine i are also presented in Table 8.4. Now for the optimization problem in equation system (8.1) with respect to all the numerical values, we consider the initial temperature T0 = 950 °C and the last temperature T0 = 0 °C. The neighbourhood is developed as follows [1]: Table 8.4 The parameters for product j produced by mine i Hematite mines

Hematite products

Parameters tij vij

Cij

ϕij

Ranchi

Jewellery

0.05

696

369

1.05

96.7

Dye

0.01

41

79

3.69

35.3

Tools

0.01

10

96

1.45

175.9

Instruments

0.03

35

486

0.96

279.9

Converters

0.02

24

289

7.35

14.7

Jewellery

0.05

595

375

1.09

89.5

Dye

0.01

45

86

3.25

38.3

Tools

0.01

9

95

1.47

186.5

Instruments

0.03

34

425

0.86

265.9

Converters

0.02

25

286

7.36

17.7

Hazaribagh

ProdLij

[email protected]

f ed ij

ef w ij

(2.21, 3.45, 4.27, 5.25) (21.3, 23.7, 24.7, 25.9) (26.5, 27.3, 28.7, 30.5) (24.3, 25.9, 26.5, 27.7) (1.59, 2.69, 3.47, 4.79) (2.25, 3.59, 4.77, 5.79) (21.9, 24.3, 25.9, 26.5) (26.8, 27.7, 28.8, 30.8) (24.6, 25.5, 26.8, 27.9) (1.69, 2.79, 3.65, 4.86)

(3.14, 3.45, 3.78, 4.25) (24.3, 25.5, 26.9, 27.7) (0.47, 1.35, 1.45, 1.69) (1.34, 2.45, 3.77, 4.79) (2.77, 3.79, 4.86, 5.69) (3.27, 3.65, 3.79, 4.77) (24.4, 25.0, 26.0, 27.8) (0.59, 1.37, 1.47, 1.70) (1.37, 2.47, 3.79, 4.86) (2.79, 3.86, 4.96, 5.77) (continued)

156

8 A Case Study: Iron Ore Mining in India

Table 8.4 (continued) Hematite mines

Hematite products

Parameters tij vij

Cij

ϕij

Chaibasa

Jewellery

0.05

625

368

1.07

98.3

Dye

0.01

47

74

3.45

41.3

Tools

0.01

11

98

1.38

174.5

Instruments

0.03

36

435

0.89

269.3

Converters

0.02

27

277

7.89

15.5

Jewellery

0.05

686

396

1.14

95.3

Dye

0.01

34

89

3.47

28.4

Tools

0.01

14

100

1.59

169.7

Instruments

0.03

30

459

0.79

274.9

Converters

0.02

28

269

7.79

16.7

Jewellery

0.05

635

373

1.15

88.7

Dye

0.01

48

77

3.27

36.3

Tools

0.01

15

95

1.38

170.9

Instruments

0.03

38

483

0.95

272.3

Converters

0.02

27

265

7.77

18.7

Jewellery

0.05

627

379

1.17

98.3

Dye

0.01

47

89

3.35

35.9

Tools

0.01

14

98

1.41

177.3

Instruments

0.03

38

495

0.89

270.5

Converters

0.02

25

260

7.19

16.8

Chatra

Saranda

Sahebganj

ProdLij

[email protected]

f ed ij

ef w ij

(2.24, 3.60, 4.59, 5.77) (21.5, 23.8, 24.6, 26.9) (26.8, 27.8, 28.8, 30.8) (24.7, 25.5, 26.8, 27.8) (1.68, 2.89, 3.69, 4.89) (2.27, 3.47, 4.79, 5.86) (21.5, 23.5, 24.4, 25.5) (26.5, 27.4, 28.3, 30.4) (24.4, 25.0, 26.0, 27.3) (1.64, 2.68, 3.50, 4.50) (2.19, 3.21, 4.25, 5.27) (20.3, 21.7, 24.0, 25.3) (26.0, 27.0, 28.0, 30.0) (24.0, 25.0, 26.5, 27.7) (1.60, 2.70, 3.50, 4.79) (2.26, 3.47, 4.28, 5.28) (21.0, 23.0, 24.0, 25.0) (26.4, 27.4, 28.0, 30.4) (24.7, 25.9, 26.0, 27.8) (1.65, 2.65, 3.65, 4.65)

(3.18, 3.47, 3.86, 4.19) (24.1, 25.3, 26.4, 27.9) (0.60, 1.38, 1.50, 1.77) (1.28, 2.41, 3.79, 4.96) (2.70, 3.70, 4.89, 5.96) (3.11, 3.41, 3.69, 4.47) (24.1, 25.0, 26.0, 27.0) (0.41, 1.37, 1.38, 1.70) (1.35, 2.47, 3.70, 4.70) (2.70, 3.77, 4.89, 5.73) (3.07, 3.41, 3.89, 4.27) (24.0, 25.1, 26.0, 27.1) (0.50, 1.38, 1.47, 1.79) (1.35, 2.45, 3.69, 4.69) (2.73, 3.73, 4.89, 5.69) (3.10, 3.47, 3.79, 4.47) (24.4, 25.3, 26.4, 27.7) (0.45, 1.37, 1.41, 1.96) (1.37, 2.41, 3.73, 4.74) (2.79, 3.80, 4.89, 5.86) (continued)

8.2 Dataset and Computational Framework

157

Table 8.4 (continued) Hematite mines

Hematite products

Parameters tij vij

Cij

ϕij

Bokaro

Jewellery

0.05

636

370

1.15

89.6

Dye

0.01

45

78

3.72

30.3

Tools

0.01

10

103

1.59

169.9

Instruments

0.03

38

489

0.86

274.5

Converters

0.02

27

286

7.68

14.3

Jewellery

0.05

669

365

1.17

93.3

Dye

0.01

49

79

3.38

30.7

Tools

0.01

9

95

1.54

180.3

Instruments

0.03

34

489

0.88

280.3

Converters

0.02

26

296

7.27

10.3

Jewellery

0.05

655

347

1.11

100.5

Dye

0.01

47

73

3.75

38.7

Tools

0.01

13

95

1.48

178.3

Instruments

0.03

35

469

0.98

286.5

Converters

0.02

27

286

7.74

11.3

Jewellery

0.05

654

370

1.16

79.7

Dye

0.01

47

74

3.65

41.3

Tools

0.01

10

94

1.41

173.5

Instruments

0.03

40

484

0.91

275.3

Converters

0.02

25

273

7.69

14.5

Dhanbad

Deoghar

Dumka

ProdLij

[email protected]

f ed ij

ef w ij

(2.17, 3.35, 4.38, 5.47) (20.3, 21.7, 24.3, 25.5) (25.5, 27.3, 28.7, 30.5) (23.4, 24.7, 26.5, 27.7) (1.77, 2.79, 3.69, 4.47) (2.25, 3.00, 4.34, 5.79) (21.7, 23.8, 24.7, 25.9) (26.8, 27.3, 28.7, 30.0) (24.7, 25.9, 26.8, 27.8) (1.77, 2.65, 3.03, 4.70) (2.24, 3.00, 4.00, 5.00) (21.5, 23.0, 24.0, 25.6) (26.4, 27.4, 28.7, 30.8) (24.3, 25.0, 26.0, 27.7) (1.70, 2.70, 3.50, 4.73) (2.18, 3.19, 4.19, 5.36) (21.0, 23.5, 24.6, 25.5) (26.8, 27.0, 28.7, 30.7) (24.0, 25.0, 26.0, 27.8) (1.50, 2.60, 3.50, 4.50)

(3.10, 3.35, 3.69, 4.73) (24.1, 25.3, 26.4, 27.1) (0.41, 1.31, 1.47, 1.73) (1.35, 2.41, 3.73, 4.74) (2.80, 3.80, 4.89, 5.95) (3.15, 3.41, 3.73, 4.27) (24.3, 25.0, 26.0, 27.3) (0.53, 1.37, 1.38, 1.79) (1.37, 2.47, 3.47, 4.47) (2.73, 3.70, 4.89, 5.69) (3.17, 3.47, 3.78, 4.26) (24.3, 25.5, 26.9, 27.0) (0.59, 1.41, 1.45, 1.73) (1.38, 2.47, 3.70, 4.73) (2.73, 3.73, 4.89, 5.89) (3.18, 3.47, 3.73, 4.27) (24.3, 25.0, 26.0, 27.4) (0.41, 1.38, 1.47, 1.73) (1.37, 2.49, 3.79, 4.86) (2.77, 3.73, 4.89, 5.98) (continued)

158

8 A Case Study: Iron Ore Mining in India

Table 8.4 (continued) Hematite mines

Hematite products

Parameters tij vij

Cij

ϕij

Giridh

Jewellery

0.05

645

355

1.18

98.5

Dye

0.01

45

70

3.64

40.3

Tools

0.01

14

89

1.50

170.7

Instruments

0.03

38

470

0.99

286.5

Converters

0.02

26

270

7.30

18.6

Jewellery

0.05

627

373

1.19

79.6

Dye

0.01

41

69

3.67

34.7

Tools

0.01

14

95

1.47

180.7

Instruments

0.03

36

469

0.81

274.5

Converters

0.02

27

274

7.28

19.3

Jewellery

0.05

679

347

1.07

95.3

Dye

0.01

47

77

3.60

38.7

Tools

0.01

13

98

1.49

168.5

Instruments

0.03

38

489

0.94

273.7

Converters

0.02

28

286

7.67

11.3

Jewellery

0.05

789

379

1.10

98.7

Dye

0.01

42

59

3.70

34.7

Tools

0.01

11

96

1.53

165.8

Instruments

0.03

35

479

0.95

277.3

Converters

0.02

30

279

7.41

14.3

Gumla

Kolhan

Porahat

ProdLij

[email protected]

f ed ij

ef w ij

(2.17, 3.45, 4.38, 5.45) (21.7, 24.7, 25.9, 26.9) (26.1, 27.0, 28.3, 30.0) (24.0, 25.5, 26.0, 27.4) (1.64, 2.65, 3.77, 4.86) (2.13, 3.41, 4.24, 5.26) (21.4, 23.4, 24.3, 25.5) (26.1, 27.3, 28.0, 30.4) (24.7, 25.9, 26.8, 27.8) (1.80, 2.79, 3.80, 4.86) (2.19, 3.25, 4.28, 5.50) (21.4, 23.0, 24.0, 25.6) (26.0, 27.1, 28.3, 30.4) (24.1 25.3, 26.0, 27.1) (1.68, 2.70, 3.70, 4.77) (2.25, 3.47, 4.25, 5.21) (21.7, 23.7, 24.7, 25.0) (26.5, 27.3, 28.0, 30.1) (24.1, 25.3, 26.4, 27.8) (1.69, 2.73, 3.79, 4.96)

(3.25, 3.45, 3.79, 4.27) (24.7, 25.0, 26.9, 27.7) (0.69, 1.37, 1.47, 1.79) (1.35, 2.47, 3.79, 4.89) (2.78, 3.78, 4.89, 5.89) (3.11, 3.41, 3.73, 4.27) (24.0, 25.0, 27.0, 28.0) (0.53, 1.38, 1.41, 1.96) (1.37, 2.47, 3.73, 4.86) (2.73, 3.73, 4.89, 5.89) (3.18, 3.47, 3.79, 4.27) (24.1, 25.3, 26.0, 27.4) (0.69, 1.41, 1.47, 1.86) (1.37, 2.47, 3.79, 4.89) (2.79, 3.80, 4.89, 5.96) (3.07, 3.47, 3.79, 4.27) (24.0, 25.0, 26.9, 27.0) (0.47, 1.38, 1.86, 1.96) (1.37, 2.38, 3.45, 4.47) (2.79, 3.86, 4.89, 5.96)



$$Y_i^1 = Y_i^0 + rh \qquad (8.3)$$

$$Z_{ij}^1 = Z_{ij}^0 + rh \qquad (8.4)$$

In Eqs. (8.3) and (8.4), $r \in [-1, 1]$ is a random number and $h = 2.0$ is the step length. After a simulation of quite a number of cycles, the Pareto optimal solution and the objective values are determined, as shown in Tables 8.5 and 8.6 [1]. The results illustrate that although some mines have the highest productive efficiency, their high emission coefficients result in low exploiting quotas, as for Chatra, Giridh and Gumla. On the other hand, hematite mines tend to produce high value added but low emission products, such as dyes and converters, due to the environmental pressure and the limitation of exploiting quotas. However, the hematite mines still abundantly produce traditional products such as tools because of the huge cost of the new products.

Table 8.5 The assignment results for different products

Hematite mine   Total     Jewellery   Dye     Tools   Instruments   Converters
Ranchi          104.30    37.96       3.89    1.69    7.35          53.41
Hazaribagh      108.64    38.03       3.69    1.77    7.38          57.77
Chaibasa        107.81    34.77       3.68    2.03    7.37          59.96
Chatra          87.53     27.89       2.45    1.95    7.34          47.89
Saranda         102.80    40.11       3.03    1.89    8.09          49.68
Sahebganj       104.90    40.86       3.27    2.07    8.35          50.35
Bokaro          111.27    41.38       1.41    1.73    5.96          60.79
Dhanbad         145.05    50.03       1.38    1.86    4.89          86.89
Deoghar         113.41    47.21       2.11    1.68    7.41          55.00
Dumka           99.91     45.69       3.14    2.05    3.14          45.89
Giridh          95.51     35.77       3.86    1.96    4.96          48.96
Gumla           95.94     28.47       3.77    1.85    8.06          53.79
Kolhan          97.47     27.96       2.96    1.80    6.89          57.86
Porahat         98.25     35.96       3.08    1.78    7.47          50.05

[email protected]

S2 1 3579

3689

3745

70,138

S110 4095

4114

4196

= 0.86 Notation

= 0.95

= 0.86

= 0.96

15,345

69,369

= 0.95

plV2 plV3

plV1 plV2 plV3

14,896

68,289

plV1 = 0.96

14,735

J1

Notation

J2

4089

5059

S2 2 4096

65,119

64,538

61,245

J3

5095

5035

S2 3 4896

5710

5596

5547

S11

Table 8.6 The objectives for both the upper and lower levels

3895

4027

S2 4 3889

5479

5395

5386

S12

4325

4196

S2 5 4186

9989

9914

9825

S13

5179

5053

S2 6 5053

6095

5986

5947

S14

6014

5977

S2 7 5969

6919

6827

6505

S15

4935

4786

S2 8 4727

7673

7495

7407

S16

3419

3421

S2 9 3114

11,989

11,159

10,389

S17

3014

3021

S2 10 2869

8137

8119

7938

S18









5938

5579

5427

S19

160 8 A Case Study: Iron Ore Mining in India

[email protected]

8.3 Risk Calculation with Fuzzy Subjective Value at Risk Constraints

8.3

161

Risk Calculation with Fuzzy Subjective Value at Risk Constraints

Once the optimization problem for the iron ore (hematite) mines is formulated, we proceed to calculate the associated operational risk in terms of SVaR. To achieve this, we consider the loss associated with a fuzzy decision vector $\tilde y = (\tilde y_1, \ldots, \tilde y_n)$ as the relative shortfall, so that the following fuzzy deviation expression is obtained [4, 6]:

$$f(\tilde y, \tilde p) = \frac{\tilde h\,(J_{1t} - J_{2t} - J_{3t}) - \sum_{j=1}^{n} \tilde p_{tj}\,\tilde y_j}{\tilde h\,(J_{1t} - J_{2t} - J_{3t})} \qquad (8.5)$$

Equation (8.5) is to be minimized in expectation, i.e. we minimize the average of the absolute values of the fuzzy relative deviations $|f(\tilde y, \tilde p_t)|$, $t = 1, \ldots, T$, where $t$ signifies different time instants. In Eq. (8.5), $J_{1t}$ is the objective function which is to be minimized under possibilistic level $pl_{V_1}$ [9] at time $t$, whereas the objective functions $J_{2t}$ and $J_{3t}$ are to be maximized in order to obtain maximum employment and maximum economic output respectively at time $t$. The fuzzy variable $\tilde h$ is the number of units of $(J_{1t} - J_{2t} - J_{3t})$ at time $T$, and $\tilde p_{tj}$ is the $j$th element of the fuzzy price vector $\tilde p_t = (\tilde p_{t1}, \ldots, \tilde p_{tj}, \ldots, \tilde p_{tm})$, $1 \le j \le m$, for $t = 1, \ldots, T$ [4, 10]. Along with this, a constraint is imposed on the SVaR amount $\tilde w_{\tilde\alpha}(\tilde y)$ associated with the fuzzy loss function $f(\tilde y, \tilde p)$ in order to control large deviations of the assignment value below the stipulated target value [4, 10]. To shape the risk with SVaR for any selection of possibility thresholds $\tilde\alpha_i$ [9] and fuzzy loss tolerances $\tilde\omega_i$, $i = 1, \ldots, z$ [10], the problem minimizes $h(\tilde y)$ over $(\tilde y, b_1, \ldots, b_z)$ satisfying $F_{\tilde\alpha_i}(\tilde y, b_i) \le \tilde\omega_i$, $i = 1, \ldots, z$. In conformance with this, the SVaR constraints give the optimization problem the following form [4, 6]:

$$
\begin{aligned}
\min\; h(\tilde y) \;=\; & \frac{1}{T}\sum_{t=1}^{T}\left|\frac{\tilde h\,(J_{1t}-J_{2t}-J_{3t})-\sum_{j=1}^{n}\tilde p_{tj}\,\tilde y_j}{\tilde h\,(J_{1t}-J_{2t}-J_{3t})}\right| \\
\text{subject to}\quad & \sum_{j=1}^{n}\tilde p_{jT}\,\tilde y_j \ge v, \qquad 0 \le \tilde y_j \le \tilde c_j,\; j=1,\ldots,n, \\
& \tilde\zeta+\tilde\xi+\tilde\varrho+\tilde\iota+\frac{1}{(1-\tilde\alpha)T}\sum_{t=1}^{T}\left[\frac{\tilde h\,(J_{1t}-J_{2t}-J_{3t})-\sum_{j=1}^{n}\tilde p_{tj}\,\tilde y_j}{\tilde h\,(J_{1t}-J_{2t}-J_{3t})}-\left(\tilde\zeta+\tilde\xi+\tilde\varrho+\tilde\iota\right)\right]^{+} \le \tilde\omega
\end{aligned}
\qquad (8.6)
$$


In the equation system (8.6), $v$ denotes the total monetary amount accumulated from the iron ore (hematite) mines by the Jharkhand government. The parameter $\tilde c_j$ represents the upper limit of the fuzzy decision variable $\tilde y_j$. The minimization of the function $h(\tilde y)$ in the equation system (8.6) takes place with respect to the fuzzy decision vector $\tilde y$ and the SVaR variables $(\tilde\zeta, \tilde\xi, \tilde\varrho, \tilde\iota)$. The expression on the left side of the third constraint in the equation system (8.6) corresponds to the following [4]:

$$\tilde w_{\tilde\alpha}(\tilde y) \le \tilde\omega \qquad (8.7)$$

For any choice of $\tilde\alpha$ and $\tilde\omega$, the above problem is easily solved by a linear programming approach. The performance function $h(\tilde y)$ is handled by introducing additional variables $\tilde\beta_{t0} \ge 0$ constrained by the following equations [1, 4]:

$$\tilde\zeta+\tilde\xi+\tilde\varrho+\tilde\iota+\tilde\beta_{t0}-\frac{\tilde h\,(J_{1t}-J_{2t}-J_{3t})-\sum_{j=1}^{n}\tilde p_{tj}\,\tilde y_j}{\tilde h\,(J_{1t}-J_{2t}-J_{3t})} \ge 0 \qquad (8.8)$$

$$\tilde\zeta+\tilde\xi+\tilde\varrho+\tilde\iota+\tilde\beta_{t0}+\frac{\tilde h\,(J_{1t}-J_{2t}-J_{3t})-\sum_{j=1}^{n}\tilde p_{tj}\,\tilde y_j}{\tilde h\,(J_{1t}-J_{2t}-J_{3t})} \ge 0 \qquad (8.9)$$

The following expression is minimized subject to Eqs. (8.8) and (8.9) [4]:

$$\min\; \frac{1}{T}\sum_{t=1}^{T}\tilde\beta_{t0} \qquad (8.10)$$
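For intuition, the linear program behind (8.6)-(8.10) can be sketched with crisp, point-valued stand-ins for the fuzzy coefficients and a single auxiliary variable zeta in place of the four fuzzy SVaR variables; the data, dimensions and solver choice below are illustrative assumptions, so this is only a structural sketch and not an implementation of the fuzzy model itself.

```python
import numpy as np
from scipy.optimize import linprog

def solve_svar_tracking(p, h, pT, v, c, alpha=0.96, omega=0.005):
    """Crisp LP sketch of (8.6)-(8.10): minimize the mean absolute relative
    deviation subject to a CVaR-style (SVaR) constraint on the shortfall.
    p: (T, n) prices, h: (T,) index units, pT: (n,) final prices,
    v: required revenue, c: (n,) upper bounds on the decisions."""
    T, n = p.shape
    A = p / h[:, None]                      # f_t(y) = 1 - A[t] @ y
    nvar = n + 1 + 2 * T                    # variables: y (n), zeta (1), beta (T), u (T)
    cost = np.zeros(nvar)
    cost[n + 1:n + 1 + T] = 1.0 / T         # objective: (1/T) sum_t beta_t
    A_ub, b_ub = [], []
    for t in range(T):
        # beta_t >= f_t and beta_t >= -f_t (absolute value, as in (8.8)-(8.9))
        row = np.zeros(nvar); row[:n] = -A[t]; row[n + 1 + t] = -1.0
        A_ub.append(row); b_ub.append(-1.0)
        row = np.zeros(nvar); row[:n] = A[t]; row[n + 1 + t] = -1.0
        A_ub.append(row); b_ub.append(1.0)
        # u_t >= f_t - zeta (auxiliary variables of the SVaR constraint)
        row = np.zeros(nvar); row[:n] = -A[t]; row[n] = -1.0; row[n + 1 + T + t] = -1.0
        A_ub.append(row); b_ub.append(-1.0)
    # zeta + (1 / ((1 - alpha) T)) sum_t u_t <= omega  (last constraint of (8.6))
    row = np.zeros(nvar); row[n] = 1.0; row[n + 1 + T:] = 1.0 / ((1.0 - alpha) * T)
    A_ub.append(row); b_ub.append(omega)
    # revenue floor: pT @ y >= v
    row = np.zeros(nvar); row[:n] = -pT
    A_ub.append(row); b_ub.append(-v)
    bounds = [(0.0, ci) for ci in c] + [(None, None)] + [(0.0, None)] * (2 * T)
    return linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   bounds=bounds, method="highs")

# illustrative data only
rng = np.random.default_rng(0)
T, n = 60, 14
p = rng.uniform(20.0, 30.0, size=(T, n))
res = solve_svar_tracking(p, h=np.full(T, 350.0), pT=p[-1], v=300.0,
                          c=np.full(n, 20.0), alpha=0.96, omega=0.02)
print(res.status, res.fun)
```

Sweeping omega from 0.02 down to 0.001, as in Table 8.7, then amounts to re-solving this linear program once per tolerance level.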

The confidence level in the SVaR constraints specified by the equation system (8.6) is taken as $\tilde\alpha = 0.96$, so that it controls the largest 4 % of relative deviations. The equation system (8.6) is solved for several values of the risk tolerance level $\tilde\omega$ in the SVaR constraints as $\tilde\omega$ is varied from 0.02 to 0.001. To verify the goodness of fit of the sample, the values of the performance function given by Eqs. (8.6)–(8.9) and of SVaR are calculated and presented in Table 8.7 [1]. Imposing the SVaR constraint leads to a deterioration in the value of the objective function, i.e. the average absolute value of the relative deviation. In fact, decreasing the value of $\tilde\omega$ causes an increase in the value of the objective function. This is evident from the thick blue continuous line in Fig. 8.6 and is a consequence of the fact that decreasing the value of $\tilde\omega$ diminishes the feasible set [4]. At the risk tolerance level $\tilde\omega = 0.02$ the SVaR constraint, denoted by the last constraint in the equation system (8.6), is inactive, whereas at $\tilde\omega \le 0.01$ the constraint becomes active. The dynamics of the absolute values of the relative deviations for an instance when this SVaR constraint is active, at $\tilde\omega = 0.005$, and an instance when it is inactive, at $\tilde\omega = 0.02$, are shown in Fig. 8.7 [4]. This figure reveals that the SVaR constraint has reduced the underperformance of the assignment versus the corresponding index.

[email protected]

8.3 Risk Calculation with Fuzzy Subjective Value at Risk Constraints

163

Table 8.7 The results of various risk levels ω̃ in the SVaR constraints

Risk level (ω̃) | Objective function (%) | ζ̃ (%) | ξ̃ (%) | ϱ̃ (%) | ι̃ (%)
0.02 | 1.694796 | 1.896996 | 1.217935 | 1.191045 | 1.614789
0.01 | 1.238917 | 0.961727 | 0.951510 | 0.960155 | 0.976125
0.005 | 0.986245 | 0.6114159 | 0.654996 | 0.657921 | 0.505519
0.004 | 0.697319 | 0.556996 | 0.594596 | 0.508651 | 0.178151
0.003 | 0.1415061 | 0.532451 | 0.542596 | 0.557041 | 0.512673
0.002 | 0.104706 | 0.496959 | 0.506541 | 0.516951 | 0.473031
0.001 | 0.034596 | 0.414714 | 0.476150 | 0.506961 | 0.431719

Fig. 8.5 The trapezoidal membership function defined by trapezoid (x; a, b, c, d)

Fig. 8.6 The objective function and SVaR constraints for various risk levels ω̃

[email protected]

164

8 A Case Study: Iron Ore Mining in India

Fig. 8.7 The relative discrepancy in assignment, with the SVaR constraint active (at ω̃ = 0.005) and inactive (at ω̃ = 0.02)

Fig. 8.8 The index and optimal assignment values, SVaR constraint active (at ω̃ = 0.005)

The deep curve corresponding to the active SVaR constraint lies below the light curve corresponding to the inactive SVaR constraint. The dynamics of the assignment and the index values when the SVaR constraint is active at ω̃ = 0.005 and inactive at ω̃ = 0.02 are shown in Figs. 8.8 and 8.9 respectively [1, 4].

Figures 8.7, 8.8 and 8.9 demonstrate that the assignment fits the index quite well for both the active and inactive SVaR constraints [1, 4]. At ω̃ = 0.005 and the optimal assignment point ỹ*, we have ζ̃* = ξ̃* = ϱ̃* = ι̃* = 0.0014538677. The SVaR value on the left side of the last constraint of Eq. (8.6) is equal to 0.005. Here the VaR point has the value 0.234, which means that about 14 time points have the same deviation 0.0014538677.

[email protected]

8.3 Risk Calculation with Fuzzy Subjective Value at Risk Constraints

165

Fig. 8.9 The index and optimal assignment values, SVaR constraint inactive (at ω̃ = 0.02)

These results are verified by running the optimization problem through several optimization runs. It is observed that the optimal ζ̃*, ξ̃*, ϱ̃* and ι̃* values may overestimate VaR because of the non-uniqueness of the optimal solution. Also, when the SVaR constraint specified by the last constraint of Eq. (8.6) is not active, these optimal values may be quite far from VaR, and the value on the left side of the last constraint of Eq. (8.6) may likewise be quite far from SVaR.

8.4 Sensitivity Analysis

The sensitivity analysis is performed keeping in view the fact that the decision maker should be able to adjust the parameters to obtain solutions at the desired level. Through experiments it is inferred that the possibilistic level has a key impact on the results. If the accuracy of the upper and lower limits of the possibilistic levels decreases, the feasible set consequently expands and a better Pareto optimal solution and point are determined. From Table 8.6 it is observed that emissions increase while economic profit and employment decrease as the possibilistic level plVi, i = 1, 2, 3 decreases [9]. This indicates that the government demands are less stringent, which results in the iron ore (hematite) mines looking towards economic profit and neglecting the emission and employment objectives. Finally, total emissions increase and government tax revenue decreases. On the other hand, if the possibilistic level plVi, i = 1, 2, 3 increases [9], the government requirements are more stringent and hence total emissions decrease and government tax revenue increases. Similarly, if the possibilistic levels plLi, i = 1, ..., 10 decrease [9], the mines pay less attention to the waste rock and mine water emissions, resulting in an increase in profit and thus leading to more emissions.

[email protected]

8.5 Comparative Analysis with Other Techniques

In order to perform a comparative analysis of the results obtained from the optimization problem given by the equation system (8.1), we convert all the linearly modeled equations into a crisp model without any uncertain coefficients. The crisp model is obtained by the procedure stated in [8]. Taking all the numerical values into consideration [8] and substituting the same parameters, we obtain the optimal solutions. The error analysis and computation time are given in Tables 8.8 and 8.9 respectively. It is evident from the results that the solutions of the crisp equivalent model are close to the results obtained from simulating the model. It is seen that the possibilistic simulation technique is reasonable and efficiently solves the bilevel multi objective optimization problem with fuzzy coefficients. It is found from Table 8.9 that the average computational time of improved simulated annealing (ISA) is less than that of the fuzzy simulation based improved simulated annealing (FSISA) [1, 8]. This is expected, as the fuzzy simulation process spends much of the time obtaining the approximate values. It is to be noted that not all possibilistic constraints can be directly converted into crisp ones [1, 8]. To illustrate that FSISA is suitable for this type of fuzzy bilevel model, the results are compared with the genetic algorithm (GA) [2]. GA is among the most popular computational approaches for bilevel optimization problems, and different data scales result in large differences in computational efficiency. To ensure fairness in computation, the GA is designed based on the fuzzy simulation for bilevel multi objective optimization with fuzzy coefficients (FSGA).
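The crisp equivalent is obtained with the transformation of [8], which is not reproduced here. As a simple illustration of what replacing a trapezoidal fuzzy coefficient by a single crisp value can look like, the graded-mean representative (a + 2b + 2c + d)/6 is one common choice; this is an assumption for illustration and not necessarily the transformation used in [8].

```python
def graded_mean(a, b, c, d):
    """Graded-mean crisp representative of a trapezoidal fuzzy number (a, b, c, d).
    Only one common defuzzification choice, used here purely for illustration."""
    return (a + 2.0 * b + 2.0 * c + d) / 6.0

# e.g. the fuzzy coefficient (2.17, 3.45, 4.38, 5.45) becomes roughly 3.88
print(graded_mean(2.17, 3.45, 4.38, 5.45))
```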

Table 8.8 The error analysis by solving the crisp equivalent model

Hematite mines | Total (%) | Jewellery (%) | Dye (%) | Tools (%) | Instruments (%) | Converters (%)
Ranchi | 1.35 | 0.51 | 0.37 | 0.17 | 0.27 | 0.15
Hazaribagh | 0.89 | −0.83 | −0.09 | 0.61 | −0.45 | 0.69
Chaibasa | 0.96 | 0.17 | 0.37 | 0.19 | 0.05 | 1.51
Chatra | −1.53 | 0.41 | 0.21 | 0.27 | −0.29 | −0.31
Saranda | 0.17 | 0.45 | 0.27 | 0.14 | 0.51 | 0.47
Sahebganj | −0.35 | 0.70 | 0.65 | 0.69 | 0.23 | −0.43
Bokaro | 0.41 | 0.07 | 0.95 | 0.51 | 0.38 | 0.59
Dhanbad | 0.21 | 0.09 | 0.77 | 0.70 | 0.47 | 0.17
Deoghar | 0.51 | 0.17 | −0.12 | 0.65 | −0.30 | 0.61
Dumka | −0.71 | 0.43 | 0.73 | 0.27 | 0.05 | 0.31
Giridh | 0.86 | 0.69 | 0.55 | 0.51 | 0.25 | 0.71
Gumla | 0.47 | 0.10 | −0.24 | −0.73 | 0.35 | −0.08
Kolhan | 0.61 | −0.23 | 0.25 | −0.61 | 0.28 | 0.50
Porahat | 0.69 | −0.57 | 0.17 | −0.53 | 0.89 | 0.35

[email protected]

Table 8.9 The computing time and memory by different techniques

No. | Resources | Mines | Decision vars. | Iteration number | ISA avg. comp. time | ISA required memory | FSISA avg. comp. time | FSISA required memory | FSGA avg. comp. time | FSGA required memory
1 | 1 | 10 | 70 | 500 | 115 | 150 | 210 | 150 | – | –
1 | 1 | 15 | 100 | 500 | – | – | – | – | 241 | 150
2 | 1 | 15 | 100 | 500 | 241 | 700 | 421 | 700 | – | –
2 | 1 | 15 | 100 | 500 | – | – | – | – | 690 | 700
3 | 4 | 15 | 100 | 500 | 447 | 2700 | 1690 | 2700 | – | –
3 | 4 | 15 | 100 | 500 | – | – | – | – | 1035 | 2700


The chromosome number, crossover rate, mutation rate and iteration count are set to 24, 0.7, 0.9 and 900 respectively. The average computing time and memory are listed in Table 8.9. The experiments show that similar optimal results can be obtained by both FSISA and FSGA. The computational efficiency, however, differs as the number of iron ore resources and iron ore mines changes. When the number of iron ore resources and iron ore mines is small, FSISA is more efficient than the GA in solving the bilevel multi objective optimization, and much more computational effort is needed for FSGA to achieve the same optimal solution as FSISA. However, when the data scale is large, FSGA can reach a better solution at the expense of more computation time.
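The ISA/FSISA and FSGA searches themselves are described in [1, 2, 8] and are not listed in this book; the sketch below only illustrates the generic simulated-annealing acceptance step and geometric cooling that such searches are built around, with a placeholder objective and the perturbation of Eqs. (8.3)-(8.4) as the neighborhood move.

```python
import math
import numpy as np

def anneal(x0, objective, neighbor, T0=1.0, cooling=0.95, iters=500, rng=None):
    """Generic simulated-annealing loop: accept worse candidates with
    probability exp(-delta / T) and cool the temperature geometrically."""
    rng = np.random.default_rng() if rng is None else rng
    x, fx, T = x0, objective(x0), T0
    best, fbest = x, fx
    for _ in range(iters):
        cand = neighbor(x, rng)
        fc = objective(cand)          # in FSISA this value would come from fuzzy simulation
        delta = fc - fx
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        T *= cooling
    return best, fbest

# toy usage with a quadratic objective and the random step of Eqs. (8.3)-(8.4)
obj = lambda y: float(np.sum((y - 3.0) ** 2))
step = lambda y, rng: y + rng.uniform(-1.0, 1.0, size=y.shape) * 2.0
best, fbest = anneal(np.zeros(5), obj, step)
```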

References
1. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report. Birla Institute of Technology Mesra, Patna Campus, India (2010)
2. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning, 4th edn. Pearson Education, New Delhi (2009)
3. Jang, S., Sun, C.T., Mizutani, E.: Neuro Fuzzy and Soft Computing. Prentice Hall, Englewood Cliffs (1997)
4. Lodwick, W.A.: Fuzzy Optimization: Recent Advances and Applications. Studies in Fuzziness and Soft Computing. Springer, Berlin (2010)
5. Official Website of Government of Jharkhand. www.jharkhand.gov.in
6. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. Wiley, New York (2009)
7. Sustainable Development: Emerging Issues in India's Mining Sector. Planning Commission, Government of India, Institute of Studies in Industrial Development, New Delhi, India (2012)
8. Yao, L., Xu, J., Guo, F.: A stone resource assignment model under fuzzy environment. Math. Probl. Eng. 2012, 1–26 (2012)
9. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978)
10. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)

[email protected]

Chapter 9

Evaluation of the Possibilistic Quantification of Operational Risk

Abstract In this Chapter an evaluation of the possibilistic quantification of operational risk is performed. The assessment is achieved through the fuzzy analytic hierarchy process (FAHP) and the fuzzy extension of the technique for order preference by similarity to ideal solution (FTOPSIS), along with a life cycle assessment (LCA) study. The fuzzy versions of the analytic hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) take care of the inherent uncertainty and vagueness in the operational risk data. Trapezoidal fuzzy numbers are used to model the fuzzy variables. The evaluation process of the possibilistic quantification of operational risk thus becomes more dynamic through the integration of different fuzzy approaches.

Keywords Evaluation · LCA · FAHP · FTOPSIS · Trapezoidal fuzzy numbers

9.1 Introduction

This Chapter performs the evaluation of the possibilistic quantification of operational risk [3]. Section 9.2 highlights the preparation of the preliminary life cycle assessment (LCA) study. The fuzzy analytic hierarchy process (FAHP) weights are calculated in Sect. 9.3. Section 9.4 presents the trapezoidal fuzzy numbers for pairwise comparison between the criteria. Section 9.5 gives the pairwise comparison matrices and normalized fuzzy weights for each criterion. The ranks of operational risks with trapezoidal fuzzy numbers are determined in Sect. 9.6. This is followed by an integration of weights in Sect. 9.7. Finally, this Chapter concludes with the calculation of solutions for risk aversion alternatives.

Risk management [11] is a complex and multithreaded issue. In many cases the methods and tools of risk management need to be adjusted to the individual needs and the application specificity of the activity conducted [6, 7]. Once the risks are identified and resolved they need to be properly evaluated before the information is transformed into relevant decisions [3]. The evaluation process is

[email protected]

169

170

9 Evaluation of the Possibilistic Quantification …

thus an intermediate stage between identification, resolution of risks and transformation stages. It thus aims to provide the characteristics of measurability for the established risk factors which helps to generate optimal solutions from the feasible risk solution space. The risk evaluation generally starts from risk valuation aimed at assigning to operational risk the qualitative or quantitative features. At this stage factors of natural and managerial risk are analyzed by means of description and quantification. In the process of operational risk measurement generally statistical methods [17, 21] are used employing structure measures, increments, dynamics index, arithmetic mean, standard deviation as well as coefficient of variation and data scatter. As a part of this the risk categorization is conducted which constitutes the process of risk valuation. This categorization was carried out on the basis of risk metrics and risk matrix. Risk metrics [3] allowed to classify the examined risk factors and to determine them as small, average, large or extremely high. The determined risk matrices are then assigned to risk classes in accordance with the classification performed. In the process of ranking and creation of individual classes of operational risk [18] the questionnaire research conducted among experts is generally adopted. Information about range, norms, frequency and importance of particular threats gathered in the previous stages enable the realization of next stage of evaluation of operational risk which is a determination of admissibility risk limits. In the process of identification of admissibility limits random sampling based on the choice of indicators determining the level of a given risk is used. According to these measures limit values of particular risks are determined in an individual approach. In turn in holistic approach the admissibility of risk of natural and managerial nature allowed to determine risk classes assigned to these groups of threat. The strengths and weaknesses are identified which are helpful in creation of organization’s strategy as well as serve as the basis for indication and monitoring of areas prone to risk. The risk model is finally in the form of a report describing where and how much the organization is exposed to the occurrence of operational risk. For more details on risk evaluation process interested reader can refer [3, 16]. The evaluation method used here integrates the fuzzy approach with the life cycle assessment (LCA) methodology [5] to assess the different stages involved in the operational risk framework as illustrated in Fig. 9.1 [13]. The LCA methodology includes four stages viz. goals and scope definition, life cycle inventory analysis, life cycle impact assessment and interpretation. The first stage of LCA is to identify problems with an existing product or service system. The tasks include selecting relevant system boundaries, identifying impact indicators and determining data requirements. The second stage is to perform several balance calculations for all inputs and outputs. The tasks include describing the system in terms of related operations, collecting data from each process and calculating the environmental impact across the whole product life cycle. The LCA impact assessment stage involves associating life cycle inventory analysis results with specific environmental impacts. The level of details choice of impacts evaluated and methodologies used depends on goal and scope of study. 
Fig. 9.1 A representation of integrated AHP and TOPSIS under fuzzy environment to support LCA for enabling risk aversion performance comparison

Life cycle impact assessment consists of both mandatory and optional elements, such as the assignment of inventory data to impact categories, the calculation of impact category indicators using characterization factors, the calculation of category indicator results relative to reference values, the grouping and weighting of the results, and data quality analysis. The interpretation is the final stage of an LCA investigation. It aims at evaluating the results from the analysis and impact assessment of the product life cycle. FAHP [2, 10] is used with the fuzzy extension of the technique for order preference by similarity to ideal solution (FTOPSIS) [1, 4, 19] to prioritize different aspects towards averting the underlying associated risks. FAHP is a fuzzy multiple criteria decision making method that has been applied to a wide range of problems. It prioritizes alternatives to achieve a specific objective based on a hierarchical set of criteria, and it extends the capability of conventional AHP to deal with the fuzziness of the data involved [23, 24]. A typical 3-level AHP hierarchical structure [13] is illustrated in Fig. 9.2.

[email protected]

9 Evaluation of the Possibilistic Quantification …

172

Fig. 9.2 A typical 3-level AHP

Fuzzy sets deal with the uncertainty and vagueness of decision making problems where information is incomplete or imprecise. FAHP has been developed owing to the imprecision in assessing the relative importance of attributes and the performance ratings of alternatives with respect to attributes [3, 13, 23]. Imprecision may arise from a variety of sources such as unquantifiable information, incomplete information, unobtainable information and partial ignorance. Conventional multiple attribute decision making methods cannot effectively handle problems with such imprecise information. FAHP breaks down a complex, unstructured situation into its component parts, arranges these parts or variables into a hierarchic order, and synthesizes the judgments to determine which variables have the highest priority and should be acted upon to influence the outcome of the situation. The hierarchical structure is used to abstract, decompose, organize and control the complexity of decisions involving many attributes, and it uses informed judgment or expert opinion to measure the relative value or contribution of these attributes and synthesize a solution. FTOPSIS is used to provide solutions for multi-criteria group decision making (MCGDM) problems [20], which are frequently encountered in practice. It is developed by extending fuzzy sets to TOPSIS. The method helps in the objective and systematic evaluation of alternatives based on multiple criteria. Using fuzzy numbers in TOPSIS enables decision makers to express their judgments in the form of intervals rather than single numeric values. The important steps of FTOPSIS are briefly enumerated below [3, 19]:

Step 1: Calculate the weighted normalized decision matrix. The weighted normalized value $v_{ij}$ is calculated as:

$$v_{ij} = w_j\, n_{ij}, \qquad i = 1, 2, \ldots, m,\; j = 1, 2, \ldots, n \qquad (9.1)$$

In Eq. (9.1), $w_j$ is the weight of the $j$th attribute or criterion.

Step 2: Identify the positive ideal ($A^{*}$) and negative ideal ($A^{-}$) solutions. The fuzzy positive ideal solution and the fuzzy negative ideal solution are obtained as:

[email protected]

9.1 Introduction

173

$$A^{*} = \left(\tilde v_1^{*}, \tilde v_2^{*}, \ldots, \tilde v_i^{*}\right) = \left\{\left(\max_j v_{ij} \mid i \in I'\right), \left(\min_j v_{ij} \mid i \in I''\right)\right\}, \quad i = 1, \ldots, n,\; j = 1, \ldots, J$$

$$A^{-} = \left(\tilde v_1^{-}, \tilde v_2^{-}, \ldots, \tilde v_i^{-}\right) = \left\{\left(\min_j v_{ij} \mid i \in I'\right), \left(\max_j v_{ij} \mid i \in I''\right)\right\}, \quad i = 1, \ldots, n,\; j = 1, \ldots, J$$

Step 3: The benefit criteria and the cost criteria are represented by $I'$ and $I''$ respectively. Calculate the distance of each alternative from $A^{*}$ and $A^{-}$ with the following equations:

$$D_j^{*} = \sum_{i=1}^{n} d\left(\tilde v_{ij}, \tilde v_i^{*}\right), \qquad j = 1, 2, \ldots, J$$

$$D_j^{-} = \sum_{i=1}^{n} d\left(\tilde v_{ij}, \tilde v_i^{-}\right), \qquad j = 1, 2, \ldots, J$$

Step 4: Calculate the closeness coefficient $CC_j$ of each alternative based on these distances:

$$CC_j = \frac{D_j^{-}}{D_j^{*} + D_j^{-}}, \qquad j = 1, 2, \ldots, J$$

Finally rank the alternatives according to CCj in descending order. FTOPSIS supports the determination of weights for the ranking of design alternatives and is represented in alternatives level of AHP hierarchy for the best design alternative selection. In order to prioritize different aspects towards averting the underlying associated risks we consider six important stages involved in the operational risk [11] viz. risk identification, core risk management process, standard reporting, key risk scenarios and capital calculation, risk appetite, audit involvement and review. The risk identification helps to understand the scope of risks that the entire organization and its strategy are exposed to. The core risk management process puts in place the risk and control assessment, key risk indicators, loss events and issue management. The standard reporting is related to developing and delivering reports to the management related to the status of the ongoing operational risk management processes. The key risk scenarios and calculation focuses on the vital risks to the organization and ensures that they are well understood by the management and stress tests are completed around these risks. The risk appetite involves the original linking of the risk library to the management strategy involved such that appropriate decisions are taken around those risks that are outside the risk appetite. The audit involvement and review operates alongside the other risk stages and gets review from those functions as to the veracity of the operational risk management framework as it applies to them such that various touch points between different functions are operated correctly.
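Steps 1–4 can be condensed into a short routine. The sketch below assumes trapezoidal ratings, crisp criterion weights and the vertex distance between trapezoidal numbers; the data and the benefit/cost flags are illustrative assumptions, and the routine is a generic fuzzy TOPSIS sketch rather than the exact calculation carried out in Sect. 9.8.

```python
import numpy as np

def vertex_distance(u, v):
    """Vertex-method distance between two trapezoidal fuzzy numbers (a, b, c, d)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sqrt(np.mean((u - v) ** 2)))

def fuzzy_topsis(ratings, weights, benefit):
    """Steps 1-4 for trapezoidal ratings.
    ratings: (m, n, 4), weights: (n,), benefit: (n,) booleans (True = benefit criterion)."""
    R = np.asarray(ratings, float)
    w = np.asarray(weights, float)
    benefit = np.asarray(benefit, bool)
    V = R * w[None, :, None]                                            # Step 1: weighted ratings
    m, n, _ = V.shape
    A_pos = np.where(benefit[:, None], V.max(axis=0), V.min(axis=0))    # Step 2: ideal solutions
    A_neg = np.where(benefit[:, None], V.min(axis=0), V.max(axis=0))
    D_pos = np.array([sum(vertex_distance(V[j, i], A_pos[i]) for i in range(n)) for j in range(m)])
    D_neg = np.array([sum(vertex_distance(V[j, i], A_neg[i]) for i in range(n)) for j in range(m)])
    return D_neg / (D_pos + D_neg)                                       # Step 4: closeness coefficients

# two illustrative alternatives rated on three criteria with the linguistic terms of Table 9.3
VH, MD, LO = (0.75, 0.85, 0.95, 1.00), (0.30, 0.45, 0.55, 0.65), (0.15, 0.25, 0.35, 0.45)
cc = fuzzy_topsis([[VH, MD, LO], [MD, LO, VH]],
                  weights=[0.5, 0.3, 0.2],
                  benefit=[True, True, False])
print(cc)   # rank the alternatives by descending closeness coefficient
```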

[email protected]

174

9 Evaluation of the Possibilistic Quantification …

The evaluation of possibilistic quantification of operational risk approach considered here consists of the following two steps [3]: (a) Calculation of FAHP weights done through two major tasks. First a rough cut LCA is performed that includes the life cycle phases of operational risk ðC1 Þ, compilation ðC2 Þ, utilization ðC3 Þ and end of life ðC4 Þ for the case result. Then FAHP is used to calculate the criteria weights. (b) Application of FTOPSIS to FAHP calculated weights to obtain the final risk aversion solutions rank [11, 14] and select the most suitable alternative.

9.2 Preparation of Preliminary Life Cycle Assessment Study

The implementation of the possibilistic quantification of operational risk is conducted in ICICI Prudential Banking and Financial Services, India [8]. It is an open ended equity sectoral company that invests predominantly in equity and equity related securities of companies engaged in banking and financial services. It is the second largest asset management company in the country as per average assets under management as on February 28 2015 focused on bridging the gap between savings and investments and creating long term wealth for investors through a range of simple and relevant investment solutions. The joint venture is created between ICICI Bank a well-known and trusted name in financial services in India and Prudential Plc one of Britain’s largest players in the financial services sectors. All through these years of the joint venture the company has forged a position of pre-eminence in the Indian mutual fund industry. The company manages significant assets under management in the mutual fund segment. The company also caters to portfolio management services for investors spread across India along with international advisory mandates for clients across international markets in asset classes like debt, equity and real estate. The company has witnessed substantial growth in scale. The company’s growth momentum has been exponential and it has always focused on increasing accessibility for its investors. Driven by an entirely investor centric approach the organization today is a suitable mix of investment expertise, resource bandwidth and process orientation. The company endeavours to simplify its investor’s journey to meet their financial goals and give a good investor experience through innovation, consistency and sustained risk adjusted performance. The organization is considered suitable for this study because it agreed to reformulate the existing financial instruments with proper consideration of the operational risk management factors [11, 12, 14] like information technology risk, vendor risk, compliance risk, process risk and financial reporting risk. Once the

[email protected]

9.2 Preparation of Preliminary Life Cycle Assessment Study

175

risks of the organization and the strategy are defined they are allocated to individual functions if the operational risk teams are split into sub units. The management is also interested to discover the fact that the proposed integrated approach is to be applied in the initial risk management stage. The institution wanted to use the new financial instruments as a showcase to demonstrate its capability and experience in the production of customer friendly risk free financial products. The LCA tasks involved the evaluation of the fiscal performance of the process involved using a womb-to-tomb approach, the collection of relevant financial data from the institution and calculations of the life cycle impact assessment (LCIA) results for the financial products. Financial indicator represented in terms of return on net assets as 80 [7] is used as the LCIA methodology to represent the financial burden of all risk management solution processes in the life cycle phases concerned. These results in terms of return on net assets are used for determining the criteria preferences during FAHP pairwise comparisons.

9.3 Calculation of Fuzzy Analytic Hierarchy Process Weights

The FAHP involves three activities [10]. The risk management criteria and alternatives are determined and established beforehand. The risk management criteria are determined according to the life cycle phases available for the case result. Alongwith this one additional criterion, efficiency ðC5 Þ is added to form the hierarchical structure of FAHP. Figure 9.3 illustrates the relationships between the goal, criteria and the risk aversion alternatives in the hierarchical structure.

Fig. 9.3 The hierarchical structure of the fuzzy AHP approach

[email protected]

9 Evaluation of the Possibilistic Quantification …

Fig. 9.4 The trapezoidal membership function defined by trapezoid (x; a, b, c, d)

9.4 Trapezoidal Fuzzy Numbers for Pairwise Comparison Between the Criteria

The trapezoidal fuzzy numbers [9, 23], as shown in Eq. (9.2) and Fig. 9.4, represent the preferences during the pairwise comparison in FAHP. The membership function is piecewise linear and continuous, takes values in the interval [0, 1], and is controlled by four parameters a, b, c, d with x ∈ R [23]:

$$
\mu_{\mathrm{trapezoid}}(x; a, b, c, d) =
\begin{cases}
0, & x \le a \\
\dfrac{x-a}{b-a}, & a \le x \le b \\
1, & b \le x \le c \\
\dfrac{d-x}{d-c}, & c \le x \le d \\
0, & d \le x
\end{cases}
\qquad (9.2)
$$
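Equation (9.2) translates directly into code. In the short sketch below, the example membership value uses the "Fairly strong" judgement (1.5, 1.7, 2.0, 2.5) of Table 9.1 purely for illustration.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function of Eq. (9.2)."""
    if x <= a or x >= d:
        return 0.0
    if a < x < b:
        return (x - a) / (b - a)
    if b <= x <= c:
        return 1.0
    return (d - x) / (d - c)    # c < x < d

# membership of x = 2.2 in the 'Fairly strong' judgement (1.5, 1.7, 2.0, 2.5)
print(trapezoid(2.2, 1.5, 1.7, 2.0, 2.5))   # 0.6
```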

To enable pairwise comparisons between the criteria, the possibilistic judgements stated in Table 9.1 are used on the relative importance of each criterion. The linguistic terms in FAHP are represented by trapezoidal fuzzy numbers as illustrated in Table 9.1.

9.5 Pairwise Comparison Matrices and Normalized Fuzzy Weights for Each Criterion

Once the FAHP hierarchical structure and the results of LCA are obtained, the next stage is to form a pairwise comparison matrix with the fuzzy numbers as shown in Table 9.2. The LCA result is utilized for assigning preferences during the pairwise comparison. The fuzzy weights of the criteria are computed as follows [3]:

[email protected]

Table 9.1 The relative importance between each criterion in terms of possibilistic judgements

Criteria pairs compared: C1–C2, C1–C3, C1–C4, C1–C5, C2–C3, C2–C4, C2–C5, C3–C4, C3–C5, C4–C5

Linguistic scale (trapezoidal fuzzy numbers): Equal (1.0, 1.0, 1.0, 1.0) | Weak (0.5, 0.7, 1.0, 1.5) | Fairly strong (1.5, 1.7, 2.0, 2.5) | Very strong (2.5, 2.7, 3.0, 3.5) | Absolute (3.5, 3.8, 4.1, 4.5)


Table 9.2 The relative importance between each criterion in terms of fuzzy numbers

 | C1 | C2 | C3 | C4 | C5
C1 | (1.0, 1.0, 1.0, 1.0) | (3.5, 3.8, 4.1, 4.5) | (2.5, 2.7, 3.0, 3.5) | (1.5, 1.7, 2.0, 2.5) | (1.0, 1.0, 1.0, 1.0)
C2 | (0.21, 0.26, 0.24, 0.28) | (1.0, 1.0, 1.0, 1.0) | (1.5, 1.7, 2.0, 2.5) | (2.5, 2.7, 3.0, 3.5) | (0.5, 0.7, 1.0, 1.5)
C3 | (0.28, 0.37, 0.33, 0.41) | (0.35, 0.58, 0.50, 0.47) | (1.0, 1.0, 1.0, 1.0) | (2.5, 2.7, 3.0, 3.5) | (1.5, 1.7, 2.0, 2.5)
C4 | (0.35, 0.58, 0.50, 0.47) | (0.28, 0.37, 0.33, 0.41) | (0.28, 0.37, 0.33, 0.41) | (1.0, 1.0, 1.0, 1.0) | (2.5, 2.7, 3.0, 3.5)
C5 | (1.0, 1.0, 1.0, 1.0) | (1.96, 1.42, 1.0, 0.68) | (0.35, 0.58, 0.50, 0.41) | (0.28, 0.37, 0.33, 0.41) | (1.0, 1.0, 1.0, 1.0)


F1 = (9.89, 11.87, 13.79, 14.65) ⊘ (36.69, 31.86, 27.68, 25.77) = (0.36, 0.37, 0.38, 0.39)
F2 = (5.47, 6.38, 7.45, 9.36) ⊘ (36.69, 31.86, 27.68, 25.77) = (0.19, 0.20, 0.21, 0.24)
F3 = (2.86, 3.89, 5.24, 5.69) ⊘ (36.69, 31.86, 27.68, 25.77) = (0.11, 0.13, 0.14, 0.17)
F4 = (5.47, 6.95, 8.87, 9.98) ⊘ (36.69, 34.86, 27.68, 25.77) = (0.20, 0.24, 0.25, 0.26)
F5 = (3.96, 5.35, 6.98, 7.96) ⊘ (36.69, 31.86, 27.68, 25.77) = (0.14, 0.17, 0.19, 0.20)

After calculating the degree of possibility of $F_i$ over $F_j$, the weighted vectors [22] are calculated and the normalized weight vector with respect to criteria C1–C5 is [3]:

$$W_c = (0.35, 0.14, 0.30, 0.05, 0.27)^{T}$$

The results obtained from FAHP show that C1, C3 and C5 are the most important criteria for the selection of risk aversion alternatives [11, 14]. These weights are used in FTOPSIS to determine the final risk aversion solution rankings [3].
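The degree of possibility between two trapezoidal fuzzy numbers, and the normalization of the resulting minimum degrees into crisp weights, can be sketched as follows. This follows the usual extent-analysis recipe [2, 22]; the trapezoidal extents in the example are illustrative and the sketch is not a line-by-line reproduction of the book's calculation of W_c.

```python
import numpy as np

def possibility_geq(F1, F2):
    """Degree of possibility V(F1 >= F2) for trapezoidal fuzzy numbers (a, b, c, d)."""
    a1, b1, c1, d1 = F1
    a2, b2, c2, d2 = F2
    if c1 >= b2:
        return 1.0
    if a2 >= d1:
        return 0.0
    # height of the crossing point of F1's falling edge and F2's rising edge
    return (d1 - a2) / ((b2 - a2) + (d1 - c1))

def extent_weights(extents):
    """Crisp weights from fuzzy extents: d_i = min_j V(F_i >= F_j), then normalize."""
    d = np.array([min(possibility_geq(Fi, Fj) for j, Fj in enumerate(extents) if j != i)
                  for i, Fi in enumerate(extents)])
    return d / d.sum()

# three illustrative overlapping trapezoidal extents
F = [(0.25, 0.30, 0.35, 0.40), (0.20, 0.28, 0.33, 0.38), (0.15, 0.22, 0.27, 0.34)]
print(extent_weights(F))   # roughly (0.36, 0.36, 0.27)
```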

9.6 Determination of Ranks of Operational Risks with Trapezoidal Fuzzy Numbers

The next task in FTOPSIS is to determine the preliminary risk aversion solution [11, 14] rankings based on trapezoidal fuzzy numbers considered equivalent to linguistic terms. The linguistic terms and their associated fuzzy membership functions are based on the fuzzy numbers defined by [19] and represented in Table 9.3. Then, the final risk aversion solution rankings are determined according to CCi calculated by using FTOPSIS according to the cost/benefit criteria as well as the distance of each alternative.

9.7 Integration of Weights

Before FTOPSIS calculations are performed, the risk aversion solutions [11, 14] are compared against each criterion separately. The linguistic variables are used for evaluation and then transformed into trapezoidal fuzzy numbers as shown in Table 9.3.

Table 9.3 The linguistic terms corresponding to the trapezoidal fuzzy membership function

Linguistic terms | a | b | c | d
Very high | 0.75 | 0.85 | 0.95 | 1.00
High | 0.50 | 0.60 | 0.70 | 0.80
Medium | 0.30 | 0.45 | 0.55 | 0.65
Low | 0.15 | 0.25 | 0.35 | 0.45
Very low | 0.00 | 0.10 | 0.20 | 0.25


Table 9.4 The fuzzy risk aversion solution evaluation matrix

 | C1 | C2 | C3 | C4 | C5
Risk aversion 1 | (0.50, 0.60, 0.70, 0.80) | (0.30, 0.45, 0.55, 0.65) | (0.75, 0.85, 0.95, 1.00) | (0.30, 0.45, 0.55, 0.65) | (0.30, 0.45, 0.55, 0.65)
Risk aversion 2 | (0.75, 0.85, 0.95, 1.00) | (0.30, 0.45, 0.55, 0.65) | (0.15, 0.25, 0.35, 0.45) | (0.75, 0.85, 0.95, 1.00) | (0.75, 0.85, 0.95, 1.00)
Risk aversion 3 | (0.75, 0.85, 0.95, 1.00) | (0.15, 0.25, 0.35, 0.45) | (0.30, 0.45, 0.55, 0.65) | (0.75, 0.85, 0.95, 1.00) | (0.50, 0.60, 0.70, 0.80)
Risk aversion 4 | (0.15, 0.25, 0.35, 0.45) | (0.75, 0.85, 0.95, 1.00) | (0.75, 0.85, 0.95, 1.00) | (0.75, 0.85, 0.95, 1.00) | (0.00, 0.10, 0.20, 0.25)
Criteria weights | 0.35 | 0.14 | 0.30 | 0.05 | 0.27

Table 9.5 The weighted risk aversion solution evaluation matrix

 | C1 | C2 | C3 | C4 | C5
Risk aversion 1 | (0.17, 0.24, 0.28, 0.30) | (0.04, 0.07, 0.09, 0.11) | (0.24, 0.27, 0.30, 0.35) | (0.02, 0.04, 0.05, 0.06) | (0.11, 0.14, 0.17, 0.19)
Risk aversion 2 | (0.24, 0.28, 0.35, 0.41) | (0.04, 0.07, 0.09, 0.11) | (0.04, 0.09, 0.11, 0.14) | (0.05, 0.06, 0.07, 0.09) | (0.19, 0.24, 0.27, 0.30)
Risk aversion 3 | (0.24, 0.28, 0.35, 0.41) | (0.02, 0.04, 0.07, 0.09) | (0.11, 0.14, 0.19, 0.21) | (0.05, 0.06, 0.07, 0.09) | (0.14, 0.18, 0.24, 0.28)
Risk aversion 4 | (0.05, 0.09, 0.14, 0.19) | (0.11, 0.14, 0.19, 0.21) | (0.24, 0.27, 0.30, 0.35) | (0.05, 0.06, 0.07, 0.09) | (0.00, 0.04, 0.06, 0.09)
A* | (0, 0, 0, 0) | (0, 0, 0, 0) | (0, 0, 0, 0) | (0, 0, 0, 0) | (1, 1, 1, 1)
A− | (1, 1, 1, 1) | (1, 1, 1, 1) | (1, 1, 1, 1) | (1, 1, 1, 1) | (0, 0, 0, 0)

Once the fuzzy evaluations for the risk aversion solutions are determined, the fuzzy risk aversion solution evaluation matrix is prepared as shown in Table 9.4. The resulting fuzzy weighted decision matrix is represented in Table 9.5. Based on the results in Table 9.5, the weighted values $\tilde v_{ij}$ are normalized positive trapezoidal fuzzy numbers in the range [0, 1] [23].
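The weighted matrix in Table 9.5 combines the ratings of Table 9.4 with the FAHP criteria weights. A minimal sketch of that operation, scaling each trapezoidal rating by the crisp weight of its criterion, is given below; it illustrates the construction rather than reproducing Table 9.5's exact figures, which may involve additional normalization.

```python
import numpy as np

# ratings from Table 9.4 for Risk aversion 1 on criteria C1-C5 (trapezoidal numbers)
ratings = np.array([(0.50, 0.60, 0.70, 0.80),
                    (0.30, 0.45, 0.55, 0.65),
                    (0.75, 0.85, 0.95, 1.00),
                    (0.30, 0.45, 0.55, 0.65),
                    (0.30, 0.45, 0.55, 0.65)])
weights = np.array([0.35, 0.14, 0.30, 0.05, 0.27])   # FAHP weights for C1-C5

# scaling a trapezoidal number by a non-negative crisp weight scales all four parameters
weighted = ratings * weights[:, None]
print(np.round(weighted, 2))
```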

9.7.1 Fuzzy Analytic Hierarchy Process

The process and calculations of FAHP are illustrated in Sects. 9.2, 9.3, 9.4 and 9.5 corresponding to the criteria C1–C5, for which the normalized weight vector is $W_c = (0.35, 0.14, 0.30, 0.05, 0.27)^{T}$. All the results are tabulated in Tables 9.1 and 9.2 [3].

[email protected]

Table 9.6 The summarized results of the FTOPSIS approach

 | Dist_j* | Dist_j− | CC_j
Risk aversion 1 | 1.52 | 3.50 | 0.6972
Risk aversion 2 | 1.34 | 3.69 | 0.7335
Risk aversion 3 | 1.41 | 3.59 | 0.7180
Risk aversion 4 | 1.60 | 3.41 | 0.6806

9.7.2 Fuzzy Extension of the Technique for Order Preference by Similarity to Ideal Solution

The process and calculations of FTOPSIS are illustrated in Sects. 9.6, 9.7 and 9.8. FTOPSIS is applied to FAHP calculated weights to obtain the final risk aversion solutions rank such that the most suitable alternative is selected. All the results are tabulated in Tables 9.3, 9.4, 9.5 and 9.6 [3].

9.8 Calculation of Solutions for Risk Aversion Alternatives

In this study the criteria C1–C4 represent the external competitive impact generated from the corresponding life cycle phases. These criteria are defined as cost criteria, and the criterion C5 is the benefit criterion. The distances of each risk aversion alternative from $A^{*}$ and $A^{-}$, and hence the closeness coefficients $CC_j$, are calculated as follows [13]:

$$\mathrm{Dist}_1^{*} = \sum_{j=1}^{5}\sqrt{\frac{1}{4}\sum_{i=1}^{4}\left(p_{ij} - q_{ij}^{*}\right)^{2}} \qquad (9.3)$$

$$
\begin{aligned}
\mathrm{Dist}_1^{*} = \; & \sqrt{\tfrac{1}{4}\left[(0-0.172)^2+(0-0.241)^2+(0-0.279)^2+(0-0.301)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(0-0.041)^2+(0-0.069)^2+(0-0.089)^2+(0-0.110)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(0-0.241)^2+(0-0.269)^2+(0-0.301)^2+(0-0.352)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(0-0.021)^2+(0-0.041)^2+(0-0.052)^2+(0-0.059)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(1-0.110)^2+(1-0.141)^2+(1-0.169)^2+(1-0.189)^2\right]} = 1.52
\end{aligned}
$$

$$\mathrm{Dist}_1^{-} = \sum_{j=1}^{5}\sqrt{\frac{1}{4}\sum_{i=1}^{4}\left(p_{ij} - q_{ij}^{-}\right)^{2}} \qquad (9.4)$$

$$
\begin{aligned}
\mathrm{Dist}_1^{-} = \; & \sqrt{\tfrac{1}{4}\left[(1-0.172)^2+(1-0.241)^2+(1-0.279)^2+(1-0.301)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(1-0.041)^2+(1-0.069)^2+(1-0.089)^2+(1-0.110)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(1-0.241)^2+(1-0.269)^2+(1-0.301)^2+(1-0.352)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(1-0.021)^2+(1-0.041)^2+(1-0.052)^2+(1-0.059)^2\right]} \\
& + \sqrt{\tfrac{1}{4}\left[(0-0.110)^2+(0-0.141)^2+(0-0.169)^2+(0-0.189)^2\right]} = 3.50
\end{aligned}
$$

$$CC_1 = \frac{\mathrm{Dist}_1^{-}}{\mathrm{Dist}_1^{*} + \mathrm{Dist}_1^{-}} = \frac{3.50}{1.52 + 3.50} = 0.6972$$

In Eqs. (9.3) and (9.4), $p_{ij}$ represents the fuzzy membership value of the risk aversion alternative corresponding to criterion $C_j$, while $q_{ij}^{*} \in A^{*}$ and $q_{ij}^{-} \in A^{-}$ are taken from the weighted risk aversion solution evaluation matrix in Table 9.5. By iterating these steps for the other risk aversion solutions [11, 14], the final results of FAHP and FTOPSIS are obtained as presented in Table 9.6 [3]. Based on the $CC_j$ values, the ranking of the risk aversion solutions in descending order is $A_2$, $A_3$, $A_1$ and $A_4$. The result shows that the investments and risk aversion phase criteria are the key consideration in the operational risk framework, followed by the utilization and efficiency phases. The aim is to select investments with minimal external competitiveness impact that satisfy the risk aversion specification, taking care of both the resources consumed in the utilization phase and the efficiency phase of the risk aversion solutions. External competitiveness is an important risk aversion criterion in the operational risk framework, hence assessing a risk's external competitiveness is an important aspect. However, due to cost and time aspects [15], stringent statutory guidelines and the problems caused by incomplete and imprecise operational risk inventory data, it is quite difficult for various institutions to conduct a full LCA. For the risk aversion performance evaluation of the specified alternatives, rough cut LCA, FAHP and FTOPSIS are applied. The usage of linguistic variables here gives a realistic viewpoint to the entire evaluation process. However, uncertainty and vagueness often appear during the evaluation process when rough cut LCA results are considered for prioritizing the risk aversion alternatives. Thus the integration of different fuzzy approaches makes the evaluation of the possibilistic quantification of operational risk more suitable and realistic.

[email protected]

References

183

References
1. Aydogan, E.: Performance measurement model for Turkish aviation firms using the rough AHP and TOPSIS methods under fuzzy environment. Expert Syst. Appl. 38(4), 3992–3998 (2011)
2. Chang, D.A.: Application of the extent analysis method on fuzzy AHP. Eur. J. Oper. Res. 95(3), 649–655 (1996)
3. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report. Birla Institute of Technology Mesra, Patna Campus, India (2010)
4. Chen, C.: Extensions of the TOPSIS for group decision making under fuzzy environment. Fuzzy Sets Syst. 114(1), 1–9 (2000)
5. De Benedetto, L., Klemes, J.: The environmental performance strategy map: an integrated LCA approach to support the strategic decision making process. J. Clean. Prod. 17(10), 900–906 (2009)
6. Holton, G.A.: Value at Risk: Theory and Practice, 2nd edn (2014). http://value-at-risk.net
7. Hussain, A.: Managing Operational Risk in Financial Markets, 1st edn. Butterworth Heinemann (2000)
8. ICICI Prudential Banking and Financial Services. http://www.personalfn.com/tools-andresources/mutual-funds
9. Jang, S., Sun, C.T., Mizutani, E.: Neuro Fuzzy and Soft Computing. Prentice Hall (1997)
10. Kahraman, C.: Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developments. Springer Optimization and its Applications. Springer, Berlin (2009)
11. King, J.L.: Operational Risk: Measurement and Modeling. The Wiley Finance Series, 1st edn. Wiley, New York (2001)
12. Marrison, C.: The Fundamentals of Risk Measurement. McGraw Hill, New York (2002)
13. Ng, C.Y., Chuah, K.B.: Evaluation of eco design alternatives by integrating AHP and TOPSIS methodology under a fuzzy environment. Int. J. Manage. Sci. Eng. Manage. 7(1), 43–52 (2012)
14. Panjer, H.H.: Operational Risk: Modeling Analytics. Wiley Series in Probability and Statistics. Wiley, New York (2006)
15. Rao, S.S.: Engineering Optimization: Theory and Practice, 4th edn. John Wiley and Sons, New York (2009)
16. Rausand, M.: Risk Assessment: Theory, Methods and Applications, 2nd edn. Wiley, New York (2011)
17. Ruppert, D.: Statistics and Data Analysis for Financial Engineering. Springer Texts in Statistics. Springer, Berlin (2010)
18. Tasche, D.: Risk contribution and performance measurement. Working paper, Technical University of Munich, Munich, Germany (1999)
19. Torfi, F., Farahani, Z., Rezapour, S.: Fuzzy AHP to determine the relative weights of evaluation criteria and fuzzy TOPSIS to rank the alternatives. Appl. Soft Comput. 10(2), 520–528 (2010)
20. Triantaphyllou, E.: Multi Criteria Decision Making Methods: A Comparative Study. Applied Optimization. Springer, Berlin (2000)
21. Tukey, J.W.: Modern Techniques in Data Analysis. NSF Sponsored Regional Research Conference, Southern Massachusetts University, Massachusetts (1977)
22. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1(1), 3–28 (1978)
23. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
24. Zimmermann, H.J.: Fuzzy Set Theory and Its Applications, 4th edn. Kluwer Academic Publishers, Massachusetts (2001)

[email protected]

Chapter 10

Summary and Future Research

10.1 Summary

This research monograph is the outcome of the technical report a study of operational risk using possibility theory [1] from the research work done at Birla Institute of Technology Mesra, Patna Campus, India. All the theories and results are adopted from [1] and compiled in this book. The experiments are performed on several real life datasets using the MATLAB optimization toolbox [16]. The book is primarily directed towards students of postgraduate as well as research level courses in Fuzzy Sets, Possibility Theory and Mathematical Finance in universities across the globe. An elementary knowledge of Algebra and Calculus is a prerequisite for understanding the different concepts illustrated in the book. The book is immensely beneficial to researchers in risk analysis [7]. The book is also useful for professionals in banks, financial institutions and several commercial organizations interested in studying different aspects of risks. In the present competitive business scenario operational risk has attracted the attention of researchers and corporate business analysts [8, 12]. The operational risk has been specified through the Basel Committee [8], which has formulated a framework that sets out a standardized approach towards the underlying risk principles. It is an important risk component for financial institutions and banks, as huge amounts of capital are allocated to mitigate this risk. The availability of various data sets has provided us with an opportunity to analyze this risk and propose different models for quantification. Risk measurement is thus of paramount concern for purposes of capital allocation, hedging and new product development for risk mitigation. In this monograph we have discussed some modeling issues for the fuzzy g-and-h distribution [1] within the loss distribution approach for operational risk linked to extreme value theory, giving due consideration to uncertainty which is indeterminate in nature, encompassing belief degrees [6]. A comprehensive evaluation of the existing methods is performed and then new techniques are introduced to

[email protected]

185

186

10

Summary and Future Research

measure risk using various criteria. In Chaps. 2 and 3 we present the basic concepts of operational risk [8] and g-and-h distribution [2] respectively. Chapters 2 and 3 set the tone for the rest of the Chapters in the book. Chapters 2 and 3 provide inputs for Chap. 4 where the probabilistic view of operational risk [2, 5] is highlighted. The concept of VaR [5] is referred and SVaR [1] is developed alongwith risk and deviation measures. The risk and deviation measures are used to redefine the concepts of a Value at Risk and a Subjective Value at Risk deviation measures [1] which are explained through different illustrations and applications. The VaR and SVaR are also estimated through the stability viewpoint. They are also decomposed according to contributions of risk factors. The tail properties and regular variation of risk distribution is illustrated alongwith the second order regular variation through pickands-balkema-de haan theorem. In Chap. 5 the mathematical foundations of possibility theory [3] are discussed. The concepts of σ-Algebra, measurable space and measurable set, measurable function, uncertainty measure, uncertainty space, uncertainty distribution and uncertainty set are explained with several illustrative examples. An analysis of possibilistic risk concludes the chapter. Chapter 5 provide inputs for Chap. 6 where the possibilistic view of operational risk [1] is investigated. The concepts of VaR and SVaR presented in Chap. 4 are further extended to fuzzy VaR and fuzzy SVaR measures. The fuzzy versions of risk and deviation measures are also explained. The sub exponential nature of fuzzy g-and-h distribution generates the one–claim–causes–ruin phenomenon. The testing of the proposed possibilistic technique performs consistently better than the probabilistic model. However, there exists a discrepancy in practice between results which strongly favour extreme value theory and fuzzy g-and-h distribution. An overall very slow rate of convergence in applications using extreme value theory techniques is obtained using fuzzy g-and-h class of distribution functions. This happens because of second order behavior of slowly varying function underlying fuzzy g-and-h distribution. An application of fuzzy subjective value at risk optimization [10] is also presented. In Chap. 7 various simulation and numerical results [1] are given in support of the theory presented in Chaps. 4 and 6. All the simulation examples are adopted from several real life scenarios. In Chap. 8 a case study in Indian scenario adopted from the iron ore (hematite) mines in Jharkhand state [1] is conducted to show the advantage of the proposed model. A bilevel multi objective optimization model is formulated which captures the entire iron ore (hematite) production in Jharkhand state. The risk calculation is performed with fuzzy SVaR constraints. Then the sensitivity analysis is presented followed by a comparative analysis with other techniques. In Chap. 9 an evaluation of the possibilistic quantification of operational risk is performed to assess and prioritize different aspects towards averting the underlying associated risks by integrating fuzzy analytic hierarchy process (FAHP) [11] with fuzzy extension of the technique for order preference by similarity to ideal solution (FTOPSIS) [15].

[email protected]

10.2 Future Research

Certain decisions are made in this research monograph which we now formulate as topics for future research. We briefly enumerate them as follows: (i) More work can be done on the basic properties of g-and-h distribution presented in Chap. 3. The g-and-h distribution can be modeled in terms of other continuous random variables [9] which remains an investigation topic together with their applications. The fitting of g-and-h distribution can easily be extended to other categories of data. (ii) The concepts of VaR and SVaR in Chap. 4 are defined in terms of continuous random variables keeping in view of the normal distribution. These concepts can be remodelled as continuous random variables as the limiting functions of discrete random variables [9]. More work needs to be done on the estimation of VaR and SVaR from the point of view of stability as well as on the decomposition according to contributions of risk factors. Infact we are working towards a better estimation of VaR and SVaR well as setting up an optimal threshold for extreme value theory based peaks over threshold approach. (iii) The mathematical foundations of possibility theory in Chap. 5 needs to be further enhanced with additional properties. The mathematical results can further be enriched with other uncertainty modeling concepts like rough sets [13], fuzzy-rough sets and rough-fuzzy sets [14]. This will allow a better modeling of impreciseness and incompleteness present in the banking and financial data. Likewise the analysis of possibilistic risk can be modelled through rough, fuzzy-rough and rough-fuzzy sets. (iv) The fuzzy VaR and fuzzy SVaR measures [1] in Chap. 6 can be remodelled through rough sets [13], fuzzy-rough sets and rough-fuzzy sets. The risk and deviation measures can also be represented in terms of rough, fuzzy-rough and rough-fuzzy sets. This will help to improve the discrepancy as well as the overall quality of the testing results. Likewise the rate of convergence can be further improved through the rough, fuzzy-rough and rough-fuzzy versions of g-and-h class of distribution functions. (v) More simulation and numerical results based on several real life scenarios [1] can be added in Chap. 7. These results can be adopted from different areas of science and engineering. (vi) The case study of the iron ore (hematite) mines in Jharkhand state in India [1] from Chap. 8 can be remodelled with multi objective genetic algorithms (MOGA) [4]. This will help to improve the optimality of results in the feasible solution search space. The risk calculation can also be performed through rough, fuzzy rough and rough fuzzy constraints [10]. More work can also be done on the sensitivity analysis.

[email protected]

188

10

Summary and Future Research

References
1. Chaudhuri, A.: A Study of Operational Risk Using Possibility Theory. Technical Report. Birla Institute of Technology Mesra, Patna Campus, India (2010)
2. Degen, M., Embrechts, P., Lambrigger, D.D.: The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Technical Report. Department of Mathematics, ETH Zurich, Zurich, Switzerland (2007)
3. Dubois, D., Prade, H.: Possibility Theory. Plenum, New York (1988)
4. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning, 4th edn. Pearson Education, New Delhi (2009)
5. Holton, G.A.: Value at Risk: Theory and Practice, 2nd edn (2014). http://value-at-risk.net
6. Huber, F., Schmidt, C.P.: Degrees of Belief. Synthese Library, vol. 342. Springer, Berlin (2009)
7. Hussain, A.: Managing Operational Risk in Financial Markets, 1st edn. Butterworth Heinemann, UK (2000)
8. King, J.L.: Operational Risk: Measurement and Modeling. The Wiley Finance Series, 1st edn. Wiley, New York (2001)
9. Laha, R.G., Rohatgi, V.K.: Probability Theory. Wiley Series in Probability and Mathematical Statistics, vol. 3. Wiley, New York (1979)
10. Lodwick, W.A.: Fuzzy Optimization: Recent Advances and Applications. Studies in Fuzziness and Soft Computing. Springer, Berlin (2010)
11. Ng, C.Y., Chuah, K.B.: Evaluation of eco design alternatives by integrating AHP and TOPSIS methodology under a fuzzy environment. Int. J. Manage. Sci. Eng. Manage. 7(1), 43–52 (2012)
12. Panjer, H.H.: Operational Risk: Modeling Analytics. Wiley Series in Probability and Statistics. Wiley, New York (2006)
13. Pawlak, Z.: Rough sets. Int. J. Comput. Inform. Sci. 11, 341–356 (1982)
14. Thiele, H.: Fuzzy Rough Sets versus Rough Fuzzy Sets—An Interpretation and a Comparative Study using Concepts of Modal Logic. Technical Report ISSN 1433-3325, University of Dortmund (1998)
15. Torfi, F., Farahani, Z., Rezapour, S.: Fuzzy AHP to determine the relative weights of evaluation criteria and fuzzy TOPSIS to rank the alternatives. Appl. Soft Comput. 10(2), 520–528 (2010)
16. http://in.mathworks.com/products/optimization/

[email protected]

Index

Fuzzy Fuzzy Fuzzy Fuzzy Fuzzy Fuzzy Fuzzy Fuzzy Fuzzy

B Banks, 14, 29 Basel Committee, 8, 23 Basel I, 7, 23 Basel II, 11 Basel III, 24 Belief degrees, 75 Borel set, 78 C Constrained optimization problem, 68 D Data fitting, 42 Deviation measures, 57 Distribution function, 48 E Equivalence of chance and VaR constraints, 58 Evaluation methods, 2 Exploratory data analysis, 30 Extreme value theory, 12 F FAHP, 171 Financial institutions, 29 Flowchart of hematite industry, 151 FSGA, 168 FSISA, 168 FTOPSIS, 172 Fuzzy deviation measures, 125 Fuzzy distribution function, 116 Fuzzy g-and-h distribution, 115 Fuzzy optimization, 135 Fuzzy probability, 123 Fuzzy probability measure, 126 Fuzzy probability space, 123

product measure, 125 real numbers, 115 risk measures, 123 rough sets, 187 sets, 75 σ-algebra, 125 SVaR, 121 SVaR optimization, 128 VaR, 118

G Gauss copula, 53 g distribution, 34 g-and-h distribution, 33 H h distribution, 35 Hematite, 147 I Impreciseness, 75 Indeterminacy, 76 India, 150 Investment banks, 15 Iron ore mining, 147 ISA, 166 J Jharkhand state, 150 K Kurtosis, 30 L LCA, 169 Likelihood, 27, 39 Linear regression hedging, 136


M MATLAB, 132 Mean absolute deviation, 139 Measurable function, 79 Measurable set, 77 Measurable space, 77 MOGA, 187 Multiobjective optimization, 153 O Operational Operational Operational Operational

risk, 7, 12 risk external data, 25 risk internal data, 25 risk surface, 119

P Pickands-Balkema-de Haan theorem, 63 Portfolio asset mix, 142, 143 Portfolio rebalancing strategies, 141 Possibilistic risk analysis, 103 Possibility theory, 103 Probability space, 48 Probability theory, 48 Q Quantification, 25 R Regression problem, 69 Regulatory framework, 23 Risk calculation, 161 Risk control estimates, 132 Risk factors decomposition, 72 Risk index, 104 Risk measures, 56

Rough fuzzy sets, 187 Rough sets, 187 S Second order regular variation, 63 Sensitivity analysis, 165 Simulation, 131 σ-algebra, 77 Skewness, 30 Stability of estimation, 71 Standard deviation, 139 Subadditivity, 51, 120 SVaR, 54 SVaR deviation, 136 SVaR optimization, 67 T Tail properties, 59 Trapezoidal membership function, 115, 163, 176 U Uncertainty, 7 Uncertainty distribution, 85 Uncertainty measure, 81 Uncertainty set, 88 Uncertainty space, 83 Uncertainty theory, 75 Unconstrained optimization problem, 68 V Vagueness, 75 VaR, 49, 110 VaR deviation, 138

[email protected]