Application of Security Metrics in Auditing Computer Network Security: A Case Study

Upeka Premaratne, Jagath Samarabandu, Tarlochan Sidhu
Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario, Canada
Email: {upremara,jagath}@uwo.ca, [email protected]

Bob Beresh, Jian-Cheng Tan
Kinectrics Inc., Toronto, Ontario, Canada
Email: {bob.beresh,jian-cheng.tan}@kinectrics.com

Abstract—This paper presents a case study of the application of security metrics to a computer network. A detailed survey is conducted of existing security metric schemes, and the Mean Time to Compromise (MTTC) metric and the VEA-bility metric are selected for this study. The input data for both metrics are obtained from a network security tool, and the results are used to determine the security level of the network under each metric. The feasibility and convenience of using the two metrics are then compared.

Keywords—Information Security, Computer Network Security, Information Security Metrics, Information Security Auditing, Computer Network Security Auditing, Common Vulnerability Scoring System, VEA-bility Score, Mean Time to Compromise

978-1-4244-2900-4/08/$25.00 © 2008 IEEE (ICIAFS08)

I. INTRODUCTION

Due to the increased use of computer networks and the sensitivity of their contents, network security is a prime concern. Modern computer networks are usually both highly interconnected and highly heterogeneous in terms of technology and vendors, so there is always the potential for loopholes to exist. This leads to the need to evaluate the security of such networks, which can be done from the perspective of either a defender or an attacker.

Security analysis and evaluation from the perspective of the defender involves looking at the security requirements of the client (the defender). This leads to a security policy, which in turn requires security mechanisms to enforce it [1]. The enforceability of a security policy depends on the mechanisms used [2], which should be selected so that they do not compromise the performance of the system [3], [4].

The other method of security design involves looking at the problem from the perspective of the attacker [5]. Intuitively, this perspective is more effective because an attacker is always motivated to achieve a set goal. Research in this context is also more realistic because data can be obtained through simulated attacks [6] and by setting up bait networks known as Honeypots [7]. A Honeypot is a network set up with the intent of recording the behavior of attackers; it is probably the most realistic source of data because intrusions on it are made by real-world attackers [8]. Attack scenarios can also be simulated using game theory [9].

II. SECURITY METRICS

A. Motivation

A security metric is a quantified aspect of network security. The main motivation behind research into security metrics is to provide people involved with secure networks a tangible means of measuring the security of a network [10]. This has different implications for different roles: to a system designer it provides a reasonable method to analyze the costs and benefits of a security mechanism, while to a security auditor it provides a means to assess and convey the security of a network to a client.

The main challenges involved in designing proper security metrics for networks include:
1) Converting highly qualitative assessments into a quantitative form
2) Dealing with vague linguistic terms
3) Reflecting the aspects of information and network security without bias
4) Finding an optimum set of metrics which are both realistic and reasonable to assess

B. Existing Metric Schemes

Numerous authors have proposed different types of metrics to assess the security of a network. These include Pamula et al. [11], who express the security of a network in terms of the weakest adversary that can compromise it. Another proposed metric is the Mean Time to Compromise (MTTC), put forward by McQueen et al. [12] and Leversage and Byres [13]. The scoring system employed by the SANS Institute [14], known as the SANS Institute Critical Vulnerability Analysis Scale Ratings, uses a questionnaire; the answers are run through a proprietary algorithm and the final ranking of the threat is given as critical, high, moderate or low. The Common Vulnerability Scoring System (CVSS) [15] is a widely used scheme in the field. It is used as the basis for security metric systems such as VEA-bility [16] and was cited as an emerging standard in IEEE Security and Privacy [17]. In CVSS, a base score is first calculated using a set of evaluating questions.
These are then adjusted according to a temporal score and an environmental score. The environmental score takes into account the potential for the vulnerability to cause bodily harm to employees (termed collateral damage).

C. Analysis

Expressing the strength of a network in terms of the weakest opponent [11] requires the use of attack graphs [18]. The main drawback of attack graphs is their complexity: a single node in a network usually requires an extensive attack graph. Hence, when used during a security audit, it may be too complex for the average client to interpret.

Expressing the strength of a system in terms of its MTTC [12], [13] is more convenient. It can be calculated using a set of equations based on the known vulnerabilities of the system, and the result is a single number which is simple for an average client to comprehend.

The SANS CVA rating is a good method to assess a network infrastructure. However, the algorithm it uses to obtain the final result is not in the public domain and therefore cannot be verified independently.

The CVSS has a coverage of security aspects similar to that of the SANS CVA rating. It also gives a numerical result, and its assessment method is in the public domain. However, its main application is to assess a newly discovered vulnerability in a system. Systems that use CVSS, such as VEA-bility [16], would be very complex to assess if it were not for security analysis tools.

III. THE CASE STUDY

The case study involves the application of the MTTC, obtained using both the McQueen [12] and the Leversage and Byres [13] methods, and the VEA-bility metric [16] to a real computer network. After the results are obtained, they are compared in terms of reasonability and feasibility.

IV. DETAILED ANALYSIS OF SELECTED METRICS

A. Mean Time to Compromise Metric

The MTTC metric is obtained by breaking the actions of the attacker into three statistical processes. The time to compromise is the total time taken for the three processes, given by

T = t1·P1 + t2·(1 − P1)(1 − u) + t3·u·(1 − P1)   (1)

where the periods t1, t2 and t3 are in days. The value of T can be calculated either by the McQueen method [12] or the Leversage and Byres method [13]. The value t1 is taken as 1 day based upon the stipulation of McQueen et al. [12]. The value P1 is obtained from Equation 2 and t2 from Equation 3. The values of u and t3 are obtained from Equations 6 and 7 respectively for the Leversage and Byres method, or from Equations 8 and 9 respectively for the McQueen method.

1) Process 1: Process 1 is the process where the attacker has identified one or more known vulnerabilities and has one or more exploits on hand. The probability of the attacker being in this process is given by

P1 = 1 − e^(−αmV/k)   (2)

where V is the number of vulnerabilities, α is the visibility factor, m is the number of exploits readily available to the attacker (depending on the attacker's skill), and k is the total number of non-duplicate known vulnerabilities. The values of α, m and k are given in Table I. When considering the skill of an attacker, a beginner is only capable of utilizing existing code, tools and attack methods; an intermediate is capable of modifying existing code, tools and attack methods; and an expert is capable of creating new code, tools and attack methods.

2) Process 2: This process is when the attacker has identified one or more known vulnerabilities but does not have an exploit on hand. The time spent in this process is given by

t2 = 5.8·N   (3)

where N is the expected number of tries, given by

N = (VA/V)·[1 + Σ_{n=2}^{V−VA+1} n·∏_{i=2}^{n} (VM − i + 2)/(V − i + 1)]   (4)

where VA is the average number of vulnerabilities the attacker can exploit at the given skill level and VM is the number of vulnerabilities that the attacker cannot exploit. These are related to V by

V = VA + VM   (5)

The value of VA is given in terms of the ratio VA/V (Table I). The derivation of Equation 4 is given in detail in McQueen et al. [12].

3) Process 3: Process 3 is when there are no known exploits or vulnerabilities available. This depends on the success of Process 2. According to the Leversage and Byres method, the probability of Process 2 being unsuccessful is given by

u = (1 − s)^(αV)   (6)

where s is a variable that indicates the skill level of the attacker; its possible values are given in Table I. The time spent on Process 3 is given by

t3 = 30.42·(1/s − 1) + 5.8   (7)

When the McQueen method is used, the values of u and t3 are obtained in terms of VA/V. The respective equations become

u = (1 − VA/V)^(αV)   (8)

t3 = 30.42·(V/VA − 1) + 5.8   (9)
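To make the workflow of Equations 1-9 concrete, the McQueen variant can be sketched in code. This is a minimal Python sketch (not from the paper, which performed its calculations in MATLAB) using the constants of Table I; how the fractional summation limit in Equation 4 is truncated is an assumption, since the paper only notes that fractional values of VA/V are rounded.

```python
import math

# Constants from Table I (McQueen method); k is the ICAT database total [12].
K = 9447
M = {"beginner": 150, "intermediate": 250, "expert": 450}
VA_RATIO = {"beginner": 0.3, "intermediate": 0.55, "expert": 1.0}

def expected_tries(v, va):
    """Expected number of tries N (Equation 4), with VM = V - VA (Equation 5)."""
    vm = v - va
    total = 1.0
    # Assumption: the fractional upper limit V - VA + 1 is truncated to an int.
    for n in range(2, int(v - va + 1) + 1):
        prod = 1.0
        for i in range(2, n + 1):
            prod *= (vm - i + 2) / (v - i + 1)
        total += n * prod
    return (va / v) * total

def mttc_mcqueen(v, skill, alpha=1.0):
    """Mean Time to Compromise in days (Equations 1-4, 8, 9), McQueen method."""
    va = VA_RATIO[skill] * v
    p1 = 1 - math.exp(-alpha * M[skill] * v / K)   # Equation 2
    t2 = 5.8 * expected_tries(v, va)               # Equation 3
    u = (1 - va / v) ** (alpha * v)                # Equation 8
    t3 = 30.42 * (v / va - 1) + 5.8                # Equation 9
    t1 = 1.0                                       # 1 day, per [12]
    return t1 * p1 + t2 * (1 - p1) * (1 - u) + t3 * u * (1 - p1)  # Equation 1
```

For an expert (VA = V), u vanishes and N = 1, so the estimate reduces to T = 5.8 − 4.8·P1, which decreases monotonically with the number of vulnerabilities.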


TABLE I
VARIABLES AND CONSTANTS FOR MTTC CALCULATION

Variable | Definition | Value | Source
k | Total number of non-duplicate known vulnerabilities | 9447 | ICAT Database [12]
α | Visibility reduction factor of vulnerabilities due to boundary devices such as firewalls (depends on the number of security reviews conducted during a year) | 1 (no reviews), 0.3 (semi-annual), 0.12 (quarterly), 0.05 (monthly) | Leversage and Byres [13]
s | Variable to quantify the skill level of an attacker | 0.5 (beginner), 0.9 (intermediate), 1 (expert) | Leversage and Byres [13]
m | Number of readily available exploits (McQueen method) | 150 (beginner), 250 (intermediate), 450 (expert) | McQueen et al. [12]
m | Number of readily available exploits (Leversage and Byres method) | 450s | Leversage and Byres [13]
VA/V | Ratio of the average number of vulnerabilities which the attacker can exploit at the given skill level to the total number of available vulnerabilities | 0.3 (beginner), 0.55 (intermediate), 1 (expert) | McQueen et al. [12]

4) Feasibility: When the MTTC for a given number of vulnerabilities is plotted for both methods, it becomes apparent that the results from the McQueen method (Fig. 1) are highly consistent. The oscillations in the graph are due to the rounding up of fractional values of VA/V. The results from the Leversage-Byres method (Fig. 2) appear inconsistent for an intermediate attacker (Fig. 2b), where the MTTC increases for fewer than 20 vulnerabilities when it should decrease. This deficiency does not appear for a beginner or expert, where both methods closely tally with each other.
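The intermediate-attacker inconsistency described above can be reproduced numerically. The following is a hedged Python sketch (the paper itself used MATLAB) with the Table I constants for an intermediate attacker under the Leversage-Byres method (s = 0.9, m = 450s, VA/V = 0.55, k = 9447); truncation of the fractional summation limit in Equation 4 is an assumption not stated in the paper.

```python
import math

K, S, VA_RATIO = 9447, 0.9, 0.55
M = 450 * S  # m = 450s for the Leversage-Byres method (Table I)

def mttc_lb_intermediate(v, alpha=1.0):
    """MTTC (days) via Equations 1-4, 6 and 7 for an intermediate attacker."""
    va, vm = VA_RATIO * v, (1 - VA_RATIO) * v
    p1 = 1 - math.exp(-alpha * M * v / K)        # Equation 2
    n_tries = 1.0
    for n in range(2, int(v - va + 1) + 1):      # Equation 4 (limit truncated)
        prod = 1.0
        for i in range(2, n + 1):
            prod *= (vm - i + 2) / (v - i + 1)
        n_tries += n * prod
    n_tries *= va / v
    t2 = 5.8 * n_tries                           # Equation 3
    u = (1 - S) ** (alpha * v)                   # Equation 6
    t3 = 30.42 * (1 / S - 1) + 5.8               # Equation 7
    return 1.0 * p1 + t2 * (1 - p1) * (1 - u) + t3 * u * (1 - p1)  # Equation 1
```

Under these assumptions the estimate rises between roughly 2 and 10 vulnerabilities before falling again at larger counts, which is the non-monotonic behavior visible in Fig. 2b.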

Fig. 1. Mean Time to Compromise Estimate vs. Number of Vulnerabilities (McQueen method): an estimate of the time a (a) beginner, (b) intermediate and (c) expert attacker will take to compromise a host, depending on the number of vulnerabilities.

B. VEA-bility Metric

The VEA-bility metric, proposed by Tupper and Zincir-Heywood [16], is based upon the CVSS system. It begins by evaluating the known vulnerabilities of each individual node of the network. For each vulnerability v, the impact score IS(v), temporal score TS(v) and exploitability score ES(v) are obtained. From these, the severity of the vulnerability is obtained from

S(v) = (IS(v) + TS(v))/2   (10)

This is then used to find the metrics for the host due to the total number of vulnerabilities. The metrics include the host vulnerability,

V(h) = min(10, ln Σ e^(S(v)))   (11)

and the host exploitability,

E(h) = min(10, ln((uh/uN)·Σ e^(ES(v))))   (12)

where uh and uN are the number of services on the host and network respectively. The attackability of the host is given by

A(h) = 10·pA/pN   (13)

where pA is the number of attack paths and pN is the number of network paths. Upon calculating the scores for each host,


the corresponding values for the entire network are obtained from

V(n) = min(10, ln Σ e^(V(h)))   (14)

E(n) = Σ E(h)   (15)

A(n) = Σ A(h)   (16)

From this, the final VEA-bility score of the network, R(n), is calculated from

R(n) = 10 − (V(n) + E(n) + A(n))/3   (17)

Fig. 2. Mean Time to Compromise Estimate vs. Number of Vulnerabilities (Leversage-Byres method): an estimate of the time a (a) beginner, (b) intermediate and (c) expert attacker will take to compromise a host, depending on the number of vulnerabilities.

V. RESULTS

A. Sample Data

In order to test the two metrics on real data, both are applied to the hosts of the IRIS network (Fig. 3) of the Computer Vision and Mobile Robotics Laboratory of the University of Western Ontario. The network consists of 9 hosts, all of which run different types of services. The details of each host are given in Table II. All of the hosts are scanned for vulnerabilities using the security tool Nessus. Table III contains the number of open ports and vulnerabilities of each host as detected by Nessus, and Table IV contains the CVSS scores of the important vulnerabilities of each host. The data from the scan is entered into a Microsoft Access database and the metrics are calculated using MATLAB.

Fig. 3. IRIS Network

TABLE II
IRIS HOST DETAILS

Host ID | Host Name | Operating System
Host 1 | Amila | Gentoo Linux (Kernel 2.6)
Host 2 | Nilwala | Windows XP SP2
Host 3 | Mahaweli | Gentoo Linux (Kernel 2.6)
Host 4 | Deduru | Windows XP SP2
Host 5 | Asela | Gentoo Linux (Kernel 2.6)
Host 6 | Kala | Gentoo Linux (Kernel 2.6)
Host 7 | Kalu | Windows XP SP2
Host 8 | Mee | Windows XP Professional
Host 9 | Kelani | Windows XP SP2

TABLE III
IRIS HOST VULNERABILITIES

Host | Open Ports | Low Risk | Medium Risk | High Risk
Host 1 | 2 | 13 | 2 | 0
Host 2 | 7 | 24 | 3 | 1
Host 3 | 7 | 27 | 2 | 2
Host 4 | 13 | 35 | 3 | 1
Host 5 | 2 | 12 | 1 | 0
Host 6 | 5 | 18 | 5 | 0
Host 7 | 3 | 21 | 2 | 0
Host 8 | 3 | 9 | 1 | 0
Host 9 | 4 | 22 | 3 | 1

TABLE IV
IRIS HOST VULNERABILITY CVSS SCORES

Host | CVE | Base | Impact | Exploit
Host 2 | CVE-1999-0505 | 7.2 | 10 | 3.9
Host 2 | CVE-2002-1117 | 5 | 2.9 | 10
Host 2 | CVE-2005-1794 | 6.4 | 4.9 | 10
Host 3 | CVE-2002-1117 | 5 | 2.9 | 10
Host 3 | CVE-2007-2446 | 10 | 10 | 10
Host 4 | CVE-1999-0505 | 7.2 | 10 | 3.9
Host 4 | CVE-2002-1117 | 5 | 2.9 | 10
Host 4 | CVE-2005-1794 | 6.4 | 4.9 | 10
Host 7 | CVE-1999-0505 | 7.2 | 10 | 3.9
Host 7 | CVE-2002-1117 | 5 | 2.9 | 10
Host 9 | CVE-1999-0505 | 7.2 | 10 | 3.9
Host 9 | CVE-2002-1117 | 5 | 2.9 | 10
Host 9 | CVE-2005-1794 | 6.4 | 4.9 | 10

B. Calculation

For the sample calculation of the MTTC, the total number of vulnerabilities for a host is obtained from

V = 0.1·VLR + 0.5·VMR + VHR   (18)

where VLR, VMR and VHR are respectively the number of low, medium and high risk vulnerabilities. After the MTTC is calculated for all hosts, the mean is taken as the MTTC score of the entire network; the McQueen method is used. In order to obtain the VEA-bility metric, the host attackability is taken as zero since the entire network is protected from external threats by a firewall.

C. Calculation Results

The scan results are then used to calculate the MTTC. Table V shows the MTTC of the network for the worst case (expert attacker) with no firewall updates (α = 1) and with monthly updates (α = 0.05). In either case, the value is calculated for the initial network condition (before) and after the high risk vulnerabilities are patched (after). The VEA-bility score of the network is also calculated (Table VI). The initial VEA-bility score of the network is 4.2029; after the vulnerabilities are mitigated, it increases to 10.

TABLE V
IRIS MTTC RESULTS (DAYS)

Host | α = 1 Before | α = 1 After | α = 0.05 Before | α = 0.05 After
Host 1 | 5.4 | 5.4 | 5.8 | 5.8
Host 2 | 4.8 | 5.0 | 5.7 | 5.8
Host 3 | 4.6 | 5.0 | 5.7 | 5.8
Host 4 | 4.6 | 4.8 | 5.7 | 5.7
Host 5 | 5.4 | 5.4 | 5.8 | 5.8
Host 6 | 5.0 | 5.0 | 5.8 | 5.8
Host 7 | 5.2 | 5.2 | 5.8 | 5.8
Host 8 | 5.6 | 5.6 | 5.8 | 5.8
Host 9 | 4.8 | 5.0 | 5.7 | 5.8
Mean (network) | 5.0 | 5.1 | 5.8 | 5.8

TABLE VI
IRIS VEA-BILITY RESULTS

Host | V(h) | E(h)
Host 1 | 0 | 0
Host 2 | 10 | 1.5217
Host 3 | 10 | 1.5217
Host 4 | 10 | 2.8261
Host 5 | 0 | 0
Host 6 | 0 | 0
Host 7 | 10 | 0.6522
Host 8 | 0 | 0
Host 9 | 10 | 0.8696
VEA-bility Score: 4.2029

D. Interpretation of Results

The VEA-bility score of the network increases from 4.2029 to 10 once the vulnerabilities are mitigated, while the MTTC does not change significantly. However, according to Fig. 1, the MTTC would be able to indicate this change, especially if the number of vulnerabilities of the network were greater than 10 and the firewalls of the network were not updated. Since the VEA-bility metric cannot exceed 10, it cannot quantify the insecurity of a network with a large number of vulnerabilities.

E. Alternative MTTC Formula

The resolution of the MTTC when the total number of vulnerabilities per host is calculated using Equation 18 appears to be impractically low. Hence an alternative form is tried, in which the total number of vulnerabilities of the entire network of NH hosts is calculated and used to compute the MTTC of the entire network directly. For this, Equation 18 becomes

V = Σ_{i=1}^{NH} (0.1·VLRi + 0.5·VMRi + VHRi)   (19)

When the alternative MTTC is calculated, the results (Table VII) show more variation, especially when α is unity. Therefore it is reasonable to take α as unity throughout the calculations; if its value is less than this, the MTTC calculation becomes meaningless as a security metric. Even this MTTC calculation is unrealistic considering that real-life attackers usually take an attack path. For example, in order to compromise a particular machine, the attacker may have to look for a stepping stone vulnerability within another machine of the network. Hence, using Equation 19 can also be considered unrealistic.

F. Attack Path MTTC

This leads to the attack path MTTC approach taken by Leversage and Byres [13]. In this case the MTTC is calculated for each path, and the total MTTC is obtained by summing the MTTC of each probable path times the likelihood of taking that path based on its relative difficulty. The likelihood and difficulty can be obtained from statistical data of real attacks.

TABLE VII
ALTERNATIVE MTTC FOR THE ENTIRE IRIS NETWORK

Network | MTTC Before (days) | MTTC After (days)
No Updates (α = 1) | 1.9 | 2.2
Monthly Updates (α = 0.05) | 5.4 | 5.5
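As a cross-check of the VEA-bility arithmetic, the network score can be recomputed from the host scores of Table VI. This is a minimal Python sketch (the paper performed the calculation in MATLAB); the zero attackability reflects the firewall assumption stated above.

```python
import math

# Host scores from Table VI; A(h) = 0 because the network sits behind a firewall.
v_h = [0, 10, 10, 10, 0, 0, 10, 0, 10]
e_h = [0, 1.5217, 1.5217, 2.8261, 0, 0, 0.6522, 0, 0.8696]
a_h = [0] * 9

v_n = min(10, math.log(sum(math.exp(v) for v in v_h)))  # Equation 14
e_n = sum(e_h)                                          # Equation 15
a_n = sum(a_h)                                          # Equation 16
r_n = 10 - (v_n + e_n + a_n) / 3                        # Equation 17

print(round(r_n, 4))  # 4.2029, matching Table VI
```

Note how the min(10, ·) cap in Equation 14 is already active here (the raw log-sum is about 11.4), which is the saturation behavior blamed above for the metric's inability to quantify highly insecure networks.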

VI. CONCLUSION

The results show that the use of the VEA-bility metric is highly practical for an average network, because the data for calculating it can be obtained from commonly available network security evaluation tools such as Nessus. The metric is also capable of clearly indicating serious vulnerabilities within a reasonably secure network. There are, however, two drawbacks. The first is the need to manually enter the data obtained from the security tool into the program that calculates the metric. The second is that if the network is highly insecure, the metric fails to quantify the degree of insecurity, since the individual network scores do not exceed 10. The MTTC is capable of quantifying the risk of a highly insecure network but, due to its exponential nature, fails when applied to reasonably secure networks, and steps taken to solve this may be unrealistic in practice. McQueen [12] states that this metric can be interpreted as a measure of relative risk.

VII. FUTURE WORK

Future work would be directed towards investigating the feasibility of the attack path MTTC as proposed by Leversage and Byres [13]. This would include the development of attack graphs as well as obtaining statistical data for determining likely attack paths.

ACKNOWLEDGEMENTS

The authors would like to thank Kinectrics, 800 Kipling Avenue, Toronto, Ontario, Canada M8Z 6C4 for funding this research.

REFERENCES

[1] M. Bishop, "What is computer security?" IEEE Security and Privacy, vol. 1 (1), pp. 67–69, January/February 2003.
[2] F. B. Schneider, "Enforceable security policies," ACM Transactions on Information and System Security, vol. 3 (1), pp. 30–50, February 2000.
[3] S. Hariri, Q. Guangzhi, T. Dharmagadda, M. Ramkishore, and C. Raghavendra, "Impact analysis of faults and attacks in large-scale networks," IEEE Security and Privacy, vol. 1 (5), pp. 49–54, September/October 2003.
[4] K. P. Yee, "Aligning security and usability," IEEE Security and Privacy, vol. 2 (5), pp. 48–55, September/October 2004.
[5] S. Evans and J. Wallner, "Risk-based security engineering through the eyes of the adversary," in Proceedings of the 2005 IEEE Workshop on Information Assurance and Security, 2005, pp. 158–165.
[6] E. Jonsson and T. Olovsson, "A quantitative model of the security intrusion process based on attacker behavior," IEEE Transactions on Software Engineering, vol. 23 (4), pp. 235–245, April 1997.
[7] "Honeynet attack data," http://www.honeynet.org, Honeynet Project, accessed: 2008.06.30.
[8] L. Spitzner, "The Honeynet Project: trapping the hackers," IEEE Security and Privacy, vol. 1 (2), pp. 15–23, March/April 2003.
[9] K. W. Lie and J. M. Wing, "Game strategies in network security," International Journal of Information Security, vol. 4 (1-2), pp. 71–86, February 2005.
[10] "Security metrics guide for information technology," http://csrc.nist.gov/publications/nistpubs/800-55/sp800-55.pdf, National Institute of Standards and Technology, accessed: 2008.06.21.
[11] J. Pamula, S. Jajodia, P. Ammann, and V. Swarup, "A weakest adversary security metric for network configuration security analysis," in Proceedings of the ACM Workshop on Quality of Protection, 2006, pp. 31–38.
[12] M. A. McQueen, W. F. Boyer, M. A. Flynn, and G. A. Beitel, "Time-to-compromise model for cyber risk reduction estimation," in First Workshop on Quality of Protection, Quality of Protection: Security Measurements and Metrics. Springer, 2005.
[13] D. J. Leversage and E. J. Byres, "Estimating a system's mean time to compromise," IEEE Security and Privacy, vol. 6 (1), pp. 52–60, January/February 2008.
[14] "SANS critical vulnerability analysis scale ratings," http://www.sans.org/newsletters/cva/, SANS Institute, accessed: 2008.06.16.
[15] "Common vulnerability scoring system," http://www.first.org/cvss/v1/guide.html, CVSS Team, accessed: 2008.06.16.
[16] M. Tupper and A. N. Zincir-Heywood, "VEA-bility security metric: A network security analysis tool," in The Third International Conference on Availability, Reliability and Security, 2008, pp. 950–957.
[17] P. Mell, K. Scarfone, and S. Romanosky, "Common vulnerability scoring system," IEEE Security and Privacy, vol. 4 (6), pp. 85–88, November/December 2006.
[18] L. Wang, S. Anoop, and S. Jajodia, "Toward measuring network security using attack graphs," in Proceedings of the ACM Workshop on Quality of Protection, 2007, pp. 49–55.
