
IEEE International Conference on Social Computing / IEEE International Conference on Privacy, Security, Risk and Trust

Network Security Risk Assessment Using Bayesian Belief Networks Suleyman KONDAKCI Izmir University of Economics, Faculty of Engineering & Computer Sciences, 35330 Balcova–Izmir, Turkey Email: [email protected]

Abstract—This paper presents a causal assessment model based on Bayesian Belief Networks to analyze and quantify information security risks caused by various threat sources. The proposed model can be applied to a variety of information security evaluation tasks: risk assessment, software development projects, IT products, and other decision-making systems. The concept can also be used to determine joint risk propagation and interdependence structures within computer networks, information systems, and other engineering tasks in general. In this manner, we can facilitate the determination of probabilistic outputs caused by precalculated input probabilities, or by marginal/joint probabilities found so far within the chain of an interdependence structure.

Keywords-information security; threat modeling; risk modeling; quantitative risk assessment;

I. INTRODUCTION

The purpose of this paper is to introduce a causal risk assessment method (CRAM), based on Bayesian Belief Networks (BBNs), for the identification and analysis of causal threats and the quantification of the risks associated with them. The CRAM model presented here should not be confused with the well-known risk analysis and management method CRAMM (www.cramm.com). Due to the rapidly growing complexity of Internet interconnections and the increased uncertainty in the diversity of threat sources and their impacts, threat analysis and risk assessment models providing quantitative outputs are attracting many researchers and security product vendors. CRAM uses BBNs to make probabilistic inferences for the estimation of causal risks. The reason for choosing BBNs as the base methodology is clear: BBNs offer consistent semantics for representing uncertainty and an intuitive graphical representation of the interactions between various causes and their effects. BBNs are useful when the information about the past and/or the current situation is vague, incomplete, conflicting, or uncertain [1], [2]. With historical information stored in conditional probability tables, CRAM can be used to facilitate the automation of a decision-making process. In short, CRAM can perform inductive reasoning (diagnosing a cause given an effect) and deductive reasoning (predicting an effect given a cause).

978-0-7695-4211-9/10 $26.00 © 2010 IEEE. DOI 10.1109/SocialCom.2010.141

Although numerous works cover risk and vulnerability analysis, we have so far failed to find practically sound approaches similar to the CRAM method. As also mentioned in [3], risk analyses often focus on physical access to systems; today, however, the scope of information security threats has widened virtually throughout the Internet, and it is now more difficult to achieve accurate results aligned with these virtual threats. The CRAM model is unique, simple, and easily applicable to quantitative risk assessment problems and to modeling risk propagation in computer networks. Based on our laboratory experiments with various types of attacks, the application of Bayesian inference, compared to classical statistical methods, requires less data and computational power to forecast eventual impacts. We define a security risk as the weighted combination of a system malfunction caused by a threat, an attempt to misuse/abuse information systems, and a threat exploiting some vulnerabilities that could cause harm to some assets. There are several risk analysis methodologies, both qualitative and quantitative, aimed at addressing the needs of a diverse range of environments, e.g., nuclear power plants, railway, energy transportation, medical, and ecological systems. Various research methods (analytical, statistical, and deterministic) have been used to analyze Internet attacks and provide decision analysis. An example of the decision analysis of statistically detecting distributed denial-of-service flooding attacks is presented in [4]. However, approaches applying BBNs to the analysis of information security risk propagation are hard to find. The core of CRAM is built on a single-asset (atomic) model, which is further expanded to determine causal risks for multiple assets using the interdependence structure of the assets and the threats to them. This approach is more effective because assets can face many types of threats of varying complexity and uncertainty, which are more difficult to handle in larger sets. Hence, more accurate risk assessment results, and in turn balanced safety measures for dynamically growing heterogeneous environments, can be effectively achieved by using the atomic components and the associated BBN to build a propagation tree of threats for the overall assessment process. As can be quickly seen, BBNs can effectively model such trees of threats that can cause a risk propagation through the interdependence structure of the assets and interconnected systems.

Mostly, existing standards, e.g., NIST02 (http://csrc.nist.gov/publications/), [5], and ISO27002 (www.27000.org), do not specify a particular methodology for risk assessment; they specify only that the organization should use a systematic approach to risk assessment, and they encourage the use of external tools and techniques available for test and evaluation facilities. The causal method proposed here can be incorporated into existing test and evaluation systems for the quantitative assessment of IT security risks. Known security tools such as intrusion detection systems and vulnerability scanners are often used to track events in dynamic environments in order to assess risks. However, risk assessment, seen from a broader perspective, is more than network security scanning; it is a proactive measure against the occurrence of future security incidents, which can lead to unexpectedly high losses. Vulnerability scanning is an operational risk assessment process, effective only in discovering current design deficiencies and operational malfunctions. To support this, the causal risk assessment method (CRAM) can be used to effectively track sources of potential losses in IT environments in order to implement more precise and balanced security mechanisms against dynamically evolving threat patterns.

A. Outline of the Paper

In the following, Section II presents a brief review of related work, and Section III introduces BBN inference and presents the modeling of risk propagation during a security planning process. Section IV introduces the CRAM model, starting with the per-asset (atomic) threat model, extends the atomic model to multiple assets, and applies the model to a real-life example. Section V concludes the paper.

II. RELATED WORK

Indeed, information security threat modeling combined with quantitative risk assessment techniques similar to the concept presented here is hard to find. As also mentioned in [6], existing methods are typically experimental, e.g., [7], and highly dependent on the assessor's experience, while the security metrics and assessment approaches are usually qualitative in nature. Some works (e.g., [8]) focus on vulnerability analysis based on specific software development systems. The qualitative risk analysis and management tool CRAMM was created in 1987 by the Central Computing and Telecommunications Agency (CCTA) of the United Kingdom. A CRAMM-based approach is presented in [9], which describes a method for risk analysis based on subjective logic to calculate the likelihood of an asset incident and which, in line with CRAM (presented here) and nearly all risk analysis methods, asserts that risk depends on asset values, threats, and vulnerabilities.

Primarily, for any risk management approach, it is fundamental to identify the assets and all forms of information (electronic, non-electronic) that require protection. An approach to lifecycle information security risk assessment is presented in [10], which can provide useful guidance prior to starting a risk assessment process. At present, there is little formal guidance on how to combine what can be learned from the data using Bayesian inference; however, there is growing interest in how to apply Bayesian Networks (BNs) using both data and evidence information. Related to this, a methodology focusing on parameterizing and evaluating a BN for a risk assessment case study of ecological assets is presented in [11]. A composite concept for the generation of attack data and an associated risk assessment approach using a homogeneity algorithm for fast evaluation of large networks is presented in [12]. Instead of testing each asset separately by applying repetitive attacks and assessments over and over, the composite concept generates and executes attacks once for a set of assets with similar characteristics, composes risk data, and uses the risk data for the assessment of the remaining group of assets as well as the entire network. Together with the growing activities of standards organizations, there are several other initiatives that seriously consider risk assessment methodologies and tools, some of which are discussed in [13]–[16]. An object-oriented toolset for risk assessment applications, called NetGraph, is presented in [13]. A full network security assessment of working networks is considered in [14], which proposes a method for assessing network elements without actually touching the network itself. We agree with [15] that both qualitative and quantitative methodologies to assess security are still missing. This is possibly due to the lack of knowledge about the major threat categories that must be parameterized for each asset. Another detailed approach for modeling and analyzing information system vulnerabilities is presented in [16]. The approach, called Vulnerability Take Grant, is a graph-based model consisting of subjects/objects as nodes and rights/relations as edges to represent the protection states of the nodes.
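As a minimal illustration of the inductive and deductive reasoning that CRAM performs (Section I), the two directions can be sketched with plain Bayes' rule. This is a hedged Python sketch, not part of the CRAM implementation; the numbers (a 10% attack prior, failure probabilities of 0.8 under attack and 0.1 otherwise) are the illustrative CPT values used in the example of Section III, not measurements:

```python
# One cause (attack) and one effect (system failure), per Bayes' rule.

p_attack = 0.10                 # prior: P(attack = true)
p_fail_given_attack = 0.80      # CPT row: P(fail | attack)
p_fail_given_no_attack = 0.10   # CPT row: P(fail | no attack)

# Deductive (causal) reasoning: predict the effect from the cause,
# via the law of total probability.
p_fail = (p_fail_given_attack * p_attack
          + p_fail_given_no_attack * (1 - p_attack))

# Inductive (diagnostic) reasoning: infer the cause from the
# observed effect, via Bayes' theorem.
p_attack_given_fail = p_fail_given_attack * p_attack / p_fail

print(f"P(fail)          = {p_fail:.2f}")              # 0.17
print(f"P(attack | fail) = {p_attack_given_fail:.2f}")  # 0.47
```

Observing the failure raises the belief in an attack from 0.10 to about 0.47; this belief-updating step is exactly what CRAM iterates over the interdependence structure.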

III. MODELING THE RISK PROPAGATION

First, by defining a BBN model, we present the basics of Bayesian Belief Networks for modeling and inferring the propagation of threat impacts in order to compute the risk for single and multiple assets. The model can then be rendered both analytically and numerically using a BBN toolset, e.g., Netica (www.norsys.com). The key probabilistic inference undertaken in a BBN application is always associated with a dependence graph and a conditional probability table (CPT) for each node in the dependence graph. The dependence graph is usually constructed as a directed acyclic graph (DAG) that defines the behavior of a system in terms of a series of


local conditional probabilities, where an associated BBN provides the correct global framework to propagate the local conditional information and the related uncertainties. Indeed, BBNs provide a link between probability theory and a graphical structure, where the graphical structure represents an associated probabilistic structure composed of a joint probability distribution. Let us consider the DAG shown in Fig. 1 with its dependencies explicitly modeled. The joint probability distribution P(a, b, c, d, e) of the DAG can be easily verified to be

P(a, b, c, d, e) = P(a | b) P(b | c, d) P(d | e) P(c) P(e).

Figure 1. Dependence graph representation of a BBN

Note that the notation P(x | y) expresses the conditional probability of x given the value of y, whereas P(x | y, z) expresses the conditional probability of x given the values of y and z. When two or more events happen at the same time and the events are independent, the special multiplication rule is used to find the joint probability. Hence, considering the general case of a joint probability structure of a BBN with nodes a1, a2, ..., an such that each node ai has a set of parents âi, the joint probability distribution of the set can be determined by

P(a1, ..., an) = ∏_{i=1}^{n} P(ai | âi).    (1)

That is, for each node in the DAG, the conditional probability is iteratively determined using the values of its parents. In general, each node may grow into several branches, making a tree of parent nodes. Then the joint distribution of the nodes a to z, each with a finite number of parent nodes (with counts denoted by m, n, ..., o), becomes

P(a, ..., z) = ∏_{i=1}^{m} P(ai | âu) ∏_{j=1}^{n} P(bj | b̂v) ⋯ ∏_{k=1}^{o} P(zk | ẑy).

Additionally, a CPT for each node in the tree should be defined and filled, whether arbitrarily or by use of some historical data. As a simple example, prior to building the CPT for node b shown in Fig. 1, the CPTs for the child nodes leading to b should already be computed. Here, for instance, the conditional distribution P(b | c, d) is needed in order to specify the CPT for P(a | b).

As a simple example, let us assess two systems (A and B) in order to evaluate their strengths (availability) against the same type of denial-of-service (DoS) attack. The assessment results will help us determine two statuses: (i) the current system strength under the specific DoS attack, and (ii) in a future assessment, some new causal effects, using the results of the current attack on, say, system A together with some observation data (evidence) when something happens to system B, or vice versa. Hence, for probabilistic inference, our BBN-based assessment process will make use of two main sets of variables: hypothesis and information (or evidence) variables. To clarify this, let us consider the BBN setup shown in Fig. 2, with two different systems, System A and System B, being affected by the same type of DoS attack. There can be other threats, but we focus on computing the separate probabilities of the two systems being affected, given knowledge about the occurrence of a specific attack, while ignoring the others. It can be readily seen from Fig. 2 that the hypothesis variables for the BBN are associated with our belief in System A or System B being affected (failed). These beliefs are constrained to the two discrete states true and false. The information variable on which the hypothesis variables, System A and System B, depend is the Attack variable. For this example, there is a 10% chance of an attack at any given time.

Figure 2. Probabilistic impacts of an attack on two different systems

Now, with the overall structure and variables explained above and the associated CPTs specified in Fig. 2, we can apply Bayesian theory to make the necessary inference for determining the states of System A and System B (true = down, false = up). Thereafter, we can observe the current output of the BBN and use it as new evidence to update the probabilities in order to determine the propagation patterns of new causal effects. For example, given the attack states {ℵ (Attack = true), ¬ℵ (Attack = false)}, we can calculate the marginal probability, P(A), that System A


fails as:

P(A) = [P(A | ℵ) P(ℵ)] + [P(A | ¬ℵ) P(¬ℵ)] = (0.8 × 0.1) + (0.1 × 0.9) = 0.17.

Similarly, using the CPT tables shown in Fig. 2, we can compute the marginal probability, P(B), that System B has failed to be 0.51. After making some observations, we can revise the marginal probabilities in line with the differences observed in the state variables. That is, during the observations we gather evidence data and update the CPTs. The tables in our example already contain the revised probabilities for System A having failed (0.8) and for System B having failed (0.6). Suppose, however, that we do not know whether an attack occurred, but we do observe that System A has failed. Then, by instantiating A = true, we can determine (i) the probability, ℵ, that an attack occurred, and (ii) the probability that System B will also fail. Hence, by applying Bayes' theorem given a partition (i.e., {ℵi}) of the event space,

P(ℵi | A) = P(A ∩ ℵi) / P(A) = P(A | ℵi) P(ℵi) / Σj P(ℵj) P(A | ℵj),    (2)

we can compute the conditional probability of ℵ given the evidence that "System A has failed":

P(ℵ | A) = P(A | ℵ) P(ℵ) / P(A) = (0.8 × 0.1) / 0.17 = 0.47.

Note that, according to Bayes' rule, the probability P(A) given in the denominator of Eq. (2) can be expressed as P(A) = Σj P(A ∩ ℵj) = Σj P(ℵj) P(A | ℵj).

Obviously, the observation of the evidence that "System A has failed" significantly increases the probability (from 0.1 to 0.47) that an attack occurred. Moreover, from this fact, we can use the revised CPTs to calculate the probability, P(B), that System B has also failed. Thus, according to the law of total probability [17], defined as

P(B) = Σi P(B ∩ Ai) = Σi P(Ai) P(B | Ai),    (3)

and expanding this to all combinations of A ∩ B as

P(B) = P(A ∩ B) + P(¬A ∩ B) = P(B | A) P(A) + P(B | ¬A) P(¬A),

we obtain

P(B) = P(B | ℵ) P(ℵ) + P(B | ¬ℵ) P(¬ℵ) = (0.6 × 0.47) + (0.5 × 0.53) = 0.55.

Hence, the observation of the evidence "System A failed" implies an increase in the probability (from 0.5 to 0.55) that System B will also fail. These observations and probability propagations also show the inference of beliefs for the various states of the BBN domain considered.

A. Systems With Binary Outcomes

Many types of attacks on information systems result in a binary-valued impact: either failure or success. We can model this type of attack and its impact as the probabilistic decision tree shown in Fig. 3. A system with two states under a specific attack (e.g., DoS) can be modeled as a binary system. Suppose that the attack fails (false state = 0) with probability 1 − p and succeeds (true state = 1) with probability p, and suppose that a random error ε denotes the probability of misjudging whether there was a success or a failure. Let Ii be the event for input i and Oi the event for output i, where (Ii, Oi) ∈ {0, 1} for i = 0, 1. State i = 1 denotes an attack for the input Ii; state i = 0 denotes no attack for the input Ii. Similarly, i = 1 states the success of an attack resulting in output Oi, and i = 0 denotes the failure of an attack resulting in output Oi. Hence, the result of an attack can be determined by the probability

P[Ii ∩ Oi],  for i = 0, 1.    (4)

Figure 3. Probabilities of a failure or success attack model

The probabilities denoting the success and failure states of an attack can be obtained by referring to the tree diagram shown in Fig. 3 as

P[I0 ∩ O0] = (1 − p)(1 − ε),  P[I0 ∩ O1] = (1 − p)ε,
P[I1 ∩ O0] = pε,  P[I1 ∩ O1] = p(1 − ε).

The impact (output) can be easily determined to be true (1) or false (0), depending on whether ε < 1/2 or ε > 1/2. In terms of the tree diagram shown in Fig. 3, the probability of the input attack going in either direction (1 − p or p) is considered to be equally 1/2, which is then further modified by ε. This can be justified as follows. Let Ii be the event denoting the attack (input) of i (i = 0, 1); then I0 and I1 are the only variables of the input sample space. Likewise, let Oi be the event denoting the impact (output) of i (i = 0, 1), where O0 and O1 are the only variables of the output sample space. Thus, by the sum-of-probabilities rule [18] applied to the right side of Fig. 3, the probability of O1 (a successful attack) becomes

P(O1) = P(O1 | I0) P(I0) + P(O1 | I1) P(I1) = ε (1/2) + (1 − ε)(1/2) = 1/2.    (5)

For the determination of the input–output causal effect, the posterior probabilities can be obtained by applying the Bayes' rule given in Eq. (2). Thus, given P(O1) and an empirical value of ε, the input posteriors P(I0 | O1) and P(I1 | O1) can be obtained as follows:

P(I0 | O1) = P(O1 | I0) P(I0) / P(O1) = (ε/2) / (1/2) = ε,
P(I1 | O1) = P(O1 | I1) P(I1) / P(O1) = ((1 − ε)/2) / (1/2) = 1 − ε.    (6)

Thus, if ε < 1/2, then an attack (input = 1) is more likely when a successful attack (output = 1) is observed at the system. It is often conceivable that an arbitrarily large number of attacks will be launched until the victim system collapses, i.e., a denial of service (DoS) takes place. Suppose a DoS attack is repeated n times until a success is achieved, and let Nk be the number of trials in which the kth attack results in the first success (DoS). For large n, the relative frequencies of successes are

f(k) ≈ Nk / n = (1/2)^k,  k = 1, 2, . . . .    (7)

Since we have either success or failure with probability 1/2 for each state, we can conclude that the probability αk of k attacks until the first success is

αk = (1/2)^k.

The probabilities for various numbers of attacks can be verified to add up to 1 by using the geometric series with α = 1/2:

Σ_{k=1}^{∞} αk = α / (1 − α) |_{α=1/2} = 1.

B. Systems With Joint Impacts

An asset (or system) can be threatened by various threat types, and as a consequence, the impact of the attacks can be described as a compound likelihood of real numbers. The primary step is to construct a directed acyclic graph (DAG) containing all relations and dependence structures for the asset under consideration. Then we can easily transform the DAG into a BBN model in order to make the necessary inference. A general model representing the single-asset threats, given as a simple DAG, is shown in Fig. 4, in which the asset is associated with a risk value (Ra) determined by the combination of an asset weight (wa), a human-related parameter (α), a joint causal–internal parameter (β), an internal (δ) and an external (γ) threat parameter. The causal–internal threat is a compound quantity (Ic = Ic1 ∪ Ic2 ∪ Ic3) produced by the joint effect of human-based (H), external (E), and operational (O) type threats. That is, each incident of a threat can pose a direct or an indirect (also causal) effect, or both at the same time, e.g., the causal-operational (Oc) and direct-operational (Od) effects of the human-related threat, as modeled in Fig. 4. The quantified value of Ra can be mapped either to a scalar risk value or to a probability distribution, as appropriate. The causal–internal threat Ic is composed of human and external threats, while the direct internal threat Id is specific to the asset itself. External attacks are intentional and dedicated to causing serious exploits such as DoS, intended buffer overflows, and malicious code injections using SQL-injection and cross-site scripting techniques.

Figure 4. The compound risk model of a single asset with joint–direct and indirect threats

1) Numerical Analysis: Suppose that, for the model shown in Fig. 4, we have run several experiments to gather the prior (input information) data, defined as follows:

P(H) = 0.64,  P(E) = 0.28,  P(I) = 0.08,
P(O ⊆ H) = 0.60,  P(Ic ⊆ H) = 0.40,
P(Ic ⊆ O) = 0.30,  P(Ic ⊆ E) = 0.40.    (8)

That is, 64% of the registered incidents are human-related (H), 60% of which are OD-threats (O ⊆ H) and 40% of which form the second proportion of the causal–internal threat (Ic2 = Ic ⊆ H). The internal threat (I) is 8% and the external threat (E) is 28%, 40% of which constitutes the third proportion of the internal threat (Ic3 = Ic ⊆ E). The first proportion of the internal threat is 30% of the OD-threats (Ic1 = Ic ⊆ O). Later, during a security assessment, we have observed the following conditional probabilities:

P(H′) = 0.76,  P(E′) = 0.30,  P(I′) = 0.10,
P(O′ ⊆ H′) = 0.70,  P(Ic′ ⊆ H′) = 0.30,
P(Ic′ ⊆ O′) = 0.32,  P(Ic′ ⊆ E′) = 0.48.    (9)

That is, 76% of the observed incidents were found to be human-related (H′), of which 70% makes up the OD-threats (O′ ⊆ H′) and 30% makes up the second proportion of the measured causal–internal threat (Ic′ ⊆ H′). The other internal threat (I′) is 10% and the external threat (E′) is 30%, of which 48% makes up the third proportion of the causal–internal threat,


(Ic′ ⊆ E ′ ). The first proportion of the causal–internal threat is measured as 32% of the OD-threats, (Ic′ ⊆ O′ ). Now, given the above data and relationships, what is the probability that a randomly selected threat group presents increased incident? First of all, using the definitions given in (8), we compute the prior joint probabilities α, β, and γ for the model given in Fig. 4: P (O)

=

Given the above values and an asset weight a total risk for the asset under consideration can be easily computed using Raw = wP (a);

P (O)P (Ic ⊆ O) = 0.384 × 0.30 = 0.115, P (H)P (Ic ⊆ H) = 0.64 × 0.40 = 0.256,

P (Ic3 ) =

P (E)P (Ic ⊆ E) = 0.28 × 0.40 = 0.112.

RT4.0 = wP (a) = 4.0 × 0.393 = 1.57. Alternatively, considering the risk levels instead of using the probabilistic values given above, we can use Eq. (13) # "N X w sj Ri ; (Rw , Ri , w) ∈ [0, 5], sj ∈ [0, 1]. Rw = 5N i=1 (13) to compute the per–asset risk. Since each asset can be threatened by several threats, each threat may lead to an individual risk for a given asset. Here, Ri is used to index the individual risks for the asset. A subjectively defined constant sj denotes the relative strength of the jth individual PN risk relative to others, where k sj = 1. This assumes that the individual risk values RE , RH , and RI = Rδ have already been determined. For example, let RE = 3.0, RH = 4.0, RI = Rδ = 0.8, and w = 4.0, and to compute the total risk we need to first determine values of RO , Rα , Rβ , and γ:

The joint probabilities are then computed as α β

= P (O) − P (Ic1 ) = 0.384 − 0.115 = 0.269, = P (Ic1 ) + P (Ic2 ) + P (Ic3 ) = 0.483,

γ

= P (E) − P (Ic3 ) = 0.28 − 0.112 = 0.168,

δ

= P (I) = 0.080.

C. Finding the Dominating Threat Group Applying the above procedure we obtain the revised conditional probabilities α′ , β ′ , γ ′ , and δ ′ as α′ = 0.362, β ′ = 0.542, γ ′ = 0.156, and δ ′ = 0.10. By the law of total probability, X P (b | aj )P (aj ), (10) P (b) = j

RO

=

we obtain the probability of increased incidents by computing the total probability for asset a as

RH × 0.70 = 2.80,

RIc1 RIc2

= =

RO × 0.32 = 0.90, RH × 0.30 = 1.20,

P (a) = (α×α′ )+(β×β ′ )+(γ×γ ′)+(δ×δ)′ = 0.393. (11)

RIc3 Rα

= =

RE × 0.48 = 1.44, RO − RIc1 = 1.90,

Rγ Rβ

= =

RE − RIc3 = 1.56, RIc1 + RIc2 + RIc3 = 3.54.

The probability of any individual threat group assumed to cause incident, can be calculated by applying the Bayes’ theorem Eq. (2). Thus, adapting Eq. (2) to each threat group we get the incident rate (probability) of each group as: P (α | a) = P (β | a) = P (γ | a) = P (δ | a) =

(12)

Using Eq. (11) the per-asset risk probability P (a) is obtained as 0.393. For example, setting the asset value w = 4.0, the total per-asset risk is computed as

P (H)P (O ⊆ H) = 0.64 × 0.60 = 0.384,

P (Ic1 ) = P (Ic2 ) =

P (a) ∈ [0, 1], (Ra , w) ∈ [0, 5].

and, hence, the per–asset risk is obtained using Eq. (13) with sj = 1 and w = 4.0 as

α × α′ 0.269 × 0.362 = = 0.248, P (a) 0.393 β × β′ 0.483 × 0.542 = = 0.666, P (a) 0.393 γ × γ′ 0.168 × 0.156 = = 0.067, P (a) 0.393 0.08 × 0.10 δ × δ′ = = 0.020. P (a) 0.393

4.0 × (Rα + Rβ + Rγ + Rδ ) 5×4 4.0 × (1.90 + 3.54 + 1.56 + 0.8) = 1.56. = 20

R4.0 =

This is the total risk for a single asset consisting of multiple attributes (or sub–assets) where each attribute is assigned an individual risk value of 0 ≤ Ri ≤ 5. The result (1.56) corresponds to a medium risk level, given that the valid risk ranges are defined as:

Thus, to verify this, sum of the joint probabilities should be 1, i.e., P (α | a) + P (β | a) + P (γ | a) + P (δ | a) = 1. D. Finding the Per–asset Risk

Low: {0.0 − 1.0}, Medium: {1.1 − 2.0}, Medium-to-high: {2.1 − 3.0},

Though, mostly they are subjective values, for convenience, we choose risk values and asset values in tact as both varying between 0 to 5 (5 being the maximum value).

High: {3.1 − 4.0}, and Severe: {4.1 − 5.0}.
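The arithmetic of the numerical analysis above, Eqs. (8)–(13), can be reproduced in a short script. The following is a hedged Python sketch using the paper's prior and observed proportions; note that computing with full precision makes some group posteriors differ from the rounded text values in the third decimal:

```python
# Prior proportions from Eq. (8) and observed proportions from Eq. (9).
H, E, I = 0.64, 0.28, 0.08                 # human, external, internal
O_in_H, Ic_in_H, Ic_in_O, Ic_in_E = 0.60, 0.40, 0.30, 0.40

Hp, Ep, Ip = 0.76, 0.30, 0.10              # observed (primed) values
Op_in_Hp, Icp_in_Hp, Icp_in_Op, Icp_in_Ep = 0.70, 0.30, 0.32, 0.48

def joint_params(h, e, i, o_in_h, ic_in_h, ic_in_o, ic_in_e):
    """Joint threat parameters (alpha, beta, gamma, delta) for Fig. 4."""
    o = h * o_in_h                         # P(O): operational via human
    ic1, ic2, ic3 = o * ic_in_o, h * ic_in_h, e * ic_in_e
    return o - ic1, ic1 + ic2 + ic3, e - ic3, i

prior = joint_params(H, E, I, O_in_H, Ic_in_H, Ic_in_O, Ic_in_E)
observed = joint_params(Hp, Ep, Ip, Op_in_Hp, Icp_in_Hp,
                        Icp_in_Op, Icp_in_Ep)

# Eq. (11): total probability of increased incidents for asset a.
p_a = sum(x * y for x, y in zip(prior, observed))        # ~0.393

# Eq. (12): posterior share of each threat group (sums to 1).
posteriors = [x * y / p_a for x, y in zip(prior, observed)]

# Probabilistic per-asset risk with asset weight w = 4.0.
w = 4.0
risk_prob = w * p_a                                      # ~1.57

# Eq. (13) with s_j = 1 and R_E = 3.0, R_H = 4.0, R_I = 0.8.
RE, RH, RI = 3.0, 4.0, 0.8
RO = RH * Op_in_Hp
Ric1, Ric2, Ric3 = RO * Icp_in_Op, RH * Icp_in_Hp, RE * Icp_in_Ep
Ralpha, Rbeta, Rgamma = RO - Ric1, Ric1 + Ric2 + Ric3, RE - Ric3
risk_level = w * (Ralpha + Rbeta + Rgamma + RI) / (5 * 4)  # 1.56
```

With full precision, p_a is about 0.3935 and the group posteriors come out to roughly 0.247, 0.666, 0.067, and 0.020, matching the rounded values in Eq. (12); the dominating threat group is β, the joint causal–internal threat, and the computed risk level 1.56 falls in the Medium range above.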


Note that these ranges are subjectively chosen; they could be redefined by using different range levels and score values for each range level. In the case of M assets, each with N sub-assets, the overall risk is iteratively computed by

R = (1 / M) Σ_{i=1}^{N} Ri^w,  R ∈ [0, 1],    (14)

where Ri^w is computed by Eq. (13).

1) Using a BBN Tool for Inference: The risk parameters α, β, γ, and δ from Fig. 4 can also be incorporated into the BBN model shown in Fig. 5; using the Netica toolset, the joint probability of threats towards a single asset can be effectively computed. Given the CPTs of the prior distributions shown in Fig. 5, we obtain a joint risk value of 1.57. That is, with a risk probability of 39.3% and an asset value of w = 4.0, the risk–impact figure becomes as high as 1.57. It should be noted that the same result was also obtained earlier by the numerical analysis; see Section III-B1. Applying the same rule, we obtain 0.393 as the risk level for an asset weighted 1, and 1.97 for an asset weighted 5. That is, in addition to the asset weight, the risk level is mainly affected by the joint probabilities of the potential threats.

Figure 5. BBN description of the joint risk for the single asset model shown in Fig. 4

IV. RISK PROPAGATION

[...] computer-based tools applying BBN algorithms to simplify calculations. In the following examples, we use the Netica tool both for modeling the BBNs and for making inferences.

Figure 6. The graph of risk propagation through multiple assets or network nodes

Although the dependence graph shown in Fig. 6 represents two simple networks, each consisting of multiple nodes/assets, to compute beliefs for such networks exactly, the network must be converted to an equivalent singly connected one. There are a few ways to perform this task [19]; the most common are variations of a technique known as clustering, in which nodes are combined until the resulting graph is singly connected. Questions regarding proper asset protection, obstacles to the protection system, errors arising during protection, and the causal relations between the components can be answered by modeling a joint structure that exposes the details. Thus, interdependence analysis of the components can give an idea about the overall security picture. The BBN model representation of the interdependence structure shown in Fig. 6 is given in Fig. 7. The inference is based on the method
