Generalization/Specialization as a Structuring Mechanism for Misuse Cases

Guttorm Sindre (1), Andreas L. Opdahl (2), Gøran F. Brevik (2)

(1) Dept. of Computer and Information Science, Norwegian University of Science and Technology, [email protected]
(2) Dept. of Information Science, University of Bergen, Norway, [email protected]

Abstract. Use cases are becoming increasingly common in the early phases of requirements engineering, but they offer limited support for expressing security requirements. However, misuse cases can specify behavior not wanted in the system. This paper presents and builds on previous work on misuse cases, here focusing on misuse case generalization as a feature for structuring a larger number of misuse cases in a specification.

1 Introduction

Use cases [1, 2, 3] have become popular for eliciting, communicating and documenting requirements [4, 5, 6]. Many groups of stakeholders turn out to be more comfortable with descriptions of operational activity paths than with declarative specifications of software requirements [5]. But there are also problems with use-case based approaches to requirements engineering, such as oversimplified assumptions about the problem domain [7, 9] and premature design decisions [2, 8]. Another problem is that the use case notation is intended for functional requirements, not for extra-functional requirements [14].

With some slight modifications, use cases could provide a step towards integrating extra-functional requirements, such as security requirements. After all, an interaction sequence not wanted in the system is still an interaction sequence. Hence, many security breaches can be described in a stepwise fashion much like ordinary use cases [16, 17, 18, 19, 20, 21]. The only major deviation is that a use case achieves something of value for the system owner and stakeholders, whereas a security breach results in harm. Here we take our previous work on misuse cases [18, 19] as a starting point and enhance the discussion with some new insights, specifically related to the use of generalization as a structuring mechanism for misuse cases.


The rest of this paper is structured as follows: Section 2 presents the concepts, diagram notation and textual templates for misuse case description. Section 3 discusses generalization of misuse cases and how it can be used. Section 4 reviews related work, and Section 5 concludes the paper.

2 Misuse Cases: Concepts and Notation

In line with the OMG definitions of use case and actor [22], we define the terms "misuse case" and "misactor" as follows:

Misuse case: a sequence of actions, including variants, that a system (or other entity) can perform, interacting with misactor roles of the system, and resulting in harm for some organization or stakeholder if the sequence is allowed to complete.

Misactor: an actor (in the UML sense [23]) that initiates misuse cases. This may be done either intentionally or inadvertently.

For a diagrammatic notation, [18] suggested the same symbols for misuse cases and misactors as for use cases and actors, but in inverted graphics, as indicated in Figure 1.

Figure 1: Misuse related artifacts depicted "inverted". (The diagram shows use cases such as Register Account, Submit Review, Order Goods, Input Validation, Encrypt Communication and Confine Debug Mode, performed by the actors Customer and Site Administrator, together with misuse cases such as Tamper With DB, Reveal Customer, Get Privileges, Steal Session Key, Obtain Password, Steal Card Info, Enter Debug Mode and Pilfer Debug Info, initiated by the misactor Cracker; the elements are connected through threaten, mitigate, include and extend relationships.)


In our initial misuse case publication [18] we used two associations for what is now "mitigate", namely "prevents" and "detects". Moreover, we used the standard UML names "extend" and "include" [23] for what is now "threaten". Following a suggestion from Ian Alexander [21], we realize that the new names are more generally applicable, and have therefore adopted them in the notation. As stated in [3], a use case diagram can only give an overview of the wanted system functionality; the essence of use case modeling lies in the textual descriptions. We have found two useful ways of expressing misuse cases textually:

1. A lightweight approach, describing misuse within the textual description of a use case.

2. A more heavyweight approach, giving the misuse case a full textual description of its own, with triggers, preconditions, basic and alternative paths, etc. A template for this was defined in [19].
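As an illustration only (not part of the notation definition itself), the diagram vocabulary could be captured in a small model along the following lines; the Python rendering and all identifiers are our own sketch, and the example nodes are taken from Figure 1.

    from dataclasses import dataclass
    from enum import Enum, auto

    class NodeKind(Enum):
        """Kinds of nodes in a (mis)use case diagram."""
        USE_CASE = auto()
        MISUSE_CASE = auto()   # drawn inverted: white text on black
        ACTOR = auto()
        MISACTOR = auto()      # drawn inverted

    class RelationKind(Enum):
        """Associations used in the extended notation."""
        INCLUDE = auto()       # standard UML include
        EXTEND = auto()        # standard UML extend
        THREATEN = auto()      # a misuse case threatens a use case
        MITIGATE = auto()      # a (security) use case mitigates a misuse case

    @dataclass
    class Node:
        name: str
        kind: NodeKind

    @dataclass
    class Relation:
        source: Node
        target: Node
        kind: RelationKind

    # A small example in the spirit of Figure 1 (the exact edges there are not reproduced).
    cracker = Node("Cracker", NodeKind.MISACTOR)
    tamper = Node("Tamper With DB", NodeKind.MISUSE_CASE)
    register = Node("Register Account", NodeKind.USE_CASE)
    validate = Node("Input Validation", NodeKind.USE_CASE)
    relations = [
        Relation(tamper, register, RelationKind.THREATEN),
        Relation(validate, tamper, RelationKind.MITIGATE),
    ]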

Table 1 shows an example of the lightweight approach, where a normal use case has simply been extended with a “Threats” slot. Table 2 shows a full misuse case. The lightweight alternative can give a significant gain in customer and analyst awareness of security threats, with very little extra cost beyond normal use case specification. The added slot immediately signals that misuse is possible and must be addressed. Indeed, for applications where security is important, it would make sense to include a Threats slot as standard in the normal use case template. If the misuse case is itself just a twisted variety of normal application use, e.g. an unwanted but honest mistake, the Threats slot would probably offer sufficient explanation. If on the other hand the misuse case is an elaborate sequence of steps, with alternative paths, exceptions, assumptions and preconditions, the heavyweight approach would be more satisfactory.


Name: Register Customer
Iteration: Filled
Summary: The customer registers for the e-shop, giving name, address, email and phone.
Basic course of events:
1. The customer selects to register.
2. The system provides the registration form.
3. The customer completes the form and submits.
4. The system acknowledges registration, returning a customer reference number that the customer can use in further communication with the e-shop.
Alternative paths: …
Exception paths:
E1. In step 3, the customer submits with some mandatory information still missing…
E2. In step 3, the submitted info matches an already registered customer. The system asks the user if the customer is indeed the same. Possible outcomes:
• The customer confirms that it is the same person. The system responds that registration is abandoned because the customer was already registered.
• The customer confirms that it is a different person (incidentally having the same name and address as another customer). Registration is carried through (step 4 completed normally).
Extension points: …
Triggers: …
Assumptions: …
Preconditions: …
Postconditions: The customer is now registered, and will be enabled to order goods from the e-shop without providing contact info anew.
Related business rules: …
Threats:
T1: The customer is not registering with his own name and address, but with a bogus or assumed identity. Possible outcomes: 1) a non-existing person is registered as customer; 2) an existing person is unwillingly and unknowingly registered as customer; 3) it is revealed to a third party that the person in question is a customer of the e-shop (cf. Exception path E2 above).
T2: …
Author: John Davis
Date: 2001.05.23

Table 1: Misuse included in use case template
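Purely as an illustration (ours, not part of the proposal itself), the lightweight approach amounts to adding one slot to whatever structure already holds the use case description; a minimal Python sketch, with only a few of the template fields shown and hypothetical field names, might look as follows.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UseCase:
        """Ordinary use case description (only a few of the template fields shown)."""
        name: str
        summary: str = ""
        basic_course_of_events: List[str] = field(default_factory=list)
        exception_paths: List[str] = field(default_factory=list)
        # The lightweight extension: a Threats slot in the normal use case template.
        threats: List[str] = field(default_factory=list)

    register_customer = UseCase(
        name="Register Customer",
        summary="The customer registers for the e-shop, giving name, address, email and phone.",
        threats=[
            "T1: The customer registers with a bogus or assumed identity.",
        ],
    )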


Name: Tamper With DB
Summary: A crook manipulates the web query submitted from a search form, to update or delete information or to reveal information that should not be publicly available.
Author: David Jones
Date: 2001.02.23
Basic path:
1. The crook provides some values to a product web form (e.g. in the use case Register Account) and submits.
2. The system displays the result matching the query.
3. The crook alters the submitted URL, introducing an error in the query, and resubmits the query.
4. The query fails and the system displays the database error message to the crook, revealing more about the database structure.
5. The crook further alters the query, for instance adding a nested query to reveal secret data or to update or delete data, and submits.
6. The system executes the altered query, changing the database or revealing content that should have been secret.
Alternative paths:
ap1. In step 3 or 5, the crook does not alter the URL in the address window, but introduces errors or nested queries directly into form input fields.
Mitigation points:
mp1. In step 4, the exact database error message is not revealed to the client. This will not entirely prevent the misuse, but the crook will have a much harder time guessing table and field names in step 5.
mp2. In step 6, the system does not execute the altered query because all queries submitted from forms are explicitly checked in accordance with what could be expected from that form. This prevents the misuse case.
Triggers: …
Preconditions:
pc1. The crook is able to search for products, either because this function is publicly available, or by having registered as a customer.
Assumptions: …
Mitigation guarantee: The crook is unable to access the database in an unauthorized manner through a publicly available web form (cf. mp2).
Related business rules: The services of the e-shop shall be available to customers over the internet.
Potential misuser profile: Skilled. Knowledge of databases and query language, at least able to understand published exploits on cracker web sites.
Stakeholders and threats:
st1. E-shop: Loss of data if deleted. Potential loss of revenue if customers are unable to Order Product, or if prices have been altered. Badwill resulting from this.
st2. Customers: Potentially losing money (at least temporarily) if the crook has malignantly increased product prices. Unable to order if data is lacking, wasting time.
Scope: …
Abstraction level: …
Precision level: …

Table 2: Full misuse case description
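Again purely as a sketch of ours (the field names follow the template headings shown in Table 2, but the Python rendering and example values are only illustrative), the heavyweight approach gives the misuse case a record of its own.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MisuseCase:
        """Full misuse case description (cf. Table 2); every field optional except the name."""
        name: str
        summary: str = ""
        author: str = ""
        date: str = ""
        basic_path: List[str] = field(default_factory=list)
        alternative_paths: List[str] = field(default_factory=list)
        mitigation_points: List[str] = field(default_factory=list)
        triggers: List[str] = field(default_factory=list)
        preconditions: List[str] = field(default_factory=list)
        assumptions: List[str] = field(default_factory=list)
        mitigation_guarantee: str = ""
        related_business_rules: List[str] = field(default_factory=list)
        potential_misuser_profile: str = ""
        stakeholders_and_threats: List[str] = field(default_factory=list)
        scope: str = ""
        abstraction_level: str = ""
        precision_level: str = ""

    tamper_with_db = MisuseCase(
        name="Tamper With DB",
        potential_misuser_profile="Skilled. Knowledge of databases and query language.",
        mitigation_guarantee="The crook cannot access the database through a public web form.",
    )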


3 Misuse Cases and Generalization

For normal use cases, the value of generalization has been questioned [25], and it is seldom seen in practical examples. For misuse cases, we contend that generalization/specialization can be quite valuable, since one misactor goal can be achieved in many radically different ways. One major reason is that the consideration of misuse requires a rather different approach than ordinary use cases. With wanted functionality, the developers may decide on one way of doing things and disregard all other possibilities, as long as this works fine for the stakeholders (who would only be confused if there were ten different ways to, e.g., "Register Customer"). With misuse cases, however, the choice is not up to the developers but to the potential misactors, who will try many possible lines of attack to reach the same goal [24, 26]. Often these attacks are too different to be considered alternative paths of the same misuse case; for instance, a social engineering attack like "Obtain Password by Phone" may have no steps in common with "Obtain Password by Race Condition". Yet they are, so to speak, fellow implementations of the same misactor goal. It then makes sense to show the two as specializations of the same general misuse case "Obtain password" in a diagram (Figure 2), and to make similar arrangements in the textual descriptions (Tables 3 and 4). This yields the following advantages:

• It clearly shows that there are different ways of obtaining passwords; to protect against this, one must protect against all variations, starting with the weakest link.

• When misuse cases are related to each other (for instance, it may be a precondition for several other misuse cases that the misactor has already achieved "Obtain password"), it may suffice to indicate this with one edge in the diagram or one cross-reference in the text, directed at the generalization.

• The text for the general misuse case can describe the commonalities of all specializations (for instance, the stakeholder risks may be the same however the crook obtains a password). These common parts need not be duplicated in the textual descriptions of the specializations, which can instead focus solely on what is different.


Figure 2: Misuse case generalization. (The general misuse case Obtain Password is specialized into Obtain Password by Race Condition, Obtain Password by Sniffing and Obtain Password by Phone.)

Name: Obtain password
Summary: A crook obtains password(s) for user accounts belonging to someone else, for the e-shop application typically e-shop clerks or system administrators.
Known specializations: Obtain password by race condition, Obtain password by sniffing, Obtain password by phone
Author: John Davis
Date: 2001.05.28
Assumptions: Passwords are used to authenticate e-shop clerks and system administrators.
Related business rules: Only authorized users shall be able to access restricted services.
Stakeholders and threats:
st1. E-shop: The main threat lies in the subsequent misuse cases that can be performed once the crook has the password and can take on the identity of an authorized user; see the Threats sections of all use cases performed by System Administrator and E-Shop Clerk. The crook may also sell or give away the password to others who have an interest in harming the e-shop. Publication of a successful password theft can in itself reduce customer trust in the e-shop, and thus cause lost revenue.
Scope: Entire business and business environment.
Abstraction level: Misuser subgoal.
Precision level: Façade

Table 3: A generalized (abstract) misuse case


Name: Obtain password by phone
Summary: A crook calls up a system administrator, claiming to be an employee of the company who has forgotten his password and therefore has trouble getting some urgent work done. A typical request may be to be told the password, to have it changed, or as a last resort to borrow another user account for a short while.
Author: David Jones
Date: 2001.02.23
Basic path:
1. The crook calls up a system administrator.
2. The system administrator answers the phone.
3. The crook presents himself as employee NN and tells a convincing tale (typically: password forgotten, urgent work pending, working remotely, cannot show up in person).
4. The system administrator looks up the system password file and tells the password.
5. The crook says thank you and hangs up.
Alternative paths:
ap1. In step 1, the crook does not call up the system administrator but another person who might know the impersonated employee's password, e.g., the person's secretary. Otherwise, the misuse case proceeds as in the basic path.
Mitigation points:
mp1. In step 4, the system administrator can tell by the voice that the employee is fake. This prevents the misuse case. On the other hand, if the crook is able to do a good enough voice impersonation, his tale will seem all the more convincing and put the administrator off guard, so that other mitigation options are skipped.
mp2. In step 4, the administrator wants more evidence that the person really is the employee (either because he is suspicious, or because security policy demands it). Possible responses:
• Hang up and call back to one or several numbers known to belong to the employee (office, home, mobile). This will prevent the misuse case unless the crook has stolen the employee's mobile phone or intruded into his home, and the real employee fails to answer any call.
• Demand that the employee call a colleague in the company who knows him really well (and who is preferably also known to the administrator) and have this person confirm that it is really the employee.
• The administrator looks up the password, but before telling it to the "employee" demands that he remember at least parts of it.
Combining these approaches may further mitigate the misuse case.
mp3. In step 4, the system administrator outright refuses to help the "employee" (either because of suspicion, or because it is against security regulations to supply passwords over the phone). This will prevent the misuse case, but also effectively prevent the employee from getting urgent work done, should the tale indeed be true.

Table 4: Part of the description for a specialized misuse case

The description in Table 4 could of course have contained more information (for instance triggers, assumptions, preconditions, terminology), but this has been left out for space reasons. For abstract misuse cases (Table 3) an extra field should be included: Known specializations. In this field one can list all specializations that have been envisioned, i.e., the various ways that this misactor goal can be achieved. Specializations that have been documented can further be underlined (in a hypertext document this would be a link to the description of the specialized misuse case), whereas those that exist only at the idea level are not (as with "Obtain password by sniffing" in this case), signaling that someone in the project team thinks or fears that passwords could be obtained by sniffing, but it has not yet been investigated how this would be done.

An abstract misuse case would not contain any paths; these have to be provided by the various specializations, as indicated in Table 4. Information that was already stated in the general misuse case should not have to be restated for each specialization, which can be likened to inheritance between OO classes. Not all template fields should be inherited, though: the author and date of a specialization could easily be different from those of the general misuse case.
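To make the analogy with OO inheritance concrete, the following is a minimal sketch of ours (class and field names are illustrative, and only a few template fields are shown): the common information of the general misuse case is inherited by a specialization, while the basic path, author and date are supplied by the specialization itself.

    class ObtainPassword:
        """General (abstract) misuse case (cf. Table 3): holds the information common
        to all specializations; an abstract misuse case has no path of its own."""
        summary = "A crook obtains password(s) for user accounts belonging to someone else."
        stakeholders_and_threats = [
            "e-shop: subsequent misuse once the crook can act as an authorized user",
        ]
        known_specializations = [
            "Obtain password by race condition",
            "Obtain password by sniffing",   # idea level only, not yet documented
            "Obtain password by phone",
        ]
        basic_path: list = []

    class ObtainPasswordByPhone(ObtainPassword):
        """Specialization (cf. Table 4): inherits the common fields above, but supplies
        its own basic path, author and date (fields that are deliberately not inherited)."""
        author = "David Jones"
        date = "2001.02.23"
        basic_path = [
            "1. The crook calls up a system administrator.",
            "2. The system administrator answers the phone.",
            "3. The crook presents himself as employee NN and tells a convincing tale.",
            "4. The administrator looks up the password file and tells the password.",
            "5. The crook says thank you and hangs up.",
        ]

    # The specialization sees the common stakeholder information without restating it:
    assert ObtainPasswordByPhone.stakeholders_and_threats is ObtainPassword.stakeholders_and_threats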

Figure 3: Generalizing legal and illegal together. (The general case Obtain Password is specialized into the misuse cases Obtain Password by Race Condition, Obtain Password by Sniffing and Obtain Password by Phone, and the legal use case Obtain Password by Requisition.)

One could even envision generalizations where some specializations are misuse cases and some are normal use cases. For the "Obtain password" example, this would occur if we also included the legal way to get a user account, for example that a new employee gets a requisition document which is then taken to the system administrator. In this case, where the generalization covers both legal use cases and misuse cases, the abstract case might be shown in grey, indicating that it may be either legal or illegal, as in Figure 3. The rationale for doing this would again be that obtaining a password could be the precondition for a lot of other mischievous actions in the system. The generalization in Figure 3 then clearly indicates that to perform such misuses, one might either have obtained a password through some misuse case, or alternatively be an insider with a legally obtained user account. Of course, the "Obtain password by requisition" use case could itself be susceptible to social engineering attacks. If so, this could be specified in a Threats slot of that use case; making it a misuse case in its own right would not be appropriate, since it is needed for the legitimate users.
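As a small illustration of ours (all names hypothetical) of why preconditions are best pointed at the generalization: any realization of the general goal, whether misuse or legal use, satisfies the same precondition of later misuse cases.

    # Realizations of the general goal "Obtain password", misuse or legal use (cf. Figure 3).
    OBTAIN_PASSWORD_REALIZATIONS = {
        "Obtain password by race condition": "misuse",
        "Obtain password by sniffing": "misuse",
        "Obtain password by phone": "misuse",
        "Obtain password by requisition": "legal use",
    }

    def has_obtained_password(achieved_cases: set) -> bool:
        """Precondition check for later misuse cases: one reference to the general
        goal covers every specialization, instead of one edge or cross-reference each."""
        return any(case in OBTAIN_PASSWORD_REALIZATIONS for case in achieved_cases)

    # Both an outside cracker and an insider with a legal account satisfy the precondition.
    assert has_obtained_password({"Obtain password by phone"})
    assert has_obtained_password({"Obtain password by requisition"})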


4 Related Work

We are not the only ones to have suggested negative use cases or scenarios in connection with security. Most closely related is the work by McDermott and Fox [16, 17]. Their abuse cases are much the same as our misuse cases, but there are some notable differences in the focus of the work.

First of all, McDermott and Fox make an extra conceptual distinction between use and abuse which is not present in our definition of misuse cases (which is closely aligned with the UML definition of "use case" [23]). Whereas they see a use case as "a complete transaction between one or more actors and a system", an abuse case is defined as "a family of complete transactions between one or more actors and the system that results in harm" [16]. In our definition the harm is the only difference; here an additional difference, "family of … transactions", is introduced. The motivation for this is clear enough, as we also suggested in the discussion of "Obtain password": misuse goals can often be achieved in a great number of ways, whereas normal use cases need only one or a couple of possible realizations. From our perspective, however, this can be handled by generalization hierarchies. At the specialized level, a misuse case need not be different from a use case; after all, both can have alternative paths.

A second difference is that McDermott and Fox do not provide a new or extended diagram notation, but instead apply the traditional use case notation as is. This means that use and abuse cannot sensibly be shown in the same diagram, nor do they suggest this; their examples of abuse case diagrams show abuse only, not abuse together with normal use. Similarly, they do not investigate relations between use and abuse (like our "threaten" or "mitigate"), nor do they discuss extending use case templates with a Threats slot. So, both diagrammatically and textually, use and abuse are less integrated than in our proposal.

McDermott and Fox suggest textual templates, one for actors (our misactors) and one for the abuse cases. The actor template has the fields Resources, Skills and Objectives. We have not proposed a template for separate misactor descriptions; instead, information about the misactor is included in the Potential Misuser Profile slot of the misuse case. That said, we clearly see the value of a separate description of misactors, especially when the same misactor may be linked to several misuse cases, and the template of McDermott and Fox would be a good starting point if we were to extend our approach with a more detailed textual description of misactors.

The abuse case template of McDermott and Fox contains the slots Harm, Privilege Range and Abusive Interaction. The Harm slot corresponds more or less to our Stakeholders and Threats. Privilege Range has some correspondence to our Preconditions, but on a more technical level. The Abusive Interaction slot parallels our Basic Path, Alternative Paths and Exception Paths.

The two works have somewhat different angles, and therefore different strengths and weaknesses.

McDermott and Fox focus on the analysis of security requirements as such, and on the subsequent design and testing of chosen security features; the integrated analysis of security requirements and functional requirements is given less attention. Our work, on the other hand, has so far not addressed the design and testing stages, but has put more attention on an integrated investigation of several kinds of requirements. Hence, an interesting way to continue might be to combine the strong sides of both works.

Our approach is also related to the work of Yu and Liu [39], who show actors and misactors with contradictory goals in an extension of i* diagrams [15]. Links such as "Break", "Hurt" or "Some-" can show how an attack prevents the legitimate users from reaching their goals, or how countermeasures can thwart an attack. Other links ("Help", "Some+") could potentially be used to indicate how some functions of the system could make misuse easier, although this use of the links was not evident in the examples of [39]. Moreover, role names can be specialized to signify that the same role can be involved both in normal, legitimate use of the system and in misuse, e.g., "Card Issuer", "Card Issuer as Attacker", "Card Issuer as Defender". The same could be achieved by generalization of actors/misactors in our diagrams, but we have not looked into this so far. The main differences between the two approaches are in focus, form and level of detail: the Yu/Liu approach focuses more on organizational goals and trust relationships, while our use case oriented approach focuses more on system functionality and concrete interaction paths, especially through the textual descriptions.

The misuse case approach is also somewhat related to, and could be combined with, other security methods or use case approaches. First of all, analysis methods inspired by traditional fault trees from the safety field, such as threat trees [27] or attack trees [28, 29], could be combined with misuse cases. Such trees typically have AND and OR nodes. OR nodes would correspond to generalized misuse cases (for instance "Obtain password", which can be achieved in a number of different ways). This would not in itself create a complete correspondence, but with proper use of the "include" relationship one would have an alternative for AND nodes. In this way it would be possible to adopt some of the advantages of Schneier's attack tree notation, e.g. resolving cost by pricing subtasks. In Figure 4 below, we have used an example from Schneier's book [28:322] to illustrate how an attack tree could also be represented within the misuse case notation.
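A minimal sketch of the cost resolution idea, under our own assumptions (the node names follow the safe-combination example of Figure 4, but the costs and the Python representation are invented for illustration): an OR node costs as little as its cheapest child, an AND node as much as the sum of its children.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AttackNode:
        """A node in an AND/OR attack tree. OR nodes correspond to generalized misuse
        cases (one goal, several ways to achieve it); AND nodes correspond to misuse
        cases that include several sub-misuse cases."""
        name: str
        cost: float = 0.0                  # cost of a leaf attack (made-up figures)
        op: str = "OR"                     # "OR" or "AND" for internal nodes
        children: List["AttackNode"] = field(default_factory=list)

    def resolve_cost(node: AttackNode) -> float:
        """Cheapest attack achieving the node's goal: OR = min over children, AND = sum."""
        if not node.children:
            return node.cost
        child_costs = [resolve_cost(child) for child in node.children]
        return sum(child_costs) if node.op == "AND" else min(child_costs)

    # The safe-combination example of Figure 4, with invented costs.
    eavesdrop = AttackNode("Eavesdrop", op="AND", children=[
        AttackNode("Listen to Conversation", cost=20_000),
        AttackNode("Get Target to State Combo", cost=75_000),
    ])
    get_combo = AttackNode("Get Combo From Target", op="OR", children=[
        AttackNode("Threaten", cost=60_000),
        AttackNode("Blackmail", cost=100_000),
        eavesdrop,
        AttackNode("Bribe", cost=120_000),
    ])

    print(resolve_cost(get_combo))  # 60000: threatening is the cheapest attack here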

Figure 4: Two examples of attack tree representation. (Both diagrams show the misactor goal Get Combo From Target with the alternatives Threaten, Blackmail, Eavesdrop and Bribe, where Eavesdrop requires both Listen to Conversation and Get Target to State Combo; the two variants differ in how the and-relationship between the latter two is represented.)

An important point of the misuse case approach is the ability to model threats explicitly and thus distinguish them from the countermeasures later chosen to deal with them. This distinction is not novel to our work. Security standards such as [10, 11] have sections classifying threats, separate from other sections that discuss security features; for instance, [11] initially identifies 29 threat categories, and then goes on to security objectives and security requirements. Such standards are often very technical in both language and format, i.e., they are hard to grasp for end-users and even for some developers, but following the suggestions of this paper they could nonetheless prove useful in combination with misuse case analysis.

A criticism that has been raised against misuse (or abuse) cases is that there is no methodological drive towards completeness, so some abuses can easily be forgotten. As Schneier puts it in Secrets & Lies: "You think about the threats until you can't think of any more, then you stop. And then you're annoyed and surprised when some attacker thinks of an attack you didn't" [38:318]. On the other hand, using pre-identified threat categories such as those of [11] as input could probably lead to a somewhat higher degree of confidence in the threats considered.
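One way such pre-identified categories might be used as a completeness aid, sketched here purely under our own assumptions (the category names and the mapping below are invented; the actual categories of [11] are not reproduced): tag each misuse case with the threat categories it covers and report any category that no misuse case addresses.

    from typing import Dict, List, Set

    def uncovered_categories(threat_categories: Set[str],
                             misuse_case_tags: Dict[str, List[str]]) -> Set[str]:
        """Return the pre-identified threat categories that no misuse case covers yet."""
        covered = {category for tags in misuse_case_tags.values() for category in tags}
        return threat_categories - covered

    # Invented example data; a real analysis would use the categories of a standard such as [11].
    categories = {"masquerade", "eavesdropping", "data tampering", "denial of service"}
    misuse_cases = {
        "Obtain password by phone": ["masquerade"],
        "Tamper With DB": ["data tampering"],
        "Steal Session Key": ["eavesdropping"],
    }
    print(uncovered_categories(categories, misuse_cases))  # {'denial of service'}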


5 Conclusion and Further Work

Use cases are popular tools for eliciting and specifying functional requirements, but are less suitable for extra-functional requirements, such as security requirements, which describe behaviors not wanted in the system. This paper has presented two basic concepts, misuse cases and misactors, along with a diagram notation, a template for misuse cases and method guidelines. It has also shown how generalization of misuse cases can make specifications clearer and reduce the number of cross-references, as well as make the approach somewhat more similar to fault and attack trees.

Our misuse case notation is used in ongoing research that investigates how to combine it with the concept of patterns. The goal of this work is to represent well-known solutions to well-known problems within the domain of application security. The work also tries to establish the concept of a library of abstract misuse patterns, for use in eliciting security requirements.

As indicated in the related work section, concepts similar to misuse cases have been presented by others. What is novel in our work is the integrated presentation of misuse cases together with normal use cases, which makes it possible to analyze threats and security requirements together with functional requirements. Also, the templates suggested here are more comprehensive than those found in other works.

An obvious candidate for further work is to evaluate misuse cases in an industry project. The proposed approach is well suited for industrial evaluation because it extends UML concepts and notation, which are already heavily used in industry. Other interesting directions to pursue are integration with the traditional techniques for risk analysis and costing that are popular in security and safety engineering. Safety is also an interesting direction in which to widen the application of misuse cases; as stated in [37], better integration between informal and formal techniques is needed to make progress in safety analysis.

Misuse case analysis alone does not constitute a requirements engineering method [12, 13, 14], nor is it supposed to. The idea is rather to integrate it with other RE techniques, typically use case driven and goal-oriented ones [31, 32, 33, 34, 35, 36, 39], as well as with techniques from the security field. Misuse cases can, however, be applied as an informal and integrating front-end to more heavyweight techniques, making it easier for various stakeholders to participate at least part of the way in analyzing the security needs of an information system being developed.


References

1. Jacobson I, Christerson M, Jonsson P, Overgaard G. Object-Oriented Software Engineering: A Use Case Driven Approach. Addison-Wesley, 1992.
2. Constantine LL, Lockwood LAD. Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design. ACM Press, 1999.
3. Cockburn A. Writing Effective Use Cases. Addison-Wesley, 2001.
4. Rumbaugh J. Getting Started: Using use cases to capture requirements. Journal of Object-Oriented Programming, September 1994, pp. 8-23.
5. Kulak D, Guiney E. Use Cases: Requirements in Context. ACM Press, 2000.
6. Weidenhaupt K, Pohl K, Jarke M, Haumer P. Scenario Usage in System Development: A Report on Current Practice. IEEE Software, 15(2): 34-45, March/April 1998.
7. Arlow J. Use Cases, UML Visual Modelling and the Trivialisation of Business Requirements. Requirements Engineering Journal, 3(2): 150-152, 1998.
8. Lilly S. Use Case Pitfalls: Top 10 Problems from Real Projects Using Use Cases. In: Firesmith D, Riehle R, Pour G, Meyer B (eds.), TOOLS 30: Proceedings of TOOLS USA 1999, pp. 174-183, 1-5 Aug 1999.
9. Antón AI, Carter RA, Dagnino A, Dempster JH, Siege DF. Deriving Goals from a Use Case Based Requirements Specification. Requirements Engineering Journal, 6: 63-73, May 2001.
10. Common Criteria Implementation Board. Common Criteria for Information Technology Security Evaluation, version 2.1. Technical Report CCIMB-99031, August 1999.
11. ECMA. ECMA Protection Profile: E-COFC Public Business Class. ECMA Technical Report TR/78, December 1999.
12. Pohl K. The three dimensions of requirements engineering: a framework and its applications. Information Systems, 19(3): 243-258, April 1994.
13. Loucopoulos P, Karakostas V. Systems Requirements Engineering. McGraw-Hill, 1995.
14. Kotonya G, Sommerville I. Requirements Engineering: Processes and Techniques. Wiley, 1997.
15. Mylopoulos J, Chung L, Yu E. From Object-Oriented to Goal-Oriented Requirements Analysis. Communications of the ACM, 42(1): 31-37, January 1999.
16. McDermott J, Fox C. Using Abuse Case Models for Security Requirements Analysis. In: Proc. 15th Annual Computer Security Applications Conference (ACSAC'99), IEEE Computer Society Press, 1999.
17. McDermott J. Abuse-Case-Based Assurance Arguments. In: Proc. 17th Annual Computer Security Applications Conference (ACSAC'01), IEEE Computer Society Press, 2001.


18. Sindre G, Opdahl AL. Eliciting Security Requirements by Misuse Cases. In: Henderson-Sellers B, Meyer B (eds.), Proc. TOOLS Pacific 2000, pp. 120-131, IEEE Computer Society Press, 2000.
19. Sindre G, Opdahl AL. Templates for Misuse Cases. In: Achour-Salinesi CB, Opdahl AL, Pohl K, Rossi M (eds.), Proc. REFSQ'2001, Essener Informatik Beiträge, 2001.
20. Potts C. Scenario Noir (position statement for panel debate on Intrusion Scenarios for Security Requirements Engineering). In: Proc. Symposium on Requirements Engineering for Information Security (SREIS'01), CERIAS, Purdue University, 2001.
21. Alexander IF. Misuse Cases Elicit Non-Functional Requirements. Accepted at RE'02 (forthcoming), 2002.
22. Object Management Group, Inc (OMG). Unified Modeling Language (UML), v1.4, http://www.omg.org/technology/documents/formal/uml.htm
23. Kruchten P. The Rational Unified Process: An Introduction. Addison-Wesley, 2000.
24. Andress M. Surviving Security: How to Integrate People, Process, and Technology. Sams Publishing, 2002.
25. Cox K, Phalp K. A case study implementing the UML use case notation version 1.3. In: Proc. REFSQ'2000, Stockholm, 2000.
26. Viega J, McGraw G. Building Secure Software: How to Avoid Security Problems the Right Way. Addison-Wesley, 2002.
27. Amoroso EJ. Fundamentals of Computer Security Technology. Prentice-Hall, 1994.
28. Schneier B. Secrets and Lies: Digital Security in a Networked World. Wiley, 2000.
29. Moberg F. Security analysis of an information system using an attack tree-based methodology. Master's thesis, Chalmers University of Technology, Gothenburg, Sweden, 2000.
30. Varner PE. Vote Early, Vote Often, Vote Here: A Security Analysis of Vote Here. Master's thesis, University of Virginia, 2001.
31. Potts C. Using Schematic Scenarios to Understand User Needs. In: Proc. DIS'95, ACM Symposium on Designing Interactive Systems: Processes, Practices, and Techniques, U. Michigan, 1995.
32. van Lamsweerde A, Letier E. Handling Obstacles in Goal-Oriented Requirements Engineering. IEEE Transactions on Software Engineering, Special Issue on Exception Handling, 26(10): 978-1005, October 2000.
33. Antón AI, Earp JB. Strategies for Developing Policies and Requirements for Secure Electronic Commerce System. In: Proc. 1st ACM Workshop on Security and Privacy in E-Commerce, Nov 2000.
34. Maiden NAM, Minocha S, Manning K, Ryan M. CREWS-SAVRE: Systematic Scenario Generation and Use. In: Proc. Third IEEE International Conference on Requirements Engineering (ICRE'98), pp. 148-155, IEEE Computer Society Press, 1998.
35. Rolland C, Souveyet C, Achour-Salinesi CB. Guiding Goal Models Using Scenarios. IEEE Transactions on Software Engineering, 24(12): 1055-1071, Dec 1998.
36. Achour-Salinesi CB, Rolland C, Maiden NAM, Souveyet C. Guiding Use Case Authoring: Results from an Empirical Study. In: Proc. IEEE 4th International Symposium on Requirements Engineering (RE'99), IEEE Computer Society Press, 1999.
37. Lutz RR. Software Engineering for Safety: A Roadmap. In: Finkelstein A (ed.), The Future of Software Engineering, ACM Press, 2000.
38. Schneier B. Secrets & Lies: Digital Security in a Networked World. John Wiley, New York, 2000.
39. Yu E, Liu L. Modelling Trust in the i* Strategic Actors Framework. In: Proc. 3rd Workshop on Deception, Fraud and Trust in Agent Societies, Barcelona, Spain, June 3-4, 2000.
