Applications requirements for controlled anonymity

APES Anonymity and Privacy in Electronic Services

—– Deliverable 7 —–

Joris Claessens, Claudia Díaz, Svetla Nikova, Vincent Naessens, Bart De Win, Caroline Goemans, Stefaan Seys, Mieke Loncke, Jos Dumortier, Bart De Decker, and Bart Preneel

Final Version – 2003/05/12


Executive summary

The APES project is concerned with Anonymity and Privacy in Electronic Services [3]. During the first two years of the APES project, we focused on unconditional anonymity. The anonymity requirements and properties of a wide range of applications were analyzed, and a model was developed to describe these anonymity properties [79]. Basic building blocks for unconditional anonymity and privacy were then distilled from the existing solutions that provide anonymity and privacy in the selected applications [34]. In the last phase we incorporated the appropriate privacy-enhancing technologies in two different applications – privacy-preserving targeted advertising through web banners, and anonymous peer-to-peer networking – and we presented a number of new tools and technologies for anonymity and privacy in this context [35].

The next two years of the APES project focus on anonymity control, following the same approach. This deliverable therefore starts with a taxonomy of anonymity control in electronic services, expanding on the terminology proposed in Deliverable 2. We then analyze the anonymity control requirements and properties of a wide range of applications, including anonymous connections, anonymous e-mail, anonymous publishing, anonymous browsing, electronic payments, electronic voting, electronic auctions, digital credentials, anonymous data, and anonymous peer-to-peer networking.

Each application and service has its own specific anonymity control requirements. In particular, some applications (e.g., electronic voting) require unconditional anonymity with no or limited control (e.g., ensuring that a vote can only be cast once), while other applications (e.g., electronic payments) require significant control mechanisms in order to thwart potential fraud; consequently, mechanisms for anonymity control in electronic payments have already been extensively researched. As in Deliverable 2, we try to map the entities in the different applications to the same abstract roles, so that we may later be able to generalize the mechanisms used in one application and apply them in another.

The last chapter discusses a number of important legal issues in the context of anonymity control in electronic services, which will be examined further.


Contents

Executive summary

Part I: Overview and model

1 Overview of the deliverable
  1.1 Goal
  1.2 Applications
  1.3 Anonymity control

2 Model and terminology for anonymity control
  2.1 Introduction
  2.2 Terminology for anonymity in electronic services
  2.3 Model for anonymity control in electronic services
  2.4 Extra terminology for anonymity control
  2.5 Entities and roles
  2.6 Legal approach on terminology for anonymity in electronic services
    2.6.1 Identifiability
    2.6.2 Anonymity
    2.6.3 Persistent anonymity or pseudonymity

Part II: Infrastructure

3 Anonymous connections
  3.1 Description of the system
  3.2 Different roles/entities in the system
  3.3 Requirements of the system
    3.3.1 Anonymity requirements/properties
    3.3.2 Anonymity control requirements/properties
  3.4 Overview of existing systems

4 Digital credentials
  4.1 Description of the system
    4.1.1 Introduction
    4.1.2 Different steps
  4.2 Different entities in the system
  4.3 Requirements/Properties of the system
    4.3.1 Requirements unrelated to anonymity
    4.3.2 Anonymity Requirements/Properties
  4.4 Anonymity Control Requirements/Properties
    4.4.1 Anonymity control mechanisms
    4.4.2 Specific requirements for revocable anonymous credential shows
  4.5 Trust Requirements
  4.6 Overview of existing solutions
    4.6.1 Traditional PKI system
    4.6.2 Chaum's work on pseudonyms
    4.6.3 Idemix
    4.6.4 Brand's Digital Credential System

5 Anonymous data
  5.1 Description of the system
  5.2 Different roles/entities in the system
  5.3 Anonymity requirements/properties
    5.3.1 Problems producing anonymous data
  5.4 Anonymity control requirements/properties
    5.4.1 User-controlled conditional anonymity
    5.4.2 Trustee-controlled conditional anonymity
  5.5 Short overview of existing systems
    5.5.1 Main techniques for making a database anonymous
    5.5.2 DatAnon
    5.5.3 Statistics Netherlands
    5.5.4 Custodix
    5.5.5 The privacy of the data requestor

6 Anonymous peer-to-peer networking
  6.1 Description of the system
  6.2 Different roles/entities in the system
  6.3 Anonymity requirements/properties
  6.4 Anonymity control requirements/properties
  6.5 Overview of existing systems

Part III: Applications

7 Anonymous e-mail
  7.1 Description of the system
  7.2 Different roles/entities in the system
    7.2.1 Entities involved in the communication
    7.2.2 Entities involved in the anonymous communication
  7.3 Anonymity requirements/properties
    7.3.1 Requirements unrelated to anonymity
    7.3.2 Requirements related to anonymity
    7.3.3 Requirements related to controlled anonymity
  7.4 Overview of existing solutions
    7.4.1 Type 0 Re-mailer: Penet
    7.4.2 Type 1 Re-mailer: CypherPunk
    7.4.3 Type 2 Re-mailer: MixMaster
    7.4.4 Type 3 Re-mailer: MixMinion

8 Anonymous publishing
  8.1 Description of the system
  8.2 Different roles/entities in the system
    8.2.1 Entities involved in publishing
    8.2.2 Entities involved in anonymous web publishing
  8.3 Anonymity requirements
    8.3.1 Requirements related to anonymity
    8.3.2 Requirements related to controlled anonymity
  8.4 Overview of existing solutions
    8.4.1 Eternity
    8.4.2 Publius
    8.4.3 TAZ Servers

9 Anonymous browsing
  9.1 Description of the system
  9.2 Different roles/entities in the system
    9.2.1 Entities involved in the web browsing
    9.2.2 Entities involved in the anonymous web browsing
  9.3 Anonymity requirements
    9.3.1 Requirements unrelated to anonymity
    9.3.2 Requirements related to anonymity
    9.3.3 Requirements related to controlled anonymity
  9.4 Overview of existing solutions
    9.4.1 LPWA
    9.4.2 Web Mixes
    9.4.3 CROWDS

10 Electronic payments
  10.1 Description of the system
  10.2 Different roles/entities in the system
  10.3 Anonymity requirements/properties
  10.4 Anonymity control requirements/properties
    10.4.1 Anonymity control mechanisms
    10.4.2 Specific requirements for revocable anonymous payment systems
  10.5 Overview of existing systems
    10.5.1 Different schemes for revocable anonymous payment
    10.5.2 Unlinkable revocable anonymous payment systems
    10.5.3 Linkable revocable anonymous payment systems
    10.5.4 Trustee linkable revocable anonymous payment systems
    10.5.5 On-line vs. off-line trustees
    10.5.6 Lowering the level of trust in trustees
    10.5.7 Revocation in anonymous communication based systems
    10.5.8 Self-escrow and auditable tracing
    10.5.9 Flow control and auditability
    10.5.10 Discussion

11 Electronic voting
  11.1 Description of the system
    11.1.1 Introduction
    11.1.2 Voting phases
  11.2 Different roles/entities in the system
    11.2.1 Attackers
  11.3 Anonymity requirements/properties
    11.3.1 Requirements unrelated to anonymity
    11.3.2 Anonymity related requirements
    11.3.3 Anonymity control requirements/properties
    11.3.4 Trust requirements
  11.4 Short overview of existing systems

12 Electronic auctions
  12.1 Description of the system
    12.1.1 Different auction properties
    12.1.2 Auction types
    12.1.3 Different steps in an auction process
  12.2 Different entities in the system
  12.3 Anonymity requirements/properties
  12.4 Anonymity control requirements/properties
  12.5 Short overview of existing solutions

Part IV: Legal issues

13 Legal issues on controlled anonymity
  13.1 Introduction
  13.2 Legitimated motives to use anonymity: brief overview
  13.3 Legitimated limits to use of anonymity
    13.3.1 From a private law perspective
    13.3.2 From a public law perspective
  13.4 Illegal behavior justifying a controlled use of anonymity
    13.4.1 Examples of illegal behavior
    13.4.2 A-priori control and/or a-posteriori control of anonymity?
  13.5 Limitations on privacy (and anonymity) for law enforcement purposes and duty to collaborate
  13.6 Liability issues
    13.6.1 Liability in general contract law
    13.6.2 Criminal liability
    13.6.3 Liability in specific legislation

Part V: Conclusions

14 Conclusions and future work

References


Part I

Overview and model


Chapter 1

Overview of the deliverable

1.1 Goal

The goal of this deliverable is to describe a taxonomy of anonymity control in electronic applications and services, and to analyze the requirements – from a technical and a legal point of view – for anonymity control in each of these applications and services. The state of the art of the solutions that already provide some form of anonymity control will also be presented.

We focus on the same applications that were analyzed in Deliverable 2 [79]: anonymous connections, anonymous e-mail, anonymous publishing, anonymous browsing, electronic payments, electronic voting, and electronic auctions. We also analyze three additional applications that were not considered in Deliverable 2: digital credentials, anonymous data, and anonymous peer-to-peer networking.

We start with a definition of the model and terminology for anonymity control. The model and terminology are technologically driven, but their legal relevance is verified. We then look into the different applications and services, and analyze their anonymity control requirements.

1.2 Applications

For this deliverable, we have structured the applications and services as follows:

• Infrastructure-related mechanisms:
  – anonymous connections,
  – digital credentials,
  – anonymous data,
  – anonymous peer-to-peer networking.


These are generic applications and services for which anonymity should be provided, and which are useful and/or required for the specific applications and services.

• Applications and services:
  – anonymous e-mail,
  – anonymous publishing,
  – anonymous browsing,
  – electronic payments,
  – electronic voting,
  – electronic auctions.

These are specific electronic applications and services for which anonymity is, or can be, an important requirement in order to protect the privacy of the users.

We discuss each of these applications; where appropriate we refer to the corresponding section in Deliverable 2. We focus on the anonymity control issue, and describe the requirements and properties as well as the state of the art of the current solutions.

1.3 Anonymity control

The anonymity control requirements described in this deliverable are intended to strike the right balance between anonymity and control in applications and services. That is, anonymity should always be guaranteed to legitimate users, but specific control mechanisms are introduced to prevent or discourage abuse of the anonymous application. The anonymity control requirements of each of the electronic applications and services have an impact on the technical mechanisms with which the anonymous applications and services are implemented, and are therefore usually described in a technical sense.

For each of the applications and services, it is important to keep in mind that the anonymity and anonymity control requirements are imposed by a legal framework. Anonymity requirements are introduced to protect the user's privacy in the electronic world. Controls are needed to counter fraud and abuse of anonymity, and to enable law enforcement. The last chapter of the deliverable specifically addresses the legal issues on controlled anonymity.

Note that this deliverable discusses potential anonymity control requirements. It may be very difficult, even impossible, to implement certain proposed controls while still sufficiently meeting the desired anonymity requirements of legitimate users. We may also have overlooked other control mechanisms that prevent certain forms of abuse more effectively and are more privacy-friendly at the same time.


Chapter 2

Model and terminology for anonymity control

2.1 Introduction

We summarize the definitions for anonymity in electronic services given in Deliverable 2 [79] and by Pfitzmann and Köhntopp [66]. Subsequently, we propose a taxonomy for anonymity control in electronic services, and add extra terminology in the context of anonymity control. We recall the notion of entities and roles, and enhance this notion with new roles that may be present in the context of anonymity control. Finally, we verify the legal relevance of our definitions and model.

2.2 Terminology for anonymity in electronic services

We adopt the following definitions with respect to anonymity in electronic services:

• identifiability: "identifiability is the possibility to know the real identity of some party in the system by means of actual data exchanged in the system" [79].

• anonymity: "anonymity is the state of being not identifiable within a set of subjects, the anonymity set" [66].

• unlinkability: "unlinkability of two or more items of interest (e.g., subjects, events, actions, etc.) means that within this system, these items are no more and no less related than they are related with respect to the a-priori knowledge" [66].


• unobservability: "unobservability is the state of items of interest being indistinguishable from any other items of interest at all". Also: "unobservability requires that users and/or subjects cannot determine whether an operation is being performed" [66].

• traceability: in the context of unconditional anonymity, "traceability is the possibility to trace communication between application components and as such acquire private information" [79]; traceability is the ability to obtain information about the communicating parties by observing the communication context (e.g., through the IP address). Note that in the context of revocation the term 'tracing' has a slightly different meaning (see Sect. 2.4).

With respect to identifiability, identity information is acquired through the actual data communicated in the system; traceability focuses on the context of the communication to get this information. Besides the identity, this also concerns all private information. Regarding the relationship between traceability and linkability, note that by tracing actions or objects we may be able to find a relationship between two items and, therefore, link them. In this sense, tracing focuses on the action of finding this relationship, while linking is the result of this action.

The anonymity properties of an electronic application or service can change over time:

• durability: denotes a quantification of the persistence of the anonymity properties over time (sometimes also referred to as forward secrecy).

• persistent anonymity or pseudonymity: uses a secure anonymous identity (often called a pseudonym) to hide the real identity. Since one of the properties is linkability of the actions, these actions need to be performed under the same pseudonym. The pseudonyms can be generated and/or managed by a trustee or by the client himself. The definition given in [66] is: "pseudonymity is the use of pseudonyms as IDs;" and "a pseudonym is an identifier of a subject (or a set of subjects)." The properties defined for a pseudonym are: (1) a pseudonym is unique as an ID, and (2) it is suitable to be used to authenticate the holder. The following kinds of pseudonyms are distinguished [66]: (1) person pseudonym; (2) role pseudonym; (3) relationship pseudonym; (4) role-relationship pseudonym; (5) transaction pseudonym (one-time pseudonym).


• The last kind of pseudonym corresponds with the definition of one-time anonymity given in [79]: a new pseudonym is used for every transaction, so different transactions are unlinkable.
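To make these pseudonym kinds concrete, the following minimal Python sketch – our own illustration, not taken from [66] or from any APES building block – derives pseudonyms of different scopes from a user-held master secret with HMAC. The scope baked into the derivation determines exactly which actions are linkable; all names and parameters are hypothetical.

import hashlib
import hmac
import os

master = os.urandom(32)  # known only to the user (or a managing trustee)

def pseudonym(*scope: str) -> str:
    """Derive a stable pseudonym for the given scope."""
    data = "|".join(scope).encode()
    return hmac.new(master, data, hashlib.sha256).hexdigest()[:16]

person       = pseudonym("person")                            # one per user
role         = pseudonym("role", "buyer")                     # one per role
relationship = pseudonym("rel", "shop.example")               # one per partner
role_rel     = pseudonym("role-rel", "buyer", "shop.example") # role + partner
transaction  = pseudonym("tx", os.urandom(8).hex())           # fresh each time

# Actions under the same pseudonym are linkable ...
assert pseudonym("rel", "shop.example") == relationship
# ... while one-time transaction pseudonyms keep transactions unlinkable.
assert pseudonym("tx", os.urandom(8).hex()) != transaction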

2.3 Model for anonymity control in electronic services

Anonymity is a key mechanism for preserving the privacy of users in electronic applications and services. However, as will also be motivated from a legal point of view in the last chapter of this deliverable, unconditional anonymity can be abused in some applications, may allow fraud, and may disable law enforcement controls. Controls must thus be put in place to counter fraud and abuse, and to enable law enforcement.(1) However, the right balance must be found between the privacy of the users on the one hand, and the controls on the other hand. It is of crucial importance here that a solution with anonymity control should always provide anonymity to legitimate users. Solutions for anonymity control must conform to legal regulations, with respect to both privacy and control.

We propose the following taxonomy for anonymity control mechanisms in electronic services, starting with a distinction between uncontrolled and controlled anonymous applications/services.

• An uncontrolled anonymous application/service is an application or service that provides unconditional anonymity to all users and does not implement any special control mechanism to prevent abuse or to limit the use of the system in some way. We refer to Deliverable 2 [79] for an overview of such applications and services. The privacy of the user is unconditionally preserved here, but abuse and fraud may be possible, while law enforcement will be difficult or even impossible.

• A controlled anonymous application/service is an application or service in which the user is offered anonymity, but the use of the anonymous application or service is limited in some way. That is, certain controls are built in such that the anonymity cannot be abused (e.g., access control). Note again that a solution with anonymity control must always provide anonymity to legitimate users.

(1) Note that, in principle, anonymity control can also be considered in a more positive way in some applications. For example, with anonymous auctions, you need to identify yourself later on, and be able to prove that you are the one who placed the winning anonymous bid before.


For the second category, controls and limitations can be provided in three different ways. The first form of anonymity control preserves the property of unconditional anonymity. The second and third forms provide anonymity control based on conditional anonymity: in specific circumstances the anonymity will be lost.

• Controls with unconditional anonymity. Controls and limitations can be built into the application or service, while still providing unconditional anonymity. Examples can be found in the area of electronic payments (see chapter 10): flow control (amount-limitedness and non-transferability) and auditability (misuse can be detected and stopped by all parties). Another example is the fact that voters should not be able to cast a vote twice in an electronic voting system (this is required by definition, but we can identify it as an anonymity control feature in this context). As a more general alternative, in the case of persistent anonymity, reputation can be a – from a technical point of view, weaker – form of social control. This form of anonymity control is very interesting, as it can prevent abuse and fraud while unconditionally preserving the privacy of the users.

• User-controlled conditional anonymity. A previously anonymous entity can be identified (or an anonymous attribute can be linked to a pseudonym) as a result of certain actions controlled by the user. The identity (or pseudonym) can be retrieved when the entity performs certain actions, such as double-spending electronic cash, or releasing his own identity when appropriate (e.g., self-escrow of electronic cash, or identifying yourself for positive purposes). Note that the anonymity of a specific entity should only be lost as a consequence of specific actions that can only be performed by the entity itself.

• Trustee-controlled conditional anonymity. A previously anonymous entity can be identified (or an anonymous attribute can be linked to a pseudonym) only with the help of a trustee. The anonymity of an entity can thus be revoked independently of the entity's actions. We prefer the notion of 'revocability'; alternative terms that are commonly used include 'identity escrow' and 'fair anonymity' (the latter is, in our opinion, rather vague and does not indicate the nature of the mechanism). Note that the trustee is sometimes also called the judge.

The anonymity of an entity should only be revoked in specific circumstances, and when it is legally authorized. Revocation should technically only be possible with the help of the trustee. The trustee should not be able to revoke the anonymity of other entities in the system, and should not have any power other than revocation.

The trustee is not necessarily a single entity: its power can be distributed among several entities. Typically, a certain minimal threshold of trustees must then cooperate before revocation can be performed, which lowers the level of trust that must be put in an individual trustee (a minimal sketch of such threshold-based revocation is given at the end of this section). Alternatively, all the users and/or service providers of the system together could in principle play the role of the trustee; again, a certain minimal threshold of them must then cooperate before revocation can be performed. This in fact puts the trust in the common sense and democratic values of the community.

Specific circumstances or conditions that may lead to revocation of anonymity include all activities that are judged illegal or suspicious. Alternatively, there may also be purely objective conditions, such as the time that has passed since the anonymous transaction was performed (i.e., one is only anonymous for a certain amount of time).

In the following chapters of this deliverable we describe the anonymity control requirements of each specific application according to the model proposed in this section.
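As announced above, the following minimal Python sketch – our own illustration, not a mechanism from the cited literature – shows how a revocation capability could be distributed among n trustees with Shamir secret sharing, so that any k of them must cooperate to revoke. Real revocable-anonymity schemes embed this idea in the cryptographic protocol itself rather than sharing a raw key.

import random

PRIME = 2**127 - 1  # field modulus; large enough for a demo secret

def split_secret(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# A revocation key split among 5 trustees; any 3 can cooperate to revoke.
key = random.randrange(PRIME)
shares = split_secret(key, k=3, n=5)
assert reconstruct(shares[:3]) == key    # three trustees suffice
assert reconstruct(shares[1:4]) == key   # any three trustees
assert reconstruct(shares[:2]) != key    # two trustees (almost surely) learn nothing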

2.4 Extra terminology for anonymity control

In the context of anonymity control in electronic services, we can expand our definitions with the following terminology:

• If two or more items are trustee linkable in the system, then they can only be linked by the trustee; they are unlinkable otherwise.

• In the context of revocation, we can define the notion of tracing in two directions (note that this is different from the standard 'traceability' property; both directions are illustrated in the toy sketch at the end of this section):
  – tracing the identity (or pseudonym) based on the results of the action (e.g., owner tracing for e-payments);
  – tracing the results of the action based on the identity (or pseudonym) (e.g., coin tracing for e-payments).


• When tracing is performed by a trustee or by the service provider, it can be made auditable (this is also called 'fair tracing', which is again vague in our opinion); that is, tracing can be verified and proven to a third party by the individual who is traced.

• Accountability is the general responsibility of an entity for his acts. It does not necessarily imply financial responsibility (an employee is accountable for his acts but may not be declared liable for damaging acts).

• Liability goes one step further than accountability: it takes into account the consequences of accountability – the financial and legal responsibility for an act (contractual liability / delictual liability) – or, in other words, the state in which a natural or legal person can be condemned to financial redress for damages.
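The toy sketch below – entirely ours, not a scheme from the literature – makes the two tracing directions concrete: the trustee holds one escrow record per transaction, and each direction is simply a lookup over those records in the opposite direction. In a real system the records would of course be cryptographically protected.

# Escrow records held by the trustee: (identity, result of the action).
escrow = [
    ("alice", "coin-7f3a"),
    ("alice", "coin-91c2"),
    ("bob",   "coin-0d44"),
]

def owner_tracing(serial: str) -> str:
    """Trace the identity from the result of the action (a spent coin)."""
    return next(identity for identity, s in escrow if s == serial)

def coin_tracing(identity: str) -> list:
    """Trace the results of the action from the identity."""
    return [s for i, s in escrow if i == identity]

assert owner_tracing("coin-0d44") == "bob"
assert coin_tracing("alice") == ["coin-7f3a", "coin-91c2"]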

2.5 Entities and roles

In each application or electronic service there will be different application-specific entities or parties involved. Each of these entities is a primary player when analyzing the anonymity characteristics of the system. Entities have different responsibilities and tasks, and will not all have the same knowledge about sensitive information.

We use the notion of roles to distinguish between generic responsibilities. The roles related to the application are: initiator, recipient or responder, sender, and receiver. Anonymity-related roles are: anonymized initiator, anonymized recipient or responder, anonymity provider, informed and uninformed provider, and trustee. Other roles include the software/hardware vendor/provider and, of course, the attacker. Note that an entity can have different roles in an application. The anonymity properties given above clearly depend on the specific roles that are involved in the application; therefore, any statement about anonymity is always relative to a specific role. We refer to Deliverable 2 [79] for detailed information on the different roles.

With respect to anonymity control, the trustee and the informed provider are very important roles, as both know information that can lead to the identity of other participating entities. Strict boundaries on the power of the trustee should therefore be technically enforced.

2.6 Legal approach on terminology for anonymity in electronic services

2.6.1 Identifiability

In section 2.2 of the present chapter, 'identifiability' is defined as the possibility to know the real identity of some party in the system by means of actual data exchanged in the system. A clear-cut definition of identifiability is quite relevant from a legal point of view: as long as information can possibly identify a person, this information will – from a legal point of view – be considered 'personal data' and consequently fall under the scope and requirements of data protection legislation. Consequently, the 'controller' of those data (i.e., the entity who determines the purposes and means of the processing of personal data) will have a number of principles and obligations to respect, such as the fair processing rule, the finality principle, information obligations, security obligations, and the access rights of the data subject (i.e., the identified or identifiable natural person whose personal information is processed).

The general working definition of 'identifiability' proposed in section 2.2 is in line with the general legal definition laid down in the data protection Directive 95/46/EC. This Directive specifies 'by means of actual data' in the following way: an identifiable person is "one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity."(2)

To date, however, there are divergent legal interpretations within the EU Member States on how the notion of 'identifiability' should be further delimited. In particular, there is no uniform interpretation of recital 26 of the data protection Directive 95/46/EC, stating that: "to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller (i.e., the person responsible for the processing of the data) or by any other person to identify the said person."

In the explanatory memorandum of the Belgian data protection law, a quite radical viewpoint is adopted, whereby a person is considered identifiable as long as somebody – a person responsible for processing personal data (a controller) or anybody else – is able to trace the information back to a well-determined person. The Belgian Privacy Commission has followed this viewpoint, e.g., in its advice on the IFPI case.(3)

In other EU Member States, for example Germany, the U.K. and the Netherlands, the interpretation of identifiability is more pragmatic and focuses on a contextual analysis, namely whether or not the identity of the individual can be established in that particular case from the data itself and from other information in the possession of the data user.(4)

The viewpoint of Dirk Debot on the interpretation of identifiability is interesting: Debot concludes that the definition of identifiability in the EU Directive should be understood as a presumption of identifiability. In other words, it is up to the controller who is processing information to evidence(5) that he does not, or cannot, dispose of the means to identify the person. According to this viewpoint – which in our view seems workable – one should admit that when a controller proves that he has no possibility to trace the data back to natural persons, he does not dispose of personal data.(6) Accordingly, the requirements of data protection legislation – for example information requirements towards the data subject – will not apply.

We can, at this stage, conclude that the divergent interpretations of 'identifiability' harm legal security, and it will be up to the European Court of Justice to provide clarification, should it have the opportunity to rule on this issue. However, the radically broad interpretation defended in the explanatory memorandum of the Belgian law, which is a literal transposition of recital 26 of the EU Directive, has not been followed in a number of EU Member States. Consequently, from the viewpoint of the Privacy Commission, which follows the approach in the explanatory memorandum of the Belgian law, the technical notion of controlled anonymity would not seem in line with the legal notion of anonymity (or non-identifiability). The Privacy Commission would in this context rather describe "anonymity control" as "identification control". The different viewpoints could be further articulated in future research.

(2) Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, O.J. L 281, 23 November 1995.
(3) Advice of the Belgian Privacy Commission, No 44/2001 of 12 November 2001, regarding the compatibility of criminal investigation of copyright infringements perpetrated on the Internet with the legal regulations protecting personal data in the telecommunication sector, http://www.privacy.fgov.be/. In this case, the Privacy Commission took the view that both dynamic and permanent IP addresses should be considered personal data.
(4) Schwartz, P., and Reidenberg, R., Data Protection Law and Online Services: Regulatory Responses, p. 36.
(5) Such evidence can be brought during a court action or following a complaint filed with the Privacy Commission. A decision to qualify data as identifiable or not identifiable is obviously always taken at a specific time. Such a decision may well be reversed at a later stage if circumstances have changed.
(6) Debot, D., Verwerking van persoonsgegevens, Kluwer, 2001, p. 32.

Finally, we should mention the legal notion of "encoded data", defined in the Royal Decree of 13 February 2001 implementing certain aspects of the Belgian law on personal data protection:


"Encoded data are data that contain no element to identify a person and to which a code is attributed only with a view to make a link between the data and the relevant data subject."

Encoded data are defined in the Royal Decree in the context of data processed for specific purposes which at a later stage are used for historical, statistical or scientific purposes.(7) From the viewpoint of the researcher, who is the receiver of the encoded data, those data no longer have a link with an identified or identifiable person. Indeed, according to the law it is up to an intermediary organisation to encode the data and to transmit the codes without the keys to the researcher. However, those data are not considered anonymous data but still personal data, given that linking the data to an identified or identifiable person is still possible – not by the researcher himself, but by the person holding the keys of the code. The data protection rules thus still apply, given that encoded data are considered personal data under Belgian data protection law.

(7) Principally those data have to be anonymised. Only if anonymisation of the data does not allow reaching the historical, statistical or scientific objective can the data be encoded, according to a specific procedure, in particular through an intermediary organisation.

2.6.2 Anonymity

In section 2.2 of the present chapter, 'anonymity' is defined as "the state of being not identifiable within a set of subjects, the anonymity set." This working definition maps onto the current general legal definition, laid down in the Royal Decree of 13 February 2001 implementing certain aspects of the Belgian law on personal data protection: "anonymous data" are defined as "data which cannot be linked with an identified or identifiable person and consequently cannot be qualified as personal data."(8) Consequently, the rules on the processing of personal data are not applicable to anonymous data, as the latter are not, or no longer, considered personal data.(9) In other words, the processing of anonymous data does not require any guarantees from a data protection perspective.

A general definition of "anonymity" is also given in decision no 45.218 of 10 December 1993 of the Belgian Council of State, in which data are considered anonymous "if from a whole of available data (e.g., town, age, hospital, period of hospitalisation) there is reasonably no identification possible."(10)

Types of anonymity as defined in a technical context (see sections 2.2 and 2.3; for example the notions of unlinkability and untraceability) have to date no specific equivalent in legislation. When those types of anonymity are used in on-line anonymity services (applications), it may be advisable to describe them in a practice statement.

A first attempt to cope with the legal need for differentiating the notion of 'anonymity' can be found in the recent Belgian law on the anonymity of witnesses of 8 April 2002,(11) although drafted in a wholly off-line environment. The object of this law is to specify the conditions and modalities under which witnesses can be heard in 'partial' or 'full' anonymity. A witness in criminal inquiries can benefit from 'partial anonymity' ('gedeeltelijke anonimiteit'), which means that one or more of his/her personal data (age, name, forename, profession, domicile) will not be revealed despite the legal requirement to do so, on condition of proof of a reasonable presumption that the revelation of those personal data could seriously harm the witness or a person in his immediate environment.(12)

A witness in criminal inquiries can – upon fulfillment of a number of conditions – benefit from 'full anonymity' if the personal integrity of the witness is still endangered despite 'partial anonymity'. Full anonymity is described as the fact that 'the judge will take all measures which are reasonably necessary to hide the identity of the witness'. This includes not only the authorized derogation from mentioning all legally required personal data (age, name, forename, profession, domicile) of the witness, but also the concealing of external characteristics 'de visu or/and de auditu', for example through voice scrambling and disguise.(13) The preparatory works of the law mention that this will be evaluated in concreto. For example, the identity of the witness may be hidden from the attorneys of the parties in the procedure, or the witness may be heard in a separate court room, depending on the specific circumstances.

Although this regulation is in the first place conceived in an off-line context, the concept can also be interesting in an on-line environment.(14) This is especially the case with respect to the attempt to reach a fair balance of the different interests at stake: on the one hand a contradictory hearing, the trustworthiness of the witness statements, and the rights of the defence in general; on the other hand the protection of the physical and moral integrity of the witness and the willingness to testify without fear of retaliation.

Striking in this law is that the judge – as a trusted third party – fully 'controls' the use of two possible types of anonymity, by allowing the – to his view – appropriate strength of anonymity, which will differ depending on the specific circumstances. In this model, the strength of anonymity can only be defined in relation to a specific party. This anonymity model could be regarded as 'trustee-controlled conditional anonymity' as defined in section 2.3 in a technical context. Finally, it is important to note that legal effect is given to an anonymous act (namely witnessing), given that the result of the witnessing will be taken into account as an element of evidence in the court procedure.

Attempts to differentiate the notion of anonymity are also made in some recent legal doctrine. Prins and Grijpink(15) take the view that a distinction between various degrees of anonymity is important for evaluating the legal implications. Their notion of 'organised semi-anonymous legal transactions' is comparable to the technical term 'trustee-controlled conditional anonymity' (see section 2.3).(16) Van Dellen takes the view that gradations of anonymity do not exist: "One is either anonymous (or acts anonymously) or not. Nevertheless, gradations of reducibility of the identity do exist. The moment reducibility has become too difficult and therefore no longer possible, there is anonymity."(17) In practice, those views may not be as conflicting as they seem at first sight. They indicate that there is a grey area between anonymity and identity. This grey area is relevant both from a legal and a technical point of view, as experienced in our research. In our view, the model for anonymity control in electronic services described in section 2.3 can be used both from a technical and a legal point of view. The legal translation of 'trustee-controlled conditional anonymity', a core notion in the proposed model, will be further analysed in chapter 13 on legal issues.

(8) Article 1, 5 of the Belgian Royal Decree of 13 February 2001 implementing the law of 8 December 1992 regarding the protection of privacy with respect to the processing of personal data, published in the "Belgisch Staatsblad", 13 March 2001.
(9) "Personal data" is any information relating to an identified or identifiable natural person ('data subject').
(10) Raad van State, nr. 45.218, 10 December 1993, Vl.T.Gez. 1993-1994, 281.
(11) "Wet betreffende de anonimiteit van getuigen", 8 April 2002, published in the "Belgisch Staatsblad" of 31 May 2002, inserted in article 75bis of the Belgian Code of Criminal Procedure.
(12) Article 2 of the law on anonymity of witnesses.
(13) See preparatory works (memorie van toelichting), Doc 50 11185/001, p. 27.
(14) Moreover, it is conceivable that in the future witnesses may be heard on-line.
(15) Grijpink, J., and Prins, C., New rules for anonymous electronic transactions? An exploration of the private law implications of digital anonymity, Journal of Information Law and Technology (JILT), July 2001.
(16) For this purpose, Grijpink and Prins, ibid., make a distinction between: absolute anonymous transactions, whether or not with the use of a self-chosen pseudonym (there are no traces that make it possible to establish someone's identity); spontaneous semi-anonymous legal transactions, whether or not with the use of a self-chosen pseudonym (there are traces that make it possible to establish someone's identity); organised semi-anonymous legal transactions with the use of a pseudonym issued by a third party; spontaneous personalised transactions using unverified or unverifiable identifying personal details; and organised personalised transactions with the use of identifying personal details which have been accurately verified by an authorised third party.
(17) Van Dellen, M., Anonymity on the Internet: What does the concept of anonymity mean?, Electronic Communication Law Review 9:1-6, 2002. Essential to Van Dellen's view is the distinction between legally non-relevant and legally relevant anonymity/identity.

2.6.3 Persistent anonymity or pseudonymity

In section 2.2, pseudonymity is defined as the use of a secure anonymous identity (often called a pseudonym) to hide the real identity. Since one of its properties is linkability, the same pseudonym will be used over time. The pseudonyms can be generated and/or managed by a trustee (conditional) or by the client himself (unconditional).

From a legal perspective, the concept of a pseudonym is derived from copyright law: artists who create works of art are entitled to use a pseudonym, by reason of artistic freedom of expression. The notion as such is not defined in the law.(18) 'Pseudonymity' is also used in the EU Directive on electronic signatures, which underlines that Member States are not allowed to prohibit the use of a pseudonymous certificate instead of the real name of the subscriber. Once again, the concept itself is not defined.(19)

The use of 'encoded data' in the context of scientific research, as explained above under the notion of 'identifiability', could be regarded as pseudonymous data. One should however underline that encoded data are still considered personal data and thus fall under the scope of data protection legislation.

Legal issues for controlled anonymity in electronic services are further explored in chapter 13 of this report.

(18) Copyright law of 30 June 1994, Belgisch Staatsblad, 27 July 1994.
(19) Article 8, Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community framework for electronic signatures, O.J. 19.1.2000 L 13/12, http://europa.eu.int/eur-lex/en/search/search_oj.html


Part II

Infrastructure


Chapter 3

Anonymous connections

3.1 Description of the system

Any telecommunications network requires users to have a network address during communication. When one computer is communicating with another computer on the Internet, by definition the originating IP address of the communication is revealed to the receiving computer. The originating IP address is either the IP address of the sending computer itself, or the IP address of an intermediate machine, such as a firewall, a network address translator, or a proxy. The intermediate machine is usually part of the same network as the sending computer; in particular, it is usually that network's access point to the Internet. Consequently, in either case, the IP address can be linked to a group of users, and sometimes to one particular user. The network address itself thus constitutes identifiable information, both technically and legally. A solution that hides the user's network address provides connection anonymity.

Such a solution is not obvious to achieve, and is certainly less trivial than a solution for data anonymity. Goldberg [44] observes that it is not possible to add anonymity at the application (or data) layer if the communications (or connection) layer does not offer anonymity. Anonymity should in fact be the default: it is always easy to add authentication at a higher layer. In other words, just as security is needed in all parts of the system chain, anonymity is required at all layers of a system. In particular, truly anonymous services require both data and connection anonymity. For anonymous payments, this has already been pointed out in the past by several researchers, e.g., by Simon [81]. If there is no connection anonymity, so-called anonymous coins, for example, can be traced back to their originators simply by looking at the network addresses of the wallets from which they were spent.
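A minimal Python sketch (ours, purely illustrative) of the point made above: the transport layer hands the responder the initiator's network address together with the connection, with no analysis required.

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))   # a responder listening locally
srv.listen(1)

conn, addr = srv.accept()       # addr = (initiator's IP address, source port)
print("initiator revealed by the connection itself:", addr)
conn.close()
srv.close()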

3.2 Different roles/entities in the system

Anonymous connections are designed to be independent of the application. We can distinguish the following generic roles:

• Initiator: the entity in the system that initiates an anonymous connection for an application.

• Responder: the entity in the system that accepts an anonymous connection for an application.

• Provider(s): an anonymous connection is typically implemented by using one or more intermediate entities whose incoming and outgoing connections cannot be linked easily; depending on the solution, these providers of the anonymous connection can be informed or uninformed.

3.3 Requirements of the system

3.3.1 Anonymity requirements/properties

In order to provide connection anonymity, a solution is needed that hides a user's network address from the receiving computer and vice versa. We here consider real-time, bidirectional communication between an initiator and a responder at the transport layer (i.e., we do not consider specific applications). During the communication, only the initiator should know with whom (i.e., with which responder IP address) he is communicating. Other entities in the network should not know that a particular initiator is communicating with a particular responder. This is anonymous communication with initiator anonymity.

It is important to understand towards whom the initiator is anonymous. We can distinguish different 'attack models'. If the adversary is local (e.g., the responder itself looking at the incoming connections, or the local ISP that is curious about whom the initiator is communicating with), then an intermediate proxy that relays the communication already ensures anonymity (this is one of the core mechanisms of the Anonymizer [2]). If one considers an adversary that is able to observe the global network, this proxy will not be sufficient. In between these two extremes is the case in which the adversary consists of a number of collaborating local observers. Solutions for anonymous communication mostly rely on the assumption that there exist a number of entities that can be trusted not to collaborate. Ideally, solutions for anonymous communication should not require more trust than that. As a counterexample, we observe that a simple intermediate proxy knows the correspondence between an initiator and a responder, and should thus be trusted not to disclose or log this correspondence.

Similarly, anonymous communication with responder anonymity protects the identity (address) of the responder towards the initiator and the other entities in the network.

The complete list of requirements for an anonymous communications system is as follows:

• Bidirectional and real-time communication.

• Initiator unobservability. Nobody should know when an entity is initiating a connection.

• Initiator anonymity. If initiator unobservability cannot be provided, then nobody should know with whom the initiator is communicating. In particular, the responder should not know the identity of the initiator.

• Responder unobservability (optional). Nobody should know when an entity is accepting a connection.

• Responder anonymity (optional). If responder unobservability cannot be provided, then nobody should know with whom the responder is communicating. In particular, the initiator should not know the identity of the responder.

• Unlinkability of connections. Nobody (except for the initiator, if there is no responder anonymity) should be able to link two different connections between the same initiator and responder.

• Untraceability. It should not be possible to trace the connection from initiator to responder, or vice versa.

3.3.2 Anonymity control requirements/properties

Regardless of the electronic application or service, there is clearly a risk that a solution for unconditional anonymity in the communications infrastructure can be misused as well. Thus, also at the connection level, anonymity control seems to ensure the balance between the users’ right to privacy and the various concerns of governments and organizations. Hence, the principles of anonymity control could be applied to the solutions for anonymous communication as well: • Controls with unconditional anonymity This form of anonymity control seems to be less helpful at first sight. For example, while amount-limitedness is a very valuable anonymity control property for payment schemes, as small numbers of low-value transactions do not severely harm the system, it is of less value for D7 – Applications requirements for controlled anonymity

Anonymity and Privacy in Electronic Services

27/128

anonymity control in communications: even if anonymous communications is only possible for a very limited period of time, or even if only a very limited amount of anonymous connections may be established in a certain period of time, it is still perfectly possible for a person with malicious intentions to misuse this anonymity with disastrous results (i.e., a malicious communication is not necessarily lengthy nor does it have to take many different sessions). Nevertheless, some access control can be very useful to prevent (or, at least, reduce significantly) overflooding of the anonymous network infrastructure. This control can be implemented among others by means of anonymous credentials or blinded tickets, or the user could be required to solve a puzzle to access the network (this would significantly increase the user’s cost of establishing many anonymous connections at the same time). • User-controlled conditional anonymity It is possible that, in certain circumstances, a user desires to revoke his/her anonymity in a specific connection. However, we feel that this anonymity control feature should be implemented in each application if required. A mechanism that prevents overflooding of the anonymous network by using blinded tickets, may reveal the identity of the initiator that double-spends tickets. • Trustee-controlled conditional anonymity In particular, solutions that provide conditional anonymity with a trustee seem to be relevant. In an unconditional anonymous system, it is impossible under any circumstance to find out the identity behind a particular transaction. In contrast, a revocable anonymous system provides a backdoor with which an identity can be traced back. Revocation should be provided according to some rules: revocation should only be technically possible when a judge or other dedicated trusted party cooperates; this trustee should not be involved in the anonymity service itself; upon revocation, only the identity of the particular targets should be revealed, while all other transactions and/or users remain anonymous. In the case of anonymous communications, revocation could make sense in a number of situations; for example, tracing of a user who uploaded or downloaded illegal content on a particular web server, tracing of a hacker who broke into a particular host, tracing of users who communicated with a suspicious party, etc. A revocable anonymity service can be intended for users throughout the whole Internet, or

it might also be deployed for users within a certain organization or ISP. Note that a system for revocable anonymous communications provides a mechanism for trustee-controlled conditional anonymity that is independent of the application. Revocation is about the ability to trace back. Some equivalences can be drawn between anonymous payment systems and anonymous communication. A revocable anonymous payment system provides coin tracing, i.e., it is possible to trace the electronic coins withdrawn by a particular user. For anonymous communication, the equivalent is responder tracing, i.e., it is possible to trace the identity of the responder a particular initiator has communicated with. A revocable anonymous payment system also provides owner tracing, i.e., it is possible to trace the identity of the user who has spent a particular electronic coin. In a system for anonymous communication, this is equivalent to initiator tracing, i.e., it is possible to trace the initiator who has communicated with a certain responder. The last chapter of this deliverable gives an overview of the scenarios in which anonymity revocation may be required by law enforcement entities. The difficulties and requirements of revocable anonymous connections are described in more detail in [27] and [28].
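To make the puzzle-based access control mentioned above concrete, here is a minimal hashcash-style sketch (our own illustration in Python; the function names and the difficulty parameter are assumptions, not part of any system discussed in this deliverable). Solving costs the client a large number of hash evaluations on average, while the network verifies a solution with a single hash:

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # illustrative; tune to the desired connection cost


def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits


def new_challenge() -> bytes:
    """Issued by the network, fresh for every connection attempt."""
    return os.urandom(16)


def solve(challenge: bytes) -> int:
    """Client side: brute-force a nonce (deliberately expensive)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1


def verify(challenge: bytes, nonce: int) -> bool:
    """Network side: a single hash suffices to check the work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

Raising DIFFICULTY_BITS by one doubles the expected client effort, so the network can tune the cost of opening many simultaneous anonymous connections without identifying anyone.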

3.4 Overview of existing systems

Chaum’s mix-net [23] is the basis for almost all practical solutions for anonymous communication, and is particularly suited to providing initiator anonymity. In the remainder of this section, we focus on solutions based on mix-nets. Other basic approaches to anonymous communication are Chaum’s dc-net [24] and broadcast-based mechanisms (see for example Sherwood et al. [80]). These other approaches can also provide recipient anonymity and initiator–recipient anonymity. Chaum’s dc-net is mainly of theoretical interest: the scheme can provide unconditional anonymity (unconditional as opposed to computational, not as opposed to revocable), but it is less practical for performance reasons. A mix is a network entity that achieves anonymous communication by hiding the correspondence between the messages it receives on its input and the messages it forwards on its output. The mix hides the order of arrival of the messages by reordering, delaying and padding traffic (a minimal sketch of this batching-and-reordering core is given at the end of this section). Since we consider real-time and bidirectional communication, techniques such as delaying are not really feasible. Practical solutions therefore require a chain of mixes through which messages are routed in order to provide an adequate level of anonymity.

Channels then exist between mixes in which different messages with the same (intermediate) origin and destination are multiplexed, so that it is difficult for an adversary to identify single messages. Within the set of practical solutions, there are again two essentially different approaches to anonymous communication. Both achieve initiator anonymity by setting up a path from initiator to responder through several intermediate entities. Note that this is the main reason why practical solutions for anonymous communication have a substantial impact on the performance (i.e., a decrease in bandwidth) that users will experience. The first approach is the Crowds system of Reiter and Rubin [70]. The second is the Onion Routing system of Reed, Syverson and Goldschlag [69]. We refer to [27] and Deliverable 2 [79] for a detailed description of the main properties and differences of these solutions.
Solution for control with unconditional anonymity
Web MIXes [6] by Berthold et al. is an Onion Routing-like solution. Web MIXes deploys a ticket-based authentication system to prevent flooding attacks; that is, a mechanism is added that prevents unauthorized use of the mix network and that is intended to prevent denial-of-service attacks that flood the mix network with requests.
Solution for trustee-controlled conditional anonymity
A solution for revocable anonymous access to the Internet is proposed in [27] and [28]. The revocability concept from anonymous electronic payment protocols is here applied to generic solutions for anonymous communication. To our knowledge, there has been no other work in the area of anonymity control with conditional anonymity at the connections layer.
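To make the batching-and-reordering core of a mix described above concrete, the following toy sketch (our own illustration; a real mix [23] additionally strips a layer of public-key encryption from each message and pads all messages to a fixed length) buffers incoming messages and flushes them in a random order:

```python
import random
from typing import List, Optional


class SimpleMix:
    """Toy mix: hides the correspondence between input and output order."""

    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.pool: List[bytes] = []

    def receive(self, message: bytes) -> Optional[List[bytes]]:
        """Buffer a message; flush a shuffled batch once the pool is full."""
        self.pool.append(message)
        if len(self.pool) < self.batch_size:
            return None          # nothing leaves until the batch is full
        batch, self.pool = self.pool, []
        random.shuffle(batch)    # destroy the arrival order
        return batch
```

An observer who sees a batch of messages enter and later leave the mix cannot tell, from order alone, which output corresponds to which input; this is exactly why real-time communication, which cannot tolerate such buffering delays, needs a chain of mixes instead.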

Chapter 4

Digital credentials

4.1 Description of the system

4.1.1 Introduction

A digital credential is the digital equivalent of a paper document or other tangible object (e.g., token) issued by a trusted party, that proves one or more attributes of the owner (e.g., identity card, passport, driver’s license, diploma, . . . ). Digital credentials are, however, more flexible than their paper counterparts, since the owner can choose which attributes or which properties of attributes to reveal to (and hence, which attributes to hide from) the verifying organization. As such, ordinary identity certificates, which bind a public key to a (legal) identity or pseudonym, can be seen as a special case of digital credentials. Digital credentials may also have options, such as one-show or limited-show. When the credential is shown more than the predetermined number of times, the issuer of the credential or a third party may recover identifying information. These options can be useful for implementing e-cash or for casting votes.

4.1.2 Different steps

A digital credential system consists of different phases:
• Issuing of the credential. The user contacts an issuing organization and – after a verification process – receives a credential that can be shown to verifying organizations. The issuer may restrict the use of the credential (e.g., one-time use or limited-time use).
• Showing a credential. The owner of the credential proves to a verifying organization that he owns a credential that fulfills certain requirements. The proof may be a digital signature that can be verified with a certificate (= the credential), or may involve zero-knowledge protocols.


4.2 Different entities in the system

There are three (possibly five) players in a digital credential system: the owner, the issuer, the verifier, and possibly a trusted deanonymizer and trusted arbiter(s).
• The owner is the user who owns a credential. It maps to the initiator in the definitions of the proposed model (in both the issuing and the showing step).
• The issuer is the organization that issues credentials to users. It maps to a provider. It is desirable that the issuer is an uninformed provider, because the anonymity of the owner must be protected towards the issuer.
• The verifier is the organization that verifies the showing of a credential. When the verification succeeds, it will offer a service to the owner. The verifier maps to the responder in the model.
• The deanonymizer is a trusted third party that is able to deanonymize an anonymous credential show.
• The arbiter is also a trusted third party; it will assess whether the deanonymization condition is fulfilled. That condition may be a court order, a contract breach, or any other condition upon which both owner and verifier agree.

4.3 Requirements/Properties of the system

Digital credentials can be used for authorization purposes: the user proves that he possesses a credential that meets certain requirements (e.g., minimum age, citizenship, . . . ), and as a consequence the user gets access to resources, services, . . . . Digital credentials may also represent digital money (cf. payment systems).

4.3.1 Requirements unrelated to anonymity

Bi-directional
If the credential show requires zero-knowledge proofs (ZKPs), then a bidirectional connection is necessary.

Real-time
Zero-knowledge proofs based on a challenge–response protocol require interaction; hence, long delays should be avoided.

4.3.2 Anonymity Requirements/Properties

• Owner anonymity: the identity of the owner of the credential should remain hidden; the owner must not be identifiable.
• Owner untraceability: the inability to trace a credential show back to the owner.
• Credential show untraceability: the inability to trace, starting from an owner, information about past credential shows he has performed.
• Credential show unlinkability: the inability to link different credential shows made by the same owner. In particular, if an unlimited-show credential is shown more than once, these credential shows should be unlinkable.
The anonymity requirements should be durable, unless some anonymity control is implemented (by means of anonymity revocation) to prevent fraud or other illegal actions. Owner anonymity must be achieved towards the issuer, the verifier, an eavesdropper and any other party involved in the designed system (except the deanonymizer). To achieve untraceability, anonymous connections must be used. Not all systems provide ‘general’ unlinkability or untraceability: sometimes the issuer will be able to link or trace credential shows.

4.4 Anonymity Control Requirements/Properties

Since digital credentials may be used for authorization purposes, they seem an attractive tool for performing anonymous illegal actions. Therefore, mechanisms to make actors accountable (or liable) for their actions are necessary.

4.4.1 Anonymity control mechanisms

We can identify the following anonymity control mechanisms for anonymous credentials.

Controls with unconditional anonymity
• Non-transferability of credentials. Users should be prevented, or at least discouraged, from sharing or pooling their credentials. Examples of possible measures are:
– PKI-assured non-transferability: the user’s secret is tied to some other valuable key outside the credential system, e.g., a key that gives access to the user’s bank account;
– all-or-nothing transferability: sharing just one pseudonym or credential implies sharing all the user’s pseudonyms and credentials;
– non-transferability by content: encoding confidential or embarrassing attributes inside the credential.
User-controlled conditional anonymity
• Double-spending detection. The issuer (or a trustee) should be able to trace the owner of a limited-show credential that was shown more than the built-in limit (see the sketch at the end of this subsection).
Trustee-controlled conditional anonymity
• Revocable credential shows. Revocability allows the trustee to retrieve the owner of the credential used in a credential show. Only the trustee can deanonymize a credential show, and this deanonymization is restricted by rigid rules (e.g., a court order, a provable contract breach, . . . ). Controls on the revocation can be implemented to discourage abuse:
– The owner may prevent revocation by omitting necessary information in the credential show (it is up to the verifier whether or not this is acceptable).
– The owner may bind a deanonymization condition to the credential show, stating under which condition the trustee may revoke the owner’s anonymity.
– The owner may indicate one or more arbiters (independent of the trustee) who should evaluate the deanonymization condition.
– The revocation should produce a verifiable proof. That way, the trustee can be held liable for the revocations: a wrongfully deanonymized owner may take legal action against that trustee.
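The double-spending detection mentioned above can be illustrated with the classic secret-splitting idea from off-line electronic cash, in the spirit of Chaum–Fiat–Naor (a simplified sketch; the encoding and function names are our own illustrative assumptions, and a real scheme would commit to the shares blindly at issuing time so the owner cannot cheat):

```python
import os


def make_shares(identity: bytes, pairs: int = 8):
    """Split the identity into `pairs` pairs (r, r XOR identity)."""
    shares = []
    for _ in range(pairs):
        r = os.urandom(len(identity))
        shares.append((r, bytes(x ^ y for x, y in zip(r, identity))))
    return shares


def respond(shares, challenge_bits):
    """One credential show: reveal one share of each pair."""
    return [pair[bit] for pair, bit in zip(shares, challenge_bits)]


def trace(resp1, bits1, resp2, bits2):
    """Two shows answered under different challenges leak the identity."""
    for s1, b1, s2, b2 in zip(resp1, bits1, resp2, bits2):
        if b1 != b2:  # both halves of this pair are now known
            return bytes(x ^ y for x, y in zip(s1, s2))
    return None       # the challenges happened to be identical


shares = make_shares(b"owner-4711")          # hypothetical identity encoding
b1, b2 = [0, 1, 1, 0, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1, 0, 0]
assert trace(respond(shares, b1), b1, respond(shares, b2), b2) == b"owner-4711"
```

A single show reveals only random-looking shares; two shows under independent random challenges differ in at least one position with overwhelming probability as the number of pairs grows, exposing a complete pair whose XOR is the identity.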

4.4.2 Specific requirements for revocable anonymous credential shows

Revocable anonymous credential shows should meet several requirements:
• Anonymity for ‘honest’ owners.
• Deanonymization upon warrant presentation. In order to prevent fraud or other illegal actions, anonymity should be revocable in certain cases, for example under a court order. In this case a trusted party, or a set of parties, must be able to deanonymize a previous credential show.
• Separation of power. The trustee who has the ability to revoke the anonymity of the owner should only have deanonymization power; i.e., he should not be able to forge a credential show.
• No framing. The issuer, even in collaboration with the trustee or other parties (verifier, . . . ), should not be able to impersonate the credential owner.
• Selectivity. Revocation must be selective; that is, only the credential show for which a judicial order is given must be deanonymized. Other credential shows (of the same user, possibly of the same credential) should remain anonymous.
• Efficiency. The trustee should only be involved when revocation is required.
• Crime prevention. Deanonymization should not motivate crimes more serious than those it protects against.
• Verifiability. It should be possible to prove (to a third party) that a credential show has been deanonymized correctly.

4.5 Trust Requirements

The verifier trusts:
• that the holder of the credential does not share it with other users, and that different users do not pool their credentials (e.g., to obtain another credential). Users should be prevented, or at least discouraged, from sharing their credentials.
The user trusts:
• that the issuer – if he knows the mapping between the real identity of the user and a credential – will not collaborate with the verifier in order to find a link between a credential show and the user;

• that the deanonymizer will not deanonymize a credential show when the deanonymization condition is not fulfilled.

4.6 Overview of existing solutions

4.6.1 Traditional PKI system

Any PKI offers a limited credential system. The certificate authority (CA) certifies (signs) a user’s certificate, which binds the user’s public key to a verified (legal) identity or pseudonym. A credential show involves sending the certificate and a signature that can be verified with the certificate. Credential shows of the same certificate are linkable. Certificates can support pseudonymity if the identity of the certificate owner is replaced by a pseudonym. Revocation is supported if the certification authority keeps a mapping between the pseudonym and the real identity. If the pseudonym is the real identity encrypted with the trustee’s public key, then that trustee can deanonymize a credential show, as sketched below.
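This trustee-encrypted pseudonym can be sketched as follows (our own minimal illustration using RSA-OAEP from the third-party `cryptography` package; the function names are assumptions). Because OAEP encryption is randomized, certificates issued to the same user under different pseudonyms remain unlinkable to everyone except the trustee:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Generated by the trustee; the private half never leaves the trustee.
trustee_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_pseudonym(identity: bytes) -> bytes:
    """CA side: the pseudonym is the identity encrypted for the trustee."""
    return trustee_key.public_key().encrypt(identity, OAEP)


def deanonymize(pseudonym: bytes) -> bytes:
    """Trustee side: only the holder of the private key can reverse it."""
    return trustee_key.decrypt(pseudonym, OAEP)
```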

4.6.2 Chaum’s work on pseudonyms

David Chaum laid the foundation for anonymous digital credentials: several of his papers ([19, 22, 20, 21]) discuss the basic concepts of anonymous credentials and their technical feasibility. In a nutshell, the idea is to use a set of pseudonyms that are signed by the issuer. Blind signatures are used, so that the issuer has knowledge of only one of these pseudonyms. Later, before presenting the credential to some verifying party, the credential owner can change the accessibility of the pseudonyms within the credential. The verifier will hence receive a valid credential that uses a different pseudonym. As a result, the credentials are anonymous and unlinkable between the issuer and other verifiers. To our knowledge, no practical implementations of credential systems were built by the author. Regarding the requirements related to controlled anonymity, the system guarantees anonymity for honest users: a verifier cannot link the pseudonym that he is presented with to the original pseudonym (used for issuing) or to the real identity of the user. Support for deanonymization, however, is not provided: because blind signatures are used during credential issuing, neither the issuer nor any other party is able to link the pseudonyms in the credential. As a result, the anonymity of the owner cannot be revoked.
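The blind signature primitive underlying Chaum’s construction can be sketched with textbook RSA (our own toy illustration; the parameters are far too small for real use, and a deployed scheme would also hash and pad the message):

```python
import random
from math import gcd

# Toy RSA key; illustration only (real moduli are thousands of bits).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)


def blind(m: int):
    """Owner: hide the message m behind a random blinding factor r."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r


def sign_blinded(m_blinded: int) -> int:
    """Issuer: signs without ever seeing m itself."""
    return pow(m_blinded, d, n)


def unblind(s_blinded: int, r: int) -> int:
    """Owner: strip the blinding factor, leaving a valid signature on m."""
    return (s_blinded * pow(r, -1, n)) % n


m = 123456789                        # e.g., an encoding of a pseudonym
m_blinded, r = blind(m)
s = unblind(sign_blinded(m_blinded), r)
assert pow(s, e, n) == m             # verifies, yet the issuer never saw m
```

The unlinkability follows from the fact that (m * r^e) mod n is uniformly distributed for random r, so the issuer’s view of the blinded message is statistically independent of m.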

4.6.3 Idemix

Idemix [13, 14, 46] is an anonymous credential system which allows anonymous yet authenticated and accountable transactions between users and service providers. Idemix provides interactive protocols (using ZK-proofs) for:
• registering a pseudonym (called a nym) with an organisation;
• getting/issuing a credential for a nym;
• showing/verifying a credential.
Idemix also provides off-line protocols for:
• checking multiple spending of limited-show credentials (which reveals the nym for which the credential was issued);
• local and global deanonymization (which reveals the nym for which the shown credential was issued).
Note: all the user’s nyms and credentials are linked to the user’s master secret. Hence, sharing one credential means sharing all the other credentials as well.
Registering a Nym
A nym is the pseudonym under which the user wants to be known by an organization. Idemix has two kinds of nyms: ordinary nyms and rootnyms. The user establishes a nym based on his master secret and a randomly chosen secret. During the registration, the user proves that the nym has been correctly formed. A rootnym is a special nym, based on the user’s master secret only. It is hidden in every other nym of that user. Rootnyms are established with a special credential-issuing organization, the Root Pseudonym Authority. There are three basic primitives for registering a nym:
• RegNym, for registering ordinary nyms;
• RegSignedNym, for registering signed ordinary nyms;
• RegRootNym, for registering rootnyms.
In both RegSignedNym and RegRootNym, the user signs the established nym with his signature key, which is certified through an external certificate (which links the user’s public key with his identity). Hence, the organization holds a provable link between the (root)nym and the identity certified by the certificate. The signature can be extended to include both the nym and a message (e.g., the message could contain a description of the user’s liability towards the usage of this nym).

Getting/Issuing a credential
A user, known to an organization under a nym, may request a credential for that nym. Credentials can have attributes (e.g., age, citizenship, expiration date, . . . ) and options (e.g., one/limited/multi-show, local/global deanonymization).
Showing/Verifying a credential
The user proves to the verifying organization OV that he owns a credential issued by organization OI. The proof convinces OV that the user knows the master secret that is linked to the credential; in addition, the user may choose to reveal (and prove) any attribute, or a property of these attributes (e.g., citizenship is not American, age is > 18, . . . ). Showing a credential results in a transcript (for OV) which can be used later in double-spending and deanonymization protocols. The following anonymity properties hold:
• Two or more credential shows of the same credential cannot be linked together (unless the credential is a one-show credential, and assuming that no subset of the revealed attributes uniquely identifies the user);
• A credential show does not reveal the pseudonym for which it was issued;
• A credential show cannot be traced to the issuing of the credential.
During a credential show, a message can be signed, which provably links the message to the transcript of the credential show. A credential show can have three extra features:
• the credential is shown relative to another nym;
• local deanonymization is allowed;
• global deanonymization is allowed.
Credential Show Relative to Another Nym
The user may show a credential relative to another nym. Here, the user proves to the verifier OV that the nym (NI) on which the credential was issued and the nym (NV) under which the user is known by OV both belong to the same user. This feature also allows a user to prove that he owns several credentials, and prevents users from colluding in presenting credentials.


Local and Global Deanonymization
Transcripts of anonymous credential shows can be deanonymized by including a verifiable encryption of the user’s identity:
• the nym for which the credential was issued (local deanonymization);
• the user’s rootnym (global deanonymization). Recall that the rootnym is hidden in every other nym of the user; the user can prove that the encrypted rootnym is indeed hidden in the nym for which the credential was issued.
The user may restrict deanonymization by including a deanonymization condition, which must be fulfilled before the deanonymizing organization may deanonymize the transcript of a credential show. The deanonymizing organization can construct a verifiable proof of the deanonymization. Note: since rootnyms are always provably linked to an external identity, global deanonymization reveals that identity.
Accountability
During the design of the Idemix protocols and basic primitives, accountability and liability have been taken into account. In order to make a user accountable for a transaction, verifiable proofs are a prerequisite.
• The RegRootNym and RegSignedNym primitives establish a verifiable link between a (root)nym (and possibly a message) and an external identity.
• During a credential show, the user can sign a message, which makes the (possibly anonymous) user accountable for that message. If the credential show was relative to a nym, at least a pseudonym of the user is provably revealed. If local deanonymization is available, the nym for which the credential was issued can be provably retrieved. If global deanonymization is available, the rootnym (and hence the external identity) can be provably recovered.
• Showing a one-show credential more than once allows the issuing organization to provably recover the nym on which the credential was issued.
Idemix can fulfill all the (controlled) anonymity requirements.


Trust Requirements
The verifier trusts:
• that the holder of the credential does not share it with other users; this is strongly discouraged through the all-or-nothing transferability: sharing just one pseudonym or credential implies sharing all the user’s pseudonyms and credentials;
• that the deanonymizer will deanonymize a credential show when the deanonymization condition is fulfilled.
The user trusts:
• that the deanonymizer will not deanonymize a credential show when the deanonymization condition is not fulfilled. However, since deanonymizations are verifiable, the deanonymizer can be held accountable (liable) for wrongful deanonymizations.
Also note that credential shows for which no local or global deanonymization options were used will never be deanonymizable. However, it is the verifier who decides whether or not he will accept such credential shows. No other trust relations are necessary, since credential shows do not reveal the pseudonym on which the credential was issued (unless the credential is shown more often than allowed, in which case the pseudonym is revealed). Only the issuer of the credential can perform this over-spending check.

4.6.4 Brands’ Digital Credential System

Brands [7, 8] discusses the design of a digital credential system. Different shows of the same credential can be linked (the public key and the signature are revealed to the verifier). However, a credential show cannot be traced to the issuing of the credential, because of a restrictive blinding technique. To discourage a user from sharing his digital credentials with other users, the issuer can encode a confidential attribute of the user into the credential.

Chapter 5

Anonymous data

5.1 Description of the system

Today’s globally networked society places great demand on the dissemination and sharing of person-specific data. Data holders, operating autonomously and with limited knowledge, are left with the difficulty of releasing information that does not compromise privacy, confidentiality or national interests. In a medical setting, for example, this is of particular interest due to privacy concerns and the need to prevent misuse of confidential information. As electronic medical records and medical data repositories become more common and widespread, the issue of making sensitive data anonymous becomes increasingly important. Nevertheless, only a few research papers on this topic could be found. There still seems to be no solid scientific approach to the problem, and the solutions described are ad hoc.

5.2 Different roles/entities in the system

One can distinguish the following entities:
• The data provider/user (owner) is the entity that provides data to the database.
• The register or database is the entity that can be subdivided into the following entities:
– The data collector is the entity that collects the data from the different providers.
– The data anonymizer is the entity that anonymizes the collected data.
– The data holder is the entity that holds the anonymized data and provides it to the data requestor.

• The data requestor is the entity that submits a request to the database, searching for specific information.
• The trustee/revocation authority is the entity that controls and, if necessary, performs the de-anonymization of the data.
Note that in the literature only the anonymity of the collected data (the anonymity of the data provider) with respect to the data requestor is considered. The privacy of the data requestor (i.e., ensuring that the database gains no information about which information the requestor is looking for) would, however, also be interesting to consider.

5.3 Anonymity requirements/properties

• Independence: if an attacker knows the identity of some registered individuals, this should not help him to find other identities.
• Verifiability: in some applications an individual should be able to prove to anyone that he is, or is not, identical to a given person registered in the database.
• Anonymity: even when it is known that an unknown person registered in the database is identical to one of two individuals, it should still be hard to tell which one.

5.3.1 Problems producing anonymous data

• Knowledge that a viewer of the data may hold, or bring to bear on the data, is usually not known beforehand by the data holder at the time of release.
• Unique and unusual values, and combinations of values, appearing within the data often make identification of the related entities easier.
• We cannot prove that a given release is anonymous.
De-identified data versus anonymous data
In de-identified data, all explicit identifiers, such as SSN, name, address, etc., are removed, generalized or replaced with a made-up alternative. The term anonymous implies that the data cannot be manipulated or linked to identify an individual. Even when information shared with secondary parties is de-identified, it is often far from anonymous. The goal of an anonymous database is to limit what can be revealed about properties of the entities that are to be protected in the released information.

5.4 Anonymity control requirements/properties

5.4.1 User-controlled conditional anonymity

In some applications the user may want to revoke his own anonymity. For example, in the case of a medical database, a patient asking for his medical records should be able to prove his identity.

5.4.2 Trustee-controlled conditional anonymity

Control on the anonymity of the data should be possible if a judge or trusted party requires revocation. Revocable anonymous data systems should meet several requirements. We list the following set:
1. Anonymity. Data should be anonymous and unlinkable to legitimate users.
2. Revocation. Anonymity should be revocable, but only by a trustee or judge, and only when necessary.
3. Separation of power. The trustee should not have any power other than tracing. He should, for example, not be able to forge or modify the data.
4. Selectivity. The anonymity should be revocable only for the attributes in the database for which a trustee’s order is given. Tracing should, for example, not be performed by simply revoking the anonymity of all attributes.

5.5 Short overview of existing systems

5.5.1 Main techniques for making a database anonymous

• Outlier removal. Certain rows or columns, i.e., objects and/or attributes, are removed altogether.
• Generalization. The value sets are made smaller.
• Cell suppression. Selected database entries are locally suppressed.
For better results, a combination of all three techniques should be applied [63]. In order to protect the anonymity of the individuals to whom released data refer, data holders often remove or encrypt explicit identifiers such as names, addresses and phone numbers. However, other distinctive data, termed quasi-identifiers in [73], often combine uniquely and can be linked to publicly available information to re-identify individuals.

Quasi-identifiers are a set of attributes in a table whose release must be controlled, i.e., the attributes that can be exploited for linking. An approach based on k-anonymity is given in [73]: each release of data must be such that every combination of values of quasi-identifiers can be indistinguishably matched to at least k individuals. k-anonymity thus characterizes the degree of protection of the data with respect to inference by linking (a minimal check of the property is sketched below). In [11], a model for anonymous data is proposed that uses cryptographic functions to relate the real identity to the identifier (pseudonym) stored in the database; in this model, several organizations may provide data related to an individual to a central database.
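Operationally, the k-anonymity property is easy to check: every combination of quasi-identifier values occurring in the released table must occur at least k times. A minimal check (our own sketch; the table and column names are made-up examples):

```python
from collections import Counter


def is_k_anonymous(rows, quasi_identifiers, k):
    """True iff every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())


# Illustrative release: name removed, ZIP code and birth year generalized.
release = [
    {"zip": "30**", "birth": "196*", "diagnosis": "flu"},
    {"zip": "30**", "birth": "196*", "diagnosis": "asthma"},
    {"zip": "31**", "birth": "197*", "diagnosis": "flu"},
]
print(is_k_anonymous(release, ["zip", "birth"], k=2))  # False: third row is unique
```

When the check fails, the data holder generalizes further (e.g., coarser ZIP prefixes) or suppresses the offending rows until the property holds.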

5.5.2 DatAnon

Technologies enabling privacy and anonymity [extracts from the DatAnon web site [32]]
The problem: The growth of the information economy will depend upon technologies which enable privacy and anonymity. Government agencies, financial institutions, medical and pharmaceutical research organizations compile information about customers and patients in databases. These organizations function in a highly regulated environment, and are required by a variety of laws to protect the privacy of their constituents. Information compiled in the databases is valuable for research, analysis of behaviors and management. Before it can be shared and used for such purposes, the information must first be “de-identified” and “anonymized” to protect the privacy of individuals.
The solution: DatAnon, LLC has the exclusive license to commercialize technologies developed by a professor of computer science at Carnegie Mellon University, Latanya Sweeney. DataFly and k-Similar are fully developed computational algorithms which anonymize personal information compiled in databases without suppressing or distorting the data. The development of the above technologies was completed by Dr. Sweeney over the last several years. The technologies can be embedded in all types of commercial and proprietary databases. They are stable, statistically reliable, and are currently deployed in a variety of concurrent projects with the Department of Defense and other agencies.
Anonymous Database System (ADS) [88] is a system that makes individual and entity-specific data available such that the ability to identify individuals and other entities contained in the released data is controlled, where controlled means that the k-anonymity property holds.

5.5.3 Statistics Netherlands

µ-ARGUS System – 5th EU Framework CASC (Computational Aspects of Statistical Confidentiality) project [extracts from the CASC web site [85]]
The growing demands from researchers, policy makers and others for more and more detailed statistical information lead to a conflict. The statistical offices collect large amounts of data for statistical purposes. The respondents are only willing to provide a statistical office with the required information if they can be certain that their data will be used with the greatest care, and in particular will not jeopardise their privacy. So statistical use of the data by outside users should not lead to a compromise of confidentiality. However, making sure that microdata cannot be misused for disclosure purposes requires, generally speaking, that they be less detailed, or modified in another way that hampers the disclosure risk.
– The aim of statistical disclosure control is to limit the risk that sensitive information of individual respondents can be disclosed from data that are released to third-party users. In the case of a microdata set, i.e., a set of records containing information on individual respondents, such a disclosure of sensitive information of an individual respondent can occur after this respondent has been re-identified, that is, after it has been deduced which record corresponds to this particular individual. So disclosure control should hamper the re-identification of individual respondents represented in the data to be published.
– An important concept in the theory of re-identification is a key. A key is a combination of identifying variables. An identifying variable, or an identifier, is one that may help an intruder re-identify an individual. Typically an identifying variable is one that describes a characteristic of a person that is observable, that is registered (identification numbers, etc.), or, generally, that can be known to other persons. This, of course, is not very precise, and relies on one’s personal judgement. But once a variable has been declared identifying, it is usually a fairly mechanical procedure to deal with it in µ-ARGUS. In a disclosure scenario, keys are supposed to be used by an intruder to re-identify a respondent. Re-identification of a respondent can occur when this respondent is rare in the population with respect to a certain key value, i.e., a combination of values of identifying variables. Hence, rarity of respondents in the population with respect to certain key values should be avoided.
– When a respondent appears to be rare in the population with respect to a key value, disclosure control measures should be taken to protect this respondent against re-identification. In practice, however, it is not a good idea to prevent only the occurrence of respondents in the data file who are rare in the population (with respect to a certain key).

Several reasons can be given for this. Firstly, there is a practical reason: rarity in the population, in contrast to rarity in the data file, is hard to establish. There is generally no way to determine with certainty whether a person who is rare in the data file (with respect to a certain key) is also rare in the population. Secondly, an intruder may use another key than the key(s) considered by the data protector. For instance, the data protector may consider only keys consisting of at most three variables, while the intruder may use a key consisting of four variables. Therefore, it is better to avoid the occurrence of combinations of scores that are rare in the population, instead of avoiding only population-uniques. To define what is meant by rare, the data protector has to choose a threshold value Dk for each key value k, where the index k indicates that the threshold value may depend on the key value under consideration. A combination of scores, i.e., a key value, that occurs not more than Dk times in the population is considered unsafe; a key value that occurs more than Dk times in the population is considered safe. The unsafe combinations must be protected, while the safe ones may be published.
– PRAM (Post RAndomisation Method) is a disclosure control technique that can be applied to categorical data. Basically, it is a form of deliberate misclassification using a known probability mechanism. Applying PRAM means that, for each record in a microdata file, the score on one or more categorical variables is changed. This is done, independently of the other records, using a predetermined probability mechanism (a minimal sketch is given below). Hence the original file is perturbed, and it will be difficult for an intruder to identify records (with certainty) as corresponding to certain individuals in the population. Since the probability mechanism that is used when applying PRAM is known, characteristics of the (latent) true data can still be estimated from the perturbed data file. A technique that is closely related to PRAM is Randomised Response (RR): where RR is applied before the data are obtained, PRAM is applied after the data have been obtained. Both methods use known probability mechanisms to change scores on categorical variables. RR is used when interviewers have to deal with highly sensitive questions, on which the respondent is not likely to report true values in a face-to-face interview setting. By embedding the question in a pure chance experiment, the true value of the respondent is never revealed to the interviewer.
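A minimal sketch of PRAM as just described (our own illustration; the variable and the transition probabilities are made up). The transition matrix is public, which is what allows analysts to correct their estimates for the deliberate misclassification:

```python
import random

# Known transition probabilities P(new value | old value) for one variable.
PRAM_MATRIX = {
    "employed":   {"employed": 0.9, "unemployed": 0.1},
    "unemployed": {"employed": 0.1, "unemployed": 0.9},
}


def pram(value: str) -> str:
    """Perturb one categorical score using the published mechanism."""
    categories = list(PRAM_MATRIX[value])
    weights = [PRAM_MATRIX[value][c] for c in categories]
    return random.choices(categories, weights=weights)[0]


# Applied independently to every record in the microdata file.
perturbed = [pram(v) for v in ["employed", "employed", "unemployed"]]
```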

5.5.4 Custodix

As a spin-off of the University of Ghent, Custodix is involved in basic research in the technical domain of privacy protection. Custodix is partner and coordinator of several EU-funded research projects on Privacy Enhancing Techniques. [extracts from the web site [30]]
Custodix PET research is focused on:
• Pseudonymisation Techniques (ID protectors, Privacy Protection Engine, ...)
• Database Privacy Analysis
• Re-identification Techniques and Countermeasures

5.5.5 The privacy of the data requestor

One solution for the privacy of the data requestor is the so-called PIR (Private Information Retrieval) scheme [5, 26, 42, 41]. A PIR scheme enables a user to retrieve an item of information from a publicly accessible database in such a way that the database manager cannot figure out from the query which item the user is interested in. However, the user can get information about more than one item. In a SPIR (Symmetric Private Information Retrieval) scheme, on the other hand, the user can get information about one and only one item, i.e., the privacy of the database is considered as well. One PIR (SPIR) solution is to distribute the database from one server to a number of servers. In order to get information, the requestor contacts a certain threshold number of servers, in such a way that no coalition of fewer than another threshold of servers has a clue which information the requestor is looking for (a minimal two-server example is sketched below). In some cases, when no data-requestor privacy is implemented, the exchanged information could help identify the requestor.
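The replicated-database idea can be made concrete with the classic two-server XOR scheme for a database of bits (our own minimal sketch; it is private only as long as the two servers do not collude):

```python
import secrets
from typing import Callable, List, Set


def server_answer(database: List[int], query: Set[int]) -> int:
    """Each server XORs together the database bits selected by the query."""
    answer = 0
    for index in query:
        answer ^= database[index]
    return answer


def retrieve(size: int, i: int,
             ask1: Callable[[Set[int]], int],
             ask2: Callable[[Set[int]], int]) -> int:
    """Fetch bit i; each query on its own is a uniformly random subset."""
    s = {j for j in range(size) if secrets.randbits(1)}
    return ask1(s) ^ ask2(s ^ {i})   # s ^ {i}: flip membership of index i


db = [1, 0, 1, 1, 0]
bit = retrieve(len(db), 2,
               lambda q: server_answer(db, q),
               lambda q: server_answer(db, q))  # two distinct servers in reality
assert bit == db[2]
```

The XOR of the two answers cancels every bit that appears in both queries, leaving exactly bit i, while each server individually sees only a random subset and learns nothing about which item was wanted.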

Chapter 6

Anonymous peer-to-peer networking

6.1 Description of the system

A peer-to-peer (P2P) system is a distributed, often dynamic, architecture in which many identical participants (peers) cooperate using a decentralized paradigm in order to achieve a common goal. The symmetric relationship between the participating computers distinguishes this architecture from other, typically asymmetric, systems such as client-server. However, the roles and anonymity requirements for anonymous peer communications and anonymous connections are very similar. Note that an anonymous peer-to-peer networking architecture can be situated between the communications layer and the application layer. Thus, from an infrastructure point of view, peer-to-peer systems rely significantly on anonymous connections, and many requirements and properties are inherited from them. In addition, an anonymous peer-to-peer networking architecture also incorporates a number of generic features that can be offered to peer-to-peer applications.

6.2 Different roles/entities in the system

For anonymous peer communication, the roles are very similar to the roles for anonymous connections (see Chapter 3). Both the initiator and the responder are peers. The initiator is a peer that requests information. The responder is the entity that accepts the request.

6.3 Anonymity requirements/properties

With respect to the anonymity and anonymity control requirements, peer-to-peer networking is in fact very similar to traditional client/server anonymous connections.

We adopt the same requirements as described in Chapter 3. In contrast to anonymous connections, both initiator and responder have the same status (they are both peers), and thus initiator anonymity and responder anonymity are equally important.

6.4 Anonymity control requirements/properties

Regardless of the electronic application or service, there is clearly a risk that a solution for unconditional anonymity in P2P networking can be misused as well. We distinguish between controlling P2P communication and controlling P2P group management.
• Controlling P2P communication. With respect to anonymity control of the P2P communication, we adopt the same requirements as described in Chapter 3.
• Controlling group management. The difference with connection systems is that group management is an important issue for P2P systems. A peer (participant) belongs to a group, and access to the peer group can be controlled: one user should only be able to set up a limited number of peers. Limiting the participants can be useful in several cases. For instance, the Crowds system [70] controls a jondo when it joins the crowd; many jondos for the same user can compromise the system. We focus on controlling the group management in the rest of this section.
• Controls with unconditional anonymity. It is useful to control the participants (group members) of a peer system. Access control is only possible if a peer must present (or prove possession of) a valid access token, obtained in a prior registration phase. Users without a valid token, or whose token has been invalidated (revoked), will no longer be able to use the system. Hence, access control can prevent future abuse. In the case of persistent anonymity, reputation can be a (social) control mechanism.
• User-controlled conditional anonymity. This control requires a registration phase in which the system provides the user with a credential that can only be used a limited number of times. Even if different usages of the credential are unlinkable, using it more than the set limit will provide the system with enough information to retrieve the identity of the user. Hence, revocation is only possible when the user abuses the system.
• Trustee-controlled conditional anonymity. An anonymous peer system may also revoke the anonymity of a peer with the help of a trustee.

This requires that peers add extra information in the registration phase that can only be interpreted with the help of the trustee.

6.5 Overview of existing systems

Gnutella [43] and Freenet [39] provide a certain degree of anonymity in the way peers request/send a document. In Gnutella, a request is broadcast and rebroadcast until it reaches a peer with the content. In Freenet, a request is sent and forwarded to the peer that is most likely to have the content; the reply is sent back along the same path. APFS [78] addresses the mutual anonymity problem under the assumption that trusted centralized support does not exist. Peers may inform an (untrusted) coordinator about their willingness to be index servers. Both the initiator and the responder need to prepare their own covert paths. PAST [72], CAN [68] and Chord [86] represent a new class of P2P systems that provide a reliable infrastructure. One common property among these systems is that object placement can be entirely non-voluntary. As a result, when an object is placed on a node, that node cannot be held accountable for owning that object. The embedded routing mechanisms in these systems can also easily be adapted to covert paths for mutual anonymity. Publius [93] is a peer system for publishing documents; we refer to Chapter 8 for a discussion of this system.

Part III

Applications

Chapter 7

Anonymous e-mail

7.1 Description of the system

In anonymous e-mail systems, a user should be able to send a message without revealing his identity. Moreover, the recipient must be able to reply to that user without knowing the identity of the originator of the message. Three phases appear in mail systems:
• Registration phase (optional). An anonymous e-mail system may require registration prior to using the system. The user provides the system with information that may or may not point to her real identity. The system may also request from the user credentials issued by a (trusted) authority. During the registration phase, the user receives an access token (a password, a credential, . . . ), the possession of which has to be proven every time the system is used (cf. send phase).
• Send phase. A user sends an e-mail to a recipient. In anonymous e-mail systems, the identity of the sender is concealed from the recipient. The system may or may not require the user to present (or prove possession of) a valid access token (cf. registration phase). The system (or a trustee) may or may not be able to reveal the real identity of the sender of a message.
• Reply phase (optional). In this phase, the recipient replies to a user without knowing the identity of the originator of the message. The original message contains enough information for the system to deliver the reply to the originator.

7.2 Different roles/entities in the system

This section is split into two subsections. The first subsection deals with the entities that are part of the e-mail application (without any anonymity concerns).

The second subsection discusses the entities that are necessary to add anonymity and privacy to the e-mail application.

7.2.1 Entities involved in the communication

In general, four entities are important in e-mail applications. Note the similarities between the originator side and the recipient side.
Mail Client of originator: When a particular user types an e-mail message and presses the ”send” button, the Mail Client handles the message by converting it into the desired format and delivering it to the Mail Server on the client side. The converted message consists of the content of the message, the e-mail address of the sender, the e-mail address of the receiver, etc. Thus, the Mail Client is the component of the e-mail system that is closely related to the physical sender of the message; in other words, the Mail Client is the initiator of the message. When the recipient replies to that message, the originator side becomes the recipient.
Mail Server of originator: The Mail Server on the client side is responsible for sending the e-mail message into the network towards the Mail Server of the recipient of the message. This Mail Server is the actual sender of the message. It is also the receiver of reply messages.
Mail Server on recipient side: This is the server where the e-mail arrives after it has been sent by the Mail Server on the client side. It is in fact the recipient of the message. This Mail Server holds the e-mail until the Mail Client of the user (the recipient) asks for it. This Mail Server is also the sender of replies.
Mail Client on recipient side: The user (the recipient) starts this application when she wants to read her e-mail. The Mail Client will contact the Mail Server (on the recipient side) and retrieve the waiting e-mail messages. Possibly, she may want to reply to a particular message. When the user just reads her e-mail, the Mail Client on the recipient side remains passive: it is the recipient of the messages. When the user wants to reply to a particular message, the Mail Client behaves as a responder to the original message; hence, the Mail Client is the initiator of the reply message.

7.2.2 Entities involved in the anonymous communication

Mail Client of originator: As discussed in the previous subsection, the Mail Client initiates the message to the Mail Server. In an anonymous mail environment, the Mail Client functionality can be extended to provide some degree of anonymity and privacy. The goal of the anonymity service provided is that the initiator remains anonymous towards the receiver and that the initiator and the recipient cannot be linked during the transfer.
Central re-mailer: In an anonymous mail system, the e-mail can be redirected to a central re-mailer that adds some anonymity functionality to the e-mail and retains enough information to forward the e-mail to the Mail Server on the recipient side. The central re-mailer is able to link the initiator to the recipient of the message: it is an informed provider.
Chain of re-mailers: Another option is to inject the e-mail message into a chain of re-mailers. Each re-mailer in the chain forwards the message to the next re-mailer. The initiator of the message is known by the first re-mailer; the recipient of the e-mail is known by the last re-mailer in the chain. That way, the initiator and the recipient of the message cannot be linked to each other by any single re-mailer in the chain. In this scheme, each re-mailer can be seen as an uninformed provider. This is similar to the onion routing model.
Remark that not all entities involved in the communication have anonymity functionality. This is normal: it is possible that the initiator of an anonymous e-mail message has no Mail Server with anonymity functionality. Likewise, not every recipient of an anonymous e-mail message will have a Mail Server and/or Mail Client with anonymity functionality.

7.3 Anonymity requirements/properties

7.3.1 Requirements unrelated to anonymity

Bi-directional (optional): The e-mail application may require that a recipient of a message be able to reply to it. This has severe consequences for anonymous e-mail systems. The anonymous e-mail message needs to include sufficient information (e.g., in a trusted header field) so that the mail system can deliver the reply message (sent by the responder) to the initiator of the original message; this must be done without revealing the true identity of the initiator to the recipient.

Although this requirement is desirable, not all existing mail systems are bi-directional. A one-directional mail system can, for instance, be used to anonymously report abuse to a hot line.
Not real-time: E-mail is not a real-time application. Anonymous e-mail systems can benefit from this property when implementing their anonymity services. For instance, a re-mailer may delay the forwarding of a message so that eavesdroppers cannot link incoming messages with outgoing messages through timing attacks. Hence, global eavesdroppers cannot follow the path from initiator towards recipient. For more details, we refer to Deliverable 2.

7.3.2 Requirements related to anonymity

One-time anonymity: One-time anonymity of the initiator towards the recipient and towards attackers can be useful for e-mail applications. In particular, a user of a hot line (reporting abuse, a crime, . . . ) would prefer consecutive usages of the system to be unlinkable.
Persistent anonymity: An anonymous initiator uses the same pseudonym for each message. All e-mail messages sent by the same initiator are hence linkable. This property may be desirable in some cases (e.g., a user/patient consulting a help center/doctor, . . . ) where the recipient needs to assemble information from different messages to complete the whole picture. An anonymous recipient will presumably use a persistent pseudonym if that recipient is a provider of a particular service. The pseudonym can then be listed in yellow pages, etc.
Reply anonymity: If it is desirable that a recipient be able to reply to an anonymous message, then the message needs to include sufficient information (e.g., in a trusted header field) to allow a reply message (sent by the responder) to be delivered to the initiator of the original message.

7.3.3 Requirements related to controlled anonymity

The requirements related to controlled anonymity are orthogonal to the requirements related to anonymity in general (see the previous subsection). Controls in anonymous e-mail systems are useful to prevent abuse where possible (e.g., spamming), to prevent future abuse (e.g., banning a misbehaving user from the system), or to revoke the anonymity of the user (initiator or recipient) in case of severe misdemeanor or crime.
Controls with unconditional anonymity.
• Access control. Access control is only possible if the initiator must present (or prove possession of) a valid access token, obtained in a prior registration phase. Users without a valid token, or whose token has been invalidated (revoked), will no longer be able to use the system. Hence, access control can prevent future abuse. Note that not all systems provide a registration phase.
• Block sending e-mail messages:
– based on properties of the message, such as the size of the message;
– based on the address of the recipient: the recipient may have requested to block all anonymous e-mail messages sent towards that recipient;
– based on prior usage of the system. This control may prevent the initiator from sending spam, by setting an upper limit on the number of times the same message can be sent in a time interval. The system may also limit the volume (number of messages or total size of the messages) sent by an initiator (a minimal sketch of such a volume limit is given at the end of this subsection).
User-controlled conditional anonymity.
• Discourage overuse of the mail system. The mail system sets an upper limit on the number of e-mail messages a user may send. If the user exceeds this number, her identity will be revealed. This control requires a registration phase in which the system provides the user with a credential that can only be used a limited number of times. Even if different usages of the credential are unlinkable, using it more than the set limit will provide the system with enough information to retrieve the identity of the sender. Hence, revocation is only possible when the user abuses the system.

Trustee-controlled conditional anonymity. An anonymous e-mail system may also revoke the anonymity of the initiator or the recipient of an e-mail message with the help of a trustee. This requires that e-mail messages contain extra information (e.g., a trusted header field) that can only be interpreted with the help of a trustee.
• Revealing the identity of an anonymous originator. This is useful if an initiator is found guilty of sending illegal material, harassing a recipient, or performing other illegal acts.
• Revealing the identity of an anonymous recipient. This might be useful if the recipient is suspected to be an accomplice in a criminal offense.
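The volume limit mentioned earlier in this subsection can be sketched as follows (our own minimal illustration; the class and parameter names are assumptions). The limiter keys on an anonymous access token obtained at registration rather than on an identity, so it is compatible with unconditional anonymity:

```python
import time
from collections import defaultdict, deque


class VolumeLimiter:
    """Per-token limit: at most `max_messages` per `window` seconds."""

    def __init__(self, max_messages: int, window: float):
        self.max_messages = max_messages
        self.window = window
        self.sent = defaultdict(deque)   # token -> timestamps of recent sends

    def allow(self, token: str) -> bool:
        now = time.monotonic()
        times = self.sent[token]
        while times and now - times[0] > self.window:
            times.popleft()              # forget sends outside the window
        if len(times) >= self.max_messages:
            return False                 # block: volume limit reached
        times.append(now)
        return True
```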

7.4 Overview of existing solutions

Current anonymous e-mail systems are not built with anonymity control in mind. However, it seems not too difficult to add controls to them. In this section, we give an overview of anonymous e-mail systems and discuss how controlled anonymity can be added to them.

7.4.1 Type 0 Re-mailer: Penet

In a type 0 re-mailer system [90], the initiator forwards the message to the central re-mailer. This re-mailer erases the identity of the initiator and replaces it with a one-time identity or a pseudonym. The re-mailer then forwards the message to the recipient. A successful attack on the central re-mailer compromises the whole system; the central re-mailer is a trusted party. The central re-mailer controls the anonymity of the initiator and can revoke it in case of misuse. We trust that the central re-mailer will not revoke the anonymity of the initiator unless necessary.

7.4.2 Type 1 Re-mailer: CypherPunk

The initiator of an e-mail builds a nested set of encrypted messages [29, 91]. At each layer, the encrypted message consists of a set of instructions and another encrypted message. Each re-mailer in the chain first decrypts the message, executes the instructions, and forwards the result to the next re-mailer in the chain. The next re-mailer in the chain is only known after decryption of the received message. Because each re-mailer removes only one layer of encryption, the initiator and recipient are unlinkable from the point of view of a local eavesdropper or a particular re-mailer in the chain. We trust that the re-mailers do not collaborate; otherwise, the identity of the communicating parties can be revealed.
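A minimal sketch of this nested encryption is given below. For brevity, each re-mailer is modelled with a symmetric Fernet key from the Python cryptography package, standing in for the public keys a real re-mailer network would publish; all names are illustrative.

    import json
    from cryptography.fernet import Fernet

    # Hypothetical re-mailers; each holds its own decryption key.
    keys = {name: Fernet(Fernet.generate_key())
            for name in ("mix1", "mix2", "mix3")}

    def build_onion(message, recipient, chain):
        """Initiator: wrap the message in one encryption layer per re-mailer."""
        packet = json.dumps({"next": recipient, "payload": message})
        for name in reversed(chain):
            # Encrypt for re-mailer `name`, then record that the preceding
            # hop must forward the ciphertext to `name`.
            packet = keys[name].encrypt(packet.encode()).decode()
            packet = json.dumps({"next": name, "payload": packet})
        return json.loads(packet)

    def remailer_hop(name, blob):
        """One re-mailer: peel a single layer, learning only the next hop."""
        inner = json.loads(keys[name].decrypt(blob.encode()))
        return inner["next"], inner["payload"]

    head = build_onion("hello", "alice@example.org", ["mix1", "mix2", "mix3"])
    hop, blob = head["next"], head["payload"]
    while hop in keys:                    # traverse the chain hop by hop
        hop, blob = remailer_hop(hop, blob)
    print(hop, blob)                      # -> alice@example.org hello

Per-hop instructions (delays, padding) would travel alongside the next field; a type 2 re-mailer additionally reorders and pads these packets.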

7.4.3 Type 2 Re-mailer: MixMaster

Type 2 re-mailer systems [29, 91] consist essentially of the same building blocks as type 1 re-mailer systems. In addition, they provide techniques to counter attacks by global eavesdroppers: message reordering at each hop, padding messages to a fixed size, and replay prevention. As with CypherPunk, the re-mailers are trusted not to collaborate.

7.4.4 Type 3 Re-mailer: Mixminion

Mixminion [31] is a message-based anonymous re-mailer protocol with secure single-use reply blocks. Mix nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. The design adds directory servers that allow users to learn the public keys and performance statistics of participating re-mailers, and describes nymservers that provide long-term pseudonyms using single-use reply blocks as a primitive. It also integrates link encryption between re-mailers to provide forward anonymity. As with CypherPunk, the re-mailers are trusted not to collaborate.

Chapter 8

Anonymous publishing

8.1 Description of the system

The goal of a publication system is to let a user publish documents. Documents are stored on one or more servers, and users can retrieve them in a later phase.

• Registration phase (optional). In the registration phase, a user may gain access to the publishing system. For instance, a user may sign a contract not to misuse the publication system, after which he may obtain credentials for publishing documents.

• Publishing phase. In the publishing phase, the publisher puts material (documents) in the system anonymously. Controlled anonymity can be useful if the material is illegal or threatening towards someone.

• Retrieval phase. In the retrieval phase, a user wants to retrieve a document. This phase is closely related to web browsing (see chapter 9).

We focus on anonymous publishing as the counterpart of anonymous browsing. That is, with respect to anonymity, we primarily look at the problem of hiding the identity/location of the server on which a document is published. Censorship-resistant publishing is of secondary (but closely related) importance in our context. In such systems, documents are distributed among multiple servers, and hiding the identity/location of a (single) server is then less relevant.

8.2 Different roles/entities in the system

8.2.1 Entities involved in publishing

Publisher: A publisher puts a document online on one or more servers. The publisher is the initiator of the first phase in a publishing system.

Client retrieval application: When a particular user wants to consult an online document, he types the corresponding location in his retrieval application. The application (for instance, a web browser) handles the request by contacting the server from which the document can be downloaded. As a result of the request, the client application receives the requested document and shows it on the screen. As such, the client application is both the initiator and the sender of the request.

Server(s): This is/are the server(s) where the document is published. The servers are the receivers of a publication request, and both the receivers and the responders of a retrieval request. Upon request, a server returns the desired document.

8.2.2 Entities involved in anonymous web publishing

Publisher: When the publisher puts his documents on another server, he wants to remain anonymous during this phase. Therefore, the publisher must transform his request into an anonymous one. For instance, he can send the document to the publishing server(s) inside an anonymous e-mail message.

Location provider: A location provider publishes "nested encrypted locators" together with a specification of which documents can be requested through them. These locators contain enough information to contact the right server, but in such a format that it is impossible for the initiator to derive the server from this information. Instead of contacting the server directly, the locator is sent to a rewebber in the rewebber network. Because the location provider is nothing more than a public database, it is an uninformed provider.

Chain of rewebbers: The published nested encrypted locator identifies the first rewebber in a chain of rewebbers, to which the request is sent. This rewebber peels off one layer of the nested encrypted locator, revealing the second rewebber in the chain, and the request is sent on to that rewebber. This process continues until the request arrives at the last rewebber in the chain, which forwards the request to the server. The last rewebber in the network thus knows the location of the server that stores the document, but does not know the original request. The rewebbers in the network are therefore uninformed providers.

Note again that we focus here on systems in which documents are stored at a single location, which should be kept anonymous. In a censorship-resistant publishing system, documents are distributed among multiple servers, and it should be very difficult, or even impossible, to remove documents from the system once they have been published. The anonymity of the server is then of less importance (the document itself should of course not reveal the author's identity).

8.3 Anonymity requirements

8.3.1 Requirements related to anonymity

Persistent anonymity:

• Persistent anonymity of the publisher (author). The publisher uses the same pseudonym to publish different documents. These documents are linkable.

• Persistent anonymity of the server (location). In this case, documents at the same server can be linked. A person who wants to retrieve several documents from the same anonymous server can use the same locator (for instance, a reply onion structure) to retrieve them.

One time anonymity:

• One time anonymity of the publisher (author). The documents of the same publisher are not linkable. This can be useful if a user publishes several documents about his different interests; if an outsider could link these topics, this could compromise the identity of the publisher.

• One time anonymity of the server (location). The system uses a different locator for each document at the same server. The locations of different documents at the same server cannot be linked. This can be useful in some cases: if one document at a server is illegal, only the locator to that document needs to be made unavailable, while the other documents at that server can still be retrieved.

8.3.2 Requirements related to controlled anonymity

Publishers can misuse an anonymous publication system: a user can publish illegal documents, or wrong or insulting information. Controlled anonymity is useful in each of these scenarios. Control can result in:

• removing the document: this action is reversible. Remark that this is very difficult in censorship-resistant systems.

• revealing the identity (of the publisher/location): this action is irreversible.

Controls with unconditional anonymity.

• Access control. After the registration phase, a user may gain access to the publishing system. For instance, a user may sign a contract not to misuse the publication system, after which he may obtain credentials for publishing documents. A user can also be removed from the publication system: if the system detects that a user has published illegal documents, it can drop him.

• Block/remove publications. Documents can be blocked when someone offers them to the system; removal of publications happens afterwards. Blocking or removing documents can be based on the following properties:

  – the size and the contents of a document. For instance, if a document contains illegal material or wrong information, or insults or accuses a person, it should be removed or made unavailable.

  – the author of the document. The system can block documents from a malicious author (e.g., an author who has previously published viruses). This is only possible if the author is persistently anonymous.

  – the location of the document. This is possible if the server is persistently anonymous.

• Control presence of publications. If there is no reason to remove/block a document, the system can also check the availability of the document.

User-controlled conditional anonymity.

• Prevent overloading publication servers. A user of the system can pay for a number of anonymous publications. If that user exceeds this number of publications, his identity is revealed.

Trustee-controlled conditional anonymity. An anonymous publication system may also reveal the identity of the publisher or the location of a server with the help of a trustee. Revoking the anonymity of the publisher requires that publications contain extra information (e.g., a trusted header field) that can only be interpreted with the help of a trustee. Revoking the anonymity of a server requires that anonymous URLs contain extra information that can only be interpreted with the help of a trustee.

• Revealing the identity of the publisher. This is useful if the publisher is found guilty of publishing illegal material, harassing a person, or performing other illegal acts.

• Removing a document from the server. If a server contains an illegal document, the identity of that server can be revealed and the document can be removed from the server. However, this is very difficult for censorship-resistant systems, in which the document is distributed among several servers.

8.4 Overview of existing solutions

The existing solutions for anonymous publishing are not designed to control the anonymity of the publisher. In this section, we describe these solutions and indicate how they could be adapted in order to control the anonymity of the publisher.

8.4.1 Eternity

In this system [71], the anonymity lies in the publishing phase. The publisher sends the document to the Eternity server in an anonymous way. The Eternity system replicates the document on a series of servers so that it is practically impossible to remove it from all these servers. Users requesting the posted document cannot retrieve the identity of the publisher. Because of the redundancy, the publisher can be nearly certain that the document is available and remains online. The contents of the document can be controlled in the publishing phase (by screening keywords). Moreover, the anonymity of the user may be revoked, after which he can be denied further access to the publication system. Revocability after the publishing phase is more difficult; moreover, it is practically impossible to remove the document from all the servers.

8.4.2 Publius

Publius [93] rests on the same fundamentals. It additionally provides mechanisms that prevent outsiders from updating a document: only the publisher can do so. The publisher is responsible for replicating the document and distributing the keys needed to decrypt it among the servers. Because Publius has the same fundamentals as the Eternity system, controlling the anonymity is similar.

8.4.3 TAZ Servers

In this system [45], the publisher puts the document on one web server, which may or may not be his own. The anonymity lies not in the publishing phase but in the browsing phase. The TAZ system consists of a TAZ server and a network of rewebbers. The TAZ server provides a public database that maps virtual hostnames onto nested encrypted URLs; as such, TAZ servers do not increase the security of the system. Next to virtual hostnames, pseudonyms (for authors) can be added. An initiator sends the nested encrypted URL through a chain of rewebbers, each of which peels off one layer. The document itself is also encrypted at the web server, to prevent a search engine attack. A trustee can control the pseudonyms that are linked to the documents. If illegal documents are published, the trustee can revoke the anonymity of the user and ask him to remove the document from his web server. If the system has no trustee, the contents of the document can be controlled before the encrypted URL is put on the TAZ server: if the document contains illegal data, the URL is not made available on any TAZ server. We trust that the rewebbers do not collaborate; otherwise, the system is compromised.

Chapter 9

Anonymous browsing

9.1 Description of the system

The goal of this type of system is to provide a user with an environment for anonymous web browsing: the initiator of a web browse request remains anonymous. The use of the system can also be controlled. We distinguish two phases in web browse systems.

• Registration phase (optional). In this phase, a user registers with the web browse system. After he is successfully registered, he can send web browse requests.

• Web browse phase. The user sends requests for web documents to web servers anonymously. The system can control the anonymity of the user.

9.2 Different roles/entities in the system

9.2.1 Entities involved in web browsing

Client web browse application: When a particular user wants to consult an online document, he types the corresponding URL in his web browser. The web browser handles the request by contacting the web server from which the document can be downloaded. As a result of the request, the client web browser receives the requested document and shows it on the screen. As such, the client web browser is both the initiator and the sender of the request.

Web server(s): This is/are the web server(s) where the document is published. The web servers are the receivers and recipients of a web publication request; moreover, they are both the receiver and the responder of a web browse request. Upon request, the web server returns the desired document.

9.2.2 Entities involved in anonymous web browsing

Client web browse application: To provide an anonymous web browse system, the client web browser must be extended with some extra anonymity functionality. Instead of sending a request to the web server directly, the request is sent to (an)other server(s), so that the client is hidden from the receiver of the request.

A central server: In an anonymous web browse system, a central server can provide an anonymity service. The web browse request is sent to the central server, which strips the initiator's identity and forwards the request to the receiver of the request. The receiver answers the request to the central server. This solution closely parallels the single central re-mailer in anonymous mail systems. The central server is an informed provider, because it knows everything about the identity of the initiator of the request.

Chain of mixes: Instead of sending the request to one central server, the request can be sent through a chain of mixes. A mix in the chain only knows the next mix in the chain to contact. As such, no mix in the chain can discover the full path between the initiator and the receiver of the request. The first mix in the chain knows the initiator and the identity of the next mix, but nothing about the receiver of the request. Analogously, the last mix in the chain knows the previous mix in the chain and the receiver of the request, but nothing about the initiator. So, the initiator and the receiver cannot be linked if the mixes do not collaborate; the mixes in the chain are thus uninformed providers. This system has some similarities with the chain of re-mailers.

Group of participants: Instead of sending a web browse request through a chain of mixes, a request can be sent to another initiator of other web browse requests. This process can be repeated, so that the request passes through a chain of randomly chosen initiators before it is actually forwarded to the web server. In this solution, the first initiator in the chain is the real initiator of the request, but the second initiator in the chain does not know whether the previous initiator is the real initiator or not, etc. The web server can only guess that the request came from the last one in the chain; it does not know the real initiator of the request. All initiators register with an authority server.

An authority server (= trustee): The system of a chain of initiators only works well if no compromised initiators register. Therefore, the registration server acts as an authority server. This type of server is actually uninformed, because it does not know anything about the actual web browse requests; it only accepts or rejects a new user and informs the other users in case of acceptance.

9.3 Anonymity requirements

9.3.1 Requirements unrelated to anonymity

Bi-directional: Web browse requests are bi-directional: a request is sent from a web browser to a web server, and that web server responds to that particular request. The response can travel along the same connection as the original request, because the response happens immediately after the request. The web server therefore does not have to possess enough information to contact the initiator afterwards (i.e., the web server does not have to set up an anonymous connection to the initiator). This is not true for mail systems, where the responder must have enough information about the initiator of the message to reply to that initiator.

Real time: In contrast to mail systems, web browse requests are handled in real time. The response happens immediately after the request. Hence, mixes in the network cannot use latency as a security mechanism.

9.3.2 Requirements related to anonymity

One time anonymity: The intention of an anonymous web browse system is that the initiator remains anonymous during the web browsing process. The initiator may not be identifiable, and different web requests must be unlinkable.

Persistent anonymity: If the initiator wants to keep the same pseudonym over several web browse sessions, persistent anonymity is required. This may be useful if a user wants to log in at the same site several times with the same pseudonym.

Note that these types of anonymity parallel those of anonymous mail systems, where the intention is also that the initiator of a mail remains anonymous towards all other parties (recipient and attackers). Reply anonymity, by contrast, is an inherent property of an anonymous web browse system: the answer to the web browse request is returned along the same connection as the request itself, so the web server does not need to know how to reply to an initiator in the future.

9.3.3 Requirements related to controlled anonymity

Controlled anonymity is useful in web browse systems. We distinguish two types of control: user controls and request controls.

• User controls. It is useful to control the users of the system. For instance, only registered users may access the web browse system, and malicious users are removed from the system.

• Request controls. A web browse system can also control the web requests. If a web request to a malicious website is made, the system can deny it.

User controls and request controls are difficult to separate. If the system detects a web request to a malicious website (by controlling the web requests), the identity of the initiator of the request can be revealed (by controlling the anonymity of the user). We discuss these types of control in relation to the model for controlled anonymity.

Controls with unconditional anonymity. In a controlled web browse environment, access to the system as well as web browse requests can be controlled.

• Access control. Access control is useful to prevent malicious users from accessing the web browse system. It is only possible if the initiator must present (or prove possession of) a valid access token, obtained in a prior registration phase. Users without a valid token, or whose token has been invalidated (revoked), will no longer be able to use the system. The token can be revoked if the user visits illegal websites. Hence, access control can prevent future abuse. Note that not all systems provide a registration phase.

• Block web requests:

  – based on the address of the recipient (i.e., the web server). The recipient may have requested to block all anonymous web requests sent towards that web server. The web browse system can also prevent users from requesting malicious information, by keeping a list of malicious web servers and denying requests to each of these servers.

  – based on prior usage of the system. If the system does not want a single user to overuse the network, it can take measures to prevent this. For instance, the system can put an upper bound on the number of web requests per user within a time interval and refuse additional requests (a sliding-window sketch of this control follows at the end of this subsection). The system can also limit the size of uploaded/downloaded material, or the duration during which an initiator may send web browse requests. These controls require a registration phase where the system provides the user with a token; remark that different web requests must be linkable by the system.

User-controlled conditional anonymity.

• Discourage overuse of the web browse system, by limiting the number of web requests. The web browse system sets an upper limit on the number of web requests a user may send. If the user exceeds this number, his identity will be revealed. This control requires a registration phase where the system provides the user with a credential that can only be used a limited number of times. Even if different usages of the credential are unlinkable, using it more than the set limit will provide the system with enough information to retrieve the identity of the sender.

Trustee-controlled conditional anonymity. An anonymous web browse system may also revoke the identity of the initiator of a web request with the help of a trustee. This requires that web requests contain extra information (e.g., a trusted header field) that can only be interpreted with the help of a trustee.

• Revealing the identity of an anonymous initiator. If an initiator sends requests to illegal web servers (illegal music, child pornography, ...), the identity of the initiator can be revealed. A trusted party can revoke the anonymity of the user; the identity of a user is only revoked in case of malicious behaviour.
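The request-volume control mentioned above can be sketched as a simple sliding-window limiter, assuming that requests by the same (persistent) pseudonym are linkable; the parameter values and names below are illustrative.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60.0
    MAX_REQUESTS = 30

    _history = defaultdict(deque)  # pseudonym -> timestamps of recent requests

    def allow_request(pseudonym, now=None):
        """Refuse requests beyond the per-pseudonym quota for the window."""
        now = time.monotonic() if now is None else now
        q = _history[pseudonym]
        while q and now - q[0] > WINDOW_SECONDS:  # drop expired timestamps
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            return False                          # block this request
        q.append(now)
        return True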

9.4 Overview of existing solutions

The current solutions for anonymous web browsing do not provide controlled anonymity. In this section, we describe these solutions and indicate how controlled anonymity can be added to each system.

9.4.1 LPWA

The LPWA [58] web browse system consists of one central server that maps user names, passwords, and e-mail addresses to corresponding pseudonyms. The receiver of a request thus does not know the identity of the initiator. A local eavesdropper between the initiator and the LPWA server can, however, link the initiator and the receiver of a web browse request; two variants try to tackle this problem: LPWA proxies and LPWA servers behind a firewall. The central LPWA server receives all incoming requests and performs the mappings; it is a trusted party. The functionality of this server can easily be extended in order to control anonymity. If an initiator browses to illegal websites, the central server can detect this and take countermeasures: the anonymity of the initiator can be revoked, or he can be dropped from the anonymous web browse system.
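The pseudonym mapping can be sketched as a deterministic per-site alias function: from one user master secret, the server derives a distinct but stable username, password, and e-mail alias for every site, so that a user reappears under the same pseudonym at a given site while different sites cannot link his pseudonyms to each other. This is a rough illustration of the idea only, not LPWA's actual construction, and all names are assumptions.

    import hashlib
    import hmac

    def alias(user_secret, site, field):
        """Derive a stable, site-specific pseudonymous field value."""
        tag = hmac.new(user_secret, f"{field}@{site}".encode(), hashlib.sha256)
        return tag.hexdigest()[:12]

    secret = b"example-user-master-secret"
    print(alias(secret, "shop.example.com", "username"))
    print(alias(secret, "shop.example.com", "email") + "@lpwa.example")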

9.4.2 Web Mixes

Web mixes [6] consist of a chain of mixes through which a request is sent. They basically use the same technique as onion routing: each web mix peels off one layer of the request. In contrast to onion routing, however, this system is built at the application level. We trust that the web mixes do not collaborate; otherwise, the identity of the communicating parties can be revealed.

9.4.3 Crowds

Crowds [70] consists of a set of jondos that act both as initiators of requests and as participants of the group, called the crowd. An authority server adds candidates to the crowd or rejects them; it also informs the other crowd members of an added/rejected crowd member. A jondo sends a request to a random crowd member (i.e., a participant), which decides either to submit the request to the receiver or to forward it to another crowd member. In summary: in a first phase, a jondo asks the authority server to join the crowd; after that, the jondo can play the role of initiator or intermediate jondo during web browse requests. The Crowds system controls a jondo when it is joining the crowd: a jondo must have credentials to become a participant of the group. Access to the system is thus controlled.
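The probabilistic forwarding rule at the heart of Crowds can be sketched in a few lines. The forwarding probability p_f is a system parameter (3/4 in the Crowds paper); the representation of jondos and requests below is purely illustrative.

    import random

    P_FORWARD = 0.75  # forwarding probability p_f

    def route_request(crowd, request):
        """Return the random walk a request takes before being submitted
        to the web server by the last jondo on the path."""
        path = [random.choice(crowd)]          # initiator picks a random jondo
        while random.random() < P_FORWARD:     # biased coin flip at each hop
            path.append(random.choice(crowd))  # forward to a random member
        return path                            # last jondo submits the request

    crowd = ["jondo%d" % i for i in range(10)]
    print(route_request(crowd, "GET http://www.example.org/"))

Because every jondo forwards requests on behalf of others, an observed sender is the true initiator only with bounded probability, which is what gives the initiator plausible deniability.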

Chapter 10

Electronic payments

10.1 Description of the system

An electronic payment system is a crucial component of electronic commerce. To protect the users' privacy, anonymity must be provided in the overall transaction, including the electronic payment. In an electronic payment system we can identify a buyer (payer), a seller (payee), and a bank. As explained in Deliverable 2, an electronic payment system consists of three basic phases:

• Withdrawal: the payer contacts the bank and receives electronic money in exchange for money from his bank account; the obtained electronic money should be anonymous, that is, it should not be linkable to the identity of the payer (a blind-signature sketch of this follows the list).

• Payment: the payer transfers an amount of electronic money to the payee; this transfer should be anonymous, that is, the payer should not disclose his identity towards the payee.

• Deposit: the payee contacts the bank and sends electronic money in exchange for money credited to his bank account; the bank should not be able to retrieve the identity of the payer from whom the payee received the electronic money.
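A classic way to make withdrawn coins unlinkable is a blind signature: the bank signs a coin without ever seeing it. The textbook-RSA sketch below (tiny parameters, no padding) is purely illustrative and not secure; it only shows the blinding and unblinding steps.

    import hashlib
    import secrets
    from math import gcd

    # Toy bank key; real systems use moduli of at least 2048 bits.
    p, q = 1000003, 1000033
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def withdraw(coin):
        """Payer blinds the coin, the bank signs blindly, payer unblinds."""
        m = int.from_bytes(hashlib.sha256(coin).digest(), "big") % n
        while True:  # pick a blinding factor coprime to n
            r = secrets.randbelow(n - 2) + 2
            if gcd(r, n) == 1:
                break
        blinded = (m * pow(r, e, n)) % n        # all the bank ever sees
        signed_blinded = pow(blinded, d, n)     # bank signs blindly
        sig = (signed_blinded * pow(r, -1, n)) % n  # payer removes r
        return m, sig

    m, sig = withdraw(b"coin-serial-1")
    assert pow(sig, e, n) == m  # anyone can verify the bank's signature

Because the bank only ever sees the blinded value m * r^e mod n, it cannot later link the signed coin to the withdrawal transcript; this is exactly the linkability that the control mechanisms discussed below must reintroduce in a selective way.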

10.2 Different roles/entities in the system

As explained in Deliverable 2, during the payment phase, the different entities map to the following generic roles: the payer is an initiator, the payee is a responder, and the bank should be an uninformed provider. During withdrawal and deposit, the bank is the responder; the payer is the initiator during withdrawal, and the payee is the initiator during deposit.

In the context of anonymity control, an additional important role is that of the trustee. This trusted party may have the ability to disclose information about the identity of the payer or payee under specific circumstances, but should not be able to play any other role in the payment system. We consider not only outside attackers (e.g., eavesdroppers), but focus particularly on inside attackers (i.e., a malicious payer, payee, bank, registration authority, or trustee).

10.3 Anonymity requirements/properties

In the context of unconditional anonymity, the following anonymity requirements are commonly considered:

• Payer anonymity: the payer should remain anonymous.

• Payer untraceability: it should not be possible to trace the payer based on the payment.

• Payment untraceability: it should not be possible to obtain information about the payments performed by a certain payer.

• Payment unlinkability: it should not be possible to link different payments made by the same payer.

These anonymity properties should all be durable, unless some form of anonymity control (in particular, conditional anonymity) is implemented. Note that anonymity of the payee does not seem to be considered in the existing anonymous electronic payment systems, although it may be provided in an indirect way. Anonymity of the payee is not really needed in a traditional electronic commerce scenario, but becomes important in P2P settings, ad-hoc networks, and mobile commerce scenarios.

10.4 Anonymity control requirements/properties

The anonymity of electronic money can be misused by criminals, as discussed by von Solms and Naccache [92]. Besides double/over-spending, possible fraud, misuse, and attacks include: money laundering, illegal purchases, blackmailing, extortion, bank robbery (i.e., stealing the bank's secret keys, or forcing the bank to engage in a non-standard withdrawal protocol, called 'blindfolding'), etc. A criminal could commit the 'perfect crime' by spending coins obtained through a hostage: the bank would never be able to tell later on which coins originated from the criminal. To make things worse, the electronic world allows actions that are not feasible in the physical world, such as making many small purchases in a short time in order to hide a large monetary exchange (compare this, for example, to paying for a certain deal with a truck loaded with coins). In order to make anonymous electronic cash systems acceptable to users as well as governments and banks, mechanisms for controlled anonymity are needed.

10.4.1 Anonymity control mechanisms

We can identify the following anonymity control mechanisms for anonymous electronic payments:

Controls with unconditional anonymity

• Amount-limitedness. Governments and banks are particularly concerned about misuses involving large amounts of money. A trivial solution therefore is to limit the amount of anonymous cash users can withdraw and spend. Just as in the physical world, this limitation should not be a problem for legitimate users. See Sect. 10.5.9.

• Non-transferability. Many advanced electronic cash systems are made transferable, i.e., electronic coins can be transferred from one user to another without depositing them; in other words, coins received during a payment can be spent further by the receiver without interaction with the bank. This emulates the way physical cash can be handled. For transferable electronic cash systems, we refer among others to Okamoto and Ohta [64] and Chaum and Pedersen [25]. The NetCash-like schemes [59, 81], which solely depend on anonymous communication, trivially offer transferability. While in the physical world it is difficult and costly to transfer large amounts of cash, in the electronic world coins can easily be transferred anonymously in large amounts. This would, for example, facilitate money laundering. Non-transferability would therefore be a good preventive feature against anonymity misuse. See Sect. 10.5.9.

• Auditability. In an auditable payment system there can be no valid money that is not known to an auditor. This protects against bank robbery attacks. See Sect. 10.5.9.

User-controlled conditional anonymity

• Double-spending detection. Anonymity should be removable when coins are double-spent. We already discussed this mechanism during the first half of the APES project.

• Self-escrow and auditable tracing. There are revocable payment schemes in which the user himself can initiate the tracing of his coins, or in which the user has knowledge about his coins being traced. This protects against blackmailing and illegal tracing, respectively. See Sect. 10.5.8.

Trustee-controlled conditional anonymity

• Revocable or escrowed cash schemes. Revocability is seen as the most common way of controlled anonymity. Revocability allows tracing anonymous coins, or tracing the original owner of a coin in the case of suspicious transactions. It can, for example, discourage money laundering and illegal purchases.

10.4.2 Specific requirements for revocable anonymous payment systems

Revocable anonymous electronic payment systems should meet many requirements. Davida et al. [33] list the following set:

1. Anonymity. Revocable electronic money should be anonymous and unlinkable for legitimate users.

2. Revocation. Anonymity should be revocable, but only by a trustee or judge, and only when necessary.

3. Separation of power. The trustee should not have any power other than tracing. He should, for example, not be able to forge money.

4. No framing. The bank or trustee should not be able to impersonate users, not even if they cooperate. For example, the bank should not be able to double-spend money on behalf of the user (or present proof thereof) in order to convince the trustee to trace the coins of that user.

5. Selectivity. The anonymity should be revocable only for the transaction for which a trustee's order is given. Tracing should, for example, not be performed by simply revoking the anonymity of all transactions.

6. Efficiency. Electronic payment systems should always be efficient. For revocable anonymous payment systems in particular, a trustee should be involved only when revocation is required, remaining offline otherwise.

7. Crime prevention. Anonymity revocation should not motivate crimes that are more serious than those it protects against. For example, if a criminal forces a victim to withdraw electronic coins from the victim's account, the criminal should not be motivated to kill the victim (i.e., to prevent him from immediately reporting the crime so that the coins can be traced, in order to be able to spend the money before the murder is discovered).

Owner and coin tracing

Two types of anonymity control models are also discussed in [33], more specifically two types of tracing mechanisms:

• An owner tracing protocol exposes the identity of the owner of a specific coin. In this kind of protocol, the bank gives the trustee the information it received during the deposit protocol. The trustee then returns information which can be used to identify the owner. Owner tracing prevents money laundering and illegal purchases, as it allows the authorities to identify the involved users. This is also called 'after-the-purchase' tracing.

• A coin tracing protocol traces the coins that originated from a specific withdrawal. In this kind of protocol, the bank gives the trustee the information about a specific withdrawal. The trustee returns information that will appear when the coin is spent. Coin tracing allows the authorities to find the destination of suspicious withdrawals. It can thus be used to identify the seller of illegal goods, and prevents the blackmailing problem. This is also called 'before-the-purchase' tracing.

Note that an anonymity-controlled system must include both means of tracing in order to satisfy the selectivity requirement (requirement 5 in the list above).

10.5 Overview of existing systems

For an overview of electronic payment systems, both anonymous and identifiable, we refer to [27] and Deliverable 2 [79]. Here we provide an overview of existing electronic payment systems with anonymity control; in this respect, we also refer to [27]. Other surveys of electronic payment systems with anonymity control can be found in Davida et al. [33] and Petersen and Poupard [65].

10.5.1 Different schemes for revocable anonymous payment

Numerous schemes for revocable anonymous electronic payment systems have been proposed. Below we give a brief and informal description of the schemes and concepts presented in the literature. We follow the classification of [65] and distinguish between three kinds of schemes:

• Unlinkable. Neither the bank nor the trustee can link different payments of the same user, unless they cooperate.

• Linkable. The user has a pseudonym which is known to the bank. Different payments conducted under the same pseudonym are thus linkable by the bank.

• Trustee linkable. The user has a pseudonym which is known only to the trustee. Blinding of coins is performed by the trustee. Different payments conducted by the same user can be linked by the trustee, but not by the bank.

Note that some schemes only allow owner and coin tracing. We explicitly mention which schemes also protect against bank robbery and extortion attacks.

10.5.2 Unlinkable revocable anonymous payment systems

Brickell, Gemmell and Kravitz [12] extend two existing schemes for electronic cash, among which Brands' scheme [9]. Revocability is enabled during the set-up of a user. An interactive protocol between user, bank, and (two) trustees takes place in which the user gives information to the trustees that will allow owner tracing, and in which the trustees prove to the bank that they have received the necessary information (information is needed and stored per coin, so that in practice the interactive protocol should be run during withdrawal). The scheme only allows coin tracing; owner tracing is only possible to the extent that it can be verified whether a particular suspected owner has withdrawn a given coin.

Stadler, Piveteau and Camenisch [83] proposed fair blind signatures, a new type of blind signature scheme with the additional property that, with the help of a trusted entity, it is possible to link a message-signature pair and the corresponding protocol view of the signer. Two different types can be defined. Type I fair blind signatures allow a judge to find the message-signature pair, given the signer's view of the protocol (which is in fact coin tracing). Type II fair blind signatures allow a judge to identify the sender, given a specific message-signature pair (i.e., owner tracing). Note that [89] points out some weaknesses and vulnerabilities in one of the fair blind signature schemes proposed in [83]; more advanced and secure fair blind signature schemes have been proposed in [1, 82]. The trustee is in most schemes not involved during the signing process. Fair blind signature schemes exist on a per-coin basis or with a registered pseudonym.

Franklin and Yung [38] construct blind weak signatures. Weak signatures are signatures which require the participation of a third party during signing and verification; blind weak signatures are the blinded extension of this primitive. They were in fact not intended for revocation purposes, but the signer and the on-line third party could combine their views of the signing protocol to perform tracing.

A stronger tracing model, which survives the bank robbery attack, was proposed by Jakobsson and Yung [49], who present the concept of dual verification signatures. Similar to blind weak signatures, both bank and trustee (called 'ombudsman') sign coins (which again prevents the bank from issuing coins by itself). Verification of the signatures can however be done without the help of the trustee or the bank (off-line payment and deposit). In case the shops have been notified of a bank robbery, the scheme reverts to on-line payments and deposits. During withdrawal, the trustee performs unblinding (i.e., decryption) of the coins.

Camenisch, Maurer and Stadler [16] (see also Camenisch [15]), Frankel, Tsiounis and Yung [36, 37], and Davida, Frankel, Tsiounis and Yung [33] present traceable payment schemes with fully off-line trustees. Essentially, at the withdrawal stage the user encodes the coin in such a way that the trustee will be able to provide the bank with extra information that allows coin and/or owner tracing. The user proves this encoding to the bank; the trustee is off-line in this process. The schemes differ in efficiency. In particular, owner tracing in [16] results in a link to the withdrawal transcript, while the other schemes provide links directly to the account that the user has established with the bank.

Additionally, the notion of 'distress cash' is introduced in [33]. Here, a covert channel exists between the user and the bank with which the user can warn the bank that he is being blackmailed, without the criminal noticing. That is, the user has two different PIN codes to authenticate: one for normal operation, and a special one. When the user is forced to withdraw coins, he will use the special one, which automatically triggers tracing of the coins. With respect to the effectiveness of this approach, we remark that, although the criminal might not know whether the normal or the special PIN code is used, he will try to "convince" the user to use his normal PIN code.

10.5.3 Linkable revocable anonymous payment systems

Camenisch, Piveteau and Stadler [18] suggested a fair on-line system (payment and deposit at the same time) based on [17]. Users have two types of accounts, personal and anonymous. An anonymous payment is a transaction from an anonymous account to a shop's account. The core of the system consists of an efficient method for transferring money from a personal account to an anonymous account without revealing the correspondence between them (except to the judge). Perfect unlinkability of personal and anonymous accounts is realized by using a blind signature scheme. To achieve fairness, the judge knows the correspondence between the two accounts, and coins withdrawn from a personal account can only be deposited into the corresponding anonymous account. Although the customer's identity is not revealed, the bank can still link different transactions when the same anonymous account is used for different payments. However, the customer can use different accounts for transactions that should not be linked. This system provides owner tracing, as the judge can at any time find the origin of a transfer given the anonymous account number, and coin tracing, as the judge can find the destination of a transfer given the personal account number.

The scheme of Petersen and Poupard [65] is very similar to the previous scheme. They however add the notion of blacklists (e.g., of blackmailed pseudonyms) to counter extortion attacks.

Fujisaki and Okamoto [40] describe a scheme in which users register a public/private key pair with a set of trustees. This pseudonymous key pair is certified by the trustees, who can cooperate on request to retrieve the identity of a user. The public key is included in the electronic coin that is blindly issued by the bank; the private key is needed to spend the coin. Additionally, each electronic coin includes a unique random number which is registered with the trustees as well. To achieve the latter, this random number is encrypted for the trustees and sent to the bank during withdrawal; the encrypted numbers are later forwarded by the bank to the trustees. Registering all coins with the trustees in this way is meant as a countermeasure against extortion of the bank: if the bank reports that it has been extorted, the trustees recover all coin numbers, and forged coins will not be in this list. Payment must from then on be performed with on-line trustees, so that forged coins are detected and distinguished from valid coins (i.e., forged coins will not be present in the trustees' list of valid coins).

10.5.4 Trustee linkable revocable anonymous payment systems

M'Raïhi [61] proposes the following scheme. Users agree on a shared secret with the bank (the bank knows the correspondence identity–secret). The bank certifies the shared secret, so that the user can anonymously associate a pseudonym with the trustee (who then knows the correspondence secret–pseudonym). During withdrawal, blinding of the coins is delegated to the trustee: blinded coins are signed by the bank and unblinded by the trustee. The trustee thus has a list of coins and the associated secret–pseudonym pairs.

Juels [53] proposes a system that involves trustee tokens. The user registers with the trustee and establishes two shared secrets with him. The shared secrets are used as the seed of a pseudo-random number generator with which the trustee calculates a series of blinding factors and serial numbers. Each coin includes the identity of the user and a serial number, and is encrypted with the public key of the trustee. The coins are blinded by the trustee. The user then receives the trustee tokens in the form of MACs, generated by the trustee, on each of the blinded coins (trustee and bank share a secret; alternatively, digital signatures could be used). At withdrawal, the blinded coins can be signed by the bank with an ordinary blind signature scheme. The trustee tokens prove to the bank that the blinded coins encode the information needed for tracing. The mechanism of trustee tokens can be used in on-line payment systems like ecash, but also in off-line payment systems. A restrictive blind signature is not really needed, since in case of double-spending the identity can be traced with the help of the trustee. Owner tracing is straightforward in this system. Coin tracing can be done very efficiently, as the trustee is able to regenerate all the coins of a specific owner by using the pseudo-random number generator and the seed corresponding to that user.

10.5.5 On-line vs. off-line trustees

An important efficiency property of a revocable electronic payment scheme is whether or not the trustee is involved in certain stages of the scheme. According to the classification in [65], the following situations can be distinguished:

• Trustee on-line at initialization. In the schemes of [16, 33, 36, 37], the trustee is only involved at the initialization of the system, remaining off-line otherwise (except for tracing).

• Trustee on-line at user registration. In the schemes of [18, 40, 65], the user has to interact once with the trustee to register his pseudonym. Also in the schemes of [12, 53], the user in principle interacts once with the trustee. However, this interaction involves a part of the withdrawal protocol, which can be done off-line with respect to the bank. The withdrawal of a specific amount of coins can thus be prepared during registration; if the user wants to withdraw more coins, he must contact the trustee again.

• Trustee on-line at each withdrawal. In the schemes of [49, 61], the trustee must be involved at each withdrawal.

• Trustee on-line at payment (after extortion). In the schemes of [40, 49], after the bank has reported a bank robbery or an extortion attack, the trustee must be on-line at each payment. In [40] the trustee has a list of all valid coins (i.e., coins not obtained via extortion), which has to be consulted on-line. In [49] both bank and trustee are needed to verify the coins.

• Trustee off-line at payment (after extortion). The trustee in the scheme of [65] is claimed to remain off-line after extortion of a user or the bank. 'White lists' and 'black lists' are deployed. These lists have to be sent to the shops, so that the trustee does not have to be contacted at the time of payment. However, the lists have to be updated regularly, and there will always be a short interval of time in which payments with extorted coins will succeed. (Note that this is very similar to the problem of certificate revocation in PKIs: black lists, called CRLs, can be used, or the verifier can go on-line and verify the validity of the certificate via OCSP.)

10.5.6 Lowering the level of trust in trustees

In all revocable anonymous payment schemes, the trustee should be trusted (hence the name!). In practice, however, it might not be desirable to have to trust one single party; moreover, this party must be trusted by users, banks, and government. Trust can be distributed among multiple parties, who need to cooperate in order to revoke anonymity. The scheme in [12] therefore has two trustees who both have to participate; one scheme of [40] also incorporates multiple trustees who all have to cooperate. On the one hand, having multiple trustees increases security: a single untrustworthy trustee is no longer capable of illegitimately tracing payments. On the other hand, if one untrustworthy trustee refuses to cooperate, tracing is not possible. For this reason, threshold schemes have been introduced. Such schemes have multiple trustees, of which a certain minimum number, the threshold, should be trustworthy; this subset can cooperate to perform tracing or another protocol at some stage of the payment system. Examples of threshold schemes are the fair blind signatures of Juang and Lei [52], the scheme of Jakobsson and Yung [50], which also uses a kind of fair blind threshold signatures [51, 48], and a scheme by M'Raïhi and Pointcheval [62] based on verifiable secret sharing. Instead of distributing trust among multiple trustees, mechanisms can also be put in place that control the behaviour of a trustee. Brickell et al. [12] suggest the use of an electronic trustee: a tamper-resistant device that automatically alerts the user if the secret stored by the trustee is released or compromised. Tracing can also be made auditable, see Sect. 10.5.8. A toy secret-sharing sketch of the threshold idea is given below.
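The threshold idea can be illustrated with plain Shamir secret sharing of a tracing key: the key is the constant term of a random polynomial of degree t - 1, each trustee receives one point on the polynomial, and any t points reconstruct the key by Lagrange interpolation while fewer reveal nothing. This toy sketch is illustrative only; the schemes cited above use verifiable and considerably more elaborate constructions.

    import secrets

    P = 2**127 - 1  # prime modulus

    def share(secret, n, t):
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 yields the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    trustee_shares = share(123456789, n=5, t=3)
    assert reconstruct(trustee_shares[:3]) == 123456789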

10.5.7 Revocation in anonymous communication based systems

The anonymity of electronic payment systems can be based solely on anonymous communication. Revocation is also possible in the mix-based scheme of [47]; the tracing operations can be based on the mix en-/decryption scheme. A payment order of a specific payer can be traced (coin tracing) by simply decrypting (only) the encrypted payment order õ in question. The payer of a specific payment order can be traced (owner tracing) by reversing the encryption of o to õ step by step (how this must be done depends on the type of mix network that is used).

10.5.8 Self-escrow and auditable tracing

In the schemes for revocable anonymous electronic payment described in the previous sections, third parties are involved that must be trusted. The required level of trust in an individual trustee can be lowered by distributing trust among several trustees, but a minimal level of trust always remains. This section describes systems without trustees.

Pfitzmann and Sadeghi [67] introduce the concept of self-escrowed electronic cash. Their proposed system does not rely on a trusted third party for revocation; instead, it is the user himself who is able to revoke his electronic cash. This scheme is intended as a countermeasure against blackmailing: the user has control over the tracing of his money and does not need to trust any other party. A system with self-escrowed cash can be derived from existing schemes with off-line trustees; the scheme of [67] in particular follows [36]. In this scheme, it is impossible for the blackmailer to withdraw electronic coins on behalf of the user without encoding the identity of the user, both for double-spending and for revocation purposes. The user plays the role of the trustee and has the necessary keys to trace his own coins. After having been blackmailed, the user should of course still be able to revoke his cash: if his tracing keys are destroyed, or if he is killed, nobody will be able to trace the coins anymore.

The previous scheme is obviously not applicable against misuse by the user himself. Kügler and Vogt [54, 55, 56] therefore present schemes with auditable or fair tracing. Tracing is done by the bank without the help of trustees. These schemes allow legal tracing but prevent illegal tracing, as tracing is auditable. More precisely, a user is able to find out (and prove to a judge) whether he has been traced. If he has been traced, and this was done without permission (of the judge, or of himself), the bank can be prosecuted. In the scheme of [56], the bank provides users and merchants with certificates that are updated periodically. The certificates contain public tracing keys, for each user (coin tracing) and for each merchant (owner tracing) respectively; these keys are used during withdrawal and payment. The certificates also contain an encrypted corresponding private key. Tracing has to be enabled beforehand, i.e., the public and private 'tracing' keys are either real tracing keys or dummy keys. New certificates are issued on a regular basis. Some time after the expiration of the certificates, the bank releases the key with which the private tracing keys are encrypted; each user and merchant can then find out, and prove, whether coin respectively owner tracing was enabled.

An important difference between these auditable tracing schemes and the tracing schemes with trustees is the following: in the auditable schemes, merely enabling tracing is already considered tracing, for which permission is required (although enabling tracing does not necessarily mean that tracing has really been performed), and anonymity is unconditional in the normal case; in the schemes with trustees, tracing is always enabled, or in other words anonymity is always conditional. As a consequence, the auditable tracing schemes are only useful in situations where fraudulent usage is suspected beforehand; for example, tracing is enabled for electronic money that is withdrawn by individuals who are under police surveillance. Auditable tracing schemes can also offer resistance against blackmailing and kidnapping, as the user can request the bank, via a covert channel, to enable tracing when he is being blackmailed or kidnapped. Finally, auditable tracing schemes can also provide resistance against bank robbery.

10.5.9 Flow control and auditability

Sander and Ta-Shma [76] identify inherent weaknesses in the concept of revocable anonymous electronic payment systems. In particular, anonymity is revoked only under suspicion, yet the schemes do not effectively detect suspicious actions in the first place. Sander and Ta-Shma therefore propose alternative approaches that prevent suspicious actions from taking place at all. In these approaches anonymity is not revocable (and does not need to be) and remains unconditional.

Sander and Ta-Shma [75] introduce the notion of flow control. Instead of controlling the anonymity of electronic cash, the flow of anonymous electronic cash is controlled: users are only allowed to withdraw a limited amount of anonymous cash in a certain period, and anonymous cash is explicitly made non-transferable. Preventing the transfer of anonymous cash to another user seems impossible in a digital world, unless one can rely on the fact that the user has a non-transferable secret which he does not want to give away. This secret should never become visible, not even after double-spending. The off-line anonymous cash scheme of [9] is modified to include such a non-transferable secret. Flow control is primarily intended to make money laundering on a large scale very cumbersome (too many users would have to participate).

Sander and Ta-Shma [74] also discuss auditable anonymous electronic cash. Auditability is here not meant in the context of tracing, but in the sense that the bank can only produce valid coins with a one-to-one correspondence to legitimate withdrawals. More precisely, there cannot be valid money in the system that is not known to the auditor (the proposed system is in fact publicly auditable). Consequently, auditability protects against issuer attacks. In addition to auditability, anonymous cash is made non-rigid, which means that it can always be invalidated; this protects against blackmailing. The basic idea behind the system is the following. A crucial aspect of an anonymous electronic payment protocol is verifying whether a given electronic coin is valid. In a (blind) signature based protocol, electronic coins are considered valid if they are signed by the bank; this is very practical, as the proof is distributed together with the coin, but it is not auditable. The bank could also maintain a public list of all valid coins, sign the list and distribute it to the merchants; such a list is auditable, but very impractical (it is large, and updating it causes substantial overhead). Sander and Ta-Shma suggest maintaining a hash tree [60] of all the valid coins instead of a plain list. Maintaining and updating such a tree can be done efficiently, and users can prove that a coin is an element of the hash tree and therefore valid. Essentially, an auditable anonymous electronic cash scheme needs an auditable set of valid coins and a mechanism with which a user can prove that a coin belongs to this set (i.e., a 'membership proof'). This is formalized further by Sander, Ta-Shma and Yung [77].
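The hash tree idea can be illustrated with a minimal Merkle-tree sketch. This is illustrative only: the real scheme of [74] additionally has to keep the proved coin unlinkable to its withdrawal, which this sketch does not attempt, and for brevity it assumes a power-of-two number of coins.

import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(coins):
    # levels[0] holds the hashed coins; each next level hashes sibling pairs.
    levels = [[H(c) for c in coins]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def membership_proof(levels, index):
    # Authentication path: the sibling hash at every level, plus the side
    # (is our node the right child?) needed to order the concatenation.
    path = []
    for level in levels[:-1]:
        path.append((level[index ^ 1], index & 1))
        index //= 2
    return path

def verify(root, coin, path):
    node = H(coin)
    for sibling, node_is_right in path:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

coins = [b"coin-%d" % i for i in range(8)]   # the auditable set of valid coins
levels = build_levels(coins)
root = levels[-1][0]                         # the bank publishes (and signs) only this
proof = membership_proof(levels, 5)          # logarithmic-size proof for coin-5
assert verify(root, b"coin-5", proof)

Updating the tree when coins are added or invalidated touches only the nodes on one path, which is why maintaining such a tree is efficient compared to re-signing a plain list.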

10.5.10 Discussion

The previous sections discussed different forms of anonymity control in electronic payment systems. We summarize the most important properties.

We distinguished three categories of revocable payment schemes. The unlinkable schemes are mostly rather complex: the revocable versions of off-line unlinkable payment schemes require the user, during withdrawal, to prove to the bank that a coin is traceable, in addition to proving that the identity of the user is encoded in the coin for the purpose of double-spending detection. Linkable payment schemes are very practical and should be easy to implement; however, these schemes only provide anonymity with linkability. Trustee linkable payment schemes can be situated between unlinkable and linkable schemes with respect to complexity and performance. They still provide anonymity with linkability, but only towards the trustee. Deploying such a scheme with multiple trustees and a threshold mechanism would therefore be a good balance between complexity and degree of anonymity (a sketch of such a threshold mechanism follows at the end of this section).

The concept of flow control (non-transferability and amount-limitedness) seems a very practical and effective mechanism for anonymity control: unconditional anonymity can be supported but cannot be misused on a large scale, and there is no need for revocation. Electronic payment schemes are being extended and made more complex in order to explicitly provide transferability; flow control is a counterargument against this.

Auditability seems a very promising property of anonymous electronic payment schemes. Just like some of the revocable schemes, auditable schemes can be very powerful: they can resist bank robbery attacks, and fraudulently obtained money can be distinguished from regular money, or be invalidated. In that sense they are even much stronger than physical cash, where, for example, stolen money usually cannot be recognized.

Anonymity remains a controversial property of real-life electronic payment systems. Current schemes for anonymity control do not yet seem to consolidate the interests of the different stakeholders: they tend either to have a trustee which is too powerful, or to be too complex. Although there are already many systems with slightly different properties, there still seems to be room for future research in advanced anonymous electronic payment schemes. In particular, there are many schemes with interesting properties, but no scheme that really combines all of them while remaining efficient and practical; for example, it might be interesting to design a payment scheme that combines the revocability and auditability features. Note again that non-transferability seems a very pragmatic but extremely effective form of anonymity control.
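As an illustration of the threshold mechanism mentioned above, a tracing key can be split among n trustees so that any t of them can reconstruct it, for example with Shamir secret sharing. The following minimal sketch is ours; the field size and parameters are illustrative and not taken from any of the cited schemes.

import secrets

P = 2**127 - 1  # a Mersenne prime; illustrative field size

def share(secret, n, t):
    # Random polynomial of degree t-1 with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

tracing_key = secrets.randbelow(P)
shares = share(tracing_key, n=5, t=3)
assert reconstruct(shares[:3]) == tracing_key   # any 3 of 5 trustees suffice
assert reconstruct(shares[1:4]) == tracing_key

Fewer than t trustees learn nothing about the key, so no single compromised trustee can trace on its own.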


Chapter 11

Electronic voting

11.1 Description of the system

11.1.1 Introduction

Democratic countries hold elections every few years. It is not always convenient for voters to physically go to the polls; voting over the Internet may increase participation by making voting more convenient, especially for people who live far from their home town and for elderly or handicapped people. There is a strong need for anonymity in a democratic election: it is not acceptable that any party can trace the vote cast by a voter, or the voter who cast a given vote. This anonymity should be durable and unconditional (non-revocable). Distributed trust techniques must be implemented to emulate traditional paper elections, together with means that allow observers to verify the result of the election.

11.1.2 Voting phases

An electronic election can usually be divided into the following phases:

• Registration phase: During this first stage, a voting authority creates the electoral roll and publishes it on the network. During a complaint period, voters should be able to post objections; after that, the final version of the electoral roll is published by the voting authority. During this phase, voters also obtain the cryptographic material to be used in the voting phase (keys, passwords, ...). Depending on the implementation and on the nature of the voting system, registration may be done after authentication of the user (e.g., democratic elections) or pseudonymously (e.g., Yahoo users voting on a new Yahoo mail feature, using the e-mail address as pseudonym).


• Voting phase: During the voting phase, voters can cast the desired ballots using the communication facilities of the network. The vote casting procedure may need only a single session with a ballot collection authority (which can be the same authority as the one responsible for the electoral roll), but some voting schemes need more than a single session, and it is not uncommon to have more than one voting authority involved in the voting phase.

• Counting phase: At the end of the voting phase, the ballot collection authority stops accepting ballots and the counting process is initiated. Finally, the tally is published and made available through the network. The result should be verifiable by voters or any other third parties.

11.2 Different roles/entities in the system

The different roles and parties we can find in an electronic election system are:

• The voter, who maps to the initiator in the described model. He sends a request to the voting server to cast a ballot.

• Several authorities; in most systems there is at least one verification authority and one separate counting authority.

• A responder to the initiator, who accepts the cast ballot. In some systems this is implemented as a bulletin board, which consists of a broadcast channel with memory that can be observed and read by all parties. Each party controls her own section of the board, in the sense that she can post messages exclusively to her own section, but cannot erase or overwrite previous messages (a sketch follows below).
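The bulletin board can be pictured as an append-only log with per-party write access and public read access. The following minimal sketch uses a hypothetical interface of our own; a real board would also need authentication of posters and replication among servers, which are omitted here.

class BulletinBoard:
    """Publicly readable, append-only; each party writes only to its own section."""

    def __init__(self):
        self._sections = {}   # party id -> list of posted messages

    def post(self, party: str, message: str) -> None:
        # Appending is the only write operation: no erase, no overwrite,
        # and a party can never touch another party's section.
        self._sections.setdefault(party, []).append(message)

    def read_all(self):
        # Any observer may read the entire board.
        return {party: list(msgs) for party, msgs in self._sections.items()}

board = BulletinBoard()
board.post("voter-17", "encrypted ballot ...")
board.post("tallier", "partial decryption share ...")
print(board.read_all())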

11.2.1 Attackers

Attackers on the system may or may not be involved in the voting process. Eavesdroppers are attackers who are not involved in the system and may try to listen to communication at a local point or globally. Another, more dangerous type of attacker is a compromised authority. For this reason, trust should be distributed, so that a single corrupted authority is not able to change the result of the election or to violate voters' privacy.


11.3 Anonymity requirements/properties

11.3.1 Requirements unrelated to anonymity

• Correctness: A system is correct if it is impossible to modify a cast vote, impossible to ignore a valid vote in the calculation of the end result, and impossible to count an invalid vote.

• Fairness: All ballots must stay secret until the voting phase has ended.

• Universal verifiability: Any party, including a passive observer, can convince herself that the election is correct, i.e., that no number of parties can influence the outcome of the election except by properly voting.

• Eligibility: Only eligible voters can cast a vote.

• Uniqueness: No voter can cast more than one vote.

• Non-coercibility (receipt-freeness): No voter obtains, as a result of an execution of the voting scheme, a receipt that can prove how she voted (this prevents vote-buying and other tactics of voter persuasion). No voter should be able to prove that he voted in a particular way.

• Revisability: A voter can change her vote within a given period of time. This requirement is optional.

• Null ballots: Voters may null individual races or even the entire ballot (e.g., to counter coercion, or to protest against a lack of voting options).

• Under-votes: The voter may receive a warning when under-voting. However, such a warning must not be public and must not prevent under-voting.

• Authenticated ballot styles: The ballot style and ballot rotation to be used by each voter must be authenticated and must be provided without any other control structure than that given by the voter authentication process itself.

• No vote duplication: It should be impossible to copy another voter's vote (even without knowing what the copied vote is).

11.3.2 Anonymity related requirements

In voting schemes, privacy means untraceability of the published votes: the protocol must not reveal any information about which vote was cast by which honest user. That is, the voter cannot be traced from the vote, nor the vote from the voter. The two requirements "voter privacy" and "vote secrecy" split this untraceability in two parts:

• Voter anonymity: Given a vote (a ballot cast by the user) or any other information obtained during the voting process, the system must not leak information about the identity of the voter. In other words, the voter must not be identifiable and must not be traceable.

• Vote secrecy: For any voter, the system must keep secret the vote (ballot) the voter has cast. The vote must not be traceable from the voter.

• Vote unlinkability: In case more than one vote is cast by each voter, these votes should be unlinkable. That is, no entity in the system should be able to find out that two votes were cast by the same voter.

• Durability: These anonymity properties should be durable.

11.3.3 Anonymity control requirements/properties

There should not be any form of conditional anonymity in an electronic election system. If conditional anonymity with a trustee were implemented, a set of malicious authorities could disclose the identity of the voters and the votes they cast, which is not acceptable in a democratic election. If conditional anonymity without a trustee were implemented, a voter would be able to prove that he cast a particular vote, which would violate the requirement of receipt-freeness. Nevertheless, controlled unconditional anonymity must be implemented in order to prevent double-voting and other forms of misuse. This control, which ensures the correct functioning of the system, should aim at preventing misuse rather than detecting it after the fact by means of anonymity revocation (a toy sketch of such preventive control follows below). For more requirements, also concerning the implementation, see Deliverable 2 [79].
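As an illustration of preventive control without revocation, consider a token-based check of eligibility and uniqueness that never identifies the voter. The sketch below is ours and deliberately naive: in particular, the token would have to be issued blindly (e.g., via a blind signature) so that the registrar cannot link tokens to voters; this crucial step is only indicated in a comment.

import secrets

class Registrar:
    def __init__(self, electoral_roll):
        self.unregistered = set(electoral_roll)
        self.issued_tokens = set()

    def issue_token(self, voter_id):
        # One token per eligible voter; in a real scheme the token would be
        # blindly signed, so it cannot be linked back to voter_id later.
        if voter_id not in self.unregistered:
            raise ValueError("not eligible or already registered")
        self.unregistered.remove(voter_id)
        token = secrets.token_hex(16)
        self.issued_tokens.add(token)
        return token

class BallotCollector:
    def __init__(self, registrar):
        self.registrar = registrar
        self.spent = set()
        self.ballots = []

    def cast(self, token, ballot):
        # Eligibility and uniqueness are enforced up front: the token must
        # be valid and unspent. No revocation machinery is needed.
        if token not in self.registrar.issued_tokens or token in self.spent:
            raise ValueError("invalid or already used token")
        self.spent.add(token)
        self.ballots.append(ballot)

registrar = Registrar({"alice", "bob"})
collector = BallotCollector(registrar)
t = registrar.issue_token("alice")
collector.cast(t, "encrypted ballot")
# collector.cast(t, "second ballot")  # would raise: double-voting prevented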

11.3.4 Trust requirements

Ideally, an e-voting system should not require any trust, because no set of collaborating entities should be able to link a voter and his vote. If the design and implementation of the system allow for tracing when a set of entities collaborate, then the trust requirement is that these entities do not collaborate to find the link between a voter and his vote.


11.4 Short overview of existing systems

We refer to Deliverable 2 [79].


Chapter 12

Electronic auctions

12.1 Description of the system

The fundamental goal of auction systems is the distribution of resources among a number of bidders. This distribution is based on pre-defined rules that determine the actual buyer(s), the selling price (clearing price), etc. Different types of auctions are practiced to achieve different business objectives, such as best price, guaranteed sale, or minimal collusion possibility. In this chapter we first review the different kinds of auctions and their properties; next we discuss the steps of a complete auction-based trading process [57].

12.1.1 Different auction properties

We can distinguish a number of properties that are not related to a specific type of auction (see next section). These properties are explained in more detail in Deliverable 2 of the APES project [79].

Bid confidentiality: Bids are open if the bid amounts are known to all bidders during the auction; in a sealed bid auction the bid amounts are known only to the respective bidders until the auction closes.

Bid cancellation: The rules of the auction can specify if and when buyers can cancel their bids.

Auction winner(s): The rules of the auction should specify the means to determine the winner or winners of the auction. Normally the winner is the buyer with the highest bid.

Selling or clearing price: Once the winners are declared, the clearing price has to be set. An auction's clearing price represents the exchange terms agreed upon by the bidders as a result of the auction.


Auction closings: Different rules are used to determine when an auction round closes. The most common methods are:

• Expiration time (both for open and sealed auctions): The auction closes at a predetermined expiration time.

• Timeout (for open bid auctions): The auction closes when no bid higher than the current high bid is made within a predetermined timeout interval.

• Combination of expiration time and timeout (for open bid auctions).

• Stock-out, or the price falling below a pre-specified level (for Dutch auctions, see below).

After the auction closes, the winners are determined and the clearing price(s) are set. In a multiple-round auction the auction has more than one closing; the auction then ends after the final closing (a code sketch of such closing rules follows below).
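The expiration and timeout rules, and their combination, can be evaluated in a few lines. The following is a minimal sketch with a hypothetical function name and toy integer timestamps.

def is_closed(now, expires_at=None, last_high_bid_at=None, timeout=None):
    # An auction round closes when its expiration time has passed, or when
    # no new high bid arrived within the timeout interval; passing both
    # parameters yields the combined rule.
    if expires_at is not None and now >= expires_at:
        return True
    if timeout is not None and last_high_bid_at is not None:
        return now - last_high_bid_at >= timeout
    return False

# Toy timestamps (seconds):
assert is_closed(now=1000, expires_at=900)                        # expired
assert is_closed(now=1000, last_high_bid_at=600, timeout=300)     # timed out
assert not is_closed(now=1000, expires_at=1200,
                     last_high_bid_at=800, timeout=300)           # still open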

12.1.2 Auction types

There are many auction types [57], of which the most common are the English or open cry auction, the Dutch (downward-bidding) auction, the sealed bid auction and the double auction. Detailed descriptions of these auction types can be found in Deliverable 2 of the APES project [79].

12.1.3 Different steps in an auction process

1. Initial buyer and seller registration. This step deals with the authentication of the trading parties, the exchange of cryptographic keys, etc.

2. Setting up a particular auction event. In this step the item being sold is described and the rules of the auction are negotiated and defined. These rules specify the type of auction (open cry, sealed bid, ...) and other parameters such as terms of payment, starting date and time, etc.

3. Scheduling and advertising, to attract potential buyers.

4. Bidding. This step handles the collection of bids, the implementation of the bid control rules of the auction (minimum bid, bid increment, deposits required with bids, ...), and, for open cry auctions, the notification of the participants when new high bids are submitted.

5. Evaluation of bids and closing the auction. Implementation of the auction closing rules and notification of the winners.

6. Trade settlement. This final step handles the payment to the seller, the transfer of the goods to the buyer, the payment of fees to the auctioneer and other agents, etc.

12.2 Different entities in the system

In the auction systems discussed above, a number of parties are involved:

Traders:

• Buyers or bidders: place bids in order to purchase the item for sale.

• Seller: has one or more items he/she wishes to sell; in a double auction the seller also places (sell) bids. If the seller does not place any bids, the seller is not considered an entity that participates in the system (the seller hands the goods to the auctioneer, and the auctioneer participates in the system).

• Auctioneer: arranges the auction and normally enforces the specific rules of the auction. A trusted third party can be involved to force the auctioneer to be "fair" towards the buyers and the sellers. Depending on the actual system this party can be informed or uninformed. The auctioneer also plays the role of responder, given that it reacts to the actions of the buyers (and of the seller if needed).

Trusted third party (authority): From the above description of the different systems, we can see that both buyers and sellers can be actively involved in the auction process. Therefore the auctioneer (who has no interest in the outcome of the auction process) is trusted to make sure that the auction rules are obeyed. Sometimes it is necessary to enforce "fairness" of the auctioneer, for example using a third party that can take legal action if necessary.

Product(s): These are the items for sale. In a reverse auction there is one seller, for example a company that wants to build something, and a number of buyers, contractors who bid not only a price but an entire set of terms. Here the product is the service of the contractors.

12.3 Anonymity requirements/properties

In order to conduct a fair auction, a number of requirements have to be met. In the "real world" some of these requirements are easily met because buyers and sellers are physically present at the auction and can check whether the rules are obeyed by the other parties. In the case of on-line auctions, (cryptographic) techniques have to be used to enforce these requirements. In the context of unconditional anonymity, the following anonymity requirements are commonly considered:

Bidder anonymity: the identity of the bidder must remain hidden from the other parties.

Bidder/bid untraceability: it should be impossible for an unauthorized party to trace the bidder from a particular bid or vice versa.

Seller anonymity (optional): The seller of an item can choose to stay anonymous towards the buyers and/or the auctioneer. If the seller wishes to stay anonymous, means of anonymous payment and shipping have to be provided, and possibly also a means to prove certain credentials without revealing the true identity; this way buyers can assure themselves that they are not buying stolen goods, etc.

Note that anonymity can be phase dependent: bidders can choose to stay anonymous during the bidding phase of an auction. Once the auction is closed, the following can happen:

• All bidders are revealed (but the bids stay sealed except for the winning bids).

• Only the winners are revealed, to all parties that participated in the auction process.

• Only the winners are revealed, and only towards the seller and/or the auctioneer.

• The winners stay anonymous (then means of anonymous payment and shipping have to be provided).

A side effect of an anonymous auction scheme is that it prevents auctioneers from logging bidder activity. Such logs can be sold to marketers, insurers or even thieves with a plausible tale [84]: "May I please buy the list of all those who recently bought gold jewellery? I sell a really good polish". Serious difficulties may arise from storing and reselling such information, as it may violate data protection laws. If an auctioneer adopts an anonymous auction scheme, it becomes easier for him/her to claim that he/she has taken all reasonable steps to comply with data protection principles.

12.4 Anonymity control requirements/properties

There are a number of ways in which an unconditionally anonymous auction scheme could be abused by certain parties. Therefore some means of controlled anonymity are required to make anonymous auction schemes acceptable. We can distinguish a number of cases in which anonymity should be revocable:

• (P1) Seller bidding at his own auction. A seller should be prevented from bidding at his own auction in order to raise the price. Note that this requirement is impossible to fulfill completely: a seller can easily hire another person to bid on the seller's behalf. As this happens offline, it cannot be prevented in the design of the auction protocol.

• (P2) Winners that do not show up to buy their goods. Without anonymity control it is very hard to guard against this type of abuse. Once the winner of an auction has been determined, it should preferably be possible to obtain the winner's real identity, i.e., to revoke the winner's anonymity without the winner's cooperation.

• (P3) Revealing the bids of a specific (misbehaving) bidder. In some cases, for example if it is proven that a certain person has misbehaved during an auction, it might be necessary to withdraw and/or reveal the bids placed by this person. This means that the auction scheme must be able to retrieve the bids of a specific person and open them without revealing any other bids. Note that this contradicts the requirement of unlinkable bids.

Different forms of control. Anonymity control can be implemented in different ways. Decisions made during the design of this implementation affect the overall anonymity properties of the system. Here we discuss the different forms of control and their consequences:

• Controls with unconditional anonymity. A number of controls have to be built into auction schemes in order to prevent abuse of the system. Some important examples are:

1. If sealed bids are used, it should be possible for users to check the outcome of the auction; they should be able to verify that their bid was indeed taken into account and that the announced winning bid was indeed the highest bid.

2. Users have to obey the rules of the auction. The auction scheme should enable the auctioneer to monitor the auction process in such a way that he/she can check that the rules are obeyed by the different users.

3. Auctioneers should not interfere with the auction process. For example, in a Vickrey auction the winner (the user with the highest bid) only has to pay the amount of the second highest bid for the object for sale. Without control it is easy for an auctioneer to introduce a dummy bid in order to increase his/her revenue: if the highest bid is 1000 EURO and the second highest bid is 500 EURO, the payment of the winner is 500 EURO; by fabricating a dummy bid of 999 EURO, the auctioneer can increase his/her revenue to 999 EURO (see the code sketch after this list). See for example [87] for an auction scheme that solves this problem while maintaining full anonymity for all bidders.

• User controlled conditional anonymity. Bidders can choose to revoke their anonymity or to stay anonymous at all times, independently of any other party.

Advantages:

1. No trusted third party is necessary; this makes the system easier to set up and maintain.

2. Bidders do not have to trust any other party to protect their anonymity.

Disadvantages:

1. All anonymity control resides at the bidders' side. As a consequence, bidders can try to abuse the system in any way they choose without danger of getting caught: if they succeed in their mischief, they can revoke their anonymity and collect their goods; if their scheme fails, they can simply stay anonymous and retry during the next auction. None of the problems P1, P2 or P3 is solved by this type of anonymity control.

• Non-selective conditional anonymity with trustee. A single trustee or a set of trustees has all the necessary information to disclose the identities of the bidders, but cannot selectively revoke the anonymity of individual bidders or bids.

Advantages:

1. Simpler schemes are possible compared to "selective" revocation (see next item).

2. Problems P2 and P3 are solved by this scheme (P1 becomes more cumbersome, but the seller can always hire another person offline).

Disadvantages:

1. Because the revocation is not selective, the identities of all bidders are revealed when the winner is revealed, at least to one or more trustees.

2. Bidders have to trust one or a subset of trustees not to revoke their anonymity when this is not required.


• (Selective) conditional anonymity with trustee. A single trustee or a set of trustees has all the necessary information to disclose the identities of the bidders. In this case, the scheme allows them to selectively revoke the anonymity of selected bidders or bids (for example only the winning bid).

Advantages:

1. Again, problems P2 and P3 are solved by this scheme.

2. Because the revocation is selective, only the identities of selected bidders are revealed: for example only the winner, or a number of bids that seem suspicious because of information collected outside of the auction scheme.

Disadvantages:

1. These schemes will probably be more cumbersome to implement than the previous one.

2. Bidders still have to trust one or a subset of trustees not to revoke their anonymity when this is not required (the trustees can still reveal all bids).
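This is the code sketch announced in the Vickrey example above. The second-price rule itself is trivial to implement, which also makes it easy to see how a fabricated bid just below the highest bid lets a dishonest auctioneer capture almost the entire gap between the first and second price. The code is purely illustrative and the names are hypothetical.

def second_price_outcome(bids):
    # bids: bidder -> amount; the highest bidder wins but pays the
    # second-highest bid (the Vickrey rule).
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

honest = {"alice": 1000, "bob": 500}
print(second_price_outcome(honest))        # ('alice', 500)

rigged = dict(honest, dummy=999)           # bid fabricated by the auctioneer
print(second_price_outcome(rigged))        # ('alice', 999)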

12.5 Short overview of existing solutions

We refer to Deliverable 2 of the APES project [79] and to [4] for an overview of existing anonymous auction schemes. The proposed schemes can be categorized in two groups: (1) schemes that distribute trust amongst different auctioneers, and (2) schemes that distribute trust amongst all bidders (and do not require auctioneers). To our knowledge, none of the schemes tries to achieve controlled anonymity with the requirements we present. Protocols that distribute trust amongst different auctioneers offer controlled anonymity, but in an "all or nothing" fashion: if the auctioneers work together they can reveal any bids they see fit, and the protocols do not provide a way to selectively open bids other than the winning bid. Protocols of the second group try to avoid this by presenting "fully private" auctions with the following property [10]: "We present a new cryptographic auction protocol that prevents extraction of bid information despite any collusion of participants. This requirement is stronger than common assumptions in existing protocols that prohibit the collusion of certain third-parties (e.g. distinct auctioneers)". Obviously these protocols do not provide any means of controlled anonymity: all bids stay anonymous except for the winning bid. Controlled anonymity without trustees thus remains an open research problem today.


Part IV

Legal issues


Chapter 13

Legal issues on controlled anonymity

13.1 Introduction

The need to protect on-line anonymity has been explicitly recognized both at U.S. and at European level. Over the last years, U.S. courts have had to deal with actions challenging the use of on-line anonymity. Those actions were often related to defamation claims or other tort claims, such as copyright and trade name infringements. Consequently, U.S. courts have formulated strict conditions for authorizing the discovery of the true identity of anonymous defendants in court actions.1 The rationale for these strict conditions is the consideration that on-line anonymity may be used as a tool to effectively exercise the right to free speech without any fear of retaliation. Lifting the veil of anonymity in a court action can therefore be justified, but only upon fulfillment of a number of conditions.2

At EU level, the use of on-line anonymity has – up till now – more often been approached from a privacy perspective. The EU Data Protection Working Party, official advisor of the EU Commission and the Member States on the EU Directive with regard to the processing of personal data,3 issued a Recommendation in 1997 regarding anonymity on the Internet.4 The Working Party concludes that the ability to choose to remain anonymous – and consequently to have anonymous access to the Internet – is essential if individuals are to preserve the same protection for their privacy on-line as they currently enjoy off-line.

1 For example: Columbia Ins. Co. v. SeesCandy.com, 185 F.R.D. 573 (N.D. Cal. 1999); Dendrite Int'l Inc. v. John Doe, N.J. Super. Ct. App. Div. A-2774-00T3 (July 11, 2001).
2 Roger M. Rosen and Charles B. Rosenberg. Suing anonymous defendants for Internet defamation. The Computer & Internet Lawyer, 19(2), February 2002.
3 Directive 95/46/EC on the protection of individuals with regard to the processing of personal data of 24 October 1995.
4 Recommendation 3/97 of the EU Data Protection Working Party: Anonymity on the Internet, http://www.europa.eu.int/comm/internal_market/privacy/workingroup/wp1997/wpdocs97_en.htm.


Likewise, the Council of Europe in Strasbourg promotes the use of on-line anonymity as a tool for the effective protection of the fundamental right to on-line privacy.5 It is striking to note that – even after the events of 11 September 2001 – the EU Data Protection Working Party has reaffirmed the need for on-line anonymity and pseudonymity in its recent working document on on-line authentication systems.6 The Working Party concludes that: "All possible efforts should be made to allow anonymous or pseudonymous use of online authentication systems. Where this would inhibit full functionality, the system should be built to require minimal information only for the authentication of the user and to give the user full control over decisions concerning additional information (such as profile data). This choice should exist both at the level of the authentication provider and of the service providers (the sites making use of the system)."

As the above observations on U.S. court cases and the EU approach suggest, anonymity is not appropriate in all circumstances, and determining those circumstances requires a fair balancing of conflicting rights. In the next sections, legitimate motives to use anonymity are briefly recalled, after which legitimate control on the use of anonymity is dealt with. Legal issues regarding controlled anonymity also relate to the duty of service providers to collaborate with law enforcement. Finally, a clear liability regime should be available to enhance trust in anonymity services, both for the user and for law enforcement. The analysis of a legal status and framework for "trustees" in the control of anonymity services will be the object of future research.

13.2 Legitimate motives to use anonymity: brief overview

Basically, the use of anonymity can be legitimated as a tool to effectively protect the fundamental right to privacy and the fundamental freedom of expression. Both fundamental rights are considered pillars of a democratic society and are protected in international and European Conventions, as well as in national constitutional laws.

5 Council of Europe, Guidelines for the protection of individuals with regard to the collection and processing of personal data on information highways; Recommendation No. R (99) 5 for the protection of privacy on the Internet, 1999.
6 Working Document on on-line authentication services, adopted on 29 January 2003, http://www.europa.eu.int/comm/internal_market/privacy/workingroup/wp2003/wpdocs03_en.htm.


In addition, the use of anonymity can be legitimated in specific situations to protect the possibly threatened interests of the user when completing certain legal acts. Finally, anonymity can be used to protect the interests of a party in a contractual environment. Below, some examples are listed to clarify legitimate uses of anonymity:

• Anonymous searching for information (browsing) and anonymous e-mail correspondence, to avoid gender discrimination, harassment or differentiation, with a view to preserving privacy.

• Use of anonymous self-help hotlines in delicate matters such as alcohol, drugs, family abuse, sexual abuse, sexual identity, AIDS, or mental and physical illness, with a view to preserving privacy and freely expressing thoughts.

• Use of anonymous sources for investigative journalism, with a view to exercising the freedom of expression.

• Whistleblowing through anonymous hotlines and postboxes to help detect, for example, fraud (bribes) or immoral practices (sexual abuse) in a company, with a view to freely communicating information without fear of retaliation.

• Use of anonymous informants and anonymous witnesses by law enforcement, with a view to protecting the possibly threatened interests of the user of anonymity.

• Use of anonymity to avoid persecution of individuals who are subject to human rights violations by repressive regimes and wish to express themselves freely.

• Use of anonymity to conclude contracts, so as to avoid possible negative influence from a party to the contract (for example an excessive price, or refusal to sell because of the identity of the buyer).

• Use of anonymous payments for the on-line purchase of goods or of on-line information, which may leave a trace if not done anonymously and thus become the object of unauthorized profiling.

• Use of anonymity to have anonymous access to public services, such as information on taxation.


13.3 Legitimate limits on the use of anonymity

There is a broad consensus that absolute anonymity is not appropriate in all circumstances. Restrictions on the use of anonymity can be justified at two levels, namely from a private law and from a public law perspective.

13.3.1 From a private law perspective

From a private law perspective, a legal entity (physical person or legal personality) is accountable for the acts it performs, both in a contractual and in a non-contractual environment. If this entity causes damage (to a contract party or to any third person), it will be held liable to compensate the damage.

In principle, knowledge of the identity of a party to a contract is not an essential condition for the validity of the contract. According to the Belgian civil code, a contract is valid if four conditions are met: the agreement of the party with the obligations to be fulfilled, the legal capacity (for example the required age of the party) to conclude contracts, a determined object as the content of the obligation, and a lawful cause for entering into the obligations.7 Unless agreed otherwise, most contracts can be concluded without a prescribed form. Indeed, the consensus of the parties on the essential obligations is the determining element for the validity of a contract. Consequently, a contract can in principle be concluded anonymously and be perfectly valid, except where specific legal provisions explicitly require the identification of the parties for the validity of the contract.

However, there are limitations on concluding anonymous contracts.8 Difficulties may arise if irregularities in the course of the conclusion of the contract are detected afterwards: examples are a deficient agreement on an essential obligation, the non-fulfillment of an essential formality for the conclusion of certain types of contract, or the fact that a contract party was not competent to conclude the contract. Difficulties may also arise in the course of the implementation of the contract, when obligations are complied with late or incorrectly, for example a payment that is not made. If the damaged party wants to claim interest on a late payment, a notification of default has to be addressed to the defaulting party. The level of the parties' expertise is also a relevant factor in assessing the liability of the parties and the duty of care arising from it. Such situations may justify the revocation of anonymity. Therefore, mechanisms – through trusted third parties – have to be devised that make it possible to trace back a contracting party, or his legal representative, when difficulties in the implementation of the contract arise.

7 Art. 1108 Belgian civil code.
8 Grijpink, J.H.A.M., and Prins, J.E.J., "New rules for Anonymous Electronic Transactions? An exploration of the private law implications of digital anonymity", 2001 (2), The Journal of Information, Law and Technology (JILT).


There are other examples, from a private law perspective, of justified limitations on the use of anonymity. In specific circumstances, the legislator explicitly imposes identity disclosure in contractual relations, for example on providers offering Information Society services.9 In other circumstances, the European legislator explicitly bans the use of anonymity, for example through the express prohibition on sending unsolicited anonymous commercial e-mail (spamming) for purposes of direct marketing.10 Finally, there are circumstances in which the European legislator creates room for "grey areas" of anonymity. This is for example the case for certification service providers, who cannot be prevented from indicating in a certificate a pseudonym instead of the signatory's name, according to the Directive on electronic signatures.11

13.3.2 From a public law perspective

From a public law perspective, restrictions on the use of anonymity may be imposed with a view to protecting the public interest in a democratic society. The fundamental rights to privacy and freedom of expression, guaranteed by articles 8 and 10 of the European Convention on Human Rights, are not absolute rights and have to be balanced against other interests. Law enforcement purposes may justify intrusions on the right to privacy and thus, for example, revocation of anonymity. The legal basis is to be found in article 8, paragraph 2 of the European Convention on Human Rights, which states that interference with the right to privacy by a public authority may exceptionally take place if explicitly provided by law and necessary in a democratic society. The European Court of Human Rights has elaborated extensive rulings on how to interpret these strict conditions.12 In brief, the conditions can be summarized as follows. The law authorizing the intrusion must comply with formal and quality requirements, which means that the law should be accessible – or published – and foreseeable, so that citizens can regulate their conduct. The notion of "necessary in a democratic society in the interests of a legitimate aim such as national security, ... for prevention of disorder or crime" implies that the interference corresponds to a "pressing social need" and, in particular, that it is proportional to the legitimate aims pursued.

9 Directive 2000/31/EC on certain legal aspects of Information Society services, in particular electronic commerce, in the Internal Market, 8 June 2000.
10 Directive 2002/58/EC of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications).
11 Article 8 §2 of the Directive regarding a common framework on electronic signatures of 13 December 1999.
12 See: Goemans, C., and Dumortier, J., "Mandatory retention of traffic data: impact on anonymity", in Digital Anonymity and the Law – Tensions and Dimensions, Information Technology & Law Series (IT&Law Series) – Volume 2, T.M.C. Asser Press, The Hague, The Netherlands, ISBN 90-6704-156-4.


The principle of "proportionality" requires that the extent of the interference is not excessive in relation to the legitimate needs and interests which have occasioned it. Finally, the Court underlines that the assessment of "the necessity of the interference" requires that whatever system of surveillance is used must be accompanied by adequate and effective guarantees against abuse. Consequently, exceptions to the right to privacy and the freedom of expression (and thus to the use of anonymity) touch on public order and on the subjective rights of individuals. Public order refers to matters such as the protection of national security, territorial integrity or public safety, and the prevention of disorder or crime, whereas the subjective rights of individuals in this context cover the protection of the reputation or rights of others, preventing the disclosure of information received in confidence, etc. In the next section, we list illegal behavior justifying a controlled use of anonymity.

13.4 Illegal behavior justifying a controlled use of anonymity13

13.4.1 Examples of illegal behavior

Spamming

Spam is the practice of sending unsolicited e-mails, usually of a commercial nature, in large numbers and repeatedly, to individuals with whom the sender has no prior relationship.14 Commercial spam typically relates to sales advertising, promotions, etc., whereas non-commercial spamming usually concerns chain mails or election campaigns. Spam can have negative effects on networks, especially with respect to speed, usability and costs of transmission.15 Moreover, spamming often involves privacy violations with respect to the unlawful collection of e-mail addresses, especially when the e-mail address has been collected on a public space on the Internet without the consent of the data subject.16

13 The notion of anonymity – especially in the context of controlled anonymity – may be interpreted differently depending on a legal or technical context. In this respect, we refer to chapter 2, section 2.6 on the legal notion of identifiability, and in particular the viewpoint of the Belgian Privacy Commission.
14 Opinion 1/2000 on certain data protection aspects of electronic commerce, Article 29 Data Protection Working Party, 3 February 2000.
15 For further reading on the harmful effects of spam: Hoffman P., "Unsolicited bulk e-mail: definitions and problems", http://www.nylj.com/tech/091597s.html; http://www.ftc.gov/consumer; http://www.jmls.edu/cyber/cases/spam.html; http://www.cnw.com/lawsuit.html; http://www.techlawjournal.com/internet/80625ftc.htm


Anonymous spam may well be a common abuse of anonymous communication. Therefore, anonymous remailer services often state in their user agreements that no spamming is allowed. Commercial spam is regulated in the EU Directive on electronic commerce and in the EU Directive on privacy and electronic communications: unsolicited commercial communication has to be clearly and unambiguously identifiable as such as soon as it is received by the recipient. In other words: anonymous commercial spam is not allowed. Regarding spam in general – both commercial and non-commercial – we should mention article 114§8 of the "Belgacomwet" of 21 March 1991, stipulating that a person who intentionally uses an electronic communication network or service to create nuisance or damage to his correspondent is punishable. Furthermore, article 13 of the recent Belgian law on e-commerce, transposing the e-commerce Directive, rules that senders of commercial communication should clearly identify themselves, after having obtained the express, free, specific and informed authorisation of the recipient to send commercial information.

Defamation

The criminal offence of defamation puts a restriction on the fundamental right to freedom of expression, which recognizes the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.17 Maliciously sending offensive messages that harm the reputation of a person is punishable on the condition that a certain publicity has been given to the offensive messages. Applied to electronic mail, an offensive message sent only to the person concerned will therefore not qualify as defamation. However, should the author of defamatory messages forward copies to other addressees, e.g. by means of a mailing list, he may be held liable for defamation. As mentioned earlier, anonymous speech has increasingly given rise to lawsuits, especially in U.S. courts, which have formulated strict conditions18 for allowing revocation of anonymity, so as to avoid abuse of defamation suits – for example suits against anonymous posters brought merely as a means to identify dissenting employees so that the plaintiffs – the employers – could dismiss them.19

16 This could be seen as unfair processing of personal data under article 6, 1 (a) and (b) of the general Directive on the processing of personal data 95/46/EC, O.J. L 281/31 of 23 Nov. 1995.
17 Art. 10 European Convention on Human Rights.
18 For example, evidence of the "viability" or seriousness of the claim.
19 Pre-Paid Legal Services v. Stutz, Superior Court Santa Clara County, August 2001; Dendrite International, Inc. v. John Doe, Appellate Court New Jersey, July 2001. See also: Sobel, D., The process that "John Doe" is due: addressing the legal challenge to Internet anonymity, Virginia Journal of Law and Technology, 5 Va. JL, 2000.


Revocation of anonymity is indeed an irreversible action – contrary to the decision to make illegal content inaccessible – and therefore needs to be operated with the greatest precautions.20

Cyberstalking

Stalking is the kind of offence that tends to be perpetrated anonymously. The criminal offence of stalking – that is, deceitfully threatening someone's life or freedom – has been introduced in criminal codes from an off-line perspective.21 However, the notion itself is technologically neutral. Stalking could therefore be applied to anonymous harassing electronic mail, also called mailbombs. Alleged cyberstalking could justify an action by the recipient asking a judge for revocation of anonymity.

Other illegal activities

Self-evidently, on-line anonymity can be abused for all kinds of illegal activities. Cybercrime offences are either "classic" criminal offences perpetrated by means of electronic communication, or specific criminal offences intrinsically linked with the use of electronic communication, such as for example computer hacking. The Convention on cybercrime of 23 November 2001 enumerates a variety of specific cybercrime offences, in particular:22

• offences against the confidentiality, integrity and availability of computer data and systems: illegal access, illegal interception, data interference (intentionally damaging, deleting or altering computer data without right), system interference, and misuse of devices (the production, sale or distribution of a device for the purpose of committing the offences mentioned above);23

• computer related offences: computer related forgery and computer related fraud (unauthorized interferences and alterations for economic benefit);

20 Ekker, A., Anonimiteit en uitingsvrijheid op het Internet; het onthullen van identificerende gegevens door Internetproviders, Mediaforum 2002-11/12, p. 348-351.
21 As a matter of example, the criminal offence of stalking has been introduced in article 442bis of the Belgian criminal code, by law of 30 October 1998, B.S. 17 December 1998.
22 Convention on cybercrime, Council of Europe, 23.11.2001, to be consulted at: http://www.coe.int/T/E/Legal_affairs/Legal_co-operation/Combating_economic_crime/Cybercrime/
23 Article 550bis and following of the Belgian criminal procedure code, introduced by the cybercrime law of 28 November 2000, B.S. 3 February 2001.


• content related offences: offences related to child pornography (producing, offering or making available, distributing, transmitting or procuring child pornography through a computer system), and offences related to infringements of copyright and related rights.24

"Classic" criminal offences perpetrated by means of electronic communication – also called cybercrime offences – often relate to money laundering, illegal arms transactions, drug deals, recruitment by criminal organizations, fraud, etc. Obviously, illegal behavior justifies control on the use of anonymity. Consequently, anonymity could be revoked in the course of a criminal inquiry. However, this judicial decision should always be taken with the greatest care and in accordance with the proportionality principle. We should underline once again that the process of revoking anonymity is an irreversible action, unlike removing alleged illegal content from public forums. Before explaining the legal duty of service providers to collaborate with law enforcement in the next section, we deal with the question whether, from a legal perspective, anonymity could be controlled not only a posteriori but also a priori, before the anonymity service is used.

13.4.2 A priori and/or a posteriori control of anonymity?

During discussions between the APES partners, the question was raised how far anonymity providers could implement a priori control on the use of anonymity. The idea behind implementing a priori controls would be to provide for a fair and legitimate use of anonymity services and thus promote their use in society. However, too strong a priori controls may again jeopardize the use of anonymity. The kinds of a priori controls on the use of anonymity services which were discussed related to spamming, virus detection and illegal content. From a legal perspective, we should make a distinction between, on the one hand, controls regarding spam and virus detection and, on the other hand, a priori controls on illegal content. Whereas, from a legal perspective, a priori spam and virus detection would be advisable, a priori controls of presumed illegal content would face legal hurdles, on the following grounds.

An important aspect of privacy is the secrecy of private (electronic) communication. Presently, the criminal protection of private (electronic) communication is based on two separate laws, distinguishing the protection of the existence of electronic communication from the protection of its content. Both aspects are covered by the protection of the secrecy of electronic communication.25

24 Cfr. legal issues on P2P systems, dealt with at the third APES workshop.


Thus, eavesdropping on, examining or recording private (electronic) communication without the consent of the participating parties is punishable. According to the preparatory works, the concept of "private" is a broad notion and covers all communication that is not intended to be heard by everybody (unlike, for example, broadcasting). Consequently, it does not matter whether the communication takes place in a professional or a private environment.26

Three exceptions to the protection of the secrecy of private electronic communication are provided for in article 109terE of the law of 21 March 1991. Besides exceptions based on a law and for activities related to help and emergency services, a third exception is provided for when the actions of eavesdropping on, examining or recording private communication are taken "with the sole purpose of verifying the appropriate functioning of the network and guaranteeing the appropriate implementation of the electronic communication service". In other words, network providers may breach the principle of secrecy of private electronic communication for the sole purpose of safeguarding the continuity of electronic communication services. This may include preventive measures; however, these have to take into account the proportionality principle, which always applies to intrusions on fundamental rights such as the secrecy of private communication. In this context, anti-spam and anti-virus controls, performed prior to the use of anonymity services, could be justified.

The situation is different for a priori controls of presumed illegal content. Firstly, the law does not provide for an exception to the secrecy of private electronic communication for controls of illegal content by service providers. Moreover, the controlling exercise would be quite delicate: whereas child pornography is easy to detect, a criminal offence of defamation would be almost impossible to qualify as such with certainty; this indeed belongs to the prerogatives of a court. Contractual user agreements stipulating that it is not authorized to use the system for the anonymous transmission of illegal content are in themselves useful as a warning. However, the secrecy of private communication is a matter of public order and cannot be set aside on a contractual basis; effective control could therefore not be implemented on the basis of the contractual terms of agreement. We could, though, envisage a situation where a service provider rightly denies access to a user on the grounds of his knowledge of a court condemnation of that user for the transmission of illegal content through electronic means.

Finally, in the section on liability issues further in this chapter, we examine

25 Art. 109terD of the law of 21 March 1991 on economic public enterprises, also called the Belgian "Telecomlaw", and art. 259bis and 314bis of the Belgian criminal code.
26 Parl. Doc. Senaat, 1992-1993, no. 843-1, p. 6/7 and 843-2, p. 10.


the exemptions of liability for illegal content provided for in the e-commerce directive. It should be underlined that the e-commerce directive prohibits Member States from imposing on providers – when providing services of mere conduit, caching or hosting – a general obligation to monitor the information which they transmit or store, or a general obligation to seek facts or circumstances indicating illegal activity.

13.5

Limitations on privacy (and anonymity) for law enforcement purposes and duty to collaborate

As explained earlier in this chapter, intrusions on the right to privacy must be authorized by law and necessary in a democratic society. The European Convention on cybercrime laid down a framework of law enforcement measures facilitating criminal inquiries related to cybercrime.27 New law enforcement powers and duties to collaborate are reflected in the Belgian cybercrime law of 28 November 2000.28

Expedited preservation of stored computer data

These measures, introduced by article 16 of the cybercrime convention, apply only where computer data already exists and is currently being stored.29 The Belgian cybercrime law has not laid down specific regulations on orders for the expedited preservation of stored computer data, but introduced more stringent rules such as production orders and the search and seizure of stored computer data.

Production order

A new article 88quater of the criminal procedure code introduces two kinds of duties for system managers to collaborate with law enforcement. Firstly,

27 Convention on cybercrime, Council of Europe, 23 November 2001, available at http://www.coe.int/T/E/Legal_affairs/Legal_co-operation/Combating_economic_crime/Cybercrime/
28 van Eecke, P., and Dumortier, J., "De implementatie van het Europees Verdrag cybercriminaliteit in de Belgische wetgeving", Computerrecht 2003/02.
29 Therefore the preparatory works of the cybercrime convention underline the distinction between "data preservation" and "data retention". To preserve data means to keep data which already exists in a stored form protected from anything that would cause its current quality or condition to change or deteriorate. To retain data means to keep data which is currently being generated in one's possession into a future time period. Data retention is the process of storing data; data preservation, on the other hand, is the activity that keeps that stored data secure and safe. See Final activity report of the Committee of Experts on Crime in Cyber-Space, 18–22 June 2001, available at http://www.coe.int/T/E/Legal_affairs/Legal_co-operation/Combating_economic_crime/Cybercrime/


the inquiring judge can order persons who presumably have specific knowledge of the relevant electronic information system to produce information on the functioning of the system and on the means to access the data in an intelligible form (for example, the production of a password). Secondly, the inquiring judge has the power to order any relevant person to carry out specific operations on the system, for example the search of electronic data. The law underlines that those persons have the duty to collaborate as far as it lies within their capacities. Persons who are under the duty to collaborate also have a duty of confidentiality.

Search and seizure of stored computer data

Article 88ter of the criminal procedure code constitutes the legal basis for network searching.30 It is important to note that the inquiring judge is empowered to extend a search to connected computer systems. Again, the principle of proportionality has to be respected and the need for such a measure has to be motivated. The new article 39bis of the criminal procedure code empowers the "procureur des Konings" (public prosecutor) to seize (or copy) electronic data without seizing the material support (computer or diskette). If copying the data is not sufficient for technical reasons, or if the volume of data is too large, the data can be blocked. The law provides for an information duty for the law enforcement authorities towards the system manager, to whom a summary of the operations performed (a list of copied, blocked and deleted data) will have to be communicated. Furthermore, the law enforcement authorities have to ensure the integrity and confidentiality of the seized data.

Real time collection of computer data

We mentioned earlier that one of the three exceptions to the secrecy of telecommunication is the exception based on a law. The criminal procedure code provides a legal basis to order a service provider to intercept communication, upon a written and motivated decision of the law enforcement authorities. The order for interception can relate to identification data of a user, to localisation data or to the content of the communication.31 It is important to note that operators and providers of network services have to adapt their infrastructure accordingly. The modalities of this adaptation have been laid down in a Royal Decree of 9 January 2003.32

30 According to the preparatory works of the cybercrime convention, "search" means: to seek, read, inspect or review data.
31 Respectively, articles 46bis, 88bis and 90ter of the criminal procedure code.
32 Publication: Belgisch Staatsblad of 10 February 2003.


Mandatory retention of traffic data

The Belgian cybercrime law of 28 November 2000 has introduced the principle of mandatory retention of traffic data for network operators and service providers, during a period which may not be less than 12 months.33 It is rather doubtful whether such far-reaching measures – retaining all traffic data of every single user for possible future law enforcement purposes – would pass the "proportionality test" imposed by the European Court of Human Rights.34 The principle of mandatory retention of traffic data is not yet in force in Belgium; its entry into force depends upon the adoption of implementation modalities (for example, the period of retention and the list of data to be retained) by a Royal Decree. Many European countries have adopted similar laws, but it is striking to note that all of them presently face difficulties in drafting the precise implementation modalities, likely due to the sensitivity of the issue.

Other measures "threatening" the provision of anonymity services

Besides the new law enforcement powers and duties to collaborate laid down in the Belgian cybercrime law of 28 November 2000, a rather worrying new disposition introduced in article 150 of the Program law of 30 December 2001 should be mentioned:35 this disposition delegates to the King (i.e., the federal government) the power to forbid the provision of services which make impossible or hinder the identification of the user, search and localisation activities, or the interception of private electronic communication according to the conditions laid down in the Belgian criminal procedure code. The new law opens, in other words, the possibility to publish a Royal Decree that would forbid anonymity services. Such a measure would also impose heavy intrusions on the private life of end users. Up till now the disposition is still a 'dead letter', as no Royal Decree has been taken. It is also quite doubtful whether such a measure would pass the "proportionality test" applied by the European Court of Human Rights.

As a conclusion, we may say that law enforcement agencies have been granted a myriad of tools to perform criminal inquiries in a digitized environment. Some of them are no doubt necessary for the efficiency of criminal inquiries. However, in our view, the balance between the right to privacy

33 Art. 14 of the cybercrime law of 28 November 2000, modifying article 109terE of the law of 21 March 1991 regarding the reform of economic public enterprises.
34 See Goemans, C., and Dumortier, J., "Mandatory retention of traffic data: impact on anonymity", in Digital Anonymity and the Law – Tensions and Dimensions, Information Technology & Law Series (IT&Law Series), Volume 2, T.M.C. Asser Press, The Hague, The Netherlands, ISBN 90-6704-156-4.
35 Published in the B.S. of 30 December 1997.


and the whole set of sometimes far-reaching law enforcement measures, such as the mandatory retention of traffic data, is missing. In this context, it seems quite important to develop technical solutions to facilitate disclosure whenever revocation of anonymity is ordered in criminal investigations. Indeed, the existence of strong technical guarantees for the expeditious revocation of anonymity, whenever ordered by law enforcement, may foster smooth collaboration between service providers and law enforcement, and thus restrain the legislator from introducing further "anonymity unfriendly" regulations. However, as stated earlier, revocation of anonymity is an irreversible action. We should therefore take into account, as rightly raised by the Privacy Commission, that revocation tools for anonymity (or identification control) should not be open to abuse by private parties, such as, for instance, copyright defenders.

13.6

Liability issues

In order to sustain the viability of anonymity services, users should be able to trust the service. This implies that a clear liability regime should be in place in case of deficient services. At the same time, the liability regime should be balanced, so as to ensure that economic or other incentives remain for providing anonymity services in society. The core legal question regarding liability is whether the existing liability rules in general contract law or in specific legislation for electronic services are appropriate, or whether we need to create an adapted legal framework that offers sufficient legal security and economic incentives to develop anonymity services on the market. At this stage, we give a brief overview of the relevant current rules regarding liability. As will be explained below, service providers have a contractual obligation towards users to provide appropriate anonymity services (Section 13.6.1). Furthermore, service providers have a responsibility towards society when they themselves participate in criminal offences through the means of anonymity services (Section 13.6.2), or when it appears that anonymity services have been misused by users (Section 13.6.3).

13.6.1

Liability in general contract law

The relationship between the user and the provider of anonymity services is of a contractual nature. The main obligation of the anonymity provider consists in adequately transmitting or making available anonymized information. If the anonymity provider does not succeed – for reasons other than acceptable grounds for revocation of anonymity – he will in principle be held liable for possible damage arising out of the inadequate implementation of the contractual obligation.


To assess contractual liability, countries with civil law traditions – such as Belgium, France and the Netherlands – draw a main distinction between obligations of result ("resultaatsverbintenissen") and obligations of effort ("inspanningsverbintenissen" or "middelenverbintenissen"). An example of a "resultaatsverbintenis" is the (on-line) purchase of goods: party A will deliver to party B a book X for an agreed price and within an agreed period of time. Examples of an "inspanningsverbintenis" are (on-line) consultancy services of all sorts: legal,36 medical, accountancy, taxation, etc. The obligation of a medical doctor to endeavour to relieve a person from his illness is the example par excellence of an "inspanningsverbintenis". The distinction is important because the liability regime differs. Obviously, the contractual liability regime for obligations of result is stricter: proof of having used due diligence is not enough to be exempted from liability. The burden of proof for the creditor of an obligation of result is limited to showing that the debtor did not achieve the result, whether or not the debtor acted in good faith and with reasonable care. However, the debtor can escape liability by proving that the failure to achieve the result (for example, delivering goods within an agreed period) is due to an "outside" cause beyond his control. The creditor of an obligation of effort, for example the family of a deceased patient, will have to prove a lack of due diligence or care on the side of the doctor; the mere failure to reach a positive result is not sufficient.

How should we apply this distinction to the specific obligation of a provider to transmit or make available anonymized information at the request of the user? Should we regard this "anonymity" obligation as an obligation of effort or of result? As mentioned above, this matters because both kinds of obligations have different liability regimes. We are inclined to opt for an obligation of result. It is indeed commonly accepted that an obligation is considered an obligation of result when, in the ordinary course of events, the undertaking can be expected to be achieved. If, for one reason or another, this were not the case for anonymity services, it should be clarified in a practice statement or user agreement. We noted the following in the user agreement of anonymizer.com: "the provider will use reasonable efforts to provide the user with the anonymity services". In terms of civil contract law, this would rather indicate an obligation of effort, with the corresponding liability regime.37 However, we underline that a liability regime cannot be formulated in such a way that a party is in practice freed from any effective obligation.

36 A single contract can cover both kinds of obligations; as a matter of example, legal services are overall "inspanningsverbintenissen", but respecting mandatory time limits for lodging an appeal in a case would be considered a "resultaatsverbintenis".
37 It should be mentioned that the terms of this user agreement are drafted in a US law environment, which does not operate this civil law distinction.


13.6.2


Criminal liability

Besides contractual liability, arising out of the improper implementation of a contractual obligation, a provider of anonymity services may also be held liable for criminal offences. For example, the anonymity provider may be prosecuted for intentional participation in money laundering practices or in the distribution of child pornography. If proven guilty, he will be convicted of the criminal offence and may also be held liable for the civil damages arising out of it. Obviously, anonymity providers will never be allowed to exclude their liability for criminal offences in a user agreement.

13.6.3

Liability in specific legislation

Specific liability regimes have been introduced in Europe for a number of services, with a view to creating a harmonized and balanced legal framework at EU level, in particular to enhance trust on the side of the users on the one hand, and to support the economic viability of those services on the other, for example through a fair liability regime. Relevant for providers of anonymity services are the rules developed in the e-commerce Directive and in the Directive on electronic signatures.

E-commerce Directive

Applicability of the e-commerce Directive. The e-commerce Directive38 is applicable to "Information Society services", which are defined as "any service normally provided for remuneration,39 at a distance, by electronic means and at the individual request of a recipient of services".40 When anonymity services (e-mail, e-browsing, e-publishing) are offered as a service (not as software that can be directly installed on the user's computer), based on an economic activity, the provision of this anonymity service can be considered an Information Society service.

Liability regime developed in the e-commerce Directive. As a preliminary observation, it should be emphasized that the e-commerce Directive imposes neither a general obligation to monitor the information which providers

38 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of Information Society services, in particular electronic commerce, in the Internal Market, OJ L 178, 17 July 2000, http://www.europa.eu.int/eur-lex/en/search/search_oj.html
39 This also includes free services, as long as the service is provided in an economic context; for example, banners could pay the remuneration.
40 The notion of Information Society service, used in the EU Directive on electronic commerce, is defined in article 1(2) of Directive 98/48/EC of the European Parliament and of the Council of 20 July 1998 amending Directive 98/34/EC laying down a procedure for the provision of information in the field of technical standards and regulations, OJ L 204, 21.7.1998, and OJ L 217, 5.8.1998.


transmit or store, nor a general obligation to actively seek facts or circumstances indicating illegal activity. However, Member States may establish obligations for service providers to promptly inform public authorities of alleged illegal activities undertaken by recipients of their services.41 Furthermore, the limitation of liability of intermediary service providers (see below) with respect to the transmitted information does not affect the possibility of injunctions of different kinds, such as orders by courts or administrative authorities requiring the termination or prevention of an infringement, including the removal of illegal information.42 According to the e-commerce Directive, on-line service providers have a duty to act, under certain circumstances, with a view to preventing or stopping illegal activities.43 Exemptions from this duty apply only to intermediary service providers. Intermediary services concern those activities in the on-line service that are limited to the technical process of operating and giving access to a communication network over which information – made available by third parties – is transmitted or temporarily stored for the sole purpose of making the transmission more efficient.44 This activity is of a mere technical, automatic and passive nature, which implies that the on-line service provider has neither knowledge of nor control over the information that is transmitted or stored. Intermediary service providers can benefit from exemptions for "mere conduit", "caching" and "hosting" activities45 when they are in no way involved with the content of the information transmitted. For activities of "mere conduit" this exemption requires that intermediary service providers do not initiate the transmission, do not select the receiver of the transmission, and do not select or modify the information contained in the transmission.46 The observation made in this respect in recital 43 of the e-commerce Directive is quite relevant for anonymity services. It underlines that these requirements do not cover manipulations of a technical nature that take place in the course of the transmission and do not alter the integrity of the information contained in the transmission. In other words, such technical manipulations are not considered a way of being involved with the transmitted information, and can therefore be subject to the exemption from liability. We would be inclined to think that the anonymization process performed by the anonymity provider is a manipulation of a technical nature, and thus could benefit from the liability exemption for transmitted information.

41 Article 15 of the e-commerce Directive.
42 Recital 45 of the e-commerce Directive.
43 Articles 12 to 14 of the e-commerce Directive.
44 Recital 42 of the e-commerce Directive.
45 "Mere conduit" activity: mere transmission or provision of access to a communication network; "caching" activity: intermediate and temporary storage of information; "hosting" activity: storage of information at the request of a recipient of the service.
46 Article 12 of the e-commerce Directive.


Providers of "caching" activities will in principle be exempted from liability with respect to the stored information. However, they are under a duty to act expeditiously to remove or disable access to the stored information upon obtaining actual knowledge of the fact that the information at the initial source of the transmission has been removed from the network, that access to it has been disabled, or that a court or an administrative authority has ordered such removal or disablement.47 A fortiori, an on-line service provider who deliberately collaborates with one of the recipients of his service in order to undertake illegal acts goes beyond the activities of "mere conduit" or "caching", and as a result cannot benefit from the liability exemptions established for these activities. Likewise, "hosting" activities will be exempted from liability as long as the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent. Upon obtaining such knowledge or awareness, the hosting provider has to expeditiously remove or disable access to the information.

To conclude, we feel that these liability provisions for on-line information in general are appropriate for anonymity services when the anonymity provider limits his intervention to "mere conduit", "caching" or "hosting" activities. This is for instance the case for the provision of anonymous e-mail, anonymous e-browsing and anonymous e-publishing.

Liability regime according to the Directive on electronic signatures48

The Directive on electronic signatures expressly states that certification service providers, issuing certificates or providing other services related to electronic signatures, cannot be prevented from indicating in the certificate a pseudonym instead of the signatory's name. In this context, the liability regime of certification service providers is relevant for our research. This specific liability regime, as developed in the Directive on electronic signatures, concerns the relation between certification service providers issuing qualified certificates and any legal or natural person who reasonably relied on such a certificate. It affects neither the contractual relationship between certification service providers and the recipients of certificates, nor the relationship between certification service providers and public authorities. Three aspects of this specific liability regime should be considered.

A service provider issuing a (pseudonymous) certificate is liable for the damage resulting from the inaccuracy or incompleteness of the information contained in the qualified certificate at the time of its issuance.

47 Article 13.2 of the e-commerce Directive.
48 Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community framework for electronic signatures, O.J. L 13, 19.01.2000, p. 12.


Indeed, one cannot reasonably expect the certification service provider to permanently verify the accuracy of the information; this is a responsibility of the recipient of the certificate, who may have to revoke the certificate.

A service provider issuing a (pseudonymous) certificate should also guarantee that the recipient of the certificate holds, at the time of the issuance of the certificate, the signature-creation data corresponding to the signature-verification data given in the certificate. If the certification service provider generates both, he should ensure that they can be used in a complementary manner.

Finally, a service provider issuing a (pseudonymous) certificate should ensure that the date and time of revocation of a certificate are accurately registered.

The certification service provider is liable for damage caused by non-compliance with the above-mentioned obligations, unless he proves that he has not acted negligently. It is for instance conceivable that the certification service provider registered the revocation of a certificate in a register accessible on his website, but that third parties had no access to the website for a reason beyond the control of the certification service provider. The certification service provider can limit his liability on two grounds only: by indicating in a qualified certificate limitations on the use of that certificate, or on the value of transactions for which the certificate can be used, provided that these limitations are recognisable to third parties.

At this stage, we would conclude that the existing liability regimes, with possible flexibility and clarification developed in user agreements on a contractual basis, should be sufficient to provide a legal liability framework for anonymity services. A model user agreement will be developed in future research.


Part V

Conclusions


Chapter 14

Conclusions and future work

It should be clear that anonymity control mechanisms are required in order to make anonymous applications and services more acceptable in our society. Anonymity control mechanisms prevent or discourage abuse and fraud, while preserving the anonymity of legitimate users. As such, there is also a legal incentive to implement anonymity control mechanisms.

In this deliverable we proposed a taxonomy for anonymity control, and applied it to a wide range of electronic services, including both infrastructure-related mechanisms and specific applications. We investigated the state-of-the-art of the anonymity control mechanisms that are provided in each of these services. We identified three different categories of anonymity control mechanisms. The first category of controls prevents specific abuse, while preserving unconditional anonymity of the users. These controls are often already an essential part of the anonymous application; an example is the inability to cast a vote twice in electronic voting systems. The second category of controls discourages specific abuse, or allows a user to revoke his anonymity for specific purposes. This control provides conditional anonymity; the condition upon which the anonymity can be revoked is completely under the control of the user. A well-known example is the double-spending detection mechanism in electronic payment systems. The third category is known as revocability. This anonymity control mechanism provides conditional anonymity under the control of a trustee. That is, when a user or a transaction is considered suspicious or inappropriate, and if authorized by the law, the trustee can revoke the anonymity of the user or the transaction.

In particular the last category shows that anonymity control can be a very delicate matter. Finding and implementing the right balance, in the right context, between the anonymity of legitimate users and the control of abuse is of crucial importance. This is still an open problem for many applications. Of specific concern here is the trustee entity, which is – from a technical point of view – a very powerful entity. The trustee may look very different depending on the application and the types of misuse a user can commit (e.g., a distributed trustee or not). For some applications, it might even be better not to implement trustee-controlled conditional anonymity at all. The two sketches below illustrate the second and third categories.
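To make the second category concrete, the following minimal sketch (in Python; the function names, the single challenge bit and the plain XOR splitting are our simplifying assumptions, abstracting away the blind-signature machinery of real schemes such as [9, 64]) shows the secret-splitting idea behind double-spending detection: the payer's identity is split into two one-time-pad shares, each payment reveals exactly one share chosen by the merchant's random challenge, and only spending the same coin twice exposes both shares and hence the identity.

    import secrets

    def split_identity(identity: bytes):
        # One-time-pad split: identity = share0 XOR share1.
        share0 = secrets.token_bytes(len(identity))
        share1 = bytes(a ^ b for a, b in zip(identity, share0))
        return share0, share1

    def respond(challenge_bit: int, share0: bytes, share1: bytes) -> bytes:
        # Per payment, the payer reveals exactly one share; a single share
        # is a uniformly random string and leaks nothing about the identity.
        return share0 if challenge_bit == 0 else share1

    def trace_double_spender(revealed_a: bytes, revealed_b: bytes) -> bytes:
        # Two payments answered with different challenge bits reveal both
        # shares; their XOR reconstructs the payer's identity.
        return bytes(a ^ b for a, b in zip(revealed_a, revealed_b))

    # Spending the same coin at two shops exposes the identity.
    s0, s1 = split_identity(b"user-42")
    assert trace_double_spender(respond(0, s0, s1), respond(1, s0, s1)) == b"user-42"

Real schemes run this challenge-response k times in parallel, so that two spendings answer different challenges in at least one position with probability 1 - 2^-k, and they embed the shares in the coin in a blindly verifiable way, so that the bank learns nothing at withdrawal time.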

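For the third category, one natural way to temper the power of the trustee is to distribute it: the tracing (anonymity-revoking) key is secret-shared among n trustees so that only a quorum of t of them can revoke anonymity, in the spirit of [16, 62]. The sketch below is a minimal t-out-of-n Shamir secret sharing in the same vein as above; the field modulus and the representation of the tracing key as an integer are illustrative assumptions, not a description of any particular published scheme.

    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime; any prime larger than the secret works

    def share_secret(secret: int, t: int, n: int):
        # Random polynomial of degree t-1 whose constant term is the secret;
        # trustee x receives the point (x, f(x)).
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term, i.e.
        # the tracing key; fewer than t shares reveal nothing about it.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    # A tracing key shared among 5 trustees; any 3 of them can revoke anonymity.
    tracing_key = 123456789
    shares = share_secret(tracing_key, t=3, n=5)
    assert reconstruct(shares[:3]) == tracing_key
    assert reconstruct(shares[2:]) == tracing_key  # any quorum of 3 works

In a deployed system the reconstructed value would typically be a decryption key for an identity escrowed with each transaction; the sketch only shows the quorum mechanics that keep a single trustee from revoking anonymity unilaterally.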

In past years, governments have shown their concern about how to enhance citizens' trust in the Information Society. Simultaneously, governments have elaborated far-reaching measures facilitating control over the Information Society by strengthening law enforcement powers. Some measures are certainly appropriate. However, other measures, such as the mandatory retention of traffic data, may drift away from the proportionality requirements applied by the European Court of Human Rights. Quite worrying is a disposition introduced in article 150 of the Belgian Program law of 30 December 2001, which delegates to the King (i.e., the federal government) the power to forbid the provision of telecommunication services which make impossible or hinder the identification of the user, search and localisation activities, or the interception of private telecommunication according to the conditions laid down in the Belgian criminal procedure code. The new law opens, in other words, the possibility to publish a Royal Decree that forbids anonymity services. Up till now this disposition remains a 'dead letter', as no Royal Decree has been taken. It is also quite doubtful whether such a measure would pass the "proportionality test" applied by the European Court of Human Rights.

In this context, it seems quite important to develop technical solutions to facilitate disclosure whenever revocation of anonymity is ordered in criminal investigations. Indeed, the existence of strong technical guarantees for the expeditious revocation of anonymity, whenever ordered by law enforcement, may foster smooth collaboration between service providers and law enforcement, and thus restrain the legislator from introducing further "anonymity unfriendly" regulations. However, revocation of anonymity is an irreversible action. We should no doubt take into account, as rightly raised by the Privacy Commission, that private parties, such as for instance copyright defenders, should not be able to abuse tools for anonymity revocation (or identification control). The distribution of the power of revocation among appropriate parties is important here.

In future technical work we will analyze in depth the mechanisms for anonymity control, and distill basic building blocks from the existing solutions. An interesting question is whether the concept of anonymity control can be generalized and merged with the notion of anonymity degree (see Deliverable 5 [35]). In future legal work, we have to reflect on the status of the 'trustee', the key actor in the process of revocation of anonymity. A model of practice statements and a model user agreement will also be developed.


Bibliography

[1] Masayuki Abe and Miyako Ohkubo. Provably Secure Fair Blind Signatures with Tight Revocation. In Colin Boyd, editor, Advances in Cryptology – ASIACRYPT 2001, Lecture Notes in Computer Science, LNCS 2248, pages 583–601. Springer-Verlag, December 2001.

[2] Anonymizer. http://www.anonymizer.com/.

[3] APES. Anonymity and Privacy in Electronic Services. https://www.cosic.esat.kuleuven.ac.be/apes/.

[4] Cryptographic Auctions. http://www.tcs.hut.fi/~helger/crypto/link/protocols/auctions.html.

[5] A. Beimel, Y. Ishai, and T. Malkin. Reducing the servers' computation in private information retrieval: PIR with preprocessing. In Advances in Cryptology – CRYPTO'00, Lecture Notes in Computer Science, LNCS 1880, pages 55–73. Springer-Verlag, August 2000.

[6] Oliver Berthold, Hannes Federrath, and Stefan Köpsell. Web MIXes: A system for anonymous and unobservable Internet access. In Hannes Federrath, editor, Designing Privacy Enhancing Technologies. Proceedings of the Workshop on Design Issues in Anonymity and Unobservability (July 2000), Lecture Notes in Computer Science, LNCS 2009, pages 115–129. Springer-Verlag, 2001.

[7] S. Brands. A technical overview of digital credentials.

[8] S. Brands. Rethinking Public Key Infrastructure and Digital Certificates – Building in Privacy. PhD thesis, Eindhoven University of Technology, 1999.

[9] Stefan Brands. Untraceable Off-line Cash in Wallets with Observers. In Douglas R. Stinson, editor, Advances in Cryptology – CRYPTO'93, Lecture Notes in Computer Science, LNCS 773, pages 302–318. Springer-Verlag, August 1993.


[10] Felix Brandt. Fully private auctions in a constant number of rounds. In Proceedings of Financial Cryptography 2003 (revised pre-proceedings; final version to appear), Gosier, Guadeloupe, 2003.

[11] Jørgen Brandt, Ivan Damgård, and Peter Landrock. Anonymous and verifiable registration in databases. In C. G. Günther, editor, Advances in Cryptology – EUROCRYPT'88, Lecture Notes in Computer Science, LNCS 330, pages 167–176. Springer-Verlag, May 1988.

[12] Ernie Brickell, Peter Gemmell, and David Kravitz. Trustee-based Tracing Extensions to Anonymous Cash and the Making of Anonymous Change. In Proceedings of the 6th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 457–466, January 1995.

[13] J. Camenisch and E. Van Herreweghen. Design and implementation of the idemix anonymous credential system. In Proceedings of the 9th ACM Conference on Computer and Communications Security. ACM Press, November 2002.

[14] J. Camenisch and A. Lysyanskaya. Efficient non-transferable anonymous multi-show credential system with optional anonymity revocation. In B. Pfitzmann, editor, Advances in Cryptology – EUROCRYPT'01, volume 2045 of Lecture Notes in Computer Science, pages 93–118. Springer-Verlag, 2001.

[15] Jan Camenisch. Group Signature Schemes and Payment Systems Based on the Discrete Logarithm Problem. PhD thesis, ETH Zürich, February 1998.

[16] Jan Camenisch, Ueli Maurer, and Markus Stadler. Digital Payment Systems with Passive Anonymity-Revoking Trustees. In Elisa Bertino, Helmut Kurth, Giancarlo Martella, and Emilio Montolivo, editors, Proceedings of the Fourth European Symposium on Research in Computer Security (ESORICS), Lecture Notes in Computer Science, LNCS 1146, pages 33–43. Springer-Verlag, September 1996.

[17] Jan Camenisch, Jean-Marc Piveteau, and Markus Stadler. An Efficient Electronic Payment System Protecting Privacy. In Dieter Gollmann, editor, Proceedings of the Third European Symposium on Research in Computer Security (ESORICS), Lecture Notes in Computer Science, LNCS 875, pages 207–215. Springer-Verlag, November 1994.

[18] Jan Camenisch, Jean-Marc Piveteau, and Markus Stadler. An Efficient Fair Payment System. In Proceedings of the 3rd ACM Conference on Computer and Communications Security, pages 88–94, March 1996.


[19] D. Chaum. Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10):1030–1044, 1985.

[20] D. Chaum. Showing credentials without identification: Transferring signatures between unconditionally unlinkable pseudonyms. In J. Seberry and J. Pieprzyk, editors, Advances in Cryptology – AUSCRYPT '90, volume 453 of Lecture Notes in Computer Science, pages 246–264. Springer-Verlag, 1990.

[21] D. Chaum. Achieving electronic privacy. Scientific American, 267(2):96–101, August 1992.

[22] D. Chaum and J. Evertse. A secure and privacy-protecting protocol for transmitting personal information between organizations. In Advances in Cryptology – CRYPTO '86, Lecture Notes in Computer Science, pages 118–167. Springer-Verlag, 1986.

[23] David Chaum. Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM, 24(2):84–88, February 1981.

[24] David Chaum. The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability. Journal of Cryptology, 1(1):65–75, 1988.

[25] David Chaum and Torben Pryds Pedersen. Transferred Cash Grows in Size. In Rainer A. Rueppel, editor, Advances in Cryptology – EUROCRYPT'92, Lecture Notes in Computer Science, LNCS 658, pages 390–407. Springer-Verlag, May 1992.

[26] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. Private information retrieval. In 36th IEEE Symposium on Foundations of Computer Science (FOCS), pages 41–50, 1995.

[27] Joris Claessens. Analysis and design of an advanced infrastructure for secure and anonymous electronic payment systems on the Internet. PhD thesis, Katholieke Universiteit Leuven, December 2002. 220 pages.

[28] Joris Claessens, Claudia Díaz, Caroline Goemans, Bart Preneel, Joos Vandewalle, and Jos Dumortier. Revocable anonymous access to the Internet? Internal Report, January 2003. 28 pages.

[29] Lance Cottrell. Mixmaster & Remailer Attacks. Was available at http://www.obscura.com/~loki/remailer/remailer-essay.html; now at http://www.inf.tu-dresden.de/~hf2/anon/mixmaster/remailer-essay.html.


[30] Custodix. http://www.custodix.be/.

[31] George Danezis, Roger Dingledine, and Nick Mathewson. Mixminion: Design of a Type III anonymous remailer protocol. In Proceedings of the 2003 IEEE Symposium on Security and Privacy, 2003.

[32] DatAnon. http://www.datanon.net/.

[33] George Davida, Yair Frankel, Yiannis Tsiounis, and Moti Yung. Anonymity Control in E-Cash Systems. In Rafael Hirschfeld, editor, Proceedings of Financial Cryptography '97, Lecture Notes in Computer Science, LNCS 1318, pages 1–16. Springer-Verlag, February 1997.

[34] Bart De Win, Vincent Naessens, Claudia Díaz, Stefaan Seys, Caroline Goemans, Joris Claessens, Bart De Decker, Jos Dumortier, and Bart Preneel. Technologies overview. Anonymity and Privacy in Electronic Services (APES), Deliverable 3, November 2001. 93 pages.

[35] Claudia Díaz, Vincent Naessens, Joris Claessens, Bart De Win, Stefaan Seys, Bart De Decker, and Bart Preneel. Tools for technologies and applications. Anonymity and Privacy in Electronic Services (APES), Deliverable 5, December 2002. 87 pages.

[36] Yair Frankel, Yiannis Tsiounis, and Moti Yung. Indirect disclosure proof: Achieving efficient fair off-line e-cash. In Kwangjo Kim and Tsutomu Matsumoto, editors, Advances in Cryptology – ASIACRYPT'96, Lecture Notes in Computer Science, LNCS 1163, pages 286–300. Springer-Verlag, November 1996.

[37] Yair Frankel, Yiannis Tsiounis, and Moti Yung. Fair Off-Line e-Cash made easy. In Kazuo Ohta and Dingyi Pei, editors, Advances in Cryptology – ASIACRYPT'98, Lecture Notes in Computer Science, LNCS 1514, pages 257–270. Springer-Verlag, October 1998.

[38] Matthew K. Franklin and Moti Yung. The Blinding of Weak Signatures. In Alfredo De Santis, editor, Advances in Cryptology – EUROCRYPT'94, Lecture Notes in Computer Science, LNCS 950, pages 67–76. Springer-Verlag, May 1994.

[39] FreeNet. The Freenet home page. http://www.freenetproject.org/.

[40] Eiichiro Fujisaki and Tatsuaki Okamoto. Practical Escrow Cash System. In Mark Lomas, editor, Proceedings of the 4th Security Protocols Workshop, Lecture Notes in Computer Science, LNCS 1189, pages 33–48. Springer-Verlag, April 1996.


[41] Y. Gertner, S. Goldwasser, and T. Malkin. A random server model for private information retrieval, or how to achieve information theoretic PIR avoiding database replication. In RANDOM, Lecture Notes in Computer Science, LNCS 1518, pages 200–217. Springer-Verlag, 1998.

[42] Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin. Protecting data privacy in private information retrieval schemes. In 30th Annual ACM Symposium on Theory of Computing (STOC), pages 151–160, 1998.

[43] Gnutella. The Gnutella home page. http://www.gnutella.com/.

[44] Ian Goldberg. A Pseudonymous Communications Infrastructure for the Internet. PhD thesis, University of California at Berkeley, 2000.

[45] Ian Goldberg and David Wagner. TAZ Servers and the Rewebber Network: Enabling Anonymous Publishing on the World Wide Web. First Monday, 3(4), April 1998.

[46] E. Van Herreweghen. Anonymous credentials and accountability. PhD thesis – to be published.

[47] Markus Jakobsson and David M'Raïhi. Mix-based Electronic Payments. In Stafford Tavares and Henk Meijer, editors, Proceedings of the 5th Annual International Workshop on Selected Areas in Cryptography (SAC'98), Lecture Notes in Computer Science, LNCS 1556, pages 157–173. Springer-Verlag, August 1998.

[48] Markus Jakobsson and Joy Müller. Improved Magic Ink Signatures Using Hints. In Matt Franklin, editor, Proceedings of Financial Cryptography '99, Lecture Notes in Computer Science, LNCS 1648, pages 253–268. Springer-Verlag, February 1999.

[49] Markus Jakobsson and Moti Yung. Revokable and Versatile Electronic Money. In Proceedings of the 3rd ACM Conference on Computer and Communications Security, pages 76–87, March 1996.

[50] Markus Jakobsson and Moti Yung. Applying Anti-Trust Policies to Increase Trust in a Versatile E-Money System. In Rafael Hirschfeld, editor, Proceedings of Financial Cryptography '97, Lecture Notes in Computer Science, LNCS 1318, pages 217–238. Springer-Verlag, February 1997.

[51] Markus Jakobsson and Moti Yung. Distributed "Magic Ink" Signatures. In Walter Fumy, editor, Advances in Cryptology – EUROCRYPT'97, Lecture Notes in Computer Science, pages 450–464. Springer-Verlag, May 1997.


[52] Wen-Shenq Juang and Chin-Laung Lei. Fair Blind Signatures Based on Discrete Logarithm. In Proceedings of the National Computer Symposium, Taiwan, pages C95–C100, 1997.

[53] Ari Juels. Trustee Tokens: Simple and Practical Anonymous Digital Coin Tracing. In Matt Franklin, editor, Proceedings of Financial Cryptography '99, Lecture Notes in Computer Science, LNCS 1648, pages 29–45. Springer-Verlag, February 1999.

[54] Dennis Kügler and Holger Vogt. Auditable Tracing with Unconditional Anonymity. In Proceedings of the Second International Workshop on Information Security Applications (WISA 2001), September 2001.

[55] Dennis Kügler and Holger Vogt. Fair Tracing without Trustees. In Paul Syverson, editor, Proceedings of Financial Cryptography 2001, Lecture Notes in Computer Science, LNCS 2339. Springer-Verlag, February 2001.

[56] Dennis Kügler and Holger Vogt. Off-line Payments with Auditable Tracing. In Proceedings of Financial Cryptography 2002, Lecture Notes in Computer Science. Springer-Verlag, March 2002.

[57] M. Kumar and S. Feldman. Internet auctions. In 3rd USENIX Workshop on Electronic Commerce, pages 49–60, Boston, Mass., September 1998.

[58] LPWA. The Lucent Personalized Web Assistant website. http://www.bell-labs.com/project/lpwa/.

[59] Gennady Medvinsky and Clifford Neuman. NetCash: A design for practical electronic currency on the Internet. In Proceedings of the 1st ACM Conference on Computer and Communications Security, pages 102–106, November 1993.

[60] Ralph C. Merkle. Protocols for Public Key Cryptosystems. In Proceedings of the 1980 IEEE Symposium on Security and Privacy, pages 122–134, April 1980.

[61] David M'Raïhi. Cost-effective payment schemes with privacy regulation. In Kwangjo Kim and Tsutomu Matsumoto, editors, Advances in Cryptology – ASIACRYPT'96, Lecture Notes in Computer Science, LNCS 1163, pages 266–275. Springer-Verlag, November 1996.

[62] David M'Raïhi and David Pointcheval. Distributed Trustees and Revocability: a Framework for Internet Payment. In Rafael Hirschfeld, editor, Proceedings of Financial Cryptography '98, Lecture Notes in Computer Science, LNCS 1465, pages 28–42. Springer-Verlag, February 1998.


[63] Alexander Øhrn and Lucila Ohno-Machado. Using Boolean reasoning to anonymize databases. Artificial Intelligence in Medicine, 15(3):235–253, 1999.

[64] Tatsuaki Okamoto and Kazuo Ohta. Universal Electronic Cash. In Joan Feigenbaum, editor, Advances in Cryptology – CRYPTO'91, Lecture Notes in Computer Science, LNCS 576, pages 324–337. Springer-Verlag, August 1991.

[65] Holger Petersen and Guillaume Poupard. Efficient Scalable Fair Cash with Off-line Extortion Prevention. In Yongfei Han, Tatsuaki Okamoto, and Sihan Qing, editors, Proceedings of the First International Conference on Information and Communications Security (ICICS), Lecture Notes in Computer Science, LNCS 1334, pages 463–477. Springer-Verlag, November 1997. Long version: Technical Report, ENS, April 1997, 34 pages.

[66] Andreas Pfitzmann and Marit Köhntopp. Anonymity, Unobservability and Pseudonymity – A Proposal for Terminology. In Hannes Federrath, editor, Designing Privacy Enhancing Technologies, Lecture Notes in Computer Science, LNCS 2009, pages 1–9. Springer-Verlag, 2001. Proceedings of the Workshop on Design Issues in Anonymity and Unobservability, July 2000.

[67] Birgit Pfitzmann and Ahmad-Reza Sadeghi. Self-Escrowed Cash against User Blackmailing. In Yair Frankel, editor, Proceedings of Financial Cryptography 2000, Lecture Notes in Computer Science, LNCS 1962, pages 42–52. Springer-Verlag, February 2000.

[68] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable content-addressable network. In Proceedings of SIGCOMM, pages 161–172, 2001.

[69] Michael G. Reed, Paul F. Syverson, and David M. Goldschlag. Anonymous Connections and Onion Routing. IEEE Journal on Selected Areas in Communications, 16(4):482–494, May 1998. Special issue on Copyright and Privacy Protection.

[70] Michael K. Reiter and Aviel D. Rubin. Crowds: Anonymity for Web Transactions. ACM Transactions on Information and System Security (TISSEC), 1(1):66–92, November 1998.

[71] R. J. Anderson. The Eternity Service. In Proceedings of Pragocrypt '96, 1996.

[72] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent, peer-to-peer storage utility. In Proceedings of SOSP, pages 188–201, 2001.


[73] Pierangela Samarati and Latanya Sweeney. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. In Proceedings of the IEEE Symposium on Security and Privacy, May 1998.

[74] Tomas Sander and Amnon Ta-Shma. Auditable, Anonymous Electronic Cash. In Michael Wiener, editor, Advances in Cryptology – CRYPTO'99, Lecture Notes in Computer Science, LNCS 1666, pages 555–572. Springer-Verlag, August 1999.

[75] Tomas Sander and Amnon Ta-Shma. Flow Control: A New Approach For Anonymity Control in Electronic Cash Systems. In Matt Franklin, editor, Proceedings of Financial Cryptography '99, Lecture Notes in Computer Science, LNCS 1648, pages 46–61. Springer-Verlag, February 1999.

[76] Tomas Sander and Amnon Ta-Shma. On Anonymous Electronic Cash and Crime. In Masahiro Mambo and Yuliang Zheng, editors, Second International Workshop on Information Security (ISW'99), Lecture Notes in Computer Science, LNCS 1729, pages 202–206. Springer-Verlag, November 1999.

[77] Tomas Sander, Amnon Ta-Shma, and Moti Yung. Blind, Auditable Membership Proofs. In Yair Frankel, editor, Proceedings of Financial Cryptography 2000, Lecture Notes in Computer Science, LNCS 1962, pages 53–71. Springer-Verlag, February 2000.

[78] V. Scarlata, B. N. Levine, and C. Shields. Responder anonymity and anonymous peer-to-peer file sharing. In Proceedings of ICNP 2001, 2001.

[79] Stefaan Seys, Claudia Díaz, Bart De Win, Vincent Naessens, Caroline Goemans, Joris Claessens, Wim Moreau, Bart De Decker, Jos Dumortier, and Bart Preneel. Requirement study of different applications. Anonymity and Privacy in Electronic Services (APES), Deliverable 2, May 2001. 73 pages.

[80] Rob Sherwood, Bobby Bhattacharjee, and Aravind Srinivasan. P5: A Protocol for Scalable Anonymous Communication. In Proceedings of the 2002 IEEE Symposium on Security & Privacy, May 2002.

[81] Daniel R. Simon. Anonymous Communication and Anonymous Cash. In Neal Koblitz, editor, Advances in Cryptology – CRYPTO'96, Lecture Notes in Computer Science, LNCS 1109, pages 61–73. Springer-Verlag, August 1996.

[82] Markus Stadler. Cryptographic Protocols for Revocable Privacy. PhD thesis, ETH Zürich, May 1996.


[83] Markus Stadler, Jean-Marc Piveteau, and Jan Camenisch. Fair Blind Signatures. In Louis Guillou and Jean-Jacques Quisquater, editors, Advances in Cryptology – EUROCRYPT'95, Lecture Notes in Computer Science, LNCS 921, pages 209–219. Springer-Verlag, May 1995.

[84] Frank Stajano and Ross Anderson. The cocaine auction protocol: On the power of anonymous broadcast. In A. Pfitzmann, editor, Proceedings of the Information Hiding Workshop 1999, Lecture Notes in Computer Science, Dresden, Germany, 1999. Springer-Verlag.

[85] Statistics Netherlands. http://www.cbs.nl/sdc/argus.htm and http://neon.vb.cbs.nl/casc/.

[86] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. In Proceedings of SIGCOMM, pages 149–160, 2001.

[87] Koutarou Suzuki and Makoto Yokoo. Secure Generalized Vickrey Auction using Homomorphic Encryption. In Proceedings of the Seventh International Financial Cryptography Conference (FC'03), 2003.

[88] Latanya Sweeney. Protection models for anonymous databases. http://citeseer.nj.nec.com/371102.html.

[89] Jacques Traoré. Making Unfair a "Fair" Blind Signature Scheme. In Yongfei Han, Tatsuaki Okamoto, and Sihan Qing, editors, Proceedings of the First International Conference on Information and Communications Security (ICICS), Lecture Notes in Computer Science, LNCS 1334, pages 386–397. Springer-Verlag, November 1997.

[90] Type 0 re-mailer. http://www.penet.fi/.

[91] Type 1 and 2 re-mailers. http://anon.efga.org/.

[92] Sebastiaan von Solms and David Naccache. On Blind Signatures and Perfect Crimes. Computers & Security, 11(6):581–583, October 1992.

[93] Marc Waldman, Aviel D. Rubin, and Lorrie Faith Cranor. Publius: A robust, tamper-evident, censorship-resistant web publishing system. In Proceedings of the 9th USENIX Security Symposium, August 2000.
