P4TC—Provably-Secure yet Practical Privacy-Preserving Toll Collection

MAX HOFFMANN∗, Ruhr-Universität Bochum, Germany
VALERIE FETZER†, MATTHIAS NAGEL‡, ANDY RUPP§, and REBECCA SCHWERDT¶, Karlsruhe Institute of Technology, Germany

Electronic toll collection (ETC) is widely used all over the world not only to finance our road infrastructures, but also to realize advanced features like congestion management and pollution reduction by means of dynamic pricing. Unfortunately, existing systems rely on user identification and allow tracing a user’s movements. Several abuses of this personalized location data have already become public. In view of the planned European-wide interoperable tolling system EETS and the new EU General Data Protection Regulation, location privacy becomes of particular importance. In this paper, we propose a flexible cryptographic model and protocol framework designed for privacy-preserving toll collection in the most dominant setting, i.e., Dedicated Short Range Communication (DSRC) ETC. As opposed to our work, most related cryptographic proposals target a less popular type of toll collection based on Global Navigation Satellite Systems (GNSS), and do not come with a thorough security model and proof. In fact, to the best of our knowledge, our system is the first in the DSRC setting with a (rigorous) security model and proof. A major challenge in designing the framework at hand was to combine provable security and practicality, where the latter includes practical performance figures and a suitable treatment of real-world issues, like broken on-board units etc. For our ETC system, we make use of and significantly extend a payment protocol building block, called Black-Box Accumulators, introduced at ACM CCS 2017. Additionally, we provide a prototypical implementation of our system on realistic hardware. This implementation already features fairly practical performance figures, even though there is still room for optimizations.
An interaction between an on-board unit and a road-side unit is estimated to take less than a second allowing for toll collection at full speed assuming one road-side unit per lane.

Contents

1 Introduction
2 Security Model
3 System Definition
4 Protocol Overview
5 Security Theorem
6 Performance Evaluation
References
A Preliminaries
B Adversarial Model
C Full System Definition
D Full Protocol Description
E Security Proof

∗ The author is supported by DFG grant PA 587/10-1.
† The author is supported by DFG grant RU 1664/3-1.
‡ The author is supported by the German Federal Ministry of Education and Research within the framework of the project “Sicherheit vernetzter Infrastrukturen (SVI)” in the Competence Center for Applied Security Technology (KASTEL).
§ The author is supported by DFG grant RU 1664/3-1 and the Competence Center for Applied Security Technology (KASTEL).
¶ The author is supported by the German Research Foundation (DFG) as part of the Research Training Group GRK 2153: Energy Status Data-Informatics Methods for its Collection, Analysis and Exploitation.
M. Hoffmann, V. Fetzer, M. Nagel, A. Rupp, R. Schwerdt • P4TC

1 INTRODUCTION

Electronic toll collection (ETC) is already deployed in many countries all over the world. A recent study [50] conducted by “Markets and Markets” predicts a CAGR of 9.16% for this market from 2017 to 2022, reaching 10.6 billion USD by 2022. Europe plans to introduce the first implementation of a fully interoperable tolling system (EETS) by 2027 [23]. Hence, ETC has been and will remain an important technology deserving a careful analysis of security and privacy issues as well as dedicated secure solutions. As ETC will become the default option for paying tolls with no easy way to opt out, privacy is a concern of particular importance.

Unfortunately, the systems in use today do not protect the location privacy of their users. This encourages abuse: EZ-Pass records, for instance, have been used as evidence in divorce lawsuits [53], EZ-Pass transponders are abused to track drivers throughout New York City [58], and the Norwegian AutoPASS system allowed anyone to obtain a transcript of which toll booths a car visited [25]. However, the only legitimate reason to store personalized location records in these systems is to bill customers. As opposed to other systems, such as loyalty systems, this data should not be used for other business-relevant purposes like, e.g., personalized advertising.

Thus, an efficient and cost-effective privacy-preserving mechanism which avoids data collection in the first place, but still enables the billing functionality, should be of interest to ETC providers as well. In this way, there is no need to deploy costly technical and organizational measures to protect a large amount of sensitive data, and there is no risk of a data breach resulting in costly lawsuits, fines, and loss of customer trust. This is especially interesting in view of the new EU General Data Protection Regulation (GDPR) [31], which has been in effect since May 2018. The GDPR stipulates comprehensive protection measures and heavy fines in case of non-compliance.

1.1 Classification of ETC

One can classify ETC systems based on what the user is charged for, where toll determination takes place, and how this determination is done [22, 54]. Concerning what the user is charged for, we distinguish between two major types of charging schemes:

Distance-based: The toll is calculated based on the distance traveled by the vehicle and adapted by other parameters like the type of vehicle, discounts, etc.

Access-based: Tolls apply to a specific geographic area, e.g., part of a city, a segment of a highway, a tunnel, etc. As before, the toll can also be dynamically adapted by other parameters. This charging scheme is typically used in urban areas—not only to finance the road infrastructure but also for congestion management and pollution reduction by adapting tolls dynamically.

There are two main types of toll determination environments:

Toll plaza: This is the traditional environment where cars pass toll booths on physically separated lanes which may be secured by barriers or cameras to enforce honest behavior.

Open road: Open road (aka free-flow) tolling is done without the use of toll booths, separated lanes, or barriers. Instead, traffic is not disrupted as tolls are collected in a seamless fashion without forcing cars to slow down. In the DSRC setting, this is enabled by equipping roads with toll gantries and enforcing honesty by cameras.

Several key technologies define how toll is determined:

Dedicated Short-Range Communication (DSRC): Today, this is the most widely used ETC technology worldwide and the de facto standard in Europe [54]. It is based on bidirectional radio communication between a road-side unit (RSU) and a mobile device aka on-board unit (OBU) installed in the vehicle. In today’s systems, the OBU just identifies the user to trigger a payment. However, more complex protocols (like ours) between OBU and RSU can be implemented.


Automatic Number Plate Recognition (ANPR): ANPR, aka video tolling, inherently violates privacy.

Global Navigation Satellite System (GNSS): In a GNSS-based system, the OBU keeps track of the vehicle’s location (e.g., by means of GPS) and processes the necessary information to measure its road usage autonomously, i.e., without the aid of an RSU. GNSS is typically used in combination with GSM for communicating with the toll service provider.

1.2 Drawbacks of Existing Systems and Proposals

Independent of the technology used, existing systems in practice build on identifying the user in order to charge him. Previous work on privacy-preserving toll collection mainly considers the GNSS setting and comes—apart from a few exceptions [4, 21, 24]—without any formal security analysis. Although DSRC is the most dominant setting in practice, it has not received much attention in the literature so far. Moreover, practical issues like “what happens if an OBU breaks down” are usually not taken into account by these proposals. We elaborate on related work in Section 1.4.

1.3 P4TC

We propose a comprehensive security model as well as provably-secure and efficient protocols for privacy-friendly ETC in the DSRC setting. Our definitional framework and the system are very flexible. We cover access-based and distance-based charging schemes as well as combinations of those. Our protocols work for toll plaza environments, but are (due to offline precomputations) efficient enough for open road tolling. Additionally, we also cope with several issues that may arise in practice, e.g., broken/stolen OBUs, RSUs with non-permanent internet connection, imperfections of violation enforcement technology, etc. To the best of our knowledge, ours is the most comprehensive formal treatment of ETC security and privacy to date. Section 1.5 provides an overview of the toll collection scenario and the desired properties we consider. Section 1.6 gives details on our contribution.

1.4 Related Work

In this section, we review proposals for privacy-preserving ETC. So far, more elaborate privacy-preserving ETC systems have been proposed for the GNSS setting than for the actually more widely used DSRC setting.

1.4.1 DSRC (OBU/RSU) Setting. Previous work [26, 41–43] in this setting mainly focuses on a pre-pay scenario where some form of e-cash is used to spend coins when passing an RSU. This scenario is not preferred by users due to its inconveniences. A user needs to ensure that he always has enough e-coins to pay the tolls. This is particularly inconvenient when prices change dynamically. In addition, using standard offline e-cash, the user may not overpay since he cannot get any change from the RSU (in a privacy-preserving way). So he would not only need enough money but also the right denomination of e-coins. Moreover, none of the previous proposals come with a formal security model and proof.

In [41–43], multiple electronic road pricing systems specifically tailored to Low Emission Zones (LEZ) are proposed. In [41], a user drives by an RSU and directly pays some price depending on this RSU. For [42, 43], the price a user has to pay depends on the time spent inside the LEZ. To this end, the user receives an e-ticket from an Entry-RSU when entering the LEZ which he needs to present again at an Exit-RSU when leaving the LEZ. For the actual payment in all these systems, some untraceable e-cash scheme that supports dynamic pricing is assumed but not specified. The systems require tamper-proof hardware for the OBU and are claimed to provide fraud protection and privacy for honest drivers.

In [26], a simple access-based toll collection system based on RSA blind signatures is sketched. Users buy a set of tokens/coins during registration which are used to pay toll while driving. Double-spending can be detected with high probability, and some ideas are presented to keep the double-spending database small.


In [7], the authors propose an interesting scheme for distance-based toll collection. Here, a user obtains a coin, used as an entry ticket, which is worth the maximum toll in the system and can be reused a fixed number of times. The actual toll is calculated at the exit RSU, where the user is reimbursed for the difference. The system supports revocation of a user’s anonymity and token revocation. The instantiation mixes cryptographic primitives from the Paillier and DLog setting. Zero-knowledge proofs for languages which mix statements (and share variables) of the two settings are also required. The actual proofs are not specified in the paper. Instead, they are just claimed to be “quite standard”, which we doubt. As opposed to ours, their system relies on online “over-spending” detection and does not come with any formal model or security proof.

1.4.2 GNSS (GPS) Setting. A variety of GNSS-based toll collection systems can be found in the literature. Here, the OBU equipped with GPS and GSM typically collects location-time data or road segment prices and sends this data to the toll service provider (TSP). To ensure that the user behaves honestly and, e.g., does not omit or forge data, unpredictable spot checks are assumed which force a user to reveal the data he sent at a certain location and time. In a reconciliation phase, the user calculates his toll based on the data sent and proves that his calculation is correct.

One example of the GNSS approach is the simple and flexible electronic road pricing system in [27]. There, the server collects hash values of the trip records from the OBUs in the form of a hash tree. In the reconciliation phase, the consistency of the hash tree is verified. However, the whole system is only described imprecisely, and the security and privacy of the system are not proven.

In VPriv [55], the OBU anonymously sends tagged location-time data to the TSP while driving.
The user previously committed to use exactly these random tags in a registration phase with the TSP. In an inefficient reconciliation phase, each user needs to download the database of all tagged location-time tuples, calculate his total fee, and prove that, for this purpose, he correctly used the tuples belonging to his tags without leaking those. In [24], ProVerif is used to show that VPriv is privacy-preserving for honest-but-curious adversaries.

In [38], a road pricing system is presented in which the OBU anonymously sends location data to a central server. The system is based on splitting a single trip into several unlinkable segments (called “legs”) and on distributing the calculation of the fee for a trip between several distinct entities. During the reconciliation phase, every OBU learns the locations of the spot check devices that recorded its corresponding vehicle. The security and privacy requirements are only proved informally.

In the PrETP [4] scheme, the OBU non-anonymously sends payment tuples consisting of commitments to location, time, and the corresponding price that the OBU determined itself. During reconciliation with the TSP, the user presents his total toll and proves that this is indeed the sum of the individual prices he sent, using the homomorphic property of the commitment scheme. The authors prove their system secure using the ideal/real world paradigm.

In [51], the authors identify large-scale driver collusion as a potential threat to the security of PrETP (and other systems): as spot check locations are leaked to the drivers in the reconciliation phase, they may collude to cheat the system by sharing these locations and only sending correct payment tuples nearby. To this end, the Milo system is constructed as an extension of PrETP. In contrast to PrETP, the location of the spot checks is not revealed during the monthly reconciliation phase. Therefore, drivers are less motivated to collude and cheat.
However, if a cheating user is caught, the corresponding spot check location is still revealed. Thus, Milo does not protect against mass collusion of dishonest drivers. No security or privacy proofs are given.

In [20], the authors propose a system based on group signatures that achieves k-anonymity. A user is assigned to a group of drivers during registration. While driving, the user’s OBU sends location, time, and group ID—signed using one of the group’s signature keys—to the TSP. At the end of a billing period, each user is expected to pay his toll. If the sum of tolls payable by the group (calculated from the group’s location-time data) is not equal to the total toll actually paid by the group, a dispute solving protocol is executed. During dispute solving, the


group manager recalculates each user’s toll and thus catches the cheater. Choosing an appropriate group size is difficult (the larger the group, the better the anonymity is protected, but the higher the computation overhead), as is choosing a suitable group division policy (users in a group should have similar driving regions and similar driving patterns). In [21], the system’s security and privacy properties are verified using ProVerif.

Another cell-based road pricing scheme is presented in [32]. Here, a road-pricing area is divided into cells, where certain cells are randomly selected as spot check cells. A trusted platform module (TPM) inside each OBU is aware of this selection. While driving, the OBU tells its TPM the current location and time. The TPM updates the total toll and also generates a “proof of participation” which is sent to the TSP. This proof is the signed and encrypted location-time data under the TSP’s public key if the user is inside a spot check cell, and 0 otherwise. In this way, the TSP can easily verify that a user behaved honestly at spot check cells without leaking their locations to honest users. Because of the cell structure, the accuracy requirements for the on-board GPS devices are fairly relaxed in comparison to other systems. A security proof is sketched.

A main issue with all systems described above is that their security relies on an assumption that is strong in practice: invisible spot checks at locations which are unpredictable in each billing period. If they were visible or predictable, users could easily collude and cheat. On the other hand, spot checks reveal a user’s location and thus prevent users from cheating. Hence, fixing the number of spot checks is a trade-off between ensuring honesty and preserving privacy. Clearly, the penalty a user faces when cheating influences the number of spot checks required to ensure a certain security level.
In [45], the authors argue that even mass surveillance, protecting no privacy at all, cannot prevent collusion under reasonable penalties. They present a protocol for a privacy-preserving spot checking device whose locations can be publicly known. This protocol is based on oblivious transfer, a fair coin flip, and an identification protocol. Drivers interacting with the device identify themselves with a certain probability, but do not learn whether they do so, and are therefore forced to be honest. To ensure that a user does not benefit from turning off his OBU between two spot check devices, such a device is needed at every toll segment. Furthermore, to ensure that a user actually interacts with the device, a violation enforcement camera is required. However, since all these road-side devices are needed anyway, there is no advantage in terms of infrastructure requirements compared to the DSRC setting, and one could dispense with the GNSS technology altogether.
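The homomorphic reconciliation step used by PrETP above can be illustrated with Pedersen commitments, where the product of commitments opens to the sum of the committed values. The following is a toy sketch with illustrative, non-standard group parameters; PrETP’s concrete instantiation differs.

```python
import secrets

# Toy Pedersen commitments: C(m, r) = g^m * h^r mod p. Parameters are
# illustrative only; a real deployment uses a standardized prime-order group.
p = 2**127 - 1                            # Mersenne prime, toy modulus
g = 3
h = pow(g, secrets.randbelow(p - 1), p)   # second generator, dlog discarded

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

# Per-segment prices committed by the OBU while driving.
prices = [120, 85, 240]
rands  = [secrets.randbelow(p - 1) for _ in prices]
comms  = [commit(m, r) for m, r in zip(prices, rands)]

# Reconciliation: the user reveals the total toll and total randomness; the
# TSP checks them against the *product* of the individual commitments.
total_price = sum(prices)
total_rand  = sum(rands)

product = 1
for c in comms:
    product = (product * c) % p

assert product == commit(total_price, total_rand)  # homomorphic property
```

The TSP never sees individual prices, only their commitments, yet can verify that the claimed total is consistent with all of them at once.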

1.5 Considered Scenario

Here, we sketch the considered scenario, giving an idea of the required flexibility and complexity of our framework. We target a post-payment toll collection system in the DSRC setting which allows access-based as well as distance-based charging and can be deployed in a toll-plaza as well as an open-road environment. It involves the following entities:

• The Toll Collection Service Provider (TSP), which might be a privately owned company.
• The State Authority (SA), e.g., the Department of Transportation, which has outsourced the toll collection operations to the TSP but is responsible for violation enforcement.
• A Dispute Resolver (DR), e.g., an NGO protecting privacy, which is involved in case of an incident or dispute.1
• Users who participate in the system by means of a (portable or mounted) On-Board Unit (OBU). The OBU is used for on-road transactions and in the scope of debt and dispute clearance periods. For the latter, it needs to establish a (3G) connection to the TSP/SA. Alternatively, a smartphone might be used for that purpose, which, however, needs access to the OBU’s data.
• Road-Side Units (RSUs) which interact with OBUs and are typically managed by the TSP. To enable fast and reliable transactions with OBUs, we do not require RSUs to have a permanent connection to the TSP. We

1 Note that we assume the DR to be trusted by all other parties. It implements an optional “kind of key escrow” mechanism.


Fig. 1. The P4TC System Model. (Diagram omitted; it depicts the entities User, Camera, RSU (Road-Side Unit), TSP (Toll Collection Service Provider), SA (State Authority), and DR (Dispute Resolver), together with the interactions User Registration, Wallet Issuing, Debt Accumulation, Debt Clearance, Prove Participation, RSU Certification, User Blacklisting, and Double-Spending Detection.)

only assume that they are periodically online for a short duration (presumably at night when there is not much traffic) to exchange data for fraud detection with the TSP.
• Cameras triggered by the RSUs which are typically owned by the SA and used to take photos of license plates (and possibly drivers) in case anything goes wrong. Alternatively or additionally, there might be barriers in a toll-plaza environment which are controlled by the RSU.

1.5.1 Main Protocols. In the following, we sketch the main protocols of the system involving the parties from above. Fig. 1 provides an overview of these interactions. A more detailed description that also includes the remaining protocols can be found in Section 4. For simplicity, let us envision a system with (e.g., monthly) billing periods; billing periods are not mandatory in our framework, but suggested.

User Registration: To participate in the system, a user needs to register with the TSP using some physical ID (e.g., passport number, SSN) which is verified out-of-band. This is done once and makes the user accountable in case he cheats or refuses to pay his bill.

Wallet Issuing: In the scope of this protocol, a wallet is issued to a user by the TSP which is bound to the user’s ID and a set of user attributes (the role of attributes is discussed later on). The wallet is used to accumulate debt.

Debt Accumulation: Every time the user passes an RSU with his car, the user (by means of his OBU) and the (offline) RSU execute the Debt Accumulation protocol. First, the toll the user owes is determined, and then this toll is added to the user’s wallet—possibly along with some public attributes of the RSU. The toll may be dynamic and depend on different factors like the current time and congestion, the number of axles (recognized by sensors attached to the RSU), some user attributes attached to the wallet, as well as some attributes of the previous RSU the user drove by.
The camera takes a photo of the license plate(s) in case the RSU reports a protocol failure or abort (e.g., if a wallet is outdated), or if a car in the range of the RSU refuses to communicate at all. Due to technical limitations, it might not always be possible to determine exactly which car triggered the camera, especially in open-road tolling environments [39]. In these cases, photos of more than one car being in the range of


the RSU are taken. These photos are transmitted to the SA along with transcripts of the protocols that were run at the time in question.

Prove Participation: After the SA has identified the users behind the license plates involved in an incident, the transactions need to be matched with the users in order to determine who caused the incident. Since we demand that Debt Accumulation transactions are anonymous, the users need to interact with the SA to help with this matching. To this end, an instance of the Prove Participation protocol is executed between the SA and each of these users. This prevents honest users who successfully ran Debt Accumulation from being penalized.

Debt Clearance: At the end of a billing period, all users who obtained a wallet for this period are requested to participate in a clearance phase with the TSP. As a user is not anonymous in this protocol, he can be penalized if he refuses to run the protocol within a certain time frame. In a protocol execution, he presents his wallet and the accumulated debt to the TSP. Then he may clear his debt immediately or within a certain grace period. After a successful protocol execution, his wallet is invalid. He can get a new one by running Wallet Issuing again.

1.5.2 Attributes, Pricing Function and Privacy Leakage. Our scenario involves two types of attribute vectors: user attributes as well as (previous) RSU attributes. To keep our framework flexible, we do not stipulate which kind of attributes or how many of them are used. Those details depend on the concrete pricing model, with a pricing function depending on the user attributes, the current and previous RSU attributes, and auxiliary, publicly verifiable input. However, we expect that for most scenarios very little information needs to be encoded into these attributes.
For instance, one could realize access-based toll collection with a pricing function primarily depending on auxiliary input like location, time, and congestion, without any previous RSU attributes and only encoding the current billing period as a user attribute. Including the previous RSU’s attributes in the toll calculation also makes it possible to cover distance-based charging, where RSUs are installed at each entry and exit of a highway. Running Debt Accumulation at the Entry-RSU does not add any debt to the wallet but only encodes the previous RSU’s ID as an attribute. At the Exit-RSU, the appropriate toll is calculated and added, but no RSU attribute is set.2 To mitigate the damage caused by a stolen RSU, one might want RSUs to have a common “expiration date” which is periodically renewed and encoded as an RSU attribute. Likewise, to enforce that a user eventually pays his debt, the user attributes should at least encode the billing period they are valid for.3

Obviously, the concrete content of the attributes affects the “level” of user privacy provided by the system. It provides provable privacy up to what can possibly be deduced by an operator (TSP and RSUs) who explicitly learns those values as part of the input needed to compute the pricing function. Additionally, at the end of a billing period, the TSP learns the total balance owed by a user, but no individual transactions. Our framework guarantees that protocol executions of honest users do not leak anything (useful) beyond that. In order to allow users to assess the privacy of a particular instantiation of our framework, we assume that all attributes, all possible values for those attributes and how they are assigned, as well as the pricing function are public. In this way, the TSP is also discouraged from running trivial attacks by tampering with an individual’s attribute values (e.g., by assigning a billing period value not assigned to any other user).
To this end, a user needs to check if the assigned attribute values appear reasonable. Such checks could also be conducted (at random) by a regulatory authority, or often also automatically by the user’s OBU. Likewise, if a (corrupted) RSU tries to break privacy by charging a peculiar price that differs from the prescribed pricing function, the user is assumed to file a claim. For realistic pricing functions, the chance to correctly identify an individual trip at the end of a billing period given only its total balance is negligible [56].

2 In this way, the entry point of a user’s trip can be linked to his exit point. However, our system ensures that the user is still anonymous and multiple entry/exit pairs are unlinkable.
3 Clearly, for privacy reasons, unique expiration dates for RSUs and/or users need to be avoided.
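To make the role of the pricing function concrete, the following toy sketch combines access-based and distance-based charging as described above. All attribute names, rates, and the discount rule are illustrative assumptions, not prescribed by the framework.

```python
# Toy pricing function: the toll depends on user attributes, the current and
# previous RSU's attributes, and public auxiliary input (rates, congestion).
# Every field name and rate here is an illustrative assumption.
def price(user_attrs, rsu_attrs, prev_rsu_attrs, aux):
    if prev_rsu_attrs and prev_rsu_attrs.get("role") == "entry":
        # Distance-based: the previous RSU was a highway entry point.
        km = abs(rsu_attrs["milestone_km"] - prev_rsu_attrs["milestone_km"])
        toll = km * aux["rate_per_km_cents"]
    else:
        # Access-based: flat toll for this gantry.
        toll = aux["base_toll_cents"]
    if aux.get("congestion"):
        toll *= 2                                   # dynamic congestion pricing
    if "discount" in user_attrs:
        toll = toll * (100 - user_attrs["discount"]) // 100
    return toll

# Distance-based: 32 km at 5 cents/km between entry and exit RSUs.
assert price({}, {"milestone_km": 42, "role": "exit"},
             {"milestone_km": 10, "role": "entry"},
             {"rate_per_km_cents": 5}) == 160

# Access-based with congestion surcharge and a 20% user discount.
assert price({"discount": 20}, {}, None,
             {"base_toll_cents": 300, "congestion": True}) == 480
```

Since all inputs to such a function are either public or explicitly revealed to the RSU, a user can judge exactly what a transaction leaks.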


1.5.3 Desired Properties. The following list summarizes the desired security properties in an informal manner. Having them in mind, we propose an ideal world model formally covering what we would like to achieve in Section 2.

(P1) A user may only use a wallet legitimately issued to him.
(P2) In Debt Accumulation, a user may not pretend that he owes less by forging the attributes attached to his wallet.
(P3) In Debt Clearance, a user should not be able to claim that he owes less than the amount added to his wallet, provided he never used an old copy of his wallet (double-spending) or aborted Debt Accumulation.
(P4) If a user re-uses an old copy of his wallet (double-spending), he will be identified.
(P5) If a user fails to participate in or prematurely aborts Debt Accumulation, he will be identified.
(P6) A Debt Accumulation transaction of an honest user does not leak anything to an RSU/TSP beyond the user’s attributes and the attributes of the previous RSU.
(P7) Prove Participation only enables the SA to deanonymize a single transaction of an honest user in case of an incident involving this user. His remaining transactions stay unlinkable.
(P8) The TSP is—with a hint from the DR—able to efficiently blacklist wallets of individual users. This is important in practice, e.g., to mitigate the financial loss due to stolen or compromised OBUs or double-spending.
(P9) The TSP is—with a hint from the DR—able to efficiently recalculate the debt of individual users during a billing period. This is important in practice, e.g., to mitigate the financial loss due to broken, stolen, or compromised OBUs. Furthermore, it makes it possible to determine the actual debt of a double-spender. Also, in a dispute, a user may request a detailed invoice listing the toll points he visited and the amounts charged.
(P10) As the secrets of an RSU might enable a user to tamper with his debt, there is a mechanism to mitigate the financial loss due to a stolen or compromised RSU.
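The identification guarantee behind (P4) follows the classic offline e-cash pattern. The following toy sketch (simplified arithmetic, not the actual P4TC mechanism) shows why reusing the same wallet state under two different RSU challenges exposes the user:

```python
# Toy offline double-spending identification: each wallet state carries a
# one-time blinding value r; a spend reveals t = uid*c + r (mod q) for a
# fresh RSU challenge c. Two spends from the SAME state leak uid.
q = (1 << 61) - 1                 # toy prime modulus

def spend(uid, r, c):
    return (uid * c + r) % q

def identify(t1, c1, t2, c2):
    # Subtracting cancels r; dividing by (c1 - c2) recovers uid.
    return (t1 - t2) * pow(c1 - c2, -1, q) % q

uid, r = 123456789, 987654321
t1 = spend(uid, r, 17)            # legitimate spend
t2 = spend(uid, r, 99)            # double-spend of the same wallet state
assert identify(t1, 17, t2, 99) == uid
```

A single spend reveals nothing about uid (t is blinded by the one-time r); only reuse of a state makes the system of equations solvable.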

1.6 Our Contribution

Our contribution is threefold:

1.6.1 Protocols. While the overall idea of our construction might appear quite intuitive and we partly draw from known techniques, a major challenge was to twist and combine all these techniques in order to achieve simulation-based security and practicality at the same time. To this end, we started from a payment system building block called black-box accumulation (BBA+), recently introduced in [36]. BBA+ offers the core functionality of an unlinkable user wallet maintaining a balance. Values can be added to and subtracted from the wallet by an operator, where the use of an old wallet is detected offline by a double-spending mechanism. Besides unlinkability, the system guarantees that a wallet may only be used by its legitimate owner and with its legitimate balance.

While BBA+ provides us with some of the desired properties identified in Section 1.5.3 ((P1), (P3), (P4), and partially (P6)), significant modifications and enhancements had to be made to fully suit our needs. For instance, the basic BBA+ mechanism allows neither efficient blacklisting of individual wallets (P8) nor recalculation of individual balances (P9). We solve this by having an individual trapdoor for each wallet (accessible to the DR in case of an incident) which makes transactions involving this wallet forward and backward linkable. The trapdoor does not allow linking the wallets of other users.4 To realize this, we adopt and adjust an idea from the e-cash literature [12]. More precisely, we make use of a PRF applied to a counter value (which is bound to a wallet) to generate a fraud detection ID for each wallet state. To ensure security and privacy, we let the user and the TSP jointly choose the PRF key, with the key remaining unknown to the TSP. To make it accessible in

4 To be precise, in the BBA+ system, there is a TTP-owned trapdoor that allows linking a user’s transactions. However, this is not a user- or wallet-specific trapdoor, which means that, if it were handed to the TSP, the TSP could track each and every user in the system.


case of an incident, the user is forced to deposit the key encrypted under the DR's public key. The correctness of this deposit is ensured to the TSP by means of a NIZK proof. This part is tricky due to the use of Groth-Sahai NIZKs for efficiency reasons and the lack of a compatible (i.e., algebraic) encryption scheme with message space Zp. By letting a user choose a new PRF key for each billing period, his transactions from previous/upcoming periods stay unlinkable. As a minor but very useful modification, we added (user and RSU) attributes to a wallet, which, of course, get signed along with the wallet to protect against forgery. This allows us, e.g., to bind wallets to a billing period encoded as an attribute. By making RSUs only accept wallets from the current period, the size of the blacklist checked by the RSU can be limited to enable fast transactions. IDs that belong to a previous billing period can be removed from the blacklist. Similarly, the transaction database needed to recalculate balances can also be kept small. Another problem is the use of a single shared wallet certification key in the BBA+ scheme. Translated to our setting, the TSP and all RSUs would share the same secret key. Hence, if an adversary corrupted a single RSU, he could create new wallets for fake users, forge user attributes, balances, etc. In order to mitigate this problem (P10), we take the following measures: First, we separate user identity and attribute information, i.e., the fixed part of a wallet, from balance information, i.e., the updatable part. The first part is signed by a signature key only held by the TSP when the wallet is issued. The second part is signed by an RSU each time the balance of a wallet is updated. This prevents a corrupted RSU from issuing new wallets or forging user attributes. Furthermore, the individual key of each RSU is certified by the TSP along with its attributes. In this way, an RSU cannot forge its attributes (P2) but may still fake the balance.
By including an expiration date in the RSU attributes, one can limit the potential damage of the latter issue. In view of the fact that RSUs are usually not easily physically accessible and their key material is usually protected by an HSM, we believe that these measures are sufficient. We intentionally refrain from using key revocation mechanisms like cryptographic accumulators [48] in order to retain real-time interactions. Finally, we added mechanisms to prove participation in Debt Accumulation interactions (P7) and to enable a simulation-based security proof.

1.6.2 System Definition, Security Model and Proof. With the scenario from Section 1.5 in mind, we propose a system definition and security model for post-payment toll collection systems based on the real/ideal paradigm. More precisely, we build upon the GUC framework [16]. Our work is one of very few that combines a complex, yet practical crypto system with a thorough UC security analysis. Typically, the standard approach is to cast a complex system as an MPC problem and then resort to a generic but inefficient UC-secure MPC protocol [17, 40]. The security of BBA+ has been modeled by formalizing each security property from a list of individual properties, as is usually done in the game-based setting. This approach bears the intrinsic risk that important and expedient security aspects are overlooked, i.e., that the list is incomplete. This danger is eliminated by our UC approach, where we do not aim to formalize a list of individual properties but rather specify how an ideal system should behave. A challenging task was to find a formalization of such an ideal system (aka ideal functionality) that strikes a reasonable trade-off between various aspects: On the one hand, it needs to be sufficiently abstract to represent the semantics of the eventual goal "toll collection" while it should still admit a realization.
On the other hand, keeping it too close to the concrete realization and declaring aspects as out-of-scope only provides weak security guarantees. We decided to directly model the ETC system as a single functionality with (polynomially) many parties that reactively participate in (polynomially) many interactions. This leads to a clean interface but makes the security analysis highly non-trivial. At first sight, it seems tempting to follow a different approach: Consider the system as a composition of individual two-party protocols, analyze their security separately and argue about the security


of the whole system using the UC composition theorem. We refrain from this approach, however, as it entails a slew of technical subtleties due to the shared state between the individual two-party protocols. Moreover, although our system uses cryptographic building blocks for which UC formalizations exist (commitments, signatures, NIZK), these abstractions cannot be used. For example, UC-commitments are non-transferable, i.e., the commitment message cannot be passed to a different party, but our construction heavily relies on exactly this kind of transfer. Abstract UC-signatures are just random strings that are information-theoretically independent of the message they sign. Thus it is impossible to prove any statement about message-signature pairs in zero-knowledge. Hence, our security proof has to start almost from scratch. Although parts of it are inspired by proofs from the literature, it is very complex and technically demanding in its entirety.

1.6.3 Implementation. In addition to our theoretical framework, we also evaluate the real-world practicality of P4TC by means of an implementation. We specifically benchmarked our protocols on an embedded processor which is known to be used in currently available OBUs such as the Savari MobiWAVE [57]. The major advantage for real-world deployment stems from the use of non-interactive zero-knowledge proofs, where major parts of the proofs can be precomputed and verification equations can be batched efficiently. This effectively minimizes the computations which have to be performed by the OBU and the RSU during an actual protocol run. Our implementation suggests that provably-secure ETC at full speed can indeed be realized using present-day hardware.
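The benefit of batching verification equations can be illustrated generically. The sketch below is not the Groth-Sahai batching used in P4TC; it shows the same principle on plain Schnorr signatures over a toy group: instead of checking each equation g^s_i = R_i · X_i^c_i separately, the verifier checks a single random linear combination, trading n full checks for one multi-exponentiation.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
q = 1019
p = 2 * q + 1
g = 4

def H(R: int, m: bytes) -> int:
    """Fiat-Shamir challenge c = H(R, m) reduced into Z_q."""
    return int.from_bytes(hashlib.sha256(R.to_bytes(4, "big") + m).digest(), "big") % q

def sign(x: int, m: bytes):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    s = (r + H(R, m) * x) % q
    return R, s

def verify_one(X: int, m: bytes, R: int, s: int) -> bool:
    return pow(g, s, p) == R * pow(X, H(R, m), p) % p

def verify_batch(sigs) -> bool:
    """sigs: list of (X, m, R, s). Checks one random linear combination of all equations."""
    zs = [secrets.randbelow(q - 1) + 1 for _ in sigs]
    lhs = pow(g, sum(z * s for z, (_, _, _, s) in zip(zs, sigs)) % q, p)
    rhs = 1
    for z, (X, m, R, s) in zip(zs, sigs):
        rhs = rhs * pow(R, z, p) * pow(X, (H(R, m) * z) % q, p) % p
    return lhs == rhs
```

With random exponents z_i, a batch containing an invalid signature is rejected except with negligible probability; the group parameters here are tiny and for demonstration only.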

2 SECURITY MODEL

Our toll collection system FP4TC is defined within the (G)UC framework by Canetti [16], which provides a simulation-based security notion. We briefly explain the UC framework in this section; for a more detailed introduction to UC and its variant GUC see [13, 16].

2.1 Introduction to UC

In the UC framework security is defined by indistinguishability of two experiments: the ideal experiment and the real experiment. In the ideal experiment the task at hand is carried out by dummy parties with the help of an ideal incorruptible entity—called the ideal functionality F—which plainly solves the problem at hand in a secure and privacy-preserving manner. In the real experiment the parties execute a protocol π in order to solve the prescribed tasks themselves. A protocol π is said to be a (secure) realization of F if no PPT machine Z, called the environment, can distinguish between the two experiments.

2.1.1 The Basic Model of Computation. The basic model of computation consists of a set of instances (ITIs) of interactive Turing machines (ITMs). An ITM is the description of a Turing machine with additional tapes for its identity, for subroutine input and output and for incoming and outgoing network messages. An ITI is a tangible instantiation of an ITM and is identified by the content of its identity tape. The order of activation of the ITIs is message-driven. If an ITI provides subroutine output or writes an outgoing message, the activation of this ITI completes and the ITI to which the message has been delivered gets activated next. Each experiment has two special ITIs: the environment Z and the adversary A (in the real experiment) or the simulator S (in the ideal experiment). The environment Z is the ITI that is initially activated. If the environment Z provides subroutine output, the whole experiment stops. The output of the experiment is the output of Z.

2.1.2 The (Dummy) Adversary. The adversary A is instructed by Z and represents Z's interface to the network, e.g., A reports all network messages generated by any party to Z and can manipulate, reroute, inject and/or suppress messages on Z's order. This modeling reflects the idea of an unreliable and untrusted network. Please note: Only incoming/outgoing messages are under the control of A.
Subroutine input/output, in contrast, is authenticated and confidential, and its integrity is guaranteed. Moreover, Z may instruct A to corrupt a party. In this case, A takes over the


position of the corrupted party, reports its internal state to Z and from then on may arbitrarily deviate from the protocol in the name of the corrupted party as requested by Z.

2.1.3 The Real Experiment. In the real experiment, denoted by EXEC_{π,A,Z}(1^n), the environment Z interacts with parties running the actual protocol π and is supported by a real adversary A. The environment Z specifies the input of the honest parties, receives their output and determines the overall course of action.

2.1.4 The Ideal Experiment. In the ideal experiment, denoted by EXEC_{F,S,Z}(1^n), the protocol parties are mere dummies that pass their input to a trusted third party F and hand over F's output as their own output. The ideal functionality F executes the task at hand in a trustworthy manner and is incorruptible. The real adversary A is replaced by a simulator S. The simulator must mimic the behavior of A, e.g., simulate appropriate network messages (there are no network messages in the ideal experiment), and come up with a convincing internal state for corrupted parties (dummy parties do not have an internal state).

2.1.5 Definition of Security. A protocol π is said to securely UC-realize an ideal functionality F, denoted by π ≥_UC F, if and only if

    ∃ S ∀ Z : EXEC_{π,A,Z}(1^n) ≈_c EXEC_{F,S,Z}(1^n)

holds, where the randomness is taken over the initial input of Z and the random tapes of all ITIs.5 If no environment Z can tell executions of the real and the ideal experiment apart, then any successful attack existing in the real experiment would also exist in the ideal experiment. Therefore the real protocol π guarantees the same level of security as the (inherently secure) ideal functionality F.

2.2 Remarks on Privacy

Regarding privacy, please note that all parties (including the simulator) use the ideal functionality as a black box and only know what the ideal functionality explicitly allows them to know as part of their prescribed output. The output to the simulator is also called leakage. This makes UC well suited for reasoning about privacy. As no additional information is unveiled, the achieved level of privacy can directly be deduced from the defined output of the functionality. In other words, the privacy assessment can be conducted on the ideal functionality alone and is completely decoupled from the analysis of the protocol implementation. The proof of indistinguishability asserts that any secure realization of the functionality provides the same level of privacy.

2.3 UC Model Conventions

The bare UC model does not specify many important aspects. For example, it leaves open which ITIs are allowed to communicate with each other, how parties are corrupted and which ITI is allowed to invoke what kind of new ITIs. In this section we clarify these aspects. Our conventions are probably the most commonly used ones and quite natural.
• Each party is identified by its party identifier (PID) pid, which is unique to the party and is the UC equivalent of the physical identity of this party. A party runs a protocol π by means of an ITI which is called the main party of this instance of π. An ITI can invoke subsidiary ITIs to execute sub-protocols. A subsidiary and its parent use their subroutine input/output tapes to communicate with each other. The set of ITIs taking part in the same protocol but for different parties communicate through their message tapes. An instance of a protocol is identified by its session identifier (SID) sid. All ITIs taking part in the same protocol instance share the same SID. A specific ITI is identified by its ID id = (pid, sid).

5 N.b.: To streamline this short introduction we directly defined the so-called dummy adversary. The original definition quantifies over all adversaries in the first step. In the second step it is shown that the dummy adversary—as defined here—is the most severe one and complete.


• Ideal functionalities are an exception to this rule. An instance of an ideal functionality F owns a SID but no PID. Input to and output from F is performed through dummy parties which share the same SID as F, but additionally have the PIDs of the parties they belong to.
• As already stated in subsection 2.1.2, subroutine input/output is identifying. This implies in particular that an ideal functionality knows the pid of the party to/from whom it writes/reads input/output, because an ideal functionality communicates with its calling ITI by means of intermediary dummy ITIs and subroutine input/output.
• We assume PID-wise corruption: either all ITIs of a party are corrupted or none.
All these conventions capture the natural intuition that ITIs with the same PID represent processes within the same physical computer and can therefore communicate with each other directly. But communication between different physical computers (i.e., parties) is only possible through an unreliable network under the control of an adversary.

2.4 Setup Assumptions — CRS FCRS and Bulletin-Board G bb

As commonly found in the UC setting, we also draw on constitutive trust assumptions, or setup assumptions in UC terminology. Setup assumptions are ideal functionalities that remain ideal in the real experiment, i.e., their secure realization is left unspecified. In our scenario, we assume a common reference string (CRS), denoted FCRS, and a globally available bulletin board, G bb [18, Fig. 3], which is sometimes also referred to as a key registration service in the literature [5, 15]. A CRS is a short piece of information that has been honestly generated and is shared between all parties. A bulletin board can be depicted as the abstract formalization of a globally available key registration service which associates (physical) party identifiers (PIDs) with (cryptographic) public keys. Every party has the option to register a key for itself once, and G bb ensures that the registering party cannot lie about its (physical) identity pid. In our scenario the PID of a user could be a passport number or SSN; for RSUs the geo-location could be used as a PID. The key can be any value v, i.e., G bb is totally agnostic to the data. Moreover, every party can obtain keys that have been registered by any other party in a trustworthy way. Our augmentations to G bb are only minor syntactic additions: parties can also perform a reverse lookup, i.e., look up the PID of the party that registered a value, and parties can register not only an opaque string of bits but an ordered tuple of strings. The latter is required to support reverse searches on substrings.
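The interface of the bulletin board, including our two augmentations, can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not a formal specification of G bb:

```python
class BulletinBoard:
    """Toy version of G_bb: binds party identifiers to registered value tuples."""

    def __init__(self):
        self.records = {}                     # pid -> tuple of strings

    def register(self, pid: str, values: tuple) -> bool:
        """Each party may register exactly once; G_bb vouches for the pid."""
        if pid in self.records:
            return False
        self.records[pid] = tuple(values)
        return True

    def retrieve(self, pid: str):
        """Trustworthy lookup of the values registered by a given party."""
        return self.records.get(pid)

    def reverse_lookup(self, component: str):
        """Augmentation: return the PID whose registered tuple contains `component`."""
        for pid, values in self.records.items():
            if component in values:
                return pid
        return None
```

Registering an ordered tuple rather than one opaque string is what makes the reverse lookup on individual components possible.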

2.5 Generalized UC

In the plain UC model the environment Z is only allowed to invoke a single session of the challenge protocol and is only allowed to exchange messages with the respective main parties. This implies that many real-world scenarios cannot be appropriately captured by plain UC. For example, a globally available PKI that also provides its services to other protocols cannot be modeled in UC. Generalized UC [16] and its simplification Externalized UC (EUC) relax this restriction such that a particular, designated ITI—the global functionality—can be accessed by any other ITI, including the environment. In our setting, the bulletin board G bb is allowed to be global.

3 SYSTEM DEFINITION

We now give a condensed description of our ideal functionality FP4TC and point out how it ensures the desired properties given in subsection 1.5.3. As already mentioned, we do not formalize each task (e.g., Wallet Issuing, Debt Accumulation, . . . ) as an individual ideal functionality, but rather the whole system as a monolithic, highly reactive functionality FP4TC with polynomially many parties as users and RSUs. This allows for a shared state between the individual interactions, and thus correctness and security can be defined more easily. An excerpt of FP4TC is depicted in Fig. 2. For better understanding, we refrain from giving a complete descrip-


Functionality FP4TC

I. State
• A set TRDB of transaction entries trdb having the form
  trdb = (s^prev, s, φ, x, λ, pid_U, pid_R, p, b) ∈ S × S × Φ × N_0 × L × PID_U × PID_R × Z_p × Z_p.
• A (partial) mapping f_Φ assigning a fraud detection ID φ to a wallet ID λ and a counter x:
  f_Φ : L × N_0 → Φ, (λ, x) ↦ φ
• A (partial) mapping f_AU assigning user attributes to a given wallet ID λ:
  f_AU : L → A_U, λ ↦ a_U
• A (partial) mapping f_Π assigning a validity bit to a user PID pid_U and a proof of guilt π:
  f_Π : PID_U × Π → {OK, NOK}
• A (partial) mapping f_AR assigning RSU attributes to a given RSU PID pid_R:
  f_AR : PID_R → A_R, pid_R ↦ a_R

II. Behavior (task Debt Accumulation only)
User input: (pay_toll, s^prev); RSU input: (pay_toll, bl_R)
(1) Pick a serial number s ←R S that has not previously been used.
    User corrupted: Ask the adversary if the PID pid_U of another corrupted user should be used.^a
(2) Look up (·, s^prev, φ^prev, x^prev, λ^prev, pid_U, pid_R^prev, ·, b^prev) ∈ TRDB, with (s^prev, pid_U) being the unique key.
(3) If φ^prev ∈ bl_R, output blacklisted to both parties and abort.
(4) Adopt the previous wallet ID λ := λ^prev and increase the counter x := x^prev + 1.
(5) Double-spending/Blacklisted: In this case φ := f_Φ(λ, x) has already been defined; continue with (7).
(6) Pick a fraud detection ID φ ←R Φ that has not previously been used.
    User corrupted: Allow the adversary to choose a previously unused fraud detection ID φ.^b
    Append (λ, x) ↦ φ to f_Φ.
(7) Look up the attributes (a_U, a_R^prev, a_R) := (f_AU(λ), f_AR(pid_R^prev), f_AR(pid_R)).
(8) Calculate the price p := O_pricing(a_U, a_R^prev, a_R).
    RSU corrupted: Leak (a_U, a_R^prev, a_R) to the adversary and obtain a price p.
(9) Calculate the new balance b := b^prev + p.
(10) Append the new transaction (s^prev, s, φ, x, λ, pid_U, pid_R, p, b) to TRDB.
User output: (s, a_R, p, b)
RSU output: (s, φ, a_U, a_R^prev)

a If corrupted users collude, they might share their credentials and use each other's wallet.
b The ideal model only guarantees privacy for honest users. For corrupted users the fraud detection ID might be chosen adversarially (cp. text body).

Fig. 2. An excerpt of the ideal functionality FP4TC .
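Ignoring the corruption branches, the normal execution path of Fig. 2 can be paraphrased in code. The following Python sketch is illustrative only (dictionary-based state and all names are ours); the actual functionality additionally handles corrupted parties and the full set of tasks:

```python
import secrets

def debt_accumulation(trdb, f_phi, f_au, f_ar, s_prev, bl_r, pid_r, pricing):
    """Honest-path sketch of the Debt Accumulation task of Fig. 2 (steps 1-10)."""
    s = secrets.token_hex(8)                      # (1) fresh serial number
    prev = trdb[s_prev]                           # (2) look up previous wallet state
    if prev["phi"] in bl_r:                       # (3) abort if blacklisted
        return "blacklisted"
    lam, x = prev["wallet"], prev["x"] + 1        # (4) adopt wallet ID, bump counter
    if (lam, x) not in f_phi:                     # (5)/(6) reuse or draw fresh phi
        f_phi[(lam, x)] = secrets.token_hex(8)
    phi = f_phi[(lam, x)]
    a_u = f_au[lam]                               # (7) look up attributes
    a_r_prev, a_r = f_ar[prev["pid_r"]], f_ar[pid_r]
    p = pricing(a_u, a_r_prev, a_r)               # (8) query the pricing oracle
    b = prev["b"] + p                             # (9) new balance
    trdb[s] = {"prev": s_prev, "phi": phi, "x": x, "wallet": lam,
               "pid_r": pid_r, "p": p, "b": b}    # (10) record the transaction
    return {"user": (s, a_r, p, b), "rsu": (s, phi, a_u, a_r_prev)}
```

The root entry of a wallet (created during Wallet Issuing, with counter 0 and zero balance) would carry the TSP as its issuing party, so that step (7) can resolve the "previous RSU" attributes on the first toll payment.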


tion of all the tasks FP4TC provides at this point. Instead we focus on the most important task, namely Debt Accumulation, and only sketch the remaining tasks. We also state why the ideal model reflects the security and privacy level we wish to achieve. Again, we only concentrate on Debt Accumulation. The full-fledged ideal functionality can be found in Appendix C. For ease of presentation, all instructions in Fig. 2 that are typically executed are printed in normal font, while some conditional side tracks are grayed out. The normal execution path defines the level of security and privacy achieved for honest, well-behaving parties. The conditional branches deal with corrupted or misbehaving6 parties and weaken the security and privacy guarantees in those cases. The key idea behind FP4TC is to keep track of all conducted transactions in a pervasive transaction database TRDB (cp. Fig. 2). Each transaction entry trdb ∈ TRDB is of the form trdb = (s^prev, s, φ, x, λ, pid_U, pid_R, p, b). It contains the identities pid_U and pid_R of the involved user and RSU (or TSP) respectively,7 the price p (toll) associated with this particular transaction and the total balance b of the user's wallet, i.e., the accumulated sum of all prices so far including this transaction. In other words, FP4TC implements a trustworthy global bookkeeping service that manages the wallets of all users. Each transaction entry is uniquely identified by a serial number s. Additionally, each entry contains a pointer s^prev to the logically previous transaction, a unique wallet ID λ, a counter x indicating the number of previous transactions with this particular wallet, and a fraud detection ID φ. Some explanations are in order with respect to the different IDs. In a truly ideal world FP4TC would use the user identity pid_U to look up its most recent entry in the database and append a new entry. Such a scheme, however, could only be implemented by an online system.
Since we require our system to have offline capabilities—allowing a user and RSU to interact without the help of other parties and without permanent access to a global database—the inherent restrictions of such a setting must be reflected in the ideal model: • (Even formally honest) users can misbehave, reuse old wallet states and commit double-spending without being noticed instantly. • Double-spending is eventually detected after-the-fact. In order to accurately define security, these technicalities have to be incorporated into the ideal functionality, which causes the bookkeeping to be more involved. The whole transaction database is best depicted as a directed graph. Nodes are identified by serial numbers s and additionally labeled with (φ, x, λ, pid U , b). Edges are given by (s prev , s) and additionally labeled with (pid R , p). A user’s wallet is represented by the subgraph of all nodes with the same wallet ID λ and forms a connected component. As long as a user does not commit double-spending the particular subgraph is a linked, linear list. In this case, each transaction entry has a globally unique fraud detection ID φ. If a user misbehaves and re-uses an old wallet state (i.e., there are edges (s prev , s) and (s prev , s ′ )), the corresponding subgraph degenerates into a directed tree. In this case, all transaction entries that constitute a double-spending, i.e., all nodes with the same predecessor, should share the same fraud detection ID φ. To this end, the counter x and the injective map f Φ : L × N0 → Φ have been introduced in order to manage fraud detection IDs consistently: For any newly issued wallet with ID λ the counter x starts at zero and x = (x prev + 1) always holds. It counts the number of subsequent transactions of a wallet since its generation, i.e., x equals the depth of a node. The function f Φ maps a transaction to its fraud detection ID φ, given its wallet (aka tree) λ and its depth x. 
6 Please note that users do not need to be formally corrupted in order to commit double-spending. We call these users honest, but misbehaving.
7 The party identifier (PID) can best be depicted as the model's counterpart of a party's physical identity in the real world. E.g., the user's physical identity could be a passport number or SSN; the "identity" of an RSU could be its geo-location. Generally, there is no necessary one-to-one correspondence between a PID and a cryptographic key. Also, the ideal functionality always knows the PID of the party it interacts with by the definition of the UC framework.


Besides storing transaction data, FP4TC also keeps track of parties’ attributes by internally storing RSU attributes a R upon certification and user (or rather wallet) attributes a U when the wallet is issued. To give a better understanding of how FP4TC works, we explain the task Debt Accumulation in more detail.

3.1 Task "Debt Accumulation"

In this task, the user provides a serial number s^prev as input to FP4TC, indicating which past wallet state he wishes to use for this transaction. The participating RSU inputs a list of fraud detection IDs that are blacklisted. Firstly, FP4TC randomly picks a fresh serial number s for the upcoming transaction. If (and only if) the user is corrupted, FP4TC allows the simulator to provide a different value for pid_U that belongs to another corrupted user. This is a required technicality, as corrupted users might share their credentials and thus might use each other's wallet. Please note that this does not affect honest users. FP4TC looks up the previous wallet state trdb^prev in TRDB and asserts that its fraud detection ID is not blacklisted; otherwise it aborts. The ideal functionality extracts the wallet ID from the previous record (λ := λ^prev) and increases the counter for this particular wallet (x := x^prev + 1). Then, it checks if a fraud detection ID φ has already been defined for the wallet ID λ and counter x (n.b.: tree and depth of node). If so, the current transaction record will be assigned the same fraud detection ID φ. Otherwise, FP4TC ties a fresh, uniformly and independently drawn fraud detection ID ((λ, x) ↦ φ) to the x-th transaction of the wallet λ. If and only if the user is corrupted and φ has not previously been defined, the adversary is allowed to overwrite the fraud detection ID with another value.8 Moreover, FP4TC looks up the user's attributes bound to this particular wallet (a_U := f_AU(λ)) and the attributes of the current and previous RSU (a_R := f_AR(pid_R), a_R^prev := f_AR(pid_R^prev)). Finally, the ideal functionality queries the pricing oracle O_pricing for the price p of this transaction, calculates the new balance of the wallet (b := b^prev + p) and appends a new record to the transaction database. If and only if the RSU is corrupted, the adversary learns the involved attributes and is allowed to override the price.
Please note that leaking the user/RSU attributes to the adversary does not weaken the privacy guarantees, as the (corrupted) RSU learns the attributes as an output anyway. The option to manipulate the price, on the other hand, was a design decision. It was made to enable implementations in which the pricing function is unilaterally evaluated by the RSU and the user initially just accepts the price. It is assumed that a user usually pays any toll willingly in order to proceed and (in case of a dispute) files an out-of-band claim later (before making the actual payment). The user's output consists of the serial number s of the current transaction, the current RSU's attributes a_R, the price p to pay and the updated balance b of his wallet. The RSU's output consists of the serial number s of the current transaction, the fraud detection ID φ and the attributes a_U and a_R^prev of the user and the previous RSU, respectively. Learning their mutual attributes is necessary, because the RSU and user must evaluate the pricing function themselves in the real implementation without the help of a third party.9

3.2 Correctness and Operator Security

Both are immediately asserted by the ideal functionality. As FP4TC represents an incorruptible bookkeeper, the user has no chance to cheat. The only input of a (possibly malicious) user is his choice of which previous wallet state should be used. Hence, the user has no chance to lie about his attributes a_U, the balance b^prev associated with this state, the calculated new balance b, or anything else. If the user is malicious, the adversary has the additional options to choose an alternative (malicious) user that is being charged and to choose a different fraud detection ID. Neither affects operator security. The former does not change the total amount due to the

8 Again, this is a technical concession to the security proof. Corrupted users are not obliged to use "good" randomness. This might affect untrackability, but we do not aim to provide this guarantee for corrupted users.
9 At least, it is unavoidable for any practical implementation given our scenario. Due to the offline capabilities there is no third party available.


operator. It only changes which party is liable, which is unavoidable if a group of corrupted users colludes. The latter does not affect the operator at all.

3.3 User Security and Privacy

User security follows by the same arguments as operator security. The information leakage that needs to be considered for an eventual privacy assessment directly follows from the in- and output of the ideal functionality. We stress that we only care about privacy for honest and well-behaving users. Hence, the grayed-out steps can be ignored. Moreover, we ignore "trivial attacks" like a conspicuously chosen price (see Fig. 2, Step 8 and cp. subsection 1.5.2) and the premature abort in case of a blacklisted user.10 First note that the serial number of the previous transaction s^prev is a private input of the user and never output to any party. The RSU only learns the serial number s and the fraud detection ID φ of the current transaction, which are both freshly, uniformly and independently drawn by FP4TC. Hence, it is information-theoretically impossible to track an honest and well-behaving user across any pair of transactions using any of these numbers. The only information leakage of an honest and well-behaving user in this task is determined by his and the previous RSU's attributes a_U and a_R^prev, which are both learned by the current RSU at the end of Debt Accumulation. Independence of the fraud detection ID might be lost in two cases:
• The user is corrupted; the adversary playing a corrupted user knows all his secrets anyway. In this case, the adversary is allowed to pick the fraud detection ID if not previously defined.
• The mapping (λ, x) ↦ φ is already defined, because the user repeatedly runs Debt Accumulation with the same inputs and thus attempts double-spending, or the user has been blacklisted. (Our blacklisting task—sketched later—prefills f_Φ and creates a pool of fraud detection IDs to be used.)
In neither of these cases do we aim to provide untrackability.

3.4 Remaining Tasks

We briefly sketch the remaining tasks of the ideal functionality. Please note that these sketches omit some details regarding special cases if one of the involved parties is corrupted. The full-fledged ideal functionality is described in Appendix C.
Registration and Certification: There are some auxiliary tasks for registration and certification of parties. They are modeled in the obvious manner and extend the appropriate mappings (e.g., f_AR) with the given attributes for the PIDs at hand.
Wallet Issuing: In Wallet Issuing a new wallet ID λ is freshly, uniformly and independently drawn. A new transaction entry for the particular user is inserted into the database using this λ and a zero balance. This entry can be depicted as the root node of a new wallet tree.
Debt Clearance: The task Debt Clearance is very similar to Debt Accumulation described above, except that the TSP additionally learns the user's party ID pid_U and the wallet balance b. Debt Clearance is identifying for the user to allow the operator to invoice him and check if he (physically) pays the correct amount. Also, the user does not obtain a new serial number, such that the transaction entry becomes a leaf node of the wallet tree.
User Blacklisting: The task User Blacklisting is run between the DR and TSP. The TSP inputs the PID of a user it wishes to blacklist and obtains the debt the user owes and a list of upcoming fraud detection IDs. For the latter, FP4TC draws a sequence of fresh, uniform and independent fraud detection IDs φ_i and prefills the

10 These privacy losses are inherent to any such scheme and need to be explicitly modeled in the ideal functionality in order for the indistinguishability proof to hold.

M. Hoffmann, V. Fetzer, M. Nagel, A. Rupp, R. Schwerdt

• P4TC

17

mapping f Φ for all wallets the user owns. This ensures that upcoming transactions use predetermined fraud detection IDs that are actually blacklisted. Prove Participation: The task Prove Participation simply checks if a record exists in the database for the particular user and serial number. Double-Spending Detection: The task Double-Spending Detection checks if the given fraud detection ID exists multiple times in the database, i.e. if double-spending has occurred with this ID. If so, FP4TC leaks the identity of the corresponding user to the adversary and asks the adversary to provide an arbitrary bit string that serves as a proof of guilt. FP4TC outputs both—the user’s identity and the proof—to the TSP. Additionally, the proof is internally recorded as being issued as a valid proof for this particular user. Please note, correctness, completeness and soundness is is guaranteed by the ideal bookkeeping service. Guilt Verification: The task Guilt Verification checks if the given proof is internally recorded as being issued for the particular user.
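The bookkeeping behind Double-Spending Detection can be illustrated in a few lines of Python. This is a toy sketch with hypothetical names, not the ideal functionality itself: entries are grouped by fraud detection ID, and any ID occurring more than once exposes the responsible user.

```python
from collections import defaultdict

def detect_double_spending(trdb):
    """Return the fraud detection IDs that occur in more than one
    transaction entry, together with the offending entries."""
    by_phi = defaultdict(list)
    for phi, pid_user in trdb:          # simplified entries (phi, pid_U)
        by_phi[phi].append((phi, pid_user))
    return {phi: entries for phi, entries in by_phi.items() if len(entries) > 1}

# Toy database: fraud detection ID "f1" was used twice by user "U1".
trdb = [("f1", "U1"), ("f2", "U2"), ("f1", "U1")]
assert list(detect_double_spending(trdb)) == ["f1"]
```

In the ideal model this check is trivial by construction; the real protocol achieves the same effect only via the double-spending tags described in Section 4.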

3.5 Mapping of the Desired Properties to the Ideal Model

In this section, we illustrate how the definition of our ideal model F_P4TC ensures the desired properties of a toll collection scheme (cp. subsection 1.5.3, (P1)-(P10)).

(P1) to (P3): In the tasks Debt Accumulation and Debt Clearance, the user only ever inputs the serial number of the wallet state he wants to use. All relevant information is then looked up internally by F_P4TC. Therefore, the user is not able to pretend that his wallet state contains any other information than it actually does (like different attributes in Debt Accumulation or a lower balance in Debt Clearance). F_P4TC also checks that the wallet actually belongs to this user. Only corrupted users are able to use wallets of one another, but not wallets of honest users.

(P4): As explained above, the same fraud detection ID φ occurs in multiple transaction entries if and only if double-spending was committed. The TSP can check this using the task Double-Spending Detection. If double-spending is detected, F_P4TC provides the TSP with the identity pid_U of the respective user and a publicly verifiable proof π that this user has actually committed double-spending. This prevents the TSP from falsely accusing a user of double-spending.

(P5) and (P7): As discussed in Section 1.5, we assume that if a user does not properly participate in Debt Accumulation, he is physically identified outside the scope of F_P4TC. The important feature F_P4TC provides for this property is the task Prove Participation. In this task, the SA inputs a set S_R^pp of serial numbers and the ID of the user in question. With the user's consent, F_P4TC then checks whether the user successfully participated in any of the transactions from this list. If so, it just responds with OK to the SA, who does not learn anything about any of the user's transactions beyond that.

(P6): After an instance of Debt Accumulation, an RSU gets the information (s, φ, a_U, a_R^prev) as output. As already discussed in Section 1.5, we assume user and RSU attributes to be sufficiently indistinct that they do not enable tracking of the user. This is not ensured within the scope of F_P4TC, apart from attribute outputs to the user, which enable him to check that they are not identifying. Unless double-spending is committed, the serial numbers s and fraud detection IDs φ are fresh and randomly drawn for every transaction and can therefore not compromise unlinkability. In case of double-spending, the TSP is able to directly link transactions with the same fraud detection ID and also find out the ID of the user responsible, but is still only able to link any other transactions of this user with help from the DR.

(P8): Given a user ID pid_U, the task User Blacklisting provides the TSP with a set of fraud detection IDs φ of all wallets λ of this user. In Debt Accumulation, the RSU inputs a blacklist bl_R (containing fraud detection IDs from the TSP). Then, F_P4TC checks whether the fraud detection ID of the wallet version the user wishes to use for the transaction is contained in this list.

(P9): Within the task User Blacklisting, F_P4TC sums up the prices of all past transactions of all wallets of the user in question and outputs the resulting amount b_bill to the TSP. As each instance of Debt Clearance results in a transaction entry trdb* with price p* = −b_bill, correctly cleared and paid wallets cancel out in this sum, and the result accurately gives the amount of debt still owed by the user.

(P10): This property is handled outside the scope of F_P4TC by encoding a limited time of validity into the RSU's attributes (cp. subsection 1.5.2).

4 PROTOCOL OVERVIEW

For easier comprehension, this section gives only a simplified description of the main protocols of our system. We believe that this description illustrates the main ideas behind P4TC. The description of the full protocols is postponed to Appendix D. Before describing our protocols, some remarks about the crypto building blocks used, secure channels, and the structure of user wallets are in order.

4.1 Crypto Building Blocks and Algebraic Setting

Our construction makes use of (F_gp-extractable) non-interactive zero-knowledge (NIZK) proofs, equivocal and extractable homomorphic commitments, digital signatures, public-key encryption, and pseudo-random functions (PRFs). These building blocks need to be efficiently and securely combinable with the chosen NIZK (which is Groth-Sahai in our case). Readers not familiar with these building blocks and their security properties are referred to Appendix A for a detailed description.

Our system instantiation is based on an asymmetric bilinear group setting (G_1, G_2, G_T, e, p, g_1, g_2). Here, G_1, G_2, G_T are cyclic groups of prime order p (where no efficiently computable homomorphisms between G_1 and G_2 are known); g_1, g_2 are generators of G_1 and G_2, respectively, and e : G_1 × G_2 → G_T is a (non-degenerate) bilinear map. We rely on the SXDH assumption [36], asserting that the well-known DDH assumption holds in both G_1 and G_2.

In this section we use a simpler notation than the one introduced in Appendix A in order to aid comprehension. Here, a commitment scheme consists of the PPT algorithms Com and Open, where Com returns a commitment c together with an opening information d for a given message m, and Open verifies whether a commitment c opens to a particular message m given the opening information d. A digital signature scheme consists of the PPT algorithms Sgn and Vfy, where Sgn takes as input the secret key sk and a message m ∈ M and outputs a signature σ, and Vfy takes as input the public key pk, a message m ∈ M, and a purported signature σ, and outputs a bit indicating whether or not the signature is valid.
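To make the Com/Open interface concrete, the following toy Pedersen-style commitment realizes exactly this syntax. The parameters are illustrative stand-ins, not the scheme from Appendix A; a real instantiation lives in a pairing-friendly group and uses generators with unknown discrete-logarithm relation.

```python
import secrets

# Toy parameters: the order-Q subgroup of Z_P^* with P = 2Q + 1 (both prime).
P, Q = 2039, 1019
g, h = 4, 9          # squares mod P, hence elements of order Q

def com(m):
    """Com: return a commitment c and opening information d for message m."""
    d = secrets.randbelow(Q)
    c = (pow(g, m % Q, P) * pow(h, d, P)) % P
    return c, d

def open_com(c, m, d):
    """Open: verify that c opens to message m under opening information d."""
    return c == (pow(g, m % Q, P) * pow(h, d, P)) % P

c, d = com(42)
assert open_com(c, 42, d)      # correct opening verifies
assert not open_com(c, 43, d)  # a different message is rejected
```

The commitment is hiding because d is uniformly random, and binding as long as the discrete logarithm of h to the base g is unknown.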

4.2 Secure Channels

In our system, all protocol messages are encrypted using CCA-secure encryption. For this purpose, a new session key chosen by the user is encrypted under the public key of an RSU/TSP for each interaction. We omit these encryptions when describing the protocols.

4.3 Wallets

A user wallet essentially consists of two signed commitments (c_T, c_R), where c_T represents the fixed part of a wallet and c_R the updatable part. Accordingly, the fixed part is signed along with the user's attributes a_U by the TSP using sk_T^T during wallet issuing. Every time c_R is updated, it is signed along with the serial number s of the transaction by the RSU using sk_R. The fixed part c_T = Com(λ, sk_U^id)11 is a commitment on the PRF key λ (which is used as wallet ID) and the user's secret identification key sk_U^id. The updatable part c_R = Com(λ, b, u_1, x) also contains λ (to link both parts), the current balance b (debt), some user-chosen randomness u_1 to generate double-spending tags for the current version of the wallet, and a counter value x being the input to the PRF. The value PRF(λ, x) serves as the wallet's fraud detection ID φ. This choice of the fraud detection ID has the advantage that the different versions of a wallet are traceable given λ but untraceable if λ is unknown.

11 Note that by abuse of notation, we sometimes ignore the opening or decommitment value, which is also an output of Com().
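The role of φ = PRF(λ, x) can be sketched as follows. HMAC-SHA256 stands in for the Dodis-Yampolskiy PRF used in the actual instantiation, and all names are illustrative:

```python
import hashlib
import hmac

def fraud_detection_id(lam: bytes, x: int) -> str:
    """phi := PRF(lambda, x); HMAC-SHA256 as a stand-in PRF."""
    return hmac.new(lam, x.to_bytes(8, "big"), hashlib.sha256).hexdigest()

lam = b"wallet-id-lambda"   # PRF key, doubling as the wallet ID

# Given lambda, every version of the wallet can be recomputed and linked ...
ids = [fraud_detection_id(lam, x) for x in range(3)]
assert ids == [fraud_detection_id(lam, x) for x in range(3)]

# ... while each version carries a distinct, pseudorandom-looking ID.
assert len(set(ids)) == 3
```

Without λ the IDs are indistinguishable from independent random strings by PRF security, which is exactly the traceable-with-trapdoor, untraceable-without property used by User Blacklisting.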

4.4 The P4TC Protocols

We stress again that the following description of the P4TC protocols is a simplified version. The actual protocols, including protocol interaction diagrams, can be found in Appendix D.

4.4.1 System Setup. In the setup phase, a trusted third party generates the bilinear group and a public common reference string for the system. The latter contains parameters for some of the building blocks, namely the NIZK and the commitment scheme(s) we use.

4.4.2 DR/TSP/RSU Key Generation. The DR generates a key pair (pk_DR, sk_DR) for an IND-CPA secure encryption scheme, where pk_DR is used to deposit a user-specific trapdoor (a PRF key) which allows to link this user's transactions in case of a dispute. An individual signature key pair (pk_R, sk_R) is generated for each RSU to sign the updatable part of a user's wallet. Moreover, the TSP generates several key pairs (pk_T^T, sk_T^T), (pk_T^cert, sk_T^cert), (pk_T^R, sk_T^R) for an EUF-CMA secure signature scheme. The key sk_T^T is used to sign fixed user-specific information when a new wallet is issued. Using sk_T^R, the TSP can play the role of the initial RSU to sign the updatable part of a new wallet.

4.4.3 RSU Certification. An RSU engages with the TSP in this protocol to obtain its certificate cert_R. It contains the RSU's public key pk_R, its attributes a_R (which are assigned in this protocol by the TSP), and a signature on both, generated by the TSP using sk_T^cert.

4.4.4 User Registration. Before a user can participate in the system, he needs to generate a key pair (pk_U, sk_U). We assume that binding pk_U to a verified physical user ID, in order to hold the user liable in case of misuse, is done out-of-band. For instance, this could be done by some trusted (external) certification authority. The keys pk_U = (pk_U^id, pk_U^auth) and sk_U = (sk_U^id, sk_U^auth) consist of two parts each: (pk_U^id, sk_U^id) is used throughout the protocols to bind a wallet to a user, whereas (pk_U^auth, sk_U^auth) is used only when a new wallet is issued, to authenticate the deposited PRF key. We use pk_U^id = g_1^{sk_U^id} ∈ G_1 and sk_U^id ∈ Z_p.

4.4.5 Wallet Issuing. In this protocol, executed between a user and the TSP, a new wallet with balance 0 is created and a new PRF key is chosen and deposited using the DR's public encryption key. The PRF key needs to be chosen jointly and must only be known to the user. If only the user chose the key, an adversary could tamper with recalculations and blacklisting, as well as with double-spending detection (e.g., by choosing the same key for two different users). If only the TSP chose it, a user's transactions would be linkable. To generate the wallet, the user encodes his secrets into the wallet. That means, he randomly chooses his share λ' of the PRF key and some randomness u_1 to generate a double-spending tag (later on). Then he computes the commitments c_T = Com(λ', sk_U^id) and c_R := Com(λ', b := 0, u_1, x := 0). Additionally, he prepares the deposit of λ'. This is a bit tricky, as the user needs to prove that the ciphertext he gives to the TSP is actually an encryption of λ' (under pk_DR). For practical reasons, we use Groth-Sahai NIZKs and the Dodis-Yampolskiy PRF, which has key space Z_p. To work with Groth-Sahai, we would need an algebraic encryption scheme with message space Z_p.


Unfortunately, we are not aware of any such scheme.12 We therefore use a workaround: the user splits λ' into small pieces λ'_i < B (e.g., B = 2^32) and encrypts g_1^{λ'_i} under pk_DR, resulting in ciphertexts e_i. The user then sends over pk_U, c_T, c_R, e_i along with a NIZK π proving that everything has been generated honestly and the wallet is bound to the user owning sk_U. This NIZK, in particular, includes range proofs (cf. Appendix A.2.2) for λ'_i < B. Besides this, the user also sends over an encryption e* of g_1^{λ'} and a signature σ* on the ciphertexts e_i and e* under sk_U^auth. This information is used to bind a PRF key to a user, so the DR cannot be tricked into recovering the trapdoor for a different user. When the TSP receives this data, it verifies the signature and the NIZK first. If these checks pass, the TSP chooses its share λ'' of the PRF key and adds it to the user's share in the commitments c_T, c_R by using the homomorphic property of the commitment scheme. Finally, c_T is signed using sk_T^T along with the attributes a_U the TSP assigned to the user, and c_R is signed along with s using sk_T^R.13 The resulting signatures σ_T, σ_R, the commitments c_T, c_R, the share λ'', and the attributes a_U are sent to the user, who checks their correctness. The user finally stores his freshly generated state token τ := (a_U, c_R, d_R, σ_R, c_T, d_T, σ_T, λ := λ' + λ'', b := 0, u_1, x := 1, s), where d_R and d_T are the decommitment values required to open c_R and c_T, respectively. The TSP stores htd := (λ'', e_i, e*, σ*) as hidden trapdoor to recover λ with the help of the DR.

4.4.6 User Blacklisting. This is a protocol executed by the DR upon request of the TSP, who provides the inputs pk_U^auth, htd. It recovers the PRF key of the user owning pk_U^auth or aborts.
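The deposit-and-recover mechanism (splitting λ' into B-ary pieces during Wallet Issuing, recovering them via small discrete logarithms during User Blacklisting) can be sketched as follows. The modulus and the digit bound B are toy values (the paper uses B = 2^32), and the ElGamal encryption of each g_1^{λ'_i} is omitted:

```python
# Toy group: order-1019 subgroup of Z_P^* with generator g (illustrative).
P, g = 2039, 4
B = 16                       # digit bound (illustrative; the paper uses 2^32)

def split(lam_prime, n):
    """Split lam_prime into n base-B digits, least significant first."""
    return [(lam_prime >> (4 * i)) & (B - 1) for i in range(n)]  # 4 = log2(B)

def small_dlog(a):
    """Recover d < B with g^d = a (mod P) by exhaustive search."""
    for d in range(B):
        if pow(g, d, P) == a:
            return d
    raise ValueError("no digit < B matches")

lam_prime = 0xBEEF                                    # user's share of the PRF key
blobs = [pow(g, d, P) for d in split(lam_prime, 4)]   # what the DR decrypts to
recovered = sum(small_dlog(a) * B**i for i, a in enumerate(blobs))
assert recovered == lam_prime
```

Because each digit is bounded by B, the discrete logarithms stay brute-forceable even though taking a logarithm in the full group is infeasible; this is the whole point of the workaround.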

First, the DR decrypts each e_i using sk_DR to obtain a_i := g_1^{λ'_i}, where the λ'_i are the small pieces of the user's share λ' of the PRF key. Then it computes the discrete logarithms of the a_i to the base g_1 to recover the pieces themselves. This is feasible as the DLOGs are very small. The key λ can be computed as λ := λ' + λ'', where λ' := Σ_i λ'_i · B^i. The DR also checks if e* decrypts to g_1^{λ'} and verifies the signature σ*. If a check fails, the DR aborts, as λ' has not been deposited by the user owning pk_U^auth. Using λ, the fraud detection IDs belonging to the previous and upcoming versions of a user's wallet can be computed. Thus, all interactions of the user (including double-spendings) in the TSP's database can be linked and the legitimate debt recalculated. Also, the fraud detection IDs for upcoming transactions with this wallet can be precomputed and blacklisted. Here, we need to precompute enough IDs to cover all Debt Accumulation transactions the user may perform until the end of the billing period.

4.4.7 Debt Accumulation. In this protocol, executed between a user and an RSU, the toll p is determined and a new version of the user's wallet with a debt increased by p is created. See Fig. 3 for a depiction of the protocol. The toll p may depend on the user's attributes, the current and the previous reader's attributes, as well as other factors like the current time, etc. The user stays anonymous, i.e., the RSU does not receive the user's key or see the previous balance. Only a_U and a_R^prev (the attributes of the RSU who signed the previous version of the wallet) are revealed to the RSU, which are assumed not to contain identifying information. The user's main input is the state token

τ_prev := (a_U, c_R^prev, d_R^prev, σ_R^prev, c_T, d_T, σ_T, λ, b_prev, u_1, x, s_prev)

containing the state of his previous protocol run, including the old version of his wallet. To prepare a new version of his wallet, the user computes a fresh commitment c'_R on (λ, b_prev, u_1^next, x), where except for u_1^next the same values are contained as in the previous version of his wallet. The randomness u_1^next is freshly chosen by the user

12 Note that Paillier encryption works in a different algebraic setting and cannot easily be combined with Groth-Sahai proofs.
13 During the protocol run, a uniformly random serial number s for this transaction was jointly generated by user and TSP by means of a Blum-like coin toss (see Debt Accumulation for a description).

U(pk_U, sk_U, τ_prev)                                R(cert_R, sk_R, bl_R)

U: choose s' ←R G_1 and u_1^next ←R Z_p
R: choose s'' ←R G_1, u_2 ←R Z_p and commit (c''_ser, d''_ser) ← Com(s'')

R → U: u_2, c''_ser

U: (c'_R, d'_R) ← Com(λ, b_prev, u_1^next, x)
   t := sk_U^id · u_2 + u_1
   φ := PRF(λ, x)
   (c_hid, d_hid) ← Com(pk_U^id)
   compute proof π

U → R: c_hid, c'_R, φ, t, a_U, a_R^prev, π, s'

R: if φ ∈ bl_R or π does not verify, return ⊥
   calculate price p; s := s' · s''
   (c''_R, d''_R) ← Com(0, p, 0, 1); c_R := c'_R · c''_R
   σ_R ← Sgn(sk_R, (c_R, s))

R → U: c''_R, d''_R, σ_R, p, cert_R, s'', d''_ser

U: d_R := d'_R · d''_R
   if Open(s'', c''_ser, d''_ser) = 0 or Vfy(pk_R, σ_R, (c_R, s)) = 0, return ⊥
   τ := (a_U, c_R, d_R, σ_R, c_T, d_T, σ_T, λ, b := b_prev + p, u_1^next, x + 1, s := s' · s'')

U returns (τ, c_hid, d_hid)                          R returns (φ, p, t, u_2, c_hid, s)

Fig. 3. A Simplified Version of the Debt Accumulation Protocol
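The RSU's update step in Fig. 3, c_R := c'_R · Com(0, p, 0, 1), relies on the commitment scheme being additively homomorphic. A toy multi-message Pedersen-style commitment (illustrative parameters, not the scheme from Appendix A) shows the effect on the committed vector (λ, b, u_1, x):

```python
import secrets

# Toy multi-message commitment over the order-1019 subgroup of Z_2039^*.
P, Q = 2039, 1019
GENS = [4, 9, 49, 121]   # one generator per message slot (lambda, b, u1, x)
H = 169                  # generator for the commitment randomness

def com(msgs, d=None):
    d = secrets.randbelow(Q) if d is None else d
    c = pow(H, d, P)
    for g_i, m in zip(GENS, msgs):
        c = (c * pow(g_i, m % Q, P)) % P
    return c, d

# User's commitment on (lambda, b_prev, u1_next, x) ...
c_prime, d_prime = com([7, 100, 5, 3])
# ... multiplied with Com(0, p, 0, 1) adds the toll p and bumps the counter.
c_pp, d_pp = com([0, 25, 0, 1])
c_R, d_R = (c_prime * c_pp) % P, (d_prime + d_pp) % Q
assert c_R == com([7, 125, 5, 4], d_R)[0]
```

In a real scheme the generators are derived so that their mutual discrete logarithms are unknown; here they are just distinct squares mod P chosen for the demonstration.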

and is used to generate a double-spending tag for the new wallet version. In order to get his new wallet version certified by the RSU, the user needs to invalidate his previous version. To do so, he needs to reveal the fraud detection ID and double-spending tag of his previous version. For the latter, the RSU sends a random challenge u_2 along with a commitment c''_ser = Com(s'') on its share of the serial number of this transaction (which is part of the Blum coin toss). Upon receiving these values, the user computes the double-spending tag t := u_2 · sk_U^id + u_1 mod p (a linear equation in the unknowns u_1 and sk_U^id), the fraud detection ID φ := PRF(λ, x), and a hidden user ID c_hid := Com(pk_U^id). The latter is used in Prove Participation to associate this interaction with the user.14 As response, the user sends over c_hid, c'_R, φ, t, a_U, a_R^prev, π, and s', where π is a NIZK proving that everything has been computed honestly, and s' is the user's share of the serial number. In particular, π shows that the user knows a certified wallet version involving commitments c_T and c_R^prev such that c_R^prev and c'_R are commitments on the same messages except for the double-spending randomness, that the (hidden) signature on c_R^prev verifies under some (hidden) RSU key pk_R^prev certified by the TSP, and that t, φ, and c_hid have been computed using the values contained in c_T and c_R^prev.

When receiving this data, the RSU first checks that φ is not on the blacklist and that π is correct. Then it calculates the price p, adds it to the user's balance b_prev and increases the counter x by 1 using the homomorphic property of c'_R. The resulting commitment c_R is signed along with the serial number s := s' · s'' using sk_R. Then the RSU sends c_R to the user, along with its decommitment information d_R, its signature σ_R, the added price p, the certificate for pk_R, the RSU's share s'' of the serial number and the decommitment value d''_ser for c''_ser. The user checks if the received data is valid and ends up with an updated state token τ := (a_U, c_R, d_R, σ_R, c_T, d_T, σ_T, λ, b_prev + p, u_1^next, x + 1, s) containing his new wallet version c_T, c_R, σ_T, σ_R. He additionally stores (c_hid, d_hid), where d_hid allows to open the hidden ID. The RSU stores (φ, p, t, u_2, c_hid, s) as transaction information.

14 We are aware of alternatives realizing this mechanism without introducing c_hid, which are, however, less efficient.

4.4.8 Prove Participation. In this protocol, executed between a user and the SA, the user proves that he participated in one of the Debt Accumulation transactions under audit by the SA. This protocol is identifying, as the SA retrieved the user's physical ID from his license plate number and, thus, is aware of his public key pk_U^id. The SA first sends the list Ω_R^pp of hidden ID commitments observed by the RSU during the considered interactions.
If the user owns some c_hid contained in Ω_R^pp, he simply sends over c_hid along with the corresponding opening d_hid he stored and the serial number s of the transaction.15 The SA accepts if c_hid indeed opens to pk_U^id under d_hid and s is part of the corresponding transaction information the RSU stored. No other user may claim to have sent c_hid, as this would imply opening c_hid with a different public key.

4.4.9 Debt Clearance. In this protocol, executed between a user and the TSP, the user identifies himself using pk_U^id and reveals the balance of his current wallet version. The wallet is invalidated by not creating a new version. Upon receiving a challenge u_2 from the TSP, the user computes the double-spending tag t and fraud detection ID φ for his wallet. This is the same as in Debt Accumulation. Then the user sends over pk_U^id, b_prev, φ, t and a NIZK π. This NIZK essentially shows that the user knows a certified wallet version with balance b_prev, fraud detection ID φ, and double-spending tag t which is bound to pk_U^id. Note that if the user does not make use of his latest wallet, double-spending detection will reveal this. If the proof verifies, the balance and the double-spending information, i.e., (b_prev, φ, t, u_2), is stored by the TSP.

4.4.10 Double-Spending Detection. This algorithm is applied by the TSP to its database containing double-spending tags. The fraud detection ID φ is used as the database index associated with the double-spending tag for a wallet version. If the same index appears twice in the TSP's database, a double-spending occurred and the cheater's key pair can be reconstructed from the two double-spending tags as follows: Assume there exist two entries (φ, t, u_2), (φ, t', u_2') in the TSP's database. In this case, sk_U^id can be recovered as sk_U^id = (t − t')(u_2 − u_2')^{−1} mod p with overwhelming probability. The cheater's public identification key pk_U^id can be computed from sk_U^id. As a consequence, the wallet bound to pk_U^id can be blacklisted. The output of the protocol is the fraudulent user's identity along with a proof-of-guilt π := sk_U^id.

4.4.11 Guilt Verification. This algorithm can be executed by any party to verify the guilt of an accused double-spender. Given a public key pk_U^id and a proof-of-guilt π, it checks whether π equals sk_U^id (which can be done by evaluating g_1^π = pk_U^id).

15 By additionally providing a time interval out-of-band, this search can be accelerated.
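The extraction of sk_U^id from two double-spending tags, and the subsequent guilt check g_1^π = pk_U^id, can be replayed numerically. The group parameters below are toys (the real system works in G_1 of the bilinear group); the mechanics are the same:

```python
# Toy group: order-Q subgroup of Z_P^* with generator g (illustrative).
P, Q, g = 2039, 1019, 4

sk, u1 = 123, 777            # user's sk_id and wallet randomness u_1
pk = pow(g, sk, P)

def tag(u2):
    """Double-spending tag t := sk * u2 + u1 (mod Q)."""
    return (sk * u2 + u1) % Q

# Two spends of the same wallet version: same phi, same u1, fresh u2.
(t1, u2a), (t2, u2b) = (tag(15), 15), (tag(99), 99)

# Double-Spending Detection: solve the two linear equations for sk.
recovered = ((t1 - t2) * pow(u2a - u2b, -1, Q)) % Q
assert recovered == sk

# Guilt Verification: the proof pi := sk checks out against pk.
assert pow(g, recovered, P) == pk
```

A single tag reveals nothing about sk because u_1 acts as a one-time pad; only a second tag with the same u_1 makes the system of equations solvable. (The modular inverse via three-argument pow requires Python 3.8 or later.)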

5 SECURITY THEOREM

For our toll collection scheme we formally prove that, under the Co-CDH assumption and the security of our building blocks, the protocols from Section 4 (combined into one comprehensive toll collection protocol π_P4TC) GUC-realize the ideal model F_P4TC from Section 2 (in the (F_CRS, G_bb)-hybrid model), i.e.,

π_P4TC^{F_CRS, G_bb} ≥_UC F_P4TC^{G_bb}.

Informally, this means the ideal model and our protocol are indistinguishable and therefore provide the same guarantees with regard to security and privacy. The statement holds given static corruption of either
(1) a subset of users,
(2) all users and a subset of RSUs, TSP and SA,
(3) a subset of RSUs, TSP and SA, or
(4) all RSUs, TSP and SA as well as a subset of users.
Be reminded that we assume the DR to be honest. In Appendix B.1, more details regarding the corruption possibilities are presented. Other details of our adversarial model, i.e., the underlying channel model and the handling of aborts, can be found in Appendix B as well. We now briefly describe the outline of the proof. The detailed formal statement, including the underlying assumptions, and the full proof are postponed to Appendix E.

Proof Outline
For our security statement, we separately prove correctness, system security, and user security and privacy. Although proofs of correctness are often neglected for smaller UC-secure protocols, they are highly nontrivial for extensive ideal models as are required for a complex real-world task like toll collection. Since it is not only instructive for understanding the underlying functionality but also a helpful basis for our proofs of system and user security, we briefly sketch the idea behind our proof of correctness here. The detailed formal proof can be found in Appendix E.1.

First, note that the entries trdb = (s_prev, s, φ, λ, pid_U, pid_R, p, b) of the ideal transaction database TRDB define a graph structure where serial numbers s are considered vertices and predecessors s_prev define edges (s_prev, s). Assigning a label (pid_R, p) to this edge and (φ, λ, pid_U, b) to the vertex s results in a graph where every vertex represents the state of a wallet and the incoming edge represents the transaction that led to this wallet state. We call this perception of TRDB the Ideal Transaction Graph and give graph-theoretic proofs of its structural properties. These properties include that the graph as a whole is a directed forest, where each tree corresponds to a wallet ID λ, double-spending corresponds to branching, and different wallet states have the same fraud detection ID φ if and only if they have the same depth in the same tree.

Secondly, we add the in- and out-commitments (c_R^in, c_T^in) and (c_R^out, c_T^out) from the real protocols to each transaction node in the Ideal Transaction Graph. These commitments are the fixed and updatable part of the wallet before and after the transaction (cp. Section 4). This information gives a second set of edges, where two transactions trdb and trdb* are connected if (c_R^out, c_T^out) of trdb corresponds to (c_R^in, c_T^in) of trdb*. We call the resulting graph the Augmented Transaction Graph.
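A minimal sketch of the Ideal Transaction Graph (simplified entries with hypothetical field values): serial numbers are vertices, (s_prev, s) pairs are edges, and a vertex with two outgoing edges is exactly a double-spent wallet state:

```python
from collections import defaultdict

# Simplified trdb entries: (s_prev, s, phi, wallet_id).
trdb = [
    (None, "s1", "f0", "w1"),   # root node created by Wallet Issuing
    ("s1", "s2", "f1", "w1"),   # honest Debt Accumulation
    ("s1", "s3", "f1", "w1"),   # second spend of state s1: a branch
]

children = defaultdict(list)
for s_prev, s, phi, lam in trdb:
    if s_prev is not None:
        children[s_prev].append(s)

# Branching vertices correspond to double-spending; note that both
# children sit at the same tree depth and hence share the fraud
# detection ID f1, as stated in the structural properties above.
branches = {v: c for v, c in children.items() if len(c) > 1}
assert branches == {"s1": ["s2", "s3"]}
```

Without the branch, every wallet tree degenerates into a simple path from the issuing root to the clearance leaf.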
Showing that in case of an honest execution of the toll collection protocol both of these graph structures coincide with overwhelming probability yields correctness.

The proofs of system security and user security and privacy are conducted by explicitly specifying a simulator and reducing the indistinguishability of the real and ideal world to the security of our building blocks. During an execution of the protocol, our simulator generates the Augmented Transaction Graph as explained above. After showing that all messages sent by the simulator are statistically close to the real messages, i.e., simulated perfectly, only two kinds of reasons remain why the environment Z could be able to distinguish both worlds. The first kind are failure events, where the two graph structures in the Augmented Transaction Graph diverge, and the second

Table 1. Averaged performance results for execution time t and transmitted data n. Byte values are rounded. The left four data columns refer to |a_U| = |a_R| = 1, the right four to |a_U| = |a_R| = 4.

Protocol                      | t_user [ms] | t_RSU/TSP [ms] | n_user [byte] | n_RSU/TSP [byte] | t_user [ms] | t_RSU/TSP [ms] | n_user [byte] | n_RSU/TSP [byte]
Session Key Generation        | 14.97       | 2.30           | 130           | –                | 14.98       | 2.42           | 130           | –
Wallet Issuing                | 21205.86    | 30675.71       | 78975         | 1264             | 21306.07    | 30698.55       | 78995         | 1456
Debt Accumulation             |             |                |               |                  |             |                |               |
– Precomp (offline)           | 2677.31     | –              | –             | –                | 2676.53     | –              | –             | –
– Online                      | 346.80      | 462.27         | 7968          | 976              | 453.72      | 479.34         | 8160          | 1072
– Online (cached certificate) | 41.01       | 462.27         | 7968          | 976              | 40.23       | 479.34         | 8160          | 1072
– Postprocessing (offline)    | 34.22       | –              | –             | –                | 36.35       | –              | –             | –
Debt Clearance                | 2398.88     | 3062.49        | 7135          | 96               | 2399.32     | 3140.02        | 7328          | 96

are discrepancy events where some party’s outputs could be distinguished. We show that both of those cases only occur with negligible probability by various reductions to our cryptographic building blocks and hardness assumptions. The full versions of those proofs can be found in Appendices E.2 and E.3.

6 PERFORMANCE EVALUATION

In order to evaluate the practicality of P4TC, we implemented our system for a realistic target platform. We performed our measurements for the user side on an i.MX6 Dual-Core processor running at 800MHz with 1GB DDR3 RAM and 4GB eMMC Flash, the same processor as used in the Savari MobiWAVE-1000 OBU [57]. The processor runs an embedded Linux, is ARM Cortex-A9 based (32-bit), and also exists in a more powerful Quad-Core variant. For the RSU hardware we take the ECONOLITE Connected Vehicle CoProcessor Module as a reference system, which was specifically "designed to enable third-party-developed or processor-intensive applications" [34], and performed our measurements on comparable hardware.

6.1 Building Block Instantiation

We implemented P4TC in C++14 using our own library for Groth-Sahai NIZK proofs [30, 33] and employed the method in [11] to realize the range proofs required in Wallet Issuing. We make use of two types of commitments: the shrinking commitment scheme from [2], as well as the (dual-mode) extractable commitment scheme from [33]. Moreover, we implemented the structure-preserving signature scheme from [1]. We use the ElGamal encryption scheme [29] for encrypting PRF key shares in the Wallet Issuing protocol, as well as the IND-CCA-secure encryption scheme from [19] (in combination with AES-CBC and HMAC-SHA256) to establish secure channels for exchanging protocol messages. The required PRF is instantiated with the Dodis-Yampolskiy construction introduced in [28]. For the underlying math we used the RELIC toolkit v.0.4.1, an open source cryptography and arithmetic library written in C under LGPL 2.1 license, with support for pairing-friendly elliptic curves [3].

6.2 Parameter Choice

As for the bilinear group setting, we use the Barreto-Naehrig curves Fp254BNb and Fp254n2BNb presented by Aranha et al. [8, 44]. For the pairing function, we use the optimal Ate pairing since it results in the shortest execution times [52]. This yields a security level of about 100 bit [6]. We evaluated P4TC for two sizes of attribute vectors: |a U | = |a R | = 1 and |a U | = |a R | = 4. With curves of 254-bit order, each vector component can encode up to 253 bits of arbitrary information. So in practice it should


be possible to encode multiple attributes into one such component. Since the actual encoding of data depends on the concrete scenario, we only focus here on evaluating the performance penalties when increasing the size of the attribute vectors.

6.3 Implementation Results

Table 1 shows the results of our measurements on the OBU and on the RSU/TSP side in terms of execution time and transmitted data. All values are averaged over 1000 independent protocol executions. Note that the processor is running an embedded Linux; hence, execution times can vary by tens of milliseconds due to internal processes and scheduling. We only considered the main protocols, which include the expensive NIZKs. The row entitled "Session Key Generation" in Table 1 includes the runtime and size of data to set up a session key for the secure channel which is established prior to any protocol run.16 In order to utilize the capabilities of our target platform, the user-side algorithms were optimized for two CPU cores. Note that for simplicity, we assumed the same hardware for the TSP as for the RSU, although we can expect the TSP side to be equipped with much more powerful hardware.

6.3.1 Debt Accumulation. While the protocols Wallet Issuing and Debt Clearance can be regarded as non-time-critical, Debt Accumulation is performed while driving (possibly at high speed). Thus, execution has to be as efficient as possible. Fortunately, all parts of the expensive NIZK proof which do not involve the challenge value u_2 (provided by the RSU) can be precomputed (offline phase) at the OBU, which takes approximately 2670ms. During the actual interaction with the RSU (online phase), the remaining part of the NIZK is computed and all data is transmitted. Computations in this online phase take only approximately 350ms (for |a_U| = |a_R| = 1) on the OBU, mostly due to the verification of the RSU certificate. When caching valid certificates, computations during the online phase can be reduced to approximately 40ms. After the OBU has received a response from the RSU, the internal wallet has to be updated. Since this step can again be done offline, we measured its execution time separately and call this phase postprocessing.
We also optimized the computations performed by the RSU, taking advantage of its four CPU cores and the batching techniques for Groth-Sahai verification by Herold et al. [37]. In this way, we obtain an online runtime of about 460 ms. In summary, all computations in the online phase of Debt Accumulation can be performed in about 810 ms, or just 500 ms when the certificate has already been cached. The WAVE data transmission standard on DSRC guarantees a transmission rate of 24 Mbit/s [49]. At this rate, all data of the Debt Accumulation protocol is transmitted in approximately 27 ms. While the standard claims communication ranges of up to 1 km, we assume a toll collection zone of 50 m. Moreover, we may assume one RSU per lane [47], so that the workload can easily be spread among the available units. Going at 120 km/h, it takes a car 1.5 s to travel this distance, leaving us with a time buffer of more than 600 ms in case of an uncached certificate, or one full second with a cached certificate. Considering the mandated safety distance at this speed, there should only be a single car inside the 50 m zone on each lane. However, since the computations take less time than it takes a car to cross the toll collection zone, the system could theoretically handle cars with a distance of only 24 m at 120 km/h. We therefore conclude that the performance is sufficient for real-world scenarios.

6.3.2 Storage Requirements. During Debt Accumulation, the RSU and the OBU collect data in order to, e.g., prevent double-spending or to prove participation in a protocol run. In Debt Accumulation, the OBU has to store 134 bytes of transaction information and (optionally) 268 bytes to cache the RSU certificate. Assuming that 10000 transactions are performed by the OBU in one billing period, it only has to store 1.34 MB of transaction information and, even if all visited RSUs were different, 2.68 MB of cached certificates. The wallet itself consumes 1 kB of memory and is fixed in size.
The RSU has to store 242 bytes of transaction information after each run of Debt Accumulation (for 32-bit toll values). All this information is eventually aggregated at the TSP’s database. The US-based toll collection system E-ZPass reported about 252.4 million transactions per month in 2016 [35], which would result in a database of size 61 GB. In case a wallet is blacklisted, an RSU is updated with a list of future fraud detection IDs. Each detection ID consumes 34 bytes of memory. Using appropriate data structures such as hashsets or hashmaps, a lookup is performed in less than 1 ms in sets of size 10^6. Hence, blacklisting does not affect the execution time on the RSU. Such a set consumes 34 MB of memory, neglecting the overhead induced by the data structure.

6.3.3 Computing DLOGs. To blacklist a user, the DR has to compute a number of discrete logarithms to recover λ. With our choice of parameters, λ is split into 32-bit values, thus resulting in the computation of eight 32-bit DLOGs. While DLOGs of this size can be brute-forced naively, the technique of Bernstein and Lange [10] can be used to speed up this process. Using their algorithm, computing a discrete logarithm in an interval of order 2^32 takes 1.93 · 2^(32/3) ≈ 3138 multiplications with a table of 1626 precomputed elements, which can be generated with 1.24 · 2^(64/3) ≈ 3197118 multiplications. However, this 55 kB table only has to be generated once for the entire system. With this table, computing all eight DLOGs requires approximately 25104 multiplications on average. Using only a single core of a standard desktop machine, this translates to 12 seconds of computation time. Thus, the required DLOGs can be computed in reasonable time by the DR.

¹⁶ All communication between the participants is encrypted with AES-CBC and HMAC-SHA256 to realize a secure channel. Therefore, each individual protocol is preceded by a symmetric key exchange via the KEM by Cash et al. [19].
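The chunk-wise recovery of λ can be illustrated with a runnable toy version. The sketch below uses 16-bit chunks and plain baby-step giant-step instead of the 32-bit intervals and the Bernstein–Lange precomputation used in the actual system; the modulus, generator, and chunk size are illustrative assumptions, not the paper's parameters:

```python
# Toy illustration of the DR's blacklisting computation: a secret lambda is
# split into small chunks, each published "in the exponent", and each chunk
# is recovered by solving a small-interval discrete logarithm. The paper
# uses eight 32-bit chunks and the Bernstein-Lange algorithm; here we use
# 16-bit chunks and plain baby-step giant-step (BSGS) for a quick demo.

P = 2**127 - 1          # toy prime modulus (illustrative, not a system choice)
G = 3                   # assumed base element for the sketch
CHUNK_BITS = 16

def bsgs(h, bound):
    """Find x in [0, bound) with G^x = h (mod P) by baby-step giant-step."""
    m = int(bound**0.5) + 1
    baby = {pow(G, j, P): j for j in range(m)}   # baby steps G^0 .. G^{m-1}
    giant = pow(G, -m, P)                        # G^{-m} (Python >= 3.8)
    y = h
    for i in range(m + 1):
        if y in baby:
            return i * m + baby[y]
        y = (y * giant) % P
    raise ValueError("no logarithm in the given interval")

lam = 0xDEADBEEF12345678                         # 64-bit secret
chunks = [(lam >> (CHUNK_BITS * i)) & (2**CHUNK_BITS - 1) for i in range(4)]
encoded = [pow(G, c, P) for c in chunks]         # what the DR gets to see

recovered = [bsgs(h, 2**CHUNK_BITS) for h in encoded]
lam2 = sum(c << (CHUNK_BITS * i) for i, c in enumerate(recovered))
assert lam2 == lam                               # lambda fully recovered
```

BSGS needs O(sqrt(bound)) group operations per chunk, which is why splitting λ into small chunks (rather than solving one large DLOG) makes recovery feasible.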

REFERENCES

[1] Masayuki Abe, Jens Groth, Kristiyan Haralambiev, and Miyako Ohkubo. 2011. Optimal Structure-Preserving Signatures in Asymmetric Bilinear Groups. In Advances in Cryptology – CRYPTO 2011 (Lecture Notes in Computer Science), Phillip Rogaway (Ed.), Vol. 6841. Springer, Heidelberg, 649–666.
[2] Masayuki Abe, Markulf Kohlweiss, Miyako Ohkubo, and Mehdi Tibouchi. 2015. Fully Structure-Preserving Signatures and Shrinking Commitments. In Advances in Cryptology – EUROCRYPT 2015, Part II (Lecture Notes in Computer Science), Elisabeth Oswald and Marc Fischlin (Eds.), Vol. 9057. Springer, Heidelberg, 35–65. https://doi.org/10.1007/978-3-662-46803-6_2
[3] D. F. Aranha and C. P. L. Gouvêa. 2016. RELIC is an Efficient Library for Cryptography. Online resource. (2016). https://github.com/relic-toolkit/relic.
[4] Josep Balasch, Alfredo Rial, Carmela Troncoso, Bart Preneel, Ingrid Verbauwhede, and Christophe Geuens. 2010. PrETP: Privacy-Preserving Electronic Toll Pricing (extended version). In Proceedings of the 19th USENIX Security Symposium. 63–78.
[5] Boaz Barak, Ran Canetti, Jesper Buus Nielsen, and Rafael Pass. 2004. Universally Composable Protocols with Relaxed Set-Up Assumptions. In 45th Annual Symposium on Foundations of Computer Science. IEEE Computer Society Press, 186–195.
[6] Razvan Barbulescu and Sylvain Duquesne. 2017. Updating key size estimations for pairings. Cryptology ePrint Archive, Report 2017/334. (2017). http://eprint.iacr.org/2017/334.
[7] Amira Barki, Solenn Brunet, Nicolas Desmoulins, Sébastien Gambs, Saïd Gharout, and Jacques Traoré. 2016. Private eCash in Practice (Short Paper). In International Conference on Financial Cryptography and Data Security (FC 2016). Springer, 99–109.
[8] Paulo S. L. M. Barreto and Michael Naehrig. 2006. Pairing-Friendly Elliptic Curves of Prime Order. In SAC 2005: 12th Annual International Workshop on Selected Areas in Cryptography (Lecture Notes in Computer Science), Bart Preneel and Stafford Tavares (Eds.), Vol. 3897. Springer, Heidelberg, 319–331.
[9] Mihir Bellare. 2015. New Proofs for NMAC and HMAC: Security Without Collision-Resistance. Journal of Cryptology 28, 4 (2015), 844–878.
[10] Daniel J. Bernstein and Tanja Lange. 2012. Computing small discrete logarithms faster. Cryptology ePrint Archive, Report 2012/458. (2012). http://eprint.iacr.org/2012/458.
[11] Jan Camenisch, Rafik Chaabouni, and abhi shelat. 2008. Efficient Protocols for Set Membership and Range Proofs. In Advances in Cryptology – ASIACRYPT 2008 (Lecture Notes in Computer Science), Josef Pieprzyk (Ed.), Vol. 5350. Springer, Heidelberg, 234–252.
[12] Jan Camenisch, Susan Hohenberger, and Anna Lysyanskaya. 2005. Compact E-Cash. In Advances in Cryptology – EUROCRYPT 2005 (Lecture Notes in Computer Science), Ronald Cramer (Ed.), Vol. 3494. Springer, Heidelberg, 302–321.
[13] Ran Canetti. 2001. Universally Composable Security: A New Paradigm for Cryptographic Protocols. In 42nd Annual Symposium on Foundations of Computer Science. IEEE Computer Society Press, 136–145.
[14] Ran Canetti. 2006. Security and Composition of Cryptographic Protocols: A Tutorial. Cryptology ePrint Archive, Report 2006/465. (2006). http://eprint.iacr.org/2006/465.


[15] Ran Canetti. 2007. Obtaining Universally Composable Security: Towards the Bare Bones of Trust. Cryptology ePrint Archive, Report 2007/475. (2007). http://eprint.iacr.org/2007/475.
[16] Ran Canetti, Yevgeniy Dodis, Rafael Pass, and Shabsi Walfish. 2007. Universally Composable Security with Global Setup. In TCC 2007: 4th Theory of Cryptography Conference (Lecture Notes in Computer Science), Salil P. Vadhan (Ed.), Vol. 4392. Springer, Heidelberg, 61–85.
[17] Ran Canetti, Yehuda Lindell, Rafail Ostrovsky, and Amit Sahai. 2002. Universally composable two-party and multi-party secure computation. In 34th Annual ACM Symposium on Theory of Computing. ACM Press, 494–503.
[18] Ran Canetti, Daniel Shahaf, and Margarita Vald. 2016. Universally Composable Authentication and Key-Exchange with Global PKI. In PKC 2016: 19th International Conference on Theory and Practice of Public Key Cryptography, Part II (Lecture Notes in Computer Science), Chen-Mou Cheng, Kai-Min Chung, Giuseppe Persiano, and Bo-Yin Yang (Eds.), Vol. 9615. Springer, Heidelberg, 265–296. https://doi.org/10.1007/978-3-662-49387-8_11
[19] David Cash, Eike Kiltz, and Victor Shoup. 2008. The Twin Diffie-Hellman Problem and Applications. In Advances in Cryptology – EUROCRYPT 2008 (Lecture Notes in Computer Science), Nigel P. Smart (Ed.), Vol. 4965. Springer, Heidelberg, 127–145.
[20] Xihui Chen, Gabriele Lenzini, Sjouke Mauw, and Jun Pang. 2012. A Group Signature Based Electronic Toll Pricing System. In 2012 Seventh International Conference on Availability, Reliability and Security (ARES 2012). 85–93.
[21] Xihui Chen, Gabriele Lenzini, Sjouke Mauw, and Jun Pang. 2013. Design and Formal Analysis of A Group Signature Based Electronic Toll Pricing System. JoWUA 4, 1 (2013), 55–75. http://isyou.info/jowua/papers/jowua-v4n1-3.pdf
[22] European Commission. 2015. Study on State of the Art of Electronic Road Tolling. https://ec.europa.eu/transport/sites/transport/files/modes/road/road_charging/doc/study-electronic-road-tolling.pdf. (2015). [Online; accessed April-19-2018].
[23] European Commission. 2017. Proposal for a Directive of the European Parliament and of the Council on the Interoperability of Electronic Road Toll Systems and Facilitating Cross-border Exchange of Information on the Failure to Pay Road Fees in the Union (recast). https://ec.europa.eu/transport/sites/transport/files/com20170280-eets-directive.pdf. (2017). [Online; accessed April-19-2018].
[24] Morten Dahl, Stéphanie Delaune, and Graham Steel. 2012. Formal Analysis of Privacy for Anonymous Location Based Services. Theory of Security and Applications (2012), 98–112.
[25] Datatilsynet. 2007. Statens Vegvesen Holdt Tilbake Viktig AutoPASS-Informasjon (Press release). http://www.datatilsynet.no/. (2007). [Online; accessed April-19-2018].
[26] Jeremy Day, Yizhou Huang, Edward Knapp, and Ian Goldberg. 2011. SPEcTRe: Spot-checked Private Ecash Tolling at Roadside. In Proceedings of the 10th Annual ACM Workshop on Privacy in the Electronic Society (WPES ’11). 61–68.
[27] Wiebren de Jonge and Bart Jacobs. 2008. Privacy-Friendly Electronic Traffic Pricing via Commits. In Formal Aspects in Security and Trust – FAST 2008 (Lecture Notes in Computer Science), Vol. 5491. 143–161.
[28] Yevgeniy Dodis and Aleksandr Yampolskiy. 2004. A Verifiable Random Function With Short Proofs and Keys. Cryptology ePrint Archive, Report 2004/310. (2004). http://eprint.iacr.org/2004/310.
[29] Taher ElGamal. 1984. A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. In Advances in Cryptology – CRYPTO’84 (Lecture Notes in Computer Science), G. R. Blakley and David Chaum (Eds.), Vol. 196. Springer, Heidelberg, 10–18.
[30] Alex Escala and Jens Groth. 2014. Fine-Tuning Groth-Sahai Proofs. In PKC 2014: 17th International Conference on Theory and Practice of Public Key Cryptography (Lecture Notes in Computer Science), Hugo Krawczyk (Ed.), Vol. 8383. Springer, Heidelberg, 630–649. https://doi.org/10.1007/978-3-642-54631-0_36
[31] eugdpr.org. 2018. The EU General Data Protection Regulation (GDPR). https://www.eugdpr.org/. (2018). [Online; accessed April-19-2018].
[32] Flavio D. Garcia, Eric R. Verheul, and Bart Jacobs. 2011. Cell-Based Roadpricing. In European Public Key Infrastructure Workshop – EuroPKI 2011 (Lecture Notes in Computer Science), Vol. 7163. 106–122.
[33] Jens Groth and Amit Sahai. 2008. Efficient Non-interactive Proof Systems for Bilinear Groups. In Advances in Cryptology – EUROCRYPT 2008 (Lecture Notes in Computer Science), Nigel P. Smart (Ed.), Vol. 4965. Springer, Heidelberg, 415–432.
[34] ECONOLITE Group. 2018. Connected Vehicle CoProcessor Module. http://www.econolitegroup.com/wp-content/uploads/2017/05/controllers-connectedvehicle-datasheet.pdf. (2018). [Online; accessed 07-April-2018].
[35] E-ZPass Group. 2017. E-ZPass Statistics: 2005 - 2016. https://e-zpassiag.com/about-us/statistics. (2017). [Online; accessed May-08-2018].
[36] Gunnar Hartung, Max Hoffmann, Matthias Nagel, and Andy Rupp. 2017. BBA+: Improving the Security and Applicability of Privacy-Preserving Point Collection. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017, Bhavani M. Thuraisingham, David Evans, Tal Malkin, and Dongyan Xu (Eds.). ACM, 1925–1942. https://doi.org/10.1145/3133956.3134071
[37] Gottfried Herold, Max Hoffmann, Michael Klooß, Carla Ràfols, and Andy Rupp. 2017. New Techniques for Structural Batch Verification in Bilinear Groups with Applications to Groth-Sahai Proofs. In ACM CCS 17: 24th Conference on Computer and Communications Security, Bhavani M. Thuraisingham, David Evans, Tal Malkin, and Dongyan Xu (Eds.). ACM Press, 1547–1564.
[38] Jaap-Henk Hoepman and George Huitema. 2010. Privacy Enhanced Fraud Resistant Road Pricing. What Kind of Information Society? Governance, Virtuality, Surveillance, Sustainability, Resilience (2010), 202–213.
[39] Kapsch (Toll Collection System Integrator and Supplier). 2018. Personal Communication. (2018).


[40] Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. 2008. Founding Cryptography on Oblivious Transfer - Efficiently. In Advances in Cryptology – CRYPTO 2008 (Lecture Notes in Computer Science), David Wagner (Ed.), Vol. 5157. Springer, Heidelberg, 572–591.
[41] Roger Jardí-Cedó, Jordi Castellà-Roca, and Alexandre Viejo. 2014. Privacy-Preserving Electronic Toll System with Dynamic Pricing for Low Emission Zones. In Data Privacy Management, Autonomous Spontaneous Security and Security Assurance – DPM 2014, SETOP 2014 and QASA 2014. Revised Selected Papers (Lecture Notes in Computer Science), Vol. 8872. 327–334.
[42] Roger Jardí-Cedó, Macià Mut-Puigserver, M. Magdalena Payeras-Capellà, Jordi Castellà-Roca, and Alexandre Viejo. 2014. Electronic Road Pricing System for Low Emission Zones to Preserve Driver Privacy. In Modeling Decisions for Artificial Intelligence – MDAI 2014. Proceedings. 1–13.
[43] Roger Jardí-Cedó, Macià Mut-Puigserver, M. Magdalena Payeras-Capellà, Jordi Castellà-Roca, and Alexandre Viejo. 2016. Privacy-preserving Electronic Road Pricing System for Multifare Low Emission Zones. In Proceedings of the 9th International Conference on Security of Information and Networks (SIN 2016). 158–165.
[44] Yuto Kawahara, Tetsutaro Kobayashi, Michael Scott, and Akihiro Kato. 2016. Barreto-Naehrig Curves. Internet Draft. Internet Engineering Task Force. Work in Progress.
[45] Florian Kerschbaum and Hoon Wei Lim. 2015. Privacy-Preserving Observation in Public Spaces. In ESORICS 2015: 20th European Symposium on Research in Computer Security, Part II (Lecture Notes in Computer Science), Günther Pernul, Peter Y. A. Ryan, and Edgar R. Weippl (Eds.), Vol. 9327. Springer, Heidelberg, 81–100. https://doi.org/10.1007/978-3-319-24177-7_5
[46] Eike Kiltz, Jiaxin Pan, and Hoeteck Wee. 2015. Structure-Preserving Signatures from Standard Assumptions, Revisited. In Advances in Cryptology – CRYPTO 2015, Part II (Lecture Notes in Computer Science), Rosario Gennaro and Matthew J. B. Robshaw (Eds.), Vol. 9216. Springer, Heidelberg, 275–295. https://doi.org/10.1007/978-3-662-48000-7_14
[47] Hamish Koelmeyer and Sithamparanathan Kandeepan. 2017. Tagless Tolling using DSRC for Intelligent Transport System: An Interference Study. In Asia Modelling Symposium. IEEE.
[48] Jorn Lapon, Markulf Kohlweiss, Bart De Decker, and Vincent Naessens. 2010. Performance Analysis of Accumulator-Based Revocation Mechanisms. In Security and Privacy - Silver Linings in the Cloud - 25th IFIP TC-11 International Information Security Conference, SEC 2010, Held as Part of WCC 2010, Brisbane, Australia, September 20-23, 2010. Proceedings (IFIP Advances in Information and Communication Technology), Kai Rannenberg, Vijay Varadharajan, and Christian Weber (Eds.), Vol. 330. Springer, 289–301. https://doi.org/10.1007/978-3-642-15257-3_26
[49] Yunxin Jeff Li. 2010. An overview of the DSRC/WAVE technology. In International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness. Springer, 544–558.
[50] Markets and Markets. 2017. Electronic Toll Collection Market Study. https://www.marketsandmarkets.com/Market-Reports/electronic-toll-collection-system-market-224492059.html. (2017). [Online; accessed April-19-2018].
[51] Sarah Meiklejohn, Keaton Mowery, Stephen Checkoway, and Hovav Shacham. 2011. The Phantom Tollbooth: Privacy-Preserving Electronic Toll Collection in the Presence of Driver Collusion. In Proceedings of the 20th USENIX Security Symposium.
[52] Dustin Moody, Rene C. Peralta, Ray A. Perlner, Andrew R. Regenscheid, Allen L. Roginsky, and Lidong Chen. 2015. Report on Pairing-based Cryptography. In Journal of Research of the National Institute of Standards and Technology, Vol. 120. National Institute of Standards and Technology, Gaithersburg, MD, USA, 11–27.
[53] ABC News. 2008. Toll Records Catch Unfaithful Spouses. https://abcnews.go.com/Technology/story?id=3468712&page=1. (2008). [Online; accessed April-19-2018].
[54] European Parliament. 2014. Technology Options for the European Electronic Toll Service. http://www.europarl.europa.eu/RegData/etudes/STUD/2014/529058/IPOL_STUD(2014)529058_EN.pdf. (2014). [Online; accessed April-19-2018].
[55] Raluca Ada Popa, Hari Balakrishnan, and Andrew J. Blumberg. 2009. VPriv: Protecting Privacy in Location-Based Vehicular Services. In Proceedings of the 18th USENIX Security Symposium. 335–350.
[56] Andy Rupp, Foteini Baldimtsi, Gesine Hinterwälder, and Christof Paar. 2015. Cryptographic Theory Meets Practice: Efficient and Privacy-Preserving Payments for Public Transport. ACM Trans. Inf. Syst. Secur. 17, 3 (2015), 10:1–10:31. https://doi.org/10.1145/2699904
[57] Savari.net. 2017. MobiWAVE On-Board-Unit (OBU). http://savari.net/wp-content/uploads/2017/05/MW-1000_April2017.pdf. (2017). [Online; accessed 05-February-2018].
[58] American Civil Liberties Union. 2015. Toll Records Catch Unfaithful Spouses. https://www.aclu.org/blog/privacy-technology/location-tracking/newly-obtained-records-reveal-extensive-monitoring-e-zpass. (2015). [Online; accessed April-19-2018].

A PRELIMINARIES

In this appendix we introduce the algebraic setting and building blocks we make use of. In particular, the latter includes non-interactive zero-knowledge proofs, commitments, signatures, encryption and pseudo-random functions. We also describe possible instantiations for these building blocks and explain how these primitives are used in our system.

A.1 Algebraic Setting and Assumptions

Our protocol instantiations are based on an asymmetric bilinear group setting gp := (G_1, G_2, G_T, e, p, g_1, g_2). We adopt the following definition from [36].

Definition A.1 (Prime-order Bilinear Group Generator). A prime-order bilinear group generator is a PPT algorithm SetupGrp that on input of a security parameter 1^n outputs a tuple of the form

    gp := (G_1, G_2, G_T, e, p, g_1, g_2) ← SetupGrp(1^n)    (1)

where G_1, G_2, G_T are descriptions of cyclic groups of prime order p, log p = Θ(n), g_1 is a generator of G_1, g_2 is a generator of G_2, and e : G_1 × G_2 → G_T is a map (aka pairing) which satisfies the following properties:
• Efficiency: e is efficiently computable.
• Bilinearity: ∀a ∈ G_1, b ∈ G_2, x, y ∈ Z_p : e(a^x, b^y) = e(a, b)^{xy}.
• Non-Degeneracy: e(g_1, g_2) generates G_T.

The setting is called asymmetric if no efficiently computable homomorphisms between G_1 and G_2 are known. In the remainder of this paper, we consider the asymmetric kind. Our construction relies on the Co-CDH assumption for identification, and on the security of our building blocks (cp. Appendix A.2) in asymmetric bilinear groups. For our particular instantiation of the building blocks (see there), security holds under the SXDH assumption, which in turn implies the Co-CDH assumption. The SXDH assumption essentially asserts that the DDH assumption holds in both source groups G_1 and G_2 of the bilinear map and is formally defined as follows:

Definition A.2 (DDH and SXDH Assumption).
(1) We say that the DDH assumption holds with respect to SetupGrp over G_i if the advantage Adv^DDH_{SetupGrp,i,A}(1^n), defined by

    | Pr[ b' = b : gp := (G_1, G_2, G_T, e, p, g_1, g_2) ← SetupGrp(1^n);
                   x, y, z ← Z_p; h_0 := g_i^{xy}; h_1 := g_i^z;
                   b ← {0, 1} uniformly at random;
                   b' ← A(1^n, gp, g_i^x, g_i^y, h_b) ] − 1/2 |,

is a negligible function in n for all PPT algorithms A.
(2) We say that the SXDH assumption holds with respect to SetupGrp if the above holds for both i = 1 and i = 2.

The Co-CDH assumption is defined as follows:

Definition A.3 (Co-CDH Assumption). We say that the Co-CDH assumption holds with respect to SetupGrp if the advantage Adv^{Co-CDH}_{SetupGrp,A}(1^n), defined by

    Pr[ a = g_2^x : gp := (G_1, G_2, G_T, e, p, g_1, g_2) ← SetupGrp(1^n);
                    x ← Z_p; a ← A(1^n, gp, g_1^x) ],

is a negligible function in n for all PPT algorithms A.
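The bilinearity law from Definition A.1 can be illustrated with a deliberately insecure toy "pairing". In the sketch below, G_1, G_2 and G_T are all written additively as integers mod a toy prime p, so exponentiation a^x becomes x·a mod p and the pairing is just field multiplication; this only demonstrates the algebraic law, not a cryptographic pairing (real instantiations use, e.g., BN curves [8]):

```python
# Toy "pairing" illustrating the bilinearity law e(a^x, b^y) = e(a, b)^{xy}.
# NOT a cryptographic pairing: all three groups are Z_p written additively,
# so "a^x" is x*a mod p and e is plain multiplication mod p.

import random

p = 2**61 - 1                      # toy prime group order

def exp(a, x):                     # "a^x" in additive notation: x * a
    return (x * a) % p

def e(a, b):                       # toy bilinear map G1 x G2 -> GT
    return (a * b) % p

a, b = random.randrange(1, p), random.randrange(1, p)
x, y = random.randrange(p), random.randrange(p)

# Bilinearity: e(a^x, b^y) = e(a, b)^{xy}
assert e(exp(a, x), exp(b, y)) == exp(e(a, b), (x * y) % p)
```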

A.2 Cryptographic Building Blocks

Our semi-generic construction makes use of various cryptographic primitives including (F_gp-extractable) NIZK proofs, equivocal and extractable homomorphic commitments, digital signatures, public-key encryption, symmetric encryption and pseudo-random functions. These building blocks need to be efficiently and securely combinable with the chosen NIZK proof system, which is Groth-Sahai (GS) in our case. In the following, we give an introduction to the formal definition of these building blocks.

A.2.1 Group Setup. Let SetupGrp be a bilinear group generator (cp. Definition A.1) that outputs descriptions of asymmetric bilinear groups gp ← SetupGrp(1^n). The following building blocks all make use of SetupGrp as their common group setup algorithm.

A.2.2 NIZKs. Let R be a witness relation for some NP language L = {stmnt | ∃ wit s.t. (stmnt, wit) ∈ R}. A zero-knowledge proof system allows a prover P to convince a verifier V that some stmnt is contained in L without V learning anything beyond that fact. In a non-interactive zero-knowledge (NIZK) proof, only one message, the proof π, is sent from P to V for that purpose. More precisely, a (group-based) NIZK proof system is defined as follows:

Definition A.4 (Group-based NIZK Proof System). Let R be an efficiently verifiable relation containing triples (gp, x, w). We call gp the group setup, x the statement, and w the witness. Given some gp, let L_gp be the language containing all statements x such that (gp, x, w) ∈ R. Let POK := (SetupGrp, SetupPoK, Prove, Vfy) be a tuple of PPT algorithms such that
• SetupGrp takes as input a security parameter 1^n and outputs public parameters gp. We assume that gp is given as implicit input to all algorithms.
• SetupPoK takes as input gp and outputs a (public) common reference string CRS_pok.
• Prove takes as input the common reference string CRS_pok, a statement x, and a witness w with (gp, x, w) ∈ R and outputs a proof π.
• Vfy takes as input the common reference string CRS_pok, a statement x, and a proof π and outputs 1 or 0.
POK is called a non-interactive zero-knowledge proof system for R with F_gp-extractability if the following properties are satisfied:

(1) Perfect completeness: For all gp ← SetupGrp(1^n), CRS_pok ← SetupPoK(gp), (gp, x, w) ∈ R, and π ← Prove(CRS_pok, x, w) we have that Vfy(CRS_pok, x, π) = 1.

(2) Perfect soundness: For all (possibly unbounded) adversaries A we have that

    Pr[ Vfy(CRS_pok, x, π) = 0 : gp ← SetupGrp(1^n); CRS_pok ← SetupPoK(gp);
        (x, π) ← A(CRS_pok); x ∉ L_gp ]

is 1.

(3) Perfect F_gp-extractability: There exists a polynomial-time extractor (SetupEPoK, ExtractW) such that for all (possibly unbounded) adversaries A
(a) the advantage Adv^{pok-ext-setup}_{POK,A}(n), defined by

    | Pr[ 1 ← A(CRS_pok) : gp ← SetupGrp(1^n); CRS_pok ← SetupPoK(gp) ]
    − Pr[ 1 ← A(CRS'_pok) : gp ← SetupGrp(1^n); (CRS'_pok, td_epok) ← SetupEPoK(gp) ] |,

is zero;
(b) the advantage Adv^{pok-ext}_{POK,A}(n), defined by

    Pr[ ∃ w : F_gp(w) = W ∧ (gp, x, w) ∈ R :
        gp ← SetupGrp(1^n); (CRS'_pok, td_epok) ← SetupEPoK(gp);
        (x, π) ← A(CRS'_pok); 1 ← Vfy(CRS'_pok, x, π);
        W ← ExtractW(CRS'_pok, td_epok, x, π) ],

is 1.

(4) Composable zero-knowledge: There exist a polynomial-time simulator (SetupSPoK, SimProof) and a hint generator GenHint such that for all PPT adversaries A
(a) the advantage Adv^{pok-zk-setup}_{POK,A}(n), defined by

    | Pr[ 1 ← A(CRS_pok) : gp ← SetupGrp(1^n); CRS_pok ← SetupPoK(gp) ]
    − Pr[ 1 ← A(CRS'_pok) : gp ← SetupGrp(1^n); hint ← GenHint(gp);
          (CRS'_pok, td_spok) ← SetupSPoK(gp, hint) ] |,

is negligible in n;
(b) the advantage Adv^{pok-zk}_{POK,A}(n), defined by

    | Pr[ 1 ← A^{SimProof'(CRS'_pok, td_spok, ·, ·)}(1^n, CRS'_pok, td_spok) ]
    − Pr[ 1 ← A^{Prove(CRS'_pok, ·, ·)}(1^n, CRS'_pok, td_spok) ] |,

is negligible in n, where gp ← SetupGrp(1^n), (CRS'_pok, td_spok) ← SetupSPoK(gp), and SimProof'(CRS'_pok, td_spok, ·, ·) is an oracle which on input (x, z) ∈ R returns SimProof(CRS'_pok, td_spok, x). Both SimProof' and Prove return ⊥ on input (x, z) ∉ R.
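To make the (SetupPoK, Prove, Vfy) interface of Definition A.4 concrete, the sketch below implements a much simpler NIZK than the Groth-Sahai system used in the paper: a Fiat-Shamir-compiled Schnorr proof of knowledge of w with x = g^w mod q, in the random-oracle model (the hash replaces the CRS). The modulus and generator are toy assumptions; the sketch only shows the shape of a NIZK for one relation, not the paper's construction:

```python
# Fiat-Shamir Schnorr NIZK for R = {(x, w) : x = g^w mod q}, as a toy
# stand-in for the Prove/Vfy interface of Definition A.4. The random
# oracle (SHA-256) plays the role of the CRS; this is NOT Groth-Sahai.

import hashlib
import random

q = 2**127 - 1          # toy prime modulus (assumption for the sketch)
g = 5                   # assumed base element; exponents live mod q - 1

def challenge(*parts):
    """Random oracle: hash all transcript parts to a challenge in [0, q)."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x, w):
    r = random.randrange(q - 1)
    t = pow(g, r, q)                 # prover's commitment
    c = challenge(g, x, t)           # Fiat-Shamir challenge
    s = (r + c * w) % (q - 1)        # response (exponent arithmetic mod q-1)
    return (t, s)

def vfy(x, proof):
    t, s = proof
    c = challenge(g, x, t)
    return pow(g, s, q) == (t * pow(x, c, q)) % q

w = random.randrange(q - 1)          # witness
x = pow(g, w, q)                     # statement
assert vfy(x, prove(x, w))                       # honest proof verifies
assert not vfy((x * g) % q, prove(x, w))         # proof is bound to x
```

Note that this toy proof is only a proof of knowledge in the random-oracle model; Groth-Sahai instead achieves F_gp-extractability in the CRS model, which is what the definition above formalizes.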

We wish to point out some remarks.

Remark A.5.
(1) The considered language L_gp may depend on gp.
(2) F_gp-extractability actually implies soundness, independent of F_gp: if there was a false statement x which verifies, violating soundness, then obviously there is no witness w for x, which violates extractability.
(3) Extractability essentially means that ExtractW (given a trapdoor td_epok) is able to extract F_gp(wit) for an NP-witness wit for stmnt ∈ L_gp from any valid proof π. If F_gp is the identity function, then the actual witness is extracted and the system is called a proof of knowledge.

Our Instantiation. We choose the SXDH-based Groth-Sahai proof system [30, 33] as our NIZK, as it allows for very efficient proofs (under standard assumptions). On the other hand, GS comes with some drawbacks which make applying it sometimes quite tricky: it only works for algebraic languages containing certain types of equations, it is not always zero-knowledge, and F_gp is not always the identity function. For the sake of completeness, Appendix A.3 contains a description of the types of equations supported by GS. When choosing our remaining building blocks and forming equations, we ensured that they fit into this framework. Likewise, we ensured that the ZK property holds for the languages we consider.

For proving correctness of the computations taking place on the user’s side we need three different instantiations of the GS proof system, denoted by P1, P2 and P3, respectively. The corresponding functions F_gp^(1), F_gp^(2) and F_gp^(3) depend on the considered languages L_1, L_2 and L_3 (defined in Appendices D.6 to D.8), but they have the following in common: they behave as the identity function with respect to group elements and map elements from Z_p either to G_1 or G_2 (by exponentiation with basis g_1 or g_2), depending on whether these are used as exponents of a G_1 or G_2 element in the language. All proof systems share a common reference string. More precisely, we demand that there is a shared extraction setup algorithm which generates the CRS and also a single extraction trapdoor for P1, P2 and P3. Let us denote this algorithm by SetupEPoK and its output by (CRS_pok, td_epok) ← SetupEPoK(gp) in the following. Furthermore, let us denote the prove and verify algorithms of these proof systems by PX.Prove and PX.Vfy, for 1 ≤ X ≤ 3.

Range Proofs. For one particular task¹⁷ we need range proofs in order to show that some Z_p-element λ'_i is “smaller” than some fixed system parameter B, with both elements being regarded as elements from {0, ..., p − 1} under the normal ≤-relation on the integers. We realize these range proofs with Groth-Sahai by applying the signature-based technique of [11]. Here, the verifier initially chooses parameters q and t such that every possible λ'_i can be represented as λ'_i = Σ_{j=0}^{t} d_j q^j with 0 ≤ d_j < q. He also generates a signature on every possible value of a digit, i.e., on 0, ..., q − 1. The prover then shows, using a Groth-Sahai NIZK, that λ'_i can indeed be represented in this way and that he knows a signature by the verifier for each of its digits. Clearly, a structure-preserving signature scheme is needed for this purpose; we use the one from [1].

A.2.3 Commitments. A commitment scheme allows a user to commit to a message m and publish the result, called commitment c, in a way that m is hidden from others, but the user also cannot claim a different m afterwards when he opens c.
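Before the formal definition, a runnable toy example of the Com/Open interface and the additive homomorphism (CAdd/DAdd) defined below may help. The sketch uses a Pedersen commitment over Z_p* with toy parameters; the paper instead instantiates commitments with the schemes of Abe et al. [2] and Groth-Sahai [33] over bilinear groups, so this is only meant to show the mechanics:

```python
# Pedersen commitment sketch: c = g^m * h^d mod p. Hiding because d is
# random; binding as long as log_g(h) is unknown; additively homomorphic
# because commitments multiply. Toy parameters, NOT the paper's schemes.

import random

p = 2**127 - 1                               # toy prime; group Z_p*
g = 3
h = pow(g, random.randrange(1, p - 1), p)    # CRS: h = g^s with s discarded

def com(m):
    d = random.randrange(p - 1)              # decommitment value
    return (pow(g, m, p) * pow(h, d, p)) % p, d

def open_(c, m, d):
    return c == (pow(g, m, p) * pow(h, d, p)) % p

def cadd(c1, c2):                            # homomorphic add on commitments
    return (c1 * c2) % p

def dadd(d1, d2):                            # matching add on decommitments
    return (d1 + d2) % (p - 1)

m1, m2 = 42, 1337
c1, d1 = com(m1)
c2, d2 = com(m2)
assert open_(c1, m1, d1)                     # correctness
assert not open_(c1, m1 + 1, d1)             # cannot open to another message
assert open_(cadd(c1, c2), m1 + m2, dadd(d1, d2))   # homomorphism
```

Here F_gp is the identity on Z_p; the F_gp-binding notion below generalizes this by letting commitments be opened to an implicit representation F_gp(m), e.g. g^m, instead of m itself.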
A commitment scheme is called an F gp -binding commitment scheme for a bijective function F gp on the message space, if one commits to a message m but opens the commitment using F gp (m). We call the codomain of F gp the implicit message space. Definition A.6. A commitment scheme COM := (SetupGrp, Gen, Com, Open) consists of four algorithms: • SetupGrp takes as input a security parameter 1n and outputs public parameters gp. These parameters also define a message space M, an implicit message space M ′ and a function F gp : M → M ′ mapping a message to its implicit representation. We assume that gp is given as implicit input to all algorithms. • Gen is a PPT algorithm, which takes gp as input and outputs public parameters CRScom . • Com is a PPT algorithm, which takes as input parameters CRScom and a message m ∈ M and outputs a commitment c to m and some decommitment value d. • Open is a deterministic polynomial-time algorithm, which takes as input parameters CRScom , commitment c, an implicit message M ∈ M ′, and opening d. It returns either 0 or 1. COM is correct if for all gp ← SetupGrp(1n ), CRScom ← Gen(gp), m ∈ M, and (c, d ) ← Com(CRScom , m) it holds that 1 = Open(CRScom , F gp (m), c, d ). We say that COM is a (computationally) hiding, F gp -binding, equivocal, extractable commitment scheme if it has the following properties: Hiding

(1) Hiding: For all PPT adversaries A it holds that the advantage AdvCOM, A (1n ) defined by     Pr  b = b ′     17 More

precisely: The task Wallet Issuing

gp ← SetupGrp(1n ) CRScom ← Gen(gp) (m 0 , m 1 , state) ← A(1n , CRScom ) R b ← {0, 1} (c, d ) ← Com(CRScom , mb ) b ′ ← A(c, state)

    1  − 2    

M. Hoffmann, V. Fetzer, M. Nagel, A. Rupp, R. Schwerdt

• P4TC

33 Hiding

is negligible in n. The scheme is called statistically hiding if AdvCOM, A (1n ) is negligible even for an unbounded adversary A. F -Binding n (2) F gp -Binding: For all PPT adversaries A it holds that the advantage Adv Agp (1 ) defined by   Open(CRScom , M, c, d ) = 1 Pr  ∧  ′ ′ Open(CRS com , M , c, d ) = 1 

gp ← SetupGrp(1n ) CRScom ← Gen(gp) (c, M, d, M ′, d ′ ) ← A(1n , CRScom ) M , M′

    

is negligible in n.

(3) Equivocal: There exist PPT algorithms SimGen, SimCom and Equiv such that for all PPT adversaries A
(a) we have that the advantage Adv^SimGen_{COM,A}(n) defined by

    | Pr[ 1 ← A(CRS_com) | gp ← SetupGrp(1^n); CRS_com ← Gen(gp) ] − Pr[ 1 ← A(CRS′_com) | gp ← SetupGrp(1^n); (CRS′_com, td_eqcom) ← SimGen(gp) ] |

is negligible in n, and
(b) we have that the advantage Adv^Equiv_{COM,A}(n) defined by

    | Pr[ 1 ← A(CRS′_com, td_eqcom, m, c, d) | gp ← SetupGrp(1^n); (CRS′_com, td_eqcom) ← SimGen(gp); m ← M; (c, d) ← Com(CRS′_com, m) ]
      − Pr[ 1 ← A(CRS′_com, td_eqcom, m, c′, d′) | gp ← SetupGrp(1^n); (CRS′_com, td_eqcom) ← SimGen(gp); (c′, r) ← SimCom(gp); m ← M; d′ ← Equiv(CRS′_com, td_eqcom, m, r) ] |
is zero.

(4) Extractable: There exist PPT algorithms ExtGen and Extract such that for all PPT adversaries A
(a) we have that the advantage Adv^ExtGen_{COM,A}(n) defined by

    | Pr[ 1 ← A(CRS_com) | gp ← SetupGrp(1^n); CRS_com ← Gen(gp) ] − Pr[ 1 ← A(CRS′_com) | gp ← SetupGrp(1^n); (CRS′_com, td_extcom) ← ExtGen(gp) ] |

is negligible in n, and
(b) we have that the advantage Adv^Ext_{COM,A}(n) defined by

    Pr[ Extract(CRS′_com, td_extcom, c) ≠ F_gp(m) | gp ← SetupGrp(1^n); (CRS′_com, td_extcom) ← ExtGen(gp); c ← A(CRS′_com); ∃! m ∈ M, r : c ← Com(CRS′_com, m; r) ]

is zero.

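The interface above can be made concrete with a toy Pedersen-style commitment over a tiny subgroup of Z_23^*, where F_gp(m) = g^m plays the role of the implicit message. This is an illustrative sketch only, including the choice of Pedersen commitments itself; the paper's actual instantiations live in pairing-friendly groups.

```python
# Toy F_gp-binding Pedersen-style commitment in the order-11 subgroup of
# Z_23^*. Com commits to m; Open verifies against the implicit message
# M = F_gp(m) = g^m. Parameters are tiny and purely illustrative.
import secrets

P, Q = 23, 11          # p = 2q + 1; we work in the subgroup of order q
G = 2                  # generator of the order-11 subgroup
H = 13                 # second generator; its DLOG w.r.t. G must stay secret

def F_gp(m):
    """Implicit representation of a message m in Z_q."""
    return pow(G, m, P)

def commit(m):
    r = secrets.randbelow(Q)                   # decommitment value
    return (pow(G, m, P) * pow(H, r, P)) % P, r

def open_commit(c, M, d):
    """Open commitment c against the implicit message M = F_gp(m)."""
    return c == (M * pow(H, d, P)) % P

def cadd(c1, c2):                              # CAdd: combine commitments
    return (c1 * c2) % P

def dadd(d1, d2):                              # DAdd: combine decommitments
    return (d1 + d2) % Q
```

Opening cadd(c_1, c_2) against F_gp(m_1 + m_2) with dadd(d_1, d_2) succeeds, which is exactly the additively homomorphic property stated below.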

Furthermore, assume that the message space of COM is an additive group. Then COM is called additively homomorphic if there exist additional PPT algorithms c ← CAdd(CRS_com, c_1, c_2) and d ← DAdd(CRS_com, d_1, d_2) which, on input of two commitments and corresponding decommitment values (c_1, d_1) ← Com(CRS_com, m_1) and (c_2, d_2) ← Com(CRS_com, m_2), output a commitment c and a decommitment d, respectively, such that Open(CRS_com, c, F_gp(m_1 + m_2), d) = 1.

Finally, we call COM opening complete if for all M ∈ M′ and arbitrary values c, d with Open(CRS_com, c, M, d) = 1 it holds that there exist m ∈ M and randomness r such that (c, d) ← Com(CRS_com, m; r).

Our Instantiation. We will make use of two commitment schemes that are both based on the SXDH assumption. We first use the shrinking α-message commitment scheme from Abe et al. [2]. This commitment scheme has message space Z_p^α, commitment space G_2 and opening value space G_1. It is statistically hiding, additively homomorphic, equivocal, and F′_{gp,1}-binding for F′_{gp,1}(m_1, . . . , m_α) := (g_1^{m_1}, . . . , g_1^{m_α}). We use this commitment scheme as C_1 with CRS CRS^1_com in the following ways in our system:
• In the Wallet Issuing task we use C_1 for messages from Z_p^2 (α := 2).
• In the Wallet Issuing and Debt Accumulation tasks we use C_1 for messages from Z_p^4 (α := 4).
• In the Debt Accumulation task we use C_1 for messages from Z_p (α := 1).
We also use the (dual-mode) equivocal and extractable commitment scheme from Groth and Sahai [33]. This commitment scheme has message space G_1, commitment space G_1^2 and opening value space Z_p^2. It is equivocal, extractable, hiding and F′_{gp,2}-binding for F′_{gp,2}(m) := m. In our system, we use this commitment scheme as C_2 with CRS CRS^2_com in the Wallet Issuing and Debt Accumulation tasks.

A.2.4 Digital Signatures.
A signature scheme allows a signer to issue a signature σ on a message m using its secret signing key sk such that anybody can publicly verify that σ is a valid signature for m using the public verification key pk of the signer, but nobody can feasibly forge a signature without knowing sk.

Definition A.7. A digital signature scheme S := (SetupGrp, Gen, Sgn, Vfy) consists of four PPT algorithms:
• SetupGrp takes as input a security parameter 1^n and outputs public parameters gp. We assume that gp is given as implicit input to all algorithms.
• Gen takes gp as input and outputs a key pair (pk, sk). The public key and gp define a message space M.
• Sgn takes as input the secret key sk and a message m ∈ M, and outputs a signature σ.
• Vfy takes as input the public key pk, a message m ∈ M, and a purported signature σ, and outputs a bit.
We call S correct if for all n ∈ N, gp ← SetupGrp(1^n), m ∈ M, (pk, sk) ← Gen(gp), σ ← Sgn(sk, m) we have 1 ← Vfy(pk, σ, m).

We say that S is EUF-CMA secure if for all PPT adversaries A it holds that the advantage Adv^{EUF-CMA}_{S,A}(1^n) defined by

    Pr[ Vfy(pk, σ*, m*) = 1 | gp ← SetupGrp(1^n); (pk, sk) ← Gen(gp); (m*, σ*) ← A^{Sgn(sk,·)}(1^n, pk); m* ∉ {m_1, . . . , m_q} ]

is negligible in n, where Sgn(sk, ·) is an oracle that, on input m, returns Sgn(sk, m), and {m_1, . . . , m_q} denotes the set of messages queried by A to its oracle.

Our Instantiation. As we need to prove statements about signatures, the signature scheme has to be algebraic. For our construction, we use the structure-preserving signature scheme of Abe et al. [1], which is currently the most efficient structure-preserving signature scheme. Its EUF-CMA security proof is in the generic group model, a restriction we consider reasonable with respect to our goal of constructing a highly efficient P4TC scheme. An alternative secure in the plain model would be [46]. For the scheme in [1], one needs to fix two additional parameters µ, ν ∈ N_0 defining the actual message space G_1^ν × G_2^µ. Then sk ∈ Z_p^{µ+ν+2}, pk ∈ G_1^{µ+2} × G_2^ν, and σ ∈ G_2^2 × G_1. We use the signature scheme S from Abe et al. [1] in the following ways in our system:
• In the tasks Wallet Issuing and Debt Accumulation we use S for messages from G_2 × G_1 (ν = 1 and µ = 1).
• In the Wallet Issuing task we use S for messages from G_1^{2ℓ+4} (ν = 2ℓ + 4 and µ = 0).
• Also in the Wallet Issuing task we use S for messages from G_2^{1+j} (ν = 0 and µ = 1 + j).
• In the RSU Certification task we use S for messages from G_1^{3+y} (ν = 3 + y and µ = 0).
• In the TSP Registration task we use S for messages from G_1^{3+y} (ν = 3 + y and µ = 0).
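The EUF-CMA experiment itself can be expressed as a small generic harness over the (Gen, Sgn, Vfy) interface of Definition A.7. The toy scheme below is deliberately insecure (its signatures do not depend on sk at all), so the sketched adversary wins the game; it illustrates only the bookkeeping of the experiment, not the structure-preserving scheme of Abe et al. used in the paper. All names are illustrative.

```python
# Generic EUF-CMA experiment: returns 1 iff the adversary outputs a valid
# forgery on a message it never queried to the signing oracle.
import hashlib

def euf_cma_experiment(gen, sgn, vfy, adversary):
    pk, sk = gen()
    queried = []

    def sign_oracle(m):                    # Sgn(sk, .) oracle; records queries
        queried.append(m)
        return sgn(sk, m)

    m_star, sigma_star = adversary(pk, sign_oracle)
    # A wins iff the forgery verifies and m* is fresh.
    return int(vfy(pk, sigma_star, m_star) == 1 and m_star not in queried)

# Toy (INSECURE) scheme: sigma is just a hash of the message, so anyone
# can "sign" and the experiment below outputs 1.
def toy_gen():
    return "pk", "sk"

def toy_sgn(sk, m):
    return hashlib.sha256(m.encode()).hexdigest()

def toy_vfy(pk, sigma, m):
    return int(sigma == hashlib.sha256(m.encode()).hexdigest())

def toy_adversary(pk, sign_oracle):
    sign_oracle("queried message")         # one chosen-message query
    m_star = "fresh message"               # never queried
    return m_star, hashlib.sha256(m_star.encode()).hexdigest()
```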

A.2.5 Asymmetric Encryption. We use the standard definitions for asymmetric encryption schemes and corresponding security notions, except that we enhance them with a SetupGrp algorithm to fit our algebraic setting.

Definition A.8 (Asymmetric Encryption). An asymmetric encryption scheme E := (SetupGrp, Gen, Enc, Dec) consists of four PPT algorithms:
• SetupGrp takes as input a security parameter 1^n and outputs public parameters gp. We assume that gp is given as implicit input to all algorithms.
• Gen(gp) outputs a pair (pk, sk) of keys, where pk is the (public) encryption key and sk is the (secret) decryption key.
• Enc(pk, m) takes a key pk and a plaintext message m ∈ M and outputs a ciphertext c.
• Dec(sk, c) takes a key sk and a ciphertext c and outputs a plaintext message m or ⊥. We assume that Dec is deterministic.
Correctness is defined in the usual sense. We will make use of an IND-CPA-secure as well as an IND-CCA2-secure encryption scheme.

Definition A.9 (IND-CPA Security for Asymmetric Encryption). An asymmetric encryption scheme E is IND-CPA-secure if for all PPT adversaries A it holds that the advantage Adv^{IND-CPA-asym}_{E,A}(1^n) defined by

    | Pr[ b = b′ | gp ← SetupGrp(1^n); (pk, sk) ← Gen(gp); (state, m_0, m_1) ← A(1^n, pk); b ←R {0, 1}; c* ← Enc(pk, m_b); b′ ← A(state, c*) ] − 1/2 |

is negligible in n, where |m_0| = |m_1|.

Definition A.10 (IND-CCA2 Security for Asymmetric Encryption). An asymmetric encryption scheme E is IND-CCA2-secure if for all PPT adversaries A it holds that the advantage Adv^{IND-CCA-asym}_{E,A}(1^n) defined by

    | Pr[ b = b′ | gp ← SetupGrp(1^n); (pk, sk) ← Gen(gp); (state, m_0, m_1) ← A^{Dec(sk,·)}(1^n, pk); b ←R {0, 1}; c* ← Enc(pk, m_b); b′ ← A^{Dec′(sk,·)}(state, c*) ] − 1/2 |


is negligible in n, where |m_0| = |m_1|, Dec(sk, ·) is an oracle that gets a ciphertext c from the adversary and returns Dec(sk, c), and Dec′(sk, ·) is the same, except that it returns ⊥ on input c*.

Our Instantiation. The encryption scheme in the construction of our base protocol requires IND-CPA security and needs to be algebraic, since we want to use it with GS proofs. This building block can be instantiated with the ElGamal encryption scheme [29], which is IND-CPA-secure under the DDH assumption.^18 We use this encryption scheme E for messages in G_1 in the Wallet Issuing task. In order to establish secure channels for exchanging protocol messages we use the Twin-DH-based encryption scheme from Cash et al. [19].

A.2.6 Symmetric Encryption. We use standard definitions for symmetric encryption schemes and corresponding security notions.

Definition A.11 (Symmetric Encryption). A symmetric encryption scheme E := (Gen, Enc, Dec) consists of three PPT algorithms:
• Gen(1^n) outputs a (random) key k.
• Enc(k, m) takes a key k and a plaintext message m ∈ M and outputs a ciphertext c.
• Dec(k, c) takes a key k and a ciphertext c and outputs a plaintext message m or ⊥. We assume that Dec is deterministic.
As for asymmetric encryption, we require correctness in the usual sense. We now define a multi-message version of IND-CCA2 security. It is a well-known fact that IND-CCA2 security in the multi-message setting is equivalent to standard IND-CCA2 security. (This can be shown via a standard hybrid argument.)

Definition A.12 (IND-CCA2 Security for Symmetric Encryption). A symmetric encryption scheme E is IND-CCA2-secure if for all PPT adversaries A it holds that the advantage Adv^{IND-CCA-sym}_{E,A}(1^n) defined by

    | Pr[ b = b′ | k ← Gen(1^n); (state, j, m_0, m_1) ← A^{Enc(k,·),Dec(k,·)}(1^n); b ←R {0, 1}; c* ← (Enc(k, m_{b,1}), . . . , Enc(k, m_{b,j})); b′ ← A^{Enc(k,·),Dec′(k,·)}(state, c*) ] − 1/2 |

is negligible in n, where m_0, m_1 are two vectors of j ∈ N bitstrings each such that for all 1 ≤ i ≤ j: |m_{0,i}| = |m_{1,i}|, Enc(k, ·) and Dec(k, ·) denote oracles that return Enc(k, m) and Dec(k, c) for an m or c chosen by the adversary, and Dec′(k, ·) is the same as Dec(k, ·), except that it returns ⊥ on input of any c*_i that is contained in c*.

Our Instantiation. We use an IND-CCA2-secure symmetric encryption scheme in our protocol to encrypt the exchanged protocol messages. To this end, we combine an IND-CCA2-secure asymmetric encryption scheme (see the section above) with an IND-CCA2-secure symmetric encryption scheme in the usual KEM/DEM approach. The symmetric encryption can, for example, be instantiated with AES in CBC mode together with an HMAC based on the SHA-256 hash function. The result is IND-CCA2-secure if AES is a pseudo-random permutation and the SHA-256 compression function is a PRF when the data input is seen as the key [9].

A.2.7 Pseudo-Random Functions. A pseudo-random function (PRF) F : K × X → Y is a keyed function whose output cannot be distinguished from randomness, i.e., any PPT adversary given oracle access to either F(k, ·) or a randomly chosen function R : X → Y cannot distinguish between them with non-negligible probability. More formally, a PRF (more precisely, a family of PRFs in the security parameter 1^n) is defined as follows.

18 The DDH assumption is implied by the SXDH assumption.
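The KEM/DEM composition can be sketched with stdlib-only toy primitives: an ElGamal-style KEM over a tiny subgroup stands in for the Twin-DH KEM, and a SHA-256-based stream cipher with encrypt-then-MAC stands in for AES-CBC plus HMAC. Everything here, parameters and helper names alike, is illustrative rather than the paper's instantiation.

```python
# Toy KEM/DEM hybrid: ElGamal-style KEM over the order-11 subgroup of
# Z_23^*, plus an encrypt-then-MAC DEM built from SHA-256 (keystream)
# and HMAC. Purely illustrative; real deployments use AES and a proper
# large-order group.
import hashlib, hmac, secrets

P, Q, G = 23, 11, 2                          # modulus, subgroup order, generator

def gen():
    sk = secrets.randbelow(Q - 1) + 1        # sk in [1, Q-1]
    return pow(G, sk, P), sk                 # (pk, sk)

def _dem_keys(shared: int):
    k = hashlib.sha256(str(shared).encode()).digest()
    return k[:16], k[16:]                    # (stream key, MAC key)

def _xor_stream(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i, byte in enumerate(data):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()[0]
        out.append(byte ^ pad)
    return bytes(out)

def enc(pk: int, m: bytes):
    r = secrets.randbelow(Q - 1) + 1
    c1 = pow(G, r, P)                        # KEM part: encapsulated key
    ke, km = _dem_keys(pow(pk, r, P))        # derive DEM keys from g^(sk*r)
    c2 = _xor_stream(ke, m)                  # DEM part: encrypt ...
    tag = hmac.new(km, c2, hashlib.sha256).digest()   # ... then MAC
    return c1, c2, tag

def dec(sk: int, c):
    c1, c2, tag = c
    ke, km = _dem_keys(pow(c1, sk, P))
    if not hmac.compare_digest(tag, hmac.new(km, c2, hashlib.sha256).digest()):
        return None                          # reject (⊥) on MAC failure
    return _xor_stream(ke, c2)
```

Tampering with the DEM ciphertext makes the HMAC check fail, so decryption returns ⊥; this rejection behavior is what the IND-CCA2 definitions above rely on.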


Definition A.13. A (group-based) pseudo-random function (PRF) PRF := (SetupGrp, Gen, Eval) consists of three PPT algorithms:
• SetupGrp takes as input a security parameter 1^n and outputs public parameters gp. We assume that gp is given as implicit input to all algorithms. The input domain X_gp, the key space K_gp, and the codomain Y_gp may all depend on gp.
• Gen takes gp as input and outputs a key k ∈ K_gp. (Typically, we have k ←R K_gp.)
• Eval is a deterministic algorithm which takes as input a key k ∈ K_gp and a value x ∈ X_gp, and outputs some y ∈ Y_gp. Usually, we simply write y = F(k, x) for short.

We say that PRF is secure if for all PPT adversaries A it holds that the advantage Adv^prf_A(1^n) defined by

    | Pr[ 1 ← A^{F(k,·)}(gp) | gp ← SetupGrp(1^n); k ← Gen(gp) ] − Pr[ 1 ← A^{R(·)}(gp) | gp ← SetupGrp(1^n); R ←R {R : X_gp → Y_gp} ] |

is negligible in n.

Our Instantiation. As we want to efficiently prove statements about PRF outputs, we use an efficient algebraic construction, namely the Dodis-Yampolskiy PRF [28]. This function is defined by F : Z_p^2 → G_1, (k, x) ↦ g_1^{1/(x+k)}, where k ←R Z_p is the random PRF key. It is secure for inputs {0, . . . , n_PRF} ⊂ Z_p under the n_PRF-DDHI assumption. This is a family of increasingly stronger assumptions which is assumed to hold for asymmetric bilinear groups.
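The Dodis-Yampolskiy construction can be sketched in a toy group; the essential point is that the inverse 1/(x + k) is computed in the exponent group Z_q. The tiny parameters below are purely illustrative (the paper instantiates the PRF in a pairing-friendly group G_1 of large prime order p).

```python
# Toy Dodis-Yampolskiy PRF F(k, x) = g^(1/(x+k)) over the order-11
# subgroup of Z_23^*. Illustrative parameters only.
P, Q, G = 23, 11, 2                 # modulus, subgroup order, generator

def dy_prf(k: int, x: int) -> int:
    # 1/(x+k) is taken modulo the group order Q; inputs must keep
    # x + k nonzero mod Q (pow(..., -1, Q) raises ValueError otherwise).
    e = pow((x + k) % Q, -1, Q)
    return pow(G, e, P)
```

Evaluation is deterministic, and on a restricted input range the outputs are pairwise distinct; this mirrors how fraud detection IDs are derived from the wallet ID λ (used as PRF seed) and the counter x later in the paper.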

A.3 Types of Equations Supported by GS-NIZKs

Let SetupGrp be a bilinear group generator (cf. Definition A.1) for which the SXDH assumption holds and gp := (G_1, G_2, G_T, e, p, g_1, g_2, g_T) ← SetupGrp(1^n) denote the output of SetupGrp. Furthermore, let X_1, . . . , X_{m_1} ∈ G_1, x_1, . . . , x_{m_2} ∈ Z_p, Y_1, . . . , Y_{m_3} ∈ G_2, and y_1, . . . , y_{m_4} ∈ Z_p denote variables in the following types of equations:

• Pairing-Product Equation (PPE):

    ∏_{i=1}^{m_3} e(A_i, Y_i) · ∏_{i=1}^{m_1} e(X_i, B_i) · ∏_{i=1}^{m_1} ∏_{j=1}^{m_3} e(X_i, Y_j)^{γ_{i,j}} = t_T

  for constants A_i ∈ G_1, B_i ∈ G_2, t_T ∈ G_T, γ_{i,j} ∈ Z_p.

• Multi-Scalar Equation (MSE) over G_1:

    ∏_{i=1}^{m_4} A_i^{y_i} · ∏_{i=1}^{m_1} X_i^{b_i} · ∏_{i=1}^{m_1} ∏_{j=1}^{m_4} X_i^{γ_{i,j} y_j} = t_1

  for constants A_i, t_1 ∈ G_1, b_i, γ_{i,j} ∈ Z_p.

• Multi-Scalar Equation (MSE) over G_2:

    ∏_{i=1}^{m_2} B_i^{x_i} · ∏_{i=1}^{m_3} Y_i^{a_i} · ∏_{i=1}^{m_2} ∏_{j=1}^{m_3} Y_j^{γ_{i,j} x_i} = t_2

  for constants B_i, t_2 ∈ G_2, a_i, γ_{i,j} ∈ Z_p.

• Quadratic Equation (QE) over Z_p:

    Σ_{i=1}^{m_4} a_i y_i + Σ_{i=1}^{m_2} x_i b_i + Σ_{i=1}^{m_2} Σ_{j=1}^{m_4} γ_{i,j} x_i y_j = t


for constants a_i, b_i, γ_{i,j}, t ∈ Z_p.

Let L_gp be a language containing statements described by the conjunction of n_1 pairing-product equations over gp, n_2 multi-scalar equations over G_1, n_3 multi-scalar equations over G_2, and n_4 quadratic equations over Z_p, where n_i ∈ N_0 are constants, as well as by witnesses

    w = (X_1, . . . , X_{m_1}, x_1, . . . , x_{m_2}, Y_1, . . . , Y_{m_3}, y_1, . . . , y_{m_4}),

where m_i ∈ N_0. Then the Groth-Sahai proof system for L_gp, as introduced by [33], is perfectly correct, perfectly sound, and satisfies F_gp-extractability [30, 33] for

    F_gp : G_1^{m_1} × Z_p^{m_2} × G_2^{m_3} × Z_p^{m_4} → G_1^{m_1} × G_1^{m_2} × G_2^{m_3} × G_2^{m_4}

with

    F_gp(w) := ((X_i)_{i∈[m_1]}, (g_1^{x_i})_{i∈[m_2]}, (Y_i)_{i∈[m_3]}, (g_2^{y_i})_{i∈[m_4]}).^19

It is also known to be composable zero-knowledge [30, 33] as long as for all PPEs in L_gp it holds that either
• t_T = 1, or
• the right-hand side of the PPE can be written as ∏_{i=1}^{k} e(A_i, B_i) for constants A_i ∈ G_1, B_i ∈ G_2, such that for each i DLOG(A_i) or DLOG(B_i) is known.
In the latter case, hint from Definition A.4 would contain these discrete logarithms, which would simply be put (as additional elements) into the simulation trapdoor td_spok. Also note that if these discrete logarithms are not known, there is a workaround which consists of adding new helper variables to L_gp [33].
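As a small illustration (not an example from the paper) of how a statement is phrased in this language: knowledge of an opening (m, r) of a Pedersen commitment c = g^m h^r in G_1 is a single multi-scalar equation over G_1 with scalar variables y_1 = m, y_2 = r and constants A_1 = g, A_2 = h, t_1 = c, all b_i and γ_{i,j} being zero:

```latex
% MSE over G_1 encoding knowledge of a Pedersen opening (illustrative):
%   variables  y_1 = m,\; y_2 = r            (exponent witnesses)
%   constants  A_1 = g,\; A_2 = h,\; t_1 = c,\; b_i = \gamma_{i,j} = 0
\prod_{i=1}^{m_4} A_i^{y_i} \;=\; A_1^{y_1} A_2^{y_2} \;=\; g^{m} h^{r} \;=\; c \;=\; t_1
\qquad (m_1 = 0,\; m_4 = 2)
```

F_gp-extractability then yields the implicit witnesses g_1^{m} and g_1^{r} rather than the exponents themselves.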

B ADVERSARIAL MODEL

For our security analysis to hold we consider a restricted class of adversarial environments Z and will argue why these restrictions are reasonable.

B.1 Restricted Corruption

Firstly, we only consider security under static corruption. This is a technical necessity to enable the use of PRFs to generate fraud detection IDs. With adaptive corruption the simulator would be required to come up with a consistent PRF that could explain the fraud detection IDs, which up to the point of corruption were drawn uniformly at random. We deem static corruption to provide a sufficient level of security, as a statically corrupted party may always decide to interact honestly first and then deviate from the protocol later. Adaptive corruption only comes into play with deniability, which is not part of our desired properties.

Secondly, we only consider adversaries Z that corrupt one of the following sets:^20
(1) A subset of users.
(2) All users and a subset of RSUs, TSP and SA.
(3) A subset of RSUs, TSP and SA.
(4) All of RSUs, TSP and SA as well as a subset of users.
We subsume the cases (1) and (2) under the term Operator Security and the cases (3) and (4) under the term User Security. For both Operator Security and User Security the two subordinate cases are collectively treated by the same proof. It is best to picture the cases inversely: to prove Operator Security we consider a scenario in which at least some parties at the operator's side remain honest; to prove User Security we consider a scenario in which at least some users remain honest. Please note that both scenarios also cover the case in which all parties are corrupted; however, this extreme case is tedious as it is trivially simulatable.

19 F_gp acts as the identity function on group elements a ∈ G_1 and b ∈ G_2 but returns g_1^s ∈ G_1 or g_2^s ∈ G_2 for exponents s ∈ Z_p.
20 Note that "subset" also includes the empty or full set.


One might believe that the combination of all cases above should already be sufficient to guarantee privacy, security and correctness under arbitrary corruption. For example, case (4) guarantees that privacy and correctness of accounting are still provided for honest users, even if all of the operator's side and some fellow users are corrupted. This ought to be the worst case from an honest user's perspective. Further note that the proof of indistinguishability quantifies over all environments Z. This includes environments that (still in case (4)) first corrupt all of the operator's side but then let some (formally corrupted) parties follow the protocol honestly.

However, consider a scenario in which a party acts as a man-in-the-middle (MitM), playing the roles of a user and an RSU at the same time, while interacting with an honest user in the left interaction and an honest RSU in the right interaction. The MitM simply relays messages back and forth unaltered. If the MitM approaches the RSU and the RSU requests the MitM to participate in Debt Accumulation, the MitM relays all messages of the RSU to the honest user (possibly driving the same road behind the MitM). The honest user replies and the MitM forwards the messages to the honest RSU. The MitM passes by the RSU unnoticed and untroubled, while the honest user pays for the MitM. This scenario is not captured by any of the above cases and is the missing gap towards arbitrary corruption. As the MitM is corrupted and plays both the roles of a user and an RSU, this falls into case (2) or (4). But either all users are corrupted (case (2)), which contradicts the existence of an honest user in the left interaction, or all of RSUs, TSP and SA are corrupted (case (4)), which does not allow for an honest RSU in the right interaction. This attack is known as a relay attack. Please note that the MitM does not need to break any cryptographic assumption for this kind of attack, as it merely poses as a prolonged communication channel.
There are some possible countermeasures that can be applied in the real world. For example, using distance bounding, the honest user could refuse to participate in the protocol if the RSU is known to be too far away. However, these are physical countermeasures and thus are captured neither by the UC notion nor by any other cryptographic notion. Actually, it is a strength of the UC model that this gap is made explicit. For example, a set of game-based security notions that cover a list of individual properties would most likely not unveil this issue.

B.2 Channel Model

Most of the time we assume channels to be secure and authenticated. The only exception is Debt Accumulation, which uses a secure but only half-authenticated channel; this means that only the RSU authenticates itself, while the user does not. These channels exempt us from the burden of defining a simulator for the case where only honest parties interact with each other, and rule out some trivial replay attacks. Of course, the authentication of the channels must be tied to the parties' credentials used in the toll collection system. In other words, the same key registration service that registers the public keys for the toll collection system must also be used to register the public keys that authenticate the communication.

B.3 Handling of Aborts

Lastly, we assume our functionality also uses the implicit writing conventions for ideal functionalities [13]. In particular, our simulator can delay outputs and abort at any point. Beyond that, the simulator has the power to override the output to honest parties with an abort reason (e.g., "blacklisting") if it decides to abort.

Another important aspect with respect to aborts is that privacy may be partially lost for an honest user if a task aborts prematurely. In this case the user's identity can be unveiled if he chooses to take part in another transaction with the same wallet. The reason for this is the double-spending detection mechanism: if the (honest) user has not correctly updated his previous state, he must start the next interaction from the same state as before. Thus an (honest) user can be tricked into committing a double-spending without any fault of his own. We explicitly model this artifact into the simulator.


Again, as in Appendix B.1, this kind of "attack" does not require breaking any cryptographic assumptions. Hence, neither the UC notion nor any other cryptographic security notion takes aborts into account, all the more so as aborts can occur at any time due to technical problems. To mitigate the effect of aborts we propose that each user has one or more backup wallets and switches over to another wallet if the principal wallet becomes unusable and the user is not willing to become linkable for a single transaction. Of course, at the end of a billing period a user must always clear all of his wallets. If aborts occur more frequently than one would reasonably expect due to technical problems, and the RSU/TSP is suspected to purposely abort in order to lift privacy, the user (or some NGO) is expected to file a claim. However, these are non-technical countermeasures.

C FULL SYSTEM DEFINITION

In this appendix we give a detailed description and explanation of our ideal privacy-preserving electronic toll collection functionality F_P4TC. As explained before, we define this as a monolithic, reactive functionality with polynomially many parties. This is mainly due to a shared state that the system requires. We will therefore first explain how this state is recorded by F_P4TC before we go on to describe its behavior in a modular way by explaining each task^21 it provides.

The main feature of F_P4TC is that it keeps track of all conducted transactions in a global transaction database TRDB (see Fig. 4). Note that in this case by "transaction" we mean every instance of the tasks Wallet Issuing, Debt Accumulation or Debt Clearance, not just Debt Accumulation. Each transaction entry trdb ∈ TRDB is of the form

    trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b).

It contains the identities pid_U and pid_R of the involved user and RSU (or TSP in the case of Wallet Issuing and Debt Clearance), respectively, the ID λ of the wallet that was used, as well as the price p and total balance b of the wallet state after this transaction. Furthermore, each transaction entry is identified by a unique serial number s and links via s_prev to the previous transaction trdb_prev (which corresponds to the wallet state before trdb). Lastly, a fraud detection ID φ and a counter x are part of the transaction entry. The counter starts at zero for any newly registered wallet and x = x_prev + 1 always holds. Hence it is unique across all wallet states belonging to the same wallet λ if and only if no double-spending has been committed with this wallet. The fraud detection ID is not unique per transaction; it is constant for each pair (λ, x) of wallet ID and counter, but distinct for different pairs. Therefore fraud detection IDs are stored in a partially defined but one-to-one mapping f_Φ : (L × N_0) → Φ within F_P4TC.
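The bookkeeping just described can be pictured as a graph: serial numbers are vertices and predecessor links (s_prev, s) are edges, so an honest wallet forms a chain, while double-spending creates a fork, i.e., two entries sharing the same pair (λ, x). A minimal sketch of this detection logic follows; data layout and names are illustrative, not the paper's concrete structures.

```python
# Minimal TRDB sketch: a wallet double-spends exactly when two transaction
# entries share the same (wallet ID, counter) pair -- equivalently, when a
# serial-number vertex has two successors in the transaction graph.
from collections import defaultdict

class TRDB:
    def __init__(self):
        self.entries = []                 # simplified trdb tuples
        self.by_pair = defaultdict(list)  # (wallet, x) -> entries

    def add(self, s_prev, s, wallet, x, pid_u, pid_r, price, balance):
        entry = (s_prev, s, wallet, x, pid_u, pid_r, price, balance)
        self.entries.append(entry)
        self.by_pair[(wallet, x)].append(entry)

    def double_spenders(self):
        """Wallet IDs with two wallet states for the same counter value."""
        return {w for (w, x), es in self.by_pair.items() if len(es) > 1}
```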
Full transaction entries trdb are only created by instances of Debt Accumulation. Both Wallet Issuing and Debt Clearance create stubs of the form

    (⊥, s, φ, 0, λ, pid_U, pid_T, 0, 0)   and   (s_prev, ⊥, φ, x, λ, pid_U, pid_T, −b_bill, 0),

respectively. Every other task does not alter TRDB but only queries it. Although this database and the mapping to fraud detection IDs contain most of the information our toll collection scheme needs, F_P4TC stores three more partially defined mappings: f_AU : L → A_U and f_AR : PID_R → A_R of user and RSU attribute vectors, as well as f_Π of proofs of guilt that have been issued or queried in the context of double-spending detection.

The ideal functionality F_P4TC provides twelve different tasks in total, which we divide up into three categories: "System Setup Tasks" (comprising all Registrations and RSU Certification), "Basic Tasks" (Wallet Issuing, Debt Accumulation and Debt Clearance), and "Feature Tasks" (Prove Participation, Double-Spending Detection, Guilt Verification and User Blacklisting).

21 Note that we are intentionally avoiding the word "phase", which is commonly used in other composite functionalities, as it suggests a predefined order/number of executions.


Functionality F_P4TC

I. State
• Set TRDB = {trdb} of transactions trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b) ∈ S × S × Φ × N_0 × L × PID_U × PID_R × Z_p × Z_p.
• A (partial) mapping f_Φ giving the fraud detection ID φ corresponding to given wallet ID λ and counter x: f_Φ : L × N_0 → Φ, (λ, x) ↦ φ
• A (partial) mapping f_AU assigning user attributes to a given wallet ID λ: f_AU : L → A_U, λ ↦ a_U
• A (partial) mapping f_AR assigning RSU attributes to a given RSU PID pid_R: f_AR : PID_R → A_R, pid_R ↦ a_R
• A (partial) mapping f_Π assigning a user PID pid_U and a proof of guilt π to a validity bit: f_Π : PID_U × Π → {OK, NOK}

II. Behavior
• DR Registration (Fig. 5)
• TSP Registration (Fig. 6)
• RSU Registration (Fig. 7)
• User Registration (Fig. 8)
• RSU Certification (Fig. 9)
• Wallet Issuing (Fig. 10)
• Debt Accumulation (Fig. 11)
• Debt Clearance (Fig. 12)
• Prove Participation (Fig. 13)
• Double-Spending Detection (Fig. 14)
• Guilt Verification (Fig. 15)
• User Blacklisting (Fig. 16)

Fig. 4. The functionality F_P4TC

Table 2. Notation that only occurs in the ideal functionality

Identifier    Description
PID_corrupt   set of corrupted party identifiers
f_Φ           (partial) mapping giving the fraud detection ID φ corresponding to given wallet ID λ and counter x
f_AU          (partial) mapping assigning user attributes to a given wallet ID λ
f_AR          (partial) mapping assigning RSU attributes to a given RSU PID pid_R

For better clarity of the following task descriptions, an overview of the variables used can be found in Tables 2 and 3.

C.1 System Setup Tasks

To set up the system two things are required: all parties (the DR, TSP, RSUs and users) have to register public keys with the bulletin board G_bb to be able to participate in the toll collection system. As all of these registration


Table 3. Notation that occurs in the ideal functionality and in the real protocol

Identifier  Ideal Type             Real Type                               Description
pid_DR      PID_DR                 {0,1}*                                  party identifier of the DR
pid_T       PID_T                  {0,1}*                                  party identifier of the TSP
pid_R       PID_R                  {0,1}*                                  party identifier of an RSU
pid_U       PID_U                  {0,1}*                                  party identifier of a user
pk_DR       PK_DR                  G_1                                     public DR key
pk_T        PK_T                   G_1^3 × (G_1^2 × G_2^{3+y}) × G_1^3     public TSP key
pk_R        PK_R                   G_1^3                                   public RSU key
pk_U        PK_U                   G_1 × (G_1^2 × G_2^2)                   public user key
a_U         A_U                    G_2^j                                   user attributes
a_R         A_R                    G_1^y                                   RSU attributes
a_T         A_R                    G_1^y                                   TSP attributes
b           Z_p                    Z_p                                     balance
p           Z                      Z                                       price to pay at an RSU
φ           Φ                      G_1                                     fraud detection ID
s           S                      G_1                                     serial number
λ           L                      Z_p                                     wallet ID; is used as PRF seed
x           N_0                    {0, . . . , n_PRF}                      (PRF) counter
bl_R        list of Φ elements     list of G_1 elements                    RSU blacklist
x_blR       N                      N                                       RSU blacklist parameter
bl_T        list of PK_U elements  list of G_1 × (G_1^2 × G_2^2) elements  TSP blacklist

tasks are similar, we will not describe them separately. In the special case of RSUs, a certification conducted with the TSP also needs to take place.

C.1.1 Registrations. The tasks of DR, RSU and User Registration (cp. Figs. 5 to 8) are straightforward and analogous. They do not take any input apart from "register", but in the case of the user we assume the physical identity of the party has been verified out-of-band before this task is conducted. In each case a check is performed first whether the task has been run before for this party. If this does not lead to an abort, the adversary is asked to provide a public key pk for the respective party, which is then registered with the bulletin board G_bb and output to the newly registered party.

The registration of the TSP is slightly different (cp. Fig. 6). In addition to "register" it takes an attribute vector a_T as input, which (after a check if this task has been run before) is leaked to the adversary together with pid_T when the public key pk_T is obtained. In addition, all data structures of F_P4TC are initialized as empty sets and empty (partial) mappings, respectively. Again, the public key pk_T is output to the TSP.

C.1.2 RSU Certification. RSU certification (cp. Fig. 9) is a two-party task between the TSP and an RSU in which


Functionality F_P4TC (cont.) – Task DR Registration

DR input: (register)
(1) If this task has been run before, output ⊥ and abort.
(2) Send (registering_dr, pid_DR) to the adversary and obtain the key (pk_DR).^a
(3) Call G_bb with input (register, pk_DR).
DR output: (pk_DR)

^a Giving the adversary the power to control the key generation serves two purposes: i) It gives a stronger security guarantee, i.e., security for an honest TSP is retained even if its keys are maliciously generated (due to a bad random number generator). ii) It gives the simulator the lever to simulate faithfully.

Fig. 5. The functionality F_P4TC (cont. from Fig. 4)

Functionality F_P4TC (cont.) – Task TSP Registration

TSP input: (register, a_T)
(1) If this task has been run before, output ⊥ and abort.
(2) TRDB := ∅
(3) Send (registering_tsp, pid_T, a_T) to the adversary and obtain the key (pk_T).^a
(4) Call G_bb with input (register, pk_T).
TSP output: (pk_T)

^a Giving the adversary the power to control the key generation serves two purposes: i) It gives a stronger security guarantee, i.e., security for an honest TSP is retained even if its keys are maliciously generated (due to a bad random number generator). ii) It gives the simulator the lever to simulate faithfully.

Fig. 6. The functionality F_P4TC (cont. from Fig. 4)

Functionality F_P4TC (cont.) – Task RSU Registration

RSU input: (register)
(1) If this task has been run before, output ⊥ and abort.
(2) Send (registering_rsu, pid_R) to the adversary and obtain the key (pk_R).^a
(3) Call G_bb with input (register, pk_R).
RSU output: (pk_R)

^a Giving the adversary the power to control the key generation serves two purposes: i) It gives a stronger security guarantee, i.e., security for honest parties is retained even if their keys are maliciously generated (due to a bad random number generator). ii) It gives the simulator the lever to simulate faithfully.

Fig. 7. The functionality F_P4TC (cont. from Fig. 4)

the RSU is assigned an attribute vector a_R.^22 The content of attribute vectors is in no way restricted by F_P4TC

22 Although only attributes are set in this task, we will later see in the task of Debt Accumulation that these also serve as a kind of certificate, as RSUs are only able to successfully participate in Debt Accumulation if they have been assigned attributes by the TSP.


Functionality F_P4TC (cont.) – Task User Registration

User input: (register)
(1) If this task has been run before, output ⊥ and abort.
(2) Send (registering_user, pid_U) to the adversary and obtain the key (pk_U).^a
(3) Call G_bb with input (register, pk_U).
User output: (pk_U)

^a Giving the adversary the power to control the key generation serves two purposes: i) It gives a stronger security guarantee, i.e., security for honest parties is retained even if their keys are maliciously generated (due to a bad random number generator). ii) It gives the simulator the lever to simulate faithfully.

Fig. 8. The functionality F_P4TC (cont. from Fig. 4)

Functionality FP4TC (cont.) – Task RSU Certification
RSU input: (certify)
TSP input: (certify, a R)
(1) If f AR (pid R) is already defined, output ⊥ to both parties and abort; else append f AR (pid R) := a R to f AR.
(2) Leak (certifying_rsu, pid R, a R) to the adversary.
RSU output: (a R)
TSP output: (OK)
Fig. 9. The functionality FP4TC (cont. from Fig. 4)

and can be used to implement different scenarios like location-based toll collection or entry-exit toll collection, but could also be used maliciously to void unlinkability. This a R is input by the TSP, while the RSU only inputs its desire to be certified. FP4TC checks whether attributes have already been assigned to the RSU (in which case it aborts) and otherwise appends f AR (pid R) := a R to the partial mapping f AR, which internally stores all RSU attributes assigned so far. The identity pid R and attributes a R are leaked to the adversary before the attributes are output to the RSU.
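The bookkeeping behind RSU Certification is essentially a write-once partial mapping. The following is a minimal Python sketch of it; all names are ours, and the adversarial leak is omitted:

```python
class AlreadyCertified(Exception):
    """Raised when f_AR(pid_R) is already defined (step (1) aborts with ⊥)."""

class AttributeRegistry:
    """Toy model of the f_AR bookkeeping inside F_P4TC."""

    def __init__(self):
        self.f_AR = {}  # partial mapping pid_R -> attribute vector a_R

    def certify_rsu(self, pid_R, a_R):
        if pid_R in self.f_AR:       # f_AR(pid_R) already defined
            raise AlreadyCertified(pid_R)
        self.f_AR[pid_R] = a_R       # append f_AR(pid_R) := a_R
        return a_R                   # RSU output; the TSP outputs OK
```

A second certification attempt for the same pid R fails, mirroring the abort in step (1).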

C.2 Basic Tasks

In this section we describe the basic tasks one would expect from any toll collection scheme: Wallet Issuing, Debt Accumulation and Debt Clearance. As mentioned before, these are the only tasks in which transaction entries are created.

C.2.1 Wallet Issuing. Wallet Issuing (cp. Fig. 10) is a two-party task between a user and the TSP in which a new, empty wallet is created for the user. The TSP inputs an attribute vector a U and a blacklist bl T of user public keys that are not allowed to obtain any new wallets. First, FP4TC randomly picks a (previously unused) serial number s for the new transaction entry trdb. If the user is corrupted, the adversary may at this point choose another corrupted user's identity pid U to be used for this wallet. Multiple corrupted users are allowed to have wallets issued for one another, but they are not able to request a new wallet for an honest user. The corresponding public key for the user ID pid U is obtained from the bulletin board G bb and checked against the TSP's blacklist bl T. If this does not lead to an abort, a new wallet ID λ and fraud detection ID φ are uniquely and randomly picked, unless the user is corrupted, in which case the adversary chooses φ. This may infringe upon the unlinkability of the user's transactions, but we do not give any privacy guarantees for corrupted users. Lastly, a transaction entry trdb := (⊥, s, φ, 0, λ, pid U, pid T, 0, 0) corresponding to the new, empty wallet is stored in TRDB and the wallet's attributes f AU (λ) := a U are appended to the partial mapping f AU. Both parties get the serial number s as output; the user also receives the attribute vector a U to check that it has been assigned correctly and, more importantly, does not contain any identifying information.

Functionality FP4TC (cont.) – Task Wallet Issuing
User input: (issue)
TSP input: (issue, a U, bl T)
(1) Pick a serial number s ← S uniformly at random that has not previously been used.
(2) If pid U ∈ PID corrupt, leak a U to the adversary and ask if another PID pid U ∈ PID corrupt should be used instead.a
(3) Receive pk U from the bulletin-board G bb for PID pid U.⊥
(4) If pk U ∈ bl T, output blacklisted to both parties and abort.
(5) Pick a wallet ID λ ← L uniformly at random that has not previously been used.
(6) If pid U ∉ PID corrupt, pick φ ← Φ uniformly at random that has not previously been used; otherwise ask the adversary for a fraud detection ID φ that has not previously been used.b Append f Φ (λ, 0) := φ to f Φ.
(7) Append trdb := (⊥, s, φ, 0, λ, pid U, pid T, 0, 0) to TRDB.
(8) Append f AU (λ) := a U to f AU.
User output: (s, a U)
TSP output: (s)
⊥ If this does not exist, output ⊥ and abort.
a If a group of corrupted users collude, a correct mapping to a specific user cannot be guaranteed, because corrupted users might share their credentials.
b Picking the upcoming fraud detection ID randomly asserts untrackability for honest users. For corrupted users, we do not (and cannot) provide such a guarantee.

Fig. 10. The functionality FP4TC (cont. from Fig. 4)

C.2.2 Debt Accumulation. This two-party task (cp. Fig. 11) is conducted whenever a registered user passes an RSU; it serves the main purpose of adding toll to a previous wallet state of the user. In this task the user only inputs a serial number s prev, indicating which past wallet state he wishes to use for this transaction. The participating RSU in turn inputs a blacklist bl R of fraud detection IDs. First, FP4TC randomly picks a (previously unused) serial number s for the new transaction entry trdb. If the user is corrupted, the adversary may at this point choose another corrupted user's identity pid U to be used for this transaction. FP4TC looks up whether a wallet state trdb prev in TRDB corresponds to the user input s prev and belongs to the user pid U. This guarantees that each user can only accumulate debt on a wallet that was legitimately issued to him. Multiple corrupted users may choose to swap wallets among themselves but are not able to use an honest user's wallet. From the previous wallet state trdb prev = (·, s prev, φ prev, x prev, λ prev, pid U, pid R prev, ·, b prev)


Functionality FP4TC (cont.) – Task Debt Accumulation
User input: (pay_toll, s prev)
RSU input: (pay_toll, bl R)
(1) Pick a serial number s ← S uniformly at random that has not previously been used.
(2) If pid U ∈ PID corrupt, ask the adversary if another PID pid U ∈ PID corrupt should be used instead.a
(3) Select (·, s prev, φ prev, x prev, λ prev, pid U, pid R prev, ·, b prev) ∈ TRDB (with (s prev, pid U) being the unique key).⊥
(4) If φ prev ∈ bl R, output blacklisted to both parties and abort.
(5) Set λ := λ prev and x := x prev + 1.
(6) If f Φ (λ, x) is already defined, set φ := f Φ (λ, x). Else, if pid U ∉ PID corrupt, pick φ ← Φ uniformly at random that has not previously been used; otherwise ask the adversary for a fraud detection ID φ that has not previously been used.b Append f Φ (λ, x) := φ to f Φ.
(7) Set a U := f AU (λ), a R := f AR (pid R), and a R prev := f AR (pid R prev).⊥
(8) Calculate the price p := Opricing (a U, a R prev, a R). If pid R ∈ PID corrupt, then leak (a U, a R prev, a R) to the adversary and obtain a price p.
(9) Set b := b prev + p.
(10) Append (s prev, s, φ, x, λ, pid U, pid R, p, b) to TRDB.
User output: (s, a R, p, b)
RSU output: (s, φ, a U, a R prev)
⊥ If this does not exist, output ⊥ and abort.
a If a group of corrupted users collude, a correct mapping to a specific user cannot be guaranteed, because corrupted users might share their credentials.
b The ideal model only guarantees privacy for honest users. For corrupted users the fraud detection ID might be chosen adversarially (cp. text body).
Fig. 11. The functionality FP4TC (cont. from Fig. 4)
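The TRDB bookkeeping of Wallet Issuing and Debt Accumulation can be sketched as follows. This is a toy Python model with our own names; blacklists, attribute vectors, pricing and the corruption cases are omitted, and entries follow the layout (s prev, s, φ, x, λ, pid U, pid RT, p, b) from the text:

```python
import secrets

class TRDB:
    """Toy transaction database: issue creates a root entry, pay_toll links
    a new entry to its predecessor via the serial number s_prev."""

    def __init__(self):
        self.entries = []
        self.f_Phi = {}  # partial mapping (lam, x) -> fraud detection ID phi

    def _fresh(self):
        return secrets.token_hex(8)  # stand-in for a fresh random ID

    def issue(self, pid_U, pid_T):
        s, lam = self._fresh(), self._fresh()
        phi = self.f_Phi.setdefault((lam, 0), self._fresh())
        self.entries.append((None, s, phi, 0, lam, pid_U, pid_T, 0, 0))
        return s

    def pay_toll(self, pid_U, pid_R, s_prev, price):
        # select the unique entry keyed by (s_prev, pid_U)
        prev = next(e for e in self.entries if e[1] == s_prev and e[5] == pid_U)
        lam, x, b = prev[4], prev[3] + 1, prev[8] + price
        phi = self.f_Phi.setdefault((lam, x), self._fresh())  # reused on double-spending
        s = self._fresh()
        self.entries.append((s_prev, s, phi, x, lam, pid_U, pid_R, price, b))
        return s, b
```

Replaying an old serial number s prev yields the same pair (λ, x) and hence the same fraud detection ID φ, which is exactly what makes double-spending detectable later.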

the ideal functionality gets a fraud detection ID φ prev that is checked against bl R. The other information in trdb prev is then used to determine the content of the new transaction entry trdb. The user ID pid U and wallet ID λ stay the same, pid R is set to the identity of the participating RSU, and the counter x prev is increased by one to obtain x. FP4TC checks whether a fraud detection ID φ := f Φ (λ, x) is already assigned to the pair (λ, x) (either because the user committed double-spending or because it has been precalculated for blacklisting purposes). If not, and the user is honest, it picks a new φ uniformly at random. If the user is corrupted, the fraud detection ID is not randomly drawn but picked by the adversary. This may infringe upon the unlinkability of the user's transactions but, as mentioned before, we do not give any privacy guarantees for corrupted users. The attributes a U, a R prev and a R are again looked up internally and leaked to the adversary, who chooses the price p of this transaction. Having the price determined in this way makes it clear that FP4TC does not give any guarantees on the "right" amount of debt being added at this point. Instead it gives the user enough information about the transaction to appeal out-of-band afterwards if the wrong amount of debt is added. We assume this detectability will keep RSUs in the real world from adding too much debt. Lastly, the new balance b is calculated from the price and the old balance before trdb is stored in TRDB. Note that all information leading to the new wallet state came from data internally stored in FP4TC itself, not from an input by the user or RSU, and can therefore not be compromised. The serial number, RSU attributes, price and balance are output to the user so he may check that he only paid the amount he expected. The RSU gets the serial number as well, but also the fraud detection ID to enable double-spending detection and the attributes of the user and the previous RSU.

Functionality FP4TC (cont.) – Task Debt Clearance
User input: (clear_debt, s prev)
TSP input: (clear_debt)
(1) Pick a serial number s ← S uniformly at random that has not previously been used.
(2) If pid U ∈ PID corrupt, ask the adversary if another PID pid U ∈ PID corrupt should be used instead.a
(3) Select (·, s prev, φ prev, x prev, λ prev, pid U, pid R prev, ·, b prev) ∈ TRDB (with (s prev, pid U) being the unique key).⊥
(4) Set λ := λ prev and x := x prev + 1.
(5) If f Φ (λ, x) is already defined, set φ := f Φ (λ, x). Else, if pid U ∉ PID corrupt, pick φ ← Φ uniformly at random that has not previously been used; otherwise ask the adversary for a fraud detection ID φ that has not previously been used.b Append f Φ (λ, x) := φ to f Φ.
(6) If pid T ∈ PID corrupt, set a R prev := f AR (pid R prev)⊥ and leak a R prev to the adversary.
(7) Set b bill := b prev.
(8) Append (s prev, s, φ, x, λ, pid U, pid T, −b bill, 0) to TRDB.
User output: (b bill)
TSP output: (pid U, φ, b bill)
⊥ If this does not exist, output ⊥ and abort.
a If a group of corrupted users collude, a correct mapping to a specific user cannot be guaranteed, because corrupted users might share their credentials.
b The ideal model only guarantees privacy for honest users. For corrupted users the fraud detection ID might be chosen adversarially (cp. text body).

Fig. 12. The functionality FP4TC (cont. from Fig. 4)

C.2.3 Debt Clearance. As Debt Clearance (cp. Fig. 12) is very similar to the task of Debt Accumulation, we refrain from describing it again in full detail and rather just highlight the differences. The first difference is that it is conducted with the TSP rather than an RSU, and no blacklist is taken as input, as we do not want to prevent anyone from paying their debt. Although this task results in a transaction entry trdb as well, the new serial number s is not output to either party. This emphasizes that the new wallet state is final and cannot be updated again by using its serial number as input for another transaction. Instead of obtaining a price from the adversary, the attributes a R prev of the previous RSU pid R prev are leaked to the adversary in case the TSP is corrupted. The (negative) price for the transaction entry is set to the billing amount b bill, which in turn is taken to be the previous balance b prev of the wallet. A new transaction entry trdb := (s prev, s, φ, x, λ, pid U, pid T, −b bill, 0) is added to TRDB and the bill b bill is output to both parties. Furthermore, the TSP gets the user's ID pid U, as we assume Debt Clearance to be identifying, as well as the fraud detection ID φ to enable double-spending detection.


Functionality FP4TC (cont.) – Task Prove Participation
User input: (prove_participation)
SA input: (prove_participation, pid U, S R pp)
(1) If pid U ∈ PID corrupt, leak S R pp to the adversary.
(2) If ∃ (·, s, ·, ·, ·, pid U, ·, ·, ·) ∈ TRDB such that s ∈ S R pp, then set out U := out SA := OK; else set out U := out SA := NOK.
User output: (out U)
SA output: (out SA)
⊥ If this does not exist, output ⊥ and abort.
Fig. 13. The functionality FP4TC (cont. from Fig. 4)
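The check of Fig. 13 is a plain set-membership test over the transaction database. A toy Python version, with entries in the layout (s prev, s, φ, x, λ, pid U, pid RT, p, b) from the text and the adversarial leak omitted:

```python
def prove_participation(entries, pid_U, S_R_pp):
    """Return OK iff some transaction entry of user pid_U carries a serial
    number from the SA's set S_R_pp (toy model, names are ours)."""
    ok = any(e[1] in S_R_pp and e[5] == pid_U for e in entries)
    return "OK" if ok else "NOK"
```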

Functionality FP4TC (cont.) – Task Double-Spending Detection
TSP input: (scan_for_fraud, φ)
(1) Pick trdb ≠ trdb′ in TRDB such that trdb = (·, ·, φ, ·, ·, pid U, ·, ·, ·) and trdb′ = (·, ·, φ, ·, ·, pid U, ·, ·, ·).⊥
(2) Ask the adversary for a proof π ∈ Π corresponding to pid U and append (pid U, π) ↦ OK to f Π.
TSP output: (pid U, π)
⊥ If this does not exist, output ⊥ and abort.
Fig. 14. The functionality FP4TC (cont. from Fig. 4)
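The scan of Fig. 14 amounts to finding two distinct entries sharing the same fraud detection ID. A toy Python version, with entries in the layout (s prev, s, φ, x, λ, pid U, pid RT, p, b) from the text and the adversarially generated proof π omitted:

```python
def scan_for_fraud(entries, phi):
    """Return the pid_U of a double-spender if at least two distinct entries
    carry the fraud detection ID phi; None models 'output ⊥ and abort'."""
    hits = [e for e in entries if e[2] == phi]
    if len(hits) >= 2:
        return hits[0][5]  # pid_U shared by the offending entries
    return None
```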

C.3 Feature Tasks

To obtain a more secure toll collection system we also provide the feature tasks Prove Participation, Double-Spending Detection, Guilt Verification and User Blacklisting. All of these tasks deal with different aspects arising from fraudulent user behavior. C.3.1 Prove Participation. This is a two-party task involving a user and the SA (cp. Fig. 13) and is assumed to be conducted with every user physically identified by one of the SA's cameras. It is meant to allow every honest user to prove his successful participation in a transaction with the RSU where the photo was taken, while a fraudulent user will not be able to do so. This deanonymizes the one transaction proven by the user but does not affect the anonymity or unlinkability of any other transactions. As input the task requires a user ID pid U and a set S R pp of serial numbers in question from the SA, but only consent from the user. If the user is corrupted, the serial numbers are leaked to the adversary. FP4TC then checks whether a transaction with the user ID pid U and a serial number s contained in S R pp is recorded in TRDB. The result of the check (OK or NOK) is output to both parties. C.3.2 Double-Spending Detection and Guilt Verification. Due to our requirement to allow offline RSUs, a user is able to fraudulently collect debt on outdated states of his wallet. This double-spending cannot be prevented but must be detected afterwards. To ensure this, FP4TC provides the tasks Double-Spending Detection (cp. Fig. 14) and Guilt Verification (cp. Fig. 15).


Functionality FP4TC (cont.) – Task Guilt Verification
Party input: (verify_guilt, pid U, π)
(1) If f Π (pid U, π) is defined, then set out := f Π (pid U, π) and output (out).
(2) If pid U ∈ PID corrupt, then leak (pid U, π) to the adversary and obtain the result out; else set out := NOK.
(3) Append (pid U, π) ↦ out to f Π.
Party output: (out)
Fig. 15. The functionality FP4TC (cont. from Fig. 4)
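The consistency bookkeeping behind these two tasks via f Π can be sketched in Python as follows. This is a toy model with our own names; the corrupted-user case, in which the adversary decides the outcome, is omitted:

```python
class ProofRegistry:
    """Toy f_Pi bookkeeping: a proof (pid_U, pi) verifies as OK only if it was
    recorded by Double-Spending Detection; repeated queries stay consistent."""

    def __init__(self):
        self.f_Pi = {}  # partial mapping (pid_U, pi) -> "OK" / "NOK"

    def record_proof(self, pid_U, pi):
        # called by Double-Spending Detection: mark the issued proof as valid
        self.f_Pi[(pid_U, pi)] = "OK"

    def verify_guilt(self, pid_U, pi):
        # callable by any party
        if (pid_U, pi) in self.f_Pi:        # answered before: stay consistent
            return self.f_Pi[(pid_U, pi)]
        self.f_Pi[(pid_U, pi)] = "NOK"      # made-up proofs against honest users fail
        return "NOK"
```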

Double-Spending Detection is a one-party task performed by the TSP. It takes a fraud detection ID φ as input and checks the transaction database TRDB for two distinct entries containing this same fraud detection ID. In case such entries are present, the adversary is asked for a proof π to be issued for this instance of double-spending. The user ID and proof (pid U, π) are appended to f Π and marked as valid. Additionally, both are output to the TSP. Guilt Verification is a one-party task as well but can be performed by any party. It takes a user ID pid U and a double-spending proof π as input. First, it checks whether this particular pair (pid U, π) has already been defined and outputs whatever has been output before. This is necessary to ensure consistency across different invocations. If (pid U, π) has neither been issued nor queried before and the affected user is corrupted, the adversary is allowed to decide whether this proof should be accepted. This means we do not protect corrupted users from false accusations of guilt. If the user is honest and (pid U, π) has neither been issued nor queried before, then the proof is marked as invalid. This protects honest users from being accused by made-up proofs which have not been issued by the ideal functionality itself. Finally, the result is recorded for the future and output to the party. This possibility of public verification is vital to prevent the TSP from wrongly accusing any user of double-spending and should, for instance, be utilized by the DR before it agrees to blacklist and therefore deanonymize a user on the basis of double-spending. C.3.3 User Blacklisting. The task of User Blacklisting (cp. Fig. 16) is a two-party task between the DR and the TSP and serves two purposes: firstly, the debt b bill owed by the user that is to be blacklisted is calculated; secondly, fraud detection IDs for all of the user's wallets are determined and handed to the TSP so it may add them to the RSU blacklists bl R.
Note that the generation of the blacklist bl T of user public keys is handled internally by the TSP and not in the scope of this task or FP4TC . The TSP inputs the ID pid U of the user in question while the DR only needs to consent. To calculate the user’s outstanding debt, all transaction entries in TRDB containing pid U are taken and their respective prices p summed up to obtain b bill . Note that although this sum may contain the prices of transactions and wallets that have already been cleared, this does not falsify the value of b bill as every successful execution of Debt Clearance creates an entry with the amount that was cleared as negative price. For the actual blacklisting the set of all wallet IDs belonging to pid U is looked up and the remainder of the task is conducted for every wallet λ separately. FP4TC checks how many values of f Φ (λ, ·) are already defined and extends them to the first x blR fraud detection IDs, where x blR is a parameter we assume to be greater than the number of transactions a user would be involved in within one billing period. To that end, yet undefined fraud detection IDs f Φ (λ, x ) with x ≤ x blR are uniquely and randomly drawn or—in case of a corrupted user—obtained from the adversary. Finally, all fraud detection IDs φ = f Φ (λ, x ) for x ≤ x blR and all wallets λ of the user are output to the TSP together with the outstanding debt b bill .
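The two computations of User Blacklisting, totalling the outstanding debt and precomputing fraud detection IDs up to x blR, can be sketched in Python as follows (toy model, honest-user case only; names and the entry layout (s prev, s, φ, x, λ, pid U, pid RT, p, b) follow the text):

```python
import secrets

def blacklist_user(entries, f_Phi, pid_U, x_blR):
    """Sum all prices of the user's entries (cleared wallets contribute a
    negative price, so the total stays correct) and extend f_Phi for every
    wallet of the user up to the parameter x_blR."""
    wallets = {e[4] for e in entries if e[5] == pid_U}
    b_bill = sum(e[7] for e in entries if e[4] in wallets)
    for lam in wallets:
        x_max = max(x for (l, x) in f_Phi if l == lam)
        for x in range(x_max + 1, x_blR + 1):
            f_Phi[(lam, x)] = secrets.token_hex(8)  # fresh fraud detection ID
    Phi_bl = {f_Phi[(lam, x)] for lam in wallets for x in range(x_blR + 1)}
    return b_bill, Phi_bl
```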


Functionality FP4TC (cont.) – Task User Blacklisting
DR input: (blacklist_user)
TSP input: (blacklist_user, pid U)
(1) If pid T ∉ PID corrupt, set L bl := {λ | (·, ·, ·, ·, λ, pid U, ·, ·, ·) ∈ TRDB}; otherwise obtain a set of serial numbers S root from the adversary and set L bl := {λ | (⊥, s, ·, ·, λ, pid U, ·, ·, ·) ∈ TRDB with s ∈ S root}.
(2) TRDB bl := {trdb ∈ TRDB | trdb = (·, ·, ·, ·, λ, ·, ·, p, ·) s.t. λ ∈ L bl}.
(3) b bill := Σ_{trdb ∈ TRDB bl} p.
(4) For each λ ∈ L bl:
(a) x λ := max{x | f Φ (λ, x) is already defined}.
(b) For x ∈ {x λ + 1, . . . , x blR}:
(i) If pid U ∉ PID corrupt, pick φ ← Φ uniformly at random that has not previously been used; otherwise leak (λ, x) to the adversary and obtain a fraud detection ID φ that has not previously been used.
(ii) Append (λ, x) ↦ φ to f Φ.
(5) Φ bl := {f Φ (λ, x) | λ ∈ L bl, 0 ≤ x ≤ x blR}.
DR output: (OK)
TSP output: (b bill, Φ bl)
Fig. 16. The functionality FP4TC (cont. from Fig. 4)

D FULL PROTOCOL DESCRIPTION

In this appendix we describe and define a real protocol πP4TC that implements our toll collection system FP4TC.

Definition D.1 (P4TC Scheme). A protocol π is called a privacy-preserving electronic toll collection scheme if it GUC-realizes FP4TC.

The proof that πP4TC is a GUC-realization of FP4TC is postponed to Appendix E. The style of the presentation follows the same lines as the presentation of the ideal model FP4TC in Appendix C: although πP4TC is a single, monolithic protocol with different tasks, the individual tasks are presented as if they were individual protocols. While in the ideal model all information is kept in a single, pervasive, trustworthy database, in the real model such a database does not exist. Instead, the state of the system is distributed across all parties. Each party locally stores a piece of information: the user owns a "User Wallet" which is updated during each transaction, the RSU collects "Double-Spending Tags" as well as "Proof of Participation Challenges" which are periodically sent to the TSP, and the TSP creates and keeps "Hidden User Trapdoors" for each wallet issued. A precise definition of what is stored by which party is depicted in Fig. 17. For typographic reasons we additionally split the presentation of most tasks into a wrapper protocol and a core protocol. Except for a few cases, there is a one-to-one correspondence between wrapper and core protocols. The wrapper protocols have the same input/output interfaces as their ideal counterparts and describe steps that are executed by each party locally before and after the respective core protocol. These steps include loading keys, parsing the previously stored state, persisting the new state after the core protocol has returned, etc. The core protocols describe the actual interaction between parties and what messages are exchanged. This dichotomy between wrapper and core protocols is lifted for four exceptions:


UC-Protocol πP4TC
I. Local State
(1) The TSP internally records:
• Its public and private key (pk T, sk T).
• A self-signed certificate cert R T.
• A set AHTD of augmented hidden trapdoors.
• A (partial) mapping {pk R ↦ a R} of RSU attributes.
(2) Each RSU internally records:
• Its public and private key (pk R, sk R).
• A certificate cert R signed by the TSP.
• Sets Ω dsp, Ω bl and Ω R pp of transaction information for double-spending detection, blacklisting and prove participation, respectively.
(3) Each user internally records:
• His public and private key (pk U, sk U).
• A set {τ} of all past tokens issued to him.
• A set Ω U pp of transaction information for prove participation.
II. Behavior
• DR Registration (Fig. 19) • TSP Registration (Fig. 21) • RSU Registration (Fig. 23) • User Registration (Fig. 25) • RSU Certification (Fig. 27) • Wallet Issuing (Fig. 29) • Debt Accumulation (Fig. 31) • Debt Clearance (Fig. 33) • Prove Participation (Fig. 35) • Double-Spending Detection (Fig. 37) • Guilt Verification (Fig. 38) • User Blacklisting (Fig. 39)

Fig. 17. The UC-protocol π P4TC

(1) We give an algorithm for the setup of the system (cf. Fig. 18) which explains how the CRS is generated. Of course, there is no wrapper protocol, because setup of the CRS is not even part of our protocol but part of the setup assumption and provided by FCRS.
(2) We describe a "utility algorithm" WalletVerification (cf. Fig. 41). This algorithm has no purpose on its own, but simply collects some shared code of multiple tasks.
(3)+(4) We only have "wrapper protocols" for the tasks Double-Spending Detection and Guilt Verification (cf. Figs. 37 and 38), because they are so simple that splitting each of them into two yields no advantage.

D.1 Secure Channels

In our system, all protocol messages are encrypted using CCA-secure encryption. For this purpose, a new session key chosen by the user is encrypted under the public key of an RSU/TSP for each interaction. We omit these encryptions when describing the protocols.

D.2 Wallets

A central component of our toll collection system is the wallet that is created during Wallet Issuing. It is of the form τ := (s, φ, x next, λ, a U, c R, d R, σ R, cert R, c T, d T, σ T, b, u1 next).


Setup(1^n, B)
  gp := (G1, G2, GT, e, p, g1, g2) ← SetupGrp(1^n)
  CRS1com ← C1.Gen(gp)
  CRS2com ← C2.Gen(gp)
  CRSpok ← SetupPoK(gp)
  CRS := (gp, B, CRS1com, CRS2com, CRSpok)
  return CRS
Fig. 18. System Setup Algorithm

Some of the components are fixed after creation; some change with every transaction. The fixed components consist of the wallet ID λ (which is also used as the PRF seed), the user attributes a U, the TSP commitment c T (a commitment on λ and the secret user identification key sk U^id), its corresponding opening d T, and a signature σ T on c T and a U created by the TSP. The alterable components consist of the RSU commitment c R (a commitment on λ, b, u1 next and x next), its corresponding opening d R, a signature σ R on c R and s created by an RSU, the balance b, the double-spending mask u1 next for the next transaction, the PRF counter x next for the next interaction, an RSU certificate cert R, and the serial number s and fraud detection ID φ := PRF(λ, x next − 1) for the current transaction. These components change after each interaction with an RSU via the Debt Accumulation task. In the following, a protocol or algorithm for each task is presented. For a better overview, Fig. 1 depicts which parties are involved in each task (except for some registration tasks). The variables used are summarized in Tables 3 and 4.
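The split of τ into fixed and alterable components can be summarized in a schematic Python sketch. Field names are ours, and Any merely stands in for the group elements listed in Table 4:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Wallet:
    """Schematic layout of the wallet token τ (illustration only)."""
    # fixed after creation
    lam: Any        # wallet ID λ, also the PRF seed
    a_U: Any        # user attributes
    c_T: Any        # TSP commitment on λ and sk_id_U
    d_T: Any        # opening of c_T
    sigma_T: Any    # TSP signature on c_T and a_U
    # updated in every Debt Accumulation
    s: Any          # current serial number
    phi: Any        # fraud detection ID PRF(λ, x_next − 1)
    x_next: Any     # PRF counter for the next interaction
    c_R: Any        # RSU commitment on λ, b, u1_next and x_next
    d_R: Any        # opening of c_R
    sigma_R: Any    # RSU signature on c_R and s
    cert_R: Any     # certificate of the last RSU
    b: Any          # current balance
    u1_next: Any    # double-spending mask for the next transaction
```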

D.3 System Setup

To set up the system once (see Fig. 18), the public parameters CRS must be generated in a trustworthy way. The CRS consists of a description of the underlying algebraic framework gp, a splitting base B and the individual CRSs for the used commitments and zero-knowledge proofs. We assume that the CRS is implicitly available to all protocols and algorithms. Either a number of mutually distrusting parties run a multi-party computation (using some other sort of setup assumption) to generate the CRS, or a commonly trusted party is charged with this task. As a trusted third party (the DR) explicitly participates in our system (for resolving disputes), this party could also run the system setup.

D.4 Registration

The registration algorithms of DR, T, R and U are each presented as a wrapper protocol πP4TC (see Figs. 19, 21, 23 and 25) and a core protocol (see Figs. 20, 22, 24 and 26). The wrapper protocol interacts with other UC entities, pre-processes the inputs, post-processes the outputs and internally invokes the core protocols. In the following, only the mechanics of the core protocols are explicitly described. The Dispute Resolver (DR) computes a key pair (pkDR, skDR) which can be used to remove the unlinkability of user transactions in case of a dispute between the user and the TSP (see Figs. 19 and 20). The DR could be a (non-governmental) organization trusted by both sides: by users to protect their privacy, and by the TSP to protect system security. The TSP must also generate a key pair (see Figs. 21 and 22). To this end, the TSP generates several signature key pairs (pk T^T, sk T^T), (pk T^cert, sk T^cert), (pk T^R, sk T^R), where sk T^T is used in the Wallet Issuing task to sign the TSP


Table 4. Notation that only occurs in the real protocol

Identifier | Type | Description
sk DR | Zp | secret DR key
sk T | Zp^3 × Zp^(5+y) × Zp^3 | secret TSP key
pk T^T | G1^3 | public TSP commitment signing key (part of pk T)
sk T^T | Zp^3 | secret TSP commitment signing key (part of sk T)
pk T^cert | G1^2 × G2^(3+y) | public certification key (part of pk T)
sk T^cert | Zp^(5+y) | secret certification key (part of sk T)
pk T^R | G1^3 | public TSP's RSU commitment signing key (part of pk T)
sk T^R | Zp^3 | secret TSP's RSU commitment signing key (part of sk T)
sk R | Zp^3 | secret RSU key
sk U | Zp × Zp^4 | secret user key
pk U^id | G1 | public user identification key (part of pk U)
sk U^id | Zp | secret user identification key (part of sk U)
pk U^auth | G1^2 × G2^2 | public user authentication key (part of pk U)
sk U^auth | Zp^4 | secret user authentication key (part of sk U)

c T | G2 | TSP commitment
d T | G1 | decommitment of c T
σ T | G2^2 × G1 | signature on c T and a U

c R | G2 | RSU commitment
d R | G1 | decommitment of c R
σ R | G2^2 × G1 | signature on c R and s

cert R | G1^3 × G1^y × (G2^2 × G1) | RSU certificate
cert R T | G1^3 × G1^y × (G2^2 × G1) | TSP certificate
σ R^cert | G2^2 × G1 | signature on pk R and a R
σ T^cert | G2^2 × G1 | signature on pk T^R and a T

c hid | G2 | hidden ID
d hid | G1 | opening of hidden ID
c ser′′ | G1^2 | commitment on the RSU half of the serial number
d ser′′ | Zp^2 | opening of c ser′′

ω dsp | G1 × Zp × Zp | transaction information for double-spending
ω bl | G1 × Zp | transaction information for blacklisting
ω U pp | G1 × G2 × G1 | user transaction information for prove participation
ω R pp | G1 × G2 | RSU transaction information for prove participation

u1 | Zp | double-spending mask
u2 | Zp | double-spending randomness
t | Zp | double-spending tag

htd | Zp × G2 × G1 × G1^(2ℓ+4) × (G2^2 × G1) | hidden user trapdoor
HTD | set of Zp × G2 × G1 × G1^(2ℓ+4) × (G2^2 × G1) elements | set of hidden trapdoors
ahtd | PID U × S × HTD | augmented hidden user trapdoor
AHTD | set of PID U × S × HTD elements | set of augmented hidden trapdoors
nPRF | N | maximum value of the PRF counter


UC-Protocol πP4TC (cont.) – Task DR Registration
DR input: (register)
(1) If a key pair (pkDR, skDR) has already been recorded, output ⊥ and abort.
(2) Obtain CRS CRS from FCRS.
(3) Run (pkDR, skDR) ← DRRegistration(CRS) (see Fig. 20).
(4) Record (pkDR, skDR) internally and call G bb with input (register, pkDR).
DR output: (pkDR)
Fig. 19. The UC-protocol πP4TC (cont. from Fig. 17)

DRRegistration(CRS)
  parse (gp, B, CRS1com, CRS2com, CRSpok) := CRS
  (pkDR, skDR) ← E.Gen(gp)
  return (pkDR, skDR)
Fig. 20. DR Registration Core Protocol

UC-Protocol πP4TC (cont.) – Task TSP Registration
TSP input: (register, a T)
(1) If a key pair (pk T, sk T) has already been recorded, output ⊥ and abort.
(2) Obtain CRS CRS from FCRS.
(3) Run (pk T, sk T, cert R T) ← TSPRegistration(CRS, a T) (see Fig. 22).
(4) Record (pk T, sk T) and cert R T internally and call G bb with input (register, pk T).
TSP output: (pk T)
Fig. 21. The UC-protocol πP4TC (cont. from Fig. 17)

TSPRegistration(CRS, a T)
  parse (gp, B, CRS1com, CRS2com, CRSpok) := CRS
  (pk T^T, sk T^T) ← S.Gen(gp)
  (pk T^cert, sk T^cert) ← S.Gen(gp)
  (pk T^R, sk T^R) ← S.Gen(gp)
  (pk T, sk T) := ((pk T^T, pk T^cert, pk T^R), (sk T^T, sk T^cert, sk T^R))
  σ T^cert ← S.Sgn(sk T^cert, (pk T^R, a T))
  cert R T := (pk T^R, a T, σ T^cert)
  return (pk T, sk T, cert R T)

Fig. 22. TSP Registration Core Protocol


UC-Protocol πP4TC (cont.) – Task RSU Registration
RSU input: (register)
(1) If a key pair (pk R, sk R) has already been stored, output ⊥ and abort.
(2) Obtain CRS CRS from FCRS.
(3) Run (pk R, sk R) ← RSURegistration(CRS) (see Fig. 24).
(4) Store (pk R, sk R) internally and call G bb with input (register, pk R).
RSU output: (pk R)
Fig. 23. The UC-protocol πP4TC (cont. from Fig. 17)

RSURegistration(CRS)
  parse (gp, B, CRS1com, CRS2com, CRSpok) := CRS
  (pk R, sk R) ← S.Gen(gp)
  return (pk R, sk R)
Fig. 24. RSU Registration Core Protocol

UC-Protocol πP4TC (cont.) – Task User Registration
User input: (register)
(1) If a key pair (pk U, sk U) has already been stored, output ⊥ and abort.
(2) Obtain CRS CRS from FCRS.
(3) Run (pk U, sk U) ← UserRegistration(CRS) (see Fig. 26).
(4) Store (pk U, sk U) internally and call G bb with input (register, pk U).
User output: (pk U)
Fig. 25. The UC-protocol πP4TC (cont. from Fig. 17)

UserRegistration(CRS)
  parse (gp, B, CRS1com, CRS2com, CRSpok) := CRS
  y ← Zp uniformly at random
  (pk U^id, sk U^id) := (g1^y, y)
  (pk U^auth, sk U^auth) ← S.Gen(gp)
  (pk U, sk U) := ((pk U^id, pk U^auth), (sk U^id, sk U^auth))
  return (pk U, sk U)
Fig. 26. User Registration Core Protocol


UC-Protocol πP4TC (cont.) – Task RSU Certification
RSU input: (certify)
TSP input: (certify, a R)
(1) At the RSU side:
• Load the internally recorded (pk R, sk R).⊥
• Receive pk T from the bulletin-board G bb for PID pid T.⊥
(2) At the TSP side:
• Load the internally recorded (pk T, sk T).⊥
• Receive pk R from the bulletin-board G bb for PID pid R.⊥
• Check that no mapping pk R ↦ a′ R has been registered before; else output ⊥ and abort.
(3) Both sides: Run the code of RSUCertification between the RSU and the TSP (see Fig. 28):
((cert R), (OK)) ← ⟨RSUCertification R(pk T, pk R), T(pk T, sk T, pk R, a R)⟩.
(4) At the RSU side:
• Parse a R from cert R.
• Record cert R internally.
(5) At the TSP side:
• Record pk R ↦ a R internally.
RSU output: (a R)
TSP output: (OK)
⊥ If this does not exist, output ⊥ and abort.
Fig. 27. The UC-protocol πP4TC (cont. from Fig. 17)

commitment c_T and the user attributes a_U, sk_T^cert is used to sign RSU public keys in the RSU Certification task, and sk_T^R is used in the Wallet Issuing task to sign the RSU commitment c_R and the serial number s in place of an RSU. The TSP also generates a certificate cert_T^R for its own key pk_T^R. Each RSU must generate a key pair as well (see Figs. 23 and 24). For that purpose, each RSU generates a signature key pair (pk_R, sk_R) that is used in the Debt Accumulation task to sign the RSU commitment c_R. Each user also has to generate a key pair (see Figs. 25 and 26). The public key of the user identification key pair (pk_U^id, sk_U^id) will be used to identify the user in the system and is assumed to be bound to a physical ID such as a passport number, social security number, etc. Of course, for this purpose the public key needs to be unique. We assume that ensuring the uniqueness of user public keys as well as verifying and binding a physical ID to them is done "out-of-band" before participating in the Wallet Issuing task. A simple way to realize the latter could be to make use of external trusted certification authorities. Each user also creates a user authentication key pair (pk_U^auth, sk_U^auth) that is used to bind the hidden user trapdoor htd in Wallet Issuing and User Blacklisting to the user.
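The two user key pairs can be sketched as follows. All parameters are toy stand-ins: the real scheme lives in a bilinear group G_1 of prime order p, and S.Gen is the signature scheme's key generation, replaced here by a second, independent DLog pair.

```python
import secrets

# Toy discrete-log parameters (assumption: stand-ins for the paper's
# bilinear group G_1 of prime order p with generator g_1).
p = 2**127 - 1   # a Mersenne prime, used as toy modulus
g = 5

def gen_id_keypair():
    """User identification key pair: sk = y, pk = g^y (here mod p)."""
    y = secrets.randbelow(p - 2) + 1
    return pow(g, y, p), y

def gen_auth_keypair():
    """Stand-in for S.Gen: an independent key pair used only to bind
    the hidden user trapdoor htd to the user."""
    y = secrets.randbelow(p - 2) + 1
    return pow(g, y, p), y

pk_id, sk_id = gen_id_keypair()
pk_auth, sk_auth = gen_auth_keypair()
```

Uniqueness of pk_id (needed to bind it to a physical ID) holds with overwhelming probability because y is sampled uniformly from an exponentially large set.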

D.5 RSU Certification

The RSU Certification task is executed between R and T when a new RSU is deployed into the field. The task is presented in two parts: a wrapper protocol π_P4TC (see Fig. 27) and a core protocol RSUCertification (see Fig. 28). The wrapper protocol interacts with other UC entities, pre-processes the input and post-processes the output.


RSUCertification Core Protocol — participants R(pk_T, pk_R) and T(pk_T, sk_T, pk_R, a_R):

R: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
T: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
   parse (sk_T^cert, sk_T^T, sk_T^R) := sk_T
   σ_R^cert ← S.Sgn(sk_T^cert, (pk_R, a_R))
   cert_R := (pk_R, a_R, σ_R^cert)
   send cert_R to R
R: parse (pk_R′, a_R, σ_R^cert) := cert_R
   if S.Vfy(pk_T^cert, σ_R^cert, (pk_R, a_R)) = 0: return ⊥
   return (cert_R)
T: return (OK)

Fig. 28. RSU Certification Core Protocol

The wrapper protocol internally invokes the core protocol

⟨(cert_R), (OK)⟩ ← RSUCertification⟨R(pk_T, pk_R), T(pk_T, sk_T, pk_R, a_R)⟩.

In the core protocol, the TSP certifies the validity of the RSU public key and stores the certificate on the RSU. Note that the public key pk_R of an RSU and the associated certificate cert_R have to be refreshed from time to time. For ease of presentation we assume that the same RSU (identified by its PID pid_R) can only be registered once. In other words, if the (physically identical) RSU is removed from the field, goes to maintenance and is re-deployed to the field, we consider this RSU a "new" RSU.

D.6 Wallet Issuing

The Wallet Issuing task is executed between U and T. It is executed at the beginning of each billing period to generate a fresh wallet for the user. The task is presented in two parts: a wrapper protocol π_P4TC (see Fig. 29) and a core protocol WalletIssuing (see Fig. 30). The wrapper protocol interacts with other UC entities, pre-processes the input, post-processes the output and checks the validity of the created wallet by executing the WalletVerification algorithm (see Fig. 41) after the core protocol. The wrapper protocol internally invokes the core protocol

⟨(τ), (s, pk_U^id, htd)⟩ ← WalletIssuing⟨U(pk_DR, pk_U, sk_U), T(pk_DR, sk_T, a_U, cert_T^R, bl_T)⟩.

The joint input of the core protocol is the public key pk_DR of the DR. The user additionally obtains his public and secret key pair (pk_U, sk_U). The TSP also gets its own secret key sk_T, the attribute vector a_U for the user, its own certificate cert_T^R and the TSP blacklist bl_T as input. The protocol fulfills four objectives: (1) jointly computing a fresh and random wallet ID for the user that is only known to the user; (2) storing this wallet ID in a hidden fashion at the TSP such that it can only be recovered by the DR in the case that the user commits a fraud; (3) jointly computing a fresh and random serial number for this transaction; (4) creating a new wallet for the user. For the first objective, both parties randomly choose a preliminary wallet ID λ′ and λ′′, respectively, that together form the wallet ID λ := λ′ + λ′′. Note that the TSP does not learn the wallet ID.
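The additive two-party sharing of the wallet ID can be sketched as follows (toy modulus as an assumption; in the scheme λ lives in Z_p for the group order p). Since the user never reveals λ′ in the clear — it only enters a commitment — the TSP learns nothing about the final λ.

```python
import secrets

# Toy prime group order (assumption; the real scheme works in Z_p for
# the pairing group's order p).
p = 2**127 - 1

def combine(lam_user, lam_tsp):
    """Final wallet ID lambda = lambda' + lambda'' (mod p)."""
    return (lam_user + lam_tsp) % p

lam_user = secrets.randbelow(p)   # user's preliminary wallet ID lambda'
lam_tsp = secrets.randbelow(p)    # TSP's preliminary wallet ID lambda''
lam = combine(lam_user, lam_tsp)  # known only to the user in the protocol
```

Because λ′ is uniform and hidden from the TSP, λ is uniformly distributed from the TSP's point of view even if λ′′ is chosen maliciously.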


UC-Protocol π_P4TC (cont.) – Task Wallet Issuing

User input: (issue)    TSP input: (issue, a_U, bl_T)
(1) At the user side:
  • Load the internally recorded (pk_U, sk_U).⊥
  • Receive pk_T from the bulletin-board G_bb for PID pid_T.⊥
  • Receive pk_DR from the bulletin-board G_bb for PID pid_DR.⊥
(2) At the TSP side:
  • Load the internally recorded (pk_T, sk_T).⊥
  • Load the internally recorded cert_T^R.⊥
  • Receive pk_DR from the bulletin-board G_bb for PID pid_DR.⊥
(3) Both sides: Run the code of WalletIssuing between the user and the TSP (see Fig. 30)
  ⟨(τ), (s, pk_U^id, htd)⟩ ← WalletIssuing⟨U(pk_DR, pk_U, sk_U), T(pk_DR, sk_T, a_U, cert_T^R, bl_T)⟩.
(4) At the user side:
  • Run the code of WalletVerification(pk_T, pk_U, τ) (see Fig. 41).
  • If WalletVerification returns 0, output ⊥ and abort.
  • Record τ internally.
  • Parse s and a_U from τ.
(5) At the TSP side:
  • Receive pid_U from the bulletin-board G_bb for public key pk_U^id.
  • Insert (pid_U, s, htd) into AHTD.
User output: (s, a_U)    TSP output: (s)
⊥ If this does not exist, output ⊥ and abort.

Fig. 29. The UC-protocol π_P4TC (cont. from Fig. 17)

For the second objective it would not be sufficient for the user to simply commit to λ′, since then one could only prove that g_1^{λ′} is contained. Under the DLog assumption in G_1, recovering λ′ ∈ Z_p from g_1^{λ′} is infeasible. Therefore, the user splits λ′ into λ_0′, …, λ_ℓ′ ∈ {0, …, B−1} s.t. λ′ = Σ_{i=0}^{ℓ} λ_i′ · B^i for some splitting base B. The base B is chosen in a way that it is feasible for the DR to recover λ_i′ from g_1^{λ_i′} in a reasonable amount of time (e.g., B = 2^32). Then the user encrypts each g_1^{λ_i′} under the public key of the DR and transmits these encryptions e_i to the TSP. To bind these encryptions to the user and to prevent misuse by the TSP, the user also encrypts g_1^{λ′} and signs the resulting ciphertext e* along with the ciphertexts e_0, …, e_ℓ with S under his secret user authentication key sk_U^auth. The ciphertext e* and the signature σ* are also sent to the TSP. The TSP creates the hidden user trapdoor as htd := (λ′′, e_0, …, e_ℓ, e*, σ*). In the case that the user commits a fraud, the TSP can send all htd of this user along with pk_U to the DR, and the DR can recover the wallet ID λ. The DR in this case also checks that htd and pk_U match (by verifying σ*). For more details see the User Blacklisting task in Fig. 40.

The goal of the third objective is to create a truly random serial number s ∈ S for this transaction (we use S := G_1). To ensure that the serial number is indeed random (and not maliciously chosen by the user or the TSP),


Wallet Issuing Core Protocol — participants U(pk_DR, pk_U, sk_U) and T(pk_DR, sk_T, a_U, cert_T^R, bl_T):

U: parse ((pk_U^id, pk_U^auth), (sk_U^id, sk_U^auth)) := (pk_U, sk_U)
   s′ ←R S;  λ′ ←R Z_p;  u_1^next ←R Z_p
   split λ′ into λ_0′, …, λ_ℓ′ ∈ {0, …, B−1} s.t. λ′ = Σ_{i=0}^{ℓ} λ_i′ · B^i
   ∀i ∈ {0, …, ℓ}: r_i ←R Z_p; e_i ← E.Enc(pk_DR, g_1^{λ_i′}; r_i)
   r* ←R Z_p; e* ← E.Enc(pk_DR, g_1^{λ′}; r*)
   (c_T′, d_T′) ← C1.Com(CRS1_com, (λ′, sk_U^id))
   (c_R′, d_R′) ← C1.Com(CRS1_com, (λ′, 0, u_1^next, 0))
   stmnt := (pk_U^id, pk_DR, e*, e_0, …, e_ℓ, c_R′, c_T′)
   wit := (λ′, λ_0′, …, λ_ℓ′, r*, r_1, …, r_ℓ, g_1^{λ′}, g_1^{λ_0′}, …, g_1^{λ_ℓ′}, g_1^{u_1^next}, d_R′, d_T′, g_2^{sk_U^id})
   π ← P1.Prove(CRS_pok, stmnt, wit)
T: parse (sk_T^cert, sk_T^T, sk_T^R) := sk_T
   s′′ ←R S;  λ′′ ←R Z_p
   (c_T′′, d_T′′) ← C1.Com(CRS1_com, (λ′′, 0))
   (c_ser′′, d_ser′′) ← C2.Com(CRS2_com, s′′)
   send a_U, c_ser′′, cert_T^R, c_T′′ to U
U: parse (pk_T^R, a_T, σ_T^cert) := cert_T^R
   if S.Vfy(pk_T^cert, σ_T^cert, (pk_T^R, a_T)) = 0: return ⊥
   σ* ← S.Sgn(sk_U^auth, (c_T′′, e_0, …, e_ℓ, e*))
   send pk_U, s′, e*, σ*, π, e_0, …, e_ℓ, c_R′, c_T′ to T
T: parse (pk_U^id, pk_U^auth) := pk_U
   if pk_U ∈ bl_T: return blacklisted
   if S.Vfy(pk_U^auth, σ*, (c_T′′, e_0, …, e_ℓ, e*)) = 0: return ⊥
   stmnt := (pk_U^id, pk_DR, e*, e_0, …, e_ℓ, c_R′, c_T′)
   if P1.Vfy(CRS_pok, stmnt, π) = 0: return ⊥
   c_T := c_T′ · c_T′′
   σ_T ← S.Sgn(sk_T^T, (c_T, a_U))
   s := s′ · s′′
   (c_R′′, d_R′′) ← C1.Com(CRS1_com, (λ′′, 0, 0, 1))
   c_R := c_R′ · c_R′′
   σ_R ← S.Sgn(sk_T^R, (c_R, s))
   send s′′, d_ser′′, λ′′, c_R, d_R′′, σ_R, c_T, d_T′′, σ_T to U
U: if C2.Open(CRS2_com, s′′, c_ser′′, d_ser′′) = 0: return ⊥
   τ := ((s′ · s′′), PRF(λ, 0), 1, (λ′ + λ′′), a_U, c_R, (d_R′ · d_R′′), σ_R, cert_T^R, c_T, (d_T′ · d_T′′), σ_T, 0, u_1^next)
   return (τ)
T: htd := (λ′′, c_T′′, d_T′′, e_0, …, e_ℓ, e*, σ*)
   return (s, pk_U^id, htd)

Fig. 30. Wallet Issuing Core Protocol
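The digit decomposition of λ′ and the DR-side recovery of each digit by exhaustive search (feasible only because every digit is smaller than B) can be sketched as follows. All parameters are toy values: the paper suggests B = 2^32, far too large for the brute-force loop shown here, and the "blinded" digits would really sit inside ElGamal-style ciphertexts under pk_DR.

```python
# Toy parameters (assumptions): a small prime p and a group element g
# of large order mod p stand in for g_1 in G_1.
p = 10007
g = 5
B = 16        # toy splitting base (the paper suggests B = 2^32)
ELL = 4       # number of digits, chosen for this example

def split(lam, base, length):
    """Digits lam_0..lam_{length-1} with lam = sum(lam_i * base^i)."""
    return [(lam // base**i) % base for i in range(length)]

def recombine(digits, base):
    return sum(d * base**i for i, d in enumerate(digits))

def recover_digit(target, base):
    """DR recovers lam_i from g^{lam_i} by trying all values below base."""
    for x in range(base):
        if pow(g, x, p) == target:
            return x
    raise ValueError("digit out of range")

lam = 48879                              # some preliminary wallet-ID share
digits = split(lam, B, ELL)
blinded = [pow(g, d, p) for d in digits] # what the DR decrypts: g^{lam_i}
recovered = [recover_digit(t, B) for t in blinded]
assert recombine(recovered, B) == lam
```

The design trade-off is visible here: a larger B means fewer ciphertexts per wallet but a longer brute-force loop for the DR.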

the user and the TSP engage in a Blum coin toss. First they randomly choose s′ ←R S and s′′ ←R S, respectively. Then the TSP creates a commitment c_ser′′ on s′′ using the equivocable and extractable commitment scheme C2. The TSP transmits c_ser′′ to the user, the user transmits s′ to the TSP, the TSP transmits s′′ and the opening of c_ser′′ to the user, and finally the user checks if the commitment c_ser′′ is valid. If so, the serial number is defined as s := s′ · s′′ and is known to both the user and the TSP.

For the last objective, the user generates preliminary TSP and RSU commitments. He commits to his preliminary wallet ID λ′ and his secret user identification key sk_U^id for the preliminary TSP commitment c_T′. For the preliminary RSU commitment c_R′, he commits to his preliminary wallet ID λ′, the balance b := 0, a fresh double-spending mask u_1^next and the PRF counter x := 0. He then computes a proof showing that these commitments are formed correctly and bound to his public identification key. In the proof it is also shown that the encryptions e_0, …, e_ℓ, e* are formed correctly and that each λ_i′ is smaller than B. More precisely, P1 is used to compute a proof π for a statement stmnt from the language L_gp^(1) defined by

L_gp^(1) := { (pk_U^id, pk_DR, e*, e_0, …, e_ℓ, c_R′, c_T′) |
    ∃ λ′, λ_0′, …, λ_ℓ′, r*, r_1, …, r_ℓ ∈ Z_p; Λ′, Λ_0′, …, Λ_ℓ′, U_1^next, d_R′, d_T′ ∈ G_1; SK_U^id ∈ G_2 :
        e(pk_U^id, g_2) = e(g_1, SK_U^id)
        ∧ C1.Open(CRS1_com, (Λ′, pk_U^id), c_T′, d_T′) = 1
        ∧ C1.Open(CRS1_com, (Λ′, 1, U_1^next, 1), c_R′, d_R′) = 1
        ∧ Λ′ = g_1^{λ′} ∧ λ′ = Σ_{i=0}^{ℓ} λ_i′ · B^i
        ∧ e* = E.Enc(pk_DR, Λ′; r*)
        ∧ ∀i ∈ {0, …, ℓ}: λ_i′ ∈ {0, …, B−1} ∧ e_i = E.Enc(pk_DR, Λ_i′; r_i) ∧ Λ_i′ = g_1^{λ_i′} }    (2)

Note that the first equation in Eq. (2) actually proves the knowledge of g_2^{sk_U^id}²³ (rather than sk_U^id itself). However,

computing g_2^{sk_U^id} without knowing sk_U^id (only given pk_U^id) is assumed to be a hard problem (Co-CDH). The user then transmits his public key, the proof and the two preliminary commitments to the TSP. The TSP first checks if the user's public key is contained in the TSP blacklist bl_T. If yes, the protocol is aborted and the user does not get a fresh wallet. Else, the protocol continues. Then the TSP verifies the signature σ* and the proof π. The TSP then uses the homomorphic property of the commitment scheme to create the final TSP and RSU commitments. It adds λ′′ to the preliminary TSP commitment to create a TSP commitment c_T on (λ, sk_U^id). It adds λ′′ to the preliminary RSU commitment and sets the PRF counter to 1 to create an RSU commitment c_R on (λ, 0, u_1^next, 1). The TSP also creates a signature σ_T ← S.Sgn(sk_T^T, (c_T, a_U)) and a signature σ_R ← S.Sgn(sk_T^R, (c_R, s)). The TSP then sends its preliminary wallet ID λ′′ and the TSP and RSU commitments along with their openings and signatures to the user. The user then assembles his wallet

τ := (s, φ := PRF(λ, 0), x^next := 1, λ := λ′ + λ′′, a_U, c_R, d_R, σ_R, cert_T^R, c_T, d_T, b := 0, u_1^next).

At the end of the protocol, the user returns the wallet τ. The TSP returns the serial number s, the user's public identification key pk_U^id and the hidden user trapdoor htd.
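The commit–reveal coin toss used above to fix the serial number can be sketched as follows. Two assumptions for illustration: a hash-based commitment stands in for C2 (the paper's C2 is equivocable and extractable, which a plain hash commitment is not), and integers modulo a toy prime stand in for elements of S = G_1.

```python
import hashlib
import secrets

p = 2**127 - 1   # toy order of the serial-number space (assumption)

def commit(value):
    """Hash commitment: a stand-in for the paper's scheme C2."""
    opening = secrets.token_bytes(32)
    c = hashlib.sha256(opening + value.to_bytes(16, "big")).digest()
    return c, opening

def check_open(c, opening, value):
    return c == hashlib.sha256(opening + value.to_bytes(16, "big")).digest()

# Blum coin toss for the serial number:
s_tsp = secrets.randbelow(p)      # TSP picks s''
c, d = commit(s_tsp)              # TSP -> user: commitment on s''
s_user = secrets.randbelow(p)     # user -> TSP: s'
# TSP -> user: s'' and the opening d; the user verifies before accepting.
assert check_open(c, d, s_tsp)
s = (s_user * s_tsp) % p          # joint serial number s := s' * s''
```

Committing before seeing s′ prevents the TSP from steering s; the binding check prevents it from changing s′′ afterwards.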

²³ Note that proving a statement ∃ sk_U^id ∈ Z_p : pk_U^id = g_1^{sk_U^id} instead would not help, as we can only extract g_1^{sk_U^id} from the proof.


UC-Protocol π_P4TC (cont.) – Task Debt Accumulation

User input: (pay_toll, s^prev)    RSU input: (pay_toll, bl_R)
(1) At the user side:
  • Load the internally recorded (pk_U, sk_U).⊥
  • Receive pk_T from the bulletin-board G_bb for PID pid_T.⊥
  • Load the internally recorded token τ^prev for serial number s^prev.⊥
(2) At the RSU side:
  • Load the internally recorded (pk_R, sk_R).⊥
  • Load the internally recorded cert_R.⊥
  • Receive pk_T from the bulletin-board G_bb for PID pid_T.⊥
(3) Both sides: Run the code of DebtAccumulation between the user and the RSU (see Fig. 32)
  ⟨(τ, ω_U^pp), (a_U, a_R^prev, ω^dsp, ω^bl, ω_R^pp)⟩ ← DebtAccumulation⟨U(pk_T, pk_U, sk_U, τ^prev), R^{O_pricing(·,·,·)}(pk_T, cert_R, sk_R, bl_R)⟩
  and forward calls to the pricing oracle O_pricing of the form (a_U, a_R, a_R^prev) to the adversary and pass the result p back.
(4) At the user side:
  • Run the code of WalletVerification(pk_T, pk_U, τ) (see Fig. 41).
  • If WalletVerification returns 0, output ⊥ and abort.
  • Record τ and ω_U^pp internally.
  • Parse s, cert_R, p and b from τ.
  • Parse a_R from cert_R.
(5) At the RSU side:
  • Record ω^dsp, ω^bl and ω_R^pp internally.
  • Parse s from ω_R^pp.
  • Parse φ from ω^bl.
User output: (s, a_R, p, b)    RSU output: (s, φ, a_U, a_R^prev)
⊥ If this does not exist, output ⊥ and abort.

Fig. 31. The UC-protocol π_P4TC (cont. from Fig. 17)

D.7 Debt Accumulation

When a driving car passes an RSU, the Debt Accumulation task is executed between U and R. The task is presented in two parts: a wrapper protocol π_P4TC (see Fig. 31) and a core protocol DebtAccumulation (see Fig. 32). The wrapper protocol interacts with other UC entities, pre-processes the input, post-processes the output and lets the user execute the WalletVerification algorithm (see Fig. 41) after the core protocol has terminated. The wrapper protocol internally invokes the core protocol

⟨(τ, ω_U^pp), (a_U, a_R^prev, ω^dsp, ω^bl, ω_R^pp)⟩ ← DebtAccumulation⟨U(pk_T, pk_U, sk_U, τ^prev), R^{O_pricing(·,·,·)}(pk_T, cert_R, sk_R, bl_R)⟩.


Debt Accumulation Core Protocol — participants U(pk_T, pk_U, sk_U, τ^prev) and R^{O_pricing(·,·,·)}(pk_T, cert_R, sk_R, bl_R):

U: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
   parse ((pk_U^id, pk_U^auth), (sk_U^id, sk_U^auth)) := (pk_U, sk_U)
   parse (s^prev, φ^prev, x, λ, a_U, c_R^prev, d_R^prev, σ_R^prev, cert_R^prev, c_T, d_T, σ_T, b^prev, u_1) := τ^prev
   parse (pk_R^prev, a_R^prev, σ_R^{cert,prev}) := cert_R^prev
   φ := PRF(λ, x)
   s′ ←R S;  u_1^next ←R Z_p
R: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
   parse (pk_R, a_R, σ_R^cert) := cert_R
   s′′ ←R S;  u_2 ←R Z_p
   (c_ser′′, d_ser′′) ← C2.Com(CRS2_com, s′′)
   send u_2, c_ser′′, cert_R to U
U: parse (pk_R, a_R, σ_R^cert) := cert_R
   if S.Vfy(pk_T^cert, σ_R^cert, (pk_R, a_R)) = 0: return ⊥
   t := sk_U^id · u_2 + u_1 mod p
   (c_R′, d_R′) ← C1.Com(CRS1_com, (λ, b^prev, u_1^next, x))
   (c_hid, d_hid) ← C1.Com(CRS1_com, sk_U^id)
   stmnt := (pk_T^T, pk_T^cert, φ, a_U, a_R^prev, c_hid, c_R′, t, u_2)
   wit := (x, λ, sk_U^id, u_1, s^prev, φ^prev, g_1^x, g_1^λ, pk_U^id, g_1^{b^prev}, g_1^{u_1}, g_1^{u_1^next},
           d_hid, d_R^prev, d_R′, d_T, pk_R^prev, c_R^prev, c_T, σ_R^prev, σ_R^{cert,prev}, σ_T)
   π ← P2.Prove(CRS_pok, stmnt, wit)
   send s′, π, φ, a_U, a_R^prev, c_hid, c_R′, t to R
R: stmnt := (pk_T^T, pk_T^cert, φ, a_U, a_R^prev, c_hid, c_R′, t, u_2)
   if P2.Vfy(CRS_pok, stmnt, π) = 0: return ⊥
   if φ ∈ bl_R: return blacklisted
   // obtain the price from the pricing oracle, based on a_U, a_R, a_R^prev
   // and possibly other publicly verifiable, environmental information
   p ← O_pricing(a_U, a_R, a_R^prev)
   s := s′ · s′′
   (c_R′′, d_R′′) ← C1.Com(CRS1_com, (0, p, 0, 1))
   c_R := c_R′ · c_R′′
   σ_R ← S.Sgn(sk_R, (c_R, s))
   send s′′, d_ser′′, c_R, d_R′′, σ_R, p to U
U: if C2.Open(CRS2_com, s′′, c_ser′′, d_ser′′) = 0: return ⊥
   s := s′ · s′′
   d_R := d_R′ · d_R′′
   b := b^prev + p
   x^next := x + 1
   τ := (s, φ, x^next, λ, a_U, c_R, d_R, σ_R, cert_R, c_T, d_T, σ_T, b, u_1^next)
   ω_U^pp := (s, c_hid, d_hid)
   return (τ, ω_U^pp)
R: ω^bl := (φ, p)
   ω^dsp := (φ, t, u_2)
   ω_R^pp := (s, c_hid)
   return (a_U, a_R^prev, ω^dsp, ω^bl, ω_R^pp)

Fig. 32. Debt Accumulation Core Protocol


The user gets his public and private key, the public key of the TSP and his current wallet

τ^prev := (s^prev, φ^prev, x, λ, a_U, c_R^prev, d_R^prev, σ_R^prev, cert_R^prev, c_T, d_T, σ_T, b^prev, u_1)

as input. The RSU gets its own secret key and certificate, the public key of the TSP and the RSU blacklist bl_R as input. It also has access to a pricing oracle O_pricing(·,·,·), which helps it to determine the price the user has to pay. In the protocol the RSU and the user utilize a Blum coin toss to jointly compute a fresh and random serial number s. The coin toss is analogous to WalletIssuing and therefore omitted in this protocol description. The RSU starts the protocol by sending its certificate cert_R and a fresh double-spending randomness u_2 to the user. The user checks the validity of the certificate and uses the randomness to calculate the double-spending tag t := sk_U^id · u_2 + u_1 mod p. He then calculates the fraud detection ID for the current transaction as φ := PRF(λ, x). The user then proceeds by preparing the updated wallet. To this end he first chooses a fresh double-spending mask u_1^next and executes (c_R′, d_R′) ← C1.Com(CRS1_com, (λ, b^prev, u_1^next, x)) to commit to his wallet ID, the current balance, the fresh double-spending mask and the current counter. He also executes (c_hid, d_hid) ← C1.Com(CRS1_com, sk_U^id) to create a fresh commitment on his secret identification key. This commitment can be used at a later point in the Prove Participation task (cf. Figs. 35 and 36) to prove to the TSP that the user behaved honestly in this transaction. The user continues by using P2 to compute a proof π for a statement stmnt from the language L_gp^(2) defined by

L_gp^(2) := { (pk_T^T, pk_T^cert, φ, a_U, a_R^prev, c_hid, c_R′, t, u_2) |
    ∃ x, λ, sk_U^id, u_1 ∈ Z_p; s^prev, φ^prev, X, Λ, pk_U^id, B^prev, U_1, U_1^next, d_hid, d_R^prev, d_R′, d_T ∈ G_1;
      pk_R^prev ∈ G_1^3; c_R^prev, c_T ∈ G_2; σ_R^prev, σ_R^{cert,prev}, σ_T ∈ G_2^2 × G_1 :
        C1.Open(CRS1_com, (Λ, pk_U^id), c_T, d_T) = 1
        ∧ C1.Open(CRS1_com, pk_U^id, c_hid, d_hid) = 1
        ∧ C1.Open(CRS1_com, (Λ, B^prev, U_1, X), c_R^prev, d_R^prev) = 1
        ∧ C1.Open(CRS1_com, (Λ, B^prev, U_1^next, X), c_R′, d_R′) = 1
        ∧ S.Vfy(pk_T^T, σ_T, (c_T, a_U)) = 1
        ∧ S.Vfy(pk_R^prev, σ_R^prev, (c_R^prev, s^prev)) = 1
        ∧ S.Vfy(pk_T^cert, σ_R^{cert,prev}, (pk_R^prev, a_R^prev)) = 1
        ∧ φ^prev = PRF(λ, x − 1) ∧ φ = PRF(λ, x)
        ∧ t = sk_U^id · u_2 + u_1
        ∧ pk_U^id = g_1^{sk_U^id} ∧ U_1 = g_1^{u_1} ∧ X = g_1^x ∧ Λ = g_1^λ }    (3)

This proof essentially shows that the wallet τ^prev is valid, i.e., that the commitments c_T and c_R^prev are valid, bound to the user and have valid signatures, that the certificate cert_R^prev from the previous RSU is valid and that the fraud detection ID φ^prev from the last transaction has been computed correctly. It also shows that c_hid is valid and contains the secret user identification key, that c_R^prev and c_R′ contain the same values (except for the double-spending mask) and that the fraud detection ID φ and the double-spending tag t are computed correctly. The user then sends (π, φ, a_U, a_R^prev, c_hid, c_R′, t) to the RSU, who first checks if the proof π verifies and if the fraud detection ID φ is on the RSU blacklist bl_R. If one of the checks fails, the RSU aborts the communication with the user and takes certain measures. These measures should include instructing the connected camera to take a picture of the cheating vehicle. If the tests have been passed, the RSU calculates with the help of the pricing oracle the price p the user has to pay, depending on factors like the user's attributes, the time of day, the current traffic volume and the attributes of the current and previous RSU. Then the RSU does its part to update the user's wallet. It blindly adds the price p to the wallet balance b and increases the PRF counter x by 1 by calculating (c_R′′, d_R′′) ← C1.Com(CRS1_com, (0, p, 0, 1)). Then it computes the new RSU commitment c_R by adding c_R′ and c_R′′ (remember that C1 is homomorphic) and


UC-Protocol π_P4TC (cont.) – Task Debt Clearance

User input: (clear_debt, s^prev)    TSP input: (clear_debt)
(1) At the user side:
  • Load the internally recorded (pk_U, sk_U).⊥
  • Receive pk_T from the bulletin-board G_bb for PID pid_T.⊥
  • Load the internally recorded token τ^prev for serial number s^prev.⊥
(2) At the TSP side:
  • Load the internally recorded (pk_T, sk_T).⊥
(3) Both sides: Run the code of DebtClearance between the user and the TSP (see Fig. 34)
  ⟨(b^bill), (pk_U^id, ω^bl, ω^dsp)⟩ ← DebtClearance⟨U(pk_T, pk_U, sk_U, τ^prev), T(pk_T)⟩.
(4) At the TSP side:
  • Receive pid_U from the bulletin-board G_bb for public key pk_U^id.
  • Record ω^bl and ω^dsp internally.
  • Parse φ and −b^bill from ω^bl.
User output: (b^bill)    TSP output: (pid_U, φ, b^bill)
⊥ If this does not exist, output ⊥ and abort.

Fig. 33. The UC-protocol π_P4TC (cont. from Fig. 17)

also signs it along with the serial number s. The RSU then sends (c_R, d_R′′, σ_R, p) to the user. It also stores several pieces of transaction information: the blacklisting transaction information ω^bl := (φ, p) can be used at a later point in the User Blacklisting task (cf. Fig. 39) to calculate the total fee of a fraudulent user. The double-spending transaction information ω^dsp := (φ, t, u_2) enables the TSP to identify the user if he uses the old version of the wallet (with unchanged balance) in another transaction (cf. Fig. 37). The RSU prove participation transaction information ω_R^pp := (s, c_hid) can be used later in the Prove Participation task (cf. Figs. 35 and 36). The user can then calculate the remaining values needed to update the wallet, e.g., increasing the counter and the balance. Then he can construct the updated wallet τ := (s, φ, x^next, λ, a_U, c_R, d_R, σ_R, cert_R, c_T, d_T, σ_T, b, u_1^next). The user's output is the updated wallet along with the user prove participation transaction information ω_U^pp := (s, c_hid, d_hid). The RSU's outputs are the user's attributes a_U, the attributes a_R^prev of the RSU from the user's previous transaction and the three transaction information ω^bl, ω^dsp and ω_R^pp.

D.8 Debt Clearance

After the end of a billing period, the Debt Clearance task is executed between U and T. The task is presented in two parts: a wrapper protocol π_P4TC (see Fig. 33) and a core protocol DebtClearance (see Fig. 34). The wrapper protocol interacts with other UC entities, pre-processes the input and post-processes the output. The wrapper protocol internally invokes the core protocol

⟨(b^bill), (pk_U^id, ω^bl, ω^dsp)⟩ ← DebtClearance⟨U(pk_T, pk_U, sk_U, τ^prev), T(pk_T)⟩.


Debt Clearance Core Protocol — participants U(pk_T, pk_U, sk_U, τ^prev) and T(pk_T):

U: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
   parse ((pk_U^id, pk_U^auth), (sk_U^id, sk_U^auth)) := (pk_U, sk_U)
   parse (s^prev, φ^prev, x, λ, a_U, c_R^prev, d_R^prev, σ_R^prev, cert_R^prev, c_T, d_T, σ_T, b^prev, u_1) := τ^prev
   parse (pk_R^prev, a_R^prev, σ_R^{cert,prev}) := cert_R^prev
   φ := PRF(λ, x)
T: parse (pk_T^cert, pk_T^T, pk_T^R) := pk_T
   u_2 ←R Z_p
   send u_2 to U
U: t := sk_U^id · u_2 + u_1 mod p
   stmnt := (pk_U^id, pk_T^T, pk_T^cert, φ, a_U, a_R^prev, g_1^{b^prev}, t, u_2)
   wit := (x, λ, sk_U^id, u_1, s^prev, φ^prev, g_1^x, g_1^λ, g_1^{u_1}, d_R^prev, d_T,
           pk_R^prev, c_R^prev, c_T, σ_R^prev, σ_R^{cert,prev}, σ_T)
   π ← P3.Prove(CRS_pok, stmnt, wit)
   send pk_U, π, φ, a_U, a_R^prev, b^prev, t to T
T: parse (pk_U^id, pk_U^auth) := pk_U
   stmnt := (pk_U^id, pk_T^T, pk_T^cert, φ, a_U, a_R^prev, g_1^{b^prev}, t, u_2)
   if P3.Vfy(CRS_pok, stmnt, π) = 0: return ⊥
   send OK to U
U: b^bill := b^prev
   return (b^bill)
T: b^bill := b^prev
   ω^bl := (φ, −b^bill)
   ω^dsp := (φ, t, u_2)
   return (pk_U^id, ω^bl, ω^dsp)

Fig. 34. Debt Clearance Core Protocol

The user gets the public key pk_T of the TSP, his own public and private key (pk_U, sk_U) and his current wallet

τ^prev := (s^prev, φ^prev, x, λ, a_U, c_R^prev, d_R^prev, σ_R^prev, cert_R^prev, c_T, d_T, σ_T, b^prev, u_1)

as input. The TSP gets its own public key pk_T as input. The protocol is similar to the DebtAccumulation protocol, with the difference that the user here is not anonymous, the wallet balance is no secret and the TSP does not give the user an updated wallet. As in DebtAccumulation, the TSP first sends a fresh double-spending randomness u_2 to the user and the user calculates the double-spending tag t := sk_U^id · u_2 + u_1 mod p and the fraud detection ID φ for this transaction. He then continues by preparing a proof of knowledge. More precisely, P3 is used to compute a proof π for a statement


stmnt from the language L_gp^(3) defined by

L_gp^(3) := { (pk_U^id, pk_T^T, pk_T^cert, φ, a_U, a_R^prev, B^prev, t, u_2) |
    ∃ λ, x, u_1, sk_U^id ∈ Z_p; φ^prev, s^prev, X, Λ, U_1, d_R, d_T ∈ G_1; pk_R^prev ∈ G_1^3; c_R^prev, c_T ∈ G_2;
      σ_R^prev, σ_R^{cert,prev}, σ_T ∈ G_2^2 × G_1 :
        C1.Open(CRS1_com, (Λ, pk_U^id), c_T, d_T) = 1
        ∧ C1.Open(CRS1_com, (Λ, B^prev, U_1, X), c_R^prev, d_R) = 1
        ∧ S.Vfy(pk_T^T, σ_T, (c_T, a_U)) = 1
        ∧ S.Vfy(pk_R^prev, σ_R^prev, (c_R^prev, s^prev)) = 1
        ∧ S.Vfy(pk_T^cert, σ_R^{cert,prev}, (pk_R^prev, a_R^prev)) = 1
        ∧ φ^prev = PRF(λ, x − 1) ∧ φ = PRF(λ, x)
        ∧ t = sk_U^id · u_2 + u_1
        ∧ pk_U^id = g_1^{sk_U^id} ∧ U_1 = g_1^{u_1} ∧ X = g_1^x ∧ Λ = g_1^λ }    (4)

The proof is a simplified version of the one in the DebtAccumulation protocol. The balance and the public user identification key are now part of the statement rather than the witness, and nothing needs to be proven about c_R′ and c_hid. The user then sends (pk_U, π, φ, a_U, a_R^prev, b^prev, t) to the TSP. Unlike in DebtAccumulation, the balance and the user's public key are transmitted in the clear. The TSP then checks the validity of the proof and signals to the user that the proof verified successfully. At the end of the protocol, the TSP outputs the user's public identification key pk_U^id, the blacklisting transaction information ω^bl := (φ, −b^prev) and the double-spending transaction information ω^dsp := (φ, t, u_2). The user simply outputs his final debt b^prev for this billing period. Note that the wallet itself is discarded. It is expected that the user and the TSP execute the Wallet Issuing task next, to give the user a fresh wallet. After the protocol has ended, the TSP can issue an invoice to the user for the current billing period. The specifics of the actual payment process are out of scope.

D.9 Prove Participation

The Prove Participation task is used by a user to prove to the SA that he behaved honestly in a specific Debt Accumulation transaction. In the case that more than one vehicle is captured on a photograph taken by an RSU camera after a fraud occurred, this protocol can be used to identify the fraudulent driver.²⁴ The SA recovers the identities of the users captured on the photograph and executes the Prove Participation task with each of them. The user that is not able to prove that he honestly participated in a corresponding RSU transaction is found guilty. The task is presented in two parts: a wrapper protocol π_P4TC (see Fig. 35) and a core protocol ProveParticipation (see Fig. 36). In the wrapper protocol, the SA interacts with other UC entities and processes the set S_R^pp of all serial numbers that were recorded by the RSU that took the photo at roughly the time the photo was taken. In particular, the SA loads the internally recorded set Ω_R^pp of all RSU prove participation transaction information that contain serial numbers that are also in S_R^pp. Afterwards, the wrapper protocol invokes the core protocol

⟨(out_U), (out_SA)⟩ ← ProveParticipation⟨U(Ω_U^pp), SA(pk_U, S_R^pp, Ω_R^pp)⟩.

The SA gets the user's public key pk_U, the set S_R^pp of serial numbers that were observed roughly at the time the photograph was taken and the set of corresponding RSU prove participation transaction information Ω_R^pp as input. The user gets his internally recorded set Ω_U^pp of all prove participation transaction information as input.

²⁴ Of course, the task can also be used in the case that only one vehicle was captured on the photograph, to eliminate the possibility that the RSU falsely instructed the camera to take a photograph.


UC-Protocol π_P4TC (cont.) – Task Prove Participation

User input: (prove_participation)    SA input: (prove_participation, pid_U, S_R^pp)
(1) At the SA side:
  • Receive pk_U from the bulletin-board G_bb for PID pid_U.⊥
  • Load the internally recorded set Ω_R^pp of all prove participation transaction information with serial numbers in S_R^pp.
(2) At the user side:
  • Load the internally recorded set Ω_U^pp of all prove participation transaction information.
(3) Both sides: Run the code of ProveParticipation between the user and the SA (see Fig. 36)
  ⟨(out_U), (out_SA)⟩ ← ProveParticipation⟨U(Ω_U^pp), SA(pk_U, S_R^pp, Ω_R^pp)⟩.
User output: (out_U)    SA output: (out_SA)
⊥ If this does not exist, output ⊥ and abort.

Fig. 35. The UC-protocol π_P4TC (cont. from Fig. 17)

Prove Participation Core Protocol — participants U(Ω_U^pp) and SA(pk_U, S_R^pp, Ω_R^pp):

SA: parse (pk_U^id, pk_U^auth) := pk_U
    send S_R^pp to U
U:  if ∃ ω_U^pp = (s, c_hid, d_hid) ∈ Ω_U^pp with s ∈ S_R^pp then out_U := OK else out_U := NOK
    send s, c_hid, d_hid to SA
SA: out_SA := OK
    if ω_R^pp = (s, c_hid) ∉ Ω_R^pp: out_SA := NOK
    if C1.Open(CRS1_com, pk_U^id, c_hid, d_hid) = 0: out_SA := NOK
U:  return (out_U)
SA: return (out_SA)

Fig. 36. Prove Participation Core Protocol

The protocol itself is simple. The SA first sends S_R^pp to the user, who searches Ω_U^pp for a user prove participation transaction information entry ω_U^pp whose serial number s is also in S_R^pp. If one is found, the user's output bit out_U is set to OK; if none is found, out_U is set to NOK ("not ok"). The user then sends ω_U^pp := (s, c_hid, d_hid) to the SA, and the SA checks if (s, c_hid) ∈ Ω_R^pp and if d_hid is the opening of c_hid under pk_U^id. If both checks succeed, the SA's output bit out_SA is set to OK; if at least one check fails, out_SA is set to NOK. At the end of the protocol both parties output their bits out_U and out_SA, respectively.25 If the SA's output equals NOK, the user is found guilty and appropriate measures are taken (e.g., the user gets blacklisted).

UC-Protocol πP4TC (cont.) – Task Double-Spending Detection
TSP input: (scan_for_fraud, φ)
(1) Load the internally recorded set Ω_dsp of all double-spending transaction information.
(2) Pick transaction information ω_dsp = (φ, t, u_2) and ω_dsp' = (φ', t', u_2') from the database Ω_dsp, such that φ = φ' and u_2 ≠ u_2'.⊥
(3) sk_U^id := (t − t') · (u_2 − u_2')^{−1} mod p.
(4) pk_U^id := g_1^{sk_U^id}.
(5) Receive pid_U from the bulletin-board G_bb for key pk_U^id.⊥
(6) π := sk_U^id.
TSP output: (pid_U, π)
⊥ If this does not exist, output ⊥ and abort.

Fig. 37. The UC-protocol πP4TC (cont. from Fig. 17)
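The check logic of the Prove Participation exchange (Fig. 36) can be illustrated with a short sketch. This is not the paper's implementation: the structure-preserving commitment scheme C1 is replaced by a simple hash-based commitment, and the helper names (`commit`, `user_search`, `sa_verify`) are hypothetical.

```python
import hashlib
import secrets

def commit(pk_id: bytes):
    """Hash-based stand-in for C1.Com: returns (c_hid, d_hid)."""
    d_hid = secrets.token_bytes(16)
    return hashlib.sha256(pk_id + d_hid).digest(), d_hid

def open_commitment(pk_id: bytes, c_hid: bytes, d_hid: bytes) -> bool:
    """Stand-in for C1.Open."""
    return hashlib.sha256(pk_id + d_hid).digest() == c_hid

def user_search(omega_user, serials_rsu):
    """User side: find a recorded entry whose serial was captured."""
    for (s, c_hid, d_hid) in omega_user:
        if s in serials_rsu:
            return "OK", (s, c_hid, d_hid)
    return "NOK", None

def sa_verify(pk_id, omega_rsu, response):
    """SA side: the RSU must have recorded (s, c_hid) and the opening
    must bind the commitment to the user's identification key."""
    if response is None:
        return "NOK"
    s, c_hid, d_hid = response
    if (s, c_hid) not in omega_rsu:
        return "NOK"
    if not open_commitment(pk_id, c_hid, d_hid):
        return "NOK"
    return "OK"
```

Both sides output OK exactly when the user can present a recorded entry matching one of the captured serial numbers and its commitment opens to his identification key.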

D.10 Double-Spending Detection

The double-spending transaction information ω_dsp collected by the RSUs is periodically transmitted to the TSP's database, which is regularly checked for two double-spending transaction information entries associated with the same serial number. If the database contains two such records, the Double-Spending Detection task (see Fig. 37) can be used by the TSP to extract the public identification key of the user these double-spending transaction information entries belong to, as well as a proof (namely his secret identification key) that the user is guilty. In particular, the task gets a serial number s as input and searches the internal database for two double-spending transaction information entries ω_dsp = (φ, t, u_2) and ω_dsp' = (φ', t', u_2') that contain the same serial number s = s', but not the same double-spending randomness, i.e., u_2 ≠ u_2'. Then the fraudulent user's secret identification key can be recovered as sk_U^id := (t − t') · (u_2 − u_2')^{−1} mod p. His public identification key is then pk_U^id := g_1^{sk_U^id}. The secret identification key sk_U^id can be used as a proof of guilt in the Guilt Verification task (cf. Fig. 38).

Every user who is convicted of double-spending is added to the blacklist via the User Blacklisting task (cf. Figs. 39 and 40) and additional measures are taken (these are out of scope).
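The extraction step is plain modular arithmetic and can be sketched as follows. The linear tag shape t = sk_U^id · u_2 + r and the toy prime are illustrative assumptions, not the paper's exact equations; the point is that two tags over the same wallet state with different randomness u_2 leak the secret key.

```python
# p stands in for the (much larger) prime group order used in P4TC.
p = 2**61 - 1

def double_spend_tag(sk_id: int, u2: int, r: int) -> int:
    """Hypothetical linear tag t = sk_id*u2 + r (mod p); r is bound to the
    wallet state, so spending the same state twice reuses r with fresh u2."""
    return (sk_id * u2 + r) % p

def extract_sk(t: int, u2: int, t_prime: int, u2_prime: int) -> int:
    """Step (3) of Fig. 37: sk_id = (t - t') * (u2 - u2')^{-1} mod p."""
    return (t - t_prime) * pow(u2 - u2_prime, -1, p) % p
```

Subtracting the two tags cancels the state-bound term r, and dividing by the difference of the randomness values isolates sk_id.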

D.11 Guilt Verification

Whether a user is indeed guilty of double-spending can be verified using the Guilt Verification task depicted in Fig. 38. This algorithm may be run by anyone, in particular by the judiciary. Essentially, the algorithm checks if a given public user identification key pk_U^id and a proof of guilt π match. This is easily accomplished because they match if and only if g_1^π = pk_U^id holds. This equation holds if and only if π equals the user's secret identification key sk_U^id that was recovered using the Double-Spending Detection task (cf. Fig. 37).

25 Note that after a successful (both parties output OK) execution of the Prove Participation protocol a single transaction can be linked to the user. But as long as the user does not get guiltlessly photographed at every RSU he passes, tracking is not possible.

UC-Protocol πP4TC (cont.) – Task Guilt Verification
Party input: (verify_guilt, pid_U, π)
(1) Receive pk_U from the bulletin-board G_bb for PID pid_U.⊥
(2) Parse pk_U^id from pk_U.
(3) If g_1^π = pk_U^id, then out := OK, else out := NOK.
Party output: (out)
⊥ If this does not exist, output ⊥ and abort.

Fig. 38. The UC-protocol πP4TC (cont. from Fig. 17)
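Guilt Verification thus reduces to a single exponentiation check. A toy sketch, where a multiplicative group modulo a stand-in prime replaces the pairing group G_1:

```python
# q and g1 are illustrative stand-ins for the group G1 and its generator.
q = 2**127 - 1  # a Mersenne prime, replacing the pairing-group order
g1 = 3

def verify_guilt(pk_id: int, pi: int) -> str:
    """Step (3) of Fig. 38: OK iff g1^pi equals the public key."""
    return "OK" if pow(g1, pi, q) == pk_id else "NOK"
```

Since no secret key is needed, any party holding (pk_U^id, π) can run this check.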

D.12 User Blacklisting

The User Blacklisting task executed between the DR and TSP is used to put a user on the blacklist. There are several reasons why a user is entered on the blacklist:
(1) A user did not submit his balance at the end of the billing period.
(2) A user did not physically pay his debt after submitting his balance.
(3) A user has been convicted of double-spending.
(4) The wallet (or the vehicle including the wallet) of a user has been stolen and the user wants to prevent the thief from paying tolls from his account.

Users who are on the blacklist are unable to get issued a new wallet and they also get photographed at every RSU they pass. They can also be punished by some other means (which are out of scope). The User Blacklisting task is presented in two parts: a wrapper protocol πP4TC (see Fig. 39) and a core protocol UserBlacklisting (see Fig. 40). The wrapper protocol interacts with other UC entities, pre-processes the input and afterwards internally invokes the core protocol

((OK), (Φ_U)) ← UserBlacklisting⟨DR(pk_DR, sk_DR), T(pk_U, HTD)⟩.

The DR gets its public and private key (pk_DR, sk_DR) as input, and the TSP gets the public key pk_U of the user to be blacklisted along with a set HTD_U containing all his hidden user trapdoors from the current billing period as input.26 At the beginning of the protocol, the TSP sends its input pk_U, HTD to the DR. The DR then recovers for every htd := (λ'', e_0, …, e_ℓ, e*, σ*) ∈ HTD the corresponding wallet ID λ. For that purpose, the DR first checks if the signature σ* of (e_0, …, e_ℓ, e*) under the user's authentication key verifies. This ensures that the DR cannot be tricked by the TSP into recovering the wallet ID of a different user.

The DR proceeds by decrypting every e_i to get (g_1^{λ'_0}, …, g_1^{λ'_ℓ}). Since each λ'_i is small (λ'_i < B), the DR can compute the discrete logarithms in a reasonable amount of time to recover (λ'_0, …, λ'_ℓ). This algorithm is also not time-critical and is expected to be executed only a few times per billing period; therefore the amount of required computation is acceptable. The DR then calculates the user-chosen part of the wallet ID as

26 In the case that a user owns more than one vehicle he can have more than one wallet, and hence more than one hidden user trapdoor is stored at the TSP for this user.


UC-Protocol πP4TC (cont.) – Task User Blacklisting
DR input: (blacklist_user)
TSP input: (blacklist_user, pid_U)
(1) At the DR side:
  • Load the internally recorded (pk_DR, sk_DR).⊥
(2) At the TSP side:
  • Receive pk_U from the bulletin-board G_bb for PID pid_U.⊥
  • Load the internally recorded set AHTD of all augmented hidden trapdoors and set HTD := {htd | (pid_U, ·, htd) ∈ AHTD}.
(3) Both sides: Run the code of UserBlacklisting between the DR and the TSP (see Fig. 40)
  ((OK), (Φ_U)) ← UserBlacklisting⟨DR(pk_DR, sk_DR), T(pk_U, HTD)⟩.
(4) At the TSP side:
  • Load the internally recorded set Ω_bl of all blacklisting transaction information.
  • Let Ω_U^bl be the subset of transaction entries ω_bl = (φ, p) with serial numbers φ ∈ Φ_U.
  • b_bill := Σ_{ω_bl ∈ Ω_U^bl} p.
DR output: (OK)
TSP output: (b_bill, Φ_U)
⊥ If this does not exist, output ⊥ and abort.

Fig. 39. The UC-protocol πP4TC (cont. from Fig. 17)

λ' := Σ_{i=0}^{ℓ} λ'_i · B^i and checks if e* is indeed an encryption of g_1^{λ'}. If the check succeeds, the wallet ID can be recovered as λ := λ' + λ'' (remember that λ'' is directly stored in htd). The set Φ_λ := {PRF(λ, 0), …, PRF(λ, x_bl^R)} of fraud detection IDs is then sent to the TSP (for every htd ∈ HTD). The blacklist parameter x_bl^R is chosen in such a way that a user is expected to perform at most x_bl^R executions of the Debt Accumulation task in a single billing period. The DR's output of the core protocol is simply OK, while the TSP's output is Φ_U := ∪_{htd ∈ HTD} Φ_λ.

After the termination of the core protocol, the TSP calculates in the wrapper protocol the total toll amount the user owes for the billing period in question. To this end, the TSP uses the internally recorded set Ω_U^bl of all blacklisting transaction information for this user (the TSP can calculate this set with its knowledge of Φ_U). By summing up all the prices inside the blacklisting transaction information, the TSP can calculate the total fee b_bill the user owes.

After the termination of the wrapper protocol, the fraud detection IDs in Φ_U are added to the RSU blacklist bl_R and the user's public identification key pk_U^id is added to the TSP blacklist bl_T. The TSP blacklist bl_T is used by the TSP in the Wallet Issuing task to ensure that no user who did not pay his invoice receives a fresh wallet. In the Debt Accumulation task an RSU checks if the fraud detection ID that the current user presents is on the RSU blacklist bl_R.
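The chunk-wise wallet-ID recovery can be sketched as follows. The group parameters and the chunk bound B are illustrative stand-ins, and `small_dlog` is a hypothetical helper; since every exponent is below B, a brute-force discrete log suffices.

```python
q = 2**61 - 1   # stand-in group order
g1 = 3          # stand-in generator of G1
B = 2**10       # chunk bound: every lambda'_i is below B

def small_dlog(h: int) -> int:
    """Brute-force discrete log for exponents below B (feasible since B is
    small; a real DR might use baby-step giant-step instead)."""
    acc = 1
    for x in range(B):
        if acc == h:
            return x
        acc = acc * g1 % q
    raise ValueError("exponent not below B")

def recover_wallet_id(decrypted_chunks, lam_dprime: int) -> int:
    """decrypted_chunks are the g1^{lambda'_i} obtained via E.Dec;
    lambda = sum_i lambda'_i * B^i + lambda''."""
    lam_prime = sum(small_dlog(h) * B**i
                    for i, h in enumerate(decrypted_chunks))
    return lam_prime + lam_dprime
```

Splitting λ' into base-B chunks is exactly what keeps each discrete logarithm small enough to be computed in reasonable time.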

D.13 Wallet Verification

The WalletVerification algorithm is depicted in Fig. 41. A user can verify with this algorithm that the wallet he stores at the end of a transaction is valid. In particular, the algorithm verifies that the commitments c_T and c_R are valid and contain the values they are supposed to contain, that σ_T is a valid signature under sk_T^T of c_T and a_U, that σ_R is a valid signature under sk_R of c_R and s, that the certificate cert_R containing pk_R is valid, and that the fraud detection ID φ was calculated using the correct values. Of course, this algorithm can also be run by a third party to verify the validity of a wallet (since no secret keys are needed to run this algorithm).

DR(pk_DR, sk_DR) ⟷ T(pk_U, HTD):
T → DR: pk_U, HTD
DR: for all htd ∈ HTD:
  parse (λ'', c_T'', d_T'', e_0, …, e_ℓ, e*, σ*) := htd
  parse (pk_U^id, pk_U^auth) := pk_U
  if S.Vfy(pk_U^auth, σ*, (c_T'', e_0, …, e_ℓ, e*)) = 0 then return ⊥
  if C1.Open(CRS_com^1, (g_1^{λ''}, 0), c_T'', d_T'') = 0 then return ⊥
  λ' := 0
  for i from 0 to ℓ:
    λ'_i := DLOG(E.Dec(sk_DR, e_i))
    λ' := λ' + λ'_i · B^i
  if g_1^{λ'} ≠ E.Dec(sk_DR, e*) then return ⊥
  λ := λ' + λ''
  Φ_λ := {PRF(λ, 0), …, PRF(λ, x_bl^R)}
DR: Φ_U := ∪_{htd ∈ HTD} Φ_λ
DR → T: Φ_U
DR: return (OK); T: return (Φ_U)

Fig. 40. User Blacklisting Core Protocol

E SECURITY PROOF

In this appendix we show that πP4TC UC-realizes FP4TC in the (F_CRS, G_bb)-hybrid model under static corruption. More precisely, we show the following theorem:

Theorem E.1 (Security Statement). Assume that the SXDH problem is hard for gp := (G_1, G_2, G_T, e, p, g_1, g_2), the Co-CDH problem is hard for (G_1, G_2), the n_PRF-DDHI problem is hard for G_1, and the DLOG problem is hard for G_1, and that our building blocks (NIZK, commitment schemes, signature scheme, encryption schemes and PRF) are instantiated as described in Appendix A.2. Then

π_P4TC^{F_CRS, G_bb} ≥_UC F^{G_bb},


WalletVerification(pk_T, pk_U, τ)
parse (pk_T^T, pk_T^cert, pk_T^R) := pk_T
parse (pk_U^id, pk_U^auth) := pk_U
parse (s, φ, x_next, λ, a_U, c_R, d_R, σ_R, cert_R, c_T, d_T, σ_T, b, u_1^next) := τ
parse (pk_R, a_R, σ_R^cert) := cert_R
if C1.Open(CRS_com^1, (g_1^λ, pk_U^id), c_T, d_T) = 0 ∨
   S.Vfy(pk_T^T, σ_T, (c_T, a_U)) = 0 ∨
   C1.Open(CRS_com^1, (g_1^λ, g_1^b, g_1^{u_1^next}, g_1^{x_next}), c_R, d_R) = 0 ∨
   S.Vfy(pk_R, σ_R, (c_R, s)) = 0 ∨
   S.Vfy(pk_T^cert, σ_R^cert, (pk_R, a_R)) = 0 ∨
   PRF(λ, x_next − 1) ≠ φ
then return 0
else return 1

Fig. 41. Algorithm for Wallet Verification
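Structurally, WalletVerification is a conjunction of six independent checks over the parsed token. A sketch with stubbed primitives (all interfaces here are hypothetical; the real checks are C1.Open, S.Vfy and the PRF evaluation of Fig. 41):

```python
from typing import NamedTuple

class Token(NamedTuple):
    s: bytes
    phi: object      # fraud detection ID
    x_next: int
    lam: int
    a_U: bytes
    c_R: bytes
    d_R: bytes
    sigma_R: bytes
    cert_R: tuple    # (pk_R, a_R, sigma_cert)
    c_T: bytes
    d_T: bytes
    sigma_T: bytes
    b: int
    u1_next: int

def wallet_verification(tau: Token, open_T, open_R, vfy_T, vfy_R,
                        vfy_cert, prf) -> int:
    """Returns 1 iff all six checks of Fig. 41 succeed."""
    pk_R, a_R, sigma_cert = tau.cert_R
    checks = [
        open_T(tau.c_T, tau.d_T, (tau.lam,)),               # C1.Open on c_T
        vfy_T(tau.sigma_T, (tau.c_T, tau.a_U)),             # TSP sig on c_T
        open_R(tau.c_R, tau.d_R,
               (tau.lam, tau.b, tau.u1_next, tau.x_next)),  # C1.Open on c_R
        vfy_R(tau.sigma_R, (tau.c_R, tau.s)),               # RSU sig on c_R
        vfy_cert(sigma_cert, (pk_R, a_R)),                  # RSU certificate
        prf(tau.lam, tau.x_next - 1) == tau.phi,            # fraud det. ID
    ]
    return 1 if all(checks) else 0
```

Because only public keys enter the checks, the same routine works for a third-party verifier.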

if either only a subset of the users, or only a subset of the RSUs, the TSP and the SA, is statically corrupted.

Please note that the hardness of the Co-CDH problem and the DLOG problem is already implied by the SXDH assumption. For a discussion of the reasons, and of why this limited corruption model is not a severe restriction from a practical vantage point, see Appendix B.1. We prove Theorem E.1 in three steps:

• In Appendix E.1 we first show in Theorem E.21 that π_P4TC^{F_CRS, G_bb} correctly implements F^{G_bb}, if all parties follow the protocol honestly.
• In the next step, Theorem E.22 proves π_P4TC^{F_CRS, G_bb} ≥_UC F^{G_bb}, if only a subset of the users is statically corrupted. We call this case "System Security". Theorem E.22 is presented in Appendix E.2.
• In the last step, Theorem E.36 proves π_P4TC^{F_CRS, G_bb} ≥_UC F^{G_bb}, if only a subset of the RSUs is statically corrupted. We call this case "User Security and Privacy" and present it in Appendix E.3.

Proof of Theorem E.1. The theorem is immediately implied by Theorems E.21, E.22 and E.36. □

Please note that showing correctness separately is technically not necessary. Both Theorems E.22 and E.36 individually imply correctness, as the case in which all parties are honest is a special case of a scenario in which a subset of parties is corrupted. Showing correctness separately serves two purposes: Firstly, for didactic reasons a restricted setting (i.e., all parties are assumed to be honest) is convenient to introduce some notation and to present the underlying idea that is common to all following proofs. Secondly, having shown correctness as a special case beforehand helps to keep matters simple during the course of the proof of system security, because we can revert to the special case.

s_prev —(pid_R, p)→ [s, φ, x, λ, pid_U, b]

Fig. 42. An entry trdb ∈ TRDB visualized as an element of a directed graph

E.1 Proof of Correctness

In many papers where something is shown to be UC-secure, the considered protocols are rather simple (e.g., a commitment, an oblivious transfer, a coin toss) and correctness is mostly obvious. Our protocol, in contrast, is a complex system with polynomially many parties that can reactively interact forever, i.e., the protocol itself has no inherent exit point except that at some point the polynomially bounded runtime of the environment is exhausted.

In order to prove the security of our system later (see Appendices E.2 and E.3) we make use of a theoretical construct we call the Augmented Transaction Graph. This graph helps us to link the interactions of the parties across the various tasks our protocol provides. However, the Augmented Transaction Graph is also useful to prove correctness, and hence we use this proof to introduce the concept. In this section no security reduction occurs, because we assume all parties to be honest and solely show that the subroutine input/output of the main parties is indistinguishable between the ideal and the real model.

We start with a series of simple lemmas about the ideal functionality FP4TC which also help to develop a good conception of how the individual tasks/transactions are connected. Internally, FP4TC stores a pervasive database TRDB whose entries trdb are of the form

trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b).

This set can best be visualized as a directed graph in which each node represents the state of a user after the respective transaction, i.e., at the end of an execution of Wallet Issuing, Debt Accumulation or Debt Clearance, and the edges correspond to the transition from the previous to the next state. Each trdb entry represents a node together with an edge pointing to its predecessor node. The node is labeled with (s, φ, x, λ, pid_U, b) and identified by s. The edge to the predecessor is identified by (s_prev, s) and labeled with (pid_R, p). See Fig. 42 for a depiction.
Transaction entries or nodes that are inserted by Wallet Issuing do not have a predecessor, therefore s_prev = ⊥ and also p = 0 holds. All other tasks besides Wallet Issuing, Debt Accumulation and Debt Clearance do not alter the graph but only query it. We show that the graph is a directed forest, i.e., a set of directed trees. Wallet Issuing creates a new tree by inserting a new root node. Debt Accumulation and Debt Clearance extend a tree. Debt Clearance results in a leaf node from which no further extension is possible. As long as no double-spending occurs, each tree is a path graph.

Definition E.2 (Ideal Transaction Graph (informal)). The transaction database TRDB = {trdb_i} with trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b) is a directed, labeled graph as defined above. This graph is called the Ideal Transaction Graph.

Lemma E.3. The Ideal Transaction Graph TRDB is a forest.

Proof. TRDB is a forest if and only if it is cycle-free and every node has in-degree at most one. A new node is only inserted in the scope of Wallet Issuing, Debt Accumulation or Debt Clearance. We proceed by induction: The statement is correct for the empty TRDB. If Wallet Issuing is invoked, a new node with no predecessor is inserted. Moreover, the serial number s of the new node is randomly chosen from the set of unused serial numbers, i.e., it is unique and no existing node can point to the new node as its predecessor. If Debt Accumulation or Debt Clearance is invoked, a new node is inserted that points to an existing node. Again, the serial number s of the new node is randomly chosen from the set of unused serial numbers, i.e., it is unique and no existing node can point to the new node as its predecessor. Hence, no cycle can be closed. Since the only incoming edge of a node is defined by the stated predecessor s_prev (which may also be ⊥), each vertex has in-degree at most one. □
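A minimal executable model of the Ideal Transaction Graph may help to see the forest structure; the class and its method names are illustrative, while the field names follow the trdb entries from the text.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Trdb:
    s_prev: Optional[str]  # predecessor serial (None for root nodes)
    s: str                 # serial number, identifies the node
    x: int                 # depth within its tree
    lam: int               # wallet ID, constant per tree (Lemma E.4)
    p: int                 # price of this transaction (0 for roots)
    b: int                 # accumulated balance

class TransactionGraph:
    """Serials are drawn fresh, so in-degree <= 1 and no cycle can close:
    the graph is a forest (Lemma E.3)."""
    def __init__(self) -> None:
        self.nodes: Dict[str, Trdb] = {}

    def wallet_issuing(self, s: str, lam: int) -> None:
        assert s not in self.nodes  # fresh serial, cannot close a cycle
        self.nodes[s] = Trdb(None, s, 0, lam, 0, 0)

    def debt_accumulation(self, s_prev: str, s: str, price: int) -> None:
        assert s not in self.nodes
        prev = self.nodes[s_prev]
        # depth increments, wallet ID is inherited, and b = b_prev + p
        self.nodes[s] = Trdb(s_prev, s, prev.x + 1, prev.lam,
                             price, prev.b + price)
```

Each insertion maintains exactly the invariants proved below: depth x equals the path length to the root, the wallet ID is constant along the path, and the balance accumulates the prices.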


Lemma E.4. The wallet ID λ maps one-to-one and onto a connected component (i.e., tree) of the Ideal Transaction Graph.

Proof. "⇐=": Let trdb_i be an arbitrary node in TRDB and λ be its wallet ID. Furthermore, let trdb_i* be the root of the tree containing trdb_i. Then on the (unique) path from trdb_i* to trdb_i, every node apart from trdb_i* was inserted by means of either Debt Accumulation or Debt Clearance, both of which ensure that the inserted node has the same λ as its predecessor. By induction over the length of the path, trdb_i has the same wallet ID as trdb_i*, and hence the wallet ID is a locally constant function on TRDB.

"=⇒": For contradiction, assume there are two nodes trdb_i and trdb_j with equal wallet IDs λ_i = λ_j in two different connected components. Pick the root nodes trdb_i* and trdb_j* of their respective trees. By "⇐=" we get λ_i* = λ_i = λ_j = λ_j*, i.e., the root nodes have equal wallet IDs, too. Both root nodes are inserted in the scope of Wallet Issuing and the wallet ID is randomly drawn from the set of unused wallet IDs, i.e., they cannot both have the same wallet ID. Contradiction! □

Lemma E.5. Within a tree of the Ideal Transaction Graph the PID pid_U of the corresponding user is constant.

Proof. Same proof as "⇐=" in the proof of Lemma E.4. □

In other words, Lemma E.5 states that a wallet (a tree in TRDB) is always owned by a single, fixed user. But a user can own multiple wallets.

Lemma E.6. Within a tree of TRDB, every node trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b) has depth x, and all nodes of the same depth in the same tree have the same fraud detection ID φ. Conversely, nodes with the same fraud detection ID are in the same tree and have the same depth within this tree.

Proof. Proof by induction. The statement is true for the empty TRDB. In the scope of Wallet Issuing a new root node is inserted; Wallet Issuing sets x := 0 and an unused φ is chosen. In the scope of Debt Accumulation or Debt Clearance, x is calculated as x := x_prev + 1, where by induction x_prev is the depth of its predecessor. With respect to φ we note that when inserted, every node gets as fraud detection ID the value stored in f_Φ(λ, x), which only depends on the node's wallet ID and depth. When this value is set (in either Wallet Issuing, Debt Accumulation, Debt Clearance or User Blacklisting) it is chosen from the set of unused fraud detection IDs and is therefore unique for given λ and x. □

Lemma E.7. Let trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b) be an arbitrary but fixed node. If trdb is not a root, let trdb_prev = (s_prev,prev, s_prev, φ_prev, x_prev, λ, pid_U, pid_R^prev, p_prev, b_prev) be its predecessor. Then b = b_prev + p holds for non-root nodes, and p = ⊥, b = 0 for root nodes.

Proof. Same induction argument as in the proof of Lemma E.6. □

Before we start to show that πP4TC correctly implements FP4TC, we note two additional simple statements about the ideal functionality itself.

Lemma E.8 (Protection Against False Accusation).
(1) The task Double-Spending Detection returns a proof π ≠ ⊥ if and only if the user has committed double-spending.
(2) The task Guilt Verification returns OK if and only if its input (pid_U, π) has been output at a previous invocation of Double-Spending Detection.

Proof. The first part follows directly from the definition of the task Double-Spending Detection (cp. Fig. 14). Note for the second part that users are assumed to be honest. If f_Π(pid_U, π) is undefined, then out is set to NOK and the result is recorded (cp. Steps 2 and 3 in Fig. 15). Guilt Verification only returns OK if f_Π(pid_U, π) = OK has already been defined (cp. Step 1). As (in case of honest users) Guilt Verification extends f_Π by nothing but invalid proofs, Double-Spending Detection exclusively sets f_Π(pid_U, π) = OK (cp. Step 2 in Fig. 14). □


Lemma E.9 (Correctness of Blacklisting). Let U be an arbitrary but fixed user with PID pid_U. Under the assumption that U participates in fewer than x_bl^R transactions, i.e., in fewer than x_bl^R invocations of Wallet Issuing, Debt Accumulation and Debt Clearance, the following two statements hold:
(1) The set Φ_U returned to T by User Blacklisting contains all fraud detection IDs that have ever been used by U.
(2) Any invocation of Debt Accumulation for U with input bl_R = Φ_U aborts with message blacklisted.

Proof. (1) Let TRDB_U ⊆ TRDB be the subset of all transaction entries trdb = (·, ·, ·, ·, ·, pid_U, ·, p, ·) corresponding to pid_U and let L_U denote the set of wallet IDs occurring in TRDB_U. For λ ∈ L_U the depth of the tree associated to the wallet ID λ is given by x_λ := max{x | f_Φ(λ, x) ≠ ⊥}. If U with pid_U participated in fewer than x_bl^R transactions, then the maximum depth x_max = max_{λ ∈ L_U} x_λ is smaller than x_bl^R. The set of used fraud detection IDs is given by {f_Φ(λ, x) | λ ∈ L_U, 0 ≤ x ≤ x_λ}, which is a subset of Φ_U := {f_Φ(λ, x) | λ ∈ L_U, 0 ≤ x ≤ x_bl^R}.
(2) Let s_prev denote the serial number for which Debt Accumulation is invoked and let trdb_prev = (·, s_prev, φ_prev, x_prev, λ, …) be the corresponding transaction entry. By assumption x_prev < x_bl^R holds. As User Blacklisting has previously been called, φ = f_Φ(λ, x_prev + 1) is already fixed. Moreover, φ ∈ Φ_U = bl_R holds and thus Debt Accumulation aborts. □

After these preliminaries, which only dealt with the ideal functionality on its own, we are now ready to argue why πP4TC implements FP4TC correctly. Please note that to this extent all parties are assumed to be honest. The simulator S^correct_{π_P4TC} is depicted in Fig. 43. As before in Appendix B, we assume encrypted and one-sidedly authenticated channels.
Hence, the simulator is not required to simulate any messages, but only needs to provide the ideal functionality with implementation-specific values if asked to do so. This also implies that the only option for the environment Z to distinguish between the real and the ideal model is by means of different outputs from the different tasks. Luckily, these tasks either run independently from each other (e.g., registration) or their output is information-theoretically independent from any shared state (e.g., various OK-messages). Hence, only a handful of options remain, and those can be handled individually.

Lemma E.10 (Registration Correctness). Let Z^correct be an environment that does not corrupt any parties. Then the subroutine input/output of the main parties of πP4TC and FP4TC, resp., in the scope of the tasks DR/TSP/RSU/User Registration is perfectly indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S^correct_{π_P4TC}, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S^correct_{π_P4TC}, Z^correct}(1^n).

Proof. Obvious. S^correct_{π_P4TC} runs the real algorithms DRRegistration, TSPRegistration, RSURegistration, and UserRegistration in the ideal experiment. □

Lemma E.11 (RSU Certification Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that S is correct, the subroutine input/output of the main parties of πP4TC and FP4TC, resp., in the scope of the task RSU Certification is perfectly indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S^correct_{π_P4TC}, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S^correct_{π_P4TC}, Z^correct}(1^n).

Proof. Obvious. In the ideal model the TSP inputs a_R and the RSU outputs a_R again. In the real model, the TSP sends a_R to the RSU together with a signature. As S is correct and all parties are honest, the RSU does not abort. □


Simulator S^correct_{π_P4TC}

Setup: (1) Run the algorithm CRS ← Setup(1^n). (2) Record CRS.
DR Registration: Upon receiving (registering_dr, pid_DR), run (pk_DR, sk_DR) ← DRRegistration(CRS), return pk_DR to FP4TC and record pid_DR ↦ (pk_DR, sk_DR).
TSP Registration: Upon receiving (registering_tsp, pid_T, a_T), run (pk_T, sk_T, cert_T^R) ← TSPRegistration(CRS, a_T), return pk_T to FP4TC and record pid_T ↦ (pk_T, sk_T, cert_T^R).
RSU Registration: Upon receiving (registering_rsu, pid_R), run (pk_R, sk_R) ← RSURegistration(CRS), return pk_R to FP4TC and record pid_R ↦ (pk_R, sk_R).
User Registration: Upon receiving (registering_user, pid_U), run (pk_U, sk_U) ← UserRegistration(CRS), return pk_U to FP4TC and record pid_U ↦ (pk_U, sk_U).
RSU Certification: (nothing to do)
Wallet Issuing: (nothing to do)
Debt Accumulation: Upon being asked by FP4TC to provide ω_U^pp for s, look up pid_U ↦ (pk_U, sk_U), run (c_hid, d_hid) ← C1.Com(CRS_com^1, sk_U^id), and return ω_U^pp := (s, c_hid, d_hid).
Debt Clearance: (nothing to do)
Prove Participation: (nothing to do)
Double-Spending Detection: Upon being asked to provide a proof for pid_U, look up pid_U ↦ (pk_U, sk_U), parse sk_U^id from sk_U and return sk_U^id.
Guilt Verification: (nothing to do)
User Blacklisting: (nothing to do)

Fig. 43. The simulator for System Correctness

Lemma E.12 (Wallet Issuing Correctness). Let Z^correct be an environment that does not corrupt any parties and does not conduct blacklisting.27 Under the assumption that all building blocks are correct and PRF is a pseudo-random function, the subroutine input/output of the main parties of πP4TC and FP4TC, resp., in the scope of the task Wallet Issuing is perfectly indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S^correct_{π_P4TC}, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S^correct_{π_P4TC}, Z^correct}(1^n).

Proof. In the ideal model the TSP inputs a_U, which is output by the user. In the real model the TSP sends a_U together with a signature. As S is correct and all parties are honest, the user does not abort. In the ideal model the user outputs s, which is uniformly drawn. In the real model s is drawn by means of a Blum coin toss. □

Until now, all previous lemmas have been rather trivial, which is likely one of the reasons why correctness is usually not considered separately in UC proofs. Generally, it is quite obvious that a real protocol does what it is expected to do. The following lemma is slightly more involved and the main reason why we decided to show correctness first.

27 The case that Wallet Issuing aborts due to blacklisting is postponed to Lemma E.20.

s_prev —(pid_R, p; c_T^in, d_T^in, M_T^in, c_R^in, d_R^in, M_R^in; c_T^out, d_T^out, M_T^out, c_R^out, d_R^out, M_R^out)→ [s, φ, x, λ, pid_U, b]

Fig. 44. An entry trdb ∈ TRDB visualized as an element of a directed graph

We start with more notation and augment the entries trdb of the Ideal Transaction Graph TRDB with additional information. An Augmented Transaction Entry trdb has the form

trdb = (s_prev, s, φ, x, λ, pid_U, pid_R, p, b, c_T^in, d_T^in, M_T^in, c_R^in, d_R^in, M_R^in, c_T^out, d_T^out, M_T^out, c_R^out, d_R^out, M_R^out)    (5)

with c, d and M with equal suffixes denoting a commitment, its decommitment information and the opening in the implicit message space. At the beginning of a transaction in the scope of Debt Accumulation or Debt Clearance the user loads his token τ_prev, which contains two commitments c_T^prev and c_R^prev, randomizes the commitments, and at the end the user possesses two updated commitments c_T, c_R which are stored in τ again. We call the initial commitments the in-commitments of the transaction and the resulting commitments the out-commitments.

Definition E.13 (Augmented Transaction Graph (informal)). The set TRDB = {trdb_i} with trdb_i defined as in Eq. (5) is called the Augmented Transaction Graph. It inherits the graph structure of the Ideal Transaction Graph and augments each edge by additional labels, called the in-commitments and out-commitments.

Two remarks are in order: Firstly, none of the (commitment, decommitment, message) triples is completely received or sent by the RSU or TSP, respectively. The RSU receives a randomized version of the in-commitment and no decommitment at all. In the reverse direction, the RSU sends the out-commitment and a share of the decommitment. The complete triples only exist inside the user's token. Secondly, it is tempting but misleading to assume that c_R^in = c_R^prev (or similar equations) hold. Note that we do not make any of these assumptions for the definition. Hence we decided on a new notion and coined the term in-/out-commitments instead of re-using the term "previous commitment". Actually, these kinds of equalities are what we have to show. While this is rather easy to do in the scope of the proof of correctness where all parties are honest, these equalities are not obviously true as soon as corrupted parties are considered. The augmented transaction information is depicted in Fig. 44.
The next proposition will be used several times below and maps an execution of the real protocol one-to-one and onto an execution of the ideal functionality. In order to be formally correct, we do not only claim that an Augmented Transaction Graph exists mathematically, but also show how it can be obtained computationally. To this end we describe a UC-like experiment EXEC_{F_P4TC, G_bb, S*, Z*}(1^n). We consider a special environment Z* that corrupts all parties but plays them honestly, i.e., Z* is an honest adversary. This is a technical requirement such that the simulator S* is able to "see" the messages. S* simulates a CRS such that the ZK-proofs become extractable and otherwise behaves like the dummy adversary. Please note that S* does not need to do anything with respect to Z* except message delivery, as Z* plays with itself. In addition to acting as the dummy adversary, S* performs some bookkeeping (not required for simulation). It keeps track of all interactions and maintains a shadow copy of the Transaction Graph perfectly in sync with the Ideal Transaction Graph that FP4TC maintains internally. Moreover, S* augments its own copy of the Transaction Graph with the commitments it sees. S* can obtain the in-commitments via extraction of the ZK-proofs, and the out-commitments are sent by the TSP/RSU in the clear.


Proposition E.14. Consider the experiment EXEC_{F_P4TC, G_bb, S*, Z*}(1^n) as defined earlier, and the Augmented Transaction Graph TRDB corresponding to this experiment.

(1) Let trdb_prev = (s_prev,prev, s_prev, …) and trdb = (s_prev, s, …) be two transaction entries that are connected by an edge. Then c_T^out,prev = c_T^in and c_R^out,prev = c_R^in hold.

(2) Let trdb* = (s_prev,*, s*, …) and trdb = (s_prev, s, …) be two transaction entries with c_T^out,* = c_T^in and c_R^out,* = c_R^in. Then s* = s_prev holds with overwhelming probability.

In other words, the in-/out-commitments also induce a graph structure on the Augmented Transaction Graph, and for an honest execution of the protocol this structure coincides with the graph structure inherited from the Ideal Transaction Graph by means of serial numbers s.

Proof. (1) Follows from the fact that the users are honest. At the end of some transaction with serial number s^prev the user obtains the out-commitments c_T^{out,prev}, c_R^{out,prev} and records them locally. If the user invokes a new transaction with serial number s and s^prev as its predecessor, he loads the recorded commitments c_T^{out,prev}, c_R^{out,prev} and uses them as in-commitments c_T^in, c_R^in.
(2) Follows from the fact that the out-commitments are honestly generated and therefore unique with overwhelming probability. Assume the contrary, i.e., c_T^{out,*} = c_T^in and c_R^{out,*} = c_R^in hold but s* ≠ s^prev. This can only happen due to one of two reasons: (a) At the beginning of a transaction the environment uses an in-commitment that is not the out-commitment of the preceding transaction. This contradicts the assumption that the user behaves honestly. (b) At the end of some transaction an out-commitment is accidentally generated that equals an in-commitment elsewhere. Again, as the out-commitments are honestly generated, this happens only with negligible probability. □

With Proposition E.14 we can show the next lemma on our way to proving correctness.

Lemma E.15 (Debt Accumulation Correctness). Let Z^correct be an environment that does not corrupt any parties and does not conduct blacklisting.^28 Under the assumption that all building blocks are correct, C1 is homomorphic, and PRF is a pseudo-random function, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task Debt Accumulation is computationally indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. With respect to the output of s, a_R and a_U we refer to the proofs of Lemmas E.11 and E.12. Here, the same arguments hold. It remains to show that the output with respect to b, p is indistinguishable. Consider the path in the Augmented Transaction Graph TRDB from the root to this invocation of the Debt Accumulation task. Let trdb^prev and trdb be two arbitrary but fixed transaction entries with trdb^prev being the predecessor of trdb. Due to Proposition E.14, c_T^{out,prev} = c_T^in and c_R^{out,prev} = c_R^in holds. Moreover, c_R^in is updated to an out-commitment c_R^out using the homomorphism of the commitment scheme C1 in a way that is consistent with the modification of the semantic information in the ideal model, i.e., b = b^prev + p. □

^28 The case that Debt Accumulation aborts due to blacklisting is postponed to Lemma E.20.
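The homomorphic balance update used in the proof above (b = b^prev + p, performed inside the commitment without opening it) can be illustrated with a toy Pedersen-style commitment. The modulus, generators and randomness below are toy stand-ins, not the scheme C1 of the protocol:

```python
# Toy Pedersen-style commitment over Z_P* illustrating why a homomorphic C1
# lets the RSU add the price p into the balance commitment without opening:
# Com(b, r) * Com(p, r') = Com(b + p, r + r').
P = 2**61 - 1          # toy prime modulus (hypothetical parameter)
g, h = 3, 7            # toy generators (hypothetical)

def commit(m, r):
    return (pow(g, m, P) * pow(h, r, P)) % P

b_prev, r_prev = 40, 1234     # balance inside the in-commitment
p, r_new = 2, 5678            # price added during Debt Accumulation
c_in = commit(b_prev, r_prev)
c_out = (c_in * commit(p, r_new)) % P   # homomorphic update
# the out-commitment opens to b_prev + p under the combined randomness
assert c_out == commit(b_prev + p, r_prev + r_new)
```

This is exactly the consistency between the commitment update in the real protocol and the bookkeeping b = b^prev + p in the ideal model that the proof relies on.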


Lemma E.16 (Debt Clearance Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that all building blocks are correct, C1 is homomorphic, and PRF is a pseudo-random function, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task Debt Clearance is computationally indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. Uses the same arguments as the proof of Lemma E.15. □
The remaining Lemmas E.17 to E.20 deal with correctness of the auxiliary tasks Prove Participation, Double-Spending Detection, Guilt Verification, and User Blacklisting. The proofs are actually neither complicated nor do they give any technical insight, because correctness simply follows from comparing the code of the real implementation with the pseudo-code of the ideal functionality under the assumption that the building blocks are correct. We merely include those proofs for the sake of completeness.

Lemma E.17 (Prove Participation Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that all building blocks are correct, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task Prove Participation is indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. The statement obviously follows by comparison of what the ideal functionality and the real implementation do. If Prove Participation returns OK in the ideal experiment, then there exists s s.t. (·, s, ·, x, λ, pid_U, ·, ·, ·) ∈ TRDB. This means the respective user has successfully participated in the x-th transaction for a wallet with ID λ in the scope of Debt Accumulation. In the real experiment the user sends an honestly generated commitment c_hid which is recorded for s by both parties. During Prove Participation this commitment is re-sent and successfully opened. The reverse direction is likewise obvious. □

Lemma E.18 (Double-Spending Detection Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that all building blocks are correct, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task Double-Spending Detection is indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. In the ideal model Double-Spending Detection returns the correct sk_U^id for pid_U in case of a fraud by definition of S_{π_P4TC}^correct. In the real model Double-Spending Detection returns sk_U^id with overwhelming probability. As the honest user is committed to the same u_1, the RSU picks two different values u_2, u_2' which result in two different points (u_2, t), (u_2', t') with overwhelming probability. This means the linear equation system has a unique solution, which equals sk_U^id. □

Lemma E.19 (Guilt Verification Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that all building blocks are correct and DLOG is a hard problem for G_1, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task Guilt Verification is indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. In the ideal model Guilt Verification returns OK if and only if π has been an output of Double-Spending Detection (by Lemma E.8) and thus g_1^π = pk_U^id holds (by definition of S_{π_P4TC}^correct, cp. Fig. 43). In the real model Guilt Verification returns OK if and only if g_1^π = pk_U^id. Hence, the only case in which the ideal and the real model differ is if g_1^π = pk_U^id holds but π is not an output of Double-Spending Detection. This event is negligible, as DLOG is assumed to be a hard problem for G_1. □

Lemma E.20 (User Blacklisting Correctness). Let Z^correct be an environment that does not corrupt any parties. Assume that all building blocks are correct and PRF is a pseudo-random function.

(1) If no user uses his wallet more than x_blR times, the subroutine input/output of the main parties of π_P4TC and F_P4TC resp. in the scope of the task User Blacklisting is indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).

(2) The abort behavior of π_P4TC and F_P4TC resp. in the scope of the task Debt Accumulation is computationally indistinguishable between the experiments

EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^correct, Z^correct}(1^n)  and  EXEC_{F_P4TC, G_bb, S_{π_P4TC}^correct, Z^correct}(1^n).
Proof. (1) In the ideal model, the output of User Blacklisting depends on the transaction graph TRDB. For this reason, we proceed by induction for a fixed but arbitrary PID pid_U. There are only three tasks, namely Wallet Issuing, Debt Accumulation and Debt Clearance, that modify TRDB. Assume that Z^correct has not triggered any of these tasks yet for pid_U. Then, in the ideal model TRDB_U is empty and User Blacklisting returns b_bill = 0 and Φ_U = ∅. Likewise, in the real model the set of hidden trapdoors HTD is empty and thus b_bill = 0 and Φ_U = ∅ is returned. Assume that correctness holds for n transactions before User Blacklisting is called and consider an execution with n + 1 previous transactions for the fixed pid_U. We look at the three cases of Wallet Issuing, Debt Accumulation and Debt Clearance separately.
(a) The (n + 1)-th transaction is Wallet Issuing: In the ideal model a new entry trdb* with a unique λ* and p* = 0 is inserted into TRDB, i.e., b_bill does not change. In the real model b_bill does not change either, because Wallet Issuing does not insert an ω^bl into Ω^bl. In the ideal model the partial mapping f_Φ is extended onto the domain {(λ*, 0), ..., (λ*, x_blR)} and mapped to uniformly drawn fraud detection IDs. The output Φ_U increases by x_blR + 1 elements, as the probability to pick the same element twice is negligible. In the real model Wallet Issuing inserts a new element htd into HTD which is sent to DR and—by the assumption that every party is honest—verifies correctly and is extracted to a unique λ*.^29 Consequently, DR inserts the x_blR + 1 elements {PRF(λ*, 0), ..., PRF(λ*, x_blR)} into Φ_U. Under the assumption that PRF is a pseudo-random function, these elements are indistinguishable from those drawn in the ideal experiment.

^29 N.b.: By abuse of notation we use the same variable here as in the ideal experiment, but of course they are not necessarily equal—only computationally indistinguishable over the randomness of the experiment.


(b) The (n + 1)-th transaction is Debt Accumulation: In the ideal model a new entry trdb* with fraud detection ID φ* and price p* is inserted into TRDB, i.e., b_bill increases by p*. In the real model R outputs ω^bl = (φ*, p*) with φ* = PRF(λ*, x*) at the end of Debt Accumulation. By the assumption that the wallet has been generated honestly, a hidden trapdoor htd* that can be extracted to λ* is contained in HTD. By the assumption that the user does not use his wallet more than x_blR times, x* ≤ x_blR holds. Hence, φ* ∈ Φ_U and ω^bl ∈ Ω^bl_U follows, whereby b_bill increases by p*.
(c) The (n + 1)-th transaction is Debt Clearance: Same argument as for Debt Accumulation.
(2) Let s^prev denote the serial number for which Debt Accumulation is invoked and let trdb^prev = (·, s^prev, φ^prev, x^prev, λ, ...) be the corresponding transaction entry. We distinguish two cases:
(a) In the ideal model f_Φ(λ, x^prev + 1) has already been defined: This can happen due to two cases; either there is a trdb* with λ* = λ and x* = x^prev + 1, or User Blacklisting has been invoked and x^prev < x_blR holds. In either case the fraud detection ID φ = f_Φ(λ, x^prev + 1) associated with the considered transaction has already been fixed and given as part of some output to Z^correct. In the real model φ = PRF(λ, x^prev + 1) has likewise been output to Z^correct. Debt Accumulation aborts with blacklisted if and only if φ^30 is contained in bl_R.
(b) In the ideal model f_Φ(λ, x^prev + 1) has not yet been defined: In the ideal model a fresh φ is uniformly drawn, the probability that φ ∈ bl_R holds is negligible and thus Debt Accumulation aborts with blacklisted only with negligible probability. In the real model, φ is set as PRF(λ, x^prev + 1) and φ has never been output before. If Z^correct inputs a bl_R such that blacklisted occurs with non-negligible probability, this immediately yields an efficient adversary B against the unpredictability of PRF. Assume such a Z^correct exists; then B can be constructed as follows: B internally executes the real experiment, i.e., it runs Z^correct and plays all parties and the adversary for Z^correct honestly in its head. Externally, B plays the unpredictability game. B needs to guess for which wallet Z^correct eventually causes an abort. In the scope of Wallet Issuing, B picks an unused seed λ̃ but queries its external PRF-oracle for x = 0. N.b., B never uses λ̃; it is only a placeholder for the unknown challenge λ^C used by the external oracle. Whenever B needs to evaluate PRF(λ̃, x) it queries its external oracle instead. Note that Z^correct cannot distinguish if it is executed within a reduction, because λ̃ and λ^C are both randomly chosen.
B guesses the interaction in which Z^correct causes the abort, takes the blacklist bl_R from Z^correct, guesses which element of the blacklist is the next image of PRF, and outputs this externally. □

Theorem E.21 (System Correctness). Let Z^correct be an environment that does not corrupt any parties. Under the assumption that all building blocks are correct, C1 is homomorphic, PRF is a pseudo-random function, and DLOG is a hard problem for G_1,

π_P4TC^{F_CRS, G_bb} ≥_UC F^{G_bb}

holds. In other words, π_P4TC^{F_CRS, G_bb} is a correct implementation of F^{G_bb}.

Proof. A direct consequence of Lemmas E.10 to E.12 and E.15 to E.20. □

^30 N.b.: By abuse of notation we use the same variable for the real and the ideal experiment, but of course they are not necessarily equal—only computationally indistinguishable over the randomness of the experiment.
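The double-spending extraction from the proof of Lemma E.18 can be sketched numerically. The tag equation t = sk · u_2 + u_1 mod p is an assumption here (a BBA-style double-spending tag; the paper does not restate it at this point), and p is a toy stand-in for the group order:

```python
# Sketch of the double-spending extraction (cp. Lemma E.18): a user who is
# committed to the same u1 answers two distinct challenges u2 != u2' with
# t = sk * u2 + u1 mod p; the two points (u2, t), (u2', t') determine the
# line and hence its slope sk.
p = 2**61 - 1           # toy group order (hypothetical)
sk, u1 = 987654321, 42  # user's identity key and one-time masking value

def respond(u2):
    return (sk * u2 + u1) % p

u2a, u2b = 1111, 2222               # two distinct RSU challenges
ta, tb = respond(u2a), respond(u2b)
# sk = (t - t') * (u2 - u2')^{-1} mod p, the unique solution of the system
sk_rec = ((ta - tb) * pow(u2a - u2b, -1, p)) % p
assert sk_rec == sk
```

A single transcript reveals nothing about sk (it is masked by u_1); only re-use of the same wallet state against two challenges makes the system solvable.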


Simulator S_{π_P4TC}^{sys-sec}

Setup:
(1) Run a modified version of the algorithm CRS ← Setup(1^n) with SetupPoK replaced by SetupEPoK and C2.Gen replaced by C2.SimGen.
(2) Record CRS, td_epok and td_eqcom.
(3) Set Ω^dsp := ∅.
(4) Set Ω_U^pp := ∅.
DR Registration: Upon receiving (registering_dr, pid_DR) run (pk_DR, sk_DR) ← DRRegistration(CRS), return pk_DR to F_P4TC and record pid_DR ↦ (pk_DR, sk_DR).
TSP Registration: Upon receiving (registering_tsp, pid_T, a_T) run (pk_T, sk_T, cert_T^R) ← TSPRegistration(CRS, a_T), return pk_T to F_P4TC and record pid_T ↦ (pk_T, sk_T, cert_T^R).
RSU Registration: Upon receiving (registering_rsu, pid_R) run (pk_R, sk_R) ← RSURegistration(CRS), return pk_R to F_P4TC and record pid_R ↦ (pk_R, sk_R).
User Registration: If the user is corrupted, there is nothing to do, as this is a local algorithm. If the user is honest: Upon receiving (registering_user, pid_U) run (pk_U, sk_U) ← UserRegistration(CRS), return pk_U to F_P4TC and record pid_U ↦ (pk_U, sk_U).
RSU Certification: Upon receiving (certifying_rsu, pid_R, a_R) ...
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R) and pid_R ↦ (pk_R, sk_R); if any of these do not exist, let F_P4TC abort.
(2) Generate cert_R := (pk_R, a_R, σ_R^cert) with σ_R^cert ← S.Sgn(sk_T^cert, (pk_R, a_R)) faithfully.
(3) Update the record pid_R ↦ (pk_R, sk_R, cert_R).

Fig. 45. The simulator for System Security

E.2 Proof of System Security

In this section we show the following theorem.

Theorem E.22 (System Security). Under the assumptions of Theorem E.1 and static corruption of a subset of the users, π_P4TC^{F_CRS, G_bb} ≥_UC F^{G_bb} holds.

The definition of the UC-simulator S_{π_P4TC}^{sys-sec} for Theorem E.22 can be found in Figs. 45 to 48. Please note that while the real protocol π_P4TC lives in the (F_CRS, G_bb)-model, the ideal functionality F_P4TC has no CRS. Hence, the CRS (but not the bulletin board) is likewise simulated by S_{π_P4TC}^{sys-sec}, giving it a lever to extract the ZK proofs P1, P2, and P3 and to equivoke the commitment C2. Please note that the simulator records more information during its execution—namely the Augmented Transaction Graph (cp. Definition E.13) and the set of augmented proof-of-participation information Ω_U^pp—than is technically required for an indistinguishable simulation from the viewpoint of the environment. This additional information only simplifies the presentation of the reduction argument within the security proof.

By skipping ahead, we shortly explain how this additional tracking of information supports the proof by giving a concrete example for Ω_U^pp and the binding property of the commitment scheme. Each of the upcoming security proofs roughly follows the same lines of argument. If an environment Z can efficiently distinguish between the real and the ideal experiment, then we can construct an efficient adversary B against one of the underlying


Simulator S_{π_P4TC}^{sys-sec} (cont.)

Wallet Issuing:
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R) and pid_DR ↦ (pk_DR, sk_DR); if any of these do not exist, let F_P4TC abort.
(2) λ'' ←R Z_p.
(3) Run (c''_T, d''_T) ← C1.Com(CRS^1_com, (λ'', 0)) and (c''_ser, d^sim_ser) ← C2.SimCom(CRS^2_com).
(4) Call F_P4TC with input (issue) and obtain leaked a_U.^a
(5) Output (a_U, c''_ser, cert_T^R) to Z^sys-sec as the message from T to U.
(6) Upon receiving (pk_U, s', e*, σ*, π, e_0, ..., e_ℓ, c'_R, c'_T) from Z^sys-sec in the name of U with pid*_U ...
(a) stmnt := (pk_U^id, pk_DR, e*, e_0, ..., e_ℓ, c'_R, c'_T).
(b) If S.Vfy(pk_U^auth, σ*, (e_0, ..., e_ℓ, e*)) = 0 or P1.Vfy(CRS_pok, stmnt, π) = 0 let F_P4TC abort.
(c) Look up pid_U from G_bb for which pk_U^id has been recorded; if no pid_U exists abort.
(d) If pid_U ≠ pid*_U and pid_U ∉ PID_corrupt give up simulation.
(e) Extract Wit = (Λ', Λ'_0, ..., Λ'_ℓ, R*, R_1, ..., R_ℓ, U_1, d'_R, d'_T, SK_U^id) ← P1.ExtractW(CRS, td_epok, stmnt, π).
(f) Set htd := (λ'', e_0, ..., e_ℓ, e*, σ*) and extract λ from htd.
(g) Provide the alternative user PID pid_U to F_P4TC.
(h) Upon being asked by F_P4TC to provide φ, return φ := PRF(λ, x) with x := 0 to F_P4TC.
(i) Obtain the user's output (s, a_U) from F_P4TC and set b := 0.
(j) Run c_T := c'_T · c''_T, σ_T ← S.Sgn(sk_T^T, (c_T, a_U)), (c''_R, d''_R) ← C1.Com(CRS^1_com, (λ'', 0, 0, 1)), c_R := c'_R · c''_R and σ_R ← S.Sgn(sk_T^R, (c_R, s)) honestly as the real protocol would do.
(k) Set s'' := s · s'^{-1} and equivoke d''_ser ← C2.Equiv(CRS^2_com, s'', c''_ser, d^sim_ser).
(l) Set s^prev := ⊥, p := 0, b := 0, c_T^in := ⊥, d_T^in := ⊥, M_T^in := ⊥, c_R^in := ⊥, d_R^in := ⊥, M_R^in := ⊥, c_T^out := c_T, d_T^out := d'_T · d''_T, M_T^out := (Λ, pk_U^id), c_R^out := c_R, d_R^out := d'_R · d''_R, and M_R^out := (Λ, g_1^b, U_1, g_1^{x+1}, s).
(m) Append (s^prev, s, φ, x, λ, pid_U, pid_T, p, b, c_T^in, d_T^in, M_T^in, c_R^in, d_R^in, M_R^in, c_T^out, d_T^out, M_T^out, c_R^out, d_R^out, M_R^out) to TRDB.
(n) Output (s'', d''_ser, λ'', c_R, d''_R, σ_R, c_T, d''_T, σ_T) to Z^sys-sec as the message from T to U.

^a Do not provide an alternative pid_U immediately, but let F_P4TC sleep.

Fig. 46. The simulator for System Security (cont. from Fig. 45)
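The equivocation lever the simulator uses in Steps (3) and (k) above (C2.SimCom followed by C2.Equiv once the serial number is known) can be sketched with a toy trapdoor Pedersen commitment. All parameters below are toy values chosen for the sketch, not the scheme C2 of the paper; the trapdoor is the discrete logarithm α of h:

```python
# Toy equivocal ("trapdoor") Pedersen commitment: with h = g^alpha and alpha
# known to the simulator, SimCom commits to a dummy value and Equiv later
# opens the very same commitment to any message.  Knowing alpha destroys
# binding, which is exactly why only the simulator holds it.
q = 1019               # toy prime order of the exponent group (hypothetical)
P = 2 * q + 1          # safe prime, so 4 generates the order-q subgroup
g = 4
alpha = 123            # trapdoor (known to the simulator only)
h = pow(g, alpha, P)

def commit(m, r):
    return (pow(g, m, P) * pow(h, r, P)) % P

def sim_com():
    r = 777                          # toy randomness
    return commit(0, r), r           # commitment to the dummy value 0

def equiv(m_new, r_sim):
    # solve alpha * r_sim = m_new + alpha * r_new  (mod q) for r_new
    return (r_sim - m_new * pow(alpha, -1, q)) % q

c, r_sim = sim_com()                 # sent before s'' is determined
s2 = 31337                           # stands in for the later value s''
r2 = equiv(s2, r_sim)
assert c == commit(s2, r2)           # the old commitment opens to s2
```

This mirrors the simulator's situation: c''_ser must be sent before the ideal functionality has fixed the serial number s, and the trapdoor in the simulated CRS makes the late opening possible.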

cryptographic building blocks, i.e., the binding property of the commitment scheme for the example at hand. To this end, B plays the adversary against the binding property in the outer game and internally executes the UC-experiment in its head while mimicking the role of the simulator. It is important to note that although B emulates the environment and the ideal functionality internally, it only has black-box access to them. In other words, although everything happens inside “the head of B” it cannot somehow magically extract Z’s attack strategy. Moreover, in order for B to win the external game against the binding property the mere existence of a break does not suffice, but B actually has to present two different openings for the same commitment. In order to do so, B observes all of Z’s communication and systematically records parts of the observed transcript such that it can efficiently look up a commitment and its (former) opening if the same commitment with a different opening occurs a second time. This is the only reason why the simulator additionally records the observed


Simulator S_{π_P4TC}^{sys-sec} (cont.)

Debt Accumulation:
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R) and pid_R ↦ (pk_R, sk_R, cert_R); if any of these do not exist, let F_P4TC abort.
(2) Pick u_2 ←R Z_p.
(3) Run (c''_ser, d^sim_ser) ← C2.SimCom(CRS^2_com).
(4) Output (u_2, c''_ser, cert_R) to Z as the first message from R to U.
(5) Upon receiving (s', π, φ, a_U, a_R^prev, c_hid, c'_R, t) from Z in the name of U ...
(a) stmnt := (pk_T^T, pk_T^cert, φ, a_U, a_R^prev, c_hid, c'_R, t, u_2).
(b) If P2.Vfy(CRS_pok, stmnt, π) = 0 let F_P4TC abort.
(c) Extract Wit = (X, Λ, pk_U^id, U_1, s^prev, φ^prev, B^prev, U_1^next, d_hid, d_R^prev, d'_R, d_T, pk_R^prev, c_R^prev, c_T, σ_R^prev, σ_R^{cert,prev}, σ_T) ← P2.ExtractW(CRS, td_epok, stmnt, π).
(d) Look up trdb* := (s^{prev,*}, s*, φ*, x*, λ*, pid*_U, pid*_T, p*, b*, c_T^{in,*}, d_T^{in,*}, M_T^{in,*}, c_R^{in,*}, d_R^{in,*}, M_R^{in,*}, c_T^{out,*}, d_T^{out,*}, M_T^{out,*}, c_R^{out,*}, d_R^{out,*}, M_R^{out,*}) with c_R^{out,*} = c_R^prev and c_T^{out,*} = c_T being used as keys; if no unique entry exists, give up simulation (event F1).
(e) Give up simulation if any of these conditions is met: Λ ≠ g_1^{λ*} (event F2), pk_U^id ≠ pk_U^{id,*} for M_T^{out,*} = (Λ*, pk_U^{id,*}) (event F3), s^prev ≠ s* (event F4), B^prev ≠ g_1^{b*} (event D1), or X ≠ g_1^{x*+1} (event D2).
(f) Retrieve pid_U from G_bb for pk_U^id and assert pid_U = pid*_U; else abort (event F5).
(g) Call F_P4TC with input (pay_toll, s^prev); upon being asked by F_P4TC to provide an alternative PID, return pid_U to F_P4TC.
(h) If F_P4TC aborts for some other reason than blacklisted, give up simulation (event F6).
(i) If being asked by F_P4TC to provide φ, return φ := PRF(λ, x) with x := x* + 1 to F_P4TC.^a
(j) Upon being asked to provide a price for (a_U, a_R^prev, a_R), return a price p as the dummy adversary would do.
(k) Obtain the user's output (s, a_R, p, b) from F_P4TC.
(l) Set ω_U^pp := (s, c_hid, d_hid, pk_U^id) and append ω_U^pp to Ω_U^pp.
(m) Run (c''_R, d''_R) ← C1.Com(CRS^1_com, (0, p, 0, 1)), c_R := c'_R · c''_R, and σ_R ← S.Sgn(sk_R, (c_R, s)) honestly as the real protocol would do.
(n) Set s'' := s · s'^{-1} and equivoke d''_ser ← C2.Equiv(CRS^2_com, s'', c''_ser, d^sim_ser).
(o) Set c_T^in := c_T, d_T^in := d_T, M_T^in := (Λ, pk_U^id), c_R^in := c_R^prev, d_R^in := d_R^prev, M_R^in := (Λ, B^prev, U_1, X, s^prev), c_T^out := c_T, d_T^out := d'_T · d''_T, M_T^out := (Λ, pk_U^id), c_R^out := c_R, d_R^out := d'_R · d''_R, and M_R^out := (Λ, g_1^b, U_1, g_1^{x+1}, s).
(p) Append (s^prev, s, φ, x, λ, pid_U, pid_R, p, b, c_T^in, d_T^in, M_T^in, c_R^in, d_R^in, M_R^in, c_T^out, d_T^out, M_T^out, c_R^out, d_R^out, M_R^out) to TRDB.
(q) Append ω^dsp = (φ, t, u_2) to Ω^dsp.
(r) Check if ω^{dsp,‡} = (φ^‡, t^‡, u_2^‡) ∈ Ω^dsp exists with φ = φ^‡ and u_2 ≠ u_2^‡; in this case
  (i) sk_U^id := (t - t^‡) · (u_2 - u_2^‡)^{-1} mod p
  (ii) Record pid_U ↦ (pk_U, (sk_U^id, ⊥)) internally.
(s) Output (s'', d''_ser, c_R, d''_R, σ_R, p) to Z as the second message from R to U.

^a N.b.: F_P4TC does not always ask for the next serial number. If the corrupted user re-uses an old token, then F_P4TC internally picks the next serial number, which has already been determined in some earlier interaction. Hence, the simulator only needs to provide the next serial number if the chain of transactions is extended.

Fig. 47. The simulator for System Security (cont. from Fig. 45)
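The predecessor lookup of Step 5d and the serial-number comparison of Step 5e (events F1 and F4) amount to a simple keyed search over the recorded entries. A minimal sketch, with entries reduced to hypothetical (s, c_T^out, c_R^out) tuples:

```python
# Sketch of the simulator's consistency checks behind events F1 and F4
# (cp. Fig. 47, Steps 5d-5e): find the unique predecessor entry whose
# out-commitments equal the extracted in-commitments, then compare its
# serial number with the extracted s_prev.
def find_predecessor(trdb, c_T_in, c_R_in):
    hits = [s for (s, cT, cR) in trdb if (cT, cR) == (c_T_in, c_R_in)]
    if len(hits) != 1:
        return None          # event F1: no unique predecessor entry
    return hits[0]

def check_f4(trdb, c_T_in, c_R_in, s_prev):
    s_star = find_predecessor(trdb, c_T_in, c_R_in)
    return s_star is not None and s_star == s_prev

trdb = [("s0", "cT0", "cR0"), ("s1", "cT1", "cR1")]
assert check_f4(trdb, "cT1", "cR1", "s1")            # consistent transcript
assert not check_f4(trdb, "cT1", "cR1", "s0")        # event F4 would fire
assert find_predecessor(trdb, "cTx", "cRx") is None  # event F1 would fire
```

The security proof later shows that making either check fail implies breaking one of the underlying building blocks.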


Simulator S_{π_P4TC}^{sys-sec} (cont.)

Debt Clearance:
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R); if this does not exist, let F_P4TC abort.
(2) Pick u_2 ←R Z_p.
(3) Output u_2 to Z as the first message from T to U.
(4) Upon receiving (pk_U^id, π, φ, a_U, a_R^prev, b^prev, t) from Z in the name of U ...
(a) stmnt := (pk_U^id, pk_T^T, pk_T^cert, φ, a_U, a_R^prev, g_1^{b^prev}, t, u_2).
(b) If P3.Vfy(CRS_pok, stmnt, π) = 0 let F_P4TC abort.
(c) Extract Wit = (X, Λ, pk_U^id, U_1, s^prev, φ^prev, d_R^prev, d_T, pk_R^prev, c_R^prev, c_T, σ_R^prev, σ_R^{cert,prev}, σ_T) ← P3.ExtractW(CRS, td_epok, stmnt, π).
(d) Look up trdb* := (s^{prev,*}, s*, φ*, x*, λ*, pid*_U, pid*_T, p*, b*, c_T^{in,*}, d_T^{in,*}, M_T^{in,*}, c_R^{in,*}, d_R^{in,*}, M_R^{in,*}, c_T^{out,*}, d_T^{out,*}, M_T^{out,*}, c_R^{out,*}, d_R^{out,*}, M_R^{out,*}) with c_R^{out,*} = c_R^prev and c_T^{out,*} = c_T being used as keys; if no unique entry exists, give up simulation (event F1).
(e) Give up simulation if any of these conditions is met: Λ ≠ g_1^{λ*} (event F2), pk_U^id ≠ pk_U^{id,*} with M_T^{out,*} = (Λ*, pk_U^{id,*}) (event F3), s^prev ≠ s* (event F4), or B^prev ≠ g_1^{b*} (event D1).
(f) Retrieve pid_U from G_bb for pk_U^id and assert pid_U = pid*_U; else abort (event F5).
(g) Call F_P4TC with input (clear_debt, s^prev); upon being asked by F_P4TC to provide an alternative PID, return pid_U to F_P4TC.
(h) If F_P4TC aborts, give up simulation (event F6).
(i) If being asked by F_P4TC to provide φ, return φ := PRF(λ, x) with x := x^prev + 1 to F_P4TC.^a
(j) Obtain the user's output (b_bill) from F_P4TC.
(k) Set c_T^in := c_T, d_T^in := d_T, M_T^in := (Λ, pk_U^id), c_R^in := c_R^prev, d_R^in := d_R^prev, M_R^in := (Λ, g_1^{b^prev}, U_1, X, s^prev), c_T^out := ⊥, d_T^out := ⊥, M_T^out := ⊥, c_R^out := ⊥, d_R^out := ⊥, and M_R^out := ⊥.
(l) Append (s^prev, s, φ, x, λ, pid_U, pid_R, p, b, c_T^in, d_T^in, M_T^in, c_R^in, d_R^in, M_R^in, c_T^out, d_T^out, M_T^out, c_R^out, d_R^out, M_R^out) to TRDB.
(m) Append ω^dsp = (φ, t, u_2) to Ω^dsp.
(n) Check if ω^{dsp,‡} = (φ^‡, t^‡, u_2^‡) ∈ Ω^dsp exists with φ = φ^‡ and u_2 ≠ u_2^‡; in this case
  (i) sk_U^id := (t - t^‡) · (u_2 - u_2^‡)^{-1} mod p
  (ii) Record pid_U ↦ (pk_U, (sk_U^id, ⊥)) internally.
(o) Output (OK) to Z as the second message from T to U.

^a N.b.: F_P4TC does not always ask for the next serial number. If the corrupted user re-uses an old token, then F_P4TC internally picks the next serial number, which has already been determined in some earlier interaction. Hence, the simulator only needs to provide the next serial number if the chain of transactions is extended.

Fig. 48. The simulator for System Security (cont. from Fig. 45)

opening messages in Ω_U^pp but never uses them for its own simulation. These openings are only required in the outer game during the reduction argument.

As already stated in Appendix B, we assume half-authenticated secure channels. In this model the adversary only learns the length of the messages being sent between honest parties. Secure channels can be realized by


Simulator S_{π_P4TC}^{sys-sec} (cont.)

Prove Participation:
(1) Call F_P4TC with input (prove_participation).
(2) Obtain the leaked set S_R^pp of serial numbers from F_P4TC.
(3) Obtain the user's output (out_U) from F_P4TC and delay the output of the SA.
(4) Output S_R^pp to Z as message from SA to U.
(5) Upon receiving (s, c_hid, d_hid) from Z in the name of U ...
(a) If (s, c_hid, ·, ·) ∉ Ω_U^pp, let F_P4TC abort.
(b) Look up pk_U from G_bb for PID pid_U; if no pk_U has been recorded, abort.
(c) Parse pk_U^id from pk_U.
(d) If C1.Open(CRS^1_com, pk_U^id, c_hid, d_hid) = 0, let F_P4TC abort.
(e) If out_U = NOK, give up simulation.
(f) Else let F_P4TC return the delayed output to the SA.
Double-Spending Detection: Upon being asked to provide a proof for pid_U, look up pid_U ↦ (pk_U, sk_U), parse sk_U^id from sk_U and return sk_U^id.
Guilt Verification: Upon being asked by F_P4TC to provide out for (pid_U, π) ...
(1) Receive pk_U from G_bb for pid_U and parse (pk_U^id, pk_U^auth) := pk_U.
(2) If g_1^π = pk_U^id, then return out := OK, else return out := NOK to F_P4TC.
User Blacklisting: Upon receiving (λ, x) and being asked to provide a serial number, return φ := PRF(λ, x).

Fig. 49. The simulator for System Security (cont. from Fig. 45)
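The fraud-detection-ID derivation φ = PRF(λ, x) that User Blacklisting relies on, together with the set Φ_U = {PRF(λ, 0), ..., PRF(λ, x_blR)} computed by DR, can be sketched as follows. HMAC-SHA256 stands in for the (algebraic) PRF of the paper, and the seed and bound x_blR are toy values:

```python
import hmac, hashlib

# Sketch of the fraud-detection-ID derivation phi = PRF(lambda, x).
# HMAC-SHA256 is a stand-in for the protocol's PRF, not the actual scheme.
def prf(seed: bytes, x: int) -> bytes:
    return hmac.new(seed, x.to_bytes(8, "big"), hashlib.sha256).digest()

def blacklist_for_wallet(seed: bytes, x_blR: int):
    """Phi_U = {PRF(lambda, 0), ..., PRF(lambda, x_blR)} as DR computes it."""
    return {prf(seed, x) for x in range(x_blR + 1)}

seed = b"wallet-seed-lambda"   # stands in for the wallet seed lambda
bl_R = blacklist_for_wallet(seed, x_blR=100)
# a transaction with counter x <= x_blR is caught ...
assert prf(seed, 17) in bl_R
# ... while another wallet's IDs do not collide with the blacklist
assert prf(b"other-wallet", 17) not in bl_R
```

Pseudo-randomness is what makes the listed IDs indistinguishable from the uniformly drawn ones of the ideal model, and unpredictability is what rules out the abort discrepancy in part (2) of the proof of Lemma E.20.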

any IND-CCA secure encryption scheme (cp. [14]). Secure channels release us from the burden to spell out a simulator for messages between honest parties. First, note the following simple observation.

Lemma E.23. The messages simulated by S_{π_P4TC}^{sys-sec} in the name of honest parties and sent to Z^sys-sec are perfect, i.e., statistically identical to real messages.

Proof. Follows from the definition of S_{π_P4TC}^{sys-sec} (cp. Figs. 45 to 48). □

The messages generated by S_{π_P4TC}^{sys-sec} do not depend on any unknown secrets. Moreover, another simple observation is

Lemma E.24. Whenever any Z^sys-sec triggers an abort of one of the tasks in the real experiment EXEC_{π_P4TC^{F_CRS, G_bb}, S_{π_P4TC}^{sys-sec}, Z^sys-sec}(1^n), the simulator S_{π_P4TC}^{sys-sec} lets F_P4TC also abort in the ideal experiment EXEC_{F_P4TC, G_bb, S_{π_P4TC}^{sys-sec}, Z^sys-sec}(1^n).

Proof. Follows from the definition of S_{π_P4TC}^{sys-sec} (cp. Figs. 45 to 48). □

An abort in the real model only happens if one of the messages is invalid, e.g., a signature does not verify or a ZK proof is not valid. If Z^sys-sec, playing the corrupted parties, sends a message in the ideal experiment, the simulator S_{π_P4TC}^{sys-sec}, playing the honest receiver, performs the same checks and lets F_P4TC abort, if necessary. Due to Lemmas E.23 and E.24 there is only a limited number of ways how Z^sys-sec can distinguish between the real and the ideal experiment. All options fall into one out of two categories:

• Z^sys-sec triggers an abort in the ideal experiment that does not exist in the real experiment. In other words, S_{π_P4TC}^{sys-sec} is not able to continue the simulation and gives up.


• Z sys-sec makes the output of an honest party differ between the real and ideal experiment. As this events sys-sec sys-sec are directly observable by the simulator SπP4TC as well, SπP4TC proceeds as above and gives up simulation. We call events of the first type failure events and events of the second type discrepancy events. Naturally, the proof of Theorem E.22 proceeds in two parts according to these categories. Please note that discrepancy events do not sys-sec sys-sec immediately require SπP4TC to give up, we could also decide to let SπP4TC ignore this kind of event. But this way, the proof is simpler. Both parts heavily rely on the Augmented Transaction Graph. In the first part we exploit that its graph structure is redundantly defined: Edges are defined by linking serial numbers to predecessor serial numbers and a second set of edges is defined by means of in- and out-commitments. As already shown in Appendix E.1, both of these graph structures coincide if all parties are honest. We show that this still holds with overwhelming probability if the users are corrupted. More precisely, a failure event is a sys-sec successful attempt of Z sys-sec to make both graph structures fall apart. In this case SπP4TC immediately gives up the simulation. We show that a failure event implies a successful attack on one of our cryptographic building blocks. The second part of the proof considers executions of the UC-experiment in which Z sys-sec does not force sys-sec SπP4TC to give up simulation prematurely, but in which Z sys-sec can distinguish due to different input/output distributions of honest parties. The category of failure events comprises the following individual events: sys-sec

Definition E.25 (Failure Events). SπP4TC gives up simulation because. . . prev

(F1) The extracted c T and c R do not point to a unique predecessor (cp. Fig. 46, Step 5d and Fig. 47, Step 4d). (F2) The extracted Λ does not correspond to the previously recorded λ∗ in the Augmented Transaction Graph (cp. Fig. 46, Step 5e and Fig. 47, Step 4e). id ∗ (F3) The extracted pkid U does not match the previously recorded pk U in the Augmented Transaction Graph (cp. Fig. 46, Step 5e and Fig. 47, Step 4e). (F4) The extracted s prev does not match the previously recorded s ∗ in the Augmented Transaction Graph (cp. Fig. 46, Step 5e and Fig. 47, Step 4e). (F5) The extracted pkid U has not been registered in G bb (cp. Fig. 46, Step 5f) and Fig. 47, Step 4f). (F6) FP4TC aborts for some other reason than blacklisted (cp. Fig. 46, Step 5h and Fig. 47, Step 4h). The category of discrepancy events comprises the following individual events: Definition E.26 (Discrepancy Events). (D1) The dummy T outputs a different b bill to Z sys-sec in the scope of Debt Clearance than a real T running πP4TC (cp. Fig. 46, Step 5e and Fig. 47, Step 4e). (D2) Z sys-sec triggers a non-negligible discrepancy between the abort behavior of the dummy T and the real T within the scope of Debt Accumulation (cp. Fig. 46, Step 5e). (D3) The dummy SA outputs a different outSA to Z sys-sec in the scope of Prove Participation than the real SA (cp. Fig. 48, Step 5e). We start by ruling out failure events. As a preliminary step we need some kind of “loop invariant” that holds as long as no failure event has occurred. Lemma E.27. For every execution of EXEC FP4TC, G bb, Sπsys-sec, Z sys-sec (1n ) P4TC

that does not trigger a failure event, the two graph structures in the Augmented Transaction Graph TRDB maintained by S^sys-sec_πP4TC are identical to the Ideal Transaction Graph TRDB maintained by F_P4TC immediately after the first message from the user (cp. Fig. 45, Step 6, Fig. 46, Step 5 and Fig. 47, Step 4) and immediately before the last message to the user (cp. Fig. 45, Step 6n, Fig. 46, Step 5s and Fig. 47, Step 4o) within the scope of the tasks Wallet Issuing, Debt Accumulation and Debt Clearance.

Proof. (Informal) The statement is true for every execution of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) that never triggered Wallet Issuing, Debt Accumulation or Debt Clearance. Assume the statement holds at the end of n such interactions when the (n+1)-th interaction starts. By assumption the Ideal Transaction Graph of F_P4TC and the two graph structures of the Augmented Transaction Graph of S^sys-sec_πP4TC are in sync before the first message. After the first message of the (n+1)-th interaction has been received, S^sys-sec_πP4TC conducts a collection of consistency checks that by assumption do not fail (otherwise a failure event would be triggered). The ideal functionality is invoked and terminates without abort (again by assumption). During its execution F_P4TC updates its internal Ideal Transaction Graph. After F_P4TC has returned its output to S^sys-sec_πP4TC and before S^sys-sec_πP4TC sends its last message, S^sys-sec_πP4TC updates the Augmented Transaction Graph. By definition of F_P4TC and S^sys-sec_πP4TC both updates are consistent. □

An immediate consequence of Lemma E.27 is the following.

Lemma E.28 (No unexpected abort). The failure event (F6) does not occur.

Proof. Assume for contradiction that (F6) has occurred. This implies that no other failure event has occurred before, in particular neither (F3) nor (F4). Due to Lemma E.27 both transaction graphs are in sync. But F_P4TC only aborts unexpectedly and raises (F6) if the predecessor serial number s^prev and the PID pid_U are not recorded in the Ideal Transaction Graph. In this case (F3) or (F4) must have occurred before. Contradiction. □

Lemma E.29.
If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) the event (F1) (cp. Definition E.25) only occurs with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Assume a Z^sys-sec exists that triggers (F1) with non-negligible probability. Without loss of generality we only consider the case that (F1) occurs in Debt Accumulation; the case of Debt Clearance is analogous. Let (s', π, φ, a_U, a_R, c_hid, c'_R, t) denote the message that was sent by Z^sys-sec and raised (F1). Let Wit ← P2.ExtractW(CRS, td_epok, stmnt, π) denote the extracted witness and parse it as (·, Λ, ..., s^prev, ..., d_R^prev, d_T, pk_R^prev, c_R^prev, c_T, σ_R^prev, σ_R^cert, σ_T) := Wit. We observe the following facts:

• P2.Vfy(CRS_pok, stmnt, π) = 1 holds, as otherwise S^sys-sec_πP4TC would have let F_P4TC abort earlier.
• As P2 is perfectly sound, the equations

    C1.Open(CRS¹_com, (Λ, ...), c_T, d_T) = 1
    C1.Open(CRS¹_com, (Λ, ...), c_R^prev, d_R^prev) = 1
    S.Vfy(pk_T^T, σ_T, (c_T, a_U)) = 1
    S.Vfy(pk_R^prev, σ_R^prev, (c_R^prev, s^prev)) = 1
    S.Vfy(pk_T^cert, σ_R^cert, (pk_R^prev, ...)) = 1

  hold.

First we deal with the cases that at least one of c_T or c_R^prev does not exist as an out-commitment in TRDB.


(1) The commitment c_T does not exist as an out-commitment: In other words, the commitment c_T has never been issued by the TSP. We construct an efficient adversary B against the EUF-CMA security of S. Internally, B runs Z^sys-sec in its head and simulates EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) for Z^sys-sec, i.e., B internally plays the role of S^sys-sec_πP4TC and F_P4TC. Externally, B plays the EUF-CMA security experiment with a challenger C and a signing oracle O^S_{pk,sk}. When B must internally provide pk_T = (pk_T^cert, pk_T^R, pk_T^T) playing the role of S^sys-sec_πP4TC in the scope of TSP Registration, B embeds the external challenge as pk_T^T := pk_C. Whenever B, playing the role of S^sys-sec_πP4TC, needs to issue signatures with respect to pk_T^T, it does so using its external EUF-CMA oracle O^S_{pk,sk}. When the event (F1) occurs, B extracts c_T and σ_T from the proof and outputs the forgery.

(2) The commitment c_R^prev does not exist as an out-commitment: We need to distinguish two sub-cases. On an abstract level these sub-cases correspond to the following scenarios: Either the previous RSU exists; then the signature on c_R^prev is a forgery. Or alternatively, the allegedly previous RSU does not exist but has been imagined by the user; then c_R^prev may have an honest, valid signature (because the user feigned the RSU), but the certificate for the fake RSU is a forgery.

(a) A record pid_R^prev ↦ (pk_R^prev, sk_R^prev) has been recorded: Again, we construct an efficient adversary B against the EUF-CMA security of S along the same lines as above. But in this case, B needs to guess for which pid_R^prev the event (F1) eventually occurs. When the RSU with pid_R^prev registers itself and B, playing S^sys-sec_πP4TC, needs to provide pk_R^prev, it embeds the challenge as pk_R^prev := pk_C. Whenever B, playing the role of S^sys-sec_πP4TC, needs to issue a signature with respect to pk_R^prev, it does so using its external EUF-CMA oracle O^S_{pk,sk}. When the event (F1) occurs, B extracts (c_R^prev, s^prev) and σ_R^prev from the proof and outputs the forgery. N.b., (c_R^prev, s^prev) has never been signed with respect to pk_R^prev = pk_C by assumption.

(b) No record pid_R^prev ↦ (pk_R^prev, sk_R^prev, cert_R^prev) has been recorded: We construct an efficient adversary B against the EUF-CMA security of S. Internally, B runs Z^sys-sec in its head and simulates EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) for Z^sys-sec, i.e., B internally plays the role of S^sys-sec_πP4TC and F_P4TC. Externally, B plays the EUF-CMA security experiment with a challenger C and a signing oracle O^S_{pk,sk}. When B has to internally provide pk_T = (pk_T^cert, pk_T^R, pk_T^T) playing the role of S^sys-sec_πP4TC in the scope of TSP Registration, B embeds the external challenge as pk_T^cert := pk_C. Whenever B, playing the role of S^sys-sec_πP4TC in the scope of RSU Certification, needs to issue signatures with respect to pk_T^cert, it does so using its external EUF-CMA oracle O^S_{pk,sk}. When the event (F1) occurs, B extracts cert_R^prev and σ_R^cert from the proof and outputs the forgery. N.b.: cert_R^prev has never been signed by the TSP with respect to pk_T^cert = pk_C, as otherwise a mapping pid_R^prev ↦ (pk_R^prev, sk_R^prev, cert_R^prev) would have been recorded.

At this point we know that both commitments c_T and c_R^prev exist as out-commitments somewhere in TRDB. By assumption there is no single trdb with both commitments as out-commitments. Let trdb* denote the transaction entry with c_T^out* = c_T and let trdb** denote the transaction entry with c_R^out** = c_R^prev. We have

    trdb* = (..., λ*, ..., c_T^out*, d_T^out*, M_T^out*, ...)    with c_T^out* = c_T
    trdb** = (..., λ**, ..., c_R^out**, d_R^out**, M_R^out**)    with c_R^out** = c_R^prev

and trdb* ≠ trdb**. Because no failure event occurred before, the Augmented Transaction Graph and the Ideal Transaction Graph are in sync by Lemma E.27, TRDB is correct by definition, and hence all in- and out-commitments c_{T,i}^in, c_{T,i}^out are constant and equal to c_T* within the tree to which the node trdb* belongs. This implies that the nodes trdb* and trdb**, and therefore the commitments c_T and c_R^prev, belong to different trees. From Lemma E.4 we conclude that λ* ≠ λ** holds with overwhelming probability. On the one hand it follows that

    M_T^out* = (Λ*, pk_U^id*)    and    M_R^out** = (Λ**, ...)

are implicit openings with different wallet IDs. On the other hand the perfect correctness and extractability of the proof system yield

    M_T = (Λ, pk_U^id)    and    M_R^prev = (Λ, ...)

to be valid implicit openings to the same wallet ID. In summary, at least one of the commitments c_T^out* = c_T or c_R^out** = c_R^prev has two different implicit openings. We construct an efficient adversary B against the binding property of C1. Internally, B runs Z^sys-sec in its head and simulates EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) for Z^sys-sec, i.e., B internally plays the role of S^sys-sec_πP4TC and F_P4TC. When the event (F1) occurs, B searches the forest TRDB for the duplicate of the commitment and outputs both openings. □

The above lemma requires a remark. If the lemma is used to construct an efficient adversary B out of an environment Z^sys-sec that triggers the failure event (F1), the adversary B is fixated on the security game it plays externally. As B uses Z^sys-sec in a black-box fashion, it does not know how Z^sys-sec eventually causes the event (F1). But there are only finitely many options, and if the probability that Z^sys-sec distinguishes correctly is non-negligible, then at least one of the options occurs with non-negligible probability, too. In other words, B needs to push its luck that Z^sys-sec breaks the same underlying building block which B attacks externally. If B guesses wrong, B loses its external game, but the advantage is still non-negligible.

Lemma E.30. If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the events (F2) or (F3) (cp. Definition E.25) only occur with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Assume a Z^sys-sec exists that triggers (F2) or (F3) with non-negligible probability but not (F1), as otherwise S^sys-sec_πP4TC would have aborted earlier. Without loss of generality we only consider Debt Accumulation; the case of Debt Clearance is analogous. Let (s', π, φ, a_U, a_R, c_hid, c'_R, t) denote the message that was sent by Z^sys-sec and raised (F2) or (F3). Let Wit = (..., Λ, pk_U^id, ..., d_T, ..., c_T, ...) ← P2.ExtractW(CRS, td_epok, stmnt, π) denote the extracted witness. We observe the following facts:

• P2.Vfy(CRS_pok, stmnt, π) = 1 holds, as otherwise S^sys-sec_πP4TC would have let F_P4TC abort earlier.
• As P2 is perfectly sound, the equation C1.Open(CRS¹_com, (Λ, pk_U^id), c_T, d_T) = 1 holds.

As (F1) has not occurred, we know by Lemma E.29 that there is a unique trdb* = (..., λ*, ..., c_T^out*, d_T^out*, M_T^out*, ...) with c_T^out* = c_T and M_T^out* = (Λ*, pk_U^id*). Hence

    C1.Open(CRS¹_com, (Λ*, pk_U^id*), c_T, d_T^out*) = 1

holds, too. By assumption at least one of Λ* ≠ Λ or pk_U^id* ≠ pk_U^id holds. This immediately yields an efficient adversary B against the binding property of C1: B internally executes the experiment EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) and plays S^sys-sec_πP4TC and F_P4TC for Z^sys-sec until B observes event (F2) or (F3). Then B outputs c_T together with the two different openings M_T := (Λ, pk_U^id), d_T and M_T^out* = (Λ*, pk_U^id*), d_T^out*. □

Lemma E.31. If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the event (F4) (cp. Definition E.25) only occurs with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Assume a Z^sys-sec exists that triggers (F4) with non-negligible probability but not (F1), as otherwise S^sys-sec_πP4TC would have aborted earlier. Without loss of generality we only consider Debt Accumulation; the case of Debt Clearance is analogous. Let (s', π, φ, a_U, a_R, c_hid, c'_R, t) denote the message that was sent by Z^sys-sec and raised (F4). Let Wit = (..., s^prev, ..., c_R^prev, ..., σ_R^prev, ...) ← P2.ExtractW(CRS, td_epok, stmnt, π) denote the extracted witness. We observe the following facts:

• P2.Vfy(CRS_pok, stmnt, π) = 1 holds, as otherwise S^sys-sec_πP4TC would have let F_P4TC abort earlier.
• As P2 is perfectly sound, the equation S.Vfy(pk_R^prev, σ_R^prev, (c_R^prev, s^prev)) = 1 holds.

As (F1) has not occurred, we know by Lemma E.29 that there exists a unique trdb* = (..., s*, ..., pid_R^prev, ..., c_R^out*) with c_R^out* = c_R^prev, but s* ≠ s^prev. Due to the uniqueness of c_R^prev the message (c_R^prev, s^prev) has never been signed and thus σ_R^prev is a forgery. We show how to construct an efficient adversary B against the EUF-CMA security of S. Externally, B plays the EUF-CMA security experiment with a challenger C and a signing oracle O^S_{pk,sk}. Internally, B executes the experiment EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) in its head and plays S^sys-sec_πP4TC and F_P4TC for Z^sys-sec. B needs to guess for which pid_R^prev the event (F4) eventually occurs. When the RSU with pid_R^prev registers itself and B, playing S^sys-sec_πP4TC, needs to provide pk_R^prev, it embeds the challenge as pk_R^prev := pk_C. Whenever B, playing the role of S^sys-sec_πP4TC, needs to issue a signature with respect to pk_R^prev, it does so using its external EUF-CMA oracle O^S_{pk,sk}. When the event (F4) occurs, B extracts (c_R^prev, s^prev) and σ_R^prev from the proof and outputs the forgery. □

Lemma E.32. If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the event (F5) (cp. Definition E.25) does not occur for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Without loss of generality we only consider Debt Accumulation; the case of Debt Clearance is analogous. Let (s', π, φ, a_U, a_R, c_hid, c'_R, t) denote the message that was sent by Z^sys-sec and raised (F5). Let Wit ← P2.ExtractW(CRS, td_epok, stmnt, π) denote the extracted witness and parse (..., pk_U^id, ..., c_R^prev, ...) := Wit. As (F1) has not occurred, we know by Lemma E.29 that there is a unique trdb* = (..., pid_U*, ..., c_R^out*, ..., M_T^out*, ...) with c_R^out* = c_R^prev. We distinguish two cases:

(1) trdb* is the result of a run of Debt Accumulation or Debt Clearance: This immediately leads to a contradiction. The event (F3) has not occurred, as otherwise S^sys-sec_πP4TC would have aborted earlier. Hence pk_U^id = pk_U^id* holds and this implies that pk_U^id* does not exist in the bulletin board either. However, M_T^out* = (Λ*, pk_U^id*) is the result of a successful previous run of Debt Accumulation or Debt Clearance and hence pk_U^id* must exist in the bulletin board. Contradiction!

(2) trdb* is the result of a run of Wallet Issuing: The event (F3) has not occurred, as otherwise S^sys-sec_πP4TC would have aborted earlier. Hence pk_U^id = pk_U^id* holds and this implies that pk_U^id* does not exist in the bulletin board either. However, M_T^out* = (Λ*, pk_U^id*) is the result of Wallet Issuing. But Wallet Issuing asserts that pk_U^id* exists or aborts (cp. Fig. 45, Step 6c). Contradiction! □

Lemma E.33. If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the event (D1) (cp. Definition E.26) only occurs with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Assume a Z^sys-sec exists that triggers (D1) with non-negligible probability but not (F1), as otherwise S^sys-sec_πP4TC would have aborted earlier. Unveiling a different final balance b_bill in the scope of Debt Clearance implies that the balance b* of the (real) wallet diverges from the expected balance of the (ideal) wallet at some point along the path of transactions from the root node with b* = 0 (Wallet Issuing) to the leaf node with b* = b_bill (Debt Clearance). Without loss of generality we only consider a transition between inner nodes (i.e., Debt Accumulation); the transition from the next-to-last node to the leaf node (i.e., Debt Clearance) is analogous.

Let (s', π, φ, a_U, a_R, c_hid, c'_R, t) denote the message that was sent by Z^sys-sec and raised (D1). Let Wit = (X, ..., Λ, ..., B^prev, ..., U_1, ..., d_R^prev, ..., c_R^prev, ...) ← P2.ExtractW(CRS, td_epok, stmnt, π) denote the extracted witness. We observe the following facts:

• P2.Vfy(CRS_pok, stmnt, π) = 1 holds, as otherwise S^sys-sec_πP4TC would have let F_P4TC abort earlier.
• As P2 is perfectly sound, the equation C1.Open(CRS¹_com, M_R^prev, c_R^prev, d_R^prev) = 1 with M_R^prev := (Λ, B^prev, U_1, X) holds.

As (F1) has not occurred, we know by Lemma E.29 that there is a unique trdb* = (..., λ*, ..., c_R^out*, d_R^out*, M_R^out*) with c_R^out* = c_R^prev, and hence C1.Open(CRS¹_com, M_R^out*, c_R^out*, d_R^out*) = 1 holds, too. Let M_R^out* = (Λ*, g_1^{b*}, U_1*, g_1^{x*+1}). By assumption g_1^{b*} ≠ B^prev holds (the raise condition for event (D1)) and so does M_R^prev ≠ M_R^out*. This immediately yields an efficient adversary B against the binding property of C1: B internally executes the experiment EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n) in its head and plays S^sys-sec_πP4TC and F_P4TC for Z^sys-sec until B observes event (D1). Then B outputs c_R^prev together with the two different openings M_R^prev, d_R^prev and M_R^out*, d_R^out*. □

Please note that for an honest RSU R and a malicious user U, the blacklist bl_R (input of R) and the fraud detection ID φ (part of the message from U to R) are both given by Z^sys-sec. Moreover, Z^sys-sec provides a proof that φ = PRF(λ, x) has been evaluated correctly. Hence, the only way Z^sys-sec can achieve a discrepancy with respect to the abort behavior between the real and the ideal experiment is to manipulate x.

Lemma E.34. If P2 and P3 are perfectly sound, C1 is computationally binding and S is EUF-CMA secure, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the event (D2) (cp. Definition E.26) only occurs with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Identical to the proof of Lemma E.33 except for a different raise condition. □




Lemma E.35. If P2 is perfectly sound and C1 is computationally binding, then in a run of EXEC_{FP4TC, Gbb, S^sys-sec_πP4TC, Z^sys-sec}(1^n)

the event (D3) (cp. Definition E.26) only occurs with negligible probability for any PPT environment Z^sys-sec that is restricted to static corruption of users.

Proof. Please note that in the event of (D3) we already know that C1.Open(CRS¹_com, pk_U^id, c_hid, d_hid) = 1 holds, as otherwise F_P4TC would have aborted earlier. Let ω_U^pp* = (s*, c_hid*, d_hid*, pk_U^id*) ∈ Ω_U^pp denote the recorded entry with s* = s and c_hid* = c_hid. Obviously, C1.Open(CRS¹_com, pk_U^id*, c_hid*, d_hid*) = 1 holds, as otherwise the entry would never have been recorded. On the other hand, the user with pid_U and pk_U^id has never participated in a transaction with serial number s, because F_P4TC returned out_U = 0 to the simulator. Hence, pk_U^id ≠ pk_U^id* follows, which is a contradiction to the binding property of C1. A reduction to an adversary B can be constructed in the usual way. □

Taking all the aforementioned statements together, Theorem E.22 from the beginning of this section follows. For the sake of formal completeness we recall it again.

Theorem E.22 (System Security). Under the assumptions of Theorem E.1 and static corruption of a subset of the users,

    π_P4TC^{FCRS, Gbb} ≥_UC F_P4TC^{Gbb}

holds.

Proof. A direct consequence of Lemmas E.23, E.24 and E.27 to E.35. □
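Before moving on, the redundant edge definition exploited throughout the failure-event analysis can be illustrated with a small toy sketch. The record type, field names and checks below are ours, not the paper's actual data structures: each entry stores its serial number, its predecessor serial number, an in-commitment and an out-commitment, and the two induced edge sets must coincide.

```python
# Illustrative sketch: the Augmented Transaction Graph defines edges twice,
# via (s_prev, s) pairs and via an entry's in-commitment matching the
# predecessor's out-commitment. A failure event makes these sets differ.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Trdb:
    s: str                 # serial number (vertex)
    s_prev: Optional[str]  # predecessor serial number (None for a root)
    c_in: Optional[str]    # in-commitment (None for a root)
    c_out: str             # out-commitment

def edges_by_serial(db: list[Trdb]) -> set[tuple[str, str]]:
    return {(e.s_prev, e.s) for e in db if e.s_prev is not None}

def edges_by_commitment(db: list[Trdb]) -> set[tuple[str, str]]:
    by_out = {e.c_out: e.s for e in db}  # assumes out-commitments are unique
    return {(by_out[e.c_in], e.s) for e in db if e.c_in in by_out}

def graphs_coincide(db: list[Trdb]) -> bool:
    return edges_by_serial(db) == edges_by_commitment(db)

honest = [Trdb("s0", None, None, "c0"), Trdb("s1", "s0", "c0", "c1")]
assert graphs_coincide(honest)
# A corrupted user linking the serial number to one wallet but the
# commitment to another makes the two graph structures fall apart:
forked = [Trdb("s0", None, None, "c0"), Trdb("s0'", None, None, "c0'"),
          Trdb("s1", "s0", "c0'", "c1")]
assert not graphs_coincide(forked)
```

The lemmas above show that producing such a fork requires breaking the binding property of C1 or the EUF-CMA security of S.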

E.3 Proof of User Security and Privacy

In this section we show the following theorem.

Theorem E.36 (User Security and Privacy). Under the assumptions of Theorem E.1 and static corruption of a subset of the RSUs, the TSP and the SA,

    π_P4TC^{FCRS, Gbb} ≥_UC F_P4TC

holds.

The definition of the UC-simulator S^user-sec_πP4TC for Theorem E.36 can be found in Figs. 50 to 54. Please note that while the real protocol π_P4TC lives in the (F_CRS, G_bb)-model, the ideal functionality F_P4TC has no CRS. Hence, the CRS (but not the bulletin board) is likewise simulated by S^user-sec_πP4TC, giving it the lever to simulate the ZK proofs P1, P2 and P3, to equivoke C1 and to extract C2.

The overall proof idea is to define a sequence of hybrid experiments H_i together with simulators S_i and protocols π_i such that the first hybrid H_0 is identical to the real experiment and the last hybrid H_11 is identical to the ideal experiment. Each hybrid has the form H_i := EXEC_{π_i, Gbb, S_i, Z^user-sec}(1^n). We show that whenever Z^user-sec can distinguish between two consecutive hybrids with non-negligible probability, this yields an efficient adversary against one of the underlying cryptographic assumptions.

Hybrid H_0. The hybrid H_0 is defined as H_0 := EXEC_{π_0, Gbb, S_0, Z^user-sec}(1^n) with S_0 = A being identical to the dummy adversary and π_0 = π_P4TC. Hence, H_0 denotes the real experiment.
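The distinguishing advantage then telescopes over the hybrid chain in the standard way (our notation for the environment's output bit):

```latex
\left|\Pr\!\left[Z^{\mathrm{user\text{-}sec}}(H_0)=1\right]-\Pr\!\left[Z^{\mathrm{user\text{-}sec}}(H_{11})=1\right]\right|
\;\le\;
\sum_{i=0}^{10}\left|\Pr\!\left[Z^{\mathrm{user\text{-}sec}}(H_i)=1\right]-\Pr\!\left[Z^{\mathrm{user\text{-}sec}}(H_{i+1})=1\right]\right|
```

Hence, if each of the eleven summands is negligible, the real and the ideal experiment are indistinguishable.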


Simulator S^user-sec_πP4TC

Setup:
(1) Run a modified version of the algorithm CRS ← Setup(1^n) with SetupPoK being replaced by SetupSPoK, C1.Gen being replaced by C1.SimGen, and C2.Gen being replaced by C2.ExtGen.
(2) Record CRS, td_spok, td_eqcom and td_extcom.
(3) Set Ω_dsp := ∅.
(4) Set Ω_U^pp := ∅.
(5) Set AHTD := ∅.

DR Registration: Upon receiving (registering_dr, pid_DR), run (pk_DR, sk_DR) ← DRRegistration(CRS), return pk_DR to F_P4TC and record pid_DR ↦ (pk_DR, sk_DR).

TSP Registration: Distinguish two cases:
TSP is corrupted: (nothing to do as this is a local algorithm for a corrupted TSP)
TSP honest: Upon receiving (registering_tsp, pid_T, a_T), run (pk_T, sk_T, cert_T^R) ← TSPRegistration(CRS, a_T), return pk_T to F_P4TC and record pid_T ↦ (pk_T, sk_T, cert_T^R).

RSU Registration: Distinguish two cases:
RSU is corrupted: (nothing to do as this is a local algorithm for a corrupted RSU)
RSU honest: Upon receiving (registering_rsu, pid_R), run (pk_R, sk_R) ← RSURegistration(CRS), return pk_R to F_P4TC and record pid_R ↦ (pk_R, sk_R).

User Registration: Upon receiving (registering_user, pid_U), run (pk_U, sk_U) ← UserRegistration(CRS), return pk_U to F_P4TC and record pid_U ↦ (pk_U, sk_U).

RSU Certification: Distinguish four cases:
TSP and RSU honest: Upon receiving (certifying_rsu, pid_R, a_R) ...
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R) and pid_R ↦ (pk_R, sk_R); if any of these do not exist, let F_P4TC abort.
(2) Generate cert_R := (pk_R, a_R, σ_R^cert) with σ_R^cert ← S.Sgn(sk_T^cert, (pk_R, a_R)) faithfully.
(3) Update the record pid_R ↦ (pk_R, sk_R, cert_R).
TSP honest, RSU corrupted: Upon receiving (certifying_rsu, pid_R, a_R) ...
(1) Load the recorded pid_T ↦ (pk_T, sk_T, cert_T^R) and obtain pk_R from G_bb for pid_R; if any of these do not exist, let F_P4TC abort.
(2) Generate cert_R := (pk_R, a_R, σ_R^cert) with σ_R^cert ← S.Sgn(sk_T^cert, (pk_R, a_R)) faithfully.
(3) Record pid_R ↦ (pk_R, ⊥, cert_R).
(4) Output cert_R to Z^user-sec.
TSP corrupted, RSU honest: Upon receiving (cert_R) from Z^user-sec in the name of T with pid_T ...
(1) Load the recorded pid_R ↦ (pk_R, sk_R) and obtain pk_T from G_bb for pid_T; if any of these do not exist, let F_P4TC abort.
(2) Parse a_R and σ_R^cert from cert_R.
(3) If S.Vfy(pk_T^cert, σ_R^cert, (pk_R, a_R)) = 0, let F_P4TC abort.
(4) Call F_P4TC with input (certify, a_R) in the name of T with pid_T.
TSP and RSU corrupted: (nothing to do as Z^user-sec plays both parties)

Fig. 50. The simulator for User Security and Privacy


Simulator S^user-sec_πP4TC (cont.)

Wallet Issuing: Distinguish two cases:
TSP is honest: (nothing to do)
TSP is corrupted:
(1) Load the recorded pid_U ↦ (pk_U, sk_U) and obtain pk_T from G_bb for pid_T; if any of these do not exist, let F_P4TC abort.
(2) Upon receiving a_U, c''_ser, cert_T^R, c''_T ...
(a) Parse (pk_T^R, a_T, σ_T^cert) := cert_T^R.
(b) If S.Vfy(pk_T^cert, σ_T^cert, (pk_T^R, a_T)) = 0, abort.
(c) s'' ← C2.Extract(CRS²_com, c''_ser).
(d) Call F_P4TC with input (issue, a_U, ∅)^a in the name of T with pid_T.
(e) Obtain the TSP's output (s) from F_P4TC, and delay the output of the user.
(f) s' := s · s''^{-1}.
(3) Upon being requested by Z^user-sec to provide the second message ...
(a) r_0, ..., r_ℓ, r* ←R Z_p.
(b) e_i ← E.Enc(pk_DR, 1; r_i) for i = 0, ..., ℓ.
(c) e* ← E.Enc(pk_DR, 1; r*).
(d) σ* ← S.Sgn(sk_U^auth, (c''_T, e_0, ..., e_ℓ, e*)).
(e) (c'_T, d'_T) ← C1.Com(CRS¹_com, (0, 0)).
(f) (c'_R, d'_R) ← C1.Com(CRS¹_com, (0, 0, 0, 0)).
(g) stmnt := (pk_U^id, pk_DR, e*, e_0, ..., e_ℓ, c'_R, c'_T).
(h) π ← P1.SimProof(CRS_pok, td_spok, stmnt).
(i) Output (pk_U, s', e*, σ*, π, e_0, ..., e_ℓ, c'_R, c'_T) to Z^user-sec.
(4) Upon receiving (s'', d''_ser, λ'', c_R, d''_R, σ_R, c_T, d''_T, σ_T) from Z^user-sec in the name of T with pid_T ...^b
(a) Set htd := (λ'', c_T, d''_T, e_0, ..., e_ℓ, e*, σ*) and insert (pid_U, s, htd) into AHTD.
(b) Create the real token τ faithfully.
(c) If WalletVerification(pk_T, pk_U, τ) = 0, let F_P4TC abort.
(d) Let F_P4TC return the delayed output to the user.

^a Use the empty set as blacklist.
^b If no message is received, let F_P4TC abort; if blacklisted is received, override F_P4TC's delayed output for the user with blacklisted.

Fig. 51. The simulator for User Security and Privacy (cont. from Fig. 50)
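Step 2f above (and hybrid H_7 later) relies on the serial number being a multiplicative sharing s = s' · s'' between the two parties: the user commits to a random share s'' first, so whoever knows the target s can always solve for the matching issuer share. A toy sketch with our own toy group parameters, not the actual pairing group:

```python
# Illustrative sketch (toy multiplicative group modulo a prime, our own
# parameters): the final serial number s factors as s = s' * s''. Given s
# (chosen by F_P4TC) and the extracted user share s'', the simulator derives
# the issuer share as s' := s * (s'')^{-1}.
import secrets

P = 2**61 - 1  # toy prime standing in for the group order

s_user = 1 + secrets.randbelow(P - 1)  # user's secret share s'' (committed via c''_ser)
s = 1 + secrets.randbelow(P - 1)       # target serial number s from F_P4TC

s_issuer = (s * pow(s_user, -1, P)) % P  # s' := s * (s'')^{-1}
assert (s_issuer * s_user) % P == s      # shares recombine to s
```

Because s'' is uniform and hidden inside the extractable commitment, s' alone reveals nothing about the final serial number to the corrupted TSP beyond what s itself leaks.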

Hybrid H_1. In hybrid H_1 we modify S_1 such that CRS_pok is generated by SetupSPoK, CRS¹_com is generated by C1.SimGen and CRS²_com is generated by C2.ExtGen. Additionally, S_1 initializes the internal sets Ω_dsp := ∅, Ω_U^pp := ∅ and AHTD := ∅ and records the respective entries as the final simulator S^user-sec_πP4TC does.

Hybrid H_2. In hybrid H_2 we replace the code in the tasks DR/TSP/RSU/User Registration of the protocol π_2 such that the simulator S_2 is asked for the keys. This equals the method by which the keys are generated in the final ideal experiment.

Hybrid H_3. In hybrid H_3 the task RSU Certification is modified. For an honest T or an honest R the code of π_3 is replaced by the code of a dummy party. The simulator S_3 proceeds as the final simulator S^user-sec_πP4TC would do.


Simulator S^user-sec_πP4TC (cont.)

Debt Accumulation: Distinguish two cases:
RSU is honest: (nothing to do)
RSU is corrupted:
(1) Obtain pk_T from G_bb for pid_T; if it does not exist, let F_P4TC abort.
(2) Upon receiving u_2, c''_ser, cert_R from Z^user-sec in the name of R with pid_R, do ...
(a) Parse (pk_R, a_R, σ_R^cert) := cert_R.
(b) If S.Vfy(pk_T^cert, σ_R^cert, (pk_R, a_R)) = 0, abort.
(c) s'' ← C2.Extract(CRS²_com, c''_ser).
(d) Call F_P4TC with input (pay_toll, ∅)^a in the name of R with pid_R.
(e) Obtain the RSU's output (s, φ, a_U, a_R) from F_P4TC, and delay the output of the user.
(f) s' := s · s''^{-1}.
(3) Upon being requested by Z^user-sec to provide the second message ...
(a) Run (c_hid, d_hid^sim) ← C1.SimCom(CRS¹_com) and append (s, c_hid, d_hid^sim) to Ω_U^pp.
(b) (c'_R, d'_R) ← C1.Com(CRS¹_com, (0, 0, 0, 0)).
(c) Check if any (φ, t', u'_2) ∈ Ω_dsp has been recorded previously with φ being used as key. If no, pick t ←R Z_p. If yes, load the recorded pid_U ↦ (pk_U, sk_U) and set t := t' + sk_U(u_2 − u'_2). Insert (φ, t, u_2) into Ω_dsp.
(d) stmnt := (pk_T, pk_T^cert, φ, a_U, a_R^prev, c_hid, c'_R, t, u_2).
(e) π ← P2.SimProof(CRS_pok, td_spok, stmnt).
(f) Output (s', π, φ, a_U, a_R^prev, c_hid, c'_R, t) to Z^user-sec.
(4) Upon being asked to provide a price for (a_U, a_R, a_R^prev), return a price p as the dummy adversary would do.
(5) Upon receiving (s'', d''_ser, c_R, d''_R, σ_R, p) from Z^user-sec ...^b
(a) d_R := d'_R · d''_R.
(b) If C1.Open(CRS¹_com, (1, g_1^p, 1, g_1), c_R, d_R) = 0, let F_P4TC abort.
(c) If S.Vfy(pk_R, σ_R, (c_R, s)) = 0, let F_P4TC abort.
(d) Let F_P4TC return the delayed output to the user.

^a Use the empty set as blacklist.
^b If no message is received, let F_P4TC abort; if blacklisted is received, override F_P4TC's delayed output for the user with blacklisted.

Fig. 52. The simulator for User Security and Privacy (cont. from Fig. 50)
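The consistency rule in Step 3c (and its counterpart in Debt Clearance) can be illustrated with a small sketch. The names and the toy modulus below are ours, not part of the protocol: the simulator answers double-spending challenges u_2 for the same fraud detection ID φ as if all tags lay on a line with slope sk_U, without ever knowing the line's intercept.

```python
# Illustrative sketch (our own toy code): for each fraud detection ID phi,
# the first tag t is uniformly random; every later tag for the same phi is
# derived via t := t' + sk_U * (u2 - u2'), keeping all tags on one line.
import secrets

P = 2**61 - 1  # toy prime modulus standing in for the group order p

def make_tag_oracle(sk_u: int):
    omega_dsp = {}  # phi -> (t, u2), mirrors the set Omega_dsp

    def tag(phi: bytes, u2: int) -> int:
        if phi not in omega_dsp:
            t = secrets.randbelow(P)          # fresh phi: uniformly random t
        else:
            t_prev, u2_prev = omega_dsp[phi]  # repeated phi: stay consistent
            t = (t_prev + sk_u * (u2 - u2_prev)) % P
        omega_dsp[phi] = (t, u2)
        return t

    return tag

# Two answers for the same phi lie on a line with slope sk_U, so a
# double-spending detector can recover sk_U from two transcripts:
sk_u = secrets.randbelow(P)
tag = make_tag_oracle(sk_u)
phi = b"phi-1"
t1, u1 = tag(phi, 17), 17
t2, u2 = tag(phi, 42), 42
recovered = ((t2 - t1) * pow(u2 - u1, -1, P)) % P
assert recovered == sk_u
```

The final lines show why re-using a wallet state is punishable: two tags for the same φ reveal sk_U, which is exactly what the Double-Spending Detection task outputs.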

Hybrid H_4. Hybrid H_4 mostly modifies the task User Blacklisting. If the task Wallet Issuing is executed, π_4 still runs the real protocol as in π_P4TC, but the simulator S_4 already records htd and inserts (pid_U, s, htd) into AHTD. Moreover, S_4 additionally checks whether the set of hidden trapdoors HTD provided by the environment is a subset of the hidden trapdoors that have actually been generated in the scope of Wallet Issuing for this particular user and have thus been recorded by the simulator (cp. Fig. 54, Steps 2b to 2d). This equals the behavior of the final simulator S^user-sec_πP4TC.^31

^31 Note that these modifications only have an effect if T is malicious.


Simulator S^user-sec_πP4TC (cont.)

Debt Clearance: Distinguish two cases:
TSP is honest: (nothing to do)
TSP is corrupted:
(1) Load the recorded pid_U ↦ (pk_U, sk_U) and obtain pk_T from G_bb for pid_T; if any of these do not exist, let F_P4TC abort.
(2) Upon receiving u_2 from Z^user-sec in the name of T with pid_T, do ...
(a) Call F_P4TC with input (clear_debt) in the name of T with pid_T.
(b) Obtain the leaked a_R^prev.
(c) Obtain the TSP's output (pid_U, φ, b_bill) from F_P4TC, and delay the output of the user.
(3) Upon being requested by Z^user-sec to provide the second message ...
(a) Check if any (φ, t', u'_2) ∈ Ω_dsp has been recorded previously with φ being used as key. If no, pick t ←R Z_p. If yes, load the recorded pid_U ↦ (pk_U, sk_U) and set t := t' + sk_U(u_2 − u'_2). Insert (φ, t, u_2) into Ω_dsp.
(b) stmnt := (pk_U^id, pk_T, pk_T^cert, φ, a_U, a_R^prev, g_1^{b_bill}, t, u_2).
(c) π ← P3.SimProof(CRS_pok, td_spok, stmnt).
(d) Output (pk_U, π, φ, a_U, a_R^prev, b_bill, t) to Z^user-sec.
(4) Upon receiving (OK) from Z^user-sec,^a let F_P4TC return the delayed output to the user.

^a If no message is received, let F_P4TC abort.

Fig. 53. The simulator for User Security and Privacy (cont. from Fig. 50)

Hybrid H_5. In H_5 the simulator S_5 replaces all proofs in a message from a user to a TSP or RSU by a simulated proof.^32

Hybrid H_6. In H_6 the simulator S_6 prepares a pool of independently and uniformly drawn fraud detection IDs Φ_{pid_U, s_root} := {φ_1, ..., φ_{x_blR}} every time a new wallet with root serial number s_root is issued for a user with pid_U. Moreover, S_6 keeps a partial mapping f_Φ^pseudo that maps (pid_U, φ^real)-pairs to φ^ideal. In all messages that the simulator delivers in the name of a user with pid_U as the pretended sender, the simulator substitutes "real" fraud detection IDs φ^real with "ideal" fraud detection IDs φ^ideal consistently. To this end, whenever the simulator forwards a message and encounters a fraud detection ID φ = φ^real, the simulator S_6 first checks if f_Φ^pseudo(pid_U, φ^real) is already defined. If so, φ^ideal = f_Φ^pseudo(pid_U, φ^real) is taken as the substitute. If not, S_6 picks an element φ^ideal ∈ Φ_{pid_U} that is not yet used as an image for f_Φ^pseudo(pid_U, ·) and sets f_Φ^pseudo(pid_U, φ^real) := φ^ideal. If User Blacklisting is invoked for the user with pid_U and the consistency check introduced in H_4 succeeds, the simulator S_6 does not deliver the message HTD to DR but instead replies to T with Φ_{pid_U} in the name of DR for every wallet whose root serial number s is an element of AHTD'.

Some remarks about H_6 are in order. In H_6 an honest user still runs almost the real protocol, i.e., an honest user sends fraud detection IDs that are generated by PRF for a seed that S_6 does not know. In the step from H_5 to H_6 these IDs are replaced by uniformly drawn IDs as in the ideal model. However, S_6 must replace the IDs consistently in case Z^user-sec lets a user re-use an old wallet. If Z^user-sec invokes the task User Blacklisting, the simulator S_6 simply returns the prepared pool of fraud detection IDs. In particular, this implies that DR does not

^32 This modification only has an effect if T or R is malicious.


Simulator S_{π_P4TC}^{user-sec} (cont.)

Prove Participation:
(1) Load the recorded pid_U ↦ (sk_U, pk_U); if this does not exist let F_P4TC abort.
(2) Upon receiving S_R^pp from Z^user-sec in the name of SA . . .
    (a) Call F_P4TC with input (prove_participation, pid_U, S_R^pp).
    (b) Obtain the SA's output (out) from F_P4TC.
    (c) If out = NOK, abort.
    (d) Pick ω_U^pp = (s, c_hid, d_hid^sim) from Ω_U^pp such that s ∈ S_R^pp; if this does not exist abort.
    (e) Parse sk_U^id from sk_U.
    (f) Equivoke d_hid ← C1.Equiv(CRS_com^1, sk_U^id, c_hid, d_hid^sim).
    (g) Output (s, c_hid, d_hid) to Z^user-sec as message from U to SA.

Double-Spending Detection: Upon being asked to provide a proof for pid_U, look up pid_U ↦ (pk_U, sk_U), parse sk_U^id from sk_U and return sk_U^id.

Guilt Verification: Upon being asked by F_P4TC to provide out for (pid_U, π) . . .
(1) Receive pk_U from G_bb for pid_U and parse (pk_U^id, pk_U^auth) := pk_U.
(2) If g_1^π = pk_U^id, then return out := OK, else out := NOK to F_P4TC.

User Blacklisting: Distinguish two cases:
TSP honest: (nothing to do)
TSP corrupted:
(1) Load the recorded pid_DR ↦ (pk_DR, sk_DR).
(2) Upon receiving (pk_U, HTD) from Z^user-sec . . .
    (a) Obtain pid_U from G_bb for pk_U.
    (b) AHTD′ := {ahtd′ ∈ AHTD | ahtd′ = (pid′_U, s′, htd′) with pid′_U = pid_U ∧ htd′ ∈ HTD}.
    (c) HTD′ := {htd′ | (·, ·, htd′) ∈ AHTD′}.
    (d) Assert that HTD = HTD′, else abort.
    (e) Call F_P4TC with (blacklist_user, pid_U).
    (f) Upon being asked by F_P4TC to provide S_root, return S_root := {s | (·, s, ·) ∈ AHTD′}.
    (g) Forward output of F_P4TC to Z^user-sec.

Fig. 54. The simulator for User Security and Privacy (cont. from Fig. 50)

need to decrypt any hidden trapdoor htd, which is a crucial point for hybrid H8, in which the encryption of the wallet ID is replaced by an encryption of zero.

Hybrid H7. H7 modifies the tasks of Wallet Issuing and Debt Accumulation. The code of π7 for the user is modified such that it does not send s′, but randomly picks s and sends it to S7. Then S7 extracts s″ ← C2.Extract(CRS_com^2, c_ser″), calculates s′ := s · (s″)^{−1}, and inserts s′ into the message from the user to the TSP or RSU, respectively.

Hybrid H8. In the scope of Wallet Issuing the simulator S8 replaces e*, e_i for i = 0, ..., ℓ by encryptions of zero and c′_T, c′_R by commitments to zero. This equals the behavior of the final simulator S_{π_P4TC}^{user-sec}.

Hybrid H9. In H9, in the scope of Debt Accumulation and Debt Clearance, the simulator S9 replaces t in the message from the user. If no (φ, t′, u′_2) ∈ Ω_dsp has been recorded previously, S9 picks t ← Z_p uniformly at random; else S9 sets t := t′ + sk_U(u_2 − u′_2). Finally, S9 inserts (φ, t, u_2) into Ω_dsp. This equals the behavior of the final simulator S_{π_P4TC}^{user-sec}.

Hybrid H10. Within the scope of Debt Accumulation the simulator S10 runs (c′_R, d′_R) ← C1.Com(CRS_com^1, (0, 0, 0, 0)) and replaces the original c′_R in the message from the user to the TSP.

Hybrid H11. The hybrid H11 modifies Debt Accumulation and Prove Participation. In Debt Accumulation, S11 runs (c_hid, d_hid^sim) ← C1.SimCom(CRS_com^1) and appends (s, c_hid, d_hid^sim) to Ω_U^pp. In Prove Participation the code of the honest user is replaced by code that just checks if the user has a matching and correct (s*, c*_hid, d*_hid) and sends OK or NOK, respectively, to S11. If S11 receives OK from the user, it picks ω_U^pp = (s, c_hid, d_hid^sim) from Ω_U^pp such that s ∈ S_R^pp. Furthermore, it parses (sk_U^id, sk_U^auth) := sk_U, runs d_hid ← C1.Equiv(CRS_com^1, sk_U^id, c_hid, d_hid^sim), and sends (s, c_hid, d_hid) to T. Again, this equals the behavior of the final simulator S_{π_P4TC}^{user-sec}.

Please note that the combination of all modifications from H0 to H11 yields H11 = EXEC_{π_11, G_bb, S_11, Z^user-sec}(1^n) = EXEC_{F_P4TC, G_bb, S_{π_P4TC}^{user-sec}, Z^user-sec}(1^n).
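The double-spending mechanism behind H9 can be illustrated in a few lines of code: t is a point on a line with slope sk_U, so a single transcript perfectly hides sk_U (the fresh u_1 acts as a one-time pad), while two transcripts sharing the same fraud detection ID reveal it. The sketch below is only an illustration, not the paper's actual scheme: it assumes a toy prime field and the illustrative equation t = u_1 + sk·u_2, which is term-by-term consistent with the update rule t := t′ + sk_U(u_2 − u′_2) used by S9.

```python
# Sketch of double-spending detection in Z_p (toy parameters, for
# illustration only). t = u1 + sk * u2; two points on the same line
# (same u1, i.e., a re-used wallet state) determine its slope sk.
p = 2**61 - 1  # toy Mersenne prime (assumption, not the paper's group order)

def transcript(sk, u1, u2):
    """One transaction transcript: the challenge u2 and response t."""
    return (u2, (u1 + sk * u2) % p)

def extract_sk(u2a, ta, u2b, tb):
    """Given two transcripts with the same u1, recover sk as the slope."""
    return ((ta - tb) * pow(u2a - u2b, -1, p)) % p
```

With a fresh u_1 per transaction, (u_2, t) is information-theoretically independent of sk; re-using u_1 (double-spending) makes extract_sk output the user's secret key, matching the Double-Spending Detection task.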

We are now prepared to give the proof of Theorem E.36. We do not spell out the reductions in detail, but only give hints.

Proof of Theorem E.36. From H0 to H1: This hop only changes how the CRS is created during the setup phase. This is indistinguishable for CRS_pok, CRS_com^1, and CRS_com^2 (see the composable zero-knowledge property of Definition A.4 and the equivocality and extractability properties of Definition A.6, resp., condition (a) each).

From H1 to H2: This hop does not change anything, as S2 runs the same key generation algorithm as the real protocol does for honest parties.

From H2 to H3: Again, this hop only changes which party runs which part of the code, but this has no effect on the view of Z^user-sec.

From H3 to H4: This hop introduces an additional check. Assume for the sake of contradiction that there is a Z^user-sec that can distinguish between H3 and H4 with non-negligible advantage. This can only happen if Z^user-sec triggers an abort in H4 due to a failing check HTD = {htd | (pid_U, s, htd) ∈ AHTD}, while DR does not abort in H3. However, this immediately yields an efficient adversary against the EUF-CMA security of the signature scheme S or against the binding property of the commitment scheme C2. The adversary can only trigger a difference if it gets DR to evaluate the pseudo-random function on a seed λ in the real experiment that has not been issued as a legitimate wallet ID for the particular user and thus has no corresponding mapping in the ideal experiment. (Note that this "trivial" attack does not yield any practical benefit for a corrupted TSP in the real world: it implies that the TSP avoids completely blacklisting a user and blacklists the user only on a subset of its wallet versions, or none. However, this would be a noticeable difference from what happens in the ideal experiment.) As λ = λ′ + λ″ holds, tampering with λ implies that λ′ or λ″ must have been modified, too. The former is the user's share of the wallet ID and has been encrypted under DR's public key and then signed by the user. The latter is the TSP's share of the wallet ID, to which the TSP has committed itself as c″_T during Wallet Issuing and which has then been signed by the user.

From H4 to H5: This game hop replaces the real proofs by simulated proofs. To this end, we have to consider a sequence of sub-hybrids, one for each of the different ZK proof systems P1, P2 and P3. In the first sub-hybrid all proofs for P1 are replaced by simulated proofs, in the second sub-hybrid all proofs for P2 are replaced, and finally all proofs for P3. Assume there exists a Z^user-sec that notices a difference between H4 and the first sub-hybrid. Then we can construct an adversary B that has a non-negligible advantage Adv_{POK,B}^{pok-zk}(n). Internally, B runs Z^user-sec

and plays the protocol and simulator for Z^user-sec. All calls of the simulator to P1.Prove are forwarded by B to its own oracle in the external challenge game, which is either P1.Prove or P1.SimProof. B outputs whatever Z^user-sec outputs. The second and third sub-hybrids follow the same line, but this time B internally needs to generate simulated proofs for the proof system that has already been replaced in the previous sub-hybrid. As B gets the simulation trapdoor as part of its input in the external challenge game, B can do so.

From H5 to H6: In this hop the pseudo-random fraud detection IDs are replaced by uniformly drawn IDs. Again, we proceed by a sequence of sub-hybrids. In each sub-hybrid the fraud detection IDs of one particular wallet are replaced. If Z^user-sec can distinguish between two of the sub-hybrids, this immediately yields an efficient adversary B against the pseudo-randomness game as defined in Definition A.13. Internally, B runs Z^user-sec and plays the protocol and simulator for Z^user-sec. Externally, B interacts with an oracle that is either a truly random function or a PRF for an unknown seed. Whenever B, playing the simulator for Z^user-sec, internally needs to draw a fraud detection ID for the particular wallet, B uses its external oracle. B outputs whatever Z^user-sec outputs.

From H6 to H7: This hop does not change anything from the perspective of Z^user-sec, as C2 is perfectly extractable. The change is purely syntactical and pushes the simulator closer to the final one.

From H7 to H8: In this hop the commitments c′_T, c′_R and the encryptions e*, e_i for i = 0, ..., ℓ of the user's share of the seed are replaced with zero-messages for every user that participates in the system. To this end, the hop from H7 to H8 is further split into a sequence of sub-hybrids, with each sub-hybrid replacing all the messages of only one particular (user) wallet. The sub-hybrids are ordered according to the invocations of Wallet Issuing.
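The oracle interface used in the hop from H5 to H6 can be sketched as follows: the reduction B never learns the PRF seed; it simply queries an external oracle, which is either a PRF under an unknown seed or a truly random function, whenever a fraud detection ID for the target wallet is needed. HMAC-SHA256 stands in for the PRF below; this choice and all names are illustrative assumptions, not the paper's instantiation.

```python
import hmac, hashlib, secrets

def make_prf_oracle():
    """Oracle backed by a PRF with a seed unknown to the distinguisher."""
    seed = secrets.token_bytes(32)
    return lambda x: hmac.new(seed, x, hashlib.sha256).digest()

def make_random_oracle():
    """Oracle backed by a truly random function (lazily sampled table)."""
    table = {}
    def oracle(x):
        if x not in table:
            table[x] = secrets.token_bytes(32)
        return table[x]
    return oracle

def next_fraud_id(oracle, counter):
    """B draws the wallet's i-th fraud detection ID via the external oracle
    instead of evaluating the PRF itself."""
    return oracle(counter.to_bytes(8, "big"))
```

Both oracles answer repeated queries consistently, so B's simulation is perfect in either case and B's distinguishing advantage equals Z^user-sec's advantage between the two sub-hybrids.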
Assume Z^user-sec can distinguish between H7 and H8 with non-negligible advantage. This yields either an efficient adversary B against the IND-CPA security of the encryption scheme E or one against the hiding property of C1. We only sketch the reduction to the IND-CPA security of E. Internally, B runs Z^user-sec and plays the roles of all parties and the simulator for Z^user-sec. Externally, B plays the IND-CPA game. When B, playing the role of the simulator, needs to provide the public key in the scope of DR Registration, it embeds the challenge key pk_DR := pk_C. B needs to guess the index of the sub-hybrid that causes a non-negligible difference, i.e., B needs to guess which (user) wallet "makes the difference". For the first (i − 1) invocations of Wallet Issuing, B encrypts the true seed; in the i-th invocation B embeds the external challenge; and B encrypts zero for the remaining invocations of Wallet Issuing. B outputs whatever Z^user-sec outputs.

From H8 to H9: This hop is statistically identical. As long as no double-spending has occurred, the user chooses a fresh u_1 in every transaction, and thus a single point (u_2, t) is information-theoretically independent of sk_U.

From H9 to H10: This hop is indistinguishable by the same argument as from H7 to H8.

From H10 to H11: In this hop the simulator S11 sends simulated commitments c_hid for the hidden user ID instead of commitments to the true values, and later S11 equivokes these commitments to the correct pk_U on demand, if Z^user-sec triggers Prove Participation. Again, if Z^user-sec has a non-negligible advantage in distinguishing between H10 and H11, then an efficient adversary B can be constructed against the hiding property and equivocability of C1. □
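The equivocation used from H10 to H11 can be sketched with a Pedersen-style commitment, written additively over a toy prime field. This is only an illustration of the algebra behind C1.SimCom/C1.Equiv (discrete logarithms are trivial in this toy group, so it is not a secure instantiation, and all parameters are assumptions): the simulator commits without fixing a message, and the trapdoor later yields randomness that opens the commitment to any desired value.

```python
import secrets

# Pedersen-style commitment written additively in Z_q: c = m*g + r*h,
# where h = td*g and td is the equivocation trapdoor. Toy parameters.
q = (1 << 61) - 1                    # toy prime order (assumption)
g = 2                                # "generator"
td = secrets.randbelow(q - 1) + 1    # trapdoor known only to the simulator
h = (td * g) % q

def commit(m, r):
    """Ordinary commitment to message m with randomness r."""
    return (m * g + r * h) % q

def sim_com():
    """SimCom: commit without knowing the message; keep k as the state."""
    k = secrets.randbelow(q)
    return (k * g) % q, k

def equiv(k, m):
    """Equiv: find randomness r such that commit(m, r) equals the
    simulated commitment: solve m + td*r = k for r in Z_q."""
    return ((k - m) * pow(td, -1, q)) % q
```

Since m + td·r = m + (k − m) = k holds for the computed r, the simulated commitment opens correctly to any m, which is exactly what S11 exploits when it later equivokes c_hid to the user's true pk_U.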