Proactive Secure Multiparty Computation with a Dishonest Majority

Karim Eldefrawy 1, Rafail Ostrovsky 2, Sunoo Park 3, and Moti Yung 4

1 Computer Science Laboratory, SRI International
2 Department of Computer Science and Department of Mathematics, UCLA
3 Department of Computer Science, MIT
4 Department of Computer Science, Columbia University

Abstract. Secure multiparty computation (MPC) protocols enable n distrusting parties to perform computations on their private inputs while guaranteeing confidentiality of inputs (and outputs, if desired) and correctness of the computation, as long as no adversary corrupts more than a threshold t of the n parties. Existing MPC protocols assure perfect security for t ≤ ⌈n/2⌉ − 1 active corruptions with termination (i.e., robustness), or security up to t = n − 1 under cryptographic assumptions (with detection of misbehaving parties). However, when computations involve secrets that have to remain confidential for a long time, such as cryptographic keys, or when dealing with strong and persistent adversaries, such security guarantees are not enough. In these situations, all parties may be corrupted over the lifetime of the secrets used in the computation, and the threshold t may be violated over time (even as portions of the network are being repaired or cleaned up). Proactive MPC (PMPC) addresses this stronger threat model: it guarantees correctness and input privacy in the presence of a mobile adversary that controls a changing set of parties over the course of a protocol, and could corrupt all parties over the lifetime of the computation, as long as no more than t are corrupted in each time window (called a refresh period). The threshold t in PMPC represents a tradeoff between the adversary's penetration rate and the cleaning speed of the defense tools (or rebooting of nodes from a clean image), rather than being an absolute bound on corruptions. Prior PMPC protocols only guarantee correctness and confidentiality in the presence of an honest majority of parties: an adversary that corrupts even a single additional party beyond the n/2 − 1 threshold, even if only passively and temporarily, can learn all the inputs and outputs; and if the corruption is active rather than passive, then the adversary can even compromise the correctness of the computation.
In this paper, we present the first feasibility result for constructing a PMPC protocol secure against a dishonest majority. To this end, we develop a new PMPC protocol that is robust and secure against t < n − 2 passive corruptions when there are no active corruptions, and secure but non-robust (with identifiable aborts) against t < n/2 − 1 active corruptions when there are no passive corruptions. Moreover, our protocol is secure (with identifiable aborts) against mixed adversaries controlling both passively and actively corrupted parties, provided that if there are k active corruptions, there are fewer than n − k − 1 total corruptions.

1 Introduction

Secure multiparty computation (MPC) protocols allow a set of distrusting parties, each holding private inputs, to jointly and distributedly compute a function of the inputs while guaranteeing correctness of its evaluation, and privacy of inputs (and outputs, if desired) for honest parties. The study of secure computation combines distributed computing paradigms with security methodologies. It was initiated by [Yao82] for two parties and by [GMW87] for many parties; both of these works relied on cryptographic primitives. The information-theoretic setting was introduced by [BGW88] and [CCD88], which, assuming private channels, constructed information-theoretically secure MPC protocols tolerating up to n/3 malicious parties. Assuming a broadcast channel, [RB89] constructs a protocol

that can tolerate up to n/2 malicious parties. These thresholds, n/3 and n/2, are optimal in the information-theoretic setting, in their respective communication models. In the context of public key cryptography, schemes for enhancing distributed trust, e.g., threshold encryption and threshold signatures, are a special case of MPC, e.g., [FGMY97a,FGMY97b,Rab98,CGJ+99,FMY01,Bol03,JS05,JO08,ADN06]. Also, when the computation to be performed via MPC involves private keys, e.g., for threshold decryption or signature generation, it is of utmost importance for trustworthy operation to guarantee the highest possible level of corruption tolerance, since confidentiality of cryptographic keys should be ensured for a long time (e.g., years). Constructing MPC protocols that guarantee security against stronger adversaries while satisfying low communication and computation complexity bounds has seen significant progress, e.g., [IKOS08,DIK+08,DIK10,BFO12,OY91,BELO14,BELO15]. While enforcing an honest majority bound on the adversary's corruption limit renders the problem (efficiently) solvable, it is often criticized, from a distributed systems point of view, as unrealistic for protocols that require long-term security of shared secrets used in the computation, for very long computations (i.e., reactive operation, typical in systems maintenance), or for protocols that may be targeted by nation-state adversaries (often called "Advanced Persistent Threats"). With the advancement of cloud hosting of security services, and of online exchanges for cryptocurrencies that require trustworthy services protected by their distributed nature, this criticism is well founded. This concern is especially relevant when considering so-called "reactive" functionalities that never stop executing, e.g., continuously running control loops that perform threshold decryption or signature generation via a secret shared key.
Such long-running reactive functionalities will become increasingly important for security in always-on cloud applications: example settings could include the use of MPC to compute digital signatures in online financial transactions between large institutions, or to generate securely co-signed cryptocurrency transactions via secret-shared (or otherwise distributed) keys [GGN16]. In both these cases, one should expect persistent, strong adversaries to continuously attack the parties involved in the MPC protocol; given enough time, vulnerabilities in the underlying software (or even some hardware) will eventually be found, and the cryptographic keys may be compromised. An approach to deal with an adversary's ability to eventually corrupt all parties is the proactive security model [OY91]. The proactive security model introduces the notion of a mobile adversary, motivated by the persistent corruption of participating parties in a distributed computation and the continuous race between parties' corruption and recovery. A mobile adversary is one that can corrupt all parties in a distributed protocol over the course of a protocol execution, but with the following limitations: (1) only a constant fraction of parties can be corrupted during any round, and (2) parties periodically get rebooted to a clean initial state — in a fashion designed to mitigate the total adversarial corruption at any given time — guaranteeing that some fraction of honest parties will be maintained as long as the

corruption rate is not more than the reboot rate.[5] The [OY91] model also assumes that an adversary does not have the ability to predict or reconstruct the randomness used by parties in any uncorrupted period of time, as demarcated by rebooting; in other words, a reboot entails erasing all previous state. This paper's main goal is to answer the following basic question: Is it feasible to construct a proactive MPC protocol for the dishonest majority setting?

1.1 Contributions

We answer this question in the affirmative by developing the first proactive secure multiparty computation (PMPC) protocol that is secure in the presence of a dishonest majority. Our new protocol is, first, secure and robust against t < n − 2 passive adversaries (parties which follow the protocol but leak what they know) when there are no active corruptions (arbitrarily misbehaving parties) and when parties are serially rebooted. Second, the same protocol preserves secrecy but is unfair (with identifiable aborts) against t < n/2 − 1 active adversaries when there are no additional passive corruptions. Third, the protocol is also secure (but non-robust, with identifiable aborts) against mixed adversaries that control a combination of passively and actively corrupted parties, such that if there are k active corruptions there are fewer than n − k − 1 total corruptions.[6] We note that the number of parties we start from is n − 1, not n, because we assume that parties may be serially rebooted and need recovery from the remaining n − 1 parties. The threshold t is n − 3 and not n − 2 because, in the refresh protocol, the secret shared via the randomizing polynomials is always 0, so the free coefficient of those polynomials is an additional point that the adversary knows; hence we can tolerate one fewer corruption than in the non-proactive gradual secret sharing case. Our design and analysis require new ideas, since the security guarantees of all existing PMPC protocols do not apply in the case of a dishonest passive majority, or in the case of mixed adversaries that may form a majority as described above.
That is, all existing PMPC protocols can only guarantee secrecy in the presence of an honest majority with at most n/2 − 1 total corruptions; an adversary that corrupts a single additional party beyond the n/2 − 1 threshold, even if only passively and only for a short period of time, obtains all the shared secrets, the inputs of the parties, and the intermediate and final results of the computation. Our PMPC protocol construction requires new techniques for refreshing, recovering, adding, and multiplying secret-shared data with security in the presence of a dishonest majority. This is achieved using a combination of information-theoretic (additive and polynomial-based) secret sharing, and cryptographic techniques to protect against active adversaries. (Recall that cryptographic assumptions are necessary in the dishonest majority setting, due to the well-known impossibility of perfectly secure MPC in the presence of a dishonest majority.) Our PMPC protocol can be based on any one-way function and oblivious transfer (the same assumptions as

[5] We model rebooting to a clean initial state to include global computation information, e.g., the circuit representing the function to be computed, the identities of parties, and access to secure point-to-point and broadcast channels.
[6] The threshold in this case is actually the minimum of n − 3 and n − k − 1.


the classic [GMW87] protocol, and formally requires only oblivious transfer, which implies the existence of one-way functions). The secret sharing scheme underlying our PMPC protocol is an adaptation of [DEL+16], which recently constructed the first stand-alone proactive secret sharing scheme secure against a dishonest majority. The [DEL+16] scheme makes use of discrete-logarithm-based verification of secret shares (similar to [Fel87]); for our PMPC protocol (being a portion of a more general protocol), we replace this component with another technique (described below as "mini MPC") to overcome problematic proactive simulation issues in the security proof. Computing on secret-shared data (with security against mobile dishonest-majority adversaries) is a topic unaddressed by prior work. Our addition and multiplication sub-protocols are the building blocks that enable the parties to jointly compute a secret sharing of the desired output value. Addition of two secret-shared values can be performed by local addition of shares (as in many common secret sharing schemes), but multiplication requires more work. Our multiplication sub-protocol makes use of the [GMW87] protocol for standard MPC to perform a "mini MPC" on the proactive secret shares held by the parties, in order to obtain a proactive secret sharing of the product of two secrets. (More generally, the multiplication sub-protocol can be instantiated based on any standard MPC protocol Φ secure against a dishonest majority, and inherits its efficiency properties from Φ.) To build in security against mobile adversaries, we intersperse the execution of the addition and multiplication sub-protocols with a refresh sub-protocol that "refreshes" the shares held by all parties: informally, each time shares are refreshed, any knowledge of shares from previous "pre-refresh" sharings becomes useless to the adversary.
This effectively prevents the adversary from learning sensitive information by putting together shares obtained from corruptions that occur far apart in time. Moreover, whenever a party is de-corrupted (rebooted), its memory contents are erased, so it needs to "recover" the necessary share information; this is achieved using our recovery sub-protocol, which is triggered dynamically each time a memory loss occurs. The number of parties that can simultaneously lose memory is a parameter of our protocol, which trades off with the number of corruptions allowed per phase. This sensitive trade-off is inherent: if n − τ parties can restore the shares of τ parties who lost memory, then they could also collude to learn the shares of those τ parties. We highlight as an additional contribution the first (formal) definition of secure PMPC in the presence of adversaries that may make a combination of passive and active corruptions, and may corrupt a majority of the parties. Prior security definitions for PMPC only addressed the honest majority setting, so they did not have to address potential failures of robustness and fairness. Moreover, no existing definitions considered PMPC security with mixed adversaries. Our ideal functionality for the dishonest majority setting models robustness and fairness as a fine-grained function of the passive and active corruptions that actually occur during a protocol execution (rather than a coarser-grained guarantee depending on adherence to a corruption threshold that is fixed as a protocol parameter), by adapting for the proactive setting the multi-thresholds paradigm that was introduced by [HLM13] in the context of standard (not proactive) MPC.
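The refresh idea described above — re-randomizing shares so that pre-refresh shares become useless — can be illustrated with a minimal Python sketch for plain additive sharing (a toy model, not the paper's protocol: `P` is an illustrative prime modulus standing in for the field, and the function names are our own):

```python
import secrets

P = 2**127 - 1  # an illustrative prime modulus; any prime field works

def refresh_additive(shares):
    """Re-randomize additive shares of a secret without changing the secret.

    Each refresh adds a fresh random sharing of zero, so shares captured
    before a refresh are useless when combined with post-refresh shares.
    """
    n = len(shares)
    zero_shares = [secrets.randbelow(P) for _ in range(n - 1)]
    zero_shares.append((-sum(zero_shares)) % P)  # summands of zero
    return [(s + z) % P for s, z in zip(shares, zero_shares)]

# Example: share 42 additively among 5 parties, then refresh.
n, secret = 5, 42
shares = [secrets.randbelow(P) for _ in range(n - 1)]
shares.append((secret - sum(shares)) % P)
new_shares = refresh_additive(shares)
assert sum(new_shares) % P == secret  # same secret, fresh randomness
```

The actual Refresh sub-protocol operates on the gradual (additive-plus-polynomial) sharing and must also withstand active cheaters, which this sketch omits.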

Scheme     Threshold Passive (Active)   Security        Network Type   Comm. Complexity
[WWW02]    t < n/2 (n/2)                Cryptographic   Synch.         exp(n)
[ZSvR05]   t < n/3 (n/3)                Cryptographic   Asynch.        exp(n)
[CKLS02]   t < n/3 (n/3)                Cryptographic   Asynch.        O(n^4)
[Sch07]    t < n/3 (n/3)                Cryptographic   Asynch.        O(n^4)
[HJKY95]   t < n/2 (n/2)                Cryptographic   Synch.         O(n^2)
[BELO14]   t < n/3 − ε (n/3 − ε)        Perfect         Synch.         O(1) (amortized)
[BELO14]   t < n/2 − ε (n/2 − ε)        Statistical     Synch.         O(1) (amortized)
[DEL+16]   t < n − 1 (n/2 − 1)          Cryptographic   Synch.         O(n^4)

Table 1. Comparison of existing proactive secret sharing schemes; the threshold is per refresh phase. Note that [DEL+16] also handles mixed adversaries, which are characterized by two thresholds, one for passive corruptions and one for active corruptions.

1.2 Related Work

Proactive Secret Sharing (PSS). Secret sharing is often utilized as a building block for MPC protocols. There is significant work on PSS schemes, summarized in Table 1; most of the existing PSS schemes are insecure when a majority of the parties are compromised, even if the compromise is only passive. Such schemes [OY91,HJKY95,WWW02,ZSvR05,Sch07,BELO14,BELO15] typically store the secret as the free term in a polynomial of degree t < n/2; thus, once an adversary compromises t + 1 parties (even if only passively), it can reconstruct the polynomial and recover the secret. Recently, [DEL+16] developed the first PSS scheme for a dishonest majority. The work in [DEL+16] only describes a PSS scheme and does not specify how to perform secure computation for the same thresholds. Our work builds on [DEL+16] and develops the first PMPC protocol for a dishonest majority. In addition to proactive secret sharing, there has also been substantial research on proactively secure threshold encryption and signature schemes for the honest majority setting, e.g., [FGMY97a,FGMY97b,Rab98,CGJ+99,FMY01,Bol03,JS05,JO08,ADN06].

Proactive Secure Multi-Party Computation (PMPC). To the best of our knowledge there are currently only two generic PMPC protocols: [OY91], which requires O(Cn^3) communication, where C is the size of the circuit to be computed via MPC; and [BELO14], which requires O(C log^2(C) polylog(n) + D poly(n) log^2(C)), where C is the size of the circuit to be computed via MPC and D its depth. These PMPC protocols are inherently designed for an honest majority, and it seems difficult to redesign them for a dishonest majority; the reason is that the underlying secret sharing scheme stores secrets as points on polynomials of degree less than n/2, so the only adversary structure that can be described is one in terms of a fraction of the degree of the polynomial, and once the adversary compromises enough parties (even if only passively), it can reconstruct the polynomial and recover the secret.
1.3 Outline

The rest of the paper is organized as follows. Section 2 outlines the terminology of proactively secure computation and the communication and adversary models; the corresponding formal definitions can be found in Appendix A. Section 3 presents details

of our PMPC protocol for a dishonest majority. Section 4 concludes with a discussion of open issues and possible extensions. The security proofs are provided in Appendix B due to space constraints.

2 Model and Definitions

We consider n parties (pi, where i ∈ [n]) connected by a synchronous network and an authenticated broadcast channel. Protocol communication proceeds in discrete rounds which are grouped into consecutive blocks called stages. We consider a mobile adversary with polynomially bounded computing power, which "moves around" and chooses a (new) set of parties to corrupt per stage, subject to a maximum threshold of corruptions for any stage. Note that parties once corrupted do not necessarily remain so for the remainder of the protocol, which means that over the course of protocol execution, the adversary can corrupt all the parties, although not all at the same time.

2.1 Phases and Stages of a Proactive Protocol

We adopt terminology from previous formalizations of proactive protocols such as [ADN06] and [BELO14].

Phases. The rounds of a proactive protocol are grouped into phases ϕ1, ϕ2, . . . . A phase ϕ consists of a sequence of consecutive rounds, and every round belongs to exactly one phase. There are two types of phases, refresh phases and operation phases. The protocol phases alternate between refresh and operation phases; the first and last phase of the protocol are both operation phases. Each refresh phase is furthermore subdivided into a closing period consisting of the first k rounds of the phase, followed by an opening period consisting of the final ℓ − k rounds of the phase, where ℓ is the total number of rounds in the phase. In non-reactive MPC, the number of operation phases can be thought to correspond to the depth of the circuit to be computed. Intuitively, each operation phase serves to compute a layer of the circuit, and each refresh phase serves to re-randomize the data held by parties such that combining the data of corrupt parties across different phases will not be helpful to an adversary.

Stages. A stage σ of the protocol consists of an opening period of a refresh phase, followed by the subsequent operation phase, followed by the closing period of the subsequent refresh phase. Thus, a stage spans (but does not cover) three consecutive phases, and the number of stages in a protocol is equal to its number of operation phases. In the case of the first and last stages of a protocol, there is an exception to the alternating "refresh-operation-refresh" format: the first stage starts with the first operation phase, and the last stage ends with the last operation phase.

Corruptions. If a party pi is corrupted by the adversary (A) during an operation phase of a stage σj, then A learns the view of pi starting from its state at the beginning of stage σj. If the corruption is made during a refresh phase between

consecutive stages σj and σj+1, then A learns pi's view starting from the beginning of stage σj. Moreover, in the case of a corruption during a refresh phase, pi is considered to be corrupt in both stages σj and σj+1. Finally, a party pi that is corrupt during the closing period of a refresh phase in stage σj may become decorrupted. In this case, pi is considered to be no longer corrupt in stage σj+1 (unless A corrupts him again before the end of the next closing period). A decorrupted party immediately rejoins the protocol as an honest party: if it was passively corrupted, then it rejoins with the correct state according to the protocol up to this point; if it was actively corrupted, then it is restored to a clean default state (which may be a function of the current round). Note that in restoring a party to the default state, its randomness tapes are overwritten with fresh randomness: this is important since otherwise, any once-corrupted party would be deterministic to the adversary. In terms of modeling, parties to decorrupt are chosen arbitrarily from the corrupt set by the environment.

Erasing State. In our model, parties erase their internal state (i.e., the content of their tapes) between phases. The capability of erasing state is necessary in the proactive model: if an adversary could learn all previous states of a party upon corruption, then achieving security would be impossible, since over the course of a protocol execution a mobile adversary would eventually learn the state of all parties in certain rounds.

2.2 Mixed Corruption Model

We consider mixed adversaries [HLM13] which can perform two distinct types of corruptions. The adversary can passively corrupt a set of parties (P) and only read their internal state; the adversary may also actively corrupt some of these parties (A) and make them deviate arbitrarily from the protocol. We assume that A ⊆ P. In traditional MPC, a common notation is to denote the number of parties by n, and the maximum threshold of corrupt parties by t. For mixed adversaries, there are distinct thresholds for active and passive corruptions. We write ta and tp to denote the thresholds of active and passive corruptions, respectively, i.e., |A| ≤ ta and |P| ≤ tp. Note that since we have defined each active corruption to be also a passive corruption, each active corruption counts towards both ta and tp. Following the notation of [HLM13] and [DEL+16], in order to model security guarantees against incomparable maximal adversaries, we consider multi-thresholds T = {(ta^1, tp^1), . . . , (ta^k, tp^k)}, which are sets of pairs of thresholds (ta, tp). Security properties are guaranteed if (A, P) ≤ (ta, tp) for some (ta, tp) ∈ T, where (A, P) ≤ (ta, tp) is a shorthand for |A| ≤ ta and |P| ≤ tp. If this condition is satisfied, we write (A, P) ≤ T. We define our MPC protocols in terms of four security properties: correctness, secrecy, robustness, and fairness.[7] The security properties which are guaranteed

[7] These terms are standard in the MPC literature. Correctness means that all parties that output a value must output the correct output value with respect to the set of all parties' inputs and the function being computed by the MPC. Secrecy means that the adversary cannot learn anything more about honest inputs and outputs than can already be inferred from the corrupt parties' inputs and outputs (more formally, secrecy requires that the adversary's view during protocol execution can be simulated given only the corrupt parties' input and output values). Robustness means that the adversary must not be able to prevent honest parties from learning their outputs. Finally, fairness requires that either all honest parties learn their own output values, or no party learns its own output value.


in any given protocol execution are a function of the number of actually corrupted parties. Accordingly, we consider four multi-thresholds Tc, Ts, Tr, Tf. Correctness (with agreement on abort) is guaranteed if (A, P) ≤ Tc, secrecy is guaranteed if (A, P) ≤ Ts, robustness is guaranteed if (A, P) ≤ Tr, and fairness is guaranteed if (A, P) ≤ Tf. Note that Tr ≤ Tc and Tf ≤ Ts ≤ Tc, since secrecy and robustness are not well-defined without correctness, and secrecy is a precondition of fairness.[8]
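The multi-threshold condition (A, P) ≤ T is simple to mechanize. The following Python sketch (our own illustrative code, with a hypothetical multi-threshold T) checks whether a given corruption pattern is dominated by some pair in T:

```python
def leq(corruptions, threshold):
    """(A, P) <= (ta, tp): componentwise comparison of corruption counts."""
    a, p = corruptions
    ta, tp = threshold
    return a <= ta and p <= tp

def satisfies(corruptions, multi_threshold):
    """(A, P) <= T holds iff some pair (ta, tp) in T dominates (|A|, |P|)."""
    return any(leq(corruptions, t) for t in multi_threshold)

# Hypothetical multi-threshold with two incomparable maximal adversaries:
# at most 2 active out of at most 4 passive, OR 0 active out of at most 6.
T = {(2, 4), (0, 6)}
assert satisfies((1, 3), T)      # dominated by (2, 4)
assert satisfies((0, 5), T)      # dominated by (0, 6)
assert not satisfies((3, 3), T)  # too many active corruptions for either pair
```

Because active corruptions count toward both |A| and |P|, the pairs in T are incomparable exactly when neither dominates the other componentwise, which is what makes a set of pairs (rather than a single pair) necessary.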

2.3 New PMPC and Security Definitions

Formal definitions of a proactive MPC protocol, the corresponding ideal functionality, and security against mixed mobile adversaries and dishonest majorities can be found in Appendix A due to space constraints. These definitions are new to this work; they do not exist in prior proactive MPC literature since the dishonest majority setting was previously unaddressed. One notable difference between the proactive dishonest-majority definition we develop and the dishonest-majority model for standard MPC is that in the standard model it is acceptable to exclude parties found to be corrupt and simply restart the protocol with the remaining parties, whereas in the proactive setting this could result in the exclusion of all parties even though the adversary cannot actually corrupt all parties simultaneously. Thus, exclusion of misbehaving parties in our proactive model is only temporary, and the protocol is guaranteed to make progress in any phase in which the adversary does not cause a majority of parties to deviate from the protocol (otherwise, the phase is restarted). An adversary could cause multiple restarts of a phase and delay protocol execution (which seems unavoidable in a dishonest majority model with a mobile adversary), but cannot cause a phase to have an incorrect output. Due to the definitions' length and notational complexity, we have opted for a less formal protocol description in the limited space of the body.

3 Construction of a PMPC Protocol for Dishonest Majorities

3.1 Intuition and Overview of Operation

Our PMPC protocol consists of six sub-protocols. GradualShare allows a dealer to share a secret s among n parties. Reconstruct allows parties to reconstruct the underlying secret s based on shares that they hold. Refresh is executed between two consecutive phases, w and w + 1, and generates new shares for phase w + 1 that encode the same secret as the shares in phase w. Recover allows parties that lost their shares to obtain new shares encoding the same secret s, with the help of other honest parties. Add allows parties holding shares of two secrets s and s′ to obtain shares that encode the sum s + s′. Mult allows parties holding shares of two secrets s and s′ to obtain shares that encode the product s × s′. The overall operation of the PMPC protocol is as follows. First, each party uses GradualShare to distribute its private input among the n parties (including itself). The circuit to be computed via PMPC is public, and consists of multiple

[8] We write T ≤ T′ if for all (ta, tp) ∈ T there exists (t′a, t′p) ∈ T′ such that ta ≤ t′a and tp ≤ t′p.


layers, each comprised of a set of Add or Mult gates which are executed via the corresponding sub-protocols (layer by layer). Between circuit layers, the shares of all parties are refreshed via Refresh. Decorrupted parties obtain new shares encoding the same shared secrets corresponding to the current state of the MPC computation, i.e., the output of the current circuit layer and any shared values that will be needed in future layers, by triggering the Recover sub-protocol as soon as they find themselves rebooted. When the (secret-shared) output of the final layer of the circuit is computed, parties use Reconstruct to reconstruct the final output. In order to tolerate a dishonest majority, it is not enough to directly store the inputs of the parties (the secrets to be computed on, which will at the end be transformed into the outputs) in the free term, or as other points on a polynomial. What is needed is to encode the secrets, and compute using them, in a different form that resists a dishonest majority of, say, up to n − 1 parties. At a high level, this can be achieved by first additively sharing the secret into d = n − 1 random additive summands (this provides security against t = n − 3 passive corruptions), then sharing each summand using polynomial-based secret sharing for a range of different reconstruction thresholds: this is the key insight of the "gradual secret sharing" scheme of [DEL+16]. We develop protocols to add and multiply shares to perform computation on the secret shares. Addition can be performed locally, but to multiply we utilize a standard MPC protocol for a dishonest majority. A simple version of our protocol yields security against passive corruptions; to furthermore achieve active security, we leverage constant-round non-malleable homomorphic commitments and zero-knowledge proofs based on one-way functions and oblivious transfer.
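The additive layer of this encoding, and the fact that Add is a purely local operation, can be sketched in a few lines of Python (a toy model under stated assumptions: `P` is an illustrative prime modulus standing in for F, and the per-summand polynomial sharing at distinct degrees — the "gradual" part — is omitted):

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus for the field F

def additive_split(secret, d):
    """Split `secret` into d random summands over F that sum to it."""
    parts = [secrets.randbelow(P) for _ in range(d - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def local_add(parts_s, parts_t):
    """Shares of s + t are obtained by purely local addition of shares."""
    return [(a + b) % P for a, b in zip(parts_s, parts_t)]

d = 7  # e.g., d = n - 1 summands for n = 8 parties
s, t = 1234, 5678
ps, pt = additive_split(s, d), additive_split(t, d)
assert sum(ps) % P == s
assert sum(local_add(ps, pt)) % P == (s + t) % P
# Any d - 1 summands are uniformly random and reveal nothing about s;
# in the full scheme each summand is additionally shared with a
# polynomial of a distinct degree, which this sketch omits.
```

Multiplication has no such local rule for additive shares (the cross terms involve summands held by different parties), which is why the Mult sub-protocol resorts to a "mini MPC" among the parties.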
The protocol description thus far makes the following two simplifying assumptions: (1) the function f to be computed is deterministic, and (2) all output wire values are learned by all parties. The next two paragraphs discuss how to generalize our protocols, eliminating these assumptions. We address randomized functions using a standard technique: each party pi initially chooses a random value ζi. We treat (xi, ζi) as the input of party pi (instead of just xi as above), and compute the deterministic function f′ defined by f′((x1, ζ1), . . . , (xn, ζn)) = f(x1, . . . , xn; ζ1 + · · · + ζn). As this is a standard transformation, we omit further details, and for simplicity of exposition, the rest of the paper deals only with deterministic functions. We now describe an adaptation for the case when each party pi is to receive its own private output yi. This is a slight variation of the standard technique of "masking" output values using a random mask known only to the intended recipient, but we highlight that the standard technique requires a tweak for the proactive setting.[9] Before the reconstruction step, the parties possess a gradual secret sharing of the output values (y1, . . . , yn). At this point, each party chooses a secret random value ρi (called a mask) and shares it among the n parties using GradualShare. Then, the Add sub-protocol is run to obtain a gradual secret sharing of (y1 + ρ1, . . . , yn + ρn) instead of (y1, . . . , yn). Next, the Reconstruct sub-protocol is run so that every party learns (y1 + ρ1, . . . , yn + ρn). Finally, each party pi performs

[9] The standard trick is to consider the masks ρi to be part of the parties' inputs. In the proactive setting, it is important that the masks be chosen later on, as we shall see in the security proof.


an additional local computation at the end of the protocol, subtracting ρi from the value on his output wire to obtain his final output yi.
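The mask-and-unmask arithmetic is one line in each direction; the following minimal Python sketch (our own illustration, with `P` an illustrative prime field modulus) shows why only party i, who knows ρi, can recover yi from the publicly reconstructed value:

```python
import secrets

P = 2**31 - 1  # small prime standing in for the field F

# Party i's private output y_i is masked by a random rho_i chosen by i;
# only z_i = y_i + rho_i is publicly reconstructed, and i unmasks locally.
y_i = 987654
rho_i = secrets.randbelow(P)   # the mask (secret-shared in the protocol)
z_i = (y_i + rho_i) % P        # the value every party learns
recovered = (z_i - rho_i) % P  # local unmasking by party i alone
assert recovered == y_i
```

Since ρi is uniform over the field, z_i is itself uniform and reveals nothing about y_i to the other parties, which is a one-time-pad argument over F.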

3.2 Real-world Protocol Operation

We now give the formal definition of protocol operation based on the sub-protocols. Definition 1 formalizes the description given in prose in Section 3.1. The description of how each sub-protocol works will be given in Section 3.3. Within Definition 1 below, the sub-protocols are invoked in a black-box manner.

Definition 1 (PMPC Protocol Operation). Given an arithmetic circuit C (of depth dC) that is to be computed by an MPC protocol on inputs x1, . . . , xn, the proactive MPC protocol is defined as follows. For simplicity, we assume that refresh phases occur between layers of the circuit, and let R ⊆ [dC] be the set of circuit layers after which a refresh phase is to be triggered.[10]
1. Each party pi acts as the dealer in GradualShare to share its own input xi among all n parties. (Note that at the conclusion of this step, the parties hold secret sharings of all the values on the input wires of C, i.e., all the inputs to gates at layer 1 of C.)
2. Run the Refresh sub-protocol. The duration of a single Refresh sub-protocol execution is considered to be a refresh phase.
3. For each layer of the circuit, ℓ = 1, . . . , dC:
   – For each addition or multiplication gate µ in layer ℓ:[11] compute a sharing of the value on the output wire of µ by using the Add or Mult sub-protocol, respectively. The parties' inputs to the Add or Mult protocol are the sharings of the values on the input wires of µ, which the parties already possess (the input sharings are computed by step 1 for ℓ = 1, and subsequently, the input sharings for layer ℓ + 1 are computed during step ℓ).
   – If ℓ ∈ R, run the Refresh sub-protocol.
4. At the conclusion of step 3, the parties possess a gradual sharing of the value (y1, . . . , yn) on the output wire(s) of the circuit C, where each yi is the output intended for party pi. The period from this step until the end of the protocol is a single operation phase.
   Each party now samples a random value ρi and acts as the dealer in GradualShare to share ρi among all n parties. Then, the Add sub-protocol is run to obtain a gradual sharing of the value (z1, . . . , zn), where zi = yi + ρi.
5. The Reconstruct sub-protocol is then run to reconstruct the shared value (z1, . . . , zn).
6. Each party pi obtains its output yi by subtraction: yi = zi − ρi.

Moreover, the adversary may decorrupt a party at any point, during operation or refresh phases, upon which the decorrupted party is restored to a default state which we shall call ⊥.
– Whenever a party finds itself with internal state ⊥, it broadcasts a message Help!.

^10 In general, more complex refresh patterns are possible, e.g., refreshing at the level of gates rather than circuit layers.
^11 If the Add and Mult sub-protocols are secure under parallel composition, the iterations of this for-loop can be executed in parallel for all gates in layer ℓ.


– Upon receiving the message Help! from a party pi, all parties immediately execute the Recover sub-protocol so that pi ends up with the secret shares of all values on circuit wires that will be used for later computation or, in steps 4–6, of the masks ρ1, . . . , ρn and the shared output (z1, . . . , zn). In addition, from step 4 onwards, pi is assisted in recovering his own mask ρi by the other parties sending pi their shares thereof. Then, the interrupted operation phase or refresh phase is resumed, starting with the next round after the last completed operation-phase or refresh-phase round.

3.3

Sub-protocol Specifications

In the following, field operations occur over a finite field F (of prime characteristic). The sub-protocols make use of a polynomial-based secret sharing scheme, e.g., [Sha79], and are implicitly parametrized by (F, n, d), where n is the number of parties and n − d − 1 is the number of parties that can simultaneously undergo a reboot (thus losing their shares and requiring recovery). The multiplication sub-protocol is additionally parametrized by Φ (which, in turn, is parametrized by a security parameter κ), which can be any general MPC protocol secure against up to n − 1 active corruptions (such as [GMW87]). For simplicity, secret values are assumed to be field elements; multi-element secrets can be handled by running the sub-protocols on each element separately. The proactive MPC protocol that results from instantiating Definition 1 with the sub-protocols defined in this subsection shall be denoted by ProactiveMPC_{F,n,d,Φ}.

Overview of sub-protocols. GradualShare is used by parties to share their inputs, i.e., each party acts as a dealer when sharing its own inputs. Parties holding sharings (under GradualShare) of a secret s may use sub-protocol Reconstruct to reconstruct s, or use sub-protocol Refresh to refresh (re-randomize) their shares. Parties holding sharings of secrets s, s′ can compute a sharing of s + s′ using Add, or a sharing of s × s′ using Mult.

Subprotocol 1 (GradualShare). We denote by pD the dealer who starts in possession of the secret value s to be shared. At the conclusion of this protocol, each party (including the dealer) will possess a share of the secret s.
1. pD chooses d random summands s1, . . . , sd which add up to s: Σ_{δ=1}^{d} sδ = s.
2. For δ = 1, . . . , d, the dealer pD does the following:
   (a) pD samples a random degree-δ polynomial fδ over the finite field F, subject to fδ(0) = sδ. pD stores the evaluations fδ(1), . . . , fδ(n) and deletes fδ from memory.
   (b) For i ∈ [n], the dealer pD sends shδ,i = fδ(i) to pi, then deletes fδ(i) from memory.
3. Each party pi stores its d shares shi = (sh1,i, . . . , shd,i).

Subprotocol 2 (Reconstruct). After a sharing of a secret s using GradualShare, the n parties can reconstruct s as follows.
1. For δ = d, . . . , 1:

   (a) Each party pi broadcasts its share shδ,i.
   (b) Each party locally interpolates to determine the polynomial fδ, then computes sδ = fδ(0).
2. Each party outputs the secret s computed as s = s1 + s2 + · · · + sd.

Subprotocol 3 (Refresh). Each party pi ∈ {pi | i ∈ [n]} begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) and ends this protocol in possession of new "refreshed" shares sh′i = (sh′1,i, . . . , sh′d,i).
1. Each party pi generates an additive sharing of 0 (i.e., d randomization summands which add up to 0). Let the additive shares of pi be denoted by rδ,i; note that Σ_{δ=1}^{d} rδ,i = 0.
2. For δ = 1, . . . , d:
   (a) For i = 1, . . . , n: party pi shares rδ,i by running GradualShare, acting as the dealer.
   (b) Each party pi adds up the shares it received: sh″i = Σ_{j=1}^{n} sh^j_{δ,i}, where sh^j_{δ,i} denotes the share pi received from pj in this iteration, and sets sh′δ,i = shδ,i + sh″i.
3. Each honest party pi deletes the old shares shi and instead stores sh′i = (sh′1,i, . . . , sh′d,i).

The following sub-protocol is used by parties to recover shares (under GradualShare) for a rebooted party.

Subprotocol 4 (Recover). Let parties {pr}r∈R be the ones that need recovery, where R ⊂ [n]. We refer to the other parties, {pi}i∉R, as "non-recovering parties." Below, we describe the procedure to recover the shares of a single party pr; to recover the shares of all parties, the procedure should be run for every r ∈ R.
1. For δ = 1, . . . , d:
   (a) Each non-recovering party pi chooses a random degree-δ polynomial gδ,i subject to the constraint that gδ,i(r) = 0.
   (b) Each non-recovering party pi shares its polynomial with the other n − |R| − 1 non-recovering parties as follows: pi computes and sends to each receiving party pj the value sh^i_{δ,j} = gδ,i(j).
   (c) Each non-recovering party pj adds all the shares it received from the other n − |R| − 1 parties for the recovery polynomials gδ,i to its share of fδ, i.e., z^j_δ = fδ(j) + Σ_i sh^i_{δ,j} = fδ(j) + Σ_i gδ,i(j), where the sums range over the non-recovering parties.
   (d) Each non-recovering party pj sends z^j_δ to pr. Using this information, pr interpolates the recovery polynomial gδ = fδ + Σ_i gδ,i and computes shδ,r = gδ(r) = fδ(r).

Subprotocol 5 (Add). Each party pi ∈ {pi | i ∈ [n]} begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) corresponding to a secret s and sh′i = (sh′1,i, . . . , sh′d,i) corresponding to a secret s′, and ends this protocol in possession of shares sh⁺i = (sh⁺1,i, . . . , sh⁺d,i) corresponding to the secret s + s′.
1. For each δ ∈ {1, . . . , d} and each i ∈ [n], party pi sets sh⁺δ,i = shδ,i + sh′δ,i.

Subprotocol 6 (Mult). Each party pi ∈ {pi | i ∈ [n]} begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) corresponding to a secret s and sh′i =

(sh′1,i, . . . , sh′d,i) corresponding to a secret s′, and ends this protocol in possession of shares sh×i = (sh×1,i, . . . , sh×d,i) corresponding to the secret s × s′.
1. Each party pi adds up its local shares of s and s′, respectively: θi = Σ_{δ∈[d]} shδ,i and θ′i = Σ_{δ∈[d]} sh′δ,i. By construction of the gradual secret sharing scheme, these sums can be expressed as θi = f̂(i) and θ′i = f̂′(i) for some degree-d polynomials f̂, f̂′ such that f̂(0) = s and f̂′(0) = s′.
2. Run the MPC protocol Φ (e.g., that of [GMW87]) as follows:
   – The input of party pi to the MPC is (θi, θ′i).
   – The function to be computed by the MPC on the collective input (θ1, θ′1), . . . , (θn, θ′n) is:
     (a) Interpolate (θi)i∈[n] and (θ′i)i∈[n] to recover the secrets s and s′ as the free terms of the respective polynomials f̂ and f̂′.
     (b) Compute the product s× = s × s′.
     (c) Compute shares (sh×δ,i)δ∈[d],i∈[n] as a dealer would when sharing the secret s× using GradualShare.
     (d) For each i ∈ [n], output (sh×δ,i)δ∈[d] to party pi.

3.4

Security Proofs

Security proofs of the full protocol with respect to the formal definitions in Appendix A are given in Appendix B due to space constraints.
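To make the gradual sharing sub-protocols of Section 3.3 concrete, here is a minimal Python sketch of GradualShare, Reconstruct, Refresh, and Add over a small prime field. This is an illustration under simplifying assumptions, not the paper's implementation: the field size and all function names are ours, Refresh is condensed to "every party gradually shares 0 and all parties add the resulting shares componentwise" (equivalent in effect to Subprotocol 3), and no adversarial behavior or Recover/Mult logic is modeled.

```python
import random

P = 2**31 - 1  # prime modulus standing in for the field F (illustrative choice)

def poly_eval(coeffs, x):
    # Evaluate a polynomial given low-order-first coefficients, over GF(P).
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def interpolate_at_zero(points):
    # Lagrange interpolation of f(0) from (x, y) pairs over GF(P).
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def gradual_share(secret, n, d):
    # GradualShare: split `secret` into d summands; share the delta-th summand
    # with a fresh random degree-delta polynomial. shares[i-1][delta-1] = f_delta(i).
    summands = [random.randrange(P) for _ in range(d - 1)]
    summands.append((secret - sum(summands)) % P)
    shares = [[0] * d for _ in range(n)]
    for delta in range(1, d + 1):
        coeffs = [summands[delta - 1]] + [random.randrange(P) for _ in range(delta)]
        for i in range(1, n + 1):
            shares[i - 1][delta - 1] = poly_eval(coeffs, i)
    return shares

def reconstruct(shares, d):
    # Reconstruct: for delta = d, ..., 1 interpolate f_delta(0), then sum the summands.
    secret = 0
    for delta in range(d, 0, -1):
        pts = [(i, shares[i - 1][delta - 1]) for i in range(1, delta + 2)]
        secret = (secret + interpolate_at_zero(pts)) % P
    return secret

def refresh(all_shares, n, d):
    # Refresh (condensed): every party gradually shares 0, and all shares of
    # the same level delta are added componentwise, re-randomizing the sharing.
    for _ in range(n):
        zero = gradual_share(0, n, d)
        for j in range(n):
            for delta in range(d):
                all_shares[j][delta] = (all_shares[j][delta] + zero[j][delta]) % P

def add_shares(sh_a, sh_b):
    # Add: componentwise addition of shares yields a sharing of the sum.
    return [[(a + b) % P for a, b in zip(ra, rb)] for ra, rb in zip(sh_a, sh_b)]
```

The invariants to check are that a shared secret survives Reconstruct, that Refresh changes the shares but not the secret, and that Add yields a sharing of the sum.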

4

Conclusion and Open Issues

This paper presents the first proactive secure multiparty computation (PMPC) protocol for a dishonest majority. Our PMPC protocol is robust and secure against t < n − 2 passive-only corruptions, and secure but non-robust (with identifiable aborts) against t < n/2 − 1 active corruptions when there are no additional passive corruptions. The protocol is also secure, non-robust but with identifiable aborts, against mixed adversaries that control a combination of passively and actively corrupted parties such that with k active corruptions there are fewer than n − k − 1 total corruptions.

In this paper we prove the feasibility of constructing PMPC protocols secure against dishonest majorities. Optimizing computation and communication in such protocols (and making them practical) is not the goal of this paper and is an interesting open problem. Specifically, we highlight the following open issues of interest:
– There are currently no practical proactively secure protocols for dishonest majorities for specific classes of computations of interest, such as threshold decryption and signature generation; all existing practical proactively secure threshold encryption and signature schemes, such as [FGMY97a,FGMY97b,Rab98,FMY01,Bol03,JS05,JO08,ADN06], require an honest majority.
– There are currently no PMPC protocols (or even proactive secret sharing schemes) for asynchronous networks that are secure against dishonest majorities. Our PMPC protocol assumes a synchronous network.

– It is unknown what the lowest communication bound is for a PMPC protocol secure against a dishonest majority. We achieve O(n^4) communication for the Refresh and Recover sub-protocols, which are typically the bottleneck; it remains open whether this can be further reduced. PMPC protocols for an honest majority [BELO14,BELO15] have constant (amortized) communication overhead; it is unlikely that this can be matched in the dishonest-majority case, but it may be possible to achieve O(n^3) or O(n^2).

Acknowledgements. We thank Antonin Leroux for pointing out typos and issues in the statement of Theorem 2 in the appendix. We also thank the SCN 2018 reviewers for their constructive feedback, which helped us improve the readability of the paper. The second author's research is supported in part by NSF grant 1619348, DARPA SafeWare subcontract to Galois Inc., DARPA SPAWAR contract N66001-15-1C-4065, US-Israel BSF grant 2012366, OKAWA Foundation Research Award, IBM Faculty Research Award, Xerox Faculty Research Award, B. John Garrick Foundation Award, Teradata Research Award, and Lockheed-Martin Corporation Research Award. The views expressed are those of the authors and do not reflect the position of the Department of Defense or the U.S. Government.

References

[ADN06] Jesús F. Almansa, Ivan Damgård, and Jesper Buus Nielsen. Simplified threshold RSA with adaptive and proactive security. In EUROCRYPT 2006, volume 4004 of Lecture Notes in Computer Science, pages 593–611. Springer, 2006.
[BELO14] Joshua Baron, Karim Eldefrawy, Joshua Lampkins, and Rafail Ostrovsky. How to withstand mobile virus attacks, revisited. In PODC '14, pages 293–302. ACM, 2014.
[BELO15] Joshua Baron, Karim Eldefrawy, Joshua Lampkins, and Rafail Ostrovsky. Communication-optimal proactive secret sharing for dynamic groups. In ACNS '15, 2015.
[BFO12] Eli Ben-Sasson, Serge Fehr, and Rafail Ostrovsky. Near-linear unconditionally-secure multiparty computation with a dishonest minority. In CRYPTO, pages 663–680, 2012.
[BGW88] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In STOC, pages 1–10, 1988.
[Bol03] Alexandra Boldyreva. Threshold signatures, multisignatures and blind signatures based on the gap-Diffie-Hellman-group signature scheme. In PKC 2003, volume 2567 of Lecture Notes in Computer Science, pages 31–46. Springer, 2003.
[CCD88] David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty unconditionally secure protocols. In STOC '88, pages 11–19. ACM, 1988.
[CGJ+99] Ran Canetti, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin. Adaptive security for threshold cryptosystems. In CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 98–115. Springer, 1999.
[CKLS02] Christian Cachin, Klaus Kursawe, Anna Lysyanskaya, and Reto Strobl. Asynchronous verifiable secret sharing and proactive cryptosystems. In ACM Conference on Computer and Communications Security, pages 88–97, 2002.
[COSV16] Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, and Ivan Visconti. 4-round concurrent non-malleable commitments from one-way functions. Cryptology ePrint Archive, Report 2016/621, 2016. http://eprint.iacr.org/2016/621.
[DEL+16] Shlomi Dolev, Karim Eldefrawy, Joshua Lampkins, Rafail Ostrovsky, and Moti Yung. Proactive secret sharing with a dishonest majority. In SCN 2016, volume 9841 of Lecture Notes in Computer Science, pages 529–548. Springer, 2016.
[DIK+08] Ivan Damgård, Yuval Ishai, Mikkel Krøigaard, Jesper Buus Nielsen, and Adam Smith. Scalable multiparty computation with nearly optimal work and resilience. In CRYPTO, pages 241–261, 2008.
[DIK10] Ivan Damgård, Yuval Ishai, and Mikkel Krøigaard. Perfectly secure multiparty computation and the computational overhead of cryptography. In EUROCRYPT, pages 445–465, 2010.
[Fel87] Paul Feldman. A practical scheme for non-interactive verifiable secret sharing. In FOCS 1987, pages 427–437. IEEE Computer Society, 1987.
[FGMY97a] Yair Frankel, Peter Gemmell, Philip D. MacKenzie, and Moti Yung. Optimal resilience proactive public-key cryptosystems. In FOCS 1997, pages 384–393. IEEE Computer Society, 1997.
[FGMY97b] Yair Frankel, Peter Gemmell, Philip D. MacKenzie, and Moti Yung. Proactive RSA. In CRYPTO '97, volume 1294 of Lecture Notes in Computer Science, pages 440–454. Springer, 1997.
[FMY01] Yair Frankel, Philip D. MacKenzie, and Moti Yung. Adaptive security for the additive-sharing based proactive RSA. In PKC 2001, volume 1992 of Lecture Notes in Computer Science, pages 240–263. Springer, 2001.
[GGN16] Rosario Gennaro, Steven Goldfeder, and Arvind Narayanan. Threshold-optimal DSA/ECDSA signatures and an application to Bitcoin wallet security. In ACNS 2016, volume 9696 of Lecture Notes in Computer Science, pages 156–174. Springer, 2016.
[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In STOC, pages 218–229. ACM, 1987.
[Goy11] Vipul Goyal. Constant round non-malleable protocols using one way functions. In STOC 2011, pages 695–704. ACM, 2011.
[HJKY95] Amir Herzberg, Stanislaw Jarecki, Hugo Krawczyk, and Moti Yung. Proactive secret sharing or: How to cope with perpetual leakage. In CRYPTO, pages 339–352, 1995.
[HLM13] Martin Hirt, Christoph Lucas, and Ueli Maurer. A dynamic tradeoff between active and passive corruptions in secure multi-party computation. In CRYPTO 2013, Part II, volume 8043 of Lecture Notes in Computer Science, pages 203–219. Springer, 2013.
[IKOS08] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptography with constant computational overhead. In STOC, pages 433–442, 2008.
[JO08] Stanislaw Jarecki and Josh Olsen. Proactive RSA with non-interactive signing. In Financial Cryptography and Data Security (FC 2008), volume 5143 of Lecture Notes in Computer Science, pages 215–230. Springer, 2008.
[JS05] Stanislaw Jarecki and Nitesh Saxena. Further simplifications in proactive RSA signatures. In TCC 2005, volume 3378 of Lecture Notes in Computer Science, pages 510–528. Springer, 2005.
[LPTV10] Huijia Lin, Rafael Pass, Wei-Lung Dustin Tseng, and Muthuramakrishnan Venkitasubramaniam. Concurrent non-malleable zero knowledge proofs. In CRYPTO 2010, volume 6223 of Lecture Notes in Computer Science, pages 429–446. Springer, 2010.
[OY91] Rafail Ostrovsky and Moti Yung. How to withstand mobile virus attacks (extended abstract). In PODC, pages 51–59, 1991.
[Rab98] Tal Rabin. A simplified approach to threshold and proactive RSA. In CRYPTO '98, volume 1462 of Lecture Notes in Computer Science, pages 89–104. Springer, 1998.
[RB89] T. Rabin and M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In STOC '89, pages 73–85. ACM, 1989.
[Sch07] David Schultz. Mobile Proactive Secret Sharing. PhD thesis, Massachusetts Institute of Technology, 2007.
[Sha79] Adi Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
[WWW02] Theodore M. Wong, Chenxi Wang, and Jeannette M. Wing. Verifiable secret redistribution for archive system. In IEEE Security in Storage Workshop, pages 94–106, 2002.
[Yao82] Andrew Chi-Chih Yao. Protocols for secure computations (extended abstract). In FOCS 1982, pages 160–164. IEEE Computer Society, 1982.
[ZSvR05] Lidong Zhou, Fred B. Schneider, and Robbert van Renesse. APSS: Proactive secret sharing in asynchronous systems. ACM Trans. Inf. Syst. Secur., 8(3):259–286, 2005.

A

Formal Definitions for Proactive MPC

In this section we provide formal definitions for a proactive MPC protocol, the corresponding ideal functionality, and security against mixed mobile adversaries and dishonest majorities. These definitions are new to this work: they do not exist in the prior proactive MPC literature, since the dishonest-majority setting was previously unaddressed.

A.1

New Security Definitions

To accommodate a dishonest majority, we have to adapt the earlier proactive MPC security definitions to include the above-described multi-thresholds. An interesting aspect of these new definitions is dealing with the possibility of aborts when the adversary has corrupted a majority of the parties. The standard security notion for (non-proactive) MPC allows for the unavoidable possibility that a majority of corrupt parties could all abort and thus prevent the protocol from proceeding; it stipulates that in such an event the culprits must be identifiable, so that the remaining parties can eliminate them and continue running the protocol without the offenders (setting the offenders' input values to some known defaults). In the proactive setting, this solution does not seem directly applicable, since the adversary could corrupt more and more parties in successive phases, until all parties have been eliminated from the protocol.

Proactive MPC Protocols with Mixed Mobile Adversaries

Definition 2 (Proactive MPC Protocol for Mixed Mobile Adversaries). A proactively secure multiparty protocol Π specifies a system of interaction between an environment Z, an adversary A, and parties p1, . . . , pn (denoted by the set {pi | i ∈ [n]}). Z, A, and the parties are modeled as interactive Turing machines. The specification of a protocol Π is structured in synchronous rounds of interaction, and designates each round as part of either a refresh phase or an operation phase as described in Section 2.1. A "real-world" protocol execution is structured as follows:

Initialization
1. The environment Z invokes the adversary A with optional auxiliary input z.
2. The environment Z invokes each party from the set {pi | i ∈ [n]} with an input xi.^12
3. Initialize the sets of passively and actively corrupted parties, P := ∅ and A := ∅.

Protocol execution
4. In each round of Π:
   (a) Each (semi-)honest party in {pi | i ∈ [n]} \ A prepares a message to broadcast in this round, as prescribed by Π.
   (b) The adversary A may perform corruptions and/or decorruptions as described below. This step can be repeated as many times as A (adaptively) decides.
       • Corruption: Upon corruption of a party pi ∈ {pi | i ∈ [n]} \ P, update P := P ∪ {pi}. A gains access to the internal state of pi, including pi's view since the start of the current stage. Moreover, if pi is corrupted actively, update A := A ∪ {pi}.
       • Decorruption: Upon decorruption of a party pi ∈ P, the set P := P \ {pi} is updated and the randomness tape of pi is overwritten with fresh randomness unknown to A. Moreover, if pi was actively corrupted, then A := A \ {pi} is updated and the internal state of pi is overwritten by a "default" state as specified by Π. After this, if pi has not yet broadcast a message in this round, he prepares a message as prescribed by Π (after the randomness/state overwrite).
   (c) The adversary broadcasts a message on behalf of each corrupt party pi ∈ P.
   (d) All honest parties broadcast their prepared messages.

Outputs: Each honest party in {pi | i ∈ [n]} \ P outputs a special symbol ⊥1 if he has not received an output value, or otherwise outputs his received value yi. The adversary A outputs vA, which may be an arbitrary (randomized) function of the information he has learned during the real protocol execution. The environment Z, after observing the outputs of all parties and A, outputs a bit bZ. Let Exec_{Π,A,Z}(κ, f, z) denote the distribution of bZ.

^12 Without loss of generality, |xi| = |xj| for all i, j ∈ [n].
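The corruption bookkeeping in Definition 2 (the sets P and A, the peak counts µP and µA, and decorruption restoring a party to the default state ⊥) can be sketched as follows. This is an illustration only; the class and method names are ours, not the paper's.

```python
class CorruptionTracker:
    """Tracks passively (P) and actively (A) corrupted parties per Definition 2.
    A is kept a subset of P, since active corruption includes passive access."""

    def __init__(self, n):
        self.n = n
        self.P = set()   # currently passively corrupted parties
        self.A = set()   # currently actively corrupted parties
        self.max_P = 0   # peak |P| observed so far (the count mu_P)
        self.max_A = 0   # peak |A| observed so far (the count mu_A)

    def corrupt(self, i, active=False):
        # Corruption: add pi to P (and to A if the corruption is active),
        # then update the running maxima.
        self.P.add(i)
        if active:
            self.A.add(i)
        self.max_P = max(self.max_P, len(self.P))
        self.max_A = max(self.max_A, len(self.A))

    def decorrupt(self, i):
        # Decorruption: remove pi from P and A; the caller must also refresh
        # pi's randomness and, if pi was actively corrupted, reset its state
        # to the default (the symbol "bottom" in the text).
        was_active = i in self.A
        self.P.discard(i)
        self.A.discard(i)
        return was_active

    def is_T_bounded(self, T):
        # Componentwise check (mu_P, mu_A) <= T, as in Definitions 3 and 4.
        tP, tA = T
        return self.max_P <= tP and self.max_A <= tA
```

A mobile adversary can thus decorrupt one party and corrupt another without ever exceeding a threshold, which is exactly what the peak counts are meant to capture.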


Ideal PMPC Functionality for Mixed Mobile Adversaries

Ideal process Ideal^F_proactive-mixed. In the ideal process, the environment Z initializes the parties and an ideal adversary S with inputs of its choice. The parties and the ideal adversary (who may choose to corrupt and take control of a subset of parties of its choice) interact with an ideal functionality F which behaves as a trusted third party that receives the inputs x1, . . . , xn of the parties, computes the function f on the inputs, and outputs to each party pi his respective output yi. At the conclusion of the protocol, each uncorrupted party outputs his output value yi, each corrupted party outputs a special symbol ⊥0, and S outputs an arbitrary function of its view^13 of the protocol execution. The environment Z observes the outputs of all parties and S, then outputs a single bit.

Parameters: κ, the security parameter; f, the n-ary function to be computed; multi-thresholds Tc, Ts, Tr, Tf.

Specification of the Ideal Process:

Initialization
1. The environment Z invokes the ideal adversary S with optional auxiliary input z.
2. The environment Z invokes each party in the set {pi | i ∈ [n]} with an input xi.
3. F initializes the sets of passively and actively corrupted parties, P := ∅ and A := ∅, and also initializes µP := 0 and µA := 0.

Inputs
4. Each party pi sends his input xi to the ideal functionality F.

Input corruption phase
5. F receives message (τ, i) from S, where τ ∈ {passive, active, decorrupt} and i ∈ [n] ∪ {⊥}.
6. If i ≠ ⊥:
   • If τ ∈ {passive, active}: Update P := P ∪ {pi}, then set µP := max(µP, |P|). F sends xi to S.
   • If τ = active: Update A := A ∪ {pi}, then set µA := max(µA, |A|).
   • If τ = decorrupt: Update P := P \ {pi} and A := A \ {pi}.
   • Go back to step 5.
7. If (µP, µA) ≰ Ts: F sends all inputs {xi}i∈[n] to the adversary S.
8. For randomized functionalities only: If (µP, µA) ≤ Ts, F samples a random bit-string r of appropriate length. Else, F receives r from the adversary S.

Computation
9. If f is deterministic, F evaluates (y1, . . . , yn) := f(x1, . . . , xn). If f is randomized, F evaluates (y1, . . . , yn) := f(x1, . . . , xn; r).

Output corruption phase
10. S sends message StartOutputPhase to F, whereupon F resets P := ∅ and A := ∅.
11. F receives message (τ, i) from S, where τ ∈ {passive, active} and i ∈ [n] ∪ {⊥}.
12. If i ≠ ⊥:
   • If τ ∈ {passive, active}: Update P := P ∪ {pi}, then set µP := max(µP, |P|). F sends yi to S.
   • If τ = active: Update A := A ∪ {pi}, then set µA := max(µA, |A|).
   • Go back to step 11.
13. If (µP, µA) ≰ Ts: F sends all outputs {yi}i∈[n] to the adversary S.

Outputs
14. If (µP, µA) ≰ Tf: For each pi ∈ P, F sends yi to the adversary S. Then S sends a bit β to F. If β = 1, then F aborts.
15. If (µP, µA) ≰ Tr: S sends a bit β to F. If β = 1, then F aborts.
16. If (µP, µA) ≰ Tc: S sends (y′i)i∈[n] to F, and F sets yi := y′i for all i ∈ [n].
17. For each honest party in {pi | i ∈ [n]} \ P, F sends yi to party pi. For each corrupt party pi ∈ P, F sends yi to the adversary S.

Outputs: Each honest party in {pi | i ∈ [n]} \ P outputs a special symbol ⊥1 if he has not received an output value, or otherwise outputs his received value yi. The adversary S outputs vS, which may be an arbitrary (randomized) function of the information he has learned during the ideal protocol execution. The environment Z, after observing the outputs of all parties and S, outputs a bit bZ. Let Ideal^{F,S,Z}_proactive-mixed(κ, f, z, (Tc, Ts, Tr, Tf)) denote the distribution of bZ.

^13 The view comprises S's auxiliary input, internal randomness, and messages received.
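The way the four multi-thresholds gate the ideal functionality's guarantees (steps 7, 13, and 14–16 above) can be summarized in a few lines. Note that (µP, µA) ≤ T is a componentwise comparison, so the bound is a partial order, not a single number. The function names below are ours, for illustration only.

```python
def within(mu, T):
    # (mu_P, mu_A) <= T holds iff both components are bounded (componentwise).
    return mu[0] <= T[0] and mu[1] <= T[1]

def ideal_guarantees(mu, Tc, Ts, Tr, Tf):
    # Which guarantees the ideal process Ideal^F_proactive-mixed still enforces
    # for peak corruption counts mu = (mu_P, mu_A).
    return {
        # Steps 7/13: if (mu_P, mu_A) is not <= Ts, all inputs/outputs leak to S.
        "secrecy": within(mu, Ts),
        # Step 14: if not <= Tf, S sees corrupt outputs first and may abort.
        "fairness": within(mu, Tf),
        # Step 15: if not <= Tr, S may abort the computation outright.
        "robustness": within(mu, Tr),
        # Step 16: if not <= Tc, S may replace every party's output.
        "correctness": within(mu, Tc),
    }
```

For example, an adversary with peaks (µP, µA) = (2, 0) can violate robustness under Tr = (1, 0) while secrecy, fairness, and correctness survive under larger thresholds.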


Note that the adversary in the ideal process effectively gets to choose one set of corrupt parties whose inputs it learns, and another set of corrupt parties whose outputs it learns. This models a mobile adversary whose set of corrupted players changes over time. We have used P and A to denote the sets of passively and actively corrupted parties in both the input phase and the output phase; however, notice that the output-phase corruptions could be very different from the input-phase corruptions, since in step 10, P and A are reset to be empty.

Definition 3 (T-bounded Real Adversary). For a multi-threshold T, an adversary A (attacking a protocol Π) is said to be T-bounded if (µP, µA) ≤ T, where µP (resp. µA) is the maximum number of passive (resp. active) corruptions made by A in any protocol stage.

Definition 4 (T-bounded Ideal Adversary). For a multi-threshold T, an adversary S in the ideal process Ideal^F_proactive-mixed is said to be T-bounded if (µP, µA) ≤ T at the end of the ideal process.

Definition 5 (Mixed Proactive Security). A multiparty protocol Π is said to securely realize Ideal^F_proactive-mixed with multi-thresholds (Tc, Ts, Tr, Tf) if for all efficient adversaries A attacking Π, there exists an efficient ideal adversary S such that for all T ∈ {Tc, Ts, Tr, Tf}, if A is T-bounded, then S is also T-bounded; and for every efficient environment Z, it holds that

{ Ideal^{F,S,Z}_proactive-mixed(κ, f, z, (Tc, Ts, Tr, Tf)) }_{κ,f,z} ≈c { Exec_{Π,A,Z}(κ, f, z) }_{κ,f,z},

where ≈c denotes computational indistinguishability in the security parameter κ.

B

Security Proofs

In this section, we first prove that ProactiveMPC_{F,n,d,Φ} is secure against passive adversaries; then we describe how the protocol can be modified to furthermore achieve security against active adversaries. Theorem 1 states that ProactiveMPC_{F,n,d,Φ} securely realizes an ideal functionality Ideal^F_proactive-passive, a natural adaptation of Ideal^F_proactive-mixed for the case of passive adversaries. Next, in Section B.1, we give formal definitions of Ideal^F_proactive-passive and of what it means for a PMPC protocol to be secure against passive adversaries. Then, in Section B.2, we formally state and prove Theorem 1 (i.e., passive security of ProactiveMPC_{F,n,d,Φ}). Finally, in Section B.3, we address the extension to active security.

B.1

Ideal Process for Passive Mobile Adversaries

This ideal process is for passive mobile adversaries. The ideal process is defined by a system of interactive Turing machines: an environment Z, an ideal adversary S, an ideal functionality F, and parties p1, . . . , pn (denoted by the set {pi | i ∈ [n]}), which interact with each other as described in the ideal process specification.

Note that in the passive setting, correctness, robustness, and fairness are guaranteed to hold, so we need only consider one threshold Ts which determines whether or not secrecy holds. Ideal process IdealF proactive-passive In the ideal process, the environment Z initializes the parties and an ideal adversary S with inputs of its choice. The parties and ideal adversary (who may choose to corrupt and take control of a subset of parties of his choice) interact with an ideal functionality F which behaves as a “trusted third party” that receives the inputs x1 , . . . , xn of the parties, computes the function f on the inputs, and outputs to each party pi his respective output yi . The environment Z observes the entire interaction between the parties, S, and F, and may interact arbitrarily with the adversary during the ideal process. Finally, Z outputs a single bit. Parameters: κ, the security parameter; N , the number of protocol stages; f , the n-ary function to be computed; secrecy threshold 0 ≤ t ≤ n. Specification of the Ideal Process: Initialization 1. The environment Z invokes the ideal adversary S with optional auxiliary input z. 2. The environment Z invokes each party in the set {pi |i ∈ [n]} with an input xi .14 3. Initialize the set of passively corrupted parties P := ∅. Initialize µP := 0. Inputs 4. Each party pi sends his input xi to the ideal functionality F. Input corruption 5. Z receives message (passive, i) from the adversary S, where i ∈ [n] ∪ {⊥}. 6. If i 6= ⊥: • Update P := P ∪ {pi } then set µP := max(µP , |P|). • Go back to step 5. 7. If µP 6≤ t: F sends all inputs {xi }i∈[n] to the adversary S. 8. For randomized functionalities only: If µP ≤ t: F samples a random bit-string r of appropriate length, and sends r to Z. Else: F receives r from the adversary S, and sends r to Z. Computation 9. If f is deterministic, F evaluates (y1 , . . . , yn ) := f (x1 , . . . , xn ). If f is randomized, F evaluates (y1 , . . . , yn ) := f (x1 , . . . , xn ; r). 
Output corruption
10. Reset P := ∅.
11. Z receives message (passive, i) from the adversary S, where i ∈ [n] ∪ {⊥}.
12. If i ≠ ⊥:
    • Update P := P ∪ {p_i}, then set µ_P := max(µ_P, |P|).
    • Go back to step 11.
13. If µ_P > t: F sends all outputs {y_i}_{i∈[n]} to the adversary S.

Outputs
14. For each corrupted party p_i in P, F sends y_i to the adversary S.
15. For each honest party in {p_i | i ∈ [n]} \ P, F sends y_i to p_i.

Outputs: Each honest party in {p_i | i ∈ [n]} \ P outputs ⊥ if he has not received an output value, and otherwise outputs his received value y. The adversary S outputs v_S, which may be an arbitrary (randomized) function of the information he has learned during the ideal protocol execution. The environment Z, after observing the ideal process and the inputs and outputs of all parties and S, outputs a bit b_Z.
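For concreteness, the control flow of this ideal process can be condensed into a short executable sketch. This is a minimal illustration for a deterministic f; the environment, adversary, and trusted party are collapsed into direct function calls, and all names are ours rather than the paper's.

```python
def ideal_passive(f, inputs, t, input_corruptions, output_corruptions):
    """Sketch of Ideal_proactive-passive for a deterministic n-ary f.

    input_corruptions / output_corruptions: index sets the adversary
    corrupts in the two corruption phases (steps 5-6 and 11-12).
    Returns the honest parties' outputs and what the adversary learned.
    """
    n = len(inputs)
    leaked = {}

    # Input corruption phase: the adversary sees corrupted inputs, and if
    # the secrecy threshold t is exceeded, F leaks *all* inputs (step 7).
    if len(input_corruptions) > t:
        leaked["inputs"] = list(inputs)
    else:
        leaked["inputs"] = [inputs[i] for i in sorted(input_corruptions)]

    # Computation (step 9): F evaluates f on all inputs.
    outputs = f(*inputs)

    # Output corruption phase: P is reset and corruptions are counted
    # afresh (steps 10-13); again, exceeding t leaks all outputs.
    if len(output_corruptions) > t:
        leaked["outputs"] = list(outputs)
    else:
        leaked["outputs"] = [outputs[i] for i in sorted(output_corruptions)]

    # Steps 14-15: corrupted parties' outputs go to S; each honest party
    # receives its own output.
    honest = [outputs[i] for i in range(n) if i not in output_corruptions]
    return honest, leaked
```

For instance, with t = 1 and a single passively corrupted party, the adversary learns only that party's input, while every party still obtains the correct output; with two corruptions, step 7 fires and all inputs leak.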

For an ideal functionality F, an ideal adversary S, and an environment Z, let Ideal^{F,S,Z}_proactive-passive(κ, f, z, t)

14. Without loss of generality, |x_i| = |x_j| for all i, j ∈ [n].


denote the distribution of the output of Z in the ideal process Ideal^F_proactive-passive when interacting with F and S with security parameter κ, n-ary functionality f, auxiliary input z, and secrecy threshold t. For a proactively secure multiparty protocol Π, let Exec_{Π,A,Z}(κ, f, z) denote the distribution of the output of Z in an execution of Π when interacting with A for parameters κ, f, and z as above.

Definition 6 (Passive Proactive Security). A multiparty protocol Π is said to securely realize Ideal^F_proactive-passive with up to t corruptions if for all adversaries A attacking Π, there exists an ideal adversary S such that:
– if A corrupts at most t parties per protocol stage, then S corrupts at most t parties in each corruption phase of the ideal process; and
– for every environment Z, it holds that

  {Ideal^{F,S,Z}_proactive-passive(κ, f, z, t)}_{κ,f,z} ≈_c {Exec_{Π,A,Z}(κ, f, z)}_{κ,f,z}.

B.2 Security Proof for Passive Adversaries

Theorem 1. If Φ is an MPC protocol secure against up to n − 1 (passive) corruptions, then ProactiveMPC_{F,n,d,Φ} is a secure proactive MPC protocol against passive adversaries. That is, ProactiveMPC_{F,n,d,Φ} securely realizes Ideal^F_proactive-passive with secrecy threshold Ts = d − 1, provided that at most n − d − 1 parties may simultaneously crash/reboot at any point during protocol execution.

Remark 1. While the proof could be simplified in certain places if we were only interested in addressing passive adversaries, we have chosen to write the proof with some additional generality so that it bears more similarity to the proof of active security.

Proof. Assume that Φ is a standard MPC protocol (i.e., not proactive) secure against up to n − 1 active corruptions. Let f be any n-ary function, let A be any adversary against ProactiveMPC_{F,n,d,Φ} that adheres to the multi-thresholds (Tc, Ts, Tr, Tf), and let z be any auxiliary input. We describe a simulator S that, when instantiated with auxiliary input z by an environment Z, instantiates A with the same auxiliary input z, feeds simulated "real-world" protocol messages to A(z) (as well as random coins, if A is randomized), and eventually outputs a view which is indistinguishable from that of A in the real protocol (while adhering to the same corruption thresholds as A). Henceforth, we leave the auxiliary input z implicit.

We now describe the simulation step by step. First, we describe the simulation assuming that no executions of the Recover sub-protocol are triggered; at the end, we describe how to simulate the Recover sub-protocol too. Recall that Definition 2 gives a general description of how messages are sent by the adversary and honest parties, and how corruptions and decorruptions are performed, during each step of a real-world protocol execution.

Recall (from Definition 2) that the first phase of the ideal functionality Ideal^F_proactive-mixed in which the simulator participates is the input corruption phase. Our simulator chooses its corruptions in this phase by "copying" the corruptions of A during steps 1–2 of the real-world protocol execution (Definition 1). The simulator sends protocol messages on behalf of honest parties, which are generated by running GradualShare honestly on behalf of those parties based on an input value of 0.

Whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt} (during step 4(b) of Definition 2), the simulator forwards the message (τ, i) to Z and, if τ ∈ {passive, active}, receives the input x_i of party i in return. Upon receiving x_i, if p_i has already performed the secret-sharing of his input value (i.e., if the shares have been sent out to the other parties), the simulator alters those shares of the degree-d polynomial in the gradual sharing of p_i's input which are held by honest parties who have never been corrupted (including p_i, if he has never been corrupted before), so as to alter the free term of the degree-d polynomial such that x_i equals the sum of the free terms of all the polynomials in the gradual sharing. (Note that it is important that the shared values in the polynomials of degree less than d remain unchanged.) Then S updates p_i's internal state to be consistent with the new sharing of input x_i. If p_i has not yet performed the secret-sharing of his input value, then S simply updates p_i's internal state to be consistent with x_i. Then, S forwards p_i's internal state to A.

Note: If the secrecy threshold Ts is exceeded at any point, then S will learn all the inputs x_1, ..., x_n of all parties; in this event, all secrecy is lost, and S produces a perfect simulation by generating an honest protocol transcript based on inputs x_1, ..., x_n (without any further interaction with A).
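The gradual secret sharing manipulated above can be sketched concretely. The following is our own minimal illustration over an illustrative prime modulus standing in for the field F: the secret is split into additive layers s_1, ..., s_d, and layer k is shared with a degree-k polynomial, as in the GradualShare structure the proof relies on.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus standing in for the field F

def share_poly(secret, degree, n):
    """Shamir-style sharing: a random polynomial of the given degree with
    free term `secret`, evaluated at points 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(degree)]
    def ev(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [ev(x) for x in range(1, n + 1)]

def gradual_share(s, d, n):
    """Additively split s = s_1 + ... + s_d (mod P), then share each s_k
    with a degree-k polynomial. With fewer than d corruptions, the
    degree-d layer is underdetermined, so s_d (and hence s) stays hidden."""
    adds = [secrets.randbelow(P) for _ in range(d - 1)]
    adds.append((s - sum(adds)) % P)
    return [share_poly(adds[k - 1], k, n) for k in range(1, d + 1)]

def reconstruct_layer(shares, degree):
    """Lagrange interpolation at 0 from the first degree+1 shares."""
    pts = list(enumerate(shares[:degree + 1], start=1))
    total = 0
    for xi, yi in pts:
        num, den = 1, 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def gradual_reconstruct(layers):
    """Recover each additive share from its layer, then sum them."""
    return sum(reconstruct_layer(layer, k)
               for k, layer in enumerate(layers, start=1)) % P
```

The simulator's "patching" step corresponds to adjusting never-corrupted parties' points on the degree-d layer so that the free terms sum to the desired input.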
We now focus on the case when Ts is not exceeded by A during step 1 of Definition 1. Any set of fewer than d shares of a polynomial secret-sharing of degree d information-theoretically hides the free term, and is distributed identically to a random set of points in F. Moreover, any set of fewer than n shares of an n-way additive secret-sharing information-theoretically hides the secret (i.e., the sum of all shares), and is also distributed identically to a random set of points in F. Recall that a gradual secret sharing consists of polynomial secret sharings of degree 1, ..., d of additive secret shares s_1, ..., s_d (respectively) of the secret s = s_1 + · · · + s_d. By assumption, the adversary is Ts-bounded, and so must corrupt fewer than d parties. Therefore, the joint distribution of all honest shares (in the gradual secret-sharing scheme) received by A (whether as a protocol message or as part of the internal state of a corrupted party) during steps 1–2 of the real-world protocol execution is identical to a random set of points in F. So the messages and internal states produced by S on behalf of the parties are indistinguishable to A from those in a real protocol execution.

We now move on to step 3 of the real-world protocol execution (Definition 1), i.e., computing the circuit layer by layer (with some interspersed Refresh sub-protocols). In this step, whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt}, S does not forward it to Z. However, S keeps track of the sequence of corruptions made by A and whether these exceed the multi-thresholds (Tc, Ts, Tr, Tf); if Ts is exceeded at any point, then S concludes the simulation as described in the Note above. S interacts with A on behalf of the honest parties in the real-world protocol execution, by generating honest messages according to

the protocol specification with respect to the set of shares held at the end of step 2. Recall that Add consists entirely of local computation, so we need only consider the simulation of Mult, which essentially consists of an execution of the GMW protocol. The standard MPC security guarantee of the GMW protocol ensures that an adversary observing honest parties' protocol messages learns nothing about the honest input values beyond what can be inferred from the output values of corrupt parties. In our case, the output values of the corrupt parties consist of secret-shares of the product of the values reconstructed from the input shares: in particular, nothing can be inferred from these output values about either the product or the multiplicands, provided that the adversary respects the secrecy threshold Ts. It follows that as long as Ts is not exceeded, the joint distribution of all shares seen by A during step 3 is indistinguishable from that of a random set of points in F, in both the real and simulated executions, even in the presence of the GMW protocol messages exchanged during executions of Mult.

Finally, we arrive at steps 4–6, the masking and reconstruction steps (which comprise a single operation phase). This coincides with the start of the output corruption phase in the ideal functionality. Note that the simulator can already compute the values (z_1, ..., z_n) that would be reconstructed from the secret-sharing held by the parties at the end of step 3. S interacts with A on behalf of the honest parties in the real-world protocol execution, by generating honest messages according to the protocol specification with respect to the set of shares held at the end of step 3, and honestly generating random masks ρ_1, ..., ρ_n. Whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt}, the simulator forwards the message (τ, i) to Z and, if τ ∈ {passive, active}, receives the output y_i of party i in return.
Upon receiving y_i, S sets ρ_i = z_i − y_i. If p_i has already performed the secret-sharing of his mask ρ_i (i.e., if the shares have been sent out to the other parties), the simulator alters those shares of the degree-d polynomial in the gradual sharing of p_i's mask which are held by honest parties who have never been corrupted in the current phase (including p_i, if he has not been corrupted in this phase), so as to alter the free term of the degree-d polynomial such that ρ_i equals the sum of the free terms of all the polynomials in the gradual sharing. Then S updates p_i's internal state to be consistent with the new sharing of mask ρ_i. If p_i has not yet performed the secret-sharing of his mask, then S simply updates p_i's internal state to be consistent with ρ_i. Then, S forwards p_i's internal state to A. By an argument very similar to the one above, provided that Ts is not exceeded, the joint distribution of all shares seen by A during steps 4–6 is identical to that of a random set of points in F conditioned on reconstructing to the correct output values of the corrupt parties, in both the real and simulated executions.

We have thus far described the simulator for each numbered step of Definition 2. We now return to address the simulation of the Recover sub-protocol. S interacts with A on behalf of the honest parties in a Recover sub-protocol execution, by generating honest messages according to the protocol specification with respect to the set of shares held at the point when the Recover sub-protocol was triggered. The distribution of messages seen by A in a thus-simulated Recover sub-protocol execution is identical to that of a real execution of Recover. (We have considered a single

recovering party here, but this argument straightforwardly generalizes to the case of multiple parties recovering at once.) At the end of the simulated protocol execution, the simulator outputs the view of A in the simulated protocol. As argued above, the simulated protocol execution is indistinguishable to A from a real protocol execution. Moreover, invoking the information-theoretic hiding of the secret-sharing scheme again, the view of a Ts-bounded adversary A is distributed independently of the inputs of parties not corrupted during the input corruption phase and the outputs of parties not corrupted during the output corruption phase. The theorem follows.

B.3 Dealing with Active Adversaries

The MPC sub-protocols in Section 3.3 are only secure against passive adversaries. To withstand active adversaries, these sub-protocols have to be augmented to ensure that parties corrupted by such adversaries cannot misbehave without getting caught. This can be achieved by adding checks to the sub-protocols that ensure the correctness of the results of each step, using generic zero-knowledge (ZK) proofs and non-malleable commitments based on one-way functions. Such augmentation exists, with constant round overhead, as discussed in more detail below. The purpose of the augmentation is to force each party to prove (over the public broadcast channel) to the others that it correctly performed the steps involved in each sub-protocol; active misbehavior is thus detected, and the protocol can abort upon detection.

At first glance, it may seem that generic ZK directly provides a solution: each party simply proves to the others, at each step, that it has performed the required computation correctly (revealing no information beyond that single bit). The issue arises when trying to achieve agreement among the honest parties on which parties are maliciously deviating from the protocol. Ideally, we would like each party to broadcast a single proof to all the other parties which convinces them of whether the proving party is deviating from the protocol; then, all honest parties will agree on whether a proof verified or not. To do this, we would like the n − 1 verifying parties to "act as a single verifier" in a ZK protocol, and we require completeness of verification to hold even when some of the verifying parties are corrupt. A standard MPC protocol suffices to achieve this.
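To illustrate the target functionality: the point of letting the n − 1 verifiers act as a single verifier is that one accept/reject bit is computed once and delivered identically to everyone. A toy model of that functionality follows; the MPC and ZK machinery are abstracted away, and `predicate` is a hypothetical stand-in, not an interface from the paper.

```python
def joint_verify(public_transcript, claimed_result, predicate):
    """Ideal effect of evaluating the verification predicate inside an
    MPC: the bit is computed once, and every verifier receives the SAME
    bit, so honest parties automatically agree on whether the prover
    deviated from the protocol."""
    return bool(predicate(public_transcript, claimed_result))

# Toy usage: the "step" being checked is local addition of two shares.
P = 2**61 - 1
add_step_ok = lambda transcript, claim: (transcript[0] + transcript[1]) % P == claim
```

Because the bit is computed inside the MPC rather than by each verifier separately, corrupt verifiers cannot cause honest parties to disagree about whether a proof verified.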
Theorem 1 in [Goy11] states that: "A constant round (semi-honest secure) oblivious transfer protocol is necessary and sufficient to obtain a constant round secure multi-party computation protocol (unconditionally) [against active adversaries corrupting a dishonest majority]." Theorem 4 in [Goy11] states that: "There exists a constant round many-many non-malleable commitment scheme using only one-way functions."^15 The non-malleable commitment scheme of Theorem 4 is a key building block in the MPC protocol of Theorem 1. To prove Theorem 4, [Goy11] develops a new constant-round non-malleable commitment scheme; [Goy11] then discusses that this commitment scheme can

15. The theorem numbers are taken from the extended version of the paper, which is on the Cryptology ePrint Archive (report 2010/487, revision uploaded August 2015), rather than the conference version cited in the references.


be plugged into the construction of [LPTV10] to obtain a constant-round non-malleable ZK protocol assuming only one-way functions (OWF). A constant-round non-malleable ZK protocol can be used to compile an MPC protocol secure only against passive adversaries into one that is secure against active ones. This requires non-malleable commitments based on one-way functions, which follow from the work of [Goy11] and have subsequently been improved to four rounds (still based on one-way functions) by [COSV16]. Specifically, Theorem 1 in [Goy11] shows that the existence of a constant-round OT protocol is necessary and sufficient to obtain a constant-round MPC protocol built out of non-malleable commitments. In summary, assuming OWF and using Theorem 1 from [Goy11], there exists an augmented version of our PMPC protocol in the OT-hybrid model that is secure against active corruptions with constant round overhead. The generic construction carries over to our setting as well.

Theorem 2. Let ProactiveMPC′_{F,n,d,Φ} denote the version of ProactiveMPC_{F,n,d,Φ} augmented with zero-knowledge checks of adherence to the protocol at every step. If Φ is an MPC protocol secure against up to n − 1 active corruptions, then ProactiveMPC′_{F,n,d,Φ} securely realizes Ideal^F_proactive-mixed with multi-thresholds (Tc, Ts, Tr, Tf), provided that at most n − d − 1 parties may simultaneously crash/reboot at any given point during the protocol execution, where Tc = {(n, n)}, Ts = {(d − 1, d − 1)}, Tr = {(k, n − 3) : 1 ≤ k ≤ ⌈n/2⌉ − 1}, and Tf = {(k, min(n − k − 1, n − 3)) : 1 ≤ k ≤ ⌈n/2⌉ − 1}. Moreover, ProactiveMPC′_{F,n,d,Φ} can be based on any one-way function and oblivious transfer, when instantiated with a Φ that depends on the same assumptions (such as [GMW87,Goy11]).

Proof (sketch). This proof sketch describes the simulator for ProactiveMPC′_{F,n,d,Φ} and argues informally why its output is indistinguishable from the adversary's view in a real protocol execution.
The simulator may be thought of as a composition of a simulator for the underlying protocol ProactiveMPC_{F,n,d,Φ} and simulators for the added zero-knowledge checks, which are implemented by MPC. The latter follow directly from the simulatability of the MPC used to run the ZK checks. We begin with the simulator for the passive case described in the proof of Theorem 1, and discuss how to adapt it to work for active adversaries. The main changes are as follows.

In the passive case, S was able to generate the corrupt parties' messages himself, since A's messages on behalf of the corrupt parties were distributed identically to honest protocol messages; now the adversary may deviate from the protocol, so that approach is no longer sound. Instead, the simulator S, while running A, incorporates the messages output by A on behalf of the corrupt parties into the simulated protocol transcript. Also, the execution of Φ within the multiplication sub-protocol is simulated using the simulator for Φ, which is guaranteed to exist since Φ is a secure MPC protocol (against a dishonest active majority).

In the passive case, when the secrecy threshold Ts was violated, the simulator learned all the inputs and simulated an honest protocol execution on those inputs. Again, with an active adversary this is not an adequate simulation, because the adversary's messages may deviate from the protocol specification. Instead, S must

run a protocol execution interacting with A on behalf of all the parties and Z, and output the view of A in that protocol execution.

Finally, a proof of security against mixed adversaries must take into account the multi-thresholds Tc, Tr, and Tf, which did not need to be considered in the passive case. Tc is the highest threshold: correctness is only "violated" when the adversary corrupts all parties. However, the correctness condition refers to the correctness of honest parties' outputs, so it holds vacuously when all parties are corrupt. Thus, our simulator does not need to take any action with regard to the correctness multi-threshold Tc.

The fairness multi-threshold Tf addresses the case when the adversary A prematurely terminates the protocol after having received the outputs of the corrupted parties, so that the honest parties do not learn their outputs (and output ⊥). If A exceeds Tf at any point during the simulated protocol execution, then S also performs enough corruptions to exceed Tf; then S receives the corrupt parties' outputs (in step 14 of the ideal process) before the honest parties learn their outputs in the ideal process, and thus can simulate such premature termination by A by sending β = 1 in step 14.

The robustness multi-threshold Tr is the most interesting one. The multi-threshold Tr = {(k, n − k − 2) : 1 ≤ k ≤ ⌈n/2⌉ − 1} is violated exactly when the ratio of active to passive corruptions is high enough that, if all the actively corrupted parties were to quit, the remaining parties' shares would contain insufficient information to reconstruct the secret values shared using GradualShare. This is equivalent to the active adversary causing a protocol abort, and can be simulated as follows: if A exceeds Tr at any point during the simulated protocol execution, then S also performs enough corruptions to exceed Tr, and thereby gains the ability to cause a protocol abort at step 15 of the ideal process (before outputs are issued).
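The robustness condition just described can be phrased as a small predicate. This is our own reading of the multi-threshold pairs (k active corruptions, at most n − k − 2 total corruptions), offered as an illustration rather than code from the paper.

```python
def ceil_half(n):
    """Integer ceil(n / 2)."""
    return -(-n // 2)

def robustness_holds(n, k_active, total_corrupt):
    """Adherence check for the robustness multi-threshold
    T_r = {(k, n - k - 2) : 1 <= k <= ceil(n/2) - 1}: with k active
    corruptions, robustness survives as long as the total number of
    corruptions is at most n - k - 2, so the parties left after the
    active parties quit still jointly determine the shared secrets."""
    if k_active == 0:
        return True                     # nobody can quit: no abort threat
    if k_active > ceil_half(n) - 1:
        return False                    # beyond the pairs listed in T_r
    return total_corrupt <= n - k_active - 2
```

For example, with n = 10 and 2 active corruptions, robustness tolerates up to 6 total corruptions; a 7th corruption (of either kind) crosses the (2, 6) pair and the adversary can force an abort.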


In this paper, we present the first feasibility result for constructing a PMPC protocol secure against a dishonest majority. To this end, we develop a new PMPC protocol that is robust and secure against t < n − 2 passive corruptions when there are no active corruptions, and secure but non-robust (with identifiable aborts) against t < n/2 − 1 active corruptions when there are no passive corruptions. Moreover, our protocol is secure (with identifiable aborts) against mixed adversaries controlling both passively and actively corrupted parties, provided that if there are k active corruptions, there are fewer than n − k − 1 total corruptions.

1 Introduction

Secure multiparty computation (MPC) protocols allow a set of distrusting parties, each holding private inputs, to jointly and distributedly compute a function of the inputs while guaranteeing correctness of its evaluation, and privacy of inputs (and outputs, if desired) for honest parties. The study of secure computation combines distributed computing paradigms with security methodologies. It was initiated by [Yao82] for two parties and [GMW87] for many parties; both of these works relied on cryptographic primitives. The information-theoretic setting was introduced by [BGW88] and [CCD88], which, assuming private channels, constructed information-theoretically secure MPC protocols tolerating up to n/3 malicious parties. Assuming a broadcast channel, [RB89] constructed a protocol

that can tolerate up to n/2 malicious parties. These thresholds, n/3 and n/2, are optimal in the information-theoretic setting, in their respective communication models. In the context of public key cryptography, schemes for enhancing distributed trust, e.g., threshold encryption and threshold signatures, are a special case of MPC, e.g., [FGMY97a,FGMY97b,Rab98,CGJ+99,FMY01,Bol03,JS05,JO08,ADN06]. Also, when the computation to be performed via MPC involves private keys, e.g., for threshold decryption or signature generation, it is of utmost importance for trustworthy operation to guarantee the highest possible level of corruption tolerance, since confidentiality of cryptographic keys should be ensured for a long time (e.g., years). Constructing MPC protocols that guarantee security against stronger adversaries while satisfying low communication and computation complexity bounds has seen significant progress, e.g., [IKOS08,DIK+08,DIK10,BFO12,OY91,BELO14,BELO15].

While enforcing an honest-majority bound on the adversary's corruption limit renders the problem (efficiently) solvable, it is often criticized, from a distributed systems point of view, as unrealistic for protocols that require long-term security of shared secrets used in the computation, for very long computations (i.e., reactive operation, typical in systems maintenance), or for protocols that may be targeted by nation-state adversaries (often called "Advanced Persistent Threats"). With the advancement of cloud hosting of security services, and of online exchanges for cryptocurrencies which require trustworthy services protected by their distributed nature, this criticism is well-founded. The concern is especially relevant when considering so-called "reactive" functionalities that never stop executing, e.g., continuously running control loops that perform threshold decryption or signature generation via a secret-shared key.
Such long-running reactive functionalities will become increasingly important for security in always-on cloud applications: example settings include the use of MPC to compute digital signatures in online financial transactions between large institutions, or to generate securely co-signed cryptocurrency transactions via secret-shared (or otherwise distributed) keys [GGN16]. In both these cases, one should expect persistent, strong adversaries to continuously attack the parties involved in the MPC protocol; given enough time, vulnerabilities in the underlying software (or even some hardware) will eventually be found, and the cryptographic keys may be compromised.

An approach to dealing with an adversary's ability to eventually corrupt all parties is the proactive security model [OY91]. The proactive security model introduces the notion of a mobile adversary, motivated by the persistent corruption of participating parties in a distributed computation and the continuous race between parties' corruption and recovery. A mobile adversary is one that can corrupt all parties in a distributed protocol over the course of a protocol execution, but with the following limitations: (1) only a constant fraction of parties can be corrupted during any round, and (2) parties periodically get rebooted to a clean initial state (in a fashion designed to mitigate the total adversarial corruption at any given time), guaranteeing that some fraction of honest parties will be maintained as long as the

corruption rate is not more than the reboot rate.^5 The [OY91] model also assumes that an adversary does not have the ability to predict or reconstruct the randomness used by parties in any uncorrupted period of time, as demarcated by rebooting; in other words, a reboot entails erasing all previous state. This paper's main goal is to answer the following basic question:

Is it feasible to construct a proactive MPC protocol for the dishonest majority setting?

1.1 Contributions

We answer this question in the affirmative by developing the first proactive secure multiparty computation (PMPC) protocol that is secure in the presence of a dishonest majority. Our new protocol is, first, secure and robust against t < n − 2 passive adversaries (parties which follow the protocol but leak what they know) when there are no active corruptions (arbitrarily misbehaving parties) and parties are serially rebooted. Second, the same protocol preserves secrecy but is unfair (with identifiable aborts) against t < n/2 − 1 active adversaries when there are no additional passive corruptions. Third, the protocol is also secure (but non-robust, with identifiable aborts) against mixed adversaries that control a combination of passively and actively corrupted parties such that if there are k active corruptions, there are fewer than n − k − 1 total corruptions.^6

We note that the number of parties we start from is n − 1 and not n because we assume that parties may be serially rebooted and need recovery from the remaining n − 1 parties. The threshold t is n − 3 and not n − 2 because in the refresh protocol, the secret shared by each randomizing polynomial is always 0, so the free coefficient of those polynomials is always an additional point that the adversary knows; hence we can tolerate one fewer corruption than in the non-proactive gradual secret sharing case.

Our design and analysis require new ideas, since the security guarantees of all existing PMPC protocols do not apply in the case of a dishonest passive majority, or in the case of mixed adversaries that may form a majority as described above.
That is, all existing PMPC protocols can only guarantee secrecy in the presence of an honest majority with at most n/2 − 1 total corruptions; an adversary that corrupts a single additional party beyond the n/2 − 1 threshold, even if only passively and only for a short period of time, obtains all the shared secrets, the inputs of the parties, and the intermediate and final results of the computation.

Our PMPC protocol construction requires new techniques for refreshing, recovering, adding, and multiplying secret-shared data with security in the presence of a dishonest majority. This is achieved using a combination of information-theoretic (additive and polynomial-based) secret sharing, and cryptographic techniques to protect against active adversaries. (Recall that cryptographic assumptions are necessary in the dishonest majority setting, due to the well-known impossibility of perfectly secure MPC in the presence of a dishonest majority.) Our PMPC protocol can be based on any one-way function and oblivious transfer (the same assumptions as

5. We model rebooting to a clean initial state to include global computation information, e.g., the circuit representing the function to be computed, the identities of parties, and access to secure point-to-point and broadcast channels.
6. The threshold in this case is actually the minimum of n − 3 and n − k − 1.


the classic [GMW87] protocol; formally, it requires only oblivious transfer, which implies the existence of one-way functions). The secret sharing scheme underlying our PMPC protocol is an adaptation of [DEL+16], which recently constructed the first stand-alone proactive secret sharing scheme secure against a dishonest majority. The [DEL+16] scheme makes use of discrete-logarithm-based verification of secret shares (similar to [Fel87]); for our PMPC protocol (being a portion of a more general protocol), we replace this component with another technique (described below as "mini MPC") to overcome problematic proactive simulation issues in the security proof. Computing on secret-shared data (with security against mobile dishonest-majority adversaries) is a topic unaddressed by prior work.

Our addition and multiplication sub-protocols are the building blocks that enable the parties to jointly compute a secret sharing of the desired output value. Addition of two secret-shared values can be performed by local addition of shares (as in many common secret sharing schemes), but multiplication requires more work. Our multiplication sub-protocol makes use of the [GMW87] protocol for standard MPC to perform a "mini MPC" on the proactive secret shares held by the parties, in order to obtain a proactive secret sharing of the product of two secrets. (More generally, the multiplication sub-protocol can be instantiated based on any standard MPC protocol Φ secure against a dishonest majority, and inherits its efficiency properties from Φ.) To build in security against mobile adversaries, we intersperse the execution of the addition and multiplication sub-protocols with a refresh sub-protocol that "refreshes" the shares held by all parties: informally, each time shares are refreshed, any knowledge of shares from previous "pre-refresh" sharings becomes useless to the adversary.
This effectively prevents the adversary from learning sensitive information by putting together shares obtained from corruptions that occur far apart in time. Moreover, whenever a party is de-corrupted (rebooted), its memory contents are erased, so it needs to "recover" the necessary share information; this is achieved using our recovery sub-protocol, which is triggered dynamically each time a memory loss occurs. The number of parties that can simultaneously lose memory is a parameter of our protocol, which trades off with the number of corruptions allowed per phase. This sensitive trade-off is inherent: if n − τ parties can restore the shares of τ parties who lost memory, then they could also collude to learn the shares of those τ parties. We highlight as an additional contribution the first (formal) definition of secure PMPC in the presence of adversaries that may make a combination of passive and active corruptions, and may corrupt a majority of the parties. Prior security definitions for PMPC only addressed the honest majority setting, so they did not have to address potential failures of robustness and fairness. Moreover, no existing definitions considered PMPC security with mixed adversaries. Our ideal functionality for the dishonest majority setting models robustness and fairness as a fine-grained function of the passive and active corruptions that actually occur during a protocol execution (rather than a coarser-grained guarantee depending on adherence to a corruption threshold that is fixed as a protocol parameter), by adapting to the proactive setting the multi-thresholds paradigm introduced by [HLM13] in the context of standard (non-proactive) MPC.

| Scheme   | Threshold Passive (Active) | Security      | Network Type | Comm. Complexity |
|----------|----------------------------|---------------|--------------|------------------|
| [WWW02]  | t < n/2 (n/2)              | Cryptographic | Synch.       | exp(n)           |
| [ZSvR05] | t < n/3 (n/3)              | Cryptographic | Asynch.      | exp(n)           |
| [CKLS02] | t < n/3 (n/3)              | Cryptographic | Asynch.      | O(n^4)           |
| [Sch07]  | t < n/3 (n/3)              | Cryptographic | Asynch.      | O(n^4)           |
| [HJKY95] | t < n/2 (n/2)              | Cryptographic | Synch.       | O(n^2)           |
| [BELO14] | t < n/3 − ε (n/3 − ε)      | Perfect       | Synch.       | O(1) (amortized) |
| [BELO14] | t < n/2 − ε (n/2 − ε)      | Statistical   | Synch.       | O(1) (amortized) |
| [DEL+16] | t < n − 1 (n/2 − 1)        | Cryptographic | Synch.       | O(n^4)           |

Table 1. Comparison of existing proactive secret sharing schemes; the threshold is per refresh phase. Note that the work in [DEL+16] also handles mixed adversaries, which are characterized by two thresholds: one for passive corruptions and one for active corruptions.

1.2 Related Work

Proactive Secret Sharing (PSS). Secret sharing is often utilized as a building block for MPC protocols. There is significant work on PSS schemes, summarized in Table 1; most existing PSS schemes are insecure when a majority of the parties are compromised, even if the compromise is only passive. Such schemes [OY91,HJKY95,WWW02,ZSvR05,Sch07,BELO14,BELO15] typically store the secret as the free term of a polynomial of degree t < n/2; thus, once an adversary compromises t + 1 parties (even if only passively), it can reconstruct the polynomial and recover the secret. Recently, [DEL+16] developed the first PSS scheme for a dishonest majority. The work in [DEL+16] only describes a PSS scheme and does not specify how to perform secure computation for the same thresholds. Our work builds on [DEL+16] and develops the first PMPC protocol for a dishonest majority. In addition to proactive secret sharing, there has also been substantial research on proactively secure threshold encryption and signature schemes for the honest majority setting, e.g., [FGMY97a,FGMY97b,Rab98,CGJ+99,FMY01,Bol03,JS05,JO08,ADN06].

Proactive Secure Multi-Party Computation (PMPC). To the best of our knowledge there are currently only two generic PMPC protocols: [OY91], which requires O(Cn^3) communication, where C is the size of the circuit to be computed via MPC, and [BELO14], which requires O(C log^2(C) polylog(n) + D poly(n) log^2(C)) communication, where C is again the circuit size and D its depth. These PMPC protocols are inherently designed for an honest majority, and it seems difficult to redesign them for a dishonest majority: the underlying secret sharing scheme stores secrets as points on polynomials of degree less than n/2, so the only adversary structure that can be described is one in terms of a fraction of the degree of the polynomial, and once the adversary compromises enough parties (even if only passively), it can reconstruct the polynomial and recover the secret.
1.3 Outline

The rest of the paper is organized as follows. Section 2 outlines the terminology of proactively secure computation, communication and adversary models; the corresponding formal definitions can be found in Appendix A. Section 3 presents details of our PMPC protocol for a dishonest majority. Section 4 concludes with a discussion of open issues and possible extensions. The security proofs are provided in Appendix B due to space constraints.

2 Model and Definitions

We consider n parties (pi where i ∈ [n]) connected by a synchronous network and an authenticated broadcast channel. Protocol communication proceeds in discrete rounds which are grouped into consecutive blocks called stages. We consider a mobile adversary with polynomially bounded computing power, which "moves around" and chooses a (new) set of parties to corrupt in each stage, subject to a maximum threshold of corruptions for any stage. Note that parties, once corrupted, do not necessarily remain so for the remainder of the protocol; this means that over the course of protocol execution, the adversary can corrupt all the parties, although not all at the same time.

2.1 Phases and Stages of a Proactive Protocol

We adopt terminology from previous formalizations of proactive protocols such as [ADN06] and [BELO14].

Phases. The rounds of a proactive protocol are grouped into phases ϕ1, ϕ2, . . . . A phase ϕ consists of a sequence of consecutive rounds, and every round belongs to exactly one phase. There are two types of phases, refresh phases and operation phases, and the protocol alternates between them; the first and last phases of the protocol are both operation phases. Each refresh phase is furthermore subdivided into a closing period consisting of the first k rounds of the phase, followed by an opening period consisting of the final ℓ − k rounds of the phase, where ℓ is the total number of rounds in the phase. In non-reactive MPC, the number of operation phases can be thought of as corresponding to the depth of the circuit to be computed. Intuitively, each operation phase serves to compute a layer of the circuit, and each refresh phase serves to re-randomize the data held by parties so that combining the data of corrupt parties across different phases will not be helpful to an adversary.

Stages. A stage σ of the protocol consists of an opening period of a refresh phase, followed by the subsequent operation phase, followed by the closing period of the subsequent refresh phase. Thus, a stage spans (but does not cover) three consecutive phases, and the number of stages in a protocol is equal to its number of operation phases. The first and last stages of a protocol are exceptions to the alternating "refresh-operation-refresh" format: the first stage starts with the first operation phase, and the last stage ends with the last operation phase.

Corruptions. If a party pi is corrupted by the adversary (A) during an operation phase of a stage σj, then A learns the view of pi starting from its state at the beginning of stage σj. If the corruption is made during a refresh phase between consecutive stages σj and σj+1, then A learns pi's view starting from the beginning of stage σj. Moreover, in the case of a corruption during a refresh phase, pi is considered to be corrupt in both stages σj and σj+1. Finally, a party pi that is corrupt during the closing period of a refresh phase in stage σj may become decorrupted. In this case, pi is considered to be no longer corrupt in stage σj+1 (unless A corrupts it again before the end of the next closing period). A decorrupted party immediately rejoins the protocol as an honest party: if it was passively corrupted, then it rejoins with the correct state according to the protocol up to this point; if it was actively corrupted, then it is restored to a clean default state (which may be a function of the current round). Note that in restoring a party to the default state, its randomness tapes are overwritten with fresh randomness; this is important since otherwise, any once-corrupted party would be deterministic to the adversary. In terms of modeling, parties to decorrupt are chosen arbitrarily from the corrupt set by the environment.

Erasing State. In our model, parties erase their internal state (i.e., the content of their tapes) between phases. The capability of erasing state is necessary in the proactive model: if an adversary could learn all previous states of a party upon corruption, then achieving security would be impossible, since over the course of a protocol execution a mobile adversary would eventually learn the state of all parties in certain rounds.

2.2 Mixed Corruption Model

We consider mixed adversaries [HLM13] which can perform two distinct types of corruptions. The adversary can passively corrupt a set of parties (P) and only read their internal state; the adversary may also actively corrupt some of these parties (A) and make them deviate arbitrarily from the protocol. We assume that A ⊆ P. In traditional MPC, a common notation is to denote the number of parties by n, and the maximum threshold of corrupt parties by t. For mixed adversaries, there are distinct thresholds for active and passive corruptions. We write ta and tp to denote the thresholds of active and passive corruptions, respectively, i.e., |A| ≤ ta and |P| ≤ tp. Note that since we have defined each active corruption to be also a passive corruption, each active corruption counts towards both ta and tp. Following the notation of [HLM13] and [DEL+16], in order to model security guarantees against incomparable maximal adversaries, we consider multi-thresholds T = {(ta^1, tp^1), . . . , (ta^k, tp^k)}, which are sets of pairs of thresholds (ta, tp). Security properties are guaranteed if (A, P) ≤ (ta, tp) for some (ta, tp) ∈ T, where (A, P) ≤ (ta, tp) is shorthand for |A| ≤ ta and |P| ≤ tp. If this condition is satisfied, we write (A, P) ≤ T.

We define our MPC protocols in terms of four security properties: correctness, secrecy, robustness, and fairness.7 The security properties which are guaranteed in any given protocol execution are a function of the number of actually corrupted parties. Accordingly, we consider four multi-thresholds Tc, Ts, Tr, Tf. Correctness (with agreement on abort) is guaranteed if (A, P) ≤ Tc, secrecy is guaranteed if (A, P) ≤ Ts, robustness is guaranteed if (A, P) ≤ Tr, and fairness is guaranteed if (A, P) ≤ Tf. Note that Tr ≤ Tc and Tf ≤ Ts ≤ Tc, since secrecy and robustness are not well-defined without correctness, and secrecy is a precondition of fairness.8

7 These terms are standard in the MPC literature. Correctness means that all parties that output a value must output the correct output value with respect to the set of all parties' inputs and the function being computed by the MPC. Secrecy means that the adversary cannot learn anything more about honest inputs and outputs than can already be inferred from the corrupt parties' inputs and outputs (more formally, secrecy requires that the adversary's view during protocol execution can be simulated given only the corrupt parties' input and output values). Robustness means that the adversary must not be able to prevent honest parties from learning their outputs. Finally, fairness requires that either all honest parties learn their own output values, or no party learns its own output value.

2.3 New PMPC and Security Definitions

Formal definitions for a proactive MPC protocol and the corresponding ideal functionality, and security for mixed mobile adversaries and dishonest majorities can be found in Appendix A due to space constraints. These definitions are new to this work; they do not exist in prior proactive MPC literature since the dishonest majority setting is unaddressed. One notable difference of the proactive dishonest majority definition we develop compared to the dishonest majority model for standard MPC is that in the standard model, it is acceptable to exclude parties found to be corrupt and simply restart the protocol with the remaining parties, whereas in the proactive setting this could result in the exclusion of all parties even though the adversary cannot actually corrupt all parties simultaneously. Thus, exclusion of misbehaving parties in our proactive model is only temporary, and the protocol is guaranteed to make progress in any phase when the adversary does not cause a majority of parties to deviate from the protocol (otherwise, the phase is restarted). An adversary could cause multiple restarts of a phase and delay protocol execution — which seems unavoidable in a dishonest majority model with a mobile adversary — but cannot cause a phase to have an incorrect output. Due to the definitions’ length and notational complexity, we have opted for a less formal protocol description in the limited space in the body.
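Operationally, the multi-threshold condition (A, P) ≤ T from Section 2.2 is a simple domination check over the pairs in T. The helper below is our own illustration, not part of the paper's formalism:

```python
def meets_multithreshold(T, num_active, num_passive):
    """(A, P) <= T iff some pair (ta, tp) in T dominates both corruption
    counts, i.e., |A| <= ta and |P| <= tp. Recall A is a subset of P, so
    each active corruption also counts toward the passive total."""
    return any(num_active <= ta and num_passive <= tp for ta, tp in T)
```

For example, T = {(1, 3), (0, 5)} tolerates one active corruption among three total, or five purely passive corruptions, but not two active corruptions.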

3 Construction of a PMPC Protocol for Dishonest Majorities

3.1 Intuition and Overview of Operation

Our PMPC protocol consists of six sub-protocols. GradualShare allows a dealer to share a secret s among n parties. Reconstruct allows parties to reconstruct the underlying secret s based on shares that they hold. Refresh is executed between two consecutive phases, w and w + 1, and generates new shares for phase w + 1 that encode the same secret as the shares in phase w. Recover allows parties that lost their shares to obtain new shares encoding the same secret s, with the help of other honest parties. Add allows parties holding shares of two secrets s and s′ to obtain shares that encode the sum s + s′. Mult allows parties holding shares of two secrets s and s′ to obtain shares that encode the product s × s′. The overall operation of the PMPC protocol is as follows. First, each party uses GradualShare to distribute its private input among the n parties (including itself). The circuit to be computed via PMPC is public, and consists of multiple

8 We write T ≤ T′ if ∀(ta, tp) ∈ T, ∃(t′a, t′p) ∈ T′ such that ta ≤ t′a and tp ≤ t′p.


layers, each comprised of a set of Add or Mult gates which are executed via the corresponding sub-protocols (layer by layer). Between circuit layers, the shares of all parties are refreshed via Refresh. Decorrupted parties obtain new shares encoding the same shared secrets corresponding to the current state of the MPC computation (i.e., the output of the current circuit layer and any shared values that will be needed in future layers) by triggering the Recover sub-protocol as soon as they find themselves rebooted. When the (secret-shared) output of the final layer of the circuit is computed, parties use Reconstruct to reconstruct the final output. In order to tolerate a dishonest majority, it is not enough to directly store the inputs of the parties (the secrets to be computed on, which will at the end be transformed into the outputs) in the free term, or as other points, of a polynomial. What is needed is to encode the secrets, and compute on them, in a different form resistant to a dishonest majority of, say, up to n − 1 parties. At a high level, this can be achieved by first additively sharing the secret into d = n − 1 random additive summands (this provides security against t = n − 3 passive corruptions), then sharing each summand using polynomial-based secret sharing for a range of different reconstruction thresholds: this is the key insight of the "gradual secret sharing" scheme of [DEL+16]. We develop protocols to add and multiply shares to perform computation on the secret shares. Addition can be performed locally, but to multiply we utilize a standard MPC protocol for a dishonest majority. A simple version of our protocol yields security against passive corruptions; to furthermore achieve active security, we leverage constant-round non-malleable homomorphic commitments and zero-knowledge proofs based on one-way functions and oblivious transfer.
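The gradual sharing idea described above can be sketched as follows; this is a minimal illustration under our own naming, with an illustrative prime modulus, and it omits all of the protocol's verification and commitment machinery:

```python
import random

P = 2_147_483_647  # illustrative prime, standing in for the field F

def eval_poly(coeffs, x, p=P):
    """Evaluate a polynomial (coefficients listed low degree first) at x mod p."""
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

def gradual_share(secret, n, d, p=P):
    """Split `secret` into d random additive summands s_1..s_d, then share
    summand delta with a random degree-delta polynomial f_delta subject to
    f_delta(0) = s_delta, for delta = 1..d. Returns party i's share tuple
    (sh_{1,i}, ..., sh_{d,i}) at index i-1."""
    summands = [random.randrange(p) for _ in range(d - 1)]
    summands.append((secret - sum(summands)) % p)
    sharings = []
    for delta, s in enumerate(summands, start=1):
        coeffs = [s] + [random.randrange(p) for _ in range(delta)]
        sharings.append([eval_poly(coeffs, i, p) for i in range(1, n + 1)])
    return list(zip(*sharings))
```

Note that the summand of highest degree d needs d + 1 shares to reconstruct, so an adversary holding fewer parties' shares learns nothing about it, and hence nothing about the secret.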
The protocol description thus far makes the following two simplifying assumptions: (1) the function f to be computed is deterministic, and (2) all output wire values are learned by all parties. The next two paragraphs discuss how to generalize our protocols, eliminating these assumptions. We address randomized functions using a standard technique: each party pi initially chooses a random value ζi. We treat (xi, ζi) as the input of party pi (instead of just xi as above), and compute the deterministic function f′ defined by f′((x1, ζ1), . . . , (xn, ζn)) = f(x1, . . . , xn; ζ1 + · · · + ζn). As this is a standard transformation, we omit further details, and for simplicity of exposition, the rest of the paper deals only with deterministic functions. We now describe an adaptation for the case when each party pi is to receive its own private output yi. This is a slight variation of the standard technique of "masking" output values using a random mask known only to the intended recipient, but we highlight that the standard technique requires a tweak for the proactive setting.9 Before the reconstruction step, the parties possess a gradual secret sharing of the output values (y1, . . . , yn). At this point, each party chooses a secret random value ρi (called a mask) and shares it among the n parties using GradualShare. Then, the Add sub-protocol is run to obtain a gradual secret sharing of (y1 + ρ1, . . . , yn + ρn) instead of (y1, . . . , yn). Next, the Reconstruct sub-protocol is run so that every party learns (y1 + ρ1, . . . , yn + ρn). Finally, each party pi performs

9 The standard trick is to consider the masks ρi to be part of the parties' inputs. In the proactive setting, it is important that the masks be chosen later on, as we shall see in the security proof.


an additional local computation at the end of the protocol, subtracting ρi from the value on its output wire to obtain its final output yi.
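The mask-and-unmask step can be sketched as a toy illustration (names and modulus are ours; in the actual protocol the masks are themselves gradually shared and combined via Add before reconstruction):

```python
import secrets

P = 2_147_483_647  # illustrative prime modulus

def choose_mask(p=P):
    """Each party samples its mask rho_i late in the protocol (after the
    output sharing exists), not as part of its input -- the proactive tweak."""
    return secrets.randbelow(p)

def mask_output(y, rho, p=P):
    """z_i = y_i + rho_i is the value that is publicly reconstructed."""
    return (y + rho) % p

def unmask_output(z, rho, p=P):
    """Only the holder of rho_i can locally recover y_i."""
    return (z - rho) % p
```

Since ρi is uniformly random, the publicly reconstructed zi statistically hides yi from everyone except pi.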

3.2 Real-world Protocol Operation

We now give the formal definition of protocol operation based on the sub-protocols. Definition 1 formalizes the description given in prose in Section 3.1. The description of how each sub-protocol works will be given in Section 3.3. Within Definition 1 below, the sub-protocols are invoked in a black-box manner.

Definition 1 (PMPC Protocol Operation). Given an arithmetic circuit C (of depth dC) that is to be computed by an MPC protocol on inputs x1, . . . , xn, the proactive MPC protocol is defined as follows. For simplicity, we assume that refresh phases occur between layers of the circuit, and let R ⊆ [dC] be the set of circuit layers after which a refresh phase is to be triggered.10
1. Each party pi acts as the dealer in GradualShare to share its own input xi among all n parties. (Note that at the conclusion of this step, the parties hold secret sharings of all the values on the input wires of C, i.e., all the inputs to gates at layer 1 of C.)
2. Run the Refresh sub-protocol. The duration of a single Refresh sub-protocol execution is considered to be a refresh phase.
3. For each layer of the circuit, ℓ = 1, . . . , dC:
   – For each addition or multiplication gate µ in layer ℓ:11 compute a sharing of the value on the output wire of µ by using the Add or Mult sub-protocol, respectively. The parties' inputs to the Add or Mult protocol will be the sharings of the values on the input wires of µ, which the parties already possess (the input sharings are computed by step 1 for ℓ = 1, and subsequently, the input sharings for layer ℓ + 1 are computed during step ℓ).
   – If ℓ ∈ R, run the Refresh sub-protocol.
4. At the conclusion of step 3, the parties possess a gradual sharing of the value (y1, . . . , yn) on the output wire(s) of the circuit C, where each yi is the output intended for party pi. The period from this step until the end of the protocol is a single operation phase.
Each party now samples a random value ρi and acts as the dealer in GradualShare to share ρi among all n parties. Then, the Add sub-protocol is run to obtain a gradual sharing of the value (z1, . . . , zn) where zi = yi + ρi.
5. The Reconstruct sub-protocol is then run to reconstruct the shared value (z1, . . . , zn).
6. Each party pi obtains its output yi by subtraction: yi = zi − ρi.

Moreover, the adversary may decorrupt a party at any point, during operation or refresh phases, upon which the decorrupted party is restored to a default state which we shall call ⊥.
– Whenever a party finds itself with internal state ⊥, it broadcasts a message Help!.

10 In general, more complex refresh patterns are possible, e.g., at the level of gates rather than circuit layers.
11 If the Add and Mult sub-protocols are secure under parallel composition, the iterations of this for-loop can be executed in parallel for all gates in layer ℓ.


– Upon receiving message Help! from a party pi, all parties immediately execute the Recover sub-protocol so that pi ends up with the secret shares of all values on circuit wires that will be used for later computation, or, in steps 4–6, of the masks ρ1, . . . , ρn and the shared output (z1, . . . , zn). In addition, from step 4 onwards, pi is assisted to recover its own mask ρi, by the other parties sending to pi their shares thereof. Then, the interrupted operation phase or refresh phase is resumed, starting with the next round after the last completed operation-phase or refresh-phase round.
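The layer-by-layer scheduling of Definition 1 can be sketched as a control-flow skeleton. In the sketch below the sub-protocol calls are stubbed out (sharings are plaintext stand-ins and function names are ours), so only the Add/Mult-per-gate and Refresh-per-layer ordering reflects the definition:

```python
# Control-flow sketch of Definition 1, steps 1-3. Sub-protocols are stubbed:
# "sharings" are plaintext stand-ins, so only the scheduling is faithful.

def run_layers(circuit, inputs, refresh_after):
    """circuit: list of layers; each gate is ('add' | 'mul', in_a, in_b, out),
    with wires indexed by integers. refresh_after: the set R of 1-based
    layer indices that are followed by a refresh phase."""
    wires = dict(enumerate(inputs))           # step 1: GradualShare each input (stub)
    refreshes = 1                             # step 2: initial Refresh (stub)
    for ell, layer in enumerate(circuit, 1):  # step 3: evaluate layer by layer
        for op, a, b, out in layer:           # Add / Mult per gate (stubs)
            wires[out] = (wires[a] + wires[b]) if op == 'add' else (wires[a] * wires[b])
        if ell in refresh_after:              # Refresh between layers in R
            refreshes += 1
    return wires, refreshes                   # steps 4-6 (masking, Reconstruct) omitted
```

For instance, evaluating (x0 + x1) × x2 uses a two-layer circuit `[[('add', 0, 1, 3)], [('mul', 3, 2, 4)]]` with a refresh after layer 1.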

3.3 Sub-protocol Specifications

In the following, field operations occur over a finite field F (of prime characteristic). The sub-protocols make use of a polynomial-based secret sharing scheme, e.g., [Sha79], and are implicitly parametrized by (F, n, d), where n is the number of parties and n − d − 1 is the number of parties that can simultaneously undergo a reboot (thus losing their shares, and requiring recovery). The multiplication sub-protocol is additionally parametrized by Φ (which, in turn, is parametrized by a security parameter κ), which can be any general MPC protocol secure against up to n − 1 active corruptions (such as [GMW87]). For simplicity, secret values are assumed to be field elements; multi-element secrets can be handled by running the sub-protocols on each element separately. The proactive MPC protocol that results from instantiating Definition 1 with the sub-protocols defined in this subsection is denoted by ProactiveMPC_{F,n,d,Φ}.

Overview of sub-protocols. GradualShare is used by parties to share their inputs, i.e., each party acts as a dealer when sharing its own inputs. Parties holding sharings (under GradualShare) of a secret s may use sub-protocol Reconstruct to reconstruct s, or use sub-protocol Refresh to refresh (re-randomize) their shares. Parties holding sharings of secrets s, s′ can compute a sharing of s + s′ using Add, or a sharing of s × s′ using Mult.

Subprotocol 1 (GradualShare). We denote by pD the dealer who starts in possession of the secret value s to be shared. At the conclusion of this protocol, each party (including the dealer) will possess a share of the secret s.
1. pD chooses d random summands s1, . . . , sd which add up to s, i.e., Σ_{δ=1}^{d} sδ = s.
2. For δ = 1, . . . , d, the dealer pD does the following:
   (a) pD samples a random degree-δ polynomial fδ over the finite field F, subject to fδ(0) = sδ. pD stores the evaluations fδ(1), . . . , fδ(n) and deletes fδ from memory.
   (b) For i ∈ [n], the dealer pD sends shδ,i = fδ(i) to pi, then deletes fδ(i) from memory.
3. Each party pi stores its d shares shi = (sh1,i, . . . , shd,i).

Subprotocol 2 (Reconstruct). After a sharing of a secret s using GradualShare, the n parties can reconstruct s as follows.
1. For δ = d, . . . , 1:

   (a) Each party pi broadcasts its share shδ,i.
   (b) Each party locally interpolates to determine the polynomial fδ, then computes sδ = fδ(0).
2. Each party outputs the secret s computed as s = s1 + s2 + · · · + sd.

Subprotocol 3 (Refresh). Each party pi, i ∈ [n], begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) and ends this protocol in possession of new "refreshed" shares sh′i = (sh′1,i, . . . , sh′d,i).
1. Each party pi generates an additive sharing of 0 (i.e., d randomization summands which add up to 0). Let the additive shares of pi be denoted by rδ,i; note that Σ_{δ=1}^{d} rδ,i = 0.
2. For δ = 1, . . . , d:
   (a) For i = 1, . . . , n: party pi shares rδ,i by running GradualShare and acting as the dealer.
   (b) Each party pi adds up the shares it received: sh″i = Σ_{j=1}^{n} sh^j_{δ,i}, and sets sh′δ,i = shδ,i + sh″i.
3. Each honest party pi deletes the old shares shi and stores instead sh′i = (sh′1,i, . . . , sh′d,i).

The following sub-protocol is used by parties to recover shares (under GradualShare) for a rebooted party.

Subprotocol 4 (Recover). Let parties {pr}r∈R be the ones that need recovery, where R ⊂ [n]. We refer to the other parties, {pi}i∉R, as "non-recovering parties." Below, we describe the procedure to recover the shares of a single party pr; to recover the shares of all recovering parties, the procedure should be run for all r ∈ R.
1. For δ = 1, . . . , d:
   (a) Each non-recovering party pi chooses a random degree-δ polynomial gδ,i subject to the constraint that gδ,i(r) = 0.
   (b) Each non-recovering party pi shares its polynomial with the other n − |R| − 1 non-recovering parties as follows: pi computes and sends to each receiving party pj the value sh^i_{δ,j} = gδ,i(j).
   (c) Each non-recovering party pj adds all the shares it received from the other n − |R| − 1 parties for the recovery polynomials gδ,i to its share of fδ, i.e., z^j_δ = fδ(j) + Σ_{i} sh^i_{δ,j} = fδ(j) + Σ_{i} gδ,i(j).
   (d) Each non-recovering party pj sends z^j_δ to pr. Using this information, pr interpolates the recovery polynomial gδ = fδ + Σ_{i} gδ,i and computes shδ,r = gδ(r) = fδ(r).

Subprotocol 5 (Add). Each party pi, i ∈ [n], begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) corresponding to a secret s and sh′i = (sh′1,i, . . . , sh′d,i) corresponding to a secret s′, and ends this protocol in possession of shares sh⁺i = (sh⁺1,i, . . . , sh⁺d,i) corresponding to the secret s + s′.
1. For each δ ∈ {1, . . . , d} and each i ∈ [n], party pi sets sh⁺δ,i = shδ,i + sh′δ,i.

Subprotocol 6 (Mult). Each party pi, i ∈ [n], begins this protocol in possession of shares shi = (sh1,i, . . . , shd,i) corresponding to a secret s and sh′i = (sh′1,i, . . . , sh′d,i) corresponding to a secret s′, and ends this protocol in possession of shares sh×i = (sh×1,i, . . . , sh×d,i) corresponding to the secret s × s′.
1. Each party pi adds up its local shares of s and s′ respectively: θi = Σ_{δ∈[d]} shδ,i and θ′i = Σ_{δ∈[d]} sh′δ,i. By construction of the gradual secret sharing scheme, these sums can be expressed as θi = f̂(i) and θ′i = f̂′(i) for some degree-d polynomials f̂, f̂′ such that f̂(0) = s and f̂′(0) = s′.
2. Run the MPC protocol of [GMW87] as follows:
   – The input of party pi to the MPC is (θi, θ′i).
   – The function to be computed by the MPC on the collective input (θ1, θ′1), . . . , (θn, θ′n) is:
     (a) Interpolate (θi)i∈[n] and (θ′i)i∈[n] to recover the secrets s and s′ as the free terms of the respective polynomials f̂ and f̂′.
     (b) Compute the product s× = s × s′.
     (c) Compute shares (sh×δ,i)δ∈[d],i∈[n] as a dealer would when sharing secret s× using GradualShare.
     (d) For each i ∈ [n], output (sh×δ,i)δ∈[d] to party pi.
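To complement the specification, the interpolation at the heart of Reconstruct can be sketched as follows. This is a minimal illustration with our own helper names and an illustrative modulus; each degree-δ polynomial fδ is recovered from δ + 1 broadcast shares and the free terms are summed:

```python
# Hedged sketch of Reconstruct: interpolate each f_delta at 0 from the
# broadcast shares and sum the recovered summands. Modulus is illustrative.
P = 2_147_483_647

def lagrange_at_zero(points, p=P):
    """Interpolate the unique polynomial through points [(x, y), ...] at x = 0."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        # Modular inverse via Fermat's little theorem (p is prime).
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def reconstruct(all_shares, d, p=P):
    """all_shares[i] = party (i+1)'s tuple (sh_{1,i+1}, ..., sh_{d,i+1}).
    Recovers s_delta = f_delta(0) for delta = d..1 and returns their sum."""
    n = len(all_shares)
    secret = 0
    for delta in range(d, 0, -1):  # gradual order: highest degree first
        pts = [(i + 1, all_shares[i][delta - 1]) for i in range(n)]
        # A degree-delta polynomial needs delta + 1 points.
        secret = (secret + lagrange_at_zero(pts[: delta + 1], p)) % p
    return secret
```

The gradual (highest-degree-first) reconstruction order is what limits how much an adversary who aborts midway can learn relative to the honest parties.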

3.4 Security Proofs

Security proofs of the full protocol with respect to the formal definitions in Appendix A are given in Appendix B due to space constraints.

4 Conclusion and Open Issues

This paper presents the first proactive secure multiparty computation (PMPC) protocol for a dishonest majority. Our PMPC protocol is robust and secure against t < n − 2 passive-only corruptions, and secure but non-robust (with identifiable aborts) against t < n/2 − 1 active corruptions when there are no additional passive corruptions. The protocol is also secure, non-robust but with identifiable aborts, against mixed adversaries that control a combination of passively and actively corrupted parties such that with k active corruptions there are fewer than n − k − 1 total corruptions. In this paper we prove the feasibility of constructing PMPC protocols secure against dishonest majorities. Optimizing computation and communication in such protocols (and making them practical) is not the goal of this paper and is an interesting open problem. Specifically, we highlight the following issues of interest which remain open:
– There are currently no practical proactively secure protocols for dishonest majorities for specific classes of computations of interest, such as threshold decryption and signature generation; all existing practical proactively secure threshold encryption and signature schemes, such as [FGMY97a,FGMY97b,Rab98,FMY01,Bol03,JS05,JO08,ADN06], require an honest majority.
– There are currently no PMPC protocols (or even proactive secret sharing schemes) for asynchronous networks that are secure against dishonest majorities. Our PMPC protocol assumes a synchronous network.

– It is unclear what the lowest communication bound for a PMPC protocol secure against a dishonest majority is. We achieve O(n^4) communication for the refresh and recover sub-protocols, which are typically the bottleneck; it remains open whether this can be further reduced. PMPC protocols [BELO14,BELO15] for an honest majority have constant (amortized) communication overhead; it is unlikely that this can be matched in the dishonest majority case, but it may be possible to achieve O(n^3) or O(n^2).

Acknowledgements. We thank Antonin Leroux for pointing out typos and issues in the statement of Theorem 2 in the appendix. We also thank the SCN 2018 reviewers for their constructive feedback, which helped us improve the readability of the paper. The second author's research is supported in part by NSF grant 1619348, DARPA SafeWare subcontract to Galois Inc., DARPA SPAWAR contract N66001-15-1C-4065, US-Israel BSF grant 2012366, OKAWA Foundation Research Award, IBM Faculty Research Award, Xerox Faculty Research Award, B. John Garrick Foundation Award, Teradata Research Award, and Lockheed-Martin Corporation Research Award. The views expressed are those of the authors and do not reflect the position of the Department of Defense or the U.S. Government.

References [ADN06]

[BELO14]

[BELO15]

[BFO12] [BGW88]

[Bol03]

[CCD88]

[ADN06] Jesús F. Almansa, Ivan Damgård, and Jesper Buus Nielsen. Simplified threshold RSA with adaptive and proactive security. In EUROCRYPT 2006, volume 4004 of Lecture Notes in Computer Science, pages 593–611. Springer, 2006.

[BELO14] Joshua Baron, Karim Eldefrawy, Joshua Lampkins, and Rafail Ostrovsky. How to withstand mobile virus attacks, revisited. In PODC 2014, pages 293–302. ACM, 2014.

[BELO15] Joshua Baron, Karim Eldefrawy, Joshua Lampkins, and Rafail Ostrovsky. Communication-optimal proactive secret sharing for dynamic groups. In ACNS 2015, 2015.

[BFO12] Eli Ben-Sasson, Serge Fehr, and Rafail Ostrovsky. Near-linear unconditionally-secure multiparty computation with a dishonest minority. In CRYPTO 2012, pages 663–680, 2012.

[BGW88] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In STOC 1988, pages 1–10. ACM, 1988.

[Bol03] Alexandra Boldyreva. Threshold signatures, multisignatures and blind signatures based on the gap-Diffie-Hellman-group signature scheme. In PKC 2003, volume 2567 of Lecture Notes in Computer Science, pages 31–46. Springer, 2003.

[CCD88] David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty unconditionally secure protocols. In STOC 1988, pages 11–19. ACM, 1988.

[CGJ+99] Ran Canetti, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin. Adaptive security for threshold cryptosystems. In CRYPTO 1999, volume 1666 of Lecture Notes in Computer Science, pages 98–115. Springer, 1999.

[CKLS02] Christian Cachin, Klaus Kursawe, Anna Lysyanskaya, and Reto Strobl. Asynchronous verifiable secret sharing and proactive cryptosystems. In ACM CCS 2002, pages 88–97. ACM, 2002.

[COSV16] Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, and Ivan Visconti. 4-round concurrent non-malleable commitments from one-way functions. Cryptology ePrint Archive, Report 2016/621, 2016. http://eprint.iacr.org/2016/621.

[DEL+16] Shlomi Dolev, Karim Eldefrawy, Joshua Lampkins, Rafail Ostrovsky, and Moti Yung. Proactive secret sharing with a dishonest majority. In SCN 2016, volume 9841 of Lecture Notes in Computer Science, pages 529–548. Springer, 2016.

[DIK+08] Ivan Damgård, Yuval Ishai, Mikkel Krøigaard, Jesper Buus Nielsen, and Adam Smith. Scalable multiparty computation with nearly optimal work and resilience. In CRYPTO 2008, pages 241–261, 2008.

[DIK10] Ivan Damgård, Yuval Ishai, and Mikkel Krøigaard. Perfectly secure multiparty computation and the computational overhead of cryptography. In EUROCRYPT 2010, pages 445–465, 2010.

[Fel87] Paul Feldman. A practical scheme for non-interactive verifiable secret sharing. In FOCS 1987, pages 427–437. IEEE, 1987.

[FGMY97a] Yair Frankel, Peter Gemmell, Philip D. MacKenzie, and Moti Yung. Optimal resilience proactive public-key cryptosystems. In FOCS 1997, pages 384–393. IEEE, 1997.

[FGMY97b] Yair Frankel, Peter Gemmell, Philip D. MacKenzie, and Moti Yung. Proactive RSA. In CRYPTO 1997, volume 1294 of Lecture Notes in Computer Science, pages 440–454. Springer, 1997.

[FMY01] Yair Frankel, Philip D. MacKenzie, and Moti Yung. Adaptive security for the additive-sharing based proactive RSA. In PKC 2001, volume 1992 of Lecture Notes in Computer Science, pages 240–263. Springer, 2001.

[GGN16] Rosario Gennaro, Steven Goldfeder, and Arvind Narayanan. Threshold-optimal DSA/ECDSA signatures and an application to bitcoin wallet security. In ACNS 2016, volume 9696 of Lecture Notes in Computer Science, pages 156–174. Springer, 2016.

[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In STOC 1987, pages 218–229. ACM, 1987.

[Goy11] Vipul Goyal. Constant round non-malleable protocols using one way functions. In STOC 2011, pages 695–704. ACM, 2011.

[HJKY95] Amir Herzberg, Stanislaw Jarecki, Hugo Krawczyk, and Moti Yung. Proactive secret sharing or: How to cope with perpetual leakage. In CRYPTO 1995, pages 339–352, 1995.

[HLM13] Martin Hirt, Christoph Lucas, and Ueli Maurer. A dynamic tradeoff between active and passive corruptions in secure multi-party computation. In CRYPTO 2013, Part II, volume 8043 of Lecture Notes in Computer Science, pages 203–219. Springer, 2013.

[IKOS08] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Cryptography with constant computational overhead. In STOC 2008, pages 433–442. ACM, 2008.

[JO08] Stanislaw Jarecki and Josh Olsen. Proactive RSA with non-interactive signing. In FC 2008, volume 5143 of Lecture Notes in Computer Science, pages 215–230. Springer, 2008.

[JS05] Stanislaw Jarecki and Nitesh Saxena. Further simplifications in proactive RSA signatures. In TCC 2005, volume 3378 of Lecture Notes in Computer Science, pages 510–528. Springer, 2005.

[LPTV10] Huijia Lin, Rafael Pass, Wei-Lung Dustin Tseng, and Muthuramakrishnan Venkitasubramaniam. Concurrent non-malleable zero knowledge proofs. In CRYPTO 2010, volume 6223 of Lecture Notes in Computer Science, pages 429–446. Springer, 2010.

[OY91] Rafail Ostrovsky and Moti Yung. How to withstand mobile virus attacks (extended abstract). In PODC 1991, pages 51–59. ACM, 1991.

[Rab98] Tal Rabin. A simplified approach to threshold and proactive RSA. In CRYPTO 1998, volume 1462 of Lecture Notes in Computer Science, pages 89–104. Springer, 1998.

[RB89] Tal Rabin and Michael Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In STOC 1989, pages 73–85. ACM, 1989.

[Sch07] David Schultz. Mobile Proactive Secret Sharing. PhD thesis, Massachusetts Institute of Technology, 2007.

[Sha79] Adi Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.

[WWW02] Theodore M. Wong, Chenxi Wang, and Jeannette M. Wing. Verifiable secret redistribution for archive systems. In IEEE Security in Storage Workshop, pages 94–106. IEEE, 2002.

[Yao82] Andrew Chi-Chih Yao. Protocols for secure computations (extended abstract). In FOCS 1982, pages 160–164. IEEE, 1982.

[ZSvR05] Lidong Zhou, Fred B. Schneider, and Robbert van Renesse. APSS: Proactive secret sharing in asynchronous systems. ACM Trans. Inf. Syst. Secur., 8(3):259–286, 2005.

A Formal Definitions for Proactive MPC

In this section we provide formal definitions of a proactive MPC protocol, of the corresponding ideal functionality, and of security against mixed mobile adversaries and dishonest majorities. These definitions are new to this work: they do not appear in the prior proactive MPC literature, since that literature does not address the dishonest-majority setting.

A.1 New Security Definitions

To accommodate a dishonest majority, we have to adapt the earlier proactive MPC security definitions to include the multi-thresholds described above. An interesting aspect of these new definitions is dealing with the possibility of aborts when the adversary has corrupted a majority of the parties. The standard security notion for (non-proactive) MPC accepts the unavoidable possibility that a majority of corrupt parties abort and thus prevent the protocol from proceeding; it stipulates that in such an event the culprits can be identified and eliminated by the remaining parties, who then continue running the protocol without the offenders (setting the offenders' input values to some known defaults). In the proactive setting, this solution does not seem directly applicable: the adversary could corrupt more and more parties in successive phases, until all parties have been eliminated from the protocol.

Proactive MPC Protocols with Mixed Mobile Adversaries

Definition 2 (Proactive MPC Protocol for Mixed Mobile Adversaries). A proactively secure multiparty protocol Π specifies a system of interaction between an environment Z, an adversary A, and parties p1, . . . , pn (denoted by the set {pi | i ∈ [n]}). Z, A, and the parties are modeled as interactive Turing machines. The specification of a protocol Π is structured in synchronous rounds of interaction, and designates each round as part of either a refresh phase or an operation phase as described in Section 2.1. A "real-world" protocol execution is structured as follows:

Initialization
1. The environment Z invokes the adversary A with optional auxiliary input z.
2. The environment Z invokes each party in the set {pi | i ∈ [n]} with an input xi.12
3. Initialize the sets of passively and actively corrupted parties, P := ∅ and A := ∅.

Protocol execution
4. In each round of Π:
(a) Each (semi-)honest party in {pi | i ∈ [n]} \ A prepares a message to broadcast in this round, as prescribed by Π.
(b) The adversary A may perform corruptions and/or decorruptions as described below. This step can be repeated as many times as A (adaptively) decides.
• Corruption: Upon corruption of a party pi ∈ {pi | i ∈ [n]} \ P, update P := P ∪ {pi}. A gains access to the internal state of pi, including pi's view since the start of the current stage. Moreover, if pi is corrupted actively, update A := A ∪ {pi}.
• Decorruption: Upon decorruption of a party pi ∈ P, update P := P \ {pi} and overwrite the randomness tape of pi with fresh randomness unknown to A. Moreover, if pi was actively corrupted, update A := A \ {pi} and overwrite the internal state of pi with a "default" state as specified by Π. After this, if pi has not yet broadcast a message in this round, he prepares one as prescribed by Π (after the randomness/state overwrite).
(c) The adversary broadcasts a message on behalf of each corrupt party pi ∈ P.
(d) All honest parties broadcast their prepared messages.

Outputs: Each honest party in {pi | i ∈ [n]} \ P outputs a special symbol ⊥1 if he has not received an output value, and otherwise outputs his received value yi. The adversary A outputs vA, which may be an arbitrary (randomized) function of the information he has learned during the real protocol execution. The environment Z, after observing the outputs of all parties and A, outputs a bit bZ. Let ExecΠ,A,Z(κ, f, z) denote the distribution of bZ.

12 Without loss of generality, |xi| = |xj| for all i, j ∈ [n].


Ideal PMPC Functionality for Mixed Mobile Adversaries

Ideal process IdealF proactive-mixed

In the ideal process, the environment Z initializes the parties and an ideal adversary S with inputs of its choice. The parties and the ideal adversary (who may choose to corrupt and take control of a subset of parties of its choice) interact with an ideal functionality F which behaves as a trusted third party: it receives the inputs x1, . . . , xn of the parties, computes the function f on the inputs, and outputs to each party pi his respective output yi. At the conclusion of the protocol, each uncorrupted party outputs his output value yi, each corrupted party outputs a special symbol ⊥0, and S outputs an arbitrary function of its view13 of the protocol execution. The environment Z observes the outputs of all parties and S, then outputs a single bit.

Parameters: κ, the security parameter; f, the n-ary function to be computed; multi-thresholds Tc, Ts, Tr, Tf.

Specification of the Ideal Process:

Initialization
1. The environment Z invokes the ideal adversary S with optional auxiliary input z.
2. The environment Z invokes each party in the set {pi | i ∈ [n]} with an input xi.
3. F initializes the sets of passively and actively corrupted parties, P := ∅ and A := ∅, and also initializes µP := 0 and µA := 0.

Inputs
4. Each party pi sends his input xi to the ideal functionality F.

Input corruption phase
5. F receives a message (τ, i) from S, where τ ∈ {passive, active, decorrupt} and i ∈ [n] ∪ {⊥}.
6. If i ≠ ⊥:
• If τ ∈ {passive, active}: Update P := P ∪ {pi}, then set µP := max(µP, |P|). F sends xi to S.
• If τ = active: Update A := A ∪ {pi}, then set µA := max(µA, |A|).
• If τ = decorrupt: Update P := P \ {pi} and A := A \ {pi}.
• Go back to step 5.
7. If (µP, µA) ≰ Ts: F sends all inputs {xi}i∈[n] to the adversary S.
8. For randomized functionalities only: If (µP, µA) ≤ Ts: F samples a random bit-string r of appropriate length. Else: F receives r from the adversary S.

Computation
9. If f is deterministic, F evaluates (y1, . . . , yn) := f(x1, . . . , xn). If f is randomized, F evaluates (y1, . . . , yn) := f(x1, . . . , xn; r).

Output corruption phase
10. S sends the message StartOutputPhase to F, whereupon F resets P := ∅ and A := ∅.
11. F receives a message (τ, i) from S, where τ ∈ {passive, active} and i ∈ [n] ∪ {⊥}.
12. If i ≠ ⊥:
• If τ ∈ {passive, active}: Update P := P ∪ {pi}, then set µP := max(µP, |P|). F sends yi to S.
• If τ = active: Update A := A ∪ {pi}, then set µA := max(µA, |A|).
• Go back to step 11.
13. If (µP, µA) ≰ Ts: F sends all outputs {yi}i∈[n] to the adversary S.

Outputs
14. If (µP, µA) ≰ Tf: For each pi ∈ P, F sends yi to the adversary S. Then S sends a bit β to F. If β = 1, then F aborts.
15. If (µP, µA) ≰ Tr: S sends a bit β to F. If β = 1, then F aborts.
16. If (µP, µA) ≰ Tc: S sends (y′i)i∈[n] to F and F sets yi := y′i for all i ∈ [n].
17. For each honest party pi ∈ {pi | i ∈ [n]} \ P, F sends yi to pi. For each corrupt party pi ∈ P, F sends yi to the adversary S.

Outputs: Each honest party in {pi | i ∈ [n]} \ P outputs a special symbol ⊥1 if he has not received an output value, and otherwise outputs his received value yi. The adversary S outputs vS, which may be an arbitrary (randomized) function of the information he has learned during the ideal protocol execution. The environment Z, after observing the outputs of all parties and S, outputs a bit bZ. Let IdealF,S,Z proactive-mixed(κ, f, z, (Tc, Ts, Tr, Tf)) denote the distribution of bZ.

13 The view comprises S's auxiliary input, internal randomness, and messages received.


Note that the adversary in the ideal process effectively gets to choose one set of corrupt parties whose inputs it learns, and another set of corrupt parties whose outputs it learns. This models a mobile adversary whose set of corrupted players changes over time. We have used P and A to denote the sets of passively and actively corrupted parties in both the input phase and the output phase; note, however, that the output-phase corruptions may be entirely different from the input-phase corruptions, since P and A are reset to be empty in step 10.

Definition 3 (T-bounded Real Adversary). For a multi-threshold T, an adversary A (attacking a protocol Π) is said to be T-bounded if (µP, µA) ≤ T, where µP (resp. µA) is the maximum number of passive (resp. active) corruptions made by A in any protocol stage.

Definition 4 (T-bounded Ideal Adversary). For a multi-threshold T, an adversary S in the ideal process IdealF proactive-mixed is said to be T-bounded if (µP, µA) ≤ T at the end of the ideal process.

Definition 5 (Mixed Proactive Security). A multiparty protocol Π is said to securely realize IdealF proactive-mixed with multi-thresholds (Tc, Ts, Tr, Tf) if for all efficient adversaries A attacking Π, there exists an efficient ideal adversary S such that for all T ∈ {Tc, Ts, Tr, Tf}, if A is T-bounded, then S is also T-bounded; and for every efficient environment Z, it holds that

{IdealF,S,Z proactive-mixed(κ, f, z, (Tc, Ts, Tr, Tf))}κ,f,z ≈c {ExecΠ,A,Z(κ, f, z)}κ,f,z,

where ≈c denotes computational indistinguishability in the security parameter κ.
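The T-boundedness checks in Definitions 3 and 4 can be made concrete with a small helper. This is a sketch under an assumption: we read "(µP, µA) ≤ T" as "some pair in the multi-threshold T dominates (µP, µA) componentwise", since the order on multi-thresholds is defined elsewhere in the paper; the helper and variable names are ours.

```python
def t_bounded(mu_p: int, mu_a: int, T: set) -> bool:
    """True iff (mu_p, mu_a) is dominated componentwise by some pair in T
    (our assumed reading of the multi-threshold order)."""
    return any(mu_p <= tp and mu_a <= ta for (tp, ta) in T)

# Example multi-thresholds in the shape of Theorem 2 (n and d illustrative).
n, d = 7, 3
Ts = {(d - 1, d - 1)}
Tf = {(k, min(n - k - 1, n - 3)) for k in range(1, (n + 1) // 2)}

print(t_bounded(2, 1, Ts))  # -> True: within the secrecy threshold
print(t_bounded(3, 0, Ts))  # -> False: too many passive corruptions
```

Note that a multi-threshold with several pairs (like Tf above) captures a trade-off: more passive corruptions can be tolerated when there are fewer active ones, and vice versa.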

B Security Proofs

In this section, we first prove that ProactiveMPCF,n,d,Φ is secure against passive adversaries, and then describe how the protocol can be modified to furthermore achieve security against active adversaries. Theorem 1 states that ProactiveMPCF,n,d,Φ securely realizes an ideal functionality IdealF proactive-passive, a natural adaptation of IdealF proactive-mixed to the case of passive adversaries. Next, in Section B.1, we give formal definitions of IdealF proactive-passive and of what it means for a PMPC protocol to be secure against passive adversaries. Then, in Section B.2, we formally state and prove Theorem 1 (i.e., passive security of ProactiveMPCF,n,d,Φ). Finally, in Section B.3, we address the extension to active security.

B.1 Ideal Process for Passive Mobile Adversaries

This ideal process is for passive mobile adversaries. The ideal process is defined by a system of interactive Turing machines: an environment Z, an ideal adversary S, an ideal functionality F, and parties p1 , . . . , pn (denoted by the set {pi |i ∈ [n]}) which interact with each other as described in the ideal process specification.

Note that in the passive setting, correctness, robustness, and fairness are guaranteed to hold, so we need only consider a single threshold Ts which determines whether or not secrecy holds.

Ideal process IdealF proactive-passive

In the ideal process, the environment Z initializes the parties and an ideal adversary S with inputs of its choice. The parties and the ideal adversary (who may choose to corrupt and take control of a subset of parties of his choice) interact with an ideal functionality F which behaves as a "trusted third party": it receives the inputs x1, . . . , xn of the parties, computes the function f on the inputs, and outputs to each party pi his respective output yi. The environment Z observes the entire interaction between the parties, S, and F, and may interact arbitrarily with the adversary during the ideal process. Finally, Z outputs a single bit.

Parameters: κ, the security parameter; N, the number of protocol stages; f, the n-ary function to be computed; secrecy threshold 0 ≤ t ≤ n.

Specification of the Ideal Process:

Initialization
1. The environment Z invokes the ideal adversary S with optional auxiliary input z.
2. The environment Z invokes each party in the set {pi | i ∈ [n]} with an input xi.14
3. Initialize the set of passively corrupted parties P := ∅. Initialize µP := 0.

Inputs
4. Each party pi sends his input xi to the ideal functionality F.

Input corruption
5. F receives a message (passive, i) from the adversary S, where i ∈ [n] ∪ {⊥}.
6. If i ≠ ⊥:
• Update P := P ∪ {pi}, then set µP := max(µP, |P|). F sends xi to S.
• Go back to step 5.
7. If µP > t: F sends all inputs {xi}i∈[n] to the adversary S.
8. For randomized functionalities only: If µP ≤ t: F samples a random bit-string r of appropriate length, and sends r to Z. Else: F receives r from the adversary S, and sends r to Z.

Computation
9. If f is deterministic, F evaluates (y1, . . . , yn) := f(x1, . . . , xn). If f is randomized, F evaluates (y1, . . . , yn) := f(x1, . . . , xn; r).

Output corruption
10. Reset P := ∅.
11. F receives a message (passive, i) from the adversary S, where i ∈ [n] ∪ {⊥}.
12. If i ≠ ⊥:
• Update P := P ∪ {pi}, then set µP := max(µP, |P|).
• Go back to step 11.
13. If µP > t: F sends all outputs {yi}i∈[n] to the adversary S.

Outputs
14. For each corrupted party pi ∈ P, F sends yi to the adversary S.
15. For each honest party pi ∈ {pi | i ∈ [n]} \ P, F sends yi to pi.

Outputs: Each honest party in {pi | i ∈ [n]} \ P outputs ⊥ if he has not received an output value, and otherwise outputs his received value yi. The adversary S outputs vS, which may be an arbitrary (randomized) function of the information he has learned during the ideal protocol execution. The environment Z, after observing the ideal process and the inputs and outputs of all parties and S, outputs a bit bZ.
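The flow of the passive ideal process can be traced with a toy sequential model. All names here are illustrative (the real process is a system of interactive Turing machines, with the corruption schedule chosen adaptively by S); the sketch only tracks what the ideal adversary learns and how the reset of P models a mobile adversary.

```python
def ideal_passive(f, inputs, t, input_corruptions, output_corruptions):
    """Toy trace of the passive ideal process: returns (outputs, leaked),
    where `leaked` maps labels like ("x", i) / ("y", i) to values S learns."""
    leaked = {}
    # Input corruption phase: S learns each corrupted party's input; if the
    # secrecy threshold t is ever exceeded, all inputs leak (step 7).
    P, mu = set(), 0
    for i in input_corruptions:
        P.add(i)
        mu = max(mu, len(P))
        leaked[("x", i)] = inputs[i]
    if mu > t:
        leaked.update((("x", i), x) for i, x in enumerate(inputs))
    # Computation (step 9, deterministic case).
    outputs = f(inputs)
    # Output corruption phase: P is reset (step 10) -- the mobile adversary
    # may pick an entirely different set of parties -- but mu is not reset.
    P = set()
    for i in output_corruptions:
        P.add(i)
        mu = max(mu, len(P))
        leaked[("y", i)] = outputs[i]
    if mu > t:
        leaked.update((("y", i), y) for i, y in enumerate(outputs))
    return outputs, leaked

# With t = 1, one corruption per phase leaks only that party's values:
outs, view = ideal_passive(lambda xs: [sum(xs)] * 3, [1, 2, 3], t=1,
                           input_corruptions=[0], output_corruptions=[2])
```

Corrupting two parties in the input phase instead would push µP above t = 1 and leak every input, matching step 7 of the specification.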

For an ideal functionality F, an ideal adversary S, and an environment Z, let IdealF,S,Z proactive-passive(κ, f, z, t) denote the distribution of the output of Z in the ideal process IdealF proactive-passive when interacting with F and S with security parameter κ, n-ary functionality f, auxiliary input z, and secrecy threshold t. For a proactively secure multiparty protocol Π, let ExecΠ,A,Z(κ, f, z) denote the distribution of the output of Z in an execution of Π when interacting with A for parameters κ, f, and z as above.

14 Without loss of generality, |xi| = |xj| for all i, j ∈ [n].

Definition 6 (Passive Proactive Security). A multiparty protocol Π is said to securely realize IdealF proactive-passive with up to t corruptions if for all adversaries A attacking Π, there exists an ideal adversary S such that:
– if A corrupts at most t parties per protocol stage, then S corrupts at most t parties in each corruption phase of the ideal process; and
– for every environment Z, it holds that

{IdealF,S,Z proactive-passive(κ, f, z, t)}κ,f,z ≈c {ExecΠ,A,Z(κ, f, z)}κ,f,z.

B.2 Security Proof for Passive Adversaries

Theorem 1. If Φ is an MPC protocol secure against up to n − 1 (passive) corruptions, then ProactiveMPCF,n,d,Φ is a secure proactive MPC protocol against passive adversaries. That is, ProactiveMPCF,n,d,Φ securely realizes IdealF proactive-passive with secrecy threshold Ts = d − 1, provided that at most n − d − 1 parties may simultaneously crash/reboot at any point during protocol execution.

Remark 1. While the proof could be simplified in certain places if we were only interested in addressing passive adversaries, we have chosen to write the proof with some additional generality so that it bears more similarity to the proof of active security.

Proof. Assume that Φ is a standard MPC protocol (i.e., not proactive) secure against up to n − 1 active corruptions. Let f be any n-ary function, let A be any adversary against ProactiveMPCF,n,d,Φ that adheres to multi-thresholds (Tc, Ts, Tr, Tf), and let z be any auxiliary input. We describe a simulator S that, when instantiated with auxiliary input z by an environment Z, instantiates A with the same auxiliary input z, feeds simulated "real-world" protocol messages to A(z) (as well as random coins, if A is randomized), and eventually outputs a view which is indistinguishable from that of A in the real protocol (while adhering to the same corruption thresholds as A). Henceforth, we leave the auxiliary input z implicit. We now describe the simulation step by step. First, we describe the simulation assuming that no executions of the Recover sub-protocol are triggered; at the end, we describe how to simulate the Recover sub-protocol too. Recall that Definition 2 gives a general description of how messages are sent by the adversary and honest parties, and how corruptions and decorruptions are performed, during each step of a real-world protocol execution.

Recall that the first phase of the ideal process IdealF proactive-mixed in which the simulator participates is the input corruption phase. Our simulator chooses its corruptions in this phase by "copying" the corruptions of A during steps 1–2 of the real-world protocol execution (Definition 1). The simulator sends protocol messages on behalf of honest parties, which are generated by running GradualShare honestly on behalf of those parties based on an input value of 0. Whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt} (during step 4(b) of Definition 2), the simulator forwards the message (τ, i) to the ideal functionality F and, if τ ∈ {passive, active}, receives the input xi of party i in return. Upon receiving xi, if pi has already performed the secret-sharing of his input value (i.e., if the shares have been sent out to the other parties), the simulator alters those shares in the degree-d polynomial of the gradual sharing of pi's input which are held by honest parties who have never been corrupted (including pi, if he has never been corrupted before), so as to change the free term of the degree-d polynomial such that xi equals the sum of the free terms of all the polynomials in the gradual sharing. (Note that it is important that the shared values in the polynomials of degree less than d remain unchanged.) Then S updates pi's internal state to be consistent with the new sharing of input xi. If pi has not yet performed the secret-sharing of his input value, then S simply updates pi's internal state to be consistent with xi. Then, S forwards pi's internal state to A.

Note: If the secrecy threshold Ts is exceeded at any point, then S learns all the inputs x1, . . . , xn of all parties; in this event, all secrecy is lost, and S produces a perfect simulation by generating an honest protocol transcript based on inputs x1, . . . , xn (without any further interaction with A).
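The gradual secret sharing manipulated above, and the simulator's free-term adjustment, can be sketched concretely. This is a minimal illustration over a toy prime field, assuming Shamir sharing for each layer; the helper names (gradual_share, adjust_free_term, etc.) are ours, not the protocol's, and the adjustment here shifts every share of the top layer, which corresponds to the degenerate case in which no party was ever corrupted.

```python
import random

P = 2**31 - 1  # a Mersenne prime, standing in for the field F

def shamir_share(secret, degree, n):
    """Share `secret` via a random polynomial of the given degree whose free
    term is `secret`; party i receives the evaluation at x = i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def shamir_reconstruct(shares, n):
    """Lagrange-interpolate (1, shares[0]), ..., (n, shares[-1]) at x = 0."""
    secret = 0
    for i, xi in enumerate(range(1, n + 1)):
        num = den = 1
        for j, xj in enumerate(range(1, n + 1)):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + shares[i] * num * pow(den, P - 2, P)) % P
    return secret

def gradual_share(secret, d, n):
    """Additively split `secret` into s_1 + ... + s_d (mod P), then share
    s_k with a degree-k polynomial, for k = 1, ..., d."""
    parts = [random.randrange(P) for _ in range(d - 1)]
    parts.append((secret - sum(parts)) % P)
    return [shamir_share(parts[k], k + 1, n) for k in range(d)]

def gradual_reconstruct(layers, n):
    return sum(shamir_reconstruct(layer, n) for layer in layers) % P

def adjust_free_term(layers, new_secret, n):
    """Simulator trick (sketch): shift the degree-d layer so the free terms
    now sum to `new_secret`, leaving all lower-degree layers untouched."""
    delta = (new_secret - gradual_reconstruct(layers, n)) % P
    layers[-1] = [(s + delta) % P for s in layers[-1]]

layers = gradual_share(0, d=3, n=5)  # the simulator first shares a dummy 0 ...
adjust_free_term(layers, 123, n=5)   # ... then "patches in" the real input
assert gradual_reconstruct(layers, n=5) == 123
```

Because fewer than d shares of a degree-d polynomial reveal nothing about its free term, an adversary holding at most d − 1 shares cannot distinguish the patched sharing from an honest sharing of the real input.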
We now focus on the case where Ts is not exceeded by A during step 1 of Definition 1. Any set of fewer than d shares in a polynomial secret-sharing of degree d information-theoretically hides the free term, and is distributed identically to a random set of points in F. Moreover, any set of fewer than n shares in an n-way additive secret-sharing information-theoretically hides the secret (i.e., the sum of all shares), and is also distributed identically to a random set of points in F. Recall that a gradual secret sharing consists of polynomial secret sharings of degree 1, . . . , d of additive secret shares s1, . . . , sd (respectively) of the secret s = s1 + · · · + sd. By assumption, the adversary is Ts-bounded, and so must corrupt fewer than d parties. Therefore, the joint distribution of all honest shares (in the gradual secret-sharing scheme) received by A (whether as a protocol message or as part of the internal state of a corrupted party) during steps 1–2 of the real-world protocol execution is identical to a random set of points in F. So the messages and internal states produced by S on behalf of the parties are indistinguishable to A from those in a real protocol execution.

We now move on to step 3 of the real-world protocol execution (Definition 1), i.e., computing the circuit layer by layer (with some interspersed Refresh sub-protocols). In this step, whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt}, S does not forward it to the ideal process. However, S keeps track of the sequence of corruptions made by A and of whether these exceed the multi-thresholds (Tc, Ts, Tr, Tf); if Ts is exceeded at any point, then S concludes the simulation as described in the Note above. S interacts with A on behalf of the honest parties in the real-world protocol execution, generating honest messages according to the protocol specification with respect to the set of shares held at the end of step 2. Recall that Add consists entirely of local computation, so we need only consider the simulation of Mult, which essentially consists of an execution of the GMW protocol. The standard MPC security guarantee of the GMW protocol ensures that an adversary observing honest parties' protocol messages learns nothing about the honest input values apart from what can be inferred from the output values of corrupt parties. In our case, the output values of the corrupt parties consist of secret-shares of the product of the values reconstructed from the input shares: in particular, nothing can be inferred from these output values about either the product or the multiplicands, provided that the adversary respects the secrecy threshold Ts. It follows that as long as Ts is not exceeded, the joint distribution of all shares seen by A during step 3 is indistinguishable from that of a random set of points in F, in both the real and simulated executions, even in the presence of the GMW protocol messages exchanged during executions of Mult.

Finally, we arrive at steps 4–6, the masking and reconstruction steps (which comprise a single operation phase). This coincides with the start of the output corruption phase in the ideal process. Note that the simulator can already compute the values (z1, . . . , zn) that would be reconstructed from the secret-sharing held by the parties at the end of step 3. S interacts with A on behalf of the honest parties in the real-world protocol execution, generating honest messages according to the protocol specification with respect to the set of shares held at the end of step 3, and honestly generating random masks ρ1, . . . , ρn. Whenever A outputs a message (τ, i) for τ ∈ {passive, active, decorrupt}, the simulator forwards the message (τ, i) to the ideal functionality F and, if τ ∈ {passive, active}, receives the output yi of party i in return.
Upon receiving yi, S sets ρi := zi − yi. If pi has already performed the secret-sharing of his mask ρi (i.e., if the shares have been sent out to the other parties), the simulator alters those shares in the degree-d polynomial of the gradual sharing of pi's mask which are held by honest parties who have never been corrupted in the current phase (including pi, if he has not been corrupted in this phase), so as to change the free term of the degree-d polynomial such that ρi equals the sum of the free terms of all the polynomials in the gradual sharing. Then S updates pi's internal state to be consistent with the new sharing of mask ρi. If pi has not yet performed the secret-sharing of his mask, then S simply updates pi's internal state to be consistent with ρi. Then, S forwards pi's internal state to A. By a very similar argument to the one above, provided that Ts is not exceeded, the joint distribution of all shares seen by A during steps 4–6 is identical to that of a random set of points in F conditioned on reconstructing to the correct output values of the corrupt parties, in both the real and simulated executions.

We have thus far described the simulator for each numbered step of Definition 2. We now return to the simulation of the Recover sub-protocol. S interacts with A on behalf of the honest parties in a Recover sub-protocol execution, generating honest messages according to the protocol specification with respect to the set of shares held at the point when the Recover sub-protocol was triggered. The distribution of messages seen by A in a thus-simulated Recover sub-protocol execution is identical to that in a real execution of Recover. (We have considered a single recovering party here, but the argument straightforwardly generalizes to the case of multiple parties recovering at once.) At the end of the simulated protocol execution, the simulator outputs the view of A in the simulated protocol. As argued above, the simulated protocol execution is indistinguishable to A from a real protocol execution. Moreover, invoking the information-theoretic hiding of the secret-sharing scheme again, the view of a Ts-bounded adversary A is distributed independently of the inputs of parties not corrupted during the input corruption phase and of the outputs of parties not corrupted during the output corruption phase. The theorem follows.

B.3 Dealing with Active Adversaries

The MPC sub-protocols in Section 3.3 are only secure against passive adversaries. To withstand active adversaries, these sub-protocols have to be augmented to ensure that parties corrupted by such adversaries cannot misbehave without getting caught. This can be achieved by adding checks to the sub-protocols that ensure the correctness of the result of each step, using generic zero-knowledge (ZK) proofs and non-malleable commitments based on one-way functions. Such an augmentation exists, with constant round overhead, as discussed in more detail below. The purpose of the augmentation is to force each party to prove (over the public broadcast channel) to the others that it correctly performed the steps involved in each sub-protocol; active misbehavior is thus detected, and the protocol can abort upon detection. At first glance, it may seem that generic ZK alone provides a solution: each party simply proves to the others, at each step, that it has performed the required computation correctly (revealing no information other than that single bit). The issue arises when trying to achieve agreement among the honest parties on which parties are maliciously deviating from the protocol. Ideally, we would like each party to broadcast a single proof to all the other parties which convinces them of whether the proving party is deviating from the protocol; then all honest parties agree on whether a proof verified or not. To do this, we would like the n − 1 verifying parties to "act as a single verifier" in a ZK protocol, and we require completeness of verification to hold even when some of the verifying parties are corrupt. A standard MPC protocol suffices to achieve this.
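For intuition, the commit/open interface that such checks build on can be sketched with a simple hash-based commitment. This is only a sketch: it assumes SHA-256 behaves as a hiding and binding commitment (a random-oracle-style assumption), and it has none of the non-malleability that the schemes of [Goy11] and [COSV16] provide; the function names are ours.

```python
import hashlib
import os

def commit(value: bytes) -> tuple:
    """Commit to `value`; returns (commitment, opening randomness)."""
    r = os.urandom(32)  # fresh randomness makes the commitment hiding
    return hashlib.sha256(r + value).digest(), r

def open_commitment(com: bytes, value: bytes, r: bytes) -> bool:
    """Any verifier can check a broadcast opening against the commitment."""
    return hashlib.sha256(r + value).digest() == com

com, r = commit(b"step-3 protocol message")
assert open_commitment(com, b"step-3 protocol message", r)
assert not open_commitment(com, b"forged message", r)
```

Because the commitment is broadcast, every honest party checks the same opening against the same digest, so all honest parties agree on whether it verified; non-malleability is what prevents a corrupt party from deriving a related commitment from an honest one.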
Theorem 1 in [Goy11] states: “A constant round (semi-honest secure) oblivious transfer protocol is necessary and sufficient to obtain a constant round secure multi-party computation protocol (unconditionally) [against active adversaries corrupting a dishonest majority].” Theorem 4 in [Goy11] states: “There exists a constant round many-many non-malleable commitment scheme using only one-way functions.” (The theorem numbers are taken from the extended version of the paper on the Cryptology ePrint Archive, report 2010/487, revision uploaded August 2015, rather than the conference version cited in the references.) The non-malleable commitment scheme of Theorem 4 is a key building block in the MPC protocol of Theorem 1. To prove Theorem 4, [Goy11] develops a new constant-round non-malleable commitment scheme, and then shows that this commitment scheme can be plugged into the construction of [LPTV10] to give a constant-round non-malleable ZK protocol assuming only OWF. Such a constant-round non-malleable ZK protocol can be used to compile an MPC protocol secure only against passive adversaries into one that is secure against active ones. This requires non-malleable commitments based on one-way functions, which follow from the work of [Goy11] and have subsequently been improved to four rounds (still based on one-way functions) by [COSV16]. Specifically, Theorem 1 in [Goy11] shows that the existence of a constant-round OT protocol is necessary and sufficient to obtain a constant-round MPC protocol, built out of non-malleable commitments. In summary, assuming OWF and using Theorem 1 of [Goy11], there exists an augmented version of our PMPC protocol in the OT-hybrid model that is secure against active corruptions with constant round overhead. The generic construction carries over to our setting as well.

Theorem 2. Let ProactiveMPC′_{F,n,d,Φ} denote the version of ProactiveMPC_{F,n,d,Φ} augmented with zero-knowledge checks of adherence to the protocol at every step. If Φ is an MPC protocol secure against up to n − 1 active corruptions, then ProactiveMPC′_{F,n,d,Φ} securely realizes Ideal^F_{proactive-mixed} with multi-thresholds (Tc, Ts, Tr, Tf), provided that at most n − d − 1 parties may simultaneously crash/reboot at any given point during the protocol execution, where Tc = {(n, n)}, Ts = {(d − 1, d − 1)}, Tr = {(k, n − k − 2) : 1 ≤ k ≤ ⌈n/2⌉ − 1}, and Tf = {(k, min(n − k − 1, n − 3)) : 1 ≤ k ≤ ⌈n/2⌉ − 1}. Moreover, ProactiveMPC′_{F,n,d,Φ} can be based on any one-way function and oblivious transfer, when instantiated with a Φ that relies on the same assumptions (such as [GMW87,Goy11]).

Proof (sketch). This proof sketch describes the simulator for ProactiveMPC′_{F,n,d,Φ} and argues informally why its output is indistinguishable from the adversary’s view in a real protocol execution.
The simulator may be thought of as a composition of a simulator for the underlying protocol ProactiveMPC_{F,n,d,Φ} and simulators for the added zero-knowledge checks, which are implemented by MPC; the latter follow directly from the simulatability of the MPC used to run the ZK checks. We begin with the simulator for the passive case, described in the proof of Theorem 1, and discuss how to adapt it to work for active adversaries. The main changes are as follows. In the passive case, S was able to generate the corrupt parties’ messages itself, since A’s messages on behalf of the corrupt parties were distributed identically to honest protocol messages; now the adversary may deviate from the protocol, so that is no longer possible. Instead, the simulator S, while running A, incorporates the messages output by A on behalf of the corrupt parties into the simulated protocol transcript. Also, the execution of Φ within the multiplication sub-protocol is simulated using the simulator for Φ, which is guaranteed to exist since Φ is a secure MPC protocol (against a dishonest active majority). In the passive case, when the secrecy threshold Ts was violated, the simulator learned all the inputs and simulated an honest protocol execution on those inputs. Again, with an active adversary this is not an adequate simulation, because the adversary’s messages may deviate from the protocol specification. Instead, S must run a protocol execution interacting with A on behalf of all the parties and Z, and output the view of A in that protocol execution.

Finally, a proof of security against mixed adversaries must take into account the multi-thresholds Tc, Tr, and Tf, which were not necessary to consider in the passive case. Tc is the highest threshold: correctness is only “violated” when the adversary corrupts all parties. However, the correctness condition refers to the correctness of honest parties’ outputs, so it holds vacuously when all parties are corrupt. Thus, our simulator does not need to take any actions with regard to the correctness multi-threshold Tc. The fairness multi-threshold Tf addresses the case where the adversary A prematurely terminates the protocol after having received the outputs of the corrupted parties, so that the honest parties do not learn their outputs (and output ⊥). If A exceeds Tf at any point during the simulated protocol execution, then S also performs enough corruptions to exceed Tf; S then receives the corrupt parties’ outputs (in step 14 of the ideal process) before the honest parties learn their outputs in the ideal process, and can thus simulate such premature termination by A by sending β = 1 in step 14. The robustness multi-threshold Tr is the most interesting one. The multi-threshold Tr = {(k, n − k − 2) : 1 ≤ k ≤ ⌈n/2⌉ − 1} is violated exactly when the ratio of active to passive corruptions is high enough that, if all the actively corrupted parties were to quit, the remaining parties’ shares would contain insufficient information to reconstruct the secret values shared using GradualShare. This is equivalent to the active adversary causing a protocol abort, and can be simulated as follows: if A exceeds Tr at any point during the simulated protocol execution, then S also performs enough corruptions to exceed Tr, thereby gaining the ability to cause a protocol abort at step 15 of the ideal process (before outputs are issued).
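The multi-thresholds discussed above can be made concrete with a small sketch. The convention assumed below — that a multi-threshold T is a set of pairs (a, t) bounding (active, total) corruptions, respected iff some pair dominates the adversary’s corruption pattern — is our reading of the text, not a verbatim definition; `thresholds` and `exceeds` are hypothetical helper names.

```python
from math import ceil

def thresholds(n, d):
    """Multi-thresholds as stated in Theorem 2 (assumed convention:
    each pair (a, t) bounds active and total corruptions respectively)."""
    Tc = {(n, n)}                                                     # correctness
    Ts = {(d - 1, d - 1)}                                             # secrecy
    Tr = {(k, n - k - 2) for k in range(1, ceil(n / 2))}              # robustness
    Tf = {(k, min(n - k - 1, n - 3)) for k in range(1, ceil(n / 2))}  # fairness
    return Tc, Ts, Tr, Tf

def exceeds(T, active, total):
    """The adversary exceeds T iff no pair in T dominates its corruptions."""
    return not any(active <= a and total <= t for (a, t) in T)

Tc, Ts, Tr, Tf = thresholds(n=10, d=6)
assert not exceeds(Ts, active=0, total=5)  # d - 1 = 5 corruptions tolerated
assert exceeds(Ts, active=0, total=6)      # d = 6 corruptions break secrecy
# Robustness: with k = 2 active corruptions, at most n - k - 2 = 6 total
# corruptions are tolerated, so 2 active + 6 passive (total 8) exceeds Tr.
assert exceeds(Tr, active=2, total=8)
assert not exceeds(Tr, active=2, total=6)
```

This mirrors the prose: the more parties the adversary corrupts actively, the fewer total corruptions Tr tolerates, since active quitters shrink the set of parties whose shares remain available for reconstruction.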
