

B.-M. Goi et al.: Computational Complexity and Implementation Aspects of the Incremental Hash Function


Computational Complexity and Implementation Aspects of the Incremental Hash Function

Bok-Min Goi, M.U. Siddiqi, and Hean-Teik Chuah, Senior Member, IEEE

Abstract — This paper gives the computational complexity and practical implementation aspects of a newly introduced incremental hash function called the Pair Chaining & Modular Arithmetic Combining Incremental Hash Function (PCIHF). The boundary conditions of the parameters involved in the update process of PCIHF are examined in depth, and it is proved that these basic requirements can be fulfilled easily. It is shown that, in applications involving more than one update process, PCIHF is more efficient than standard SHA-1 provided the number of blocks to be replaced is no more than ¼ of the message blocks. Finally, it is observed that the computational complexity of the combining operation is also important to ensure the practicability of PCIHF.

Index Terms — Cryptography, Hash Function, Incremental Cryptography, Collision Resistance.

I. INTRODUCTION


Cryptographic hash functions take an input string, a message M of arbitrary length, and output a unique fixed-length string (usually of smaller size) called the hash value (µ). They are designed for message integrity purposes: instead of the whole message, only the hash value needs to be verified, so hash functions speed up the integrity verification process. However, they must fulfill some extra cryptographic properties, namely 1st and 2nd-pre-image resistance and collision resistance [1]. Although many fast cryptographic hash functions have been proposed, most of them use iterative constructions, which have to re-hash each message from scratch and are inefficient for bulk hashing of messages with high similarity. The concept of incremental cryptography was first introduced in [2] and has been further studied in [3]-[8]. It has a special property: the cryptographic output can be quickly updated for a modified message. By providing a message M with its corresponding µ and a text modification function F to an incremental hash function, an updated hash value µ' of the modified message M' is produced faster than re-computing it from scratch, where M' = F(M, f) and f is the text modification argument. The time taken depends on the amount of modification made to the message and not on the message size. Clearly, incremental cryptography shows great advantages over traditional cryptography, especially when we cryptographically process many large messages with high similarity.

The authors are with the Faculty of Engineering, Multimedia University, Malaysia (e-mail: [email protected]).
Contributed Paper. Manuscript received September 30, 2003.

The following are some examples where incremental schemes are applicable: (a) Consider a virus detection scheme using a processor with limited processing power, e.g., a smart card. Normally, bulks of documents are stored in a cheap but insecure remote medium which is accessible only through the processor running the virus detection scheme. The processor computes a unique tag for each document and stores it in a private space for the verification process. Without incrementality, the processor has to exhaustively re-authenticate documents which can be very large and subject to frequent changes; it is therefore desirable to be able to update the tag incrementally. (b) By using an incremental hash function, the current document of a text editor can always be associated with its latest hash value in the background. This hash value is quickly updated upon detecting any text modification functions that have been applied. Therefore, the final hash value is available as soon as the document is finished, and this operation is independent of the document size. As a result, an incremental scheme not only increases the efficiency of the hashing process, but also enables us to fully utilize the system processing power. (c) An incremental message authentication code (MAC) scheme can also be implemented in a memory checker [7], [9]. For example, in a banking system, a user withdraws money from an automatic teller machine using his smart card. The bank keeps track of the transactions by queuing them in a list, and at the same time, the memory checker quickly updates the hash value using the incremental scheme. When the user wants to verify his transactions, he prints out the sequence of transactions at the machine; the memory checker then incrementally verifies the output by comparing it with the stored hash value. If they are identical, the user is assured that no unauthorized transactions have been made. (d) An incremental digital signature scheme is also important for signing sequences of video frames.
The large size of video frames makes it impractical to sign each image from scratch, especially in real-time applications. However, sequential video frames are usually only slightly different from each other, especially those with a stationary background. Therefore, using an incremental scheme, the signature of each frame can easily be updated by comparison with a reference frame, i.e., an I-frame. (e) An incremental cryptographic scheme can be profitably used in situations where a large volume of documents whose content is essentially the same except for minor

0098 3063/00 $10.00 © 2003 IEEE


IEEE Transactions on Consumer Electronics, Vol. 49, No. 4, NOVEMBER 2003

differences (like personnel details in questionnaires, standard application forms, contract agreements, etc.) need to be sent to many different users.
A new incremental hash function called the Incremental Hash Function based on Pair Chaining and Modular Arithmetic Combining (PCIHF) was proposed in [10]. PCIHF has been proved to support a set of powerful text modification functions. Furthermore, it gives a fixed-size final hash value, and no information is leaked during the updating process. In this paper, we give the computational complexity and practical implementation aspects of PCIHF by comparison with the standard hash function SHA-1 [11]. Section II presents the notation used in the paper and elaborates on the construction of PCIHF. The computational complexity of PCIHF for the text replacement operation is analyzed in Section III. Implementation aspects of the proposed PCIHF are given in Section IV. Security aspects in the context of the generalized birthday attack are discussed in Section V. Conclusions are given in Section VI.

II. PRELIMINARY

Here is some standard notation used throughout this paper. {0,1}^k denotes a k-bit binary string. [1,q] is the set of integers {x : 1 ≤ x ≤ q}. |s| denotes the length (in bits) of s. The symbol Ω denotes the asymptotic lower bound. r||s denotes the concatenation of strings r and s. To begin with, PCIHF is defined in terms of "abstract" primitives, such as a pseudorandom function (or collision-free compression function) for the randomizing process. Later, in order to achieve practicality, we instantiate these appropriately with a concrete one, SHA-1.

A. Construction of PCIHF

The construction of PCIHF with single pair-block chaining (the same concept applies to multi-block chaining) is as follows:
1) Initialization: For the sake of simplicity, we ignore any nonce and MD-strengthening, which are usually added at the first and last block of the original message, respectively.
We apply a standard padding method, so that the final message M = M[1]M[2]…M[n] has length |M| a multiple of b bits, with M[i] ∈ {0,1}^b for i ∈ [1,n]. We make the strong assumption that all blocks are distinct. Finally, the message blocks are chained in pairs to become M[i]||M[i+1] for i ∈ [1,n−1].
2) Randomization: We apply the randomize-then-combine paradigm [5]. Let R be a collision-free compression function with a 2b-bit input and a k-bit random output string. After passing through R, we obtain a series of intermediate hash values for i ∈ [1,n−1]:

h[i] = R(M[i]||M[i+1])

3) Combining: In order to ensure overall system security and efficiency, we choose the modular addition operation as the combining operator. It is associative, commutative and invertible in a particular group, so that PCIHF is parallelizable and

incremental. We fix the length of the final hash value |µ| to k = 160 bits. The final hash value of PCIHF is computed as:

µ = ∑_{i∈[1,n−1]} h[i] (mod N),  N = 2^160 + 1    (1)
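The construction above can be illustrated with a minimal sketch, assuming SHA-1 as the randomizing function R and equal-sized byte-string blocks; padding, nonce and MD-strengthening are omitted here, as in the text:

```python
import hashlib

N = 2**160 + 1  # combining modulus, k = 160


def R(pair: bytes) -> int:
    # Randomizing function: SHA-1 over a chained block pair M[i]||M[i+1]
    return int.from_bytes(hashlib.sha1(pair).digest(), "big")


def pcihf(blocks):
    # Randomize-then-combine: sum the randomized chained pairs mod N
    total = 0
    for i in range(len(blocks) - 1):
        total = (total + R(blocks[i] + blocks[i + 1])) % N
    return total
```

Because modular addition is associative and commutative, the loop can be parallelized over disjoint ranges of pairs and the partial sums combined at the end.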

In this paper, we assume a RAM machine model, with the message stored in memory; the time taken to access the corresponding block can therefore be ignored. The total number of computation steps taken by PCIHF to obtain the first, original hash value is Tt = n(Rt + Ct) − (Rt + 2Ct), where Ct and Rt are the constant times taken for the combining and randomizing processes, respectively. Its efficiency decreases considerably compared to the standard hash function, for which the number of computation steps is n(Rt/2). Nevertheless, PCIHF is incremental and parallelizable.

B. Analysis of PCIHF

Here we show that the proposed PCIHF is not only universal one-way but also collision-free.

Definition 1 (Preimage-finder). A preimage-finder (t,q,ε)-breaks a hash family if, given a random oracle, it makes at most q oracle queries, runs in time t and finds a pre-image with probability at least ε. A hash function is (t,q,ε)-secure if and only if there is no preimage-finder which (t,q,ε)-breaks it.

Definition 2 (Knapsack Problem). A (k,q)-knapsack problem is (t,ε)-hard if and only if there is no algorithm which, for an instance N, can find a set of weights w1,…,wq of the knapsack problem ∑_{i∈[1,q]} wi ai ≡ 0 (mod N) with probability more than ε in

time t, provided that a1,…,aq are selected uniformly and independently. The weights wi ∈ {0,1} for a standard knapsack problem and wi ∈ {−1, 0, +1} for a weighted knapsack problem, for i ∈ [1,q].

Theorem 1 [10]. PCIHF is (t,q,ε)-universal one-way, where t = t′ − Ω(2^b q) and ε = 2ε′, if the (k,q)-standard knapsack problem is (t′,ε′)-hard.

Proof. We show how the hardness of finding a pre-image of PCIHF can be reduced to solving a standard knapsack problem. Informally, the attack strategy turns the equation system (1) associated with PCIHF(x) = z into a linear equation; the attack is successful if a linear system of equations can be formed and a message x can be found for an arbitrary string z ∈ {0,1}^k. Consider two fixed messages x^0 = x1 x2^0 x3 x4^0 x5 ··· x_{n−2} x_{n−1}^0 xn and x^1 = x1 x2^1 x3 x4^1 x5 ··· x_{n−2} x_{n−1}^1 xn. Both have the same size of n b-bit blocks, with x_{2i}^0 ≠ x_{2i}^1 for i ∈ [1,m], where m = (n−1)/2; the remaining blocks are identical. For any m-bit string y = y1y2…ym, yi ∈ {0,1}, we let x^y = x1 x2^{y1} x3 x4^{y2} x5 ··· x_{n−2} x_{n−1}^{ym} xn and

αi^{yi} = R(x_{2i−1} || x_{2i}^{yi}) + R(x_{2i}^{yi} || x_{2i+1})

Therefore, the hash value of message x^y is obtained as follows:

PCIHF(x^y) = ∑_{i∈[1,m]} αi^{yi} (mod N)    (2)


We obtain a pre-image of z such that PCIHF(x) = z if we can find a value of y such that x = x^y. From (2),

PCIHF(x^y) = ∑_{i∈[1,m]} αi^{yi} = z (mod N)

Further, since ȳi = 1 + yi (mod 2),

∑_{i∈[1,m]} [αi^0 ȳi + αi^1 yi] = z (mod N)

∑_{i∈[1,m]} [βi yi] = z′ (mod N)    (3)

where βi = αi^1 − αi^0 (mod N) and z′ = z − ∑_{i∈[1,m]} αi^0 (mod N).

Obviously, (3) is not a linear equation system over GF(2), so the Gaussian elimination method cannot be applied. On the other hand, since the values βi are uniformly distributed over k-bit strings and yi ∈ {0,1} for i ∈ [1,m], (3) is an instance of the knapsack problem, which is known to be NP-complete [13].

Theorem 2 [10]. PCIHF is (t,q,ε)-collision-free, where t = t′ − Ω(2^b q) and ε = ε′, if the (k,q)-weighted knapsack problem is (t′,ε′)-hard.

The proof of Theorem 2 requires relating the security of PCIHF to the balance problem [5], which can further be reduced to a weighted knapsack problem; the proof then becomes similar to that of Theorem 1. Details are given in [10].

C. Text Modification Function Construction of PCIHF

PCIHF supports a set of powerful and realistic text modification functions, as well as multi-message manipulation. Besides the usual single-block update operations, which modify a single block of the original message while the other blocks remain unchanged, PCIHF can also handle multiple-block update operations efficiently. In this Section, we elaborate only on the text replacement function; other text modification functions can be handled accordingly. Let σ1,…,σp ∈ {0,1}^b be distinct message blocks (also distinct from M[i], for i ∈ [1,n]). A modification argument f = replace(p, i, σ1,…,σp) denotes that the sequence of p blocks starting from location i of the previous message M, with hash value µ, is replaced by σ1,…,σp, while the other blocks remain unchanged. The updated hash value µ′ is obtained incrementally as:

µ′ = µ + R(M[i−1]||σ1) + ∑_{j∈[1,p−1]} R(σj||σ_{j+1}) + R(σp||M[i+p]) − ∑_{j=0,…,p} R(M[i−1+j]||M[i+j]) (mod N)

The total cost Tt = 2(1+p)(Rt + Ct) is proportional to p but independent of the length of the message; therefore, the scheme is incremental. Table I summarizes the computational complexity of PCIHF for various text modification functions. The times taken to update the result upon replacement, deletion and insertion are proportional only to the number of blocks to be modified, p; the times taken for the swapping and cut-&-paste text modification functions are constant, regardless of the number of blocks involved, even though the tree scheme is not implemented.

TABLE I
COMPLEXITY OF COMPUTATION

  Text modification function, f     Total time taken, Tt
  replace(p, i, σ1,…,σp)            2(1+p)(Rt + Ct)
  delete(p, i)                      (2+p)(Rt + Ct)
  insert(p, i)                      (2+p)(Rt + Ct)
  swap(p1, i, p2, j)                8(Rt + Ct)
  cut-paste(p, i, j)                6(Rt + Ct)
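The replace update can be sketched as follows. This is a 0-based-index illustration, assuming SHA-1 as R and an interior replacement (at least one untouched block on each side of the replaced run); boundary handling is simplified:

```python
import hashlib

N = 2**160 + 1


def R(pair: bytes) -> int:
    # Randomizing function: SHA-1 over a chained block pair
    return int.from_bytes(hashlib.sha1(pair).digest(), "big")


def pcihf(blocks):
    # Full (from-scratch) hash: sum of randomized chained pairs mod N
    return sum(R(blocks[i] + blocks[i + 1]) for i in range(len(blocks) - 1)) % N


def update_replace(mu, blocks, i, sigmas):
    # Incrementally update mu after replacing the p blocks at positions
    # i .. i+p-1 (0-based) with sigmas; requires 1 <= i and i+p <= n-1.
    p = len(sigmas)
    new = list(blocks)
    new[i : i + p] = sigmas
    for j in range(i - 1, i + p):
        # Subtract old chained pairs touching the replaced run ...
        mu = (mu - R(blocks[j] + blocks[j + 1])) % N
        # ... and add the corresponding new chained pairs
        mu = (mu + R(new[j] + new[j + 1])) % N
    return mu, new
```

Only 2(p+1) calls to R are made, matching the replace(p, i, σ1,…,σp) row of Table I, whereas re-hashing from scratch costs n−1 calls.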
III. COMPUTATIONAL COMPLEXITY

In this Section, the efficiency of the software implementation of PCIHF is examined, along with the relationship between the system parameters (message size, number of blocks involved and number of modifications) and how they influence the performance of the proposed PCIHF. Since the efficiency of a software implementation depends highly on the word size of the processor, we restrict ourselves to a generic 32-bit processor. We assume that it supports a set of basic 32-bit instructions: unary and binary bitwise logical operations (XOR, OR, NOT, AND, etc.), loading/storing one word from/to memory (LOAD, STORE), left-rotating a word over a number of bits (ROT) and simple binary arithmetic operations (ADD and SUB). Each instruction takes only one machine cycle, and all local variables can be kept in registers. However, we ignore the pipelining feature that can further increase CPU instruction throughput. Approximately, we obtain around 1402 instructions per block (512-bit) in SHA-1; the details of estimating the total number of instructions per block are given in the Appendix. With reference to the PCIHF algorithm, two blocks (after pair-chaining) are involved in the randomizing process; consequently, the computational complexity of the randomizing process, Rt, is around 2804 instructions. For the combining operation, we choose the base as 2^32, so that (Ai + Bi + C) mod 2^32 can be computed efficiently by 32-bit processors. Moreover, we assume that the processor has an instruction set supporting the add-with-carry operation, to facilitate multiple-precision addition. From the Appendix, we observe that the algorithm takes only around 10 instructions to perform the task (Ct is only around 10 instructions). This is extremely fast and can be ignored compared to the randomizing cost Rt. As a result, the overall computational efficiency of the proposed PCIHF algorithm depends chiefly on the randomizing operation.
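The estimate of around 1402 instructions per 512-bit block (derived step by step in the Appendix) can be reproduced from the per-step counts:

```python
# Reproduce the SHA-1 per-block (512-bit) instruction estimate
load_message = 16     # load M[i] into W[0..15]
expansion = 64 * 4    # W[16..79]: 3 XOR + 1 ROT per expanded word
load_state = 5        # A..E loaded from H[0..4]
rounds = 80 * 14      # 14 instructions per round (see the Appendix)
store_state = 5       # add the working variables back into H[0..4]

total = load_message + expansion + load_state + rounds + store_state
```

Doubling this figure gives the Rt ≈ 2804 used for PCIHF's pair-chained randomizing step.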
In this paper, we examine only the text replacement function, which is the most costly text modification function; hereafter, Ct is ignored. From Table I, the cost for PCIHF to update the hash value under a p-block replacement is

treplace = 2(1+p) Rt

To obtain the first hash value, however, PCIHF takes

tPCIHF = (n−1) Rt

as compared to the original SHA-1, for which

tSHA = n (Rt/2)



After k updates of the original message, (k+1) messages are produced in total, and the times taken using PCIHF and SHA-1 are as follows:

TPCIHF = tPCIHF + k·treplace = [(n−1) + 2k(1+p)] Rt    (4)

TSHA = (k+1) tSHA = (n/2)(k+1) Rt    (5)

From the above equations, there are three important parameters:
• n denotes the total number of message blocks per message,
• k denotes the total number of modifications (replacements in this case),
• p denotes the number of blocks replaced per modification function; it is assumed constant for all update processes.
Here k, p, n ∈ N; k, p ≥ 1; n > 1.

Based on (4), we observe that the time taken by PCIHF to update the hash value is independent of the message size (except for computing the first hash value of the original message) and increases linearly with k; that is, PCIHF exhibits the incrementality property. The relationship between these three parameters is plotted in the following figures. Fig. 1 shows the graph of the normalized total cost (TPCIHF/Rt) vs. k with fixed n = 50 and different values of p. The efficiency of PCIHF degrades dramatically as p increases, which means that a conventional hash function (SHA-1) is preferable when p is too large. Fig. 2 shows the graph of the normalized total cost (TPCIHF/Rt) vs. p with fixed n = 20 and different values of k, giving the same result as Fig. 1.

[Fig. 1. Normalized total cost (TPCIHF/Rt) vs. k with fixed n = 50, for p = 1, 5, 10, 20]

[Fig. 2. Normalized total cost (TPCIHF/Rt) vs. p with fixed n = 20, for k = 10, 20, 30, 50]

IV. PRACTICAL IMPLEMENTATION ASPECTS

In Section III, PCIHF is shown to offer a dramatic improvement in efficiency, especially in cases where hashing is performed on large documents with minor changes. Clearly, an ideal incremental scheme is advantageous over a standard one as the message size and number of updates get larger but the number of blocks involved gets smaller.
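Equations (4), (5) and the boundary conditions of the next subsection can be checked numerically with a small sketch (all costs in units of Rt, with Ct ignored as in the text):

```python
import math


def total_pcihf(n, k, p):
    # Eq. (4): first hash (n-1)Rt plus k replace updates of 2(1+p)Rt each
    return (n - 1) + 2 * k * (1 + p)


def total_sha(n, k):
    # Eq. (5): re-hashing all k+1 message versions at (n/2)Rt per message
    return (k + 1) * n / 2


def n_min(p, k):
    # Smallest message size for which PCIHF beats SHA-1 (cf. eq. (7))
    return math.ceil(4 * (1 + p) + 2 * (2 * p + 1) / (k - 1))
```

For example, with p = 1 and k = 2 the break-even message size evaluates to 2(3 + 4p) = 14 blocks, matching the boundary behaviour discussed below.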

Basically, incrementality (which is a measure of efficiency) raises a new practical concern: the crossover point at which PCIHF becomes advantageous (faster) over the standard hash function, SHA-1. It is crucial to show that the proposed incremental scheme exhibits its incrementality for parameters (n, p, k) acceptable to the user, i.e., that the crossover point is low enough to make PCIHF interesting. From (4) and (5), for PCIHF to be more efficient than SHA-1, TPCIHF < TSHA, which gives

n > 4(1+p) + 2(2p+1)/(k−1),  k > 1    (6)

nmin = ⌈4(1+p) + 2(2p+1)/(k−1)⌉    (7)

From (6),

k > 1 + 2(2p+1)/(n − 4(1+p))

kmin = 1 + ⌈2(2p+1)/(n − 4(1+p))⌉    (8)

where n > 4(1+p). From (7) and (8), we need more than one update process, and the number of blocks to be replaced must be no more than ¼ of the message blocks; otherwise PCIHF is always slower than standard SHA-1. However, these basic requirements are easily fulfilled. Fig. 3 shows the boundary condition, n vs. k for various p, where PCIHF is more efficient than SHA-1; from the graph, the largest value of nmin is 2(3+4p), at k = 2, and nmin = 4(1+p) as k → ∞. Fig. 4 shows the boundary condition, k vs. n for various p; the value of kmin decreases dramatically as n increases and reaches a value of two for sufficiently large message sizes. Finally, the combining operation is also important to ensure the practicability of PCIHF: if its computational complexity Ct is too large, it can no longer be ignored compared with Rt, and PCIHF becomes impractical. This is

B.-M. Goi et al.: Computational Complexity and Implementation Aspects of the Incremental Hash Function

because, for the scheme to remain advantageous in that case, n and k must be very large while p stays small.

p=1 p=2 p=5 p=20



Message blocks,n

120 100 80 60 40 20 0











Number of Updates,k

Fig. 3. Boundary condition: n vs. k for various p p=1 p=2 p=5 p=10

Number of Updates,k









50 60 Message blocks,n



Method 2: Instead of concatenation, cipher block chaining mode can be implemented to obtain the 1600bit intermediate hash value. A single change in the message block will affect more bits, but not all, in the intermediate hash value (more diffusion as compared to method 1). Method 3: This method involves 2-level randomizing process with feedback. The result obtained from Method 2 has to be gone through an extra feedback process where the output of the last block in the first level acts as the initial vector for first block in the second level. Therefore, any changes in a single message block (position independent) will affect the whole 1600-bit intermediate hash value.

However, before implementation of the above proposed countermeasures to mitigate generalized birthday attack, they have to be gone through formal analysis, in terms of security and efficiency. This is our current research focus. VI. CONCLUSION







Fig. 4. Boundary condition: k vs. n with various p

V. A GENERALIZED BIRTHDAY ATTACK & PCIHF Wagner [12] proposed a new k-tree algorithm to solve the k-dimensional generalization of the birthday problem. This algorithm is successful to break several blind signature schemes and several incremental hash functions including PCIHF whose security strength are based on k-sum problem. However, the basic idea underlying PCIHF is still sound if we use a larger modulus or other combining operation with better resistance to subset sum attacks. For example, instead of using modulo 2160 addition, we need to use modulo 21600 addition to ensure 80-bit security resistance. There are several approaches to achieve this. Here, we suggest three methods on how to extend the used modulus (assuming, SHA-1 is used for randomizing process): 1. Method 1: In this method, all the chaining message blocks are gone through the randomizing process separately, and then 10 output blocks are concatenated to form a 1600-bit intermediate hash value.
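One possible reading of Method 1 can be sketched as below. The per-call domain-separation byte is our own illustrative assumption (the text does not specify how the ten SHA-1 outputs are made distinct), so this is a sketch of the widened combining, not a definitive construction:

```python
import hashlib


def randomize_1600(pair: bytes) -> int:
    # Method 1 (sketch): ten SHA-1 invocations over the chained pair,
    # separated by a prefix byte (illustrative assumption), concatenated
    # into one 1600-bit intermediate hash value.
    out = b"".join(hashlib.sha1(bytes([j]) + pair).digest() for j in range(10))
    return int.from_bytes(out, "big")


def combine_1600(values):
    # Combine with modular addition over the wider modulus 2^1600,
    # the resistance level suggested against the k-tree attack [12].
    N = 2**1600
    total = 0
    for v in values:
        total = (total + v) % N
    return total
```

The combining operator remains associative, commutative and invertible, so the incrementality and parallelizability arguments of Section II carry over unchanged to the wider modulus.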

VI. CONCLUSION

The incrementality feature of PCIHF makes it quite attractive compared with conventional hash functions that use an iterative construction. The computational complexity of PCIHF and the relationships between the parameters (n, p and k) that influence its overall performance have been investigated. As is to be expected, its efficiency degrades as the number of replaced blocks p increases. The practicability of PCIHF has also been examined in comparison with the standard hash function SHA-1: PCIHF is more efficient provided there is more than one update process and the number of blocks to be replaced is no more than ¼ of the message blocks. This boundary condition can easily be fulfilled. Another practical concern is the memory requirement. Previously proposed incremental schemes produce a hash value whose length is proportional to the message; they are therefore unsuitable for practical implementation, although they are acceptable for theoretical investigation. The proposed PCIHF, with its fixed-size hash value, does not suffer from this constraint. However, PCIHF has some limitations: all message blocks need to be distinct to ensure its security, and its computational overhead is about 100% (the first hash costs roughly twice that of SHA-1). Some interesting issues for further investigation are the design of an incremental MAC scheme based on PCIHF, and of an incremental hash function together with its conjugate transformation (i.e., verification).

APPENDIX

A. Total Number of Instructions in SHA-1 on a 32-bit Processor

In this Appendix, we describe the implementation of SHA-1 in software. In order to reduce the cost of the randomizing process, the block length is chosen appropriately, e.g., a multiple of 512. SHA-1 is a customized hash function based on MD4, designed as an iteration of a common sequence of operations, called a round [11]. SHA-1 has 80 rounds, and its compression function operates on 32-bit words internally. Since the complexity of its round operations



directly reflects the overall performance of SHA-1, we begin by estimating the average number of instructions for each round operation.
• For 0 ≤ t ≤ 19, the non-linear round function is a multiplexer and requires 4 instructions (2 AND, 1 NOT and 1 OR).
• For 20 ≤ t ≤ 39 and 60 ≤ t ≤ 79, the same round function is used, which is an exclusive-OR. It uses only 2 instructions (2 XOR).
• For 40 ≤ t ≤ 59, the round function is a majority function and can be optimized to take only 4 instructions (2 AND and 2 OR), instead of the 5 instructions (3 AND and 2 OR) of the original Boolean function in [11].
Therefore, on average, each Boolean round function requires 3 instructions. Table A.I summarizes the number of instructions required for each round function.

TABLE A.I
NUMBER OF INSTRUCTIONS FOR EACH ROUND FUNCTION

  Round, t       Boolean round function          Instructions
  0 ≤ t ≤ 19     (B ∧ C) ∨ (~B ∧ D)              4
  20 ≤ t ≤ 39    B ⊕ C ⊕ D                       2
  40 ≤ t ≤ 59    ((C ∨ D) ∧ B) ∨ (C ∧ D)         4
  60 ≤ t ≤ 79    B ⊕ C ⊕ D                       2

Using the 32-bit processor, we also estimate the total number of instructions required to perform the modular addition of two 160-bit strings, based on the multiple-precision method (modified from [1]). Fig. A.1 shows the details of the multiple-precision modular addition algorithm.

Multiple-Precision Modular Addition
Input: |A| = |B| = 160 bits, each having five base-2^32 digits, each of which can be stored in a 32-bit register. Thus A and B can be represented as A = A[0]A[1]A[2]A[3]A[4] and B = B[0]B[1]B[2]B[3]B[4], where |A[i]| = |B[i]| = 32 bits, for i = 0,1,2,3,4.
Output: The sum A + B = (H[0]H[1]H[2]H[3]H[4]) in radix-2^32 representation (|H[i]| = 32 bits, for i = 0,1,2,3,4).
1. Set C ← 0 (where C is the carry flag).
2. For i from 4 down to 0, do the following:
   2.1 H[i] ← (A[i] + B[i] + C) mod 2^32.
   2.2 If (A[i] + B[i] + C) ≥ 2^32 then C ← 1 else C ← 0.
3. Return (H[0]H[1]H[2]H[3]H[4]).

Fig. A.1. Multiple-precision modular addition
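The Boolean round functions counted in Table A.I and the addition of Fig. A.1 can be sketched as follows; the extra masking with M32 only compensates for Python's unbounded integers and would not cost instructions on a 32-bit processor:

```python
M32 = 0xFFFFFFFF


def f_mux(b, c, d):
    # Rounds 0-19: multiplexer, 4 instructions (2 AND, 1 NOT, 1 OR)
    return ((b & c) | (~b & M32 & d)) & M32


def f_xor(b, c, d):
    # Rounds 20-39 and 60-79: exclusive-OR, 2 instructions
    return b ^ c ^ d


def f_maj(b, c, d):
    # Rounds 40-59: majority, optimized form with 4 instructions
    # (2 AND, 2 OR) as listed in Table A.I
    return ((c | d) & b) | (c & d)


def mp_add_160(A, B):
    # Multiple-precision addition of Fig. A.1: five base-2^32 digits,
    # most-significant digit first; the final carry is discarded.
    H = [0] * 5
    c = 0  # carry flag
    for i in range(4, -1, -1):
        s = A[i] + B[i] + c
        H[i] = s & M32
        c = 1 if s >= 2**32 else 0
    return H
```

The loop body maps directly onto the ADD/add-with-carry instructions assumed in Section III, which is why the whole combining step costs only around 10 instructions.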


Next, we calculate the number of instructions required for each round of SHA-1, as shown in Table A.II. To compute the value of TEMP, 8 instructions are needed (4 ADD, 1 ROT and 3 for the round function); another 6 instructions complete the round operation.

TABLE A.II
NUMBER OF INSTRUCTIONS FOR EACH ROUND OF SHA-1

  Description                                    Instructions
  TEMP = S5(A) + ft(B,C,D) + E + Wt + Kt         8
  E = D; D = C; C = S30(B); B = A; A = TEMP      6
  Total                                          14


After assembling all the data, we obtain in total around 1402 instructions per block (512-bit) in SHA-1, as shown in Table A.III.

TABLE A.III
NUMBER OF INSTRUCTIONS FOR SHA-1 ON 32-BIT PROCESSOR

  Step  Description       Details                                            Instr.
  a     load              M[i] = W[0], …, W[15]                              16
  b     shift and store   W[t] = S1(W[t−3] ⊕ W[t−8] ⊕ W[t−14] ⊕ W[t−16])     256
  c     load              A = H[0], B = H[1], C = H[2], D = H[3], E = H[4]   5
  d     80 rounds         80 • 14                                            1120
  e     add and store     H[0] = H[0]+A, …, H[4] = H[4]+E                    5
        Total                                                                1402

B. Total Number of Instructions in Modular Addition on a 32-bit Processor

The multiple-precision modular addition algorithm is given in Fig. A.1 above; as noted in Section III, the whole operation takes only around 10 instructions.

REFERENCES
[1] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, Handbook of Applied Cryptography, CRC Press, 1997.
[2] M. Bellare, O. Goldreich, and S. Goldwasser, "Incremental cryptography: The case of hashing and signing," Advances in Cryptology - CRYPTO'94, Lecture Notes in Computer Science, vol. 839, pp. 216-233, Springer-Verlag, 1994.
[3] M. Bellare, O. Goldreich, and S. Goldwasser, "Incremental cryptography and application to virus protection," Proceedings of the 27th ACM Symposium on the Theory of Computing, pp. 45-56, 1995.
[4] M. Bellare, R. Guerin, and P. Rogaway, "XOR MACs: New methods for message authentication using finite pseudorandom functions," Advances in Cryptology - CRYPTO'95, Lecture Notes in Computer Science, vol. 963, pp. 15-29, Springer-Verlag, 1995.
[5] M. Bellare and D. Micciancio, "A new paradigm for collision-free hashing: Incrementality at reduced cost," Advances in Cryptology - EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, pp. 163-192, Springer-Verlag, 1997.
[6] D. Micciancio, "Oblivious data structures: Applications to cryptography," Proceedings of the 29th Annual ACM Symposium on the Theory of Computing, pp. 456-464, 1997.
[7] M. Fischlin, "Incremental cryptography and memory checkers," Advances in Cryptology - EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, pp. 393-408, Springer-Verlag, 1997.
[8] M. Fischlin, "Lower bounds for the signature size of incremental schemes," IEEE Symposium on Foundations of Computer Science (FOCS), pp. 438-447, 1997.
[9] M. Blum, W. Evans, P. Gemmell, S. Kannan, and M. Naor, "Checking the correctness of memories," Algorithmica, vol. 12, pp. 225-244, 1994.
[10] B. M. Goi, M. U. Siddiqi, and H. T. Chuah, "Incremental hash function based on pair chaining & modular arithmetic combining," Progress in Cryptology - INDOCRYPT 2001, Lecture Notes in Computer Science, vol. 2247, pp. 50-61, Springer-Verlag, 2001.
[11] National Institute of Standards and Technology, NIST FIPS Publication 180-1: Secure Hash Standard, U.S. Department of Commerce, 1995.
[12] D. Wagner, "A generalized birthday problem," Advances in Cryptology - CRYPTO 2002, Lecture Notes in Computer Science, vol. 2442, pp. 288-303, Springer-Verlag, 2002.
[13] R. Impagliazzo and M. Naor, "Efficient cryptographic schemes provably as secure as subset sum," Journal of Cryptology, vol. 9, no. 4, pp. 199-216, 1996.

Bok-Min Goi received the B.Eng. degree in Electrical Engineering from the University of Malaya (MU) and the M.Eng.Sc. degree from Multimedia University (MMU), Malaysia, in 1998 and 2002, respectively. Since April 1998, he has been a lecturer in the Faculty of Engineering, MMU. He is currently working toward his Ph.D. and is a researcher at MMU's Centre for Cryptography and Information Security (CCIS). His research interests include cryptography, incremental hash functions, and authentication and key exchange protocols.

M. U. Siddiqi received the B.Sc. Engg. and M.Sc. Engg. degrees from Aligarh Muslim University (AMU Aligarh) in 1966 and 1971, respectively, and the Ph.D. degree from the Indian Institute of Technology Kanpur (IIT Kanpur) in 1976, all in Electrical Engineering. He has been in the teaching profession throughout, first at AMU Aligarh, then at IIT Kanpur. Currently, he is with the Faculty of Engineering, Multimedia University, Malaysia. His research interests are in coding and cryptography.


Hean-Teik Chuah is currently a Professor, Dean of Engineering and Director of Research & Postgraduate Programmes at Multimedia University, Malaysia. His research interests include microwave remote sensing, applied electromagnetics and applied mathematics. He has published more than 120 papers in international journals and conferences and has received many awards, locally and internationally. These include the inaugural Young Engineer Award of the Institution of Engineers, Malaysia, in 1991; the Young Scientist Award to attend the 23rd General Assembly of the International Union of Radio Science (URSI) at Prague, Czechoslovakia, in 1990; the Young Scientist Award to attend the 24th General Assembly of URSI at Kyoto, Japan, in 1993; the Young Scientist Award (Industrial Sector) of the Malaysian Ministry of Science, Technology and the Environment in 1995; and the Malaysian Toray Science Foundation Science and Technology Award in 1999 for his contributions in the area of microwave remote sensing. In 2002, he received the Sterling Award from the Sterling Group of Universities (research-based universities in the UK with engineering faculties) for his services to the promotion of the engineering profession. He is a Fellow of the Academy of Sciences, Malaysia; the Institution of Engineers, Malaysia; the Remote Sensing & Photogrammetry Society, UK; and the Institution of Electrical Engineers, UK; a Senior Member of the IEEE; and a member of the Electromagnetics Academy, USA.