
Physically Unclonable Functions: Manufacturing variability as an unclonable device identifier

Ingrid Verbauwhede and Roel Maes
COSIC/ESAT and IBBT, K.U.Leuven, Kasteelpark Arenberg 10, 3001 Heverlee, Belgium

ABSTRACT

CMOS process variations are usually considered a burden to IC developers, since they introduce undesirable random variability between identically designed ICs. However, it has been demonstrated that measuring this variability can also be turned to profit, as a physically unclonable method of silicon device identification. It can moreover be applied to generate strong cryptographic keys which are intrinsically bound to the embedding IC instance. This offers a number of very interesting advantages compared to traditional forms of secure identification and key storage. In this work, we summarize and compare the different proposed constructions and identify some generalizing properties of PUFs on silicon devices.

Keywords

Physically unclonable functions, manufacturing variability

1. INTRODUCTION

The need for a unique identification method for semiconductor devices is evident in many applications. A notable example is radio frequency identification (RFID) tags, which are in essence wireless identifier devices, e.g. a digital alternative to optical barcodes. One can easily foresee that in our ever more integrated and interconnected digital world, the requirement for unique device identification in general becomes indispensable. However, making silicon products uniquely identifiable still requires an explicit personalisation step for each single device, which induces a rather costly manufacturing overhead. On the other hand, illegal cloning of electronic hardware products, and counterfeiting of goods in general, causes global economic damage in the order of several hundreds of billions of dollars. A strong and unclonable identification method for (electronic) devices is an indispensable tool in the war against piracy.

Another result of the widespread digitalisation of data is the need for security measures to ensure confidentiality, authenticity and privacy of information. Secure implementations of cryptographic primitives are able to provide this; however, they still require the presence of a key bitstring which is not only unique, but also secret and securely stored. Equipping a silicon device with a secret key also requires an explicit device personalisation step, but it additionally requires a number of physical protection measures to keep the key well hidden from determined adversaries. The need for security hence further increases the manufacturing cost of an otherwise standard silicon product.

As an alternative to explicitly programming a unique digital ID on a device, one can use the physical uniqueness of the device itself. Even when manufactured by the same process, using the same masks and machines, and even coming from the same wafer, two silicon devices will always be physically distinct. This is caused by uncontrollably random variations in the production process, which manifest themselves at (deep-)submicron levels in the ICs and are collectively called manufacturing variability. In traditional designs, manufacturing variability is a burden to IC developers, since it forces them to take precautions in order to assure a functional product within the expected boundaries of these variations. Moreover, since the relative magnitude of manufacturing variability increases significantly with shrinking CMOS technology nodes, it has become a major challenge in further CMOS downscaling. However, as has been shown in a number of recent publications, these unavoidable random physical differences between devices can also be used to identify them, as opposed to explicitly making a device unique with a digital ID. This method of device identification is a particular instantiation of what has become known as a physically unclonable function or PUF [22]. Moreover, it has been demonstrated that, by applying appropriate post-processing algorithms, these PUFs can also be used to generate cryptographically strong keys. Besides being intrinsically generated, these PUF-derived keys have a number of other interesting physical security properties, such as being unclonable and possibly tamper evident.

In this work, we introduce the basic concepts of PUFs and their most important characteristics. In recent years, a number of interesting construction methods for PUFs on silicon devices have been proposed, based on intrinsic IC manufacturing variability; see e.g. [19] for a detailed survey. We describe these constructions, called intrinsic PUFs, and discuss their implementation results. We moreover show some simple application scenarios for PUFs, from which it becomes clear that they are a valuable addition to the primitives for practical cryptography, possibly at an improved efficiency over currently used solutions.

2. BASICS OF PUFS

Before going into detail on particular constructions of PUFs on silicon devices, we first briefly describe what we mean by a PUF. We also introduce a few main characteristics used to assess the PUF-like behavior of a construction.

2.1 A functional description of PUFs

In the cryptographic hardware community, there exists a general understanding of which constructions are called PUFs, although a formal definition does not (yet) exist. The basic description of a PUF is best captured by its name, i.e. PUFs are (physical) functions which are to some extent physically unclonable. A physical function in this context is a part of a physical system which produces an output signal or response, which is a function of the exact physical properties of the system, and possibly of an external physical excitation signal or challenge. The particular dependence of responses on physical parameters and challenges for a given PUF is generally called the challenge-response behavior of this PUF. By physically unclonable, it is understood that for a given manufacturing process, it should be very hard or impossible to produce two physically distinct PUFs with a similar challenge-response behavior. On the other hand, it is evident that to be of any practical use, it should be easy to construct and evaluate a random PUF.
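To make this abstract view concrete, the following toy Python model treats a PUF as a fixed, device-specific mapping from challenges to responses that is additionally disturbed by fresh noise at every evaluation. It is entirely illustrative: the class name SimulatedPUF, the noise_rate parameter and the use of a hash as a stand-in for device physics are our own assumptions and do not correspond to any proposed construction.

```python
import hashlib
import random

class SimulatedPUF:
    """Toy software stand-in for a PUF: a fixed, device-specific mapping from
    challenges to responses, disturbed by fresh noise at every evaluation."""

    def __init__(self, device_seed: int, noise_rate: float = 0.05):
        self.device_seed = device_seed   # plays the role of manufacturing variability
        self.noise_rate = noise_rate     # probability that a response bit flips

    def respond(self, challenge: bytes, n_bits: int = 64) -> list:
        # Device-specific part: a deterministic function of (device, challenge).
        digest = hashlib.sha256(self.device_seed.to_bytes(8, "big") + challenge).digest()
        bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]
        # Noisy part: each evaluation may flip a few bits.
        return [b ^ (random.random() < self.noise_rate) for b in bits]

# Two instances of the same "design" differ only in their device seed; repeated
# evaluations of one instance differ only by noise.
puf_a, puf_b = SimulatedPUF(1), SimulatedPUF(2)
challenge = b"challenge-0001"
print(puf_a.respond(challenge))
print(puf_a.respond(challenge))   # mostly equal to the previous line
print(puf_b.respond(challenge))   # roughly half the bits differ
```

The device seed plays the role that manufacturing variability plays in the silicon constructions of Sect. 3.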

2.2 Basic characteristics of PUFs

A particular PUF construction has a useful challenge-response behavior if it meets certain requirements for at least three minimal characteristics, summarized as:

1. For two random instances of a PUF construction, the difference between their expected responses to the same challenge should be sufficiently large.

2. For a single random instance of a PUF construction, the difference between two separately measured responses to the same challenge should be small.

3. For a single random instance of a PUF construction, the uncertainty about the expected response to a particular challenge should be large when one does not have access to that particular PUF instance.

Next, we discuss a number of useful measures which are used to express these three characteristics.

2.2.1 Characterizing manufacturing variability

The difference between the expected responses to the same challenge on two distinct instances of a PUF construction is caused by the variability in the manufacturing process of this construction. The magnitude of this difference, expressed in a particular distance measure, is called the inter-distance between the responses. When measured over a large number of responses to many different challenges and on many pairs of instances of a PUF construction, one obtains an approximation of the distribution of the inter-distance for a random challenge on a random PUF instance pair. This distribution is useful in assessing the usability of the PUF construction; it is often approximately Gaussian, with a particular mean µinter and standard deviation σinter.

2.2.2 Characterizing noise

The difference between two separately measured responses to the same challenge on the same PUF instance is caused by random noise, measurement uncertainty and uncontrollable external influences such as temperature variations and silicon ageing effects between the two measurements. The magnitude of this difference, expressed in a particular distance measure, is called the intra-distance between the responses. As opposed to the inter-distance, the expected intra-distance between responses should be small in order to allow for an easy identification. Again, the distribution of intra-distances for a particular type of PUF construction is a useful summary; it is often approximately Gaussian, with a particular mean µintra and standard deviation σintra.
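Both the inter-distance of Sect. 2.2.1 and the intra-distance above can be estimated directly from measured responses. The sketch below is our own illustration: it uses the fractional Hamming distance, the distance measure most commonly used for the binary responses discussed in Sect. 3, and the function names are hypothetical.

```python
import itertools
from statistics import mean, stdev

def frac_hamming(r1, r2):
    """Fractional Hamming distance between two equal-length bit lists."""
    return sum(a != b for a, b in zip(r1, r2)) / len(r1)

def inter_distances(response_per_device):
    """response_per_device: {device_id: response to one fixed challenge}.
    Returns the distances between all pairs of distinct devices."""
    return [frac_hamming(ra, rb) for (_, ra), (_, rb)
            in itertools.combinations(response_per_device.items(), 2)]

def intra_distances(repeated_responses):
    """repeated_responses: repeated measurements of one response on one device."""
    return [frac_hamming(ra, rb) for ra, rb
            in itertools.combinations(repeated_responses, 2)]

# Example, reusing the toy SimulatedPUF model sketched in Sect. 2.1:
#   inter = inter_distances({i: SimulatedPUF(i).respond(b"c") for i in range(20)})
#   intra = intra_distances([SimulatedPUF(0).respond(b"c") for _ in range(25)])
#   print(mean(inter), stdev(inter))   # estimates of mu_inter, sigma_inter
#   print(mean(intra), stdev(intra))   # estimates of mu_intra, sigma_intra
```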

2.2.3 Characterizing unpredictability

The unpredictability of a PUF construction is measured by the difficulty an outsider has in correctly predicting an unknown response to a particular challenge on a particular PUF instance. Based on the information accessible to the outsider, e.g. other instances of the same PUF construction or other responses to different challenges on the same PUF, and based on his capabilities, e.g. his computational power or his ability to control the challenge, a number of different unpredictability notions can be defined. In many practical cases, unpredictability is an ad-hoc measure, i.e. a PUF is as predictable as the best known attack against it. This is similar to the standard security notion for classical cryptographic primitives, where the security of e.g. a block cipher or a hash function is measured by its best known cryptanalysis results. For a number of PUFs, prediction attacks are known which are based on building a mathematical model of the underlying physical evaluation; these attacks usually rely on machine-learning techniques. Another approach to assessing a PUF's unpredictability in a more formal sense is by estimating the min-entropy present in its responses. The information-theoretic notion of min-entropy measures the uncertainty an outsider has about whether his best possible prediction for an unknown response is correct [4]. Min-entropy is hence a measure of the "worst-case" or "guessing" uncertainty of an adversary, as opposed to the well-known Shannon entropy, which measures the "average" uncertainty; formally, the min-entropy of a distribution is defined as H∞(X) := −log₂ maxₓ Pr[X = x]. Assessing the min-entropy of a PUF construction is very hard since it requires the exact response distribution, which is generally unknown. However, based on a number of assumptions about the underlying physics and on the outcomes of a number of statistical tests, a sensible estimate can be given for some constructions.
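As a rough illustration of such an estimate, the sketch below computes a per-bit min-entropy from the observed bias of each response bit across many devices and sums the results under an independence assumption. This crude estimator is our own simplification and is optimistic: correlations between bits can only lower the true min-entropy, and the statistical analyses used in the cited works are considerably more involved.

```python
import math

def minentropy_per_bit(bit_values):
    """bit_values: the observed values of one response bit across many devices.
    Min-entropy of the empirical distribution: -log2 of the most likely outcome."""
    p_one = sum(bit_values) / len(bit_values)
    return -math.log2(max(p_one, 1.0 - p_one))

def minentropy_fraction(responses):
    """responses: one response (list of bits) per device.
    Sums the per-bit min-entropies and expresses the result as a fraction of the
    response length, as reported in Table 1. Assuming independent bits makes this
    an optimistic figure: correlations can only lower the true value."""
    n_bits = len(responses[0])
    total = sum(minentropy_per_bit([r[i] for r in responses]) for i in range(n_bits))
    return total / n_bits
```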

3. INTRINSIC PUFS ON SILICON

A number of different constructions with a PUF-like behavior have been introduced based on optical effects, e.g. [22, 2, 7], or on random capacitances [30, 3]. However, in this work we focus on PUFs which can be embedded in silicon devices. This holds a number of advantages. Firstly, the tight integration with a digital computing device allows for an easy use of the PUF responses in applications. Secondly, as we already pointed out, the manufacturing variability which is inherent to the production process of CMOS ICs is an implicit source of PUF-like behavior. Finally, integrating a PUF in a silicon chip makes it possible to keep the measured responses well shielded inside the chip, which is interesting when the response is used as a secret in cryptographic applications.

For a construction to be called an intrinsic PUF, it has to meet two additional construction requirements besides showing PUF behavior. Firstly, the complete PUF, including the measurement process, should be fully integrated in the embedding device using the PUF, and secondly, this integration should be possible within the standard manufacturing flow of the device, i.e. without the need for PUF-specific processing steps or components. Because of these requirements, intrinsic PUFs are more cost-effective to implement, whilst producing more secure responses which never have to leave the device embedding the PUF.

In this work, we concentrate on constructions for silicon intrinsic PUFs, i.e. PUFs which can be manufactured in a standard CMOS IC technology and are based on the inevitable and random manufacturing variability which is intrinsically present in the embedding hardware. Currently, two approaches towards constructing intrinsic PUFs on silicon devices are known: one based on delay variation in the propagation of digital signals, and one based on symmetry mismatch variation in bistable digital memory elements.

3.1 Delay-based intrinsic PUFs

The propagation delay of a digital signal is a function of the electrical parameters of the components it encounters on its path. Many of these parameters, e.g. MOSFET channel lengths and widths [23], threshold voltages [1], oxide thicknesses, metal line shapes [11], et cetera, are subject to manufacturing variability [24, 32]. This is caused by the limited resolution of the lithographic and chemical-mechanical processes, and by non-uniform fabrication conditions. Consequently, a digital signal's propagation delay is partially random, and when measured it shows PUF-like behavior. Using this observation as a source for an intrinsic PUF requires the integration of an accurate delay measurement technique on a silicon device. A number of methods have been introduced.

3.1.1 Arbiter PUFs

A first type of intrinsic PUF on a silicon device based on the variability in digital propagation delay is the arbiter PUF [14, 13]. An arbiter PUF implements two symmetrical digital delay paths on an IC. A challenge controls the exact delay setting of the paths, and a race condition is introduced by simultaneously feeding a pulse into both. A so-called arbiter circuit at the end of the two delay lines determines which path was the faster and outputs a bit accordingly. Because the two delay paths are designed symmetrically, any mismatch between them is caused by random manufacturing variability. In [15], it was shown that this effect is intensified when the arbiter PUF is evaluated in the CMOS subthreshold region, leading to a more efficient implementation. Early after their conception, it was recognized that, due to the specific linear construction of the delay circuit, arbiter PUFs are susceptible to model-building attacks: after observing a relatively small number of challenge-response pairs, the remaining unseen responses can be predicted with high accuracy by modeling the delay paths. To overcome this problem, a number of non-linear variants of arbiter PUFs were proposed [13, 28, 21], but improved modeling techniques [26] have shown that these constructions are also not immune to prediction.
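The reason these attacks work is that the total delay difference at the arbiter is, to a good approximation, an additive function of per-stage delay differences. The sketch below implements this commonly used additive delay model in Python; the normally distributed stage weights and the stage count are illustrative assumptions of our own, not measured values.

```python
import numpy as np

def parity_features(challenge_bits):
    """Feature transform of the additive delay model: feature i is the parity of
    challenge bits i..n-1 (mapped to +/-1), plus a constant term."""
    c = 1 - 2 * np.asarray(challenge_bits)       # map {0,1} -> {+1,-1}
    phi = np.cumprod(c[::-1])[::-1]              # suffix products = parities
    return np.append(phi, 1.0)

def arbiter_response(stage_weights, challenge_bits):
    """Response bit = sign of the accumulated delay difference between the two
    paths, which is linear in the challenge-derived features."""
    return int(np.dot(stage_weights, parity_features(challenge_bits)) > 0)

rng = np.random.default_rng(0)
n_stages = 64
stage_weights = rng.normal(size=n_stages + 1)    # stands in for manufacturing variability
challenge = rng.integers(0, 2, size=n_stages)
print(arbiter_response(stage_weights, challenge))
```

Because the response is a linear threshold function of the challenge-derived feature vector, any linear learner (e.g. an SVM or logistic regression) trained on observed challenge-response pairs can predict unseen responses, which is exactly the model-building attack mentioned above.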

3.1.2 Ring-oscillator PUFs

A ring-oscillator PUF [5] is also based on digital delays, but turns the delay lines into oscillators by closing them with negative feedback. By measuring the oscillation frequency, a measure of the delay is obtained. Since free-running oscillators on a chip are very sensitive to environmental conditions such as temperature and supply voltage, a differential measurement approach, i.e. comparing the frequencies of two or more simultaneous oscillators [28, 33, 20], produces much more stable results.
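A minimal sketch of this differential readout, assuming the per-oscillator frequencies have already been measured; the nominal frequency and the 1% variation figure are illustrative assumptions of our own.

```python
import numpy as np

def ro_puf_bits(frequencies, pairs):
    """One response bit per oscillator pair: 1 if the first oscillator of the
    pair runs faster than the second. Comparing two on-chip oscillators cancels
    most of the common environmental drift."""
    return [int(frequencies[i] > frequencies[j]) for i, j in pairs]

rng = np.random.default_rng(1)
nominal_hz = 200e6                                      # illustrative nominal frequency
freqs = nominal_hz * (1 + 0.01 * rng.normal(size=16))   # per-oscillator random variation
pairs = [(2 * k, 2 * k + 1) for k in range(8)]          # disjoint pairs -> independent bits
print(ro_puf_bits(freqs, pairs))
```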

3.1.3 Glitch PUFs

Another method to use delay variability as a source of PUF behavior is introduced in [29]. The authors observe that the glitch behavior of a random logic circuit can also be used as a PUF, since it is partially random due to manufacturing variability. Moreover, due to the complex non-linear nature of glitch propagation, the unpredictability of a glitch PUF can be much higher than that of the simpler arbiter and ring-oscillator PUFs.

3.2 Memory-based intrinsic PUFs

A number of digital memory implementations are based on bi-stable logic cells, i.e. circuits which can assume one of two logically stable states. By residing in one of these two states, they are able to store one binary digit. Such circuits are mostly very symmetric and exhibit a cross-coupled feedback structure. When released from a logically unstable state, a bi-stable cell quickly converges to one of its stable states. It was observed that most cells have a clear preference for one stabilizing state over the other. This effect is caused by a mismatch between the parameters of the two otherwise symmetrically designed halves of the cell. Since this mismatch is caused by manufacturing variability, the observed stabilizing state of such a memory cell shows PUF behavior.

3.2.1 Simple SRAM PUFs

The simplest bi-stable memory circuit is found in SRAM memories: one SRAM cell consists of two cross-coupled logical inverters. The PUF behavior of SRAM cells is observed in particular right after power-up of the memory: some cells always tend to power up storing a '0', others storing a '1', and still others have no clear preference for one over the other. In [6, 10], it was proposed to use this behavior as a PUF, yielding the SRAM PUF.
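A simple way to picture this behavior is to model each cell's power-up value as the sign of a fixed mismatch term plus fresh noise, as in the toy sketch below; the Gaussian mismatch and the noise level are illustrative assumptions of our own, not measured SRAM data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 32
mismatch = rng.normal(0.0, 1.0, n_cells)   # fixed per-cell mismatch (manufacturing)
noise_sigma = 0.2                          # power-up noise, small relative to mismatch spread

def power_up(mismatch, noise_sigma):
    """Power-up value of each cell: the sign of its fixed mismatch plus fresh noise.
    Strongly skewed cells always give the same bit; near-balanced cells may flip."""
    noise = rng.normal(0.0, noise_sigma, mismatch.shape)
    return (mismatch + noise > 0).astype(int)

print(power_up(mismatch, noise_sigma))
print(power_up(mismatch, noise_sigma))     # a second power-up: only weak cells differ
```

Cells whose mismatch is large compared to the noise produce the same bit at every power-up; the near-balanced cells are the source of the intra-distance of Sect. 2.2.2.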

3.2.2 Latch, Flip-Flop and Butterfly PUFs

The same behavior as found in an SRAM PUF can also be observed in other memory elements based on cross-coupled circuits. In [17, 31], the power-up behavior of flip-flops is used to construct a flip-flop PUF. In [27], a custom NAND latch was implemented which can be destabilized explicitly, after which its stabilizing state again shows PUF behavior, yielding a latch PUF. The butterfly PUF [12] is functionally equivalent to the latch PUF, but can be implemented efficiently on an FPGA.

3.2.3 Extensions to SRAM PUFs

Observing the stabilizing state of a bi-stable memory cell can be seen as a very coarse 1-bit quantization of the parameter mismatch between the two symmetrical halves of the cell's circuit. It was observed in [18] and later in [9] that a more fine-grained quantization can lead to a more efficient implementation, particularly since it facilitates the error-correcting post-processing. In [18], the magnitude of the mismatch is measured by taking multiple consecutive measurements of a response to obtain an approximate probability of the cell going to a particular state. In [9], a dedicated analog measurement circuit is used to measure the magnitude of the mismatch directly.
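A rough sketch of the soft-decision idea of [18], reusing the toy power_up() model of Sect. 3.2.1; the 0.1/0.9 thresholds for calling a cell "reliable" are arbitrary illustrative choices of our own.

```python
import numpy as np

def one_probabilities(power_up_matrix):
    """power_up_matrix: shape (n_powerups, n_cells) of 0/1 power-up values.
    The per-cell average over repeated power-ups is a soft estimate of the
    underlying mismatch: values near 0 or 1 mark reliable cells, values near
    0.5 mark unreliable ones, which is the information exploited in [18]."""
    return np.mean(power_up_matrix, axis=0)

# Example, reusing the toy power_up() model above:
#   measurements = np.stack([power_up(mismatch, noise_sigma) for _ in range(25)])
#   p_one = one_probabilities(measurements)
#   reliable_cells = np.flatnonzero((p_one < 0.1) | (p_one > 0.9))
```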

3.3 Alternative PUF proposals on silicon

A number of other silicon-based constructions which behave like PUFs have been proposed. In [16], the variability of MOSFET threshold voltages is measured directly with an on-chip analog measurement circuit. A method to identify an IC based on the unique resistance values in its power supply distribution system was introduced in [8]. In [25], the irregular current-voltage characteristics of diodes packed in a crossbar memory are used to construct a PUF.

4. PROPERTIES OF INTRINSIC PUFS

4.1 Intrinsic PUF results

We list the reported PUF characteristics of the discussed intrinsic PUF constructions in Table 1. All proposals mention experimentally obtained values for their average inter- and intra-distance, which at least makes a partial comparison possible. However, a number of issues have to be taken into account when using this table:

• Whenever possible, we also included a measure of the PUF's unpredictability. This is either an estimate of its min-entropy content, which gives a theoretical lower bound on the unpredictability, or the best known machine-learning result from a model-building attack, e.g. as described in Sect. 3.1.1; the latter indicates the accuracy a machine-learning algorithm can achieve when predicting unknown responses from a number of known responses of the same PUF instance. From the table it is already clear that not all constructions are equally unpredictable, which is an important criterion in security-sensitive applications.

• Some of the provided results for intra-distance include measurements taken over varying environmental conditions (marked with ∗), while others kept the environment fixed. This is an important difference, since environmental effects such as changes in temperature and supply voltage have a noticeable influence on most PUF responses. For the results where these effects are taken into account, the applied magnitude of the variations also differs.

• Most of these results come from different types of devices, including standard-cell ASICs, full-custom ASICs, FPGAs and commercial off-the-shelf products. Also, within a single device type, different manufacturing technologies are used for different PUF constructions.

• It is evident that the proposed constructions are not all equally efficient. They vary widely in silicon area usage, speed performance and manufacturing requirements, but also in the number of response bits they can securely produce. However, fairly comparing the constructions on this basis is very difficult because of the large variety of platforms they are implemented on.

• The provided values for the characteristics are averages obtained over a limited number of measurements and PUF instances. Their statistical significance may vary.

An important conclusion from Table 1 is that for all constructions µinter ≫ µintra. This is an indispensable requirement in order to be able to use the PUF as a reliable and unique device identification method. In essence, this requirement captures the most basic property a construction should have in order to be called a PUF.

4.2 Security properties

Besides the identification property, expressed by µinter ≫ µintra and discussed in Sect. 4.1, PUF constructions can have additional security properties which make them an interesting cryptographic hardware primitive.

• Physical unclonability expresses the difficulty an adversary has in producing a physical clone of a particular PUF instance, i.e. an instance with a challenge-response behavior which is sufficiently similar to the original that the two can no longer be distinguished efficiently. In general this should be hard for a PUF construction. Together with the identification property, this is a defining requirement for a PUF. For the discussed intrinsic PUFs, it is assumed that the randomness introduced by manufacturing variability is sufficiently uncontrollable that it cannot be captured and copied with great accuracy onto a second instance.

• Tamper evidence is the ability to detect tampering attacks after they have occurred. Because of the dependence on exact physical details at sub-micron levels, it is often assumed that tampering with a PUF implementation unavoidably changes its challenge-response behavior, and hence makes the tampering detectable. However, none of the discussed intrinsic PUFs has been experimentally verified to show tamper evidence.

• Other, more specific security properties for PUFs can be considered. For a more detailed discussion we refer to [19].

5. APPLICATION SCENARIOS

5.1 PUFs as unique device identifiers

The use of a PUF as a device identifier has been mentioned throughout this text. A very simple but practical scheme works as follows. During an enrollment phase, a PUF response is measured on a device and stored in a verifier's database. At a later time, during the verification phase, the verifier requests the same response again and compares it to the ones in its database; it identifies the device as the one whose stored response lies closest to the measured response. If responses of different devices are sufficiently different (µinter is large) and the same response on the same device is sufficiently similar at every measurement (µintra is small), then a correct identification is possible. Based on the distribution of the inter- and intra-distance for the used PUF construction, one is able to calculate a false-acceptance rate and a false-rejection rate for this simple identification scheme, respectively quantifying the probability of a fake device being positively identified by accident and the probability of a genuine device being falsely rejected. This is very similar to biometric identification schemes, e.g. based on human fingerprints.

Table 1: Comparison of PUF characteristics of the different proposed intrinsic PUF constructions.

PUF Construction                µinter ± σinter   µintra ± σintra   Unpredictability
Arbiter PUF [14]                23%               4.82%∗            97% predictable after observing 5000 responses
Arbiter PUF [13]                1.05%             0.3%
Feed-forward Arbiter PUF [14]   38%               9.8%∗             95% predictable after observing 50000 responses [26]
Subthreshold Arbiter PUF [15]   ≈ 50%             < 5%∗             90% predictable after observing 250 responses
Ring Oscillator PUF [5]         1%                0.01%
Ring Oscillator PUF [28]        46.15%            0.48%∗
Glitch PUF [29]                 41.5%             < 6.6%∗           Min-entropy ≈ 26% of response length
SRAM PUF [6]                    49.97% ± 0.3%     < 12%∗            Min-entropy ≈ 76% of response length
SRAM PUF [10]                   43.16%            3.8%
SRAM PUF [10]                   49.34%            6.5%
Latch PUF [27]                  50.55%            3.04%
Flip-flop PUF [31]              36% ± 2.9%        < 13%∗            Min-entropy ≈ 81.3% of response length
Butterfly PUF [12]              ≈ 50%             < 6%∗
∗ These results include environmental variations (temperature and/or supply voltage).
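Returning to the identification scheme above: under the Gaussian approximation of Sect. 2.2, its false-acceptance and false-rejection rates follow directly from the inter- and intra-distance distributions once a decision threshold is chosen. The short calculation below is our own illustration; since Table 1 only reports mean values, the standard deviations used here are assumed purely for the sake of the example.

```python
from math import erf, sqrt

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def far_frr(threshold, mu_inter, sigma_inter, mu_intra, sigma_intra):
    """With the decision rule 'accept if the distance to the stored response is
    below the threshold':
      FAR = probability that a *different* device still falls below the threshold,
      FRR = probability that the *genuine* device ends up above it."""
    far = gaussian_cdf(threshold, mu_inter, sigma_inter)
    frr = 1.0 - gaussian_cdf(threshold, mu_intra, sigma_intra)
    return far, frr

# Numbers in the range of Table 1; the standard deviations are assumed.
print(far_frr(threshold=0.25, mu_inter=0.50, sigma_inter=0.05,
              mu_intra=0.06, sigma_intra=0.03))
```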

5.2 PUFs as cryptographic key generators

By applying appropriate post-processing techniques, a PUF can be used to produce a cryptographically strong key, enabling more elaborate security schemes. Two major issues have to be dealt with in order to extract a secure key from PUF responses. Firstly, as is clear from Table 1, all PUF constructions produce responses with a non-negligible probability of errors between distinct measurements (expressed by µintra). The post-processing therefore needs to apply an error-correcting step in order to guarantee that the same key is derived every time. Secondly, the extraction algorithm needs to make sure that the output key is completely unpredictable, i.e. it should be a uniformly distributed random bitstring. Since PUF responses are mostly only partially unpredictable (see Table 1), the extraction algorithm needs to compress a sufficient number of response bits into one key in order to guarantee strong unpredictability. Algorithms meeting both these requirements have been studied in [4] and are known as fuzzy extractors. Once a secure key is derived, standard cryptographic techniques can be deployed to meet all kinds of security objectives.

However, this method of generating a key with a PUF holds a number of interesting advantages over traditional secure key storage. Firstly, the key can be derived from the PUF at the required time and deleted afterwards, such that it is only present in the device when needed. This limits the time frame for an attack to extract the key from the device, since outside this time frame the key is not present in the device in digital form. Secondly, a PUF with a fuzzy extractor can be more efficient than using secured non-volatile memory to securely store a cryptographic key. Finally, a PUF-generated key is also strongly linked to the physical hardware embedding the PUF, making it physically unclonable. This is an interesting property, e.g. in anti-counterfeiting applications.
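As a sketch of how a fuzzy extractor can combine both steps, the code below shows a much-simplified code-offset construction: a repetition code handles the bit errors, and a plain hash stands in for the randomness-extraction step. This is our own toy illustration; a real design, as analysed in [4], uses stronger error-correcting codes, a seeded universal hash and careful accounting of the entropy lost through the helper data.

```python
import hashlib
import secrets

REP = 7   # repetition factor: each secret bit is copied 7 times

def rep_encode(bits):
    return [b for b in bits for _ in range(REP)]

def rep_decode(bits):
    # Majority vote per block corrects up to (REP - 1) // 2 bit flips.
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(reference_response):
    """Generation phase: draw a random secret, publish helper data = codeword XOR
    response, and hash the secret down to the final key (extraction step)."""
    secret = [secrets.randbits(1) for _ in range(len(reference_response) // REP)]
    helper_data = xor(rep_encode(secret), reference_response)
    key = hashlib.sha256(bytes(secret)).digest()
    return helper_data, key

def reproduce(noisy_response, helper_data):
    """Reproduction phase: helper data plus a fresh, noisy response recovers the
    secret, as long as the errors stay within the correction capability."""
    secret = rep_decode(xor(helper_data, noisy_response))
    return hashlib.sha256(bytes(secret)).digest()

# helper, key = enroll(reference_response)          # once, at enrollment
# assert reproduce(noisy_response, helper) == key   # later, in the field
```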

6. CONCLUSIONS

In this survey, we have discussed the use of intrinsic physically unclonable functions or PUFs on silicon devices as an efficient and secure device identification method. We have provided an overview of the main proposed constructions, which are, interestingly, all based on CMOS process variations. By summarizing their PUF-like characteristics, we were able to identify some fundamental properties which can be found in all of the discussed PUF constructions. We have also briefly described the most basic use cases of PUFs and pointed out their importance in practical cryptography.

Acknowledgments

This work was supported in part by the UNIQUE project (EU FP7), the BCRYPT network (Belgian IAP P6/26) and by K.U.Leuven GOA funding. Roel Maes is funded by IWT-Vlaanderen under grant number 71369.

7. REFERENCES

[1] A. Agarwal, K. Kang, S. Bhunia, J. D. Gallagher, and K. Roy. Device-aware yield-centric dual-Vt design under parameter variations in nanoscale technologies. IEEE Trans. Very Large Scale Integr. Syst., 15:660–671, 2007.
[2] J. D. R. Buchanan, R. P. Cowburn, A.-V. Jausovec, D. Petit, P. Seem, G. Xiong, D. Atkinson, K. Fenton, D. A. Allwood, and M. T. Bryan. Forgery: 'fingerprinting' documents and packaging. Nature, 436(7050):475, July 2005.
[3] G. DeJean and D. Kirovski. RF-DNA: Radio-frequency certificates of authenticity. In CHES '07: Proceedings of the 9th International Workshop on Cryptographic Hardware and Embedded Systems, pages 346–363. Springer-Verlag, 2007.
[4] Y. Dodis, R. Ostrovsky, L. Reyzin, and A. Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. SIAM J. Comput., 38(1):97–139, 2008.
[5] B. Gassend, D. Clarke, M. van Dijk, and S. Devadas. Silicon physical random functions. In ACM Conference on Computer and Communications Security, pages 148–160. ACM Press, 2002.
[6] J. Guajardo, S. S. Kumar, G.-J. Schrijen, and P. Tuyls. FPGA intrinsic PUFs and their use for IP protection. In Cryptographic Hardware and Embedded Systems Workshop, volume 4727 of LNCS, pages 63–80, September 2007.
[7] G. Hammouri, A. Dana, and B. Sunar. CDs have fingerprints too. In CHES '09: Proceedings of the 11th International Workshop on Cryptographic Hardware and Embedded Systems, pages 348–362. Springer-Verlag, 2009.
[8] R. Helinski, D. Acharyya, and J. Plusquellic. A physical unclonable function defined using power distribution system equivalent resistance variations. In DAC '09: Proceedings of the 46th Annual Design Automation Conference, pages 676–681. ACM, 2009.
[9] M. Hofer and C. Boehm. An alternative to error correction for SRAM-like PUFs. In Cryptographic Hardware and Embedded Systems, CHES 2010. Springer, 2010.
[10] D. E. Holcomb, W. P. Burleson, and K. Fu. Power-up SRAM state as an identifying fingerprint and source of true random numbers. IEEE Trans. Comput., 58(9):1198–1210, 2009.
[11] T. Kanamoto, Y. Ogasahara, K. Natsume, K. Yamaguchi, H. Amishiro, T. Watanabe, and M. Hashimoto. Impact of well edge proximity effect on timing. IEICE Transactions, 91-A(12):3461–3464, 2008.
[12] S. Kumar, J. Guajardo, R. Maes, G.-J. Schrijen, and P. Tuyls. Extended abstract: The butterfly PUF protecting IP on every FPGA. In IEEE International Workshop on Hardware-Oriented Security and Trust (HOST 2008), pages 67–70, June 2008.
[13] J. W. Lee, D. Lim, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas. A technique to build a secret key in integrated circuits for identification and authentication applications. In Proceedings of the Symposium on VLSI Circuits, pages 176–179, 2004.
[14] D. Lim. Extracting Secret Keys from Integrated Circuits. Master's thesis, MIT, Cambridge, MA, USA, May 2004.
[15] L. Lin, D. Holcomb, D. K. Krishnappa, P. Shabadi, and W. Burleson. Low-power sub-threshold design of secure physical unclonable functions. In ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), pages 43–48, 2010.
[16] K. Lofstrom, W. R. Daasch, and D. Taylor. IC identification circuit using device mismatch. In Proceedings of ISSCC 2000, pages 372–373, 2000.
[17] R. Maes, P. Tuyls, and I. Verbauwhede. Intrinsic PUFs from flip-flops on reconfigurable devices. In 3rd Benelux Workshop on Information and System Security (WISSec 2008), Eindhoven, NL, 2008.
[18] R. Maes, P. Tuyls, and I. Verbauwhede. Low-overhead implementation of a soft decision helper data algorithm for SRAM PUFs. In Cryptographic Hardware and Embedded Systems – CHES 2009, pages 332–347, 2009.
[19] R. Maes and I. Verbauwhede. Physically unclonable functions: A study on the state of the art and future research directions. In A.-R. Sadeghi and D. Naccache, editors, Towards Hardware-Intrinsic Security, Information Security and Cryptography, pages 3–37. Springer, 2010.
[20] A. Maiti, J. Casarona, L. McHale, and P. Schaumont. A large scale characterization of RO-PUF. In IEEE Symposium on Hardware-Oriented Security and Trust (HOST), pages 94–99, 2010.
[21] M. Majzoobi, F. Koushanfar, and M. Potkonjak. Techniques for design and implementation of secure reconfigurable PUFs. ACM Trans. Reconfigurable Technol. Syst., 2(1):1–33, 2009.
[22] R. S. Pappu. Physical One-Way Functions. PhD thesis, Massachusetts Institute of Technology, March 2001.
[23] M. Pelgrom, A. Duinmaijer, and A. Welbers. Matching properties of MOS transistors. IEEE Journal of Solid-State Circuits, 24(5):1433–1439, 1989.
[24] S. Reda and S. R. Nassif. Analyzing the impact of process variations on parametric measurements: novel models and applications. In Proceedings of the Conference on Design, Automation and Test in Europe (DATE '09), pages 375–380, 2009.
[25] U. Rührmair, C. Jaeger, C. Hilgers, M. Algasinger, G. Csaba, and M. Stutzmann. Security applications of diodes with unique current-voltage characteristics. In Financial Cryptography and Data Security, 2010.
[26] U. Rührmair, J. Sölter, and F. Sehnke. On the foundations of physical unclonable functions. Cryptology ePrint Archive, Report 2009/277, 2009.
[27] Y. Su, J. Holleman, and B. Otis. A 1.6pJ/bit 96% stable chip-ID generating circuit using process variations. In Solid-State Circuits Conference (ISSCC 2007), Digest of Technical Papers, pages 406–611, February 2007.
[28] G. E. Suh and S. Devadas. Physical unclonable functions for device authentication and secret key generation. In Design Automation Conference, pages 9–14. ACM Press, 2007.
[29] D. Suzuki and K. Shimizu. The glitch PUF: A new delay-PUF architecture exploiting glitch shapes. In Cryptographic Hardware and Embedded Systems, CHES 2010, pages 366–382. Springer, 2010.
[30] P. Tuyls, G.-J. Schrijen, B. Škorić, J. van Geloven, N. Verhaegh, and R. Wolters. Read-proof hardware from protective coatings. In Cryptographic Hardware and Embedded Systems Workshop, volume 4249 of LNCS, pages 369–383. Springer, October 2006.
[31] V. van der Leest, G.-J. Schrijen, H. Handschuh, and P. Tuyls. Hardware intrinsic security from D flip-flops. In Proceedings of the Fifth ACM Workshop on Scalable Trusted Computing (STC 2010), pages 53–62, 2010.
[32] N. H. E. Weste and D. M. Harris. CMOS VLSI Design: A Circuits and Systems Perspective. Pearson Education, Boston, MA, USA, fourth edition, 2010.
[33] C.-E. D. Yin and G. Qu. LISA: Maximizing RO PUF's secret extraction. In IEEE Symposium on Hardware-Oriented Security and Trust (HOST), pages 100–105, 2010.
