Test and Test Bench Generation by Algorithmic Models

R. Seinauskas, V. Jusas

Abstract

Each circuit may be described at the algorithmic, behavioural or gate level. Normally, tests are generated separately for each of these levels. We present here a technique for generating test patterns for faults by simulating descriptions of circuits at the algorithmic level. The resulting tests are applicable at the VHDL behavioural level as a test bench, and they give high fault coverage for the equivalent circuit at the gate level. The procedure for selecting the tests is based on the transmission of distorted signals to the outputs. Generation of the input stimuli is performed with respect to the distribution of the inputs. The application of test frames allows sequential circuits to be treated as combinational ones.

1. Introduction

Normally, the design of an electronic circuit involves several stages. The first is the idea, which is then checked by compiling an algorithmic model of the circuit. The prototype may be implemented either in a high level programming language or in a hardware description language such as VHDL. An analysis of the prototype requires input stimuli and demonstrates the behaviour of the circuit. This is in most cases the job of the circuit designer. Circuits with numerous inputs can suffer from a large number of input stimuli, and the task of the designer is to select the most important ones as specifications of the behaviour of the circuit under design. This is by no means a simple procedure, as it is never evident that the input stimuli selected reflect all behaviour modifications of the circuit. We recommend automated generation of input stimuli even at this early stage, so that the performance of the circuit under design can be checked in the presence of unexpected stimuli.
In checking an algorithm, the role of the oracle falls to the circuit designer, as he is the one who decides whether the algorithm responds correctly to the given input stimuli. Making this decision demands effort, so the designer prefers to evaluate as few input stimuli as possible. Hence, it is always important to generate the smallest number of input stimuli for the evaluation of an algorithm. The next stage is to compile a description of the circuit in a synthesis language. Here automated generation of input stimuli is even more important, as the adequacy between the model of the algorithm and the description intended for synthesis must be checked. Evidently, the input stimuli used for the analysis of the algorithmic model can be applied here. The question is: is this sufficient? The adequacy between two descriptions is best proved by a formal approach, but the formal techniques are not yet quite reliable. Certain inadequacies can be found by comparing the responses of the algorithmic description and the synthesis description to the same input stimuli. A dilemma arises here of ensuring the sufficiency of the selected input stimuli for the evaluation of the descriptions.

After the synthesis of the circuit has been completed, test stimuli should be generated to find defects in the manufactured circuit. This is one more difficult task. We also face a number of problems related to the fault model, especially in integrated circuits, such as whether gate level models are sufficient or switch level models should be used. In any case, test patterns for detecting defects are most efficient when they have been generated at the lowest level. An assumption can be made that test patterns for faults at the gate level can be applied successfully in the analysis of the algorithmic description of the circuit, and in finding discrepancies between the algorithmic and synthesis descriptions. The difficulty lies in that we have to perform the analysis of the algorithm, and to prove the adequacy between the algorithm and the synthesis description, before the circuit itself is synthesised. On the other hand, the situation can be exploited to evaluate the suitability of the input stimuli for the analysis of the circuit algorithm, for the comparison of the synthesis description with the algorithm, and for the test of the synthesised circuit. One way of generating tests at the behavioural level is to use models of behavioural faults [1-3]. So far, there are no established fault models for behavioural descriptions that could ensure proper testing of the physical defects of a circuit. Also, there can be a large number of faults in descriptions at high levels of abstraction. Our problem can be stated in the following way: generate test patterns according to a black box description of the circuit. The first question is whether any solution exists at all. Since the algorithmic description of the circuit gives no information on its structure, the solution must be based on input stimuli and output responses.
Recall that each fault in a circuit can be tested by a number of input stimuli and can lead to different output responses. This is the basis of locating circuit faults by the observation of output responses only. The response of a diagnostic test gives suggestions on the type and location of the fault, and raises the hope that the coverage of internal faults can be judged from the test responses of the circuit. Having only the description of the algorithm, we can operate only on input faults, so that one more question must be answered: are they sufficient? The only way of finding this answer was by experiment. Therefore we have suggested an algorithm for evaluating the coverage of input stimuli that are generated for detecting input faults only [5]. We performed the experiment with 20 selected ISCAS85/89 circuits, and found the coverage of the input stimuli to be only several per cent lower than that of exhaustive testing of the inputs. The selection evaluated only changes of the values of the inputs, and only responses of the outputs. The coverage of the stimuli was evaluated against all internal stuck-at faults in a description at the gate level. The experiment included a generation of a certain number of input stimuli, not necessarily with the largest coverage of the circuit by the tests. We present here a technique of generating the tests, based on the circuit description at the algorithmic level. What is the difference between a description at the algorithmic level and a behavioural VHDL description? Algorithmic descriptions are implemented in some high level programming language, say C. They carry no information either on the synchronization or on the initialization of the circuit. The model reflects only the function of the circuit. This paper is organised as follows. In Section 2 we present a test generation method based on a transmission model.
In Section 3 we introduce the techniques for generating test patterns for sequential circuits. In Section 4 we show how to use the generated test patterns as a test bench. In Section 5 we give experimental results, and we conclude in Section 6.

2. Generation of Tests for Transmission Faults of Circuits

Let us have a model of a circuit in a high level programming language. We treat it as a black box and consider only its inputs and outputs. Let the circuit have a set of inputs X = {x1, x2, ..., xi, ..., xn} and a set of outputs Y = {y1, y2, ..., yj, ..., ym}. We consider a change of the signal of an input xi and write xi1 (xi0) for a zero-to-one (one-to-zero) change of the input signal. A change of one signal in the input stimulus may lead to changes of several output signal values. We write yj1 (yj0) for the zero-to-one (one-to-zero) change of the output yj. The couple (xi1, yj0) implies that a zero-to-one change of the signal of the input xi leads to a one-to-zero change of the signal of the output yj. There are four modifications of the relation between input xi and output yj: (xi0, yj0), (xi0, yj1), (xi1, yj0), (xi1, yj1). We treat these couples (xit, yjk), where t = 0,1 and k = 0,1, as models of circuit faults and call them transmission faults. We assume that a test pattern detects, say, the transmission fault (xi1, yj0) if a zero-to-one change of the signal of the input xi of the test pattern leads to a one-to-zero change of the signal of the output yj. Each transmission fault (xit, yjk) is related to real defects of the circuit: in a faulty circuit, the corresponding change does not occur. In the general case, n inputs of a circuit are related to m outputs, and the circuit may have 4 × n × m transmission faults. The list of transmission faults may be written as a set: F = { (xit, yjk) | i = 1,...,n; j = 1,...,m; t = 0,1; k = 0,1 }. There may be no electric connections between some of the inputs and the outputs, and their transmission faults will not be detected; these faults are undetectable. In this manner the number of generated test patterns will not exceed the number of transmission faults in the circuit. There may be several transmission paths between an input and an output; in the suggested fault model, these paths will be activated by no more than four test patterns.
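The couples detected by one input distortion can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: `detected_couples` and the tiny AND-gate model are names introduced here, and the circuit model is any function from input bits to output bits, treated as a black box.

```python
def detected_couples(f, v, i):
    """Couples (x_i^t, y_j^k) detected by flipping input i of stimulus v.

    f maps a tuple of input bits to a tuple of output bits; the circuit
    is treated as a black box, as in the transmission-fault model.
    Couples are encoded as tuples (i, t, j, k).
    """
    r = f(v)                      # response to the initial stimulus
    w = list(v)
    w[i] ^= 1                     # distort one input signal
    s = f(tuple(w))               # response to the distorted stimulus
    t = w[i]                      # direction of the input change (0->1 gives t=1)
    return {(i, t, j, s[j]) for j in range(len(r)) if r[j] != s[j]}

# a 2-input, 1-output AND circuit as a black box
f_and = lambda v: (v[0] & v[1],)
# flipping x0 of (0,1) changes the output 0 -> 1, so (x0^1, y0^1) is detected
print(detected_couples(f_and, (0, 1), 0))  # -> {(0, 1, 0, 1)}
```

The same pair of stimuli, applied in the opposite order, detects the symmetric couple (x0^0, y0^0), which is the feature the selection strategy exploits below.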
At first sight, one cannot be sure that test patterns which detect the suggested fault model will sufficiently cover the stuck-at faults at the gate level. The applicability of the procedure to large circuits with long transmission paths is equally doubtful. But the experimental checking [5] has proved that the test patterns, by detecting transmission faults, also detect the major part of the stuck-at faults at the gate level, including on the largest benchmark circuits. The general strategy of generating the tests is: find for each couple (xit, yjk) a relevant test pattern. The same test pattern can fit several couples. The search is performed by simulation. For each generated input stimulus we simulate all single input changes separately, and select as test patterns the stimuli that cause changes at the outputs. Two input stimuli with opposite signals on one input detect symmetric transmission faults (xit, yjk); this feature is used in selecting the test patterns. A specific feature of this strategy is that the selection of test patterns follows the algorithmic description on a high level of abstraction, which results in a short simulation time, so that a large number of input stimuli can be covered. The maximum number C of inputs completely covered by the selection depends on the complexity of the circuit model and on the available computer resources. So, if a circuit has a number of inputs less than or equal to the limit C, we apply exhaustive sorting of input stimuli: the decimal numbers from 0 to 2^C - 1 are generated and converted to binary numbers, and these binary numbers are applied to the inputs of the circuit as input stimuli. The responses at the outputs are calculated by simulation. For each input stimulus generated in this way, we make all possible single changes of input signals to the opposite value. If a circuit has n inputs, n distorted input stimuli are created from a single initial input stimulus.
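For a small circuit (number of inputs within the limit C) the whole selection strategy can be sketched in a few lines. This is a minimal sketch under stated assumptions, not the paper's program: `select_tests` is a name introduced here, the circuit is any black-box function from input bits to output bits, and couples are encoded as tuples (i, t, j, k).

```python
from itertools import product

def select_tests(f, n, m):
    """Exhaustively sort input stimuli, simulate all single distortions,
    and select the stimuli that detect new transmission faults."""
    F = set(product(range(n), (0, 1), range(m), (0, 1)))  # all 4*n*m couples
    T = []                                                # selected stimuli
    for bits in product((0, 1), repeat=n):                # exhaustive sorting
        v = list(bits)
        r = f(tuple(v))                                   # initial response
        selected = False
        for i in range(n):                                # n single distortions
            v[i] ^= 1
            s = f(tuple(v))                               # distorted response
            for j in range(m):
                if r[j] != s[j]:
                    # the applied change and its symmetric counterpart
                    for t, k in ((v[i], s[j]), (1 - v[i], r[j])):
                        if (i, t, j, k) in F:
                            F.discard((i, t, j, k))
                            selected = True
            v[i] ^= 1                                     # restore the input
        if selected:
            T.append(bits)
    return T, F            # selected test patterns, undetected couples

# 2-input AND gate: the rising-input/falling-output couples are undetectable
f_and = lambda v: (v[0] & v[1],)
tests, undetected = select_tests(f_and, 2, 1)
```

For the AND gate the loop selects the two stimuli (0,1) and (1,0), and four couples stay undetected, exactly the ones that would require the monotone output to move against the input change.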
The distorted input stimuli are simulated too. After simulation we compare the responses of the initial input stimulus and of the distorted input stimuli. If some outputs take different values, the corresponding couple (xit, yjk) obtains a test pattern. If the number of inputs of a circuit exceeds a predefined limit C, exhaustive testing becomes problematic because it consumes a lot of time. In this case we split the inputs into groups whose sizes do not exceed the limit. Exhaustive sorting of input stimuli is done for one of the groups, and these input stimuli are simply repeated for all other groups. The single changes of input signals of the generated initial stimulus are made irrespective of the groups. Note that in this approach the distortions do not lead to repeated input stimuli.

Now let us formulate the algorithm in order to exclude ambiguity. We write a vector V = <v1, v2, ..., vi, ..., vn> for a generated input stimulus, where n denotes the number of inputs, and T for the set of selected test stimuli, which is initially empty, T = {}. Then T := T ∪ {V} stands for including vector V into the set of selected input stimuli, and F := F \ {(xit, yjk)} stands for the exclusion of the fault (xit, yjk). For a given input stimulus V, the simulation R := f(V) finds the values of the output signals R = <r1, r2, ..., rj, ..., rm>. The initial group size C for exhaustive sorting starts from 8. The value P = 1 states that some transmission faults were detected during the current iteration of the selection. The algorithm of test pattern generation follows:

 1.  T := {};  F := { (xit, yjk) | i = 1,...,n; j = 1,...,m; t = 0,1; k = 0,1 };  C := 8;  P := 1;
 2.  while (P > 0)
 3.      <g1, g2, ..., gC> := <0, 0, ..., 0>;  P := 0;
 4.      while (<g1, g2, ..., gC> has not overflowed)
             do i = 1 to n;  h := i;        /* generate input stimulus V = <v1, ..., vn> */
 5.              while (h > C)  h := h - C;
 6.              vi := gh;
 7.          enddo;                          /* input stimulus generated */
 8.          calculate the response R := f(V),  R = <r1, r2, ..., rm>;
 9.          do i = 1 to n;
10.              vi := not vi;               /* change the input signal */
11.              calculate the response S := f(V),  S = <s1, s2, ..., sm>;
12.              do j = 1 to m;
13.                  if rj ≠ sj then  k := sj;  t := vi;
14.                      if (xit, yjk) in F then
15.                          T := T ∪ {V};  F := F \ {(xit, yjk)};  P := P + 1;
16.                      endif;
                         /* test the symmetric distortion */
17.                      k := rj;  t := not vi;
18.                      if (xit, yjk) in F then
19.                          T := T ∪ {V};  F := F \ {(xit, yjk)};  P := P + 1;
20.                      endif;
21.                  endif;
22.              enddo;
23.              vi := not vi;               /* restore the input signal */
24.          enddo;
25.          <g1, ..., gC> := <g1, ..., gC> + <0, ..., 0, 1>;   /* increase the binary value of the vector by one */
26.      endwhile;
27.      C := C + 1;
28.  endwhile;
29.  stop;

This strategy of test generation is based on an incremental approach. It starts with small groups of inputs, say of size 8: 256 input stimuli are generated, their distortions are simulated as well, and from the generated stimuli and their distortions those are selected which detect transmission faults in the circuit. Then input stimuli are generated for groups of size 9, and those stimuli are selected which detect new transmission faults. With a gradual increase of the group size, we arrive at one of the following situations: all transmission faults are detected; an increase of the group size yields no input stimuli detecting new transmission faults; or the size of a group becomes limited by the size of the circuit and by the available computer resources. When the generation of the tests must be stopped because of the computer resources, a set of different groupings of the circuit inputs may be tried as a last attempt. The groups can be formed at random, or following the functions of the inputs; the results depend on the experience of the designer. Note that each input added to a group doubles the number of input stimuli generated.

As we have mentioned, a model of the circuit at the algorithmic level has no clock information. Therefore, test patterns generated from such a model may be applied directly only if the circuit is combinational. If a circuit is sequential, these test patterns require additional treatment. For this purpose test frames (templates) are used; they are described in the next section.

3. The Test Frames

To be able to process the data, a sequential circuit has a defined sequence of synchronisation signals and control signals. This sequence determines the way the data are transmitted. For a sequence of input stimuli that have defined values of some inputs and leave the values of the other inputs undefined, we use the term "test frame" [6,7].
A sequential circuit with a test frame may be presented as a function in which the output values are related only to the input values and not to the internal state of the circuit; in other words, as a combinational circuit. The test frame should be formed before the tests are generated. The model of the circuit under creation is very closely related to its test frame, as the model is constructed according to the test frame. The test cases are generated as for combinational circuits, and the test frame suggests a way of transforming the test cases into a test sequence: eventually the test frame is filled up with the values of the test cases.

Let us consider an algorithm of multiplication (S344) of two 4-bit binary numbers. It results in an 8-bit number. In a high level programming language such an algorithm is expressed by a single operator Z = X * Y. The algorithm has 8 inputs and 8 outputs, and 256 input stimuli can be generated. The tabulated test data show that 78 test patterns were selected among all possible input stimuli by the suggested test generation technique for the S344 circuit that implements the multiplication algorithm (Table I). For a combinational circuit implementing the multiplication, these 78 test patterns give a fault coverage of 96.5% (line S344k). For an implementation of the multiplier as a sequential circuit, the test frame must be used. It is shown in Fig. 3.1.
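Filling a test frame with a test case can be sketched as follows. This is a simplified illustration under stated assumptions, not the full 15-step S344 frame: a frame is given here as a list of per-clock rows, where the literal signal values are the strings '0', '1' and 'x', and an integer n refers to bit n of the test case (1-based, like the bit numbers in Fig. 3.1).

```python
def expand(frame, case):
    """Replace the bit references of a test frame by the test-case values,
    turning one test case into a sequence of test patterns."""
    return [[case[b - 1] if isinstance(b, int) else b for b in row]
            for row in frame]

# inputs per row: [start, clk, a3..a0, b3..b0]; a 3-step illustrative frame
frame = [
    ['0', 'x'] + ['x'] * 8,          # idle: start = 0
    ['1', '0'] + list(range(1, 9)),  # start = 1: apply test-case bits 1-8
    ['0', '1'] + ['x'] * 8,          # clock pulse, operands no longer needed
]
case = [0, 1, 1, 0, 1, 0, 0, 1]      # one 8-bit test case (the operands)
seq = expand(frame, case)
# seq[1] == ['1', '0', 0, 1, 1, 0, 1, 0, 0, 1]
```

The real S344 frame additionally carries the expected output bits and the K (hold) marks, but the substitution principle is the same.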

Test Case (S344): Multiplicand = bits 01-04, Multiplier = bits 05-08, Product-high = bits 09-12, Product-low = bits 13-16, Ready 1 = bit 17, Ready 2 = bit 18.

        start  clk   a_in         b_in         phg-out      phv-out      ready
  1)      0     x    x x x x     x x x x      x x x x      x x x x       x
  2)      1     0    01 02 03 04 05 06 07 08  x x x x      x x x x       17
  3)      0     1    K K K K     K K K K      x x x x      x x x x       K
  4)      0     0    x x x x     x x x x      x x x x      x x x x       K
  5)      0     1    x x x x     x x x x      x x x x      x x x x       K
  6)      0     0    x x x x     x x x x      x x x x      x x x x       K
  7)      0     1    x x x x     x x x x      x x x x      x x x x       K
  8)      0     0    x x x x     x x x x      x x x x      x x x x       K
  9)      0     1    x x x x     x x x x      x x x x      x x x x       K
 10)      0     1    x x x x     x x x x      x x x x      x x x x       K
 11)      0     1    x x x x     x x x x      x x x x      x x x x       K
 12)      0     0    x x x x     x x x x      x x x x      x x x x       K
 13)      0     1    x x x x     x x x x      09 10 11 12  13 14 15 16   18
 14)      0     0    x x x x     x x x x      K K K K      K K K K       K
 15)      0     1    x x x x     x x x x      K K K K      K K K K       K

[Schematic with buses: Op., Data a, Data b, Results r; command sequence: 1. Load a; 2. Load b; 3. Op. a,b; 4. Output r; r := a op b.]
Fig. 3.2. A schematic representation of a test frame for a calculator.

Fig. 3.1. The S344 test frame. The leading 8 bits in the test case line that corresponds to a single test pattern stand for the inputs, the other 8 bits for the outputs. The procedure ends on the "ready" signal; it changes twice in the sequence of the test frame, therefore two bits (17 and 18) are provided in the test case line. Neither the "clock" nor the "start" signal is reflected in the test case line; they are fixed in the sequence of the test frame. The sequence of the test frame indicates what values of the test case should be inserted and where. The data for a multiplication are applied at the first clock signal; the subsequent values of these inputs are not important. The output results appear after the 6th clock signal. The symbol K means that the preceding signal value is maintained. In this manner a procedure of multiplication takes 7 clock signals. Each of the 78 selected test cases has then been replaced by a sequence of 15 test patterns. A simulation of the faults revealed that the selected test cases detect 96.1% of the 342 primary faults,

which means 329 primary faults (line S344o). The same number of primary faults can be detected in the same circuit by deterministic techniques [4]. Note that the selected 78 test cases are redundant: the test simulation program selects 34 test patterns for the combinational circuit and 28 test sequences for the sequential implementation. The selected sets of 28 and 34 test patterns include 25 common ones. This implies that the largest number of the circuit faults are detected by the same input stimuli, in spite of the fundamental differences between the circuit implementation techniques. This is a very interesting observation which needs a deeper study. The test frame opens a way of using a hypothetical model that is closer to the real circuit for the generation of tests. We consider a simple calculator as an example. It has commands for loading one operand, then the other operand, performing the operation, and loading the result at the output, and it has just a single data bus. To test a command, the first operand must be loaded, then the second operand must be loaded via the same bus, and only then can the operation be performed and the result put out. The test frame imposes a sequence of commands that puts the necessary data in and loads the result at the circuit outputs. Following this test frame, we have constructed a hypothetical model of a calculator with 2 separate input buses and 2 separate output buses, because the result of a multiplication is twice as long. A schematic representation of the calculator frame is presented in Fig. 3.2. It shows that the inputs of the model used for selecting the stimuli differ considerably from those of the real calculator. A circuit may have several test frames and several corresponding models; the above calculator has two test frames, one for multiplication and one for the other operations, since the operation of multiplication has a synchronisation period twice as long.
The tabulated experimental results show that the test stimuli selected for the calculator with the help of the hypothetical model give a 96.0% coverage of the real stuck-at faults. This demonstrates how test frames simplify the selection of the input stimuli and open the way to using algorithmic models of circuits for test generation by direct input-output observation.

4. The Test Bench

With the algorithmic model ready, input stimuli must be known for its validation by simulation. The functionality of the model is evaluated by an investigation of its output responses. Traditionally the input stimuli are selected either by the design engineer or by the model compiler. Such input stimuli reflect just the opinion of the designer on the way the model should operate, which means some functions may remain unchecked. For a more exact checking of the model, automated selection of input stimuli should be introduced. This means two problems must be solved: how to select input stimuli for a comprehensive study of the algorithm of the model, and how to find the correct, expected responses to the stimuli. In the absence of an alternative algorithm description or a prototype, the algorithm designer is the only person who can determine the correct responses to the stimuli. Therefore the number of input stimuli for the validation of the algorithm should be minimal. The suggested technique of test generation is perfectly adapted to the construction of a test bench for the algorithmic model. The generated test patterns detect the faults at the gate level very well, which allows us to expect their sufficiency for the analysis of the algorithm. The number of input stimuli selected is not large and demands little effort from the algorithm designer in checking the correctness of the responses. The same input stimuli can be used for the comparison of the synthesis description with the model of the algorithm.
The responses of the two descriptions to the selected input stimuli must coincide. A deeper analysis of the adequacy of the two descriptions is given by an analysis of the transmission faults according to each description: the detected faults should coincide. In this case the tests also cover the responses to the distortions of the input stimuli, which means a deeper checking of the adequacy of the two descriptions. The situation when the input stimuli used in the evaluation of the algorithm are found insufficient for testing the circuit at the gate level should be considered separately. The set of test stimuli should then be extended by test patterns generated from the gate-level description, so that the synthesised circuit is tested properly. One more analysis of the algorithm including the new test patterns, and one more comparison with the synthesis description, seems quite reasonable in this situation: some discrepancies could be detected and the pre-launch validation of the circuit could be improved. Note also that modern circuit design techniques rely more and more on hardware description languages. One such language is VHDL, which allows either a top-down or a bottom-up description of the project, where extended hierarchical descriptions can be inserted at arbitrary levels. This calls for a technology of test generation that can also work on different levels of abstraction.

5. The Experimental Results

The efficiency of the suggested techniques was evaluated with the help of different VHDL models from the FUTEG benchmarks. The research was partially supported by the Copernicus project FUTEG No. 9624: "Functional Test Generation and Diagnosis". The model descriptions were constructed in the C programming language for all VHDL descriptions. The faults were simulated at the gate level with the VeriFault simulator. Descriptions at the gate level were synthesised from the VHDL behavioural models using the Synopsys system. The results are tabulated (Table I). The first column gives the name of a circuit: GCD means a greatest common divisor; S344 denotes the multiplier.
We had three gate-level descriptions of the multiplier, therefore three lines start under the name S344. Scalc denotes a simple calculator; two test frames and two models were constructed for it. Difeq denotes a differential equation. Risc denotes a shortened description of a RISC processor; two test frames were used for it, too. RiscT denotes another RISC processor which consists of a combinational part only. The size of the set of generated input stimuli and the test generation time in seconds (on a Sparc Classic) are presented in columns two and three. The number of test patterns or test sequences selected by the suggested approach is presented in the column named "Vec.". Because the generated test patterns are redundant, the column named "Min. Vec." shows the minimum number of patterns that give the same fault coverage as the initial test patterns; these numbers were obtained from the results of fault simulation. The column named "Frame" shows the number of test patterns in the test frame. The columns named "Input" and "Output" give the numbers of inputs and outputs of the circuit.

Table I
Circuit | Set | Time, seconds | Vec. | Min. Vec. | Frame | Input | Output | Total Faults | Fault Coverage
[The data rows of Table I are not recoverable from this copy.]

For example, Figure 1 shows the realization of the function of Table I, together with the vectors of the universal test set for a function F(a,b,c,d) derived from its truth table. It has been proved that the universal test set detects all multiple stuck-at faults in a realization R under the following restriction: every path between two points in R has the same inversion parity. Such realizations are called unate gate networks. The unate gate network restriction is strict, and in practice it is often replaced by a relaxed condition. The realization R of the function is a balanced inversion parity (BIP) network if the paths from unate variables have the same inversion parity. BIP realizations allow paths with different inversion parities between any binate variable and the outputs of a network, so the restriction holds for practically any realization. It has been shown recently that in the worst case unate gate networks are at most twice as large as the minimal implementation. BIP realizations, on the other hand, tend to be minimal or near-minimal. Any BIP realization can be obtained from a unate gate network by applying a sequence of resubstitution and De Morgan transformations. Therefore, the universal test set detects all detectable multiple stuck-at faults in a BIP realization. The size of the universal test set for a unate function is relatively small, but for a binate function it equals the size of the exhaustive test. The exhaustive test (2^n vectors) includes all possible test vectors of n inputs.

Table I. The universal test set

Input vectors   Expanded vectors   Output
abcd            abb'cc'd
0000            001010             0
0001            001011             0
0010            001100             0
0011            001101             0
0100            010010             0
0101            010011             0
0110            010100             0
0111            010101             1
1000            101010             1
1001            101011             1
1010            101100             1
1011            101101             1
1100            110010             1
1101            110011             1
1110            110100             0
1111            110101             1

[A further column of Table I, "Vectors of universal test set" / "Test type", marks eight rows as test vectors, four of type "Max. false" and four of type "Min. true"; the alignment of these marks with the rows is lost in this copy.]
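The "Expanded vectors" column of Table I can be reproduced by giving each binate variable a complemented copy; for this function the text identifies b and c as binate, while a and d are unate. A small sketch under that assumption (`expand_vector` is a name introduced here):

```python
def expand_vector(vec, binate=("b", "c"), order="abcd"):
    """Expand an input vector abcd into abb'cc'd: each binate variable
    contributes its value and its complement, unate variables only their value."""
    out = []
    for name, bit in zip(order, vec):
        out.append(bit)
        if name in binate:
            out.append("1" if bit == "0" else "0")   # complemented copy
    return "".join(out)

print(expand_vector("0111"))  # -> 010101, the Table I row for abcd = 0111
```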

In the general case it is impossible to get a complete test set smaller than the exhaustive test without information about the concrete realization of the module. The complete test detects all single stuck-at faults of the module described in terms of primitive gates. It has been proved that each input vector of the truth table is necessary for testing at least one realization of the function [3]. Figure 2 shows the realization of the function of Table I. The universal test set does not include the input vector abcd = 0000, but we can create a new realization which requires this input vector as a test vector. Let us say that the output of the function F1 differs from the output of the function F on the input vector abcd = 0000, with F(0000) = 0 and F1(0000) = 1. Figure 3 shows the realization of the function F1. The four-input OR gate in Figure 4 corrects the responses of the block of the function F1, and we get a new realization of the function F. The test set of this realization requires the input vector abcd = 0000, and all stuck-at faults are detectable in this realization. It should be noted that this new realization does not meet the BIP network restriction. The input variables a and d of the function F are unate and the variables b and c are binate, while only the input variable a is unate for the function F1. Therefore, the realization of the function F in Figure 4 contains paths with different parities from the input of the variable d and does not satisfy the restriction on BIP gate networks.
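The correction can be checked against the truth table of Table I. Since the figures are not reproduced here, the exact gating is an assumption: the sketch below assumes the four-input OR of a, b, c, d is combined with F1 by an AND, which forces the output to 0 on abcd = 0000 and leaves F1 unchanged everywhere else.

```python
# outputs of F for abcd = 0000 .. 1111, taken from Table I
F_TABLE = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1]

def F(a, b, c, d):
    return F_TABLE[a * 8 + b * 4 + c * 2 + d]

def F1(a, b, c, d):
    # F1 differs from F only on abcd = 0000, where F1 = 1
    return 1 if (a, b, c, d) == (0, 0, 0, 0) else F(a, b, c, d)

def F_corrected(a, b, c, d):
    # assumed composition: the four-input OR gates F1 back to F
    return F1(a, b, c, d) & (a | b | c | d)

# the corrected realization reproduces F on all 16 input vectors
assert all(F(a, b, c, d) == F_corrected(a, b, c, d)
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
```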

Figure 2. The realization of the function F of Table I

Figure 3. The realization of the function F1


Figure 4. A new realization of the function F.

A cell fault [4] implicitly models all defects that alter a module's truth table and so provides a high degree of realization independence. However, this model can be applied only to very small modules, because it often requires an exhaustive test set comprising all possible input vectors. The input pattern fault model [5] uses the truth table of the function as well. Each input pattern fault defines an input and output pattern pair and corresponds to a faulty behavior of the module. The number of such faults is large even for small modules. Coupling faults can alter output values in response to changes occurring on one or more inputs of a function [6]. The simplest case is a single coupling fault, which is defined in terms of a single input/output signal pair. The test vectors for coupling faults are called a coupling test set. Coupling test sets share some properties with universal test sets, but they are not necessarily exhaustive for binate functions; still, they are very large even for small functions. Higher-level fault models have been proposed for the realization-independent functional testing of combinational circuits. RTL fault models and quality metrics have been considered in [7]: logic/arithmetic operations, constant/variable switches, null statements, and if/else, case and for instructions have been treated as RTL fault models. In some cases, their effectiveness in covering stuck-at faults on the circuit's structural description has been ascertained. However, this does not guarantee their effectiveness in uncovering physical defects or stuck-at faults. The high-level fault models taken from software testing have three main advantages: they are well known and quite standardized; they require little calculation apart from the complete fault-free simulation; and they are already embedded in some commercial tools. However, while such metrics may be useful to validate the correctness of a design, they are usually inadequate to foresee the gate-level fault coverage with a high degree of accuracy. One of the most used fault models is the observability-enhanced statement coverage metric proposed in [8] and [9]. This fault model requires that all statements in the VHDL description are executed at least once, and that their effects are propagated to at least one primary output. Some approaches rely on a direct examination of the HDL description [10] or exploit the knowledge of the gate-level implementation [11]; others extract the corresponding control machine from a behavioural description [12,13]. The listed approaches are of limited generality, and their adequacy for testing defects or covering stuck-at faults at the gate level is not proved. The behavioral or "black-box" view represents the system by defining the behavior of its outputs according to the values applied to its inputs, without knowledge of its internal organization. In this case only the input-output connectivity can be fixed [14]. The connectivity fault models are rough compared to the stuck-at fault model; however, experimental investigation of fault models based on input-output path testing demonstrated a high defect and stuck-at fault coverage on the benchmark circuits [14,15]. An n-detection test set is one where each modeled fault is detected either by n different tests, or by the maximum number of different tests that can detect the fault if this number is smaller than n. In various types of experiments reported in [16], n-detection test sets were shown to be useful in achieving a high defect coverage for all types of circuits and for different fault models.

III. The influence of circuit re-synthesis on the fault coverage

A core can be synthesized by different electronic design automation systems, and mapped into different cell libraries and manufacturing technologies. An important issue is how well the test set of the core covers the faults of new implementations produced by the same synthesizer. The ISCAS'85 benchmarks have been selected for the experiments. The original ISCAS'85 circuits have been re-synthesized with the Synopsys Design Compiler program in the default mode and using the two-input AND-NOT cell library. Three realizations have been analyzed:
R1 – the non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db (the default mode)
R3 – Synopsys Design Optimization, the target library – and_or.db
The number of stuck-at faults for each realization can be seen in Table II and Figure 5. The original benchmark realizations have more stuck-at faults in total, which means that the re-synthesized circuits are more optimized. The percentage of the difference between the maximum and minimum numbers, relative to the maximum number of stuck-at faults, varies from 9 to 53. This demonstrates the impact of the target library during design synthesis and the diversity of the realizations.

Table II. The number of stuck-at faults

Circuit   R1     R2     R3     D     %
C432      507    426    460    81    16
C499      750    978    1246   496   40
C880      942    857    928    85    9
C1355     1566   1316   1406   250   16
C1908     1862   876    1224   986   53
C2670     1990   1500   1658   490   25
C3540     3126   2474   2520   652   21
C5315     5248   3879   4130   1369  26
C6288     7638   6680   7498   958   13
C7552     7039   4578   4798   2461  35
Total     30668  23564  25868

R1 – The non-redundant ISCAS'85 benchmark circuits
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db
D – The difference between the maximum and minimum numbers
% – The percent of the difference relative to the maximum number
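The two derived columns of Table II can be recomputed from the three fault counts; a small sketch (the function names are ours):

```c
#include <assert.h>

/* Derived columns of Table II: D = max - min of the three realization
   fault counts, % = 100*D/max, rounded to the nearest integer. */
static int max3(int a, int b, int c) { int m = a > b ? a : b; return c > m ? c : m; }
static int min3(int a, int b, int c) { int m = a < b ? a : b; return c < m ? c : m; }

int diff(int r1, int r2, int r3) { return max3(r1, r2, r3) - min3(r1, r2, r3); }

int pct(int r1, int r2, int r3) {
    int mx = max3(r1, r2, r3);
    return (100 * diff(r1, r2, r3) + mx / 2) / mx;  /* rounded */
}
```

For the C1908 row, diff gives 986 and pct gives 53, matching the table.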


Figure 5. The number of stuck-at faults for each realization

The test sets have been generated for each original ISCAS'85 circuit and for each re-synthesized circuit by the deterministic algorithm and by the random & deterministic algorithm. The deterministic algorithm was used when the random search did not reach 100% fault coverage. The sizes of the test sets with 100% stuck-at fault coverage can be seen in Table III. The random test generation increased the total test size for all realizations. In both test generation cases we see a dispersion of the test sizes (Figure 6) for a number of circuits. The test sizes for the realizations of the circuits c432, c880, c2670 and c6288 are very similar; these circuits have the smallest dispersion of stuck-at fault counts after re-synthesis.

Table III. The size of test sets

          D                 R&D
Circuit   R1    R2    R3    R1    R2    R3
C432      57    46    45    63    47    50
C499      54    74    80    63    78    90
C880      62    49    50    54    51    54
C1355     86    83    80    92    100   110
C1908     118   57    75    123   60    81
C2670     105   120   116   113   120   114
C3540     167   143   147   172   144   143
C5315     130   99    89    130   92    97
C6288     43    47    34    34    49    39
C7552     211   146   138   209   154   136
Total     1033  864   854   1053  895   914

D – Synopsys deterministic test patterns
R&D – Synopsys random and deterministic test patterns
R1 – The non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db


Figure 6. The test sizes of the deterministic and random & deterministic test patterns for all three realizations

The number of undetected faults of each test set for each circuit realization was computed. Table IV gives the results of the experiments. Deterministic test sets have been generated for each realization R1, R2, R3 of the benchmark circuits, and the fault simulation gives the number of undetected faults for each realization. Of course, the number of undetected faults is zero when a test set is simulated on the realization it was generated for. The test set generated for the realization R1 of the circuit c432 detects all faults of the realization R1; however, this test set does not detect 11 faults of the realization R2 and 1 fault of the realization R3. In general, a test set reused for another realization in most cases does not detect all stuck-at faults. The random and deterministic (R&D) test sets generally detect more faults than the deterministic test sets (the last row of Table IV). The test sets generated for the re-synthesized benchmark circuits detect fewer faults on the original benchmark circuits (the last column of Table IV and Figure 7). Only in two cases is the number of undetected faults for the original benchmark circuits smaller than the number of undetected faults after re-synthesis (Figure 7). The maximum percentage of undetected faults for the deterministic test sets, (116/1246) = 9.3%, was obtained for the test set of the original realization of the C499 circuit. The maximum percentage for the random and deterministic test sets, (164/1862) = 8.8%, was obtained for the test set of the C1908 circuit synthesized with the target library class.db.

Table IV. Undetected faults of realizations
(rows: the realization fault-simulated; columns: the realization the test set was generated for)

                        D                   R&D
Circuit   Realization   R1    R2    R3      R1    R2    R3      Total
C432      R1            0     21    16      0     13    11      61
          R2            11    0     9       4     0     7       31
          R3            1     7     0       0     8     0       16
C499      R1            0     6     16      0     8     16      46
          R2            44    0     8       34    0     8       94
          R3            116   22    0       48    12    0       198
C880      R1            0     29    18      0     13    17      77
          R2            0     0     2       1     0     1       4
          R3            0     7     0       0     4     0       11
C1355     R1            0     8     12      0     8     16      44
          R2            25    0     12      0     0     8       45
          R3            20    10    0       0     0     0       30
C1908     R1            0     158   129     0     164   120     571
          R2            3     0     12      2     0     11      28
          R3            1     41    0       0     48    0       90
C2670     R1            0     24    21      0     21    29      95
          R2            36    0     4       39    0     6       85
          R3            29    8     0       40    9     0       86
C3540     R1            0     57    53      0     61    56      227
          R2            6     0     6       4     0     4       20
          R3            6     8     0       6     8     0       28
C5315     R1            0     72    77      0     66    96      311
          R2            9     0     17      10    0     25      61
          R3            11    10    0       10    13    0       44
C6288     R1            0     0     0       0     0     0       0
          R2            39    0     21      59    0     28      147
          R3            18    27    0       41    9     0       95
C7552     R1            0     190   241     0     223   236     890
          R2            24    0     44      12    0     37      117
          R3            17    16    0       13    16    0       62
Total                   416   721   718     323   704   732

D – Synopsys deterministic test patterns
R&D – Synopsys random and deterministic test patterns
R1 – The non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db


Figure 7. Total undetected faults for each realization of the circuit

Each test set generated for one realization was reused for the two other realizations. The average percentage of undetected faults for the deterministic test sets and for the random and deterministic test sets is given in Table V.

Table V. The number of stuck-at faults and undetected faults

          F_R2+R3   U_R2+R3           F_R1+R3   U_R1+R3           F_R1+R2   U_R1+R2
Circuit             D_T_R1   R_T_R1             D_T_R2   R_T_R2             D_T_R3   R_T_R3
C432      886       11       5        967       29       20        933       25       18
C499      2224      160      82       1996      28       20        1728      24       24
C880      1785      0        1        1870      36       17        1799      20       18
C1355     2712      45       0        2972      18       8         2882      24       24
C1908     2100      4        2        3086      199      212       2738      141      131
C2670     3158      65       79       3648      32       30        3490      25       35
C3540     4994      12       10       5646      65       69        5600      59       60
C5315     8009      20       20       9378      82       82        9127      94       121
C6288     14178     57       100      15136     27       9         14318     21       28
C7552     9376      41       25       11837     206      239       11617     285      273
Total     49422     415      324      56536     722      706       54232     718      732
Percent             0.84%    0.66%              1.28%    1.25%               1.32%    1.35%

F_Ri+Rj – the number of stuck-at faults of the two realizations Ri and Rj
U_Ri+Rj – the number of undetected faults of the two realizations Ri and Rj
D_T_Ri – the deterministic test set generated for the realization Ri
R_T_Ri – the random and deterministic test set generated for the realization Ri

The average percentage of undetected faults does not exceed 1.5%. The maximum percentage of undetected faults for two realizations reaches (160/2224) = 7.2% for the test sets of the realization R1, (212/3086) = 6.9% for the realization R2, and (141/2738) = 5.1% for the realization R3. Two merged test sets can be applied for testing as a double-detection approach: one is a deterministically generated test set, the other a random and deterministically generated test set. The two test sets contain different test patterns, and each of them detects all faults of the target realization. The undetected faults for the double test sets are given in Table VI.

Table VI. The number of undetected faults for double test sets

Circuit   F_R2+R3  TR1   U_R2+R3   F_R1+R3  TR2   U_R1+R3   F_R1+R2  TR3   U_R1+R2
C432      886      120   2         967      93    10        933      95    9
C499      2224     90    49        1996     117   10        1728     152   6
C880      1785     116   0         1870     100   5         1799     104   9
C1355     2712     178   0         2972     183   4         2882     190   8
C1908     2100     241   2         3086     117   130       2738     156   80
C2670     3158     218   27        3648     240   17        3490     230   7
C3540     4994     339   3         5646     287   24        5600     290   20
C5315     8009     260   1         9378     191   24        9127     186   30
C6288     14178    77    4         15136    96    0         14318    73    2
C7552     9376     420   4         11837    300   142       11617    274   147
Total     49422    2059  92        56536    1724  366       54232    1750  318
Percent                  0.19%                    0.65%                    0.59%

F_Ri+Rj – the number of stuck-at faults of the two realizations Ri and Rj
U_Ri+Rj – the number of undetected faults of the two realizations Ri and Rj
TRi – the test size of the two merged sets for the realization Ri

The average percentage of undetected faults in the case of the double-detection test sets declined more than twofold (Figure 8). The maximum percentage of undetected faults also declined: to 2.2% for the test set of the realization R1, to 4.2% for the realization R2, and to 2.9% for the realization R3 (Figure 9).

Figure 8. The average percent of undetected faults

Figure 9. The maximum percent of undetected faults

The test reuse for other realizations in most cases detects on average more than 98% of all stuck-at faults. The maximum percentage of undetected faults is significantly higher than the average percentage. The double test sets reduced both the maximum and the average percentage of undetected faults almost twofold.

IV. Fault models based on input-output path testing

Different fault models based on input-output path testing (as noted above) have been suggested [14,15]. We provide another presentation of the main concepts. Let the circuit have a set of inputs X = {x1, x2, ..., xi, ..., xn} and a set of outputs Z = {z1, z2, ..., zj, ..., zm}. The pin fault model considers the stuck-at-0/1 faults occurring at the module boundary, and has a weak correlation with the circuit's physical faults. We write xi1 and xi0 for the input stuck-at-1/0 faults, and zj1 and zj0 for the output stuck-at-1/0 faults. There are 2n + 2m possible pin faults. Input-output pin stuck-at fault pairs (xit, zjk), t = 0,1, k = 0,1, are called pin pair faults (PP). The number of possible pin pair faults of the circuit is at most 4*n*m. We denote the set of pin pair faults by P1 = { (xit, zjk) | i = 1,…,n, j = 1,…,m, t = 0,1, k = 0,1 }. A test vector detects the pin pair fault (xit, zjk) of the module if it detects both pin faults xit and zjk of the pair on the output zj of the module. It may happen that there is no electrical connection between the input and the output, so the pin pair fault defined by them cannot be detected; such faults are not testable. The PP fault (xit, zjk) of a module is testable if a conventional deterministic test generator for a realization of the module finds a test vector that detects the pin fault xit on the output zj while the input xi and the output zj are set to ¬t and ¬k. The number of testable PP faults equals 4*n*m minus the number of non-testable PP faults. The connectivity rate is the relation between the number of testable PP faults and the total number of possible PP faults, and is computed as follows:

Connectivity rate = No. of testable PP faults / (4*n*m)

Note that in general it is not possible to relate a PP fault to the defects of the module unambiguously, because the PP fault does not fix the exact signal propagation path in the circuit. The set of PP faults of the function F (see Table I) includes the faults P1 = {(a1,y1), (a0,y0), (d0,y0), (d1,y1), (b1,y1), (b1,y0), (b0,y1), (b0,y0), (c1,y1), (c1,y0), (c0,y1), (c0,y0)}. The six test vectors 1010, 1110, 0011, 0111, 1100, 0101 detect all the PP faults: the test vector 1010 detects the PP faults (b1,y0), (a0,y0), the test vector 1110 detects the PP faults (b0,y1), (c0,y1), the test vector 0011 detects the PP fault (b1,y1), and so on. The six test vectors detect all the stuck-at faults for the realization of the function F in Figure 2. The deterministic test generator defines the set of testable PP faults. The input pin faults detectable on the output zj establish the inputs connected to the output zj. The comparison of the testable PP fault set size with the stuck-at fault set size of the original benchmark circuit realizations, and the comparison of the lengths of the test sets, are given in Figures 10 and 11 according to Table VII. Note that the numbers of stuck-at faults and the numbers of PP faults are of the same order. The test sets for the PP faults are in all cases larger than the test sets for the stuck-at faults

significantly. Note that the test sets for the PP faults were obtained using the random search procedure.

Table VII. Stuck-at and PP faults

          Test generation for stuck-at faults    Test generation for PP faults
Circuit   Stuck-at faults   Test size            Testable PP faults   Test size
C432      507               63                   540                  122
C499      750               63                   5184                 1053
C880      942               54                   1326                 379
C1355     1566              92                   5184                 1023
C1908     1862              123                  3004                 620
C2670     1990              113                  3320                 461
C3540     3126              172                  2588                 513
C5315     5248              130                  10540                1113
C6288     7638              34                   3068                 246
C7552     7039              209                  11736                2000

Figure 10. The comparison of the number of the faults
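The PP fault detection procedure can be sketched on a black-box function. The code below uses a small hypothetical 3-input function d = a·b + b·c (not one of the benchmarks; all names are ours): a vector v detects the PP fault formed by input xi stuck at ¬vi and the output stuck at ¬f(v) exactly when flipping input i changes the output, so testable PP faults and the connectivity rate can be counted exhaustively.

```c
#include <assert.h>

#define N 3   /* number of inputs */

/* Illustrative black-box function: d = a*b + b*c,
   inputs packed as bit0 = a, bit1 = b, bit2 = c. */
static int f(unsigned v) {
    int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
    return (a & b) | (b & c);
}

/* v detects the PP fault (xi stuck-at ~v_i, output stuck-at ~f(v))
   exactly when flipping input i changes the output. */
static int detects(unsigned v, int i) { return f(v ^ (1u << i)) != f(v); }

/* Count testable PP faults exhaustively and return the connectivity
   rate in percent: 100 * testable / (4*n*m), here m = 1 output. */
int connectivity_percent(void) {
    int testable = 0;
    for (int i = 0; i < N; i++)
        for (int t = 0; t <= 1; t++)        /* input stuck-at value */
            for (int k = 0; k <= 1; k++) {  /* output stuck-at value */
                for (unsigned v = 0; v < (1u << N); v++)
                    if (((v >> i) & 1) == !t && f(v) == !k && detects(v, i)) {
                        testable++;
                        break;
                    }
            }
    return 100 * testable / (4 * N * 1);
}
```

For this function only 6 of the 4·3·1 = 12 possible PP faults are testable, giving a connectivity rate of 50%.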
Figure 11. The size of the test sets for the stuck-at and PP faults

The numbers of undetected stuck-at faults of the three realizations for the PP fault test sets of the benchmark circuits are given in Table VIII. The average percentage of undetected faults does not exceed 0.7%, but the maximum percentage reaches 3.9%. The detection of the PP faults by the test sets generated deterministically for the stuck-at faults (D) and by the test sets generated randomly and deterministically (R&D) is given in Table IX. The test sets that detect 100% of the stuck-at faults of the benchmark circuits detect on average only about 60% of the PP faults. The benchmark results have shown that test sets generated according to the PP fault model can obtain a high coverage of gate-level stuck-at faults. However, the PP fault coverage of the test sets targeted at stuck-at faults is very low. This implies that a test set based on the PP fault model covers far more than the single stuck-at faults. It is very likely that the test vectors based on the PP fault model can cover other kinds of faults, such as bridging faults, multiple stuck-at faults and stuck-at faults of different circuit realizations. A pin pair fault may correspond to several physical signal propagation paths between an input and an output, so the testing of pin pair faults cannot guarantee the detection of all stuck-at faults of the module; therefore the n-detection of PP faults makes sense. Changing the value of the sensitive input of a test pattern that detects the PP fault (xit, zjk) generates the adjacent test pattern, which detects the PP fault (xi¬t, zj¬k). The sizes of the test sets supplemented with the adjacent test vectors are given in Table X, together with the numbers of undetected stuck-at faults of the three realizations. The supplemented PP fault test sets detect more stuck-at faults: the average percentage of undetected faults does not exceed 0.3%, and the maximum percentage reaches only 0.9%. We see that the test sets for the PP faults and the supplemented test sets detect a smaller percentage of stuck-at faults for the original realization R1. This may be explained as follows: the original circuits were synthesized long ago, while the benchmark circuits re-synthesized by Synopsys are more optimized and have fewer stuck-at faults. It means that the fault coverage of the test sets generated for the black-box model depends on the optimization level during the synthesis of the circuits.

Table VIII. The undetected stuck-at faults of three realizations

Circuit   Undetected of R1   Undetected of R2   Undetected of R3
C432      20 (3.9%)          1                  6
C499      0                  0                  0
C880      1                  0                  0
C1355     0                  0                  1
C1908     52 (2.8%)          7                  5
C2670     8                  2                  6
C3540     38 (1.2%)          12 (0.5%)          27 (1.1%)
C5315     0                  0                  0
C6288     0                  0                  0
C7552     90 (1.3%)          18 (0.4%)          21 (0.4%)
Total     214 (0.7%)         40 (0.2%)          66 (0.3%)

R1 – The non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db

Table IX. The detection of the PP faults

                     D                     R&D
Circuit   PP faults  Detected   %         Detected   %
C432      540        406        75.13     467        86.48
C499      5184       1510       29.13     1434       27.66
C880      1326       837        63.12     862        65.01
C1355     5184       2718       52.43     2638       50.89
C1908     3004       1521       50.63     1740       57.92
C2670     3320       2204       66.38     2142       64.51
C3540     2588       2051       79.31     2036       78.73
C5315     10540      7148       67.88     7305       69.30
C6288     3068       2353       76.74     2393       77.99
C7552     10868      –          –         –          –
Total     45016      20748      59.69     21017      60.47

D – Synopsys deterministic test patterns for the realization R1
R&D – Synopsys random and deterministic test patterns for the realization R1

Table X. The undetected stuck-at faults of the supplemented test sets

Circuit   Test size   Undetected of R1   Undetected of R2   Undetected of R3
C432      468         4                  0                  0
C499      6221        0                  0                  0
C880      1381        1                  0                  0
C1355     6186        0                  0                  0
C1908     2968        14 (0.8%)          4                  2
C2670     1922        2                  2                  0
C3540     1997        9                  4                  7
C5315     5507        0                  0                  0
C6288     1262        0                  0                  0
C7552     5004        62 (0.9%)          14                 12
Total     32916       92 (0.3%)          24 (0.1%)          21 (0.1%)

R1 – The non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db

In order to present the percentage of undetected faults in a uniform way for the PP faults and for the test reuse on the other realizations, the average and the maximum percentages of undetected faults for each realization pair were computed and are given in Table XI. The comparison of the percentages of undetected faults for one-detection and double-detection stuck-at and PP fault test sets can be seen in Figures 12 and 13. According to the average and the maximum of the undetected faults, the test sets generated for the PP faults are comparable with the reused double-detection test sets for the stuck-at faults. Note that the PP test sets are generated for the black-box model of the circuits, where the gate-level implementation details are unavailable. The black-box model represents the system by defining the behavior of its outputs according to the values applied to its inputs, without knowledge of its internal organization. A black-box model of the ISCAS'85 benchmark circuits written in the C programming language was used for the test generation for the PP faults.

Table XI. The maximum and the average percent of the undetected faults

                                           P_R2+R3             P_R1+R3             P_R1+R2
The test sets                              Average   Maximum   Average   Maximum   Average   Maximum
One-detection stuck-at fault test sets     0.75      5.50      1.27      6.66      1.34      4.92
Double-detection stuck-at fault test sets  0.19      2.20      0.65      4.21      0.59      2.92
One-detection PP fault test sets           0.21      0.78      0.49      2.69      0.46      2.25
Double-detection PP fault test sets        0.09      0.28      0.20      0.63      0.21      0.66

P_Ri+Rj – the percent of the undetected faults of the two realizations Ri and Rj


Figure 12. The comparison of the average percent of the undetected faults

Figure 13. The comparison of the maximum percent of the undetected faults

The PP fault model requires the sensitisation of a path between an input and an output at least once. The sensitisation of pairs of paths would increase the rate of separate path sensitisations. The pin triplets have such a property.

The input-input-output pin stuck-at fault triplets (xit, yhp, zjk), t = 0,1, p = 0,1, k = 0,1, are called pin triplet faults (PT). The number of possible pin triplet faults of the circuit is at most 4*n*n*m. We denote the set of pin triplet faults by P2 = { (xit, yhp, zjk) | i = 1,…,n, h = 1,…,n, j = 1,…,m, t = 0,1, p = 0,1, k = 0,1 }. A test vector detects the pin triplet fault (xit, yhp, zjk) of the module if it detects both pin pair faults (xit, zjk) and (yhp, zjk) of the triplet on the output zj of the module. The pin triplet fault requires the sensitisation of two paths from the inputs to the output by the same test vector. All possible pairs of sensitisation paths are considered. The PT fault covers the PP fault when xi and yh are the same input: the pin stuck-at fault triplet (xi, xi, zj) corresponds to the pin stuck-at fault pair (xi, zj). The test sets for the PT faults were computed using the black-box model and the random search. The test sizes and the undetected stuck-at faults for the three realizations are given in Table XII. The number of detected PT faults is provided as well.

Table XII. The testing of the PT faults

Circuit   Test size   PT faults   UnR1   NTR1   UnR2   NTR2   UnR3   NTR3
C432      873         56800       0      77     0      67     0      69
C499      3458        415619      0      65     0      91     0      103
C880      7011        138766      0      73     0      49     0      53
C1355     3481        415619      0      105    0      109    0      109
C1908     2901        169012      4      163    4      94     2      124
C2670     4575        417371      0      173    0      161    0      165
C3540     7175        225128      0      188    0      144    0      149
C5315     8578        519231      0      207    0      175    0      174
C6288     2016        152254      0      59     0      73     0      72
C7552     16934       1002169     49     346    4      266    4      238
Total     56961                   53     1456   8      1229   6      1256

NTRi – the number of test patterns selected by a fault simulation for the realization Ri
UnRi – the number of undetected faults for the realization Ri
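The PT detection condition (both component PP faults detected by the same vector) translates directly into code; a sketch on a small hypothetical 3-input black-box function d = a·b + b·c (all names are ours):

```c
#include <assert.h>

/* Illustrative black-box function: d = a*b + b*c,
   bit0 = a, bit1 = b, bit2 = c. */
static int f(unsigned v) {
    int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
    return (a & b) | (b & c);
}

static int sensitive(unsigned v, int i) { return f(v ^ (1u << i)) != f(v); }

/* v detects the PT fault formed by inputs xi, xh and the output when
   both component PP faults are detected on the same vector, i.e.
   both inputs are sensitized at once. */
static int pt_detects(unsigned v, int i, int h) {
    return sensitive(v, i) && sensitive(v, h);
}

/* Count the vectors of the exhaustive set that detect at least one
   non-degenerate PT fault (two distinct inputs sensitized at once). */
int pt_vectors(void) {
    int cnt = 0;
    for (unsigned v = 0; v < 8; v++) {
        int hit = 0;
        for (int i = 0; i < 3; i++)
            for (int h = i + 1; h < 3; h++)
                hit |= pt_detects(v, i, h);
        cnt += hit;
    }
    return cnt;
}
```

For this function, only three of the eight input vectors sensitize two distinct inputs simultaneously; all other vectors yield at most a degenerate triplet (xi, xi, d).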

We see that the test sets for the input-input-output pin stuck-at fault triplets cover almost all stuck-at faults of all realizations. The generated test sets are suitable for all realizations, but their size is huge. The test patterns necessary for each realization can be selected from the generated test sets by a fault simulation, and the numbers of selected test patterns are given in Table XII as well. For comparison, Table XIII gives the lengths of the test sets generated deterministically and of those selected by fault simulation from the PT fault test sets. We see that the test sets generated for the black box are only about one and a half times longer.

Table XIII. The length of test sets

          Test sets generated       Test sets selected from
          for stuck-at faults       the PT fault test sets
Circuit   R1    R2    R3            R1    R2    R3
C432      57    46    45            77    67    69
C499      54    74    80            65    91    103
C880      62    49    50            73    49    53
C1355     86    83    80            105   109   109
C1908     118   57    75            163   94    124
C2670     105   120   116           173   161   165
C3540     167   143   147           188   144   149
C5315     130   99    89            207   175   174
C6288     43    47    34            59    73    72
C7552     211   146   138           346   266   238
Total     1033  864   854           1456  1229  1256

We will demonstrate with a simple example why a test set for the PP and PT faults does not guarantee the detection of the stuck-at faults of a realization. The circuit of the example is given in Figure 14. The circuit has one fanout, with branches b1 and b2. Table XIII lists all possible input vectors together with the PP faults, the fanout stuck-at faults and the PT faults detectable by each input vector.


Figure 14. The realization of the Table XIII function

Table XIII. The function of the example

No.   a   b   c   d   PP faults           Fanout faults   PT faults
0     0   0   0   0
1     0   0   1   0   (b1,d1)             b21             (b1,b1,d1)
2     0   1   0   0   (c1,d1) (a1,d1)                     (a1,c1,d1)
3     0   1   1   1   (b0,d0) (c0,d0)     b20             (b0,c0,d0)
4     1   0   0   0   (b1,d1)             b11             (b1,b1,d1)
5     1   0   1   0   (b1,d1)             b11 b21         (b1,b1,d1)
6     1   1   0   1   (b0,d0) (a0,d0)     b10             (a0,b0,d0)
7     1   1   1   1   (b0,d0)
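The example of Figure 14 can be simulated directly. The sketch below (our encoding: bit0 = a, bit1 = b, bit2 = c) injects the four stuck-at faults of the fanout branches b1 and b2 into the realization d = a·b1 + b2·c and counts how many of them a given test set misses:

```c
#include <assert.h>

/* Pack an input vector from its a, b, c values. */
#define V(a, b, c) ((unsigned)((a) | ((b) << 1) | ((c) << 2)))

/* Gate-level realization of Figure 14: d = a*b1 + b2*c, where b1 and
   b2 are the branches of the fan-out stem b.
   fault: 0 = fault-free, 1 = b1/0, 2 = b1/1, 3 = b2/0, 4 = b2/1. */
static int d(unsigned v, int fault) {
    int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
    int b1 = fault == 1 ? 0 : fault == 2 ? 1 : b;
    int b2 = fault == 3 ? 0 : fault == 4 ? 1 : b;
    return (a & b1) | (b2 & c);
}

/* Number of the four branch faults left undetected by a test set. */
int undetected(const unsigned *test, int len) {
    int miss = 0;
    for (int fault = 1; fault <= 4; fault++) {
        int hit = 0;
        for (int i = 0; i < len; i++)
            hit |= d(test[i], fault) != d(test[i], 0);
        miss += !hit;
    }
    return miss;
}
```

With the always-selected vectors 2, 3 and 6 plus vector 1 (or plus vector 4), one branch fault remains undetected; adding vector 5 instead detects all four branch faults.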

The input vectors 2, 3 and 6 will always be selected when detecting PP or PT faults. The (b1,d1) PP fault can be detected by one of the three vectors 1, 4 or 5. If the input vector 5 is selected, all fanout stuck-at faults are detected. If only the input vector 1 or 4 is selected, some stuck-at fault of the fanout remains undetected; only both vectors 1 and 4 together guarantee the detection of all stuck-at faults of the fanout. Neither the PP fault model nor the PT fault model can ensure the selection of both vectors 1 and 4, or the selection of the vector 5. Note that the adjacent input vectors of the test patterns on the sensitive inputs can improve the stuck-at fault detection further, as we will see in the next section. Despite the drawback of the fault models demonstrated above, the test sets generated according to these models for the whole benchmark circuits detect the stuck-at faults of any realization unexpectedly well. We see a few reasons why the results are surprisingly good. First of all, the functions of different outputs depend on the same inputs, and one stuck-at fault can be detected on several outputs; a test pattern for one input can detect stuck-at faults for other outputs. Actually, each test pattern can detect several PP faults, and as a result each PP fault may be tested more than once. We analyzed the parameters of the circuits in order to highlight what has an impact on the stuck-at fault coverage of the black-box test sets. The connectivity rates of the circuits can be seen in Table XIV. The summary number of undetected faults for the PP fault test sets and the supplemented test sets, the connectivity and output/input rates, and the number of all circuit paths [17] are given in Table XV and illustrated by Figure 14.

Table XIV. The connectivity rate of circuits

Circuit   Inputs n   Outputs m   4*n*m   Testable PP faults   Connectivity = testable / 4*n*m
C432      36         7           1008    540                  50%
C499      41         32          5248    5184                 99%
C880      60         26          6240    1326                 21%
C1355     41         32          5248    5184                 99%
C1908     33         25          3300    3004                 91%
C2670     157        64          40192   3320                 8.3%
C3540     50         22          4400    2588                 59%
C5315     178        123         87576   10540                12%
C6288     32         32          4096    3068                 75%
C7552     206        107         88168   10868                12%

Table XV. The parameters of circuits

Circuit   (O/I)*100 rate %   Connectivity rate %   Undetected faults
C432      19.4               50                    31
C499      78.0               99                    0
C880      43.3               21                    2
C1355     78.0               99                    1
C1908     75.8               91                    84
C2670     40.8               8.3                   20
C3540     44.0               59                    97
C5315     69.1               12                    0
C6288     100                75                    0
C7552     50                 12                    217

The numbers of circuit paths [17]: 17,264; 8,346,432; 1,458,114; 1,359,768; 57,353,342; 2,682,610; ~10^20; 1,452,986.


Figure 14. The impact of the circuit parameters on the number of the undetected faults

In Figure 14 we do not see a definite impact of the circuit parameters on the number of undetected faults for the test sets generated for the black-box model. We see only a correlation between the connectivity rate and the output/input rate of the circuits. It is likely that the complexity of the implemented functions plays the fundamental role in the number of undetected faults.

V. The properties of sensitive adjacent input vectors

Every pin in the circuit can have two stuck-at faults: stuck-at-1 and stuck-at-0. In order to detect the stuck-at faults of some pin, a sensitive path has to be created from the faulty pin to an output of the circuit. If such a path is created, all the stuck-at faults along this path are detected. The creation of a sensitive path requires two test vectors; if the sensitive path starts from an input of the circuit, these test vectors differ only in the value of the input from which the sensitive path starts.
Definition 1. Two input vectors are adjacent if they differ in the value of a single input. The Hamming distance between adjacent input vectors is one.
Definition 2. The adjacent input vectors V and V* are sensitive adjacent input vectors if the output vectors obtained in response to V and V* are different.
Each input vector of length n has n adjacent input vectors, some of which may be sensitive adjacent input vectors. For example, consider an input vector 01010 that produces an output vector 101. We compare the output responses of the adjacent input vectors 11010, 00010, 01110, 01000, 01011. Let these responses be 101, 101, 111, 101, 010. Comparing the output response 101 with the responses 111 and 010, we find that they differ, and we mark the input vectors 01110 and 01000 as sensitive adjacent input vectors of the considered input vector 01010. We refer to the input in which the vectors V and V* differ as a sensitive input, and to the outputs that assume different values under V and V* as sensitive outputs. Sensitive adjacent input vectors can be generated for each test pattern of the test set. Since a change in the value of a single input of sensitive adjacent input vectors changes the output vector, it is likely that the presence of a fault on a path from a sensitive input to a sensitive output will be detected.
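Definition 2 translates directly into code; a sketch for a hypothetical 3-input black-box function d = a·b + b·c (all names are ours):

```c
#include <assert.h>

/* Illustrative 3-input black-box function: d = a*b + b*c,
   bit0 = a, bit1 = b, bit2 = c. */
static int f(unsigned v) {
    int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
    return (a & b) | (b & c);
}

/* Collect the sensitive adjacent vectors of v, i.e. the
   Hamming-distance-1 neighbours whose response differs from f(v).
   Returns how many were found; they are stored in out[]. */
int sensitive_adjacent(unsigned v, int n, unsigned out[]) {
    int cnt = 0;
    for (int i = 0; i < n; i++) {
        unsigned w = v ^ (1u << i);   /* flip the value of input i */
        if (f(w) != f(v))
            out[cnt++] = w;
    }
    return cnt;
}
```

For the vector a = 0, b = 1, c = 1 the procedure finds two sensitive adjacent vectors (flipping b or c changes the output), while the all-zero vector has none.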
Sensitive adjacent input vectors are likely to be sensitive to the presence of a defect and are therefore likely to result in higher fault coverage. A design constraint that ensures full coverage of stuck-at faults in a two-level circuit realization is derived in [6]. A sum-of-products (SOP) cover E of the function z is a set of implicants of z = p1 + p2 + … + pk. An implicant pi of E is relatively essential if the result of deleting pi from E covers fewer minterms of z than E. An input vector v is relatively essential in p if p(v) = 1, but p*(v) = 0 for every other implicant p* ≠ p in E. A relatively essential implicant contains at least one relatively essential vector. A non-redundant SOP realization is composed of relatively essential prime implicants. The following lemma, proved in [6], states sufficient conditions for a test set to detect all stuck-at faults.

Lemma 1. A test set T detects all stuck-at faults in a non-redundant SOP circuit realization if T includes:
• at least one relatively essential vector v of every prime implicant p in E;
• at least one false vector adjacent to a minterm of p for every literal in p.

A test set that satisfies the conditions of Lemma 1 is likely to result in high fault coverage for multi-level circuit realizations as well. We investigate by how much the sensitive adjacent input patterns of test sets generated for the PP faults and by a deterministic test generator increase the fault coverage of different realizations of the ISCAS'85 benchmark circuits. The test sets generated for the PP faults of the black box model and the test sets extended with sensitive adjacent input vectors have been minimized by means of fault simulation.

All results of the experiment concerning sensitive adjacent vectors are presented in three tables: Table XVI, Table XVII, and Table XVIII. In Tables XVI and XVII, the test patterns were generated on the basis of the PP fault model according to the black box model of the circuit. This means that no information on the structure of the circuits was used during test pattern generation; only the functions of the circuits were taken into account. Random test pattern generation was used. When random test pattern generation is used, one could argue that the results can differ significantly from one run to another. In order to show the trustworthiness of the test generation results, the experiment was carried out twice for every circuit. Tables XVI and XVII differ in how the procedure for sensitive adjacent pattern generation was applied. Table XVI shows the results when the sensitive adjacent patterns were inserted into the initial sequence of test patterns, i.e. the order of generation was: an initial pattern, then its sensitive adjacent patterns, and so on for every initial test pattern. Table XVII shows the results when the adjacent patterns were appended to the end of the initial test sequence.
The mode of adding sensitive adjacent patterns has a big influence on the number of minimized test patterns after fault simulation. The second mode selects a much smaller number of minimized patterns. Therefore the second mode was deemed the right one and was used for the other experiments, including Table XVIII. The initial test patterns of Table XVIII were generated by an automatic test pattern generation tool for the circuit implementation R2, which denotes the optimised library class.db. For every circuit, the test patterns in the first line were generated in the deterministic mode; the test patterns in the second line were generated in the random mode plus the deterministic mode in order to reach 100% fault coverage. The left part of each table presents the results of test pattern generation before the application of the procedure for sensitive adjacent pattern generation; the right part shows the results with sensitive adjacent patterns. The test pattern selection was done for three implementations of every circuit. Two columns define every implementation of the circuit: the fault coverage and the number of minimized test patterns. The minimization of test patterns was based on the results of fault simulation. A simple rule was applied: a test pattern is valuable if it detects new faults. If we rearranged the initial test patterns, we would get a different number of minimized test patterns. The last three lines in every table were calculated in order to prove the trustworthiness of the test generation results. The first of them, "Average", holds the average of every column. The calculation of the values in the other two lines requires a longer explanation. As noted above, the experiment for every circuit was carried out twice. The average of the results was calculated for every circuit separately. Then the deviation from the corresponding average was calculated for every circuit and

expressed in percent. Finally, the average deviation, shown in the second line "Average deviation", was calculated. The last line "Maximum deviation" shows the maximum deviation in percent from the average. As we can see, the last two lines contain very small numbers, which means that the difference between separate generations is very small. These small numbers therefore confirm the trustworthiness of the test generation results.

If we look at the right part of the tables, we see bigger numbers than in the left part, except for the circuits that have 100% fault coverage initially (this exception does not hold for Table XVI). Such results mean that sensitive adjacent test patterns always add value to the fault coverage. This conclusion is valid for any implementation of the circuit. Sensitive adjacent patterns are especially good for the and_or implementation (R3). As we can see from the left part of Table XVI, 12 test sequences for 6 circuits of the R3 implementation did not have full (100%) fault coverage. After the application of the procedure for adjacent vector generation, only a single test sequence (circuit c7552) did not have full fault coverage for the R3 implementation. Another indicator that emphasizes the value of sensitive adjacent vectors is the number of undetected faults in the left and right parts of the table: the left part has 688 undetected faults jointly, whereas the right part has only 72. A very similar result can be confirmed in Table XVII (the left part – 670 undetected faults, the right part – 70 undetected faults).

Table XVI. The application of adjacent test vectors inside the initial test sequence (two rows per circuit: first and second generation run; columns before "Nr. of adjacent" – initial test sets, columns after it – test sets with sensitive adjacent patterns).

| Circuit | Nr. | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. | Nr. of adjacent | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. |
|---------|-----|------|--------|------|--------|------|--------|-----------------|------|--------|------|--------|------|--------|
| C432 | 109 | 96,1 | 63 | 98,8 | 58 | 98,9 | 60 | 986 | 100 | 81 | 100 | 68 | 100 | 72 |
| C432 | 120 | 95,5 | 70 | 97,4 | 60 | 97,8 | 65 | 1091 | 99,6 | 92 | 100 | 73 | 100 | 81 |
| C499 | 1035 | 100 | 67 | 100 | 87 | 100 | 96 | 36627 | 100 | 96 | 100 | 120 | 100 | 129 |
| C499 | 1045 | 100 | 67 | 100 | 93 | 100 | 100 | 37000 | 100 | 97 | 100 | 120 | 100 | 133 |
| C880 | 364 | 99,9 | 108 | 99,9 | 89 | 99,9 | 88 | 13962 | 100 | 203 | 100 | 178 | 100 | 180 |
| C880 | 358 | 99,5 | 102 | 99,5 | 91 | 99,6 | 89 | 13627 | 100 | 188 | 100 | 161 | 100 | 164 |
| C1355 | 1029 | 100 | 109 | 100 | 116 | 100 | 122 | 36411 | 100 | 139 | 100 | 153 | 100 | 160 |
| C1355 | 1028 | 100 | 106 | 100 | 118 | 100 | 122 | 36389 | 100 | 154 | 100 | 181 | 100 | 184 |
| C1908 | 613 | 97,7 | 139 | 98,9 | 82 | 99,3 | 101 | 16389 | 99,8 | 193 | 99,8 | 121 | 100 | 151 |
| C1908 | 622 | 97,0 | 137 | 99,0 | 85 | 99,3 | 107 | 16657 | 99,7 | 196 | 99,8 | 124 | 100 | 155 |
| C2670 | 422 | 99,4 | 155 | 99,3 | 142 | 99,5 | 146 | 25753 | 99,9 | 325 | 99,9 | 281 | 100 | 292 |
| C2670 | 459 | 99,9 | 160 | 99,7 | 159 | 99,9 | 160 | 29081 | 100 | 334 | 100 | 291 | 100 | 300 |
| C3540 | 497 | 98,9 | 202 | 99,0 | 174 | 98,8 | 173 | 14571 | 100 | 366 | 100 | 316 | 100 | 321 |
| C3540 | 525 | 98,3 | 212 | 98,8 | 187 | 98,6 | 190 | 15309 | 100 | 363 | 100 | 305 | 100 | 315 |
| C5315 | 1151 | 100 | 198 | 100 | 155 | 100 | 154 | 110221 | 100 | 520 | 100 | 468 | 100 | 480 |
| C5315 | 1161 | 100 | 203 | 100 | 153 | 100 | 151 | 110431 | 100 | 596 | 100 | 477 | 100 | 473 |
| C6288 | 238 | 100 | 49 | 100 | 64 | 100 | 63 | 7842 | 100 | 144 | 100 | 207 | 100 | 187 |
| C6288 | 263 | 100 | 52 | 100 | 63 | 100 | 75 | 8475 | 100 | 159 | 100 | 198 | 100 | 192 |
| C7552 | 1642 | 98,9 | 312 | 99,6 | 234 | 99,6 | 213 | 138087 | 99,7 | 702 | 100 | 520 | 100 | 554 |
| C7552 | 1727 | 98,9 | 318 | 99,5 | 245 | 99,6 | 222 | 144800 | 99,7 | 708 | 99,9 | 528 | 99,9 | 561 |
| Average | 720 | 99 | 142 | 99,5 | 123 | 99,5 | 125 | 40686 | 99,9 | 283 | 99,9 | 245 | 99,9 | 254 |

"Aver. (%) deviation" and "Max. (%) deviation" row values, in the column order of the original layout: 0,54; 0,03; 0,49; 0,03; 0,54; 0,03; 0,76; 0,56; 0,01; 0,76; 0,003; 0,64; 0,001; 0,36; 1,25; 0,09; 1,32; 0,18; 1,41; 1,14; 2,17; 1,51; 0,07; 1,70; 0,01; 2,09; 0,01; 1,74.

R1 – the non-redundant ISCAS'85 benchmark circuit
R2 – Synopsys Design Optimization, the target library – class.db
R3 – Synopsys Design Optimization, the target library – and_or.db
Nr. – the number of test patterns
% – the fault coverage

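The selection rule described above (a pattern is kept only if it detects new faults) and the deviation statistics of the last table rows can be sketched as follows; the `detects` oracle is a hypothetical interface to a fault simulator:

```python
def minimize_tests(tests, detects):
    """Greedy selection by fault simulation (a sketch): a pattern is
    kept only if it detects at least one fault not yet covered by the
    patterns kept before it.  `detects(t)` returns the set of faults
    that pattern t detects (a hypothetical fault-simulator oracle)."""
    covered, kept = set(), []
    for t in tests:
        new = detects(t) - covered
        if new:                         # detects new faults: valuable
            kept.append(t)
            covered |= new
    return kept

def deviation_stats(runs):
    """The "Average deviation" and "Maximum deviation" rows (a sketch).
    `runs` maps each circuit to the values of its repeated generation
    runs; a run's deviation is its percentage distance from that
    circuit's own average."""
    devs = []
    for values in runs.values():
        avg = sum(values) / len(values)
        devs += [abs(v - avg) / avg * 100 for v in values]
    return sum(devs) / len(devs), max(devs)

# hypothetical fault table: t2 detects nothing new after t1
table = {"t1": {"f1", "f2"}, "t2": {"f2"}, "t3": {"f3"}}
print(minimize_tests(["t1", "t2", "t3"], table.__getitem__))  # ['t1', 't3']

avg_dev, max_dev = deviation_stats({"c432": [109, 120], "c499": [1035, 1045]})
```

Reordering `tests` changes which patterns survive the greedy pass, which is why the two modes of adding adjacent patterns yield different minimized set sizes.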

Table XVII. The application of adjacent test vectors at the end of the initial test sequence (two rows per circuit: first and second generation run; columns before "Nr. of adjacent" – initial test sets, columns after it – test sets with sensitive adjacent patterns; abbreviations as in Table XVI).

| Circuit | Nr. | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. | Nr. of adjacent | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. |
|---------|-----|------|--------|------|--------|------|--------|-----------------|------|--------|------|--------|------|--------|
| C432 | 116 | 97 | 68 | 98,4 | 59 | 98,5 | 62 | 1053 | 100 | 81 | 100 | 66 | 100 | 68 |
| C432 | 128 | 95,9 | 65 | 98,1 | 61 | 98 | 62 | 1148 | 100 | 77 | 100 | 69 | 100 | 71 |
| C499 | 1035 | 100 | 67 | 100 | 87 | 100 | 96 | 36627 | 100 | 67 | 100 | 87 | 100 | 96 |
| C499 | 1045 | 100 | 67 | 100 | 93 | 100 | 100 | 37000 | 100 | 67 | 100 | 93 | 100 | 100 |
| C880 | 351 | 99,7 | 98 | 99,8 | 78 | 99,8 | 79 | 13496 | 100 | 101 | 100 | 80 | 100 | 81 |
| C880 | 344 | 99,6 | 104 | 99,8 | 85 | 99,7 | 85 | 13175 | 100 | 108 | 100 | 87 | 100 | 88 |
| C1355 | 1029 | 100 | 109 | 100 | 116 | 100 | 122 | 36411 | 100 | 109 | 100 | 116 | 100 | 122 |
| C1355 | 1028 | 100 | 106 | 100 | 118 | 100 | 122 | 36389 | 100 | 106 | 100 | 118 | 100 | 122 |
| C1908 | 648 | 96,8 | 132 | 98,6 | 78 | 98,7 | 93 | 17281 | 99,7 | 158 | 99,8 | 84 | 100 | 103 |
| C1908 | 608 | 97,3 | 135 | 98,6 | 80 | 98,9 | 103 | 16290 | 99,6 | 154 | 99,8 | 87 | 100 | 112 |
| C2670 | 460 | 99,9 | 160 | 99,7 | 159 | 99,9 | 160 | 29255 | 100 | 162 | 100 | 164 | 100 | 162 |
| C2670 | 428 | 99,7 | 155 | 99,7 | 147 | 99,8 | 150 | 27389 | 100 | 160 | 100 | 151 | 100 | 153 |
| C3540 | 496 | 98,8 | 229 | 99,4 | 189 | 99,1 | 190 | 14488 | 100 | 255 | 100 | 201 | 100 | 208 |
| C3540 | 520 | 98,9 | 211 | 99,2 | 183 | 98,8 | 184 | 15199 | 100 | 242 | 100 | 202 | 100 | 211 |
| C5315 | 1151 | 100 | 198 | 100 | 148 | 100 | 146 | 110541 | 100 | 198 | 100 | 148 | 100 | 146 |
| C5315 | 1161 | 100 | 203 | 100 | 153 | 100 | 151 | 110852 | 100 | 203 | 100 | 153 | 100 | 151 |
| C6288 | 268 | 100 | 50 | 100 | 62 | 100 | 68 | 8875 | 100 | 50 | 100 | 62 | 100 | 68 |
| C6288 | 263 | 100 | 52 | 100 | 63 | 100 | 75 | 8475 | 100 | 52 | 100 | 63 | 100 | 75 |
| C7552 | 1768 | 98,9 | 312 | 99,6 | 225 | 99,7 | 204 | 148323 | 99,7 | 350 | 99,9 | 240 | 100 | 216 |
| C7552 | 1727 | 98,9 | 318 | 99,5 | 245 | 99,6 | 222 | 144800 | 99,7 | 359 | 99,9 | 260 | 99,9 | 233 |
| Average | 729 | 99,1 | 142 | 99,5 | 122 | 99,5 | 124 | 41353 | 99,9 | 153 | 99,9 | 127 | 99,9 | 129 |

"Aver. (%) deviation" and "Max. (%) deviation" row values, in the column order of the original layout: 0,45; 0,03; 0,04; 0,008; 0,59; 0,02; 0,66; 0,46; 0,001; 1,23; 1,14; 1,02; 0,04; 1,07; 0,06; 1,28; 1,08; 0,01; 0,41; 0; 0,58; 0,001; 0,66; 0,84; 0; 1,05; 0,01; 1,22.

Table XVIII. The application of adjacent test vectors when the test sequence is generated by ATPG (two rows per circuit: deterministic mode, and random plus deterministic mode; columns before "Nr. of adjacent" – initial test sets, columns after it – test sets with sensitive adjacent patterns; abbreviations as in Table XVI).

| Circuit | Nr. | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. | Nr. of adjacent | R1 % | R1 Nr. | R2 % | R2 Nr. | R3 % | R3 Nr. |
|---------|-----|------|--------|------|--------|------|--------|-----------------|------|--------|------|--------|------|--------|
| C432 | 46 | 97,4 | 44 | 100 | 43 | 98,5 | 40 | 548 | 100 | 57 | 100 | 43 | 100 | 47 |
| C432 | 47 | 95,9 | 45 | 100 | 47 | 98,3 | 45 | 469 | 99,8 | 61 | 100 | 47 | 100 | 52 |
| C499 | 74 | 98,9 | 52 | 100 | 74 | 99,0 | 73 | 2837 | 100 | 56 | 100 | 74 | 100 | 81 |
| C499 | 78 | 99,2 | 57 | 100 | 78 | 98,2 | 74 | 3105 | 100 | 60 | 100 | 78 | 99,8 | 86 |
| C880 | 49 | 98,6 | 46 | 100 | 46 | 99,6 | 46 | 2031 | 100 | 58 | 100 | 46 | 100 | 49 |
| C880 | 51 | 96,9 | 50 | 100 | 49 | 99,2 | 49 | 2092 | 100 | 73 | 100 | 49 | 100 | 53 |
| C1355 | 83 | 99,5 | 77 | 100 | 83 | 100 | 79 | 3149 | 100 | 83 | 100 | 83 | 100 | 79 |
| C1355 | 100 | 99,5 | 92 | 100 | 98 | 99,3 | 97 | 3868 | 100 | 95 | 100 | 98 | 100 | 102 |
| C1908 | 57 | 91,2 | 56 | 99,7 | 57 | 96,1 | 54 | 1581 | 99,2 | 128 | 99,8 | 58 | 99,9 | 82 |
| C1908 | 60 | 91,5 | 59 | 99,8 | 59 | 96,7 | 58 | 1679 | 99,5 | 135 | 99,8 | 59 | 100 | 86 |
| C2670 | 120 | 98,9 | 106 | 100 | 116 | 99,5 | 116 | 7911 | 100 | 124 | 100 | 116 | 100 | 122 |
| C2670 | 120 | 98,8 | 109 | 99,9 | 117 | 99,5 | 118 | 7873 | 100 | 131 | 100 | 118 | 100 | 124 |
| C3540 | 143 | 98,0 | 141 | 100 | 138 | 99,7 | 137 | 4301 | 100 | 188 | 100 | 138 | 100 | 144 |
| C3540 | 144 | 98,2 | 143 | 100 | 144 | 99,7 | 143 | 4285 | 100 | 188 | 100 | 144 | 100 | 149 |
| C5315 | 99 | 98,7 | 96 | 99,7 | 97 | 99,7 | 96 | 9314 | 100 | 150 | 100 | 106 | 100 | 106 |
| C5315 | 92 | 98,6 | 91 | 99,8 | 90 | 99,8 | 88 | 8713 | 100 | 146 | 100 | 98 | 100 | 98 |
| C6288 | 47 | 100 | 32 | 100 | 43 | 99,9 | 43 | 1582 | 100 | 32 | 100 | 43 | 100 | 52 |
| C6288 | 49 | 100 | 49 | 100 | 49 | 99,6 | 49 | 1648 | 100 | 49 | 100 | 49 | 100 | 76 |
| C7552 | 146 | 96,8 | 143 | 99,9 | 142 | 99,7 | 129 | 13197 | 98,8 | 229 | 100 | 146 | 100 | 142 |
| C7552 | 154 | 97,3 | 147 | 99,9 | 149 | 99,7 | 140 | 13836 | 98,8 | 211 | 100 | 152 | 100 | 100 |
| Average | 88 | 97,7 | 82 | 99,9 | 86 | 99,1 | 84 | 4701 | 99,8 | 113 | 99,9 | 87 | 99,9 | 92 |

"Aver. (%) deviation" and "Max. (%) deviation" row values, in the column order of the original layout: 0,66; 0,06; 1,21; 0,004; 0,89; 0,04; 1,04; 0,88; 0,006; 2,32; 0,22; 5,25; 0,01; 2,07; 0,1; 2,56; 2,56; 0,04; 1,42; 0; 0,87; 0,004; 1,74; 5,25; 0; 2,07; 0,03; 4,67.

Some attention has to be paid to the results of Table XVIII. As said above, the initial test patterns were generated by an automatic test pattern generation tool for the implementation R2, so we would expect 100% fault coverage for every circuit of this implementation. However, the circuits C1908, C5315 and C7552 do not have 100% fault coverage initially. This can be explained as follows: the library class.db includes some hierarchical elements; the test generation was carried out at the hierarchical level, but the fault simulation was carried out at the gate level. Therefore some circuits do not initially have 100% fault coverage for the implementation R2. Despite this drawback, the addition of adjacent patterns gives 100% fault coverage for every implementation of every circuit, except the circuit C1908. Such a result only underlines the value of adjacent test patterns. Finally, Figure 15 shows how the number of test patterns depends on the test generation mode. As we could expect, the smallest number of test patterns for every circuit is obtained from the automatic test generation tool (ATPG). The largest number is obtained when the sensitive adjacent patterns are added inside the test sequence. The other two lines show that the addition of the adjacent test patterns at the end of the test sequence increases the number of test patterns only for some circuits that do not have full (100%) fault coverage.

[Figure 15: for each ISCAS'85 circuit (C432–C7552), the number of test patterns under four generation modes – ATPG, Black Box, Adjacent at the end, Adjacent inside.]

Figure 15. The number of test patterns and the test generation mode

We can draw the following conclusions from the results of the experiment:
1. The addition of the adjacent test patterns showed surprisingly good results – whenever the circuit did not have 100% fault coverage, the fault coverage was improved in every case;
2. The addition of the adjacent test patterns at the end of the test sequence is better than their insertion inside the test sequence.
On the basis of these conclusions we can make a recommendation concerning IP core test suites. When an IP core is supplied to the user, it is presented at the behavioural level; its gate-level implementation details are unavailable, and the user has to synthesize the gate-level description himself. Test suites are supplied together with the IP core. These test suites reflect the behaviour of the IP core and are devoted only to a particular gate-level implementation; they are not able to detect all faults of an arbitrary synthesized gate-level implementation. Therefore there is a problem of how to obtain a test for a re-synthesized gate-level implementation of the IP core. We suggest complementing the existing test suites of the IP core with sensitive adjacent patterns; the suitable test patterns for the synthesized gate-level implementation then have to be selected on the basis of fault simulation. Our experiment shows that such a complement would improve the test quality for any synthesized gate-level description of the IP core. We consider the use of sensitive adjacent patterns a very cheap way to adapt test patterns to a re-synthesized gate-level description of an IP core.
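The recommended flow, appending sensitive adjacent patterns at the end of the supplied suite and then selecting by fault simulation against the re-synthesized implementation, can be sketched as follows; the model, the fault names and the `detects` oracle are hypothetical:

```python
def adapt_ip_core_tests(tests, circuit, detects):
    """Sketch of the suggested IP-core flow: append the sensitive
    adjacent patterns of every supplied pattern at the end of the
    suite, then keep only patterns that detect new faults of the
    re-synthesized implementation.  `circuit` simulates the
    behavioural model; `detects` is a hypothetical fault-simulation
    oracle for the new gate-level implementation."""
    extended = list(tests)
    for t in tests:
        base = circuit(t)
        for i in range(len(t)):
            w = t[:i] + (t[i] ^ 1,) + t[i + 1:]   # adjacent pattern
            if circuit(w) != base:                # sensitive: append at end
                extended.append(w)
    covered, kept = set(), []
    for t in extended:
        new = detects(t) - covered
        if new:                                   # detects new faults
            kept.append(t)
            covered |= new
    return kept

# illustrative model (single AND output) and hypothetical fault table
toy = lambda v: (v[0] & v[1],)
faults = {(1, 1, 0): {"a/0"}, (0, 1, 0): {"a/1"}, (1, 0, 0): {"b/1"}}
print(adapt_ip_core_tests([(1, 1, 0)], toy, lambda t: faults.get(t, set())))
```

Because the adjacent patterns are appended after the supplied suite, the original patterns are considered first during selection, matching the better of the two modes compared above.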

VI. Realization-independent testing of sequential circuits

As already mentioned above, a core can be synthesized by different electronic design automation systems and mapped into different cell libraries and manufacturing technologies. In the previous sections we presented experimental results that show how the test set of the core covers the faults of new implementations of combinational circuits. Now we consider the same problem for sequential circuits. The original ITC'99 benchmark circuits [20] were chosen for the experiments. The combinational part of these circuits has been re-synthesized with the Synopsys Design Compiler program in the default mode and using an AND-NOT cell library of two inputs. Three realizations have been analyzed:
R1 – the non-redundant benchmark circuit

R2 – Synopsys Design Optimisation, the target library – class.db (default mode)
R3 – Synopsys Design Optimisation, the target library – and_or.db
The parameters of the original ITC'99 benchmark circuits are given in Table XIX and Figures 16 and 17. The columns are denoted as follows: Gates – the number of gates; FF – the number of flip-flops; PI – the number of primary inputs; PO – the number of primary outputs; Best fault coverage % – the best published stuck-at fault coverage of the original ITC'99 benchmark circuits, reached using the test generators Hitec, RAGE, TetraMAX or GATO; R1, R2, R3 – the number of stuck-at faults in the circuit realizations R1, R2 and R3, respectively; ∆ – the difference between the maximum and the minimum stuck-at fault numbers; % – the difference as a percentage of the maximum stuck-at fault number.

Table XIX. The parameters of the original ITC'99 benchmark circuits

| Circuit | Gates | FF | PI | PO | Best fault coverage % | R1 | R2 | R3 | ∆ | % |
|---------|-------|----|----|----|-----------------------|----|----|----|---|---|
| b01 | 40 | 5 | 4 | 2 | 100.00 | 268 | 246 | 278 | 32 | 12 |
| b02 | 18 | 4 | 3 | 1 | 99.22 | 128 | 126 | 128 | 2 | 2 |
| b03 | 111 | 30 | 6 | 4 | 73.24 | 822 | 782 | 784 | 40 | 5 |
| b04 | 394 | 66 | 13 | 8 | 89.58 | 2640 | 2614 | 2666 | 52 | 2 |
| b05 | 570 | 34 | 3 | 36 | 40.00 | 3362 | 2880 | 3132 | 482 | 14 |
| b06 | 48 | 9 | 4 | 6 | 94.15 | 346 | 334 | 336 | 12 | 3 |
| b07 | 321 | 51 | 3 | 8 | 50.00 | 2198 | 2302 | 2488 | 290 | 12 |
| b08 | 154 | 21 | 11 | 4 | 88.10 | 800 | 812 | 828 | 28 | 3 |
| b09 | 100 | 28 | 3 | 1 | 87.23 | 736 | 758 | 772 | 36 | 5 |
| b10 | 137 | 17 | 13 | 6 | 93.59 | 952 | 964 | 1078 | 126 | 12 |
| Total | | | | | | 12252 | 11818 | 12490 | | |

[Figure 16: bar chart of Gates, FF, PI and PO for the circuits b01–b10.]

Figure 16. The parameters of the original ITC'99 benchmark circuits

[Figure 17: bar chart of the number of stuck-at faults in the realizations R1, R2 and R3, and the difference ∆, for the circuits b01–b10.]

Figure 17. The number of stuck-at faults for each realization

We can see the number of stuck-at faults for each realization in Table XIX and Figure 17. The benchmark circuit realizations using the target library and_or.db have more stuck-at faults in total; the most optimized are the realizations using the target library class.db. But the difference between the maximum and the minimum numbers as a percentage of the maximum number of stuck-at faults, which varies from 2 to 14, is not as big as for the analyzed combinational ISCAS'85 benchmark circuits (from 8 to 53 percent).

Most sequential ATPG algorithms are direct extensions of combinational algorithms applied to the iterative logic array model [18] of the circuit under test. An advanced circuit description language like VHDL gives an opportunity to apply an iterative logic array model of a sequential circuit and, when a combinational ATPG is used, to manage the search space flexibly. A test generation approach for sequential circuits based on the iterative logic array model is described in [18]. We will briefly recall the main features of this approach. Each component of the state vector can have one of five values, namely {0, 1, X, D, notD}. If a test exists, 0 or 1 can replace the X value, hence only four values need to be considered. It is clear that in testing a circuit it is never necessary to enter some state twice, therefore each state vector can be restricted to be unique, and there are 4^n such unique states, where n is the number of state variables. The test generation procedure given in [18] consists of the following three steps:
1. Set k to 1 for the number of copies (combinational cells) of the iterative circuit model. Set the don't-care value X for all state variables of the first copy.
2. Construct the k-iterative array model of the circuit.
3. Apply the combinational ATPG to the k-iterative array model of the circuit.
If no test vectors are found for undetected faults, increment k by one and repeat step 2. Terminate when k > 4^n. If no test vectors are found for some faults, the circuit is redundant.
We implemented the procedure described above by means of the SYNOPSYS combinational ATPG for stuck-at faults. Of course, the extent of the state search space, 4^n, is purely theoretical and not applicable to real circuits. In our experiments we stopped incrementing the length k of the iterative logic array model when the application of the last twenty additional combinational copies of the circuit did not

increase the fault coverage, or when the SYNOPSYS combinational test generator was not able to deal with such an enlarged circuit. The combinational ATPG (SYNOPSYS) generates test vectors of the length k*PI, where k is the number of combinational copies and PI is the number of primary inputs of one copy. In order to apply these test vectors to the initial circuits, the test vectors were folded into test sequences of k test vectors, each of length PI. The test sets have been generated for each original ITC'99 benchmark circuit and then reused for the two other realizations. The computer Sun Ultra 5 was used for the test generation. The results of the experiment are presented in Table XX.

Table XX. The iterative logic array model. Undetected stuck-at faults for the realizations.

| Circuit | Nr. | Sequence length | Generation time |
|---------|-----|-----------------|-----------------|
| b01 | 18 | 12 | 1 sec. |
| b02 | 10 | 11 | |
| b03 | 33 | 16 | |
| b04 | 74 | 15 | |
| b05 | 55 | 257 | |
| b06 | 16 | 11 | |
| b07 | 34 | 289 | |
| b08 | 48 | 65 | |
| b09 | 37 | 81 | |
| b10 | 39 | 25 | |
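The k-iterative array model of steps 1 and 2 amounts to unrolling the circuit's next-state and output functions over k time frames; a minimal sketch under that reading, with a toy one-bit accumulator rather than one of the benchmark circuits:

```python
def unroll(next_state, output, k):
    """Sketch of the k-iterative (time-frame) expansion: one
    combinational copy per time frame.  `next_state(s, x)` and
    `output(s, x)` model a single copy; the result maps k input
    frames to k output frames from a given initial state (the
    all-X start state is approximated here by an explicit s0)."""
    def combinational(frames, s0=0):
        s, outs = s0, []
        for x in frames[:k]:           # copy i feeds its state to copy i+1
            outs.append(output(s, x))
            s = next_state(s, x)
        return outs
    return combinational

# toy 1-bit accumulator: s' = s XOR x, z = s AND x (purely illustrative)
acc = unroll(lambda s, x: s ^ x, lambda s, x: s & x, 3)
print(acc([1, 1, 1]))   # [0, 1, 0]
```

A combinational ATPG run on such an expanded function yields vectors of length k*PI, which are then folded back into test sequences of k vectors, as described above.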