Testing
Problems of Ideal Tests
- Ideal tests detect all defects produced in the manufacturing process.
- Ideal tests pass all functionally good devices.
- Very large numbers and varieties of possible defects need to be tested.
- It is difficult to generate tests for some real defects.
- Defect-oriented testing is an open problem.
Real Tests
- Based on analyzable fault models, which may not map onto real defects.
- Incomplete coverage of modeled faults due to high complexity.
- Some good chips are rejected; the fraction (or percentage) of such chips is called the yield loss.
- Some bad chips pass tests; the fraction (or percentage) of bad chips among all passing chips is called the defect level.
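The two fractions can be sketched in a few lines. The tallies below are hypothetical, and one common reading of the slide's fractions is assumed (yield loss measured against all good chips, defect level against all passing chips):

```python
# Yield loss and defect level from hypothetical test/quality tallies.
# Assumed definitions:
#   yield loss   = rejected good chips / all good chips
#   defect level = bad chips that passed / all chips that passed

def yield_loss(good_pass, good_fail):
    return good_fail / (good_pass + good_fail)

def defect_level(good_pass, bad_pass):
    return bad_pass / (good_pass + bad_pass)

print(yield_loss(good_pass=9000, good_fail=100))   # 100/9100, about 0.011
print(defect_level(good_pass=9000, bad_pass=45))   # 45/9045, about 0.005
```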
Testing as Filter Process
[Figure: fabricated chips are split by the test into mostly good chips and mostly bad chips. Prob(good) = y, Prob(bad) = 1 - y; good chips pass the test with high probability and fail with low probability, while bad chips fail with high probability and pass with low probability.]
Costs of Testing
- Design for testability (DFT)
  - Chip area overhead and yield reduction
  - Performance overhead
- Software processes of test
  - Test generation and fault simulation
  - Test programming and debugging
- Manufacturing test
  - Automatic test equipment (ATE) capital cost
  - Test center operational cost
Design for Testability (DFT)
- DFT refers to hardware design styles or added hardware that reduces test generation complexity.
- Motivation: test generation complexity increases exponentially with the size of the circuit.
- Example: test hardware applies tests to logic blocks A and B and to the internal bus between them; this avoids test generation for the combined A and B blocks.
[Figure: logic block A and logic block B connected by an internal bus, with PI/PO and added test input/output paths.]
Cost of Manufacturing Testing in 2000 AD
- 0.5-1.0 GHz, analog instruments, 1,024 digital pins: ATE purchase price = $1.2M + 1,024 x $3,000 = $4.272M
- Running cost (five-year linear depreciation) = Depreciation + Maintenance + Operation = $0.854M + $0.085M + $0.5M = $1.439M/year
- Test cost (24-hour ATE operation) = $1.439M/(365 x 24 x 3,600 seconds) = 4.5 cents/second
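The arithmetic on this slide can be checked directly (pure arithmetic, no assumptions beyond the figures quoted above):

```python
# Reproduce the ATE cost figures from the slide.
purchase = 1.2e6 + 1024 * 3000               # $4.272M purchase price
depreciation = purchase / 5                  # five-year linear depreciation
running = depreciation + 0.085e6 + 0.5e6     # + maintenance + operation

seconds_per_year = 365 * 24 * 3600
cents_per_second = running / seconds_per_year * 100

print(f"purchase = ${purchase / 1e6:.3f}M")          # $4.272M
print(f"running  = ${running / 1e6:.3f}M/year")      # about $1.439M/year
print(f"test cost = {cents_per_second:.2f} cents/s") # about 4.56 (slide rounds to 4.5)
```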
Automatic Test Equipment Components
Consists of:
- Powerful computer
- Powerful 32-bit Digital Signal Processor (DSP) for analog testing
- Test program (written in a high-level language) running on the computer
- Probe head (actually touches the bare or packaged chip to perform fault detection experiments)
- Probe card or membrane probe (contains electronics to measure signals on a chip pin or pad)
Verification Testing
- Ferociously expensive
- May comprise:
  - Scanning electron microscope tests
  - Bright-Lite detection of defects
  - Electron beam testing
  - Artificial intelligence (expert system) methods
  - Repeated functional tests
Characterization Test
- Worst-case test
- Choose a test that passes/fails chips
- Select a statistically significant sample of chips
- Repeat the test for every combination of two or more environmental variables
- Plot results in a Shmoo plot
- Diagnose and correct design errors
- Continue throughout the production life of the chips to improve the design and process and increase yield
Manufacturing Test
- Determines whether a manufactured chip meets specs
- Must cover a high percentage of modeled faults
- Must minimize test time (to control cost)
- No fault diagnosis
- Tests every device on the chip
- Test at the speed of the application, or at the speed guaranteed by the supplier
Burn-in or Stress Test
- Process: subject chips to high temperature and over-voltage supply while running production tests
- Catches:
  - Infant mortality cases: damaged chips that would fail in the first 2 days of operation; burn-in causes such bad devices to fail before chips are shipped to customers
  - Freak failures: devices having the same failure mechanisms as reliable devices
Sub-types of Tests
- Parametric: measures electrical properties of pin electronics (delay, voltages, currents, etc.); fast and cheap
- Functional: used to cover a very high percentage of modeled faults; tests every transistor and wire in digital circuits; long and expensive; the main topic of this tutorial
Fault Modeling
- Why model faults?
- Some real defects in VLSI and PCB
- Common fault models
- Stuck-at faults
  - Single stuck-at faults
  - Fault equivalence
  - Fault dominance and the checkpoint theorem
  - Classes of stuck-at faults and multiple faults
- Transistor faults
- Summary
Some Real Defects in Chips
- Processing defects: missing contact windows, parasitic transistors, oxide breakdown, etc.
- Material defects: bulk defects (cracks, crystal imperfections), surface impurities (ion migration), etc.
- Time-dependent failures: dielectric breakdown, electromigration, etc.
- Packaging failures: contact degradation, seal leaks, etc.

Ref.: M. J. Howes and D. V. Morgan, Reliability and Degradation: Semiconductor Devices and Circuits, Wiley, 1981.
Observed PCB Defects

Defect class             Occurrence frequency (%)
Shorts                   51
Opens                    1
Missing components       6
Wrong components         13
Reversed components      6
Bent leads               8
Analog specifications    5
Digital logic            5
Performance (timing)     5

Ref.: J. Bateson, In-Circuit Testing, Van Nostrand Reinhold, 1985.
Common Fault Models
- Single stuck-at faults
- Transistor open and short faults
- Memory faults
- PLA faults (stuck-at, cross-point, bridging)
- Functional faults (processors)
- Delay faults (transition, path)
- Analog faults
- etc.
Single Stuck-at Fault
Three properties define a single stuck-at fault:
- Only one line is faulty.
- The faulty line is permanently set to 0 or 1.
- The fault can be at an input or output of a gate.
Example: an XOR circuit has 12 fault sites and 24 single stuck-at faults.
[Figure: XOR circuit with a test vector for the internal line h stuck-at-0; the good circuit value and faulty circuit value differ at output z, shown as 0(1)/1(0).]
Fault Equivalence
- Number of fault sites in a Boolean gate circuit = #PIs + #gates + #fanout branches.
- Fault equivalence: faults f1 and f2 are equivalent if all tests that detect f1 also detect f2, and vice versa.
- If faults f1 and f2 are equivalent, then the corresponding faulty functions are identical.
- Fault collapsing: all single faults of a logic circuit can be divided into disjoint equivalence subsets, where all faults in a subset are mutually equivalent. A collapsed fault set contains one fault from each equivalence subset.
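Equivalence collapsing can be sketched per gate (the gate set and counting helpers below are illustrative assumptions). For an AND/NAND/OR/NOR gate, the n input faults at the controlling value and one output fault form a single equivalence class, so n of the 2(n+1) faults collapse away; for an inverter or buffer, input and output faults collapse pairwise:

```python
# Single stuck-at fault counts before and after equivalence collapsing
# on primitive gates (a sketch; gate names are illustrative).

def gate_faults(n_inputs):
    """Total single stuck-at faults on an n-input gate: 2 per line."""
    return 2 * (n_inputs + 1)

def collapsed_gate_faults(gate_type, n_inputs):
    total = gate_faults(n_inputs)
    if gate_type in ("AND", "NAND", "OR", "NOR"):
        # n controlling-value input faults + 1 output fault form one class.
        return total - n_inputs
    if gate_type in ("NOT", "BUF"):
        return 2          # two equivalence classes of size 2
    raise ValueError(gate_type)

print(gate_faults(2), collapsed_gate_faults("AND", 2))    # 6 faults collapse to 4
print(gate_faults(3), collapsed_gate_faults("NAND", 3))   # 8 faults collapse to 5
```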
Equivalence Rules
[Figure: equivalence collapsing rules for AND, NAND, OR, NOR, WIRE, NOT, and FANOUT. For an AND gate, all input s-a-0 faults are equivalent to the output s-a-0; for an OR gate, all input s-a-1 faults are equivalent to the output s-a-1; NAND and NOR are analogous with the output value inverted; faults on the two ends of a wire or inverter collapse together; fanout branch faults remain distinct.]
Dominance Example
[Figure: example circuit with 32 single stuck-at faults; faults shown in red are removed by equivalence collapsing and faults shown in yellow by dominance collapsing, leaving 15 faults. Collapse ratio = 15/32 = 0.47.]
Fault Dominance
- If all tests of some fault F1 detect another fault F2, then F2 is said to dominate F1.
- Dominance fault collapsing: if fault F2 dominates F1, then F2 is removed from the fault list.
- When dominance fault collapsing is used, it is sufficient to consider only the input faults of Boolean gates (see the next example).
- In a tree circuit (without fanouts), PI faults form a dominance-collapsed fault set.
- If two faults dominate each other, then they are equivalent.
Dominance Example
[Figure: a gate with faults F1 (s-a-1) and F2 (s-a-1); the only test of F1 is among all tests of F2 (vectors 001, 110, 010, 000, 101, 100, 011 shown), so F2 dominates F1. The input faults form a dominance-collapsed fault set.]
Checkpoints
- Primary inputs and fanout branches of a combinational circuit are called checkpoints.
- Checkpoint theorem: a test set that detects all single (multiple) stuck-at faults on all checkpoints of a combinational circuit also detects all single (multiple) stuck-at faults in that circuit.
- Example: total fault sites = 16; checkpoints = 10.
Transistor (Switch) Faults
- The MOS transistor is considered an ideal switch, and two types of faults are modeled:
  - Stuck-open: a single transistor is permanently stuck in the open state.
  - Stuck-short: a single transistor is permanently shorted irrespective of its gate voltage.
- Detection of a stuck-open fault requires two vectors.
- Detection of a stuck-short fault requires the measurement of quiescent current (IDDQ).
Stuck-Open Example
[Figure: two-input CMOS gate (inputs A, B, output C) with a stuck-open pMOS transistor. A two-vector stuck-open test is constructed by ordering two stuck-at tests: vector 1 (test for A s-a-0) initializes the output, and vector 2 (test for A s-a-1) produces output 1 in the good circuit but a retained high-impedance value (Z) in the faulty circuit.]
Stuck-Short Example
[Figure: the same two-input CMOS gate with a stuck-short pMOS transistor. The test vector for A s-a-0 produces output 0 in the good circuit, but an indeterminate value (X) in the faulty circuit; the conducting IDDQ path from VDD to ground in the faulty circuit detects the fault through elevated quiescent current.]
Functional vs. Structural (Continued)
- Functional ATPG: generate a complete set of tests for all circuit input-output combinations.
  - 129 inputs, 65 outputs: 2^129 = 680,564,733,841,876,926,926,749,214,863,536,422,912 patterns
  - Using a 1 GHz ATE, this would take 2.15 x 10^22 years
- Structural test: no redundant adder hardware, 64 bit slices
  - Each slice has 27 faults (using fault equivalence)
  - At most 64 x 27 = 1,728 faults (tests)
  - Takes 0.000001728 s on a 1 GHz ATE
- The designer gives a small set of functional tests; augment it with structural tests to boost coverage to 98+%.
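The pattern counts quoted above follow from simple arithmetic:

```python
# Check the exhaustive-vs-structural numbers for the 64-bit adder example.
SECONDS_PER_YEAR = 365 * 24 * 3600
ATE_RATE = 1e9                       # 1 GHz: one pattern per nanosecond

exhaustive_patterns = 2 ** 129
years = exhaustive_patterns / ATE_RATE / SECONDS_PER_YEAR
print(exhaustive_patterns)           # 680564733841876926926749214863536422912
print(f"{years:.2e} years")          # about 2.16e22 years

structural_tests = 64 * 27           # 64 slices x 27 collapsed faults each
print(structural_tests, "tests,", structural_tests / ATE_RATE, "s")
# 1728 tests, 1.728e-06 s
```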
Exhaustive Algorithm
- For an n-input circuit, generate all 2^n input patterns.
- Infeasible unless the circuit is partitioned into cones of logic with at most ~15 inputs; perform exhaustive ATPG for each cone.
- Misses faults that require specific activation patterns for multiple cones to be tested.
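A minimal sketch of exhaustive ATPG for one cone. The cone is modeled here as a plain Boolean function, and the fault is injected by a wrapper function; both are illustrative assumptions, not part of the slide:

```python
from itertools import product

# Exhaustive ATPG sketch for a single cone of logic, modeled as a
# Boolean function of its (few) inputs. A fault is any modified
# function; here "input a stuck-at-0" is injected by forcing a = 0.

def good(a, b, c):
    return (a and b) or c

def faulty(a, b, c):                 # hypothetical fault: a s-a-0
    return good(0, b, c)

def exhaustive_tests(n, good_fn, faulty_fn):
    """All input patterns on which the fault changes the cone output."""
    return [v for v in product((0, 1), repeat=n)
            if good_fn(*v) != faulty_fn(*v)]

print(exhaustive_tests(3, good, faulty))   # [(1, 1, 0)]
```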
Random-Pattern Generation
- Flow chart for the method (figure omitted).
- Use random patterns to get tests for 60-80% of the faults, then switch to the D-algorithm or another ATPG for the rest.
History of Algorithm Speedups

Algorithm              Est. speedup over D-ALG    Year
                       (normalized to D-ALG time)
D-ALG                  1                          1966
PODEM                  7                          1981
FAN                    23                         1983
TOPS                   292                        1987
SOCRATES               1574 †                     1988
Waicukauski et al.     2189 †                     1990
EST                    8765 †                     1991
TRAN                   3005 †                     1993
Recursive learning     485                        1995
Tafertshofer et al.    25057                      1997

† ATPG system
Testability Measures
- Definition
- Controllability and observability
- SCOAP measures
  - Combinational circuits
  - Sequential circuits
- Summary
What are Testability Measures?
Approximate measures of:
- The difficulty of setting internal circuit lines to 0 or 1 from primary inputs.
- The difficulty of observing internal circuit lines at primary outputs.
Applications:
- Analysis of the difficulty of testing internal circuit parts; redesign or add special test hardware where needed.
- Guidance for algorithms computing test patterns: avoid using hard-to-control lines.
Testability Analysis
- Determines testability measures.
- Involves circuit topological analysis, but no test vectors (static analysis) and no search algorithm.
- Linear computational complexity; otherwise it is pointless, since one might as well use automatic test-pattern generation and calculate:
  - Exact fault coverage
  - Exact test vectors
SCOAP Measures
- SCOAP: Sandia Controllability and Observability Analysis Program
- Combinational measures:
  - CC0: difficulty of setting a circuit line to logic 0
  - CC1: difficulty of setting a circuit line to logic 1
  - CO: difficulty of observing a circuit line
- Sequential measures (analogous): SC0, SC1, SO

Ref.: L. H. Goldstein, "Controllability/Observability Analysis of Digital Circuits," IEEE Trans. CAS, vol. CAS-26, no. 9, pp. 685-693, Sep. 1979.
Range of SCOAP Measures
- Controllabilities: 1 (easiest) to infinity (hardest)
- Observabilities: 0 (easiest) to infinity (hardest)
- Combinational measures are roughly proportional to the number of circuit lines that must be set to control or observe a given line.
- Sequential measures are roughly proportional to the number of times flip-flops must be clocked to control or observe a given line.
Combinational Observability
To observe a gate input: observe the output and set the other input values to non-controlling values.
Observability Formulas (Continued)
Fanout stem: observe through the branch with the best observability.
Combinational Observability for Level 1
[Figure: circuit annotated with (CC0, CC1) CO values; the number in the square box is the level from the primary outputs (POs).]
Sequential Measures (Comparison)
- Combinational: increment CC0, CC1, CO whenever you pass through a gate, either forward or backward.
- Sequential: increment SC0, SC1, SO only when you pass through a flip-flop, either forward or backward.
- Both: must iterate on feedback loops until the controllabilities stabilize.
D Flip-Flop Equations
Assume a synchronous RESET line.
- SC1(Q) = SC1(D) + SC1(C) + SC0(C) + SC0(RESET) + 1
- SC0(Q) = min[SC1(RESET) + SC1(C) + SC0(C), SC0(D) + SC1(C) + SC0(C)] + 1
- SO(D) = SO(Q) + SC1(C) + SC0(C) + SC0(RESET)
D Flip-Flop Clock and Reset
- CO(RESET) = CO(Q) + CC1(Q) + CC1(RESET) + CC1(C) + CC0(C); SO(RESET) is analogous.
- Three ways to observe the clock line:
  1. Set Q to 1 and clock in a 0 from D
  2. Set the flip-flop and then reset it
  3. Reset the flip-flop and clock in a 1 from D
- CO(C) = min[CO(Q) + CC1(Q) + CC0(D) + CC1(C) + CC0(C),
             CO(Q) + CC1(Q) + CC1(RESET) + CC1(C) + CC0(C),
             CO(Q) + CC0(Q) + CC0(RESET) + CC1(D) + CC1(C) + CC0(C)]
- SO(C) is analogous.
Testability Computation
1. For all PIs, CC0 = CC1 = 1 and SC0 = SC1 = 0.
2. For all other nodes, CC0 = CC1 = SC0 = SC1 = ∞.
3. Go from PIs to POs, using the CC and SC equations to get controllabilities. Iterate on loops until SC stabilizes; convergence is guaranteed.
4. Set CO = SO = 0 for POs, and ∞ for all other lines.
5. Work from POs to PIs, using CO, SO, and the controllabilities to get observabilities.
6. For a fanout stem, (CO, SO) = min over branches of (CO, SO).
7. If a CC or SC (CO or SO) is ∞, that node is uncontrollable (unobservable).
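For a purely combinational circuit, the steps above reduce to one forward and one backward pass. A sketch follows; the netlist format and the tiny three-gate example are illustrative assumptions, and sequential measures and feedback loops are omitted:

```python
# SCOAP combinational measures (CC0, CC1, CO) for an acyclic netlist.
# Gates are (output, type, inputs) in levelized (PI-to-PO) order.

def scoap(pis, gates, pos):
    cc = {p: (1, 1) for p in pis}                  # step 1: PIs cost 1
    for out, typ, ins in gates:                    # step 3: forward pass
        c0 = [cc[i][0] for i in ins]
        c1 = [cc[i][1] for i in ins]
        if typ == "AND":
            cc[out] = (min(c0) + 1, sum(c1) + 1)
        elif typ == "OR":
            cc[out] = (sum(c0) + 1, min(c1) + 1)
        elif typ == "NOT":
            cc[out] = (c1[0] + 1, c0[0] + 1)
    co = {line: float("inf") for line in cc}       # step 4
    for p in pos:
        co[p] = 0                                  # POs cost 0
    for out, typ, ins in reversed(gates):          # step 5: backward pass
        for i in ins:
            others = [j for j in ins if j != i]
            if typ == "AND":     # other inputs must be non-controlling (1)
                cost = co[out] + sum(cc[j][1] for j in others) + 1
            elif typ == "OR":    # other inputs must be non-controlling (0)
                cost = co[out] + sum(cc[j][0] for j in others) + 1
            else:                # NOT
                cost = co[out] + 1
            co[i] = min(co[i], cost)               # step 6: stem = min branch
    return cc, co

# Example: d = AND(a, b); e = OR(d, c); e is the PO.
cc, co = scoap(["a", "b", "c"],
               [("d", "AND", ["a", "b"]), ("e", "OR", ["d", "c"])],
               ["e"])
print(cc["d"], cc["e"], co["a"])   # (2, 3) (4, 2) 4
```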
Testability Measures are Not Exact
- Exact computation of the measures is NP-complete and impractical.
[Figure: example circuit annotated with CC0,CC1 (CO) values; exact values (shown in green italics) differ from the SCOAP estimates (orange), and some lines that SCOAP rates as observable are actually unobservable (exact CO = ∞).]
Summary
Testability measures are approximate measures of:
- The difficulty of setting circuit lines to 0 or 1
- The difficulty of observing internal circuit lines
Applications:
- Analysis of the difficulty of testing internal circuit parts; redesign the circuit hardware or add special test hardware where the measures show poor controllability or observability.
- Guidance for algorithms computing test patterns: avoid using hard-to-control lines.
Exercise
Compute (CC0, CC1) and CO for all lines in the following circuit (figure omitted).
Questions:
1. Is the observability of the primary input correct?
2. Are the controllabilities of the primary outputs correct?
3. What do the observabilities of the input lines of the AND gate indicate?
Major Combinational Automatic Test-Pattern Generation Algorithms
- Definitions
- D-Algorithm (Roth), 1966
- PODEM (Goel), 1981
Forward Implication
- Results when enough logic gate inputs are labeled that the output is uniquely determined.
- AND gate forward implication table (figure omitted).
Backward Implication
Unique determination of all gate inputs when the gate output and some of the inputs are given.
Implication Stack
A push-down stack that records:
- Each signal set in the circuit by ATPG
- Whether the alternate signal value has already been tried
- The portion of the binary search tree already searched
Implication Stack after Backtrack
[Figure: implication stack holding 0/1 assignments to signals E, F, and B, with entries marked as unexplored, the present assignment, or searched and infeasible.]
Objectives and Backtracing of ATPG Algorithm
- Objective: the desired signal value goal for ATPG; guides it away from infeasible/hard solutions.
- Backtrace: determines which primary input and value to set to achieve the objective; uses testability measures.
D-Algorithm -- Roth, IBM (1966)
Fundamental concepts invented:
- First complete ATPG algorithm
- D-cube
- D-calculus
- Implications, forward and backward
- Implication stack
- Backtrack
- Test search space
Primitive D-Cube of Failure
- Models circuit faults:
  - Stuck-at-0
  - Stuck-at-1
  - Bridging fault (short circuit)
  - Arbitrary change in logic function
- AND output s-a-0: "1 1 D"
- AND output s-a-1: "0 X D̄" or "X 0 D̄"
- Wire s-a-0: "D"
- Propagation D-cube: models the conditions under which a fault effect propagates through a gate.
Implication Procedure
1. Model the fault with the appropriate primitive D-cube of failure (PDF).
2. Select propagation D-cubes to propagate the fault effect to a circuit output (D-drive procedure).
3. Select singular cover cubes to justify internal circuit signals (Consistency procedure).
- Put the signal assignments in the test cube.
- Regrettably, cubes are selected very arbitrarily by D-ALG.
D-Algorithm -- Top Level
1. Number all circuit lines in increasing level order from PIs to POs;
2. Select a primitive D-cube of the fault to be the test cube;
   put logic gates with inputs labeled D (D̄) onto the D-frontier;
3. D-drive ();
4. Consistency ();
5. return ();
D-Algorithm -- D-drive

while (untried fault effects on D-frontier)
    select next untried D-frontier gate for propagation;
    while (untried fault effect fanouts exist)
        select next untried fault effect fanout;
        generate next untried propagation D-cube;
        D-intersect selected cube with test cube;
        if (intersection fails or is undefined) continue;
        if (all propagation D-cubes tried & failed) break;
        if (intersection succeeded)
            add propagation D-cube to test cube -- recreate D-frontier;
            find all forward & backward implications of assignment;
            save D-frontier, algorithm state, test cube, fanouts, fault;
            break;
        else if (intersection fails & D and D̄ in test cube) Backtrack ();
        else if (intersection fails) break;
if (all fault effects unpropagatable) Backtrack ();
D-Algorithm -- Consistency

g = coordinates of test cube with 1's & 0's;
if (g is only PIs) fault testable & stop;
for (each unjustified signal in g)
    select highest-numbered unjustified signal z in g, not a PI;
    if (inputs to gate z are both D and D̄) break;
    while (untried singular covers of gate z)
        select next untried singular cover;
        if (no more singular covers)
            if (no more stack choices) fault untestable & stop;
            else if (untried alternatives in Consistency)
                pop implication stack -- try alternate assignment;
            else
                Backtrack ();
                D-drive ();
        if (singular cover D-intersects with z)
            delete z from g, add inputs of singular cover to g,
            find all forward and backward implications of new assignment, and break;
        if (intersection fails) mark singular cover as failed;
Backtrack

if (PO exists with fault effect) Consistency ();
else pop prior implication stack setting to try alternate assignment;
if (no untried choices in implication stack) fault untestable & stop;
else return;
Step 5 -- Example 7.2
[Figure: Consistency step with f = 0 (already set).]

Step 6 -- Example 7.2
[Figure: Consistency step setting c = 0 and e = 0.]

D-Chain Dies -- Example 7.2
[Figure: Consistency step setting B = 0; the D-chain dies. Test cube covers signals A, B, C, D, e, f, g, h, k, L.]
Example 7.3 -- Step 2, s s-a-1
[Figure: forward and backward implications after activating the s s-a-1 fault.]

Example 7.3 -- Step 3, s s-a-1
[Figure: propagation D-cube for Z; test found.]

Example 7.3 -- Step 2, u s-a-1
[Figure: forward and backward implications for the u s-a-1 fault.]
Inconsistent
- d = 0 and m = 1 cannot justify r = 1 (equivalence gate).
- Backtrack: remove the B = 0 assignment.
Example 7.3 -- Step 4, u s-a-1
[Figure: propagation D-cube for Z and its implications.]
PODEM -- Goel, IBM (1981)
New concepts introduced:
- Expand the binary decision tree only around primary inputs
- Use X-PATH-CHECK to test whether the D-frontier is still there
- Objectives: bring ATPG closer to propagating D (D̄) to a PO
- Backtracing
Motivation
- IBM introduced semiconductor DRAM memory into its mainframes in the late 1970s.
- The memory had error correction and translation circuits for improved reliability.
- D-ALG was unable to test these circuits:
  - Search too undirected
  - Large XOR-gate trees
  - Must set all external inputs to define the output
- A better ATPG tool was needed.
PODEM High-Level Flow
1. Assign a binary value to an unassigned PI.
2. Determine the implications of all PIs.
3. Test generated? If so, done.
4. Test possible with more assigned PIs? If maybe, go to Step 1.
5. Is there an untried combination of values on the assigned PIs? If not, exit: untestable fault.
6. Set an untried combination of values on the assigned PIs using objectives and backtrace; then go to Step 2.
Example 7.3 -- Step 2, s s-a-1
[Figure: initial objective is to set r to 1 to sensitize the fault.]

Example 7.3 -- Step 5, s s-a-1
[Figure: forward implications: d = 0, X = 1.]

Example 7.3 -- Step 8, s s-a-1
[Figure: set B to 1; implications in stack: A = 0, B = 1.]

Example 7.3 -- Step 9, s s-a-1
[Figure: forward implications: k = 1, m = 0, r = 1, q = 1, Y = 1, s = D, u = D, v = D, Z = 1.]

Backtrack -- Step 10, s s-a-1
[Figure: X-PATH-CHECK shows paths s - Y and s - u - v - Z blocked; the D-frontier has disappeared.]

Backtrack -- s s-a-1
[Figure: forward implications: d = 0, X = 1, m = 1, r = 0, s = 1, q = 0, Y = 1, v = 0, Z = 1. The fault is not sensitized.]

Backtrack -- s s-a-1
[Figure: forward implications: d = 0, X = 1, m = 1, r = 0. Conflict: the fault is not sensitized. Backtrack.]

Fault Tested -- Step 18, s s-a-1
[Figure: forward implications: d = 1, m = 1, r = 1, q = 0, s = D, v = D, X = 0, Y = D.]
Backtrace (s, vs) Pseudo-Code

v = vs;
while (s is a gate output)
    if (s is NAND or INVERTER or NOR) v = v̄;
    if (objective requires setting all inputs)
        select unassigned input a of s with hardest controllability to value v;
    else
        select unassigned input a of s with easiest controllability to value v;
    s = a;
return (s, v);   /* Gate and value to be assigned */
Objective Selection Code

if (gate g is unassigned) return (g, v);
select a gate P from the D-frontier;
select an unassigned input l of P;
if (gate P has a controlling value)
    c = controlling input value of P;
else if (0 value easier to get at input of XOR/EQUIV gate)
    c = 1;
else c = 0;
return (l, c̄);
PODEM Algorithm

while (no fault effect at POs)
    if (xpathcheck (D-frontier))
        (l, vl) = Objective (fault, vfault);
        (pi, vpi) = Backtrace (l, vl);
        Imply (pi, vpi);
        if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
        (pi, vpi) = Backtrack ();
        Imply (pi, v̄pi);
        if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
        Imply (pi, "X");
        return (FAILURE);
    else if (implication stack exhausted)
        return (FAILURE);
    else Backtrack ();
return (SUCCESS);
FAN -- Fujiwara and Shimono (1983)
New concepts:
- Immediate assignment of uniquely-determined signals
- Unique sensitization
- Stop backtrace at head lines
- Multiple backtrace
PODEM Fails to Determine Unique Signals
- The backtracing operation fails to set all 3 inputs of gate L to 1, causing unnecessary search.
FAN -- Early Determination of Unique Signals
- Determine all unique signals implied by the current decisions immediately; this avoids unnecessary search.
PODEM Makes Unwise Signal Assignments
- Blocks fault propagation due to the assignment J = 0.
Unique Sensitization of FAN with No Search
- FAN immediately sets the signals necessary to propagate the fault along the path over which the fault is uniquely sensitized.
Head Lines
- Head lines H and J separate the circuit into 3 parts, for which test generation can be done independently.
Multiple Backtrace
- FAN: breadth-first; passes through the circuit 1 time.
- PODEM: depth-first; passes through the circuit 6 times (in this example).
AND Gate Vote Propagation
- Easiest-to-control input: #0's = output #0's; #1's = output #1's.
- All other inputs: #0's = 0; #1's = output #1's.
[Figure: an AND gate with output votes [5, 3]; the easiest-to-control input receives [5, 3] and the other inputs receive [0, 3].]
Multiple Backtrace Fanout Stem Voting
- Fanout stem: #0's = sum of branch #0's; #1's = sum of branch #1's.
[Figure: a stem whose branches carry votes [5, 1], [1, 1], [3, 2], [4, 1], [5, 1]; the stem receives [18, 6].]
Multiple Backtrace Algorithm

repeat
    remove entry (s, vs) from current_objectives;
    if (s is a head line) add (s, vs) to head_objectives;
    else if (s not fanout stem and not PI)
        vote on gate s inputs;
        if (gate s input I is fanout branch)
            vote on stem driving I;
            add stem driving I to stem_objectives;
        else add I to current_objectives;
Rest of Multiple Backtrace

if (stem_objectives not empty)
    (k, n0 (k), n1 (k)) = highest-level stem from stem_objectives;
    if (n0 (k) > n1 (k)) vk = 0;
    else vk = 1;
    if ((n0 (k) != 0) && (n1 (k) != 0) && (k not in fault cone))
        return (k, vk);
    add (k, vk) to current_objectives;
    return (multiple_backtrace (current_objectives));
remove one objective (k, vk) from head_objectives;
return (k, vk);
True-Value Simulation Algorithms
- Compiled-code simulation
  - Applicable to zero-delay combinational logic
  - Also used for cycle-accurate synchronous sequential circuits for logic verification
  - Efficient for highly active circuits, but inefficient for low-activity circuits
  - High-level (e.g., C language) models can be used
- Event-driven simulation
  - Only gates or modules with input events are evaluated (an event means a signal change)
  - Delays can be accurately simulated for timing verification
  - Efficient for low-activity circuits
  - Can be extended for fault simulation
Compiled-Code Algorithm
Step 1: Levelize the combinational logic and encode it in a compilable programming language.
Step 2: Initialize internal state variables (flip-flops).
Step 3: For each input vector:
- Set primary input variables
- Repeat (until steady state or max. iterations): execute compiled code
- Report or save computed variables
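A compiled-code simulator is essentially the levelized netlist emitted as straight-line code. A minimal sketch, using a full adder as an illustrative example circuit (the example is an assumption, not from the slide):

```python
# Compiled-code true-value simulation: the levelized combinational
# logic of a full adder "compiled" into straight-line evaluations.

def full_adder(a, b, cin):
    # Straight-line code in level order: every gate evaluated exactly once.
    p = a ^ b          # level 1
    g = a & b
    s = p ^ cin        # level 2
    q = p & cin
    cout = g | q       # level 3
    return s, cout

# Step 3 of the algorithm: apply each input vector.
for vec in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(vec, "->", full_adder(*vec))
# (0,0,0)->(0,0)  (1,0,0)->(1,0)  (1,1,0)->(0,1)  (1,1,1)->(1,1)
```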
Event-Driven Algorithm (Example)
[Figure: circuit with gate delays 2, 2, 4, 2 and inputs a = 1, b = 1, c = 1→0, d = 0, e = 1, f = 0, g = 1. The input event c = 0 at t = 0 schedules events d = 1, e = 0; those schedule g = 0 and f = 1; finally g = 1 is scheduled at t = 8. The time stack shows, at each time point, the scheduled events and the activity lists {d, e}, {f, g}, {g}; the waveform of g is shown over t = 0 to 8.]
Time Wheel (Circular Stack)
[Figure: circular time wheel with slots t = 0, 1, 2, ... up to max; a current-time pointer indexes the slot whose event link-list is processed next.]
Efficiency of Event-Driven Simulator
- Simulates events (value changes) only.
- Speed-up over compiled code can be ten times or more; in large logic circuits, about 0.1 to 10% of gates become active for an input change.
[Figure: a 0-to-1 event at the input of a large logic block without activity produces no event at its steady-0 output.]
Fault Simulation Algorithms
- Serial
- Parallel
- Deductive
- Concurrent
- Differential
Serial Algorithm
Algorithm: simulate the fault-free circuit and save its responses. Repeat the following steps for each fault in the fault list:
- Modify the netlist by injecting one fault
- Simulate the modified netlist, vector by vector, comparing responses with the saved responses
- If a response differs, report fault detection and suspend simulation of the remaining vectors
Advantages:
- Easy to implement; needs only a true-value simulator and less memory
- Most faults, including analog faults, can be simulated
Serial Algorithm (Cont.)
- Disadvantage: much repeated computation; CPU time is prohibitive for VLSI circuits.
- Alternative: simulate many faults together.
[Figure: the test vectors drive the fault-free circuit and n copies of the circuit, each with one fault f1 ... fn; comparators against the fault-free responses report whether each fault is detected.]
Parallel Fault Simulation
- Compiled-code method; works best with two states (0, 1).
- Exploits the inherent bit-parallelism of logic operations on computer words.
- Storage: one word per line for two-state simulation.
- Multi-pass simulation: each pass simulates w - 1 new faults, where w is the machine word length.
- Speed-up over the serial method is roughly w - 1.
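The bit-parallel trick can be sketched in a few lines. The three-gate circuit and the two faults below are illustrative assumptions; a real simulator packs w - 1 faults per machine word:

```python
# Parallel fault simulation sketch: bit 0 = fault-free circuit,
# bit 1 = circuit with b s-a-0, bit 2 = circuit with c s-a-1.
W = 3
ALL = (1 << W) - 1

def replicate(v):
    """Same input value in every circuit copy."""
    return ALL if v else 0

def inject(word, sa0_mask=0, sa1_mask=0):
    """Force faulty bits: clear s-a-0 copies, set s-a-1 copies."""
    return (word & ~sa0_mask) | sa1_mask

# Circuit: d = a AND b;  g = d OR c.  Vector: a=1, b=1, c=0.
a = replicate(1)
b = inject(replicate(1), sa0_mask=0b010)   # b s-a-0 in copy 1
c = inject(replicate(0), sa1_mask=0b100)   # c s-a-1 in copy 2
d = a & b
g = d | c                                  # one word evaluates all 3 circuits

good = g & 1
detected = [i for i in range(1, W) if (g >> i) & 1 != good]
print(detected)   # [1]: only b s-a-0 changes the output for this vector
```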
Parallel Fault Simulation Example
[Figure: three-bit parallel simulation of a small circuit; bit 0 carries the fault-free circuit, bit 1 the circuit with c s-a-0, and bit 2 the circuit with f s-a-1. Comparing the output words against the fault-free bit shows that c s-a-0 is detected.]
Deductive Fault Simulation
- One-pass simulation.
- Each line k carries a list Lk of faults detectable on k.
- Following true-value simulation of each vector, the fault lists of all gate output lines are updated using set-theoretic rules, the signal values, and the gate input fault lists.
- The PO fault lists provide the detection data.
Limitations:
- Set-theoretic rules are difficult to derive for non-Boolean gates.
- Gate delays are difficult to use.
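The set-theoretic rule for an AND gate can be sketched as follows (the fault-name encoding "line/value" is an illustrative assumption): if no input is at the controlling value 0, a fault reaches the output iff it reaches any input; if some inputs are 0, it must reach all controlling inputs and no non-controlling input.

```python
# Deductive fault list propagation through an AND gate.
# values[i] is the true value on input i; lists[i] is its fault list Li.

def and_gate_list(values, lists, out):
    ctrl = [L for v, L in zip(values, lists) if v == 0]      # inputs at 0
    nonctrl = [L for v, L in zip(values, lists) if v == 1]   # inputs at 1
    if not ctrl:
        # Output is 1: any input fault flips it; output s-a-0 detectable.
        return set().union(*nonctrl) | {f"{out}/0"}
    # Output is 0: a fault must flip every controlling input without
    # flipping any non-controlling one; output s-a-1 detectable.
    result = set.intersection(*map(set, ctrl))
    if nonctrl:
        result -= set().union(*nonctrl)
    return result | {f"{out}/1"}

print(and_gate_list([1, 1], [{"a/0"}, {"b/0"}], "z"))
# {'a/0', 'b/0', 'z/0'}
print(and_gate_list([1, 0], [{"a/0"}, {"b/1"}], "z"))
# {'b/1', 'z/1'}
```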
Deductive Fault Simulation Example
[Figure: circuit with fault lists La = {a0}, Lb = {b0}, Lc = {b0, c0}, Ld = {b0, d0}; Le = La ∪ Lc ∪ {e0} = {a0, b0, c0, e0}; with Lf = {b0, d0, f1}, Lg = (Le - Lf) ∪ {g0} = {a0, c0, e0, g0}. Notation: Lk is the fault list for line k, and kn is the s-a-n fault on line k. The PO list gives the faults detected by the input vector.]
Concurrent Fault Simulation
- Event-driven simulation of the fault-free circuit and only those parts of the faulty circuits that differ in signal states from the fault-free circuit.
- A list per gate contains copies of the gate from all faulty circuits in which this gate differs. A list element contains the fault ID, the gate input and output values, and internal states, if any.
- All events of the fault-free and all faulty circuits are implicitly simulated.
- Faults can be simulated in any modeling style or detail supported in true-value simulation (offers the most flexibility).
- Faster than the other methods, but uses the most memory.
Concurrent Fault Simulation Example
[Figure: the same circuit simulated concurrently; each gate carries a list of faulty-circuit copies (for faults a0, b0, c0, d0, e0, f1, g0) whose input/output values differ from the fault-free gate.]
Fault Coverage and Efficiency

Fault coverage = (# of detected faults) / (total # of faults)

Fault efficiency = (# of detected faults) / (total # of faults - # of undetectable faults)
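In code the two formulas are one-liners (the example counts are hypothetical):

```python
# Fault coverage and fault efficiency from the formulas above.

def fault_coverage(detected, total):
    return detected / total

def fault_efficiency(detected, total, undetectable):
    return detected / (total - undetectable)

print(fault_coverage(9300, 10000))          # 0.93
print(fault_efficiency(9300, 10000, 400))   # 9300/9600, about 0.969
```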
Test Generation Systems
[Figure: a test generation system (e.g., SOCRATES with a fault simulator) takes a circuit description and a fault list, and produces test patterns via a compacter, along with undetected faults, redundant faults, aborted faults, and the backtrack distribution.]
Test Compaction Example
- t1 = 0 1 X
- t2 = 0 X 1
- t3 = 0 X 0
- t4 = X 0 1
Combine t1 and t3, then t2 and t4, to obtain:
- t13 = 0 1 0
- t24 = 0 0 1
The test length is shortened from 4 to 2.
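The merging rule on this slide (combine two patterns when no bit position has conflicting 0/1 values) can be sketched as:

```python
# Static compaction of test patterns with don't-cares ('X').

def merge(t1, t2):
    """D-intersection of two patterns; None if they conflict."""
    out = []
    for a, b in zip(t1, t2):
        if a == "X":
            out.append(b)
        elif b == "X" or a == b:
            out.append(a)
        else:
            return None           # conflicting 0/1 in this position
    return "".join(out)

print(merge("01X", "0X0"))   # t13 = 010
print(merge("0X1", "X01"))   # t24 = 001
print(merge("01X", "X01"))   # None: middle bit conflicts (1 vs 0)
```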
Test Compaction
- Fault simulate the test patterns in reverse order of generation:
  - ATPG patterns go first
  - Randomly-generated patterns go last (because they may have lower coverage)
- When coverage reaches 100%, drop the remaining patterns (the useless random ones).
- This significantly shortens the test sequence, an economic cost reduction.
Static and Dynamic Compaction of Sequences
- Static compaction
  - ATPG should leave unassigned inputs as X
  - Two patterns are compatible if they have no conflicting values for any PI
  - Combine two tests ta and tb into one test tab = ta ∩ tb using D-intersection
  - tab detects the union of the faults detected by ta and tb
- Dynamic compaction
  - Process every partially-done ATPG vector immediately
  - Assign 0 or 1 to PIs to test additional faults
Sequential Circuits
- A sequential circuit has memory in addition to combinational logic.
- A test for a fault in a sequential circuit is a sequence of vectors, which:
  - Initializes the circuit to a known state
  - Activates the fault, and
  - Propagates the fault effect to a primary output
- Methods of sequential circuit ATPG:
  - Time-frame expansion methods
  - Simulation-based methods
Example: A Serial Adder
[Figure: serial adder whose combinational logic computes sum Sn and carry Cn+1 from An, Bn, and Cn, with a flip-flop feeding the carry back; an s-a-0 fault inside the combinational logic is shown with the D/D̄ values needed to test it.]
Time-Frame Expansion
[Figure: the serial adder's combinational logic replicated in time-frame -1 and time-frame 0, with the s-a-0 fault present in both copies; the carry flip-flop becomes the connection Cn-1 → Cn between the frames, and the D/D̄ values propagate the fault effect to an output.]
Concept of Time-Frames
- If the test sequence for a single stuck-at fault contains n vectors:
  - Replicate the combinational logic block n times
  - Place the fault in each block
  - Generate a test for the multiple stuck-at fault using combinational ATPG with 9-valued logic
[Figure: time-frames -n+1 through 0, each a copy of the combinational block containing the fault; vectors -n+1 ... 0 are the inputs and POs -n+1 ... 0 the outputs. State variables connect each frame's next state to the following frame, starting from an unknown or given initial state.]
Five-Valued Logic (Roth): 0, 1, D, D̄, X
[Figure: two time-frames (-1 and 0) of a circuit with an s-a-1 fault, analyzed with Roth's five values 0, 1, D, D̄, X; inputs A and B carry X values in both frames, and flip-flops FF1 and FF2 carry D and D̄ between the frames.]