ABSTRACT
APPLICATION OF MACHINE LEARNING IN DIGITAL LOGIC CIRCUIT DESIGN VERIFICATION AND TESTING
Test pattern generation and fault simulation play an essential role in
the structural testing of integrated chips. Structural testing validates the
correctness of a circuit in terms of its gates and the interconnections between
them. The primary role of structural testing is to simulate the various operations
of the circuit. Several Electronic Design Automation (EDA) tools are available to
simulate circuits for structural testing, fault detection, and test pattern
generation. This thesis presents a new approach to fault detection and test pattern
generation in combinational circuits using machine learning techniques. A
machine learning model can be trained to predict the behavioral architecture
of a circuit. This model can predict the test patterns and the number of possible
faults given inputs such as the primary inputs and the number of gates required
for the circuit. In addition, the truth table of a design can be used by a machine
learning model to verify the functionality of a given circuit. The main purpose of
this research work is to use machine learning to develop a new approach to VLSI
testing and design verification of digital logic circuits.
Abhilasha Harsukhbhai Dave August 2018
APPLICATION OF MACHINE LEARNING IN DIGITAL LOGIC
CIRCUIT DESIGN VERIFICATION AND TESTING
by
Abhilasha Harsukhbhai Dave
A thesis
submitted in partial
fulfillment of the requirements for the degree of
Master of Science in Engineering
in the Lyles College of Engineering
California State University, Fresno
August 2018
APPROVED
For the Department of Electrical and Computer Engineering:
We, the undersigned, certify that the thesis of the following student meets the required standards of scholarship, format, and style of the university and the student's graduate degree program for the awarding of the master's degree. Abhilasha Harsukhbhai Dave
Thesis Author
Reza Raeisi (Chair) Electrical and Computer Engineering
Aaron Stillmaker Electrical and Computer Engineering
Hayssam El-Razouk Electrical and Computer Engineering
For the University Graduate Committee:
Dean, Division of Graduate Studies
AUTHORIZATION FOR REPRODUCTION
OF MASTER’S THESIS
X I grant permission for the reproduction of this thesis in part or in
its entirety without further authorization from me, on the
condition that the person or agency requesting reproduction
absorbs the cost and provides proper acknowledgment of
authorship.
Permission to reproduce this thesis in part or in its entirety must
be obtained from me.
Signature of thesis author:
ACKNOWLEDGMENTS
I would personally like to wholeheartedly thank my respected advisor, Dr.
Reza Raeisi, for his valuable time and the immense effort that made this project
successful. It is with his guidance and under his supervision that I was able
to acquire knowledge and apply my skills extensively. He has always provided
me with motivation and encouraged me to scale new heights.
Nevertheless, friends are family away from home. A special mention to my
near and dear friends and colleagues, Manroop Turna, Ketul Shah, Viral Pandya,
and Tanmay Parkar, for their outstanding support.
Last but never least, I would like to express my gratitude and appreciation
to my beloved parents, Mr. Harsukh Dave and Mrs. Malti Dave, without whom I
would not have been able to start my collegiate work, let alone complete it. Their
unconditional love and care, along with their emotional and financial support,
allowed me to showcase my best expertise in my field of study.
My faith in the almighty above has rewarded me with valuable returns that
I will cherish for life; the success I have gained upon completing this project is
one of them. I am deeply honored to be a student of the Lyles College of
Engineering and an alumna of California State University, Fresno. Go Dogs!
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
SIGNIFICANCE AND OBJECTIVE
CHAPTER 1: INTRODUCTION
CHAPTER 2: VLSI TESTING
  Levels of Testing
  Fault Modeling
CHAPTER 3: MACHINE LEARNING
  Unsupervised Learning
  Model and Cost Function
  Gradient Descent
CHAPTER 4: IMPLEMENTATION, RESULTS, AND DISCUSSION
  Implementation
  Results
  Discussion
  Summary
CHAPTER 5: CONCLUSION
CHAPTER 6: FUTURE WORK
REFERENCES
APPENDICES
APPENDIX A: VERILOG BENCH CODES OF BOOLEAN EQUATIONS
APPENDIX B: DATA SET
LIST OF TABLES

Table 1. Difference between verification and testing [13]
Table 2. Truth table of AND gate at S-A-1 fault
Table 3. Truth table of AND gate
Table 4. Singular cover of AND gate
Table 5. Truth table of OR gate
Table 6. Singular cover of OR gate
Table 7. D-intersection
Table 8. Singular cover of NOR gate [13]
Table 9. D-intersection of NOR gate for s-a-0 [13]
Table 10. PDC of AND gate for node A [13]
Table 11. PDC of AND gate for node B [13]
Table 12. Test pattern generation using D-Algorithm
Table 13. Training sets for housing price [18]
Table 14. Sample dataset for test patterns
Table 15. Comparison between actual faults present in the circuit and faults predicted by the ML model
Table 16. Objective summary
Table 17. Labeling the 3-input test patterns
Table 18. Data set example for 3-input
Table 19. Different combinations for 3-input faults
Table 20. Combinations of test patterns with respect to number of primary inputs
LIST OF FIGURES

Figure 1. The cost of VLSI testing [6]
Figure 2. Stages in VLSI testing
Figure 3. Stages of VLSI design abstraction
Figure 4. VLSI circuit fault types
Figure 5. Example of S-A-1 fault
Figure 6. Totem pole circuit
Figure 7. NAND implementation of two-input NOR gate [13]
Figure 8. Fan-out circuit of two-input NAND gate [13]
Figure 9. 3-input AND gate [13]
Figure 10. Fault equivalence of all logic gates [14]
Figure 11. Example of fault elimination
Figure 12. Terminologies associated with D-Algorithm
Figure 13. 2-input NOR gate
Figure 14. 2-input AND gate
Figure 15. Example circuit for D-Algorithm
Figure 16. Housing price prediction [18]
Figure 17. Tumor size vs. age [18]
Figure 18. Tumor size [18]
Figure 19. Datasets on genes [18]
Figure 20. Example of clustering in unsupervised learning [18]
Figure 21. Cocktail party problem [18]
Figure 22. Flow chart for the hypothesis
Figure 23. Straight-line representation for linear regression
Figure 24. Different hypothesis functions [18]
Figure 25. Hypothesis representation based on cost function [18]
Figure 26. 3-D plot of cost function [18]
Figure 27. Representation of 3-D into 2-D [18]
Figure 28. 3-D plot for gradient descent optimization, graph A [18]
Figure 29. 3-D plot of gradient descent optimization, graph B
Figure 30. Equation for the local optimum point [17]
Figure 31. Simulation code for gradient descent [18]
Figure 32. Different learning rates (alpha) [19]
Figure 33. Flow chart comparison between proposed machine learning method and existing EDA tool method
Figure 34. Work flow of data collection
Figure 35. RTL view of boolean equation [20]
Figure 36. Neural network for regression model [21]
Figure 37. TensorFlow model flow
Figure 38. Flow chart of model implementation
Figure 39. The actual output
Figure 40. Output prediction for test pattern generation
Figure 41. Behavior of adder
LIST OF ABBREVIATIONS
ATPG Automatic Test Pattern Generation
CSV Comma-Separated Values
DNN Deep Neural Network
DRC Design Rule Check
EDA Electronic Design Automation
FC Fault Coverage
ML Machine Learning
MIPS Million Instructions Per Second
OS Operating System
PDC Propagation D-Cubes
PDCF Primitive D-Cubes for a Fault
RTL Register-Transfer Level
VLSI Very Large-Scale Integration
SIGNIFICANCE AND OBJECTIVE
As the number of transistors per chip increases, the dimensions of the
transistors and of the interconnections between them decrease; these dimensions
are referred to as the feature size. The continuous shrinking of the feature size
increases the probability of manufacturing defects. Thus, it is essential to ensure
the correct behavior of a circuit, and fault testing of the circuit is therefore
necessary. The proposed ML technique provides a faster approach for predicting
the number of faults and the test patterns for a circuit, thereby reducing the time
needed to verify its correct behavior.
The ML models predict the number of faults and the test patterns from
information such as the number of gates and the number of primary inputs. Thus,
given very little information, the ML model can predict the number of faults and
the test patterns for a circuit. Likewise, given only the truth table of a circuit, the
behavior of the circuit can be predicted. For example, if an ML model is trained
to predict the output of a 2-bit adder, the same model can be used to predict the
output of 3-, 4-, or 5-bit adders. Therefore, using ML models in digital design
verification and testing can reduce verification time and improve the quality of
systems.
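As a rough illustration of this prediction setup (this is not the thesis's TensorFlow implementation), the sketch below fits a linear model that maps (number of primary inputs, number of gates) to a fault count. The training rows and labels are fabricated here using the collapsed-fault rule of thumb of two stuck-at faults per site; all names and numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical training data: each row is (number of primary inputs,
# number of gates) for a small combinational circuit. Labels are fabricated
# with the rule of thumb: 2 * (inputs + gates) stuck-at faults.
X = np.array([[2, 1], [2, 2], [3, 2], [3, 4], [4, 3], [4, 6], [5, 5]], dtype=float)
y = 2.0 * X.sum(axis=1)

# Fit y ~ w0 + w1*inputs + w2*gates by ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_faults(n_inputs, n_gates):
    """Predict the number of stuck-at faults for an unseen circuit."""
    return w[0] + w[1] * n_inputs + w[2] * n_gates

print(round(predict_faults(6, 8)))  # → 28 for this fabricated data
```

Because the fabricated labels are exactly linear in the features, the least-squares fit recovers the generating rule; real ATPG data would of course be noisier and would motivate the neural-network model used later in the thesis.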
The objectives of this research work are:
1. Create an ML model to predict test patterns and faults in a circuit.
2. Prepare VLSI testing data and select features according to the ML model.
3. Create an ML model to predict the behavior of a circuit given its truth
table.
4. Train and test the machine learning models for high accuracy.
CHAPTER 1: INTRODUCTION
With improved technology, automated Machine Learning (ML)
applications are becoming more and more powerful. Different learning algorithms
are used to program machine learning models. These algorithms can learn from a
given set of data, which can be in any form: integers, strings, images, videos,
audio, etc. For example, the voice recognition frameworks Siri and Cortana use
machine learning and deep neural networks to mimic human communication [1].
As they advance, these applications will learn to 'understand' the subtleties and
semantics of our language. For instance, Siri can recognize its trigger phrase
under almost any condition using probability [1]. By choosing suitable portions
from a recorded database, the application can then pick responses that closely
resemble genuine conversation. Generally, data sets should be as big as possible
to raise ML model accuracy, and high computational and processing power are
required to support the massive data processing involved. Today these enormous
computations are cheaper because the processing power available per dollar has
increased roughly tenfold at regular intervals over the last quarter of a century
[2]. For example, since the 1940s, processing power measured in Million
Instructions Per Second (MIPS) has increased by a factor of ten at regular
intervals [2].
Therefore, ML techniques with different algorithms can be used in various
applications. Some examples of high-performing ML applications are image
recognition systems, voice recognition systems (e.g., virtual personal
assistants), and recommendation systems (Netflix, LinkedIn, Facebook, etc.) [3].
The main idea of this thesis is to implement test pattern generation and
fault detection using ML. A detailed explanation of faults and test patterns is
covered in chapter 2. Test patterns are mainly used to determine whether a
circuit is faulty. Testing can be performed at different stages of the VLSI
development life cycle: chip level, board level, and system level [4]. Thus,
to reduce the cost of the final product, defects should be detected in the earlier
stages. The rule of 10 implies that the cost to detect a defect in a circuit increases
tenfold as it moves through successive stages from the chip level to the system
level [5]. Figure 1 shows the cost of VLSI testing throughout the years [6].
The test pattern generation and fault detection tools used here were created
by Synopsys and Cadence [7, 8]. These test pattern generation tools assure the
fault coverage of a circuit or chip. For the proposed ML approach, these tools
were used to generate the data set.
This research is focused on the application of ML on digital logic circuit
testing and design verification. The fast processing, real-time predictions, and
free availability of ML frameworks are the main reasons to investigate the
implementation of ML in VLSI testing and design verification.

Figure 1. The cost of VLSI testing [6]

Considering these possibilities, an extensive study of ML is needed, from
understanding ML algorithms to configuring an ML platform on the operating
system (OS) and preparing a data set.
To predict outputs from a given data set, different ML algorithmic models
are used. These models are mainly divided into two categories:
1. Supervised learning
2. Unsupervised learning.
In supervised machine learning, both the inputs and the outputs are given
to the machine learning model, and the model learns a mapping through an
activation function. The activation function is a mathematical function, which
can be linear, sigmoid, hyperbolic, etc. [9]. In contrast, in unsupervised learning
the model is trained without output labels. Chapter 3 provides a detailed
treatment of ML.
Supervised learning algorithms have been implemented in this research.
Identifying the type of problem is the first step of an ML implementation. In
supervised learning, a problem is typically either a classification problem or the
prediction of a target numeric value (regression). A good example of
classification is the spam filter, whereas predicting a car's price based on its
features (mileage, brand, age, etc.) is an example of a regression problem [9].
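The distinction between the two problem types can be sketched as follows; both "models" here are hand-written, made-up rules (not trained and not from the thesis), chosen only to show that a classifier returns a discrete label while a regressor returns a continuous value.

```python
# Hypothetical illustration of the two supervised problem types.

def classify_spam(contains_offer, unknown_sender):
    """Classification: output is a discrete label (made-up rule)."""
    return "spam" if contains_offer and unknown_sender else "ham"

def price_car(mileage_k, age_years):
    """Regression: output is a continuous value (made-up linear rule)."""
    return max(0.0, 20000.0 - 80.0 * mileage_k - 1000.0 * age_years)

print(classify_spam(True, True), price_car(50, 3))  # → spam 13000.0
```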
The system will not perform accurately if the features selected from the
training data are incorrect; in its initial stage, the success of an ML project
depends on good data and correct feature selection. One of the regression
problems in this thesis is to predict fault coverage; therefore, the features chosen
for this problem are the number of inputs to the circuit and the number of gates
used in the circuit.
The Python3 programming language has been used with the TensorFlow
platform for ML [11, 12]. Specifically, Python3 has been used because it is fully
compatible with the recent version of TensorFlow.
This thesis is separated into two parts. The first part is the literature survey,
covered in chapters 2 and 3. The second part is the implementation, covered in
chapter 4.
Chapter 2 presents the fundamentals of VLSI testing, including the
approach to VLSI testing, the levels of testing and abstraction, the common fault
models, and the classical stuck-at fault models. In addition, it shows different
techniques to reduce the number of stuck-at faults as well as the algorithms used
to detect stuck-at faults in a circuit.
Chapter 3 presents the fundamentals of machine learning. In this chapter,
different types of machine learning algorithms are discussed, covering both
supervised and unsupervised machine learning problems. Moreover, it gives some
insight into the gradient descent algorithm and how it can be used to train a
machine learning model.
Chapter 4 covers the implementation, results, and discussion. It gives
information on the data sets used in this research. The preparation of VLSI
testing data and the feature selection are explained in this chapter, followed by
the implementation procedure. Finally, the results are discussed at the end of
chapter 4.
CHAPTER 2: VLSI TESTING
VLSI testing is a very important aspect of a design; it is used to discover the
presence of faults in a circuit. With the help of testing, the correctness of the
circuit can be determined (at the gate or transistor level). Another way to check a
circuit's behavior is verification. The main difference between testing and
verification is that testing takes the manufactured circuit into consideration,
whereas verification takes the design into consideration. There are two types of
verification: simulation-based verification and formal approaches [13]. The
difference between testing and verification is given in table 1.
Table 1. Difference between verification and testing [13]

Verification                                Testing
Verifies the correctness of a circuit.      Verifies the correctness of a
                                            manufactured hardware.
Performed by simulation, hardware           Two-part process: test generation
emulation, or formal methods.               and test application.
Performed once, prior to manufacturing.     Test application performed on every
                                            manufactured device.
Responsible for quality of design.          Responsible for quality of device.
Levels of Testing
Rule of 10: if a fault is detected at the chip level, it will cost 10 times more
to detect the same fault at the board level, and 10 times more again at the system
level [5]. Therefore, when a chip is manufactured, testing should be done at the
chip level to reduce the cost of testing. The stages of VLSI testing are shown in
figure 2. The level of abstraction is another way to classify the levels of testing,
as shown in figure 3.
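The rule of 10 can be written down as a one-line cost model; the $1 base cost at chip level is an assumed nominal value, used only to make the tenfold escalation concrete.

```python
# Rule of 10: detecting the same defect costs ~10x more at each later stage.
# Assumes a nominal $1 detection cost at chip level, for illustration only.
base_cost = 1
costs = {stage: base_cost * 10 ** level
         for level, stage in enumerate(["chip", "board", "system"])}
print(costs)  # → {'chip': 1, 'board': 10, 'system': 100}
```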
Figure 2. Stages in VLSI testing: chip, board, and system levels

Figure 3. Stages of VLSI design abstraction: transistors (MOS transistor netlist),
gate, RTL, and functional/behavioral model
Fault Modeling
Many parameters affect the fault behavior of a circuit, such as changes in
physical parameters, different environmental conditions, etc. Logical fault
modeling mainly focuses on how a physical failure of the circuit affects its
behavior.
There are plenty of possible physical defects in a circuit, and it is
impossible to consider all of them individually. The remedy is logical fault
modeling; its advantage is that fault models cover most of the physical failures.
Common Fault Models
The common fault models are shown in figure 4.
Stuck-at fault. A stuck-at fault occurs when an individual signal or pin in
the circuit is permanently stuck at logic 0 or logic 1. Basically, two types of
stuck-at faults can occur in a circuit:
1. Stuck at 1 (S-A-1)
2. Stuck at 0 (S-A-0).
Figure 4. VLSI circuit fault types: stuck-at faults (single, multiple), transistor
faults (open circuit, short circuit), memory faults (coupling, pattern sensitive),
programmable logic array faults (stuck-at, cross point, bridging), and delay faults
(transition, path)
S-A-1: as shown in figure 5, irrespective of the inputs to the AND gate, the
output of the AND gate is always 1, because the output pin of the AND gate is
permanently stuck at 1. The actual behavior of an AND gate in the presence of
an S-A-1 fault is shown in table 2.

Table 2. Truth Table of AND Gate at S-A-1 Fault

A B Y
0 0 1
0 1 1
1 0 1
1 1 1

Figure 5. Example of S-A-1 fault

The physical representation of a stuck-at fault is shown in figure 6, a totem
pole circuit. If the pull-down transistor is always short-circuited, then regardless
of the inputs the output will always be 0, because the capacitor at the output will
always be connected to ground through the pull-down transistor.
Single stuck-at fault. When only one stuck-at fault is taken into
consideration at a time, it is called a single stuck-at fault. Primarily, four
properties define the single stuck-at fault:
1. Only one line is faulty at a time.
2. A line is always set to 0 or 1.
3. There are no intermittent states.
4. A fault can be at the input or at the output of a gate.
In any circuit, if K interconnecting lines are present, then 2K stuck-at
faults are possible. An example of a single stuck-at fault is given in figure 7.
For example, if gate 1 in figure 7 is faulty, then the output of that gate is
always stuck at 0 or stuck at 1. Moreover, a faulty gate 1 will equally affect gates
2 and 3. If there is a fault at the input of gate 2, the fault will appear at point b
but not at points a and c. The circuit in figure 7 has 12 fault sites. For the fan-out
of gate 1 there will be 3 fault sites, as shown in figure 8.
Therefore, the total number of faults will be:
Figure 6. Totem pole circuit
12 Fault sites for S-A-0 + 12 Fault sites for S-A-1 = 24 Total Fault sites [13]
To reduce the number of fault sites, different reduction techniques are used,
such as fault equivalence and fault collapsing.
Typically, the single stuck-at fault model is preferred over the multiple
stuck-at fault model. Since it is very simple to handle at the computational level,
it gives reasonably good fault coverage, and test sets for the detection of single
stuck-at faults also detect many multiple stuck-at faults [5, 13, 14].
Figure 7. NAND implementation of two input NOR gate [13]
Figure 8. Fan out circuit of two input NAND gate [13]
Fault equivalence. With the fault equivalence technique, faults whose
effects on the circuit are identical can be grouped together; if two faults are
identical, one of them can be eliminated. The number of fault sites in a circuit is
given by the formula:
# primary inputs + # gates + # fan-out branches
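The formula above can be checked with a small helper; the circuit below (4 primary inputs, 6 gates, 2 fan-out branches) is a hypothetical example, chosen only so the total matches a 12-site, 24-fault count like the one given for figure 7.

```python
def fault_sites(primary_inputs, gates, fanout_branches):
    """# primary inputs + # gates + # fan-out branches (the formula above)."""
    return primary_inputs + gates + fanout_branches

# Hypothetical circuit (not a figure from the text): 12 fault sites, and
# since each site can be s-a-0 or s-a-1, 24 stuck-at faults in total.
sites = fault_sites(4, 6, 2)
print(sites, 2 * sites)  # → 12 24
```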
To understand fault equivalence, consider the example of a 3-input AND
gate. If any input line of the 3-input AND gate is stuck at 0, it is equivalent to
the output line being stuck at 0: if any input line of an AND gate is stuck at 0,
the output is 0, and likewise if the output line is stuck at 0, the output is 0. In
figure 9, all four faults are therefore equivalent, and any 3 of the stuck-at faults
can be eliminated:
a0 = b0 = c0 = f0
Dominance fault collapsing. The number of faults can be reduced with the
help of fault equivalence and fault collapsing. Figure 10 shows the fault
equivalence of all the logic gates. To understand the concept of fault collapsing,
consider figure 11. For instance, let N be the set of test vectors that detect fault
f1. Fault f2 dominates fault f1 if and only if f2 is detected by the same test
vectors; thus, the fault f1 can be eliminated [15, 16].
Figure 9. 3-input AND gate [13]
D-Algorithm. Roth proposed the D-algorithm in 1966 [17]. It was the first
algorithm used in ATPG tools. Other algorithms were proposed for test
generation, such as the boolean difference and literal proposition, but they could
not be practically implemented on a computer [17]. The terminology associated
with the D-algorithm is shown in figure 12.
Singular cover. The singular cover is a compact version of the truth table.
The AND gate singular cover (SC) and truth table are given below in tables 3
and 4; the OR gate singular cover and truth table are given in tables 5 and 6.
Table 3. Truth Table of AND Gate
A B Y
0 0 0
1 0 0
0 1 0
1 1 1
Figure 12. Terminologies associated with the D-Algorithm: singular cover,
D-intersection, primitive D-cubes of a fault (PDCF), and propagation D-cubes
(PDC)
Table 4. Singular cover of AND gate
A B Y
X 0 0
0 X 0
1 1 1
Table 5. Truth table of OR gate
A B Y
0 0 0
1 0 1
0 1 1
1 1 1
Table 6. Singular cover of OR gate
A B Y
X 1 1
1 X 1
0 0 0
D-intersection. In the D-algorithm, five-valued signals are mainly used:
{0, 1, X, D, D’}. Each node in the circuit can take one of these five values. Here,
X denotes the don’t-care condition; this value is assigned to a node when the
node’s value does not affect the circuit. D refers to a fault: under the faulty
condition the node takes one value, and under the fault-free condition it takes the
opposite value. Conventionally, D denotes 1 in the fault-free circuit and 0 in the
faulty circuit, and D’ denotes the reverse.
The D-intersection is used to identify the final value of a node when values
come from two different sources. As shown in table 7, the columns correspond to
one source and the rows to the other, and the test engineer can determine the
final value of the node [5, 13, 14].
Table 7. D-Intersection

∩    0    1    X    D    D’
0    0    D’   0    ø    ø
1    D    1    1    ø    ø
X    0    1    X    D    D’
D    ø    ø    D    D    *
D’   ø    ø    D’   *    D’
Primitive D-cube of a fault (PDCF). To detect a fault at a node, for
instance a stuck-at-0 fault, the node must be driven to the opposite value, 1.
An example for the NOR gate is given in figure 13. To detect a stuck-at-0
fault at node c, the values at nodes a and b should be such that the output of the
NOR gate is 1. The singular cover of the NOR gate is used to select the values of
nodes a and b; according to the singular cover, a and b will both be 0. That row
is then D-intersected with the row in which a, b, and c are X, X, and 0,
respectively. The singular cover and D-intersection of the NOR gate are shown
in tables 8 and 9.
Table 8. Singular Cover of NOR gate [13]
a b c
0 0 1
X 1 0
1 X 0
Table 9. D-Intersection of NOR gate for s-a-0 [13]
a b c
0 0 1
X X 0
0 0 D’
Propagation D-Cube (PDC). The PDC of a circuit element is a table with
entries for propagating a fault from any of its inputs to its output. To generate a
PDC entry corresponding to a column, D-intersect any two rows of the singular
cover that have opposite values (0 and 1) in that column; there can be multiple
rows for one column. If a fault occurs at input terminal A, that fault should be
propagated to output terminal C; to do so, first check the singular cover of the
AND gate and propagate the fault to the output. The example of a 2-input AND
gate is shown in figure 14. The PDCs for both inputs A and B of an AND gate
are shown in tables 10 and 11.
Figure 13. 2-input NOR gate
Table 10. PDC of AND gate for node A [13]
A B C
0 X 0
1 1 1
D 1 D
Table 11. PDC of AND gate for node B [13]
A B C
X 0 0
1 1 1
1 D D
Figure 14. 2-input AND gate
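The PDC rows in tables 10 and 11 can be sanity-checked with a five-valued AND evaluator. The encoding below, where each signal is a (fault-free, faulty) value pair with D = (1, 0) and D' = (0, 1), is a standard convention and a sketch for checking purposes only, not part of the thesis implementation (X is omitted here for brevity).

```python
# Five-valued AND via (fault-free, faulty) pairs: 0=(0,0), 1=(1,1),
# D=(1,0), D'=(0,1). AND is applied componentwise to both copies.
ENC = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "D'": (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def and5(a, b):
    ga, fa = ENC[a]
    gb, fb = ENC[b]
    return DEC[(ga & gb, fa & fb)]

# Table 10, last row: a D on input A propagates to the output when B = 1 ...
print(and5("D", "1"))   # → D
# ... but is blocked when B = 0, matching the row with output 0.
print(and5("D", "0"))   # → 0
```

The same two-copy trick extends to any gate, which is exactly why the D-algorithm can simulate the fault-free and faulty circuits simultaneously.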
Example of the D-Algorithm. An example for the D-Algorithm is shown in
figure 15. The steps to find the test pattern for the circuit in figure 15 are shown
in table 12.
Table 12. Test pattern generation using D-Algorithm
A B C D E F G H I
PDCF of NOR
gate
X X X 0 0 X D X X
PDC of NAND
gate
X X X X X 1 D D’ X
PDC of NOR
gate
0 X X X X X X D’ D’
Singular Cover of
AND gate
X 1 1 X X 1 X D’ D’
Test Pattern 0 1 1 0 0 1 D D’ D’
Figure 15. Example circuit for D-Algorithm
CHAPTER 3: MACHINE LEARNING
Machine learning is used to learn intelligently from data and has proven able
to solve problems in many areas. This thesis applies machine learning to digital
circuit design verification, test pattern generation, and fault detection.
Supervised machine learning techniques were used for the implementation of this
research work. Linear regression and classification are two types of supervised
machine learning. Both models were used for behavioral design verification, fault
detection, and test pattern generation. The linear regression model was used to
predict the test patterns and the number of faults. The classification model was
used to verify the behavioral logic of a circuit.
This chapter covers a background in machine learning. It includes the
different types of machine learning techniques and the gradient descent algorithm
to converge the ML model. The definitions of machine learning are given below.
Arthur Samuel: “It is a field of study that gives computers the ability to
learn without being explicitly programmed” [18].
Tom Mitchell: “A computer program is said to learn from experience E
with respect to some task T and some performance measure P, if its performance
on T, as measured by P, improves with experience E” [18].
Types of Machine Learning
There are basically two types of machine learning:
1. Supervised machine learning
2. Unsupervised machine learning
Supervised Learning
In supervised learning, the right answer is given to the algorithm for every
data set. One such task is the regression problem.
Regression problem. In the example given below, the housing price of a
city is being predicted. The prices are rounded to the nearest cent. Here,
the price of a house is predicted based on its size.
Regression is used to predict a continuous-valued function. As shown in
figure 16, two types of line can be plotted to predict the housing price:
a straight line and a quadratic curve. The ideal choice is the line whose
housing price predictions are as close as possible to the actual prices of the
houses. For that, the plotted line should fit the plotted data well. By this
criterion, the second-order quadratic curve is more appropriate than the
straight line for figure 16. The prediction line depends entirely upon the
actual data fed to the ML model.
Figure 16. Housing price prediction [18]
Classification problem. In a classification problem the prediction is a
discrete-valued function: it is either 0 or 1. As shown in figures 17 and 18,
a classification example can distinguish the types of breast cancer. Here, the
prediction is between malignant and benign tumors. Thus, there are two
discrete values: 0 for a benign tumor, and 1 for a malignant tumor.
Figure 17. Tumor size vs. age [18]
Figure 18. Tumor size [18]
Unsupervised Learning
In unsupervised learning the data sets are directly fed to the algorithm and
algorithm will do the clustering of the data sets. The structure of cluster is derived
from the data and the relationship between variables in the data. Here, the
continuous feedback is given, whereas in supervised learning no feedback is given
for the prediction.
Clustering Algorithm
For instance, as shown in figures 19 and 20, take a collection of 1,000,000
genes and automatically cluster them into groups that are somehow similar or
related by different variables such as lifespan, location, etc.
Figure 20. Datasets on genes [18].
Figure 19. Example of clustering in unsupervised learning [18]
Non-Clustering
The best illustration of a non-clustering problem is the cocktail party, as
shown in figure 21. The cocktail party algorithm finds structure in a chaotic
environment; thus, it can identify individual voices and music from a mesh of
sound [17].
Model and Cost Function
Model Representation
In supervised learning, the data set from which the model is trained is
called the training set. Table 13 shows an example training set for housing
prices.
Table 13. Training Sets for Housing Price [18]
Size in feet2 (X) Price ($) in 1000’s (Y)
2104 460
1416 232
1534 315
852 178
Figure 21. Cocktail party problem [18]
Notations:
m = number of training examples
X = input variable
Y = output variable
(X, Y) = one training example
(X(i), Y(i)) = ith training example, where i is the index into the training set
The hypothesis is an estimated function, which takes input features
and gives the predicted output based on the estimated parameters. Training sets
are given to train the hypothesis of the linear ML model. In figure 22, h
represents the hypothesis. It takes the size of the house as an input feature
and predicts the price of the house.
The representation of h in mathematical terms is given below.
hθ(x) = θ0 + θ1x   (1)
Here, the function predicts that Y is a linear function of X. This hypothesis
predicts a simple linear function. The equation for the linear function is given
below. The visual representation of this straight-line equation is shown in figure
23.
y = f(x) = a + bx   (2)
Figure 22. Flow chart for the hypothesis
Where,
a = constant term or Y intercept
b = coefficient of the independent variable
For x = 1, y = a + b. If a = 25 and b = 5, then y = 25 + 5 = 30.
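The straight-line hypothesis can be written directly in code; this small sketch simply restates equation 2 with the example values above.

```python
def h(a, b, x):
    """Straight line y = a + b*x: a is the Y intercept, b the slope."""
    return a + b * x

print(h(25, 5, 1))  # 30, matching the worked example above
```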
Cost Function [18]
The cost function helps to fit the most suitable line to the data sets.
hθ(x) = θ0 + θ1x   (3)
Based on the values of θ0 and θ1, different hypothesis functions can be
generated, as represented in figure 24.
Figure 23. Straight-line representation for linear regression
Figure 24. Different hypothesis function [18].
In linear regression a good fit of the parameters is important. The idea is to
choose θ0 and θ1 so that hθ(x) is close to y for the given training examples (x, y).
The minimization objective for θ0 and θ1 is:

minimize over θ0, θ1:  (1/2m) Σ_{i=1}^{m} (hθ(x(i)) − y(i))²   (4)

Where, m = number of training sets (samples), and
hθ(x(i)) = θ0 + θ1x(i)   (5)
Here, the objective is to find the values of θ0 and θ1 that minimize this
expression. The hypothesis representation based on the cost function is shown
in figure 25.
Cost function [18]:

J(θ0, θ1) = (1/2m) Σ_{i=1}^{m} (hθ(x(i)) − y(i))²   (6)

Objective: minimize J(θ0, θ1) over θ0 and θ1.   (7)
Figure 25. Hypothesis representation based on cost function [18]
The cost function measures the difference between the predicted values and
the actual values. The graphical representation of the cost function is shown
in figure 26.
Another way to represent the 3-D plot in 2-D is the contour plot, which is
shown in figure 27.
Figure 26. 3-D plot of cost function [18]
Figure 27. Representation of 3-D into 2-D [18]
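Using the training set from Table 13, the cost function of equation 6 can be evaluated for any candidate parameters. The following is a minimal sketch, not the thesis implementation:

```python
# Training set from Table 13: size in square feet -> price in $1000s.
X = [2104, 1416, 1534, 852]
Y = [460, 232, 315, 178]

def cost(theta0, theta1):
    """J(theta0, theta1) = (1 / 2m) * sum of squared prediction errors."""
    m = len(X)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(X, Y)) / (2 * m)

# Cost of predicting $200 per square foot with a zero intercept.
print(cost(0.0, 0.2))
```

Trying a few (theta0, theta1) pairs by hand shows how the 3-D bowl of figure 26 arises: each pair maps to one cost value.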
Gradient Descent
This algorithm is used to minimize the cost function.
Steps for Gradient Descent
Start with some random values of θ0 and θ1 (for example θ0 = 0 and θ1 = 0).
Keep changing θ0 and θ1 to reduce the cost function until it reaches a minimum.
According to graph A in figure 28, the optimizer starts at the top of a 3-D
mountain, and to optimize the function it needs to come down from the top.
The optimizer does not know which direction to go; it looks around, decides
the most appropriate path downward, takes that step, and keeps going until it
reaches the bottom of the mountain. This lowest point is called a local
optimum. There can be more than one local optimum; which one is reached
depends on where on the slope the optimizer started. Every time the model is
trained, the optimizer's starting point is different. Therefore, the optimum
value of the cost can differ between training sessions. This is explained in
figure 29.
Figure 28. 3-D plot for gradient descent optimization graph A [18]
Figure 29. 3-D plot of gradient descent optimization graph B
The gradient descent update equation and the generalized code to simulate
the algorithm are shown in figures 30 and 31.
Figure 31. Equation for the local optimum point [17]
Figure 30. Simulation code for gradient descent [18]
Here, the values of both thetas are changed simultaneously. The alpha is
called the learning rate. The convergence of the function depends on alpha. If
alpha is too small, the system will take a long time to converge, and if alpha
is too large, gradient descent can overshoot the minimum and may fail to
converge or even diverge. This is explained in figure 32. The red line shows an
alpha so large that it overshoots the function, while the blue and green lines
are examples of small alpha rates. Once theta reaches an optimum, the system
has converged and the parameters no longer change.
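The update rule can be run on the Table 13 data. The sketch below is a minimal illustration (not the thesis code), with the house size rescaled to thousands of square feet so that a fixed alpha converges:

```python
# Gradient descent for the straight-line hypothesis on the Table 13 data.
X = [2.104, 1.416, 1.534, 0.852]   # size in 1000s of square feet
Y = [460.0, 232.0, 315.0, 178.0]   # price in $1000s
m = len(X)

theta0, theta1, alpha = 0.0, 0.0, 0.1
for _ in range(5000):
    errors = [theta0 + theta1 * x - y for x, y in zip(X, Y)]
    grad0 = sum(errors) / m                              # dJ/d(theta0)
    grad1 = sum(e * x for e, x in zip(errors, X)) / m    # dJ/d(theta1)
    # simultaneous update of both parameters
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

print(theta0, theta1)  # converges toward the least-squares line
```

Note the simultaneous assignment: both gradients are computed from the old parameter values before either theta is updated, exactly as the algorithm requires.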
Batching in Gradient Descent
Each step of gradient descent uses all the training examples. In batch
gradient descent, the derivative is computed as a sum over the whole training set.
Figure 32. Different learning rate (alpha) [19]
CHAPTER 4: IMPLEMENTATION, RESULTS, AND DISCUSSION
Implementation
The detailed background on ML was covered in chapter 3. Now the focus
turns to the data. Before going into data generation and ML model training, let
us examine the methods for generating test patterns, the number of faults, and
fault coverage with the proposed ML approach. The flowchart in figure 33
shows the proposed approach for test pattern generation and fault detection in
combinational VLSI circuits using ML.
Figure 33. Flow chart comparison between proposed machine learning method and
existing EDA tool method
The first step of the procedure is to create Verilog code for a boolean
equation based on its truth table. The Register-Transfer Level (RTL) view of the
boolean equation is generated from the Verilog code. From this RTL view, the
data set for the machine learning model can be created. The next step is to
define the process and procedure for data collection; this procedure is
explained later in this chapter. The ML tool then uses this data set as features
for the training model to produce the test patterns, the number of faults, and
the fault coverage. The selected input and output features of the
training ML model are listed below:
1. The number of gates
2. Primary inputs of the circuit
3. Test patterns
4. Number of faults
5. Fault coverage
The number of gates and the primary inputs are given as features to the
machine learning model to predict the total number of faults and the fault
coverage. Likewise, to predict the test patterns, the number of gates, the
primary inputs, and the number of faults are given as features to train the ML model.
Generation of the Data Set
Now, let us define the data set that will be fed to the training
algorithm. The Synopsys Design Compiler and TetraMax EDA tools generate the
automatic test patterns (ATPG). The ATPG output files are collected and written
into a Comma-Separated Values (CSV) file. This CSV file then contains the
required data set. Figure 34 shows the workflow of data collection.
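The conversion step can be sketched with Python's standard `csv` module; the field names and the sample row below are hypothetical placeholders for values parsed from the TetraMax reports, not the thesis's actual schema.

```python
import csv

# Hypothetical rows collected from TetraMax report files: one row per
# circuit, with the features and labels used later by the training algorithm.
rows = [
    {"inputs": 3, "gates": 5, "faults": 5, "fault_coverage": 100.0,
     "test_patterns": "011 101 111 110"},
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()   # column names become the CSV header row
    writer.writerows(rows)
```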
Since this is a novel approach to VLSI testing with ML, the data must be
prepared carefully while preserving its originality. As discussed in chapter 3,
the data is critical for model accuracy. Therefore, the first stage was to define
the process for collecting the data sets. To generate the data set, a
substantial number of boolean equations were used. After collecting all the
boolean equations, Verilog code for the equations was written, followed by
RTL view generation. For the Verilog code of the boolean equations, refer to
Appendix A. From the RTL view, information about the primary inputs, the
collection of gates, and the total number of possible faults was extracted. An
example boolean equation is shown in figure 35.
Boolean equation: AB + AC
The RTL view of boolean equation:
To generate the test patterns, the Synopsys Design Compiler was first used to
synthesize the Verilog design, and then the TetraMax tool was used to generate
the test patterns.
Figure 35. RTL View of boolean equation [20]
The workflow of Synopsys design compiler to synthesize the design is
given below:
Step 1. Analyze and elaborate the design
Step 2. Apply the required constraints
Step 3. Compilation of the design
Once the design is synthesized, the netlist (RTL view) of the design is
created by the compilation step. This netlist is given to the TetraMax tool for
the generation of test patterns, fault detection, and fault coverage. The three
basic steps of the TetraMax tool are given below:
Step 1. Build: read the netlist design and netlist library.
Step 2. DRC (It performs the rule check).
S rules (Scan chain rules)
Z rules (internal tristate busses and bi-directional pins)
C rules (Clocks or capture)
X rules (combinational feedback loop)
V Rules (Vector statements in the SPF)
Step 3: Test: Generates the test patterns and reports.
After the generation of the test patterns and the total number of faults, this
data set was given to the TensorFlow framework. TensorFlow is an open-source
ML platform that accepts the CSV file format as input data. Therefore, the first
requirement was to write the VLSI testing data (generated by Synopsys and
TetraMax) into a CSV file. For the .CSV data set, refer to Appendix B. After
the conversion of the data into a CSV file, it was fed to the training
algorithm. The training algorithm was written in the Python programming language
in TensorFlow. Here, a linear regression algorithm was implemented to train the
ML model.
The two widely used Python libraries integrated with TensorFlow for ML
are TFLearn and Scikit-Learn. The TFLearn library was used for the
implementation of the linear regression model. To train the linear regression
model, a neural network is used. The basic architecture of the neural network is
shown in figure 36. As TFLearn is highly optimized by its developers, it can
save a considerable amount of time. The flow of the TFLearn model is given in
figure 37.
Figure 36. Neural network for regression model [21]
Figure 37. TensorFlow model flow
First, the training data is given to the input and output placeholders of the
machine learning model. These placeholders are TensorFlow's feed mechanism,
which allows injecting any data into the computation. Once the data is injected,
a set of functions is required to train the neural network model.
Figure 38 shows the flowchart of the implemented ML model. First, to train
the model, the features were extracted from the .CSV file. These features were
then given to the Deep Neural Network (DNN) layer. The DNN layers have
weights and biases that shift the value of the activation function. The model's
output is evaluated by a cost function, which shows the difference between the
predicted value and the actual value. From this difference, the gradient descent
optimizer determines how much the weights and biases need to be shifted to
reduce the cost function. Once the cost function value is reduced to a minimum,
the ML model is trained. Because of the reduced cost value, the machine can now
predict very close to the actual output value. Once the ML model is trained,
test data is given to the model to check the accuracy of its predictions.
Figure 38. Flow chart of model implementation
Results
Results are given according to the following parameters.
1. Features
2. Algorithm
3. Output
4. Accuracy
In the last section we discussed how the features are used as inputs to the
ML algorithm. Accuracy in this implementation is calculated based on the mean
of the elements of a vector/matrix.
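One plausible reading of that metric, sketched here with NumPy, is the mean over all elements of one minus the relative prediction error; this is an illustrative reconstruction, not necessarily the exact formula used in the thesis.

```python
import numpy as np

def accuracy(predicted, actual):
    """Mean, over all elements, of 1 - |predicted - actual| / actual."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(1.0 - np.abs(predicted - actual) / actual))

# Fault counts predicted by the model vs. the actual counts (Table 15).
print(accuracy([59.53, 41.76, 82.75, 94.23], [60, 42, 86, 94]))
```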
There are three different hypotheses used in this section to explain the
implementation of test pattern generation, the prediction of the number of
faults and fault coverage, and the prediction of the behavioral model of the
circuit, respectively. The results for each are discussed below.
Test pattern prediction.
o Based on the input features, the test patterns of the circuit were
predicted. This ML model uses seven input features:
Number of Inputs
Number of Gates
Type 1 gate (OR)
Type 2 gate (AND)
Type 3 gate (NOT)
Number of Faults
Fault Coverage
o A feature set of size 50 by 7 was used in this training model.
The test pattern was used as the output label for validation. A
DNN architecture was used to implement linear regression for
training this model. A low accuracy of 30% was achieved in
this case.
Fault Coverage (F.C.) and Number of Faults Prediction
o The input features used in this ML model are:
Number of Inputs
Number of Gates
o A feature set of size 50 by 2 was used in this training model.
Fault coverage was used as the output label for validation.
A DNN architecture was used to implement linear regression
for training this model. Fault coverage was predicted with an
accuracy of 91%.
Circuit Logic Behavior Predictor:
o The behavior of the circuit was learned by the training model. Based
on the input data, the machine performs the learned logic.
Here, a full adder was learned by the machine. Two hidden layers
with 20 multi-layer perceptrons each were used in this model,
while only a single feature was used in this training model:
Primary Input Logic
o The output labels in this case are:
Sum
Carry
o A 50 by 1 feature set size was used in this training model.
An average accuracy of 93.70% was obtained. The behavioral
predictor performs best among all the models. This means
machine learning can be applied at the behavioral
level.
Discussion
To train a machine learning model, input and corresponding output data
sets are given to it. Based on the training data set, the machine learning model
predicts the output. The numbers of OR, AND, and NOT gates were also given
as inputs. From this input data, the model was supposed to predict the output
test patterns. However, the number of test patterns differs for every circuit
based on its unique RTL structure. Therefore, data (test patterns) is missing in
many output rows. Table 14 shows a sample of five rows of the data set; the
NaN entries represent the missing data.
Table 14. Sample dataset for test-pattern

Inputs  Faults  Gates  OR gates  NOT gates  AND gates  F.C.%  TP1  TP2  TP3  TP4
3       5       2      0         1          1          100    011  101  111  110
2       12      5      1         2          2          100    001  010  011  NaN
2       12      5      1         2          2          100    010  000  011  001
3       5       4      2         3          0          100    110  011  111  101
2       9       3      1         1          1          87.5   011  010  NaN  NaN

(TPn = Test Pattern n; NaN marks the missing entries.)
Machine learning cannot predict accurately with false or missing data
(garbage in, garbage out). Therefore, the challenge is to handle the missing data.
This problem is also known as data corruption. If '0' values are substituted for
the missing values ('NaN'), the output will consist of false predictions. NaN
data can cause large errors in machine learning models. As shown in figure 39,
the model was trained and gave predictions even though the data was
corrupted, whereas the actual output is different, as shown in figure 40.
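One standard way to handle such rows, sketched here with NumPy (the preprocessing actually used in the thesis may differ), is simply to drop every sample that contains a NaN:

```python
import numpy as np

# Toy numeric view of three data-set rows: inputs, faults, F.C.%, pattern.
# np.nan stands in for the missing test-pattern entries of Table 14.
data = np.array([
    [3.0, 5.0, 100.0, 3.0],
    [2.0, 12.0, 100.0, np.nan],   # missing test pattern
    [2.0, 9.0, 87.5, np.nan],
])

# Keep only the rows with no NaN anywhere.
clean = data[~np.isnan(data).any(axis=1)]
print(clean.shape)
```

Dropping rows shrinks an already small data set, which is exactly the tension discussed above: filling NaN with zeros corrupts the labels, while removing the rows starves the model of examples.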
The linear regression machine learning model can predict the number of
faults with 91% accuracy. The number of gates and the number of primary
inputs are given as input features, from which the model can detect the number
of faults in the circuit. The fault prediction of the ML model is shown in table 15.
Table 15. Comparison between actual faults present in the circuit and faults
predicted by the ML model

Input gates  Actual faults present in the circuit  Faults predicted by ML model
9            60                                    59.53 (60)
6            42                                    41.76 (42)
13           86                                    82.75 (83)
15           94                                    94.23 (94)
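As a sanity check on these numbers, an ordinary least-squares line can be fitted to the table's gate counts and actual fault counts; this NumPy sketch is a closed-form baseline, not the DNN model itself.

```python
import numpy as np

# Gate counts and actual fault counts from Table 15, used as a toy
# training set for a closed-form least-squares fit.
gates = np.array([9.0, 6.0, 13.0, 15.0])
faults = np.array([60.0, 42.0, 86.0, 94.0])

# Design matrix: a bias column of ones next to the gate-count feature.
A = np.column_stack([np.ones_like(gates), gates])
(b, w), *_ = np.linalg.lstsq(A, faults, rcond=None)

print(b + w * 9)  # least-squares fault prediction for a 9-gate circuit
```

The fitted line lands close to the table's predictions, which is consistent with the trained linear-regression model having converged near the least-squares solution.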
Figure 40. The actual out-put
Figure 39. Output prediction for test pattern generation
So far, the machine learning model has been applied at the gate level of the
circuit, and the outcome of the model is not satisfactory: the machine learning
model is unable to predict the test patterns. For behavioral modeling, the ML
architecture can be interpreted as a black box, as shown in figure 41.
The machine learning model is unaware of the circuit logic. Based on the given
inputs and outputs (i.e., the truth table of the circuit), this machine learning
model was implemented using logistic regression, because the output of the
model is either 1 or 0, which falls under the category of classification
problems in machine learning. After implementing this model, the prediction
accuracy for the 3-bit adder was 93.75%.
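The behavioral predictor can be sketched end to end with a small NumPy multi-layer perceptron trained on the full-adder truth table. This is an illustrative reconstruction (one hidden layer of 8 sigmoid units rather than the thesis's two layers of 20, and plain NumPy rather than TensorFlow), not the original code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs (A, B, Cin) and outputs (Sum, Carry) of a full adder.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
Y = np.column_stack([X.sum(1) % 2, X.sum(1) >= 2]).astype(float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (3, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 2)), np.zeros(2)   # output layer

for _ in range(30000):
    H = sigmoid(X @ W1 + b1)            # hidden activations
    P = sigmoid(H @ W2 + b2)            # predicted Sum, Carry
    dP = P - Y                          # cross-entropy output gradient
    dH = (dP @ W2.T) * H * (1 - H)      # backpropagated hidden gradient
    W2 -= 0.5 * H.T @ dP;  b2 -= 0.5 * dP.sum(0)
    W1 -= 0.5 * X.T @ dH;  b1 -= 0.5 * dH.sum(0)

print((P.round() == Y).mean())  # fraction of truth-table bits matched
```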
Summary
The summary of the implementation and results section is as follows:
[1] Prediction of the number of faults was done successfully by the linear
regression model, with an accuracy of 91%.
[2] However, without predictions of the test patterns, the gate-level machine
learning model is of limited use.
Figure 41. Behavior of Adder
[3] On the contrary, machine learning is suitable for the behavioral level of
circuits. Without knowing the logic of the circuit, it was able to predict with an
accuracy of 93.7%.
These summaries are shown in table 16.
Table 16. Objective Summary

Machine learning model                                Accuracy of ML model
Linear regression model for fault coverage            91%
Classification model for behavioral logic
prediction of a circuit                               93.7%
Linear regression model for test pattern generation   Not able to achieve high
                                                      accuracy because of data
                                                      corruption
CHAPTER 5: CONCLUSION
Two main types of machine learning models were used throughout this
research work. The implementation of these machine learning models was done
on the TensorFlow platform using the Python programming language. The linear
regression technique was used to predict the number of faults and the test
patterns for the respective digital circuits, whereas the logistic regression
technique was used to predict the behavior of a circuit.
The prediction of the number of faults was done successfully by the linear
regression model with an accuracy of 91%. However, without predictions of the
test patterns, the gate-level machine learning model is of limited use. On the
other hand, without knowing the logic of a circuit, the model was able to
predict the behavior of a digital circuit with an accuracy of 93.7%. This
accuracy can be increased by training with larger data sets.
In addition, a machine learning model cannot be used with corrupted data,
since it gives false predictions. In terms of digital testing the data sets for
test pattern generation are not corrupted, yet from the machine learning point
of view the missing patterns count as corruption; this was one of the biggest
challenges throughout this research work. For accurate test pattern prediction,
large data sets that are uncorrupted (in ML terms) are required. In this
research work, the implementation of the machine learning models was done on
the basis of smaller data sets, because the purpose of this work is to study the
application of machine learning to digital logic circuit verification and testing.
Therefore, to validate the algorithms, larger data sets are required for more
solid and trustworthy predictions.
CHAPTER 6: FUTURE WORK
As discussed in chapter 4, because of the missing data in the output rows for
test pattern prediction, the machine learning model treats the data as corrupted.
Another way to approach this problem is to label every test pattern. For example,
if the circuit has 3 inputs, then the maximum number of test patterns is 8. The
labeling for the 3-input test patterns is given in table 17.
Table 17. Labeling the 3-input test patterns
Test patterns Labeling
000 A
001 B
010 C
011 D
101 E
100 F
111 G
110 H
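The labeling scheme of Table 17 is just a dictionary from bit patterns to letters; a minimal sketch:

```python
# Label map from Table 17: each 3-input test pattern gets a letter, so the
# output columns hold categorical labels instead of missing binary patterns.
labels = {"000": "A", "001": "B", "010": "C", "011": "D",
          "101": "E", "100": "F", "111": "G", "110": "H"}

# Encode the first data-set entry of Table 18.
print([labels[p] for p in ("000", "010", "011", "100", "110", "101")])
```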
According to the behavior of a circuit, the output test patterns can differ
from circuit to circuit. An example is given in table 18. There are three
entries for three different circuits, and every circuit has 3 primary inputs.
However, the test patterns required to test each circuit are different: the
first two entries require 6 test patterns and the third entry requires only 4.
Table 18. Data set example for 3-input

No. of inputs  No. of faults  Fault coverage  TP1      TP2      TP3      TP4      TP5      TP6
3              26             81.81           (000) A  (010) C  (011) D  (100) F  (110) H  (101) E
3              26             81.81           (110) H  (011) D  (000) A  (010) C  (111) G  (100) F
3              26             81.25           (100) F  (110) H  (001) B  (010) C

(TPn = test pattern n.)
These are the different combinations of labels for the 3-input data sets with
26 faults. For any number of faults, the combinations of data sets follow the
equation:

nCk = n! / (k! (n − k)!)

Here, k is the number of test patterns taken at a time and n is the total number
of test patterns. For example, taking 3 test patterns at a time, the number of
combinations of 3 test patterns is:

8! / (3! (8 − 3)!) = 56
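Python's standard library can check these counts directly; `math.comb` implements the same nCk formula.

```python
from math import comb

# comb(n, k) = n! / (k! * (n - k)!): ways to choose k of the n = 8
# possible 3-input test patterns.
print(comb(8, 3))  # 56, matching the worked example above
```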
The data set in table 19 is specifically for 26 faults. Thus, the total number
of test pattern combinations is 246. Following the usual machine learning
convention, 80% of the test patterns are used to train the model and 20% are
used to test it. Thus, 199 test patterns would be used to train the model and
47 would be used to test it.
Table 19. Different combinations of the 3-input test patterns
Because of this, if the machine learning model is to be generalized, then
millions of data points are needed. Moreover, this data is not open source,
which makes it impossible to implement this model at present. However, in the
future, with the availability of data, this ML approach can be implemented.
Table 20 shows the required combinations of test patterns for circuits with
different numbers of inputs. From this table, the conclusion is that the number
of test pattern combinations increases exponentially with the number of primary
inputs, which implies that a larger data set is required.
n r nCr
8 1 8
8 2 28
8 3 56
8 4 70
8 5 56
8 6 28
8 7 8
8 8 1
Total 246
Table 20. Combination of test patterns with respect to number of primary inputs

No. of inputs  No. of faults  Max. no. of test patterns
3              20             8
3              30             8
3              40             8
3              50             8
3              60             8
3              70             8
3              80             8
3              90             8
3              100            8
4              20             16
4              30             16
4              40             16
4              50             16
4              60             16
4              70             16
4              80             16
4              90             16
4              100            16

For the 3-input circuits, 246 test pattern combinations are possible for one
fault; for 9 faults, 246 × 9 = 2214 combinations. For the 4-input circuits,
78404 test pattern combinations are possible for one fault; for 9 faults,
78404 × 9 = 705636 combinations.
REFERENCES
[1] C. Mitchell Feldman, "10 Real-World Examples of Machine Learning and AI
[2018]", Redpixie.com, 2018. [Online]. Available:
https://www.redpixie.com/blog/examples-of-machine-learning. [Accessed:
04- May- 2018].
[2] "Trends in the cost of computing – AI Impacts", Aiimpacts.org, 2018. [Online].
Available: https://aiimpacts.org/trends-in-the-cost-of-computing/. [Accessed:
04- May- 2018].
[3] Medium. (2018). 9 Applications of Machine Learning from Day-to-Day Life.
[online] Available at: https://medium.com/app-affairs/9-applications-of-
machine-learning-from-day-to-day-life-112a47a429d0 [Accessed 3 May
2018].
[4] P. Girard, C. Landrault, V. Moreda and S. Pravossoudovitch, "BIST test
pattern generator for delay testing", Electronics Letters, vol. 33, no. 17, p.
1429, 1997.
[5] H. Rahaman, S. Chattopadhyay and S. Chattopadhyay, Progress in VLSI
Design and Test. Berlin: Springer, 2012; M. L. Bushnell and V. D. Agrawal,
Essentials of Electronic Testing, Springer Publications, 2000.
[6] Cs.uoi.gr, 2018. [Online]. Available:
http://www.cs.uoi.gr/~tsiatouhas/CCD/Section_8_1-2p.pdf. [Accessed: 03-
May- 2018].
[7] "Synopsys", Synopsys.com, 2018. [Online]. Available:
https://www.synopsys.com/. [Accessed: 03- May- 2018].
[8] "EDA Tools and IP for System Design Enablement | Cadence", Cadence.com,
2018. [Online]. Available: https://www.cadence.com/. [Accessed: 03- May-
2018].
[9] A. Géron, Hands-on machine learning with Scikit-Learn and TensorFlow.
Beijing [etc.]: O'Reilly, 2017.
[10] J. Bell, Machine learning. Indianapolis: John Wiley & Sons, 2015.
[11] "TensorFlow", TensorFlow, 2018. [Online]. Available:
https://www.tensorflow.org/. [Accessed: 03- May- 2018].
[12] "TensorFlow: Open source machine learning", YouTube, 2018. [Online].
Available: https://www.youtube.com/watch?v=oZikw5k_2FM. [Accessed:
03- May- 2018].
[13] "lecture 30 - Test Generation Methods (Contd.) Boolean Difference and D -
Algorithm", YouTube, 2018. [Online]. Available:
https://www.youtube.com/watch?v=FlwXy6hItSk. [Accessed: 03- May-
2018].
[14] L. Wang, C. Wu and X. Wen, VLSI Test Principles and Architectures.
Burlington: Elsevier, 2006.
[15] "Foundations of Popfly", 2018. [Online]. Available:
http://cdnc.itec.kit.edu/downloads/09_PODEM.pdf. [Accessed: 03- May-
2018].
[16] Nptel.ac.in, 2018. [Online]. Available:
http://www.nptel.ac.in/courses/106103016/27. [Accessed: 03- May- 2018].
[17] "Coursera | Online Courses From Top Universities. Join for Free", Coursera,
2018. [Online]. Available: https://www.coursera.org/learn/machine-
learning/home/welcome. [Accessed: 03- May- 2018].
[18] Infoscience.epfl.ch, 2018. [Online]. Available:
https://infoscience.epfl.ch/record/82307/files/95-04.pdf; [Accessed: 03- May-
2018].
[19] "Boolean Algebra and Reduction Techniques", Grace.bluegrass.kctcs.edu,
2018. [Online]. Available:
https://grace.bluegrass.kctcs.edu/~kdunn0001/files/Simplification/4_Simplifi
cation_print.html. [Accessed: 03- May- 2018].
[20] "Boolean Algebra and Reduction Techniques", Grace.bluegrass.kctcs.edu,
2018. [Online]. Available:
https://grace.bluegrass.kctcs.edu/~kdunn0001/files/Simplification/4_Simplifi
cation_print.html. [Accessed: 03- May- 2018].
[21] Electronic Design Automation for Integrated Circuits Handbook. Boca Raton,
FL: CRC Press, 2016.
[22] "The Bathtub Curve and Product Failure Behavior (Part 1 of
2)", Weibull.com, 2018. [Online]. Available:
http://www.weibull.com/hotwire/issue21/hottopics21.htm. [Accessed: 03-
May- 2018].
[23] "Lec-30 Testing-Part-I", YouTube, 2018. [Online]. Available:
https://www.youtube.com/watch?v=-4XBm5t7_Jg. [Accessed: 03- May-
2018].
[24] "Retrieving an unnamed variable in tensorflow", Stack Overflow, 2018.
[Online]. Available:
https://stackoverflow.com/questions/44639260/retrieving-an-unnamed-
variable-in-tensorflow. [Accessed: 03- May- 2018].
[25] "Project Jupyter", Jupyter.org, 2018. [Online]. Available: http://jupyter.org/.
[Accessed: 03- May- 2018].
[26] "Borye/machine-learning-coursera-1", GitHub, 2018. [Online]. Available:
https://github.com/Borye/machine-learning-coursera-1. [Accessed: 03- May-
2018].
[27] "Read Research2005.pdf", Readbag.com, 2018. [Online]. Available:
http://www.readbag.com/www2-ucy-ac-cy-researche-research2005.
[Accessed: 03- May- 2018].
[28] G. Moore, "Cramming More Components Onto Integrated Circuits",
Proceedings of the IEEE, vol. 86, no. 1, pp. 82-85, 1998.
[29] Synopsys.com, 2018. [Online]. Available:
https://www.synopsys.com/content/dam/synopsys/implementation&signoff/d
atasheets/tetramax-ds.pdf. [Accessed: 03- May- 2018].
[30] Ye Li, Yun-Ze Cal, Ru-Po Yin and Xiao-Ming Xu, "Fault diagnosis based on
support vector machine ensemble," 2005 International Conference on
Machine Learning and Cybernetics, Guangzhou, China, 2005, pp. 3309-3314
Vol. 6. doi: 10.1109/ICMLC.2005.1527514
[31] R. Goldman, K. Bartleson, T. Wood, V. Melikyan and E. Babayan,
"Synopsys' Educational Generic Memory Compiler," 10th European
Workshop on Microelectronics Education (EWME), Tallinn, 2014, pp. 89-92.
doi: 10.1109/EWME.2014.6877402
[32] Synopsys Users Group:
http://www.synopsys.com/Community/SNUG/Pages/default.aspx
Bench file code for Boolean Equations:
Boolean Equation 1:
# 2 inputs
# 1 outputs
# 0 D-type flip flops
# 1 inverters
# 3 gates (2 ANDs + 1 OR)
INPUT(A)
INPUT(B)
OUTPUT(F)
J = NOT(B)
I = AND(A, B)
H = AND(A, J)
F = OR(I, H)
Boolean Equation 2:
# 3 inputs
# 1 outputs
# 0 D-type flip flops
# 2 inverters
# 4 gates (3 ANDs + 1 OR)
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
J = NOT(A)
I = NOT(C)
X = AND(A,B,C)
Y = AND(A,B,I)
Z = AND(J,B,I)
F = OR(X,Y,Z)
Boolean Equation 3:
INPUT(A)
INPUT(B)
OUTPUT(F)
J = NOT(B)
X = OR(A,B)
Y = OR(A,J)
F = AND(X,Y)
Boolean Equation 4:
# 3 inputs
# 1 outputs
# 0 D-type flip flops
# 2 inverters
# 4 gates (3 ANDs + 1 OR)
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
J = NOT(A)
I = NOT(B)
X = OR(A,B,C)
Y = OR(C,J,I)
Z = OR(A,I,C)
F = AND(X,Y,Z)
Boolean Equation 5:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
J = NOT(B)
I = NOT(D)
K = NOT(E)
X = OR(J,C)
Y = AND(B,K)
Z = OR(I,Y)
T = AND(X,Z)
F = OR(A,T)
Boolean Equation 6:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
INPUT(T)
OUTPUT(F)
X = AND(A,B)
Y = AND(B,D,E)
Z = AND(B,C,E,T)
F = OR(X,Y,Z)
Boolean Equation 7:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = OR(B,C)
Y = AND(B,C)
Z = AND(X,Y)
T = AND(A,B)
F = OR(Z,T)
Boolean Equation 8:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = AND(A,B)
Y = AND(B,C)
F = OR(X,Y)
Boolean Equation 9:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = AND(A,B)
Y = AND(B,C)
Z = AND(A,C)
F = OR(X,Y,Z,A)
Boolean Equation 10:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
Y = AND(B,C)
F = OR(A,Y)
Boolean Equation 11:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = NOT(A)
Y = AND(B,C)
F = OR(X,Y)
Boolean Equation 12:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
Y = AND(A,B,C)
F = NOT(Y)
Boolean Equation 13:
INPUT(A)
INPUT(B)
OUTPUT(F)
J = NOT(A)
K = NOT(B)
Y = AND(A,B)
X = AND(J,K)
F = OR(X,Y)
Boolean Equation 14:
INPUT(A)
INPUT(B)
OUTPUT(F)
J = NOT(A)
K = NOT(B)
Y = AND(J,B)
X = AND(A,K)
F = OR(X,Y)
Boolean Equation 15:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
J = NOT(A)
K = NOT(B)
T = NOT(C)
F = OR(J,K,T)
Boolean Equation 16:
INPUT(A)
INPUT(B)
OUTPUT(F)
J = NOT(A)
K = AND(A,B)
F = OR(J,K)
Boolean Equation 17:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
J = NOT(B)
K = AND(C,J)
F = OR(A,K)
Boolean Equation 18:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = AND(A,B)
Y = AND(C,D)
F = OR(X,Y)
Boolean Equation 19:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
J = NOT(B)
I = NOT(C)
X = AND(J,I)
F = OR(A,X,D)
Boolean Equation 20:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
T = NOT(A)
J = NOT(B)
I = NOT(C)
X = AND(A,B,I)
Y = AND(T,J,D)
Z = AND(J,I,D)
F = OR(X,Y,Z)
Boolean Equation 21:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
T = NOT(A)
J = NOT(D)
X = AND(A,B)
Y = AND(C,D)
Z = AND(J,T)
P = OR(X,Y)
F = AND(Z,P)
Boolean Equation 22:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
J = AND(C,D)
X = OR(B,J)
Y = NOT(X)
P = AND(A,C)
Z = OR(Y,P)
F = AND(Z,E)
Boolean Equation 23:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
J = AND(A,B)
X = OR(C,D)
P = AND(J,X)
Y = NOT(P)
F = AND(Y,E)
Boolean Equation 24:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = OR(A,B)
Y = AND(X,C)
F = NOT(Y)
Boolean Equation 25:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = OR(A,B)
Y = OR(A,C)
Z = AND(X,Y)
F = NOT(Z)
Boolean Equation 26:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
P = AND(X,Y,C)
Q = AND(X,B,Z)
R = AND(A,B,Z)
S = AND(A,B,C)
F = OR(P,Q,R,S)
Boolean Equation 27:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
P = AND(X,Y,C)
Q = AND(A,Y)
R = AND(A,C)
F = OR(P,Q,R)
Boolean Equation 28:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(C)
P = AND(A,B)
Q = AND(X,D)
F = OR(P,Q)
Boolean Equation 29:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(B)
Y = NOT(D)
P = AND(A,X)
Q = AND(X,C)
R = AND(C,Y)
S = AND(A,Y)
F = OR(P,Q,R,S)
Boolean Equation 30:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
P = AND(A,B)
Q = AND(A,C)
R = AND(A,D)
F = OR(P,Q,R)
Boolean Equation 31:
INPUT(A)
INPUT(B)
INPUT(C)
OUTPUT(F)
X = NOT(A)
P = AND(X,B)
Q = AND(X,C)
R = AND(B,C)
F = OR(P,Q,R)
Boolean Equation 32:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
P = AND(B,C)
Q = AND(A,D)
R = AND(C,D)
F = OR(P,Q,R)
Boolean Equation 33:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
T = NOT(D)
P = AND(Y,Z)
Q = AND(X,T)
R = AND(Z,T)
F = OR(P,Q,R)
Boolean Equation 34:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
P = AND(Y,C)
Q = AND(X,D)
R = AND(Z,D)
F = OR(P,Q,R)
Boolean Equation 35:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(C)
Y = NOT(D)
P = AND(B,X)
Q = AND(A,Y)
R = AND(C,Y)
F = OR(P,Q,R)
Boolean Equation 36:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(B)
Y = NOT(C)
Z = NOT(D)
P = OR(A,X)
Q = OR(X,C)
R = OR(Y,D)
S = OR(A,Z)
F = AND(P,Q,R,S)
Boolean Equation 37:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(C)
Z = NOT(D)
P = OR(X,B)
Q = OR(B,Y)
R = OR(C,Z)
S = OR(X,D)
F = AND(P,Q,R,S)
Boolean Equation 38:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
T = NOT(D)
P = OR(X,Y)
Q = OR(Y,Z)
R = OR(Z,T)
S = OR(X,T)
F = AND(P,Q,R,S)
Boolean Equation 39:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
P = OR(A,B)
Q = OR(A,C)
R = OR(A,D)
F = AND(P,Q,R)
Boolean Equation 40:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
P = OR(X,B)
Q = OR(X,C)
R = OR(X,D)
F = AND(P,Q,R)
Boolean Equation 41:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(B)
Y = NOT(C)
Z = NOT(D)
P = OR(A,X)
Q = OR(A,Y)
R = OR(A,Z)
F = AND(P,Q,R)
Boolean Equation 42:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
T = NOT(D)
P = OR(X,Y)
Q = OR(X,Z)
R = OR(X,T)
F = AND(P,Q,R)
Boolean Equation 43:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
P = OR(B,C)
Q = OR(A,D)
R = OR(C,D)
F = AND(P,Q,R)
Boolean Equation 44:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
P = OR(Y,C)
Q = OR(X,D)
R = OR(Z,D)
F = AND(P,Q,R)
Boolean Equation 45:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
T = NOT(D)
P = OR(Y,Z)
Q = OR(X,T)
R = OR(Z,T)
F = AND(P,Q,R)
Boolean Equation 46:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
OUTPUT(F)
X = NOT(C)
Y = NOT(D)
P = OR(B,X)
Q = OR(A,Y)
R = OR(C,Y)
F = AND(P,Q,R)
Boolean Equation 47:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
P = AND(A,C)
Q = AND(A,D)
R = AND(A,B)
S = AND(A,E)
T = AND(D,E)
U = AND(D,C)
F = OR(P,Q,R,S,T,U)
Boolean Equation 48:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(A)
Y = NOT(D)
P = AND(X,C)
Q = AND(X,D)
R = AND(X,B)
S = AND(X,E)
T = AND(Y,E)
U = AND(Y,C)
F = OR(P,Q,R,S,T,U)
Boolean Equation 49:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(B)
Y = NOT(C)
Z = NOT(D)
L = NOT(E)
P = AND(A,Y)
Q = AND(A,Z)
R = AND(A,X)
S = AND(A,L)
T = AND(D,L)
U = AND(D,Y)
F = OR(P,Q,R,S,T,U)
Boolean Equation 50:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
H = NOT(D)
I = NOT(E)
P = AND(X,Z)
Q = AND(X,H)
R = AND(X,Y)
S = AND(X,I)
T = AND(H,I)
U = AND(H,Z)
F = OR(P,Q,R,S,T,U)
Boolean Equation 51:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
P = OR(A,C)
Q = OR(A,D)
R = OR(A,B)
S = OR(A,E)
T = OR(D,E)
U = OR(D,C)
F = AND(P,Q,R,S,T,U)
Boolean Equation 52:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(A)
Y = NOT(B)
Z = NOT(C)
H = NOT(D)
I = NOT(E)
P = OR(X,Z)
Q = OR(X,H)
R = OR(X,Y)
S = OR(X,I)
T = OR(H,I)
U = OR(H,Z)
F = AND(P,Q,R,S,T,U)
Boolean Equation 53:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(A)
Y = NOT(D)
P = OR(X,C)
Q = OR(X,D)
R = OR(X,B)
S = OR(X,E)
T = OR(Y,E)
U = OR(Y,C)
F = AND(P,Q,R,S,T,U)
Boolean Equation 54:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
X = NOT(B)
Y = NOT(C)
Z = NOT(D)
H = NOT(E)
P = OR(A,Y)
Q = OR(A,Z)
R = OR(A,X)
S = OR(A,H)
T = OR(D,H)
U = OR(D,Y)
F = AND(P,Q,R,S,T,U)
Boolean Equation 55:
INPUT(A)
INPUT(B)
INPUT(C)
INPUT(D)
INPUT(E)
OUTPUT(F)
P = AND(A,E)
Q = AND(B,C)