

Computers and Electrical Engineering 38 (2012) 1513–1523


A systematic approach to configurable functional verification of HW IP blocks at transaction level ☆

Tomaž Nahtigal a, Primož Puhar b, Andrej Žemva a,*
a Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, 1000 Ljubljana, Slovenia
b Hella Saturnus Slovenija, Letališka 17, 1001 Ljubljana, Slovenia

Article info

Article history: Received 3 March 2011; Received in revised form 14 May 2012; Accepted 14 May 2012; Available online 22 June 2012.

0045-7906/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.compeleceng.2012.05.006
☆ Reviews processed and approved for publication by Editor-in-Chief Dr. Manu Malek.
* Corresponding author. Tel.: +386 1 4768 346; fax: +386 1 426 46 30.
E-mail addresses: [email protected] (T. Nahtigal), [email protected] (P. Puhar), [email protected] (A. Žemva).

Abstract

With demand growing by the day, the complexity of electronic devices is constantly increasing. Since simulation is still the most used approach, functional verification has become one of the major bottlenecks in the design and verification flow.

In this paper we propose a systematic approach to configurable functional verification of electronic devices. Based on a black box approach, it can be applied to any design where behavior can be expressed by a set of functions. It combines simulation- and assertion-based verification into a hybrid verification. The proposed specification-based coverage metric can be configured ranging from a very rapid to an exhaustive verification. The approach uses Transaction Level (TL) modeling to raise the abstraction level, providing faster verification. The results of the proposed design and verification flow, Intellectual Property (IP) and Test Bench (TB) are reusable.

The approach is demonstrated on two case studies: a video-processing IP block and a universal serial bus host controller. The results consider both simulation times and TB generation times.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Even though electronic devices already combine many different functions, the market continuously demands new functionality. In the future, the functionality of electronic devices will only continue to grow, meaning that their complexity will increase drastically and more effort will be required in the design and verification process.

Simulation is still the most used approach to functional verification even though new verification techniques have been developed. However, due to the increased complexity of electronic devices, verification can take up to 80% of the device design time and cost when using Register Transfer Level (RTL) and Hardware (HW) models for simulation ([1–3]). Considering the extreme time-to-market pressures, there is no doubt that advanced new solutions will have to be provided.

One possible solution is in raising the level of abstraction. This approach has been successful in the past (transistor level → gate level → RTL). Raising the level of abstraction to Transaction Level Modeling (TLM) and using SystemC ([4]) to describe the models can provide many benefits. Device behavior can be expressed using high-level functions and high-level data types can be used for communication. The details of device architecture are omitted, allowing for a simpler and faster device model. This also permits a highly systematic approach to device modeling and verification. The designer is allowed to focus on the functionality first and on the implementation second. As a consequence, the verification process is also divided into stages. In the first stage, the high-level functional model is verified.


Errors in the functional model that are propagated to the RTL level, or even to the gate level, are very costly to repair in terms of time and money. So it is crucial to find and fix these errors in the early phases of the design and verification process. In the following stage, the RTL model is verified, with more emphasis placed on implementation of the verified functionality.

Additional improvement can be achieved by combining formal Assertion-Based Verification (ABV) with Simulation-Based Verification (SBV), resulting in a hybrid verification. Assertions are a set of formal rules defined by the designer that are checked during the simulation. Assertions can help pinpoint an error in the design and find errors that would otherwise be very difficult to detect.

The models used in this paper are described using SystemC. SystemC allows the HW description in C++, which is the basis for SW/HW co-verification. Therefore, verified functionality can be implemented either in SW or HW.

A key concept in modern digital design is design reuse ([5]). Modeling the device with functional blocks and verifying each block individually allows the designer to construct a database of verified blocks for future use. The Test Bench (TB), constructed for functional verification, can be reused for verification at lower levels of abstraction with the help of translators. The reuse of verified IP blocks and TB results allows for considerable savings in time and cost.

1.1. Design and verification flow

The design and verification flow shown in Fig. 1 is composed of two parallel flows: the design flow and the verification flow. The result of the design flow is a TL model and that of the verification flow is a TB.

The basis for an effective design and verification flow is a good specification, including both a design specification and a verification specification. The design specification defines the device behavior. The verification specification, also known as the verification plan, defines the verification effort and operating conditions, thus helping us to avoid unnecessary verification effort.

In order to measure the verification success, verification coverage metrics are used. There are as many different coverage metrics as there are verification techniques. We can measure the code coverage, toggle coverage, state coverage, etc. None of these metrics actually checks whether the design meets its specification. The verification of a high-level functional TL model requires a metric that is not based on knowledge of architecture or implementation. We propose a specification-based Functional Coverage Metric (FCM) which measures compliance of the design with its specification. However, even though metrics are a useful indicator, a full coverage does not necessarily mean the absence of errors in the design ([2]). The FCM definition is included in the verification specification. It defines the verification effort and is the basis for TB generation.

The success of a simulation-based verification depends on the quality of the TB used. The easiest to construct and most commonly used is a directed TB, which consists of several directed tests. They are used to test specific corner cases. Though they are usually fast, the overall coverage of a directed TB is relatively low. Directed tests are usually bound to a specific device model and in general are not reusable. Random tests are more generally applicable. They can be used on any device model and can be a useful addition to directed tests. An effective random TB takes more time but the overall coverage is greater and the verification space is covered more evenly.

Fig. 1. Design and verification flow (blocks: start, design specification, verification specification, TL model refinement, TB generation, TL model OK?, required coverage reached?, end).

The design flow in a top-down approach is called refinement. In the design and verification flow from Fig. 1, the design specification is refined into a TL model. This is usually a manual procedure, especially when refining a specification written in a natural language. At the end, correctness of the model is verified by using a previously generated TB.

1.2. Related work

Several design and verification approaches have been proposed in the literature. Bombieri et al. ([6,7]) propose the use of a top-down approach from TL to RTL. The approach is supported by a hybrid verification flow using simulation and ABV. The coverage metric used in their work is based on a mutation model and checks whether all the injected faults in the architecture have been detected. They propose the use of a reusable TB and assertions but do not provide the description of the generation procedure.

Habibi et al. ([8,9]) favor a top-down design and verification flow from TL towards the RTL and RTL verification. ABV is proposed using a Property Specification Language (PSL) and their own Abstract state machine Language (AsmL). The TBs are generated randomly or using a generic algorithm.

Da Silva et al. ([10–12]) propose VeriSC, a simulation-based co-verification of TL and RTL models. TBs are generated using the SystemC verification library while the coverage metric is not specified. Kakoee et al. ([13]) propose a transaction-level verification of the communication between a system of IPs using ABV. A new coverage metric is proposed, accompanied by a TB generation proposal. Ara and Suzuki ([14]) propose a transaction-level SBV achieving better verification at the signal level by considering "toggle coverage". Goudarzi et al. ([15,16]) propose a design and verification flow using ODYSSEY. Their flow starts by partitioning a C++ code into SW (C++) and HW (SystemC). The correctness of the transformation is confirmed by co-simulation of both models and ABV. The design flow is highlighted. Lahbib et al. ([3]) discuss techniques for translating RTL models modeled in a Hardware Description Language (HDL) into TL models modeled in SystemC to enable faster simulation and verification at the Electronic System Level (ESL). They propose the translation of the existing PSL code into SystemC in order to be used for ABV.

TLM has also recently become an industry standard, quickly achieving acceptance within companies ([4]). Electronic Design Automation (EDA) companies ([17–19]) are developing tools and IP libraries that enable TL design and verification. ESL tools enable simulation of SystemC TL models, and the co-simulation and ABV of HDL RTL and SystemC TL models.

1.3. Our contribution

In this paper, we propose a new systematic approach to configurable functional verification of HW IP blocks. It can be applied to any design that can be modeled as a set of high-level functions.

Like the design and verification flow proposed by Bombieri et al. ([6,7]), ours is a top-down approach combining SBV with ABV, resulting in a hybrid verification. The novelty of our approach is a new specification-based FCM measuring the compliance of the design with its specification and providing a basis for TB generation. The design and verification process is efficiently divided into two consecutive stages: in the first stage the designer is allowed to focus on functionality, and in the second on implementation. The verification process can be adjusted to perform everything from a rapid to an exhaustive verification. The TB used for verification on TL can, with the help of a transactor, be reused on RTL, thus reducing the effort needed in the subsequent design and verification flow stages.

The rest of the paper is organized as follows. In Section 2, the approach is explained in more detail. In Sections 3 and 4, two examples are shown. Section 5 includes results of our work and Section 6 concludes the paper.

2. Verification flow

The verification flow starts with the verification specification, commonly known as the verification plan. It includes definitions of the SBV and ABV. The SBV effort is defined by the FCM, while the ABV effort is defined by the assertions. The verification specification is usually written in a non-formal way, i.e. in a natural language such as English.

The functional verification is carried out by the verification environment as shown in Fig. 2 ([20]). The verification environment is intended to simulate the operational environment surrounding the Design Under Verification (DUV) and therefore consists of a test controller, monitor, evaluator, stimulator, responder, function channel and operation channel. The stimulator models all the devices controlling the DUV by initiating function channel transactions. The responder models all the devices the DUV controls by responding to operation channel transactions ([20]). The verification environment is written in SystemC and is modeled using the TLM paradigm.

The TLM abstraction levels are defined separately for communication and computation, as shown in Fig. 3 ([21,22]). The abstraction level of the computation model is defined by the timing accuracy and can range from a cycle-accurate to an untimed model. The data granularity defining the communication abstraction can vary from the least abstract bus size model through a bus packet model to the most abstract application packet model. The most common intersections between the TLM communication and computation abstraction are highlighted.

Fig. 2. Verification environment (test controller; stimulator and responder connected to the DUV over the function and operation channels; monitor/evaluator).

Fig. 3. TLM paradigm (timing accuracy: cycle-accurate, approximately-timed, untimed; data granularity: bus size, bus packet, application packet; highlighted intersections: CA - RTL, BCA, PVT, PV, algorithmic model).

The least abstract TL model is the Cycle-Accurate (CA) model. It uses a four-state digital bus for communication and is triggered by a clock signal. It is at a similar level of abstraction as the RTL model. The most abstract TL model is the untimed algorithmic model. Computation time is ignored and the model is triggered by transaction events. The algorithmic model uses application packets for communication. An application packet can represent a color tone, serial data stream, picture or even a 5-min video. When an application packet represents a bus transaction, the model is called the Programmer's View (PV) model. The other two most used TL model types using approximate timing are called Bus-Cycle Accurate (BCA) and Programmer's View plus Timing (PVT) models. They are beyond the scope of this paper and are described in detail in ([22]).

The proposed verification environment is modeled by using either the Algorithmic or Programmer's View models depending on the desired communication abstraction level. The lack of communication and computation detail enables a very fast simulation and high-level modeling of functions.

The verification runs as follows. Each transaction is a combination of a request and response. The stimulator initiates the first transaction over the function channel by sending a Test Vector (TV) in the form of a function request to the DUV. The DUV processes the request by initiating the first operation transaction with the responder over the operation channel. The responder concludes the first operation transaction by returning its response to the DUV and awaits the next transaction. The DUV and responder exchange several operation transactions to complete the function demanded by the stimulator. After the last operation response is returned, the DUV returns its function response to the stimulator. The monitor monitors the function and operation channel transactions and the evaluator checks the correctness of transactions with assertions and measures the verification coverage. The test controller controls the verification environment according to the TB.
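To make this flow concrete, the following plain C++ sketch mimics a single verification step under the black box view; the type and function names (FunctionRequest, duv_process, respond, and so on) are our illustrative inventions, whereas the actual environment is built from SystemC modules and TLM channels as described above.

#include <vector>
#include <iostream>

// Illustrative high-level transaction payloads (assumed names, not the paper's types).
struct FunctionRequest   { std::vector<int> x; };   // TV applied to the function channel (X)
struct FunctionResponse  { int y; };                // DUV output on the function channel (Y)
struct OperationRequest  { int z; };                // DUV request on the operation channel (Z)
struct OperationResponse { int w; };                // responder reply on the operation channel (W)

// Responder: models the devices controlled by the DUV; answers every operation request.
OperationResponse respond(const OperationRequest& req) {
    return { req.z + 1 };                           // placeholder behaviour
}

// DUV (black box): completes one function by exchanging several operation transactions.
FunctionResponse duv_process(const FunctionRequest& req) {
    int acc = 0;
    for (int xi : req.x) {
        OperationResponse r = respond({ xi });      // one operation transaction per input value
        acc += r.w;
    }
    return { acc };                                 // function response returned to the stimulator
}

int main() {
    FunctionRequest tv{ {9, 3, 7} };                // stimulator initiates the function transaction
    FunctionResponse resp = duv_process(tv);        // monitor/evaluator would observe both channels
    std::cout << "function response y = " << resp.y << "\n";
}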

2.1. Systematic approach

The DUV is seen as a black box. Only inputs (X, W) and outputs (Y, Z) can be observed or controlled. Its functionality F can be expressed with several high-level functions fi of inputs X and W, where X represents the function request from the stimulator and W represents the operation response from the responder:

F = \{ f_i(X, W) \}, \quad i \in \{1, \ldots, I\}. \qquad (1)

Inputs X and W consist of several variables xj and wk:

\{X, W\} = \{ x_j, w_k \}, \quad j \in \{1, \ldots, J\}, \; k \in \{1, \ldots, K\} \qquad (2)

where J and K are the number of variables in X and W, respectively. Since function fi can be independent of a certain variable, the variable is excluded and subsets Xi and Wi for function fi are defined (Fig. 4):

f_i(X_i, W_i), \quad X_i \subseteq X, \; W_i \subseteq W. \qquad (3)

Fig. 4. Device functional model (functions f1(X1,W1), f2(X2,W2), ..., fI(XI,WI); stimulator connected through X and Y, responder connected through Z and W).

Ji and Ki are the sizes of the subsets. For a better illustration, Xi and Wi are merged into Vi, which contains all function variables:

V_i = \{ X_i, W_i \} = \{ v_{il} \}, \quad l \in \{1, \ldots, L_i\}, \quad L_i = J_i + K_i. \qquad (4)

Since most devices can be observed as a black box and modeled using high-level variables for inputs and outputs, the proposed systematic approach to functional verification can be applied to them.

For an even better illustration we will focus on the verification of a single function f with variable subsets X and W merged into V.

2.2. Configurable verification

Each variable vl out of V has a defined set of possible values stated in the verification specification: vl ∈ (ol, pl). An exhaustive verification of function f considers all possible combinations of values within its set of variables. When using non-discrete variables (e.g. analog signals), there is an unlimited number of variable value combinations, resulting in an endless exhaustive verification. Even when using discrete or quantized variables with large sets, an exhaustive verification can take too long.

A faster verification can be achieved by grouping values of a variable into subsets (i.e. bags) and only considering their combinations ([23]). A set of a variable is divided into D bags. Since different variables can have different value sets, which themselves can have different divisors (one can have more bags than the other), we define the divisor of the vl value set as Dl. The bags for variable vl can be expressed as:

B_l = [\, B_l(1) \; B_l(2) \; B_l(3) \; \cdots \; B_l(D_l) \,]. \qquad (5)

The simplest division is linear, as shown in Fig. 5, with a subset size of ql = (pl − ol)/Dl, where ol and pl are the beginning and end of the value set of variable vl, respectively. The set of each bag can be expressed as:

B_l(d_l) = (\, o_l + (d_l - 1) \cdot q_l, \; o_l + d_l \cdot q_l \,) \qquad (6)

where bag index dl ∈ {1, 2, . . ., Dl} and Bl(dl) refers to the dl-th bag of the l-th variable.

By defining the divisors, the FCM is defined. The FCM configuration is in the hands of the designer. On one hand, when we want to have a very fast verification, Dl must be set to a low number. On the other hand, when we want to have an exhaustive verification, Dl must be set to the highest value possible, i.e. to the number of values of the variable. The number of combinations (M) for function f with L variables is given by:

M(f) = M = \prod_{l=1}^{L} D_l. \qquad (7)
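As a small illustration of Eqs. (5)–(7), the plain C++ sketch below builds a linear bag division for each variable and computes the resulting number of combinations M; the struct and variable names are our own, and the three variables and divisors used here are illustrative, not taken from the paper.

#include <cstdio>
#include <utility>
#include <vector>

// Linear bag division of one variable's value set (ol, pl) into Dl bags, cf. Eqs. (5) and (6).
struct LinearBags {
    double o, p;                                   // beginning and end of the value set
    int D;                                         // divisor, i.e. the number of bags
    std::pair<double, double> bag(int d) const {   // d in {1, ..., D}
        double q = (p - o) / D;                    // subset size ql
        return { o + (d - 1) * q, o + d * q };
    }
};

int main() {
    // Three hypothetical variables with divisors 2, 3 and 4.
    std::vector<LinearBags> v = { {0, 100, 2}, {0, 100, 3}, {0, 255, 4} };

    long long M = 1;                               // Eq. (7): M is the product of all divisors
    for (const LinearBags& b : v) M *= b.D;
    std::printf("number of bag combinations M = %lld\n", M);   // prints 24
}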

2.3. Effective test bench generation

An effective TB generation is one of the main goals of our work. For function f, a TB, consisting of test vectors TVm, has to be generated. A TB must contain all the different combinations of bags (8). Therefore, its size equals M (7).

Fig. 5. Value set and subsets (the value set of variable vl spans from ol to pl and is divided linearly into subsets of size ql: ol, ol+ql, ol+2ql, ol+3ql, . . ., pl).


TB = \{ TV_1, TV_2, \ldots, TV_M \}. \qquad (8)

The TV is a set of values applied to V (4). The size of the TV therefore equals L. The TV can be expressed as:

TV = \{ b_1, b_2, \ldots, b_l, \ldots, b_L \}, \qquad (9)

where bl is the value applied to variable vl during the verification process. Each TV consists of values selected from a different combination of bags. Value bl(dl) is randomly selected out of Bl(dl) (6):

b_l(d_l) = \mathrm{rand}(B_l(d_l)). \qquad (10)

Although each combination of bags is executed only once, the random selection ensures that more than one value from each bag is used across the TB. The SystemC verification library enables the random generation of a TV ([4]).

2.4. A simple example

With the following example, we will illustrate the verification flow of a simple device. We want to design a device that receives data, computes the result and sends it over the RS-232 serial interface. The device is also connected to a sensor over the RS-485 network. The device receives the initial value over the RS-232 interface, requests the first and the second value from the sensor and returns the sum of all three values over the RS-232 interface.

We raise the level of abstraction by representing the digital serial communication data with an unsigned integer and eliminating the notation of time. The high-level design specification now states: "The device will be given an integer. The device should then request two integers from the sensor, one after another. At the end, it should return the sum of all three integers." The exemplary verification specification states: "The input value set is (9, 17) and the value set of the sensor is (7, 16). The input data set should be divided into three subsets and the sensor data set into two subsets."
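Under these assumptions the whole device can be modeled with a single high-level function; the short plain C++ sketch below is our illustration (in the actual flow this would be a SystemC TL module), with request_from_sensor standing in for the two operation transactions on the RS-485 channel.

#include <iostream>

// Stand-in for one operation transaction on the RS-485 channel; in the verification
// environment these values (w1, w2) are supplied by the responder from the TV.
unsigned request_from_sensor(unsigned sample_index) {
    return sample_index == 0 ? 8u : 11u;           // placeholder sensor values
}

// High-level device function: the initial value x1 plus the two sensor values w1 and w2.
unsigned device_function(unsigned x1) {
    unsigned w1 = request_from_sensor(0);          // first operation transaction
    unsigned w2 = request_from_sensor(1);          // second operation transaction
    return x1 + w1 + w2;                           // function response y1 over RS-232
}

int main() {
    std::cout << device_function(9) << "\n";       // prints 28, matching the first TV of the TB constructed below
}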

The verification flow runs as follows. First, we define variable sets X, W and V. The function channel consists of two variables: input variable x1 and the result y1. The operation channel also consists of two variables: the first and the second operation response, w1 and w2. In this way, we can control all sequential states of the variable. Since we are only dealing with one function, variable set V consists of all three variables:

V = \{ x_1, w_1, w_2 \}. \qquad (11)

Second, we define the bag vectors according to the specified dividers. Divider D1 equals three, which results in three bags for variable x1 as presented in the first row of Eq. (12). The variable sets of w1 and w2 are also divided linearly by dividers D2 and D3 into two bags each. The bags result in:

B_1 = [\, (9, 11) \; (12, 14) \; (15, 17) \,]
B_2 = [\, (7, 11) \; (12, 16) \,]
B_3 = [\, (7, 11) \; (12, 16) \,] \qquad (12)

Finally, we can construct a TB. Its size can be calculated using (7), resulting in twelve TVs. The size of the TV is three since the TV shall be applied to three variables. The first TV is constructed using the first combination of bags. The complete TB is shown in Table 1.

After generating the TB, the first TV is presented to the DUV. Variable x1 is assigned the value 9, w1 the value 8 and w2 the value 11. As a result, the DUV should respond with variable y1 holding the value 28.

The verification flow, i.e. the TB generation, is fully automated after defining function variables, their sets and divisors.
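A minimal sketch of such an automated generation, following Eqs. (8)–(10), is given below in plain C++; the paper itself relies on the SystemC verification library, so this is only our illustration under the same assumptions. Every combination of bags is enumerated like a mixed-radix counter and one value is drawn at random from each selected bag.

#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Discrete value sets (ol, pl) and divisors Dl of the example above.
    struct Var { int o, p, D; };
    std::vector<Var> v = { {9, 17, 3}, {7, 16, 2}, {7, 16, 2} };

    std::mt19937 rng(1);                           // fixed seed so the TB is reproducible
    long long M = 1;
    for (const Var& x : v) M *= x.D;               // TB size, Eq. (7)

    std::vector<int> d(v.size(), 0);               // current bag index per variable (0-based)
    for (long long m = 0; m < M; ++m) {
        std::vector<int> tv;                       // one TV, Eq. (9)
        for (size_t l = 0; l < v.size(); ++l) {
            int span = v[l].p - v[l].o + 1;        // number of discrete values in the set
            int lo = v[l].o + d[l] * span / v[l].D;
            int hi = v[l].o + (d[l] + 1) * span / v[l].D - 1;
            std::uniform_int_distribution<int> pick(lo, hi);   // Eq. (10): rand(Bl(dl))
            tv.push_back(pick(rng));
        }
        std::printf("TV(%lld): %d %d %d\n", m + 1, tv[0], tv[1], tv[2]);
        // Advance to the next combination of bags (mixed-radix counter).
        for (size_t l = 0; l < v.size() && ++d[l] == v[l].D; ++l) d[l] = 0;
    }
}

With the divisors of this example, the sketch produces twelve TVs whose bag combinations match those of Table 1; the concrete values differ because they are drawn at random within each bag.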

2.5. Assertion-based verification

ABV is a formal type of verification commonly used on RTL with PSL. ABV is also possible using SystemC and runs as follows. All functional design properties that are expressible in a formal way are assigned an assertion. During simulation, the evaluator checks the correctness of these assertions. In case of a failed assertion, a report is generated pinpointing the error in the design. Therefore, failed assertions are very easy to debug. Since some functionality aspects are easier to verify by ABV and others by SBV, a hybrid verification combines the advantages of both types.

Table 1
Exemplary TB.

TV    x1   w1   w2        TV     x1   w1   w2
(1)    9    8   11        (7)    10    7   15
(2)   13    7    7        (8)    14   11   13
(3)   15   10    8        (9)    17   10   16
(4)   10   12   11        (10)    9   15   16
(5)   12   16   10        (11)   13   16   12
(6)   15   15    9        (12)   16   12   14


Assertions are realized as C++ functions. Consistent with the black box approach, they can only check inputs and outputs of the DUV. They are triggered by the evaluator on function and/or operation transactions, depending on their purpose.

The assertion written for the simple example presented in Section 2.4 checks whether the returned variable y1 actually equals x1 + w1 + w2.
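A possible realization of this assertion as a plain C++ check is sketched below; the function name and the report format are our own, and in the actual environment the evaluator triggers such checks on function transactions.

#include <cstdio>

// Assertion for the simple example: the function response must equal x1 + w1 + w2.
// On failure, a report is generated pinpointing the error in the design.
bool assert_sum(unsigned x1, unsigned w1, unsigned w2, unsigned y1) {
    if (y1 == x1 + w1 + w2) return true;
    std::fprintf(stderr, "ASSERTION FAILED: y1 = %u, expected x1 + w1 + w2 = %u\n",
                 y1, x1 + w1 + w2);
    return false;
}

int main() {
    assert_sum(9, 8, 11, 28);   // holds for the first TV of Table 1
    assert_sum(9, 8, 11, 27);   // fails and produces a report
}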

3. Video-processing IP block

Our first case-study describes a model of a video-processing IP block. The model in question includes three functions operating over one rectangular object in the display space: Fill (f1), Copy (f2) and CopyTransparent (f3) with the function request variables X. The functions are demonstrated in Fig. 6. The Fill function fills the object pixel-by-pixel with the Value (four times, lower left corner). The Copy function copies the object from the origin position (center) to the target position (lower right corner). Finally, the CopyTransparent function also copies the object, but only pixels whose values in the origin position (lower left corner) are not zero (non-black, upper left corner).

The described operations take place in the responder modeling video memory. While the function channel transmits the TV consisting of variable values, the operation channel transports memory control variables (address, data and read/write signal) to the responder. Operations include pixel read and pixel write. The Fill function executes one write operation and the Copy function one read and one write operation per pixel. The CopyTransparent function first reads the origin position pixel and then the target position pixel. If the origin position pixel is not zero, then the origin position pixel is written to the target position pixel. On the other hand, if the origin position pixel equals zero, then the target position pixel is written back. The CopyTransparent function therefore requires two read and one write operation per pixel. The total number of operations depends on the selected function and pixel count. The pixel count is defined by variables and equals the object size that lies inside the display.
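The per-pixel operation sequences of the three functions can be summarized by the following plain C++ sketch; read_pixel and write_pixel are our stand-ins for operation-channel transactions towards the video-memory responder, not the paper's actual interface.

#include <array>
#include <cstdint>

// Trivial stand-in for the video-memory responder (one byte per pixel address).
std::array<uint8_t, 256 * 256> video_mem{};
uint8_t read_pixel(unsigned addr)             { return video_mem[addr]; }   // read operation
void    write_pixel(unsigned addr, uint8_t v) { video_mem[addr] = v;    }   // write operation

// Fill (f1): one write operation per pixel.
void fill_pixel(unsigned target, uint8_t value) {
    write_pixel(target, value);
}

// Copy (f2): one read and one write operation per pixel.
void copy_pixel(unsigned origin, unsigned target) {
    write_pixel(target, read_pixel(origin));
}

// CopyTransparent (f3): two reads and one write per pixel; a black (zero) origin pixel
// leaves the target unchanged by writing its old value back.
void copy_transparent_pixel(unsigned origin, unsigned target) {
    uint8_t src = read_pixel(origin);
    uint8_t dst = read_pixel(target);
    write_pixel(target, src != 0 ? src : dst);
}

int main() {
    fill_pixel(0, 42);
    copy_pixel(0, 1);
    copy_transparent_pixel(1, 2);
}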

The variables used in this case-study are discrete. They are listed in Table 2 along with their value sets. The Fill function has a different set of input variables than the Copy and CopyTransparent functions. Since Fill fills the object, the origin position is irrelevant. And since Copy and CopyTransparent copy objects, the fill Value is irrelevant and not included in the TB.

Three FCMs with different dividers were defined for verifying the functions, as shown in Table 2. The ABV checks for the number of operation transactions, the correct sequence of read/write transactions and possible address overflow.

4. Universal serial bus host controller

In our second case-study, we modeled a full-speed Universal Serial Bus (USB) host controller according to the USB 2.0 Specification [24]. The USB host controller consists of several layers of functionality that are defined as follows.

Fig. 6. Result of the functions over an object in a display.

Table 2
Video-processing IP block variables, value sets, function associations and dividers.

Variable         Value set            Function             FCM
                                      f1     f2     f3     A   B   C
Display width    {1, 2, . . ., 256}   X1     X2     X3     2   3   4
Display height   {1, 2, . . ., 256}   X1     X2     X3     2   3   4
Object width     {1, 2, . . ., 256}   X1     X2     X3     2   3   4
Object height    {1, 2, . . ., 256}   X1     X2     X3     2   3   4
Origin width     {0, 1, . . ., 255}          X2     X3     2   3   4
Origin height    {0, 1, . . ., 255}          X2     X3     2   3   4
Target width     {0, 1, . . ., 255}   X1     X2     X3     2   3   4
Target height    {0, 1, . . ., 255}   X1     X2     X3     2   3   4
Value            {0, 1, . . ., 255}   X1                   2   3   4


The first layer deals with the parallel-to-serial conversion and device detection. The second layer completes the packet, where the synchronization field (Sync) and the end of packet (EOP) are added to the data. The third, i.e. the packet layer, generates USB packets and makes a cyclic redundancy check (CRC) when required. The fourth, i.e. the transfer layer, provides different types of transfers. The fifth layer is the functionality layer allowing for connection to the endpoints. The upper layers are application layers.

Our model does not include all the features described in the specification, only some of the full-speed USB host controller. Our intention was to show how selected features of certain layers can be modeled. The layers that were modeled are the Packet and Transfer layers given in Chapter 8 (Protocol Layer) of the specification [24].

The basis of the USB transfer is the USB packet. It consists of common USB packet fields, i.e. the Sync field, Packet ID (PID) field, address (ADDR) field, endpoint (ENDP) field, data (DATA) field, CRC field and EOP field. The PID field identifies the packet, the ADDR field indicates the device the packet is designated for, the ENDP field specifies the device endpoint, the DATA field contains the data and the CRC field detects errors. The Sync and EOP fields are beyond the scope of this paper since they are a part of another layer.

In order to improve the verification speed, we chose not to calculate the CRC and to avoid binary representation of the PID field. Two types of CRC are required for the USB packet: CRC5 and CRC16. The high-level model will specify only the CRC type. The PID packet field is used to identify the packet. It first identifies whether the packet is of the Token, Data or Handshake type. Our model will only distinguish between the IN and OUT Token packet type, the DATA0 and DATA1 Data packet type, and the ACK, STALL and NAK Handshake packet type. The PID was modeled with a high-level data type and only expresses the PID type. The reader can find out more about the CRC calculation and PID in the specification [24].

The Packet layer is responsible for generating USB packets. It distinguishes between the Token, Data and Handshake packets. There are some other packet types defined in the USB specification that we chose to omit. Each packet type has a different structure, as can be seen in Fig. 7.

The Transfer layer is responsible for sending and receiving the USB packets. It distinguishes between several types of data-flow, of which we will only model the Bulk transfer used for a large data transfer. The model includes two functions: the Bulk IN (f4) and OUT (f5) transfer, with both input and response variables (Table 3). The Bulk IN transfer starts with the host issuing the IN Token packet. The addressed function responds with the DATA0 or DATA1 Data packet containing the data from the target endpoint, or with a STALL or NAK Handshake packet reporting a busy function or false reception, respectively. In the case of the Data packet, the host responds with an ACK Handshake packet. The Bulk OUT transfer starts with the host issuing the OUT Token packet and a subsequent DATA0 or DATA1 Data packet. The function responds with the ACK, STALL or NAK Handshake packet, reporting an acknowledge, busy function or false reception, respectively. Additional functionality can be added later.
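To illustrate how the packet fields and the Bulk IN handshake can be expressed with high-level data types, a plain C++ sketch follows; the enumerations, the struct layout and the helper names are our assumptions rather than the paper's SystemC code, and the CRC field carries only the CRC type, as explained above.

#include <cstdint>
#include <vector>

// High-level packet model: the PID is expressed only as a type, the CRC only as its kind.
enum class Pid { IN, OUT, DATA0, DATA1, ACK, STALL, NAK };
enum class Crc { NONE, CRC5, CRC16 };

struct Packet {
    Pid pid;
    uint8_t addr = 0;                 // Token packets only
    uint8_t endp = 0;                 // Token packets only
    std::vector<uint16_t> data;       // Data packets only
    Crc crc = Crc::NONE;
};

// Stand-in for the addressed function (device endpoint) modeled by the responder.
Packet function_reply(const Packet& token) {
    return { Pid::DATA0, token.addr, token.endp, {42}, Crc::CRC16 };
}

// Bulk IN transfer (f4): IN Token, then DATA0/DATA1 (acknowledged by the host) or STALL/NAK.
bool bulk_in(uint8_t addr, uint8_t endp, std::vector<uint16_t>& out) {
    Packet token{ Pid::IN, addr, endp, {}, Crc::CRC5 };
    Packet reply = function_reply(token);
    if (reply.pid == Pid::STALL || reply.pid == Pid::NAK)
        return false;                 // busy function or false reception
    out = reply.data;
    Packet ack{ Pid::ACK };           // host concludes the transfer with an ACK Handshake
    (void)ack;
    return true;
}

int main() {
    std::vector<uint16_t> data;
    return bulk_in(5, 1, data) ? 0 : 1;
}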

Fig. 7. USB packets (Token packet: Sync, PID, ADDR, ENDP, CRC, EOP; Data packet: Sync, PID, DATA, CRC, EOP; Handshake packet: Sync, PID, EOP).

Table 3
USB host controller variables, value sets, function associations and dividers.

Variable    Value set             Function      FCM
                                  f4     f5     A    B    C
Addr        {0, 1, . . ., 127}    X4     X5     32   64   128
EndPoint    {0, 1, . . ., 15}     X4     X5     4    8    16
Data        {0, 1, . . ., 1023}          X5     16   32   1024
DataSlot    {0, 1}                X4     X5     2    2    2
Response    {0, 1, 2}             W4     W5     3    3    3


This model uses both function request (X) and operation response (W) variables. The variable Response has the values 0, 1 and 2, representing ACK, STALL and NAK, respectively.

The USB host controller can communicate with up to 128 devices with 16 endpoints each. In order to verify the functionality with all addresses and endpoints, the responder connected to the DUV has to model 128 devices.

Three different FCMs were composed for verifying the functions. They are presented in Table 3. The first two FCMs were selected randomly. The third FCM presents an exhaustive verification with the exception of the variable Data.

The ABV checks for the correct packet content, packet sequence, address, data, endpoint and CRC type.

Additional functionality in the form of functions (f6, . . .) representing other transfer types can be easily added to the model. Also, additional packet types can be modeled and verified.

5. Results

The verification environment, two device models and assertions were modeled in SystemC and the TBs were generated using the SystemC verification library. All the models were compiled using Microsoft Visual Studio 2008. The simulations were executed using an Intel Core 2 Quad Q6600 processor at 2.4 GHz with 4 GB of DDR2 RAM running Microsoft Windows Vista, with each simulation occupying only one core.

In Sections 3 and 4, we have defined three different FCMs for each of the five functions modeled. This means that 15 different TBs had to be generated, varying in size due to the different numbers of function variables and different FCMs. The corresponding sizes of the TBs are presented on the left side of Table 4, while the times required for their generation are shown in the middle of Table 4.

Following the TB generation, the models were verified using the generated TBs. The simulation times required for verification are presented on the right side of Table 4. Table 5 presents the average simulation time of a single TV for each TL model and the average simulation times of a single TV for the corresponding RTL models modeled in SystemC and VHDL, respectively. The results for the RTL models have been taken from previous research ([25,26]). The average simulation times for single TVs only depend on the function and are independent of the FCM selection. We can see that some functions execute much faster than others. For example, function f3 requires approximately 700 times longer to simulate a single TV than f4. The results also clearly show that the TLM models simulate much faster than the RTL models, hence supporting the use of a higher abstraction level of the device models.

Further results of the TB generation are presented in Fig. 8. The figure shows the times required for an average single TV generation.

Table 4
TB sizes, simulation and TB generation times for different functions and FCMs.

Function   TB sizes                  TB generation times            Simulation times
           A       B       C         A         B         C          A         B        C
f1         128     2187    16.4k     0.021 s   0.290 s   2.01 s     10.2 s    167 s    1240 s
f2         256     6561    65.5k     0.046 s   1.10 s    10.0 s     38.9 s    1000 s   9800 s
f3         256     6561    65.5k     0.047 s   1.07 s    10.3 s     60.2 s    1480 s   14700 s
f4         768     3072    12.3k     0.081 s   0.320 s   1.26 s     0.260 s   1.03 s   4.05 s
f5         12.3k   98.3k   12.6M     1.22 s    9.47 s    1210 s     4.83 s    38.7 s   4980 s

Table 5
Average simulation times of a single TV for different types of models.

Function   TLM        RTL (SystemC)   RTL (VHDL)
f1         77.2 ms    628 ms          1600 ms
f2         151 ms     1230 ms         3120 ms
f3         228 ms     1850 ms         4710 ms
f4         0.330 ms   1.84 ms         2.15 ms
f5         0.390 ms   2.18 ms         2.54 ms

Fig. 8. Single TV generation times for different functions and FCMs (y-axis: single TV generation times in µs, approximately 80–200 µs; x-axis: FCM A, B, C; series: f1–f5).


It shows that the TV generation times depend significantly upon the TV size and that they are relatively independent of the FCM selection, especially for FCMs with more than 1000 TVs. Furthermore, the TB generation times represent less than 0.2% of the total verification time of f1, f2 and f3 and can be ignored. However, this is not the case with f4 and f5, where TB generation occupies up to 30% of the total verification time.

Having shown that the time required to simulate a single TV is independent of the FCM selection, we can extrapolate the results of our work to an FCM representing an exhaustive verification. We can conclude that an exhaustive verification of f3 on the same computer would take approximately 1.3 × 10^11 years, while it would take 1.5 × 10^2446 years to exhaustively verify f5.
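For f3 this extrapolation can be reproduced from Tables 2 and 5 under the assumption that the exhaustive FCM sets every divisor to the full size of its value set, i.e. 256 for each of the eight variables associated with f3:

M_{exh}(f_3) = 256^{8} \approx 1.8 \times 10^{19}, \qquad
T_{exh}(f_3) \approx 1.8 \times 10^{19} \times 228\,\mathrm{ms} \approx 4.2 \times 10^{18}\,\mathrm{s} \approx 1.3 \times 10^{11}\,\mathrm{years}.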

Besides SBV, ABV was performed, too. No coverage metric was used for ABV since assertions had to be checked for every transaction and the transactions had to be correct in order to ensure that the functionality of the model in question is the same as that of the specification.

6. Conclusion

In this paper, we proposed a systematic approach to configurable functional verification at the TLM level. The hybrid verification used in our design and verification flow combines SBV and ABV. We showed that by using the black box approach it is possible to model any electronic device whose functionality can be expressed as a list of functions of high-level variables. We clearly demonstrated this on two very different case-studies.

We showed how easy it is to configure a specification-based FCM ranging from a rapid to an exhaustive verification just by defining the variable divisors. We also established that an exhaustive verification of modern systems is very time-consuming and is therefore often not practical.

The proposed approach can be used more than once in the design and verification flow process. In the early phases of development, the designer can use a fast verification on more than one occasion, while an exhaustive verification can be run at the end of the design process. The TB can also be reused on the RTL model with the use of a transactor.

When designing an electronic system, we propose to construct it by combining a set of pre-verified IP blocks modeled and verified by using our approach.

Acknowledgements

This work was supported in part by the Slovenian Research Agency (ARRS) under Grant 1000-07-310266 and the Centre of Excellence Namaste.

References

[1] Swan S. SystemC transaction level models and RTL verification. In: Design automation conference; 2006. p. 90–2.
[2] Tasiran S, Keutzer K. Coverage metrics for functional validation of hardware designs. IEEE Des Test Comput 2001;18(4):36–45.
[3] Lahbib Y, Kallel M, Dhouib A, Hechkel M, Perrin A, Tourki R. System on chips optimization using ABV and automatic generation of SystemC codes. Microprocess Microsyst 2007;31(7):433–44.
[4] Open SystemC Initiative (OSCI). <http://www.systemc.org>; 2011.
[5] Keutzer K, Malik S, Newton AR, Rabaey JM, Sangiovanni-Vincentelli A. System-level design: orthogonalization of concerns and platform-based design. IEEE Trans CAD Integr Circ Syst 2000;19(12):1523–43.
[6] Bombieri N, Fummi F, Pravadelli G. Reuse and optimization of testbenches and properties in a TLM-to-RTL design flow. ACM Trans Des Automat Electron Syst 2008;13(3):1–22.
[7] Bombieri N, Fummi F, Pravadelli G, Fedeli A. Hybrid, incremental assertion-based verification for TLM design flows. IEEE Des Test Comput 2007;24(2):140–52.
[8] Habibi A, Tahar S. Design and verification of SystemC transaction-level models. IEEE Trans Very Large Scale Integr (VLSI) Syst 2006;14(1):57–68.
[9] Gawanmeh A, Tahar S, Moinudeen H, Habibi A. A design for verification approach using an embedding of PSL in AsmL. J Circ Syst Comput 2007;16(6):859–81.
[10] da Silva KRG, Melcher EUK, Maia I, Cunha HdoN. A methodology aimed at better integration of functional verification and RTL design. Des Automat Embed Syst 2005;10(4):285–98.
[11] Araújo G, Barros E, Melcher E, Azevedo R, da Silva KRG, Prado B, et al. A SystemC-only design methodology and the CINE-IP multimedia platform. Des Automat Embed Syst 2005;10(2–3):181–202.
[12] da Silva KRG, Melcher EUK, Araujo G, Pimenta VA. An automatic testbench generation tool for a SystemC functional verification methodology. In: Symposium on integrated circuits and systems design; 2004. p. 66–70.
[13] Kakoee MR, Neishaburi MH, Mohammadi S. Graph based test case generation for TLM functional verification. Microprocess Microsyst 2008;32(5–6):288–95. ISSN 0141-9331.
[14] Ara K, Suzuki K. Fine-grained transaction-level verification: using a variable transactor for improved coverage at the signal level. IEEE Trans CAD Integr Circ Syst 2005;24(8):1234–40.
[15] Goudarzi M, Hessabi S, MohammadZadeh N, Zainolabedini N. The ODYSSEY approach to early simulation-based equivalence checking at ESL level using automatically generated executable transaction-level model. Microprocess Microsyst 2008;32(7):364–74. ISSN 0141-9331.
[16] Gharehbaghi AM, Yaran BH, Hessabi S, Goudarzi M. An assertion-based verification methodology for system-level design. Comput Electr Eng 2007;33(4):269–84.
[17] Mentor Graphics. <http://www.mentor.org>; 2011.
[18] Synopsys. <http://www.synopsys.org>; 2011.
[19] Cadence. <http://www.cadence.com>; 2011.
[20] Wile B, Goss J, Roesner W. Comprehensive functional verification: the complete industry cycle (systems on silicon). Morgan Kaufmann Publishers Inc.; 2005. ISBN 0127518037.
[21] Cai L, Gajski D. Transaction level modeling: an overview. In: CODES+ISSS '03: proceedings of the 1st IEEE/ACM/IFIP international conference on hardware/software codesign and system synthesis; 2003. p. 19–24. ISBN 1-58113-742-7.
[22] Ghenassia F. Transaction-level modeling with SystemC: TLM concepts and applications for embedded systems. Springer; 2005. ISBN 0387262326.
[23] Siegmund R, Hensel U, Herrholz A, Volt I. A functional coverage prototype for SystemC-based verification of chipset designs. In: Design, automation & test in Europe: European SystemC user group meeting; 2004.
[24] Universal Serial Bus Specification, Revision 2.0. <http://www.usb.org/developers/docs>; 2000.
[25] Puhar P, Zemva A. Simulation-based functional verification of a video processing IP block. In: International conference on microelectronics, devices and materials; 2007. p. 171–6.
[26] Puhar P, Zemva A. Hybrid functional verification of a USB host controller. Informacije MIDEM 2008;38(2):94–102. ISSN 0352-9045.

Tomaž Nahtigal is a Ph.D. student at the Faculty of Electrical Engineering, University of Ljubljana, Slovenia. He received his B.Sc. in Electrical Engineering at the same university in 2007. His research interests include design and verification of digital systems, development of video and imaging applications and HW/SW co-design.

Primož Puhar received his B.Sc. and Ph.D. in Electrical Engineering from the University of Ljubljana in 2005 and 2011. He is currently working at Hella Saturnus Slovenija as a hardware design engineer. His research interests include design and verification of digital systems and HW/SW co-design.

Andrej Žemva received his B.Sc., M.Sc. and Ph.D. degrees in Electrical Engineering from the University of Ljubljana in 1989, 1993 and 1996, respectively. He is Professor at the Faculty of Electrical Engineering. His current research interests include digital signal processing, HW/SW co-design, ECG signal analysis, logic synthesis and optimization, test pattern generation and fault modeling.