A Tutorial Introduction to Research on Analog and Mixed-Signal Circuit Testing


    Linda S. Milor, Member, IEEE

Abstract: Traditionally, work on analog testing has focused on diagnosing faults in board designs. Recently, with increasing levels of integration, not just diagnosing faults, but distinguishing between faulty and good circuits has become a problem. Analog blocks embedded in digital systems may not easily be separately testable. Consequently, many papers have been recently written proposing techniques to reduce the burden of testing analog and mixed-signal circuits. This survey attempts to outline some of this recent work, ranging from tools for simulation-based test set development and optimization to built-in self-test (BIST) circuitry.

Index Terms: Analog circuits, analog system fault diagnosis, analog system testing, built-in testing, integrated circuit testing, mixed analog-digital integrated circuits, testing.

    I. INTRODUCTION

Historically, electronic circuits were almost exclusively analog and were designed with discrete components. The components were mounted on printed circuit boards and tested with a bed-of-nails tester, allowing access to all input and output voltages of components. Since the components of an electronic system could be individually tested, speed in identifying the cause of failures was more of a problem. Testing research focused on the development of methods to rapidly diagnose component failures and assembly errors during field servicing of weapons, navigation, and communication systems.

The advent of integrated circuit (IC) technology and the scaling of transistor sizes have allowed the development of much larger electronic systems. Digital design techniques have become predominant because of their reliability and lower power consumption. However, although large electronic systems can be constructed almost entirely with digital techniques, many systems still have analog components. This is because signals emanating from storage media, transmission media, and physical sensors are often fundamentally analog. Moreover, digital systems may have to output analog signals to actuators, displays, and transmission media. Clearly, the need for analog interface functions like filters, analog-to-digital converters (ADCs), phase-locked loops, etc., is inherent in such systems. The design of these interface functions as integrated circuits has reduced their size and cost.

Manuscript received November 7, 1996; revised December 23, 1997. This paper was recommended by Associate Editor G. W. Roberts.

The author is with the Submicron Development Center, Advanced Micro Devices, Sunnyvale, CA 94086 USA.

Publisher Item Identifier S 1057-7130(98)07529-6.

    Fig. 1. Test setup for a mixed-signal device [5].

In turn, for testing purposes, access to nodes is limited to primary inputs and outputs, making it more difficult to locate component failures when circuit specifications are not satisfied. Nevertheless, algorithms aimed at diagnosing component failures on boards can be applied to identifying faulty components in analog ICs. In fact, given limited accessibility to internal nodes in analog integrated circuits, a large number of algorithms and theoretical findings for fault diagnosis and test signal selection have been developed throughout the 1970s and 1980s [1], [2].

Recently, due to the exploding telecommunications market, as well as markets for consumer and automotive electronics, more and more mixed-signal devices are being designed, integrating digital and analog components on a single chip in order to improve performance and reduce board size and cost. In the production of mixed-signal circuits, test can be a limiting factor, contributing significantly to manufacturing cost [3]. A typical strategy for testing a mixed-signal chip involves, when possible, first testing the digital and analog components, followed by some system tests to check the at-speed interaction among components. In this case, the digital parts would be tested with standard methods, aided by software for automatic test pattern generation, scan chains, and built-in self-test (BIST), which has become mature and cost effective. Testing the analog parts and the combined system is less well understood, and test sets are typically based on a designer's experience and specifications on a circuit's functionality.

A typical test setup for testing the analog components is shown in Fig. 1. Such a setup involves applying digital inputs to the digital block; inputting a signal which excites the analog portion of the mixed-signal circuit with a dc, sinusoid, square-wave, or random signal having a known probability distribution function; and measuring the response with an rms power meter operating over a narrow and tunable frequency band. Sinusoidal inputs are commonly used to test linear analog circuits, such as amplifiers, data converters, and filters, to verify the magnitude and phase of an output signal as a function of the input frequency. Additionally, sinusoidal inputs



are also used to quantify the extent of nonlinearity in an output signal by comparing the power contained in the harmonics or noise to that of the fundamental signal, referred to as total harmonic distortion [4]. As an alternative to deterministic input signals, random inputs with a known spectral distribution may be used to test an analog circuit's transfer characteristic via analyzing the spectrum of the response.
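For illustration, the following sketch estimates total harmonic distortion from a sampled output waveform by comparing harmonic power to the fundamental, as described above. It is only a minimal example: the sampling rate, record length, tone frequency, and windowing choice are assumptions made for the sketch, not parameters of the test setup in Fig. 1.

```python
import numpy as np

def estimate_thd(samples, fs, f0, n_harmonics=5):
    """Estimate total harmonic distortion (THD) of a sampled response.

    samples     -- sampled output waveform (1-D array)
    fs          -- sampling rate in Hz
    f0          -- fundamental (test tone) frequency in Hz
    n_harmonics -- number of harmonics included in the distortion power
    """
    n = len(samples)
    window = np.hanning(n)                       # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def bin_power(f):
        # power in the FFT bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))] ** 2

    fundamental = bin_power(f0)
    harmonics = sum(bin_power(k * f0) for k in range(2, 2 + n_harmonics))
    return np.sqrt(harmonics / fundamental)      # THD as a ratio (multiply by 100 for %)

# Example: a 1 kHz tone with a small third-harmonic component
fs, f0 = 48000.0, 1000.0
t = np.arange(4096) / fs
out = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)
print("THD ~", estimate_thd(out, fs, f0))
```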

Many factors limit the straightforward application of such an approach to testing mixed-signal circuits. First, market pressures require very efficient development of bug-free test sets, which not only check the functionality of the analog components, but also the at-speed operation of the entire system. Especially for at-speed testing, the interactions between the digital and analog portions of the chip can be complex and unique to the application, and as a result need to be fully understood. This is certainly nontrivial and requires extensive labor-intensive engineering work. It turns out that for many mixed-signal circuits, a significant contribution to time-to-market comes from the time required for test program development and debugging. Test program development often

begins after a design is complete due to limited CAD tool support for test set development, and it usually takes many iterations between design and test to realize a testable design which satisfies specifications. This contrasts strongly with test program development for digital circuits. For digital circuits, CAD tools are extensively used to generate test patterns, which are then tested on prototypes of the circuit, including register-transfer-level software descriptions, gate-level software descriptions, and field-programmable gate array (FPGA) prototypes. This validates both the circuit design and the test program prior to the availability of silicon. Hence, this tutorial will begin with a discussion of recent tools that have been developed to automate the test development cycle so that

test sets can be created concurrently with the design phase.

Then, even if mixed-signal test programs can be developed

efficiently, complete testing of some analog circuit specifications can be very costly. Consider, for example, measuring the integral nonlinearity (INL) of an ADC. For a 13-bit ADC, this would require locating 8192 (2^13) input voltages which cause transitions in the output between codes, at multiple temperatures. Such a large number of long tests can limit throughput during production testing unless numerous test stations are in simultaneous operation. And given the high cost of high-speed mixed-signal test equipment, coupled with the time that each device spends on a tester, testing can add several dollars to the cost of a device. For example, locating all codes can add more than a dollar to the cost of a 13-bit ADC, which typically sells for $15 [6]. Therefore, in Section III, methods for reducing the cost of production testing will be presented. During production testing, the goal is to distinguish good circuits from faulty ones with minimum cost, where cost is influenced by test time, throughput, and the cost of test equipment. Unlike with board designs, fault location is not a target because it is not possible to repair or replace faulty components. On the other hand, during design characterization, if a circuit has been identified as faulty, it is desirable to find the cause of failure. Hence, in Section IV, approaches to fault location and identification are discussed. This problem

involves selecting input signals and measurements, in addition to the decision algorithm.
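To make the code-transition example concrete, the sketch below shows one way a tester routine might locate a single code-edge voltage by binary search and then compute endpoint-fit INL from the measured transition levels. The function names and the idealized adc_read interface are hypothetical stand-ins for real tester resources.

```python
def find_code_transition(adc_read, code, v_lo, v_hi, tol=1e-6):
    """Binary-search the input voltage at which the ADC output first reaches `code`.

    adc_read -- callable taking an input voltage and returning the ADC output code
    code     -- target output code whose lower transition voltage is sought
    v_lo, v_hi -- voltages known to bracket the transition
    tol      -- voltage resolution at which to stop
    """
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if adc_read(v_mid) >= code:
            v_hi = v_mid        # transition is at or below v_mid
        else:
            v_lo = v_mid        # transition is above v_mid
    return 0.5 * (v_lo + v_hi)

def integral_nonlinearity(transitions):
    """INL in LSB, using an endpoint-fit line through the first and last transitions."""
    n = len(transitions)
    lsb = (transitions[-1] - transitions[0]) / (n - 1)
    return [(t - (transitions[0] + i * lsb)) / lsb for i, t in enumerate(transitions)]

# Example with an ideal 4-bit ADC model (full scale 0..1 V); a real ADC would be
# exercised through the tester's source and capture resources instead.
ideal_adc = lambda v: min(15, max(0, int(v * 16)))
edges = [find_code_transition(ideal_adc, c, 0.0, 1.0) for c in range(1, 16)]
print([round(e, 4) for e in integral_nonlinearity(edges)])
```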

Finally, the inputs to the analog components of a mixed-signal circuit may not be accessible to the tester. Moreover, it is not feasible for a designer to bring all of the analog inputs and outputs out to the package pins, and probe loading effects can degrade measurements made on a naked die. Consequently, extra components are often required to access internal nodes through primary inputs and outputs. But in this case, the parasitics introduced when accessibility is augmented can degrade some circuit performances. In Section V, design techniques for improving the testability of embedded analog components will be summarized. Specifically, circuits for increasing controllability and observability of internal signals at analog component nodes will be presented. Additionally, work on analog and mixed-signal BIST will be discussed. Analog and mixed-signal BIST goes beyond simply improving controllability and observability of internal nodes by attempting to reduce the need for high-performance test equipment through implementing test signal generators and analyzing

test results on chip. Analog and mixed-signal BIST allows a designer increased flexibility to make the tradeoff between the increased silicon area needed for BIST circuitry and external tester requirements. Lastly, Section VI concludes this paper with a summary.

    II. TEST PROGRAM DEVELOPMENT

Creating the test programs to run mixed-signal testers is a major bottleneck in the product delivery cycle for many mixed-signal circuits. Unlike digital test program development, which is automated with the support of CAD tools for test program generation and verified with the help of software and hardware

descriptions of the circuit prior to the availability of silicon, analog and mixed-signal test program development is labor-intensive, time-consuming, and must be done using fabricated devices and on the tester. Hence, because the test engineer works with prototypes of a circuit and the tester hardware and software to develop and debug test programs, test program development begins after a design is complete and prototypes have been manufactured. In fact, the delays due to waiting for prototypes and the lack of automation for mixed-signal test program development lead to significant increases in the product development cycle, which can potentially be reduced if some debugging of the tester-circuit interface (load board) and the test program can be done prior to the availability of first silicon.

Recently, work on simulation tools to address this need has begun, targeting the emulation of the testing of mixed-signal devices. The idealistic goal of this work is depicted in Fig. 2. Clearly, such an ideal situation is not possible, due to inaccuracies in simulation models. Nevertheless, the diagram points out that any reduction in test debug time after first silicon directly translates into a reduction in time-to-market. Consequently, it is worthwhile to target such a goal. To achieve this goal, the test engineer must create a software description of the test program, including models of tester hardware, the test board, code to program the tester to perform various tests in the


Fig. 2. Ideal potential time-to-market reduction due to the use of electronic design automation (EDA) tools for test program development.

    Fig. 3. The test environment.

language of the target tester, and, of course, the circuit. Under these conditions the test engineer may then simulate the testing process and evaluate the impact of parasitics on test results,

check the synchronization between analog and digital tester resources, check impedance matching and transient effects, etc., in order to evaluate the effectiveness of a test program prior to the availability of prototypes. And, if simulation models are reasonably accurate, significant portions of the test program can be debugged before first silicon.

Writing a test program involves defining input stimuli, tester resources, output responses, the postprocessing calculations performed, and the load board. Tools have been proposed to automate several components in this process. These include the design of the load board, verification of each test, generation of tester source code, and the complete emulation of the entire test program.

The load board connects the device under test (DUT) to the tester resources (Fig. 3). Designing a load board is a time-consuming task in test program development, since a test program combines a number of tests, each of which may require different connections and components on the load board. Kao and Xia have proposed a tool to automatically synthesize the load board [8]. This tool considers connectivity data from test schematics for the individual tests, determines shared tester resources, inserts switches for different setups, and generates a netlist for the final load board circuit. In addition, a printed circuit board place-and-route tool is used to automatically place and route load board components, after which parasitic information is extracted.

Simulating a test may help identify problems with, say, settling times, impedance matching, load board parasitics, and instrument precision. This requires the test engineer to evaluate the performance of a device as measured on a piece of commercial automatic test equipment with realistic approximations to the actual measurement and instrumentation techniques used in the tester hardware. Some such models for tester resource simulation have been implemented and are described in [9]-[11]. Given tester resource models, in [9]-[11] the simulation of a test is performed in an environment which combines a traditional approach to simulation, i.e., SPICE, for smaller analog components, with behavioral models for

measurement instruments and larger analog blocks. If the circuit is a mixed-signal device, mixed-signal simulation techniques are used, combining circuit-level, analog behavioral, switch-level, gate-level logic, and behavioral logic verification capabilities. Simulation of a test not only makes it possible to investigate DUT-tester interactions, but also makes it possible to identify, through sensitivity analysis, critical components whose tolerances, if too large, may degrade test results.

In [12], a source code generator is proposed which automatically creates a test program in the language of the target tester hardware. This source code generator provides tester setup conditions and connectivity information, sets control bits, and controls the movement and analysis of data.

Finally, a tool for checking test program synchronization is proposed in [7] and [11]. The software tools described in [7] and [11] are automatic test equipment (ATE) program emulators. Test program emulators are proposed to verify instrument settings, instrument precision, measurement synchronization, etc., using a workstation. In particular, during emulation, the test program interacts with simulation models of the DUT, load

board, and tester resources through a communication channel, where, as the test program executes, it sets up the virtual instruments (simulation models of the tester resources) and sources the test patterns. These patterns are then simulated, and the simulation responses are collected and postprocessed. The emulator then performs the decision-making and branching, as is done in the real test program. Based on the results, more test patterns are fed back to the simulator, until testing is complete. The whole process works as if the test program were verified on-line on real tester hardware. Consequently, both fault-free and faulty simulation prototypes of the circuit can be tested to verify that the test program both passes good devices and rejects faulty ones. Given sufficient accuracy in

the simulation models, if test program and design bugs are caught by the emulator, significant debug time would clearly be saved, which would otherwise have been spent using the tester hardware after the availability of silicon.
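The control flow of such an emulator can be pictured as a loop in which the test program configures virtual instruments, simulates instead of measures, postprocesses the result, and branches. The sketch below is only a schematic illustration of that loop; the class names, dictionary fields, and pass/fail bookkeeping are invented for the example and do not correspond to the tools of [7] or [11].

```python
class VirtualInstrument:
    """Stand-in for a simulation model of one tester resource (source, meter, ...)."""
    def __init__(self, name):
        self.name = name
        self.settings = {}

    def configure(self, **settings):
        self.settings.update(settings)

class EmulatedTester:
    """Minimal ATE-program emulation loop: configure, stimulate, simulate, decide."""
    def __init__(self, dut_simulator, instruments):
        self.dut = dut_simulator          # callable: (instrument settings, pattern) -> response
        self.instruments = instruments    # dict of VirtualInstrument objects

    def run(self, test_program):
        results = {}
        for test in test_program:                                 # each entry mimics one tester statement
            self.instruments[test["instrument"]].configure(**test["setup"])
            response = self.dut(test["setup"], test["pattern"])   # simulate instead of measure
            value = test["postprocess"](response)
            results[test["name"]] = (test["lo"] <= value <= test["hi"])
            if not results[test["name"]] and test.get("stop_on_fail", False):
                break                                             # branch exactly as the real program would
        return results
```

In a real flow, the dut callable would wrap a mixed-signal simulation of the DUT and load board, and failing results would feed back into pattern selection as described above.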

    III. REDUCING THE COST OF PRODUCTION TEST

Some analog and mixed-signal circuits require large numbers of specifications to be verified, where checking all specifications can result in prohibitive testing times on expensive automated test equipment [6], [13]. In this section the problem of minimizing the cost of production testing is considered, assuming that the circuit being tested has a predefined set of specifications that need to be measured, with prespecified binning limits indicating limits for failed die and limits for various performance grades of good die. The aim of the techniques presented in this section is to minimize testing time by optimally ordering the tests (Section III-A) and by dropping some specification tests without degrading fault coverage (Section III-B). The methods presented are best suited to large production runs, where the cost of evaluating fault coverage using a sample of circuits that are exhaustively and nonoptimally tested can be amortized over large numbers of circuits that are subsequently optimally tested. Therefore, these methods can be applied without the use of fault models, since


fault coverage of subsets of a test set can be computed based on historical pass/fail data. Nevertheless, it may be desirable to optimally order and/or select a best subset for production testing of a new product prior to the availability of a large database of historical pass/fail data. In this case, fault models are needed to generate such a database using simulation data. Hence, in Section III-C, fault modeling is discussed, together with its application to evaluating the effectiveness of a test set.

    A. Optimal Ordering of Tests

Typical industrial practice in production testing involves performing groups of tests, where if any test is failed, the die is assigned a failure bin number and testing is terminated. The number of die in the various bins provides information about the common failure modes of the chip. The order of the groups of tests with a common bin number often depends on the complexity of the tests. In other words, gross failures, like shorts between power and ground, are usually checked first, followed by, say, tests for defects in digital components. The last set of tests usually relates to checking the performance of the entire system. This approach to test ordering intuitively maximizes the information attained about failure modes.

Because testing is terminated as soon as a test is failed, average production testing time varies depending on the order of the tests. Specifically, if tests for gross failures are performed first and a circuit frequently fails such tests, then the average production testing time will be shorter compared to another circuit which mostly fails the system tests that are performed last.

If groups of tests are ordered to maximize information about failure modes, they may not be optimized for minimizing production testing time. Production testing time not only

depends on the time to complete each test, but also on the probability that a particular test will be performed on a circuit, given its position in the test set. Consequently, if a large number of die fail a certain system test and if this system test also fails die with shorts between power and ground, production test time may be less if such a test is performed first, at the expense of separating die with gross failures from die that just fail a particular system test. Hence, there is clearly a tradeoff between optimizing failure bin information and test time, where the former is usually more important during the early stages of production, while the latter is more important for mature products. On the other hand, if testing for each failure bin is terminated as soon as a test is failed,

the order of tests can certainly be optimized within the group corresponding to each failure bin to reduce test time, while retaining as much failure information as possible.

Mathematically, the order of tests may be optimized as follows. Suppose a test set has n tests, ordered from the first position (i = 1) to the last (i = n) and requiring test times t_1, t_2, ..., t_n. The probability that the test in the ith position is performed is

P_i = Y_1 Y_2 ... Y_{i-1}   (with P_1 = 1)

where Y_j is the yield of the test in the jth position. Average test time is then

Average Test Time = t_1 P_1 + t_2 P_2 + ... + t_n P_n.

Hence, minimizing production testing time involves gathering pass/fail data for each of the circuit specifications using a sample of fabricated chips in order to calculate Y_i, the yield of the test in the ith position, given the previous tests in positions 1 to i-1. Then Dijkstra's algorithm can be used to optimize the order [14]. Specifically, the test selection problem is formulated as a shortest path problem in a directed graph, where the computational complexity is dominated by the number of possible subsets of the test set, 2^n. In order to cut the computational cost by avoiding the evaluation of all possible subsets of the test set, two heuristic approaches to test ordering have also been proposed [15].
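As a concrete illustration of the quantities above, the sketch below computes the average test time of a given ordering from per-test times and yields, and reorders the tests with a simple greedy rule (cheapest time per unit probability of failure first). The greedy rule assumes independent pass/fail outcomes and is only an illustrative stand-in for the shortest-path formulation and the heuristics of [14], [15].

```python
def average_test_time(times, yields):
    """Expected time to run an ordered test list, stopping at the first failure.

    times[i]  -- time of the test in position i
    yields[i] -- probability that the test in position i is passed
    """
    total, p_reach = 0.0, 1.0
    for t, y in zip(times, yields):
        total += t * p_reach        # test i runs only if all earlier tests passed
        p_reach *= y
    return total

def greedy_order(times, yields):
    """Order tests by time spent per unit probability of failure (ascending).

    This simple rule is optimal only if pass/fail outcomes are independent;
    real test sets need the data-driven ordering of [14], [15] because
    failures of different tests are correlated.
    """
    return sorted(range(len(times)),
                  key=lambda i: times[i] / (1.0 - yields[i] + 1e-12))

# Example: three tests with different costs and yields
times  = [0.5, 2.0, 5.0]            # seconds
yields = [0.99, 0.90, 0.95]         # per-test pass probabilities
order = greedy_order(times, yields)
print("order:", order)
print("avg time:", average_test_time([times[i] for i in order],
                                      [yields[i] for i in order]))
```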

    B. Selecting a Subset of Specication Tests

The easiest way to reduce the number of tests in a test set is to drop the tests that are never failed. It is possible that a test will not be failed if it is designed to detect a processing problem that has not yet occurred. Moreover, it is likely that some tests will never be failed when there are many more circuit specifications that have to be measured than independent sources of variability in the manufacturing process. It turns out that the order in which tests are performed will influence which tests are never failed. For example, a redundant test, placed early in a test set, may detect some faulty circuits which could be detected by a combination of tests performed later. For example, a power supply short problem will most certainly be detected by a system performance test, and

therefore a test for such a short would be redundant. Hence, just as there is a tradeoff between minimizing production test time, achieved by optimally ordering a test set, and maximizing failure information, there is also a tradeoff between achieving minimal production test time through eliminating tests and maximizing failure information. Clearly, failure information is more important early in the product cycle, while reducing test time is more important for mature products. And, if a group of tests is assigned a single failure bin, redundant tests from that group can be eliminated at little cost.

In [15] and [16], an algorithm has been developed that orders tests so that the number of tests that have no dropout is maximized. In other words, this algorithm identifies and maximizes the number of redundant tests. It uses historical pass/fail data to identify those tests that detect some faulty circuits which are detected by no other test. If the sample size is large enough, all necessary tests will detect some such faulty circuits that are detected by no other test. In this way redundant tests are identified. On the other hand, if the sample size is too small, some necessary tests may be wrongly identified as redundant, resulting in reduced fault coverage, defined as the probability of correctly rejecting a faulty circuit. This may occur if some process corners have not yet been exercised. In fact, for circuits with high yield, large sample sizes of nonoptimally tested circuits are needed in order to achieve high fault coverages, since if large sample sizes are available, presumably all realistic process corners have been exercised (Table I).
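The essential/redundant distinction can be computed directly from a pass/fail matrix: a test is essential if at least one faulty device fails that test and no other. The sketch below assumes a small boolean matrix of historical results and is a simplified reading of the idea, not a reimplementation of the algorithm in [15], [16].

```python
import numpy as np

def classify_tests(fail_matrix):
    """Split tests into 'essential' and 'candidate redundant' sets.

    fail_matrix -- boolean array of shape (n_devices, n_tests);
                   fail_matrix[d, t] is True if device d fails test t.
    A test is essential if some faulty device fails that test and no other.
    With too small a sample, necessary tests can be misclassified as redundant.
    """
    faulty = fail_matrix.any(axis=1)                   # devices that fail at least one test
    fails = fail_matrix[faulty]
    unique_detect = fails & (fails.sum(axis=1, keepdims=True) == 1)
    essential = set(np.where(unique_detect.any(axis=0))[0])
    redundant = set(range(fail_matrix.shape[1])) - essential
    return essential, redundant

# Example: 5 devices, 3 tests; test 2 never uniquely detects a failure
history = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [1, 0, 0],
                    [0, 0, 0],
                    [0, 1, 0]], dtype=bool)
print(classify_tests(history))
```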


TABLE I: SAMPLE SIZE AS A FUNCTION OF YIELD AND FAULT COVERAGE AT 95% CONFIDENCE

A number of papers have attempted to go beyond just dropping tests that are never failed. These papers have proposed using a limited set of measurements to predict the results of other measurements. Supposing that the number of measurements is much larger than the number of primary sources of variation in the manufacturing process, the number of primary sources of variation limits the dimension of the set of basis vectors that span the

measurement space. In order to characterize the measurement space, these methods rely on linear models of deviations in circuit performances as a function of variations in parameters characterizing the manufacturing process. These linear models can be constructed by simulation, where the basis vectors of the measurement space are sensitivities of measurements to changes in process parameters [17]. Alternatively, basis vectors for the measurement space can be found empirically using a sample of devices coming off the production line, assuming they manifest all of the sources of variability of the manufacturing process [18]. Using an empirical set of basis functions eliminates the need to simulate often very complex circuits, which can either lead to results of limited accuracy

or can be extremely computationally intensive. But empirical basis functions include noise, which can be minimized by averaging repeated measurements of the same device. And basis functions can be missed if all process corners have not yet been exercised.

Mathematically, for the case when the span of the measurement space is characterized by simulation [17], let m denote a vector of changes in measurement responses from nominal, and let p denote a vector of changes in parameter values from nominal. Then, given a sensitivity matrix S,

m = S p.

Alternatively, for an empirical model, the columns of S would denote changes from nominal of the measurements for different devices [18]. The problem addressed is to find a set of measurements which can be used to predict the responses of other measurements not made. Suppose that the measurements are divided into two sets, where m_1 is a vector of the measurements that are made and m_2 is a vector of the measurements that need to be predicted. The sensitivity matrix can correspondingly be divided into two matrices, S_1 and S_2, where

m_1 = S_1 p  and  m_2 = S_2 p.

Then, if the number of measurements in m_1 is at least the number of parameters and S_1 has full rank, the measurements can be used to predict the parameter deviations

p_est = (S_1^T S_1)^{-1} S_1^T m_1

where S_1^T is the transpose of S_1. And the remaining measurements are predicted as follows:

m_2,est = S_2 p_est.

For a circuit, the results of the measurements m_1 and the predicted results m_2,est for the measurements not made can be used to sort devices into performance bins.

One runs into problems with this approach when S_1 is of rank less than the number of parameters, or nearly so. In this case, for any choice of measurements m_1, S_1^T S_1 cannot be inverted, and all of the parameters cannot be predicted. This would happen if the sensitivities for different sources of variation in the manufacturing process are not unique. In such a case, it would not be possible to determine from measurements the values of those parameters with linearly dependent sensitivities. In fact, such parameters, which cannot be predicted unless the values of other parameters are known, are said to belong to the same ambiguity group. Since the purpose of this work is to find a set of basis vectors that span the measurement space, the

problem of ambiguities in S is solved by dropping a column of S for one member of each ambiguity or approximate ambiguity group. One way of identifying dependent columns is QR factorization of S with column pivoting, where a set of unambiguous columns is sequentially selected until no more can be found [19]. In another approach, columns are sequentially selected by choosing the largest element, corresponding to a high sensitivity of a measurement to a parameter, so that it is possible to accurately solve for the parameters that are selected. This process also continues until no more columns with large sensitivities can be found [20]. An alternative approach is to test all small subsets of columns for dependency, sequentially eliminating ambiguous columns [19]. The problem with this last approach is that the number of subsets to be tested grows exponentially with the number of columns of S.

A fourth approach to identifying ambiguity groups, from which ambiguous columns are identified, begins by computing the null space of S using singular value decomposition or Gaussian elimination [21]. The rows of the matrix that spans the null space correspond to different ambiguity groups only if they are orthogonal. Hence the ambiguity groups are found by checking the rows of the null space for pairwise orthogonality, which can be performed in polynomial time.

Once columns corresponding to ambiguity groups have been pruned from S, an optimal set of measurements can


be chosen, where the rank of S determines the minimum number of measurements required. The method suggested in [17] involves finding a set of measurements which minimizes the average prediction variance of the model coefficients. When the number of measurements is large, the maximum variance is minimized with the D-Optimality criterion, i.e., by maximizing the determinant of S_1^T S_1. This approach is computationally costly for large problems, and hence an approximate solution is found in [17] using QR factorization of S with pivoting. The resulting pivots correspond to the selected measurements m_1. In an alternative approach, the measurements are selected by minimizing the average standard error of the predicted output [22]. This algorithm is based on optimal experimental design.

For both approaches to test selection, the minimum number of test vectors may not be sufficient to conclusively verify the performance specifications because of the size of the confidence bounds on the predicted test results, due to measurement noise and model inaccuracies. In fact, adding more measurements reduces the size of the confidence bounds on

the predicted test results m_2,est, making it possible to conclusively verify whether predicted test results are inside or outside of the binning limits. In [22], the same experimental design criterion is used to select additional tests, and in [23] the measurement with the highest prediction variance is iteratively selected.

These approaches to test selection based on linear modeling have been proven to be effective for ADCs [6]. Specifically, 50 commercial 13-bit ADCs were exhaustively tested to reveal that 18 measurements of code transitions out of 8192 sufficed to model all randomness in the manufacturing process for a batch of devices and to predict the remaining code transitions. The remaining 77 devices were tested with this reduced test set.
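A compact way to see the whole flow (build a sensitivity matrix, pick a well-conditioned subset of measurements by QR factorization with column pivoting, and predict the remaining responses by least squares) is sketched below with NumPy/SciPy. The matrix sizes, noise level, and number of extra measurements are arbitrary illustrative choices, not the models or selection criteria of [17], [18], or [22].

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)

# Illustrative sensitivity matrix: 40 measurements driven by 4 process parameters
n_meas, n_param = 40, 4
S = rng.normal(size=(n_meas, n_param))

# 1) Choose which measurements to make: QR with column pivoting on S^T ranks the
#    rows of S (i.e., the measurements) by how much new information each adds.
_, _, piv = qr(S.T, pivoting=True)
measured = np.sort(piv[:n_param + 2])          # a few extra measurements tighten confidence bounds
unmeasured = np.setdiff1d(np.arange(n_meas), measured)
S1, S2 = S[measured], S[unmeasured]

# 2) A "device": parameter deviations plus measurement noise (both made up here)
p_true = rng.normal(size=n_param)
m1 = S1 @ p_true + 0.01 * rng.normal(size=len(measured))

# 3) Least-squares estimate of the parameter deviations, then predict the rest
p_est, *_ = np.linalg.lstsq(S1, m1, rcond=None)   # solves (S1^T S1) p = S1^T m1
m2_est = S2 @ p_est

print("max prediction error:", np.max(np.abs(m2_est - S2 @ p_true)))
```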

    C. Evaluating the Effectiveness of a Test Set

How can we determine if a test set is effective for a given circuit? Fault coverage, i.e., the probability that a faulty circuit fails the tests, is a good measure of the effectiveness of a test set. One way to evaluate the fault coverage of a test set is to exhaustively test a sample of manufactured circuits. This approach would require a very large sample in order to evaluate high fault coverages (Table I). If such historical data is not available, an alternative is to use simulation data to determine the faults that are detectable by a given test set, optimize the order of a test set, and eliminate redundant tests in a test set. For this to be possible, one needs an accurate and efficient simulation methodology and a fault model. Moreover, an efficient statistical simulation strategy is needed because of the very high computational cost associated with simulating large systems and the very large number of random variables needed to model manufacturing processes.

Faults in analog circuits can be classified into two categories: catastrophic and parametric [24]. Catastrophic faults include open nodes, shorts between nodes, and other topological changes in a circuit. For integrated circuits, they usually result from local defect mechanisms like particles on the wafer surface generated by equipment during processing, particles

that block exposure of a local area during masking, oxide defects which short out transistors, severe misalignment of layers, etc. For printed circuit boards, catastrophic faults can come from excess solder, resulting in bridging between pins, lack of solder, a broken pin of a component, missing components, improperly oriented components, or use of a wrong component (i.e., a resistor rather than a capacitor) [11]. For integrated circuits, one way to produce a catastrophic fault list is to mimic the defect size and frequency distribution of the manufacturing process using a Monte Carlo defect simulator that places missing or extra material in a given layer of a layout and extracts the impact on circuit topology [25], [26]. Alternatively, prior to the availability of a layout, a fault list may be generated from a schematic, i.e., broken wires, gate-drain shorts, etc. [27], [28]. For printed circuit board technology, a fault list may include open and shorted pins, missing components, improperly oriented components, or use of the wrong component [11].

Parametric faults refer to changes in a circuit that do not affect its connectivity. Such changes may be global or

local. Global variation of parameters is due to imperfect parametric control in IC manufacturing. Such variations affect all transistors and capacitors on a die and, if large, result in circuits that fail performance specifications or are binned as lower performance parts. Such variations have a large component due to lot-to-lot fluctuations, because of processing in different equipment at different times; a smaller component due to wafer-to-wafer fluctuations, because of variations in the performance of individual pieces of equipment as a function of time; a smaller component due to within-wafer fluctuations, often because of temperature gradients during etching; and a smaller component due to within-reticle fluctuations, often due to lens distortion which affects lithography [29]. For example,

transistor channel length will vary both systematically and randomly across a die, reticle, and wafer, and between wafers and lots. In order to characterize global parametric variations, a set of independent factors is usually identified that explains lot-to-lot, wafer-to-wafer, and die-to-die variations in a process, often by principal components methods [30], [31].

Local variations of parameters are due to local defect mechanisms, like particles, which, say, enlarge a single transistor's channel length or capacitor locally, or to local variations in parameters across a die because of imperfect process control, for example, due to local temperature gradients during etching or local variations in the lens of a stepper. Local variations in parameters due to defects are often modeled as large changes in a single device parameter, like a change in a single resistor value. Local variations due to imperfect process control have little effect on digital components, but result in random differences between physically adjacent devices in analog components, which gives rise to mismatch, to which many analog designs are very sensitive. Modeling mismatch entails supplementing global statistical process models with additional variables indicating the extent of mismatch, possibly as a function of device area and spacing [30], [32], [33]. Incidentally, the data needed to characterize mismatch is harder to come by due to the lack of test structures on scribe line monitors that can be used to measure mismatch.
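As an illustration of how such a statistical process model might be sampled for Monte Carlo simulation, the sketch below draws a global, die-level parameter shift shared by all devices plus a per-device local mismatch term whose standard deviation shrinks with device area. The inverse-square-root area dependence (a Pelgrom-style rule) and all numerical constants are illustrative assumptions, not values from the cited process models [30], [32], [33].

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_threshold_voltages(areas_um2, n_dice=1000,
                              vt_nominal=0.45, sigma_global=0.015, a_vt=0.003):
    """Sample per-device threshold voltages for several dice.

    areas_um2    -- gate areas (W*L, in um^2) of the matched devices on one die
    sigma_global -- std. dev. of the die-to-die (global) shift, in volts
    a_vt         -- mismatch coefficient: local sigma = a_vt / sqrt(area), in V*um
    Returns an array of shape (n_dice, n_devices).
    """
    areas = np.asarray(areas_um2, dtype=float)
    global_shift = rng.normal(0.0, sigma_global, size=(n_dice, 1))      # shared per die
    local_sigma = a_vt / np.sqrt(areas)                                 # per-device mismatch
    local_shift = rng.normal(0.0, 1.0, size=(n_dice, areas.size)) * local_sigma
    return vt_nominal + global_shift + local_shift

# Two matched input devices of a differential pair, 2 um^2 each
vt = sample_threshold_voltages([2.0, 2.0])
offset = vt[:, 0] - vt[:, 1]            # mismatch-induced input offset contribution
print("std of offset contribution (mV):", 1e3 * offset.std())
```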


Fig. 4. The map between the parameter space and the measurement space. Parameters and measurements not in the acceptability region correspond to faulty circuits [15].

Simulating both catastrophic and parametric faults can be cumbersome because of the computational cost of simulating large analog systems. A hierarchical approach to fault simulation reduces the computational cost. Hierarchical simulation involves partitioning a system into blocks. The blocks are then electrically simulated and their responses are stitched together with a behavioral simulator in order to evaluate system performances. This is the approach taken in [11] and [33]-[37] for catastrophic fault simulation. And if a catastrophic fault model is used, fault coverage (FC) is typically defined as the ratio of faults detected (N_D) over faults simulated (N_S) [11], [28], [38]. If faults are weighted by their likelihood w_i, then

FC = (sum of w_i over detected faults) / (sum of w_i over simulated faults).

Several authors use the same methodology for the simulation of parametric faults [34], [35], [39]. This corresponds to a parametric fault model involving only local variations in geometries due to defects. Such a fault model does not

include global parametric variations resulting from imperfect process control. Given such a local parametric fault model, in order to simulate a fault, a circuit parameter is set to an out-of-tolerance value and the resulting circuit is simulated. But how far out of tolerance should the parametric faults be? Models of defect size frequency indicate that small defects are much more likely than large defects [40]. And very small defects result in only minor changes in circuit performances. Hence, such small defects may not cause a circuit to fail specifications. Clearly, the definition of a parametric fault needs to be related to the circuit specifications; specifically, defining a parametric fault involves determining parameter limits such that a circuit fails specifications, which may or may not coincide with parameter tolerances.

Similarly, for global parametric faults, which result from imperfect control in manufacturing, parameters closer to nominal values are much more likely than parameters which are far from nominal, while parameter values that are far from nominal are much more likely to cause a circuit to fail specifications. And, as with local parametric faults, circuits with parameter values that are close to tolerance limits may not fail specifications, and consequently may not be faulty. Hence, also for global parametric faults, determining if a parameter deviation results in a parametric fault involves determining the map between the random variables describing

the manufacturing process and circuit performances [15], [16]. Typically, this is done using statistical modeling methods, where, based on a limited set of simulations, an equation is constructed for each circuit performance as a function of parameters modeling global variations in the manufacturing process and as a function of circuit parameters modeling local variations due to defects [33], [41]-[46]. Given models for each circuit performance and their corresponding specifications, the set of parameters where all specifications are satisfied, called the acceptability region, may be determined. Conversely, the set of parameters where at least one specification is failed is likewise determined (Fig. 4). Moreover, if the parameters characterizing the variations of the IC manufacturing process are described by a probability density function, parametric faults may similarly be characterized by a probability density function.

What is fault coverage for parametric faults? For local parametric faults, created by defects, a fault list can be generated from a layout using tools similar to those designed for creating catastrophic fault lists [25]. Such tools mimic the defect size and frequency distribution for each layer of a manufacturing process by placing extra or missing material in a given layer of a layout and extracting the resulting circuit changes. Those circuit changes that result in faults can be identified using the map relating parameters to measurements. Fault coverage is consequently the fraction of the fault-inducing circuit changes that are detected by a given test set.

Computing fault coverage for global parametric faults is more complex. The digital/catastrophic fault coverage definition (N_D/N_S) does not apply for such faults, since such faults are characterized by a continuous distribution rather than a discrete one. Nevertheless, the ratio of the likelihood of faults detected over the likelihood of faults simulated is an equivalent definition of parametric fault coverage [15], [16], [47]:

FC = ∫_{F ∩ T} f(x) dx / ∫_{F} f(x) dx

where f(x) is the probability density function of the parameters modeling the manufacturing process, F is the complement of the acceptability region, i.e., the set of all parametric faults,


and T is the set of parameters which correspond to circuits that fail a given test set.

A straightforward way to evaluate the integrals in the above equation is to use Monte Carlo analysis, where a sample of parameter vectors drawn from the probability density function describing the manufacturing process is simulated. For each sample x, it is first determined whether x is a fault, i.e., whether x lies in F, and if so, it is determined whether x is detected, i.e., whether x also lies in T. The evaluation of whether or not x is a fault and whether it is detected may be performed directly, using circuit simulation, or based on regression models of circuit performances defining parametric faults. However, applying the Monte Carlo algorithm directly by simulating a circuit with a sample of parameters representing the manufacturing process may not lead to accurate results. Specifically, if a small sample size is used, results will be inaccurate because of the sample size. In fact, unless a test is highly inaccurate, it may be hard to find a sample of faulty parameters which is not detected by the test set, i.e., a sample with x in F but not in T. Alternatively, if the sample size is large, the computational cost of simulating just the blocks, i.e.,

op amps, if not the whole system, hundreds of times can be very high, unless very inaccurate simulation models are used. Importance sampling can reduce this cost of simulation [48]. Nevertheless, when applying the Monte Carlo algorithm, the use of regression models of block performances as a function of process parameters, together with a hierarchical simulation strategy, reduces the computational cost most effectively while maintaining accurate results [33], [41], [42], [46]. In other words, a limited set of simulations is performed to construct regression models, and then the regression models are used to evaluate whether tests are passed or failed for the much larger random sample of parameters representing manufacturing process variations. In this case, the most significant sources of inaccuracy will come

from a combination of the accuracy of circuit simulation, which is used to construct the regression models, and the ability of the regression models to mimic the simulator.
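The Monte Carlo estimate described above amounts to counting how often sampled parameter vectors fall in the fault region F and, within that region, how often they are rejected by the test set T. The toy sketch below uses a single linear performance model in place of the regression models and adds measurement noise, so that both fault coverage and the yield coverage introduced below come out slightly below one; the model, limits, and noise level are invented purely to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the regression models: one performance as a function of two parameters
def performance(x):
    return 1.0 + 0.8 * x[..., 0] - 0.5 * x[..., 1]      # e.g., a gain model

SPEC_LO, SPEC_HI = 0.5, 1.5                              # specification limits

def estimate_coverage(n_samples=200_000, meas_noise=0.02):
    x = rng.normal(size=(n_samples, 2))                  # process parameter samples
    perf = performance(x)
    measured = perf + rng.normal(0.0, meas_noise, n_samples)      # what the tester sees

    is_fault = (perf < SPEC_LO) | (perf > SPEC_HI)               # x in F: truly fails a spec
    fails_test = (measured < SPEC_LO) | (measured > SPEC_HI)     # x in T: rejected by the test

    fault_coverage = np.mean(is_fault & fails_test) / np.mean(is_fault)
    yield_coverage = np.mean(~is_fault & ~fails_test) / np.mean(~is_fault)
    return fault_coverage, yield_coverage

fc, yc = estimate_coverage()
print(f"parametric fault coverage ~ {fc:.3f}, yield coverage ~ {yc:.3f}")
```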

If all circuit specifications are tested, the parametric fault coverage may still not be 100%, due to measurement noise [15]. Moreover, if tests other than the specification tests are used, as proposed in [28], [39], [47], and [49], there may be a systematic loss of fault coverage (Fig. 5). Similarly, good circuits may fail tests due to measurement noise and if tests other than the specification tests are used. Yield coverage (YC) has been proposed as a parameter to quantify the problem of discarding good circuits [47]:

YC = ∫_{A ∩ T'} f(x) dx / ∫_{A} f(x) dx

where A is the acceptability region (the complement of F) and T' is the set of parameters corresponding to circuits that pass the test set.

    Yield coverage may be computed in the same way as faultcoverage.

Given the three definitions of fault coverage above, i.e., catastrophic fault coverage, local parametric fault coverage, and global parametric fault coverage, which one should be used when evaluating a test set? Sachdev, in [50], considered

    Fig. 5. Systematic loss in fault coverage and yield coverage.

Fig. 6. Test results for the Class AB amplifier and the measured performances for the three devices that failed specification tests but passed the proposed test set for catastrophic faults [50].

generating test sets for just catastrophic faults. The test set he proposed for a Class AB amplifier was derived based on realistic catastrophic faults and demonstrated high coverage of the modeled catastrophic faults using simple stimuli, i.e., simple dc, ac, and transient stimuli. This test program was then appended to the existing conventional (specification-based) test program, in order to judge its effectiveness in a production

test environment. The results are shown in Fig. 6. As can be seen from the figure, the yield of the device was very high (99.5%), and the fault coverage of the proposed test set was only 73%. The performances of the three devices which passed tests for catastrophic faults but failed specification tests are also shown in Fig. 6. Because the proposed test set was designed to detect catastrophic faults, and because distributions of both local and global parametric faults have higher frequencies of parameter values that correspond to circuit performances close to specification limits, it appears from these results that these three devices failed due to parametric faults.

Because the sample of failed devices was so small, Sachdev [50] followed up this experiment with a larger one using the same Class AB amplifier. The results of this second experiment are shown in Fig. 7. It can be seen that the fault coverage of the test set designed solely for catastrophic faults was 87%. The 433 circuits that failed the conventional test set but had passed the proposed test set were then retested by the conventional method. Of these 433 circuits, 51 passed the conventional test set, indicating that their measurement results are very close to specification limits, causing the circuits to pass or fail based on noise levels. The remaining 382 circuits mostly failed specifications on the input offset voltage, total harmonic distortion, and the signal-to-noise ratio. All of


Fig. 7. Test results for the Class AB amplifier.

TABLE II: CATASTROPHIC FAULTS FOR A HIGH-PASS FILTER BLOCK [33]

these specifications are very sensitive to transistor matching in the differential amplifiers. Since poor transistor matching is a parametric fault, it is likely that these circuits failed due to parametric faults. Clearly, it can be concluded that a test based solely on process defects (catastrophic faults) is

not sufficient for ensuring that specifications are satisfied, and consequently catastrophic fault coverage is insufficient for quantifying the quality of a test set for analog circuits. This is likely because, unlike digital circuits, which tend to have less tight performance requirements but more functional complexity, optimal analog circuit performance is often only achievable under optimal fabrication and operating conditions. Hence, the parametric fall-out for analog circuits is likely to be more significant compared to digital designs.

Suppose, on the other hand, a test set is designed for high parametric fault coverage. Would such a test set be able to detect catastrophic faults? In [15], [16], and [33], algorithms for selecting optimal sets of specification tests

based on parametric fault coverage have been presented. Specifically, in [33] a subset of 1024 frequency measurements and measurements of the system offset, dynamic range, and total harmonic distortion for a bandpass filter was selected. This bandpass filter is a switched-capacitor design, composed of five blocks: one high-pass filter, three biquads, and one sum-gain amplifier. For each of the blocks, local defects were generated in the layout using VLASIC [25] in order to obtain a fault list. Each fault was simulated in order to compute the resulting distortion of the parameters characterizing each of the blocks, i.e., gain, offset, etc. The results for the high-pass block are shown in Table II, where modifications in block

performances could be classified into eight groups, the most common two being low gain and fixed output voltage. System simulation using the behavioral model was then performed in order to determine whether specification tests are passed or failed. It can be seen from Table II that almost all of the catastrophic faults failed all of the specification tests. The 5% of the catastrophic faults that did not fail all specifications resulted in circuits that, under nominal processing conditions, would pass or almost pass all specification tests. Hence, the impact of these catastrophic faults is to lower yield. And, it turned out that the specification tests needed to achieve high parametric fault coverage were sufficient to detect these catastrophic faults when they were combined with variations in parameters and resulted in circuits that failed specifications. Consequently, in this example, a test set designed for high parametric fault coverage also achieved 100% catastrophic fault coverage.

It seems that it can be concluded that test sets should be evaluated in terms of both parametric and catastrophic fault coverage. Moreover, based on the limited set of experiments that have been done to date, there seems to be some evidence that test sets designed for high parametric fault coverage are more likely to detect catastrophic faults than test sets designed for high catastrophic fault coverage are to detect parametric faults.

IV. FAULT DIAGNOSIS

If an integrated circuit has been found to be faulty during design characterization, before it is in high-volume production, it may be useful to diagnose the cause of the failure. If faults


Fig. 8. Decision boundaries between the good circuit and three faulty circuits for a single measurement, assuming equal prior probabilities.

are identified and located, a circuit can be redesigned to be less sensitive to common failure mechanisms. Alternatively, if an analog or mixed-signal system with components that have been separately tested fails system specifications, it is also useful to find the cause. In this case, problems in system performance can occur due to assembly errors and the degradation of components with time. It is therefore desirable to have a methodology to rapidly identify component failures. Two distinct strategies have been proposed for analog fault diagnosis: simulation-before-test and simulation-after-test. Simulation-before-test approaches begin with a fault list. The faults are then simulated to determine the corresponding responses to predetermined stimuli. Faults are consequently diagnosed by comparing simulated and observed responses. Simulation-after-test approaches, on the other hand, begin with the failed responses, which are then used to estimate faulty parameter or component values. For a comprehensive survey of these approaches see Bandler and Salama [2]. The sections below are intended to give a brief overview.

Fault diagnosis techniques need fault models. Simulation-before-test techniques are better suited for detecting catastrophic faults and local parametric faults, while they may perform less well in detecting global parametric faults, since for such faults the separation between good performances and faulty performances is less wide. On the other hand, simulation-after-test techniques are better suited for detecting problems with global parametric variations and mismatch, and are not well suited for detecting catastrophic faults.

    A. Simulation-Before-Test

Simulation-before-test algorithms are based on a fault dictionary. In particular, the most likely faults (usually catastrophic) are anticipated based on a fault model, and a set of input stimuli and measurements is selected to detect the faults. The measurements may be dc responses [51], ac responses at circuit outputs [52]-[55], ac responses at the power supply node [56], transient responses at circuit outputs [57], [58], or transient responses at the power supply node [59]. Then, for the set of potential faults, the circuit's response to each stimulus is evaluated for each fault, with all but the faulty parameter set to nominal values, i.e., the values for which the circuit was designed. The responses for each fault are typically evaluated using circuit simulation, except for the special case when the circuit is linear, where efficient techniques exist for computing the faulty responses [55], [60], [61]. In particular, in [55] a

symbolic simulator is proposed for evaluating the impact of faults on the frequency response of a circuit, greatly reducing the computational cost needed to construct a fault dictionary. It is also possible to construct a fault dictionary using measured data, based on previously observed and diagnosed faults.

After evaluating the responses corresponding to the faults, the faults and their corresponding responses are stored in a dictionary. To diagnose a fault, the measurements of the circuit being tested are compared with the measurements in the dictionary. The fault is identified by determining the closest simulated fault using inspection [53], [58], the Euclidean norm [51], [54], [55], pattern matching [52], fuzzy distance [62], or a neural network [59].

If parameters are assumed to lie exactly at their nominal values when the fault dictionary is constructed, errors in fault identification can occur. Parameters are usually not at nominal values because of uncontrollable fluctuations in circuit fabrication. Consequently, it is unlikely that measurements will have values exactly equal to those stored in the fault dictionary. Furthermore, different faults may result in exactly the same measurement because of variations in components. Approaches which account for distributions of parameters [63], [64] typically begin with Monte Carlo simulations of all anticipated faulty circuits and the good circuit. The use of Monte Carlo analysis, although greatly increasing the computational cost, provides estimates of the mean vectors and covariance matrices of the distributions for the good circuit and every type of faulty circuit. Maximum likelihood is taken as the measure of distance to determine the most likely fault class. Specifically, let p_i be the prior probability of the ith fault type, i.e., p_i is the likelihood of the ith fault type based on past data. Then, for an observation x, the corresponding fault is identified as belonging to fault class i if the quadratic discrimination score

d_i(x) = (x - μ_i)^T Σ_i^{-1} (x - μ_i) + ln det Σ_i - 2 ln p_i

is minimum, where μ_i and Σ_i are the mean vector and covariance matrix of the measurements for fault class i.

Fig. 8 shows the decision boundaries which separate the fault classes, assuming equal prior probabilities. If it is known that the good circuit is much more likely than, say, fault 1, the decision boundary would shift to the left, making it more likely to identify the good circuit compared to fault 1. Moreover, if the prior probabilities are updated during testing, the decision boundaries between faults would change accordingly.
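A minimal sketch of this maximum-likelihood classification is given below, assuming the per-class mean vectors, covariance matrices, and priors have already been estimated from Monte Carlo simulations of the good and faulty circuits; the two-dimensional measurement vectors and the class statistics themselves are fabricated for the example.

```python
import numpy as np

def quadratic_score(x, mean, cov, prior):
    """Quadratic discrimination score: smaller means the class is more likely."""
    diff = x - mean
    maha = diff @ np.linalg.solve(cov, diff)            # (x - mu)^T Sigma^{-1} (x - mu)
    return maha + np.log(np.linalg.det(cov)) - 2.0 * np.log(prior)

def classify(x, classes):
    """Return the name of the fault class (or 'good') with the minimum score."""
    return min(classes, key=lambda name: quadratic_score(x, *classes[name]))

# Hypothetical statistics (mean, covariance, prior) from Monte Carlo fault simulation
classes = {
    "good":    (np.array([1.00, 0.00]), np.diag([0.010, 0.010]), 0.95),
    "fault 1": (np.array([0.60, 0.05]), np.diag([0.020, 0.015]), 0.03),
    "fault 2": (np.array([1.05, 0.40]), np.diag([0.015, 0.030]), 0.02),
}

observation = np.array([0.68, 0.02])        # measured responses of the failing device
print("diagnosed as:", classify(observation, classes))
```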

    Some faults may result in measurements that are very closeto each other (Fig. 8). In this case, the probability that a

[...]

    been discussed in Section III-B. In addition, classifications of types of ambiguity, i.e., fault masking, fault dominance, and fault equivalence, are presented in [79]. However, even if all parameters can be identified, their values cannot be precisely determined, due to measurement inaccuracies and unmodeled variables. Specifically, given variations of component or process parameter values within tolerance and measurement uncertainty, the minimum parameter changes that can be diagnosed can be determined and the corresponding best measurements can be identified [80]. Moreover, given measurement uncertainty, in [20] the accuracies of parameter estimates are discussed, for both the cases where parameters can be separately computed and where there are ambiguities among parameters.

    V. DESIGN FOR TESTABILITY TECHNIQUES

    The increased complexity of analog circuits and the reduced access to internal nodes have made it more difficult not only to diagnose and locate faulty components, but also to measure the functions of embedded components. Design-for-testability techniques at the very least aim to improve the controllability and observability of internal nodes, so that embedded functions can be tested. Such techniques include analog test busses and scan methods. Testability can also be improved with BIST circuitry, where signal generators and analysis circuitry are implemented on chip. A brief overview of these techniques will be presented in the following sections. A more detailed discussion can be found in [5] and [81].

    A. Improving the Controllability and Observability of Internal Nodes

    Improving the testability of increasingly complex digital designs has resulted in the widespread use of scan chains. Scan chains involve shift registers, where data is serially shifted in through a test data input pin and through the scan chain to reach internal nodes. The test is performed and the results are captured in registers. The results are then serially shifted through the registers to reach the test data output pin. Scan chains not only make it possible to improve fault coverage, but also make it possible to link failures to specific circuit blocks. In addition, at the board level, new high-density packaging technologies have made it more difficult to identify failing components in a system. With such packaging technologies, once a chip is attached to a package, its pins are no longer accessible, and the ability to remove a part reliably and without damage for testing purposes may be limited. Boundary scan has been proposed to address this controllability and observability problem for system debug, where all chip pins for all components in a system are connected in a scan chain (Fig. 9). A key advantage of boundary scan has been its ability to detect opens and shorts in the board's wiring, since 80–90% of board failures result from wiring problems. Hence, boundary scan cells combine the shift registers that are used to input and output voltages to and from the core circuitry with components that a) disconnect the I/O pin from the core, b) set the I/O pin to a logic level, and c) detect the logic level on another I/O pin.

    Fig. 9. Boundary scan chain on a board.

    Fig. 10. Modied boundary scan for a mixed-signal component.

    In the mixed-signal domain, Fasang [82] has proposed the use of boundary scan for mixed-signal designs. He proposes taking advantage of the fact that many mixed-signal designs have ADCs and DACs on chip. These can be used to digitize all analog outputs before they are stored in scan cell registers and shifted to external pins, and to convert digitized analog input signals which are shifted into the scan path into analog inputs for the analog components. The proposed modified boundary scan path for a mixed-signal component is shown in Fig. 10.

    This configuration provides some controllability and observability of both digital and analog components. However, this configuration does not make it possible to isolate failures within the analog blocks. In order to enhance our ability to diagnose failures within analog blocks, it may be desirable to separately test the components of the analog blocks, i.e., ADCs, DACs, operational amplifiers, oscillators, phase-locked loops, filters, etc. Given a partition of a complex chip, component test requires the isolation of the blocks, the control of component inputs, and the observation of component outputs. In [83], a set of nodes to be accessed and component tests to be performed, together with their test conditions, are outlined for some common analog building blocks. Isolation of the blocks from digital circuitry can be achieved through


    Fig. 11. Analog test bus architecture.

    Fig. 12. Simplified schematic of a P1149.4 test bus pin [85].

    interface storage elements, i.e., parallel or serial shift registers organized in a scan chain [84]. Isolation between analog blocks may be achieved through buffers, controlled by digital circuitry [84]. In addition, it may be necessary to disable feedback loops in order to measure open-loop parameters, since feedback reduces input controllability.

    Given a set of nodes that need to be accessible in order to test the blocks, it is generally neither feasible nor practical to bring all of these nodes of an integrated circuit out to the package pins. One way to enhance observability is to add a small metal contact connected to the node. The signal can then be measured with either buffered high-impedance microprobes or an electron beam tester. The chip area required for such a contact is significantly less than what is needed for a bond, but if many such contacts are used, the area overhead may become significant. Alternatively, analog inputs and outputs may be combined by multiplexing and routed to primary inputs and outputs. Such a configuration is called an analog test bus [85] (Fig. 11). In this case any block input and/or output would be externally addressable so that real-time data may be input and observed. Even nodes which are shielded for electromagnetic interference can be made observable. The analog test bus cell shown in Fig. 12 has been designed as an analog boundary scan cell, although it can also be used to access internal analog nodes. It therefore has components designed to check board connectivity, i.e., to a) disconnect the I/O pin from the analog macro, b) set the I/O pin to a logic level, c) detect the logic level on another pin, and d) connect the I/O pin to the two-wire analog test bus.

    Clearly, the design of the test bus must be done with care. Specifically, the test bus pin should not distort the real-time signals entering and leaving the blocks.

    Fig. 13. Voltage-based scan cell.

    Except for very high frequency nodes, the capacitance load added by, as a minimum, the two test bus transmission gates, the multiplexer/comparator, and the tri-state digital inverter is unlikely to cause significant distortions of internal signals [85]. On the other hand, the bus capacitance, combined with the high impedance of the transmission gates, may distort the signal that is observed at or driven by the output pin, by increasing delays and reducing bandwidth. Nevertheless, transmission gates or buffers can be designed to allow for higher output signal bandwidth at the expense of additional capacitive loading and/or area [85].

    For high-speed testing of many analog signals simultaneously, the two-signals-at-a-time controllability and observability capability of the analog test bus may not be sufficient. Instead, Wey [86], [87] and Soma [88] have proposed an entirely analog scan path implementation. In [86], voltages are stored in the scan cells, which are composed of sample-and-hold circuits, each built with a switch for sampling, a capacitor for storage, and a voltage follower for impedance buffering between capacitors (Fig. 13). In order to minimize the influence of the analog shift register test circuit on the node being observed, a high-input-impedance and low-output-impedance buffer is placed between the node and the sampling switch. When a test is performed, data at the various test points is simultaneously loaded onto the holding capacitors by closing the sampling switches. To scan out, the sampling switches are opened and a two-phase clock is used to scan out the voltages, as in digital scan chains. Specifically, during one clock phase the voltage stored on a cell's holding capacitor is copied to an intermediate capacitor, and during the other phase it is passed on to the next cell. This process continues until all voltages reach the output.

    In [87] and [88], currents, instead of voltages, are passed through the scan chain. The shift register is composed of current mirrors and switches. A possible implementation is shown in Fig. 14. In this approach, if a node voltage is being observed, it is first converted to a current and isolated from the test circuitry with a V/I converter. During loading, the load switch is closed, and the input current flows through the first mirror transistor and sets the voltage on its storage capacitor to the level needed for that transistor to support the current. The transistor is now capable of sinking the same current when connected to a load. Then, to scan out the stored current, the load switch is opened and the two clock-phase switches are alternately closed. When the first clock-phase switch is closed, the first mirror transistor is connected to the next stage and consequently sinks its stored current. As a result, that current flows through the next mirror transistor and charges up its storage capacitor, so that the same current will flow through that transistor when it is connected to a load.


    Fig. 14. Current-based scan cell.

    Then, in the next clock phase, the first clock-phase switch is opened and the second is closed, so that the current is copied from the second mirror transistor to the first mirror transistor of the following cell. As with the voltage scan chain, this process continues until the cell currents reach the output. It should be noted that in both the current-based and the voltage-based approaches, scan chain length is limited by accuracy requirements. For example, clock feedthrough and mismatch in the current mirror transistors can limit the accuracy of the scanned currents and voltages.

    B. Analog Built-In Self Test

    When test busses and scan circuitry are used to enhance testability, signals have to be transmitted through long wires and/or have to pass through transmission gates before they can be measured. Hence, analog signals can be corrupted, and distortion may occur before measurements are made due to parasitic loading and coupling. BIST circuitry helps to overcome this problem by going beyond simply controlling and observing component inputs and outputs. Instead, signal generators and analysis circuitry are implemented on chip, and since signals do not have to be routed off chip, it is likely that there will be less distortion of these signals when they are measured. Consequently, dynamic tests can be performed at full speed even during wafer probe and in the field, without external test equipment. And the only signal that needs to be routed off chip is a pass/fail bit indicating the test results. On the other hand, one major problem faced by BIST circuit designs is the area overhead. Moreover, the design of high-quality signal generators and analysis circuitry can be complex and time-consuming.

    The hardware overhead for BIST is minimized if the test circuitry is shared by more than one of the analog components. In addition, some BIST designs only attempt to perform on-chip analysis of test results and rely on external signal sources, thereby minimizing the hardware overhead. These designs are mainly targeted for on-line test of high-safety systems, i.e., testing during operation. They signal errors due to component degradation, electromagnetic interference, or heat. Such circuits rely on either area redundancy (multiple copies of the same hardware) or time redundancy (the same hardware is used to carry out repeated operations).

    In [89], a BIST circuit has been proposed for switched-capacitor filters, relying on partial replication of the circuit being tested (area redundancy). Using this technique, on-line testing is performed by first decomposing the filter into its component biquads. Multiplexing makes the input and output terminals of each biquad accessible. The circuit relies on a programmable biquad to implement a copy of each of the component biquads (Fig. 15). The programmable biquad can implement any of the basic filter types, i.e., low-pass, bandpass, and high-pass, and many different frequency specifications by changing the capacitance values for each node. During testing, the same signal is input to the programmable biquad as to the component biquad being tested, and the resulting continuous signals are compared in real time using a voter circuit. The voter circuit indicates an error if the outputs of the two filters differ by more than a specified tolerance margin at any time. An absolute acceptance window is implemented in [89], but in [90] it is noted that an absolute tolerance window can be too restrictive for signals with a large swing. Therefore, as an alternative, a circuit signaling an error based on a relative tolerance window has been proposed. In addition, instead of comparing signals in the time domain, the signal of the circuit being tested can be compared against a reference signal in the frequency domain as well, using a gain detector, phase detector, and a window comparator [91]. Note that for all techniques, the programmable biquad is used by all filters on the chip, and as a result, the area overhead is essentially limited to the area required for the programmable biquad.
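
    The voter comparison can be modeled behaviorally. The sketch below contrasts an absolute acceptance window with a relative one; the tolerance values and the test signals are illustrative and are not taken from [89] or [90].

```python
import numpy as np

def absolute_voter(ref, dut, window):
    """Flag an error if the two outputs ever differ by more than a fixed margin."""
    return np.any(np.abs(ref - dut) > window)

def relative_voter(ref, dut, fraction, floor):
    """Flag an error if the difference exceeds a fraction of the reference amplitude
    (a small floor keeps the window from collapsing near zero crossings)."""
    return np.any(np.abs(ref - dut) > np.maximum(fraction * np.abs(ref), floor))

t = np.linspace(0.0, 1e-3, 1000)
ref = 2.0 * np.sin(2 * np.pi * 5e3 * t)           # programmable (reference) biquad output
dut = 2.02 * np.sin(2 * np.pi * 5e3 * t + 0.01)   # biquad under test, slightly off

print("absolute window error:", absolute_voter(ref, dut, window=0.05))
print("relative window error:", relative_voter(ref, dut, fraction=0.03, floor=0.02))
```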

    Another approach to concurrent testing relying upon area redundancy involves using a continuous checksum [92]. In this case a circuit must be approximately linear. As a result, the time domain response can be described by state equations. A check variable is defined as equal to a linear combination of the state variables, and in this approach additional circuitry is proposed which outputs the check variable. Consequently, the additional circuitry produces a continuous nonzero signal when the signals corresponding to the state variables deviate from nominal.
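
    The sketch below illustrates the checksum idea for a simple discrete-time linear model; the state equations, the check weights (chosen here as a left eigenvector of the state matrix so that the check variable obeys its own first-order recursion), and the injected deviation are illustrative assumptions rather than the circuit of [92].

```python
import numpy as np

# Illustrative linear model x[k+1] = A x[k] + B u[k].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([1.0, 0.5])
w = np.array([1.0, 1.0])   # w @ A = 0.9 * w, so c = w @ x obeys c[k+1] = 0.9 c[k] + (w @ B) u[k]
lam = 0.9

def max_residual(steps, fault_at=None):
    x = np.zeros(2)
    c = 0.0                # check variable propagated by the (extra) check circuitry
    worst = 0.0
    for k in range(steps):
        u = np.sin(0.1 * k)
        x = A @ x + B * u
        if fault_at is not None and k >= fault_at:
            x[1] += 0.2    # injected deviation of one state variable
        c = lam * c + (w @ B) * u
        worst = max(worst, abs(w @ x - c))   # nonzero only when the states deviate
    return worst

print(max_residual(200))                 # fault-free: essentially zero
print(max_residual(200, fault_at=100))   # faulty: clearly nonzero
```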

    Finally, in contrast to the previously mentioned approaches, time redundancy is exploited for concurrent testing of ADCs in [93]. Time redundancy employs a single piece of hardware to carry out repeated operations. In order to avoid producing the same erroneous result twice, in the repeated cycle the input operand is coded and the result decoded, after which a comparison is made with the result obtained in the previous cycle.
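
    A behavioral sketch of the coded-operand idea is shown below for an idealized 8-bit converter: the second conversion uses an input shifted by a known analog offset, the result is decoded by subtracting the corresponding number of codes, and a mismatch with the first conversion flags an error. The offset, resolution, and fault model are illustrative and are not the specific scheme of [93].

```python
import numpy as np

LSB = 1.0 / 256            # illustrative 8-bit converter over a 0..1 V input range
DELTA = 0.25               # known analog offset used to code the operand (64 LSBs)

def adc(v, faulty_code=None, read_as=None):
    """Behavioral ADC; optionally one output code is mis-read to model a local fault."""
    code = int(np.clip(np.floor(v / LSB), 0, 255))
    return read_as if (faulty_code is not None and code == faulty_code) else code

def concurrent_check(v, **fault):
    first = adc(v, **fault)                              # normal conversion
    second = adc(v + DELTA, **fault) - int(DELTA / LSB)  # coded operand, then decoded
    return first, second, first == second                # a mismatch flags an error

print(concurrent_check(0.40))                               # fault-free: codes agree
print(concurrent_check(0.40, faulty_code=102, read_as=96))  # mismatch detected
```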

    All of the above techniques should be effective in detecting local catastrophic and some local parametric faults, since, given an appropriate input signal, there should be a clear discrepancy between the test circuit's output and that of the component being tested. On the other hand, because of the area overhead, the yield of the die will decrease [40]. This decrease, nevertheless, is likely to be small if the area overhead of the test circuit is small. Such circuitry is likely to be less effective in detecting global parametric faults and component degradation since component variations in the test circuitry are likely to track those in the circuit being tested. Hence, only major changes in component values are likely to be detected.

    C. BIST for Mixed-Signal Integrated Circuits

    When designing BIST circuitry for mixed-signal integrated circuits, it is often possible to exploit existing on-chip


    Fig. 15. A programmable biquad [89].

    Fig. 16. An example of a mixed-signal circuit architecture.

    hardware and consequently reduce the area overhead needed for the on-chip generation of test signals and analysis of test results. In particular, such circuits have DACs and ADCs which may be used for testing through reconfiguring the connections between blocks. By taking advantage of ADCs and DACs which are already part of a design, mixed-signal circuits can be tested with digital testers, components may be tested in parallel, and testing is more easily performed in the field.

    A common architecture for a mixed-signal circuit is shown in Fig. 16. This architecture assumes that a mixed-signal circuit is composed of analog input components, connected to a large digital section by an ADC, which in turn is connected to analog output components by a DAC. Given the on-chip DAC, a digital test stimulus may be implemented on-chip in order to test the analog output block. Specifically, Ohletz [49] has proposed a pseudorandom piecewise-constant input signal with different amplitudes, generated with a linear feedback shift register (LFSR), the DAC, and an output amplifier (Fig. 17). Alternatively, input stimuli could come from a ROM or DSP circuitry, rather than an LFSR.

    Fig. 17. An input stimulus generator [49].

    All of these approaches keep the hardware overhead to a minimum by reconfiguring and reusing existing circuitry on chip during the test mode.
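
    A software sketch of such a stimulus generator is given below: a maximal-length 16-bit LFSR supplies pseudorandom words, and an ideal DAC model maps the top bits of each word to an analog level that is held for one clock period. The polynomial, word width, and full-scale range are illustrative choices, not those of [49].

```python
def lfsr_words(seed=0xACE1):
    """16-bit Fibonacci LFSR (taps 16, 14, 13, 11); yields one pseudorandom word per clock."""
    state = seed
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

def piecewise_constant_stimulus(num_steps, full_scale=1.0, bits=8):
    """Map the top `bits` of each LFSR word through an ideal DAC to a held analog level."""
    gen = lfsr_words()
    return [full_scale * (next(gen) >> (16 - bits)) / (2 ** bits - 1)
            for _ in range(num_steps)]

print(piecewise_constant_stimulus(8))   # eight held levels of a pseudorandom staircase
```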

    During test mode, outputs of the analog output block may be measured or routed to the analog input pins (Fig. 18). The outputs of the analog input block are embedded on chip and therefore not accessible. However, during the test mode, these outputs may be measured after conversion of the signal to digital by the on-chip ADC. One way to capture the output is by built-in logic block observers (BILBO) [49], [94],


    Fig. 18. Implementation of the BIST scheme in [49].

    which are also used for digital test. The signals stored in the BILBO registers may then be fed to a multiple input signature register (MISR), which performs the task of on-chip data compaction using signature analysis [49]. Hence, the analog test results are evaluated in the digital domain, using the same techniques as used for on-chip evaluation of the digital response. Nevertheless, signature analysis is not the only way in which the digitized response from an analog block can be analyzed. The response could be compared against a known good response, stored in a ROM, before compaction, or postprocessing may be done based on the functional characteristics of the analog blocks. Specifically, in [39], given a pseudorandom piecewise-constant input signal, circuitry for computing the auto-correlation and cross-correlation of the impulse response is proposed.
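
    The compaction step can be modeled in software as below: each captured data word is folded into an LFSR-style register, and the final signature is compared against the signature of a known-good response. This is a word-wide software model of MISR-style compaction, with an illustrative register width and feedback polynomial, not the gate-level MISR of [49].

```python
def misr_signature(words, width=16, taps=(16, 14, 13, 11)):
    """Compact a sequence of data words into a single signature: on each clock the
    register is shifted with LFSR feedback and the incoming word is XORed in."""
    mask = (1 << width) - 1
    sig = 0
    for w in words:
        feedback = 0
        for t in taps:
            feedback ^= (sig >> (t - 1)) & 1
        sig = (((sig << 1) | feedback) & mask) ^ (w & mask)
    return sig

good_response   = [10, 200, 37, 255, 64, 128]   # digitized fault-free samples
faulty_response = [10, 200, 38, 255, 64, 128]   # one corrupted sample

print(hex(misr_signature(good_response)))
print(hex(misr_signature(faulty_response)))     # differs, so the fault is flagged
```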

    In both [39] and [49], the effectiveness of pseudorandom inputs in detecting catastrophic faults in analog components has been demonstrated. However, the effectiveness of such approaches in detecting both local and global parametric faults still needs to be determined. Because the circuit performances that are tested are different from the circuit specifications, there may be significant systematic losses in fault coverage and/or yield coverage for parametric faults.

    In [95], a BIST circuit is proposed for looking at abnormal changes in the power supply current. The proposed circuit involves an upper limit detector, for detecting an abnormally high power supply current, a lower limit detector, for detecting an abnormally low power supply current, and some logic to signal if there is a fault. The idea behind this approach is that faults will either increase or decrease the power supply current compared to the fault-free circuit. When using this power supply current monitor to test ADCs, the input voltage is varied so that all states of the ADC are exercised. A reasonable fault coverage of catastrophic faults has been demonstrated by simulation. Nevertheless, as with using random inputs to test analog blocks, the effectiveness of this approach in detecting local and global parametric faults is still unknown.
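
    Behaviorally, the monitor reduces to a window check on the sampled supply current while the ADC input is swept, as in the sketch below; the limits and current traces are illustrative numbers only.

```python
def idd_window_monitor(idd_samples, lower=1.0e-3, upper=5.0e-3):
    """Return True (fault flagged) if any supply-current sample leaves the allowed window."""
    return any(i < lower or i > upper for i in idd_samples)

good_trace  = [2.1e-3, 2.4e-3, 2.2e-3, 2.3e-3]   # current while sweeping the ADC input
short_fault = [2.1e-3, 7.8e-3, 2.2e-3, 2.3e-3]   # a bridging fault raises the current
open_fault  = [2.1e-3, 0.2e-3, 2.2e-3, 2.3e-3]   # an open starves part of the circuit

for trace in (good_trace, short_fault, open_fault):
    print(idd_window_monitor(trace))   # False, True, True
```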

    A more traditional signal generator is proposed for BIST of ADCs in [4] and [96]. Specifically, tests are designed to measure the signal-to-noise ratio, gain tracking, and the frequency response of a sigma-delta ADC (Fig. 19). The stimulus is a precise multitone oscillator designed for an uncalibrated environment [97], [98]. The design of the oscillator is fully digital, except for an imprecise low-pass filter, and it is digitally programmable for multiple amplitudes, frequencies, and phases.

    Fig. 19. BIST circuit for testing ADCs [4].

    Fig. 20. Signal-to-noise ratio test using a correlator circuit [5].

    Three digital methods are proposed for analyzing the test results: the fast Fourier transform, the IEEE Standard 1057, and a narrow-band digital filter. The fast Fourier transform approach finds the signal and noise powers using a correlator circuit, like the one shown in Fig. 20. Hence, the on-chip computing resources that are required to implement this approach include registers to store the samples, plus circuitry to either compute or look up values for the sine and cosine functions. The IEEE Standard 1057 approach finds the signal and noise powers by fitting a sinusoid using regression and requires similar on-chip resources. And lastly, the narrow-band digital filter approach relies on on-chip bandpass and notch filters, as shown in Fig. 21. In [4] and [96], it is argued that the area overhead for the narrow-band digital filter is the least of the above three methods, but the test results are biased, since some of the noise power may be mistakenly included in the signal power. However, the bias may be minimized through proper design of the filter.
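
    The correlator-based analysis of Fig. 20 can be prototyped as below: the captured samples are correlated with sine and cosine references at the test-tone frequency to extract the signal power, and whatever remains is counted as noise. The sampling rate, tone frequency, record length, and noise level are illustrative and assume coherent sampling.

```python
import numpy as np

fs, n, k = 1.0e6, 4096, 64            # coherent sampling: exactly k tone periods in n samples
f0 = k * fs / n                       # 15.625 kHz test tone
t = np.arange(n) / fs

rng = np.random.default_rng(0)
samples = np.sin(2 * np.pi * f0 * t) + 1e-3 * rng.standard_normal(n)  # digitized ADC output

def snr_by_correlation(x):
    """Correlate with sine/cosine at f0 to get the signal power; the rest is noise."""
    s = (2.0 / n) * np.sum(x * np.sin(2 * np.pi * f0 * t))   # in-phase component
    c = (2.0 / n) * np.sum(x * np.cos(2 * np.pi * f0 * t))   # quadrature component
    signal_power = (s ** 2 + c ** 2) / 2.0
    noise_power = np.mean((x - np.mean(x)) ** 2) - signal_power
    return 10.0 * np.log10(signal_power / noise_power)

print(round(snr_by_correlation(samples), 1), "dB")   # roughly 57 dB for this noise level
```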

    Not only is it possible to build BIST circuitry for ADCs, but DACs can also be tested with on-chip circuitry.


    Fig. 21. Signal-to-noise ratio test using a digital lter [4].

    Fig. 22. Mixed-signal BIST scheme [4], [5].

    In [99], a BIST circuit has been designed to test the offset, gain, integral linearity, and differential linearity of DACs. For this circuit, the input is a digital sequence generated on chip, and the output is analyzed with sample-and-hold circuitry, various reference voltages, and a comparator. A modified version of this circuit has also been proposed for testing successive approximation ADCs, since they involve DACs.
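
    Once the DAC output levels have been captured (for example with the sample-and-hold and comparator scheme just described), the static figures can be computed as in the sketch below; the measured levels are illustrative values for a hypothetical 3-bit DAC, and an end-point fit is assumed.

```python
import numpy as np

measured = np.array([0.02, 0.14, 0.27, 0.36, 0.51, 0.63, 0.74, 0.88])  # volts per code

def dac_static_metrics(levels):
    codes = np.arange(len(levels))
    offset = levels[0]                                  # end-point fit: level at code 0
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)  # gain expressed as the average step
    ideal = offset + lsb * codes
    inl = (levels - ideal) / lsb                        # integral nonlinearity, in LSBs
    dnl = (np.diff(levels) - lsb) / lsb                 # differential nonlinearity, in LSBs
    return offset, lsb, inl, dnl

offset, lsb, inl, dnl = dac_static_metrics(measured)
print(f"offset = {offset:.3f} V, 1 LSB = {lsb:.4f} V")
print("INL (LSB):", np.round(inl, 2))
print("DNL (LSB):", np.round(dnl, 2))
```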

    Finally, since many circuits contain both ADCs and DACs, a sequential strategy for verifying both is proposed in [4] (Fig. 22). First, all digital circuitry is tested, including the digital components of the DAC, ADC, and the on-chip signal generator. Then, the ADC is tested using the on-chip digital oscillator. Next, the smoothing filter of the DAC is tested using signals from the digital signal processing unit. Its output is digitized by the ADC, and its response is analyzed by the digital circuitry. This verifies the functionality of the smoothing filter, and the combination of the digital signal generator and smoothing filter can now be used as a calibrated analog signal source. This also completes the testing of the DAC and ADC, since all other digital circuitry has been tested. Once the DAC, ADC, and signal generator are considered functional, the converters and the analog signal generator can be used to test other on-chip analog functions.

    The fault coverage of this approach to on-chip BIST still needs to be verified. However, because the test methods are very similar to traditional functional testing, it is likely that fault coverage will be high, even for global parametric faults. This is because, although global parametric variations in the test circuit will track those in the circuit being tested, the functions of the test circuit and the circuit being tested are different, and hence process sensitivities are likely to differ. As a result, one can expect less systematic loss in fault coverage and yield coverage for parametric faults.

    VI. SUMMARY

    Recently, work on analog testing has evolved from its early focus on diagnosing failures and degradation in analog board designs toward a focus on production test, including CAD tools for test set design and optimization, and circuit design techniques targeted to explore the tradeoff between on-chip versus external testers. Clearly, diagnosing failures is still important. Many circuits are still designed with discrete components, and failures in the field should be quickly corrected. Moreover, integrated circuits must be characterized before production runs, and this involves determining any systematic sources of yield loss, including components whose performances may fluctuate too much due to variations in manufacturing. However, the new areas of research in analog and mixed-signal testing are motivated by new concerns. Specifically, as analog and mixed-signal circuits become more complex and have shorter product cycles, they frequently cannot be tested using methods developed in the past, due to the longer testing times needed for high precision analog components and the lack of accessibility of analog components embedded in large mixed-signal chips. Research addressing these problems is still preliminary and is likely to evolve rapidly in the coming years.

    ACKNOWLEDGMENT

    The author would like to thank M. Ohletz, W. Kao, and the four reviewers for their helpful comments on this manuscript.

    REFERENCES

    [1] P. Duhamel and J. C. Rault, Automatic test generation techniques for analog circuits and systems: A review, IEEE Trans. Circuits Syst., vol. CS-26, pp. 411–439, July 1979.

    [2] J. W. Bandler and A. E. Salama, Fault diagnosis of analog circuits, Proc. IEEE, vol. 73, Aug. 1985.

    [3] A. Rappaport et al., Panel discussion: Impediments to mixed-signal IC development, in Proc. ISSCC, 1991, pp. 200–201.

    [4] M. F. Toner and G. W. Roberts, A BIST SNR, gain tracking and frequency response test of a sigma-delta ADC, IEEE Trans. Circuits Syst. II, vol. 42, pp. 1–15, Jan. 1995.

    [5] G. W. Roberts, Improving the testability of mixed-signal integrated circuits, in Proc. CICC, 1997, pp. 214–221.

    [6] T. M. Souders and G. N. Stenbakken, Cutting the high cost of testing, IEEE Spectrum, pp. 48–51, Mar. 1991.

    [7] J. Q. Xia, T. Austin, and N. Khouzam, Dynamic test emulation for EDA-based mixed-signal test development automation, in Proc. Int. Test Conf., 1995, pp. 761–770.

    [8] W. H. Kao and J. Q. Xia, Automatic synthesis of DUT board circuits for testing of mixed signal ICs, in VLSI Test Symp., 1993, pp. 230–236.

    [9] B. Webster, An integrated analog test simulation environment, in Proc. Int. Test Conf., 1989, pp. 567–571.

    [10] S. C. Bateman and W. H. Kao, Simulation of an integrated design and test environment for mixed-signal integrated circuits, in Proc. Int. Test Conf., 1992, pp. 405–414.

    [11] P. Caunegre and C. Abraham, Achieving simulation-based test program verification and fault simulation capabilities for mixed-signal systems, in European Design and Test Conf., 1995, pp. 469–477.

    [12] W. Kao, J. Xia, and T. Boydston, Automatic test program generation for mixed signal ICs via design to test link, in Proc. Int. Test Conf., 1992, pp. 860–865.

    [13] L. Bonet et al., Test features of the MC145472 ISDN U-transceiver, in Proc. Int. Test Conf., 1990, pp. 68–79.

    [14] S. D. Huss and R. S. Gyurcsik, Optimal ordering of analog integrated circuit tests to minimize test time, in Proc. DAC, 1991, pp. 494–499.

    [15] L. Milor and A. L. Sangiovanni-Vincentelli, Minimizing production test time to detect faults in analog circuits, IEEE Trans. Computer-Aided Design, vol. 13, pp. 796–813, June 1994.

    [16] , Optimal test set design for analog circuits, in Proc. ICCAD, 1990, pp. 294–297.

    [17] G. N. Stenbakken and T. M. Souders, Test-point selection and testability measures via QR factorization of linear models, IEEE Trans. Instrum. Measur., vol. 36, pp. 406–410, June 1987.


    [18] , Linear error modeling of analog and mixed-signal devices, in Proc. Int. Test Conf., 1991, pp. 573–581.

    [19] G. N. Stenbakken, T. M. Souders, and G. W. Stewart, Ambiguity groups and testability, IEEE Trans. Instrum. Measur., vol. 38, pp. 941–947, Oct. 1989.

    [20] G. J. Hemink, B. W. Meijer, and H. G. Kerkhoff, Testability analysis of analog systems, IEEE Trans. Computer-Aided Design, vol. 9, pp. 573–583, June 1990.

    [21] E. Liu et al., Analog testability analysis and fault diagnosis using behavioral modeling, in Proc. CICC, 1994, pp. 413–416.

    [22] E. Felt and A. L. Sangiovanni-Vincentelli, Testing of analog systems using behavioral models and optimal experimental design techniques, in Proc. ICCAD, 1994, pp. 672–678.

    [23] T. M. Souders and G. N. Stenbakken, A comprehensive approach for modeling and testing analog and mixed-signal devices, in Proc. Int. Test Conf., 1990, pp. 169–176.

    [24] W. Maly, A. J. Strojwas, and S. W. Director, VLSI yield prediction and estimation: A unified framework, IEEE Trans. Computer-Aided Design, vol. 5, pp. 114–130, Jan. 1986.

    [25] D. M. H. Walker, Yield Simulation for Integrated Circuits. Norwell, MA: Kluwer, 1987.

    [26] A. Jee and F. J. Ferguson, Carafe: An inductive fault analysis tool for CMOS VLSI circuits, in Proc. IEEE VLSI Test Symp., 1993, pp. 92–98.

    [27] M. Ohletz, Local layout realistic faults mapping scheme for analogue integrated circuits, in Proc. CICC, 1996, pp. 475–478.

    [28] L. Milor and V. Visvanathan, Detection of catastrophic faults in analog integrated circuits, IEEE Trans. Computer-Aided Design, vol. 8, pp. 114–130, Feb. 1989.

    [29] C. J. B. Spanos and S. W. Director, Parameter extraction for statistical IC process characterization, IEEE Trans. Computer-Aided Design, vol. 5, pp. 66–79, Jan. 1986.

    [30] C. Michael and M. Ismail, Statistical modeling of device mismatch for analog MOS integrated circuits, IEEE J. Solid-State Circuits, vol. 27, pp. 154–165, Jan. 1992.

    [31] K. Eshbaugh, Generation of correlated parameters for statistical circuit simulation, IEEE Trans. Computer-Aided Design, vol. 11, pp. 1198–1206, Oct. 1992.

    [32] N. Salamina and M. R. Rencher, Statistical bipolar circuit design using MSTAT, in Proc. ICCAD, 1989, pp. 198–201.

    [33] C. Y. Chao, H. J. Lin, and L. Milor, Optimal testing of VLSI analog circuits, IEEE Trans. Computer-Aided Design, vol. 16, pp. 58–77, Jan. 1997.

    [34] N. Nagi, A. Chatterjee, and J. A. Abraham, Fault simulation of linear analog circuits, J. Electron. Testing: Theory and Applicat., vol. 4, pp. 345–360, 1993.

    [35] N. Nagi and J. Abraham, Hierarchical fault modeling for linear analog circuits, Analog Integrated Circuits and Signal Processing, vol. 10, pp. 89–99, June/July 1996.

    [36] R. J. A. Harvey et al., Analogue fault simulation based on layout dependent fault models, in Proc. Int. Test Conf., 1994, pp. 641–649.

    [37] A. Meixner and W. Maly, Fault modeling for the testing of mixed integrated circuits, in Proc. Int. Test Conf., 1991, pp. 564–572.

    [38] C. Sebeke, J. P. Teixeira, and M. J. Ohletz, Automatic fault extraction and simulation of layout realistic faults for integrated analogue circuits, in European Design and Test Conf., 1995, pp. 464–468.

    [39] C. Y. Pan and K. T. Cheng, Pseudo-random testing and signature analysis for mixed-signal circuits, in Proc. ICCAD, 1995, pp. 102–107.

    [40] C. H. Stapper and R. J. Rosner, Integrated circuit yield management and yield analysis: Development and implementation, IEEE Trans. Semiconduct. Manufact., vol. 8, pp. 95–102, May 1995.

    [41] C. M. Kurker et al., Hierarchical yield estimation of large analog integrated circuits, IEEE J. Solid-State Circuits, vol. 28, pp. 203–209, Mar. 1993.

    [42] C. Y. Chao and L. Milor, Performance modeling of circuits using additive regression splines, IEEE Trans. Semiconduct. Manufact., vol. 8, pp. 239–251, Aug. 1995.

    [43] L. Milor and A. Sangiovanni-Vincentelli, Computing parametric yield accurately and efficiently, in Proc. ICCAD, 1990, pp. 116–119.

    [44] T. K. Yu et al., Statistical performance modeling and parametric yield estimation of MOS VLSI, IEEE Trans. Computer-Aided Design, vol. CAD-6, pp. 1013–1022, Nov. 1987.

    [45] K. K. Low and S. W.