
PROCEEDINGS OF THE IEEE, VOL. 71, NO. 1, JANUARY 1983

Design for Testability - A Survey

THOMAS W. WILLIAMS, MEMBER, IEEE, AND K. P. PARKER, MEMBER, IEEE

    Invited Paper

Abstract - This paper discusses the basics of design for testability. A short review of testing is given along with some reasons why one should test. The different techniques of design for testability are discussed in detail. These include techniques which can be applied to today's technologies and techniques which have been recently introduced and will soon appear in new designs.

I. INTRODUCTION

INTEGRATED Circuit Technology is now moving from Large-Scale Integration (LSI) to Very-Large-Scale Integration (VLSI). This increase in gate count, which now can be as much as factors of three to five times, has also brought a decrease in gate costs, along with improvements in performance. All these attributes of VLSI are welcomed by the industry. However, a problem never adequately solved by LSI is still with us and is getting much worse: the problem of determining, in a cost-effective way, whether a component, module, or board has been manufactured correctly [1]-[3], [52]-[68]. The testing problem has two major facets:

1) test generation [74]-[99]
2) test verification [100]-[114].

Test generation is the process of enumerating stimuli for a circuit which will demonstrate its correct operation. Test verification is the process of proving that a set of tests is effective toward this end. To date, formal proof has been impossible in practice. Fault simulation has been our best alternative, yielding a quantitative measure of test effectiveness. With the vast increase in circuit density, the ability to generate test patterns automatically and conduct fault simulation with these patterns has drastically waned. As a result, some manufacturers are foregoing these more rigorous approaches and are accepting the risks of shipping a defective product. One general approach to addressing this problem is embodied in a collection of techniques known as Design for Testability [12]-[35]. Design for Testability initially attracted interest in connection with LSI designs. Today, in the context of VLSI, the phrase is gaining even more currency. The collection of techniques that comprise Design for Testability are, in some cases, general guidelines; in other cases, they are hard and fast design rules. Together, they can be regarded essentially as a menu of techniques, each with its associated cost of implementation and return on investment. The purpose of this paper is to present the basic concepts in testing, beginning with the fault models and carrying through to the different techniques associated with Design for Testability which are known today in the public sector.

Manuscript received June 14, 1982; revised September 15, 1982.
T. W. Williams is with IBM, General Technology Division, Boulder, CO 80302.
K. P. Parker is with Hewlett-Packard, Loveland Instrument Division, Loveland, CO 80537.

The design for testability techniques are divided into two categories [10]. The first category is that of the ad hoc technique for solving the testing problem. These techniques solve a problem for a given design and are not generally applicable to all designs. This is contrasted with the second category of structured approaches. These techniques are generally applicable and usually involve a set of design rules by which designs are implemented. The objective of a structured approach is to reduce the sequential complexity of a network to aid test generation and test verification. The first ad hoc approach is partitioning [13], [17], [23], [26]. Partitioning is the ability to disconnect one portion of a network from another portion of a network in order to make testing easier. The next approach, which is used at the board level, is that of adding extra test points [23], [24]. The third ad hoc approach is that of Bus Architecture Systems [12],

[27]. This is similar to the partitioning approach and allows one to divide and conquer - that is, to reduce the network to smaller subnetworks which are much more manageable. These subnetworks are not necessarily designed with any design for testability in mind. The fourth technique, which bridges both the structured approach and the ad hoc approach, is that of Signature Analysis [12], [27], [33], [55]. Signature Analysis requires some design rules at the board level, but is not directed at the same objective as the structured approaches are - that is, the ability to observe and control the state variables of a sequential machine. For structured approaches, there are essentially four categories which will be discussed - the first of which is a multiplexer technique [14], [21], Random-Access Scan, that has been recently published and has been used, to some extent, by others before. The next techniques are those of the Level-Sensitive Scan Design (LSSD) [16], [18]-[20], [34], [35] approach and the Scan Path approach, which will be discussed in detail. These techniques allow the test generation problem to be completely reduced to one of generating tests for combinational logic. Another approach which will be discussed is that of the Scan/Set Logic [31]. This is similar to the LSSD approach and the Scan Path approach since shift registers are used to load and unload data. However, these shift registers are not part of the system data path, and all system latches are not necessarily controllable and observable via the shift register. The fourth approach which will be discussed is that of Built-In Logic Block Observation (BILBO) [25], which has just recently been proposed. This technique has the attributes of both the LSSD network and the Scan Path network - the ability to separate the network into combinational and sequential parts - and has the attribute of Signature Analysis, that is, employing linear feedback shift registers. For each of the techniques described under the structured approach, the constraints, as well as various ways in which




Fig. 1. Test for input stuck-at fault. (a) Fault-free AND gate (good machine). (b) Faulty AND gate (faulty machine).

they can be exploited in design, manufacturing, testing, and field servicing will be described. The basic storage devices and the general logic structure resulting from the design constraints will be described in detail. The important question of how much it costs in logic gates and operating speed will be discussed qualitatively. All the structured approaches essentially allow the controllability and observability of the state variables in the sequential machine. In essence, then, test generation and fault simulation can be directed more at a combinational network, rather than at a sequential network.

A. Definitions and Assumptions

A model of faults which is used throughout the industry, one that does not take into account all possible defects but is a more global type of model, is the Stuck-At model. The Stuck-At model [1]-[3], [9], [11] assumes that a logic gate input or output is fixed to either a logic 0 or a logic 1. Fig. 1(a) shows an AND gate which is fault-free. Fig. 1(b) shows an AND gate with input A Stuck-At-1 (S-A-1). The faulty AND gate perceives the A input as 1, irrespective of the logic value placed on the input. The pattern applied to the fault-free AND gate in Fig. 1 has an output value of 0, since the input is 0 on the A input and 1 on the B input, and the ANDing of those two leads to a 0 on the output. The pattern in Fig. 1(b) shows an output of 1, since the A input is perceived as a 1 even though a 0 is applied to that input. The 1 on the B input is perceived as a 1, and the results are ANDed together to give a 1 output. Therefore, the pattern shown in Fig. 1(a) and (b) is a test for the A input S-A-1, since there is a difference between the faulty gate (faulty machine) and the good gate (good machine). This pattern, 01 on the A and B inputs, respectively, is considered a test because the good machine responds differently from the faulty machine. If they had the same response, then that pattern would not have constituted a test for that fault. If a network contained N nets, any net may be good, Stuck-At-1, or Stuck-At-0; thus all possible network state combinations would be 3^N. A network with 100 nets, then, would contain 5 x 10^47 different combinations of faults. This would be far too many faults to assume. The run time of any program trying to generate tests or fault simulate tests for this kind of design would be impractical.
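To make the good-machine/faulty-machine comparison concrete, here is a minimal Python sketch (illustrative only, not from the paper) of the AND gate of Fig. 1 with and without the A input stuck-at-1; a pattern is a test exactly when the two machines disagree.

    # Minimal sketch of the stuck-at test idea from Fig. 1 (illustrative only).

    def good_and(a, b):
        """Fault-free AND gate (good machine)."""
        return a & b

    def faulty_and_a_sa1(a, b):
        """AND gate with input A stuck-at-1 (faulty machine): A is always seen as 1."""
        return 1 & b

    # A pattern is a test for the fault if the two machines respond differently.
    for a in (0, 1):
        for b in (0, 1):
            is_test = good_and(a, b) != faulty_and_a_sa1(a, b)
            print(f"A={a} B={b}  good={good_and(a, b)} faulty={faulty_and_a_sa1(a, b)}  test={is_test}")
    # Only A=0, B=1 exposes the fault, matching the "01" pattern discussed in the text.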

Therefore, the industry, for many years, has clung to the single Stuck-At fault assumption. That is, a good machine will have no faults. The faulty machines that are assumed will have one, and only one, of the stuck faults. In other words, all faults taken two at a time are not assumed, nor are all faults taken three at a time, etc. History has proven that the single

Stuck-At fault assumption, in prior technologies, has been adequate. However, there could be some problems in LSI, particularly with CMOS, using the single Stuck-At fault assumption. The problem with CMOS is that there are a number of faults which could change a combinational network into a sequential network. Therefore, the combinational patterns are no longer effective in testing the network in all cases. It still remains to be seen whether, in fact, the single Stuck-At fault assumption will survive the CMOS problems. Also, the single Stuck-At fault assumption does not, in general,

cover the bridging faults [43] that may occur. Historically, again, bridging faults have been detected by having a high level - that is, in the high 90-percent range - of single Stuck-At fault coverage, where the single Stuck-At fault coverage is defined to be the number of faults that are tested divided by the number of faults that are assumed.

B. The VLSI Testing Problem

The VLSI testing problem is the sum of a number of problems. All the problems, in the final analysis, relate to the cost of doing business (dealt with in the following section). There are two basic problem areas:

1) test generation
2) test verification via fault simulation.

With respect to test generation, the problem is that as logic networks get larger, the ability to generate tests automatically is becoming more and more difficult. The second facet of the VLSI testing problem is the difficulty in fault simulating the test patterns. Fault simulation is that process by which the fault coverage is determined for a specific set of input test patterns. In particular, at the conclusion of the fault simulation, every fault that is detected by the given pattern set is listed. For a given logic network with 1000 two-input logic gates, the maximum number of single Stuck-At faults which can be assumed is 6000. Some reduction in the number of single Stuck-At faults can be achieved by fault equivalencing [36], [38], [41], [42], [47]. However, the number of single Stuck-At faults needed to be assumed is about 3000. Fault simulation, then, is the process of applying every given test pattern to a fault-free machine and to each of the 3000 copies of the good machine containing one, and only one, of the single Stuck-At faults. Thus fault simulation, with respect to run time, is similar to doing 3001 good-machine simulations.
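The following Python sketch illustrates this style of serial fault simulation on an invented two-gate netlist (the netlist, net names, and patterns are assumptions for the example): each single stuck-at fault is injected in turn, every pattern is applied to the good machine and to the faulty copy, and the detected faults are tallied into a coverage figure.

    from itertools import product

    # Toy combinational netlist: c = AND(a, b), d = OR(c, b).
    # Nets: a, b, c, d.  Faults: each net stuck-at-0 and stuck-at-1.
    def simulate(pattern, fault=None):
        """Evaluate the toy netlist; 'fault' is (net, value) or None for the good machine."""
        values = {"a": pattern[0], "b": pattern[1]}
        def net(name):
            if fault and fault[0] == name:
                return fault[1]          # the faulty machine forces this net to a fixed value
            return values[name]
        values["c"] = net("a") & net("b")
        values["d"] = net("c") | net("b")
        return net("d")                  # primary output

    faults = [(n, v) for n in ("a", "b", "c", "d") for v in (0, 1)]
    patterns = list(product((0, 1), repeat=2))           # here: exhaustive, 4 patterns

    detected = {f for f in faults
                for p in patterns
                if simulate(p) != simulate(p, fault=f)}  # good vs. faulty response differs
    print(f"fault coverage = {len(detected)}/{len(faults)}")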

Techniques are available to reduce the complexity of fault simulation; however, it still is a very time-consuming and, hence, expensive task [96], [104], [105], [107], [110], [112]-[114]. It has been observed that the computer run time to do test generation and fault simulation [80] is approximately proportional to the number of logic gates raised to the power of 3; hence, small increases in gate count will yield quickly increasing run times. Equation (1)


T = K N^3     (1)

shows this relationship, where T is computer run time, N is the number of gates, and K is the proportionality constant. (The value of the exponent given here, 3, is perhaps pessimistic in some cases; other analyses have used the value 2 instead. A quick rationale goes as follows: with a linear increase k in circuit size comes an attendant linear increase in the number of failure mechanisms, now yielding a k-squared increase in work. Also, as circuits become larger, they tend to become more strongly connected, such that a given block is affected by more blocks and even itself. This causes more work to be done, in a range we feel to be k cubed. This fairly nebulous concept of connectivity seems to be the cause for debate on whether the exponent should be 3 or some other value.) The relationship does not take into account the falloff in automatic test generation capability due to the sequential complexity of the network. It has been observed that computer run time just for fault simulation is proportional to N^2, without even considering the test generation phase.

When one talks about testing, the topic of functional testing always comes up as a feasible way to test a network. Theoretically, to do a complete functional test (exhaustive testing) seems to imply that all entries in a Karnaugh map (or excitation table) must be tested for a 1 or a 0. This means that if a network has N inputs and is purely combinational, then 2^N patterns are required to do a complete functional test. Furthermore, if a network has N inputs with M latches, at a minimum it takes 2^(N+M) patterns to do a complete functional test. Rarely is that minimum ever obtainable; in fact, the number of tests required to do a complete functional test is usually very much higher than that. With LSI, this may be a network with N = 25 and M = 50, or 2^75 patterns, which is approximately 3.8 x 10^22. Assuming one had the patterns and applied them at an application rate of 1 microsecond per pattern, the test time would be over a billion (10^9) years.
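A short Python calculation (illustrative only) reproduces the arithmetic behind these figures for N = 25 inputs and M = 50 latches at an application rate of 1 microsecond per pattern.

    # Exhaustive-test arithmetic for N primary inputs and M latches (illustrative).
    N_INPUTS, M_LATCHES = 25, 50
    patterns = 2 ** (N_INPUTS + M_LATCHES)          # 2^75, about 3.8e22 patterns
    seconds = patterns * 1e-6                       # 1 microsecond per applied pattern
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{patterns:.2e} patterns -> {years:.2e} years")   # on the order of 1e9 years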

C. Cost of Testing

One might ask why so much attention is now being given to

the level of testability at chip and board levels. The bottom line is the cost of doing business. A standard among people familiar with the testing process is: if it costs $0.30 to detect a fault at the chip level, then it would cost $3 to detect that same fault when it was embedded at the board level; $30 when it is embedded at the system level; and $300 when it is embedded at the system level but has to be found in the field. Thus if a fault can be detected at a chip or board level, then significantly larger costs per fault can be avoided at subsequent levels of packaging. With VLSI and the inadequacy of automatic test generation and fault simulation, there is considerable difficulty in obtaining the level of testability required to achieve acceptable defect levels. If the defect level of boards is too high, the cost of field repairs is also too high. These costs, and in some cases the inability to obtain a sufficient test, have led to the need for Design for Testability.

II. DESIGN FOR TESTABILITY

There are two key concepts in Design for Testability: controllability and observability. Control and observation of a network are central to implementing its test procedure. For example, consider the case of the simple AND block in Fig. 1. In order to be able to test the A input Stuck-At-1, it was necessary to control the A input to 0 and the B input to 1 and be able to observe the C output to determine whether a 0 was observed or a 1 was observed. The 0 is the result of the good machine, and the 1 would be the result if you had a faulty machine. If this AND block is embedded into a much larger sequential network, the requirement of being able to control the A and B inputs to 0 and 1, respectively, and being able to observe the output C, be it through some other logic blocks, still remains. Therein lies part of the problem of being able to generate tests for a network.

Because of the need to determine whether a network has the attributes of controllability and observability that are desired, a number of programs have been written which essentially give analytic measures of controllability and observability for different nets in a given sequential network [69]-[73]. After observing the results of one of these programs on a given network, the logic designer can then determine whether some of the techniques which will be described later can be applied to this network to ease the testing problem. For example, test points may be added at critical points which are not observable or which are not controllable, or some of the techniques of Scan Path or LSSD can be used to initialize certain latches in the machine to avoid the difficulties of controllability associated with sequential machines. The popularity of such tools is continuing to grow, and a number of companies are now embarking upon their own controllability/observability measures.

III. AD HOC DESIGN FOR TESTABILITY

Testing has moved from the afterthought position that it used to occupy to being part of the design environment in LSI and VLSI. When testing was an afterthought, it was a very expensive process. Products were discarded because there was no adequate way to test them in production quantities. There are two basic approaches which are prevalent today in the industry to help solve the testing problem. The first approach categorized here is Ad Hoc, and the second approach is categorized as a Structured Approach. The Ad Hoc techniques are those techniques which can be applied to a given product but are not directed at solving the general sequential problem. They usually do offer relief, and their cost is probably lower than the cost of the Structured Approaches. The Structured Approaches, on the other hand, are trying to solve the general problem with a design methodology, such that when the designer has completed his design with one of these particular approaches, the results will be test generation and fault simulation at acceptable costs. Structured Approaches lend themselves more easily to design automation. Again, the main difference between the two approaches is probably the cost of implementation and, hence, the return on investment for this extra cost. In the Ad Hoc approaches, the job of doing test generation and fault simulation is usually not as simple or as straightforward as it would be with the Structured Approaches, as we shall see shortly. A number of techniques have evolved from MSI to LSI and now into VLSI that fall under the category of the ad hoc approaches to Design for Testability. These techniques are usually applied at the board level and do not necessarily require changes in the logic design in order to accomplish them.

A. Partitioning

Because the task of test pattern generation and fault simulation is proportional to the number of logic gates to the third power, a significant amount of effort has been directed at approaches called Divide and Conquer. There are a number of ways in which the partitioning approach to Design for Testability can be implemented. The first is to partition mechanically by dividing a network in half. In essence, this would reduce each of the two resulting test generation and fault simulation tasks by a factor of 8. Unfortunately, having two boards rather than one board can be a significant cost disadvantage and defeats the purpose of integration.
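Under the cubic run-time model of (1), a quick Python check (with an arbitrary gate count and proportionality constant) shows what halving a network buys.

    # Run-time model T = K * N**3 from (1); K is an arbitrary proportionality constant here.
    K, N = 1.0, 1000                      # hypothetical gate count

    whole = K * N**3                      # one unpartitioned network
    half = K * (N // 2)**3                # one of the two halves
    print(whole / half)                   # 8.0: each half is 1/8 the work
    print(whole / (2 * half))             # 4.0: total work across both halves drops by 4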


Fig. 2. Use of degating logic for logical partitioning.

Fig. 3. Degating lines for oscillator.

Another approach that helps the partitioning problem, as well as helping one to Divide and Conquer, is to use jumper wires. These wires would go off the board and then back onto the board, so that the tester and the test generator can control and observe these nets directly. However, this could mean a significant number of I/O contacts at the board level, which could also get very costly. Degating is another technique for separating modules on a board. For example, in Fig. 2, a degating line goes to two AND blocks that are driven from Module 1. The results of those two AND blocks go to two independent OR blocks - one controlled by Control Line 1, the other by Control Line 2. The output of the OR block from Control Line 1 goes into Module 2, and the output of Control Line 2 goes into Module 3. When the degate line is at the 0 value, the two Control Lines, 1 and 2, can be used to drive directly into Modules 2 and 3. Therefore, complete controllability of the inputs to Modules 2 and 3 can be obtained by using these control lines. If those two nets happen to be very difficult nets to control, as pointed out, say, by a testability measure program, then this would be a very cost-effective way of controlling those two nets and, hence, being able to derive the tests at a very reasonable cost. A classical example of degating logic is that associated with an oscillator, as shown in Fig. 3. In general, if an oscillator is free-running on a board, driving logic, it is very difficult, and sometimes impossible, to synchronize the tester with the activity of the logic board. As a result, degating logic can be used here to block the oscillator and have a pseudo-clock line which can be controlled by the tester, so that the dc testing of all the logic on that board can be synchronized. All of these techniques require a number of extra primary inputs and primary outputs and possibly extra modules to perform the degating.

B. Test Points

Another approach to help the controllability and observability of a sequential network is to use test points [23], [24]. If a test point is used as a primary input to the network, then it can function to enhance controllability. If a test point is used as a primary output, then it is used to enhance the observability of a network. In some cases, a single pin can be used as both an input and an output. For example, in Fig. 4, Module 1 has a degate function, so that the outputs of those two pins on the module can go to noncontrolling values. Thus the external pins which are dotted into those nets could control those nets and drive Module 2.

Fig. 4. Test points used as both inputs and outputs.

Fig. 5. Bed of Nails test.

On the other hand, if the degate function is at the opposite value, then the output of Module 1 can be observed on these external pins. Thus the enhancement of controllability and observability can be accommodated by adding pins which can act as both inputs and outputs under certain degating conditions. Another technique which can be used for controllability is to have a pin which, in one mode, implies system operation, and in another mode takes N inputs and gates them to a decoder. The 2^N outputs of the decoder are used to control certain nets to values which otherwise would be difficult to obtain. By so doing, the controllability of the network is enhanced. As mentioned before, predictability is an issue which is as important as controllability and observability. Again, test points can be used here. For example, a CLEAR or PRESET function for all memory elements can be used. Thus the sequential machine can be put into a known state with very few patterns. Another technique which falls into the category of test points, and is very widely used, is that of the Bed of Nails [31] tester, Fig. 5. The Bed of Nails tester probes the underside of a board to give a larger number of points for observability and controllability. This is in addition to the normal tester contact to the board under test. The drawback of this technique is that the tester must have enough test points to be able to control and observe each one of these nails on the Bed of Nails tester. Also, there are extra loads which are placed on the nets, and this can cause some drive and receive problems. Furthermore, the mechanical fixture which will hold the Bed of Nails has to be constructed so that the normal forces on the probes are sufficient to guarantee reliable contacts. Another application of Bed of Nails testing is to do drive/sense nails [31], or in situ or in-circuit testing, which, effectively, is the technique of testing each chip on the board independently of the other chips on the board. For each chip, the appropriate nails and/or primary inputs are driven so as to prevent one chip from being driven by the other chips on the board. Once this state has been established, the isolated chip on the board can now be tested. In this case, the resolution to the failing


Fig. 6. Bus-structured microcomputer.

chip is much better than with edge-connector tests; however, there is some exposure to incomplete testing of interconnections, and care must be taken not to damage the circuit when overdriving it. Design for testability in a Bed of Nails environment must take the issues of contact reliability, multiplicity, and electrical loading into account.

C. Bus Architecture

An approach that has been used very successfully by the microcomputer designers to attack the partitioning problem is to use a bus-structured architecture. This architecture allows access to critical buses which go to many different modules on the computer board. For example, in Fig. 6, you can see that the data bus is involved with the microprocessor module, the ROM module, the RAM module, and the I/O Controller module. If there is external access to the data bus and three of the four modules can be turned off the data bus - that is, their outputs can be put into a high-impedance state (three-state driver) - then the data bus can be used to drive the fourth module, as if it were a primary input (or primary output) to that particular module. Similarly, with the address bus, access again must be controlled externally to the board, and thus the address bus can be very useful for controlling test patterns to the microcomputer board. These buses, in essence, partition the board in a unique way, so that testing of subunits can be accomplished. A drawback of bus-structured designs comes with faults on the bus itself. If a bus wire is stuck, any module or the bus trace itself may be the culprit. Normal testing is done by deducing the location of a fault from voltage information. Isolating a bus failure may require current measurements, which are much more difficult to do.

D. Signature Analysis

This technique for testing, introduced in 1977 [27], [33], [55], is heavily reliant on planning done in the design stage. That is why this technique falls between the Ad Hoc and the Structured Approaches for Design for Testability, since some care must be taken at the board level in order to ensure proper operation of this Signature Analysis of the board [12]. Signature Analysis is well suited to bus-structured architectures, as previously mentioned, and in particular those associated with microcomputers. This will become more apparent shortly. The integral part of the Signature Analysis approach is that of a linear feedback shift register [8]. Fig. 7 shows an example of a 3-bit linear feedback shift register. This linear feedback shift register is made up of three shift register latches. Each one is represented by a combination of an L1 latch and an L2 latch. These can be thought of as the master latch being the L1 latch and the slave latch being the L2 latch. An A clock clocks all the L1 latches, and a B clock clocks all the

Fig. 7. Counting capabilities of a linear feedback shift register.

Fig. 8. Use of signature analysis tool.

L2 latches, so that turning the A and B clocks on and off independently will shift the shift register one bit position to the right. Furthermore, this linear shift register has an EXCLUSIVE-OR gate which takes the output Q2, the second bit in the shift register, and EXCLUSIVE-ORs it with the third bit in the shift register, Q3. The result of that EXCLUSIVE-OR is the input to the first shift register stage. A single clock could be used for this shift register, which is generally the case; however, this concept will be used shortly when some of the structured design approaches are discussed which use two nonoverlapping clocks. Fig. 7 shows how this linear feedback shift register will count for different initial values.

For longer shift registers, the maximal-length linear feedback configurations can be obtained by consulting tables [8] to determine where to tap off the linear feedback shift register to perform the EXCLUSIVE-OR function. Of course, only EXCLUSIVE-OR blocks can be used; otherwise, the linearity would not be preserved. The key to Signature Analysis is to design a network which can stimulate itself. A good example of such a network would be microprocessor-based boards, since they can stimulate themselves using the intelligence of the processor driven by the memory on the board. The Signature Analysis procedure is one which has the shift register in the Signature Analysis tool, which is external to the board and not part of the board in any way, synchronized with the clocking that occurs on the board; see Fig. 8. A probe is used to probe a particular net on the board. The result of that probe is EXCLUSIVE-ORed into the linear feedback shift register. Of course, it is important that the linear feedback shift register be initialized to the same starting place every time, and that the clocking sequence be a fixed number, so that the tests can be repeated. The board must also have some initialization, so that its response will be repeated as well. After a fixed number of clock periods - let us assume 50 - a particular value will be stored in Q1, Q2, and Q3. It is not necessarily the value that would have occurred if the linear feedback shift register had just counted 50 times modulo 7.



The value will be changed because the values coming from the board via the probe will not necessarily be a continuous string of 1's; there will be 1's intermixed with 0's. The place where the shift register stops on the Signature Analysis tool - that is, the values for Q1, Q2, and Q3 - is the signature for that particular node for the good machine. The question is: if there were errors present at one or more points in the string of 50 observations of that particular net of the board, would the value stored in the shift register for Q1, Q2, and Q3 be different from the one for the good machine? It has been shown that with a 16-bit linear feedback shift register, the probability of detecting one or more errors is extremely high [55]. In essence, the signature, or residue, is the remainder of the data stream after division by an irreducible polynomial. There is considerable data compression - that is, after the results of a number of shifting operations, the test data are reduced to 16 bits or, in the case of Fig. 8, 3 bits. Thus the result of the Signature Analysis tool is basically a Go/No-Go determination for the output of that particular module. If the bad output for that module were allowed to cycle around through a number of other modules on the board and then feed back into this particular module, it would not be clear, after examining all the nodes in the loop, which module was defective - whether it was the module whose output was being observed, or whether it was another module upstream in the path. This gives rise to two requirements for Signature Analysis. First, closed-loop paths must be broken at the board level. Second, the best place to start probing with Signature Analysis is with a kernel of logic. In other words, on a microprocessor-based board, one would start with the outputs of the microprocessor itself and then build up from that particular point, once it has been determined that the microprocessor is good.

This breaking of closed loops is a tenet of Design for Testability and of Signature Analysis. There is a little overhead for implementing Signature Analysis. Some ROM space would be required (to stimulate the self-test), as well as extra jumpers, in order to break closed loops on the board. Once this is done, however, the test can be obtained for very little cost. The only question that remains is about the quality of the tests - that is, how good are the tests that are being generated, do they cover all the faults, etc. Unfortunately, the logic models - for example, of microprocessors - are not readily available to the board user. Even if a microprocessor logic model were available, one would not be able to do a complete fault simulation of the patterns because it would be too large. Hence, Signature Analysis may be the best that can be done for this particular board with the given inputs which the designer has. Presently, large numbers of users are currently using the Signature Analysis technique to test boards containing LSI and VLSI components.
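A small Python sketch of the compression just described (illustrative only): the 3-bit register and the Q2 XOR Q3 feedback follow Fig. 7, while the probe stream, its length of 50 clocks, and the injected error are invented for the example.

    def signature(bits, state=(0, 0, 0)):
        """Compress a probed bit stream with the 3-bit LFSR of Fig. 7.

        Each clock, the probe bit is XORed with the feedback (Q2 XOR Q3)
        and shifted into Q1; Q1 and Q2 each move one position to the right.
        """
        q1, q2, q3 = state
        for bit in bits:
            q1, q2, q3 = bit ^ q2 ^ q3, q1, q2
        return (q1, q2, q3)

    # Hypothetical 50-clock probe streams for a good and a faulty board.
    good_stream = [(3 * i + 1) % 2 for i in range(50)]
    bad_stream = list(good_stream)
    bad_stream[17] ^= 1                      # a single error somewhere in the stream

    print(signature(good_stream))            # the "signature" for this net on the good machine
    print(signature(bad_stream))             # a single error always changes the residue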

IV. STRUCTURED DESIGN FOR TESTABILITY

Today, with the utilization of LSI and VLSI technology, it has become apparent that even more care will have to be taken in the design stage in order to ensure testability and produceability of digital networks. This has led to rigorous and highly structured design practices. These efforts are being spearheaded not by the makers of LSI/VLSI devices but by electronics firms

which possess captive IC facilities and by the manufacturers of large main-frame computers. Most structured design practices [14]-[16], [18]-[21], [25], [31], [32], [34], [35] are built upon the concept that if the

Fig. 9. Classical model of a sequential network utilizing a shift register for storage.

values in all the latches can be controlled to any specific value, and if they can be observed with a very straightforward operation, then the test generation, and possibly the fault simulation task, can be reduced to that of doing test generation and fault simulation for a combinational logic network. A control signal can switch the memory elements from their normal mode of operation to a mode that makes them controllable and observable. It appears from the literature that several companies, such as IBM, Fujitsu Ltd., Sperry-Univac, and Nippon Electric Co., Ltd. [14]-[16], [18]-[21], [31], [32], [35], have been dedicating formidable amounts of resources toward Structured Design for Testability. One notes, simply by scanning the literature on testing, that many of the practical concepts and tools for testing were developed by main-frame manufacturers who do not lack for processor power. It is significant, then, that these companies, with their resources, have recognized that unstructured designs lead to unacceptable testing problems. Presently, IBM has extensively documented its efforts in Structured Design for Testability, and these are reviewed first.

A. Level-Sensitive Scan Design (LSSD)

With the concept that the memory elements in an IC can be threaded together into a shift register, the memory elements' values can be both controlled and observed. Fig. 9 shows the familiar generalized sequential circuit model modified to use a shift register. This technique enhances both controllability and observability, allowing us to augment testing by controlling inputs and internal states, and easily examining internal state behavior. An apparent disadvantage is the serialization of the test, potentially costing more time for actually running a test.

LSSD is IBM's discipline for structured design for testability. Scan refers to the ability to shift into or out of any state of the network. Level-sensitive refers to constraints on circuit excitation, logic depth, and the handling of clocked circuitry. A key element in the design is the shift register latch (SRL) such as can be implemented in Fig. 10. Such a circuit is immune to most anomalies in the ac characteristics of the clock, requiring only that it remain high (sample) at least long enough to stabilize the feedback loop before being returned to the low (hold) state [18], [19]. The lines D and C form the normal-mode memory function, while lines I, A, B, and L2 comprise additional circuitry for the shift register function. The shift registers are threaded by connecting I to L2 and operated by clocking lines A and B in two-phase fashion. Fig. 11 shows four modules threaded for shift register action. Now note in Fig. 11 that each module could be an SRL or, one level up, a board containing threaded ICs, etc. Each level of pack-


Fig. 10. Shift register latch (SRL). (a) Symbolic representation. (b) Implementation in AND-INVERT gates.

Fig. 11. Interconnection of SRLs on an integrated circuit and board.

aging requires the same four additional lines to implement the shift register scan feature. Fig. 12 depicts a general structure for an LSSD subsystem with a two-phase system clock. Additional rules concerning the gating of clocks, etc., are given by Williams and Eichelberger [18], [19]. Also, it is not practical to implement RAM with SRL memory, so additional procedures are required to handle embedded RAM circuitry [20].
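The following Python sketch (an illustration of the threading idea, not IBM's implementation) models a chain of SRLs: on the A clock each L1 samples its scan input, on the B clock each L2 copies its own L1, and each L2 feeds the next SRL's scan input, so repeated A/B pulses shift a test vector through the chain.

    class SRL:
        """Toy shift register latch: an L1/L2 master-slave pair (illustrative)."""
        def __init__(self):
            self.l1 = 0
            self.l2 = 0

    def shift_in(chain, bits):
        """Shift bits into a threaded SRL chain using two-phase A/B clocking."""
        for bit in bits:
            # A clock: every L1 samples its scan input
            # (the previous SRL's L2, or the scan-in pin for the first SRL).
            prev = bit
            for srl in chain:
                srl.l1, prev = prev, srl.l2
            # B clock: every L2 takes the value of its own L1.
            for srl in chain:
                srl.l2 = srl.l1
        return [srl.l2 for srl in chain]

    chain = [SRL() for _ in range(4)]        # four threaded SRLs, as in Fig. 11
    print(shift_in(chain, [1, 0, 1, 1]))     # [1, 1, 0, 1]: the last bit lands in SRL 0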

Given that an LSSD structure is achieved, what are the rewards? It turns out that the network can now be thought of as purely combinational, where tests are applied via primary

Fig. 12. General structure of an LSSD subsystem with two system clocks.

inputs and shift-register outputs. The testing of combinational circuits is a well-understood and (barely) tractable problem. Now techniques such as the D-Algorithm [93], compiled-code Boolean simulation [2], [74], [106], [107], and adaptive random test generation [87], [95], [98] are again viable approaches to the testing problem. Further, as small subsystems are tested, their aggregates into larger systems are also testable by cataloging the position of each testable subsystem in the shift register chain. System tests become (ideally) simple concatenations of subsystem tests. Though ideals are rarely achieved, the potential for solving otherwise hopeless testing problems is very encouraging. In considering the cost/performance impacts, there are a number of negative impacts associated with the LSSD design philosophy. First of all, the shift register latches in the shift register are, logically, two or three times as complex as simple latches. Up to four additional primary inputs/outputs are required at each package level for control of the shift registers. External asynchronous input signals must not change more than once every clock cycle. Finally, all timing within the subsystem is controlled by externally generated clock signals. In terms of the additional complexity of the shift register hold latches, the overhead from experience has been in the range of 4 to 20 percent. The difference is due to the extent to which the system designer made use of the L2 latches for system function. It has been reported in the IBM System/38 literature that 85 percent of the L2 latches were used for system function. This drastically reduces the overhead associated with this design technique. With respect to the primary inputs/outputs that are required to operate the shift register, this number can be reduced significantly by making functional use of some of the pins. For example, the scan-out pin could be a functional output of an SRL for that particular chip. Also, overall performance of the subsystem may be degraded by the clocking requirement, but the effect should be small. The LSSD structured design approach for Design for Testability eliminates or alleviates some of the problems in designing, manufacturing, and maintaining LSI systems at a reasonable cost.


Fig. 13. Raceless D-type flip-flop with Scan Path.

B. Scan Path

In 1975, a survey paper of test generation systems in Japan

was presented by members of Nippon Electric Co., Ltd. [21]. In that survey paper, a technique they described as Scan Path was presented. The Scan Path technique has the same objectives as the LSSD approach which has just been described. The Scan Path technique's similarities to and differences from the LSSD approach will be presented.

The memory elements that are used in the Scan Path approach are shown in Fig. 13. This memory element is called a raceless D-type flip-flop with Scan Path. In system operation, Clock 2 is at a logic value of 1 for the entire period. This, in essence, blocks the test or scan input from affecting the values in the first latch. This D-type flip-flop really contains two latches. Also, by having Clock 2 at a logic value of 1, the values in Latch 2 are not disturbed. Clock 1 is the sole clock in system operation for this D-type flip-flop. When Clock 1 is at a value of 0, the System Data Input can be loaded into Latch 1. As long as Clock 1 is 0 for sufficient time to latch up the data, it can then turn off. As it turns off, it then makes Latch 2 sensitive to the data output of Latch 1. As long as Clock 1 is equal to 1 long enough for data to be latched up into Latch 2, reliable operation will occur. This assumes that the output of Latch 2 does not come around and feed the system data input to Latch 1 and change it during the time that the inputs to both Latch 1 and Latch 2 are active. The period of time during which this can occur is related to the delay of the inverter block for Clock 1. A similar phenomenon can occur with Clock 2 and its associated inverter block. This race condition is the exposure incurred by the use of only one system clock. This points out a significant difference between the Scan Path approach and the LSSD approach. One of the basic principles of the LSSD approach is level-sensitive operation - the ability to operate the clocks in such a fashion that no races will exist. In the LSSD approach, a separate clock is required for Latch 1 from the clock that operates Latch 2. In terms of the scanning function, the D-type flip-flop with Scan Path has its own scan input, called test input. This is clocked into the L1 latch by Clock 2 when Clock 2 is a 0, and the results of the L1 latch are clocked into Latch 2 when Clock 2 is a 1. Again, this applies to master/slave operation of Latch 1 and Latch 2 with its associated race; with proper attention to delays, this race will not be a problem.

Another feature of the Scan Path approach is the configuration used at the logic card level. Modules on the logic card are all connected up into a serial scan path, such that for each card there is one scan path. In addition, there are gates for selecting a particular card in a subsystem. In Fig. 14, when X and Y are both equal to 1 - that is the selection mechanism - Clock 2 will then be allowed to shift data through the scan path. At any other time, Clock 2 will be blocked, and its output will be blocked. The reason for blocking the output is that a number of card outputs can then be put together; the blocking function will put their outputs to noncontrolling values, so that a particular card can have unique control of the unique test output for that system.

It has been reported by the Nippon Electric Company that they have used the Scan Path approach, plus the partitioning which will be described next, for systems with 100,000 blocks or more. This was for the FLT-700 System, which is a large processor system. The partitioning technique is one which automatically separates the combinational network into smaller subnetworks, so that the test generator can do test generation for the small subnetworks rather than for the larger networks. A partition is automatically generated by backtracing from the D-type flip-flops, through the combinational logic, until a D-type flip-flop (or primary input) is encountered in the backtrace. Some care must be taken so that the partitions do not get too large.

To that end, the Nippon Electric Company approach has used a controlled D-type flip-flop to block the backtracing of certain partitions when they become too large. This is another facet of Design for Testability - that is, the introduction of extra flip-flops, totally independent of function, in order to control the partitioning algorithm.
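A rough Python sketch of the backtrace partitioning described above (the netlist format and signal names are invented for illustration): starting from a flip-flop's data input, combinational gates are collected transitively until the trace reaches another flip-flop or a primary input.

    # Hypothetical netlist: each signal maps to ("FF" | "GATE" | "PI", list of input signals).
    netlist = {
        "ff1": ("FF", ["g3"]),
        "ff2": ("FF", ["g1"]),
        "g3": ("GATE", ["g1", "g2"]),
        "g2": ("GATE", ["pi_b", "ff2"]),
        "g1": ("GATE", ["pi_a", "ff1"]),
        "pi_a": ("PI", []),
        "pi_b": ("PI", []),
    }

    def partition(ff_name):
        """Collect the combinational cone feeding one flip-flop (Scan Path partitioning)."""
        cone, stack = set(), list(netlist[ff_name][1])
        while stack:
            sig = stack.pop()
            kind, inputs = netlist[sig]
            if kind == "GATE" and sig not in cone:
                cone.add(sig)
                stack.extend(inputs)    # keep backtracing through combinational logic
            # Flip-flops and primary inputs terminate the backtrace (they bound the partition).
        return sorted(cone)

    print(partition("ff1"))             # ['g1', 'g2', 'g3']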

Other than the lack of the level-sensitive attribute in the Scan Path approach, the technique is very similar to the LSSD approach. The introduction of the Scan Path approach was the first practical implementation of shift registers for testing which was incorporated in a total system.

C. Scan/Set Logic

A technique similar to Scan Path and LSSD, but not exactly the same, is the Scan/Set technique put forth by Sperry-Univac [31]. The basic concept of this technique is to have shift registers, as in Scan Path or in LSSD, but these shift registers are not in the data path. That is, they are not in the system data path; they are independent of all the system latches. Fig. 15 shows an example of the Scan/Set Logic, referred to as bit-serial logic. The basic concept is that the sequential network can be



Fig. 15. Scan/Set Logic (bit-serial).

sampled at up to 64 points. These points can be loaded into the 64-bit shift register with a single clock. Once the 64 bits are loaded, a shifting process will occur, and the data will be scanned out through the scan-out pin. In the case of the set function, the 64 bits can be funneled into the system logic, and then the appropriate clocking structure required to load data into the system latches is required in this system logic. Furthermore, the set function could also be used to control different paths to ease the testing function. In general, this serial Scan/Set Logic would be integrated onto the same chip that contains the sequential system logic. However, some applications have been put forth where the bit-serial Scan/Set Logic was off-chip, and the bit-serial Scan/Set Logic only sampled outputs or drove inputs to facilitate in-circuit testing.

Recently, Motorola has come forth with a chip which is T²L and which has I²L logic integrated on that same chip. This chip has the Scan/Set Logic bit-serial shift registers built in I²L. The T²L portion of the chip is a gate array, and the I²L is on the chip whether the customer wants it or not. It is up to the customer to use the bit-serial logic if he chooses. At this point, it should be explained that if all the latches within the system sequential network are not both scanned and set, then the test generation function is not necessarily reduced to a total combinational test generation function and fault simulation function. However, this technique will greatly reduce the task of test generation and fault simulation. Again, the Scan/Set technique has the same objectives as Scan Path and LSSD - that is, controllability and observability. However, in terms of its implementation, it is not required that the set function set all system latches, or that the scan function scan all system latches. This design flexibility would have a reflection in the software support required to implement such a technique. Another advantage of this technique is that the scan function can occur during system operation - that is, the sampling pulse to the 64-bit serial shift register can occur while system clocks are being applied to the system sequential logic, so that a snapshot of the sequential machine can be obtained and off-loaded without any degradation in system performance.

D. Random-Access Scan

Another technique similar to the Scan Path technique and LSSD is the Random-Access Scan technique put forth by Fujitsu [14]. This technique has the same objective as Scan Path and LSSD - that is, to have complete controllability and observability of all internal latches. Thus the test generation

Fig. 16. Polarity-hold-type addressable latch.

Fig. 17. Set/Reset-type addressable latch.

function can be reduced to that of combinational test generation and combinational fault simulation as well. Random-Access Scan differs from the other two techniques in that shift registers are not employed. What is employed is an addressing scheme which allows each latch to be uniquely

selected, so that it can be either controlled or observed. The mechanism for addressing is very similar to that of a Random-Access Memory, and hence its name. Figs. 16 and 17 show the two basic latch configurations that are required for the Random-Access Scan approach. Fig. 16 is

a single latch which has added to it an extra data port, which is a Scan Data In port (SDI). These data are clocked into the latch by the SCK clock. The SCK clock can only affect this latch if both the X and Y addresses are one. Furthermore, when the X address and Y address are one, the Scan Data Out (SDO) point can be observed. System data, labeled Data in Figs. 16 and 17, are loaded into this latch by the system clock labeled CK. The set/reset-type addressable latch in Fig. 17 does not have a scan clock to load data into the system latch. This latch is first cleared by the CL line, and the CL line is connected to other latches that are also set/reset-type addressable latches. This, then, places the output value Q at a 0 value. A preset is directed at those latches that are required to be set to a 1 for that particular test. This preset is directed by addressing each one of those latches and applying the preset pulse labeled PR. The output Q of the latch will then go to a 1. The observability mechanism for Scan Data Out is exactly the same as for the latch shown in Fig. 16. Fig. 18 gives an overall view of the system configuration of the Random-Access Scan approach. Notice that, basically, there is a Y address, an X address, a decoder, the addressable storage elements (which are the memory elements or latches), the sequential machine, system clocks, and a CLEAR function. There is also an SDI, which is the input for a given latch, an SDO, which is the output data for that given latch, and a scan clock. There is also one logic gate necessary to create the preset function.
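A very small Python sketch of the addressing idea (purely illustrative; the 4 by 4 latch array and the helper names are invented): a latch is written or observed only when its X and Y address lines select it, much like a cell in a Random-Access Memory, so no shift chain is involved.

    # Illustrative model of Random-Access Scan addressing over a 4 x 4 latch array.
    latches = [[0] * 4 for _ in range(4)]     # latches[x][y] holds one latch state

    def scan_write(x_addr, y_addr, sdi_bit):
        """Pulse the scan clock SCK: only the latch whose X and Y lines are both selected loads SDI."""
        latches[x_addr][y_addr] = sdi_bit

    def scan_read(x_addr, y_addr):
        """Observe SDO for the addressed latch; all other latches are unaffected."""
        return latches[x_addr][y_addr]

    scan_write(2, 3, 1)                       # control one internal latch to a known value
    print(scan_read(2, 3))                    # 1: observe that latch without shifting a chain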


Fig. 18. Random-Access Scan network.

The Random-Access Scan technique allows the observability and controllability of all system latches. In addition, any point in the combinational network can be observed with the addition of one gate per observation point, as well as one address in the address gate per observation point. While the Scan Path approach and the LSSD approach require two latches for every point which needs to be observed, the overhead for Random-Access Scan is about three to four gates per storage element. In terms of primary inputs/outputs, the overhead is between 10 and 20. This pin overhead can be diminished by using the serial scan approach for the X and Y address counter, which would lead to 6 primary inputs/outputs.

V. SELF-TESTING AND BUILT-IN TESTS

As a natural outgrowth of the Structured Design approach for Design for Testability, Self-Tests and Built-In Tests have been getting considerably more attention. Four techniques which fall into this category will be discussed: BILBO, Syndrome Testing, Testing by Verifying Walsh Testing Coefficients, and Autonomous Testing. Each of these techniques will be described.

A. Built-In Logic Block Observation, BILBO

A technique recently presented takes the Scan Path and LSSD concept and integrates it with the Signature Analysis concept. The end result is a technique for Built-In Logic Block Observation, BILBO [25]. Fig. 19 gives the form of an 8-bit BILBO register. The blocks labeled Li (i = 1, 2, ..., 8) are the system latches. B1 and B2 are control values for controlling the different functions that

the BILBO register can perform. SIN is the scan-in input to the 8-bit register, and SOUT is the scan-out for the 8-bit register. Qi (i = 1, 2, ..., 8) are the output values for the eight system latches. Zi (i = 1, 2, ..., 8) are the inputs from the combinational logic. The structure that this network will be embedded into will be discussed shortly. There are three primary modes of operation for this register, as well as one secondary mode of operation. The first is shown in Fig. 19(b) - that is, with B1 and B2 equal to 11. This is the Basic System Operation mode, in which the

Fig. 19. BILBO and its different modes. (a) General form of BILBO register. (b) B1 B2 = 11, system orientation mode. (c) B1 B2 = 00, linear shift register mode. (d) B1 B2 = 10, signature analysis register with multiple inputs (Z1, Z2, ..., Z8).

Fig. 20. Use of BILBO registers to test Combinational Network 1.

Zi values are loaded into the Li, and the outputs are available on Qi for system operation. This would be the normal register function.

When B1 B2 equals 00, the BILBO register takes on the form of a linear shift register, as shown in Fig. 19(c). The scan-in input enters at the left and, through some inverters, the eight registers basically line up into a single scan path until the scan-out is reached. This is similar to Scan Path and LSSD. The third mode is when B1 B2 equals 10. In this mode, the BILBO register takes on the attributes of a linear feedback shift register of maximal length with multiple linear inputs. This is very similar to a Signature Analysis register, except that there is more than one input. In this situation, there are eight unique inputs. Thus after a certain number of shift clocks, say, 100, there would be a unique signature left in the BILBO register for the good machine. This good-machine signature could be off-loaded from the register by changing from mode B1 B2 = 10 to mode B1 B2 = 00, in which case a shift register operation would exist, and the signature could then be observed from the scan-out primary output. The fourth function that the BILBO register can perform is B1 B2 equal to 01, which would force a reset on the register. (This is not depicted in Fig. 19.) The BILBO registers are used in the system operation as shown in Fig. 20. Basically, a BILBO register with combinational logic and another BILBO register with combinational logic, as well as the output of the second combinational logic network, can feed back into the input of the first BILBO regis-



Fig. 21. Use of BILBO registers to test Combinational Network 2.

ter. The BILBO approach takes one other fact into account, and that is that, in general, combinational logic is highly susceptible to random patterns. Thus if the inputs to the BILBO register, Z1, Z2, ..., Z8, can be controlled to fixed values, such that the BILBO register is in the maximal-length linear feedback shift register mode (Signature Analysis), it will output a sequence of patterns which are very close to random patterns. Thus random patterns can be generated quite readily from this register. These sequences are called Pseudo-Random Patterns (PN).

If, in the first operation, the BILBO register on the left in Fig. 20 is used as the PN generator - that is, its data inputs are held to fixed values - then the output of that BILBO register will be random patterns. This will then do a reasonable test, if sufficient numbers of patterns are applied, of Combinational Logic Network 1. The results of this test can be stored, Signature Analysis fashion, in the multiple-input BILBO register on the right. After a fixed number of patterns have been applied, the signature is scanned out of the BILBO register on the right and checked for good-machine compliance. If that is successfully completed, then the roles are reversed, and the BILBO register on the right is used as a PN sequence generator while the BILBO register on the left is used as a Signature Analysis register with multiple inputs from Combinational Logic Network 2; see Fig. 21. In this mode, Combinational Logic Network 2 has random patterns applied to its inputs and its outputs stored in the BILBO register on the far left. Thus the testing of Combinational Logic Networks 1 and 2 can be completed at very high speeds by only applying the shift clocks while the two BILBO registers are in the Signature Analysis mode. At the conclusion of the tests, off-loading of the signatures can occur, and a determination of good-machine operation can be made. This technique solves the problem of test generation and fault simulation if the combinational networks are susceptible to random patterns. There are some known networks which are not susceptible to random patterns: Programmable Logic Arrays (PLAs); see Fig. 22. The reason for this is that the fan-in in PLAs is too large. If an AND gate in the search array had 20 inputs, then each random pattern would have a 1/2^20 probability of coming up with the correct input pattern. On the other hand, random combinational logic networks with a maximum fan-in of 4 can do quite well with random patterns. The BILBO technique solves another problem, and that is test data volume. In LSSD, Scan Path, Scan/Set, or Random-Access Scan, a considerable amount of test data volume is involved with the shifting in and out. With BILBO, if 100 patterns are run between scan-outs, the test data volume may be reduced by a factor of 100. The overhead for this technique is higher than for LSSD, since about two EXCLUSIVE-ORs must be used per latch position. Also, there is more delay in the system data path (one or two gate delays). If VLSI makes huge numbers of logic gates available, then this may be a very efficient way to use them.
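The following Python sketch (illustrative only; the 8-bit register width, the feedback taps, and the toy combinational function are all invented for the example) shows the two BILBO roles described above: one register run as a pseudo-random pattern generator and a second run as a multiple-input signature register compressing the responses.

    TAPS = (7, 5, 4, 3)                      # feedback tap positions (chosen for illustration)

    def step(state, parallel_in=0):
        """One clock of an 8-bit LFSR/MISR: shift, fold in feedback, XOR in parallel data."""
        feedback = 0
        for t in TAPS:
            feedback ^= (state >> t) & 1
        return (((state << 1) | feedback) & 0xFF) ^ parallel_in

    def toy_logic(pattern):
        """Stand-in for a combinational network under test (8 bits in, 8 bits out)."""
        return (pattern ^ (pattern >> 3)) & 0xFF

    prpg, misr = 0x01, 0x00                  # one BILBO as pattern generator, the other as MISR
    for _ in range(100):                     # apply 100 pseudo-random patterns at speed
        misr = step(misr, toy_logic(prpg))   # compress the network response into the signature
        prpg = step(prpg)                    # generate the next pseudo-random pattern
    print(f"signature = {misr:#04x}")        # compared against the stored good-machine signature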


Fig. 22. PLA model (n input lines, search array, word lines, bit lines, outputs).

Fig. 23. Syndrome test structure.

B. Syndrome Testing

Recently, a technique was shown which could be used to test a network with fairly minor changes to the network. The technique is Syndrome Testing. The technique requires that all 2^n patterns be applied to the inputs of the network and that the number of 1's on the output be counted [115], [116]. Testing is done by comparing the number of 1's for the good machine to the number of 1's for the faulty machine. If there is a difference, the fault(s) in the faulty machine are detected (or Syndrome testable). To be more formal, the Syndrome is:

Definition 1: The Syndrome S of a Boolean function is defined as

S = K/2^n

where K is the number of minterms realized by the function, and n is the number of binary input lines to the Boolean function.

Not all Boolean functions are totally Syndrome testable for all the single stuck-at faults. Procedures are given in [115], with a minimal or near minimal number of extra primary inputs, to make the networks Syndrome testable. In a number of real networks (i.e., SN74181, etc.) the number of extra primary inputs needed was at most one (
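A minimal sketch of Definition 1 follows, using a hypothetical 3-input circuit and a hypothetical stuck-at-0 fault chosen only to show the counting.

# Syndrome of a Boolean function: S = K / 2^n, where K is the number of
# input patterns (minterms) for which the function evaluates to 1.
from itertools import product

def syndrome(f, n):
    k = sum(f(*bits) for bits in product((0, 1), repeat=n))
    return k / 2 ** n

def good(a, b, c):          # hypothetical 3-input circuit
    return (a & b) | c

def faulty(a, b, c):        # same circuit with input c stuck-at-0
    return (a & b) | 0

print(syndrome(good, 3))    # 5/8 = 0.625
print(syndrome(faulty, 3))  # 2/8 = 0.25  -> differing syndromes expose the fault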


Fig. 24. Function to be tested with Walsh coefficients.

TABLE I. Examples of Walsh Functions and Walsh Coefficients.

The overhead quoted is necessary to make the CUT Syndrome testable and does not include the pattern generator, counter, or compare register.

C. Testing by Verifying Walsh Coefficients

A technique which is similar to Syndrome Testing, in that it requires all possible input patterns to be applied to the combinational network, is testing by verifying Walsh coefficients [117]. This technique only checks two of the Walsh coefficients and then makes conclusions about the network with respect to stuck-at faults.

In order to calculate the Walsh coefficients, the logical value 0 (1) is associated with the arithmetic value -1 (+1). There are 2^n Walsh functions. W_0 is defined to be 1; W_i is derived from all possible (arithmetic) products of the subset of independent input variables selected for that Walsh function. Table I shows examples of Walsh functions and the products W_iF, calculated for the network in Fig. 24. If the values of W_allF are summed over all input patterns, the Walsh coefficient C_all is calculated. The Walsh coefficient C_0 is just W_0F summed; this is equivalent to the Syndrome in magnitude times 2^n. If C_all is nonzero, then all stuck-at faults on primary inputs will be detected by measuring C_all: if such a fault is present, C_all = 0. If the network has C_all = 0, it can easily be modified such that C_all becomes nonzero. If the network has reconvergent fan-out, then further checks need to be made (the number of inverters in each path must have a certain property); see [117]. If these checks are successful, then by measuring C_all and C_0 all the single stuck-at faults can be detected. Some design constraints may be needed to make sure that the network is testable by measuring C_all and C_0. Fig. 25 shows the network needed to determine C_all and C_0. The value p is the parity of the driving counter, and the response counter is an up/down counter. Note that two passes must be made of the driving counter, one for C_all and one for C_0.
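The coefficient computation can be sketched as follows. The example function is hypothetical; the inputs are encoded 0 to -1 and 1 to +1 as described above, while the output is left as 0/1 so that C_0 reduces to the minterm count (the exact normalization used in [117] may differ).

# Compute the Walsh coefficients C_0 and C_all of a Boolean function by
# exhaustive enumeration.  Inputs are encoded 0 -> -1, 1 -> +1; the output is
# kept as 0/1, so C_0 equals the number of minterms.
from itertools import product

def walsh_c0_call(f, n):
    c0 = 0
    c_all = 0
    for bits in product((0, 1), repeat=n):
        fv = f(*bits)                 # function output, left as 0/1
        w_all = 1
        for b in bits:                # W_all = product of +/-1 encoded inputs
            w_all *= 2 * b - 1
        c0 += fv
        c_all += w_all * fv
    return c0, c_all

def f(x1, x2, x3):                    # hypothetical example function
    return (x1 | x2) & x3

print(walsh_c0_call(f, 3))            # (3, -1): C_all is nonzero for this function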

D. Autonomous Testing

The fourth technique which will be discussed in the area of self-test/built-in test is Autonomous Testing [118]. Autonomous Testing, like Syndrome Testing and testing by verifying Walsh coefficients, requires that all possible patterns be applied to the network inputs. However, with Autonomous Testing, the outputs of


Fig. 25. Tester for verifying the C_0 and C_all Walsh coefficients.

Fig. 26. Reconfigurable 3-bit LFSR module (N, S mode select).
Fig. 27. Reconfigurable 3-bit LFSR module (N = 1: normal operation).
Fig. 28. Reconfigurable 3-bit LFSR module (N = 0, S = 1: signature analyzer).

the network must be checked for each pattern against the value for the good machine. The result is that, irrespective of the fault model, Autonomous Testing will detect the faults (assuming the faulty machine does not turn from a combinational machine into a sequential machine). In order to help the network apply its own patterns and accumulate the results of the tests, rather than observing every one of the 2^n input patterns, a structure similar to the BILBO register is used. This register has some unique attributes and is shown in Figs. 26-29. If a combinational network has 100 inputs, the network must be modified such that each subnetwork can be verified and, thus, the whole network will be tested.

Two approaches to partitioning are presented in the paper "Design for Autonomous Test" [118]. The first is to use


Fig. 29. Reconfigurable 3-bit LFSR module (N = 0, S = 0: input generator).
Fig. 30. Autonomous Testing-general network.
Fig. 31. Autonomous Testing-functional mode.
Fig. 32. Autonomous Testing-configuration to test network G1.

multiplexers to separate the network, and the second is to use Sensitized Partitioning to separate the network. Fig. 30 shows the general network with multiplexers, Fig. 31 shows the network in functional mode, and Fig. 32 shows the network in a mode to test subnetwork G1. This approach could involve a significant gate overhead to implement in some networks. Thus the Sensitized Partitioning approach is put forth. For example, the 74181 ALU/Function Generator is partitioned using Sensitized Partitioning. By inspecting the network, two types of subnetworks can be partitioned out: four subnetworks N1 and one subnetwork N2 (Figs. 33 and 34). By further inspection, all the Li outputs of network N1 can be tested by holding S2 = S3 = low. Further, all the Hi outputs of network N1 can be tested by holding S0 = S1 = high, since sensitized paths exist through the subnetwork N2. Thus far fewer than 2^14 input patterns need be applied to the network to test it.
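The payoff of partitioning for exhaustive testing is easy to quantify; the block sizes below are arbitrary illustrations, not the actual 74181 partition.

# Exhaustive pattern counts: testing a wide network as a whole versus testing
# independently exercisable partitions.  Block sizes are illustrative only.

def exhaustive_patterns(input_counts):
    # Each partition is tested with all 2^k of its own input patterns.
    return sum(2 ** k for k in input_counts)

whole = exhaustive_patterns([24])             # one 24-input network
partitioned = exhaustive_patterns([8, 8, 8])  # split into three 8-input blocks
print(whole, partitioned)                     # 16777216 versus 768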

Fig. 33. Autonomous Testing with sensitized partitioning.

    Fig. 34. Autonomous Testing with sensitized partitioning.

VI. CONCLUSION

The area of Design for Testability is becoming a popular topic by necessity. Those users of LSI/VLSI which do not have their own captive IC facilities are at the mercy of the vendors for information. And, until the vendor information is drastically changed, the Ad Hoc approaches to design for testability will be the only answer.

In that segment of the industry which can afford to implement the Structured Design for Testability approach, there is considerable hope of getting quality test patterns at a very modest cost. Furthermore, many innovative techniques are appearing in the Structured Approach and probably will continue to appear as we meander through VLSI and into more dense technologies.

There is a new opportunity arriving in the form of gate arrays that allow low-volume users access to VLSI technology. If they choose, structured design disciplines can be utilized. Perhaps Silicon Foundries of the future will offer a combined package of structured, testable modules and support software to automatically provide the user with finished parts AND tests.

ACKNOWLEDGMENT

The authors wish to thank D. J. Brown for his helpful comments and suggestions. The assistance of Ms. B. Fletcher, Ms. C. Mendoza, Ms. L. Clark, Ms. J. Allen, and J. Smith in preparing this manuscript for publication was invaluable.

REFERENCES

General References and Surveys

[1] M. A. Breuer, Ed., Diagnosis and Reliable Design of Digital Systems. Rockville, MD: Computer Science Press, 1976.
[2] H. Y. Chang, E. G. Manning, and G. Metze, Fault Diagnosis of Digital Systems. New York: Wiley-Interscience, 1970.


[3] A. D. Friedman and P. R. Menon, Fault Detection in Digital Circuits. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[4] F. C. Hennie, Finite State Models for Logical Machines. New York: Wiley, 1968.
[5] P. G. Kovijanic, "A new look at test generation and verification," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 58-63, June 1977.
[6] E. I. Muehldorf, "Designing LSI logic for testability," in Dig. Papers, 1976 Ann. Semiconductor Test Symp., IEEE Pub. 76CH1179-1C, pp. 45-49, Oct. 1976.
[7] E. I. Muehldorf and A. D. Savkar, "LSI logic testing-An overview," IEEE Trans. Comput., vol. C-30, no. 1, pp. 1-17, Jan. 1981.
[8] W. W. Peterson and E. J. Weldon, Error Correcting Codes. Cambridge, MA: MIT Press, 1972.
[9] A. K. Susskind, "Diagnostics for logic networks," IEEE Spectrum, vol. 10, pp. 40-47, Oct. 1973.
[10] T. W. Williams and K. P. Parker, "Testing logic networks and design for testability," Computer, pp. 9-21, Oct. 1979.
[11] IEEE, Inc., IEEE Standard Dictionary of Electrical and Electronics Terms. New York: Wiley-Interscience, 1972.

Designing for Testability

[12] "A designer's guide to signature analysis," Hewlett-Packard Application Note 222, Hewlett-Packard, 5301 Stevens Creek Blvd., Santa Clara, CA 95050.
[13] S. B. Akers, "Partitioning for testability," J. Des. Automat. Fault-Tolerant Comput., vol. 1, no. 2, Feb. 1977.
[14] H. Ando, "Testing VLSI with random access scan," in Dig. Papers, Compcon 80, IEEE Pub. 80CH1491-0C, pp. 50-52, Feb. 1980.
[15] P. Bottorff and E. I. Muehldorf, "Impact of LSI on complex digital circuit board testing," Electro 77, New York, NY, Apr. 1977.
[16] S. DasGupta, E. B. Eichelberger, and T. W. Williams, "LSI chip design for testability," in Dig. Tech. Papers, 1978 Int. Solid-State Circuits Conf. (San Francisco, CA, Feb. 1978), pp. 216-217.
[17] "Designing digital circuits for testability," Hewlett-Packard Application Note 210-4, Hewlett-Packard, Loveland, CO 80537.
[18] E. B. Eichelberger and T. W. Williams, "A logic design structure for LSI testability," J. Des. Automat. Fault-Tolerant Comput., vol. 2, no. 2, pp. 165-178, May 1978.
[19] -, "A logic design structure for LSI testing," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 462-468, June 1977.
[20] E. B. Eichelberger, E. J. Muehldorf, R. G. Walter, and T. W. Williams, "A logic design structure for testing internal arrays," in Proc. 3rd USA-Japan Computer Conf. (San Francisco, CA, Oct. 1978), pp. 266-272.
[21] S. Funatsu, N. Wakatsuki, and T. Arima, "Test generation systems in Japan," in Proc. 12th Design Automation Symp., pp. 114-122, June 1975.
[22] H. C. Godoy, G. B. Franklin, and P. S. Bottorff, "Automatic checking of logic design structure for compliance with testability ground rules," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 469-478, June 1977.
[23] J. P. Hayes, "On modifying logic networks to improve their diagnosability," IEEE Trans. Comput., vol. C-23, pp. 56-62, Jan. 1974.
[24] J. P. Hayes and A. D. Friedman, "Test point placement to simplify fault detection," in FTC-3, Dig. Papers, 1973 Symp. on Fault-Tolerant Computing, pp. 73-78, June 1973.
[25] B. Koenemann, J. Mucha, and G. Zwiehoff, "Built-in logic block observation techniques," in Dig. Papers, 1979 Test Conf., IEEE Pub. 79CH1509-9C, pp. 37-41, Oct. 1979.
[26] M. D. Lippman and E. S. Donn, "Design forethought promotes easier testing of microcomputer boards," Electronics, pp. 113-119, Jan. 18, 1979.
[27] H. J. Nadig, "Signature analysis-Concepts, examples, and guidelines," Hewlett-Packard J., pp. 15-21, May 1977.
[28] M. Neil and R. Goodner, "Designing a serviceman's needs into microprocessor-based systems," Electronics, pp. 122-128, Mar. 1, 1979.
[29] S. M. Reddy, "Easily testable realization for logic functions," IEEE Trans. Comput., vol. C-21, pp. 1183-1188, Nov. 1972.
[30] K. K. Saluja and S. M. Reddy, "On minimally testable logic networks," IEEE Trans. Comput., vol. C-23, pp. 1204-1207, Nov. 1974.
[31] J. H. Stewart, "Future testing of large LSI circuit cards," in Dig. Papers, 1977 Semiconductor Test Symp., IEEE Pub. 77CH1261-7C, pp. 6-17, Oct. 1977.
[32] A. Toth and C. Holt, "Automated data base-driven digital testing," Computer, pp. 13-19, Jan. 1974.
[33] E. White, "Signature analysis, enhancing the serviceability of microprocessor-based industrial products," in Proc. 4th IECI Annual Conf., IEEE Pub. 78CH1312-8, pp. 68-76, Mar. 1978.
[34] M. J. Y. Williams and J. B. Angell, "Enhancing testability of large-scale integrated circuits via test points and additional logic," IEEE Trans. Comput., vol. C-22, pp. 46-60, Jan. 1973.
[35] T. W. Williams, "Utilization of a structured design for reliability and serviceability," in Dig., Government Microcircuits Applications Conf. (Monterey, CA, Nov. 1978), pp. 441-444.

Faults and Fault Modeling

[36] R. Boute and E. J. McCluskey, "Fault equivalence in sequential machines," in Proc. Symp. on Computers and Automata (Polytech. Inst. of Brooklyn, Apr. 13-15, 1971), pp. 483-507.
[37] R. T. Boute, "Optimal and near-optimal checking experiments for output faults in sequential machines," IEEE Trans. Comput., vol. C-23, no. 11, pp. 1207-1213, Nov. 1974.
[38] -, "Equivalence and dominance relations between output faults in sequential machines," Tech. Rep. 38, SU-SEL-72-052, Stanford Univ., Stanford, CA, Nov. 1972.
[39] F. J. O. Dias, "Fault masking in combinational logic circuits," IEEE Trans. Comput., vol. C-24, pp. 476-482, May 1975.
[40] J. P. Hayes, "A NAND model for fault diagnosis in combinational logic networks," IEEE Trans. Comput., vol. C-20, pp. 1496-1506, Dec. 1971.
[41] E. J. McCluskey and F. W. Clegg, "Fault equivalence in combinational logic networks," IEEE Trans. Comput., vol. C-20, pp. 1286-1293, Nov. 1971.
[42] K. C. Y. Mei, "Fault dominance in combinational circuits," Tech. Note 2, Digital Systems Lab., Stanford Univ., Aug. 1970.
[43] -, "Bridging and stuck-at faults," IEEE Trans. Comput., vol. C-23, no. 7, pp. 720-727, July 1974.
[44] R. C. Ogus, "The probability of a correct output from a combinational circuit," IEEE Trans. Comput., vol. C-24, no. 5, pp. 534-544, May 1975.
[45] K. P. Parker and E. J. McCluskey, "Analysis of logic circuits with faults using input signal probabilities," IEEE Trans. Comput., vol. C-24, no. 5, pp. 573-578, May 1975.
[46] K. K. Saluja and S. M. Reddy, "Fault detecting test sets for Reed-Muller canonic networks," IEEE Trans. Comput., pp. 995-998, Oct. 1975.
[47] D. R. Schertz and G. Metze, "A new representation for faults in combinational digital circuits," IEEE Trans. Comput., vol. C-21, no. 8, pp. 858-866, Aug. 1972.
[48] J. J. Shedletsky and E. J. McCluskey, "The error latency of a fault in a sequential digital circuit," IEEE Trans. Comput., vol. C-25, no. 6, pp. 655-659, June 1976.
[49] -, "The error latency of a fault in a combinational digital circuit," in FTCS-5, Dig. Papers, 5th Int. Symp. on Fault-Tolerant Computing (Paris, France, June 1975), pp. 210-214.
[50] K. To, "Fault folding for irredundant and redundant combinational circuits," IEEE Trans. Comput., vol. C-22, no. 11, pp. 1008-1015, Nov. 1973.
[51] D. T. Wang, "Properties of faults and criticalities of values under tests for combinational networks," IEEE Trans. Comput., vol. C-24, no. 7, pp. 746-750, July 1975.

Testing and Fault Location

[52] R. P. Batni and C. R. Kime, "A module level testing approach for combinational networks," IEEE Trans. Comput., vol. C-25, no. 6, pp. 594-604, June 1976.
[53] S. Bisset, "Exhaustive testing of microprocessors and related devices: A practical solution," in Dig. Papers, 1977 Semiconductor Test Symp., pp. 38-41, Oct. 1977.
[54] R. J. Czepiel, S. H. Foreman, and R. J. Prilik, "System for logic, parametric and analog testing," in Dig. Papers, 1976 Semiconductor Test Symp., pp. 54-69, Oct. 1976.
[55] R. A. Frohwerk, "Signature analysis: A new digital field service method," Hewlett-Packard J., pp. 2-8, May 1977.
[56] B. A. Grimmer, "Test techniques for circuit boards containing large memories and microprocessors," in Dig. Papers, 1976 Semiconductor Test Symp., pp. 16-21, Oct. 1976.
[57] W. A. Groves, "Rapid digital fault isolation with FASTRACE," Hewlett-Packard J., pp. 8-13, Mar. 1979.
[58] J. P. Hayes, "Rapid count testing for combinational logic circuits," IEEE Trans. Comput., vol. C-25, no. 6, pp. 613-620, June 1976.
[59] -, "Detection of pattern sensitive faults in random access memories," IEEE Trans. Comput., vol. C-24, no. 2, pp. 150-160, Feb. 1975.
[60] -, "Testing logic circuits by transition counting," in FTC-5, Dig. Papers, 5th Int. Symp. on Fault Tolerant Computing (Paris, France, June 1975), pp. 215-219.
[61] J. T. Healy, "Economic realities of testing microprocessors," in Dig. Papers, 1977 Semiconductor Test Symp., pp. 47-52, Oct. 1977.
[62] E. C. Lee, "A simple concept in microprocessor testing," in Dig. Papers, 1976 Semiconductor Test Symp., IEEE Pub. 76CH1179-1C, pp. 13-15, Oct. 1976.


[63] J. Losq, "Referenceless random testing," in FTCS-6, Dig. Papers, 6th Int. Symp. on Fault-Tolerant Computing (Pittsburgh, PA, June 21-23, 1976), pp. 81-86.
[64] S. Palmquist and D. Chapman, "Expanding the boundaries of LSI testing with an advanced pattern controller," in Dig. Papers, 1976 Semiconductor Test Symp., pp. 70-75, Oct. 1976.
[65] K. P. Parker, "Compact testing: Testing with compressed data," in FTCS-6, Dig. Papers, 6th Int. Symp. on Fault-Tolerant Computing (Pittsburgh, PA, June 21-23, 1976).
[66] J. J. Shedletsky, "A rationale for the random testing of combinational digital circuits," in Dig. Papers, Compcon 75 Fall Meet. (Washington, DC, Sept. 9-11, 1975), pp. 5-9.
[67] V. P. Strini, "Fault location in a semiconductor random access memory unit," IEEE Trans. Comput., vol. C-27, no. 4, pp. 379-385, Apr. 1978.
[68] C. W. Weller, "An engineering approach to IC test system maintenance," in Dig. Papers, 1977 Semiconductor Test Symp., pp. 144-145, Oct. 1977.

Testability Measures

[69] W. J. Dejka, "Measure of testability in device and system design," in Proc. 20th Midwest Symp. Circuits Syst., pp. 39-52, Aug. 1977.
[70] L. H. Goldstein, "Controllability/observability analysis of digital circuits," IEEE Trans. Circuits Syst., vol. CAS-26, no. 9, pp. 685-693, Sept. 1979.
[71] W. L. Keiner and R. P. West, "Testability measures," presented at AUTOTESTCON '77, Nov. 1977.
[72] P. G. Kovijanic, "Testability analysis," in Dig. Papers, 1979 Test Conf., IEEE Pub. 79CH1509-9C, pp. 310-316, Oct. 1979.
[73] J. E. Stephenson and J. Grason, "A testability measure for register transfer level digital circuits," in Proc. 6th Fault Tolerant Computing Symp., pp. 101-107, June 1976.

Test Generation

[74] V. Agrawal and P. Agrawal, "An automatic test generation system for ILLIAC IV logic boards," IEEE Trans. Comput., vol. C-21, no. 9, pp. 1015-1017, Sept. 1972.
[75] D. B. Armstrong, "On finding a nearly minimal set of fault detection tests for combinational logic nets," IEEE Trans. Electron. Comput., vol. EC-15, no. 1, pp. 66-73, Feb. 1966.
[76] R. Betancourt, "Derivation of minimum test sets for unate logical circuits," IEEE Trans. Comput., vol. C-20, no. 11, pp. 1264-1269, Nov. 1971.
[77] D. C. Bossen and S. J. Hong, "Cause and effect analysis for multiple fault detection in combinational networks," IEEE Trans. Comput., vol. C-20, no. 11, pp. 1252-1257, Nov. 1971.
[78] P. S. Bottorff et al., "Test generation for large networks," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 479-485, June 1977.
[79] R. D. Eldred, "Test routines based on symbolic logic statements," J. Assoc. Comput. Mach., vol. 6, no. 1, pp. 33-36, 1959.
[80] P. Goel, "Test generation costs analysis and projections," presented at the 17th Design Automation Conf., Minneapolis, MN, 1980.
[81] E. P. Hsieh et al., "Delay test generation," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 486-491, June 1977.
[82] C. T. Ku and G. M. Masson, "The Boolean difference and multiple fault analysis," IEEE Trans. Comput., vol. C-24, no. 7, pp. 691-695, July 1975.
[83] E. I. Muehldorf, "Test pattern generation as a part of the total design process," in LSI and Boards: Dig. Papers, 1978 Ann. Semiconductor Test Symp., pp. 4-7, Oct. 1978.
[84] E. I. Muehldorf and T. W. Williams, "Optimized stuck fault test pattern for PLA macros," in Dig. Papers, 1977 Semiconductor Test Symp., IEEE Pub. 77CH1216-7C, pp. 89-101, Oct. 1977.
[85] M. R. Page, "Generation of diagnostic tests using prime implicants," Coordinated Science Lab. Rep. R-414, University of Illinois, Urbana, May 1969.
[86] S. G. Papaioannou, "Optimal test generation in combinational networks by pseudo Boolean programming," IEEE Trans. Comput., vol. C-26, no. 6, pp. 553-560, June 1977.
[87] K. P. Parker, "Adaptive random test generation," J. Des. Automat. Fault-Tolerant Comput., vol. 1, no. 1, pp. 62-83, Oct. 1976.
[88] -, "Probabilistic test generation," Tech. Note 18, Digital Systems Laboratory, Stanford University, Stanford, CA, Jan. 1973.
[89] J. F. Poage and E. J. McCluskey, "Derivation of optimum tests for sequential machines," in Proc. 5th Ann. Symp. on Switching Circuit Theory and Logic Design, pp. 95-110, 1964.
[90] -, "Derivation of optimum tests to detect faults in combinational circuits," New York: Polytechnic Press, 1963.
[91] G. R. Putzolu and J. P. Roth, "A heuristic algorithm for testing of asynchronous circuits," IEEE Trans. Comput., vol. C-20, no. 6, pp. 639-647, June 1971.
[92] J. P. Roth, W. G. Bouricius, and P. R. Schneider, "Programmed algorithms to compute tests to detect and distinguish between failures in logic circuits," IEEE Trans. Electron. Comput., vol. EC-16, pp. 567-580, Oct. 1967.
[93] J. P. Roth, "Diagnosis of automata failures: A calculus and a method," IBM J. Res. Develop., no. 10, pp. 278-281, Oct. 1966.
[94] P. R. Schneider, "On the necessity to examine D-chains in diagnostic test generation-An example," IBM J. Res. Develop., no. 11, p. 114, Nov. 1967.
[95] H. D. Schnurmann, E. Lindbloom, and R. G. Carpenter, "The weighted random test pattern generator," IEEE Trans. Comput., vol. C-24, no. 7, pp. 695-700, July 1975.
[96] E. F. Sellers, M. Y. Hsiao, and L. W. Bearnson, "Analyzing errors with the Boolean difference," IEEE Trans. Comput., vol. C-17, no. 7, pp. 676-683, July 1968.
[97] D. T. Wang, "An algorithm for the detection of test sets for combinational logic networks," IEEE Trans. Comput., vol. C-25, no. 7, pp. 742-746, July 1975.
[98] T. W. Williams and E. B. Eichelberger, "Random patterns within a structured sequential logic design," in Dig. Papers, 1977 Semiconductor Test Symp., IEEE Pub. 77CH1261-7C, pp. 19-27, Oct. 1977.
[99] S. S. Yau and S. C. Yang, "Multiple fault detection for combinational logic circuits," IEEE Trans. Comput., vol. C-24, no. 5, pp. 233-242, May 1975.

Simulation

[100] D. B. Armstrong, "A deductive method for simulating faults in logic circuits," IEEE Trans. Comput., vol. C-22, no. 5, pp. 464-471, May 1972.
[101] M. A. Breuer, "Functional partitioning and simulation of digital circuits," IEEE Trans. Comput., vol. C-19, no. 11, pp. 1038-1046, Nov. 1970.
[102] H. Y. P. Chiang et al., "Comparison of parallel and deductive fault simulation," IEEE Trans. Comput., vol. C-23, no. 11, pp. 1132-1138, Nov. 1974.
[103] E. B. Eichelberger, "Hazard detection in combinational and sequential switching circuits," IBM J. Res. Develop., Mar. 1965.
[104] E. Manning and H. Y. Chang, "Functional technique for efficient digital fault simulation," in IEEE Int. Conv. Dig., p. 194, 1968.
[105] K. P. Parker, "Software simulator speeds digital board test generation," Hewlett-Packard J., pp. 13-19, Mar. 1979.
[106] S. Seshu, "On an improved diagnosis program," IEEE Trans. Electron. Comput., vol. EC-12, no. 1, pp. 76-79, Feb. 1965.
[107] S. Seshu and D. N. Freeman, "The diagnosis of asynchronous sequential switching systems," IRE Trans. Electron. Comput., vol. EC-11, no. 8, pp. 459-465, Aug. 1962.
[108] T. M. Storey and J. W. Barry, "Delay test simulation," in Proc. 14th Design Automation Conf., IEEE Pub. 77CH1216-1C, pp. 491-494, June 1977.
[109] S. A. Szygenda and E. W. Thompson, "Modeling and digital simulation for design verification diagnosis," IEEE Trans. Comput., vol. C-25, no. 12, pp. 1242-1253, Dec. 1976.
[110] S. A. Szygenda, "TEGAS2-Anatomy of a general purpose test generation and simulation system for digital logic," in Proc. 9th Design Automation Workshop, pp. 116-127, 1972.
[111] S. A. Szygenda, D. M. Rouse, and E. W. Thompson, "A model for implementation of a universal time delay simulation for large digital networks," in AFIPS Conf. Proc., vol. 36, pp. 207-216, 1970.
[112] E. G. Ulrich and T. Baker, "Concurrent simulation of nearly identical digital networks," Computer, vol. 7, no. 4, pp. 39-44, Apr. 1974.
[113] -, "The concurrent simulation of nearly identical digital networks," in Proc. 10th Design Automation Workshop, pp. 145-150, June 1973.
[114] E. G. Ulrich, T. Baker, and L. R. Williams, "Fault test analysis techniques based on simulation," in Proc. 9th Design Automation Workshop, pp. 111-115, 1972.
[115] J. Savir, "Syndrome-testable design of combinational circuits," IEEE Trans. Comput., vol. C-29, pp. 442-451, June 1980 (corrections: Nov. 1980).
[116] -, "Syndrome-testing of syndrome-untestable combinational circuits," IEEE Trans. Comput., vol. C-30, pp. 606-608, Aug. 1981.
[117] A. K. Susskind, "Testing by verifying Walsh coefficients," in Proc. 11th Ann. Symp. on Fault-Tolerant Computing (Portland, ME), pp. 206-208, June 1981.
[118] E. J. McCluskey and S. Bozorgui-Nesbat, "Design for autonomous test," IEEE Trans. Comput., vol. C-30, pp. 866-875, Nov. 1981.