Reliability, Availability and Serviceability


Reliability, Availability and Serviceability of Networks-on-Chip


Érika Cota
Alexandre de Morais Amory
Marcelo Soares Lubaszewski


Érika Cota
Instituto de Informática
Porto Alegre, RS, Brazil
[email protected]

Marcelo Soares Lubaszewski
CEITEC SA
Estrada João de Oliveira Remião
Porto Alegre, RS, Brazil
[email protected]

Alexandre de Morais Amory
Hardware Design Support Group (GAPH)
PUCRS – Faculdade de Informática
Porto Alegre, RS, Brazil
[email protected]

ISBN 978-1-4614-0790-4
e-ISBN 978-1-4614-0791-1
DOI 10.1007/978-1-4614-0791-1
Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2011935738

© Springer Science+Business Media, LLC 2012

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To Ulisses and Luigi, may I embrace your enthusiasm.
To José Cota and Maristella, may I learn from your perseverance.

Érika

To Leticia and Sonia.
Alexandre de Morais Amory

To Zíngara, Natasha, Andressa and Bruno.
Marcelo


Preface

State-of-the-art electronic systems are based on hundreds of functional blocks (called IP, intellectual property cores) such as processors, memories, analog blocks, etc., which are integrated and manufactured together in a single silicon die. Those blocks need to communicate with each other, exchanging many thousands of bits, in order to operate as a cell phone, an MP3 player, an HDTV decoder, etc. The design of a communication infrastructure within such complex systems (called SoCs – systems-on-chip) is a problem per se, because it requires high performance and high quality levels while connecting an ever increasing number of cores. Such requirements can typically be met by a dedicated communication channel between two functional blocks, but this approach is not feasible when hundreds of blocks are involved. Thus, Networks-on-chip (NoCs) have been proposed as a solution to face the communication challenge within complex core-based electronic systems. However, to become an industrial reality, this new design paradigm still depends on the definition of feasible, efficient, and plug-and-play test mechanisms that can be used not only during manufacturing, to ensure a fault-free system, but also during operation, to ensure the correct behavior of the entire system. Such test mechanisms must apply to both the network itself and the IPs connected through the NoC, since the whole system is integrated in a single die. Assuming the NoC is used to test the embedded IP cores, the test of the NoC itself becomes an even more important issue to ensure the system test quality. On the other hand, the huge number of interconnects, combined with the shrinking of chip dimensions, makes the NoC prone to a growing number of permanent and transient faults. The capability of detecting and, if possible, tolerating such faults in NoC-based systems-on-chip is mandatory to increase the number of dies that can be delivered and to ensure the correct operation of the system afterwards.

It is within the context reported above that this book presents an overview of the issues related to the test, diagnosis, and fault tolerance of NoC-based systems. First, the characteristics of the NoC design (topologies, structures, routers, wrappers, and protocols) are presented, as well as a summary of the terms used in the field and an overview of the existing industrial and academic NoCs. Secondly, the main aspects of the test of a NoC-based system are discussed, starting with the test of the embedded cores, where the NoC plays an important role. Current test strategies are presented, such as the reuse of the network for core testing, test scheduling for the NoC reuse, test access methods and interfaces, efficient reuse of the network, and power-aware and thermal-aware NoC-based SoC testing. Then, the challenges and solutions for the test and diagnosis of the NoC infrastructure (interconnects, routers, and network interface) are presented. Finally, fault tolerance techniques for the NoC are discussed, including techniques based on error control coding, retransmission, fault location, and system reconfiguration.

The main motivation to publish this book now is the increasing interest in NoC-based designs in academia and industry. This new design paradigm is becoming an important trend because of its advantages in tackling the challenges of complex SoC design. However, for it to become a real standard and an industrial reality, it is important that the issues related to its testing, and to the testing of the systems built upon it, are also well understood and mastered. Furthermore, as yield figures become an important issue for current technologies and electronic systems support an increasing number of safety-critical applications, fault tolerance is mandatory for a large number of NoC-based devices. Over the last 5 years or so, a number of testing and fault tolerance approaches have been proposed for NoCs and NoC-based systems. Although this is a relatively new topic, current research already covers a considerable spectrum, and its analysis at this point can help summarize the scientific and technological advances made so far and identify the open issues that still need to be addressed. Moreover, a book that organizes such a large body of material can be of great help to those willing to start looking at reliability, availability and serviceability aspects of NoCs.

The authors of this book have been working on these topics for many years, have published important results in major test conferences (ITC, VTS, ETS, DATE, etc.) and journals (IEEE TCOMP, ACM TODAES, IEEE TCAD, IEEE D&T of Computers, IET Computers and Digital Techniques), and have presented tutorials in different meetings (ETS 2008, SBCCI 2008, DATE 2009, ISCAS 2009, NOCS 2009, VTS 2010). The feedback received on all occasions was so positive that we felt encouraged to summarize all these past experiences and produce this book.

Testing and fault tolerance architectures for NoC-based SoCs are rather recent research topics and, to the best of our knowledge, no other published book is exclusively devoted to investigating both of them simultaneously. Professionals, graduate students, design and test engineers, and researchers interested in acquiring introductory and intermediate knowledge of recent advances in test, diagnosis, and fault tolerance of integrated systems based on Networks-on-Chip will find in this book a reference guide. We want the reader to understand the problems, challenges, most important solutions, and trade-offs related to the quality of NoC-based systems. We also want this to be a didactic book, in the sense that the reader can reproduce the presented techniques, either as a solution for a possible real problem or as the means to draw his or her own observations and conclusions and advance the state of the art in this topic.

Porto Alegre, RS, Brazil
Érika Cota
Alexandre de Morais Amory
Marcelo Soares Lubaszewski


Acknowledgments

Although we have assumed the task of organizing and putting this material together, this book is actually the result of many years of collaborative work with several researchers to whom we are indebted. We would like to acknowledge the work of the many students, collaborators, and co-authors of our articles, who pointed out different views of and solutions to the technical challenges, and who struggled with simulations and deadlines while remaining enthusiastic about the topic. We thank the reviewers and attendees of the tutorial that originated this book, whose feedback allowed us to refine and improve this work. We also acknowledge the support of our universities, UFRGS and PUCRS, for creating the environment for this research. We would like to thank Springer, in the person of its editor Charles Glaser, for the constant support of this project. Finally, on a personal note, we owe our families our gratitude for their unrestricted support during this period.

Porto Alegre, RS, Brazil
Érika Cota
Alexandre de Morais Amory
Marcelo Soares Lubaszewski


Contents

1 Introduction
   1.1 Why Networks-on-Chip?
   1.2 Reliability, Availability and Serviceability in NoC-Based SoCs
   References

2 NoC Basics
   2.1 NoC Structure and Design Space
      2.1.1 Links
      2.1.2 Routers
      2.1.3 Network Interface
   2.2 NoC Performance Parameters
   2.3 NoC Topologies
   2.4 Quality of Service
   2.5 Industrial and Academic NoCs
   References

3 Systems-on-Chip Testing
   3.1 Test Basics
      3.1.1 Fault Modeling
      3.1.2 Fault Simulation
      3.1.3 Automatic Test Pattern Generation
      3.1.4 Design for Test
   3.2 SoC Testing Requirements
      3.2.1 Core Test Requirements
      3.2.2 Interconnection Requirements
      3.2.3 System Requirements
   3.3 SoC Testing Approaches
      3.3.1 Conceptual Test Architecture
      3.3.2 Test Access Mechanism Definition
      3.3.3 Test Scheduling Definition
      3.3.4 Test Planning
   3.4 Test Standard Initiatives
      3.4.1 IEEE Standards 1500 and 1450.6
   3.5 SoC Test Benchmarks
   3.6 On the Applicability of Standard SoC Test Strategies in NoC-Based Systems
   References

4 NoC Reuse for SoC Modular Testing
   4.1 Basic NoC Reuse Model
      4.1.1 Test Packets
      4.1.2 Network Interface and Test Wrapper
      4.1.3 Interface with External Tester
   4.2 Preemptive Test Scheduling
      4.2.1 Power-Aware Test Scheduling
   4.3 Non-preemptive Test Scheduling
   4.4 Multi-constrained Test Scheduling
   References

5 Advanced Approaches for NoC Reuse
   5.1 Efficient Channel Utilization
   5.2 Wrapper Design for NoC Reuse
   5.3 ATE Wrapper for NoC Reuse
   5.4 Test Scheduling for BE NoCs
      5.4.1 Creating the Initial Solution
      5.4.2 BottomUp Optimization
      5.4.3 TopDown Optimization
      5.4.4 Reshuffle Optimization
      5.4.5 Implementation of the Defined Test Architecture
   5.5 Discussion
   References

6 Test and Diagnosis of Routers
   6.1 Introduction
   6.2 Testing the Network Interfaces
   6.3 Testing the Routers
      6.3.1 Structural-Based Strategies
      6.3.2 Functional-Based Strategies
   6.4 Comparing the Approaches
   6.5 Concluding Remarks
   References

7 Test and Diagnosis of Communication Channels
   7.1 Introduction
   7.2 Testing the Communication Channels
      7.2.1 Structural-Based Strategies
      7.2.2 Functional-Based Strategies
   7.3 Comparing the Approaches
   7.4 Concluding Remarks
   References

8 Error Control Coding and Retransmission
   8.1 Introduction
   8.2 Joint Information and Time Redundancy
   8.3 Joint Information, Time, and Space Redundancy
   8.4 Joint Error Control Coding and Crosstalk Avoidance Codes
   8.5 Comparing the Approaches
   8.6 Discussion
   References

9 Error Location and Reconfiguration
   9.1 Introduction
   9.2 Fault Location
   9.3 Reconfiguration
   9.4 Comparing the Approaches
      9.4.1 Fault Detection and Location Methods
      9.4.2 Fault Reconfiguration Methods
   9.5 Discussion and Directions for Future Work
   References

10 Concluding Remarks
   10.1 Networks-on-Chip, Testing, and Reliability as Key Challenges Towards Many-Core Systems
   10.2 Network-on-Chip Testing
   10.3 Testing Network-on-Chip Based Systems
   10.4 Fault Tolerance for Network-on-Chip Based Systems
   10.5 Network-on-Chip RAS in Emerging Technologies
   10.6 Final Remarks
   References

Index

Chapter 1
Introduction

The design and manufacturing of integrated circuits is currently based on the integration of a number of pre-designed intellectual property (IP) blocks, or cores, in a single chip. Although reuse has always been present in the design of electronic circuits, this practice has been extended and formalized over the last two decades or so, becoming the new design paradigm of the electronic industry. The reuse of previously designed functional blocks is now the key to the design of high-performance circuits with large gate counts in a short time. Such a design practice is known as core-based or IP-based design, or simply as System-on-Chip (SoC) design. The main difference between a SoC and a traditional System-on-Board (SoB), which is also based on previously designed parts, is that in the former all cores are synthesized together in a single chip, whereas in the latter each functional block is synthesized and manufactured separately, and then mounted on a discrete board. Furthermore, the reusable blocks of the SoC are also known as virtual components, since they are delivered as a description of logic rather than a manufactured IC, and this constitutes another important difference between traditional design methods and core-based systems.

In the early days of SoCs, components were not really designed for reuse. Gradually, however, component design evolved to include more parameterization and standard interfaces (Bergamaschi and Cohn 2002). Currently available cores include microprocessors, memories, network interfaces, cryptography circuits, and analog interfaces, among others (Design and Reuse 2011). The more IP providers are present in the market, the more functionalities become available, and the greater the advantages of core reuse. As a consequence, new technologies are incorporated into the products while the design time is reduced.

Embedded systems are a typical application domain where core-based design is extensively applied. Cell phones, portable medical equipment, robots, and automotive controllers are some examples of such systems. The successful design of such complex single-chip applications requires expertise in a number of technology areas, such as signal processing, encryption, and analog and RF design. These technologies are increasingly hard to find in a single design house. Moreover, high performance, reduced power consumption and short time-to-market are common requirements for those applications. Therefore, it is interesting to have all (or most) functional blocks (A/D converters, microprocessors, memories, mixed-signal blocks, and so on) already available. In this business model, the specialists in a specific design domain (analog or RF, for example) are the core providers, and the application designer can focus on the system aspects only.

1.1 Why Networks-on-Chip?

The design of a system-on-chip is guided by several, usually antagonistic, forces. First, there is the market, which wants it all: from dozens of functionalities to power efficiency and dependable products. Designers have to deal with a number of issues, such as system performance, chip area, power dissipation and energy consumption, and real-time requirements. Because of the inherent characteristics and sensitivity of current manufacturing technologies, reliability and dependability issues are becoming important not only in critical applications. Indeed, new applications based on electronic systems are very cost-sensitive, and quality is a market feature as important as functionality. Dependability is also crucial in a society that is now used to being connected everywhere and at all times. Of course, the vendors want to please the market. For this, designers are constantly struggling to improve design and manufacturing technologies. Thus, distinct technologies must now be integrated in a single chip, and new architectures and design paradigms are developed to balance performance and power. Design time is reduced for every new product while time-in-market is also reduced, and profit must be made in a matter of months. For high-volume applications such as mobile and automotive systems, hardware/software platforms are used to accelerate design time and reduce design costs. These platforms are composed of a basic set of IP cores (processor, co-processors and accelerators, some reconfigurable logic, etc.) that can be configured through software for a family of products. Still, in many cases, the developed hardware is no longer used for its original application after some time. Therefore, reconfiguration at many levels is important to increase the lifetime of the execution platform (Hutchings and Wirthlin 1995). The third conflicting force in the design of an electronic system is the manufacturer. On one hand, new manufacturing technologies make it all real and support the stringent requirements in terms of integration. On the other hand, the foundry has bad news when it comes to working at the edge of the technology. Crosstalk effects, electromigration, interconnect delay and timing closure are some examples of manufacturing problems that affect product yield and are typical of current nanometer technologies.

In the middle of the whole design process lies the communication infrastructure of the chip. As the core-based design paradigm grew strong, the implementation of a cost-effective communication mechanism for the several cores in the system became the main bottleneck of SoC design. Wiring became the most costly resource and now needs to be carefully considered: it has to deal with several pre-designed cores that use distinct communication protocols and design technologies. One of the most important challenges for the core-based SoC designer is to devise a cost-effective communication infrastructure that meets different system requirements and constraints. While pre-designed cores have specific requirements and constraints, the communication mechanism is, in the end, the single component the system designer can play with in order to achieve the required system budgets in terms of performance and costs. Interconnections play a central role in the system operation and are especially subject to the adverse effects of technology shrinking, because of the density and length of the wires.

Future systems will probably require communication templates with several dozens of Gbits/s of bandwidth (cell phones, network applications, etc.) (ITRS 2010). One can summarize the communication issues of a complex SoC in terms of six major requirements for the communication infrastructure:

1. Performance: It has to meet different performance levels of throughput, latency, wire delay, and synchronization.

2. Scalability: It has to make the inclusion of additional functional units easier.

3. Parallelism: It has to provide parallel communication between sets of cores, and the subset of cores communicating in parallel may change over the system lifetime.

4. Reusability: The defined infrastructure should be easily reusable in new designs to reduce design time.

5. Quality of Service (QoS): It should provide, when necessary, guarantees for (at least) some of the services provided, not only in terms of performance but also in terms of reliability.

6. Reliability and Fault Tolerance: It should provide detection and recovery schemes for manufacturing and operational faults, to increase not only system reliability and dependability, but also yield and system lifetime.

Until the 1990s there were three communication models one could consider for a system: point-to-point, bus-based, and bus matrix (or crossbar) models. Considering the communication requirements just mentioned, those models started to present serious limitations for a growing number of applications. Point-to-point connections can ensure performance (latency, parallelism) but are not scalable or reusable. QoS may come for free in this model, but it is always an ad-hoc solution, as are reliability and fault tolerance. Design time is probably the most important drawback of this model, as it has to be devised for each and every new system configuration.

Bus-based connections were the current practice until the 1990s and are still the standard communication model for many systems. In fact, a bus-based interconnection model represents a simple and well-known concept, with many alternatives in the market (IBM 2011; ARM 2011) that use basically the same topology depicted in Fig. 1.1, with different protocols (Lu and Koh 2003). Moreover, this model presents small latency once the communication is established, and it is compatible with most IP cores available nowadays. In terms of the SoC communication requirements, bus-based approaches are highly reusable because libraries of compliant cores have been built over the years. In addition, some approaches for fault-tolerant busses have also been developed (Metra et al. 2000; Rossi et al. 2008). However, as systems get bigger and the number of embedded cores increases, it is harder to implement a bus-based communication architecture capable of meeting the required performance. The more cores are attached to the bus, the harder it is to accomplish timing closure and quality-of-service. In addition, larger systems require longer wires, and long wires require buffers and additional control (arbiters) to maintain signal integrity and improve parallelism.

Fig. 1.1 Bus-based SoC communication model (IP cores attached to a high-speed bus, a low-speed bus and an I/O bus, connected through bus bridges)

An intermediate solution is the use of crossbar switches together with "local" busses (Pasricha et al. 2006). The crossbar switch, as shown in Fig. 1.2, allows the communication between any two local busses, which can reduce latency and give better quality-of-service for systems where performance is a major issue. However, although efficient, such a solution has a quite narrow range of application, and does not scale when the number of cores and, consequently, of local busses, increases.

Fig. 1.2 Optimized bus matrix communication architecture (master processors with input stages and decoders, a switch matrix with arbiters, and slave IP cores)

Traditional communication models cannot always fulfill the performance requirements of current and future SoCs without posing new problems of power consumption and design reuse. Thus, in the early 2000s some authors proposed the use of a pre-defined platform to implement the communication among the several cores in a chip (Guerrier and Greiner 2000; Benini and De Micheli 2002; Dally and Towles 2001). Such a platform is implemented as an integrated switching network, called Network-on-Chip (NoC), and meets some of the key requirements of future systems: reusability, scalable bandwidth, and low power consumption. A study presented by Zeferino et al. (2002) shows that NoCs have better communication performance than busses for as few as eight cores, if intensive communication (e.g., each core exchanging messages with another one) is required. For lighter workloads (fewer messages with reduced size), the performance of a central bus will be better than that of the NoC in systems with up to 16 cores.

Networks-on-Chip are based on the interconnection networks largely used in parallel computers. NoCs can be defined as a structured set of routers and point-to-point channels interconnecting the processing cores of a SoC in order to support communication among them. Such a structure can be described as a graph with routers on the nodes and channels on the arcs. The NoC interconnect model can be viewed as an evolution of the segmented bus structure, where wires are connected through a control logic (the router) that implements the communication control in a distributed fashion, as opposed to the centralized control of the bus-based solution. In this model, the segmented wires are "public" and shared by all embedded cores.

NoCs typically use the message-passing communication model, and the processing cores attached to the network communicate by sending and receiving request and response messages. A message is forwarded from a sender to a receiver by requesting and reserving resources of the network in order to establish a route between the two. Depending on the network implementation, messages can be split into smaller structures named packets, which have the same format as a message and are individually routed. Packet-based networks present better resource utilization, because packets are shorter and reserve a smaller number of channels during their transfer. Notice that the message may have different formats depending on the protocol implemented by the network, but three blocks can be identified in the message (or packet) independently of the implemented protocol. The first information of the message is called the header, which contains the data about the target node for the message. The header establishes the path between the source and the target node, according to the network routing algorithm. The second part of the message is called the payload and is composed of the actual data that needs to be sent to the target node. Finally, the end of the packet is indicated in the last word of the packet, which is called the tail. Figure 1.3 depicts the basic message format and transmission along the network.
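To make this format concrete, the short Python sketch below (our illustration; the word encoding is hypothetical and does not follow any particular NoC protocol) assembles a packet as a header word carrying the target-node address, followed by payload words, with the last word marked as the tail:

```python
from typing import List, Tuple

# Each packet word is a (kind, data) pair; kind is 'header', 'payload' or 'tail'.
PacketWord = Tuple[str, int]

def build_packet(target: int, payload: List[int]) -> List[PacketWord]:
    """Assemble one packet: the header carries the target-node address used
    by the routing algorithm; the last data word is marked as the tail."""
    assert payload, "a packet carries at least one data word"
    return ([('header', target)]
            + [('payload', w) for w in payload[:-1]]
            + [('tail', payload[-1])])
```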

A NoC can be described by its topology (the organization of the cores and routers) and by the approaches used to implement the mechanisms for flow control, routing, arbitration, switching and buffering, as follows. The flow control deals with data traffic on the channels and inside the routers. Routing is the mechanism that defines the path a message takes from the sender to the receiver. The arbitration establishes priority rules when two or more messages request the same resource. Switching is the mechanism that takes an incoming message of a router and puts it in an output port of the router. Finally, buffering is the strategy used to store messages when a requested output channel is busy. Current cores usually need wrappers to adapt their interfaces and protocols to those of the target NoC. Such wrappers pack and unpack the data exchanged by the processing cores with the network.

The NoC communication platform has been shown to comply with the most important SoC communication requirements and has been described as the solution for the communication bottleneck during system design. To understand why and how the NoC can meet such tight requirements, we review the basic ideas behind the NoC design and implementation in Chap. 2.

Fig. 1.3 Communication mechanism in a NoC (the header, payload and tail words of a packet traveling from the source node through a sequence of routers to the target node)


1.2 Reliability, Availability and Serviceability in NoC-Based SoCs

Reliability and fault tolerance are very important requirements when sub-micron technologies come into play, which is the case for current and future SoCs. One cannot talk about a NoC without talking about the SoC or the system around the NoC. The same applies to the reliability and fault tolerance issues.

Reliability can be measured and ensured through testing and fault tolerance. Testing defines the reliability of the circuit with respect to manufacturing defects. Fault tolerance ensures the reliability with respect to faults that appear during normal system operation. Both aspects need to be considered in the NoC and in the NoC-based SoC.

First, one must consider the reliability and quality of the NoC itself. On one hand, communication channels make up most of the NoC, and the test of interconnections has been studied for quite some time now. So, one can try to use this knowledge to verify the correct implementation, manufacturing, and operation of the NoC. On the other hand, there are three characteristics of the NoC that preclude the straightforward reuse of known techniques for interconnection test and fault tolerance. The first characteristic is the density of the wires in the NoC, which is much higher than in a bus-based structure. The second characteristic is the technology with which the NoC is manufactured. The combination of a nanometer process and a high-density structure makes the wires especially sensitive to some defects, such as crosstalk and wider-range shorts, that are not common in other communication architectures. Finally, the third characteristic is that the NoC is not only a communication architecture, i.e., it is not composed only of wires, but has several logic blocks among the wires. In addition, the wires are not directly accessible, as they are in a bus: they can only be accessed through the routers. Vice versa, the routers are not easily accessible either, as one needs to go through the core, network interface (NI) and/or channels to get to them. So, the test of each of these parts usually needs to rely on the others, which makes the process more complex.

With respect to system-level reliability, the NoC also plays a central role. As we detail in Chap. 3, NoC-based systems usually cannot afford any solution other than using the NoC to connect the embedded cores to a test equipment during the system manufacturing test. Thus, the NoC must operate correctly not only in normal mode, but also in the system test mode, so that one can rely on the system test results.

In the remainder of this book we want to discuss these three aspects of the reliability and fault tolerance of NoC-based SoCs:

– Reliability: test and design-for-test (DfT) structures for NoC-based SoCs that help avoiding and detecting system faults;
– Availability: fault tolerance in NoCs to increase the amount of time the system is actually operating;
– Serviceability: test, DfT, and fault-tolerance structures in NoCs that help to easily diagnose the system in case a fault occurs.


To address these aspects, we have structured the next chapters in four major parts.

In Part 1, the fundamental concepts of Networks-on-Chip design, addressed in Chap. 2, and the basics of System-on-Chip testing, covered by Chap. 3, are revisited. For this part of the book, we strongly encourage readers who are unfamiliar with these topics to follow the references and get further knowledge on these two research areas before proceeding to the next chapters. Once the reader has become more familiar with these fundamentals, he or she can go through the following chapters much more easily.

Part 2, composed of Chaps. 4 and 5, covers the reuse of the network-on-chip infrastructure as the mechanism to access and test the several cores that compose a System-on-Chip. This reuse model assumes that the NoC infrastructure is fault-free. Chapter 4 focuses on the basic reuse strategy, assuming that a stream-like communication can be established, through the NoC, between the cores under test and the external test sources and sinks. That implies a NoC with guaranteed fixed bandwidth and latency. Chapter 5 discusses more advanced reuse models that make use of different test packet models and of best-effort networks-on-chip.

Chapters 6 and 7 make up Part 3 of this book. These chapters focus on the test and diagnosis of the network-on-chip infrastructure used to provide access for testing the SoC cores in Chaps. 4 and 5. Chapter 6 presents the major techniques we can rely on nowadays to detect and locate faults in NoC network interfaces and routers. These techniques are supposed to cover the faults that may affect the flow control, routing, arbitration, switching and buffering of routers. Chapter 7 looks into the test and diagnosis of the wires in the NoC communication channels. The capability of detecting interconnect faults is mandatory for yield improvement. Moreover, fault diagnosis of NoC wires can help fault tolerance approaches to mitigate the faults and maintain the network service.

Parts 2 and 3 of the book are devoted to the detection and diagnosis of manufacturing defects affecting cores, routers and communication channels of NoC-based SoCs while the system is in off-line test mode. Part 4, made up of Chaps. 8 and 9, deals with on-line testing strategies that are capable of detecting run-time faults during the system's mission mode. Chapter 8 discusses the on-line detection of faults in the data transmitted over the NoC, using error control coding and/or retransmission. Chapter 9 presents system-level fault tolerance techniques, where either an alternative path is found avoiding the defective part of the NoC, or the hardware or the software is reconfigured to mask and isolate the defective block. These techniques are based on fault location, re-routing and/or router and NoC reconfiguration.

Finally, in Chap. 10 we summarize the main advances made so far in terms of reliability, availability and serviceability of NoC-based SoCs and conclude the book by discussing what, in our view, are currently the major open problems in the field. In the references we point to valuable material that can be used to face the challenge of providing a solution for these open problems.


References

ARM (2011) AMBA specification. http://www.arm.com/products/system-ip/amba/index.php. Accessed 25 May 2011

Benini L, De Micheli G (2002) Networks on chips: a new SoC paradigm. IEEE Comput 35(1):70–78

Bergamaschi RA, Cohn J (2002) The A to Z of SoCs. In: Proceedings of the IEEE/ACM international conference on computer aided design (ICCAD), Yorktown Heights, pp 791–798

Dally WJ, Towles B (2001) Route packets, not wires: on-chip interconnection networks. In: Proceedings of the design automation conference (DAC), Las Vegas, pp 684–689

Guerrier P, Greiner A (2000) A generic architecture for on-chip packet-switched interconnections. In: Proceedings of the design automation and test in Europe conference (DATE), Paris, pp 250–256

Hutchings BL, Wirthlin MJ (1995) Implementation approaches for reconfigurable logic applications. In: Proceedings of the international workshop field-programmable logic and applications (FPL), Oxford, pp 419–428

IBM (2011) The CoreConnect TM bus architecture. http://www.chips.ibm.com/products/coreconnect/docs/crcon_wp.pdf. Accessed 25 May 2011

ITRS (2010) International technology roadmap for semiconductors – 2010 update. Accessed 25 May 2011

Lu R, Koh C-K (2003) Samba-BUS: a high performance BUS architecture for system-on-chips. In: Proceedings of the IEEE/ACM international conference on computer aided design (ICCAD), San Jose, pp 8–12

Metra C, Favalli M, Riccò B (2000) Self-checking detection and diagnosis scheme for transient, delay and crosstalk faults affecting bus lines. IEEE Trans Comput 49(6):560–574

Pasricha S, Dutt N, Ben-Romdhane M (2006) Constraint-driven bus matrix synthesis for MPSoC. In: Proceedings of the Asia and South Pacific conference on design automation (ASPDAC), Yokohama, pp 30–35

Design & Reuse (2011) http://www.design-reuse.com/sip/. Accessed 25 May 2011

Rossi D, Nieuwland AK, van Dijk SVES, Kleihorst RP, Metra C (2008) Power consumption of fault tolerant busses. IEEE Trans Very Large Scale Integr Syst 16(5):542–553

Zeferino CA, Kreutz ME, Carro L, Susin AA (2002) A study on communication issues for systems-on-chip. In: Proceedings of the 15th symposium on integrated circuits and systems design (SBCCI), Porto Alegre, pp 121–126

Chapter 2
NoC Basics

As the number of IP modules in Systems-on-Chip (SoCs) increases, bus-based interconnection architectures may prevent these systems from meeting the performance required by many applications. For systems with intensive parallel communication requirements, busses may not provide the required bandwidth, latency, and power consumption. A solution to such a communication bottleneck is the use of an embedded switching network, called Network-on-Chip (NoC), to interconnect the IP modules in SoCs. The NoC design space is considerably larger than that of a bus-based solution, as different routing and arbitration strategies can be implemented, as well as different organizations of the communication infrastructure. In addition, NoCs have an inherent redundancy that helps tolerate faults and deal with communication bottlenecks. This enables the SoC designer to find suitable solutions for different system characteristics and constraints.

This chapter first presents the main concepts involved in the design and use of networks-on-chip, such as the basic building blocks, examples of structural and logic organizations of those blocks, performance parameters, and the definition of quality-of-service in the NoC environment. Then, a few examples of NoC implementations are listed, showing how different communication solutions can be defined over the same basic concepts. The reader can find a number of books that explore in detail the design and implementation issues of a NoC; for instance, one can mention Bertozzi et al. (2007), Dally and Towles (2004), De Micheli and Benini (2006), Duato et al. (2003), Flich and Bertozzi (2010), Jantsch and Tenhunen (2010), and Peh and Jerger (2009). We refer to this material as the prime reference on the concepts summarized in this chapter.

2.1 NoC Structure and Design Space

A network-on-chip is composed of three main building blocks. The first and most important one is the links that physically connect the nodes and actually implement the communication. The second block is the router, which implements the communication protocol (the decentralized logic behind the communication protocol). One can see the NoC as an evolution of the segmented busses, where the router plays the role of a "much smarter buffer" (Bjerregaard and Mahadevan 2006). The router basically receives packets from the shared links and, according to the address informed in each packet, forwards the packet to the core attached to it or to another shared link. The protocol itself consists of a set of policies defined during the design (and implemented within the router) to handle common situations during the transmission of a packet, such as two or more packets arriving at the same time or disputing the same channel, avoiding deadlock and livelock situations, reducing the communication latency, increasing the throughput, etc. The last building block is the network adapter (NA) or network interface (NI). This block makes the logic connection between the IP cores and the network, since each IP may have a distinct interface protocol with respect to the network.

2.1.1 Links

A communication link is composed of a set of wires and connects two routers in the network. Links may consist of one or more logical or physical channels, and each channel is composed of a set of wires. In the remaining chapters, unless stated otherwise, the words net, wire, and line mean a single wire interconnecting two entities (routers and/or IP cores), while the words channel and link mean a group of wires connecting two entities. Typically, a NoC link has two physical channels making a full-duplex connection between the routers (two unidirectional channels in opposite directions). The number of wires per channel is uniform throughout the network and is known as the channel bitwidth.

The implementation of a link includes the definition of the synchronization protocol between source and target nodes. This protocol can be implemented by dedicated wires set during the communication or through other approaches such as FIFOs (Chelcea and Nowick 2001). Asynchronous links are also an interesting option to implement globally asynchronous locally synchronous (GALS) systems, where local handshake protocols are assumed (Bjerregaard and Mahadevan 2006). The links ultimately define the raw performance (due to link delays) and power consumption of a NoC, and designers are expected to provide fast, reliable, and low-power interconnects between the nodes in the network.

The concept of flits is defined at the link level. Flits (flow control units) are the atomic units that form packets and streams. In most cases, a flit corresponds to a phit (physical unit), which is the minimum amount of data that is transmitted in one link transaction. In this case, the flit width matches the width of the channel. However, when highly serialized links are used, a flit may be composed of several phits.
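As a worked example of the relation between flits and phits, the sketch below (Python, our illustration) assumes a 32-bit flit sent over a highly serialized link with 8-bit phits, so each flit takes four link transactions; both widths are arbitrary choices, not values from a particular NoC.

```python
def flit_to_phits(flit: int, flit_width: int = 32, phit_width: int = 8) -> list:
    """Split one flit into the sequence of phits sent over a serialized
    link, most-significant phit first."""
    assert flit_width % phit_width == 0
    n = flit_width // phit_width        # phits (link transactions) per flit
    mask = (1 << phit_width) - 1
    return [(flit >> (phit_width * (n - 1 - i))) & mask for i in range(n)]

# Example: the 32-bit flit 0xDEADBEEF becomes the phits [0xDE, 0xAD, 0xBE, 0xEF].
```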

2.1.2 Routers

The design and implementation of a router requires the definition of a set of policies to deal with packet collision, the routing itself, and so on. A NoC router is composed of a number of input ports (connected to shared NoC channels), a number of output ports (connected to possibly other shared channels), a switching matrix connecting the input ports to the output ports, and a local port to access the IP core connected to this router. As an example, the interface of the RaSoC router (Zeferino and Susin 2003) is presented in Fig. 2.1. Herein, we use the terms router and switch as synonyms, but the term switch can also mean the internal switch matrix that actually connects the router inputs to its outputs.

Fig. 2.1 Interface views of a typical router: (a) functional view; (b) architectural view

In addition to this physical connection infrastructure, the router also contains a logic block that implements the flow control policies (routing, arbitration, etc.) and defines the overall strategy for moving data through the NoC.

A – flow control policy characterizes the packet movement along the NoC and as such it involves both global (NoC-level) and local (router-level) issues. One can ensure a deadlock-free routing, for instance, by taking specific measures in the flow control policy (by avoiding certain paths within the NoC for example). Also, the optimization of the NoC resources usage (channels, bandwidth, etc.) and the guarantees on the communication can be ensured as part of the flow control policy (for instance, by choosing a routing algorithm that minimizes the path or by implementing virtual channels to reduce congestion, etc.). Guarantees on the communication performance and quality are kown as “quality-of-service” and will be detailed later on. Control can be centralized or distributed. In centralized control, routing decisions are made globally and applied to all nodes, with an strategy that garantees no traffic contention. This approach avoids the need for and arbitration unit but requires that all nodes share a common sense of time. A possible implementation of this approach is the use of Time Division Multiplexing (TDM) mechanisms where each packet is associated to a time frame (Millberg et al. 2004; Goossens et al. 2005). However, NoCs typically use a distributed control, where each router makes decisions locally. Virtual channels (VCs) are an important concept related to the flow control. VCs implement the concept of multiplexing a single physical channel over several logically separate channels with individual and independent buffer queues. The main goal of a VC implementation is to improve performance by avoiding deadlocks, optimizing

Fig. 2.1 Interface views of a typical router (a) functional view, (b) architectural view

Page 28: Reliability, Availability and Serviceability

14 2 NoC Basics

wire usage and providing some traffic guarantees (Bjerregaard and Mahadevan 2006). Deadlock occurs when network resources are fully occupied and waiting for each other to be released to proceed with the communication, that is, when two paths are blocked in a cyclic fashion (Dally and Towles 2004). Livelock occurs when the status of the resources keep changing (there is no deadlock) but the communication is not completed.The – routing algorithm is the logic that selects one output port to forward a packet that arrives at the router input. This port is selected according to the routing infor-mation available in the packet header. There are several possible routing algo-rithms that can be used in a NoC, each one leading to different trade-offs between performance and cost. For instance, in a deterministic routing a packet always uses the same path between two specific nodes. Common deterministic routing schemes are source routing and XY routing. In source routing, the source core specifies the route to the destination. In XY routing, the packet follows the rows first, then moves along the columns toward the destination or vice versa (Bjerregaard and Mahadevan 2006). In the adaptive routing, alternative paths between two nodes may be used if the original path or a local link is congested. This involves a dynamic evaluation of the link load and implies a dynamic load balancing strategy. Negative First (NF) and West First (WF) algorithms proposed by Glass and Ni (1994) are examples of adaptive routing algorithms. In the static routing, paths between cores are defined at compilation time (of the application), while in the dynamic routing the path is defined at run-time. An unicast routing indicates that a packet has a single target whereas in the multicast routing a packet can be sent to several nodes in the NoC simultaneously (similar to a bus) or sev-eral slaves of a master node. Similarly, a broadcast communication targets all nodes whereas a narrowcast communication initiated by a master is related to a single slave associated to it. A routing algorithm can also be classified as minimal or non-minimal. A minimal routing guarantees that the shortest path to destination is always chosen. Minimal routing algorithms are those where a bounding box is virtually present and implies that only decreasing distances from source to desti-nation are valid. On the other hand, non minimal routing algorithms allow increas-ing the distance from source to destination. Routing algorithms can lead to or avoid the occurrence of deadlocks and livelocks. For instance, the turn model (Glass and Ni 1994) is a routing algorithm that prohibits certain turns that could lead to a cycle in the network and to a risk of deadlock. The odd-even turn model (Chiu 2000) restricts the locations in the network where some types of turns can be taken. Another routing algorithm worth mentioning is the hot potato routing. In this algorithm, the packet is immediately forwarded towards the path with the lowest delay (instead, for example, of the shortest or minimal path). This routing scheme is also called deflective routing because if a packet cannot be accepted by the target node, it is deflected into the network, to return at a later time. 
The packet is not stored in a buffer (buffer-less approach) and each packet has a set of preferred outputs that will be used whenever possible in the forwarding operation.

– The arbitration logic: while the routing algorithm selects an output port for a packet, the arbitration logic implemented in the router selects one input port when multiple packets arrive at the router simultaneously requesting the same output port. Again, one has several options to implement the arbiter: it can be distributed (one per port) or centralized (one per router), and it can be based on static (fixed) or dynamic (variable) priorities among ports. A centralized arbiter optimizes the use of the router switching matrix but may lead to higher latency, whereas the distributed approach optimizes the latency. The arbitration logic also defines whether the network assumes a delay or a loss communication model. In the delay model, packets can be delayed, but never dropped. In the loss model, a packet can be dropped as a solution, for instance, to a congestion situation. In this case, retransmission logic must be implemented as well (Bjerregaard and Mahadevan 2006).

– The switching defines how the data is transmitted from the source node to the target one. In the circuit switching approach, the whole path (including routers and channels) from source to target is established in advance (by the header) and reserved for the transmission of the whole packet. The payload is not sent until the whole path has been reserved. This can increase latency but, once the path is defined, this approach can provide some guarantees, such as guaranteed throughput. In the packet-based switching approach, on the other hand, the flits of the packet are sent as the header establishes the connection between routers. Within this model, the designer can still choose between different buffering and forwarding strategies that impact the overall NoC traffic (storing the whole packet in each router before establishing the connection to the next router, or sending the flits in a pipelined mode, for instance). In the store-and-forward strategy, the node stores the complete packet before forwarding it to the next node in the path. In this strategy, one must ensure that the buffer size at each node is sufficient to store the whole packet, or the packet can be stalled. In the wormhole strategy, on the other hand, the node makes the routing decision and forwards the packet as soon as the header arrives. The subsequent flits follow the header as they arrive. This reduces the latency within the router but, in case of packet stalling, many links risk being locked at once. The virtual-cut-through mechanism is similar to the wormhole approach but, before forwarding the first data flit to the next node in the path, the node waits for a confirmation that the whole packet can be accepted by the next node. Thus, in case of stalling, no other links are affected, only the current node.

– The buffering policy is the strategy used to store information in the router when there is congestion in the network and a packet cannot be forwarded right away. The buffering strategy (number, location and size of the buffers) has an important impact on the network traffic and, therefore, on the NoC performance. In addition, the buffers are responsible for a large portion of the router area. One can have a single buffer in the router, shared by all input ports, or one buffer per port (input or output). The main advantage of the first approach is the area optimization, but the control can be more complex and additional care must be taken to deal with buffer overflow. In the distributed approach, each input port has its own buffer, and the most common implementation is in the form of a FIFO, although other implementations are also possible. Distributed output buffers are also possible, but they tend to be less efficient because several input ports may need to store data in a single structure.
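As a rough illustration of how the switching strategy affects latency, the following first-order comparison of store-and-forward and wormhole delivery times is a minimal sketch under common textbook assumptions (one flit forwarded per hop per cycle, no contention); it is not drawn from any specific NoC discussed in this chapter.

```python
def store_and_forward_latency(hops: int, packet_flits: int) -> int:
    # Each router receives the complete packet before forwarding it,
    # so the packet pays its full serialization delay at every hop.
    return hops * packet_flits

def wormhole_latency(hops: int, packet_flits: int) -> int:
    # The header pays one cycle per hop; the remaining flits stream
    # behind it in pipeline fashion.
    return hops + packet_flits - 1

# Example: 5-hop path, 16-flit packet (contention-free estimates).
print(store_and_forward_latency(5, 16))  # 80 cycles
print(wormhole_latency(5, 16))           # 20 cycles
```

The gap between the two grows with the number of hops, which helps explain why wormhole switching dominates NoC implementations despite its link-blocking risk.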


Figure 2.2 depicts a typical router architecture (used in 2D NoCs) with the above-mentioned elements identified. One can observe that the NoC designer has several possible strategies to implement a router (and, therefore, the network communication protocol), leading to a really large design space. This is the main advantage of the NoC approach and explains why this platform has more chances to meet the system communication requirements. Differently from a bus structure, one can tailor the NoC according to the specific requirements of the application, and issues such as guaranteed performance can be naturally implemented as part of the network flow control.

Fig. 2.2 Architecture of a typical router

2.1.3 Network Interface

The third NoC building block is the network adapter (NA) or network interface (NI). This block makes the logic connection between the IP cores and the network, since each IP may have a distinct interface protocol with respect to the network. This block is important because it allows the separation between computation and communication, which in turn allows the reuse of both the core and the communication infrastructure independently of each other (Bjerregaard and Mahadevan 2006).

The adapter can be divided into two parts: a front end and a back end. The front end handles the core requests and is ideally unaware of the NoC. This part is usually implemented as a socket – OCP (OCPIP 2011), VCI (VSI Alliance 2011), AXI (ARM 2011), DTL (Philips Semiconductors 2002), etc. The back end handles the network protocol (it assembles and disassembles packets, reorders buffers, implements synchronization protocols, helps the router in terms of storage, etc.).
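To make the back end's packetization role concrete, here is a minimal sketch of how a network interface might split a core transaction into header, payload, and tail flits. The flit layout and field contents are illustrative assumptions, not the format of any particular NoC mentioned in this book.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Flit:
    kind: str   # "header", "payload" or "tail"
    data: int   # flit contents (payload word or routing info)

def packetize(dest: int, payload: List[int]) -> List[Flit]:
    # The header flit carries the routing information (here, just the
    # destination node id); the tail flit closes the connection so the
    # routers can release any reserved resources.
    flits = [Flit("header", dest)]
    flits += [Flit("payload", word) for word in payload]
    flits.append(Flit("tail", 0))
    return flits

# A 4-word write addressed to node 6 becomes a 6-flit packet.
packet = packetize(dest=6, payload=[0xA, 0xB, 0xC, 0xD])
```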

2.2 NoC Performance Parameters

The performance of a network-on-chip can be evaluated by three parameters: band-width, throughput, and latency.

The bandwidth refers to the maximum rate of data propagation once a message is in the network. The unit of measure for bandwidth is bits per second (bps) and it usually considers the whole packet, including the bits of the header, payload and tail.

Throughput is defined by Duato et al. (2003) as the maximum traffic accepted by the network, that is, the maximum amount of information delivered per time unit. Throughput is measured in messages per second or messages per clock cycle. One can obtain a normalized throughput (independent of the size of the messages and of the network) by normalizing it with respect to the size of the messages and the size of the network. As a result, the unit of the normalized throughput is bits per node per clock cycle (or per second).
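Written out explicitly, one reading of this normalization that is consistent with the stated unit (bits per node per clock cycle) is the following; this is only a restatement of the definition above for clarity, not an additional result:

$$TP_{norm} = \frac{TP\,[\text{messages/cycle}] \times \text{message size}\,[\text{bits}]}{\text{number of nodes}}$$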

Latency is the time elapsed between the beginning of the transmission of a message (or packet) and its complete reception at the target node. Latency is measured in time units and is mostly used as a comparison basis among different design choices. In this case, latency can also be expressed in terms of simulator clock cycles. Normally, the latency of a single packet is not meaningful (Duato et al. 2003) and one uses the average latency to evaluate the network performance. On the other hand, when some messages present a much higher latency than the average, this may be important. Therefore, the standard deviation may be an interesting measure as well.

2.3 NoC Topologies

A NoC can be characterized by the structure of the router connections. This structure or organization is called topology and is represented by a graph G(N,C), where N is the set of routers and C is the set of communication channels. The routers can be connected in direct or indirect topologies.

In the direct topologies, each router is associated to a processor and this pair can be seen as a single element in the system (a so-called node of the network). In this topology, each node is directly connected to a fixed number of neighbor nodes, and a message between two nodes goes through one or more intermediate nodes. Only the routers are involved in the communication in a direct topology, and the communication is based on the routing algorithm implemented by the routers. Most NoC implementations are based on orthogonal arrangements of the routers in a direct topology. In this arrangement, nodes are distributed in an n-dimensional space and the packet moves in one dimension at a time. These arrangements are the ones that present the best tradeoff between cost and performance, and they also present good scalability. The most common direct topologies are the n-dimensional grid or mesh, the torus (or k-ary n-cube) and the hypercube, as shown in Fig. 2.3.
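To make the G(N,C) view of a topology concrete, the sketch below builds the node and channel sets of a 2D mesh. This is a minimal illustrative example; the row-major node numbering is just a convention chosen here.

```python
def mesh_2d(rows, cols):
    # Nodes are numbered row-major; channels connect each node to its
    # orthogonal neighbors, so a packet moves one dimension at a time.
    nodes = list(range(rows * cols))
    channels = []
    for n in nodes:
        r, c = divmod(n, cols)
        if c + 1 < cols:
            channels.append((n, n + 1))     # east neighbor
        if r + 1 < rows:
            channels.append((n, n + cols))  # south neighbor
    return nodes, channels

# A 3x3 mesh: 9 routers and 12 bidirectional channels.
N, C = mesh_2d(3, 3)
print(len(N), len(C))  # 9 12
```

A torus would simply add wrap-around channels between the first and last node of each row and column.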

In an indirect topology, not all routers are connected to processing units as in the direct model. Instead, some routers are used only to propagate the messages through the network, while other routers are connected to the logic, and only those can be the source and/or target of a message. Some topologies of indirect networks stand out: the crossbar and the multi-stage. The multi-stage topology is a regular NoC, where routers are identical and organized in stages. Input and output stages are connected to the functional units on one side and to the internal nodes on the other side. Figure 2.4 shows two examples of indirect networks.


Fig. 2.3 NoCs with direct regular topologies (a) 2-D grid, (b) 2-D torus, (c) 3-D hypercube, (d) octagon


Fig. 2.4 NoCs with indirect topologies (a) fat-tree, (b) three-stage butterfly

Another possible classification for network topologies is related to the regularity of the connections between routers. In regular networks, all routers are identical in terms of the number of ports connecting to other routers or elements in the network. For instance, in the regular grid topology presented in Fig. 2.3a all routers have five ports: one local port connecting to the functional unit and another four ports connecting to neighbor routers. In irregular topologies, the routers may present different connection patterns, usually defined according to the application (Pasricha and Dutt 2008), as depicted in Fig. 2.5.


Fig. 2.5 NoCs with irregular topologies (a) reduced mesh, (b) cluster-based hybrid topology


2.4 Quality of Service

According to Bjerregaard and Mahadevan (2006), quality of service (QoS) is defined "as service quantification that is provided by the network to the demanding core". QoS is also about the predictability of the communication behavior (Goossens et al. 2003). Following from the previous definition, one must first define the required services and a quantification measure. Typically, services are defined based on the performance metrics of the network (low latency, high throughput, low power, etc.). Then, one must define implementation mechanisms to meet the cores' demands using the network services.

Goossens et al. (2003) define two types of NoCs with respect to QoS:

– Best-effort (BE) NoCs – These NoCs offer no commitment. For most NoCs this means that only completion of the communication is ensured.
– Guaranteed Services (GS) NoCs – These NoCs ensure that some service requirements can be accomplished. Commitment can be defined at several levels, such as correctness of the result, completion of the transaction, bounds on performance, and so on.

Due to the characteristics of on-chip networks and of the systems based on those structures, service guarantees and predictability are usually given through hardware implementations, as opposed to the statistical guarantees of macro-networks. For instance, in real-time or other critical applications, one really needs guarantees rather than mere predictability. In order to give hard guarantees, one must use specific routing schemes (virtual channels, adaptive routing, fault-tolerant router and link structures, etc.) to ensure the required levels of performance and/or reliability. Naturally, for GS NoCs, implementation costs and complexity grow with the complexity of the system and the QoS requirements. BE NoCs, on the other hand, tend to present a better utilization of the available resources.

2.5 Industrial and Academic NoCs

In order to demonstrate how different communication solutions can be more naturally implemented using a NoC, we mention a few industrial and academic NoC implementations with their basic characteristics. We will not detail these NoCs; we will only mention their most important characteristics to show how even a small difference in the design can create a different communication structure that better fits one application. The variety of NoCs available shows the flexibility of this infrastructure and explains why it is being pointed out as a good solution for future complex chips.

– ÆTHEREAL – This NoC was proposed by Philips (Goossens et al. 2005) and it is both a BE and a GT (guaranteed throughput) NoC implemented in a synchronous indirect topology with wormhole switching and a contention-free source routing algorithm based on TDM. It implements a number of connection types, including narrowcast, multicast, and simple connection, and the network adapter can be synthesized for four standard socket interfaces (master or slave; OCP, DTL, or AXI based) (Dielissen et al. 2003).

– STNoC – Proposed by STMicroelectronics (Karim et al. 2002), this GS NoC presents a spidergon/ring topology with minimal path routing, 32-bit links, and input buffering. The main difference in this NoC is the topology, which is quite efficient from the performance point of view.

– Nostrum – In this guaranteed bandwidth NoC proposed by KTH (Kumar et al. 2002), the protocol includes data encoding to reduce power consumption. It is implemented in a 2D mesh topology with hot potato routing. Links are composed of 128 bits of data plus 10 bits of control. Virtual channels and a TDM mechanism are used to ensure bandwidth.

– SPIN – One of the first proposed NoC architectures, the SPIN network was developed at LIP6 (Guerrier and Greiner 2000). It presents a fat-tree topology with wormhole switching, deterministic and adaptive (deflective) routing, input buffering, and two shared output buffers. It uses 36-bit links (32 data bits plus 4 control bits). It also implements the VCI socket in the network interface.

– XPIPES – This NoC presents an arbitrary topology, tailored to the application to improve performance and reduce costs (Osso et al. 2003). It requires better support from CAD tools to be synthesized (Jalabert et al. 2004; Murali and De Micheli 2004) and was proposed in a cooperation between the University of Bologna and Stanford University. Xpipes consists of soft macros of switches and links that can be instantiated during synthesis. The standard OCP protocol is used in the network interface.

– SoCIN – The SoCIN NoC has been proposed at Universidade Federal do Rio Grande do Sul (UFRGS) (Zeferino and Susin 2003). SoCIN is a simple and parameterizable BE NoC that can be implemented in 2D mesh or torus topologies, with narrowcasting routing, input buffering, and parameterizable channel width. Due to its simplicity and availability, it has been used as a case study for the first papers on NoC-based testing and other test strategies that are discussed later in this book.

– QNoC – Developed at the Technion in Israel (Bolotin et al. 2004), this direct NoC is implemented in an irregular mesh topology with wormhole switching and a minimal XY routing scheme. Four different classes of traffic are defined to improve QoS, although hard guarantees are not given.

– HERMES – This NoC was proposed by Pontifícia Universidade Católica do Rio Grande do Sul (Moraes et al. 2004) and implements a direct 2D mesh topology with wormhole switching and minimal XY routing. Hermes is a best-effort NoC with parameterizable input queuing and presents a whole set of supporting development tools.

– MANGO – Developed at the Technical University of Denmark, this NoC implements a message-passing asynchronous (clockless) protocol with GS services over OCP interfaces. MANGO also provides BE services using credit-based flow control and source routing (Bjerregaard et al. 2005). BE connections are source routed, and virtual channels are used in GS communications.


The list above is by no means exhaustive and several other implementations could be listed: CHAIN (Bainbridge and Furber 2002), Proteo (Siguenza-Tortosa et al. 2004), SoCBus (Sathe et al. 2003), ANoC (Beigne et al. 2005), etc. Each NoC has a distinct aspect that leads to a distinct tradeoff among performance, QoS, cost, fault tolerance, etc. This only shows that the design space for this communication infrastructure is indeed much larger when compared to a bus-based solution.

Despite the variability in the design decisions, one can identify some common design trends among the available NoCs: most implementations use packet switching for its efficiency; most NoCs use 2D mesh topologies because of the good tradeoff between cost and performance; XY routing is very common for mesh topologies, although not standard, due to its property of being deadlock-free; and most NoCs use input buffering only, again because of the tradeoff between cost and performance gain.

References

VSI Alliance (2011) Virtual component interface standard version 2. VSI Alliance. www.vsi.org. Accessed 26 May 2011

ARM (2011) AMBA advanced extensible interface (AXI) protocol specification, version 2.0. http://www.arm.com. Accessed 26 May 2011

Bainbridge J, Furber S (2002) CHAIN: a delay-insensitive chip area interconnect. IEEE Micro 22(5):16–23

Beigne E, Clermidy F, Vivet P, Clouard A, Renaudin M (2005) An asynchronous NoC architecture providing low latency service and its multi-level design framework. In: Proceedings of the 11th international symposium on asynchronous circuits and systems (ASYNC), Pasadena, pp 54–63

Bertozzi D, Kumar S, Palesi M (2007) Networks-on-chip. Hindawi Publishing Corporation, New York, NY

Bjerregaard T, Mahadevan S (2006) A survey of research and practices of network-on-chip. ACM Comput Surv 38:1–51

Bjerregaard T, Mahadevan S, Olsen RG, Sparsø J (2005) An OCP compliant network adapter for GALS-based SoC design using the MANGO network-on-chip. In: Proceedings of the international symposium on system-on-chip (ISSoC), Tampere, Finland, pp 171–174

Bolotin E, Cidon I, Ginosar R, Kolodny A (2004) QNoC: QoS architecture and design process for network on chip. J Syst Architect 50(2–3):105–128

Chelcea T, Nowick SM (2001) Robust interfaces for mixed-timing systems with application to latency-insensitive protocols. In: Proceedings of the 38th design automation conference (DAC), Las Vegas, pp 21–26

Chiu G-M (2000) The odd-even turn model for adaptive routing. IEEE Trans Parallel Distrib Syst 11(7):729–738

Dally WJ, Towles BP (2004) Principles and practices of interconnection networks. Morgan Kaufmann, Burlington

De Micheli G, Benini L (2006) Networks on chips: technology and tools (systems on silicon). Morgan Kaufmann, Burlington

Dielissen J, Radulescu A, Goossens K, Rijpkema E (2003) Concepts and implementation of the Philips network-on-chip. In: Proceedings of the IP based SoC design workshop (IPSOC), Grenoble, France

Duato J, Yalamanchili S, Ni LM (2003) Interconnection networks: an engineering approach. Morgan Kaufmann, Burlington

Flich J, Bertozzi D (2010) Designing network on-chip architectures in the nanoscale era. Chapman & Hall/CRC Computational Science, Boca Raton

Glass C, Ni L (1994) The turn model for adaptive routing. J Assoc Comput Mach 41(5):874–902

Goossens K, Dielissen J, van Meerbergen J, Poplavko P, Radulescu A, Rijpkema E, Waterlander E, Wielage P (2003) Guaranteeing the quality of services in networks on chip. In: Jantsch A, Tenhunen H (eds) Networks-on-chip. Kluwer, Boston

Goossens K, Dielissen J, Radulescu A (2005) Æthereal network on chip: concepts, architectures and implementations. IEEE Design Test Comput 22(5):414–421

Guerrier P, Greiner A (2000) A generic architecture for on-chip packet-switched interconnections. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 250–256

Jalabert A, Murali S, Benini L, De Micheli G (2004) XpipesCompiler: a tool for instantiating application specific networks-on-chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Dresden, pp 884–889

Jantsch A, Tenhunen H (2010) Networks on chip. Kluwer, Boston

Karim F, Nguyen A, Dey S (2002) An interconnect architecture for networking systems on chips. IEEE Micro 22(5):36–45

Kumar S, Jantsch A, Soininen J-P, Forsell M, Millberg M, Öberg J, Tiensyrjä K, Hemani A (2002) A network-on-chip architecture and design methodology. In: Proceedings of the computer society annual symposium on VLSI (ISVLSI), Pittsburgh, pp 117–124

Millberg M, Nilsson E, Thid R, Jantsch A (2004) Guaranteed bandwidth using looped containers in temporally disjoint networks within the Nostrum network-on-chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 890–895

Moraes F, Calazans N, Mello A, Möller L, Ost L (2004) HERMES: an infrastructure for low area overhead packet-switching networks on chip. VLSI Integr 38:69–93

Murali S, De Micheli G (2004) SUNMAP: a tool for automatic topology selection and generation for NoCs. In: Proceedings of the 41st design automation conference (DAC), San Diego, pp 914–919

OCPIP (2011) http://www.ocpip.org/. Accessed 25 May 2011

Osso MD, Biccari G, Giovannini L, Bertozzi D, Benini L (2003) Xpipes: a latency insensitive parameterized network-on-chip architecture for multi-processor SoCs. In: Proceedings of the 21st international conference on computer design (ICCD), San Jose, pp 536–539

Pasricha S, Dutt N (2008) On-chip communication architectures: system on chip interconnect (systems on silicon). Morgan Kaufmann, Burlington

Peh L-S, Jerger NE (2009) On-chip networks (synthesis lectures on computer architecture). Morgan & Claypool, San Rafael

Philips Semiconductors (2002) Device Transaction Level (DTL) protocol specification, version 2.2

Sathe S, Wiklund D, Liu D (2003) Design of a switching node (router) for on-chip networks. In: Proceedings of the 5th international conference on ASIC, Beijing, pp 75–78

Siguenza-Tortosa D, Ahonen T, Nurmi J (2004) Issues in the development of a practical NoC: the Proteo concept. Integr VLSI J 38(1):95–105

Zeferino CA, Susin AA (2003) SoCIN: a parametric and scalable network-on-chip. In: Proceedings of the 16th symposium on integrated circuits and systems design (SBCCI), Sao Paulo, pp 169–174


Chapter 3
Systems-on-Chip Testing

The design cycle of a complex system has greatly improved since the advent of the core-based design paradigm. Nevertheless, as technology evolves, new problems become the focus of attention. Currently, industry seems to be on pace in terms of design productivity and time-to-market, but yield, power dissipation, and reliability issues are still a challenge for complex core-based systems-on-chip (SoCs) (Venkatraman et al. 2009).

In the early days of SoCs, the capital costs for testing were about 50% of the overall IC cost (Zorian et al. 2000). Since then, efficient SoC test architectures have been devised and one can find today mature techniques for modular testing, which is the basic step to start planning the test of a complex system. In this chapter we first review the basics of testing and SoC testing and briefly discuss the most prominent test solutions proposed for complex SoCs. Then, we analyze the adequacy of those solutions to a complex system that uses a NoC as its interconnect platform, so that we can focus on the testing of NoC-based SoCs in the subsequent chapters. It is important to mention that we have probably left out several works that have proposed and improved SoC testing approaches in many ways. This is of course not intentional, but only a result of the large number of techniques that have been published all over the world on this topic. We encourage those who are willing to get deeper knowledge on this subject to follow the references and the most important publication vehicles in the area.

3.1 Test Basics

Electronic systems are subjected to a series of verification and validation tasks, from their basic conception until their manufacturing and also during their lifetime. Such verification activities include formal design verification (automated or not), functional simulation tests, manufacturing tests, or even design modifications to make the manufacturing test possible. The ultimate goal of the verification task depends on when the test is applied. During the design phase, tests aim at catching specification problems, programming errors or performance deviations. Manufacturing testing, on the other hand, assumes a correct system design and focuses on the manufacturing errors caused by problems in the physical implementation of the circuit. The verification activities covered in this book are all related to manufacturing and maintenance tests. This means we assume a verified and fault-free design of the system and focus on the faults introduced during the physical production of the system or during its normal usage in the field.

Manufacturing testing can be performed in many ways. One can, for instance, observe the chip's functional behavior by providing a set of inputs from the expected input domain. This is called functional testing and can be executed at the nominal speed of the system. Functional testing has many advantages, such as the low generation and application cost and the capability of detecting performance problems. However, considering the amount of possible problems that can be introduced during manufacturing, it is known that functional testing is usually not sufficient to give guarantees about the quality of the manufactured circuit. For this, structural testing is normally used. In the structural testing approach, specific inputs are carefully selected to exercise the circuit. Such inputs are called test vectors or test patterns and may not be expected inputs from the functional point of view. However, those inputs are capable of stressing the circuit logic in such a way that possible faults are exposed with a certain level of certainty. Test methods for digital systems have reached a considerable level of maturity and many structural techniques and automated processes are available. Herein we review the most basic testing concepts, considering the reader who is unfamiliar with the hardware testing field. Again, we refer to traditional authors in this topic, such as Abramovici et al. (1994), Bushnell and Agrawal (2000), Jha and Gupta (2003), and Wang et al. (2006), among others, for further reference.

3.1.1 Fault Modeling

Circuit tests can only be effective when looking for faults that realistically represent actual physical mechanisms and layouts. Many defects are inherent to the silicon substrate; others arise due to impurities in the fabrication process and material. Others yet can appear during the several manufacturing stages (mask alignment, doping level in the contact layer, etc.). These types of defects usually affect several devices and cause multiple faults (manifestations of the defect in the logic) in the die or even in the chip. Defects can also appear during the system operation, normally producing single faults, due to aging, thermal or electro-mechanical phenomena.

Faults can be classified as permanent, transient or intermittent, depending on their manifestation pattern. Permanent faults, if observable, can always be observed when the same input pattern (or test vector) is applied to the circuit, i.e., once installed, the fault is permanently present in the circuit. Short circuits or open connections are typical examples of defects that cause permanent faults. Transient and intermittent faults have a random manifestation pattern. Transient faults are the result of some electromagnetic or radiation-induced interference in the circuit. For instance, the incidence of an energized particle in the circuit can affect the transistor threshold levels and cause a flip-flop to invert its stored value. This phenomenon is called a bit-flip and is observable only when the particle hits the circuit. After some time, the effect of the hit cannot be observed anymore. Intermittent faults manifest as gate and path delays, timing discrepancies or even changes in the state of a memory cell, but are caused by non-environmental conditions such as aging, variance in component manufacturing, and hazards and races in critical timing paths, among others (Bushnell and Agrawal 2000).

Many fault models have been proposed in the literature and have been shown to cover most actual defects. The most used fault model is the stuck-at one, which considers a signal permanently stuck at a logic value (0 or 1) (Abramovici et al. 1994). Transistors that are always on (stuck-on) or off (stuck-open) are also a common model for permanent faults, whereas delayed gate transition times (slow-to-rise and slow-to-fall models) can model some intermittent faults such as crosstalk. For interconnects, bridging faults are a common model for short circuits between wires, and one can distinguish between AND-bridging (when the resulting value for both wires involved in the short is the AND of their original values) and OR-bridging (when the resulting value for both wires involved in the short is the OR of their original values). Unconnected wires and cumulative delays (path delays) are also examples of interconnect fault models.
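To make the stuck-at model concrete, the sketch below injects stuck-at faults on an internal net of a tiny invented circuit (y = (a AND b) OR c, with internal net n1) and shows that a fault changes the output only for some input vectors, which is why test vectors must be chosen carefully.

```python
def circuit(a, b, c, fault=None):
    # Tiny netlist: y = (a AND b) OR c, with internal net "n1".
    n1 = a & b
    if fault == "n1/0":    # n1 stuck-at-0
        n1 = 0
    elif fault == "n1/1":  # n1 stuck-at-1
        n1 = 1
    return n1 | c

# (a,b,c) = (1,1,0) detects n1 stuck-at-0: the good circuit outputs 1,
# the faulty one outputs 0. (0,0,0) detects n1 stuck-at-1 likewise.
assert circuit(1, 1, 0) != circuit(1, 1, 0, fault="n1/0")
assert circuit(0, 0, 0) != circuit(0, 0, 0, fault="n1/1")
# (1,1,1) detects neither fault: c = 1 masks n1 at the OR gate.
assert circuit(1, 1, 1) == circuit(1, 1, 1, fault="n1/0")
```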

3.1.2 Fault Simulation

Fault simulation consists in simulating the circuit in the presence of faults that are intentionally injected, with the main goal of evaluating the quality of a set of test vectors in terms of the number, type and location of the faults the set detects. The following steps are involved in the process of fault simulation:

– Simulation of the circuit model with no faults injected;
– Reduction of the list of faults (fault collapsing) by removing faults presenting an equivalent behavior (fault equivalence) and faults that dominate others with respect to detection requirements (fault dominance);
– Injection of faults in the circuit description (fault injection);
– Comparison of the results of the simulation with and without faults. A disagreement indicates the detection of a fault, and the detected fault can be removed from the list of faults (fault dropping).

Fault simulation is used for several purposes. For instance, it allows the definition of the fault coverage (the percentage of faults detected among all possible faults defined for the circuit under a fault model assumption) of a test set. Another purpose of fault simulation is to help diagnosis (fault location) through the definition of a fault dictionary that identifies which faults are detected by each test vector.
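Stated as a formula, the parenthetical definition above reads:

$$FC = \frac{\text{number of detected faults}}{\text{total number of modeled faults}} \times 100\%$$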

Fault simulation can be an expensive process due to the size of the circuit and of the fault list, and many strategies have been proposed to implement this task efficiently (Abramovici et al. 1994; Bushnell and Agrawal 2000).


3.1.3 Automatic Test Pattern Generation

After the definition of a suitable fault model and of a fault simulation process, the automatic generation of test stimuli (test patterns) is the natural step towards the definition of a cost-effective test procedure for an electronic system.

The problem of generating a set of test patterns consists in finding a (possibly minimal) set of input values and output measures capable of exposing as many faults as possible, i.e., a test set that maximizes the fault coverage. For digital circuits, one can find three basic strategies for test pattern generation: exhaustive, pseudo-random and deterministic. Exhaustive testing assumes that all possible values in the input domain will be used as test stimuli. Clearly, this strategy is only feasible for very small circuits, although it ensures the maximum possible fault coverage. Pseudo-random test generation covers a subset of the input domain by choosing "random" values starting with a seed and for a given length of the test set, i.e., a pre-defined number of test vectors. Different seeds and test lengths lead to different subsets, distinct fault coverage values, and different test application costs. This approach is a good compromise between generation and application costs and achieved fault coverage. However, it has been shown that some faults are resistant to pseudo-random vectors. Those faults can only be detected by very specific input values, which are not normally generated in an unbiased environment such as the pseudo-random approach, and are tackled by deterministic approaches. Deterministic test generation approaches can be further divided into algebraic and topological strategies. Algebraic methods are based on boolean expressions that describe the circuit behavior (Bushnell and Agrawal 2000). Topological methods are the most used deterministic test pattern generators (TPGs) and use the circuit structure to find suitable input values. These methods are also called path sensitization methods, whose most prominent representative is the D-algorithm proposed by Roth et al. (1967). The approach consists in first activating the fault by forcing, on a given signal, a value that is the opposite of the fault value. Then, the algorithm tries to propagate the fault value to one or more primary outputs. Finally, the algorithm tries to set the primary inputs with values that justify the fault value on the signal under test. The procedure is repeated for all expected faults on all signals of the circuit and a test vector is generated for each fault. Many improvements over this basic idea were proposed in the literature to make the process a cost-effective one (Bushnell and Agrawal 2000).
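The toy generator below (a sketch over the same kind of invented netlist used earlier, not the D-algorithm itself) illustrates the exhaustive strategy together with fault dropping: every input vector is tried, and a vector is kept only if it detects at least one not-yet-detected single stuck-at fault.

```python
from itertools import product

INPUTS = ("a", "b", "c")
GATES = [("n1", "AND", ("a", "b")), ("y", "OR", ("n1", "c"))]

def evaluate(vector, fault=None):
    # Evaluate the netlist; 'fault' is a (net, stuck_value) pair or None.
    nets = dict(zip(INPUTS, vector))
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    for out, op, ins in GATES:
        x, y = nets[ins[0]], nets[ins[1]]
        nets[out] = (x & y) if op == "AND" else (x | y)
        if fault and fault[0] == out:
            nets[out] = fault[1]
    return nets["y"]

faults = [(net, v) for net in ("a", "b", "c", "n1", "y") for v in (0, 1)]
test_set, remaining = [], set(faults)
for vec in product((0, 1), repeat=len(INPUTS)):
    detected = {f for f in remaining if evaluate(vec, f) != evaluate(vec)}
    if detected:                  # keep vectors that detect new faults
        test_set.append(vec)
        remaining -= detected     # fault dropping
coverage = 100 * (len(faults) - len(remaining)) / len(faults)
print(test_set, coverage)
```

Exhaustive enumeration is only viable here because the circuit has three inputs; a pseudo-random generator would replace the `product` loop with vectors drawn from a seeded source, and a deterministic TPG would target each remaining fault directly.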

3.1.4 Design for Test

Despite the advances in the test generation process, the fault coverage obtained for complex sequential circuits is rather low when only a simple topological search is used. The main reason for this is that sequential circuits present loops and conditions that cannot be resolved or justified to the circuit pins. Moreover, the number of primary inputs (outputs), and the number of paths those inputs (outputs) can directly influence, is very low compared to the circuit size. Thus, the idea of changing the design to make the test easier became very appealing when large scale integrated (LSI) circuits became a reality. The main goal of a design for testability (DFT) approach is to increase the accessibility to the internal signals of a circuit during test. One can find different types of DFT strategies, but the most common ones are the so-called scan path and the use of built-in test structures.

Scan-based testing (Eichelberger and Williams 1978) consists in connecting the circuit flip-flops together to form a shift register (called a scan chain) that is further connected to the circuit interface. During the circuit's normal operation, the scan register is transparent and each flip-flop works in its original mode, as shown in Fig. 3.1a. In test mode, shown in Fig. 3.1b, the register is activated and one can serially load test stimuli into the scan register (scan-in operation). Then, a test enable signal is activated, allowing the loaded stimulus to be applied to the combinational logic of the circuit. In the same cycle, the flip-flops receive and store the resulting values of the computation. This test response can then be collected by scanning out the values stored in the register (scan-out operation).

Fig. 3.1 Scan-based testing: (a) original circuit, (b) circuit with scan infrastructure

To implement this solution, a scan flip-flop such as the one depicted in Fig. 3.2 must replace the original flip-flop; nowadays this replacement can be done automatically by design tools.

Fig. 3.2 Scan flip-flop

A full scan test is implemented when all available flip-flops in the circuit are replaced by scan flip-flops. Partial scan is also an option, and design tools can automatically choose which flip-flops should be replaced according to testability measures over the internal signals (Bushnell and Agrawal 2000).

The test application time (also called test length) of scan-based testing is a function of the length L of the scan chain (L is the number of flip-flops in the chain) and of the number of test patterns p generated by an ATPG, as shown in Eq. 3.1 below:

$$\text{test length} = (p + 2) \cdot L + p + 4 \qquad (3.1)$$

Equation 3.1 includes the time required to test the scan chain itself, which consists in loading the sequence 00110011… of length L + 4 into the chain (Bushnell and Agrawal 2000). After testing the scan chain, one test pattern is loaded into the chain (scan-in operation) to set the scan flip-flops. Then, one clock cycle is required to apply the vector to the logic under test and load the response back into the scan flip-flops. Finally, a scan-out sequence is performed and, at the same time, the next test vector is loaded. The process repeats for all test vectors in the set.

Scan flip-flops can be arranged as a single long scan chain or as a set of smaller scan chains. The first option yields the smallest test pin count in the circuit periphery, but larger (usually prohibitive) test times. Thus, the use of multiple, smaller, and balanced scan chains (scan chains with the same or similar lengths) is a common approach.
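The sketch below evaluates Eq. 3.1 and shows why splitting the flip-flops into multiple balanced chains shortens the test; it assumes (as a simplification) that each chain has its own scan-in/scan-out pins and that all chains shift in parallel.

```python
import math

def scan_test_length(p, L):
    # Eq. 3.1: p patterns through a chain of L scan flip-flops,
    # plus the (L + 4)-cycle test of the chain itself.
    return (p + 2) * L + p + 4

flip_flops, patterns = 10_000, 500
for chains in (1, 4, 16):
    L = math.ceil(flip_flops / chains)   # balanced partition
    print(chains, scan_test_length(patterns, L))
# 1 chain: 5,020,504 cycles; 4 chains: 1,255,504; 16 chains: 314,254
```

The pattern count is unchanged, so the test time scales almost linearly with the chain length, at the price of more test pins.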

A scan-based solution is also used to improve the access to internal signals in a printed circuit board (PCB). This solution is called boundary scan (LeBlanc 1984) and consists in the implementation of a large scan chain connecting all the chip pins on the board, as shown in Fig. 3.3a. Each pin in the chip periphery is connected to a so-called boundary-scan cell and all boundary-scan cells are further connected in a serial mode to form a long peripheral scan chain throughout the PCB. A boundary-scan cell can be configured to allow the normal operation of the system or to implement three types of testing (Maunder and Tulloss 1990; Bushnell and Agrawal 2000): internal test mode (chip inputs are controlled and chip outputs are observed through the peripheral scan chain); external test mode (board connections can be controlled and observed through the peripheral chain); and sample mode (chip inputs and outputs are only observed through the peripheral scan chain). The boundary-scan technique is the basis of the JTAG standard (IEEE 1994), which is largely used in commercial chips. JTAG is also known as the IEEE 1149.1 standard and is composed of a set of registers and a test access port (TAP) that defines the interface and the operation protocol of the test infrastructure (Parker 2003), as shown in Fig. 3.3b.


Fig. 3.3 Boundary-scan architecture: (a) chip connection, (b) boundary-scan interface


Scan-based testing can improve accessibility to the internal signals of the chip, but the serial nature of the scan chain reduces the maximum possible test frequency and precludes tests that must be applied at the nominal operating frequency of the circuit. To tackle this problem, built-in self-test (BIST) strategies can be used. As presented in Fig. 3.4, in the BIST approach some or all test functions are implemented inside the chip or board, instead of (or in complement to) using an external test equipment. A self-test scheme provides mechanisms for the generation of test stimuli, the evaluation of test responses, test control, and the isolation of the inputs and outputs of the chip during test application (Wang 2006). BIST strategies always imply some sort of extra cost, in terms of area, power, delay, or pin count overhead. BIST can also potentially reduce the yield, since the final circuit is normally bigger than the original, with more transistors subject to defects. Therefore, BIST costs must be counterbalanced by the reduction in test and maintenance costs and by the possible increase in the number and types of target faults.

Fig. 3.4 Generic BIST architecture

Although the great diversity in circuit functionality and architectures precludes the definition of a universal BIST solution, some structured BIST approaches have been successfully used to cover classes of circuits. For instance, the use of linear feedback shift registers (LFSRs) as generators of pseudo-random test vectors and/or as test response analyzers or compressors (in the form of a multiple input shift register – MISR) is a commonly used BIST strategy because it presents a good compromise between test generation and application cost and the resulting fault coverage. A generic LFSR is composed of D flip-flops configured as a shift register, with XOR gates implementing a linear feedback network. Depending on the size of the shift register and on the location of the XOR gates in the feedback line, different series of vectors can be generated. Similarly, different compression characteristics can be obtained (Abramovici et al. 1994). A multifunctional BIST structure called BILBO (Built-In Logic Block Observer) (Konemann et al. 1979) is depicted in Fig. 3.5. This structure combines the functions of latching (c1 = c2 = 1), scan register (c1 = c2 = mode = 0), pseudo-random test generation or test compression (c1 = mode = 1, c2 = 0), and parallel initialization for sampling operations (c1 = 0, c2 = 1) (Bushnell and Agrawal 2000).
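To give a feel for LFSR-based pattern generation, here is a small software model of a Fibonacci-style LFSR (the 4-bit width and tap positions are one textbook maximal-length choice made for this example, not the configuration of any particular BIST controller).

```python
def lfsr_patterns(seed, taps=(3, 2), width=4, count=8):
    # Fibonacci LFSR: the feedback bit is the XOR of the tapped bits.
    # Taps (3, 2) realize x^4 + x^3 + 1, a maximal-length polynomial,
    # so a nonzero seed cycles through all 2^4 - 1 nonzero states.
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Eight pseudo-random 4-bit test vectors starting from seed 0b0001.
print([format(v, "04b") for v in lfsr_patterns(0b0001)])
```

Operated as a MISR instead, the same kind of register would additionally XOR the circuit responses into its state each cycle, leaving a compressed signature to be compared against the fault-free one.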

For scan-based circuits, a common BIST solution is shown in Fig. 3.6. This solution is called STUMPS (self-testing using MISR and parallel shift) and uses an LFSR-based test pattern generator that feeds test stimuli into all scan chains (Bardell and McAnney 1982). After the test application cycle, test responses are scanned out to the MISR block, which compresses the result into a signature. When the scan chains outnumber the width of the LFSR and/or the MISR used, a phase shifter and a compactor are required, respectively (Wang 2006).

As circuits grew larger, test architectures and solutions advanced to more structured and standardized implementations with support from CAD tools. Nevertheless, the size and complexity of electronic systems increased at a faster pace, provoking a paradigm shift in the design flow. By the end of the 1990s, the core-based approach was the new design paradigm being used in the industry, and the realization of complex systems on a single chip imposed new challenges with respect to testing.


Fig. 3.5 Four-input BILBO


Fig. 3.6 STUMPS architecture


3.2 SoC Testing Requirements

The main SoC test requirements were stated in 1998 by Zorian, Marinissen, and Dey (Zorian 1997, 1998; Zorian et al. 1998) and comprise the test requirements for each embedded core, for the interconnection infrastructure, and for the system as a whole.

3.2.1 Core Test Requirements

Core test requirements can be seen as the first level of a complete SoC test solution and basically involve three definitions: the core test strategy, the electronic access to the core during test, and the core isolation during test.

– Definition of the core test approach: The definition of a test strategy for a core (Fig. 3.7) depends on the knowledge of the logic implemented by that block. Therefore, this task is usually performed by the core provider, which also assures the protection of the intellectual property associated with the reusable block. However, the core test strategy also depends on the target technology of the final system, the system test resources and the required fault coverage, and these parameters are not known by the core provider a priori. Thus, in general, the core provider supplies a basic set of test vectors and DFT strategies (scan chains and BIST controllers, among others) for the core, to test for the most common technology faults. For open source soft cores, the system integrator has access to the core description. In this case, he/she can make some modifications in the core logic to leverage the overall system test solution. In both cases, the communication between the two parties (core provider and core user) is the key for the successful testing of the block. One needs a common, unambiguous communication strategy between core providers and core users so that test information is correctly transmitted. The Core Test Language (CTL) defined in the IEEE Std 1450.6 (2005a) was devised with this goal.

Fig. 3.7 Communication between core provider and core user to generate core testing data

– Access to the core periphery during test: During test, other pins, in addition to the functional interfaces of the core, must be accessed (scan-in and scan-out interfaces, control pins, test clock, etc.). When the core has no built-in test generators and analyzers, it must be connected to another block that generates the test patterns and/or analyzes the test responses. This block can be an external tester or another core in the SoC. In any case, the connection between the core under test (CUT) and the tester block must also be implemented, since this connection is not normally available in the original functional connections of the CUT within the system. When an external automatic test equipment (ATE) is used, there is an additional issue that must be considered. The number of pins at system level is usually much smaller than the number of test pins required for all embedded cores (or even for a single core, in many cases). Moreover, the definition of the test access mechanism (TAM) for each core impacts all other system test costs, such as test time and area overhead. Therefore, the definition of such a mechanism must be carefully considered.

– Core isolation: During test, a core must be switched to a test mode, so that the test pins become ready to receive and send data and the internal test circuitry (scan chains, test points, etc.) is enabled. Additionally, it is important to isolate the CUT from the rest of the system for many reasons: when TAMs are shared and need to be directed to a single CUT at a time; to avoid damaging other cores connected to the functional outputs of the CUT; when multiple cores are tested in parallel and share some functional connections or use distinct test patterns; to test the surrounding logic and connections of a core; or to enable the test of a core that uses built-in test structures. Therefore, there must exist extra logic around the core to provide the several operation modes of this module along the system operation and testing. This logic can either be part of the core and be delivered along with the block itself, or be implemented by the system integrator, according to the core and system requirements (Zorian 1997, 1998). A crucial part of the IEEE Std 1500 (2005b) is the definition of such logic, called the test wrapper, which will be detailed later in this chapter.

3.2.2 Interconnection Requirements

Interconnection testing has been largely discussed in the literature (Kautz 1974; Wagner 1987; Feng et al. 1999; Cuviello et al. 1999; Zhao and Dey 2003; Marinissen et al. 2003) and one now has a considerable range of test strategies covering different types of faults affecting electronic wires. At SoC level, the test of the interconnections presents a single additional requirement: the possibility of precisely controlling and observing each connection. This test is, nevertheless, of extreme importance not only for the system characterization (determining the actual performance achieved by the system, for instance), but also to ensure the whole system operation, as interconnections have become one of the central blocks in system design. Furthermore, as the number of connections among cores is constantly increasing (in number and bitwidth), interconnection testing has an important impact on the system test time. It relies on the existence of both the test access mechanism to each embedded core and the core's test wrapper, which allows loading and capturing interconnection test signals.

3.2.3 System Requirements

System level test requirements are related to the cost-effective combination of the test access mechanisms of each core and of the interconnection infrastructure, in such a way that all cores and connections are properly tested without deeply affecting the system performance, cost, and design time. When combining the tests of each embedded core and of the interconnections, an overall test scheduling defines the order of testing of each part of the system: cores, interconnections and glue logic (additional logic that helps connecting the cores or implements extra functionality in the system). The test scheduling depends basically on the set of test resources available and shared among cores, on the ATE constraints, on the SoC test interface, and on the system power constraints. The main goal of a good test scheduling approach is to minimize the overall SoC test time for a given set of resources. Optimizing the usage of the test infrastructure is of course desirable and is often achieved by current test planning approaches, as we detail in the next section.
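As a toy illustration of the scheduling problem (the core names, test lengths, and the two-bus TAM below are invented for the example, and this greedy heuristic is not any specific published method), the sketch assigns each core test to the currently least-loaded TAM, so cores on different TAMs are tested in parallel and the overall test time is the load of the busiest TAM.

```python
def schedule(test_times, num_tams=2):
    # Longest-test-first greedy: sort core tests by decreasing length,
    # then place each one on the least-loaded TAM partition.
    loads = [0] * num_tams
    plan = {t: [] for t in range(num_tams)}
    for core, t in sorted(test_times.items(), key=lambda kv: -kv[1]):
        tam = loads.index(min(loads))
        plan[tam].append(core)
        loads[tam] += t
    return max(loads), plan  # overall SoC test time and assignment

cores = {"cpu": 900, "dsp": 700, "mpeg": 650, "sram": 400, "rom": 100}
print(schedule(cores))
# (1400, {0: ['cpu', 'sram', 'rom'], 1: ['dsp', 'mpeg']})
```

Real test planning must additionally respect power budgets, ATE constraints, and shared test resources, which is what turns this simple bin-packing view into a hard optimization problem.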

In addition to the test scheduling generation, system level testing also deals with the definition of a test controller, which can be implemented either by specific hardware inside the chip or by software (a test program) executing in the external ATE or in an embedded processor. This controller coordinates the test by sending the correct control and test signals to each block under test in the system, according to the devised test scheduling.

In the next section, we present a summary of the solutions that have been proposed in the last few years in response to some of the defined test requirements. Then, these solutions are discussed in the context of a NoC-based system, as well as the recent alternative solutions.

3.3 SoC Testing Approaches

Industry and academia made a considerable effort during the late 1990s and the first decade of this century to develop suitable solutions for the test of core-based systems. Responding to the several test requirements listed above, one can group the techniques presented so far into four categories: (1) test access mechanism definition, (2) test scheduling methods, (3) test planning approaches, and (4) standardization initiatives. The techniques in the first group tackle the problem of defining an access mechanism to the core periphery during test. Such solutions can be based either on the reuse of hardware resources already available in the system (functional connections or logic) or on the insertion of additional infrastructure for testing purposes only. Test scheduling methods aim at minimizing the system test time, and pure test scheduling approaches assume a previously defined access mechanism. We call test planning approaches the more comprehensive proposals where several system aspects (TAM, test time, power budget, fault coverage, additional test requirements, and so on) are considered when devising the SoC testing solution. Finally, the fourth group comprises the initiatives for the standardization of the interface between the cores and the system during test. In the sequel, some representative works from these four groups will be discussed. Again, we must remind the reader that very good books dedicated specifically to this topic are available in the market (Chakrabarty 2002; Chakrabarty et al. 2002; Larsson 2005; Silva et al. 2006; Wang et al. 2007) and should be the prime reference on this subject. The reason for including this section here is to give the reader a brief introduction to the SoC testing field so that he/she can follow the rest of this book.

3.3.1 Conceptual Test Architecture

Zorian et al. (1998) introduced the conceptual test architecture for embedded cores as well as the nomenclature for its elements that has been used in the literature. The conceptual architecture, shown in Fig. 3.8, is composed of four basic elements:

– A test stimuli source for the real-time test pattern generation;
– A test sink for the reception and evaluation of the test responses;
– A test access mechanism (TAM) for the transportation of the test data from the test source to the core and from the core to the test sink;
– A core wrapper, for the connection of the core terminals to the TAM terminals, providing the mechanisms for isolation and integration of the core into the system during test.

As a conceptual architecture, this proposal does not assume or determine any specific implementation aspect of those four elements. They must be identified in any SoC test architecture, but their organization is at the discretion of the system integrator and depends on the core requirements and system constraints. There are, on the other hand, functional requirements for the wrapper structure, which must allow the core to operate in at least three modes: normal, internal test, and external or interconnection test. Additionally, the wrapper must also implement some type of bypass mode to isolate the core from the system when other cores are being tested.

Test sources and sinks can be implemented off-chip (using an external test equipment), on-chip (through BIST structures or by reusing functional units of the system), or they can be a combination of both (for example, when a core is tested by a combination of deterministic and pseudo-random vectors). Furthermore, the source and the sink do not need to be of the same type; that is, one can have an on-chip test source and an off-chip test sink, or vice-versa. Despite the freedom of the conceptual architecture, most SoC testing approaches assume an external test equipment as the main source and sink of test data. Some authors propose, though, the reuse of the embedded microprocessor for this role (Papachristou et al. 1999; Hwang and Abraham 2001; Lahiri et al. 2002; Chen et al. 2002). In those methods the microprocessor is already connected (for normal system operation) to a large number of (if not all) embedded cores through a functional bus or a hierarchy of buses. However, as there is a single or a small number of test processors being used and connections are normally not shared among cores, only one or a few cores are tested at a time. Therefore, this type of solution is usually effective for small systems where the microprocessor is connected to most embedded cores.

The test wrapper is a simple adapter around the core that connects the TAM(s) to the core (Marinissen et al. 2000b). The wrapper provides the switching between normal functional access and test access via the TAM. Besides providing accessibility and isolation of the core during test, one of the roles of the wrapper is to provide accessibility to the interconnections of that core for interconnection testing. Moreover, wrappers provide width adaptation when the core test interface width is larger than the TAM width (Marinissen et al. 2000b).

Fig. 3.8 Conceptual architecture for SoC testing

3.3.2 Test Access Mechanism Definition

The test access mechanism connects the core under test to the pattern sources and the test sinks. Although the same mechanism can be used for the transport of the test data in both directions, this is not mandatory, and a number of TAM combinations can co-exist for the same core (Zorian et al. 1998). Indeed, most successful approaches for TAM design are based on the combination of access mechanisms, following the diversity of a complex core-based SoC. The design of a TAM always searches for the best trade-off between the transport capacity of the access mechanism and its application cost. The capacity of data transportation is limited by the capacity of the source and sink, and by the system area that can be used by the TAM, which is usually measured as the TAM bitwidth. Figure 3.9 shows the relationship between the TAM bitwidth and the test costs in terms of area and pin overhead, and the core test time.

Over the years, several approaches for TAM architectures were proposed, ranging from the reuse of functional buses, interconnections, or even functional units of the system, to the use of specific infrastructure (with several configurations) inserted into the system only for test access. The most prominent approaches for TAM definition presented so far are discussed below.

The first TAM proposals (Whetsel 1997; Bhattacharya 1998; Lee and Huang 2000; Hu and Yibe 2001; Oakland 2000) were mainly based on the IEEE Std 1149.1 (1994). The basic assumption of these approaches is that many cores being used at the time had been ASICs (application-specific integrated circuits) in the past, and boundary scan was already implemented in those modules. Moreover, as the JTAG mechanism requires only five extra pins at system level, the pin count would be low. Some authors used the advantages of the IEEE Std 1149.1 to deal with hierarchical SoCs (Li et al. 2002a, b). However, the inclusion of a TAP controller into a core makes the integration of such a core into a SoC more difficult (Lousberg 2002), since an extra level of control is required for each TAPed module. The main disadvantage of the JTAG-based methods is, nevertheless, the possibly excessive testing time caused by the reduced TAM bandwidth provided by the system-level TAP.

Other authors tackled the access problem by reusing available system resources, such as cores and functional interconnections. The use of core functionality as an access path was explored by Ghosh et al. (1997), Chakrabarty et al. (2001), Makris and Orailoglu (1998), and Yoneda and Fujiwara (2002). In these approaches, test access to embedded cores is based on transparent paths through other cores and design modules, as depicted in Fig. 3.10.

Fig. 3.9 Relationship between TAM width and basic test costs


In this approach, every core should come not only with a set of pre-computed tests, but also with a set of transparent paths capable of transporting test data through the core. Transparent paths can either be part of the core's functionality (Chiusano et al. 2000) or be synthesized accordingly (Chakrabarty et al. 2001; Makris and Orailoglu 1998; Yoneda and Fujiwara 2002). The main drawback of transparent paths is the need for the core designer to provide extra functionality without knowing the system requirements. This may lead to an excessive or, on the contrary, an insufficient number of access paths. Ghosh et al. (1998) proposed a solution where different transparent paths are available in different versions of each core, each version with a distinct area overhead. Thus, only one version of the core is actually used and the system area is optimized. Still, one cannot ensure that all required paths will be available, and a huge number of core versions may still be required, thus affecting design time. Moreover, to define the transparent modes during system integration, soft cores are assumed, and access to the core description is required.

In another vein, system functional interconnections are reused during test to transport test data. One of the first implementations of this approach was presented by Nourani and Papachristou (1998a, b, 2000). They propose the introduction of a bypass mode from each core input port to its output port through which test data can be transferred. The system is then modeled as a directed weighted graph in which core accessibility is solved as a shortest path problem. Nowadays, this bypass mode is implemented by the standard test wrapper that will be discussed later in this chapter.
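This graph formulation can be illustrated with a minimal Python sketch. The topology, node names, and edge weights below are hypothetical: nodes stand for chip interfaces and cores offering transparent or bypass modes, edge weights stand for the cost of crossing a module, and the cheapest access route to the CUT is a plain shortest-path query (solved here with Dijkstra's algorithm).

import heapq

GRAPH = {  # adjacency list: node -> [(neighbor, crossing cost), ...]
    "chip_in": [("coreA", 1), ("coreB", 2)],
    "coreA":   [("coreC", 1)],
    "coreB":   [("coreC", 3), ("CUT", 5)],
    "coreC":   [("CUT", 1)],
    "CUT":     [],
}

def shortest_path(src, dst):
    # Dijkstra: returns (cost, path) of the cheapest access route.
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in GRAPH[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_path("chip_in", "CUT"))  # (3, ['chip_in', 'coreA', 'coreC', 'CUT'])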

Finally, a fourth line of solutions advocates the insertion of test buses in the system to implement the test access mechanism. Even though the dedicated wiring increases the area costs of the SoC, the scalability and the possibility of modeling


Fig. 3.10 TAM defined as transparent paths through other cores


the problem with a limited number of variables make this TAM a very attractive one. As a result, this approach became the most explored test architecture and, owing to its cost-effectiveness, the standard solution for complex SoCs.

Varma and Bhatia (1998) proposed one of the first test bus approaches, called VisibleCores. Their approach is based on two dedicated on-chip variable-width buses, one for transporting test control signals and one for transporting test data signals, as shown in Fig. 3.11. Additional control logic connects every embedded core to the test buses. During test, a single CUT is connected to the test buses at a time. This constraint implies a high test time, as cores cannot be tested in parallel. Moreover, some tests (of interconnections, for instance) may involve multiple cores and are not tackled by this approach. The bitwidths of the test buses are fixed and defined by the largest bitwidth required among the cores.

Aerts and Marinissen (1998) proposed three scan chain architectures implemented at chip level to connect the chip interface to the core interfaces. These scan chain models laid the foundations for the TestRail architecture (Marinissen et al. 1998), which later became the most effective architecture for SoC testing. The three configurations of chip-level scan chains are called the Multiplexing architecture, the Daisychain architecture, and the Distribution architecture. They aim at reducing the size of the test set at system level; that is, the TAM is designed so that the complete set of test vectors (comprising the test vectors of all embedded cores) is optimized, thus reducing the storage and communication costs within the ATE. The number of pins available at the chip interface to connect the internal TAM to the external ATE is given, and based on this parameter the best architecture can be chosen.

In the Multiplexing and Daisychain architectures shown in Fig. 3.12 (control and functional connections are omitted for simplicity), all cores have access to the total


Fig. 3.11 TestBus architecture


available TAM width, while in the Distribution architecture (Fig. 3.13) the total available TAM width is distributed over the cores. In the Multiplexing architecture, only one core wrapper can be accessed at a time; consequently, the cores must be tested serially. To optimize the chip-level test set, the test integrator must have access to the core descriptions and generate balanced internal scan chains, that is, synthesize internal scan chains of similar size so that the length of the longest scan chain is minimized. An important drawback of this architecture is that testing the functional connections between cores is difficult, as the TAM does


Fig. 3.12 Basic system-level scan chain architectures: (a) Multiplexing, (b) Daisychain


not provide access to multiple wrappers at the same time. Also, embedded cores must be tested sequentially, as they share the output interface with the external tester. The other two basic architectures do not have these restrictions. The Daisychain architecture includes a bypass mode around the cores and defines a long scan chain connecting all cores in the system. The number of internal scan chains per core equals the bitwidth of the chip-level scan chain and is defined according to the number of test pins available at chip level. Initially, a test vector is shifted in and applied to all cores. Test responses are captured and shifted out while the next vector is shifted in. When all test vectors of a core have been applied, that core is put into bypass mode to reduce the test time, and the test vectors of the remaining cores continue to be applied. As the cores finish their tests, they enter bypass mode, until only a single core (the one with the largest test set) remains in test mode. Finally, in the Distribution architecture the test pins available at chip level are distributed among the embedded cores by an algorithm that assigns a scan-chain bitwidth to each core so as to optimize the overall test time.
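A rough Python sketch illustrates the Daisychain bypass behavior. The model is our own simplification, under loud assumptions: one internal scan chain per core, a 1-bit bypass register per core, and capture cycles ignored. While a core still has patterns left it contributes its full scan length to the shift path; once finished, it contributes only its bypass bit, so the path keeps shrinking as cores complete.

def daisychain_cycles(cores):
    # cores: list of (scan_length, n_patterns) tuples, one per core.
    remaining = [p for _, p in cores]
    total = 0
    while any(remaining):
        # Shift-path length for the current vector: full scan length for
        # unfinished cores, a single bypass bit for finished ones.
        path = sum(L if r > 0 else 1 for (L, _), r in zip(cores, remaining))
        total += path  # shift one full vector in (and a response out)
        remaining = [max(r - 1, 0) for r in remaining]
    return total

print(daisychain_cycles([(100, 10), (50, 40), (200, 5)]))  # hypothetical cores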

In the same year, Marinissen et al. (1998) proposed the TestRail architecture, which flexibly combines the previous scan-chain architectures at system level. TestRail (Fig. 3.14) allows multiple scan chains on one SoC. Each chain operates independently, as in the Distribution architecture. Thus, the test integrator can explore the test space to find the best trade-off among test time, area overhead, test power consumption, and so on, while still meeting the cores' testing requirements.


Fig. 3.13 Distributed scan chain architecture


The original TestRail approach works best if the internal scan chains of each core can be defined or optimized by the system integrator. Later approaches have further improved the TestRail principle by defining algorithms to optimize only the connection of the core to the TAM through the test wrapper, as we describe next.

In the first test access architectures, the connection between a single core and the TAM was assumed to be complete, that is, all wires of that TAM were assigned to the core. Such architectures are called fixed-width TAMs (Iyengar et al. 2002b). Iyengar et al. (2002b) define as flexible-width TAMs the “core-TAM assignments where the granularity of TAM wires is considered, instead of considering the entire TAM bundle as one inseparable entity”.

Co-optimization of wrappers and TestRails for fixed-width connections was investigated by Goel and Marinissen (2002a, b). An architecture-independent heuristic algorithm that optimizes the test architecture for cores with both fixed-width and flexible-width scan chains was further proposed by Goel and Marinissen (2002c). The algorithm efficiently determines the number of TAMs and their widths, the assignment of modules to TAMs, and the wrapper design per module.

Chakrabarty (2000b) proved that many problems related to the definition of bus-based TAMs are NP-hard, and used Integer Linear Programming (ILP) to model the test bus assignment problem: for a given number of test pins at the system interface divided into a given number of test buses, the best width for each test bus and the best assignment of cores to test buses are determined so that the system test time is minimized. In further works, Chakrabarty, Iyengar, Marinissen, and others improved this original ILP model to optimize the test bus assignment under other system constraints, such as power and place-and-route


Fig. 3.14 TestRail architecture


requirements (Chakrabarty 2000a), wrapper and TAM co-optimization (Iyengar et al. 2001, 2002a), and test data compression (Iyengar et al. 2003). Finally, Xu and Nicolici (2004) proposed a multi-frequency TAM to reduce test application time.
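To convey the flavor of the underlying assignment problem (this is a greedy sketch of our own, not Chakrabarty's ILP formulation), the code below assigns cores to test buses so that the most loaded bus, which determines the system test time, stays as light as possible. The core test times are hypothetical and assumed to be already scaled to the respective bus widths; cores on the same bus are tested serially, while the buses run in parallel.

def assign_cores(core_times, n_buses):
    # Longest-processing-time-first: sort cores by test time, then place
    # each one on the currently lightest bus.
    buses = [[] for _ in range(n_buses)]
    loads = [0] * n_buses
    for core, t in sorted(core_times.items(), key=lambda kv: -kv[1]):
        b = loads.index(min(loads))
        buses[b].append(core)
        loads[b] += t
    return buses, max(loads)  # assignment and resulting system test time

core_times = {"cpu": 900, "dsp": 700, "mpeg": 650, "sram": 300, "rom": 120}
print(assign_cores(core_times, n_buses=2))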

The issue of designing balanced scan chains within the wrapper was addressed by Marinissen et al. (2000b). To solve the problem, the authors proposed two polynomial-time algorithms that yield near-optimal results. Wrapper optimization was modeled as a bin packing problem by Iyengar et al. (2001), and an algorithm based on the Best Fit Decreasing heuristic was proposed as a solution (Iyengar et al. 2002c). The algorithm aims at minimizing both the core test time and the TAM width required by the test wrapper.
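A simplified variant (ours, in the spirit of these heuristics rather than a reproduction of the published algorithms) shows the core idea: distribute a core's internal scan chains, longest first, over a given number of wrapper chains so that the longest resulting wrapper chain, which dictates the shift time per pattern, stays short. The chain lengths are hypothetical.

def balance_wrapper_chains(scan_chains, n_wrapper_chains):
    # Assign each internal chain (longest first) to the currently shortest
    # wrapper chain; return the partition and the longest chain length.
    wchains = [[] for _ in range(n_wrapper_chains)]
    lengths = [0] * n_wrapper_chains
    for sc in sorted(scan_chains, reverse=True):
        i = lengths.index(min(lengths))
        wchains[i].append(sc)
        lengths[i] += sc
    return wchains, max(lengths)

chains = [320, 240, 200, 160, 120, 80, 40]  # hypothetical internal chains
partition, s_max = balance_wrapper_chains(chains, n_wrapper_chains=3)
print(partition, s_max)  # shifting each pattern takes roughly s_max cycles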

Yoneda et al. (2006) proposed a test strategy (wrapper and TAM design plus a test scheduling algorithm) for SoCs with cores operating at different clock frequencies during test. The combination of multiple clock domains with bandwidth conversion and gated clocks was proposed by Yu et al. (2007) to further improve power-constrained test schedules.

A few authors have proposed alternative TAM architectures, also assuming the possibility of including in the chip extra hardware used only for test purposes. The use of crossbar switches for efficient communication at varying bitwidths between cores and the test bus was proposed by Benabdenbi et al. (2000, 2002) and Basu et al. (2002a, b). Nahvi and Ivanov (2001) proposed a packet-switching communication-based TAM for SoCs. The proposed TAM model is called NIMA (Novel Indirect and Modular Architecture), and it is defined to provide modularity, generality, and configurability for the test architecture. This architecture is very similar to a functional on-chip network, but it is specifically designed for the test task. Thus, routing and addressing strategies are defined considering the test requirements of each system. Moreover, routing is hardwired, assuming that a test schedule is defined by the system designer before system synthesis. The results presented in that work showed good performance of the proposed TAM model with respect to area overhead and test time.

3.3.3 Test Scheduling Definition

System test scheduling is normally devised together with the system TAM. Indeed, the most effective approaches are those that take several test constraints into consideration (TAM definition, test time minimization, test volume reduction, etc.). Some authors, however, have proposed strategies to generate efficient test schedules for systems with specific characteristics, such as cores with multiple test sets and BIST-based testing, or a previously defined TAM.

Sugihara et al. (1998, 2000) and Jervan et al. (2000, 2002) propose automatic methods to select the best combination of pseudo-random and deterministic patterns for each core in an SoC to minimize the system test time. In both works, each core is assumed to have a number of test sets, each set using a different combination of


pseudo-random and deterministic patterns. The selection algorithms choose one set for each core so that the system test time is minimized. The quality of the final solution is related to the number of possible test sets, which defines the search space for the selection algorithms. In addition, test time minimization is based on the test parallelization of the BISTed cores. Zhao and Upadhyaya (2002) also assume a combination of test sets for each core and minimize the system test time for a given set of tests for the cores, a set of resources, the test access architecture, and the maximum power budget.

To reduce the system test time, one wants to test as many cores in parallel as possible. However, this may imply excessive power dissipation of the SoC in test mode. The resulting power dissipation can even be higher than the dissipation in normal operation mode, where a smaller percentage of the cores is active at any given time (Zorian 1993). Thus, many test scheduling approaches were developed with this constraint in mind. In these approaches, for a given TAM architecture, a test schedule is devised to minimize, optimize, or simply meet the system power constraints (Chou et al. 1997; Muresan et al. 2000; Rosinger et al. 2001a, 2002; Pomeranz and Reddy 2002; Zhao and Upadhyaya 2005).
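A minimal greedy scheduler (our own illustration, not one of the cited algorithms; the durations and power figures are hypothetical) conveys the central idea: a core test is launched only when the summed power of the tests already running leaves room for it under the chip's power budget.

def schedule(tests, p_max):
    # tests: dict core -> (duration, power). Returns core -> start time.
    running, start, t = [], {}, 0  # running holds (end_time, power) pairs
    pending = sorted(tests.items(), key=lambda kv: -kv[1][1])  # power-greedy
    while pending:
        running = [(e, p) for e, p in running if e > t]
        used = sum(p for _, p in running)
        launched = False
        for i, (core, (dur, pwr)) in enumerate(pending):
            if used + pwr <= p_max:  # fits under the power budget now
                start[core] = t
                running.append((t + dur, pwr))
                pending.pop(i)
                launched = True
                break
        if not launched:  # wait until the next running test finishes
            t = min(e for e, _ in running)
    return start

tests = {"cpu": (900, 5), "dsp": (700, 4), "sram": (300, 3), "rom": (120, 1)}
print(schedule(tests, p_max=8))  # e.g. cpu and sram start together at t=0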

Ravikumar et al. (1999) assume that all embedded cores have BIST-based testing and allow the sharing of test resources (pattern generators and signature registers) among cores. They then proposed an algorithm that minimizes the test application time and the test area overhead, treating the total power dissipation as a constraint. Later, they expanded their work (Ravikumar et al. 2000) to deal with a library of possible mappings for each core in the system, where each mapping has a different power consumption and area. The algorithm thus selects which version of each core should be synthesized so that a minimal test time for a given power constraint is found.

Both TAM optimization and test scheduling significantly influence the test time, test data volume, and test cost of SoCs. Furthermore, TAMs and test schedules are strongly dependent on each other, and it is very unlikely that a test schedule devised for a given TAM can be reused “as is” with another TAM organization. Integrated methods that perform TAM design and test scheduling in conjunction are therefore required to achieve low-cost, high-quality tests.

Larsson and Peng (2001a) presented an integrated technique for test scheduling and scan-chain division under power constraints. The design of test wrappers that allow multiple scan chain configurations within a core was also studied. Larsson and Fujiwara (2002) extended this model by (1) allowing several different bandwidths at the cores and (2) controlling the cores' test power consumption. Iyengar and Chakrabarty (2001, 2002) considered precedence-based scheduling of large SoCs (when some cores must be tested in a specific order). Flottes et al. (2002) proposed a sessionless test scheme where several constraints in terms of test resource sharing, power dissipation, and precedence are taken into account. Huang et al. (2002) presented a method to solve the resource allocation and test scheduling problems together without being tied to any specific TAM. Koranne and Iyengar (2002) proposed the use of k-tuples as a compact and standardized representation of test schedules to facilitate the evaluation of SoC test automation solutions considering


precedence relations among tests and power constraints. Finally, genetic algorithms have also been used by Chattopadhyay and Reddy (2003) to solve the test scheduling and TAM partitioning problems.

3.3.4 Test Planning

A number of frameworks and more comprehensive models have been proposed to cope with the global optimization of the final solution. These frameworks differ in the core test methods addressed and in the TAM definition, but they all manage the distribution of test resources considering a variety of cost factors.

Benso et al. (2000) presented a tool for the integration of cores with different test requirements (full scan, partial scan, and BIST-ready cores). The TAM follows a bus-based model that connects a group of cores to a BIST controller. Scheduling of BIST resources and data pattern delivery are also considered in the test solution. Frameworks for SoC testing considering test time minimization, TAM optimization, test set selection, and test resource placement, along with test resource and power consumption constraints, were studied by Larsson and Peng (2001a, 2002) and Larsson et al. (2001b, 2002). The integrated tool is based on the fact that different test sets can be used to test a core: each test set is evaluated with respect to power, time, memory requirements, and so on, and the best one is chosen according to the system constraints. They further assume that scan chains can be divided into smaller ones to accelerate the test. BIST resources are then placed in the system according to their usage by the cores. After the TAM definition, the test schedule is generated so that the minimum test time for that specific TAM is achieved.

Another test planning approach (Cota et al. 2002, 2004) proposed a heterogeneous TAM in which functional connections are reused as much as possible without compromising system costs. The resulting test solution represents the best trade-off in terms of test time, pin count, and area overhead, considering the system characteristics and power constraints. The main contributions of the proposed methodology were the expansion of the TAM models, their integration with the test scheduling definition, and the consequent possibility of exploring the system design space during the definition of a test solution.

Iyengar et al. (2002d) described an integrated framework for plug-and-play SoC test automation. The framework is based on a wrapper/TAM co-optimization approach and includes a scheduling algorithm that incorporates preemption, precedence, and power constraints. Finally, the relationship between the TAM width and the tester data volume is studied to identify an effective TAM width for the SoC.

Goel and Marinissen (2003) extended existing SoC test architecture design approaches to minimize the required tester vector memory depth and the test application time. Test data volume reduction has been further addressed by Touba (2002), Sinanoglu and Orailoglu (2002), Gonciari et al. (2002), Rosinger et al. (2001b), and Chandra and Chakrabarty (2002a, b).


3.4 Test Standard Initiatives

3.4.1 IEEE Standards 1500 and 1450.6

In September 1995, the IEEE Test Technology Technical Committee (TTTC) created a Technical Activity Committee (TAC) for the study of the test and design-for-test of core-based SoCs. This committee became the IEEE P1500 Standard for Embedded Core Test (SECT) group in June 1997, with the main goal of developing a standard mechanism for the test of core-based systems (Marinissen et al. 1999). The final standard was approved in August 2005 and is now known as the Standard Testability Method for Embedded Core-based Integrated Circuits (IEEE Std 1500 2005b).

The main goal of IEEE Std 1500 is to facilitate core-based testing, i.e., the testing of large system chips mostly composed of intellectual property (IP) blocks. This task is known as modular testing, as each and every core (module) in the system has to be accessed and tested after manufacturing, even though the test information for these modules is not generated by the system integrator. Furthermore, the standard also enables the test of the external logic surrounding the core. The motivation behind this industry-wide standard is to enable the reuse of tests when a core is used in multiple different SoCs, as well as to enable the testing of SoCs with cores from distinct core providers.

IEEE Std 1500 does not define the internal test methods or internal test structures of the cores, as these are strongly dependent on the core logic and design. Rather, the standard provides a clear communication mechanism between the core provider and the system integrator by standardizing only the test information transfer model and the functionality of the logic block that connects the core to the system during test (Marinissen et al. 2002c). IEEE Std 1500 also does not cover SoC test integration and optimization, since each system has a different set of requirements and constraints that must be dealt with by the system integrator.

The main element of IEEE Std 1500 is a scalable core test architecture based on the definition of a core wrapper that interfaces the IP block with the system and implements the different operation modes of the core (normal, internal test, external test, etc.). The standard uses a test-specific language as the communication mechanism between core providers and users. This language is the subject of another standardization initiative, the IEEE Std 1450.6 core test language (CTL), which was approved in December 2005 (IEEE Std 1450.6 2005a).

CTL was initially part of the IEEE Std 1500 effort as a second working group, focused on defining a standard language in which all test-related information to be transferred from core providers to core users could be expressed (Kapur et al. 1999). As the work of the two groups (wrapper and language design) evolved, they were separated into two standards, such that CTL, the language, became IEEE Std 1450.6, while the information model for cores that use CTL (the description of wrapped and unwrapped cores) remained in IEEE Std 1500.

The core test language allows the “representation of design constructs and characteristics that are needed to be made visible by the core provider” and the “representation of test patterns that are to be reused for cores in an SoC test flow” (IEEE Std 1450.6 2005a).


3.4.1.1 Scalable Core Test Architecture

Of the basic elements of the conceptual test architecture shown in Fig. 3.7, IEEE Std 1500 standardizes only the wrapper. The IEEE Std 1500 wrapper is a shell around the embedded core that isolates the core from its environment during system testing. For this, the wrapper must implement at least three operation modes: (1) functional operation, in which the wrapper is transparent and functional signals flow normally from the system to the core and vice-versa; (2) inward-facing test modes, in which the wrapper connects the test interface of the core to the system test access mechanism and the core is tested; and (3) outward-facing test modes, in which the core is isolated from the system while the wrapper remains connected to the system-level TAM and is used to transmit test data for other modules (Marinissen et al. 2000b).

Figure 3.15 gives an overview of the main elements of the IEEE Std 1500 wrapper architecture (IEEE Std 1500 2005b). Dotted lines in the figure represent optional structures. The main structure is the wrapper boundary register (WBR), which comprises a number of wrapper cells that implement the actual switching between operation modes. Each wrapper cell is connected to a core terminal (except in a few specific cases), but different cell models can be present in a single wrapper (Amory et al. 2007). The wrapper has a mandatory one-bit input/output port pair, WSI (Wrapper Serial Input) and WSO (Wrapper Serial Output). The parallel test interface of the wrapper is optional and depends on the chip-level TAM architecture.


Fig. 3.15 IEEE Std. 1500 wrapper architecture


The control of the operation modes is implemented through the Wrapper Instruction Register (WIR). This register can be loaded either serially, through WSI, or in parallel. Finally, a Wrapper Bypass Register (WBY) provides a bypass path between the serial input and the serial output. Optionally, a parallel bypass can be implemented through the wrapper boundary register by connecting the outputs of the wrapper input cells to the test inputs of the wrapper output cells.
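The mandatory serial path can be summarized in a toy Python model. This is a deliberate simplification of the standard's semantics, not a compliant implementation: only the serial WSI-to-WSO path is modeled and only two instructions are shown (the names follow the standard's WS_* convention). The instruction held in the WIR selects whether serial data flows through the one-bit WBY or through the WBR.

class Wrapper1500:
    def __init__(self, n_boundary_cells):
        self.wbr = [0] * n_boundary_cells  # wrapper boundary register
        self.wby = 0                       # 1-bit wrapper bypass register
        self.wir = "WS_BYPASS"             # wrapper instruction register

    def load_instruction(self, instr):
        self.wir = instr                   # e.g. "WS_BYPASS" or "WS_EXTEST"

    def shift(self, wsi):
        # One serial shift cycle; returns the bit appearing on WSO.
        if self.wir == "WS_BYPASS":
            wso, self.wby = self.wby, wsi  # short 1-bit path
        else:                              # any WBR-selecting instruction
            wso = self.wbr[-1]
            self.wbr = [wsi] + self.wbr[:-1]
        return wso

w = Wrapper1500(n_boundary_cells=4)
w.load_instruction("WS_EXTEST")
wso_bits = [w.shift(b) for b in (1, 0, 1, 1)]  # shift a pattern into the WBR
print(w.wbr, wso_bits)                         # [1, 1, 0, 1] [0, 0, 0, 0]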

IEEE Std 1500 (2005b) defines two compliance levels, referred to as IEEE 1500 unwrapped compliance and IEEE 1500 wrapped compliance. In both cases, the core is assumed to have an associated CTL description. The wrapped compliance level “refers to a core that incorporates an IEEE 1500 wrapper function”; in this case, the associated CTL program describes the core test information and the wrapper operation. At the unwrapped compliance level the wrapper is not present yet, but the associated CTL program contains the information on the basis of which a compliant wrapper can be built. The two compliance levels provide flexibility in the usage of the standard, which can thus be adapted to different business models as well as to different IP and system characteristics.

While the test of unwrapped blocks was discussed by Xu and Nicolici (2005), several works have adapted the basic wrapper definition to include additional functionalities. For instance, an efficient wrapper architecture for wrapped-compliant hierarchical cores was proposed by Sehgal et al. (2004). Xu et al. (2007) discussed wrapper design for cores with multiple clock domains. More recently, Benso et al. (2008) proposed a systematic methodology for the implementation of customized frameworks that check the compliance of a core to the standard.

A short tutorial on IEEE Std 1500 is presented by Marinissen and Zorian (2009), together with application case studies that demonstrate how the scalable architecture can be applied to complex SoCs.

3.5 SoC Test Benchmarks

Marinissen et al. (2002a, b) proposed a set of SoC test benchmarks with the main goal to “stimulate research into new methods and tools for modular testing of SoCs and to enable the objective comparison of such methods and tools with respect to effectiveness and efficiency” (Marinissen et al. 2002b). The benchmark format provides the core test requirements in terms of the number and type of tests, the number of test patterns, and the core test interface (number and size of internal scan chains, number of test pins, hierarchy level of the core). Twelve systems, from both academic and industrial contributors, compose the ITC’02 SoC Test Benchmarks (Marinissen et al. 2002b):

u226, from Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil;
d281 and d695, from Duke University, Durham, USA;
h953, from National Tsing Hua University, Hsinchu, Taiwan;
g1023, from Jiri Gaisler and University of Stuttgart, Stuttgart, Germany;
f2126, from Faraday Technologies, Hsinchu, Taiwan;


q12710, from Hewlett-Packard, Shrewsbury, USA;
p22081, p34392, and p93791, from Philips Electronics, Eindhoven, The Netherlands;
t512505, from Texas Instruments, Bangalore, India;
a586710, from Analog Devices, Austin, USA.

The names assigned to the benchmarks consist of one letter, followed by a number. The letter represents the contributor of the benchmark and the number indicates the test complexity of the SoC. This number is a function of the number of primary inputs, primary outputs, bidirectional terminals, scan chains, internal scan chain lengths, and test patterns.

3.6 On the Applicability of Standard SoC Test Strategies in NoC-Based Systems

As one can observe from the previous sections, many challenges in SoC testing have been tackled over the last 15 years or so. Nowadays, one can say that there is a cost-effective test solution for a complex SoC. This solution is based on three concepts:

Wrapped-compliant cores with an associated CTL program;
Flexible bus-based test access mechanisms (TestRail and its variants);
Integrated definition of the wrapper, TAM, and test scheduling, through efficient heuristics, taking into account ATE and system constraints (test time, power budget, area overhead, core test requirements, etc.).

Despite the challenges in terms of area overhead, power consumption during test, and design time, one of the most important constraints on the test solution is the reduction of the test time. This is mainly due to ATE constraints and to the huge number of test patterns that must be applied to a complex system in a short period of time.

For systems using traditional connection models (buses and direct connections), where access to each core from the system interface is not trivial or not even possible, the bus-based test access mechanism (where another bus is inserted into the system to be used only for testing the embedded cores) is the approach that allows the definition of simple, regular, and efficient test solutions for complex SoCs. This approach can decouple testing issues (test time reduction, test pattern reuse, etc.) from system characteristics (core DfT and connection map, floorplanning, etc.). Thus, test planning depends more on the cores present in the system than on the system characteristics, although the latter can be considered in the process. Because of this, a test solution can also be reused in future chips using the same or a similar subset of cores. With all the advantages of the bus-based approach (the simple and well-known concept of a bus, simple connection to the cores during test, reusability, suitability for modular testing, availability of efficient test planning algorithms, small resulting test time, flexibility), the cost of inserting additional buses into the system normally pays off.


Even though efficient test architectures have been proposed in the last few years, the costs related to the test of current SoCs are still an important part of the total manufacturing cost of system chips (ITRS 2009), and one can still see a considerable effort toward their reduction. As manufacturing technology evolves, more can be done in a single die, and more complex systems are built every day. Testing has to be adapted to deal with each new design and manufacturing breakthrough. Thus, new fault models, the geometric increase of logic, stronger interactions between cores and system layers, and ever tighter design and test requirements with larger design spaces to explore (power issues, testing of the test infrastructure, diversity of the logic to be tested, etc.) are constant challenges for a good testing approach.

When a NoC is used as the interconnection platform of a complex system, one must re-evaluate the pros and cons of the bus-based TAM model. In a NoC-based SoC, the network (mostly its wiring) already occupies a considerable area of the chip. Hence, depending on its organization, the NoC may already impose some place-and-route challenges. Therefore, the inclusion of extra hardware such as test buses that spread throughout the chip may introduce additional routing and yield problems. On the other hand, the communication capabilities of the NoC can be used to efficiently transmit test data during the test of the system, since the NoC indeed provides a path to/from all embedded cores from/to any available system interface. Finally, one can argue that the communication problems and requirements of normal system operation are also present during test:

Performance, scalability, and parallelism: Given the huge test data volume and the at-speed testing requirements, throughput, synchronization, and parallelism are essential in an efficient TAM. Test buses meet those requirements, but face the same problems as a functional bus when the number of cores connected to a single test bus increases. Current solutions show that the number of test buses in the system is actually limited by the available test interface between the tester and the system.
Reusability: The whole concept of modular testing allows a test solution to be developed for a given set of modules, leaving system characteristics as additional parameters to be taken into account. This favors the reuse of a test solution in systems sharing a similar set of embedded cores. However, the actual implementation of the test solution is still strongly tied to the system implementation, as extra hardware is added to the chip.
Reliability and fault tolerance: Test buses must themselves be tested and must be working properly to make the system test possible. Being an exclusive access mechanism to each core, how can the system be tested if a test bus presents a manufacturing fault? How can one ensure that transient faults or typical wiring faults (in a system dominated by wiring) are not affecting the test bus and, consequently, the test of the cores?

For those reasons, one needs alternative approaches to test a NoC-based system. The rest of this book will present some recent strategies proposed in the literature in this direction.


References

Abramovici M, Breuer MA, Friedman AD (1994) Digital systems testing & testable design. Wiley-IEEE Press, New York

Aerts J, Marinissen EJ (1998) Scan chain design for test time reduction in core-based ICs. In: Proceedings of the international test conference (ITC), Washington, DC, pp 448–457

Amory AM, Goossens K, Marinissen EJ, Lubaszewski M, Moraes F (2007) Wrapper design for the reuse of a bus, network-on-chip, or other functional interconnect as test access mechanism. IET Comput Digital Tech 1(3):197–206

Bardell PH, McAnney WH (1982) Self-testing of multiple logic modules. In: Proceedings of the international test conference (ITC), Philadelphia, pp 200–204

Basu S, Mukhopadhay D, Roychoudhury D, Sengupta I, Bhawmik S (2002a) Reformatting test patterns for testing embedded core based system using test access mechanism (TAM) switch. In: Proceedings of the 7th Asia and South Pacific design automation conference (ASPDAC), Bangalore, pp 598–603

Basu S, Sengupta I, Chowdhury DR, Bhawmik S (2002b) An integrated approach to testing embedded cores and interconnects using test access mechanism (TAM) switch. J Electron Test 18(4):475–485

Benabdenbi M, Maroufi W, Marzouki M (2000) Cas-Bus: a scalable and reconfigurable test access mechanism for systems on a chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 141–145

Benabdenbi M, Maroufi W, Marzouki M (2002) CAS-BUS: a test access mechanism and a toolbox environment for core-based system chip testing. J Electron Test 18(4):455–473

Benso A, Chiusano S, Di Carlo S, Prinetto P, Ricciato F, Spadari M, Zorian Y (2000) HD2BIST: a hierarchical framework for BIST scheduling, data patterns delivering and diagnosis in SoCs. In: Proceedings of the international test conference (ITC), Atlantic City, pp 892–901

Benso A, Di Carlo S, Prinetto P, Zorian Y (2008) IEEE standard 1500 compliance verification for embedded cores. IEEE Trans VLSI 16(4):397–407

Bhattacharya D (1998) Hierarchical test access architecture for embedded cores in an integrated circuit. In: Proceedings of the 16th IEEE VLSI test symposium (VTS), Princeton, pp 8–14

Bushnell M, Agrawal V (2000) Essentials of electronic testing for digital, memory, and mixed-signal VLSI circuits (frontiers in electronic testing volume 17). Springer, New York

Chakrabarty K (2000a) Design of system-on-a-chip test access architectures under place-and-route and power constraints. In: Proceedings of the ACM/IEEE design automation conference (DAC), Los Angeles, pp 432–437

Chakrabarty K (2000b) Design of system-on-a-chip test access architectures using integer linear programming. In: Proceedings of the 18th IEEE VLSI test symposium (VTS), Montréal, pp 127–134

Chakrabarty K (2002) SOC (System-on-a-Chip) testing for plug and play test automation (frontiers in electronic testing). Springer, Dordrecht

Chakrabarty K, Mukherjee R, Exnicios A (2001) Synthesis of transparent circuits for hierarchical and system-on-a-chip test. In: Proceedings of the 14th international conference on VLSI design, Bangalore, pp 431–436

Chakrabarty K, Iyengar V, Chandra A (2002) Test resource partitioning for system-on-a-chip (frontiers in electronic testing volume 20). Springer, Dordrecht

Chandra A, Chakrabarty K (2002a) Reduction of SOC test data volume, scan power and testing time using alternating run-length codes. In: Proceedings of the ACM/IEEE design automation conference (DAC), New Orleans, pp 673–678

Chandra A, Chakrabarty K (2002b) Test data compression and decompression based on internal scan chains and Golomb coding. IEEE Trans CAD 21(6):715–722

Chattopadhyay S, Reddy KS (2003) Genetic algorithm based test scheduling and test access mech-anism design for system-on-chips. In: Proceedings of the 16th international conference on VLSI design, New Delhi, pp 341–346


Chen L, Bai X, Dey S (2002) Testing for interconnect crosstalk defects using on-chip embedded processor cores. J Electron Test 18(4):529–538

Chiusano S, Prinetto P, Wunderlich H-J (2000) Non-intrusive BIST for systems-on-a-chip. In: Proceedings of the international test conference (ITC), Atlantic City, pp 644–651

Chou RM, Saluja KK, Agrawal VD (1997) Scheduling tests for VLSI systems under power constraints. IEEE Trans VLSI 5(2):175–184

Cota E, Carro L, Orailoglu A, Lubaszewski M (2002) Test planning and design space exploration in core-based environment. In: Proceedings of the design, automation and test in Europe (DATE), Paris, pp 483–490

Cota EF, Carro L, Lubaszewski M, Orailoglu A (2004) Searching for global test costs optimization in core-based systems. J Electron Test 20(4):357–373

Cuviello M, Dey S, Bai X, Zhao Y (1999) Fault modeling and simulation for crosstalk in system-on-chip interconnects. In: Proceedings of the IEEE/ACM international conference on computer-aided design (ICCAD), San Jose, pp 297–303

Eichelberger EB, Williams TW (1978) A logic design structure for LSI testability. J Des Autom Fault Tolerant Comput 2(2):165–178

Feng W, Meyer FJ, Lombardi F (1999) Novel control pattern generators for interconnect testing with boundary scan. In: Proceedings of the international symposium on defect and fault toler-ance in VLSI systems, Albuquerque, pp 112–120

Flottes M-L, Pouget J, Rouzeyre B (2002) A heuristic for test scheduling at system level. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, p 1124

Ghosh I, Jha NK, Dey S (1997) A low overhead design for testability and test generation technique for core-based systems. In: Proceedings of the international test conference (ITC), Washington, DC, pp 50–59

Ghosh I, Dey S, Jha NK (1998) A fast and low cost testing technique for core-based system-on-chip. In: Proceedings of the ACM/IEEE design automation conference (DAC), San Francisco, pp 542–547

Goel SK, Marinissen EJ (2002a) Cluster-based test architecture design for system-on-chip. In: Proceedings of the IEEE VLSI test symposium (VTS), Monterey, pp 259–264

Goel SK, Marinissen EJ (2002b) A novel test time reduction algorithm for test architecture design for core-based system chips. In: Proceedings of the 7th IEEE European test workshop (ETW), Corfu, pp 7–12

Goel SK, Marinissen EJ (2002c) Effective and efficient test architecture design for SoCs. In: Proceedings of the international test conference (ITC), Baltimore, pp 529–538

Goel SK, Marinissen EJ (2003) Layout-driven SOC test architecture design for test time and wire length minimization. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich, pp 738–743

Gonciari PT, Al-Hashimi BM, Nicolici N (2002) Integrated test data decompression and core wrapper design for low-cost system-on-a-chip testing. In: Proceedings of the international test conference (ITC), Baltimore, pp 64–73

Hu H, Yibe S (2001) A scalable test mechanism and its optimization for test access to embedded cores. In: Proceedings of the 4th international conference on ASIC, Shanghai, pp 773–776

Huang Y, Cheng W-T, Tsai C-C, Mukherjee N, Samman O, Zaidan Y, Reddy SM (2002) On concurrent test of core-based SoC design. J Electron Test 18(4):401–414

Hwang S, Abraham JA (2001) Reuse of addressable system bus for SoC testing. In: Proceedings of the 14th annual IEEE international ASIC/SOC conference, Washington, DC, pp 215–219

International Technology Roadmap for Semiconductors (2009) Test and test equipment

Iyengar V, Chakrabarty K (2001) Precedence-based, preemptive, and power-constrained test scheduling for system-on-a-chip. In: Proceedings of the IEEE VLSI test symposium (VTS), Marina Del Rey, pp 368–374

Iyengar V, Chakrabarty K (2002) System-on-a-chip test scheduling with precedence relationships, preemption, and power constraints. IEEE Trans CAD 21(9):1088–1094

Iyengar V, Chakrabarty K, Marinissen EJ (2001) Iterative test wrapper and test access mechanism co-optimization. In: Proceedings of the international test conference (ITC), Baltimore, pp 1023–1032


Iyengar V, Chakrabarty K, Marinissen EJ (2002a) On using rectangle packing for SoC wrapper/TAM co-optimization. In: Proceedings of the IEEE VLSI test symposium (VTS), Monterey, pp 253–258

Iyengar V, Chakrabarty K, Marinissen EJ (2002b) Recent advances in test planning for modular testing of core-based SoCs. In: Proceedings of the Asian test symposium (ATS), Guam, pp 320–325

Iyengar V, Chakrabarty K, Marinissen EJ (2002c) Test wrapper and test access mechanism co-optimization for system-on-chip. J Electron Test 18(2):213–230

Iyengar V, Chakrabarty K, Marinissen EJ (2002d) Wrapper/TAM co-optimization, constraint-driven test scheduling, and tester data volume reduction for SoCs. In: Proceedings of the ACM/IEEE design automation conference (DAC), New Orleans, pp 685–690

Iyengar V, Chandra A, Schweizer S, Chakrabarty K (2003) A unified approach for SoC testing using test data compression and TAM optimization. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich, pp 1188–1189

Jervan G, Peng Z, Ubar R (2000) Test cost minimization for hybrid BIST. In: Proceedings of the IEEE international symposium on defect and fault tolerance in VLSI systems, San Francisco, pp 283–291

Jervan G, Peng Z, Ubar R, Kruus H (2002) A hybrid BIST architecture and its optimization for SoC testing. In: Proceedings of the international symposium on quality electronic design, San Jose, pp 273–279

Jha NK, Gupta S (2003) Testing of digital systems. Cambridge University Press, Cambridge

Kapur R, Keller B, Koenemann B, Lousberg M, Reuter P, Taylor T, Varma P (1999) P1500-CTL: towards a standard core test language. In: Proceedings of the VLSI test symposium (VTS), Berkeley, pp 489–490

Kautz WH (1974) Testing for faults in wiring networks. IEEE Trans Comput C-23(4):358–363

Konemann B, Mucha J, Zwiehof G (1979) Built-in logic block observation techniques. In: Proceedings of the international test conference (ITC), Cherry Hill, pp 37–41

Koranne S, Iyengar V (2002) On the use of k-tuples for SoC test schedule representation. In: Proceedings of the international test conference (ITC), Baltimore, pp 539–548

Lahiri K, Raghunathan A, Dey S (2002) Communication architecture based power management for battery efficient system design. In: Proceedings of the ACM/IEEE design automation conference (DAC), New Orleans, pp 691–696

Larsson E (2005) Introduction to advanced system-on-chip test design and optimization (frontiers in electronic testing). Springer, Dordrecht

Larsson E, Fujiwara H (2002) Power constrained preemptive TAM scheduling. In: Proceedings of the 7th IEEE European test workshop (ETW), Corfu, pp 119–126

Larsson E, Peng Z (2001a) Test scheduling and scan-chain division under power constraint. In: Proceedings of the Asian test symposium (ATS), Kyoto, pp 259–264

Larsson E, Peng Z (2001b) An integrated system-on-chip test framework. In: Proceedings of the design, automation and test in Europe conference (DATE), Dresden, pp 138–144

Larsson E, Peng Z (2002) An integrated framework for the design and optimization of SoC test solutions. J Electron Test 18(4):385–400

Larsson E, Peng Z, Carlsson G (2001) The design and optimization of SoC test solutions. In: Proceedings of the IEEE/ACM international conference on computer aided design (ICCAD), San Jose, pp 523–530

Larsson E, Arvidsson K, Fujiwara H, Peng Z (2002) Integrated test scheduling, test parallelization and TAM design. In: Proceedings of the 11th Asian test symposium (ATS), Guam, pp 397–404

LeBlanc JJ (1984) LOCST: a built-in self-test technique. IEEE Des Test Comput 1(4):45–52

Lee K-J, Huang C-I (2000) A hierarchical test control architecture for core based design. In: Proceedings of the Asian test symposium (ATS), Taipei, pp 248–253

Li J-F, Huang H-J, Chen J-B, Su C-P, Wu C-W, Cheng C, Chen S-I, Hwang C-Y, Lin H-P (2002a) A hierarchical test methodology for systems on chip. IEEE Micro 22(5):69–81

Li J-F, Huang H-J, Chen J-B, Su C-P, Wu C-W, Cheng C, Chen S-I, Hwang C-Y, Lin H-P (2002b) A hierarchical test scheme for system-on-chip designs. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 486–490

Lousberg M (2002) TAPs all over my chips. In: Proceedings of the international test conference (ITC), Baltimore, p 1189


Makris Y, Orailoglu A (1998) RTL test justification and propagation analysis for modular designs. J Electron Test 13(2):105–120

Marinissen E, Zorian Y (2009) IEEE Std 1500 enables modular SoC testing. IEEE Des Test Comput 26(1):8–17

Marinissen EJ, Arendsen R, Bos G, Dingemanse H, Lousberg M, Wouters C (1998) A structured and scalable mechanism for test access to embedded reusable cores. In: Proceedings of the international test conference (ITC), Washington, DC, pp 284–293

Marinissen EJ, Zorian Y, Kapur R, Taylor T, Whetsel L (1999) Towards a standard for embedded core test: an example. In: Proceedings of the international test conference (ITC), Atlantic City, pp 616–627

Marinissen EJ, Kapur R, Zorian Y (2000a) On using IEEE P1500 SECT for test plug-n-play. In: Proceedings of the international test conference (ITC), Atlantic City, pp 770–777

Marinissen EJ, Goel SK, Lousberg M (2000b) Wrapper design for embedded core test. In: Proceedings of the international test conference (ITC), Atlantic City, pp 911–920

Marinissen EJ, Iyengar V, Chakrabarty K (2002a) A set of benchmarks for modular testing of SoCs. In: Proceedings of the international test conference (ITC), Baltimore, pp 521–528

Marinissen EJ, Iyengar V, Chakrabarty K (2002b) ITC’02 SoC test benchmarks. http://itc-02socbenchm.pratt.duke.edu/. Accessed 23 Aug 2010

Marinissen EJ, Kapur R, Lousberg M, McLaurin T, Ricchetti M, Zorian Y (2002c) On IEEE P1500’s standard for embedded core test. J Electron Test 18(4):365–383

Marinissen EJ, Vermeulen B, Hollmann H, Bennetts RG (2003) Minimizing pattern count for interconnect test under a ground bounce constraint. IEEE Des Test Comput 20(2):8–18

Maunder CM, Tulloss RE (1990) The test access port and boundary-scan architecture. IEEE Computer Society Press, Los Alamitos/Washington, DC

Muresan V, Wang X, Vladutiu M (2000) A comparison of classical scheduling approaches in power-constrained block-test scheduling. In: Proceedings of the international test conference (ITC), Atlantic City, pp 882–891

Nahvi M, Ivanov A (2001) A packet switching communication-based test access mechanism for system chips. In: Proceedings of the IEEE European test workshop (ETW), Stockholm, pp 81–86

Nourani M, Papachristou C (1998a) A bypass scheme for core-based system fault testing. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 979–980

Nourani M, Papachristou C (1998b) Parallelism in structural fault testing of embedded cores. In: Proceedings of the 16th IEEE VLSI test symposium (VTS), Monterey, pp 15–20

Nourani M, Papachristou C (2000) An ILP formulation to optimize test access mechanism in system-on-chip testing. In: Proceedings of the international test conference (ITC), Atlantic City, pp 902–910

Oakland SF (2000) Considerations for implementing IEEE 1149.1 on system-on-a-chip integrated circuits. In: Proceedings of the international test conference (ITC), Atlantic City, pp 628–637

Papachristou CA, Martin F, Nourani M (1999) Microprocessor based testing for core-based system on chip. In: Proceedings of the ACM/IEEE design automation conference (DAC), New Orleans, pp 586–591

Parker KP (2003) The boundary scan handbook. Kluwer, Boston

Pomeranz I, Reddy SM (2002) A partitioning and storage based built-in test pattern generation method for delay faults in scan circuits. In: Proceedings of the Asian test symposium (ATS), Guam, pp 110–115

Ravikumar CP, Verma A, Chandra G (1999) A polynomial-time algorithm for power constrained testing of core-based systems. In: Proceedings of the Asian test symposium (ATS), Shanghai, pp 107–112

Ravikumar CP, Chandra G, Verma A (2000) Simultaneous module selection and scheduling for power-constrained testing of core based systems. In: Proceedings of the international conference on VLSI design, Calcutta, pp 462–467

Rosinger PM, Al-Hashimi BM, Nicolici N (2001a) Power constrained test scheduling using power profile manipulation. In: Proceedings of the IEEE international symposium on circuits and systems (ISCAS), Sydney, vol 5. pp 251–254


Rosinger P, Gonciari PT, Al-Hashimi BM, Nicolici N (2001b) Simultaneous reduction in volume of test data and power dissipation for systems-on-a-chip. Electron Lett 37(24):1434–1436

Rosinger PM, Al-Hashimi BM, Nicolici N (2002) Power profile manipulation: a new approach for reducing test application time under power constraints. IEEE Trans CAD 21(10):1217–1225

Roth JP, Bouricius WG, Schneider PR (1967) Programmed algorithms to compute tests to detect and distinguish between failures in logic circuits. IEEE Trans Electron Comput EC-16(5):567–580

Sehgal A, Goel SK, Marinissen EJ, Chakrabarty K (2004) IEEE P1500-compliant test wrapper design for hierarchical cores. In: Proceedings of the international test conference (ITC), Charlotte, pp 1203–1212

Semiconductor Industry Association (1997) The national technology roadmap for semiconductors. Semiconductor Industry Association, San Jose

Silva F, McLaurin T, Waayers T (2006) The core test wrapper handbook: rationale and application of IEEE Std. 1500 (frontiers in electronic testing). Springer, Dordrecht

Sinanoglu O, Orailoglu A (2002) Efficient construction of aliasing-free compaction circuitry. IEEE Micro 22(5):82–92

IEEE Standards Board (1994) IEEE standard test access port and boundary-scan architecture. IEEE/ANSI Standard 1149.1

IEEE Standards Board (2005a) IEEE standard test interface language (STIL) for digital test vector data – core test language (CTL). IEEE Std 1450.6

IEEE Standards Board (2005b) IEEE standard testability method for embedded core-based integrated circuits. IEEE Std 1500

Sugihara M, Date H, Yasuura H (1998) A novel test methodology for core-based system LSIs and a testing time minimization problem. In: Proceedings of the international test conference (ITC), Washington, DC, pp 465–472

Sugihara M, Date H, Yasuura H (2000) Analysis and minimization of test time in a combined BIST and external test approach. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, pp 134–140

Touba NA (2002) Deterministic test vector compression/decompression for systems-on-a-chip using an embedded processor. J Electron Test 18(4):503–514

Varma P, Bhatia S (1998) A structured test re-use methodology for core-based system chips. In: Proceedings of the international test conference (ITC), Washington, DC, pp 294–302

Venkatraman R, Pundoor S, Koithyar A, Rao M, Rao JC (2009) Optimisation quality assessment in large, complex SoC designs – challenges and solutions. In: Proceedings of the 22nd international conference on VLSI design, New Delhi, pp 525–530

Wagner PT (1987) Interconnect testing with boundary scan. In: Proceedings of the international test conference (ITC), Washington, DC, pp 52–57

Wang L-T (2006) Logic built-in self test. In: Wang L-T, Wu C-W, Wen X (eds) VLSI test principles and architectures: design for testability. Morgan Kaufmann, Amsterdam/Boston

Wang L-T, Wu C-W, Wen X (2006) VLSI test principles and architectures: design for testability (systems on silicon). Morgan Kaufmann, Amsterdam/Boston

Wang L-T, Stroud CE, Touba NA (2007) System-on-chip test architectures: nanometer design for testability (systems on silicon). Morgan Kaufmann, Burlington

Whetsel L (1997) An IEEE 1149.1 based test access architecture for ICs with embedded cores. In: Proceedings of the international test conference (ITC), Washington, DC, pp 69–78

Xu Q, Nicolici N (2004) Multi-frequency test access mechanism design for modular SoC testing. In: Proceedings of the 13th Asian test symposium (ATS), Kenting

Xu Q, Nicolici N (2005) Modular and rapid testing of SOCs with unwrapped logic blocks. IEEE Trans VLSI 13(11):1275–1285

Xu Q, Nicolici N, Chakrabarty K (2007) Test wrapper design and optimization under power con-straints for embedded cores with multiple clock domains. IEEE Trans CAD 26(8):1539–1547

Yoneda T, Fujiwara H (2002) Design for consecutive testability of system-on-a-chip with built-in self testable cores. J Electron Test 18(4):487–501

Yoneda T, Masuda K, Fujiwara H (2006) Power-constrained test scheduling for multi-clock domain SoCs. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich


Yu TE, Yoneda T, Zhao D, Fujiwara H (2007) Using domain partitioning in wrapper design for IP cores under power constraints. In: Proceedings of the 25th VLSI test symposium (VTS), Berkeley

Zhao Y, Dey S (2003) Fault-coverage analysis techniques of crosstalk in chip interconnects. IEEE Trans CAD 22(6):770–782

Zhao D, Upadhyaya S (2002) Adaptive test scheduling in SoC’s by dynamic partitioning. In: Proceedings of the 17th IEEE international symposium on defect and fault tolerance in VLSI systems, Vancouver, pp 334–342

Zhao D, Upadhyaya S (2005) Dynamically partitioned test scheduling with adaptive TAM configu-ration for power-constrained SoC testing. IEEE Trans CAD 24(6):956–965

Zorian Y (1993) A distributed BIST control scheme for complex VLSI devices. In: Proceedings of the VLSI test symposium (VTS), Atlantic City, New Jersey, pp 6–11

Zorian Y (1997) Test requirements for embedded core-based systems and IEEE P1500. In: Proceedings of the international test conference (ITC), Washington, DC, pp 191–199

Zorian Y (1998) System-chip test strategies. In: Proceedings of the ACM/IEEE design automation conference (DAC), San Francisco, pp 752–757

Zorian Y, Marinissen EJ, Dey S (1998) Testing embedded-core based system chips. In: Proceedings of the international test conference (ITC), Washington, DC, pp 130–143

Zorian Y, Dey S, Rodgers MJ (2000) Test of future system-on-chips. IEEE/ACM international conference on computer-aided design (ICCAD), San Jose, pp 392–398

Page 73: Reliability, Availability and Serviceability

Chapter 4
NoC Reuse for SoC Modular Testing

In this chapter we cover the first proposed test approaches that reuse the NoC as Test Access Mechanism (TAM) in a core-based system. First, the basic reuse strategy is presented, including the very few modifications implemented in the network interface, and the definition of the test packets that make the test possible. Then, two test scheduling approaches (preemptive and non-preemptive) are discussed. These basic reuse strategies focus on the definition of specific test scheduling algorithms, since the TAM (NoC) architecture and transport capacity are given. The reuse model and the scheduling algorithms presented here assume a stream-like communication can be established, through the NoC, between the cores under test and the external test sources and sinks. This assumption implies a NoC with guaranteed fixed bandwidth and latency. Other reuse models (use of different test packet models and BE NoCs) are discussed in Chap. 5.

4.1 Basic NoC Reuse Model

The idea of reusing the NoC as test access mechanism appeared in the first references on NoC-based systems (Vermeulen et al. 2003). Even though the problems of using the traditional SoC test solution in a NoC-based system are discussed by Vermeulen et al. (2003), they advocate the NoC reuse mostly for the test of the NoC itself. The first NoC reuse strategy for the test of embedded cores was proposed in Cota et al. (2003b), where the first premises for the NoC-based TAM were defined and a first test scheduling algorithm was evaluated in comparison to the traditional bus-based TAM. One will see that additional requirements for the NoC reuse, as well as more efficient scheduling algorithms, have been presented in the last few years. Nevertheless, the very basic premises are still valid and are detailed next.

The basic model for NoC reuse is depicted in Fig. 4.1. Test sources and sinks are assumed to be off-chip, and the connection between them and the embedded cores is implemented through the on-chip network using the available functional interfaces that are connected to the NoC. Since the NoC provides a physical connection among all cores attached to it, one can assume that there is a path between any chip-level interface connected to the NoC and any IP block connected to the NoC as well. Thus the test source (sink) can send (receive) a test vector (response) through any chip-level NoC interface. During test, embedded cores are assumed to be in a test mode whereas the network must be in normal operation mode. The network resources and protocol are assumed to be used "as is", i.e., the NoC design and implementation are assumed to be guided only by the system functionality, and the test solution merely uses this resource. Of course, to be used during the test of the cores, the network must have been tested previously. Although NoC testing is a prerequisite for core testing, we have inverted the presentation order for didactic reasons; the test of the NoC itself is discussed in the third part of this book. For now, let us just assume the network is fault-free, or that only the fault-free structures of the NoC are available to be reused as TAM.

4.1.1 Test Packets

As discussed in Chap. 2, the network implements a specific communication protocol where each message is framed by a header and a tail that contain the necessary information to establish the connection between two nodes in the network. Therefore, to transmit test data through the NoC, one must format this data in the form of messages or packets.

Fig. 4.1 Basic models for NoC reuse as TAM

As detailed in Chap. 2, a single message can be split into packets to improve performance, but both mechanisms follow the same communication protocol. Thus, for instance, one can think of a single message from the test source to the core, containing all test vectors of that core. Similarly, a single message with all respective test responses can be assembled and sent from the core to the test sink. At the other extreme, one can think of several test packets traversing the network, each one containing a single or a few test vectors or test responses. In any case, one must add to the test data the communication information required by the NoC protocol, i.e., a header and a tail. In this chapter we use the term test packet to refer to either a message or a packet containing test data (vectors or responses).

A very simple test packet format was used in the first NoC-based TAM approaches. The reason for this simple format was to allow the reuse of both the network and the original NI without modifications. The test packet format proposed by Cota et al. (2003b) is shown in Fig. 4.2, and one can observe the presence of two headers: one with the communication information required by the NoC protocol (target node, for instance), and a second one with test control information.

Figure 4.3 shows an example of how each test bit is assembled in this packet format and fed to the CUT. The packet header handles the network protocol to establish the path to the core (in a packet containing a test vector) or to the test sink (in a packet containing the test response). When the packet arrives at the CUT, the network interface discards the header and transmits the test header and the payload to the core. However, the packet header must indicate that it is a test packet so that the NI becomes ready to receive the test header. The test header brings test control information for the core. For instance, test wrapper control bits and/or test enable signals are loaded at this time. The test header can be split into multiple flits (or words), depending on the core and/or test wrapper operation. As far as the network is concerned, the test header is actually part of the packet payload and will be transmitted to the core with no further interpretation. Finally, the actual test data forms the packet payload. In the basic packet format, wrapper scan chains (WSC) are assumed to be defined in the test wrapper and to be activated by the previously delivered test header. Furthermore, one assumes that at most W wrapper scan chains are defined for each CUT, for W the NoC channel bitwidth.

Fig. 4.2 Basic test packet format

In the example of Fig. 4.3, there are five wrapper scan chains and the longest chain has four bits. Under these assumptions, the payload is assembled in such a way that each payload flit contains one bit of each WSC. This means that the NI can shift one bit into each wrapper scan chain per flit, in accordance with the scan-in and scan-out operations. Furthermore, the length of the test packet is proportional to the length of the longest WSC of the CUT. Notice that most NoCs use the wormhole switching approach, where payload flits follow the header and each flit is usually routed in one clock cycle (De Micheli and Benini 2006). Finally, the tail flit may optionally bring additional test control bits (for instance, a test application enable signal).

A packet containing a test response has the same format, except that the test header can be excluded. On the other hand, one may use a test header in the test response packet to transmit additional information to an on-chip test sink; a similar reasoning applies to the tail flit of the test response packet. Furthermore, in this basic packet format it is implied that the test response packet is assembled at the network interface during the scan-out operation.
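To make the packet format concrete, the following sketch assembles a test packet along the lines of Figs. 4.2 and 4.3: a packet header, a test header, one payload flit per scan cycle carrying one bit of every wrapper scan chain, and a tail. The header and tail contents used here (target, wrapper_config, apply_test) are hypothetical placeholders rather than an actual NoC protocol.

```python
def build_test_packet(wsc_bits, target, W=32):
    """Assemble a test packet: packet header, test header, then one
    payload flit per scan cycle, each flit carrying one bit of every
    wrapper scan chain (shorter chains are padded with zeros)."""
    depth = max(len(chain) for chain in wsc_bits)  # longest WSC
    packet = [("packet_header", target, "test"),   # routing info + test flag
              ("test_header", "wrapper_config")]   # wrapper control bits
    for i in range(depth):
        flit = [chain[i] if i < len(chain) else 0 for chain in wsc_bits]
        flit += [0] * (W - len(flit))              # unused channel bits
        packet.append(tuple(flit))
    packet.append(("tail", "apply_test"))          # optional test control bits
    return packet

# Five hypothetical wrapper scan chains, the longest with four bits (Fig. 4.3).
chains = [[1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0], [0, 0, 1], [1, 0]]
print(len(build_test_packet(chains, target=(2, 3))))  # 7 flits in total
```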

4.1.2 Network Interface and Test Wrapper

Cores are connected to the network by means of a network interface (NI) which translates the core communication protocol to the network protocol. In addition, as discussed in Chap. 3, each core is assumed to have an IEEE Std 1500 compliant test wrapper to enable the system test. In the basic NoC reuse approach, only the basic test wrapper functionalities are used. Figure 4.4 depicts the connection between the network, NI, test wrapper, and CUT in normal operation mode, whereas Fig. 4.5 shows the active connections during test configuration (Fig. 4.5a) and test application (Fig. 4.5b).

Fig. 4.3 Basic test packet format: (a) test packet, (b) wrapper scan chains


During normal operation mode, the test wrapper is transparent and the core is connected to the network through the NI. When the header of a test packet arrives, the NI is reconfigured to receive the test header and load the wrapper control data. Finally, the NI unpacks the payload bits and connects the defined wrapper scan chains accordingly. Notice that in this configuration, the bypass mode of the test wrapper is not used. However, one may opt to set the test wrapper control bits directly from the test controller. In this case, the bypass register may be used to reduce the configuration cost.

Wrapper scan chains are defined for each CUT using the same algorithms devised for the traditional bus-based TAM model. Iyengar et al. (2002) proposed an algorithm called Design Wrapper that defines an optimal wrapper for the I/O terminals and internal scan chains of a core, such that the core testing time is minimized. The algorithm is based on the Best Fit Decreasing heuristic for the Bin Packing problem. It receives as input the available TAM bitwidth W for that core and provides the optimized wrapper scan chains that minimize the core test time for that TAM. Wrapper scan chains are formed by the original scan chains of the core (possibly combined). Functional I/O terminals of the core are also included in the wrapper scan chains if necessary. If a core has no internal scan chains, I/O terminals can also form one or more wrapper scan chains to adjust to the available TAM bitwidth. The testing time of a core can then be calculated as a function of the defined wrapper scan chains, as shown in Eq. 4.1 below, where p is the number of test patterns defined for the core and s_i (s_o) is the length of the longest scan-in (scan-out) wrapper chain defined for the core.

$T_{core} = (1 + \max(s_i, s_o)) \cdot p + \min(s_i, s_o)$  (4.1)
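To make the wrapper design concrete, the sketch below mimics the Design Wrapper idea with a simple greedy assignment in the spirit of Best Fit Decreasing and then evaluates Eq. 4.1. It ignores functional I/O terminals and assumes identical scan-in and scan-out chains; the chain lengths and pattern count are hypothetical.

```python
def design_wrapper(scan_chains, W):
    """Greedily pack internal scan chains into at most W wrapper scan
    chains: longest chain first, always placed on the currently
    shortest wrapper chain (a Best-Fit-Decreasing-style heuristic)."""
    wrappers = [0] * min(W, len(scan_chains))
    for length in sorted(scan_chains, reverse=True):
        wrappers[wrappers.index(min(wrappers))] += length
    return wrappers

def core_test_time(wrappers, p):
    """Eq. 4.1 with scan-in and scan-out chains assumed identical."""
    si = so = max(wrappers)
    return (1 + max(si, so)) * p + min(si, so)

chains = [130, 120, 110, 100, 90, 80, 60, 50, 40, 30]  # hypothetical core
for W in (4, 8, 16):
    wsc = design_wrapper(chains, W)
    print(W, max(wsc), core_test_time(wsc, p=100))
```

As expected, once the TAM is wide enough that the longest internal scan chain dominates, further widening no longer shortens the longest wrapper chain and hence no longer reduces the test time.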

Fig. 4.4 NI and test wrapper connection during normal operation mode


The DesignWrapper algorithm can also be used for the NoC-based TAM model. In this case, the NoC channel bitwidth W is the available TAM bitwidth for all cores connected to the network. Thus, for instance, if the NoC implements communication channels of 64 bits, each core can assume a 64-bit TAM and can have at most 64 wrapper scan chains defined. On the other hand, some cores may need fewer than W wrapper scan chains; test packets for these cores will have some unused bits in each flit. Usually, for scan-based cores, the packet size in number of flits is the same for the test vector and for the test response. For non-scan-based cores, these numbers may differ, since they depend only on the number of functional inputs and outputs of the core. For example, Table 4.1 shows the number of packets and the packet size (in number of flits) for benchmark p22810 (Marinissen et al. 2002) considering a 32-bit communication channel.
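The packet accounting behind Table 4.1 follows directly from these definitions: each test pattern yields one vector packet and one response packet, and each packet's payload has one flit per bit of the longest wrapper scan chain. A minimal sketch (header and tail flits are not counted here):

```python
def packets_for_core(p, longest_wsc):
    """2*p test packets per core (one vector plus one response per
    pattern); the payload size in flits equals the length of the
    longest wrapper scan chain."""
    return 2 * p, longest_wsc

# Core 1 of p22810: longest wrapper scan chain of 130 bits; the 1,570
# packets listed in Table 4.1 thus correspond to p = 785 patterns.
print(packets_for_core(p=785, longest_wsc=130))  # -> (1570, 130)
```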

Fig. 4.5 NI and test wrapper connection during test application: (a) test configuration mode, (b) test application mode


4.1.3 Interface with External Tester

After defining the number of wrapper scan chains and organizing the test data into packets, let us consider the transmission of the test packets between an external test controller and the cores. Let us call external tester the set of off-chip test sources and sinks, which can be implemented as a single ATE or a combination of structures. From the previous discussion one can assume that each flit in the test packet is W bits wide, for W the channel bitwidth of the NoC. Let us assume for now that the interface between the external tester and the NoC is done through the functional system interface and is exactly W bits wide, as shown in Fig. 4.6. From the figure one can observe that the wrapper connected to a system interface core implements the test interface. One can either choose a few interface cores so that each interface forms a W-bit port with the tester, as shown in Fig. 4.6, or add extra pins to each original system interface so that W-bit ports are available.

Table 4.1 Example of test packets for p22810 benchmark

Core   Number of test packets   Packet size (32-bit channel)
1      1,570                    130
2      24,648                   2
3      6,216                    2
4      444                      2
5      404                      214
6      1,424                    3
7      5,264                    3
8      5,216                    2
9      350                      122
10     76                       99
11     188                      88
12     186                      82
13     2                        104
14     216                      73
15     74                       80
16     16                       109
17     50                       89
18     1,288                    68
19     116                      43
20     248                      77
21     930                      186
22     118                      77
23     80                       115
24     54                       101
25     430                      181
26     362                      400
27     4                        34
28     52                       100


4.2 Preemptive Test Scheduling

The test scheduling approaches presented in this chapter are explained for mesh-based NoCs (grid and torus topologies) using deterministic routing, input buffering, and wormhole switching. However, the algorithms are generic and can be easily adapted to other topologies, as long as deterministic routing and input buffering are used.

All cores connected to a NoC can communicate with each other. Thus, one can use any defined test interface to connect a single core to the external tester. A NoC with N input test ports provides N test input paths to each core. Similarly, M output ports provide M possible test output paths for each core. Figure 4.7 shows an example of a 4 × 4 grid NoC with two test input ports and two test output ports defined. In the figure, each square represents a node, i.e., a router and its associated IP block. Each core therefore has two possible input paths and two possible output paths to communicate with the external tester (Fig. 4.7a), and one can test two cores simultaneously in this network configuration, as shown in Fig. 4.7b. In the figure, we assume the XY routing algorithm is used (De Micheli and Benini 2006).

The number of system interfaces used during test defines the initial number of paths that can be used in parallel to transmit test data. However, if there is more than one interface between the system and the external tester, the order in which the cores are tested is important, since different paths will be used for each core depending on the system input/output available at that time. Consequently, each order leads to different conflicts over the network resources and results in different system test times. Furthermore, the access path length (input and output) depends on the test interface chosen for each core and may influence the core test time.

It is easy to observe, however, that by testing only two cores simultaneously one leaves most NoC resources idle. In this example, around 50% of the NoC resources (channels and routers) remain unused while only two cores are tested in parallel. On the other hand, parallel testing of multiple cores is key to reducing the system test time, which is, as mentioned before, one of the most important test costs.

Fig. 4.6 Connection with the external test controller during test


Furthermore, the NoC structure was originally proposed to establish multiple communications at a time. The main difficulty in exploiting this parallelism during test is the use of an external tester, since access to this off-chip resource is limited by the number of available system interfaces. In this scenario, the test of embedded cores can be modeled as a traditional dedicated path scheduling problem (Cota and Liu 2006), where NoC routers, channels, and interfaces are resources, and test packets are processes using these resources for a certain period of time. A solution to this problem must find the best association between packets and resources in such a way that the overall system test time is minimized.

The preemptive test scheduling algorithm presented next was defined to maximize the usage of NoC resources and test as many cores in parallel as possible, with the main goal of reducing the system test time. The algorithm, presented in Fig. 4.8, relies on the natural latency for packet delivery to share system interfaces among distinct cores. The placement of the cores in the network is assumed to be previously defined by the application and is an input to the test scheduling heuristic. The algorithm receives as input the list of cores to be tested and, for each core, the pertinent test information: number of wrapper scan chains (wsc), length of the longest chain (L), and number of test vectors (p).

Initially, test vectors and test responses of all cores are defined as test packets with one vector or response per packet. Then, for each core, all possible input and output paths are defined and sorted in increasing order by path length (number of routers between the interface and the core).

Fig. 4.7 Test access paths in a NoC with two input and two output ports: (a) multiple access to the CUT, (b) test parallelism

A router takes a few cycles to process a packet header and forward the packet either to the core (through the network adapter) or to a router output port. During this time, incoming flits are stored in an input buffer and follow the header in a wormhole approach. When the target core (CUT) is reached, the header is discarded and the remaining flits are passed to the associated network interface and fill in the wrapper scan chains, as explained in Sect. 4.1.2. Equation 4.2 defines the time T_P required to transmit a packet through a path. In the equation, T_R indicates the number of cycles necessary to process the header in each router, N_R is the number of routers in the path, T_H indicates the number of cycles required to pack or unpack a header, and P is the payload size, i.e., the number of flits of the payload. For each packet, two extra cycles are required by the core and its wrapper to process the test and be ready to pack and deliver the response packet.

$T_P = 2 \cdot T_H + T_R \cdot N_R + P$  (4.2)
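Equation 4.2 reads directly as a small helper; the parameter values below are hypothetical:

```python
def packet_time(T_H, T_R, N_R, P):
    """Eq. 4.2: cycles to pack and unpack the header (2*T_H), traverse
    N_R routers (T_R cycles each), and stream the P payload flits."""
    return 2 * T_H + T_R * N_R + P

# Hypothetical: 2-cycle header (un)packing, 3 cycles per router,
# a 5-router path, and a 130-flit payload.
print(packet_time(T_H=2, T_R=3, N_R=5, P=130))  # -> 149 cycles
```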

There are two precedence rules in this problem formulation. The first one is related to the test responses. A packet containing a test response can only be sent after the corresponding vector is completely received and processed by the core. In traditional scan-based test schemes, test time is minimized by performing parallel scan-in and scan-out operations. That is, while a test response is extracted from the scan chain of the core, a new test vector is being injected in the chain. In the network, this parallelization is only possible if an input payload (test vector) is received by the core at the same time this core delivers an output payload (test response). For this to happen, the new vector packet must be delivered by the tester several cycles before the core finishes the processing of the previous vector. Indeed, a new packet can be sent as early as an input path becomes available, as long as the response of the previous vector is not blocked. Thus, the scheduling of each packet must consider the possibility of conflicts and subsequent blocking of the output path, to avoid loss of data.

In the proposed approach, each channel in the network is assigned a time tag indicating when the channel is free to be used by a new packet. With this information, it is possible to schedule the next vector packet of a core as soon as one path is available, provided that the packet will not arrive at the core interface before the response payload can proceed in the network. This strategy combines the network and the core parallelism while ensuring that the internal scan chains are not overwritten.

Fig. 4.8 Preemptive test scheduling algorithm for NoC reuse

The second precedence rule is more general and deals with the priority of use of a given path. One can define that cores with a larger number of packets and larger packet sizes have priority to use shorter paths, to reduce test time. The idea is to associate the shortest path with the most expensive core, to minimize its test time. Thus, cores are initially sorted in decreasing order by test volume.

Each packet of each core must be transmitted from the external tester to the core or vice-versa. Furthermore, the original buffer structure of the router, designed according to the functional requirements, is reused with no modifications. The scheduling algorithm ensures the existing buffers are sufficient, since all conflicts are statically resolved.

With these definitions, a variation of the list-scheduling algorithm (Gerez 1998) can be implemented as shown in Fig. 4.8.

In the algorithm, data structures are set in lines 1–7. A list of all packets that need to be scheduled is created (line 1) and populated with the test packets defined for each and every core (line 4). For each core in the system, the number of wrapper scan chains, the length of the longest chain, and the minimum test time are defined using the DesignWrapper algorithm (Iyengar et al. 2002) described in Sect. 4.1.2. For p_i the number of test patterns for core i, a total of 2·p_i test packets is defined for each core (following the model presented in Sect. 4.1.1) and included in the UTP list. Procedure DefineAccessPaths, in line 5, returns two sorted lists with all possible input and output access paths from the system test interfaces to each core. The lists are ordered by path length so that shorter paths are tried first to reduce test time. The UTP list is sorted (line 8) in decreasing order of core test time, in such a way that all packets of a core are listed before any packet of the next core. At any given time, only the first packet of each core in the UTP list is enabled to be scheduled. The enable signal is the delivery time indication (DT_i) associated to each core. Initially, the first test vector of all cores is enabled (line 6). When a vector packet is scheduled, the next packet chosen for a core is the corresponding response packet for that vector, and the delivery time for that packet is set to the test cycle after the vector application in the core, which is calculated by Eq. 4.2. When a response packet is scheduled, the next vector of that core is enabled and the delivery time is set in such a way that the previous test response is not overwritten (line 22). Additional precedence requirements among tests of a single core can be accommodated in the algorithm by changing the order of the packets in the UTP list and adapting the procedure that selects the next packet to be scheduled (line 11).
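As an illustration of the list-scheduling idea, the sketch below schedules the packets of each core on time-tagged access paths, using Eq. 4.2 for the packet duration. It is a deliberately reduced model: channel-level conflicts, the vector/response pairing, the power checks, and the slot structure of the real algorithm in Fig. 4.8 are all omitted, and the core and path data are hypothetical.

```python
def preemptive_schedule(cores, paths, T_R=3, T_H=2):
    """Schedule every packet of every core on the first free access
    path; each path carries a time tag telling when it is free again.
    Cores with the largest test volume are served first."""
    free_at = {pid: 0 for plist in paths.values() for pid, _ in plist}
    schedule = []
    order = sorted(cores, reverse=True,
                   key=lambda c: cores[c]["packets"] * cores[c]["payload"])
    for core in order:
        t = 0                                   # delivery time of next packet
        for k in range(cores[core]["packets"]):
            # pick the path that can start this packet the earliest
            pid, n_r = min(paths[core], key=lambda p: max(free_at[p[0]], t))
            start = max(free_at[pid], t)
            dur = 2 * T_H + T_R * n_r + cores[core]["payload"]  # Eq. 4.2
            free_at[pid] = t = start + dur
            schedule.append((start, start + dur, core, k))
    return schedule

cores = {"c1": {"packets": 4, "payload": 100},   # hypothetical test data
         "c2": {"packets": 4, "payload": 40}}
paths = {"c1": [("in1", 3), ("in2", 5)],          # (path id, routers in path)
         "c2": [("in1", 4), ("in2", 2)]}
for entry in preemptive_schedule(cores, paths):
    print(entry)
```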

The schedule is defined as a set of time slots of different sizes as shown in Fig. 4.9 for a fictional system composed of nine cores. In the figure, different line patterns show the access paths used for each packet.

The size of a time slot is measured in number of clock cycles. Each slot contains a set of packets being transmitted, and the end of a slot indicates either the completion of a test or the beginning of the transmission of a new packet. One packet transmission may be distributed among several slots, and slots can be modified as the schedule is being defined. Thus, for example, one slot can be broken up into two others to include a new packet that starts in the middle of the original slot.

The structure of the schedule is initialized as a single time slot starting at cycle zero and with no ending time, and the initial slot for scheduling (CurrentTimeSlot) is set to zero (line 9 in the algorithm of Fig. 4.8). CurrentTimeSlot represents a range [t_k .. t_l] of clock cycles, and the algorithm processes the time slots in increasing order (starting at zero), scheduling as many operations as possible in a certain slot before moving to the next (line 13 in Fig. 4.8). When no free access path can be found for an enabled packet of a core, the delivery time of the core is updated with the time tag of the first free path for that core (line 17). When no more packets can be scheduled in the current slot, CurrentTimeSlot is updated to the slot that contains the smallest delivery time among all cores.

Figure 4.10 shows benchmark p22810 implemented in a 4 × 6 grid NoC. Nodes are labeled by the module number according to the benchmark description (Marinissen et al. 2002) and unlabeled nodes represent routers with no core associated to them. Notice that this is a hierarchical benchmark, where some cores are embedded into other cores. In this example, only the higher-level cores are connected to the NoC and the test packets of the embedded cores are added to the test set of the super core. The location of the cores in the network as well as the network dimensions were randomly defined, as one has no information about the functional connection for the ITC'02 SoC Test Benchmarks.

Fig. 4.9 Example of the resulting preemptive test schedule

The functional system-level interface of benchmark p22810 is composed of 10 input pins, 67 output pins, and 96 bidirectional pins. Thus, one can have, for instance, three 32-bit test input ports and three 32-bit test output ports, as shown in Fig. 4.10, to interface with the external tester. Other test interface configurations are possible and may impact the system test time, as detailed in Cota et al. (2004). Applying the PreemptiveNoCScheduling algorithm of Fig. 4.8 to this example, the resulting test time is 280,241 clock cycles.

Cota et al. (2004) discuss how the system test time varies with different system configurations. For instance, the topology (grid or torus) and the dimensions of the network, as well as the placement of the cores in the NoC, define the average length of the access paths, thus affecting test time. However, for mesh-based NoCs, one can expect a small influence of the access path length on the core test time. Indeed, the maximum distance between the tester and a CUT is X + Y routers, for X and Y the dimensions of the network. Assuming NoCs with square dimensions (X = Y) and applying this worst-case path length in Eq. 4.2, we have

$T_p = 2 \cdot T_H + T_R \cdot (N_R + 2X) + P = 2 \cdot T_H + T_R \cdot N_R + P + 2 \cdot T_R \cdot X = T_P + 2 \cdot T_R \cdot X$

T_R is typically very small, so the increase in the original transmission time of a packet is in the order of a dozen cycles, which is negligible when compared to the transmission time of most test packets.

The number of test interfaces, on the other hand, is the main driver for test time reduction. Figure 4.11 shows the test time reduction for two benchmarks when the number of test interfaces increases.

Fig. 4.10 Benchmark p22810 implemented in a 4 × 6 NoC


Considering the high level of test parallelization achieved by the network reuse, the system power constraint is another variable that may affect the system test time. To deal with this requirement the basic algorithm must be adapted, as explained next.

4.2.1 Power-Aware Test Scheduling

As in any test scheduling problem, power dissipation during test is an important issue that must be considered in the solution. When the NoC is used as TAM, there are three sources of power consumption besides the core itself: the network interface, the router, and the communication channel. Cota et al. (2003a) presented a model to evaluate the power dissipation of the system during test and include this data in the test scheduling algorithm. According to that model, Eq. 4.3 gives the dynamic consumption per cycle of a network router for the transmission of a single packet. In the equation, C_L is the load capacitance (a technology-dependent constant), T is the clock period, and s is the switching factor. Variables n_ff and n_gt represent the number of active flip-flops and gates in the router, respectively, when one packet is being routed, while s_ff and s_gt are the switching factors for flip-flops and gates, respectively. Notice that, for the flip-flops, there is a constant switching factor caused by the clock in addition to the eventual switching in the stored bit value.

$P_{router} = \frac{1}{T} \cdot C_L \cdot Vdd^2 \cdot (n_{ff} \cdot (1 + s_{ff}) + n_{gt} \cdot s_{gt})$  (4.3)

Equation 4.4 evaluates the power consumption of a communication channel in the NoC structure. The load capacitance of the channel is given by the product of the number of wires in the channel (ch_w), the length of the channel (ch_l), and the width of the wire (wire_w). Variable s_w is the switching factor for the wire. In this model, all channels are assumed to have the same length, although this may not be the case in the actual implementation of the communication platform. As the power consumption is calculated per cycle, the size of the packet being transmitted is not important.

$P_{channel} = \frac{1}{T} \cdot C_L \cdot Vdd^2 \cdot ch_w \cdot wire_w \cdot ch_l \cdot s_w$  (4.4)

Fig. 4.11 Test time reduction with increase in the number of test interfaces

The total power consumption for a packet transmission is calculated according to the path established in the network for that packet: for each active router and each active channel in the path, the router and channel consumptions are added, as shown in Eq. 4.5.

$P_{packet} = n_{routers} \cdot P_{router} + n_{channels} \cdot P_{channel}$  (4.5)

The power consumption of a core during test depends on the core logic, on the test vectors, and on the order of application of the patterns. Thus, the model assumes this information is part of the test information that accompanies the IP block. Moreover, as the front-end of a network interface is usually developed for a specific core, the model assumes the core power information includes the dissipation of this part of the wrapper, while P_router includes the dissipation of the NI back-end. Finally, the peak power consumption for receiving (wrapper consumption) and processing (core consumption) a single test vector is considered, that is, the power information P_core assumed for each core and its wrapper corresponds to the power dissipation of the test pattern with the highest dissipation for that core. Moreover, the power per cycle is considered, so that it becomes independent of the test frequency.

The power dissipation per cycle is considered in the test scheduling algorithm by assigning this information to each time slot of the test schedule. For each slot s, the total power dissipation is calculated by Eq. 4.6, where n is the number of packets being transmitted during this time slot and c is the number of cores being tested in this slot. Notice that a slot may have more than one packet being transmitted to/from the same core i. In this case, the power consumption of core i (test processing) is counted only once in the slot power consumption. This is because core i can actually process only one vector at a time and the power profile of the core is assumed to be the peak consumption among all vectors. The scheduling of two or more packets related to the same core in the same time slot means, therefore, that the packets are traversing the network, but only one of them is actually being processed by the core.

$P_{total}(s) = \sum_{1 \le j \le n} P_{packet}(j) + \sum_{1 \le i \le c} P_{core}(i)$  (4.6)

The system power limit must be respected at each time slot s, that is, $P_{total}(s) \le P_{max}$, where P_max is the power budget of the system. Thus, the test scheduling algorithm of Fig. 4.8 is adapted to calculate the total power required to transmit a packet before scheduling this packet. If the addition of this value to the total power consumption of the slot does not exceed the power consumption limit P_max defined for the system, that packet can be scheduled. Otherwise, the packet is set to be scheduled later. The new algorithm is shown in Fig. 4.12.
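The power model of Eqs. 4.3–4.6 translates directly into code. In the sketch below, the capacitance C_L (taken here as a per-unit constant also reused for the channel wires) and all counts and switching factors are hypothetical illustrative values.

```python
def p_router(C_L, Vdd, T, n_ff, s_ff, n_gt, s_gt):
    """Eq. 4.3: per-cycle dynamic power of a router forwarding a packet."""
    return (1 / T) * C_L * Vdd**2 * (n_ff * (1 + s_ff) + n_gt * s_gt)

def p_channel(C_L, Vdd, T, ch_w, wire_w, ch_l, s_w):
    """Eq. 4.4: per-cycle power of one channel (capacitance from the
    number of wires, wire width, and channel length)."""
    return (1 / T) * C_L * Vdd**2 * ch_w * wire_w * ch_l * s_w

def p_packet(n_routers, P_r, n_channels, P_c):
    """Eq. 4.5: one packet = its active routers plus active channels."""
    return n_routers * P_r + n_channels * P_c

def slot_fits(packet_powers, core_powers, P_max):
    """Eq. 4.6 plus the budget check; each core is counted only once."""
    return sum(packet_powers) + sum(core_powers) <= P_max
```

A packet is admitted to the current slot only while slot_fits(...) remains true; otherwise its delivery time is pushed to a later slot, as in the algorithm of Fig. 4.12.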

The scheduling of a packet can be delayed either because of the power related to the access path (P_packet) or because of the power related to the core itself (P_core). In the first case, another packet of a core that is already being tested is tried, and P_packet exceeds the power budget of the current slot for all possible access paths. In the second case, a packet of a new core is tried and P_core alone exceeds the power budget of the current slot. These two situations are shown in Fig. 4.13a, where packet v4 of core 6 (dashed light gray form) and packet v1 of core 8 (dotted darker gray form) cannot be scheduled. In both cases packets are delayed and the test time of the associated core may increase. However, the algorithm tries to schedule packets of other cores, thus reducing the impact on the overall system test time. This is exemplified in Fig. 4.13b, where packet v1 of core 3 is scheduled in the slot not occupied by core 6.

Fig. 4.12 Power-aware preemptive test scheduling algorithm for NoC reuse

Figure 4.14 shows the results of the power-aware test scheduling for benchmark p22810. In this example, the power dissipation of the cores is assumed to be at least two orders of magnitude higher than the power dissipation of the network resources (routers and channels). Three different power budgets for the system are considered. The system power budget is defined as a percentage of the sum of the power dissipation of all cores during test. Thus, for example, a power limit of 50% indicates that the system power budget corresponds to half of the sum of the power dissipation of all cores in test mode. Notice that in a real case, the designer can define any power limit.

From Fig. 4.14 one can observe that only very stringent power budgets affect the system test time. The example also shows that the more interfaces with the tester, the higher the impact of the power constraints on the system test time (Cota et al. 2004). In fact, the power limit may preclude the total usage of the network resources, thus increasing test time. If fewer interfaces are available, the parallelism is already restricted and the test time is not deeply affected.

Fig. 4.13 Power-aware preemptive test scheduling example: (a) power budget achieved by either a packet or a core, (b) core with lower power consumption scheduled

As a first approach for NoC reuse during test, the preemptive test scheduling showed that the NoC can be a cost-effective solution as test access mechanism, leading to test times comparable to the ones derived from dedicated bus-based TAMs without the burden of designing an additional communication infrastructure in the system. The preemptive scheduling assumes, on the other hand, a NoC perspective for testing, that is, the algorithm sees the test packets as another application for the NoC, only resolving the possible traffic conflicts beforehand to reduce the test time. Indeed, the NoC is typically designed for an expected functional communication pattern among the IP blocks. The communication graph of the final application is the basic information to define channel bitwidth, network dimensions, and placement of the cores in the NoC (De Micheli and Benini 2006). The challenge is that the actual communication (number and size of messages) cannot be completely predicted. On the other hand, for the "test application", these details are known a priori, which justifies a static scheduling of the "tasks" to improve network performance in the interest of the application, i.e., reducing test time. Despite the good results, this approach has three important limitations, two of them related to the algorithm itself and one related to the test requirements of the IP blocks, as discussed next.

Fig. 4.14 Power-aware preemptive test scheduling for p22810

The scheduling algorithm is a greedy approach and the result depends on the initial order of the packets (line 8 in Fig. 4.12). It is possible, therefore, that better test times could be achieved if packets were considered for scheduling in a different order. Thus, a permutation of the initial list of packets could allow a better exploration of the scheduling search space. However, the level of granularity of the packets makes this further exploration very expensive. As discussed in Chap. 1, packet-based networks present better resource utilization than message-based ones, because packets are shorter and reserve a smaller number of channels during their transfer. Since test was considered as another application to be executed in the network, NoC usage was the focus. Hence, test data was divided into small packets to improve network usage. However, in this model, the total number of test packets for a system is normally huge, as is the number of test vectors of an SoC. For instance, for benchmark p22810 more than 50,000 packets were defined. The static scheduling of each packet implies a cycle-accurate evaluation of the NoC resources and, as a result, slots with only a few clock cycles are defined in the resulting schedule. This also makes the verification of access path availability, power budget, and other constraints more expensive. Thus, the number of scheduling units should be reduced, i.e., coarser test packets should be used, to allow the definition of simpler data structures and a better exploration of the search space.

In terms of test requirements of the IP blocks, if a core allows preemptive testing, each test vector or the corresponding response for this core can be delivered as an individual packet using any available path. However, preemptive testing is not always possible in practice, especially for BIST and sequential core test (Iyengar and Chakrabarty 2002). If the core does not allow preemptive testing, all test vectors must be delivered without interruption.

In addition, it is always desirable that the test pipeline of the core is not interrupted, i.e., the nth test vector is shifted into the scan chains while the (n-1)th test response is shifted out. Even though the scheduling algorithm discussed in Sect. 4.2 tries to keep the core pipeline, this cannot be ensured. Thus, in case of preemption, the test pipeline has to be halted if either the test vector or the test response cannot be scheduled due to the unavailability of test resources, i.e., channels and input/output ports. This may not only increase the complexity of the wrapper control, but also cause a potential increase in test time. The non-preemptive test scheduling approach explained below was devised to deal with these limitations of the first solution.

4.3 Non-preemptive Test Scheduling

In the non-preemptive test scheduling, test packets are considerably larger than the original test packets and can actually be called test messages. Each message contains all test vectors/responses of a CUT that must be applied without preemption. Thus, an access path must be available for longer times in order to allow a message to be scheduled. Moreover, the message containing the test responses must be delivered synchronously with the incoming vector message, so that the core pipeline is kept. This means the algorithm must ensure that both input and output access paths are available during the same period of time, and schedule a pair of messages (the incoming and the outgoing message) together.

The format of the test messages is very similar to the one defined in Sect. 4.1.1. The only difference is that now all vectors/responses follow the same header and there may be a test control flit between two test vectors, as shown in Fig. 4.15. Furthermore, the NI must be adapted to deal with the test control flit in the middle of an incoming packet.

To ensure the simultaneous scheduling of both the test vector and its associated response, the scheduler assigns each core a complete input-CUT-output routing path, that is, the path includes an input port, an output port, and the corresponding channels that transport the test vectors from the input to the core and the test responses from the core to the output. Once the core is scheduled on this path, all resources (input, output, channels) on the path are reserved for the test of this core until the entire test set is completed. Test vectors are routed to the core and test responses to the output in a pipelined fashion, and the flow control in the network becomes similar to circuit switching.


Notice that in this approach, the number of cores being tested in parallel is limited by the number of available input/output pairs, since the whole path is reserved and used for a given core at a time. Let us assume, for instance, a system with four test interfaces (inputs 1 and 2 and outputs 3 and 4) as shown in Fig. 4.16. In this system, one can define four I/O pairs (1/3, 1/4, 2/3, and 2/4) and, for each pair, one can define a unique access path to each CUT. However, at most two paths will be active at any given time, as depicted in the figure, because the I/O ports are shared. In fact, this is very similar to the traditional modular testing approaches, and the NoC is actually seen as a combination of the multiplexing and distributed test bus architectures: each core is served by a dedicated W-bit bus (the unique path connecting a core to a pair of I/O ports) and several busses share a system interface. Thus, the test scheduling algorithm must efficiently assign input/output pairs to cores without resource conflicts, such that the overall test time is minimized (Liu et al. 2004).

The pseudocode of the non-preemptive test scheduling algorithm is sketched in Fig. 4.17 (Liu et al. 2004). The algorithm also has a greedy behavior in the sense that it sorts the cores in decreasing order of test time and tries to assign test-critical cores to the first available I/O pair. However, since a whole test set is considered instead of every single packet, a much smaller solution space is created. As a result, a significantly larger portion of the solution space can be explored by the algorithm, and the result can be better optimized. This exploration is implemented by the permutation of both the list of cores (line 33) and the combination of inputs and outputs in pairs (line 5), as detailed next.

Fig. 4.15 The test message in the non-preemptive test scheduling

Fig. 4.16 Example of dedicated test access paths in the NoC

Fig. 4.17 Non-preemptive test scheduling algorithm

The heuristic starts by creating an ordered core list (line 4) as well as a list of I/O pairs. The order of the I/O pairs in the list is permuted and every permutation is attempted (line 5). Different permutations represent different priorities of the I/O pairs when more than one I/O pair is free to be assigned to a core.

The algorithm maintains a time tag on every resource (channels, input and output ports) indicating its availability. Once a routing path for a core is determined and allocated, all related resources are reserved for the core and the time tags are updated (lines 27 and 28). The availability of I/O paths is examined by checking their time tags. If no complete I/O path is available, the current time has to be updated to the next most recent time tag (lines 11 and 16) and the cores are attempted once again. Notice that even when an I/O pair is available, there may exist conflicts over some links in the network that are part of access paths for different cores. If there exist one or more conflicts, the next core is attempted (line 21). If all unscheduled cores have been attempted and none was scheduled, the current time must be updated. If the complete path is free, the power budget is checked using the same strategy as the preemptive approach (line 24). If the power limit is reached, another core is tried. Otherwise, the core is scheduled on the I/O pair and the corresponding resources are updated with new time tags (lines 27 and 28). The core is then removed from the list and the next core is attempted for scheduling. The whole procedure is repeated for different permutations of the cores list; the number of permutations is defined by the user (line 32).

Fig. 4.18 Non-preemptive test schedule example for benchmark d695

Page 95: Reliability, Availability and Serviceability

814.4 Multi-constrained Test Scheduling

Figure 4.18 shows the test schedule generated for benchmark d695 by one iteration of the algorithm.
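A minimal sketch of the dedicated-path idea: cores are sorted by decreasing test time and each one is granted the first input/output pair whose two ports are both free, so that with two inputs and two outputs at most two cores are under test at any time. Permutations of the pair list, channel conflicts, and power checks are omitted, and the core test lengths are hypothetical.

```python
def nonpreemptive_schedule(cores, io_pairs):
    """Assign each core one I/O pair for its entire test set; a pair is
    usable only when both its input and its output port are free."""
    order = sorted(cores, key=cores.get, reverse=True)  # critical cores first
    port_free = {}                      # time tag per input/output port
    schedule = []
    for core in order:
        def start_of(pair):             # earliest start on this pair
            i, o = pair.split("/")
            return max(port_free.get(i, 0), port_free.get(o, 0))
        pair = min(io_pairs, key=start_of)
        start = start_of(pair)
        i, o = pair.split("/")
        port_free[i] = port_free[o] = start + cores[core]
        schedule.append((start, start + cores[core], core, pair))
    return schedule

# Four hypothetical cores; inputs 1 and 2, outputs 3 and 4 (Fig. 4.16).
cores = {"c1": 5000, "c2": 3000, "c3": 2500, "c4": 1000}
for entry in nonpreemptive_schedule(cores, ["1/3", "1/4", "2/3", "2/4"]):
    print(entry)
```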

The non-preemptive scheduling uses fewer NoC resources, but keeps the core pipeline and actually results in reduced test times, as shown in Fig. 4.19 for benchmark p22810 implemented in a 32-bit grid NoC.

4.4 Multi-constrained Test Scheduling

Although the dedicated path approach reduces system test time, it suffers from a lack of flexibility, i.e., the minimum manageable unit in test scheduling is the full test application time of a core. In practice, however, it is more feasible to assume that some cores require non-preemptive scheduling to maintain their test pipelines, while others can be tested preemptively. Furthermore, both algorithms presented in this chapter focus on external tests. In practice, multiple test sets are often needed to test complex cores, e.g., cores are tested by both BIST and external test sessions. In addition, some precedence constraints may be required to impose a partial order among the tests (Iyengar and Chakrabarty 2002). For example, it is common to first apply BIST to target the random-detectable faults and then use external test to target the random-resistant faults. It may also be desirable to test the memory cores earlier because they can then be used to test logic cores. Therefore, a practical test plan should be able to take several constraints into consideration.

Fig. 4.19 Test times for benchmark p22810 for the preemptive and non-preemptive (dedicated-path) approaches (testing time in cycles versus number of I/Os: 2/2, 3/3, 4/4)


An algorithm that handles the abovementioned requirements was proposed by Cota and Liu (2006). The algorithm is based on the combination of packet-based and dedicated-path routing and takes into account multiple test sets, precedence, and power constraints. In the combined algorithm, a list of unscheduled packets is defined and each packet can carry either a single test vector/response or a complete test set. This list is initialized with the packets corresponding to the first test vector of each core, taking into consideration the precedence constraints and the different types of test sets allowed. There are three possible execution flows in the algorithm, depending on the type of the packet selected to be scheduled (a simplified sketch in code follows the list):

1. If this packet belongs to a non-preemptive test, the first available I/O path that can be used by this packet is selected. The I/O path availability implies that all channels between the input and the core, and between the core and the output, are free to be used. If there is no available I/O path for this packet, the delivery time of this packet is set to the time when the first I/O pair in the list becomes available and a packet of another core is selected. Otherwise, the duration of this non-preemptive session is determined and the power consumption of this packet is calculated. If the addition of this packet to the current time slot does not exceed the system power limit, the packet is scheduled and the time tags of all network resources in the routing path are updated. The corresponding response packet is then automatically scheduled and removed from the list of unscheduled packets.

2. If the packet belongs to a preemptive test, the shortest available path that can be used by this packet is selected and the time for transmitting this packet is determined. If there is no available path, the delivery time of the packet is set to the time when the first path in the list of possible paths for the core becomes available and a packet of another core is selected. Otherwise, the power consumption of this packet is calculated and if the power limit is satisfied, the packet is scheduled and the delivery time of the next packet for the same core is set accordingly.

3. Finally, if the selected packet refers to an autonomous BIST test session, a single flit containing the BIST enable signal and other required information for BIST (e.g. reconfiguration values for programmable LFSRs) must be sent to the core. The test application time of each BIST session is known a priori. Under these assumptions, two cases are possible for BIST engine utilization. First, if each BISTed core has its own BIST engine, the transmission of the packet is similar to that of a preemptive test. The only difference is on the definition of the delivery time of the response packet, which is set according to the duration of the BIST session. Secondly, if several BIST engines are shared among all BISTed cores, the BIST sessions are scheduled as non-preemptive tests with precedence constraints.
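The dispatch among the three flows can be sketched as below; the packet fields and the reduction of all path and power checks to a single time tag per resource are hypothetical simplifications of the algorithm described above.

```python
def schedule_packet(pkt, now, resources):
    """Select the resource and holding time according to the packet
    type: a whole I/O path for non-preemptive test sets, the shortest
    free path for preemptive vectors, and a single enable flit for an
    autonomous BIST session (the core then tests itself)."""
    kind = pkt["kind"]
    if kind == "non_preemptive":
        res, hold = "io_path", pkt["test_set_cycles"]
    elif kind == "preemptive":
        res, hold = "short_path", pkt["packet_cycles"]
    else:  # "bist"
        res, hold = "short_path", pkt["enable_cycles"]
    start = max(now, resources[res])           # time-tag availability check
    resources[res] = start + hold              # reserve the resource
    done = start + hold + (pkt["bist_cycles"] if kind == "bist" else 0)
    return start, done                         # done drives the response packet

resources = {"io_path": 0, "short_path": 0}
print(schedule_packet({"kind": "bist", "enable_cycles": 3,
                       "bist_cycles": 10000}, now=0, resources=resources))
```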

Figure 4.20 shows the results of this comprehensive approach for a sample configuration of benchmark p22810.

Fig. 4.20 Test times for benchmark p22810 for the preemptive and non-preemptive approaches, including the variants with BIST and with BIST plus precedence constraints (testing time in cycles versus number of I/Os: 2/2, 3/3, 4/4)

References

Cota E, Liu C (2006) Constraint-driven test scheduling for NoC-based systems. IEEE Trans CAD 25(11):2465–2478

Cota E, Carro L, Wagner F, Lubaszewski M (2003a) Power-aware NoC reuse on the testing of core-based systems. In: Proceedings of the international test conference (ITC), Charlotte, pp 612–621

Cota E, Kreutz M, Zeferino CA, Carro L, Lubaszewski M, Susin A (2003b) The impact of NoC reuse on the testing of core-based systems. In: Proceedings of the 21st VLSI test symposium (VTS), Napa Valley, pp 128–133

Cota E, Carro L, Lubaszewski M (2004) Reusing an on-chip network for the test of core-based systems. ACM Trans Des Automat Electron Syst 9(4):471–499

De Micheli G, Benini L (2006) Networks on chips: technology and tools. Morgan Kaufmann, San Francisco (Series in Systems on Silicon)

Gerez SH (1998) Algorithms for VLSI design automation. Wiley, Baffins Lane

Iyengar V, Chakrabarty K (2002) System-on-a-chip test scheduling with precedence relationships, preemption, and power constraints. IEEE Trans CAD 21(9):1088–1094

Iyengar V, Chakrabarty K, Marinissen EJ (2002) Test wrapper and test access mechanism co-optimization for system-on-chip. J Electron Test 18(4):213–230

Liu C, Cota E, Sharif H, Pradhan DK (2004) Test scheduling for network-on-chip with BIST and precedence constraints. In: Proceedings of the international test conference (ITC), Charlotte, pp 1369–1378

Marinissen EJ, Iyengar V, Chakrabarty K (2002) ITC’02 SoC test benchmarks. http://itc02socbenchm.pratt.duke.edu/. Accessed 23 Aug 2010

Vermeulen B, Dielissen J, Goossens K, Ciordas C (2003) Bringing communication networks on a chip: test and verification implications. IEEE Commun Mag 41(9):74–81


Chapter 5
Advanced Approaches for NoC Reuse

The test scheduling approaches discussed in Chap. 4 demonstrated that NoCs can be as cost-effective a TAM as a dedicated bus-based mechanism. Those approaches are based, however, on a single NoC model and on a few assumptions about the NoC, wrappers, and cores. Indeed, guaranteed services (GS) NoCs were assumed to meet the timing constraints of an external tester. Also, all pins at the core interface (functional and test pins) were assumed to be used during test to receive/deliver test data, and the core test frequency was assumed to be equal to the NoC operation frequency. Finally, in those first reuse approaches, the available channel bitwidth may be sub-utilized for cores with a small test interface. In this chapter those assumptions are revisited and more recent approaches that consider the NoC reuse in more detail are discussed. First, we present alternative test scheduling algorithms and wrapper models that improve channel utilization and consider additional system requirements such as the thermal budget. Then, the characteristics of the NoC communication protocol are taken into account to generate test interfaces for the external tester and test wrappers for the embedded cores. Those wrappers isolate the communication details and aim at using the available NoC bandwidth with no further assumptions. Based on these DfT structures, a test scheduling algorithm for BE NoCs with different topologies is presented.

5.1 Efficient Channel Utilization

Based on the non-preemptive test scheduling described in Chap. 4, a few approaches were proposed to improve the channel utilization and deal with additional system test constraints.

System test time is strongly correlated to the usage of the test interfaces. One of the main drawbacks when reusing the NoC for testing purposes is the limited number of test interfaces. A small number of test ports limits the possibilities of test parallelism, but there is a cost associated with the introduction of extra test ports: the circuit level



cost (extra pins) and the resource level cost (extra ATE channels for test pin monitoring). Dalmasso et al. (2008) propose the use of compression schemes to deal with the limited number of test interfaces at system level. This strategy makes it possible to increase the test parallelism without increasing the number of required ATE channels. Figure 5.1 summarizes the main concept of this approach. Test data of each core is compressed before being stored in the ATE. Compressed data is transferred from the ATE to the NoC and decompressed within the NoC. This way, the available functional pins in the system interface can be connected to additional nodes in the NoC, thus providing more test interfaces to the test scheduling algorithms.

Nolen and Mahapatra (2005, 2008) observed that a considerable NoC bandwidth is neglected during test because a single test frequency is used for the whole system, and this frequency is set by the slowest core under test (CUT). They argue that DfT structures, mainly scan circuitry, normally run at lower clock speeds compared to the operational logic. They then propose a multi-clock approach where all cores run at a single speed and the NoC runs at a faster speed (a multiple of the cores' frequency). This assumption allows the division of the NoC bandwidth into time slots, according to the speed ratio between the NoC and the cores. For instance, if the speed ratio is 2-to-1 (the NoC frequency is twice the cores' frequency), each NoC channel is assigned two time slots that can be further shared between cores during test through a Time Division Multiplexing (TDM) approach.

Fig. 5.1 Compression scheme within the NoC (test data is compressed before being stored in the ATE and decompressed on-chip, at the wrapper of each core under test)


The proposed test scheduling algorithm follows the same basic ideas of the non-preemptive test scheduling presented in Chap. 4, but it is adapted to deal with the defined time slots. NoC resources are time-tagged for each defined time slot and a list scheduling algorithm is used to find an optimized test schedule with a lower I/O cost. Indeed, the better usage of the network bandwidth can compensate for the limited number of I/O ports in reducing the overall test time. On the other hand, by keeping all cores at the same test frequency, the NoC bandwidth can still be sub-utilized by some cores due to the distinct test width requirements of each core under test, as shown next.
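As an illustration only, the sketch below shows how such a time-slot booking over NoC channels might be kept; the function and data-structure names are ours, not from Nolen and Mahapatra's work.

```python
# Hypothetical TDM slot booking for NoC channels, assuming the NoC runs
# n times faster than the cores (speed ratio n-to-1). Illustrative only.

def book_slot(slot_table, channel, n_slots, core_id):
    """Try to reserve one free time slot on `channel` for `core_id`.

    slot_table maps a channel to a list of n_slots entries; an entry is
    None while the slot is free. Returns the slot index, or None."""
    slots = slot_table.setdefault(channel, [None] * n_slots)
    for slot, owner in enumerate(slots):
        if owner is None:
            slots[slot] = core_id
            return slot
    return None  # channel fully booked in every time slot

# Speed ratio 2-to-1: every channel offers two time slots.
table = {}
print(book_slot(table, ("r0", "r1"), 2, "coreA"))  # -> 0
print(book_slot(table, ("r0", "r1"), 2, "coreB"))  # -> 1 (channel is shared)
print(book_slot(table, ("r0", "r1"), 2, "coreC"))  # -> None (conflict)
```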

The problem of different test width requirements is exemplified for benchmark p22810 (Marinissen et al. 2002) in Table 5.1. The table shows, for each core in the system (assuming a flattened system structure), the relationship between the originally required core test interface (defined for a minimal dedicated TAM width) and the actually used channel width in a NoC-based approach. For instance, according to this benchmark description, Core 1 has 10 internal scan chains and the longest scan chain has 130 flip-flops. In addition, this core has 28 functional inputs, 56 functional outputs, and 32 bidirectional pins in its interface. All those pins are used during test, and bidirectional pins are assumed to be used for both test inputs and outputs. Thus, one can assume 11 wrapper scan chains in this core test interface, since all input (or output) and bidirectional pins together still form a chain shorter than the longest internal scan chain. This is the minimal number of wrapper scan chains for Core 1, and it leads to the smallest possible test time for this core, assuming that no modifications in the internal scan chain configuration are possible.

One can observe that Core 1 sub-utilizes the channel width even if the channel is as narrow as 16 bits. This actually happens for half of the cores in system p22810 (14 out of 28 cores in Table 5.1). Cores that require test interfaces larger than the available channel width need to combine internal scan chains to form wrapper scan chains up to the limit of the available channel width. This combination may or may not lead to an increase in the size of the test packet. Let us consider, for instance, the case of Core 9. This core originally has 24 unbalanced internal scan chains, but a careful combination of those chains leads to only a small increase in the maximum scan length for a channel width of 16 bits. As another example, Core 5 presents an original configuration of 29 unbalanced scan chains, but one can find a combination of chains that preserves the original maximum chain length of 214 flip-flops even for a 16-bit channel. On the other hand, for Core 26, even a careful combination of the original internal scan chains leads to a considerable increase in the maximum scan length. This is due to the fact that the original set of internal scan chains is balanced, with most chains of similar size; thus, any combination of chains actually doubles the resulting chain length. Furthermore, one can observe that the testing time of legacy cores can decrease only up to a limit (set by the original number of internal scan chains). Thus, portions of the NoC channel width may actually remain idle during test, and this issue tends to get worse as larger channel widths are provided.
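To make the reasoning behind Table 5.1 concrete, the following sketch estimates the used channel bitwidth and the resulting maximum scan length for a core. It assumes, for illustration, ten balanced 130-bit internal chains for Core 1 (the benchmark lists each individual length) and a simple greedy partitioning, not the authors' exact procedure.

```python
# Illustrative reconstruction of the Table 5.1 reasoning, not the authors'
# exact tool: estimate the used channel width and scan length per packet.

def wrapper_config(scan_chains, n_inputs, n_outputs, n_bidir, channel_width):
    # One extra wrapper chain holds the functional pins, as long as the pin
    # count stays below the longest internal chain (the Core 1 case above).
    min_chains = len(scan_chains) + 1
    used_width = min(min_chains, channel_width)
    # Greedy longest-first partitioning of the internal chains into
    # (used_width - 1) wrapper chains; functional pins take the last one.
    bins = [0] * max(used_width - 1, 1)
    for length in sorted(scan_chains, reverse=True):
        bins[bins.index(min(bins))] += length
    io_chain = max(n_inputs + n_bidir, n_outputs + n_bidir)
    return used_width, max(max(bins), io_chain)

# Core 1 of p22810: 10 internal chains (longest 130), 28 FI, 56 FO, 32 bidir.
chains = [130] * 10  # assumption: balanced chains of the maximum length
print(wrapper_config(chains, 28, 56, 32, 16))  # -> (11, 130), as in Table 5.1
```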


Liu et al. (2005) assume NoCs with on-chip clocking and a TDM-based test scheduling to improve the test data transfer and the channel usage in a power-constrained solution. On-chip clock generation (Chickermane et al. 2001) is a DfT technique used to deal with the frequency gap between the external tester and a core under test. In this technique, on-chip circuitry (such as a PLL) multiplies the (slower) test clock signal received from the tester to generate faster test frequencies at the CUT. Liu et al. (2005) propose the use of multi-rate clocks to deal with the different test bandwidth requirements among cores. They propose a strategy to optimize the distribution of multi-rate clocks such that NoC channels are used more efficiently and the overall system test time is minimized. Power consumption of the cores is also considered in the optimization strategy, such that cores with high power consumption figures are given slower test clocks. The use of the idle channel width is based on a combination of on-chip clocking and parallel-serial conversion.

Table 5.1 Example of test interfaces and packet sizes for benchmark p22810

                         16-bit channel              32-bit channel
Core  Required test pins  Used width  Flits/packet   Used width  Flits/packet
1     11                  11          130            11          130
2     47                  16          3              32          2
3     38                  16          3              32          2
4     64                  16          4              32          2
5     30                  16          214            30          214
6     80                  16          5              32          3
7     84                  16          6              32          3
8     36                  16          3              32          2
9     26                  14          195            26          122
10    5                   5           99             5           99
11    9                   9           88             9           88
12    12                  12          82             12          82
13    6                   6           104            6           104
14    4                   4           73             4           73
15    8                   8           80             8           80
16    2                   2           109            2           109
17    7                   7           89             7           89
18    7                   7           68             7           68
19    4                   4           43             4           43
20    5                   5           77             5           77
21    11                  11          186            11          186
22    4                   4           77             4           77
23    8                   8           115            8           115
24    8                   8           101            8           101
25    19                  16          216            19          181
26    32                  16          798            32          400
27    10                  10          34             10          34
28    6                   6           100            6           100


At the test wrapper interface, the rule of feeding one scan flip-flop per clock cycle is still valid. However, the clock signal provided to the wrapper (i.e., the wrapper test frequency) is higher than the NoC test clock by a factor n, which can be defined by the on-chip clocking scheme. Thus, each flit received from the network actually carries test data corresponding to n original test flits of that core. For instance, let us use Core 15 from Table 5.1 as an example. A factor n = 2 for this core means that each test flit arriving from the network carries test data corresponding to two scan-in cycles, as depicted in Fig. 5.2. For this to happen, the wrapper of Core 15 must process the incoming flit twice as fast, and multiplexers, controlled by the faster clock, are used to select which part of the incoming flit is fed to the wrapper.

The authors consider, on the other hand, that faster clocks imply higher power consumption. In the proposed strategy, they tackle this issue by slowing down the test clock of cores that cannot be scheduled due to power constraints. The slower clock is generated by a frequency divider and then synchronized with the tester clock. If the slower clock rate is a fraction 1/n of the tester clock rate, each NoC channel can be viewed as n virtual channels, each one used by one core. This strategy requires a time-division scheduling of the test packets, which is also presented by Liu et al. (2005). The scheduling is based on a set of possible on-chip clock rates for each core. Initially, the power consumption and the test time for each possible clock rate are evaluated. This information is used to determine a priority level per clock rate, so that rates with higher priorities are assigned first to a core during scheduling. Priority levels represent different tradeoffs between power consumption and test time. The scheduling heuristic is based on the non-preemptive scheduling presented in Sect. 4.3. However, in this new approach each physical channel can be used by different cores. Thus, the verification of conflicts over channels is based on the evaluation of the total bandwidth allocated to each channel. Experimental results presented in Liu et al. (2005) show a reduction as high as 44% in the system test time, compared to the original non-preemptive approach, when cores that sub-utilize the channel width use faster clocks.

Fig. 5.2 Example of new packet format (each data flit carries test bits for two scan-in cycles of the core under test)


These results depend, as expected, on the original channel width utilization. For instance, for benchmark p93791, improvements were in fact negligible, as almost all cores already use the full capacity of the available channel width. However, when all cores in the system are given the option of using different clock rates, all systems used in the experiments showed improvements in the test time. Similar results were observed when power constraints were added to the system configuration. This work was later improved to achieve thermal balance at system level (Liu and Iyengar 2006; Liu et al. 2006) and to support hierarchical SoC structures (Liu 2006). Along the same line of reasoning, Ahn and Kang (2006) propose an adaptation of the rectangle packing heuristic to develop a test scheduling for NoC-based systems using multiple test clocks, also showing improvements in the test time.
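A minimal sketch of the bandwidth-based conflict check used in this kind of time-division scheduling is shown below; the representation of channels as capacity fractions, and all names, are illustrative assumptions rather than code from Liu et al. (2005).

```python
# Hypothetical sketch: a core running at a 1/n clock rate consumes a 1/n
# fraction of each channel on its test path; a test can be scheduled only
# if the summed fractions never exceed the channel capacity.

def can_schedule(channel_load, path, rate):
    """Return True (and reserve the bandwidth) if `rate`, a fraction of the
    channel capacity, fits on every channel of `path`."""
    if any(channel_load.get(ch, 0.0) + rate > 1.0 for ch in path):
        return False
    for ch in path:
        channel_load[ch] = channel_load.get(ch, 0.0) + rate
    return True

load = {}
print(can_schedule(load, ["c0", "c1"], 0.5))  # core at half rate -> True
print(can_schedule(load, ["c1", "c2"], 0.5))  # shares c1, still fits -> True
print(can_schedule(load, ["c1"], 0.25))       # c1 already full -> False
```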

Li et al. (2008) point out that on-chip clock manipulation requires a considerable design effort and increases test power consumption. They also observe that there is a limit for the frequency scaling at core level, because internal scan chains have specific frequency ranges to operate properly (without setup-time or hold-time violations). Hence, they also propose a power-aware interleaved test scheduling approach to increase the test bandwidth, but combined with a new wrapper design strategy that does not rely on multi-rate clocks. To do this, they abandon the basic assumption that only one wrapper cell carries data to an internal scan chain of the core and define instead a wrapper cell group (WCG) to receive data from the channel, as shown in Fig. 5.3.

The WCG works like a buffer, receiving several bits of one chain in a single cycle. If the full channel bandwidth W is used, the least number of test flits, $n_{flits}$, that a core needs is given by $n_{flits} = \lceil V / W \rceil$, for V the test data volume of one test pattern of the core (in bits). To simplify the control logic, all defined wrapper cell groups have the same number of cells. By defining the number of WCGs ($N_{WCG}$) as an integer factor of the channel width, the whole channel width can be used even if it outnumbers the core's required test interface. Moreover, the number of wrapper cells in each WCG is defined in such a way that test flits are spread uniformly along the minimum expected test application time for that core, which is defined by the longest internal scan chain of the core. Thus, $N_{WCG}$ is defined as the maximum integer factor of W that satisfies Eq. 5.1 below, for $L_{max}$ the length of the longest internal scan chain of the core:

$$N_{WCG} \le \frac{W \cdot n_{flits}}{L_{max}}$$   (5.1)

In the proposed wrapper model, the internal scan chains of the core are merged into $N_{WCG}$ wrapper scan chains and each chain is bound to a WCG, as shown in Fig. 5.3 for Module 10 of benchmark p22810. For this module, the test data volume of a single pattern is 259 bits, which requires a minimum of nine flits to be transferred from the ATE for a 32-bit channel width (W = 32, $n_{flits} = 9$). According to Eq. 5.1, $N_{WCG} = 2$ and each WCG has 16 cells. Thus, by combining the functional inputs, functional outputs and internal scan chains accordingly, one can still load the


complete test vector in only nine flits as opposed to the 99 flits required in the original wrapper proposed in Chap. 4. However, since the core test frequency is not changed, subsequent flits for the same core must be 16 clock cycles apart. Therefore, NoC resources used by this core are free during 15 cycles and can be used by other cores in the system. For this reason, a TDM scheduling is used.
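The computation above can be reproduced with a few lines of code. The sketch below applies Eq. 5.1 to Module 10 (V = 259 bits per pattern, $L_{max}$ = 99, W = 32); the function name is ours.

```python
import math

# Worked example of Eq. 5.1 for Module 10 of p22810.
def wcg_count(V, W, L_max):
    n_flits = math.ceil(V / W)                # least number of test flits
    factors = [f for f in range(1, W + 1) if W % f == 0]
    # Eq. 5.1: largest integer factor of W with N_WCG <= W * n_flits / L_max.
    n_wcg = max(f for f in factors if f <= W * n_flits / L_max)
    return n_flits, n_wcg, W // n_wcg         # flits, WCGs, cells per WCG

print(wcg_count(259, 32, 99))  # -> (9, 2, 16): 9 flits, 2 WCGs of 16 cells
```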

Li et al. (2006) propose a dynamically reconfigurable wrapper where wrapper scan chains of different lengths can be defined during test application, so that the whole channel bandwidth can be used at all times. The basic assumption for this approach is that wrapper scan chains have different lengths and shorter chains finish before others, thus leaving unused bits in the test flit that can be used to carry additional test data of longer chains. Hence, in the proposed wrapper, a reconfiguration of the data flit format and of the connection between the wrapper and the NoC channel takes place whenever wrapper scan chains finish their test application and longer chains are still being used. Scan chains are sorted by length and organized in groups.

Fig. 5.3 Example of new wrapper model (internal scan chains of 99, 98, 10, and 2 flip-flops plus the functional inputs and outputs of Module 10, merged into two wrapper scan chains, each bound to a 16-cell WCG)


Each group has a set of balanced wrapper scan chains, and the maximum length of the scan chains decreases as the group index increases, as shown in Fig. 5.4a. Furthermore, the number of defined wrapper scan chains is such that each group $FSCG_{i+1}$ is composed of the sorted wrapper scan chains with indexes $F_i + 1$ to $F_{i+1}$, where $F_i$ and $F_{i+1}$ are consecutive integer factors of W. For instance, in Fig. 5.4a and for W = 16, one has four groups of wrapper scan chains with 2, 2, 4, and 8 chains per group, respectively. Each group is connected to a corresponding number of wrapper cells. Thus, this wrapper has four configurations during test application. In each configuration, one test flit carries a different number of bits per scan group, as exemplified in Fig. 5.4b. The figure shows the second configuration of this wrapper, which takes place after group FSCG4 finishes.

Fig. 5.4 Example of a reconfigurable wrapper model: (a) first configuration, (b) second configuration


In this configuration, the eight bits freed from FSCG4 are distributed among the remaining groups. Experimental results presented by Li et al. (2006) show that the reconfigurable wrapper approach can indeed reduce the data flit waste. However, the authors do not discuss the implementation cost of the proposed wrapper in terms of area, configuration time, and extra power consumption. This strategy also needs to be combined with a distinct test scheduling algorithm.
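Under the assumption that the group boundaries are the consecutive integer factors of W, as described above, the group construction can be sketched as follows (names are illustrative):

```python
# Sketch of the FSCG construction: sorted wrapper scan chains are split
# into groups whose end indexes are the integer factors of W.

def fscg_groups(W):
    factors = [f for f in range(1, W + 1) if W % f == 0]  # 16 -> 1,2,4,8,16
    groups, start = [], 1
    for end in factors[1:]:                 # group end indexes: 2, 4, 8, 16
        groups.append(list(range(start, end + 1)))
        start = end + 1
    return groups

# W = 16 gives four groups with 2, 2, 4, and 8 chains, as in Fig. 5.4a.
for i, g in enumerate(fscg_groups(16), 1):
    print(f"FSCG{i}: chains {g[0]}..{g[-1]} ({len(g)} chains)")
```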

5.2 Wrapper Design for NoC Reuse

Amory et al. (2006, 2007b) propose the implementation of a test wrapper that enables the reuse, during test, of NoCs (Amory et al. 2006) or of any other model of functional interconnect (Amory et al. 2007b). Functional interconnects are designed to meet functional specifications, and when they are reused as TAM they cannot be tuned to meet the specific requirements of the test traffic. Thus, the main goal of the test wrapper proposed by Amory et al. (2007b) is to minimize the core test time by adjusting the number of wrapper scan chains to use the available bandwidth provided by the functional interconnect. The proposed wrapper assumes that the NoC still provides guaranteed fixed bandwidth and latency.

As opposed to the wrapper models presented in Chap. 4, in the new wrapper model not all pins in the core functional interface are used to transport test data. In fact, some pins must be used to interface with the NI and the NoC, also during test. Cores usually communicate with the underlying communication infrastructure through standard protocols, such as OCP (OCPIP 2011) or AXI (ARM 2011), among others. These protocols require specific control signals in the core interface. In general, protocol-aware core terminals can be divided into command, write, and read signals (Amory et al. 2007b). Not all groups are required, and signals can be unidirectional (either input or output) or bidirectional. A port in the core interface is composed of zero or more terminals (signals) from each group and can be considered an initiator port (a command port that starts communications) or a target port (one that receives a command). Write and read ports communicate data, respectively, in the same direction and in the opposite direction of the command. Bidirectional ports may be present, but they are used in a single direction for testing purposes. Figure 5.5 shows an example of a core that uses the DTL protocol (Philips Semiconductors 2002). The core has two DTL ports, where the port on the left side is a target write port and the port on the right side is an initiator read-write one.

Each signal group has its own signals for handshaking control (signals valid and accept). The command group has three additional signals that indicate address, read/write direction, and block size. In test mode, one must define at least one input and one output port, so that test stimuli can be received and test responses can be sent out, respectively. Test stimuli can be received by either a target write (or read-write) or an initiator read (or read-write) port. Similarly, test responses can flow from either an initiator write (or read-write) port or a target read (or read-write) port.


When the functional interconnect is reused and the test access ports are defined for a core, one must work with the bandwidth that is defined for those ports and for the available connection. The wrapper model proposed by Amory et al. (2007b) optimizes the core test time under this constraint. Moreover, only the data signals of the defined test ports are used to transport the test vectors and responses (for instance, signals dtl_wr_data and dtl_rd_data in Fig. 5.5). Command signals, on the other hand, cannot remain in an undefined state, because they need to follow the communication protocol as in functional mode. The values of these command signals, however, are not known by the tester and must be generated by the test wrapper logic.

The algorithm that defines the new test wrapper model for NoC reuse is depicted in Fig. 5.6. It receives, as input, the complete information about each protocol port of each core (including the direction of the port, the data width, the maximum bandwidth of that port in each direction, and the block size). The test information for each core is also provided as input (number of scan chains, length of each chain, number of test patterns, etc.). The algorithm starts by selecting exactly one test input and one test output port in the core interface (line 2). Whenever possible, two distinct ports are selected (one for input and another one for output) so that test bandwidth is maximized and test time is minimized. The resulting test access bandwidth $b_{test}$ for that core is defined as the minimum bandwidth between the two selected ports (the bandwidths of the wrapper scan input and output must be equal to keep the test pipeline). For instance, let us assume a NoC operating at 100 MHz with communication channels of 64 bits. Let us also assume the bitwidths of the selected input and output ports are, respectively, 32 and 40 bits. The resulting bandwidths for these ports are, respectively, 3.2 and 4.0 Gbit/s. In this case, $b_{test}$ = 3.2 Gbit/s.

Fig. 5.5 Example of core terminals classification (a core with a target write port and an initiator read-write port; each DTL port groups command, write, and read signals with valid/accept handshaking, and the command group carries address, read/write, and block-size information)


Thus, the test access bandwidth incorporates both the core and the NoC bandwidth constraints.

However, as the core test frequency may be different from the NoC operation frequency, the actual number of test bits that can be loaded in parallel from the NoC may be different from the number of bits that can be processed by the core at each NoC cycle. For instance, let us assume, in our example, that the core test frequency is 400 MHz, i.e., the core is capable of receiving new data from the NoC every 2.5 ns. During this period, and considering the defined test access bandwidth, only eight bits can be delivered (or received) by the NoC. Thus, a maximum of eight wrapper scan chains can be defined for this core, as calculated in line 3 of Fig. 5.6. On the other hand, the core test frequency is four times the NoC frequency, and the core interface can receive/send as many as 32 bits per cycle. Hence, for each NoC cycle, four core test cycles can be executed. This means that four bits of each scan chain can be received/sent each NoC cycle. For a core with 100 scan flip-flops, each test packet would need at most four flits to accommodate all test bits, as each flit brings four bits of each chain.
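The numbers in this example can be checked with the short sketch below, which assumes the values given in the text (100 MHz NoC, 32- and 40-bit selected ports, 400 MHz core test clock); the function name is ours.

```python
# Numeric sketch of the bandwidth computation in the running example.
def test_access(w_in, w_out, f_noc_hz, f_core_hz):
    b_test = min(w_in, w_out) * f_noc_hz      # bits/s, pipeline-balanced
    max_wrapper_chains = int(b_test / f_core_hz)  # bits per core test cycle
    core_cycles_per_flit = int(f_core_hz // f_noc_hz)
    return b_test, max_wrapper_chains, core_cycles_per_flit

b, chains, ratio = test_access(32, 40, 100e6, 400e6)
print(b / 1e9, chains, ratio)  # -> 3.2 Gbit/s, 8 wrapper chains, and 4 core
                               # test cycles (4 bits per chain) per NoC cycle
```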

The $w_i$ bits that arrive in parallel through the selected test input port are equally distributed over the defined wrapper scan chains, in such a way that every $p_i$ clock cycles a new flit arrives to fill in the defined scan chains (line 4). Similarly, the $w_o$ bits of the selected output port are divided among the defined scan chains, and every $p_o$ clock cycles a new flit is ready to be sent to the network (line 4).

The wrapper cells defined in the IEEE Std. 1500 (2005) provide the required functionalities for the implementation of the new wrapper model, except for the command terminals in the output port of the core. These terminals must receive specific signals during test to follow the communication protocol. Hence, a new wrapper cell was proposed by Amory et al. (2007b) for these terminals. In the wrapper design algorithm, all terminals in the core interface are classified (line 5) as detailed below, so that the correct wrapper cell is designed for each one (line 6). Then, the cells are connected to form the wrapper scan chains (line 7).

Fig. 5.6 Wrapper design algorithm for NoC reuse



Twelve classes are used to classify the core terminals (Amory et al. 2007b). The original four classes of the IEEE Std. 1500 wrapper are kept: functional inputs (FI) and outputs (FO), scan inputs (SI) and outputs (SO). In addition, eight new classes were created:

Selected data inputs (SDI) and outputs (SDO): data terminals of the selected test input and output ports that carry actual test data;
Remaining selected data inputs (RSDI) and outputs (RSDO): data terminals of the selected test ports that carry unused data;
Data inputs (DI) and outputs (DO): terminals of the remaining functional ports not selected as test interfaces;
Control inputs (CI) and outputs (CO): non-data terminals of all functional ports.

All terminals, except for the ones classified as CO, are assigned an IEEE Std. 1500 compliant wrapper cell. An example of such a compliant cell is depicted in Fig. 5.7a. The CO-type terminals require a specific wrapper cell model to ensure they send the correct signals to the NoC even during test, since they implement the communication protocol. An example of CO-type terminals are the valid/accept handshake pins of the DTL protocol. A possible structure of this specific wrapper cell is depicted in Fig. 5.7b (Amory et al. 2007b). Notice that the output value of this cell is given by the signal prot_in, which is defined according to the logic of each protocol and terminal. Some CO terminals require hard-coded '0' or '1' signals. Others, such as the ones that initiate a communication for sending the test responses, require a periodic behavior, that is, the terminal receives, for instance, a '0' value during $p_i - 1$ cycles, and in cycle $p_i$, a value '1' is sent.

The connection of the wrapper cells to form the wrapper scan chains is such that the total length of each chain is minimized. First, the core-internal chains are distributed over the $wsc_i$ chains defined in line 3 of Fig. 5.6. In this step, a conventional partitioning algorithm (Marinissen et al. 2000) can be used. Then, wrapper input cells of terminals classified as RSDI, DI, CI, and FI are connected to the chains in such a way that the input cells are the first ones in the chains, followed by the core-internal chains, and the length of all wrapper chains is minimized. Wrapper input cells of SDI terminals are included next. The output cells (terminals classified as RSDO, DO, CO, and FO) are included in the wrapper chains so that the maximum scan-out length of all wrapper chains is minimized. Finally, wrapper cells of SDO terminals are distributed among the defined wrapper chains. An example of the new wrapper implementation is given in Fig. 5.8, where a single input port is selected to receive test data and white wrapper cells indicate the CO-type terminals. In the figure, dotted lines indicate test data flow.

As reported in Amory et al. (2007b), the new wrapper model was implemented for 42 cores from the ITC’02 SoC Test Benchmarks. The authors assumed DTL protocol and chose the cores that met the minimal requirements of DTL ports. For this experiment, the new wrapper model presented an average area overhead of


14.5% and an average test length decrease of 3.8% compared to the previous wrapper model.

This wrapper model was further improved by Yi and Kundu (2008) to optimize the test bandwidth.

Fig. 5.7 IEEE Std. 1500 compliant wrapper cells for the new wrapper model: (a) regular terminals, (b) CO-type terminals

Fig. 5.8 Example of new test wrapper implementation


They propose to remove the need for serial-to-parallel and parallel-to-serial conversion by enabling all data lines to be used as parallel test data. Finally, Hussin et al. (2007) propose an algorithm to optimize the NoC test bandwidth and minimize the system test time based on the combination of two complementary wrapper models. The two wrappers require guaranteed bandwidth and latency and represent different tradeoffs between area overhead and resulting core test time. Given a maximum bandwidth or a maximum required test application time, the proposed heuristic finds the optimal wrapper design for each embedded core.

5.3 ATE Wrapper for NoC Reuse

The test wrapper model discussed in Sect. 5.2 hides detailed internal information about the reused functional interconnect, but still assumes that guaranteed communication services are provided. The test application, on the other hand, has specific requirements due to the stream-based traffic pattern that results from the scan chain operation. As mentioned in Chap. 4, it is not desirable to interrupt the test data flow in the scan chains, as this implies additional control logic and core test time. Indeed, a test access mechanism must provide "uncorrupted, lossless, in-order data transport with guaranteed throughput and zero latency variation" (Amory et al. 2007a). Figure 5.9a shows that, after a certain TAM latency, which is the number of clock cycles needed to transport the data from the test pin to the target CUT, a constant amount of bits must be periodically delivered every p clock cycles. This situation is naturally addressed by dedicated bus-based TAMs, as shown in Fig. 5.9b, but not by NoC-based TAMs. Scalable and constant bandwidth is a consequence of the zero-jitter characteristic, as long as test signals that are injected periodically are also received periodically by the CUT. However, even GS NoCs may not fulfill the zero-jitter and constant bandwidth requirements of the test application. There are two main obstacles to this: resource competition in the network and load fluctuation.

Resource competition causes a variable delay in the packet delivery time because there are multiple packets competing for network resources. For instance, when two packets compete for the same channel, the access to the conflicting resource is granted to a single packet at a time while the other packet is delayed. If it is possible to distribute the concurrent test flows such that they do not need to compete for network resources, then this obstacle disappears.

The load fluctuation exists because packets flow through routers and each test packet spends a few cycles at each router before being routed to its final destination. Indeed, the notion of a direct connection between the ATE and the CUT does not exist in a NoC, as shown in Fig. 5.10a. This constant sum of delays leads to a deformation in the data propagation (called load fluctuation) that can be observed in Fig. 5.10b, at the input of the CUT (last line).

From Fig. 5.10b one can observe the need for a logic block between the external tester and the NoC. This block, called ATE interface (Amory et al. 2007a), is mainly responsible for assembling the test packets according to the packet format used during test. External testers are normally set to provide streams of bits that fill scan chains.


Thus, a test data flit must be periodically assembled from a group of bits in the original stream. This conversion is implemented in the ATE interface by counters that regulate the period of the test stream. Besides width conversion, the ATE interface also implements protocol conversion, since the ATE is not aware of the specific NoC protocol. Let us assume that the ATE has four test pins and is connected to a network with 16-bit wide channels via an ATE interface. Let us also assume that both the ATE and the NoC work at the same test frequency. The first task of the ATE interface is to build flits by buffering the incoming test data. In this case, a flit is ready every four test cycles, and the packet header must be inserted in between these data flits. The second line in Fig. 5.10b shows the ATE interface sending data flits (triangle on top) every four clock cycles, with packet headers (circle on top) sent in between. The first router calculates the packet route after a few clock cycles (routing delay), and the same happens for all routers in the path between the system interface and the CUT. One can observe, however, that test data reaches the last router in the path and the CUT in bursts, not periodically, even though data is sent every four clock cycles at the input test pins. One can also observe gaps between these bursts.

Fig. 5.9 Test traffic requirements: (a) required shape of test traffic (after the TAM latency, a fixed number of bits must be delivered every p clock cycles), (b) required TAM behavior


These gaps interrupt the scan-in/scan-out operation. This delay is inherent to the NoC operation and is present even when there is no competition for a specific resource. When a test response packet is sent back to the test sink, the same behavior can be observed at the input interface of the tester. Therefore, some buffering between the output of the last router in the path and the target node (CUT or ATE) is required to eliminate these gaps. Once these gaps are eliminated, a parallel-to-serial conversion can continuously inject test data every four test cycles, the same way the ATE does at the input test pins.
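A toy model of the width conversion just described is sketched below; the header-insertion policy and all names are illustrative assumptions, not the actual ATE interface logic.

```python
# Toy model of the ATE-interface width conversion: 4 ATE pins feed a
# 16-bit network, so a data flit is assembled every four test cycles and
# a header flit is interleaved once per packet. Header format is made up.

def ate_to_flits(pin_words, flit_width=16, pin_count=4, flits_per_packet=4):
    """pin_words: stream of pin_count-bit words coming from the tester."""
    flits, buf = [], []
    for word in pin_words:
        buf.append(word)
        if len(buf) * pin_count == flit_width:   # flit ready every 4 cycles
            if len(flits) % (flits_per_packet + 1) == 0:
                flits.append(("HEADER", None))   # route info for the packet
            flits.append(("DATA", tuple(buf)))
            buf = []
    return flits

stream = [0b1010, 0b0001, 0b1111, 0b0000] * 2    # eight 4-bit ATE words
for f in ate_to_flits(stream):
    print(f)                                     # HEADER, DATA, DATA
```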

Thus, although the behavior illustrated in Fig. 5.9a is not natural for a NoC, it can be achieved if there is no packet collision in the network during test and some small amount of buffering is implemented at the ends of the test flow. This is sufficient to make the network behave as a dedicated TAM, providing "uncorrupted, lossless, in-order data transport with guaranteed throughput and zero latency variation" even though the network does not provide any type of guaranteed service.

Fig. 5.10 Test traffic in the NoC: (a) NoC-based TAM, (b) NoC-based TAM behavior (4-bit ATE words are assembled into 16-bit flits; the routing delay of each router turns the periodic injection into bursts and gaps at the CUT)


Amory et al. (2007a) propose the addition of FIFO buffers at the ends of the data flow to eliminate the jitter, as depicted in Fig. 5.11. The buffer size depends on the maximal delay of a given test access path, which can be defined by simulation after a test schedule is generated. This technique is associated with a specific test scheduling algorithm (see Sect. 5.4) that partitions the NoC so that fewer test paths need to be considered. Most NoC routers adopt an input buffering strategy, and one must only ensure that the buffer size is sufficient to eliminate the load fluctuation of the test input access path used by the cores associated with that router. On the tester side, on the other hand, the FIFO buffer must be included in the ATE interface. In the proposed strategy, several ATE interfaces are assumed in the system, one for each system-level test interface. The total number of test interfaces in the system is defined by the test scheduling algorithm, as explained in the next section.
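The FIFO sizing idea can be illustrated with a toy calculation: given made-up per-flit path delays, the worst-case lag over the ideal periodic schedule yields the drain offset and buffer depth that remove all gaps. This is a sketch under our own assumptions, not the authors' simulation flow.

```python
# Toy de-jitter calculation: flits are injected every `period` cycles,
# each flit i suffers a (made-up) path delay, and the worst-case lag over
# the ideal periodic schedule sets the drain offset and FIFO depth.

def dejitter(period, delays):
    arrivals = [i * period + d for i, d in enumerate(delays)]
    # Draining flit i at time offset + i*period never underflows, because
    # arrivals[i] <= i*period + offset by construction of the offset.
    offset = max(t - i * period for i, t in enumerate(arrivals))
    # Peak FIFO occupancy just before each drain instant.
    depth = max(sum(1 for t in arrivals if t <= offset + i * period) - i
                for i in range(len(arrivals)))
    return offset, depth

print(dejitter(4, [8, 8, 12, 12, 9, 9]))  # -> (12, 2): worst delay sets both
```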

Thus, the complete structure of the ATE output interface (ATE-NoC interface) must include the logic for protocol conversion and for width conversion. The ATE input interface (NoC-ATE interface), on the other hand, also includes the FIFOs to eliminate the load fluctuation. Figure 5.12 depicts the complete ATE interface proposed by Amory et al. (2007a).

5.4 Test Scheduling for BE NoCs

Amory et al. (2009, 2010) propose a test scheduling algorithm for NoC reuse that abstracts the details of the NoC (the cycle-accurate model of the network routing) in such a way that the NoC can be treated as a dedicated TAM from the test scheduling point of view. The basic idea is to divide the NoC into different partitions, where each partition has a single active pair of external test source and sink, as shown in Fig. 5.13.

Every module is included in one and only one partition. All nodes in a single partition can be reached from at least one of the test interfaces defined for that partition, and test data does not traverse other partitions, to avoid resource conflicts. Cores in the same partition are tested sequentially, to avoid competition over resources, thus avoiding jitter.

Fig. 5.11 DfT for NoC reuse (FIFOs at the ATE interface and at the core wrapper remove the jitter of the path between the ATE and the CUT)


Parallel test is possible by defining multiple partitions in the system. One partition may need more than one interface with the tester, depending on the routing algorithm, to ensure that test data does not traverse other partitions. In this case, however, only one interface per partition is active at a time.

Fig. 5.12 Proposed wrapper for ATE interface (width and protocol conversion for the outgoing test vectors; protocol conversion and a FIFO for the incoming test responses)

Fig. 5.13 Example of NoC partition


The total number of test pins in the system interface is distributed among the test partitions, and the partitions may receive different numbers of test pins, according to the test data bandwidth required by the modules within each partition.

The test scheduling algorithm receives a directed graph representing the placement of the cores in the routers. Then, the algorithm defines the number of partitions, the association between modules and each partition, and the number of test pins assigned to each partition, in such a way that the overall system test time is minimized. The NoC partitioning ensures that there will be no competition over resources. Thus, BE NoCs, where no guarantees with respect to packet latency or bandwidth are given, can also be reused for test. Constant bandwidth is further ensured by the implementation of DfT for the wrappers and ATE interfaces, as detailed in Sect. 5.3.

The test scheduling algorithm based on the NoC partition principle is summarized in Fig. 5.14. It defines a set of virtual TAMs over the NoC, and each virtual TAM behaves as a dedicated test access mechanism. Hence, wrapper/TAM co-optimization algorithms previously proposed for dedicated TAMs can be used to find the optimal solution. The algorithm proposed by Amory et al. (2009) is composed of seven steps, based on the strategy first proposed by Goel and Marinissen (2003). First, an initial set of partitions over the NoC is defined according to the constraints defined above for the access paths and ATE interfaces. Each partition represents a virtual TAM. Then, the next five steps try to improve the initial solution using different strategies for merging virtual TAMs and increasing the number of test wires per TAM. Finally, the ATE interfaces are defined for each virtual TAM of the final optimized solution. As a result, each virtual TAM can be defined as a tuple $\{d, k, w, R_{atei}, R_{part}\}$, where d is the FIFO depth of the DfT modules (wrappers and ATE interface) in the partition $R_{part}$, k is the packet size used during test, w is the number of test wires used to test the modules in $R_{part}$, $R_{atei}$ is the set of ATE interfaces for the partition, and $R_{part}$ is the set of routers in the partition.

One of the inputs for the algorithm is the set of test lengths of each core for different TAM widths. The maximum test bandwidth can be defined, for a given NoC, by simulation: by injecting an increasing amount of data into the NoC and monitoring whether the NoC is able to sustain the test flow. Some NoCs, for instance BE NoCs, might not be able to provide W test bandwidth, while other NoCs might be able to support the theoretical maximum bandwidth W.
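As an illustration, the virtual-TAM tuple and the Pareto-curve input might be encoded as below; the field names mirror the text, while the test lengths are made up.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

# Illustrative encoding of the virtual-TAM tuple {d, k, w, R_atei, R_part}
# and of the per-core Pareto curve used as algorithm input.

@dataclass
class VirtualTAM:
    d: int                    # FIFO depth of the DfT modules in the partition
    k: int                    # packet size used during test
    w: int                    # number of test wires assigned to the partition
    r_atei: Set[int] = field(default_factory=set)  # routers with ATE interfaces
    r_part: Set[int] = field(default_factory=set)  # routers in the partition

# pareto[core][width] = test length in cycles, non-increasing in width
# (with the 'staircase' plateaus discussed in Sect. 5.4.3); values made up.
pareto: Dict[int, Dict[int, int]] = {
    1: {1: 9000, 2: 4600, 3: 4600, 4: 2400},
    2: {1: 5000, 2: 2600, 3: 1800, 4: 1800},
}

def tam_length(modules: List[int], width: int) -> int:
    # Modules in one partition are tested sequentially, so lengths add up.
    return sum(pareto[m][width] for m in modules)

print(tam_length([1, 2], 2))  # -> 7200 cycles for a 2-bit virtual TAM
```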

Procedure BENoCsTestScheduling
Inputs: a graph G defining the SoC and, for each g in G, a Pareto curve Pg;
        the maximal number of test wires wmax;
        the physical channel width c, in bits;
        the routing algorithm r() used by the NoC.
1. Define the initial set of virtual TAMs over the NoC
2. Optimize the solution by tackling the smallest test length TAMs and distributing the freed wires (BottomUp)
3. Optimize the solution by tackling the longest test length TAMs (TopDown1)
4. Optimize the solution by moving modules from the longest test length TAMs (Reshuffle)
5. Optimize the solution by tackling the longest test length TAMs (TopDown2)
6. Optimize the solution by reducing the number of test wires per TAM (TestWires)
7. Find the ATE interfaces

Fig. 5.14 Test scheduling for BE NoCs


The authors used a conservative approach and considered a maximum channel utilization of W/2. Thus, a number of wires between 1 and W/2 (for W the channel width) can be used to transport test data within the NoC. This information is obtained by executing the algorithm of Fig. 5.6 for all possible TAM bitwidths between 1 and W/2 and storing the resulting test length of each execution as a Pareto curve.

5.4.1 Creating the Initial Solution

The first step of the test scheduling is detailed in Fig. 5.15. Initially (lines 3–7), the algorithm defines an initial solution that represents the minimum hardware cost, that is, the maximum number of one-bit virtual TAMs is defined and the most critical cores are assigned to the defined TAMs. If the system has fewer than W/2 ($w_{max}$ in Fig. 5.15) modules to be tested, each module can be assigned to an independent one-bit virtual TAM and the remaining wires in the channels can be used to reduce the test time of the most critical cores by increasing their TAM bitwidths (lines 15–19).

Fig. 5.15 Definition of the initial set of valid virtual TAMs over a NoC


If the number of modules is, on the other hand, higher than the maximum allowed TAM bitwidth (W/2), the remaining modules are associated with the defined virtual TAMs that present the lower test times (lines 9–13), so that the overall system test time is optimized. This initial solution results in a set of NoC partitions.

Let us assume, for instance, a system with 12 modules in a 3×4 mesh NoC with 16-bit communication channels (W = 16), as depicted in Fig. 5.16. In the figure, each module represents a router and its associated core(s), and module numbers indicate the order of each module in the sorted list of modules to be tested (line 1 of Fig. 5.15): module 1 is the most critical one, with the highest test data volume, whereas module 12 is the least critical one, with the smallest test data volume.

The maximum TAM bitwidth defined in the algorithm is W/2, or eight bits for this example. Thus, after the first step of the test scheduling algorithm, one can identify eight partitions (named a, b, c, d, e, f, g, and h), as depicted in Fig. 5.17a. Each partition represents a one-bit virtual TAM, and the corresponding test scheduling for this configuration is presented in Fig. 5.17b.

One can observe, however, that partitions f, g, and h require multiple interfaces with the external tester, or one will have conflicts over network resources. For instance, if the ATE interface for partition h is located at module 8, and assuming XY routing, the access path to module 9 in the same partition must traverse partitions b and g, which is not acceptable. These modules (called unconnected modules) are identified (line 21 in Fig. 5.15) and then moved to the neighbor partition with the shortest test time (lines 22–27). The detailed procedures used to identify an unconnected module, to find the neighbor partitions, and to minimize the overall test time are presented in Amory et al. (2009). Notice that the new set of (valid) NoC partitions (or virtual TAMs) depends on the order used to evaluate the initial partition set. Let us assume here a greedy approach where partitions containing the most critical cores are considered first. In this case, the resulting set of virtual TAMs for the example of Fig. 5.16 is shown in Fig. 5.18a together with the resulting test scheduling (Fig. 5.18b).
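A hedged sketch of this initial-solution step is given below: it creates up to $w_{max}$ one-bit virtual TAMs and attaches each surplus module to the TAM that currently finishes earliest. The connectivity repair of unconnected modules is omitted, and the test lengths are made up.

```python
# Sketch of the initial-solution step of Fig. 5.15 (illustrative only).
def initial_tams(test_len, w_max):
    # test_len: module id -> one-bit test length; most critical first.
    order = sorted(test_len, key=test_len.get, reverse=True)
    tams = [[m] for m in order[:w_max]]       # one one-bit TAM per module
    for m in order[w_max:]:                   # surplus modules
        shortest = min(tams, key=lambda t: sum(test_len[x] for x in t))
        shortest.append(m)                    # attach to earliest-finishing TAM
    return tams

# 12 modules on a 3x4 mesh with 16-bit channels: w_max = W/2 = 8.
lengths = {m: 1300 - 100 * m for m in range(1, 13)}  # made-up, module 1 worst
print(initial_tams(lengths, 8))
```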

The next steps in the algorithm try to optimize the system test time over this initial solution.

Fig. 5.16 Example of a NoC-based SoC with 12 modules to be tested (3×4 mesh with 16-bit channels; module placement, row by row: 8 2 3 6 / 5 10 1 7 / 4 9 12 11)


5.4.2 BottomUp Optimization

The BottomUp optimization detailed in Fig. 5.19 aims at reducing the test time of a given test architecture by trying to merge the TAM with the shortest test length with another TAM, such that the wires that are freed up in this process can be used in one or more TAMs with longer test lengths (Goel and Marinissen 2003). The procedure is iterative and ends either when there is a single TAM left (meaning all TAMs were merged into a single one) or when no merge proposal is found (meaning there is no possible improvement in the overall test time).

The optimization is implemented in two steps. First, the TAM r with the minimum test time is identified (line 4). Other TAMs in the neighborhood of the selected TAM are inspected (lines 6–15) and merged with the selected TAM r (generating a merging candidate) if the resulting test time of the new TAM does not exceed the current system test time. Since only neighbor TAMs are allowed to be merged, each merging candidate is free of unconnected modules.

Fig. 5.17 Example of (invalid) set of NoC partitions and corresponding test scheduling: (a) NoC partitions, (b) test scheduling

Fig. 5.18 Example of (valid) set of NoC partitions and corresponding test scheduling: (a) NoC partitions, (b) test scheduling


The best merging proposal (the one with the smallest test length) is stored (lines 10–14) to be confirmed in a later step. After all possible merging candidates are considered, the best merging proposal is accepted if the system test time is improved (lines 16–23). The merging process is such that the bitwidth of the resulting TAM corresponds to the maximum width among the two merged TAMs (line 8). After a merging proposal is confirmed (line 17), one or more wires freed up in the process (line 18) are assigned to the TAM with the maximum test length (lines 19–23) and the test architecture is updated (line 22). Wires are added to critical TAMs up to the maximum bitwidth allowed (W/2), as explained.

Fig. 5.19 Pseudo-code of the bottom-up optimization procedure

Fig. 5.20 NoC partitions and corresponding test scheduling after first iteration of BottomUp optimization: (a) NoC partitions, (b) test scheduling



Figure 5.20a and b show, respectively, the resulting NoC partition and the corresponding test scheduling after the first iteration of the BottomUp optimization over the test architecture shown in Fig. 5.18. In the example, partition h has the minimum test length and is selected to be merged with another virtual TAM. Notice that merging partitions h and f would lead to a smaller test length in the resulting merged virtual TAM. However, partition f is not in the neighborhood of partition h and could not be considered as a possible merge. Thus, the best proposal indicates the merging of partitions f and g. Partition f is removed from the architecture, freeing up one wire that is assigned to partition a, thus reducing its test length. Now, partition a is two bits wide, while the other six partitions are one bit wide each.

In the second iteration, the BottomUp optimization procedure merges the virtual TAMs h and a, and the freed-up wire is assigned to the virtual TAM b. No further improvements in the system test time are achieved by this procedure, and it outputs the test architecture shown in Fig. 5.21a. The corresponding test scheduling is shown in Fig. 5.21b. In this configuration, one has six partitions in the NoC. Partitions a and d are two bits wide and all other partitions remain one bit wide each.
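The following compact sketch captures the BottomUp idea (merge the shortest-length TAM into a neighbor when the system test time does not degrade, then hand the freed wires to the longest TAM). Neighborhood and connectivity checks are abstracted into a user-supplied function, so this is an illustration of the idea, not the pseudo-code of Fig. 5.19.

```python
# Illustrative BottomUp pass over a set of virtual TAMs.
def bottom_up(tams, width, length, neighbors):
    # tams: id -> set of modules; width: id -> wires;
    # length(modules, w) -> test length of a w-bit TAM.
    while len(tams) > 1:
        sys_time = max(length(m, width[t]) for t, m in tams.items())
        r = min(tams, key=lambda t: length(tams[t], width[t]))  # shortest TAM
        best = None
        for n in neighbors(r):
            merged, w = tams[r] | tams[n], max(width[r], width[n])
            if length(merged, w) <= sys_time and (
                    best is None or length(merged, w) < length(best[1], best[2])):
                best = (n, merged, w)
        if best is None:
            return tams, width                 # no acceptable merge left
        n, merged, w = best
        freed = width[r] + width[n] - w        # merged TAM keeps the max width
        del tams[r], width[r]
        tams[n], width[n] = merged, w
        longest = max(tams, key=lambda t: length(tams[t], width[t]))
        width[longest] += freed                # speed up the critical TAM
    return tams, width

# Toy data: lens[m][w] is the test length of module m on a w-bit TAM.
lens = {"a": [0, 900, 500], "b": [0, 400, 250], "c": [0, 300, 200]}
def length(mods, w): return sum(lens[m][min(w, 2)] for m in mods)
tams = {"A": {"a"}, "B": {"b"}, "C": {"c"}}
def neighbors(t): return [x for x in tams if x != t]  # toy: fully connected
print(bottom_up(tams, {"A": 1, "B": 1, "C": 1}, length, neighbors))
```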

5.4.3 TopDown Optimization

The TopDown optimization procedure tries to assign more test wires to the most critical virtual TAMs. The pseudo-code of this procedure is presented in Fig. 5.22. First, the TAM $r_{max}$ with the longest test length is identified (line 4) and the current system test time is calculated (line 5). Then, the algorithm evaluates whether the test time can be improved by merging $r_{max}$ with another TAM and adding up their test wires (lines 6–15).

Fig. 5.21 NoC partitions and corresponding test scheduling after BottomUp optimization: (a) NoC partitions, (b) test scheduling


For each other virtual TAM in the neighborhood of $r_{max}$, a temporary TAM that is the merging of $r_{max}$ and the inspected TAM is created (line 7). When merging the two TAMs, their test wires are summed up (line 8) and the resulting TAM has a larger bitwidth (up to the maximum TAM bitwidth defined for the system, or the maximal network bandwidth W/2). The test time of each core assigned to this new TAM is potentially reduced as more test wires are available. The test length of the new TAM is thus calculated (line 9). Notice, however, that additional test wires may not reduce the core test time. As shown by Iyengar et al. (2002), the test time of a module as a function of its TAM width presents a 'staircase' behavior. Indeed, the test time is proportional to the maximum wrapper scan length, which can only be reduced when the number of additional wires allows the reconfiguration of the wrapper scan chains into smaller ones. Furthermore, there is a limit for this reconfiguration, reached when the number of wrapper scan chains equals the number of internal scan chains defined for the CUT.

The merging candidates are the virtual TAMs next to the TAM $r_{max}$. The best candidate for merging is the one that reduces the overall system test time the most (lines 10–14). If there is such a candidate, the temporary virtual TAM is made permanent, while $r_{max}$ and its selected neighbor are removed from the test architecture (line 17), and further improvement is tried. Otherwise, no possible merging with $r_{max}$ leads to an improvement in the test time and the procedure ends.

Fig. 5.22 Pseudo-code of the TopDown optimization procedure

Figure 5.23 shows the resulting test architecture and corresponding test scheduling after two iterations of the TopDown optimization algorithm applied to the test architecture of Fig. 5.21. Virtual TAMs b, c, and e from Fig. 5.21 are merged, forming the new virtual TAM b in Fig. 5.23. Now we have four virtual TAMs: partitions a (two-bit), b (four-bit), d and g (one-bit each).
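A sketch of the TopDown move is shown below: the longest TAM is merged with a neighbor, their wires are summed (capped at W/2), and the merge is kept only when it shortens the system test time. The staircase test lengths are made up, and the code abstracts the idea rather than reproducing Fig. 5.22.

```python
# Illustrative TopDown pass: widen the critical TAM by merging.
def top_down(tams, width, length, neighbors, w_max):
    while True:
        r = max(tams, key=lambda t: length(tams[t], width[t]))  # longest TAM
        sys_time = length(tams[r], width[r])
        best = None
        for n in neighbors(r):
            w = min(width[r] + width[n], w_max)   # wires add up, capped at W/2
            t_new = length(tams[r] | tams[n], w)
            if t_new < sys_time and (best is None or t_new < best[0]):
                best = (t_new, n, w)
        if best is None:
            return tams, width                    # no improving merge left
        _, n, w = best
        tams[r] |= tams.pop(n)                    # make the merge permanent
        width[r] = w
        width.pop(n)

# Toy staircase lengths: widening helps only when a plateau is crossed.
lens = {"a": [0, 900, 500, 450, 200], "b": [0, 400, 250, 250, 250]}
def length(mods, w): return sum(lens[m][min(w, 4)] for m in mods)
tams = {"A": {"a"}, "B": {"b"}}
def neighbors(t): return [x for x in tams if x != t]
print(top_down(tams, {"A": 2, "B": 2}, length, neighbors, w_max=8))
```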

5.4.4 Reshuffle Optimization

The procedure Reshuffle tries to improve the SoC test length by moving one node in the virtual TAM with the longest test length to another virtual TAM. The algorithm, depicted in Fig. 5.24, first identifies the most critical TAM $r_{max}$ (line 3). If this TAM has a single node, no optimization is possible and the procedure ends (line 5). Otherwise, the TAMs neighboring $r_{max}$ are inspected (lines 7–23) and the nodes in the border of $r_{max}$ to each neighbor TAM are considered as moving candidates (line 11). The new configurations, as well as the resulting test lengths of the partitions involved in the move operation, are estimated (lines 12–15). If moving a node to a neighbor TAM reduces the test length of $r_{max}$ (line 16), a second check is performed. The move can only be accepted if $r_{max}$ without that node remains connected, i.e., all remaining nodes in the partition can be reached (lines 17–19). All TAMs neighboring $r_{max}$ and all border nodes for each neighbor are attempted until a moving operation is approved, which happens whenever the test time improves and the critical partition remains connected. The moving operation changes the set of partitions. Then, the algorithm resumes from the beginning, and the procedure repeats until no moving operation leads to a test time improvement.

By applying this optimization to the test architecture of Fig. 5.23, and assuming a single iteration, one gets the test architecture and scheduling shown in Fig. 5.25, where node 6 from virtual TAM g is moved to virtual TAM b. Notice that even though the final cost of the virtual TAMs over the NoC remains fixed (four partitions with the same bitwidths as before), some improvement in the test length was still achieved.
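The Reshuffle move can be sketched as follows: a border node is tentatively shifted out of the most critical TAM, and the move is kept only when the system test time improves and the critical partition remains connected. The border and connectivity checks are stubbed out, so this is an illustration of the idea, not the pseudo-code of Fig. 5.24.

```python
# Illustrative Reshuffle pass over a set of virtual TAMs.
def reshuffle(tams, width, length, neighbors, border_nodes, connected):
    def sys_time():
        return max(length(mods, width[t]) for t, mods in tams.items())
    improved = True
    while improved:
        improved = False
        r = max(tams, key=lambda t: length(tams[t], width[t]))  # critical TAM
        if len(tams[r]) == 1:
            break                                  # nothing left to move
        before = sys_time()
        for n in neighbors(r):
            for node in border_nodes(r, n):
                rest = tams[r] - {node}
                if not connected(rest):
                    continue                       # partition would split
                tams[r], tams[n] = rest, tams[n] | {node}   # tentative move
                if sys_time() < before:
                    improved = True                # keep the move
                    break
                tams[r], tams[n] = rest | {node}, tams[n] - {node}  # undo
            if improved:
                break
    return tams

lens = {"m1": 300, "m2": 200, "m6": 100}           # made-up test lengths
def length(mods, w): return sum(lens[m] for m in mods)
tams = {"G": {"m1", "m6"}, "B": {"m2"}}
def neighbors(t): return [x for x in tams if x != t]
def border_nodes(r, n): return [m for m in tams[r] if m == "m6"]  # toy border
def connected(nodes): return bool(nodes)           # toy connectivity check
print(reshuffle(tams, {"G": 1, "B": 1}, length, neighbors,
                border_nodes, connected))          # m6 moves from G to B
```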

Fig. 5.23 NoC partitions and corresponding test scheduling after TopDown optimization: (a) NoC partitions, (b) test scheduling


5.4.5 Implementation of the Defined Test Architecture

Once no more improvements can be achieved with respect to the test length, two remaining optimization procedures are executed to reduce the silicon area cost and define the details of the TAM implementation. The first procedure tries to optimize the number of required test wires per partition.

Fig. 5.24 Pseudo-code of the Reshuffle optimization procedure

Fig. 5.25 NoC partitions and corresponding test scheduling after Reshuffle optimization: (a) NoC partitions, (b) test scheduling


For each virtual TAM in the test architecture, the number of wires is iteratively reduced, one at a time, as long as the system test length is not exceeded. Again, the Pareto behavior of the core test length allows this type of optimization up to a limit. In the example of Fig. 5.25 such an optimization is not possible.

The second procedure finds the minimal number of ATE interfaces per partition ($R_{atei}$) such that all modules in each partition can connect to the ATE through the native NoC routing algorithm, but without traversing other partitions. For instance, for the test architecture of Fig. 5.25, each partition requires a single ATE interface, located, respectively, in modules 12, 8, 4, and 7.

5.5 Discussion

The testing strategies presented in this chapter address several issues that are inherent to the NoC reuse and were not considered in the previous approaches. First, different solutions to increase the test bandwidth and make a cost-effective usage of the communication channels were discussed. Those strategies contribute to reducing the penalty of a reduced number of test interfaces at system level and of a slower external tester. Then, specificities of the NoC communication were tackled by the definition of DfT structures that can be included at both ends of the test path, i.e., at the ATE and at the core interfaces, to ensure the stream-like data flow required by scan testing and to implement a protocol-aware test wrapper. Finally, a novel test scheduling approach allows the reuse of best-effort NoCs, which is not possible in the original reuse models. In this new approach, partitions over the NoC paths are established and traditional test scheduling algorithms are adapted to find an optimized solution. Those strategies can be combined to implement a cost-effective test solution that meets the core test requirements, the NoC communication constraints, and the ATE limitations. More recently, Yuan et al. (2008) studied the overall costs of NoC-based TAMs compared to a dedicated TAM approach. They compared testing time, area overhead, test reliability, and complexity of the control logic for both approaches, giving designers a few guidelines to decide on a specific test architecture.

References

Ahn J-H, Kang S (2006) Test scheduling of NoC-based SoCs using multiple test clocks. ETRI J 28(4):475–485

Amory AM, Goossens K, Marinissen EJ, Lubaszewski M, Moraes F (2006) Wrapper design for the reuse of networks-on-chip as test access mechanism. In: Proceedings of the European test symposium (ETS), Southampton, UK

Amory AM, Ferlini F, Lubaszewski M, Moraes F (2007a) DfT for the reuse of networks-on-chip as test access mechanism. In: Proceedings of the 25th VLSI test symposium (VTS), Berkeley, California, USA


Amory AM, Goossens K, Marinissen EJ, Lubaszewski M, Moraes F (2007b) Wrapper design for the reuse of a bus, network-on-chip, or other functional interconnect as test access mechanism. IET Comput Digit Tech 1(3):197–206

Amory AM, Lubaszewski M, Moraes F (2009) Testing chips with mesh-based network-on-chip. LAP Lambert Academic Publishing, Köln, Germany. ISBN: 978-3838321615

Amory AM, Lazzari C, Lubaszewski M, Moraes F (2010) A new test scheduling algorithm based on networks-on-chip as test access mechanism. J Parallel Distrib Comput 71(5):675–686

ARM (2011) AMBA advanced eXtensible interface (AXI) protocol specification, version 2.0. http://www.arm.com. Accessed 26 May 2011

Chickermane V, Gallagher P, Gregor S, St.Pierre T (2001) A building block BIST methodology for SOC designs: a case study. In: Proceedings of the international test conference (ITC), Washington, DC, USA, pp 111–120

Dalmasso J, Cota E, Flottes ML, Rouzeyre B (2008) Improving the test of NoC-based SoCs with help of compression schemes. In: Proceedings of the IEEE computer society annual symposium on VLSI, Montpellier, France, pp 139–144

Goel SK, Marinissen EJ (2003) SOC test architecture design for efficient utilization of test bandwidth. ACM Trans Design Autom Elect Syst 8(4):399–429

Hussin AF, Yoneda T, Fujiwara H (2007) Optimization of NoC wrapper design under bandwidth and test time constraints. In: Proceedings of the European test symposium (ETS), Freiburg, Germany

Iyengar V, Chakrabarty K, Marinissen EJ (2002) Test wrapper and test access mechanism co-optimization for system-on-chip. J Elect Test Theory Appl 18(2):213–230

Li M, Jone W-B, Zeng Q-A (2006) An efficient wrapper scan chain configuration method for network-on-chip testing. In: Proceedings of the emerging VLSI technologies and architectures (ISVLSI), Karlsruhe, Germany

Li J, Xu Q, Hu Y, Li X (2008) Channel width utilization improvement in testing NoC-based systems for test time reduction. In: Proceedings of the 4th IEEE international symposium on electronic design, test & applications, Hong Kong, pp 26–31

Liu C (2006) Testing hierarchical network-on-chip systems with hard cores using bandwidth matching and on-chip variable clocking. In: Proceedings of the Asian test symposium (ATS), Calcutta, India, pp 431–436

Liu C, Iyengar V (2006) Test scheduling with thermal optimization for network-on-chip systems using variable-rate on-chip clocking. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich, Germany, pp 652–657

Liu C, Iyengar V, Shi J, Cota E (2005) Power-aware test scheduling in network-on-chip using variable-rate on-chip clocking. In: Proceedings of the IEEE VLSI test symposium (VTS), Palm Springs, CA, USA, pp 349–354

Liu C, Iyengar V, Pradhan K (2006) Thermal-aware testing of network-on-chip using multiple clocking. In: Proceedings of the IEEE VLSI test symposium (VTS), Berkeley, California, USA, pp 46–51

Marinissen EJ, Goel SK, Lousberg M (2000) Wrapper design for embedded core test. In: Proceedings of the international test conference (ITC), Atlantic City, NJ, pp 911–920

Marinissen EJ, Iyengar V, Chakrabarty K (2002) ITC’02 SoC test benchmarks. http://itc02socbenchm.pratt.duke.edu/. Accessed 23 Aug 2010

Nolen JM, Mahapatra R (2005) A TDM test scheduling method for network-on-chip systems. In: International workshop on microprocessor test and verification, Austin, Texas, USA

Nolen JM, Mahapatra R (2008) Time-division-multiplexed test delivery for NoC systems. IEEE Des Test Comput 25(1):44–51

OCPIP (2011) http://www.ocpip.org/. Accessed 25 May 2011

Philips Semiconductors (2002) Device transaction level (DTL) protocol specification, version 2.2

IEEE Standards Board (2005) IEEE standard testability method for embedded core-based integrated circuits. IEEE Std 1500


Yaun F, Huang L, Xu Q (2008) Re-examining the use of network-on-chip as test access mechanism. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich, Germany, pp 808–811

Yi H, Kundu S (2008) Core test wrapper design to reduce test application time for modular SoC testing. In: Proceedings of the IEEE international symposium on defect and fault tolerance of VLSI systems, Cambridge, MA, USA, pp 412–420

Chapter 6
Test and Diagnosis of Routers

This chapter focuses on the testing of part of the Network-on-Chip (NoC) infrastructure, discussing strategies to detect and diagnose manufacturing faults in the routers. Test approaches for these NoC building blocks have based their strategies on functional test, scan-based testing or built-in self-test (BIST). The fault models referred to differ from one work to another, both in terms of abstraction level (functional, register transfer or logic level) and of covered parts (FIFOs, registers, multiplexers, routing logic). A functional-based approach is usually preferred, to reduce NoC re-design costs and to provide at-speed testing. However, scan and BIST-based approaches may be required to enhance both fault coverage and test application time. All these approaches complement each other, in the sense that none can fully cover the faults that may affect all routers of the network.

6.1 Introduction

As discussed in previous chapters, the reuse of the NoC as Test Access Mechanism (TAM) has been presented as a cost-effective strategy for the test of embedded Intellectual Property (IP) cores, with reduced area, pin count, and test time costs. Although one may claim that the network operation is also tested when it is transmitting test data, it is important to define a test scheme for the network before its reuse as TAM. As a matter of fact, test strategies that reuse the NoC assume that the communication infrastructure has been tested before being reused for the test of the IP cores.

The NoC infrastructure is basically made up of three main components: the network interfaces, routers and communication channels. Many works (Aktouf 2002; Ubar and Raik 2003; Vermeulen et al. 2003) have presented the problem of NoC testing, suggesting that a wide variety of standard Design-for-Test (DfT) solutions can be used, from BIST for FIFOs, to boundary scan and functional testing of wrapped routers. However, to the best of our knowledge, these proposals have not been applied to actual NoCs.



To start with the routers, the NoC might be tested using standard core-based modular testing strategies, such as IEEE Std. 1500-compliant test wrappers (IEEE Std. 1500 2005). In this case, the NoC could be considered either as a flat core, i.e., a single test wrapper is inserted into the NoC interface, or as a hierarchical core, i.e., additional test wrappers for each router are necessary. Another possibility would be to take advantage of the regularity of the NoC building blocks to efficiently implement BIST or scan test solutions (Arabi 2002; Wu and MacDonald 2003). However, a deeper evaluation of these approaches, brought from board and chip-level testing, shows that much better results, in terms of silicon overhead, test time and diagnosability, can be obtained if test approaches specific to NoCs are used (Amory et al. 2005). These specific approaches are the focus of the next sections.

In the remainder of this chapter, a selection of NoC-specific test approaches is presented. The selected approaches are not the only works available in the literature on the topic, but those considered good representatives of groups of papers dealing with the same central ideas for the test of routers.

6.2 Testing the Network Interfaces

Very few works in the literature explicitly mention the problem of testing the network interfaces (NIs). Stewart and Tragoudas (2006), further discussed later in the chapter, is one example. Most works simply assume that the NIs are implicitly tested either with the cores or with the routers. This implicit test is usually based on a functional approach that cannot ensure high fault coverage.

Another possibility is to test the NIs through conventional DfT structures (Ubar and Raik 2003; Vermeulen et al. 2003). A structural test approach is advantageous because a high coverage of stuck-at faults can be achieved. If scan (static) testing is used, a functional test procedure must follow to ensure the detection of circuit performance deviations. If BIST (at-speed) is used, no functional test is needed, but the price to pay is a high silicon overhead to implement the additional structures for test generation and response analysis.

6.3 Testing the Routers

Basically, two classes of test solutions have been proposed in the literature for the test of routers:

- Functional-based approaches, that apply the test using the NoC normal operation modes. Additional test structures may be required for the test, but they shall comply with the functional modes of the network;
- Structural-based approaches, that add specific testing modes to the NoC, to activate scan testing or BIST, for example.


The fault models used in both classes may be purely functional models, or building-block-specific (structural) models. For the structural case, the stuck-at fault model is the most commonly used to cover all router building blocks. However, some works prefer combining stuck-at faults, to cover the routing logic, with memory faults (transition, data retention, addressing, etc.), to specifically address the test of buffers.

Works addressing the test of NoC routers usually take advantage of the many identical (or very similar) structures to reduce the area overhead and/or accelerate test time. Some also integrate the test of network interfaces or the test of the communication channels, as will be seen in the next sections.

6.3.1 Structural-Based Strategies

As mentioned previously, structural-based approaches are those that add specific testing modes to the NoC. Several works have proposed to add specific modes to activate scan chains and BIST structures to test the NoC routers. Three representatives of these approaches were selected and are presented in the next sections. They are:

- Progressive use of already tested resources (Grecu et al. 2005);
- Partial scan with NoC test wrapper (Amory et al. 2005);
- BIST for deflective switches (Petersén and Öberg 2007).

6.3.1.1 Progressive Use of Already Tested Resources

This particular approach assumes that the Automatic Test Equipment (ATE) access to the network is ensured through a dedicated network interface. This unique test source enhances the NoC access while restricting the I/O overhead. Through this unique access, the router directly connected to the dedicated interface is the first network resource to be tested. The already tested router is then used to provide access for the test of its neighboring routers. At each new testing step, at least one new router is tested through a testing path that has been proven fault-free in previous testing steps. The test based on the progressive use of already tested resources is illustrated in Fig. 6.1.

The test of individual routers is partitioned such that the routing logic block (RLB) is tested first and the buffers are tested next. A stuck-at fault model is used to evaluate the fault coverage of the RLB test, while memory, addressing and specific FIFO faults are considered for the test of the buffers. The approach, as presented in Grecu et al. (2005), applies to both NoC mesh and butterfly-fat-tree topologies.

For the test of the routing logic block, scan chains and comparators must be implemented in all NoC routers. The communication channels will transport the test vectors to feed the scan chains and will also transport the expected test responses. No external test sink is needed, because the test responses will be locally compared to the expected responses. Two different strategies are proposed to cover all NoC routers: the unicast and the multicast testing. These strategies are illustrated in Fig. 6.2 for a NoC mesh topology.



Fig. 6.1 Progressive testing of routers: (a) testing step 1, (b) testing step 2, (c) testing step 3, (N: normal operation mode; T: testing mode)

In the unicast testing, one router is tested at a time. Testing paths must be defined according to the routing algorithm implemented in the NoC. There will be as many paths as necessary to test all routers in the network. Every testing path will be traversed in consecutive steps, and each new step tests the next untested router in the path. Figure 6.2a shows two consecutive steps for a particular testing path.


In the multicast testing, several routers are tested in the same step. Starting from the test source, a sort of wave will be propagated through the NoC ensuring that, at each new test step, all routers neighboring previously tested routers are then tested. Figure 6.2b shows two consecutive steps of a multicast testing. Obviously, the implementation of this test strategy is only possible if the multicast operation mode is available in the NoC under test.
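The multicast wave can be modeled as a breadth-first traversal starting from the router attached to the dedicated test interface: each step tests every untested neighbor of the routers already proven fault-free. The sketch below captures this level-order progression over an abstract adjacency list; the topology representation is an assumption made for illustration.

```python
# Sketch of the multicast testing wave as a breadth-first level order:
# step 0 tests the router next to the dedicated test interface, and each
# subsequent step tests all untested neighbors of already-tested routers.
def multicast_wave(adjacency, source):
    """adjacency: dict router -> list of neighboring routers."""
    tested = {source}
    frontier = [source]
    steps = [list(frontier)]
    while frontier:
        nxt = [n for r in frontier for n in adjacency[r] if n not in tested]
        nxt = list(dict.fromkeys(nxt))   # remove duplicates, keep order
        if not nxt:
            break
        tested.update(nxt)
        steps.append(nxt)
        frontier = nxt
    return steps                          # routers tested at each wave step
```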

For the test of the FIFO buffers, a distributed BIST scheme is proposed in Grecu et al. (2005). In order to reduce the area overhead, the authors propose that the test data generator (TDG) and test control (TC) be shared among all routers, and that the test error detectors (TEDs) be locally implemented. The main idea of the distributed BIST scheme is depicted in Fig. 6.3.
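The division of labor between a shared TDG and local TEDs can be illustrated with a few lines of behavioral code: one pattern stream is broadcast through every FIFO, and each local detector regenerates the expected stream and compares. The solid and checkerboard words below are our illustrative choice, not the pattern set specified in Grecu et al. (2005).

```python
# Behavioral sketch of a distributed FIFO BIST: a shared TDG streams words
# through each FIFO; a local TED regenerates the stream and compares outputs.
def tdg_patterns(width, depth):
    # assumes an even channel width >= 2
    zeros, ones = 0, (1 << width) - 1
    checker = int("01" * (width // 2), 2)        # 0101...01 word
    base = [zeros, ones, checker, ones ^ checker]
    return (base * depth)[: 2 * depth]           # enough words to fill and drain twice

def ted_check(fifo_out, width, depth):
    expected = tdg_patterns(width, depth)
    return all(got == exp for got, exp in zip(fifo_out, expected))
```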

Fig. 6.2 RLB testing strategies: (a) unicast testing (N: normal unicast operation mode; T: testing mode), (b) multicast testing (M: normal multicast operation mode; T: testing mode)


6.3.1.2 Partial Scan with NoC Test Wrapper

As for most scan-based testing schemes, the approach in Amory et al. (2005) assumes that the ATE access to the network is ensured through dedicated I/O pins for serial-in, serial-out and control operations. The NoC is considered as a flat core, and an IEEE Std. 1500-compliant wrapper is defined for its test. This test wrapper is detailed in Fig. 6.4. As can be noticed, the test wrapper implements the access to internal scan chains and to the functional inputs and outputs of routers, and also accommodates comparators and fault diagnosis logic.

The main idea of this approach is to insert partial scan chains in individual routers that cover only the first stage of the FIFO queues, but cover all flip-flops of the router control logic. Since the routers are considered identical, the same test vectors must be applied to all routers, and they should all output the same test response in the fault-free case.


Fig. 6.3 Distributed BIST for testing router FIFOs


Fig. 6.4 The test wrapper


From the NoC point of view, paths must be created to broadcast the test vectors computed by the Automatic Test Pattern Generation (ATPG) tool to all routers, and the router test responses must be checked against each other by means of comparators implemented inside the test wrapper. The test based on this approach is illustrated in Fig. 6.5.

A stuck-at fault model is used to evaluate the fault coverage of both the test of the control logic (including routing, arbitration and flow control) and the test of the input FIFOs. The approach, as presented in Amory et al. (2005), was applied to the SoCIN NoC (Zeferino and Susin 2003) considering a regular torus topology.

A very simple circuitry is embedded in the test wrapper for output comparison. This circuitry is shown in Fig. 6.6. When running in test mode, signals compEnb_i and det are assigned '1', while diag is set to '0'. Signals compIn_i receive one scan chain output of each router under test. All corresponding bits unloaded from each router scan chain are compared against each other. If there is a mismatch, the XOR gate generates an error signal (logic '1') that is stored in the flip-flop (FF) and comes out at the SO pin.

The comparison logic also supports fault diagnosis. In diagnosis mode, signal diag is set to '1'. Then, the signal compEnb_i corresponding to a single router is set to '1', while the compEnb_i signals of the remaining routers are set to '0'. Test vectors are applied again to all routers, but the output of only one router is captured and externally compared to the expected test response. This procedure is repeated until the defective router is found.
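The behavior of the embedded comparison module of Fig. 6.6 can be captured by a small model: in test mode the enabled scan outputs are compared against each other and a mismatch sticks in the flip-flop, while in diagnosis mode the single enabled router bit is routed to SO. This is a behavioral sketch only (the det signal of the figure is folded into the mode selection), not the gate-level circuit.

```python
# Behavioral sketch of the embedded comparison module (Fig. 6.6).
class ComparisonModule:
    def __init__(self):
        self.ff = 0                              # sticky error flip-flop

    def cycle(self, comp_in, comp_enb, diag):
        """comp_in: one scan-chain output bit per router for this cycle;
        comp_enb: per-router enable bits; diag: diagnosis-mode select."""
        enabled = [b for b, en in zip(comp_in, comp_enb) if en]
        if diag:
            return enabled[0] if enabled else 0  # observe one router at SO
        self.ff |= int(len(set(enabled)) > 1)    # any mismatch is latched
        return self.ff                           # error comes out at SO
```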


Fig. 6.5 Partial scan test of identical routers


Fig. 6.6 Embedded comparison module



Another scan-based approach is proposed in Hosseinabady et al. (2007) that, contrary to Amory et al. (2005), can also work for irregular NoCs. In irregular topologies, since every switch implements only the ports needed to ensure the communication with a particular number of neighbors, the switches will not necessarily be identical and will need different scan chains and different test vectors. The approach in Hosseinabady et al. (2007) basically takes advantage of the intra-switch regularity (port structures are identical) and applies identical test vectors concurrently to all switch ports, comparing their outputs locally. It also takes advantage of the inter-switch regularity (routing logic blocks are identical) and sends the same test vectors simultaneously to the routing logic blocks of all switches, also comparing the outputs locally. In this approach, the test vectors are broadcast to all switches through the minimum spanning tree of the NoC architecture. Similarly to Grecu et al. (2005), the buffers of the switches are tested separately using a particular memory BIST scheme.

6.3.1.3 BIST for Deflective Switches

The third structural-based approach assumes that the NoC routers (switches) implement the echo (deflection) communication function. The deflection function simply echoes the received data back to the transmitting router. The tests are applied from all network interfaces, which also collect and process the test responses in a BIST fashion. Therefore, these interfaces must implement test data generators and test error detectors.

The NoC is divided into two disjoint sets of routers to which two different test phases are simultaneously applied. In the first test phase, one set of routers has the individual datapaths and links tested, while the other set has the individual router control parts and echo functions tested. In the second test phase, the roles are reversed: the set of routers that previously had the datapaths and links tested now has the control parts and echo functions tested, and vice-versa. The test based on BIST for deflective switches is illustrated in Fig. 6.7.
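On a 2-D mesh, one way to obtain two disjoint sets whose members can exercise each other is a checkerboard coloring; the sketch below builds the two phases under that assumption (the partitioning rule is ours for illustration, not necessarily the one used in the original work).

```python
# Sketch of the two test phases for deflective switches, assuming the two
# disjoint router sets come from checkerboard-coloring a rows x cols mesh.
def test_phases(rows, cols):
    set_a = [(x, y) for x in range(cols) for y in range(rows) if (x + y) % 2 == 0]
    set_b = [(x, y) for x in range(cols) for y in range(rows) if (x + y) % 2 == 1]
    phase1 = {"datapath_and_links": set_a, "control_and_echo": set_b}
    phase2 = {"datapath_and_links": set_b, "control_and_echo": set_a}  # roles swap
    return phase1, phase2
```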

The two test phases cover the router logic and also the communication channels (links). A stuck-at fault model is used to evaluate the fault coverage of both NoC building blocks. The approach, as presented in Petersén and Öberg (2007), was applied to the Nostrum NoC (Millberg et al. 2004), which implements the echo function in its routers.

6.3.2 Functional-Based Strategies

As stated in the beginning of this chapter, functional-based approaches are those that apply tests using the NoC normal operation modes. Specific structures for testing, if needed, must be compliant with the functional modes of the network.


Two representatives of these approaches were selected and are presented in the next sections. They are:

- Direct I/O access to the network (Raik et al. 2006);
- NoC accessed from I/O through cores (Stewart and Tragoudas 2006).


Fig. 6.7 Test phases for the BIST of deflective switches: (a) first test phase, (b) second test phase


6.3.2.1 Direct I/O Access to the Network

The first functional-based approach assumes that all router ports in the boundaries of the NoC are accessible from I/O pins. The tests are applied from these and the local router ports, which also collect the test responses in an external test fashion. On one hand, such extensive access is obviously advantageous because, first of all, at-speed testing is made possible and, secondly, the test data volume is greatly decreased when compared to scan test schemes. On the other hand, the price to pay is a very high I/O pin overhead.

Three test configurations are defined to cover all NoC routing possibilities. These test configurations are shown in Fig. 6.8. In the straight path configuration, the East-West, West-East, North-South and South-North routing paths are exercised. The direction change configuration shown in Fig. 6.8b only covers the East-North and West-South routing paths for the routers located in the central diagonal of the NoC. Other direction change configurations exist, although not shown in the figure, that cover the other routers in the remaining NoC diagonals and the East-South and West-North routing paths. Finally, the resource connection configuration shown in Fig. 6.8c only covers the North-Local and Local-South routing paths for the routers located in the indicated NoC row. Again, other resource connection configurations exist that cover the other routers in the remaining NoC rows and the South-Local, Local-North, West-Local, Local-East, East-Local and Local-West routing paths. All these test configurations were determined considering that the NoC implements an XY routing strategy.

The approach, as presented in Raik et al. (2006), was applied to the Nostrum NoC (Millberg et al. 2004). The test configurations above cover the routing logic and also the datapath, including the registers and multiplexers implemented in the Nostrum routers. A stuck-at fault model is used to evaluate the fault coverage of these building blocks. Since the checkerboard test pattern ('0101…') and its complement are applied to routers and communication channels, the authors claim that delay, open and short-circuit faults that may affect intra-channel adjacent data wires are also covered.

In Raik et al. (2007), the same authors propose a method for diagnosing faulty links in NoC routers. In that work, a link is defined as a physical path between any two I/O ports of a router.


Fig. 6.8 Test configurations for the direct I/O access approach: (a) straight paths, (b) direction changes, (c) resource connections



Fig. 6.9 The diagnosis tree

A link is said to be faulty if it outputs faulty data or no data at all. In order to simplify the fault diagnosis algorithm, classes of faults that are equivalent from the point of view of the algorithm are considered. The four classes that apply to a NoC implementing an XY routing scheme are shown in Table 6.1. Note that, for diagnosis purposes, the resource connection configuration of Fig. 6.8c is divided into two sub-configurations that consider links to and from the router local ports.

Then a diagnosis tree is built considering the fault classes and test configurations shown in Table 6.1. The diagnosis tree is detailed in Fig. 6.9.

The diagnosis algorithm then traverses the diagnosis tree, applying the test configurations a, b, c_in and c_out shown in Fig. 6.8, and looking for faulty links. Let us consider, for example, that after applying test configuration a, the test fails as shown in Fig. 6.10. At this point, following the diagnosis tree, the algorithm is capable of identifying for which NoC row j the E→W link did not respond as expected.

Table 6.1 Fault classes for the diagnosis algorithm

Fault | Equivalent faults | Test configuration
------|-------------------|-------------------
E→W   | W→E, S→N, N→S     | Fig. 6.8a
E→N   | E→S, W→N, W→S     | Fig. 6.8b
N→L   | E→L, S→L, W→L     | Fig. 6.8c_in
L→S   | L→W, L→N, L→E     | Fig. 6.8c_out


Then, as indicated by the tree branch in Fig. 6.10, the algorithm applies test configuration b to find out the column i of the faulty router. Row j, during the application of the complete (all diagonals) configuration b, will assume four different routing scenarios, as shown at the bottom of Fig. 6.10. If the test passes all four scenarios, the only possible faulty router is the one that is not exercised in the East-West direction in configuration b. Then i = 1, which corresponds to the leftmost router at the bottom of Fig. 6.10. If the test fails for k out of the four routing scenarios, then the faulty router will be the one for which i = k + 1, according to Fig. 6.10.
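Put as code, the traversal of the diagnosis tree amounts to running the configurations in order and narrowing the faulty coordinate at each failing step. The sketch below abstracts the tester as a run(config, ...) predicate and covers only the E→W branch discussed above; the four-scenario sweep of configuration b follows the rule of Fig. 6.10.

```python
# Sketch of the diagnosis-tree traversal for the E->W branch of Fig. 6.9.
# `run(config, **where)` is an abstract tester call returning True on pass.
def diagnose_ew(run, rows, scenarios=4):
    for j in range(rows):
        if run("a", row=j):                 # straight-path configuration passes
            continue
        # row j failed: sweep configuration b over all routing scenarios
        k = sum(1 for d in range(scenarios) if not run("b", row=j, diag=d))
        i = 1 if k == 0 else k + 1          # i = 1 on all-pass, else i = k + 1
        return (i, j)                       # column i, row j of the faulty router
    return None                             # no E->W fault detected
```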

6.3.2.2 NoC Accessed from I/O Through Cores

The second functional-based approach assumes that the ATE gets access to the NoC by means of the I/O pins of the IP cores, as shown in Fig. 6.11. The router tests are thus transported, and the responses collected, through the functional SoC I/O interfaces. This approach requires no area and no pin overhead, and at the same time it inherently includes the test of the network interfaces (NIs).

The fault models used are purely functional. The idea is to create tests that cover all functional modes possible for the NIs and the routers. NIs must be exercised as source and destination of data, for all transmission and connection modes available in a particular NoC. Possible transmission modes are best effort (BE), guaranteed throughput (GT) and variable guaranteed bandwidth (VGB). Connection modes are unicast (U), narrowcast (N) and multicast (M). Routers must be exercised as source, intermediate point and destination of data, using different transmission modes, and ensuring that all ports and queues are also tested.

The test configurations must be such that they combine all functional modes to cover the whole set of possible NI and router faults. Fault collapsing is applied for test optimization but, at the end, at least one test per fault results. The tests to cover routers and NIs are scheduled concurrently, and the testing paths are identified so that the test application time is minimized. Since this is a functional test, the test patterns are data packets extracted from the SoC application.
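The space of functional test configurations can be seen as the cross-product of transmission modes, connection modes and router roles, which fault collapsing then prunes. A minimal enumeration sketch follows; the mode names come from the text, while the (absent) collapsing step is only indicated by a comment.

```python
from itertools import product

# Sketch: enumerate functional test configurations as transmission mode x
# connection mode x router role; real fault collapsing would prune this set.
TRANSMISSION = ["BE", "GT", "VGB"]   # best effort, guar. throughput, var. bandwidth
CONNECTION = ["U", "N", "M"]         # unicast, narrowcast, multicast
ROLE = ["source", "intermediate", "destination"]

def configurations():
    return [{"tx": t, "conn": c, "role": r}
            for t, c, r in product(TRANSMISSION, CONNECTION, ROLE)]
```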


Fig. 6.10 Fault diagnosis example



The approach, as presented in Stewart and Tragoudas (2006), was applied to the Aethereal NoC (Goossens et al. 2005) and the Nostrum NoC (Millberg et al. 2004). For the Aethereal NoC, the BE, GT and BE/GT transmission modes, and the U, N and M connection modes were considered. For the Nostrum NoC, the BE, GT and VGB transmission modes, and the U and M connection modes were considered.

6.4 Comparing the Approaches

In the previous sections, five selected approaches for router testing were presented. The main features of these approaches are summarized in Table 6.2. The first three approaches in the table (progressive, partial scan and deflective) were classified as structural testing, while the other two (direct access and through cores) were classified as functional testing.

One can notice, from the second column of Table 6.2, that the preferred model is the one based on stuck-at faults, although functional faults are also considered in the through cores approach. Only some of the approaches that partition the router in sub-blocks or also consider faults in the communication channels (see third column) add other types of faults to the model. This is the case of the progressive and the direct access approaches. The approaches classified as structural testing implement either scan or BIST, as stated in the fourth column. The functional-based approaches implement external testing and apply specific test configurations that activate the normal functional modes of the network to exercise NIs, routers (switches) and communication channels (links).


Fig. 6.11 Getting test access to the NoC through the I/O pins of cores


In terms of patterns applied for testing, all three structural approaches compute the test stimuli using ATPG tools. The progressive approach adds, to the ATPG patterns computed for the routing logic, memory test patterns that are applied to the router FIFOs. According to column six, the five router testing approaches were applied to different networks (SoCIN, Nostrum and Aethereal), using different topologies (mesh, butterfly fat-tree and torus). Although this fact makes it difficult to compare these testing approaches against each other, the first important conclusion that can be drawn so far is that the central idea behind any of them can be adapted to fit other networks and other topologies.

Although it is not possible to perform a straightforward comparison considering the benefits and costs of the studied testing approaches, Table 6.3 makes an attempt to point out the main differences between them, highlighting the advantages and drawbacks of each approach.

From the point of view of the fault coverage, every approach, as reported, does its best to achieve the highest scores. The lower the abstraction level of the fault model and the broader the set of logic blocks considered, the more realistic the obtained fault coverage figures are. Most approaches use a logic-level model, the stuck-at fault model. But some do not consider all the router logic, i.e., its datapath and control part, including the routing, arbitration and flow control logic. The structural approaches fully do, but the functional approaches only partly do. For example, the test configurations used in the direct access approach fully exercise the routing logic, but definitely do not exercise the arbitration logic, since no routing conflicts are generated in the defined configurations. In the case of the through cores approach, using all routers and ports as source, intermediate point and destination of test packets does not even suffice to cover all NoC routing possibilities. This explains why these two approaches were graded medium and low, respectively, in the fault coverage criterion ranked in the second column of Table 6.3. The coverage of wiring faults in the communication channels of the deflective approach is also graded low. Since the test vectors are generated by a commercial ATPG tool in this approach, we assume that the stuck-at model is also used for the evaluation of the fault coverage of the link interconnects.

Table 6.2 Router testing approaches: summary of features

Test approach | Fault model                 | Tested block                     | Test type                                 | Test patterns       | NoC topology
--------------|-----------------------------|----------------------------------|-------------------------------------------|---------------------|-------------
Progressive   | Stuck-at; memory            | Routing logic; FIFOs             | Scan, int. comp.; BIST                    | ATPG; FIFO patterns | Mesh, butterfly fat-tree
Partial scan  | Stuck-at                    | Router control, FIFOs            | Scan, int. comp.                          | ATPG                | SoCIN torus
Deflective    | Stuck-at                    | Routers, links                   | BIST, test phases                         | ATPG                | Nostrum
Direct access | Stuck-at; delay, open/short | Router datapath; intralink wires | External at-speed, specific configurations | Checkerboard        | Nostrum
Through cores | Functional modes            | Network interfaces, routers      | Functional test configurations            | Application data    | Aethereal, Nostrum


If this is actually the fault model considered, the coverage of wiring stuck-at and open faults will be around 100%, as reported in Petersén and Öberg (2007). However, the coverage of other important wiring faults, such as intra- and interlink bridging and crosstalk faults, cannot be estimated, since no details on the test patterns are provided in the original work. For these reasons, the link fault coverage is graded low for the deflective approach. For the direct access approach, stuck-at, delay, open and intralink adjacent wiring short faults are covered but, since crosstalk and interlink shorts are not covered by the checkerboard pattern and the proposed test configurations, the link fault coverage is graded medium.

In terms of test application time (third column of Table 6.3), whenever scan chains come into play, the test length tends to be longer because the application of test stimuli and the collection of test responses are performed in a serial manner. The progressive approach adds, to the time of loading and unloading scan chains, the time to transport the test patterns and responses through the network to get to the router under test. The partial scan approach, besides reducing the length of the scan chains, loads all routers with test patterns in parallel and also unloads the test responses in parallel through a specific scan test bus. When BIST is considered, test generators and response analyzers are locally implemented and, since they apply stimuli and collect responses in parallel, the test length tends to be much shorter. This fact grants the low test time grade to the deflective approach. The application time of conventional external testing tends to be long, due to the limited controllability and observability of the internal logic measured from the I/O pins of complex SoCs. Since the through cores approach implements conventional external testing, it is scored medium for the test time criterion, making it comparable to the partial scan approach. The direct access approach, although an external testing approach, assumes extensive access to the network boundaries, so its test time tends to be much shorter than in the conventional approach.

It is common knowledge that the area overhead penalty (fourth column of Table 6.3) increases when one moves from external testing to scan and then to BIST. The deflective approach implements BIST. The progressive approach adds, to the RLB scan testing, a BIST scheme for the FIFOs. Those two approaches are thus ranked high area overhead. Although the partial scan approach adds a test wrapper to the scan test scheme, the wrapper mostly implements scan structures. It is thus ranked medium area overhead. The two functional-based approaches implement external testing and thus do not require extra logic to be embedded for testing purposes.

Table 6.3 Router approaches: test capabilities and costs

Test approach | Fault coverage                 | Test time | Area overhead | I/O pin overhead | Diagnosis
--------------|--------------------------------|-----------|---------------|------------------|----------
Progressive   | High                           | High      | High          | Low              | Potentially faulty link-router
Partial scan  | High                           | Medium    | Medium        | Low              | Yes, faulty router
Deflective    | High (router), low (link)      | Low       | High          | None             | Potentially faulty router, faulty link
Direct access | Medium (router), medium (link) | Low       | None          | High             | Yes, faulty link-router
Through cores | Low                            | Medium    | None          | None             | Difficult


In terms of additional I/O pins, the overhead (fifth column of Table 6.3) tends to increase when one moves from conventional external testing to BIST and then to scan. The implementation of a scan test scheme requires at most four additional I/O pins. Therefore, the progressive and the partial scan approaches were graded low pin overhead. Similarly to the conventional external testing implemented in the through cores approach, the BIST scheme of the deflective approach may not require additional I/O pins since, in order to run the self-test procedure, it is possible to share functional SoC pins. The external testing implemented in the direct access approach, nevertheless, is quite unconventional since, to provide access to the boundaries of an m × n mesh NoC with channel width w, 4·w·(m + n) additional I/O pins are required.
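For a sense of scale, the figures below instantiate the 4·w·(m + n) formula for a hypothetical 4 × 4 mesh with 32-bit channels.

```python
# Worked instance of the direct access pin overhead formula 4*w*(m+n);
# the 4x4 mesh with 32-bit channels is a hypothetical example.
m, n, w = 4, 4, 32
extra_pins = 4 * w * (m + n)
print(extra_pins)   # 1024 additional I/O pins
```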

Only the partial scan and the direct access approaches have explicitly demonstrated their fault diagnosis capabilities. In the partial scan scheme, a faulty router can be uniquely identified. The diagnosis procedure, however, cannot determine which router internal block is failing. Also recall that the test wrapper isolates the routers from the communication channels during test; therefore, routers and links are supposed to be tested and diagnosed separately. In the direct access approach, a faulty path passing through a router can be uniquely identified by the diagnosis algorithm. Nevertheless, it is not possible to determine whether the fault is affecting the incoming channel, the outgoing channel or the router internal path connecting the two channels. The other approaches have not declared how capable they are of diagnosing faults, but we can speculate. In the case of the through cores approach, due to the procedures for fault collapsing and test scheduling optimization, the resulting tests are such that it is very difficult to achieve a fine-grain diagnosis. A coarse-grain diagnosis will certainly be possible. Similarly to the direct access, the progressive approach has the potential to identify faulty link-router sets. Once a router in a particular testing path fails the test, one can conclude that either the router itself or the communication channel that brought the test patterns and the expected responses is faulty. If the same router is part of other testing paths, it is possible that, by combining the test results, one can identify whether the fault is affecting the router or the communication channel. Finally, since in the deflective approach the communication channels are redundantly checked in different test phases, it will be possible to distinguish between a faulty router and a faulty link if the results of the two test phases of the two disjoint sets of routers are jointly analyzed.

It is clear, from the analysis above, that all studied approaches have advantages and disadvantages. Since in many aspects these approaches complement each other, combining them may be the best way of meeting the requirements of a particular application.

6.5 Concluding Remarks

In this chapter, we have presented a limited number of promising techniques to detect and diagnose manufacturing faults in network-on-chip routers. These techniques were classified as either structural or functional-based testing approaches.


As mentioned, the selected techniques are not the only works available in the literature, but those considered good representatives of groups of papers dealing with the same central ideas. However, other important router testing techniques, published recently and left out of the discussion in this chapter, also deserve citing: Alaghi et al. (2007), Liu et al. (2006), Sedgi et al. (2007), Sedgi et al. (2008), Strano et al. (2011), Tran et al. (2008) and Zheng et al. (2010).

As the test needs and challenges in the design and manufacturing of ever more complex integrated systems are so broad and difficult in nature, conducting further research is imperative, and over the next years more efficient solutions shall be found for the fault detection and fault diagnosis in network interfaces and routers.

References

Aktouf C (2002) A complete strategy for testing an on-chip multiprocessor architecture. IEEE Des Test Comput Mag 19(1):18–28

Alaghi A, Karimi N, Sedgi M, Navabi Z (2007) Online NoC switch fault detection and diagnosis using a high level fault model. In: Proceedings of the 22nd IEEE international symposium on defect and fault tolerance in VLSI systems, Rome, Italy

Amory AM, Brião E, Cota E, Lubaszewski M, Moraes FG (2005) A scalable test strategy for network-on-chip routers. In: Proceedings of the international test conference (ITC), Austin, Texas

Arabi K (2002) Logic BIST and scan test techniques for multiple identical blocks. In: Proceedings of the IEEE VLSI test symposium (VTS), Monterey, California, pp 60–68

Goossens K, Dielissen J, Radulescu A (2005) AEthereal network on chip: concepts, architectures, and implementations. IEEE Des Test Comput Mag 22(5):414–421

Grecu C, Pande P, Wang B, Ivanov A, Saleh R (2005) Methodologies and algorithms for testing switch-based NoC interconnects. In: Proceedings of the IEEE international symposium on defect and fault tolerance in VLSI systems, Vancouver, Canada, pp 238–246

Hosseinabady M, Dalirsani A, Navabi Z (2007) Using the inter- and intra-switch regularity in NoC switch testing. In: Proceedings of the design, automation and test in Europe conference (DATE), Nice, France

IEEE Std. 1500 (2005) Standard testability method for embedded core-based integrated circuits, IEEE

Liu C, Link Z, Pradhan D (2006) Reuse-based test access and integrated test scheduling for network-on-chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Munich, Germany

Millberg M, Nilsson E, Thid R, Jantsch A (2004) Guaranteed bandwidth using looped containers in temporally disjoint networks within the Nostrum network on chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, France, pp 890–895

Petersén K, Öberg J (2007) Toward a scalable test methodology for 2d-mesh network-on-chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Nice, France, pp 367–372

Raik J, Govind V, Ubar R (2006) An external test approach for networks-on-a-chip switches. In: Proceedings of the IEEE Asian test symposium (ATS), Calcutta, India

Raik J, Ubar R, Govind V (2007) Test configurations for diagnosing faulty links in noc switches. In: Proceedings of the IEEE European test symposium (ETS), Freiburg, Germany

Sedgi M, Alaghi A, Koopahi E, Navabi Z (2007) An HDL-based platform for high level NoC switch testing. In: Proceedings of the 16th IEEE Asian test symposium (ATS), Beijing, China, pp 453–458


Sedgi M, Koopahi E, Alaghi A, Fathy M, Navabi Z (2008) An NoC test strategy based on flooding with power, test time and coverage considerations. In: Proceedings of the 21st international conference on VLSI design, Hyderabad, India, pp 409–414

Stewart K, Tragoudas S (2006) Interconnect testing for networks on chip. In: Proceedings of the IEEE VLSI test symposium (VTS), Berkeley, CA

Strano A, Gómez C, Ludovic D, Favalli M, Gómez M, Bertozzi D (2011) Exploiting network-on-chip structural redundancy for a cooperative and scalable built-in self-test architecture. In: Proceedings of the design, automation and test in Europe conference (DATE), Grenoble, France

Tran X, Thonnart Y, Durupt J, Beroulle V, Robach C (2008) A Design-for-Test implementation of an asynchronous network-on-chip architecture and its associated test pattern generation and application. In: Proceedings of the second ACM/IEEE international symposium on networks-on-chip (NOCs), Newcastle, UK, pp 149–158

Ubar R, Raik J (2003) Testing strategies for network on chip. In: Jantsch A, Tenhunen H (eds) Networks on chip, Kluwer, Boston

Vermeulen B, Dielissen J, Goossens K, Ciordas C (2003) Bringing communication networks on chip: test and verification implications. IEEE Commun Mag 41(9):74–81

Wu Y, MacDonald P (2003) Testing ASICs with multiple identical cores. IEEE Trans Comput Aided Des Integr Circ Syst 22(3):327–336

Zeferino C, Susin A (2003) SoCIN: a parametric and scalable network-on-chip. In: Proceedings of the ACM/IEEE/SBC/SBMicro symposium on integrated circuits and systems design (SBCCI), São Paulo, Brazil, pp 169–174

Zheng Y, Wang H, Yang S, Jiang C, Gao F (2010) Accelerating strategy for functional test of NoC communication fabric. In: Proceedings of the 19th IEEE Asian test symposium (ATS), Shanghai, China, pp 224–227

Chapter 7
Test and Diagnosis of Communication Channels

In complement to the previous chapter, this one discusses strategies to detect and diagnose manufacturing faults in the communication channels, thus covering, altogether, the test of the whole Network-on-Chip (NoC) infrastructure. The huge number of interconnects allied to the shrinking of the chip dimensions makes the NoC prone to a growing number of wiring faults. The capability of detecting interconnect faults in NoC-based Systems-on-Chip is mandatory for yield improvement. Moreover, fault diagnosis of NoC link wires can help fault tolerance approaches to mitigate the faults and to maintain the network service. Fault models, including stuck-at, bridging, delay and crosstalk faults, interconnect functional test, at-speed interconnect BIST and interconnect diagnosis are discussed in this chapter.

7.1 Introduction

As already mentioned previously, the NoC infrastructure is made up of three main components that must be tested before they can be reused for the test of the IP cores. These components are the network interfaces, the routers and the communication channels.

In the previous chapter, several schemes for testing the network interfaces and routers were presented. Most of these schemes take advantage of the NoC regularity to reduce the area overhead and/or accelerate the test application time (Amory et al. 2005; Grecu et al. 2005; Petersén and Öberg 2007; Raik et al. 2006; Stewart and Tragoudas 2006). The scheme proposed in Amory et al. (2005) can only cover the faults affecting the router logic, because a specific test bus is used to transport the test vectors and test responses to/from the network routers. The schemes presented in Grecu et al. (2005) and Stewart and Tragoudas (2006) use the link interconnects to transport the router test patterns and test responses, thus implicitly exercising the communication channels. However, these schemes do not quantify the wiring fault coverage, nor even mention that they are capable of detecting such faults.



Finally, the schemes in Raik et al. (2006) and Petersén and Öberg (2007) integrate the test of the routers with the test of the communication channels and, for this reason, will be briefly revisited in this chapter.

Similarly to the network routers, the NoC interconnects usually have a regular structure, but present poor observability and controllability due to their density and deeply embedded position in the final design layout. Therefore, although all interconnects can use the same set of test vectors, ensuring their application to all wires is a challenge from the fault coverage point of view. Efficient test sequences for the detection and diagnosis of interconnect stuck-at, open and short-circuit faults were devised in the past (Kautz 1974; Hassan et al. 1988; Lien and Breuer 1991) and can be reused for testing NoC link wires. The same remark also applies to the test sequences used for crosstalk fault detection (Cuviello et al. 1999; Bai et al. 2000). The way these traditional interconnect test sequences can be applied to the NoC communication channels is addressed in the next sections.

In the remainder of this chapter, a selection of NoC-specific test approaches is presented. These approaches are not the only works available in the literature on the topic, but those considered good representatives of groups of papers dealing with the same central ideas for the test of communication channels.

7.2 Testing the Communication Channels

Similarly to the test of routers, two classes of test solutions have been proposed in the literature for the test of the communication channels:

- Functional-based approaches, that apply tests using the NoC normal operation modes, but may require the implementation of particular test structures that must comply with the functional modes of the network;
- Structural-based approaches, that add specific testing modes to the NoC, to activate a BIST scheme, for example.

The fault models used in both classes may include stuck-at, open, delay, bridging and crosstalk faults. Bridging and crosstalk faults may be considered in different NoC neighborhoods, either affecting wires located in the same channel (intralink faults) or located in different channels (interlink faults). Most works only consider interconnect faults affecting the data wires of links; however, a few also include faults that may affect the control and handshake wires.

Even applying test sequences specially tailored to the detection and diagnosis of interconnect faults, some works also try to integrate the test of routers with the test of the communication channels, as will be seen in the next sections.

7.2.1 Structural-Based Strategies

As noted above, structural-based approaches are those that add specific testing modes to the NoC. Some works have proposed to add specific modes to activate BIST structures to test the NoC communication channels.


Two representatives of these approaches were selected and will be discussed in the next sections. They are:

- Structural BIST accounting for crosstalk effects (Grecu et al. 2006);
- BIST for deflective switches (Petersén and Öberg 2007).

7.2.1.1 Structural BIST Accounting for Crosstalk Effects

This particular approach implements the Built-In Self-Test (BIST) of the communication channels using Test Data Generators (TDGs) and Test Error Detectors (TEDs). The communication channels can be tested either concurrently or in a distributed manner. For the concurrent test, TDG-TED pairs are implemented in every link, as shown in Fig. 7.1, resulting in a point-to-point BIST configuration. Similarly to the test scheme for the router FIFOs proposed in Grecu et al. (2005) and illustrated in Fig. 6.3, whenever TDGs are shared among all channels and TEDs are locally implemented, a distributed BIST results. Also, according to Grecu et al. (2005), TDGs can send the test patterns to the channels using either a unicast or a multicast test strategy, as illustrated in Fig. 6.2. Grecu et al. (2006) adds that the vectors for interconnect testing can be interleaved with the test patterns for switch testing, thus unifying the test of routers and communication channels.

These three BIST configurations imply different costs in terms of test application time and area overhead, as shown in Table 7.1. Since the point-to-point configuration tests all communication channels in parallel, it is obviously the one that requires the least test application time. However, it is also the approach that requires the most silicon, since not only TEDs but also TDGs have to be implemented for all communication channels. The distributed unicast configuration requires less silicon, due to the sharing of the TDG, but requires far more time to apply the tests, since the channels are tested one after another in a serial manner. Finally, since the distributed multicast configuration can simultaneously test several communication channels, but not all, it provides a tradeoff between the point-to-point and the distributed unicast approaches. In terms of area overhead, it implies the same cost as the unicast configuration, since the TDG is shared among all channels.


Fig. 7.1 Concurrently testing NoC links


In order to consider crosstalk effects, such as rising (dr) and falling (df) delays, positive (gp) and negative (gn) glitches, and rising (sr) and falling (sf) speed-ups, the Maximal Aggressor Fault (MAF) model proposed in Cuviello et al. (1999) is used in Grecu et al. (2006). Figure 7.2, adapted from Cuviello et al. (1999), illustrates the crosstalk effects accounted for in the MAF model.

As shown in Fig. 7.2 for a single victim, for each crosstalk fault two consecutive test vectors must be applied to the wires to provoke the signal transitions that may make the fault effect appear on the victim wire. These test vectors are combined in Grecu et al. (2006) such that the whole test sequence is reduced to eight vectors in total. Figure 7.3 shows the optimized test sequence, identifying the pairs of stimuli that sensitize each of the six crosstalk faults of the MAF model. In addition to crosstalk faults, this test sequence ensures the detection and diagnosis of stuck-at, open and intralink bridging faults affecting data wires only.

The TDG must apply the eight test vectors of Fig. 7.3, considering every wire of the channel under test as a possible victim, one at a time. A Finite State Machine (FSM) can be built to generate the test bits for the aggressor and the victim wires, while a barrel shifter, controlled by a victim counter, can select a new victim and apply the appropriate test bits at each new test step.
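In code, such a generator boils down to emitting, for every victim position, vector pairs that hold or transition the victim while all aggressors switch. The sketch below derives the pairs directly from the MAF fault definitions; the eight-vector packing of Fig. 7.3 is an optimization of these pairs and is not reproduced here.

```python
# Sketch of a MAF-style TDG: for each crosstalk effect, a pair of vectors
# makes all aggressors transition while the victim is held or transitioned.
def maf_pairs(width, victim):
    def vec(victim_bit, aggressor_bit):
        return [victim_bit if i == victim else aggressor_bit for i in range(width)]
    return {
        "gp": (vec(0, 0), vec(0, 1)),  # positive glitch: victim at 0, aggressors rise
        "gn": (vec(1, 1), vec(1, 0)),  # negative glitch: victim at 1, aggressors fall
        "dr": (vec(0, 1), vec(1, 0)),  # rising delay: victim rises, aggressors fall
        "df": (vec(1, 0), vec(0, 1)),  # falling delay: victim falls, aggressors rise
        "sr": (vec(0, 0), vec(1, 1)),  # rising speed-up: all wires rise together
        "sf": (vec(1, 1), vec(0, 0)),  # falling speed-up: all wires fall together
    }

def tdg_sequence(width):
    # step through the victims like a barrel shifter driven by a victim counter
    return {v: maf_pairs(width, v) for v in range(width)}
```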

Table 7.1 Comparing the costs for different BIST types

BIST type             | Test application time | Silicon area overhead
----------------------|-----------------------|----------------------
Point-to-point        | Shortest              | Highest
Distributed unicast   | Longest               | Lowest
Distributed multicast | Intermediate          | Lowest


Fig. 7.2 MAF model: crosstalk effects (Adapted from Cuviello et al. 1999): (a) positive glitch, (b) negative glitch, (c) rising delay, (d) falling delay, (e) rising speedup, (f) falling speedup



The TED must check whether or not the MAF vectors are correctly received at the destination router. In order to locally generate the correct test responses, the TED must basically implement the same hardware as the TDG. In addition, it must embed an XOR network to compare the received vectors against the computed expected responses. This comparison must be performed by the TED within a time window that does not exceed the maximum delay admitted for the link under test.

The approach, as presented in Grecu et al. (2006), only applies to single victim data wires located within an intralink neighborhood.

7.2.1.2 BIST for Deflective Switches

This BIST-based approach, as discussed in Sect. 6.3.1.3 of the previous chapter, integrates the test of the routers with the test of the communication channels. The test is applied to two disjoint sets of routers in two consecutive test phases. In each test phase, one set has the router datapaths and link wires tested, while the other set has the router control parts and the links' deflective function tested.

The two test phases cover the router logic in full and, additionally, exercise the communication channels (links) twice. Since a commercial ATPG tool is used for the generation of the test vectors, the fault coverage figures presented in Petersén and Öberg (2007) possibly also consider a stuck-at fault model for the channel interconnects. If this is indeed the fault model considered, the coverage of wiring stuck-at and open faults is probably close to 100%. However, the coverage of other important wiring faults, such as intralink and interlink bridging and crosstalk faults, cannot be estimated, since no details on the test patterns are provided in the original work.


Fig. 7.3 Optimized test sequence for the MAF model


Similarly to the point-to-point configuration of the previous approach, since TDGs and TEDs are locally implemented and apply stimuli and collect responses in parallel, the test application time tends to be very short in the deflective approach. The price to pay for BIST, however, is an increased area overhead penalty. In terms of I/O pins, the BIST scheme of the deflective approach may not require additional ones since, in order to run the self-test procedure, it is possible to share functional SoC pins.

Finally, in terms of fault diagnosis, since in the deflective approach the links are redundantly checked in different test phases, it is possible to distinguish between a faulty router and a faulty link if the results of the two test phases of the two disjoint sets of routers are jointly analyzed.

7.2.2 Functional-Based Strategies

As recalled at the beginning of the chapter, functional-based approaches apply tests using the NoC normal operation modes. Additional structures for testing, whenever needed, must comply with the functional modes of the network. Two representatives of these approaches were selected and are presented in the next sections. They are:

Functional BIST accounting for interlink shorts (Cota et al. 2007); and
Direct I/O access to the network (Raik et al. 2006).

7.2.2.1 Functional BIST Accounting for Interlink Shorts

This approach applies tests to the communication channels and collects their test responses by sending packets through the NoC in its normal operation mode. A Built-In Self-Test (BIST) scheme is proposed that, besides being compliant with the network functional mode, provides for at-speed testing and for low I/O pin overhead. In this BIST scheme, Test Data Generators (TDGs) and Test Error Detectors (TEDs) may be implemented in software, as part of the cores, or in hardware, as part of the network interfaces, see Fig. 7.4.

This approach also extends the wiring fault model proposed in previous works to include short circuits that may affect wires connecting the core to the network or wires located in distinct communication channels. As in most previous works, short circuits of AND- and OR-type are considered in Cota et al. (2007). These interlink shorts are illustrated in Fig. 7.4.

The neighborhood into which shorts are supposed to occur is a 2 × 2 sub-NoC, as shown in Fig. 7.5. Since short circuits may involve any of the data wires in this neighborhood, the whole 2 × 2 sub-NoC must be filled with the test vectors in order to ensure that the faults will be detected. In other words, the test packets must be sent through test paths such that all links are filled up at the same time in the same


test configuration. Considering an XY routing strategy, this means that four test paths must be simultaneously activated in the 2 × 2 mesh network. These paths are shown in Fig. 7.5 using four different line styles: one solid, one dashed, one with lines and points, and one with triple lines.

The organization of the test packets used in this approach is detailed in Fig. 7.6. The header identifies the test path to follow, the payload accommodates the initialization of the network and the test sequence, and the tail indicates that the test is done. The test vectors shown in the figure implement the Walking-One sequence, thus covering AND-type shorts. The same kind of organization applies to OR-type shorts, except that test vectors implementing the Walking-Zero sequence must be used instead. This simply means replacing all zeros by ones, and vice-versa, in the payload flits of Fig. 7.6.

For the case of the Walking-One sequence, to prepare the wires to receive the test vectors, they are initialized by inserting into the network a number of flits containing strings of zeros. Then the test sequence is applied to one of the four test paths of the


Fig. 7.4 The BIST scheme and extended fault model


2 × 2 sub-NoC. Note that there are l flits of zeros after each test vector in the payload, where l is the number of clock cycles needed to send a payload flit from the source to the destination NoC node. These flits are needed to guarantee that only one data wire of the path under test holds the value ‘1’ at a time, ensuring that AND-shorts involving any of the wires located within the path are detected. Finally, to prepare the wires for the application of the sequence to the remaining sub-NoC


Fig. 7.5 The fault model neighborhood and test paths


Fig. 7.6 Test packet organization for the Walking-One sequence (w is the number of data wires in the communication channel)


test paths, the wires of the just-tested path are re-initialized through a number of additional flits containing strings of zeros.

As mentioned, to guarantee the detection of an AND-short involving a particular wire and any other wire in a specific neighborhood, the wire under test must hold a logic ‘1’ while, at the same time, all other wires within the neighborhood hold ‘0’s. This is ensured by the test packet shown in Fig. 7.6 for just one of the test paths in Fig. 7.5. In order to extend this fault detection capability to the whole sub-NoC, the test sequence shown in Fig. 7.7 is the one applied to the four test paths. This test sequence ensures the detection of any AND-short in the 2 × 2 sub-NoC, because the Walking-One sequence is applied to each individual test path shifted in time and, while it is being applied to a particular path, all wires located in the other three test paths are simultaneously filled with ‘0’s.
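A software rendering of the packet structure may help fix the idea. The sketch below, under the stated assumptions, builds the flit stream of Fig. 7.6; HEADER and TAIL are placeholders for whatever encodings the network actually uses, and the function names are invented for the example.

```python
# Sketch of a Walking-One test packet builder (illustrative helper;
# header/tail encodings and flit counts are placeholders).

def walking_one_packet(w, l, init_flits):
    """Build the payload flits of Fig. 7.6 for a w-wire channel.

    l is the number of zero flits inserted after each test vector so
    that only one wire of the path holds '1' at a time; init_flits
    zero flits initialize (and later re-initialize) the path wires."""
    zeros = "0" * w
    flits = ["HEADER"]                      # selects the test path
    flits += [zeros] * init_flits           # initialization
    for bit in range(w):                    # Walking-One sequence
        vector = "0" * bit + "1" + "0" * (w - bit - 1)
        flits.append(vector)
        flits += [zeros] * l                # keep a single '1' in flight
    flits += [zeros] * init_flits           # re-initialization
    flits.append("TAIL")                    # end of test
    return flits

def walking_zero_packet(w, l, init_flits):
    """OR-type shorts: complement every payload bit (Walking-Zero)."""
    return [f if f in ("HEADER", "TAIL") else
            f.translate(str.maketrans("01", "10"))
            for f in walking_one_packet(w, l, init_flits)]
```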


Fig. 7.7 Complete test sequence for the 2 × 2 sub-NoC


For the detection of short faults in larger networks, Cota et al. (2007) proposes that, regardless of the NoC size, four test sessions be built based on different test configurations that cover, altogether, all 2 × 2 sub-NoCs possible in the mesh. Figure 7.8 shows the four test sessions for a 4 × 4 NoC. The test sessions are applied one after another, and the test configurations within the same test session are applied in parallel. A very simple algorithm is proposed in Cota et al. (2008) to determine the test configurations to implement in each of the four test sessions. Applying these test sessions ensures that all 2 × 2 neighborhood channels are tested and that any short circuit affecting two wires within this neighborhood is detected. As a matter of fact, since the different test configurations applied to the network make many communication channels be exercised twice, it is highly probable that even short circuits involving wires outside the 2 × 2 neighborhood will also be detected.
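One plausible reading of that simple algorithm is a parity-based grouping: assign the 2 × 2 sub-NoC whose top-left router sits at (i, j) to the session given by the parities of i and j, so that sub-NoCs in the same session never overlap. The sketch below is an illustrative reconstruction, not necessarily the published algorithm.

```python
# A plausible reconstruction of the session-building idea in
# Cota et al. (2008): every 2x2 sub-NoC of an m x n mesh is assigned
# to one of four sessions so that configurations in the same session
# never share a router and can therefore run in parallel.

def build_sessions(m, n):
    sessions = {s: [] for s in range(4)}
    for i in range(m - 1):              # top-left row of the 2x2 sub-NoC
        for j in range(n - 1):          # top-left column
            s = (i % 2) * 2 + (j % 2)   # parity decides the session
            sessions[s].append(((i, j), (i + 1, j + 1)))
    return sessions

for s, confs in build_sessions(4, 4).items():
    print("test session", s + 1, confs)
```

For a 4 × 4 mesh this yields nine configurations spread over four sessions, matching the count that later results from ungrouping them for diagnosis.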

To implement the overall test strategy for the communication channel interconnects, TDGs and TEDs must be included in the network interfaces or in the IP-cores


Fig. 7.8 The test session and respective test configurations for a 4 × 4 mesh NoC


themselves (Cota et al. 2008). Basically, the TDG must implement a Finite State Machine (FSM) that generates the header, a number of zero flits followed by the payload containing the Walking sequence, and another series of zero flits followed by the tail. The header and tail contents, the number of zero flits and the number of flits in the payload shall be parameterized and their values loaded from the Automatic Test Equipment (ATE) through a scan chain that connects all network interfaces.

The TED has a similar structure. It waits for a synchronization signal to start the verification of the arriving data. If this signal is not received within a predefined time interval, a time-out signal is raised, indicating a fault. If no time-out occurs, at each clock cycle a new flit is read and compared to expected values internally reproduced by the TED or informed by the ATE through the scan chain. The TED generates time-out and error flags and makes them accessible to the ATE through the scan chain. As expected for a BIST scheme, the implementation of the TDG and TED structures is very silicon-consuming and represents an important area overhead, as reported in Cota et al. (2008). The test application time is low when compared to the time needed for testing the routers through scan chains, and can be made even shorter by using boundary scan registers to implement the TDG and TED configuration chains (Hervé et al. 2009a).
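The following sketch mimics the TED's check loop under simplified assumptions (one flit arrival per cycle, a single time-out budget); ted_check and its arguments are illustrative names, not the actual hardware interface.

```python
# Behavioral sketch of the TED check loop (illustrative): wait for
# flits to arrive, compare each against the expected flit, and flag
# either a time-out or a mismatch; both flags would be exposed to
# the ATE through the configuration scan chain.

def ted_check(arrivals, expected, timeout_cycles):
    """arrivals: iterator of (cycle, flit); expected: list of flits."""
    error_flag, timeout_flag = False, False
    last_cycle = -1
    it = iter(arrivals)
    for exp in expected:
        try:
            cycle, flit = next(it)
        except StopIteration:
            timeout_flag = True          # stream ended too early
            break
        if cycle - last_cycle > timeout_cycles:
            timeout_flag = True          # flit did not arrive in time
            break
        last_cycle = cycle
        if flit != exp:
            error_flag = True            # mismatch: fault detected
    return error_flag, timeout_flag
```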

The approach, as presented in Cota et al. (2007), only applies to interlink shorts involving data wires. However, Cota et al. (2008) proposes that the test packets of the sequence shown in Fig. 7.7 be increasingly delayed, from top to bottom, by a particular fixed time interval, and demonstrates that, with this new test sequence, short circuits affecting the control and handshake wires of the communication channels are also covered. In both works the SoCIN NoC (Zeferino and Susin 2003) is used as a case study.

Considering the fault diagnosis capability of the approach, Hervé et al. (2009b) demonstrates that the use of the single test cycle of Fig. 7.5, based on four simultaneously exercised test paths covering a 2 × 2 sub-NoC neighborhood, although very efficient in terms of test application time, makes it very improbable that the shorted wires can be located once a fault is detected. This poor diagnosability is overcome in that work by using a new test scheme based on five different test cycles, instead of one, involving different test paths devised for the fault diagnosis of pairwise shorts in the same network neighborhood. Hervé et al. (2009b) also shows that, to ensure fault detection and to improve the fault diagnosis capability for NoCs larger than 2 × 2, the test configurations shown in Fig. 7.8, which were previously grouped into four test sessions to reduce the test application time, should now be applied serially instead of in parallel. That means that, instead of the four test sessions of the example of Fig. 7.8, nine test sessions would result from ungrouping the test configurations shown in the figure. The price to pay for enhancing the fault diagnosis is thus an important increase in the test application time.

Similarly to the approaches in Petersén and Öberg (2007) and Raik et al. (2006) that integrate the test of routers and links, the interconnect testing approach in Cota et al. (2008) is extended in Hervé et al. (2010) to also cover faults affecting the routers. Basically, the work in Hervé et al. (2010) applies the sequence devised in Cota et al. (2008) for the test of the channel wires and, since the test vectors are inherently


transported through the network routers, it checks to what extent the original test sequence can detect faults in the FIFOs, the routing logic, and the arbitration logic of the NoC switches. It is shown that the original interconnect test sequence provides rather low coverage of faults in the routers.

Going deeper into the analysis, the first conclusion drawn in Hervé et al. (2010) is that only a few additional test vectors are needed to cover all stuck-at faults in the FIFOs. The impact on the test application time is negligible in this case.

The second important conclusion is that the original sequence does not exercise many of the NoC routing paths and thus does not detect a number of stuck-at faults in the routing logic. Eight new test sessions are proposed to cover the 8 out of 16 XY routing possibilities that were not originally covered. The new test sessions, as in the original test sequence, apply the test packets in parallel to multiple test configurations that are concurrently activated. These new test sessions and respective test configurations are shown in Fig. 7.9 for a 4 × 4 NoC.

The third conclusion is that the interconnect test sequence does not test the router arbitration logic at all, since no simultaneous channel requests occur either in the original or in the newly added test sessions. To improve the router fault coverage even further, a number of additional test sessions are proposed that exercise the arbitration functionality of the router logic. These new test sessions and respective test configurations are shown in Fig. 7.10 for a 4 × 4 mesh network. Since quite many additional test sessions are needed to test the routing and arbitration logic, the price to pay for improving the fault coverage is an important increase in the test application time.

The fourth and last conclusion of Hervé et al. (2010) is that, to achieve complete fault coverage for the routers without greatly sacrificing the test application time, some additional structures for scan test or BIST must be used in the end to cover the faults that remain undetected after applying the functional tests described above.

Many improvements to the original work in Cota et al. (2007) have been presented so far: Cota et al. (2008) covers faults in the control and handshake wires in addition to the data wires; Hervé et al. (2009a) reduces the test application time; Hervé et al. (2009b) makes intralink and interlink short diagnosis possible; and Hervé et al. (2010) integrates the test of routers and communication channels. However, considering the reduction of the feature size in new technologies, not only short circuits but also crosstalk faults become very likely to occur. Therefore, a good approach for testing the communication channel interconnects shall consider both. For this reason, Botelho et al. (2010) proposes a new improvement to Cota et al. (2007) so as to cover, in addition to interlink shorts, intralink and interlink crosstalk faults.

In comparison to Cota et al. (2007), the approach in Botelho et al. (2010) uses a new strategy based on an extended set of test paths and a modified test packet. Botelho et al. (2010) considers the test application to the entire network, so that crosstalk faults affecting the interconnects in any arbitrary neighborhood can be detected. The set of test paths is such that no network channel is missed and the paths do not share any channel. It can be shown that, for any i × j mesh NoC with XY routing, there always exists at least one set of i·j paths covering all NoC channels that can be simultaneously activated with no resource conflict. An example is given



Fig. 7.9 Additional test sessions and respective routing logic test configurations for a 4 × 4 mesh NoC


Fig. 7.10 New test sessions and respective arbitration logic test configurations for a 4 × 4 mesh NoC


in Fig. 7.11 for a 4 × 4 mesh NoC. As can be noticed in the figure, the local ports of the routers on the NoC boundaries are the starting and ending points of the longest test paths. The shortest test paths are those exclusively implemented by the local ports of the network's central routers. Alternative paths to cover these local channels can be obtained by breaking paths originating on the NoC boundaries that pass over the central routers. For instance, the test path P8, which starts in router 8 and ends in router 9, could be broken into two new test paths, the first one going from router 8 to router 6, and the second one going from router 6 to router 9.
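The existence claim can be checked mechanically for any candidate path set. The sketch below, with invented helper names, the local ports omitted, and XY-routability of each hop assumed, verifies the two properties that matter: full channel coverage and pairwise channel disjointness.

```python
# Checker for a candidate set of concurrent test paths (illustrative).

def channels_of(path):
    """Directed channels traversed by a path of router coordinates."""
    return set(zip(path, path[1:]))

def valid_path_set(paths, all_channels):
    used = set()
    for p in paths:
        ch = channels_of(p)
        if ch & used:                 # shared channel: resource conflict
            return False
        used |= ch
    return used == all_channels       # no channel missed

# All directed channels of a 2 x 2 mesh
routers = [(x, y) for x in range(2) for y in range(2)]
mesh = {(a, b) for a in routers for b in routers
        if abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1}

paths = [[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],   # two disjoint loops
         [(1, 1), (1, 0), (0, 0), (0, 1), (1, 1)]]
print(valid_path_set(paths, mesh))                    # True
```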

The test packet is built using Maximal Aggressor Fault (MAF) vectors (Fig. 7.2). Considering a NoC that uses wormhole packet switching, the MAF test vectors must be applied to the channels using several flits. However, because of the latency of the routers, the next flit in a packet does not show up in the next channel at the next clock cycle. A flit coming in from an input channel takes a number of clock cycles until it can be definitely routed to the next channel, remaining stored in the router buffers for a while. During this time, the MAF vectors are hidden in the router buffer. Therefore, in order to be effective, the crosstalk test sequence shall be built considering this routing latency, which consists of one clock cycle to store the incoming flit, and at least another cycle for the router to check whether the flit can be sent out to the desired


Fig. 7.11 Crosstalk test paths for a 4 × 4 NoC with XY routing


channel. This latency depends on the implementation of the router control part. The test packet proposed in Botelho et al. (2010) works for latencies equal to or longer than two clock cycles. Assuming that the latency of the NoC does not change during the transmission of the packet, if the latency is equal to two, a flit containing ‘1’s must be followed by a flit containing ‘0’s to create the transitions needed for the test using the MAF model.

Figure 7.12 presents the final format of the test packet for the SoCIN NoC (Zeferino and Susin 2003), whose latency equals three. One can notice that, after the header and before applying a test vector, it is necessary to initialize the test path by assigning the appropriate values to all aggressor wires. Following the initialization, three vectors are applied to the channels for the test of rising (dr) and falling (df) delays that may be affecting the first bit of the network channels (victim wires). These test vectors are transported all along the test path, while the aggressor wires that remain behind continue being fed with all-‘1’ and all-‘0’ flits, as shown in Fig. 7.12. Once the first bit of all channels in the path has been tested for dr and df, three additional test vectors are launched for the test of negative glitches (gn) in the first bit. Once again, these test vectors are transported all along the test path, while the aggressor wires that remain behind continue being fed with the appropriate values. Finally, the same procedure is repeated considering three other test vectors for the positive glitches (gp). Then the test for bit 1 of all channels (‘test b1’ in Fig. 7.12) is complete and new victims can be considered (‘test b2’, ‘test b3’, etc.) in the sequel. The test of the whole path ends when the tail flit is received; then a new path can be tested.
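A loose software rendering of this packet, for an assumed latency of two, is sketched below; the flit ordering is illustrative and does not reproduce the exact latency-three layout of Fig. 7.12, and all names are invented for the example.

```python
# Loose sketch of the crosstalk test packet (latency-2 assumption).

def flit(victim_bit, victim_value, aggr_value, w):
    bits = [aggr_value] * w
    bits[victim_bit] = victim_value
    return "".join(map(str, bits))

def test_segment(b, w):
    """MAF transitions for victim bit b: delays, then gn, then gp."""
    ones, zeros = "1" * w, "0" * w
    seg = []
    seg += [flit(b, 0, 1, w), flit(b, 1, 0, w), flit(b, 0, 1, w)]  # dr, df
    seg += [ones, flit(b, 1, 0, w), ones]    # gn: victim high, aggressors fall
    seg += [zeros, flit(b, 0, 1, w), zeros]  # gp: victim low, aggressors rise
    return seg

def crosstalk_packet(w, init_flits):
    flits = ["HEADER"] + ["1" * w] * init_flits   # initialize the path
    for b in range(w):                            # 'test b1', 'test b2', ...
        flits += test_segment(b, w)
    return flits + ["TAIL"]
```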

The test of each path consists of sending flits through its channels considering one bit as the victim and all others as aggressors, while the other channels in the path carry aggressor flits. When one path is under test, all other paths are being traversed by the all-‘1’ and all-‘0’ aggressor flits. This way, only one wire plays the victim at a time, while all other wires in the same channel and in the other channels of the network play the aggressors. In the fault-free case, the correct test vectors are propagated all along the test path and are reused to test new victims in the following channels of the path. Whenever a fault is detected in a channel, the value of the victim bit is


Fig. 7.12 Crosstalk test packet (I is the number of test flits necessary to initialize the longest test path)


flipped and propagates unmodified through the following channels until the erroneous value reaches the path end. At the final destination, the faulty path and the respective bit can be uniquely identified. This way, the MAF tests, as they are applied to the NoC, can detect crosstalk faults within and among communication channels.

It is shown in Botelho et al. (2010) that the test application time of this approach grows quadratically with the NoC size. Two other approaches are then proposed that use locality information, extracted either from the physical layout or from the circuit floorplan, in order to reduce the test time of the global strategy. The first approach partitions the overall NoC into sub-NoCs for which common crosstalk effects are very unlikely to occur and then applies the original test procedure to these sub-NoCs. The second approach, after determining crosstalk-prone neighborhoods, modifies the test packets to apply multiple victims simultaneously to different NoC channels. Both alternative approaches drastically reduce the test application time.

7.2.2.2 Direct I/O Access to the Network

In an external test fashion, this functional-based approach applies the tests from the local ports of all routers and from all router ports located on the boundaries of the NoC, and collects the test responses at these same locations. Considering an XY routing strategy, three test configurations are proposed that exercise all NoC routing possibilities, covering stuck-at faults in the routing logic and in the datapath registers and multiplexers of the Nostrum NoC router (Millberg et al. 2004).

Since the checkerboard test pattern (‘010101…’) and its complement (‘101010…’) are applied to the routers passing through the communication channels, it is easily shown that stuck-at, open, delay and short-circuit faults affecting adjacent intra-channel data wires are covered. Stuck-at faults are fully covered because each wire is exercised with both a logic ‘0’ and a logic ‘1’. As a consequence, open circuits modeled as stuck-at faults are also covered. Delay faults are fully covered because the complete test sequence (Raik et al. 2006) applies the checkerboard, followed by its complement, and then the checkerboard pattern again, provoking the 0→1 and 1→0 transitions in all wires. On one hand, since adjacent bits in the test patterns are always complements of each other, short-circuit faults involving adjacent wires of the same channel are fully covered. On the other hand, pairwise short circuits involving two odd-indexed or two even-indexed wires cannot be detected, although they can be considered realistic faults in a standard-cell implementation of the NoC, for example. The same remark applies to short circuits involving interconnects located in different communication channels.
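A tiny numerical illustration of this blind spot (the channel width and wire indices are arbitrary):

```python
# The checkerboard sequence and its blind spot.
w = 8
checker = "01" * (w // 2)                              # 01010101
sequence = [checker,
            checker.translate(str.maketrans("01", "10")),  # complement
            checker]                                   # 0->1 and 1->0 on all wires

# An AND/OR short between wires 1 and 3 (both odd-indexed) is never
# exposed: the two wires carry the same value in every pattern.
for pat in sequence:
    assert pat[1] == pat[3]
```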

Since the direct access is an external, functional-based approach, it requires no additional silicon to implement internal structures specific for testing. Additionally, since it assumes an extensive access to the network boundaries, the test application time tends to be much shorter than in conventional external testing approaches. However, the price to pay for this extensive access is a very high I/O pin overhead, and this is the major drawback of the approach.


In terms of fault diagnosis at the wire level, the test sequence used in the direct access approach has the potential to identify the area (intralink) affected by the fault, but cannot ensure the precise identification of the type of wiring fault and of the faulty wires. However, at the network level, faulty paths traversing any two ports of a particular router can be diagnosed, as detailed in Raik et al. (2007) and further discussed in Sect. 6.3.2.1. Nevertheless, it is not possible to determine whether the fault affects the incoming channel, the outgoing channel, or the router internal path connecting the two channels.

7.3 Comparing the Approaches

In previous sections, four selected approaches for communication channel testing were presented. The main features of these approaches are summarized in Table 7.2. The first two approaches in the table (crosstalk and deflective) were classified as structural testing, while the other two (interlink and direct access) were classified as functional testing.

From the second column of Table 7.2, one can notice that, except for the deflective approach, all approaches are based on more sophisticated models than the stuck-at fault model. The more advanced models consider transition faults, open and short circuits, and crosstalk effects in the interconnects. Most approaches consider faults within the same communication channel (intralink) that affect exclusively data wires (see the third column). The only exception is the interlink approach, which detects short circuits involving data, control and handshake wires located in the same or in different communication channels. The approaches classified as structural testing implement BIST, as stated in the fourth column. The functional-based approaches implement either BIST or external testing. In both cases, they apply specific test configurations that traverse particular test paths and activate the normal functional modes of the network to exercise the communication channels (links) and, eventually, the routers (switches). In terms of the patterns applied for testing, the four approaches all differ from each other. The crosstalk approach applies test patterns that expose the six effects accounted for in the Maximal Aggressor Fault (MAF) model. The deflective approach computes the test stimuli using ATPG tools. The original interlink approach applies the Walking sequence for the detection of AND- and OR-type short circuits, but an extension was proposed that also applies MAF vectors. Finally, the direct access approach builds its detection strategy on the checkerboard pattern and its complement. According to the sixth column, the four communication channel testing approaches were applied to different networks (SoCIN and Nostrum), using different topologies (mesh and butterfly fat-tree). As in the case of the routers, this fact makes it difficult to compare the testing approaches against each other. However, it is not difficult to conclude that the main ideas supporting any of these approaches can easily fit other networks and topologies available in the literature and in the market.


Although it is not possible to perform a straightforward comparison considering the benefits and costs of the studied testing approaches, Table 7.3 makes an attempt to point out the main differences between them, highlighting the advantages and drawbacks of each approach.

In terms of fault coverage, two different scenarios can be observed in Table 7.3. The approaches that were specifically developed for the test of the communication channels (the crosstalk and the interlink approaches) are naturally those that achieve the highest scores. The other two (the deflective and the direct access approaches), originally developed for the test of routers, perform worse in this respect. The main reason for this difference is the fault model in use. The deflective approach considers stuck-at faults only, and it does so for the test of the router logic, not for the wiring portion of the network. Since no typical interconnect fault model is assumed, it is graded low. The direct access approach chooses test patterns with limited coverage of intralink interconnect defects: stuck-at faults, opens, transition delays and short circuits involving adjacent wires are covered. Since not all possible intralink shorts are detected, no interlink shorts are covered, and crosstalk effects are disregarded, this approach is scored medium in the second column of Table 7.3. The other two approaches receive better grades, because they adopt interconnect-specific and inclusive fault

Table 7.2 Link testing approaches: summary of features

Test approach  Fault model                  Tested block                           Test type                                   Test patterns          NoC topology
Crosstalk      MAF                          Data intralink wires                   Concurrent, distributed BIST                MAF                    Mesh, butterfly fat-tree
Deflective     Stuck-at                     Routers, links                         BIST, test phases                           ATPG                   Nostrum
Interlink      Shorts, MAF                  Data, control, handshake wires         BIST, test paths                            Walking sequence, MAF  SoCIN mesh
Direct access  Stuck-at, delay, open/short  Router datapath, data intralink wires  External at-speed, specific configurations  Checkerboard           Nostrum

Table 7.3 Link approaches: test capabilities and costs

Test approach  Fault coverage                  Test time       Area overhead   I/O pin overhead  Diagnosis
Crosstalk      Medium to high                  Low to medium   High to medium  None              Potentially faulty wire
Deflective     High (router), low (link)       Low             High            None              Potentially faulty router, faulty link
Interlink      High                            Medium to high  High            Low               Yes, faulty wire
Direct access  Medium (router), medium (link)  Low             None            High              Yes, faulty link-router


models. On one hand, the Walking sequence covers all the faults mentioned above, except crosstalk effects. On the other hand, test vectors built to expose the crosstalk effects of the MAF model cover both conventional wiring faults and crosstalk. Since the fault coverage of the crosstalk approach is restricted to a single channel, it is scored medium to high. The interlink approach is the only one capable of detecting interconnect shorts and crosstalk effects involving wires located in different communication channels and, for this reason, is graded high fault coverage.

From the point of view of the test application time (third column of Table 7.3), whenever many test configurations are needed, scan chains are used for BIST parameterization, or test data is transported through the network to the channel under test, the test length tends to be longer, because the application of test stimuli and the collection of test responses are dominated by serial operations. In addition, test sequences with many test vectors and long test paths naturally add to the test application time. The interlink approach uses scan chains to load BIST parameters and unload BIST results. When only interconnect short faults are considered, the number of test sessions is limited, but the number of test configurations, although mostly applied in parallel, may be high. When the integration with the test of routers is considered, the number of test configurations increases even further. When the MAF model is used, the test paths become very long. Finally, along with the deflective approach, the interlink approach is the one that needs the highest number of test vectors. Therefore, this is the approach that tends to be the most time-consuming test option. When point-to-point BIST is considered, since test generators and response analyzers are locally implemented and apply stimuli and collect responses in parallel, the test length tends to be much shorter. This fact grants the low test time score to the deflective approach, even though the number of test vectors to apply may be high. The crosstalk approach, in the point-to-point configuration, is definitely the best option in terms of application time because, in addition to the locality of TDGs and TEDs, it requires few test vectors (only eight). However, if distributed BIST (either multicast or unicast) is considered, the test application time will greatly increase due to the need to transport test data to the channel under test by passing through already tested channels. These are the reasons why the test time for the crosstalk approach is scored low to medium. The application time of conventional external testing tends to be long, due to the limited controllability and observability of the internal logic as measured from the I/O pins of complex SoCs. Since the direct access approach, although an external testing approach, assumes extensive access to the network boundaries, its test time tends to be much shorter than in the conventional approach and thus comparable to the test time of the deflective approach.

It is common knowledge that the area overhead penalty (fourth column of Table 7.3) increases when one moves from external testing to BIST. The point-to-point crosstalk, the deflective and the interlink approaches implement BIST schemes based on local TDG-TED pairs. Those three approaches are thus ranked as high area overhead. The crosstalk approach, when implementing distributed unicast or multicast BIST, shares the TDG and thus alleviates the area overhead. This is the reason why a medium grade is also mentioned for this approach. Finally, the direct access functional-based approach implements external testing and thus does not require extra logic to be embedded for testing purposes.


In terms of I/O pin overhead (fifth column of Table 7.3), similarly to conventional external testing, the BIST schemes of the crosstalk and the deflective approaches may not require additional I/O pins since, in order to run the self-test procedure, it is possible to share functional SoC pins. The external testing implemented in the direct access approach, nevertheless, is quite unconventional since, to provide access to the boundaries of an m × n mesh NoC with channel width w, 4·w·(m + n) additional I/O pins are required. In the case of the interlink approach, the BIST scheme implemented is such that the TDGs and TEDs are parameterized by the ATE through a configuration scan chain. Therefore, only a few additional pins are needed to implement the scan-in, scan-out and control signals.
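As a hypothetical illustration of this formula, a 4 × 4 mesh with 32-bit channels would already require 4 · 32 · (4 + 4) = 1,024 additional I/O pins.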

Only the interlink and the direct access approaches have explicitly demonstrated their fault diagnosis capabilities. In the direct access approach, a faulty path passing through a router can be uniquely identified by the diagnosis algorithm. Nevertheless, it is not possible to determine whether the fault affects the incoming channel, the outgoing channel, or the router internal path connecting the two channels. In the interlink approach, by using an extended test scheme based on five test cycles, instead of one, and by serializing the application of the test configurations, the fault diagnosis of pairwise shorts becomes feasible at the wire level. The other two approaches have not declared how capable they are of diagnosing faults but, as done for the router testing approaches, we can speculate. In the case of the crosstalk approach, if point-to-point BIST is used, fault diagnosis will also be possible at the wire level. If distributed BIST is used, on the other hand, the crosstalk approach has the potential to identify faulty link-router sets. Once a communication channel in a particular test path fails the test, one can conclude that either the channel itself or the router that delivered the test patterns to the channel is faulty. If the same channel is part of other test paths, it is possible that, by combining the test results, one can identify whether the fault affects the communication channel or the router. Finally, since in the deflective approach the communication channels are redundantly checked in different test phases, it is possible to distinguish between a faulty link and a faulty router if the results of the two test phases of the two disjoint sets of routers are jointly analyzed.

As in the case of the router testing approaches, it is clear, from the analysis performed for the communication channel testing, that all studied approaches have benefits and drawbacks. Since in many aspects these approaches complement each other, combining them may be the best way of meeting the requirements of a particular application.

7.4 Concluding Remarks

In this chapter, we have discussed testing approaches that aim at the detection and diagnosis of manufacturing faults in network-on-chip communication channels. As for the routers, these test techniques were classified as either structural or functional-based approaches.


The techniques presented are not the only works available in the literature that address the problem of link testing, but those that were considered good representatives of groups of papers dealing with the same central ideas. For further reading, we recommend other very interesting works that, due to time limitations, could not be covered in this chapter: Bengtsson et al. (2006a, b), Mondal et al. (2006), Alaghi et al. (2008) and Concatto et al. (2009).

So far, we have been dealing with the test challenges that relate to 2D implementations of Networks-on-Chip whose links are designed using electrical interconnects. However, 3D technologies (Feero and Pande 2009) and optical interconnects (Brière et al. 2007) have more recently been considered as the means to integrate ever more complex and higher-performance functions through NoC-based Systems-on-Chip. Preliminary work has been conducted to provide a solution to test the communication channels of stacked mesh NoCs (Chan and Hsu 2010), but further research on that is imperative. Over the next years, more solutions shall also be investigated that can appropriately address the particular problem of testing NoC links implemented using optical interconnects.

References

Alaghi A, Sedghi M, Karimi N, Fathy M, Navabi Z (2008) Reliable NoC architecture utilizing a robust rerouting algorithm. In: Proceedings of the east-west design and test symposium, Lviv, Ukraine, pp 200–203

Amory AM, Brião E, Cota E, Lubaszewski M, Moraes FG (2005) A scalable test strategy for network-on-chip routers. In: Proceedings of the international test conference (ITC), Austin, Texas

Bai X, Dey S, Rajski J (2000) Self-test methodology for at-speed test of crosstalk in chip interconnects. In: Proceedings of the design automation conference (DAC), Los Angeles, CA, pp 619–624

Bengtsson T, Jutman A, Kumar S, Ubar R, Peng Z (2006a) Off-line testing of delay faults in NoC interconnects. In: Proceedings of the EUROMICRO conference on digital system design, Dubrovnik, Croatia

Bengtsson T, Kumar S, Ubar R, Jutman A (2006b) Off-line testing of crosstalk induced glitch faults in NoC interconnects. In: Proceedings of the Norchip conference, Linköping, Sweden, pp 221–225

Botelho M, Kastensmidt FL, Lubaszewski M, Cota E, Carro L (2010) A broad strategy to detect crosstalk faults in network-on-chip interconnects. In: Proceedings of the 18th IEEE/IFIP international conference on VLSI and system-on-chip (VLSI-SoC), Madrid, Spain

Brière M, Girodias B, Bouchebaba Y, Nicolescu G, Mieyeville F, Gaffiot F, O’Connor I (2007) System level assessment of an optical NoC in an MPSoC platform. In: Proceedings of design, automation and test in Europe conference (DATE), Nice, France

Chan MJ, Hsu CL (2010) A strategy for interconnect testing in stacked mesh networks-on-chip. In: Proceedings of the 25th IEEE international symposium on defect and fault tolerance in VLSI systems, Kyoto, Japan, pp 122–128

Concatto C, Almeida P, Kastensmidt F, Cota E, Lubaszewski M, Hervé M (2009) Improving yield of torus NoCs through fault-diagnosis-and-repair of interconnect faults. In: Proceedings of the 15th IEEE international on-line testing symposium (IOLTS), Sesimbra, Portugal, pp 61–66

Cota E, Kastensmidt FL, Cassel M, Meirelles P, Amory A, Lubaszewski M (2007) Redefining and testing interconnect faults in mesh NoCs. In: Proceedings of the international test conference (ITC), Santa Clara, CA


Cota E, Kastensmidt FL, Cassel M, Hervé M, Almeida P, Meirelles P, Amory A, Lubaszewski M (2008) A high-fault-coverage approach for the test of data, control, and handshake interconnects in mesh networks-on-chip. IEEE Trans Comput 57(9):1202–1215

Cuviello M, Dey S, Bai X, Zhao Y (1999) Fault modeling and simulation for crosstalk in system-on-chip interconnects. In: Proceedings of the international conference on computer-aided design, San Jose, CA, pp 297–303

Feero BS, Pande PP (2009) Networks-on-chip in a three-dimensional environment: a performance evaluation. IEEE Trans Comput 58(1):32–45

Grecu C, Pande P, Wang B, Ivanov A, Saleh R (2005) Methodologies and algorithms for testing switch-based NoC interconnects. In: Proceedings of the IEEE international symposium on defect and fault tolerance in VLSI systems, Monterey, CA, pp 238–246

Grecu C, Pande P, Ivanov A, Saleh R (2006) BIST for network-on-chip interconnect infrastructures. In: Proceedings of the IEEE VLSI test symposium (VTS), Berkeley, CA

Hassan A, Rajski J, Agrawal V (1988) Testing and diagnosis of interconnects using boundary scan architecture. In: Proceedings of the international test conference (ITC), Washington, DC

Hervé M, Cota E, Kastensmidt FL, Lubaszewski M (2009a) NoC interconnection functional testing: using boundary-scan to reduce the overall testing time. In: Proceedings of the 10th Latin American test workshop (LATW), Búzios, Brazil

Hervé M, Cota E, Kastensmidt FL, Lubaszewski M (2009b) Diagnosis of interconnects in mesh NoCs. In: Proceedings of the IEEE/ACM international symposium on networks-on-chip (NOCS), San Diego, CA, pp 256–265

Hervé M, Almeida P, Kastensmidt FL, Cota E, Lubaszewski M (2010) Concurrent test of network-on-chip interconnects and routers. In: Proceedings of the 11th Latin American test workshop (LATW), Punta del Este, Uruguay

Kautz W (1974) Testing for faults in wiring networks. IEEE Trans Comput C-23(4):358–363
Lien J, Breuer M (1991) Maximal diagnosis for wiring networks. In: Proceedings of the international test conference (ITC), Nashville, TN
Millberg M, Nilsson E, Thid R, Jantsch A (2004) Guaranteed bandwidth using looped containers in temporally disjoint networks within the Nostrum network on chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Paris, France, pp 890–895

Mondal M, Wu X, Aziz A, Massoud Y (2006) Reliability analysis for on-chip networks under RC interconnect delay variation. In: Proceedings of the international conference on nano-networks, Lausanne, Switzerland

Petersén K, Öberg J (2007) Toward a scalable test methodology for 2d-mesh network-on-chip. In: Proceedings of the design, automation and test in Europe conference (DATE), Nice, France, pp 367–372

Raik J, Govind V, Ubar R (2006) An external test approach for networks-on-a-chip switches. In: Proceedings of the IEEE Asian test symposium (ATS), Fukuoka, Japan

Raik J, Ubar R, Govind V (2007) Test configurations for diagnosing faulty links in NoC switches. In: Proceedings of the IEEE European test symposium (ETS)

Stewart K, Tragoudas S (2006) Interconnect testing for networks on chip. In: Proceedings of the IEEE VLSI test symposium (VTS), Berkeley, CA

Zeferino C, Susin A (2003) SoCIN: a parametric and scalable network-on-chip. In: Proceedings of the ACM/IEEE/SBC/SBMicro symposium on integrated circuits and systems design (SBCCI), São Paulo, Brazil, pp 169–174


Chapter 8
Error Control Coding and Retransmission

This part of the book is devoted to on-line Network-on-Chip (NoC) testing strategies, while the previous part is devoted to off-line NoC testing strategies. The main difference is that the former detect run-time faults during the system's mission mode, while the latter are typically used to detect manufacturing defects while the system is in test mode. This chapter is devoted to on-line detection of faults in the data transmitted over the NoC. Due to the effect of deep submicron (DSM) technologies on circuit reliability, the designer can no longer assume that the NoC is fault-free during its normal execution. For this reason, designers add test approaches such as error control coding (ECC), data retransmission, or a combination of both to detect and deal with these run-time faults. The problem is that these test approaches have a cost in terms of, for instance, silicon area, codec delay, network congestion, and energy consumption. Thus, the challenge for the designer is to find a good trade-off between these costs and the potential benefit of the test approach in terms of reliability. This chapter presents the most relevant on-line NoC testing strategies that have been proposed, together with their reported trade-offs between cost and reliability.

8.1 Introduction

Up to this chapter, the test techniques presented in this book have been mainly devoted to testing the chip for permanent manufacturing defects, such that only ‘good’ chips are sent to the market. Either an external tester, BIST, or a combination of both is used to detect these defects while the chip is in test mode. Test patterns specially created for this task are used to maximize the test coverage and reduce the test time.

Now the chip designer faces a different challenge. The motivation behind the techniques presented in this part of the book is that the chip has passed the manufacturing test and is integrated into a system or product. The point is that the combination of newer DSM technologies and lower voltages makes the chip more vulnerable to transient effects, such as radiation and electromagnetic interference, and also to permanent effects, such as crosstalk, device aging, and physical wear-out


due to electromigration. These effects can occur at run-time, while the chip is operating in normal mode. Depending on the system application, these effects can incur, for instance, economic or even catastrophic losses. So, the chip designer has the challenge of creating a robust design using unreliable DSM technologies.

The usual approaches to deal with on-line faults are based on redundancy: information redundancy (e.g. cyclic codes); time redundancy (e.g. data retransmission); space redundancy (e.g. Triple Modular Redundancy, TMR); or a combination of these approaches. Table 8.1 shows that information and time redundancy are the most used approaches, while space redundancy is only used in small parts of the circuit due to its cost. These approaches are detailed throughout this chapter.
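As a minimal illustration of the space-redundancy style, the sketch below implements a bitwise TMR majority voter (names and widths are illustrative):

```python
# Bitwise majority voter over three copies of the same word (TMR).
def tmr_vote(a, b, c):
    return (a & b) | (a & c) | (b & c)

assert tmr_vote(0b1010, 0b1010, 0b0010) == 0b1010  # one faulty copy is masked
```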

The usual direct costs of the on-line test circuitry are silicon area, power dissipation, energy consumption, and codec delay. There are also costs which are not directly related to the on-line test circuitry itself, which we call indirect costs. For instance, a packet with a bit flip needs retransmission. This retransmission has a negative impact on the packet latency, since the receiver waits longer to receive the correct packet. The retransmission also increases the NoC congestion, globally affecting the latency of the other concurrent packets. Finally, the retransmission creates more switching activity in the network, increasing its energy consumption.

As already mentioned, the most difficult challenge related to on-line test approaches is to design them such that these costs (both direct and indirect) are balanced with the potential reliability improvement and the application constraints. Complex error detection and correction approaches typically provide a more robust design. However, studies (Murali et al. 2005) demonstrate that these complex error detection and correction approaches may require unacceptably high energy dissipation and area overhead, and can adversely affect the NoC in terms of throughput and latency.

This balance between costs and benefits can only be achieved with designer experience and experimentation. The motivation of this part of the book is to give the reader the experience reported in the state of the art in on-line testing for NoCs. The reader can reproduce the presented techniques either to solve real problems or to extract their own observations and conclusions, advancing the state of the art in this topic.

Table 8.1 Classification of on-line test approaches

Information redundancy
  Cyclic codes: Vellanki et al. (2005), Bertozzi et al. (2005), Murali et al. (2005), Sridhara and Shanbhag (2005), Frantz et al. (2007), Lehtonen et al. (2007, 2010), Ganguly et al. (2008), Yu and Ampadu (2010)
  Crosstalk avoidance coding (CAC): Sridhara and Shanbhag (2005), Ganguly et al. (2008)

Time redundancy
  Data retransmission: Vellanki et al. (2005), Bertozzi et al. (2005), Murali et al. (2005), Frantz et al. (2007), Lehtonen et al. (2007), Ganguly et al. (2008)
  Doubled data: Lehtonen et al. (2007)

Space redundancy
  Spare wires: Lehtonen et al. (2007, 2010), Yu and Ampadu (2010)
  TMR: Frantz et al. (2007), Lehtonen et al. (2007, 2010)


In the remainder of this chapter, a selection of NoC-specific test approaches is presented. The selected approaches are not the only works available in the literature on the topic, but those that were considered good representatives of groups of papers dealing with the same central ideas for on-line NoC testing.

There is a range of different classes of fault-tolerant approaches for NoCs: layout-level approaches, circuit-level approaches, fault-tolerant routing algorithms, and even fault-tolerant task mapping/migration. Layout-level approaches are usually technology dependent, thus they are not generic enough for the didactic purposes of this book. High-level fault-tolerant approaches, based on routing algorithms and task mapping, run on top of circuit-level fault detection/correction approaches. Thus, a good understanding of circuit-level fault-tolerant approaches is essential to build reliable systems. For these reasons, this book focuses on circuit-level fault-tolerant approaches for NoCs.

This chapter is organized as follows: Sect. 8.2 describes the most basic on-line test approaches, based on information and time redundancy (i.e. ECC and retransmission). Section 8.3 presents other approaches which include space redundancy in specific parts of the NoC (e.g. flow control wires and part of the routing control logic). Section 8.4 returns to the subject of information redundancy, focusing on codes to detect and correct crosstalk faults. Section 8.5 compares the presented approaches in terms of their capabilities and costs. Finally, Sect. 8.6 presents a general discussion about the presented approaches and proposes some interesting future research subjects.

8.2 Joint Information and Time Redundancy

The techniques presented in this section combine information redundancy (i.e. error control codes) and time redundancy (i.e. data retransmission) to protect noisy interconnect channels, with a focus on NoC-based chips.

Bertozzi et al. (2005) uses a framework to model on-chip AMBA (Advanced Microcontroller Bus Architecture) interconnect as noisy channels (Fig. 8.1) and evaluates the impact of different error control schemes providing the designers with guidelines for the selection of energy efficient error control methods.

Bertozzi et al. (2005) argue that there are many solutions to improve reliability, but most of them are based on layout knowledge or require acting at the electrical level. The authors argue that bus encoding is the most efficient approach, since it is more general (no layout knowledge is needed) and it makes it possible to trade off reliability and energy. In general, the larger the detection capability of a coding scheme, the lower the voltage swing that can be used in the interconnects, minimizing energy consumption. The reliability reduction caused by the lower voltage is counterbalanced by better error detection and recovery capabilities.

The authors (Bertozzi et al. 2005) performed synthesis of error encoders and decoders, such as parity check, cyclic redundancy check (CRC), and Hamming encoding, in order to evaluate their reliability-energy tradeoff. The synthesis results


show that CRC is the most lightweight implementation, similar in cost to single parity, while providing better error detection capability.
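For concreteness, the sketch below shows CRC check-bit generation by bitwise long division; the polynomial (CRC-8) and the widths are arbitrary choices for the example, not the codecs synthesized in Bertozzi et al. (2005).

```python
POLY, R = 0b100000111, 8              # CRC-8 polynomial x^8 + x^2 + x + 1

def crc_remainder(value, nbits):
    """Bitwise long division of an nbits-wide value by POLY (GF(2))."""
    for i in range(nbits - 1, R - 1, -1):
        if value & (1 << i):
            value ^= POLY << (i - R)  # cancel the leading one
    return value                      # R-bit remainder = check bits

def encode(flit, w):
    """Append R check bits to a w-bit flit."""
    return (flit << R) | crc_remainder(flit << R, w + R)

def check(codeword, w):
    return crc_remainder(codeword, w + R) == 0   # 0 means no error seen

cw = encode(0b1011001110101101, 16)
assert check(cw, 16)
assert not check(cw ^ (1 << 5), 16)   # a single bit flip is detected
```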

In terms of energy efficiency, the results show that, in general, retransmission-based strategies perform better than error correction because they can work at lower voltage swings thanks to their higher detection capability. Moreover, when the interconnect is shorter, for instance in the case of two neighboring routers, the contribution of the codec complexity becomes relevant. In this case, error correcting codes are less energy efficient due to the complexity of the correcting circuitry. Still, if the bit error rate for on-chip communication is low, error correcting decoders can significantly degrade performance and energy efficiency at each transfer. Retransmission-based strategies, on the other hand, would only occasionally affect on-chip communication.

Bertozzi et al. (2005) admit that, since the used interconnect model does not represent a NoC, where there are several FIFOs and routers located in between the source and the target of the message, the retransmission strategies will cause extra energy consumption. In this case error correction codes can be more efficient.

Vellanki et al. (2005) implemented two low-overhead error control schemes, single error detection and retransmission (PAR) and single error correction (SEC), at the link level of a mesh-based NoC. The authors evaluate several performance-related metrics, such as network latency, queue latency, acceptance rate, and power dissipation, but these metrics are out of the scope of this book. The authors also evaluate the error control schemes, considering network acceptance rate, network latency, and network power dissipation under low/high bit error rates and low/high packet injection rates. The results, as anticipated by Bertozzi et al. (2005), show that SEC outperforms PAR in the following cases:

- {low bit error rate, high injection rate};
- {high bit error rate, low injection rate};
- {high bit error rate, high injection rate}.

Like the previous authors, Murali et al. (2005) acknowledge that the choice of the error recovery scheme for an application and its NoC requires exploring multiple power-performance-reliability trade-offs. Thus, their motivation is to provide information to designers that aids in the choice of an appropriate error control mechanism for the target application.

Unlike the previous authors, Murali et al. (2005) explore error control mechanisms that can use end-to-end flow control (at the network level, Fig. 8.2a), router-to-router flow control (at the link level, Fig. 8.2b), or a hybrid of the two.

Fig. 8.1 Generic model for ECC for an on-chip DSM bus


The following error control schemes are evaluated:

- The end-to-end (ee) schemes add parity (ee-par) or cyclic redundancy check (ee-crc) codes to packets. A CRC or parity encoder is added to the sender NI and a decoder is added at the receiver NI. The receiver NI sends a nack or an ack signal back to the sender, depending on whether the data contained an error or not (a minimal sketch of this detect-and-retransmit loop is given after this list);
- The switch-to-switch schemes have the error detection hardware (CRC) at each switch input and retransmit data between adjacent switches. There are two types of switch-to-switch schemes: switch-to-switch flit-level (ssf) and switch-to-switch packet-level (ssp);
- The hybrid scheme has a single-error-correcting, multiple-error-detecting (ec + ed) code at the receiver to correct any single-bit error on a flit, but it requests end-to-end retransmission in case of multiple errors.
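To make the ee mechanism concrete, the following minimal Python sketch shows the detect-and-retransmit loop of an ee-par scheme. The channel() function is a hypothetical stand-in for the noisy interconnect, the ack/nack signaling is reduced to a return value, and the retry limit is an illustrative assumption, not a parameter from Murali et al. (2005).

    def parity(bits):
        # Even parity over a list of 0/1 bits.
        p = 0
        for b in bits:
            p ^= b
        return p

    def send_ee_par(payload, channel, max_tries=4):
        # The sender NI appends the check bit; the receiver NI recomputes it.
        # A clean check plays the role of the ack; a failed check plays the
        # role of the nack and triggers a retransmission of the packet.
        for _ in range(max_tries):
            coded = payload + [parity(payload)]
            received = channel(coded)      # hypothetical noisy link
            if parity(received) == 0:      # payload and check bit cancel out
                return received[:-1]       # ack: deliver the payload
            # nack: fall through and retransmit
        raise RuntimeError("retry limit exceeded")

An ee-crc scheme has the same structure, with the single parity bit replaced by a CRC remainder computed over the whole packet.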

Fig. 8.2 Basic ECC approaches used on NoCs: (a) network-level ECC, an end-to-end error control scheme where the encoder and the decoder are at the ends of the transmission; (b) link-level ECC, a router-to-router error control scheme where the encoders and decoders can also be in the intermediate routers (Adapted from Murali et al. 2005)

The results in terms of power consumption show that the original circuit and the ee-par scheme consume more power than the ee-crc and ec + ed schemes, since the original and ee-par schemes have reduced detection capability and hence require a higher operating voltage to achieve the same residual flit-error rate. The hybrid ec + ed scheme has lower power dissipation at high residual flit-error rates, and the ee-crc has lower power dissipation at lower residual error rates. The reason is that at higher error rates, the retransmission of packets results in increased network traffic in the ee-crc scheme, dissipating more power than the ec + ed scheme. At lower error rates, the overhead of the additional check bits of the ec + ed scheme, compared to the number of check bits of the ee-crc scheme, makes ec + ed less energy efficient.

With a low flit-error rate and a low injection rate, the various schemes' average packet latencies are almost the same. However, as the error rate and/or the flit injection rate increases, the end-to-end (ee) retransmission scheme incurs a larger latency penalty than the other schemes. The packet-based switch-to-switch (ssp) retransmission scheme has higher packet latency than the flit-based switch-to-switch (ssf) retransmission scheme, because the latter detects errors on packets earlier. As expected, the hybrid (ec + ed) scheme has the lowest average packet latency of the schemes.

The main contributors to the power dissipation of the ee and ec + ed schemes are the packet buffering at the NIs and the network traffic caused by ack/nack packets. For the ssf and ssp schemes, the major power overhead results from the retransmission buffers at the switches. It can be concluded from this analysis that new methods to minimize buffering and to reduce ack/nack traffic are promising. Murali et al. (2005) suggest as future work studying the effects of application- and software-level reliability schemes and developing online adaptation capabilities, such as reconfigurable designs.

8.3 Joint Information, Time, and Space Redundancy

The techniques presented in the previous section efficiently protect the interconnect against transient faults; however, they are not as efficient against multiple permanent faults. Approaches based on space redundancy are usually more efficient against permanent effects.

Approaches considering only transient faults in NoC links are still vulnerable, since a single permanent fault can drastically reduce or even eliminate the correction capabilities of commonly used codes. In addition, the retransmission method does not work in the presence of permanent errors if the same path is taken. For this reason, Lehtonen et al. (2007) propose and evaluate fault tolerance designs for the communication links in NoC architectures considering permanent, intermittent, and transient faults.

The reference link design consists of Hamming code and retransmission, as proposed by Bertozzi et al. (2005), which protects the links against transient faults only. Two methods are designed to tolerate permanent and intermittent faults: split transmission, based on time redundancy, and spare wires, based on hardware redundancy.

In the split transmission approach, a 64-bit link is split into four interleaved sections and five check bits are calculated for each section. The spare wires provide unchanged performance in the presence of up to four permanent errors. The reconfiguration circuitry uses the error syndrome of the incoming data at the receiver to locate and replace the defective wires. The link control wires are protected with TMR.
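A minimal sketch of the split transmission idea follows, under the assumption that the four sections are bit-interleaved across the link and that each 16-bit section is protected by the five check bits of a Hamming(21, 16) code; the function names are illustrative.

    def hamming21_16_check_bits(data16):
        # Five check bits of a Hamming(21, 16) codeword: data bits occupy
        # the non-power-of-two positions 3, 5, 6, 7, 9, ..., 21, and check
        # bit c covers every position whose index has bit c set.
        codeword = [0] * 22  # positions 1..21 (index 0 unused)
        data_positions = [p for p in range(1, 22) if p & (p - 1) != 0]
        for pos, bit in zip(data_positions, data16):
            codeword[pos] = bit
        return [sum(codeword[p] for p in range(1, 22) if p & c) % 2
                for c in (1, 2, 4, 8, 16)]

    def split_into_sections(word64):
        # Interleave a 64-bit word (list of bits) into four 16-bit sections,
        # so that physically adjacent wires carry bits of different sections.
        return [word64[i::4] for i in range(4)]

Each section then travels with its own check bits appended, so a burst of errors on adjacent wires is spread across different codewords.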


The results show that the proposed design performs the same way as the reference design when there are no intermittent or permanent errors. Additionally, the proposed design performs better when intermittent and permanent errors are taken into account. The results also show that the split transmission design tolerates faults slightly better than the design with spare wires. On the other hand, the spare wire approach is more power efficient than the split transmission design. The energy consumption using the spare wire design is approximately the same as with the reference design.

The limitation of the evaluation is that the transmitter and receiver circuits were presumed error-free. Moreover, there is no information about the testability of the proposed designs in terms of manufacturing defects.

Lehtonen et al. (2010) propose improved approaches to replace link wires affected by permanent faults with spare wires without interrupting the data flow. Two methods are proposed to detect permanent errors at runtime: in-line test (ILT) and syndrome storing-based detection (SSD). Both methods are based on the simplified link structure illustrated in Fig. 8.3.

The scheme represented in Fig. 8.3 encodes the incoming k-bit data word in the transmitter into a codeword of width n, sends these n bits through the link, and decodes the information in the receiver. The decoder corrects any errors and outputs the original k-bit data word. The link also has s spare wires. The reconfiguration units at the transmitter and receiver determine which of the n + s lines are in use and which are left idle. In case a permanent fault is detected, the reconfiguration unit at the receiver sends reconfiguration information to the transmitter regarding the location of the faulty line, which is replaced by a spare wire. Moreover, all the link-level control signals, such as valid and ready, are protected with TMR.
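The reconfiguration step itself amounts to bookkeeping over the n + s physical lines. The sketch below, with illustrative names, shows only the essential action: the logical bit mapped to a faulty wire is remapped to the next idle spare at both ends of the link.

    def remap_faulty_line(line_map, faulty_wire, idle_spares):
        # line_map[i] is the physical wire carrying codeword bit i.
        # On a permanent fault, the receiver signals the faulty wire back
        # to the transmitter, and both ends switch that bit to a spare.
        if not idle_spares:
            raise RuntimeError("no spare wires left")
        spare = idle_spares.pop(0)
        line_map[line_map.index(faulty_wire)] = spare
        return line_map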

Fig. 8.3 Simplified reconfigurable link system proposed by Lehtonen et al. (2010). The letter V represents a TMR voter

The in-line test (ILT) method tests each adjacent pair of wires in a link for opens and shorts. These tests are executed periodically to ensure that wires are not affected by permanent faults. Initially, the ILT control unit reconfigures the target pair of wires such that they are connected to the test pattern generator (TPG), and the data on those lines are rerouted to spare wires. The TPG sends the test patterns to the target wires and the responses are compared at the receiver to determine whether there is a permanent fault in that pair of wires. If there are no errors, the functional wires are reconfigured to carry data once again, and the process is repeated for each pair of wires. If errors are detected, a spare wire replaces the defective one. The identification of the defective wire is signaled from the receiver back to the TPG side to perform the correct reconfiguration at the source.
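The per-pair test can be sketched as follows; drive() and sample() are hypothetical stand-ins for the TPG and the receiver-side comparator, and the patterns are illustrative rather than the exact sequences used by Lehtonen et al. (2010).

    def test_wire_pair(drive, sample):
        # Complementary values expose a short between the pair; equal
        # values toggled through both polarities expose opens.
        for pattern in [(0, 0), (1, 1), (0, 1), (1, 0)]:
            drive(pattern)
            if sample() != pattern:
                return False  # pair is defective: reconfigure onto a spare
        return True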

The syndrome storing-based error detection (SSD) method is based on the evaluation of consecutive syndromes at the receiver, using the syndrome information already provided by an existing ECC decoder. If several syndromes point to the same error location, the fault is considered permanent and the affected line can be reconfigured.
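The SSD idea can be sketched as a small watcher over the decoder's syndrome stream; the window size and repetition threshold below are illustrative assumptions, not values from the paper.

    from collections import deque

    class SyndromeWatcher:
        def __init__(self, window=8, threshold=4):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, error_position):
            # error_position: the bit index pointed at by the ECC syndrome,
            # or None when the syndrome is zero (no error).
            self.history.append(error_position)
            if (error_position is not None
                    and self.history.count(error_position) >= self.threshold):
                return error_position  # likely permanent: trigger reconfiguration
            return None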

Both proposed approaches are compared against two reference methods: a Hamming (21, 16) code with four interleaving sections, resulting in a code word width of 84 bits, representing a simple ECC scheme; and Bose-Chaudhuri-Hocquenghem (BCH) code, representing a more complex coding method, with increased error detection and correction capability compared to Hamming.

The proposed ILT and SSD methods are configured with three spare wires and with the same Hamming configuration (a Hamming (21, 16) code with four interleaving sections) as the base ECC code. The link control wires are protected with TMR.

The results show that the Hamming and BCH fault tolerances decrease significantly in the presence of permanent faults, while the fault tolerance of the proposed approaches is less affected. The effectiveness of the SSD method is limited by the effectiveness of the underlying ECC: since a Hamming code is used, some multiple faults cannot be detected. The ILT method does not have this limitation and can therefore correct all permanent faults as long as there are enough spare wires. Moreover, the SSD method cannot take advantage of additional spare wires if the correction capability of the underlying code is too low. The ILT method can use more spare wires at the expense of additional delay in the reconfiguration units.

Compared to the Hamming-based design, the ILT method incurs overheads of about 10%, 29%, and 23% in energy, latency, and throughput, respectively, and a silicon area overhead of 250–280%. However, compared to a BCH code with similar error correction capability for permanent errors, the ILT system requires 62–64% of its area, consumes 36% of the energy per transmitted flit, and has 39% of the latency. These results show that spare wires can be more effective (fewer penalties and more protection) against permanent errors than complex coding schemes such as BCH.

Yu and Ampadu (2010) propose a configurable error control code that adapts the number of redundant wires to varying noise conditions, achieving different error detection capabilities. The main innovation is a configurable hardware in which performance, power dissipation, and reliability can be traded off. The reliability hardware can be configured at runtime according to the transient noise conditions and the existence of permanent faults in the chip.

The proposed approach has two operating modes depending on the noise condition:


- In low noise conditions, a simple ECC is used to detect/correct transient faults, such that some unused wires (which are used by the powerful ECC detailed below) remain available for broken-wire replacement. If there are enough spare wires, the broken wires are replaced with healthy ones. When the number of broken wires exceeds the total number of spare wires, split transmission is invoked;
- In high noise conditions, a powerful ECC is used to detect more multi-bit faults on links, using all available wires. If there are not enough redundant wires for broken-wire replacement, the powerful ECC is shortened and split transmission is used to rebuild the packet.

As a case study, the evaluated link has 48 wires, of which 32 are used for data. Two encoding mechanisms are implemented:

- The simple ECC, called ECC1, implements Hamming(38, 32) and is used in low noise conditions. It can detect two-bit transient errors. In this mode, there are ten wires (48 total wires minus 38 used wires) available for permanent error recovery;
- The powerful ECC, called ECC2, implements four groups of Hamming(12, 8) and is used in high noise conditions, since it can detect multi-bit (>2) transient errors. This mode uses all 48 wires of the link; thus, no spare wires exist for broken-wire replacement. To remove this limitation, the ECC is shortened from Hamming(12, 8) to Hamming(11, 7) and the useful data width is reduced from four groups of eight bits to four groups of seven bits. This means that there is a spare wire for each group of seven wires. Split transmission is used to recover the lost payload bits: at every seven flits, one additional flit is appended to the packet (the resulting mode selection is sketched after this list).
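Assuming the 48-wire case study above, the mode selection reduces to a few comparisons; the sketch below fixes only the decision logic, not the codec hardware, and the wire counts are those of the case study.

    def select_link_mode(high_noise, broken_wires):
        # ECC1 = Hamming(38, 32): 10 of the 48 wires stay idle as spares.
        # ECC2 = 4 x Hamming(12, 8): all 48 wires in use, no spares, so it
        # must be shortened to 4 x Hamming(11, 7) plus split transmission
        # once wires break.
        if not high_noise:
            if broken_wires <= 10:
                return "ECC1 with spare-wire replacement"
            return "ECC1 with split transmission"
        if broken_wires == 0:
            return "ECC2 over all 48 wires"
        return "shortened ECC2 (11, 7) with split transmission"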

Figure 8.4 illustrates the proposed encoding method. The ECC_sel signal selects the correct ECC encoder and enables the tail flit detector, which controls the use of split transmission. In low noise conditions, a simple ECC (n1, k) is used to detect up to two faults on the link. In this mode there are n - n1 wires available as spare wires (n1 < n). In high noise conditions, a powerful ECC (n, k2) is used to detect more faults on the links. In this mode, (k - k2) bits are accumulated in the split_buf to rebuild extra flits, which are appended later to the packet.

Fig. 8.4 Output port of the configurable ECC encoder (Adapted from Yu and Ampadu 2010)

One can expect that the existence of two ECCs per link significantly increases the silicon area. However, since there is an overlap between the parity check matrices of Hamming(12, 8) and Hamming(38, 32), most of the codec circuit can be shared, reducing the silicon area.

The results show that, compared to the half splitting approach, the proposed approach has about 1% silicon area overhead and consumes 3.6% more dynamic power. On the other hand, the leakage power is reduced by 4%, the energy consumption is reduced by up to 68% (in low noise conditions) and 50% (in high noise conditions), and the latency reduction is between 48% and 71%. However, the results are compared to the half splitting approach, which alone imposes a large overhead in terms of latency and silicon area. Moreover, the power and area analyses do not include the entire reliability circuitry, such as the spare wire configuration unit which, according to Lehtonen et al. (2007), requires a significant amount of silicon area.

Frantz et al. (2007) evaluate error mitigation techniques – implemented in both hardware and software – that can cope with single event upsets (SEU) and crosstalk simultaneously. The paper evaluates SEUs not only in the NoC links, as the previous papers do, but also in the router logic. Four protection approaches are evaluated:

- HC-TS-TMR: a fully hardware-based scheme for protecting a router from SEUs and crosstalk. Hamming code protects the router input buffers against single soft errors, delayed-sampling registers mitigate soft errors and crosstalk faults in the communication channel, and TMR protects the router FSMs and control logic;
- HW-SW-basic: uses CRC and data retransmission to correct SEU effects. The CRC is computed and appended to the packet tail before transmission and verified afterwards by the recipient. The CRC is implemented in software in the IP core, minimizing the silicon overhead;
- HW-SW-ECC: the HW-SW-basic approach cannot mitigate crosstalk faults in communication channels. The HW-SW-ECC solution corrects crosstalk faults considering one victim line in the channel at a time. The ECC is implemented in hardware; the encoder is located at the router output port, while the decoder is at the input port;
- HW-SW-TS: this design deals with soft errors and crosstalk faults, considering more than one victim wire at a time. It uses a triple-sampling circuit at the router input buffers that samples the incoming data with three different clocks.

The results present a router-level comparison of silicon area, operating frequency, and power dissipation. The fully hardware-based scheme (HC-TS-TMR) requires 41% area overhead, 26% performance penalty, and 63% power penalty, which is infeasible for most applications. On the other hand, the hardware-software approaches are more energy efficient than a fully hardware-based approach. A relevant limitation of the evaluation is that no application-level analysis is performed.


Since part of the error mitigation approach is implemented in software, some application performance degradation is expected due to the processor's context switching and the error mitigation software itself. Moreover, since processors execute the error mitigation software, their energy consumption is also relevant.

8.4 Joint Error Control Coding and Crosstalk Avoidance Codes

It has been demonstrated that the coupling capacitance between the wires has a strong influence on the delay of long buses: if the coupling capacitance exceeds the loading capacitance on the wires, the delay of a transition may be up to twice as long. This effect is called crosstalk delay.

There are simple yet effective techniques to prevent crosstalk, such as placing a grounded wire between every wire on the bus or duplicating every data wire. However, these approaches double the wiring area.

Crosstalk avoidance coding (CAC) schemes are effective ways of reducing the worst-case switching capacitance of a wire by preventing a transition from causing adjacent wires to switch in opposite directions (Victor and Keutzer 2001). For instance, given the current value "0100" on the channel, the next value cannot cause any adjacent wires to transition in opposite directions, so values such as "0010", "1000", or "1010" are forbidden. The worst-case delay can be reduced by avoiding the bit patterns "010" and "101" in a transmission. This condition is referred to as the forbidden pattern (FP) condition.
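The condition is easy to check on a pair of consecutive bus values; the following sketch flags a transition in which two adjacent wires switch in opposite directions.

    def violates_fp(prev, curr):
        # prev and curr are lists of 0/1 values on the bus wires.
        for i in range(len(prev) - 1):
            a = curr[i] - prev[i]          # -1, 0 or +1 on wire i
            b = curr[i + 1] - prev[i + 1]  # transition on the neighbor wire
            if a * b == -1:                # adjacent opposite transitions
                return True
        return False

For the example above, violates_fp([0, 1, 0, 0], [0, 0, 1, 0]) returns True, so a CAC encoder must never emit that transition.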

Error correcting codes such as Hamming can be used to minimize crosstalk (Frantz et al. 2007), but these codes are not as efficient as CACs at handling crosstalk-related issues. On the other hand, CACs do not protect against transient effects. According to Sridhara and Shanbhag (2005), delay, power, and reliability are the three problems that need to be jointly addressed in the design of on-chip buses. The authors propose a framework to jointly evaluate Low-Power Codes (LPC) to reduce transition activity, Crosstalk Avoidance Codes (CAC) to reduce the delay by forbidding specific transitions, and Error Control Coding (ECC) to protect the bus in the presence of transient noise.

Despite the efforts of Victor and Keutzer (2001) to devise a linear code that avoids FPs, Sridhara and Shanbhag (2005) prove that there is no linear CAC that satisfies the FP condition while requiring fewer wires than duplication. Based on this proof, they propose a joint CAC and ECC named Duplicate-Add-Parity (DAP), illustrated in Fig. 8.5.

DAP duplicates the lines, avoiding crosstalk, and adds a single parity bit, allowing single error correction. The parity bit is recalculated at the decoder and compared against the received parity bit. If the two parity bits match, the lines used to recreate the parity bit are chosen as the output; otherwise, the other lines are chosen, as shown in Fig. 8.5. Since a single error affects at most one of the copies or the parity bit, the error is correctable.

Fig. 8.5 Joint CAC and ECC: duplicate-add-parity (DAP) (Adapted from Sridhara and Shanbhag 2005)
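A behavioral sketch of DAP at the list-of-bits level (not gate level) follows; the pairing of duplicated wires and the copy-selection rule mirror the description above.

    def dap_encode(data):
        # Each data bit drives two adjacent wires (the duplication removes
        # the forbidden patterns), and one parity bit is appended.
        p = 0
        for bit in data:
            p ^= bit
        duplicated = [b for bit in data for b in (bit, bit)]
        return duplicated + [p]

    def dap_decode(codeword, k):
        # Recompute the parity from the first copy. If it matches the
        # received parity bit, the first copy is taken; otherwise the
        # second copy is taken. A single error hits at most one copy or
        # the parity bit, so the selected copy is always error-free.
        copy_a = codeword[0:2 * k:2]
        copy_b = codeword[1:2 * k:2]
        pa = 0
        for bit in copy_a:
            pa ^= bit
        return copy_a if pa == codeword[-1] else copy_b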

The results show that DAP has the least codec area and energy overhead among the evaluated codes, while having a slightly larger codec delay than Hamming code. A bus coded with Hamming has the least area requirement. However, DAP provides speed-up and energy savings along with reliability, at an area overhead of 28%, mostly due to the line duplication.

As a limitation, DAP has only single error correction capability. In the future, more powerful error correction schemes, able to correct multiple errors, may be needed to satisfy reliability requirements.

Considering the limited transient error correction capability of the previous approach, Ganguly et al. (2008) propose the Crosstalk Avoiding Double Error Correction Code (CADEC), which is able to correct up to two errors.

The CADEC encoder, illustrated in Fig. 8.6a, is simply a combination of Hamming and DAP, where the parity is calculated over the actual data and the Hamming codeword.

The CADEC decoder, illustrated in Fig. 8.6b, initially calculates the parity bits of the individual copies (at xor1 and xor2) and compares them with the sent parity (at xor3). If the parity obtained from the second copy is different from the sent parity, then the first copy is selected at mux1; otherwise, if the two parities are equal, the second copy is selected. The syndrome module is only enabled if the parities of both copies match at the output of xor4. If the syndrome of the second copy is zero, then this copy is selected as the output of mux2; otherwise, the first copy is selected. When the parities of both copies match (at xor4), the output of mux2 is selected for decoding.


Fig. 8.6 CADEC encoder (a) and decoder (b) (Adapted from Ganguly et al. 2008)

Page 181: Reliability, Availability and Serviceability

168 8 Error Control Coding and Retransmission

CADEC can operate at a larger voltage reduction and still detect and correct more faults than other schemes such as DAP and retransmission. Moreover, the joint CAC and ECC codes (such as DAP and CADEC) also reduce the mutual switching capacitances on the wire segments, which further contributes to the reduction of energy consumption. For these reasons, CADEC is more power efficient, although the difference between CADEC and DAP is very small. On the other hand, CADEC has almost twice the silicon area and a slightly higher codec delay compared to DAP.

8.5 Comparing the Approaches

Selected approaches for on-line error detection and correction were presented in the previous sections. The main features of these approaches are summarized in Table 8.2. The first five approaches in the table – parity, CRC, Hamming, DAP, and CADEC – are classified as information redundancy. Data retransmission is classified as time redundancy, while spare wires and TMR are classified as space redundancy.

Although it is not possible to perform a straightforward comparison considering the benefits and costs of the studied testing approaches, Tables 8.2 and 8.3 make an attempt to point out the main differences between them, highlighting the advantages and drawbacks of each approach.

Parity code adds only one parity bit to the information bits and detects all error patterns with an odd number of erroneous bits, but it cannot detect double errors. Parity has a low direct cost due to its simple hardware design. On the other hand, parity has a high indirect cost when it is associated with retransmission: since parity cannot correct faults, the number of retransmissions increases. Parity also has very small detection capability for crosstalk and for transient/permanent faults under high failure rates, since it cannot detect multiple errors. Interestingly, parity has been used in combination with more complex codes as a means to reduce their switching activity, reducing the energy consumption (Ganguly et al. 2008). The parity bit can make a decoder more energy efficient, since it is not always required to compute the syndrome.

A Cyclic Redundancy Check (CRC) is a simple and very common ECC technique applied to NoCs. It has a low direct cost but, like parity, a high indirect cost, since in most cases it is used with retransmission. It can detect multiple permanent and transient faults in links and in the sequential part of buffers. CRC is particularly suitable for dealing with errors affecting lines that are close to each other or a number of contiguous lines (Bertozzi et al. 2005). With the shrinking of geometries, the distance between interconnects is smaller; therefore, even localized noise sources are likely to have an impact on multiple contiguous bus lines. CRC has a small capability to detect crosstalk compared to CACs, but a better detection capability than parity and Hamming.

Hamming code is also a very common ECC in NoCs, but it is not as simple as CRC, since the decoder requires additional error correction circuitry. It has a medium direct cost with a low indirect cost due to its fault correction capability: once faults are corrected, the probability of retransmission decreases, reducing the indirect costs. Under the presence of crosstalk and permanent faults, the Hamming code reduces or loses its fault correction capability.


Table 8.2 Costs of the on-line test approaches

Approach                     Area         Delay        Energy  Latency
Parity (information, ECC)    Low          Low          Low     High
CRC (information, ECC)       Low          Low          Low     High
Hamming (information, ECC)   Medium       Medium       Medium  Low
DAP (information, CAC)       Low          Medium       Low     Medium
CADEC (information, CAC)     Medium/high  Medium/high  Low     Low
Retransmission (time)        Low/high     NA           High    High
Spare wire (space)           High         Low          Low     Low
TMR (space)                  High         Low          High    Low


Table 8.3 Capabilities of the on-line test approaches

Approach        Transient faults  Permanent faults    Crosstalk           NoC link         NoC buffer          NoC logic
Parity          Detect single     Detect single       Minimal detection   Yes              Minimal detection   NA
CRC             Detect multiple   Detect multiple     Detection           Yes              Partial detection   NA
Hamming         Correct multiple  Reduced capability  Minimal correction  Yes              Partial correction  NA
DAP             Correct single    Reduced capability  Yes                 Yes              NA                  NA
CADEC           Correct multiple  Reduced capability  Yes                 Yes              NA                  NA
Retransmission  Yes               Usually no          Usually no          Transient only   Transient only      Transient only
Spare wire      NA                Multiple            NA                  Permanent only   NA                  NA
TMR             Yes               Reduced capability  NA                  Flow ctrl wires  Possible            In small logic


Thus, it is not very efficient against these kinds of faults. Hamming can also correct bit flips in the sequential part (not the control part) of buffers. Its error correction capability can be increased by increasing the Hamming distance of the code, at the expense of direct costs.

Duplicate-add-parity (DAP) is a joint CAC and ECC code proposed by Sridhara and Shanbhag (2005) for NoC links. DAP duplicates the lines to avoid crosstalk and adds a single parity bit, allowing single transient error correction. DAP loses its error correction capability in the case of a single permanent fault. Compared to Hamming, DAP has lower codec area, lower energy consumption, and slightly higher codec delay. A bus protected with DAP has an area overhead of 28%, mostly due to the line duplication, but it is more power efficient.

Ganguly et al. (2008) proposed the Crosstalk Avoiding Double Error Correction Code (CADEC) as a joint CAC and ECC code for NoC links. It also uses wire duplication to avoid crosstalk, but for transient errors it uses a Hamming code instead of the single parity used in DAP. Thus, by using Hamming, CADEC is able to correct two transient errors. CADEC requires more silicon area and codec delay than other CACs, but it can be more energy efficient, since the voltage can be further reduced due to its increased error correction capability.

Retransmission-based schemes are associated with an error control code which detects whether the incoming packet has an error. Typically, error detection codes such as CRC are used because of their good fault detection and low implementation cost. Retransmission schemes can recover from a transient effect, but not from a permanent or intermittent effect, such as crosstalk. Retransmission could recover from permanent effects and crosstalk if it were capable of taking a different path, avoiding the defective part of the NoC. However, this feature is not common due to its complexity.

Bertozzi et al. (2005) observed that retransmission-based approaches are more energy efficient in a bus environment compatible with AMBA. On the other hand, they argue that data correction-based schemes might become more energy efficient in NoCs, especially considering a large distance between the packet source and target, because a retransmission switches a large number of routers and FIFOs. In fact, a precise evaluation of retransmission-based approaches is even more complex than Bertozzi et al. (2005) observed, because a retransmission has a global effect as well. For instance, a retransmission increases the network contention, which globally affects the application's performance. The characterization of this global effect depends on application-dependent features such as packet injection rate, time and spatial distribution of packets, and the level of parallelism and dependency of the application's tasks. In summary, the retransmission cost characterization must account not only for the NoC design details, but also for the application's communication details.

Murali et al. (2005) refer to the retransmission scheme described previously as end-to-end retransmission, because it involves the initiator and the target of the communication. There is also switch-to-switch retransmission, which involves neighbor routers. In this case, both direct and indirect costs are typically lower than in the end-to-end retransmission scheme.


The spare wire approach, used in Lehtonen et al. (2007) and Lehtonen et al. (2010), protects links against permanent faults: defective wires can be replaced by the spare ones. It can be associated with a Hamming code (Lehtonen et al. 2007), such that the syndrome can be used to locate the defective wire, or it can use an embedded test pattern generator/evaluator (Lehtonen et al. 2010). According to Lehtonen et al. (2007), the reconfiguration circuit responsible for replacing the defective wires can require a considerable silicon area. However, the power consumption of the spare wire approach is almost the same as that of the circuit with no reconfiguration.

Triple Modular Redundancy (TMR) triplicates small parts of the NoC design, causing a high direct cost but a low indirect cost. The voter can be specially designed to reduce its delay, but silicon area and energy consumption are typically high due to the triplication. It provides good fault correction capability under transient faults but, as with Hamming code, TMR loses its fault correction capability in the presence of permanent faults. TMR is typically used on a few NoC wires (like the flow control wires (Lehtonen et al. 2007)) or in the NoC control logic (like some registers and FSMs related to the arbitration logic (Frantz et al. 2007)).
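At the bit level the voter is a two-out-of-three majority function; a one-line sketch over machine words:

    def tmr_vote(a, b, c):
        # Bitwise majority of three copies: any single corrupted copy is
        # outvoted, e.g. tmr_vote(0b1011, 0b1011, 0b0011) == 0b1011.
        return (a & b) | (a & c) | (b & c)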

8.6 Discussion

It is clear from the analysis presented in Sect. 8.5 that all studied approaches have advantages and disadvantages. Since in many aspects these approaches complement each other, combining them (as in Sridhara and Shanbhag (2005), Ganguly et al. (2008), and Lehtonen et al. (2010)) may be the best way of meeting the requirements of a particular application. However, from the analyzed papers it can also be concluded that a complete framework for the modeling and evaluation of permanent (open and short wires), transient (SEU), and intermittent (crosstalk) faults is still missing. For instance, Sridhara and Shanbhag (2005) modeled transient and crosstalk faults, but not permanent faults. Lehtonen et al. (2010) modeled permanent and transient faults, but not crosstalk. Such a framework would be even more complete if it were able to model the global effects of NoCs (e.g. network congestion) and the implementation of test approaches in both hardware and software, e.g. including embedded processors.

As discussed by Frantz et al. (2007), another promising option is to implement the on-line test approaches in both hardware and software. Thus, more complex software-based test approaches can be loaded and executed on-the-fly if the level of noise increases. In this respect, the approach described in Yu and Ampadu (2010) might be particularly interesting, since the system programmer could control the trade-off between reliability and energy consumption by software. This would enable a layered fault mitigation approach, ranging from the circuit level to the application level.

As presented in Table 8.3, all studies presented in this chapter, except for Frantz et al. (2007), are exclusively related to fault detection and correction on NoC links. This clearly demonstrates that on-line fault detection and location on NoC routers is an open research subject. A general on-line test approach for NoC routers would be particularly interesting. Moreover, the subject of fault location on NoCs is also interesting (but not fully explored), since the fault location information can feed, for instance, reconfiguration techniques to circumvent faulty links or routers.

A successful on-line test approach for NoC routers would need to deal with multiple transient and permanent faults; single fault analysis is not realistic for the state of the art in DSM technologies. Moreover, the transient faults have to be modeled not only as single event upsets (SEU, i.e. bit flips in memory elements), but also as single event transients (SET, i.e. upsets in the logic gates and their nets) in order to be relevant.

Finally, none of the papers discuss how their proposed fault mitigation approaches can be tested for manufacturing defects. These approaches are based on redundancy techniques, which are intrinsically hard to control and observe, thus reducing the fault coverage. The reviewed approaches also have an impact on yield due to their silicon area overhead, which is not modeled in these papers.

References

Bertozzi D, Benini L, De Micheli G (2005) Error control schemes for on-chip communication links: the energy-reliability tradeoff. IEEE Trans Comput-Aided Des Integr Circuits Syst 24(6):818–831

Frantz AP, Cassel M, Kastensmidt FL, Cota E, Carro L (2007) Crosstalk- and SEU-aware networks on chips. IEEE Des Test Comput 24(4):340–350

Ganguly A, Pande PP, Belzer B, Grecu C (2008) Design of low power & reliable networks on chip through joint crosstalk avoidance and multiple error correction coding. J Electron Test 24(1–3):67–81

Lehtonen T, Liljeberg P, Plosila J (2007) Online reconfigurable self-timed links for fault tolerant NoC. VLSI Des 2007:13

Lehtonen T, Wolpert D, Liljeberg P, Plosila J, Ampadu P (2010) Self-adaptive system for addressing permanent errors in on-chip interconnects. IEEE Trans Very Large Scale Integr (VLSI) Syst 18(4):527–540

Murali S, Theocharides T, Vijaykrishnan N, Irwin MJ, Benini L, De Micheli G (2005) Analysis of error recovery schemes for networks on chips. IEEE Des Test Comput 22(5):434–442

Sridhara SR, Shanbhag NR (2005) Coding for system-on-chip networks: a unified framework. IEEE Trans Very Large Scale Integr (VLSI) Syst 13(6):655–667

Vellanki P, Banerjee N, Chatha KS (2005) Quality-of-service and error control techniques for mesh-based network-on-chip architectures. Integr VLSI J 38(3):353–382

Victor B, Keutzer K (2001) Bus encoding to prevent crosstalk delay. In: Proceedings of the international conference on computer-aided design (ICCAD), San Jose, California, pp 57–63

Yu Q, Ampadu P (2010) Transient and permanent error co-management method for reliable networks-on-chip. In: Proceedings of the international symposium on networks-on-chip (NOCS), Grenoble, France, pp 145–154

Chapter 9
Error Location and Reconfiguration

This is the second and last chapter of this book devoted to on-line Network-on-Chip (NoC) testing strategies. As mentioned before, the main difference between on-line and off-line tests is that the former detects run-time faults during the system's mission mode, while the latter is typically used to detect manufacturing defects while the system is in test mode. Compared to the previous chapter, this one presents techniques used at the router, NoC, and system levels, while the previous chapter focuses on link and router level techniques. The most used techniques at the router, NoC, and system levels are fault tolerant and adaptive routing algorithms – where an alternative path is found, avoiding the defective part of the NoC – and fault reconfiguration – where the hardware or the software is reconfigured to mask and isolate the defective block. However, both techniques assume they are able to pinpoint the exact location of a hardware defect. This task alone, called fault location, can be a challenge in itself, since NoCs are scalable and can have hundreds or even thousands of switching elements. Similarly to the previous chapter, the test approaches presented in this chapter also have costs in terms of, for instance, silicon area, network performance, network congestion, and energy consumption. Thus, the challenge for the designer is, again, to find a good trade-off between these costs and the potential benefit of the test approach in terms of reliability. However, this trade-off evaluation is typically much more complex at the NoC level than at the link or router level, due to the size of NoCs and the complex data communication patterns of the applications. This chapter presents the most relevant on-line NoC testing strategies at the NoC and system levels and their results in terms of costs and reliability.

9.1 Introduction

We have seen that newer DSM technologies are more vulnerable than ever in terms of reliability and that the designer cannot assume that the design 'always' works, as we did in the past. Consequently, reliability concerns are not exclusive to critical applications anymore; they have to be considered in any design targeting DSM technologies.



Different levels of fault tolerance techniques can be used to design reliable NoC-based systems. There are layout-level approaches, which are typically technology dependent and thus cannot be applied to a general NoC-based design. There are link and router level approaches, presented in the previous chapter, which use methods to detect and mitigate the faults. The problem is that there will always be some fault escapes in the real world. These escapes are typically reduced (never completely eliminated) by, for instance, increasing the error detection/correction capability with more complex codes, which also increases the silicon costs. However, it will always be possible for a fault to occur at a specific position or moment where the fault detection and mitigation mechanisms fail to handle it. Moreover, it is well known that 100% fault coverage of physical faults is not practical; thus, some fault escapes are expected even if the design has 100% fault coverage for stuck-at or any other fault model.

In conclusion, fault mitigation alone is hardly the solution for a reliable system. A reliable system must be able to:

- Detect the existence of a fault;
- Mitigate it upfront, if possible, to avoid its propagation;
- If it is not possible to mitigate the fault upfront, locate its source: where is this fault being generated?;
- Use the fault location information, for instance, to shut down the defective module and to reconfigure the hardware such that the system operation can continue, perhaps with some performance degradation;
- If the data or the system's state were corrupted, recover to the last known good state or checkpoint.

Thus, this chapter can be seen as a continuation of the previous chapter, where we advance further in the direction of a complete fault tolerant framework for NoC-based designs. The previous chapter presents approaches for fault detection and mitigation, while this chapter presents methods for fault location and reconfiguration. Fault recovery for NoC-based systems is not addressed in this book since, as far as we know, it is not yet addressed in the literature.

In the remainder of this chapter, a selection of NoC specific test approaches is presented (see Table 9.1), focusing on fault location and reconfiguration. The selected approaches are not the only works available in the literature on the topic, but those that were considered good representatives of innovative approaches for on-line NoC testing.

This chapter is organized as follows: Sect. 9.2 describes error location approaches. Section 9.3 presents reconfiguration approaches. Section 9.4 compares the presented approaches in terms of their capabilities and costs. Finally, Sect. 9.5 presents a general discussion about the presented approaches and proposes subjects for future research.

Table 9.1 Error location and hardware reconfiguration for NoC-based systems

Error location:           Grecu et al. (2006), Kohler et al. (2010), Raik et al. (2009)
Hardware reconfiguration: Koibuchi et al. (2008), Fick et al. (2009b), Chang et al. (2011), Liu et al. (2011), Kakoee et al. (2011)


9.2 Fault Location

Fault location, or fault diagnosis, is the ability to pinpoint the exact position of a fault. This is not a trivial task, since NoCs are scalable and can have hundreds or thousands of routers. Fault location usually requires a self-checking hardware design.1 However, if on one hand more checkers increase the accuracy of the fault location, on the other hand these checkers increase the silicon area and the energy consumption. Moreover, the lack of precise fault location can cause, for instance, an entire router to be switched off when the fault was actually located on a single link. This lack of accuracy in the fault location causes a significant performance degradation even at small failure rates. Thus, diagnosis can narrow down the fault location such that, compared to the case where the entire router is switched off, the performance degrades significantly more slowly as the failure rate increases.

1 Self-checking can also be implemented in software, but in that case it loses the on-line testing capability and perhaps the ability to locate transient faults; the system needs to enter test mode periodically to locate a permanent fault.

The fault location methods presented in this section are the basis for any NoC or system level reconfigurability approach presented in Sect. 9.3.

Grecu et al. (2006) demonstrate that both end-to-end and switch-to-switch fault detection schemes are insufficient to provide accurate fault location. The end-to-end scheme does not allow locating the fault position, since the checking takes place only at the destination interface. The switch-to-switch scheme, although it offers better potential for fault location since the data is checked at each switch input, cannot determine whether the error is in the previous switch or on the link that connects both switches.

Grecu et al. (2006) propose a fault checking approach for NoCs where a single parity bit and an error flag are added to each switch input/output. If the error flag of an input port is active, then the fault is on the link; otherwise, if the error flag of an output port is active, the fault is in the switch logic. With this approach, a higher level reliability mechanism, i.e. packet retransmission, is able to avoid either a single faulty link or a single faulty router. The rest of the paper demonstrates the efficiency of the proposed approach compared to the end-to-end and switch-to-switch approaches; power dissipation, network latency, and network throughput are analyzed.

Kohler et al. (2010) argue that most failures affect only a single part of a router. Thus, they propose another fault location approach that narrows down the fault location even further inside the router. This approach can locate faults in the links, in parts of the crossbar, and in parts of the router logic, such that the functional parts of the router can still be used normally. This feature allows graceful degradation of the network performance according to the number of faults. Finally, the approach discriminates transient from permanent faults, so that different counter-measures can be taken based on the type of the fault.

Link faults in Kohler et al. (2010) are detected by a CRC at the input ports, generating a pass/fail flag that feeds the state machine presented in Fig. 9.1. According to this state machine, which is located in all input ports except for the local port, a single fault is considered a transient fault. If the next packet is valid, then the state machine returns to the normal state; otherwise, it goes to a test state where the link is tested by a BIST unit for stuck-at and crosstalk faults. If both tests fail, then the link has a permanent fault and it is shut down.

Fig. 9.1 Link fault model (Adapted from Kohler et al. 2010)

Crossbar faults in Kohler et al. (2010) are detected by the CRC at the output ports. The CRC generates a pass/fail flag that feeds the state machine presented in Fig. 9.2, one for each input/output pair of the fault matrix. For instance, one fault from east to south moves the corresponding state machine to the transient state. If the next packet is valid, then it returns to the normal state; otherwise, it goes to the intermittent state, where the crossbar is tested for stuck-at and crosstalk faults. If both tests fail, then the path from east to south in the crossbar has a permanent fault and it is shut down.

Fig. 9.2 Crossbar fault model (Adapted from Kohler et al. 2010)
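Both state machines follow the same skeleton. The sketch below collapses the two test-pattern states of Fig. 9.1 into a single test state, so it is an approximation of the published FSMs rather than a faithful copy.

    def next_state(state, event):
        # state: "normal", "transient", "test" or "off".
        # event: "crc_pass", "crc_fail", "bist_pass" or "bist_fail".
        transitions = {
            ("normal", "crc_fail"): "transient",  # first failure: assume transient
            ("transient", "crc_pass"): "normal",  # next packet valid: recovered
            ("transient", "crc_fail"): "test",    # run the BIST patterns
            ("test", "bist_pass"): "normal",      # intermittent: keep the resource
            ("test", "bist_fail"): "off",         # permanent: shut it down
        }
        return transitions.get((state, event), state)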

Kohler et al. (2010) assume that the remaining router faults can provoke legal, but still unwanted packet routing, even though the probability of occurrence is very low. This kind of fault provokes packet duplication or misrouting, which can be easily treated by software at the transport layer. The router is switched off only if the effect persists.

This diagnosis circuitry requires a 24% silicon area penalty compared to the base design. The results show that performance metrics such as packet delivery rate, network throughput, and average path length degrade smoothly as the failure rate increases.

Raik et al. (2009), also presented in Sect. 6.3.2.1, propose a functional testing method which achieves near 100% structural fault coverage of routers and also has link diagnosis capability. The method is not meant for on-line testing; however, since it is a fast test method, it could be used to detect runtime permanent faults by running it periodically.


The method applies three test configurations at the interfaces of the network. The test data consists of packets with checkerboard test data, i.e. 0xAA and 0x55, which is able to detect stuck-at and delay faults. The first configuration tests the straight paths of the network. The second configuration, divided into two steps, tests the turns inside the routers, e.g. west to north and south to east. The last configuration tests the access to the resource attached to each router. Figure 6.8 illustrates these configurations.

Functional testing alone does not provide sufficient structural fault coverage, especially for the control part of the routers, so non-functional capabilities might be required to increase the fault coverage. For this reason, the authors propose three DfT mechanisms to ease network testability: logic BIST inserted into the router control logic, YX routing capability added in test mode only, and loopback at the network interface. Logic BIST typically requires a large silicon area; however, since the silicon area of the router control logic is small compared to the total router, the overall silicon area overhead of the logic BIST is about 2.5%. The YX routing is used to execute part of the second test configuration, which starts at north and south instead of west and east; its implementation increases the silicon area by 0.4%. The loopback is used in the third configuration, where a target resource receives the test packet and forwards it to its next destination; its implementation increases the silicon area by 0.9%. In total, the silicon area overhead of the proposed test approach is about 4%.

The approach proposed by Raik et al. (2009) has linear test time requirements with respect to the NoC size, since multiple routers are tested in parallel and the amount of test data is drastically reduced compared to traditional scan-based approaches. The method takes 320 clock cycles for the 3 × 3 network, which is about two orders of magnitude less than scan-based approaches.


9.3 Reconfiguration

Reconfiguration, in the context of this chapter, is the ability of a hardware design to change its configuration to mask permanent faults. The previous chapter presented reconfigurable links with spare wires (Lehtonen et al. 2010), where a defective wire can be located and replaced at runtime. This allows masking faults as long as there are enough spare wires. This way, the link can still be used normally and the presence of faults is transparent to the upper layers (router level, NoC level, application level). The methods presented in this section can be classified as router or network level approaches, and they can be built on top of the methods presented in the previous chapter.

Some authors (Zhang et al. 2009) argue that this kind of micro-architectural reliability approach, such as the example presented in the previous paragraph, shows diminishing reliability improvement as the NoC size increases. Moreover, the implementation costs of the reliability scheme are typically high for very large systems. As an alternative, some researchers are looking at a coarser grain of reliability, at the router and NoC levels. This section presents a selection of fault tolerant reconfigurable approaches at the router and network levels for NoC-based systems.

One of the advantages of NoCs in terms of reliability is the existence of multiple paths between two communicating modules. This is a natural hardware redundancy which can be used to improve the reliability of the system. However, in order to take advantage of this hardware redundancy, the design must be able to locate the faults and reconfigure itself to avoid the faulty modules. For instance, the packet retransmission approach is cost-effective against permanent faults only if it is able to take a different path from the one taken in the initial attempt. This is an example of the coarse-grain fault reconfigurability mentioned in the previous paragraph.

Koibuchi et al. (2008) propose an approach called Default Backup Path (DBP), illustrated in Fig. 9.3. The idea is that, given a router design such as the one in Fig. 9.3a, backup paths are included in the router (Fig. 9.3b) such that neighbor routers are directly connected to the local processing element (PE), bypassing an entire faulty router. Thus, even though a router is faulty, it is still possible to access its local processing element with some performance degradation. The approach works even if all the routers are faulty. In this situation all DBPs are enabled and connected to each other such that, in the case of a mesh topology, the network is reconfigured into a unidirectional ring topology, as depicted in Fig. 9.3c.

Fig. 9.3 Default backup path scheme: (a) conceptual router architecture, (b) default backup path to/from PE, (c) unidirectional cycle in a mesh topology (Adapted from Koibuchi et al. 2008)

Since the hardware used to implement the proposed approach is simple (just multiplexers and a few buffers), it is unlikely to have faults; even so, if the reliability of the DBP is an issue, more DBPs can be added to the router. Its simplicity results in a low silicon area impact: it adds 12% area overhead. As the number of faulty routers increases, the performance and energy consumption degrade gracefully, except in the extreme situation where all routers are faulty and the network topology becomes a unidirectional ring. In this case, the latency and energy consumption are much higher due to the increased average number of hops between the sender and the receiver. For instance, the energy overhead for a system with 16 and 64 processing elements is 128% and 352%, respectively. As a concluding remark, another advantage of the proposed approach is that it is independent of the router design; thus, it can be applied to a large class of routers and networks.

Chang et al. (2011) investigate the use of spare routers to replace routers with permanent faults. Faults in routers can make a regular network topology, such as a mesh, become irregular. In that case, adaptive routing algorithms or source-based routing algorithms have to be used to avoid the faulty routers. The reconfigurable network topology proposed by Chang et al. (2011), on the other hand, can mask a certain number of faulty routers and still keep the original topology. For instance, given the system presented in Fig. 9.4a and assuming that some routers and links are faulty (Fig. 9.4b), the reconfigurable hardware is able to bypass the defective parts, as illustrated in Fig. 9.4b, c. Note that each processing element is connected to two routers via multiplexers. If one of the routers has a permanent fault, then the other can be used by controlling the multiplexers, and the spare router (at the top of the figure) replaces the defective router. In this example there is one spare router per column, but other configurations are possible. Figure 9.4c shows that even if the network has three defective parts, the multiplexers can still be configured such that the network keeps a mesh topology. This way, those defects are entirely transparent to the upper layers of the system.

Fig. 9.4 Mesh NoC with spare routers. P, R, and SR represent processing element, router, and spare router, respectively; routers in dark gray have a permanent fault: (a) NoC with spare routers, (b) NoC with three faults, (c) reconfigured NoC (Adapted from Chang et al. 2011)

Chang et al. (2011) demonstrate that the proposed approach solves the problem of isolated processing elements, since each element has a 'backup' router. The approach also solves the problem of faulty regions, where a few faults split the network into two disjoint sub-networks such that a router in sub-network 1 cannot reach any router in sub-network 2.

The experiments of Chang et al. (2011) evaluate reliability, mean time to failure (MTTF), and yield. The performance of the spare router approach increases with the NoC size, while the relative interconnection cost decreases. This characteristic makes the solution suitable for large-scale NoC designs. Similar to the DBP approach (Koibuchi et al. 2008), the spare router approach has a simple design based on multiplexers, wires, and a few buffers. As a consequence, the approach is independent of the router design and can be applied to a large number of networks. The downside is that Chang et al. (2011) present only limited results on silicon area and energy consumption.

Fig. 9.4 Mesh NoC with spare routers. P, R, and SR represent processing element, router, and spare router, respectively. Routers in dark gray have permanent faults: (a) NoC with spare routers, (b) NoC with three faults, (c) reconfigured NoC (Adapted from Chang et al. 2011)
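One plausible way to model the column-wise remapping (a sketch under our own assumptions; Chang et al. (2011) do not publish this as code) is to shift the PE-to-router assignment of a column past the faulty router, absorbing the shift with the spare:

```python
# Sketch of the spare-router remapping for one mesh column: each PE can reach
# its own router or the next one up via a multiplexer, and a spare router (SR)
# sits at the top of the column.  With at most one faulty router per column,
# shifting the assignment restores a complete column and keeps the mesh.
# Illustrative model only.

def remap_column(n_rows, faulty_row=None):
    """Return {PE index: router index}; router index n_rows is the spare."""
    if faulty_row is None:
        return {pe: pe for pe in range(n_rows)}
    # PEs below the fault keep their router; PEs at or above it shift one up,
    # the topmost PE being absorbed by the spare router.
    return {pe: (pe if pe < faulty_row else pe + 1) for pe in range(n_rows)}

print(remap_column(3))                 # {0: 0, 1: 1, 2: 2} - fault-free column
print(remap_column(3, faulty_row=1))   # {0: 0, 1: 2, 2: 3} - router 1 bypassed, spare in use
```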

The datapath components of a router typically account for about 90% of the router silicon area, so replicating them would cause a substantial area overhead. For this reason, Liu et al. (2011) propose a technique that splits the datapath components – i.e. links, input buffers, and crossbar – into slices, such that the router remains functional as long as at least one slice is functional. For instance, a single fault in a 64-bit link affects a single link slice instead of the entire link, and the other three slices remain fully functional. In this case time-division multiplexing is used, degrading the network performance gracefully as the number of faults increases.

The router control components are replicated, since they are small compared to the entire router and they are critical to router functionality. Error correcting codes are used to protect the internal pipeline registers. Figure 9.5 depicts a simplified block diagram of a router with four slices.

The proposed technique (Liu et al. 2011) is applied to a 64-bit-wide router. It results in 65% silicon area overhead when the router is configured with four slices and 26% when it has two slices. The results show that 95% and 70% of the routers in an 8 × 8 torus network are still functional after 300 and 1,000 faults, respectively.

Fig. 9.5 Router datapath slicing and salvaging (Adapted from Liu et al. 2011)
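A sketch of the salvaging idea (our own simplified model of the mechanism, not the authors' RTL): a 64-bit flit is split into four 16-bit chunks, and the surviving slices carry the chunks in a time-multiplexed fashion.

```python
# Datapath salvaging by slicing, simplified: a 64-bit flit travels over four
# 16-bit slices; faulty slices are skipped and the survivors are
# time-multiplexed, trading bandwidth for fault tolerance.

SLICES, SLICE_WIDTH = 4, 16

def send_flit(flit64, good_slices):
    """Schedule the four 16-bit chunks of a flit onto the functional slices.
    Returns a list of (cycle, slice, chunk) transfers."""
    assert good_slices, "the router dies only when no functional slice is left"
    chunks = [(flit64 >> (SLICE_WIDTH * i)) & 0xFFFF for i in range(SLICES)]
    return [(i // len(good_slices), good_slices[i % len(good_slices)], chunk)
            for i, chunk in enumerate(chunks)]

# With slice 2 faulty, a flit needs 2 cycles instead of 1 (graceful degradation).
for cycle, lane, chunk in send_flit(0x0123456789ABCDEF, good_slices=[0, 1, 3]):
    print(f"cycle {cycle}: slice {lane} carries 0x{chunk:04X}")
```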

Fick et al. (2009b) propose the Vicis network, which is able to tolerate many faulty components, prolonging its useful lifetime. The network can locate faults, reconfigure the hardware at the router level to circumvent a fault, and reconfigure again at the network level to avoid the faulty path. Vicis implements error detection, error diagnosis, and system reconfiguration. The error detection mechanism discovers new faults with CRC checks on packets. Error diagnosis determines the location of a new fault with a BIST. Finally, reconfiguration disables the faulty components and configures the remaining ones to work around them.
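For illustration, a generic bitwise CRC-8 check of the kind used for packet-level error detection is sketched below (the polynomial is our choice for the example; the paper does not specify Vicis's code):

```python
# Generic bitwise CRC-8 (polynomial 0x07) illustrating packet-level error
# detection: a corrupted packet no longer matches its stored checksum.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

packet = bytes([0xDE, 0xAD, 0xBE, 0xEF])
checksum = crc8(packet)                      # appended to the packet at the source
corrupted = bytes([0xDE, 0xAD, 0xBA, 0xEF])  # one byte corrupted in transit
print(crc8(packet) == checksum)      # True : packet accepted
print(crc8(corrupted) == checksum)   # False: a new fault is discovered
```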

The diagnosis task is executed by a BIST engine which is not fully detailed in the paper (Fick et al. 2009b). The reconfiguration is executed at two levels: router and network. The router-level reconfiguration in Vicis consists of a crossbar bypass bus supporting multiple crossbar faults. If multiple flits need to access the bypass bus at the same time, an arbitration mechanism grants access to one flit, while the others wait until a later cycle. For network-level reconfigurability, Vicis supports port swapping and packet rerouting. Input port swapping is used to increase the number of available ports. The port swapper is located at the input of the FIFOs and changes which physical links are connected to which input ports. This resource is not used at the output ports because they are small compared to the input ports; recall that bigger blocks 'attract' more faults. For instance, if there are faults in the west and south ports, port swapping is executed such that both faults end up in the same port. This way, only one port is disabled instead of two, increasing the number of functional links in the network. Vicis is also able to reroute packets by rewriting the network's routing tables (Fick et al. 2009a). This way, packets circumvent the disabled faulty resources.
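The port-swapping idea can be sketched as a small matching problem (our simplified model; the actual Vicis swapper is a hardware permutation network): pairing faulty links with faulty FIFOs wastes two faults on a single disabled port.

```python
# Sketch of input port swapping: the swapper chooses which physical link feeds
# which input FIFO, so faulty links can be paired with faulty FIFOs and the
# number of disabled ports is minimized.  Simplified model of the idea.

def pair_ports(links_ok, fifos_ok):
    """Greedily pair faulty links with faulty FIFOs and good links with good
    FIFOs; returns (number of usable ports, link -> FIFO pairing)."""
    bad_links = [p for p, ok in enumerate(links_ok) if not ok]
    bad_fifos = [p for p, ok in enumerate(fifos_ok) if not ok]
    good_links = [p for p, ok in enumerate(links_ok) if ok]
    good_fifos = [p for p, ok in enumerate(fifos_ok) if ok]
    pairing = dict(zip(bad_links, bad_fifos))    # concentrate faults together
    pairing.update(zip(good_links, good_fifos))  # keep every good pair working
    return min(len(good_links), len(good_fifos)), pairing

# Faulty link on the west port (1) and faulty FIFO on the south port (3):
# with swapping, 3 of the 4 ports stay usable instead of 2.
print(pair_ports(links_ok=[1, 0, 1, 1], fifos_ok=[1, 1, 1, 0]))
```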

Vicis has an area cost of 42%, which compares favorably with modular redundancy approaches costing around 200% of area overhead. The experiments also show that Vicis can sustain up to one stuck-at fault per 2,000 gates and still maintain half of its routers. The authors use an evaluation metric called the Silicon Protection Factor (SPF) for router-level reliability, defined as the number of faults a router can tolerate before becoming inoperable, normalized by the area overhead of the technique. It shows that even though Vicis requires 42% extra area against the 200% of previous methods, it still achieves a similar SPF.



ReliNoC (Kakoee et al. 2011) is a NoC that maintains basic network connectivity and quality-of-service (QoS) even in the presence of faulty components. QoS is implemented with priority packets which have access to two channels in a single port: a high-priority channel used only for QoS traffic and a low-priority channel shared by QoS and normal traffic.

This two-channel approach is traditionally implemented as a virtual channel (VC), where the buffers are replicated while the physical channel is shared. A virtual channel allocator distributes data from the physical channel to the correct buffer. This architecture creates a large area and power overhead due to the extra buffers and causes performance degradation due to the virtual channel allocator. Kakoee et al. (2011) propose to replicate the entire switch instead of replicating only the buffers, as the virtual channel approach does. It has been demonstrated that this approach is more area- and power-efficient than VCs and can run at higher speeds thanks to the simpler and faster router design. The physical links are also replicated, such that a link has two physical channels: one dedicated to QoS traffic, while the other is shared between QoS and normal traffic. Thus, this design with replicated switches and channels gives ReliNoC better performance, QoS, and fault tolerance than a router based on a conventional virtual-channel design.

In order to effectively use this inherent redundancy of ReliNoC, four types of components are added to the switch for fault tolerance purposes: (a) two multiplexers per input channel; (b) a 5-bit input status register per channel; (c) one control logic block for both channels; (d) a 10-bit output status register for the entire switch. Moreover, the fault-tolerant routing algorithm proposed by Rodrigo et al. (2010) is used to reroute packets around faulty parts. This routing algorithm does not use routing tables, which do not scale in terms of latency, power consumption, and area and are thus impractical for large NoCs. The routing algorithm requires 4% area overhead compared to a conventional XY routing implementation, which is much less than table-based routing. Faults are detected and located by BIST logic inserted in the routers; this BIST is activated periodically, when the entire network enters a self-test mode.

The experiments compare an 8 × 8 ReliNoC against an 8 × 8 NoC with 2-VC switches. In the presence of 20 faults, ReliNoC has a 90% probability of remaining fully connected. Moreover, it can tolerate up to 50 faults within an 8 × 8 mesh at 40% latency overhead. Synthesis results show that ReliNoC incurs only 13% area overhead compared to the baseline 2-VC switch.

9.4 Comparing the Approaches

The main features of the selected papers are summarized in Table 9.2 and compared throughout this section. Although a straightforward comparison among these approaches is not possible, Table 9.2 attempts to point out the main differences between them, highlighting the advantages and drawbacks of each approach and their evaluation methodology. Table 9.2 is divided in three parts. The first part summarizes the location and reconfiguration methods. The second part describes the type of fault each method is able to handle (transient, permanent, or crosstalk). The third part describes the metrics and costs used to evaluate each method:

- Performance metrics: typically evaluate the network performance at increasing failure rates;
- Reliability metrics: typically evaluate the number of simultaneous faults the network can handle;
- Performance costs: indicate whether the test circuitry has an impact on the network performance, for instance, whether the test circuitry is in the critical path;
- Silicon costs: represent the cost of implementing the test circuitry, typically in terms of silicon area overhead, energy consumption, or power dissipation.

9.4.1 Fault Detection and Location Methods

This section compares the detection and location methods of the selected papers (the 'detection and location method' entries in Table 9.2).

The fault detection and location method described by Raik et al. (2009) is the one that requires the least silicon area, about 4%. It relies heavily on functional testing, with some BIST logic for the control part and small DfT logic to ease the testing. However, an external test unit with access to the network interfaces is required to implement periodic testing, and this unit is not described in the paper. For completeness, the method should also be integrated with some type of fault reconfiguration method to avoid the faulty components. Moreover, it is not clear whether the method is general enough to be easily applied to networks other than those based on a mesh topology and XY routing. Despite these limitations, the approach is promising for designs with tight area constraints whose focus is the detection of permanent faults only. Moreover, the same test scheme could also be used for manufacturing testing.

The fault detection and location schemes used in Grecu et al. (2006) and Kohler et al. (2010) are very similar. Both employ ECC at the router's input and output ports to distinguish between link and datapath faults. An error flag in each port stores its current error status, so that a reconfiguration method knows which part of the network is defective and can disable and avoid it. Kohler et al. (2010) also use a simplified BIST unit which generates the patterns 0x00, 0xFF, 0x55, and 0xAA to further test the router's datapath for stuck-at and crosstalk faults.

When the input error flag is activated, there is a fault somewhere between the output terminals of the sending router and the ECC at the receiving router. The ECC can be placed at the router input terminals (checking only the link) or at the outputs of the input FIFO (checking the link and part of the FIFO). When the output error flag is activated, there is a fault somewhere in the router's crossbar or control logic (e.g. arbitration or routing algorithm).
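A toy model of this two-checkpoint localization and of the four-pattern datapath test is sketched below (hypothetical names and fault model of our own; the authors' hardware is not published in this form):

```python
# Toy model: the two sticky error flags of a port narrow the fault region,
# and the patterns 0x00/0xFF/0x55/0xAA expose stuck-at bits (and, on real
# wires, worst-case transitions for crosstalk).  Hypothetical model only.

PATTERNS = (0x00, 0xFF, 0x55, 0xAA)

def locate(input_flag, output_flag):
    if input_flag:
        return "link (or input FIFO) between upstream router and this ECC"
    if output_flag:
        return "router internals: crossbar or control logic (arbitration/routing)"
    return "no fault observed"

def bist_datapath(datapath, width=8):
    """Drive the patterns through a datapath model; report failing patterns."""
    mask = (1 << width) - 1
    return [hex(p) for p in PATTERNS if datapath(p) & mask != p]

stuck_at_0_bit3 = lambda value: value & ~(1 << 3)   # injected permanent fault
print(locate(input_flag=False, output_flag=True))
print(bist_datapath(stuck_at_0_bit3))               # ['0xff', '0xaa'] expose bit 3
```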


Table 9.2 Comparison of fault location and reconfiguration approaches

Grecu et al. (2006) – Detection and location method: online test; ECC and error flags at input and output ports. Reconfiguration method: NA. Transient faults: yes, detection only. Permanent faults: yes. Crosstalk: NA. Performance metrics: NoC latency and throughput. Reliability metrics: NA. Performance costs: NA. Silicon costs: power.

Koibuchi et al. (2008) – Detection and location method: NA. Reconfiguration method: default backup path; topology reconfiguration. Transient faults: NA. Permanent faults: yes. Crosstalk: NA. Performance metrics: NoC latency, throughput, and average path hop-count. Reliability metrics: NA. Performance costs: NA. Silicon costs: area (12.6%), energy.

Raik et al. (2009) – Detection and location method: periodical test; functional test and BIST for control logic. Reconfiguration method: NA. Transient faults: NA. Permanent faults: yes. Crosstalk: NA. Performance metrics: NA. Reliability metrics: fault coverage. Performance costs: NA. Silicon costs: area (4%).

Fick et al. (2009b) – Detection and location method: ECC to detect transient faults and BIST for permanent faults. Reconfiguration method: routing algorithm, port swapping, crossbar bypass bus. Transient faults: yes, error correcting code. Permanent faults: yes. Crosstalk: NA. Performance metrics: NoC throughput. Reliability metrics: SPF (6.55); number of available routers. Performance costs: NA. Silicon costs: area (42%).

Kohler et al. (2010) – Detection and location method: online test; BIST, ECC, and error flags at input and output ports. Reconfiguration method: routing algorithm. Transient faults: yes, correction by retransmission. Permanent faults: yes. Crosstalk: limited. Performance metrics: NoC throughput, average path length, FIFO filling level. Reliability metrics: packet delivery rate. Performance costs: delay (ns). Silicon costs: area (~24%).

Liu et al. (2011) – Detection and location method: NA; ECC in datapath and redundancy in control path. Reconfiguration method: datapath splitting into slices. Transient faults: yes, detection only. Permanent faults: yes. Crosstalk: NA. Performance metrics: NoC latency. Reliability metrics: SPF (5–13); number of available nodes. Performance costs: NA. Silicon costs: area (26–65%).

Chang et al. (2011) – Detection and location method: NA; assumes periodical test with BIST. Reconfiguration method: spare router with topology reconfiguration. Transient faults: NA. Permanent faults: yes. Crosstalk: NA. Performance metrics: NA. Reliability metrics: reliability, MTTF, yield. Performance costs: NA. Silicon costs: NA.

Kakoee et al. (2011) – Detection and location method: NA; assumes periodical test with BIST updating the error flags. Reconfiguration method: routing algorithm. Transient faults: NA. Permanent faults: yes. Crosstalk: NA. Performance metrics: NoC latency. Reliability metrics: network connectivity. Performance costs: possibly better performance. Silicon costs: area (12%).


Grecu et al. (2006) do not have the diagnostic resolution to further pinpoint the fault location when the output error flag is activated, so the entire router has to be disabled. Kohler et al. (2010) again employ error flags encoded as state machines; however, there is an error flag not per output port but for each combination of input/output ports, making it possible to pinpoint whether each internal path of the router is faulty (e.g. from west to south). The authors call this set of state machines the fault matrix. With this approach it is possible to disable parts of the router instead of the entire router.

A fault can be transient, intermittent, or permanent, and each type of fault requires a different action. Grecu et al. (2006) do not provide resources to distinguish these different types of faults, while Kohler et al. (2010) provide the error flags encoded as state machines, illustrated in Fig. 9.1. The fault reconfiguration acts according to the flag state, disabling the unit only if the fault is permanent.

As final remarks, the approach by Grecu et al. (2006) does not detail how the error flag information reaches the sender in order, for instance, to select a different path when a fault is detected. Moreover, the approach is based on single parity, so it provides a low detection capability for multiple simultaneous faults.

Fick et al. (2009b) use error correcting codes at the output of the input ports to detect and correct both transient and permanent faults. Performing only one fault check per incoming packet provides poor diagnostic resolution, since a fault could be anywhere between the crossbar of the sending router and the FIFO of the receiving router. To improve the diagnostic capability, this ECC triggers a local BIST unit which further tests the router using all-zeros and all-ones patterns. This test checks, for instance, whether the fault is transient or permanent, and it determines the fault location. Unfortunately, the paper gives no details about the BIST design and about how the diagnostic information affects the reconfiguration and routing algorithms. Moreover, an area overhead of 42% is still significant.

Liu et al. (2011) simply assume that the faults are detected somehow, so checkers are not included in the design. Even so, the area overhead of the slicing reconfiguration approach is quite high, between 26% and 65%. Kakoee et al. (2011) and Chang et al. (2011) also do not describe their fault detection and location mechanisms; instead, they cite references related to BIST. Fault detection and location is not mentioned in Koibuchi et al. (2008). The area overhead analyses of these papers must be considered with caution, since they do not account for fault detection mechanisms: adding fault detection and location circuitry would increase the silicon area even further.

Finally, the papers that do describe their fault detection and location mechanisms can be classified as follows:

- ECC-based: use error control coding (e.g. CRC, single parity, or Hamming) to detect faults online on the network datapath. Grecu et al. (2006) fit in this category;
- ECC and BIST based: ECC is very efficient for the datapath and links, but not for the control logic. For this reason BIST is used to test the control logic and also to enhance the detection of permanent faults on the datapath. This usually provides better fault coverage with a small increase in silicon area compared to the previous category. Kohler et al. (2010) fit in this category;
- Functional based: a highly parallel test with short test sequences, using the least amount of DfT circuitry. It is a very promising approach, although there are open questions related to its generality, and the approach is yet to be proven on silicon. Raik et al. (2009) fit best in this category, although they also use some BIST circuitry.

9.4.2 Fault Reconfiguration Methods

Grecu et al. (2006) and Raik et al. (2009) are not evaluated in terms of reconfigurability, because they focus on fault detection and location only. The remaining approaches are analyzed next. The comparison of the presented reconfiguration methods (the 'reconfiguration method' entries in Table 9.2) is especially challenging, because the evaluation metrics and criteria are not uniform. For this reason most comparisons are qualitative rather than quantitative.

For instance, a quantitative comparison of silicon area overhead should be simple. However, even this comparison is not very meaningful, because some papers do not take the fault detection circuitry into account (Koibuchi et al. 2008; Liu et al. 2011). Other papers do not detail their test logic sufficiently (they typically just say that BIST is used to detect the faults), so the presented area results are not very meaningful either, since it is not clear what is actually implemented (Fick et al. 2009b; Kakoee et al. 2011; Chang et al. 2011). Power and energy consumption are seldom evaluated.

The performance costs are also seldom evaluated. In most approaches the fault-tolerance circuits most likely lengthen the router critical path, reducing its performance. The exception among the evaluated papers is Kohler et al. (2010), which compares the baseline router performance (the critical path in ns) against the proposed fault-tolerant architectures. Kakoee et al. (2011) point out that the proposed architecture has better performance than a router with two virtual channels; however, no quantitative analysis is performed to demonstrate it.

The reliability metrics are not uniform either, and some metrics are more relevant than others. For instance, the percentage of available routers versus failure rate and the network connectivity metrics (used in Liu et al. (2011), Fick et al. (2009b), and Kakoee et al. (2011)) are not very meaningful for router-level reconfiguration methods, because a router can be 'available' or 'connected' while its performance is severely degraded. The percentage of delivered packets versus failure rate, used in Kohler et al. (2010), is a better metric, because it assesses the loss of network functionality and is independent of the reconfiguration level. The Silicon Protection Factor (SPF) metric, proposed by Constantinides et al. (2006) and used in Fick et al. (2009b) and Liu et al. (2011), is even more representative than the previous ones, because the number of faults in a design is proportional to its area, and this metric takes the area of the protection method into account. SPF is computed by dividing the average number of faults required to cause a router failure by the area overhead of the protection technique (the higher the SPF, the more robust the design). For example, if two approaches provide the same protection level but different silicon area overheads, the one with less silicon area has the higher SPF.
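A small worked example of the metric (a sketch under our own assumptions: we normalize by the total area factor, and the 9.3-fault input is back-computed so that the example reproduces the SPF of 6.55 listed in Table 9.2 for Fick et al. (2009b)):

```python
# Silicon Protection Factor (Constantinides et al. 2006): average number of
# faults needed to make a router inoperable, normalized by the area cost of
# the protection.  Normalization by the total area factor (1 + overhead) is
# our assumption; the 9.3-fault figure is back-computed for the example.

def spf(avg_faults_to_failure, area_overhead):
    """area_overhead as a fraction, e.g. 0.42 for 42% extra silicon."""
    return avg_faults_to_failure / (1.0 + area_overhead)

print(round(spf(9.3, 0.42), 2))  # ~6.55: Vicis-like protection at 42% overhead
print(round(spf(9.3, 2.00), 2))  # ~3.10: the same protection at 200% overhead scores far lower
```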


From the reviewed papers, it is possible to identify two hierarchical levels at which reconfiguration takes place: the router level and the network level. The level of the reconfiguration method has an impact on relevant features such as graceful network performance degradation, generality of the approach, and silicon costs, as detailed next.

The approaches proposed by Koibuchi et al. (2008) and Chang et al. (2011) are network-level reconfiguration strategies, since they do not require changes to the router design. Their reconfiguration methods are based on multiplexers, wires, and buffers which can be inserted on top of the original routers, like a wrapper, as illustrated in Fig. 9.6. The remaining approaches operate at both the router and network levels, i.e. they also change the router design.

Let us first compare the network performance degradation of the router-level reconfiguration methods (Fick et al. 2009b; Kohler et al. 2010; Liu et al. 2011; Kakoee et al. 2011) with that of the network-level approaches. A single permanent fault hitting a router that uses any of the former approaches would most likely disable only part of the router, causing minor performance degradation; a router is completely disabled only at high failure rates, with several simultaneous faults. On the other hand, Chang et al. (2011) and Koibuchi et al. (2008) do not handle the router's internal faults: a single fault disables an entire router, so the network tolerates lower failure rates and the performance degrades 'not so gracefully' as in the other approaches.

Finally, the fault reconfiguration mechanisms described in the reviewed papers can be classified as follows:

- Routing algorithms: use the intrinsic redundancy of the network to explore alternative paths between a given pair of source and target nodes. This is the approach most explored in the literature; however, most papers focus on demonstrating features such as the degree of adaptability or deadlock avoidance, while other aspects, such as implementation costs or the interface with the fault location method, are rarely addressed. Fick et al. (2009b), Kohler et al. (2010), and Kakoee et al. (2011) use rerouting methods;
- Topology reconfiguration: the default backup path (Koibuchi et al. 2008), spare router (Chang et al. 2011), and crossbar bypass bus (Fick et al. 2009b) approaches fit in this category;
- Spare components: the spare routers (Chang et al. 2011), replicated router (Kakoee et al. 2011), and spare wires (Kohler et al. 2010) approaches fit in this category;
- Others: port swapping (Fick et al. 2009b) and datapath slicing (Liu et al. 2011) do not fit in the previous categories.

Fig. 9.6 Koibuchi's method with a wrapper on top of the original router: (a) original router, (b) original router with a wrapper on top (Adapted from Koibuchi et al. 2008)

9.5 Discussion and Directions for Future Work

It has been shown that most fault-tolerance proposals are incomplete with respect to fault detection and location. Few papers focus on fault detection and location, while several papers propose fault reconfiguration and routing algorithms without a proper fault detection and location approach. We believe that a complete fault detection and location approach must account for the following features:

- Provide a complete description of the DfT logic;
- Be silicon proven, presenting implementation costs in terms of silicon area, delay, and energy consumption;
- Provide sufficient diagnostic resolution to allow graceful performance degradation;
- Distinguish between transient and permanent faults;
- Be able to detect multiple faults;
- Be a general test approach, working with several network designs with different topologies, routing algorithms, etc.;
- Have a clear interface between fault location and reconfiguration.

Kohler et al. (2010) excel in these features: among the reviewed papers, theirs has the most detailed fault detection and location method. Moreover, it is clear how the proposed routing algorithm (i.e. the upper layer of fault handling) uses the fault location information.

We have seen that the reconfigurability level (i.e. router or network level) plays an important role in the ability to degrade performance gracefully as the failure rate increases. At first glance it seems that the approaches of Fick et al. (2009b), Kohler et al. (2010), Liu et al. (2011), and Kakoee et al. (2011) are better than those presented by Chang et al. (2011) and Koibuchi et al. (2008). In fact, most papers in the fault-tolerant NoC literature would best fit in the first category. However, a more holistic analysis reveals that router-level approaches also have drawbacks compared to network-level approaches.


Router-level approaches have a more complex design than network-level methods. A complex design has, in general, a negative impact on silicon cost (area and power) and on circuit performance (the critical path). These approaches require the redesign of the router and network logic, which means that full knowledge of the network and router design is required to implement them efficiently. If a method requires full design knowledge, it takes more time to implement, and the approach is also less general, i.e. it cannot be easily ported to other networks.

On the other hand, the methods by Chang et al. (2011) and Koibuchi et al. (2008) can easily be implemented on top of an existing router, like a wrapper, as said before. In fact, the required hardware design is so simple, straightforward, 'non-invasive',2 and general that the entire design process could easily be implemented in a CAD tool, automating the hardware design.

We showed that approaches based only on network-level reconfiguration tend to degrade performance 'not so gracefully' as their router-level counterparts. However, this depends on the evaluation scenario. Let us assume that we want to build a fault-tolerant network on top of an existing router design. In design 1 we apply only network-level reconfiguration methods, and in design 2 we apply both router- and network-level reconfiguration methods. Let us assume that the resulting fault-tolerant routers have the same silicon area, which is probably not true, since router-level approaches tend to cost more silicon area. Let us also assume that a single fault in design 1 causes an entire router to be switched off, while a single fault in design 2 switches off up to 1/5 of a router (the other four ports remain functional).

Now let us build two 3 × 3 networks with these fault-tolerant router designs, called network_3 × 3_1 and network_3 × 3_2. A single fault in network_3 × 3_1 means that roughly 1/9 of the network functionality is lost, while a single fault in network_3 × 3_2 causes only (1/5)/9 = 1/45 of the network functionality to fail. This difference in functionality loss is considerable: the first design will probably suffer more performance degradation upon faults than the second.

Let us now build two 10 × 10 networks with these fault-tolerant router designs, called network_10 × 10_1 and network_10 × 10_2. A single fault in the first network means 1/100 of functionality loss, while in the second network it is (1/5)/100 = 1/500. In terms of functional network ports,3 the first network has 99 × 5 = 495 functional ports, while the second has 100 × 5 − 1 = 499. This analysis involves several simplifications; however, it is still useful to demonstrate that, for bigger networks, network-level approaches alone already degrade the network performance 'gracefully enough' for most applications. For even bigger networks, the benefit of router-level reconfigurability gradually disappears. The conclusion is that fine-grained reconfigurability becomes less relevant to network performance as the network size increases.

2 Does not require changing the router's internal design.
3 A total of 500 functional ports, assuming 100 functional routers and five ports per router.
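The port arithmetic above can be reproduced in a few lines (Python; five ports per router, as stated in footnote 3):

```python
# Reproducing the functionality-loss comparison: a fault removes a whole
# router (5 ports) in design 1, but a single port in design 2.

PORTS_PER_ROUTER = 5

def functional_ports(n_routers, faults, ports_lost_per_fault):
    return n_routers * PORTS_PER_ROUTER - faults * ports_lost_per_fault

for n_routers in (9, 100):  # the 3 x 3 and 10 x 10 networks of the example
    total = n_routers * PORTS_PER_ROUTER
    d1 = functional_ports(n_routers, faults=1, ports_lost_per_fault=5)
    d2 = functional_ports(n_routers, faults=1, ports_lost_per_fault=1)
    print(f"{n_routers} routers: design 1 keeps {d1}/{total} ports, "
          f"design 2 keeps {d2}/{total} (loss {5/total:.3f} vs {1/total:.3f})")
```

For the 10 × 10 case this prints the 495 versus 499 functional ports cited above, i.e. losses of 1/100 versus 1/500.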


This analysis highlights two features of a fault-tolerant method that are relevant and yet almost never evaluated: scalability and generality. As mentioned before, most approaches are based on router-level reconfigurability because their case studies are small or medium sized. This is good enough for short-term research, but not for long-term research targeting very large systems with hundreds of processing elements. Moreover, as far as we know, all papers on fault-tolerant NoCs present their approaches as 'case studies', i.e. they do not target general methods. Again, case studies might be good enough for short-term research, but they have several limitations in terms of applicability. More research targeting general fault-tolerant methods is required.

Another relevant consideration on fault reconfiguration methods is that it is very hard to evaluate the current approaches quantitatively. The evaluation methods are not unified, and several metrics are not meaningful. New metrics such as SPF and a common evaluation framework would contribute to a more analytical comparison between different papers. Moreover, the existing performance evaluations are typically not related to an actual application, even though the application domain has a strong influence on the selection of the most appropriate fault tolerance approach. For instance, some application domains are resilient by nature, like video and streaming applications: a bit flip in a video frame would most probably have no visible effect on the service. As far as we know, there is no research on fault tolerance methods for specific application domains.

Finally, a fault-tolerant system must have mechanisms for error detection, error diagnosis, system reconfiguration, and system recovery. As shown, most papers detail only one of these items, perhaps two. A complete framework is still missing.

References

Chang YC, Chiu CT, Lin SY, Liu CK (2011) On the design and analysis of fault tolerant NoC architecture using spare routers. In: Proceedings of the Asia and South Pacific design automation conference (ASPDAC), Yokohama, Japan, pp 431–436

Constantinides K, Plaza S, Blome J, Bertacco V, Mahlke S, Austin T, Orshansky M (2006) BulletProof: a defect tolerant CMP switch architecture. In: Proceedings of the international symposium on high-performance computer architecture, Austin, TX, USA, pp 3–14

Fick D, DeOrio A, Chen G, Bertacco V, Sylvester D, Blaauw D (2009a) A highly resilient routing algorithm for fault-tolerant NoCs. In: Proceedings of the design, automation, and test in Europe (DATE), Nice, France, pp 21–26

Fick D, DeOrio A, Hu J, Bertacco V, Blaauw D, Sylvester D (2009b) Vicis: a reliable network for unreliable silicon. In: Proceedings of the ACM/IEEE design automation conference (DAC), San Francisco, CA, pp 812–817

Grecu C, Ivanov A, Saleh R, Sogomonyan ES, Pande PP (2006) On-line fault detection and location for NoC interconnects. In: Proceedings of the international on-line testing symposium (IOLTS), Lake of Como, Italy, pp 145–150

Kakoee MR, Bertacco V, Benini L (2011) ReliNoC: a reliable network for priority-based on-chip communication. In: Proceedings of the design, automation and test in Europe conference (DATE), Grenoble, France, pp 1–6

Kohler A, Schley G, Radetzki M (2010) Fault tolerant network on chip switching with graceful performance degradation. IEEE Trans Comput Aided Des Integr Circuits Syst 29(6):883–896

Koibuchi M, Matsutani H, Amano H, Pinkston TM (2008) A lightweight fault-tolerant mechanism for network-on-chip. In: Proceedings of the international symposium on networks-on-chip (NOCS), Newcastle upon Tyne, UK, pp 13–22

Lehtonen T, Wolpert D, Liljeberg P, Plosila J, Ampadu P (2010) Self-adaptive system for addressing permanent errors in on-chip interconnects. IEEE Trans Very Large Scale Integr (VLSI) Syst 18(4):527–540

Liu C, Zhang L, Han Y, Li X (2011) A resilient on-chip router design through data path salvaging. In: Proceedings of the Asia and South Pacific design automation conference (ASPDAC), Yokohama, Japan, pp 437–442

Raik J, Govind V, Ubar R (2009) Design-for-testability-based external test and diagnosis of mesh-like network-on-a-chips. IET Comput Digital Tech 3(5):476–486

Rodrigo S, Flich J, Roca A, Medardoni S, Bertozzi D, Camacho J, Silla F, Duato J (2010) Addressing manufacturing challenges with cost-efficient fault tolerant routing. In: Proceedings of the international symposium on networks-on-chip (NOCS), Grenoble, France, pp 25–32

Zhang L, Han Y, Xu Q, Li XW, Li H (2009) On topology reconfiguration for defect-tolerant NoC-based homogeneous many-core systems. IEEE Trans Very Large Scale Integr (VLSI) Syst 17(9):1173–1186

Chapter 10
Concluding Remarks

About a decade ago, networks-on-chip evolved from a potential solution to the intra-chip communication problems arising in complex systems (Guerrier and Greiner 2000) into a major research topic with its own conferences (NoCs 2011; NoCArc 2011), and then into an industrial reality (Karim et al. 2002; Goossens et al. 2005). A huge number of works have been proposed on design-oriented features of NoCs, creating an equally large NoC design diversity (Bjerregaard and Mahadevan 2006). Nevertheless, efficient test and reliability approaches are required to turn NoC-based systems into a consolidated industrial reality and to achieve even more challenging designs such as many-core systems. As a matter of fact, a considerable amount of effort has been made towards economically viable, testable, and reliable NoC-based systems. The increasing interest in the topic motivated the writing of this book, where we have organized this large amount of material, summarizing the most relevant scientific contributions and identifying some open issues. This final chapter addresses the open issues.

10.1 Networks-on-Chip, Testing, and Reliability as Key Challenges Towards Many-Core Systems

Some authors predict that it will be possible to integrate hundreds or thousands of processing elements into a single die (Borkar 2007). The key challenges in enabling these monster chips or many-core systems are power budget, memory bandwidth, intra-chip communication, test, and reliability. This book addresses the last three of these challenges.

It is pretty much settled in the research community that networks-on-chip can be the intra-chip communication infrastructure for these monster chips. Chapter 2 showed that the subject of networks-on-chip evolved rapidly, producing in the last decade a large number of different network-on-chip architectures such as Hermes, SoCin, Nostrum, Aethereal, Mango, Xpipes, SPIN, and QNoC, among others (Bjerregaard and Mahadevan 2006). Each of these NoC architectures has dozens of design parameters (topology, routing algorithm, data width, buffer depth, guaranteed services, and so on) that can be adjusted to best fit a particular application. In fact, the research opportunities are still vast and the research community is still growing, so one can expect more design alternatives in the next decade. In other words, there is a huge design diversity in terms of networks-on-chip, and it is expected that a few of these designs will be adopted by industry and become products that need to be manufactured in an efficient and profitable way.

In order to ensure the manufacturability of NoC-based systems, the last two challenges mentioned above – testing and reliability – must also be taken into account as soon as possible, preferably at the beginning of the design phase. Unfortunately, these topics are not as evolved and settled as intra-chip communication, since the research community is smaller and the subject became a mainstream research topic only about six years ago. Nevertheless, the amount of existing work on testing and reliability is considerable, motivating the authors of this book to organize and analyze it in a holistic manner, looking for long-term research opportunities that target monster and many-core chips.

When we focus our attention on providing test and reliability methods for systems as complex as monster chips and many-core chips, innumerable questions arise, such as: Can designers still rely on conventional test, diagnosis, and fault tolerance methods? In which situations do the conventional methods become less efficient? How do these conventional methods behave in such large systems? Can we provide more efficient methods for these complex systems? Which are the most efficient techniques to improve reliability, availability, and serviceability (RAS)? How cost-effective can these methods be in this new scenario? Is it possible to provide efficient automated methods for this huge NoC design diversity?

The remainder of this chapter summarizes the major drawbacks of the most popular approaches and points out the existing gaps towards a complete RAS approach for present and future complex NoC-based systems. The next sections discuss the test of the network itself, the test of NoC-based systems, fault tolerance approaches for NoC-based systems, and testing and reliability in emerging technologies, and then draw the final remarks.

10.2 Network-on-Chip Testing

Testing the network itself is challenging for many reasons: the network is distributed in terms of placement, it consists of small blocks with a large number of I/O terminals, it has embedded and distributed buffers, and it has huge design diversity.

On one hand, scan and built-in self-test (BIST) are conventional design-for-test methods for which existing computer-aided test (CAT) tools can automate the insertion of test circuitry and the generation of test and diagnosis patterns. However, the results obtained so far from applying only scan and/or BIST to NoCs are discouraging in terms of silicon area, test volume, and test application time.

On the other hand, functional test has been proposed for NoCs. It presents much better results in terms of silicon area, test volume, and test time compared to the other conventional methods (Raik et al. 2009; Hervé et al. 2010). However, it is not clear whether the functional method achieves high fault coverage for the most relevant fault models, whether the test access mechanisms are sufficiently low-cost to make the solution economically viable, or whether the method is general enough to be applied satisfactorily to different networks. These issues must be investigated to make functional test a viable solution for industrial chips.

It is clear that it is not possible to distribute a single clock across a monster chip. Moreover, chip parameter variations, such as delay variation, are becoming a relevant issue in new manufacturing technologies. The asynchronous and GALS (Globally Asynchronous Locally Synchronous) design methodologies have been proposed to solve the timing-related issues in the design of complex systems, thanks to properties such as insensitivity to delay variation and intrinsic timing robustness. NoCs can be easily adapted to the asynchronous and GALS design methodologies, and a few asynchronous and GALS NoCs, such as Chain, Nexus, Mango, ANoC, and Hermes-A, have been proposed. Pontes et al. (2011) present a complete and updated list of such proposals. However, the manufacturing test of asynchronous logic is not well established and there is a lack of CAT tools. The testing of general GALS and asynchronous logic, and of GALS and asynchronous NoCs in particular, is an almost unexplored research topic, with few methods proposed so far (Tran et al. 2009).

10.3 Testing Network-on-Chip Based Systems

When a NoC is used as the interconnection platform of a complex system, one must re-evaluate the pros and cons of the conventional bus-based TAM (Test Access Mechanism) model. On one hand, the bus-based TAM is a general test method for SoCs, it separates the test requirements from the functional requirements, and its DfT modules and CAT tools are well established. On the other hand, it is not known whether this approach is scalable enough for very large NoC-based SoCs. There are only a few chip prototypes using bus-based TAMs (Goel et al. 2004), and the silicon costs (power and area) of bus-based TAMs are not well documented.

The NoC TAM is a test method where the existing NoC is used to transport test data to the cores, so a dedicated bus-based TAM is not required. Several proposals for DfT modules (Amory et al. 2007; Hussin et al. 2007) and scheduling algorithms have been presented, assuming either a specific NoC architecture (Cota and Liu 2006; Cota et al. 2004) or a more general NoC model (Amory et al. 2010).

However, preliminary results show that the NoC TAM might require slightly more silicon area for DfT logic, and its test time can be longer compared to conventional bus-based TAMs (Xu et al. 2008; Amory et al. 2010). Nevertheless, not all aspects of the two approaches have been evaluated yet. Clearly stating and proving the advantages of the NoC TAM and the limitations of the bus-based TAM is still work in progress.


10.4 Fault Tolerance for Network-on-Chip Based Systems

Fault tolerance is required for NoC-based systems because NoCs enable the design of very large chips (reliability and yield are inversely proportional to chip area) and the newer technologies are more vulnerable to manufacturing defects and to interferences of several natures.

Although a considerable number of papers have been published on fault tolerance and reliability of NoCs and NoC-based systems, it is still hard to single out one or two distinguishable methods. The reasons are either that the proposals still focus on case studies rather than general solutions, or that the proposals are incomplete because one or more of the following features are missing:

- A complete description of the DfT logic;
- Being silicon proven, presenting implementation costs in terms of silicon area, delay, and energy consumption;
- Sufficient diagnostic resolution to allow graceful performance degradation;
- The ability to distinguish between transient and permanent faults;
- The ability to detect multiple faults;
- A clear interface between fault location and reconfiguration.

Moreover, there are several issues that have not been addressed, such as:

- A complete fault tolerance framework that combines fault detection, location, and reconfiguration into a single design, implemented in layers of both hardware and software;
- New and standardized performance, reliability, and cost metrics to enable the comparison of fault tolerance approaches proposed in different papers. The huge NoC design diversity makes this issue even more challenging, because the evaluated networks can be very different from each other; nowadays, a quantitative comparison of the existing approaches is almost impossible to perform;
- The granularity of the reconfiguration approach has a direct impact on the silicon cost, performance degradation, generality, and scalability of the method, but these trade-offs have not been evaluated as far as we know. There is an inflection point, somewhere between a few dozen and perhaps a hundred cores, where network-level redundancy starts to become more efficient than router-level redundancy. This inflection point has not been studied so far, because most approaches target small to medium-sized systems;
- There are opportunities for a general fault tolerance approach that can be easily applied to different NoCs and easily automated using CAT tools;
- The manufacturing test of a fault-tolerant network is also a very important point that has not been explicitly addressed so far.


10.5 Network-on-Chip RAS in Emerging Technologies

In this book, we have addressed the reliability, availability, and serviceability challenges related to 2D implementations of NoC-based systems whose links are designed using electrical interconnects. However, 3D technologies (Feero and Pande 2009; Pavlidis and Friedman 2007) and optical interconnects (Brière et al. 2007; Shacham et al. 2008) have more recently been considered as the means to integrate ever more complex and higher-performance logic into NoC-based systems-on-chip. Test, diagnosis, and fault tolerance in these emerging technologies are a brand-new research area, and very few papers have been published on these topics so far (Chan and Hsu 2010).

10.6 Final Remarks

The research on reliable, available, and serviceable NoC-based systems seems to have, at this point, more questions than answers to the challenges presented at the beginning of this chapter. These challenges are in the critical path to making NoCs and many-cores economically viable and a consolidated industrial reality. Hopefully this book can motivate more people to embrace these challenges and help them address the most relevant issues and questions that still need to be answered.

References

Amory AM, Goossens K, Marinissen EJ, Lubaszewski M, Moraes F (2007) Wrapper design for the reuse of a bus, network-on-chip, or other functional interconnect as test access mechanism. IET Comput Digit Tech 1(3):197–206

Amory AM, Lazzari C, Lubaszewski M, Moraes F (2010) A new test scheduling algorithm based on networks-on-chip as test access mechanism. J Parallel Distrib Comput 71(5):675–686

Bjerregaard T, Mahadevan S (2006) A survey of research and practices of network-on-chip. ACM Comput Surv 38:1–51

Borkar S (2007) Thousand core chips: a technology perspective. In: Proceedings of the design automation conference (DAC), San Diego, California, pp 746–749

Brière M, Girodias B, Bouchebaba Y, Nicolescu G, Mieyeville F, Gaffiot F, O’Connor I (2007) System level assessment of an optical NoC in an MPSoC platform. In: Proceedings of the design, automation and test in Europe conference (DATE), Nice, France, pp 1084–1089

Chan M-J, Hsu C-L (2010) A strategy for interconnect testing in stacked mesh network-on-chip. In: Proceedings of the defect and fault tolerance in VLSI systems (DFT), Kyoto, Japan, pp 122–128

Cota E, Liu C (2006) Constraint-driven test scheduling for NoC-based systems. IEEE Transactions on CAD 25(11):2465–2478

Cota E, Carro L, Lubaszewski M (2004) Reusing an on-chip network for the test of core-based systems. ACM Trans Des Autom Electron Syst 9(4):471–499


Feero BS, Pande PP (2009) Networks-on-chip in a three-dimensional environment: a performance evaluation. IEEE Trans Comput 58(1):32–45

Goel SK, Marinissen EJ, Nguyen T, Oostdijk S (2004) Test infrastructure design for the Nexperia home platform PNX8550 system chip. In: Proceedings of the design, automation, and test in Europe (DATE), Paris, France, pp 108–113

Goossens K, Dielissen J, Radulescu A (2005) Æthereal network on chip: concepts, architectures and implementations. IEEE Des Test Comput 22(5):414–421

Guerrier P, Greiner A (2000) A generic architecture for on-chip packet-switched interconnections. In: Proceedings of the design automation and test in Europe (DATE), Paris, France, pp 250–256

Hervé M, Almeida P, Kastensmidt FL, Cota E, Lubaszewski M (2010) Concurrent test of network-on-chip interconnects and routers. In: Proceedings of the Latin American test workshop (LATW). Punta del Este, Uruguay

Hussin AF, Yoneda T, Fujiwara H (2007) Optimization of NoC wrapper design under bandwidth and test time constraints. In: Proceedings of the European test symposium (ETS), Freiburg, Germany, pp 35–42

Karim F, Nguyen A, Dey S (2002) An interconnect architecture for networking systems on chips. IEEE Micro 22(5):36–45

NoCArc. Workshop on network-on-chip architectures. http://nocarc.diit.unict.it. Accessed 23 June 2011

NoCs. International symposium on networks-on-chip. http://www.nocsymposium.org. Accessed 23 June 2011

Pavlidis VF, Friedman EG (2007) 3-D topologies for networks-on-chip. IEEE T VLSI Syst 15(10):1081–1090

Pontes J, Moreira M, Moraes F, Calazans N (2011) Hermes-A: an asynchronous NoC router with distributed routing. In: Proceedings of the international conference on power and timing modeling, optimization and simulation (PATMOS), Grenoble, France, pp 150–159

Raik J, Govind V, Ubar R (2009) Design-for-testability-based external test and diagnosis of mesh-like network-on-a-chips. IET Comput Digit Tech 3(5):476–486

Shacham A, Bergman K, Carloni LP (2008) Photonic networks-on-chip for future generations of chip multiprocessors. IEEE Trans Comput 57(9):1246–1260

Tran XT, Thonnart Y, Durupt J, Beroulle V, Robach C (2009) Design-for-test approach of an asynchronous network-on-chip architecture and its associated test pattern generation and application. IET Comput Digit Tech 3(5):487–500

Xu Q, Yuan F, Huang L (2008) Re-examining the use of network-on-chip as test access mecha-nism. In: Proceedings of the design, automation, and test in Europe (DATE), Munich, Germany, pp 808–811

Page 212: Reliability, Availability and Serviceability

201

AAcademic NoCs, 21–23Access path, 39, 40, 66, 67, 69–71, 74, 77–80,

95, 101, 103, 105Adaptive routing, 14, 21, 181Addressing, 45, 117Advanced eXtensible Interface (AXI),

17, 22, 93Advanced microcontroller bus architecture

(AMBA), 157, 171Aethereal, 21–22, 127, 128, 195Aggressor, 136, 137, 146, 147, 149Aging, 26, 27, 155Algorithm, 6, 13, 28, 59, 85, 118, 142, 157,

175, 196 AMBA. See Advanced microcontroller bus

architecture (AMBA)AND, 27, 138–141, 149AND-bridging, 27ANoC. See Asynchronous NoC (ANoC)Application-specific integrated circuits

(ASIC), 39Arbiter(s), 4, 13, 15Arbitration, 6, 8, 11, 13–15, 121, 128, 144,

145, 172, 183, 185Area overhead, 35, 40, 43, 45–47, 51, 96, 98,

112, 117, 119, 129, 133, 135, 136, 138, 143, 150, 151, 156, 162–164, 166, 171, 173, 179, 180, 182–185, 188, 189

ASIC. See Application-specific integrated circuits (ASIC)

Asynchronous design methodology, 12, 22, 197

Asynchronous NoC (ANoC), 23, 197At-speed test, 51, 116, 124, 128, 138, 150

ATE. See Automatic test equipment (ATE)ATPG. See Automatic test pattern generation

(ATPG)Automatic test equipment (ATE), 35, 36, 41,

51, 65, 86, 90, 98–103, 105, 112, 126, 143, 152

access, 117, 120Automatic test pattern generation (ATPG),

30, 121, 128, 137, 149, 150Availability, 7–8, 22, 51, 77, 79, 80, 82,

196, 199Average path length, 178, 186AXI. See Advanced eXtensible Interface

(AXI)

BBalanced scan chains, 30, 45, 87Balanced wrapper scan chains, 92Barrel-shifter, 137BCH. See Bose-Chaudhuri-Hocquenghem

(BCH)BE. See Best effort (BE)BE NoCs. See Best effort NoCs (BE NoCs)Benchmark(s), 50–51, 64, 65, 70, 71, 74, 76,

80–83, 87, 88, 90, 96Best effort (BE), 8, 21, 22, 126, 127Best effort NoCs (BE NoCs), 21, 22, 101–112Best fit decreasing (algorithm), 45, 63BILBO. See Built-in logic observer (BILBO)Bin packing algorithm, 45, 63BIST. See Built-in self test (BIST)BIST-based test, 45, 46Bit error rate, 158Bit flip, 27, 156, 171, 173, 193Board, 1, 30–32, 116

Index

É. Cota et al., Reliability, Availability and Serviceability of Networks-on-Chip, DOI 10.1007/978-1-4614-0791-1, © Springer Science+Business Media, LLC 2012

Page 213: Reliability, Availability and Serviceability

202 Index

Bose-Chaudhuri-Hocquenghem (BCH), 162Boundary scan (cell), 30, 31, 115, 143Bounds, 21Bridging (faults), 27, 129, 134, 136, 137Broadcast, 14, 121, 122Buffer depth, 196Buffer(ing), 4, 6, 8, 11, 13–17, 22, 23, 66, 67,

69, 90, 99–101, 117, 119, 122, 146, 160, 164, 168, 170, 171, 180, 182, 184, 190, 196

Built-in block observer, Built-in logic observer (BILBO), 32, 33Built-in self test (BIST), 32–34, 37, 45–47, 77,

81, 82, 115–117, 119, 120, 122, 123, 127–130, 134–152, 155, 178, 179, 183–186, 188, 189, 196

Bus-based TAM, 3–5, 7, 23, 44, 47, 51, 52, 63, 75, 98, 197

Butterfly fat-tree, 117, 128, 149, 150Bypass, 37, 40, 43, 50, 63, 180, 181, 183,

186, 191

C
CAC. See Crosstalk avoidance coding (CAC)
CAD tools, 22, 33, 192
CADEC. See Crosstalk Avoiding Double Error Correction Code (CADEC)
Capacitance, 72, 165, 168
CAT. See Computer-aided test (CAT) tool
CHAIN, 23
Channel bitwidth, 12, 62, 64, 65, 76, 85, 88
Check(er)board, 124, 128, 129, 148–150
  test, 124, 148, 179
Chip(s), 1, 21, 26, 60, 116, 133, 155
  interface, 41
  test, 8, 25–52, 65, 195–197, 199
Chip-level interface, 41, 60
Circuit switching, 15, 22, 77
Circuit-level fault tolerant approaches for NoCs, 157
Clock
  cycle(s), 17, 30, 62, 69–71, 76, 89, 91, 95, 98, 99, 140, 143, 146, 147, 179
  rate, 89, 90
Coarse grain fault reconfiguration, 180
Codec delay, 155, 156, 165, 168, 171
Communication(s)
  channel, 8, 17, 64, 72, 94, 105, 112, 115, 117, 122, 124, 127, 128, 130, 133–153, 164
  protocol, 2, 11, 16, 60–62, 85, 94–96
Compliance levels, 50
Compressor, 32
Computer-aided test (CAT) tool, 196–198
Conceptual test architecture, 37–38, 49
Concurrent test, 98, 135
Configurable error control coding, 162
Configuration, 2, 3, 8, 39, 41, 46, 63, 64, 66, 71, 82, 87, 90–93, 105, 108, 110, 124–129, 135, 138, 139, 142–145, 148–152, 162, 164, 179–181, 192
Contention, 13, 21, 171
Control
  bits, 22, 61–63
  flow, 121, 128
  logic, 5, 41, 90, 98, 112, 120, 121, 157, 164, 172, 179, 184–186, 188
  part, 122, 128, 137, 147, 171, 179, 185
Controllability, 129, 151
Core(s), 1, 12, 34, 59, 85, 115, 133, 164, 195
  accessibility, 38, 40
  isolation, 34, 35
  logic, 35, 48, 73
  provider, 2, 34, 35, 40, 48
  terminals, 37, 49, 93, 94, 96
  test interface, 38, 87
  test strategy, 34
  user, 34, 35, 48
Core-based SoCs, 1, 2, 25, 36, 39, 48, 59
Core test language (CTL), 35, 48, 50, 51
Core under test (CUT), 35, 41, 61–63, 67, 71, 77, 78, 86, 88, 98–100, 109
Coupling capacitance, 165
CRC. See Cyclic redundancy check (CRC)
Critical application, 2, 21, 175
Crossbar, 3, 18, 178, 182, 183, 185, 188
Crossbar bypass bus, 183, 186, 191
Crossbar switches, 3, 4, 18, 45, 178, 182, 183, 185, 188
Crosstalk, 2, 7, 27, 146–152, 155, 164–168, 170–172, 186
  fault, 129, 133, 134, 136, 137, 144, 148, 157, 164, 178, 185
Crosstalk avoidance coding (CAC), 156, 165–171
Crosstalk Avoiding Double Error Correction Code (CADEC), 166–171
CTL. See Core test language (CTL)
CUT. See Core under test (CUT)
Cyclic codes, 156
Cyclic redundancy check (CRC), 157–159, 164, 168–171, 178, 183, 188

D
D-algorithm, 28
Daisychain architecture, 41, 43

Page 214: Reliability, Availability and Serviceability


Data retransmission, 155–157, 164, 168
Datapath test, 122, 137, 185
DBP. See Default backup path (DBP)
Deadlock, 12, 14
  avoidance, 13, 191
  free, 13, 23
Dedicated path, 67, 81, 82
Dedicated TAM, 87, 100, 101, 103, 112
Deep submicron (DSM) technologies, 155, 156, 173, 175
Default backup path (DBP), 180–182, 186, 191
Defects, 7, 8, 26, 27, 32, 122, 150, 155, 160–162, 171–173, 176, 180–182, 185, 198
Deflective switch, 14, 22, 117, 122, 123, 127–130, 135, 137–138, 149–152
Delay, 2, 3, 12, 14, 15, 27, 32, 98–101, 124, 128, 129, 134, 136, 137, 147, 148, 150, 156, 162, 165, 168, 169, 171, 172, 187, 191, 198
  fault, 148, 179
  variation, 197
Delayed-sampling registers, 164
Density, 3, 7, 134
Dependability, 2, 3
Design-for-test(ability) (DFT), 7, 28–34, 48, 51, 85, 86, 88, 101, 103, 112, 115, 116, 179, 185, 189, 191, 196–198
Deterministic routing, 14, 22, 66
Deterministic test pattern, 28, 38, 45, 46
Device aging. See Aging
Device transaction level (DTL), 17, 22, 93, 96
DFT. See Design-for-test(ability) (DFT)
Diagnosability, 116, 143
Diagnosis, 8, 27, 115–131, 133–153, 177, 178, 183, 193, 196, 199
Direct access, 127–130, 148–152
Directed graph, 103
Directed weighted graph, 40
Distributed BIST, 119, 120, 135, 150–152
DSM. See Deep submicron (DSM) technologies
DTL. See Device transaction level (DTL)
Duplicate-add-parity (DAP), 165, 166, 168–171
Dynamic consumption, 72

E
ECC. See Error control coding (ECC)
Echo test, 122
Electromagnetic interference, 155
Electromigration, 2, 156
Encoding, 22, 157, 163
End-to-end error detection, 177
End-to-end flow control, 158
End-to-end retransmission, 159, 160, 171
Energy consumption, 2, 155–158, 161, 163, 165, 168, 171, 172, 175, 177, 180, 182, 185, 189, 191, 198
Error, 26, 157–166, 168, 175–193
  correcting code, 158, 165, 182, 186, 188
  detection/correction capability, 158, 162, 166, 171, 176
  flag, 143, 177, 185, 186, 188
  signal, 121
  syndrome, 160
Error control coding (ECC), 8, 155–173, 185, 186, 188
Exhaustive test pattern generation, 23, 28
External test(er), 32, 35, 37, 38, 43, 48, 65–67, 69, 71, 81, 85, 88, 98, 105, 112, 117, 124, 148, 155, 185
  controller, 65, 66
  mode, 31
  sources, 8, 59, 101
External test sink, 117

F
Failure rate, 168, 177, 178, 185, 189–191
Falling speed-up, 136
Fat-tree network topology, 19, 22, 117, 128, 149, 150
Fault(s), 3, 11, 26, 81, 115, 133, 155, 175, 198
  collapsing, 27, 126, 130
  coverage, 27, 28, 32, 34, 37, 115–117, 121, 122, 124, 128, 129, 133, 134, 137, 144, 150, 151, 155, 173, 176, 178, 179, 187–189, 197
  detection, 8, 131, 134, 141, 143, 155, 157, 171, 172, 176, 177, 185–189, 191, 198
  diagnosis, 8, 120, 121, 125, 126, 130, 131, 133, 138, 143, 149, 152, 177
  dictionary, 27
  dominance, 27
  equivalence, 27
  escape, 176
  free, 8, 26, 60, 117, 121, 147, 155
  injection, 27
  location, 8, 27, 173, 175–179, 186, 188, 191, 198
  matrix, 178, 188
  mitigation, 172, 173, 176
  model, 26–28, 52, 115, 117, 121, 122, 124, 126, 128, 129, 133, 134, 137–140, 149, 150, 176, 178, 179, 197
  propagation, 28

Page 215: Reliability, Availability and Serviceability


Fault(s) (cont.)
  reconfiguration, 175, 185, 188–191
  recovery, 176
  simulation, 27, 28
  tolerance (tolerant), 3, 7, 8, 21, 23, 52, 133, 157, 160, 162, 176, 180, 184, 189, 191–193, 196, 198, 199
    method, 193, 198
    routing algorithms, 157, 184
    task mapping/migration, 157
Faulty region, 182
FIFO, 12, 15, 101, 103, 115, 117, 119–121, 128, 129, 135, 144, 158, 171, 183, 185, 186, 188
Finite state machine (FSM), 136, 143, 164, 172
Fixed-width TAMs, 44
Flat core, 116, 120
Flexibility, 21, 50, 51, 81
Flexible-width TAMs, 44
Flip-flops (FF), 27, 29, 30, 32, 72, 87, 89, 95, 120, 121
Flits, 15, 61, 62, 64, 65, 67, 68, 77, 82, 88–93, 95, 99, 139–141, 143, 146, 147, 159, 160, 162, 163, 183
  width, 12
Flow control, 6, 8, 12, 13, 16, 77, 158
Flow control wires, 157, 172
Forbidden pattern (FP), 165
Formulation, 68
FP. See Forbidden pattern (FP)
Frequency divider, 89
Frequency scaling, 90
Full scan chain, 30, 47
Functional bus, 38, 39, 52
Functional mode, 94, 116, 123, 126–128, 134, 138, 149
Functional test(ing), 26, 115, 116, 127, 128, 133, 144, 149, 178, 179, 185, 196, 197

G
GALS. See Globally asynchronous locally synchronous (GALS)
GALS design methodology, 197
GALS NoC, 197
Generation, 26, 28, 32, 33, 36, 37, 88, 116, 137
Globally asynchronous locally synchronous (GALS), 12
Graceful performance degradation, 191, 198
Guaranteed bandwidth, 8, 22, 93, 98
Guaranteed services, 3, 13, 14, 21, 22, 26, 85, 98, 101, 103, 140, 141, 196
Guaranteed throughput, 15, 21, 98, 100, 126, 127

H
Hamming distance, 171
Hamming encoding, 157
Handshake(ing), 12, 93, 96, 134, 143, 144, 149, 150
HERMES NoC, 22, 195, 197
Hierarchical core, 50, 116
Hypercube network topology, 18

I
I/O. See Input/output (I/O)
IEEE Standard, 48, 49
IEEE standard 1149.1, 31, 39
IEEE standard 1450.6, 35, 48–50
IEEE standard 1500, 35, 48–50, 62, 95–97, 116, 120
IEEE Test Technology Technical Committee (TTTC), 48
ILP. See Integer linear programming (ILP)
ILT. See In-line test (ILT)
In-line test (ILT), 161, 162
Information redundancy, 156, 157, 168
Integer linear programming (ILP), 44
Intellectual property (IP), 1, 11, 12, 17, 50
Interconnect, 2, 5, 11, 12, 25, 93, 133–135, 143, 144, 151, 157, 158, 160
  fault, 8, 27, 133, 134, 150
Interconnection(s), 3, 5, 7, 11, 35–36, 38–41, 52, 93, 94, 98, 197
  infrastructure, 34, 36
  testing, 35–38
Intermittent faults, 26, 27, 160
Internal scan chains, 42–44, 50, 51, 63, 69, 87, 90, 109, 120
Internal test mode, 37, 48
Intra-chip communication (infrastructure), 195, 196
Inward-facing test modes, 49
IP core network topology, 2, 3, 12, 13, 16, 48, 60, 66, 73, 76, 77, 115, 126, 133, 142, 164
Irregular network topology, 20, 22, 122, 181

J
JTAG, 31, 39

K
k-ary, 18
k-tuples, 46

Page 216: Reliability, Availability and Serviceability


L
Large scale integrated (LSI) circuits, 29
Lifetime, 2, 3, 25, 183
Linear feedback network, 32
Linear feedback shift registers (LFSRs), 32, 33, 82
Links, 7, 11, 12, 14, 15, 21, 22, 80, 122, 124, 125, 128–130, 133–135, 137, 138, 143, 149, 150, 152, 153, 158–164, 168, 170–173, 175–178, 180–182, 184, 185, 188, 199
List scheduling, 69, 87
Livelock, 12, 14
Load capacitance, 72
Load fluctuation, 98, 101
Long-term research opportunities, 196
Low-power codes (LPC), 165
LSI. See Large scale integrated (LSI) circuits

M
MANGO NoC, 22, 195, 197
Manufacturing, 1–3, 7, 25–27, 48, 52, 115, 130, 131, 133, 152, 196, 197
  defects, 7, 8, 155, 161, 173, 175, 198
  testing, 7, 25, 26, 155, 185, 197, 198
Many-core systems, 195–196
Maximal aggressor fault (MAF), 136, 137, 146–151
Mean time to failure (MTTF), 182, 187
Memory, 27, 47, 81, 128, 173
  bandwidth, 195
  BIST, 122
  faults, 117
Mesh network topology, 18, 20, 22, 23, 66, 71, 105, 117, 128, 130, 139, 142, 144–146, 149, 150, 152, 153, 180–182, 184, 185
Message(s), 6, 17, 18, 60, 61, 76–78, 158
  passing, 5, 22
Metrics, 21, 158, 178, 183, 185–187, 189, 193, 198
Minimal routing, 14, 22
Modular testing, 25, 48, 50–52, 59–83, 116
Monster chips, 195–197
Multi-rate clocks, 88, 90
Multicast, 14, 22, 117, 119, 126, 135, 136, 151
Multiple clock domains, 45, 50
Multiple input shift register (MISR), 32, 33
Multiple test sets, 45, 81, 82, 90, 144

N
Narrowcast, 14, 22, 126
n-cube, 18
n-dimensional, 18
Negative glitch, 136, 147
Network
  acceptance rate, 158
  adapter (see Network, interface)
  congestion, 15, 155, 172, 175
  interface, 1, 7, 8, 12, 16–17, 22, 37, 59, 61–65, 67, 72, 73, 77, 93, 115–117, 122, 126, 128, 131, 133, 138, 142, 143, 159, 179, 185
  latency, 3, 4, 8, 11, 12, 15, 17, 21, 59, 67, 93, 98, 100, 103, 146, 147, 156, 158, 160, 162, 163, 169, 177, 180, 184, 186
  performance, 17, 76, 175, 178, 182, 185, 190, 192
  power dissipation, 74, 158
  throughput, 177, 178
  topology, 19, 180, 181, 191
Network bandwidth, 3, 5, 8, 13, 17, 22, 39, 45, 46, 86–91, 93–95, 97, 98, 103, 109, 112, 126, 195
Network-on-chip (NoC), 5, 11, 25, 59, 85, 115, 133, 155, 175, 195
  building block, 16, 115, 116, 122
  3D, 153, 199
  design diversity, 195, 196, 198
  infrastructure, 8, 115, 133
  latency, 186
  TAM, 59, 197
  throughput, 186
NoC. See Network-on-chip (NoC)
NoC based systems, 7, 36, 51–52, 59, 90, 133, 153, 176, 180, 195–199
NoC based TAM, 59, 61, 64, 98, 100, 112
Non-preemptive (testing), 59, 77–82, 85, 87, 89
Normal operation mode, 46, 60, 62, 63, 116, 118, 122, 134, 138
Nostrum NoC, 22, 122, 124, 127, 128, 148–150, 195

O
Off-line testing, 8, 155, 175
On-chip, 21, 37, 38, 41, 45, 59, 62, 157, 158, 165
  clocking, 88, 89
On-line, 156, 157, 169, 170, 172, 173, 175, 188
  fault detection, 8, 155, 172, 188
  testing, 8, 155–157, 175, 177, 178
Open core protocol (OCP), 17, 22, 93
Open fault, 129, 137
Operation frequency, 32, 85, 95

Page 217: Reliability, Availability and Serviceability


Operation modes, 35, 46, 48–50, 60, 62, 63, 116, 118, 119, 122, 134, 138
Optical, 153
Optical on-chip interconnect, 153, 199
OR-bridging, 27
Output ports, 6, 12–15, 40, 49, 66, 67, 71, 77, 79, 93–96, 164, 177, 178, 183, 185, 186, 188

P
Packet(s), 5, 11, 61, 88, 138, 156, 177
  delivery rate, 178, 187
  duplication, 178
  injection rate, 158, 171
  latency, 67, 103, 156, 160
  priority, 69, 184
  rerouting, 183
  switching, 23, 45, 146
Pareto curve, 103, 104
Parity check, 157
Partial scan, 30, 47, 117, 120–122, 127–130
Partition, 47, 101–103, 105–108, 110–112, 117, 127, 148
Path, 6, 13, 60, 95, 117, 139, 160, 175, 199
  delay, 27
  length, 66, 67, 69, 71, 178, 186
Peak power consumption, 73
Periodic testing, 185
Permanent fault(s), 26, 27, 160–162, 168, 170–173, 177, 178, 180–182, 185, 186, 188, 190, 191, 198
Permutation, 76, 78–80
Phase shifter, 33
Pin count, 30, 32, 39, 47, 115
Placement, 47, 67, 71, 76, 103, 196
Platform, 2, 4–6, 16, 25, 52, 73, 197
Port swapping, 183, 186, 191
Power, 2, 21, 32, 73, 159, 184, 195
  budget, 37, 46, 51, 73–75, 77, 80, 195
  constraints, 36, 46, 47, 72, 75, 82, 88–90
  consumption, 1, 4, 5, 11, 12, 22, 43, 46, 47, 51, 72–75, 82, 88–90, 93, 159, 172, 184
  dissipation, 2, 25, 46, 72–75, 156, 158, 160, 162, 164, 177, 185
Power-aware test scheduling, 72–77, 90
Power-constrained test schedules, 45
Precedence, 46, 47, 68, 69, 81, 82
Precedence-based scheduling, 46
Preemption (preemptive), 47, 77, 82
  test scheduling, 66–77
  testing, 77
Printed circuit board (PCB), 30
Priority(ies), 6, 15, 69, 79, 89, 184
Processing element (PE), 180–182, 193, 195
Proteo NoC, 23
Pseudo-code, 78, 107–109, 111
Pseudo-random, 28, 45, 46
Pseudo-random test vectors, 28, 32, 33, 38

Q
QNoC, 7, 22, 195
QoS. See Quality-of-service (QoS)
Quality, 2, 13, 26, 27, 46
Quality-of-service (QoS), 3, 4, 11, 13, 21, 184
Queue(ing), 13, 22, 120, 126, 158

R
Radiation, 26, 155
Real-time, 2, 21, 37
Reconfigurable, 2, 91–93, 160, 161, 180, 190, 191
Reconfigurable network topology, 181
Reconfigurable NoC link, 180
Reconfiguration, 2, 8, 82, 91, 109, 160–162, 172, 173, 175–193, 198
Recovery, 3, 157, 158, 163, 176, 196
Rectangle packing, 90
Redundancy, 11, 156–165, 168, 173, 180, 183, 184, 186, 190, 198
Reliability, 2, 3, 7–8, 21, 25, 52, 112, 155–158, 160, 162, 164–166, 172, 175, 177, 180, 182, 183, 185, 187, 189, 195–196, 198, 199
Reliability-energy tradeoff, 157
Reliable NoC-based system, 176, 195
Residual error rates, 160
Resource allocation, 46
Response analysis, 35, 116
Retransmission, 8, 15, 155–173, 177, 180, 186
Ring network topology, 22, 180
Router datapath slicing, 183
Router(s), 5, 11, 66, 98, 115, 133, 158, 175
Router-to-router flow control, 158
Routing, 6, 8, 11–15, 21–23, 45, 52, 66, 99, 101, 105, 115, 117, 121, 122, 124–126, 128, 139, 144, 146, 148, 157, 178, 179, 183, 184, 186
  algorithm, 6, 13, 14, 18, 21, 66, 102, 103, 112, 118, 157, 175, 181, 184–186, 188, 190–191, 196
  (control) logic, 115, 117, 121, 122, 124, 128, 144, 145, 148, 157
  path, 14, 77, 79, 82, 124
  table, 183, 184
Run-time faults, 8, 155, 175

Page 218: Reliability, Availability and Serviceability


S
Sample mode, 31
Scalability, 3, 18, 40, 52, 193, 198
Scalable core test architecture, 48–50
Scan(s), 29, 64, 86, 115, 143, 196
  chain, 29–35, 41–47, 50, 51, 61–65, 67–69, 77, 87, 90–96, 98, 109, 117, 120–122, 129, 143, 151, 152
    architectures, 41–43
    length, 87, 109
Scan-based test(ing), 29, 30, 32, 68, 115, 120
Scan-path, 29
Schedule(ing), 45–47, 59, 67–71, 73, 74, 76–79, 81, 89, 91, 101, 197
Selected data inputs (SDI), 96
Selected data outputs (SDO), 96
Self-testing using MISR and parallel shift, 33
Serial mode, 30
Serial-in, 120
Serial-out, 120
Serial-parallel conversion, 88
Shift register, 29, 32
Short-circuit, 27, 138, 142–144, 148–150
  fault, 124, 134, 148
Shortest path problem, 40
Signature, 33
Signature registers, 46
Silicon area, 111, 136, 155, 156, 162–164, 168, 171–173, 175, 177–180, 182, 185, 188, 189, 191, 192, 196–198
Silicon protection factor (SPF), 183, 187, 189, 193
Single event transient (SET), 173
Single event upset (SEU), 164, 173
Slots, 69, 70, 73, 74, 76, 82, 86, 87
Slow-to-fall, 27
Slow-to-rise, 27
SoC. See System-on-chip (SoC)
Soft cores, 35, 40
Soft errors, 164
Space redundancy, 156, 157, 160–165, 168
Spanning tree, 122
Spare router, 181, 182, 186, 191
Spare wire(s), 156, 160–164, 168–170, 172, 180, 191
Speed ratio, 86
Spidergon, 22
SPIN, 22, 195
Split transmission, 160, 161
Standard
  cell, 148
  deviation, 17
Standardization initiatives, 37, 48
Static routing, 14
STNoC, 22
Store-and-forward strategy, 15
Structural fault coverage, 178, 179
Structural model, 117
Structural test(ing), 26, 116, 127, 149
Stuck-at fault, 27, 117, 121, 124, 127–129, 133, 134, 136, 137, 144, 148–150, 178, 179, 183, 185
Stuck-at fault coverage, 116, 117, 121, 122, 124, 129, 148, 150, 176
Stuck-on, 27
Stuck-open, 27
STUMPS, 33
Switch(es), 4, 13, 22, 49, 122, 127, 144, 149, 159, 160, 165, 171, 177, 184
Switch-to-switch error detection, 159
Switch-to-switch retransmission, 160, 171
Switching, 5, 6, 8, 11, 15, 21, 22, 38, 62, 66, 72, 165, 168, 175
  activity, 156, 168
  matrix, 12, 15
Synchronization, 3, 12, 17, 52, 143
Syndrome storing-based detection (SSD), 161, 162
System(s)
  characterization, 36
  interface, 44, 51, 52, 65–67, 78, 86, 99, 102
  level testing, 36
  power constraints, 36, 46, 72
System-on-board (SoB), 1
System-on-chip (SoC), 1–8, 11, 25, 34–48, 50–52, 59–83, 90, 96, 110, 126, 127, 129, 130, 138, 151, 152, 197
  SoCBus, 23
  SoCIN, 22, 121, 128, 143, 147, 149, 150, 195
  test benchmarks, 50–51, 71, 96
  test requirements, 34–36

T
TAM. See Test access mechanism (TAM)
TAP. See Test access port (TAP)
TDG. See Test data generator (TDG)
TDM. See Time division multiplexing (TDM)
TED. See Test error detector (TED)
Terminals, 37, 49, 51, 63, 93–97, 185, 196
Test, 7, 22, 25, 59, 85, 115, 133, 155, 175, 195
  access bandwidth, 94, 95
  access mechanisms (TAM), 2, 3, 6, 8, 13, 15, 21, 22, 26, 32, 35–45, 48, 49, 51, 52, 59, 61, 75, 85, 98, 99, 103, 115, 158, 163, 176, 183, 188, 190, 193, 197

Page 219: Reliability, Availability and Serviceability


Test (cont.)
  application, 28, 32, 33, 62–64, 76, 91, 92, 98, 135, 144
    time, 30, 45–47, 81, 82, 90, 98, 115, 127, 129, 133, 135, 136, 138, 143, 144, 148, 151, 196
  architecture, 25, 33, 37–38, 41, 44–50, 52, 106, 108–112
  bandwidth, 88, 90, 94, 97, 98, 112
  bus(es), 40, 41, 44, 45, 52, 78, 129, 133
    assignment problem, 44
  configuration, 63, 64, 124–129, 135, 139, 142–145, 148, 149, 151, 152, 179
  constraints, 45, 85
  control (signals), 41, 61, 62, 77, 119, 188
  controller(ing), 32, 36, 63, 65, 66
  coverage, 155
  cycle, 69, 95, 99, 100, 143, 152
  data, 37, 38, 40, 41, 45–47, 49, 52, 60, 61, 65, 66, 76, 85, 86, 88–91, 93, 96, 98–103, 105, 115, 124, 151, 179, 197
    flow, 96, 98
    signals, 41
    volume, 46, 47, 52, 90, 105, 124
  frequency, 32, 73, 85–87, 89, 91, 95, 99
  generation, 28, 32, 33, 116
  header, 61–63
  information, 35, 48, 67, 73, 94
  infrastructure, 31, 36, 52
  input path, 66
  integrator, 42, 43
  interface, 36, 38, 49, 50, 52, 65, 66, 69, 71, 72, 78, 85–88, 90, 96, 101, 112
  length, 28, 30, 97, 103, 104, 106–112
  methods, 26, 47, 48, 161, 178, 192, 196, 197
  mode, 7, 8, 29, 31, 35, 43, 46, 49, 60, 75, 93, 118, 119, 121, 155, 175, 177, 179, 184
  output paths, 66
  packet(s), 8, 59–65, 67, 69–71, 76, 77, 87, 89, 95, 98, 128, 138–141, 143, 144, 146–148, 179
  parallelization, 46, 67, 72, 85, 86
  path, 101, 112, 138–141, 143, 144, 146, 147, 149–151
  pattern, 20, 26, 28, 30, 35, 38, 45–48, 50, 51, 63, 69, 73, 76, 90, 94, 98, 124, 127–130, 133, 135, 137, 148–150, 152, 155, 161, 165, 168, 175, 185, 188, 196
    generation, 28, 37, 121
  phase, 122, 123, 130, 137, 138, 150, 152
  pins, 30, 35, 43, 44, 50, 85, 86, 88, 98–100, 102, 103
  pipeline, 77, 81, 94
  plann(ing), 36, 37, 47, 51, 81
  ports, 66, 85, 94, 96
  program, 36
  resource(s), 34, 36, 46, 47, 77
  responses, 30, 33, 35, 37, 43, 61, 62, 64, 67–69, 77, 93, 96, 100, 117, 121, 122, 124, 129, 133, 137, 138, 148, 151
    analyzers, 32
  schedule(ing), 36, 37, 45–47, 51, 66–83, 85–88, 90, 93, 101–112, 130
  sequence, 134, 136, 137, 139, 141, 143, 144, 146, 148, 149, 151, 189
  session, 81, 82, 142–145, 151
  set, 27, 28, 41–43, 45–47, 70, 77, 78, 81, 82, 108
  sink(s), 8, 37–39, 59–62, 65, 100, 101, 117
  source(s), 6, 8, 21, 22, 35, 37–39, 59–61, 65, 72, 101, 117, 119, 126, 128, 140, 158, 162, 168, 176, 190
  space, 43
  stimuli, 28, 29, 32, 33, 37, 93, 128, 129, 149, 151
  structure, 29, 35, 48, 116, 134
  time, 30, 35–37, 39, 41, 43–47, 51, 63, 66–69, 71, 72, 74–78, 81, 83, 85, 87–90, 93, 94, 98, 103–110, 115–117, 129, 148, 150, 151, 155, 179, 196, 197
    minimization, 45–47
    optimization, 43
  vector(s), 26–28, 30, 32, 34, 41, 43, 60, 61, 64, 67–69, 73, 76, 77, 82, 91, 94, 117, 120–122, 128, 133, 134, 136–140, 143, 144, 146, 147, 151
  volume, 69, 196
    reduction, 45
  wrapper, 35, 36, 38, 40, 44–46, 61–65, 85, 89, 93, 94, 97, 98, 112, 116, 117, 120–122, 129, 130
    interface, 89
Test access mechanism (TAM), 35–47, 49, 51, 52, 59–61, 63, 64, 72, 75, 85, 87, 93, 98–101, 103–112, 115, 197
  architectures, 39, 43, 45, 46, 49
  bandwidth, 39
  bitwidth, 39, 63, 64, 104, 105, 109
  optimization, 46, 47
  partition, 47
  width, 38, 39, 42, 45, 47, 87, 103, 109
Test access port (TAP), 31, 39, 94
Test data compression, 32, 33, 45, 86

Page 220: Reliability, Availability and Serviceability


Test data generator (TDG), 119, 122, 135–138, 142, 143, 151, 152
Test error detector (TED), 119, 122, 135, 137, 138, 142, 143, 151, 152
Test pattern generator (TPG), 28, 33, 161, 162, 172
Test technology technical committee (TTTC), 48
Testability, 48, 161, 179
  measures, 30
Tester vector memory depth, 47
Testing, 7, 22, 25, 59, 85, 115, 133, 155, 175, 195
  mode, 116–119, 134
  path, 117, 118, 126, 130, 152
  time, 39, 45, 63, 87, 112
TestRail architecture, 41, 43, 44
Thermal balance, 90
Thermal budget, 85
Throughput, 3, 12, 15, 17, 21, 52, 98, 100, 126, 156, 162, 177, 178, 186
Time division multiplexing (TDM), 13, 21, 22, 87, 88, 91, 182
Time redundancy, 156–160, 168
Time slots, 69, 70, 73, 82, 86, 87
Time tag, 68, 70, 79, 80, 82
Time-division scheduling, 89
Time-to-market, 1, 25
Topology(topological), 3, 6, 17–23, 28, 66, 71, 85, 117, 121, 122, 128, 149, 150, 180–182, 185, 186, 191, 196
Torus, 18, 22, 66, 71, 121, 128, 183
TPG. See Test pattern generator (TPG)
Transient faults, 26, 52, 160, 163, 168, 170, 172, 173, 177, 178, 186
Transparent modes, 40
Transparent paths, 39, 40
Triple modular redundancy (TMR), 156, 160–162, 164, 168–170, 172
Turn model, 14

U
Unbalanced scan chains, 87
Unicast, 14, 117–119, 126, 135, 136, 151
Unidirectional ring topology, 180

V
Variability, 23
Verification, 25, 26, 77, 89, 143
Victim, 136, 137, 147, 148, 164
Virtual channel (VC), 13, 21, 22, 89, 184, 189
  allocator, 184
Virtual components, 1
Virtual TAMs, 103–105, 108–110
Virtual-cut-through, 15
VisibleCores, 41
Voltage swing, 157, 158

W
Walking-one, 139–141
Walking-zero, 139
WBR. See Wrapper boundary register (WBR)
WBY. See Wrapper bypass register (WBY)
WCG. See Wrapper cell group (WCG)
Wear out, 155
Width adaptation, 38
WIR. See Wrapper instruction register (WIR)
Wiring fault, 52, 128, 129, 133, 137, 138, 149, 151
Wormhole, 15, 21, 22, 62, 66, 67, 146
Wrapper, 6, 35, 61, 85, 116, 190
  cells, 49, 90, 92, 95–97
  control, 61, 63, 77
  optimization, 45
Wrapper boundary register (WBR), 49, 50
Wrapper bypass register (WBY), 50
Wrapper cell group (WCG), 90
Wrapper instruction register (WIR), 50
Wrapper scan chains (WSC), 61–65, 67, 69, 87, 90–93, 95, 96, 109
Wrapper serial input (WSI), 49, 50
Wrapper serial output (WSO), 49
WSI. See Wrapper serial input (WSI)
WSO. See Wrapper serial output (WSO)

X
XPipes, 22, 195
XY routing, 14, 22, 23, 66, 105, 124, 125, 139, 144, 146, 148, 184, 185

Y
Yield, 2, 3, 8, 25, 32, 45, 52, 133, 173, 182, 187, 198
YX routing, 179

Z
Zero-jitter, 98