REALTEST: Test and Reliability of Nano-Electronic Systems
01.2006 - 07.2013, DFG-Project: WU 245/5-1, 5-2
Project Description
The continuing scaling of circuit technology enables the integration of complete systems and even complete compute clusters on a single chip. At the same time, the nano-electronic structures are subject to a growing number of defect mechanisms. The manufacturing process is much more sensitive to environmental influences, and for very small structures quantum mechanical effects demand even higher manufacturing precision. Furthermore, variations in process and materials lead to variations in circuit parameters across space (the position on the chip) as well as time (due to ageing effects). The "International Technology Roadmap for Semiconductors" [SIA] estimates that by 2019 the feature size of process technology will reach 7 nm, but only between 10% and 20% of chips will be defect-free. To achieve economical yield rates, it is imperative that appropriate measures are taken, such as fault tolerance, redundancy, repair and reconfiguration.
In an ongoing trend, the ratio of flip-flops to combinational elements in modules of free random logic is growing. This development is a consequence of the massive pipelining used to increase the operating frequency of integrated circuits and of the ever shorter critical paths in the combinational part. Additionally, many design techniques at the architectural level, such as speculation and instruction scheduling in hardware, require larger register sets. Finally, the existing techniques for improved reliability, such as temporal and structural redundancy, increase the number of memory elements in free random logic. Circuits with millions of flip-flops in free random logic are already commonplace in industry [Kupp04].
This growth in memory elements is observed not only in data paths but also in control-dominated modules, for which regularity and minimized delay are becoming more important than minimum-area state encoding; this, in turn, further increases the share of memory elements.
The flip-flops of an integrated circuit are, like its combinational elements, subject to the growing variations and the defect and failure mechanisms of nano-electronic circuits, which affect yield during manufacturing as well as reliability during operation. Most significant for flip-flops, however, is their susceptibility to environmental influences such as particle radiation (e.g. protons); they will therefore require protection mechanisms that improve reliability, mask faults and maintain an acceptable yield. For memory arrays with high regularity, methods that tackle these problems already exist (Figure 1). Techniques currently deployed in industry include repair and reconfiguration, error detection and error correction through encoding, periodic refreshing of the data ("scrubbing") to protect against fault accumulation, and built-in self-test techniques with redundancy analysis and self-repair.
Figure 1: Memory repair and error recovery
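The principle of error correction through encoding can be made concrete with a small sketch. The following Python fragment is purely illustrative (production memories use wider SEC-DED codes rather than this textbook Hamming(7,4) code): redundancy added at write time lets a single flipped bit be located and corrected at read time.

```python
# Hamming(7,4): 3 parity bits protect 4 data bits; any single-bit upset
# produces a non-zero syndrome that points at the flipped position.

def hamming74_encode(d):
    """Encode data bits d[0..3] into a 7-bit codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # non-zero: flip the offending bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single-event upset
assert hamming74_correct(word) == [1, 0, 1, 1]
```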
It will be necessary to adapt these methods to memory structures in free logic, because the growing use of power reduction techniques, such as clock gating, reduces the number of concurrently switching elements and especially of concurrently active flip-flops. Consequently, a large number of flip-flops have to hold their values over long time frames, which means that these memory elements are subject to the same long-term influences and fault accumulation effects that are already significant for dynamic memory arrays. It is therefore imperative to introduce periodic refreshing, as is already done for memory arrays [Hell02].
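Why periodic refreshing matters can be illustrated with a small simulation (all parameters here are assumed for illustration only): a single-error-correcting code fails exactly when a second upset hits the same word before the first one has been scrubbed away, so the scrub interval directly controls the rate of accumulated, uncorrectable errors.

```python
# Toy model of fault accumulation in rarely clocked storage: each word
# tolerates one upset (single-error correction); a second upset in the same
# word before the next scrub pass is an uncorrectable failure.
import random

def accumulated_failures(words=500, cycles=5000, p=2e-4, scrub_every=None):
    flips, failures = [0] * words, 0
    for t in range(1, cycles + 1):
        for w in range(words):
            if random.random() < p:
                flips[w] += 1
                if flips[w] == 2:      # second upset defeats the correction
                    failures += 1
                    flips[w] = 0       # word is rewritten after the failure
        if scrub_every and t % scrub_every == 0:
            flips = [0] * words        # scrubbing corrects all single upsets
    return failures

random.seed(0)
print(accumulated_failures())                 # upsets accumulate freely
print(accumulated_failures(scrub_every=100))  # periodic refresh removes them in time
```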
The susceptibility to transient errors is significantly higher for memory elements than for combinational elements [Dodd03]. Because of the ongoing reduction in logic depth, it is expected that the masking of most combinational faults will diminish and that the soft error rate (SER) even of combinational elements will grow by orders of magnitude, approaching the SER of unprotected memory elements [Shiv02]. These effects result in erroneous states that must be detected by appropriate fault tolerance and redundancy mechanisms. Such techniques are complemented by hardening both combinational elements and latches against transient faults [Koma04].
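A classic masking mechanism of this kind is sketched below: tripling a storage element and voting masks any single upset. This is only one illustrative option; hardened latch designs such as [Koma04] achieve comparable robustness at the circuit level with less overhead.

```python
# Triple modular redundancy (TMR) on a single stored bit: a majority vote
# over three replicas masks one upset per storage element.
class TMRLatch:
    def __init__(self, value=0):
        self.replicas = [value, value, value]

    def upset(self, which):
        self.replicas[which] ^= 1     # single-event upset in one replica

    def read(self):
        return 1 if sum(self.replicas) >= 2 else 0

latch = TMRLatch(1)
latch.upset(0)                        # one replica flips...
assert latch.read() == 1              # ...but the voted output is unchanged
```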
At the same time, the continuous growth in the number of memory elements and the overhead required to improve reliability make the manufacturing test more difficult, and test is a dominant cost factor even today. For free logic, scan-path-based test is the most widespread technique: the test data is shifted serially into the circuit and read out again. To reduce test time, multiple scan paths are operated in parallel, the test patterns are generated directly on the chip in the form of a built-in self-test, or the test data is provided as a compressed data stream that is decoded by on-chip circuitry. Similarly, the test responses are compacted before being sent to the tester. Figure 2 shows the basic principle of this embedded test technique.
Figure 2: Embedded test for test data compression and decompression
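For readers unfamiliar with scan, the following toy model (an illustration, not the project's design) shows the principle: stimuli are shifted serially into the flip-flops, one functional clock captures the response of the combinational logic, and the response is shifted out while the next pattern shifts in.

```python
# Minimal scan-path model: serial shift for test access, one capture clock
# to sample the combinational logic, serial shift to unload the response.
class ScanPath:
    def __init__(self, length):
        self.ffs = [0] * length

    def shift(self, bits_in):
        """Shift a full pattern in; the previous contents fall out serially."""
        out = []
        for b in bits_in:
            out.append(self.ffs[-1])
            self.ffs = [b] + self.ffs[:-1]
        return out

    def capture(self, logic):
        """One system clock: latch the combinational response."""
        self.ffs = logic(self.ffs)

# Toy circuit under test: each flip-flop captures the AND of its neighbours.
def logic(state):
    n = len(state)
    return [state[i - 1] & state[(i + 1) % n] for i in range(n)]

chain = ScanPath(8)
chain.shift([1, 0, 1, 1, 0, 1, 1, 1])    # scan in a test pattern
chain.capture(logic)                      # apply it to the logic
print(chain.shift([0] * 8))               # scan out the response
```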
These compression methods counter a long-standing problem of manufacturing test: the external bandwidth from a chip to the test equipment grows much more slowly than the volume of internal test data required to achieve complete fault coverage [Mitr05, Rajs05]. The growing percentage of flip-flops in free random logic and the significant redundancy employed to increase reliability aggravate this problem considerably and would lead to economically infeasible test lengths and test times if left unaddressed.
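A back-of-the-envelope calculation shows the dimension of the problem; every figure below is an assumed, purely illustrative number.

```python
# Illustrative bandwidth-gap arithmetic; all quantities are assumptions.
flip_flops  = 2_000_000     # scan cells in free random logic
patterns    = 10_000        # patterns needed for complete fault coverage
channels    = 8             # scan channels to the external tester
shift_clock = 50e6          # scan shift frequency in Hz

bits = 2 * flip_flops * patterns             # stimuli in plus responses out
seconds = bits / (channels * shift_clock)
print(f"{bits / 8 / 2**30:.1f} GiB of test data, {seconds:.0f} s per chip")
print(f"with 100x on-chip (de)compression: {seconds / 100:.1f} s per chip")
```

With these assumed numbers a single chip would occupy the tester for 100 seconds; a 100x on-chip compression and compaction ratio brings this back into an economical range.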
The goal of this project is the development of a unified design methodology for memory elements in random logic that combines solutions for reliability, fault tolerance, and online and offline test. To achieve this, each scan path (as in Figure 3) is partitioned into segments of a certain length, and each segment is extended by redundancy that allows permanent faults to be tolerated or repaired while remaining tolerant of transient faults.
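One conceivable realization of such segment-level redundancy is sketched below. The concrete scheme is the subject of this project, so the spare-cell mechanism shown here is an assumption for illustration: each segment carries a spare flip-flop, and a permanently defective cell is bypassed by reconfiguring the segment.

```python
# Hypothetical segment repair: fixed-length scan segments, one spare cell
# per segment; a defective cell is routed around so the segment length and
# the order of the remaining cells stay intact.
SEGMENT_LEN = 8

def partition(scan_cells):
    return [scan_cells[i:i + SEGMENT_LEN]
            for i in range(0, len(scan_cells), SEGMENT_LEN)]

def repair(segment, defective, spare="spare_ff"):
    """Bypass the defective cell; the spare keeps the segment length constant."""
    return segment[:defective] + segment[defective + 1:] + [spare]

segments = partition([f"ff{i}" for i in range(32)])
segments[1] = repair(segments[1], 3)      # ff11 is permanently defective
print(segments[1])
```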
A scan path can be seen as a one-dimensional, one-bit-wide memory, which makes the corresponding memory test techniques applicable. For regular memory arrays, periodic test, online test and transparent test have been rigorously analyzed, and some of these test methods can be adapted to scan paths. However, repeated read-out and write-back would significantly impact the availability of the flip-flops for regular system operation and is therefore not feasible. It is thus promising to adapt the transparent, periodic self-test technique already established for memory arrays [Nico96]. A simple logic calculates a residual characteristic (Figure 3), which allows the contents of the scan path to be kept consistent and enables periodic consistency checking.
Figure 3: Online- and offline test for scan paths
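The following sketch illustrates what such a residual characteristic could look like; the project's concrete compaction logic may differ, and an LFSR-based signature in the spirit of the transparent BIST of [Nico96] is assumed here. Because the check only reads the flip-flop contents, system data remains untouched: the characteristic is computed once and then periodically recomputed and compared.

```python
# Serial LFSR signature ("residual characteristic") over the scan-path
# contents; any single-bit upset changes the signature, so a periodic
# recomputation detects it without disturbing the stored data.
def characteristic(bits, poly=0b0011, width=4):
    state = 0
    for b in bits:
        msb = (state >> (width - 1)) & 1
        state = ((state << 1) & ((1 << width) - 1)) | b
        if msb:
            state ^= poly             # feedback taps of x^4 + x + 1
    return state

scan_contents = [1, 0, 1, 1, 0, 0, 1, 0]
reference = characteristic(scan_contents)    # stored once

scan_contents[5] ^= 1                        # upset while the clock is gated
assert characteristic(scan_contents) != reference
```

Because LFSR compaction is linear and the feedback polynomial has a non-zero constant term, a single flipped input bit always yields a non-zero error syndrome, so the periodic check is guaranteed to flag any single upset.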
The additional hardware integrated for this online test scheme will also be used for test response compaction. Only the calculated characteristic has to be evaluated, from which an incorrect circuit response can be inferred. A complete scan-out of the (redundant) circuit response is not required for this solution, and test time is reduced significantly without any additional hardware overhead. For the test patterns (stimuli), the currently known test data compression techniques can still be used.
Bibliography:
[Dodd03] P. E. Dodd and L. W. Massengill, "Basic mechanisms and modeling of single-event upset in digital microelectronics", IEEE Transactions on Nuclear Science, 50 (3), pp. 583-602, June 2003
[Hell02] S. Hellebrand, H.-J. Wunderlich, A. A. Ivaniuk, Y. V. Klimets, and V. N. Yarmolik, "Efficient online and offline testing of embedded DRAMs", IEEE Transactions on Computers, 51 (7), pp. 801-809, 2002
[SIA] Semiconductor Industry Association, "International technology roadmap for semiconductors", Technical Report, 2003, available at: http://public.itrs.net
[Kupp04] R. Kuppuswamy, P. DesRosier, D. Feltham, R. Sheikh, and P. Thadikaran, "Full hold-scan systems in microprocessors: Cost/benefit analysis", Intel Technology Journal, 8 (1), pp. 63-72, Feb. 2004
[Mitr05] S. Mitra, S. S. Lumetta, M. Mitzenmacher, and N. Patil, "X-Tolerant Test Response Compaction", IEEE Design & Test of Computers, 22 (6), pp. 566-574, 2005
[Rajs05] J. Rajski, J. Tyszer, C. Wang, and S. M. Reddy, "Finite memory test response compactors for embedded test applications", IEEE Transactions on CAD of Integrated Circuits and Systems, 24 (4), pp. 622-634, 2005
[Nico96] M. Nicolaidis, "Theory of Transparent BIST for RAMs", IEEE Transactions on Computers, 45 (10), pp. 1141-1156, 1996
[Koma04] Y. Komatsu, Y. Arima, T. Fujimoto, T. Yamashita, and K. Ishibashi, "A soft-error hardened latch scheme for SoC in a 90 nm technology and beyond", Proceedings IEEE Custom Integrated Circuits Conference (CICC'04), Orlando, FL, USA, pp. 329-332, Sep. 2004
[Shiv02] P. Shivakumar, M. Kistler, S. W. Keckler, D. Burger, and L. Alvisi, "Modeling the effect of technology trends on the soft error rate of combinational logic", Proceedings International Conference on Dependable Systems and Networks (DSN'02), Bethesda, MD, USA, pp. 389-398, June 2002
Publications
Journals and Conference Proceedings
30. SAT-Based ATPG beyond Stuck-at Fault Testing
Hellebrand, Sybille; Wunderlich, Hans-Joachim
it - Information Technology, Vol. 56(4), 21 July 2014, pp. 165-172
Keywords: ACM CCS→Hardware→Hardware test, SAT-based ATPG, Fault Tolerance, Self-Checking Circuits, Synthesis
Abstract: To cope with the problems of technology scaling, a robust design has become desirable. Self-checking circuits combined with rollback or repair strategies can provide a low cost solution for many applications. However, standard synthesis procedures may violate design constraints or lead to sub-optimal designs. The SAT-based strategies for the verification and synthesis of self-checking circuits presented in this paper can provide efficient solutions.
BibTeX:
@article{HelleW2014, author = {Hellebrand, Sybille and Wunderlich, Hans-Joachim}, title = {{SAT-Based ATPG beyond Stuck-at Fault Testing}}, journal = {it - Information Technology}, year = {2014}, volume = {56}, number = {4}, pages = {165--172}, keywords = {ACM CCS→Hardware→Hardware test, SAT-based ATPG, Fault Tolerance, Self-Checking Circuits, Synthesis}, abstract = {To cope with the problems of technology scaling, a robust design has become desirable. Self-checking circuits combined with rollback or repair strategies can provide a low cost solution for many applications. However, standard synthesis procedures may violate design constraints or lead to sub-optimal designs. The SAT-based strategies for the verification and synthesis of self-checking circuits presented in this paper can provide efficient solutions.}, doi = {http://dx.doi.org/10.1515/itit-2013-1043}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ITIT_HelleW2014.pdf} } |
29. Variation-Aware Deterministic ATPG
Sauer, Matthias; Polian, Ilia; Imhof, Michael E.; Mumtaz, Abdullah; Schneider, Eric; Czutro, Alexander; Wunderlich, Hans-Joachim; Becker, Bernd
Proceedings of the 19th IEEE European Test Symposium (ETS'14), Paderborn, Germany, 26-30 May 2014, pp. 87-92
Best paper award
Keywords: Variation-aware test, fault efficiency, ATPG
Abstract: In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.
BibTeX:
@inproceedings{SauerPIMSCWB2014, author = {Sauer, Matthias and Polian, Ilia and Imhof, Michael E. and Mumtaz, Abdullah and Schneider, Eric and Czutro, Alexander and Wunderlich, Hans-Joachim and Becker, Bernd}, title = {{Variation-Aware Deterministic ATPG}}, booktitle = {Proceedings of the 19th IEEE European Test Symposium (ETS'14)}, year = {2014}, pages = {87--92}, keywords = {Variation-aware test, fault efficiency, ATPG}, abstract = {In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6847806}, doi = {http://dx.doi.org/10.1109/ETS.2014.6847806}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ETS_SauerPIMSCWB2014.pdf} }
28. Accurate QBF-based Test Pattern Generation in Presence of Unknown Values
Hillebrecht, Stefan; Kochte, Michael A.; Erb, Dominik; Wunderlich, Hans-Joachim; Becker, Bernd
Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13), Grenoble, France, 18-22 March 2013, pp. 436-441
Keywords: Unknown values, test generation, ATPG, QBF
Abstract: Unknown (X) values may emerge during the design process as well as during system operation and test application. Sources of X-values are for example black boxes, clock-domain boundaries, analog-to-digital converters, or uncontrolled or uninitialized sequential elements. To compute a detecting pattern for a given stuck-at fault, well defined logic values are required both for fault activation as well as for fault effect propagation to observing outputs. In presence of X-values, classical test generation algorithms, based on topological algorithms or formal Boolean satisfiability (SAT) or BDD-based reasoning, may fail to generate testing patterns or to prove faults untestable. This work proposes the first efficient stuck-at fault ATPG algorithm able to prove testability or untestability of faults in presence of X-values. It overcomes the principal inaccuracy and pessimism of classical algorithms when X-values are considered. This accuracy is achieved by mapping the test generation problem to an instance of quantified Boolean formula (QBF) satisfiability. The resulting fault coverage improvement is shown by experimental results on ISCAS benchmark and larger industrial circuits.
BibTeX:
@inproceedings{HilleKEWB2013, author = {Hillebrecht, Stefan and Kochte, Michael A. and Erb, Dominik and Wunderlich, Hans-Joachim and Becker, Bernd}, title = {{Accurate QBF-based Test Pattern Generation in Presence of Unknown Values}}, booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13)}, publisher = {IEEE Computer Society}, year = {2013}, pages = {436--441}, keywords = {Unknown values, test generation, ATPG, QBF}, abstract = {Unknown (X) values may emerge during the design process as well as during system operation and test application. Sources of X-values are for example black boxes, clock-domain boundaries, analog-to-digital converters, or uncontrolled or uninitialized sequential elements. To compute a detecting pattern for a given stuck-at fault, well defined logic values are required both for fault activation as well as for fault effect propagation to observing outputs. In presence of X-values, classical test generation algorithms, based on topological algorithms or formal Boolean satisfiability (SAT) or BDD-based reasoning, may fail to generate testing patterns or to prove faults untestable. This work proposes the first efficient stuck-at fault ATPG algorithm able to prove testability or untestability of faults in presence of X-values. It overcomes the principal inaccuracy and pessimism of classical algorithms when X-values are considered. This accuracy is achieved by mapping the test generation problem to an instance of quantified Boolean formula (QBF) satisfiability. The resulting fault coverage improvement is shown by experimental results on ISCAS benchmark and larger industrial circuits.}, doi = {http://dx.doi.org/10.7873/DATE.2013.098}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/DATE_HilleKEWB2013.pdf} }
27. Efficient Variation-Aware Statistical Dynamic Timing Analysis for Delay Test Applications
Wagner, Marcus; Wunderlich, Hans-Joachim
Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13), Grenoble, France, 18-22 March 2013, pp. 276-281
Abstract: Increasing parameter variations, caused by variations in process, temperature, power supply, and wear-out, have emerged as one of the most important challenges in semiconductor manufacturing and test. As a consequence for gate delay testing, a single test vector pair is no longer sufficient to provide the required low test escape probabilities for a single delay fault. Recently proposed statistical test generation methods are therefore guided by a metric, which defines the probability of detecting a delay fault with a given test set. However, since run time and accuracy are dominated by the large number of required metric evaluations, more efficient approximation methods are mandatory for any practical application. In this work, a new statistical dynamic timing analysis algorithm is introduced to tackle this problem. The associated approximation error is very small and predominantly caused by the impact of delay variations on path sensitization and hazards. The experimental results show a large speedup compared to classical Monte Carlo simulations.
BibTeX:
@inproceedings{WagneW2013, author = {Wagner, Marcus and Wunderlich, Hans-Joachim}, title = {{Efficient Variation-Aware Statistical Dynamic Timing Analysis for Delay Test Applications }}, booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13)}, year = {2013}, pages = {276--281}, abstract = {Increasing parameter variations, caused by variations in process, temperature, power supply, and wear-out, have emerged as one of the most important challenges in semiconductor manufacturing and test. As a consequence for gate delay testing, a single test vector pair is no longer sufficient to provide the required low test escape probabilities for a single delay fault. Recently proposed statistical test generation methods are therefore guided by a metric, which defines the probability of detecting a delay fault with a given test set. However, since run time and accuracy are dominated by the large number of required metric evaluations, more efficient approximation methods are mandatory for any practical application. In this work, a new statistical dynamic timing analysis algorithm is introduced to tackle this problem. The associated approximation error is very small and predominantly caused by the impact of delay variations on path sensitization and hazards. The experimental results show a large speedup compared to classical Monte Carlo simulations.}, doi = {http://dx.doi.org/10.7873/DATE.2013.069}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/DATE_WagneW2013.pdf} } |
26. Accurate X-Propagation for Test Applications by SAT-Based Reasoning
Kochte, Michael A.; Elm, Melanie; Wunderlich, Hans-Joachim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), Vol. 31(12), December 2012, pp. 1908-1919
Keywords: Unknown values; stuck-at fault coverage; accurate fault simulation; simulation pessimism
Abstract: Unknown or X-values during test application may originate from uncontrolled sequential cells or macros, from clock or A/D boundaries or from tri-state logic. The exact identification of X-value propagation paths in logic circuits is crucial in logic simulation and fault simulation. In the first case, it enables the proper assessment of expected responses and the effective and efficient handling of X-values during test response compaction. In the second case, it is important for a proper assessment of fault coverage of a given test set and consequently influences the efficiency of test pattern generation. The commonly employed n-valued logic simulation evaluates the propagation of X-values only pessimistically, i.e. the X-propagation paths found by n-valued logic simulation are a superset of the actual propagation paths. This paper presents an efficient method to overcome this pessimism and to determine accurately the set of signals which carry an X-value for an input pattern. As examples, it investigates the influence of this pessimism on the two applications X-masking and stuck-at fault coverage assessment. The experimental results on benchmark and industrial circuits assess the pessimism of classic algorithms and show that these algorithms significantly overestimate the signals with X-values. The experiments show that overmasking of test data during test compression can be reduced by an accurate analysis. In stuck-at fault simulation, the coverage of the test set is increased by the proposed algorithm without incurring any overhead.
BibTeX:
@article{KochtEW2012, author = {Kochte, Michael A. and Elm, Melanie and Wunderlich, Hans-Joachim}, title = {{Accurate X-Propagation for Test Applications by SAT-Based Reasoning}}, journal = {IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)}, publisher = {IEEE Computer Society}, year = {2012}, volume = {31}, number = {12}, pages = {1908--1919}, keywords = {Unknown values; stuck-at fault coverage; accurate fault simulation; simulation pessimism}, abstract = {Unknown or X-values during test application may originate from uncontrolled sequential cells or macros, from clock or A/D boundaries or from tri-state logic. The exact identification of X-value propagation paths in logic circuits is crucial in logic simulation and fault simulation. In the first case, it enables the proper assessment of expected responses and the effective and efficient handling of X-values during test response compaction. In the second case, it is important for a proper assessment of fault coverage of a given test set and consequently influences the efficiency of test pattern generation. The commonly employed n-valued logic simulation evaluates the propagation of X-values only pessimistically, i.e. the X-propagation paths found by n-valued logic simulation are a superset of the actual propagation paths. This paper presents an efficient method to overcome this pessimism and to determine accurately the set of signals which carry an X-value for an input pattern. As examples, it investigates the influence of this pessimism on the two applications X-masking and stuck-at fault coverage assessment. The experimental results on benchmark and industrial circuits assess the pessimism of classic algorithms and show that these algorithms significantly overestimate the signals with X-values. The experiments show that overmasking of test data during test compression can be reduced by an accurate analysis. In stuck-at fault simulation, the coverage of the test set is increased by the proposed algorithm without incurring any overhead.}, doi = {http://dx.doi.org/10.1109/TCAD.2012.2210422}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/TCAD_KochtEW2012.pdf} }
25. Variation-Aware Fault Grading
Czutro, A.; Imhof, Michael E.; Jiang, J.; Mumtaz, Abdullah; Sauer, M.; Becker, Bernd; Polian, Ilia; Wunderlich, Hans-Joachim
Proceedings of the 21st IEEE Asian Test Symposium (ATS'12), Niigata, Japan, 19-22 November 2012, pp. 344-349
Keywords: process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU
Abstract: An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.
BibTeX:
@inproceedings{CzutrIJMSBPW2012, author = {Czutro, A. and Imhof, Michael E. and Jiang, J. and Mumtaz, Abdullah and Sauer, M. and Becker, Bernd and Polian, Ilia and Wunderlich, Hans-Joachim}, title = {{Variation-Aware Fault Grading}}, booktitle = {Proceedings of the 21st IEEE Asian Test Symposium (ATS'12)}, publisher = {IEEE Computer Society}, year = {2012}, pages = {344--349}, keywords = {process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU}, abstract = {An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.}, doi = {http://dx.doi.org/10.1109/ATS.2012.14}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ATS_CzutrIJMSBPW2012.pdf} } |
24. Built-in Self-Diagnosis Exploiting Strong Diagnostic Windows in Mixed-Mode Test
Cook, Alejandro; Hellebrand, Sybille; Wunderlich, Hans-Joachim
Proceedings of the 17th IEEE European Test Symposium (ETS'12), Annecy, France, 28 May-1 June 2012, pp. 146-151
Keywords: Built-in Diagnosis; Design for Diagnosis
Abstract: Efficient diagnosis procedures are crucial both for volume and for in-field diagnosis. In either case the underlying test strategy should provide a high coverage of realistic fault mechanisms and support a low-cost implementation. Built-in self-diagnosis (BISD) is a promising solution, if the diagnosis procedure is fully in line with the test flow. However, most known BISD schemes require multiple test runs or modifications of the standard scan-based test infrastructure. Some recent schemes circumvent these problems, but they focus on deterministic patterns to limit the storage requirements for diagnostic data. Thus, they cannot exploit the benefits of a mixed-mode test such as high coverage of non-target faults and reduced test data storage. This paper proposes a BISD scheme using mixed-mode patterns and partitioning the test sequence into “weak” and “strong” diagnostic windows, which are treated differently during diagnosis. As the experimental results show, this improves the coverage of non-target faults and enhances the diagnostic resolution compared to state-of-the-art approaches. At the same time the overall storage overhead for input and response data is considerably reduced.
BibTeX:
@inproceedings{CookHW2012, author = {Cook, Alejandro and Hellebrand, Sybille and Wunderlich, Hans-Joachim}, title = {{Built-in Self-Diagnosis Exploiting Strong Diagnostic Windows in Mixed-Mode Test}}, booktitle = {Proceedings of the 17th IEEE European Test Symposium (ETS'12)}, publisher = {IEEE Computer Society}, year = {2012}, pages = {146--151}, keywords = {Built-in Diagnosis; Design for Diagnosis}, abstract = {Efficient diagnosis procedures are crucial both for volume and for in-field diagnosis. In either case the underlying test strategy should provide a high coverage of realistic fault mechanisms and support a low-cost implementation. Built-in self-diagnosis (BISD) is a promising solution, if the diagnosis procedure is fully in line with the test flow. However, most known BISD schemes require multiple test runs or modifications of the standard scan-based test infrastructure. Some recent schemes circumvent these problems, but they focus on deterministic patterns to limit the storage requirements for diagnostic data. Thus, they cannot exploit the benefits of a mixed-mode test such as high coverage of non-target faults and reduced test data storage. This paper proposes a BISD scheme using mixed-mode patterns and partitioning the test sequence into “weak” and “strong” diagnostic windows, which are treated differently during diagnosis. As the experimental results show, this improves the coverage of non-target faults and enhances the diagnostic resolution compared to state-of-the-art approaches. At the same time the overall storage overhead for input and response data is considerably reduced.}, doi = {http://dx.doi.org/10.1109/ETS.2012.6233025}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ETS_CookHW2012.pdf} } |
23. Exact Stuck-at Fault Classification in Presence of Unknowns
Hillebrecht, Stefan; Kochte, Michael A.; Wunderlich, Hans-Joachim; Becker, Bernd
Proceedings of the 17th IEEE European Test Symposium (ETS'12), Annecy, France, 28 May-1 June 2012, pp. 98-103
Keywords: Unknown values; simulation pessimism; exact fault simulation; SAT
Abstract: Fault simulation is an essential tool in electronic design automation. The accuracy of the computation of fault coverage in classic n-valued simulation algorithms is compromised by unknown (X) values. This results in a pessimistic underestimation of the coverage, and overestimation of unknown (X) values at the primary and pseudo-primary outputs. This work proposes the first stuck-at fault simulation algorithm free of any simulation pessimism in presence of unknowns. The SAT-based algorithm exactly classifies any fault and distinguishes between definite and possible detects. The pessimism w. r. t. unknowns present in classic algorithms is discussed in the experimental results on ISCAS benchmark and industrial circuits. The applicability of our algorithm to large industrial circuits is demonstrated.
BibTeX:
@inproceedings{HilleKWB2012, author = {Hillebrecht, Stefan and Kochte, Michael A. and Wunderlich, Hans-Joachim and Becker, Bernd}, title = {{Exact Stuck-at Fault Classification in Presence of Unknowns}}, booktitle = {Proceedings of the 17th IEEE European Test Symposium (ETS'12)}, publisher = {IEEE Computer Society}, year = {2012}, pages = {98--103}, keywords = {Unknown values; simulation pessimism; exact fault simulation; SAT}, abstract = {Fault simulation is an essential tool in electronic design automation. The accuracy of the computation of fault coverage in classic n-valued simulation algorithms is compromised by unknown (X) values. This results in a pessimistic underestimation of the coverage, and overestimation of unknown (X) values at the primary and pseudo-primary outputs. This work proposes the first stuck-at fault simulation algorithm free of any simulation pessimism in presence of unknowns. The SAT-based algorithm exactly classifies any fault and distinguishes between definite and possible detects. The pessimism w. r. t. unknowns present in classic algorithms is discussed in the experimental results on ISCAS benchmark and industrial circuits. The applicability of our algorithm to large industrial circuits is demonstrated.}, doi = {http://dx.doi.org/10.1109/ETS.2012.6233017}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ETS_HilleKWB2012.pdf} } |
22. A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures
Tran, Duc Anh; Virazel, Arnaud; Bosio, Alberto; Dilillo, Luigi; Girard, Patrick; Todri, Aida; Imhof, Michael E.; Wunderlich, Hans-Joachim
Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12), Hyatt Maui, Hawaii, USA, 23-25 April 2012, pp. 50-55
Keywords: Robustness; Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption
Abstract: Although CMOS technology scaling offers many advantages, it suffers from robustness problem caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard error. However, it is not effective for soft and timing errors detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.
BibTeX:
@inproceedings{TranVBDGTIW2012, author = {Tran, Duc Anh and Virazel, Arnaud and Bosio, Alberto and Dilillo, Luigi and Girard, Patrick and Todri, Aida and Imhof, Michael E. and Wunderlich, Hans-Joachim}, title = {{A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures}}, booktitle = {Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12)}, publisher = {IEEE Computer Society}, year = {2012}, pages = {50--55}, keywords = {Robustness; Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption}, abstract = {Although CMOS technology scaling offers many advantages, it suffers from robustness problem caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard error. However, it is not effective for soft and timing errors detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.}, doi = {http://dx.doi.org/10.1109/VTS.2012.6231079}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/VTS_TranVBDGTIW2012.pdf} } |
21. Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test
Cook, Alejandro; Hellebrand, Sybille; Imhof, Michael E.; Mumtaz, Abdullah; Wunderlich, Hans-Joachim
Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12), Quito, Ecuador, 10-13 April 2012, pp. 1-4
Keywords: Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis
Abstract: Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.
BibTeX:
@inproceedings{CookHIMW2012, author = {Cook, Alejandro and Hellebrand, Sybille and Imhof, Michael E. and Mumtaz, Abdullah and Wunderlich, Hans-Joachim}, title = {{Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test}}, booktitle = {Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12)}, publisher = {IEEE Computer Society}, year = {2012}, pages = {1--4}, keywords = {Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis}, abstract = {Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.}, doi = {http://dx.doi.org/10.1109/LATW.2012.6261229}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/LATW_CookHIMW2012.pdf} } |
20. Diagnostic Test of Robust Circuits
Cook, Alejandro; Hellebrand, Sybille; Indlekofer, Thomas; Wunderlich, Hans-Joachim
Proceedings of the 20th IEEE Asian Test Symposium (ATS'11), New Delhi, India, 20-23 November 2011, pp. 285-290
Keywords: Robust Circuits; Built-in Self-Test; Built-in Self-Diagnosis; Time Redundancy
Abstract: Robust circuits are able to tolerate certain faults, but also pose additional challenges for test and diagnosis. To improve yield, the test must distinguish between critical faults and such faults, that could be compensated during system operation; in addition, efficient diagnosis procedures are needed to support yield ramp-up in the case of critical faults. Previous work on circuits with time redundancy has shown that “signature rollback” can distinguish critical permanent faults from uncritical transient faults. The test is partitioned into shorter sessions, and a rollback is triggered immediately after a faulty session. If the repeated session shows the correct result, then a transient fault is assumed. The reference values for the sessions are represented in a very compact format. Storing only a few bits characterizing the MISR state over time can provide the same quality as storing the complete signature. In this work the signature rollback scheme is extended to an integrated test and diagnosis procedure. It is shown that a single test run with highly compacted reference data is sufficient to reach a comparable diagnostic resolution to that of a diagnostic session without any data compaction.
BibTeX:
@inproceedings{CookHIW2011, author = {Cook, Alejandro and Hellebrand, Sybille and Indlekofer, Thomas and Wunderlich, Hans-Joachim}, title = {{Diagnostic Test of Robust Circuits}}, booktitle = {Proceedings of the 20th IEEE Asian Test Symposium (ATS'11)}, publisher = {IEEE Computer Society}, year = {2011}, pages = {285--290}, keywords = {Robust Circuits; Built-in Self-Test; Built-in Self-Diagnosis; Time Redundancy}, abstract = {Robust circuits are able to tolerate certain faults, but also pose additional challenges for test and diagnosis. To improve yield, the test must distinguish between critical faults and such faults, that could be compensated during system operation; in addition, efficient diagnosis procedures are needed to support yield ramp-up in the case of critical faults. Previous work on circuits with time redundancy has shown that “signature rollback” can distinguish critical permanent faults from uncritical transient faults. The test is partitioned into shorter sessions, and a rollback is triggered immediately after a faulty session. If the repeated session shows the correct result, then a transient fault is assumed. The reference values for the sessions are represented in a very compact format. Storing only a few bits characterizing the MISR state over time can provide the same quality as storing the complete signature. In this work the signature rollback scheme is extended to an integrated test and diagnosis procedure. It is shown that a single test run with highly compacted reference data is sufficient to reach a comparable diagnostic resolution to that of a diagnostic session without any data compaction.} }
19. Embedded Test for Highly Accurate Defect Localization
Mumtaz, Abdullah; Imhof, Michael E.; Holst, Stefan; Wunderlich, Hans-Joachim
Proceedings of the 20th IEEE Asian Test Symposium (ATS'11), New Delhi, India, 20-23 November 2011, pp. 213-218
Keywords: BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing. In mixed-mode embedded test, a large amount of pseudorandom (PR) patterns are applied prior to deterministic test pattern. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time. This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.
BibTeX:
@inproceedings{MumtaIHW2011, author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim}, title = {{Embedded Test for Highly Accurate Defect Localization}}, booktitle = {Proceedings of the 20th IEEE Asian Test Symposium (ATS'11)}, publisher = {IEEE Computer Society}, year = {2011}, pages = {213--218}, keywords = {BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug}, abstract = {Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing. In mixed-mode embedded test, a large amount of pseudorandom (PR) patterns are applied prior to deterministic test pattern. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time. This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.} }
18. Robuster Selbsttest mit Diagnose
Cook, Alejandro; Hellebrand, Sybille; Indlekofer, Thomas; Wunderlich, Hans-Joachim
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11), Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 48-53
Abstract: Robuste Schaltungen können bestimmte Fehler tolerieren, stellen aber auch besonders hohe Anforderungen an Test und Diagnose. Um Ausbeuteverluste zu vermeiden, muss der Test kritische Fehler von unkritischen Fehlern unterscheiden, die sich während des Systembetriebs nicht auswirken. Zur Verbesserung des Produktionsprozesses muss außerdem eine effiziente Diagnose für erkannte kritische Fehler unterstützt werden. Bisherige Arbeiten für Schaltungen mit Zeitredundanz haben gezeigt, dass ein Selbsttest mit Rücksetzpunkten kostengünstig kritische permanente Fehler von unkritischen transienten Fehlern unterscheiden kann. Hier wird der Selbsttest in N Sitzungen unterteilt, die bei einem Fehler sofort wiederholt werden. Tritt beim zweiten Durchlauf einer Sitzung kein Fehler mehr auf, geht man von einem transienten Fehler aus. Dabei genügt es, die Referenzantworten für die einzelnen Sitzungen in stark kompaktierter Form abzulegen. Statt einer vollständigen Signatur wird nur eine kurze Bitfolge gespeichert, welche die Signaturberechnung über mehrere Zeitpunkte hinweg charakterisiert. Die vorliegende Arbeit erweitert das Testen mit Rücksetzpunkten zu einem integrierten Test- und Diagnoseprozess. Es wird gezeigt, dass ein einziger Testdurchlauf mit stark kompaktierten Referenzwerten genügt, um eine vergleichbare diagnostische Auflösung zu erreichen wie bei einem Test ohne Antwortkompaktierung. Robust circuits can tolerate certain faults, but also place particularly high demands on test and diagnosis. To avoid yield loss, the test must distinguish critical faults from uncritical faults that have no effect during system operation. To improve the production process, efficient diagnosis of detected critical faults must also be supported. Previous work on circuits with time redundancy has shown that a self-test with rollback points can distinguish critical permanent faults from uncritical transient faults at low cost. Here the self-test is partitioned into N sessions, which are repeated immediately upon a fault. If no fault occurs in the second run of a session, a transient fault is assumed. It suffices to store the reference responses for the individual sessions in highly compacted form. Instead of a complete signature, only a short bit sequence is stored which characterizes the signature computation over several points in time. The present work extends testing with rollback points to an integrated test and diagnosis process. It is shown that a single test run with highly compacted reference values suffices to achieve a diagnostic resolution comparable to a test without response compaction.
BibTeX:
@inproceedings{CookHIW2011a, author = {Cook, Alejandro and Hellebrand, Sybille and Indlekofer, Thomas and Wunderlich, Hans-Joachim}, title = {{Robuster Selbsttest mit Diagnose}}, booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)}, publisher = {VDE VERLAG GMBH}, year = {2011}, volume = {231}, pages = {48--53}, abstract = {Robuste Schaltungen können bestimmte Fehler tolerieren, stellen aber auch besonders hohe Anforderungen an Test und Diagnose. Um Ausbeuteverluste zu vermeiden, muss der Test kritische Fehler von unkritischen Fehlern unterscheiden, die sich während des Systembetriebs nicht auswirken. Zur Verbesserung des Produktionsprozesses muss außerdem eine effiziente Diagnose für erkannte kritische Fehler unterstützt werden. Bisherige Arbeiten für Schaltungen mit Zeitredundanz haben gezeigt, dass ein Selbsttest mit Rücksetzpunkten kostengünstig kritische permanente Fehler von unkritischen transienten Fehlern unterscheiden kann. Hier wird der Selbsttest in N Sitzungen unterteilt, die bei einem Fehler sofort wiederholt werden. Tritt beim zweiten Durchlauf einer Sitzung kein Fehler mehr auf, geht man von einem transienten Fehler aus. Dabei genügt es, die Referenzantworten für die einzelnen Sitzungen in stark kompaktierter Form abzulegen. Statt einer vollständigen Signatur wird nur eine kurze Bitfolge gespeichert, welche die Signaturberechnung über mehrere Zeitpunkte hinweg charakterisiert. Die vorliegende Arbeit erweitert das Testen mit Rücksetzpunkten zu einem integrierten Test- und Diagnoseprozess. Es wird gezeigt, dass ein einziger Testdurchlauf mit stark kompaktierten Referenzwerten genügt, um eine vergleichbare diagnostische Auflösung zu erreichen wie bei einem Test ohne Antwortkompaktierung.}, url = {http://www.vde-verlag.de/proceedings-en/453357011.html}, file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ZUE_CookHIW2011.pdf} } |
17. Korrektur transienter Fehler in eingebetteten Speicherelementen
Imhof, Michael E.; Wunderlich, Hans-Joachim
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11), Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 76-83
Keywords: Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction
Abstract: In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand. In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.
BibTeX:
@inproceedings{ImhofW2011, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim}, title = {{Korrektur transienter Fehler in eingebetteten Speicherelementen}}, booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)}, publisher = {VDE VERLAG GMBH}, year = {2011}, volume = {231}, pages = {76--83}, keywords = {Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction}, abstract = {In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand. In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.} }
16. Eingebetteter Test zur hochgenauen Defekt-Lokalisierung
Mumtaz, Abdullah; Imhof, Michael E.; Holst, Stefan; Wunderlich, Hans-Joachim
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11), Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 43-47
Keywords: Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler. Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing. Partial Pseudo-Exhaustive Testing (P-PET) is a method to increase defect coverage compared to a random test or a deterministic test for the stuck-at fault model. If the phase of pseudo-random pattern generation that usually precedes embedded deterministic test is replaced by the generation of partial pseudo-exhaustive patterns, an optimal defect coverage can be achieved for the largest part of the circuit with comparable hardware effort and equal test time. This work combines P-PET for the first time with a fault-model-independent diagnosis algorithm and shows that arbitrary defects can on average be diagnosed much more precisely than with random patterns or a deterministic test for stuck-at faults.
BibTeX:
@inproceedings{MumtaIHW2011a, author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim}, title = {{Eingebetteter Test zur hochgenauen Defekt-Lokalisierung}}, booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)}, publisher = {VDE VERLAG GMBH}, year = {2011}, volume = {231}, pages = {43--47}, keywords = {Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug}, abstract = {Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.} }
15. Variation-Aware Fault Modeling
Hopsch, Fabian; Becker, Bernd; Hellebrand, Sybille; Polian, Ilia; Straube, Bernd; Vermeiren, Wolfgang; Wunderlich, Hans-Joachim
SCIENCE CHINA Information Sciences, Vol. 54(9), September 2011, pp. 1813-1826
Keywords: process variations; test methods; statistical test; histogram data base
Abstract: To achieve a high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variabilities, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at electrical level to extract delay distributions of cells in the presence of defects and for the defect-free case. This allows distinguishing the effects of process variations on the cell delay from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram data base (HDB). Thus, the computationally expensive defect analysis needs to be performed only once as a preprocessing step for library characterization, and statistical test algorithms do not require any low level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in 45 nm technology.
BibTeX:
@article{HopscBHPSVW2011, author = {Hopsch, Fabian and Becker, Bernd and Hellebrand, Sybille and Polian, Ilia and Straube, Bernd and Vermeiren, Wolfgang and Wunderlich, Hans-Joachim}, title = {{Variation-Aware Fault Modeling}}, journal = {SCIENCE CHINA Information Sciences}, publisher = {Science China Press, co-published with Springer-Verlag}, year = {2011}, volume = {54}, number = {9}, pages = {1813--1826}, keywords = {process variations; test methods; statistical test; histogram data base}, abstract = {To achieve a high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variabilities, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at electrical level to extract delay distributions of cells in the presence of defects and for the defect-free case. This allows distinguishing the effects of process variations on the cell delay from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram data base (HDB). Thus, the computationally expensive defect analysis needs to be performed only once as a preprocessing step for library characterization, and statistical test algorithms do not require any low level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in 45 nm technology.}, doi = {http://dx.doi.org/10.1007/s11432-011-4367-8}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/SCIS_HopscBHPSVW2011.pdf} }
14. Soft Error Correction in Embedded Storage Elements
Imhof, Michael E.; Wunderlich, Hans-Joachim
Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11), Athens, Greece, 13-15 July 2011, pp. 169-174
Keywords: Single Event Effect; Correction; Latch; Register
Abstract: In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.
BibTeX:
@inproceedings{ImhofW2011a, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim}, title = {{Soft Error Correction in Embedded Storage Elements}}, booktitle = {Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11)}, publisher = {IEEE Computer Society}, year = {2011}, pages = {169--174}, keywords = {Single Event Effect; Correction; Latch; Register}, abstract = {In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.}, doi = {http://dx.doi.org/10.1109/IOLTS.2011.5993832}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/IOLTS_ImhofW2011.pdf} }
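As a toy illustration of the space-redundancy principle, the Python sketch below detects and locates a single bit flip in a small register file using one parity bit per register plus a column-parity word, and then flips the located bit back in place of the correction step. This generic cross-parity organization only illustrates the principle; it is not claimed to be the paper's bit flipping latch or its replay mechanism.

    def parity(bits):
        return sum(bits) % 2

    def encode(registers):
        """Row parity per register plus a column-parity word (space redundancy)."""
        rows = [parity(r) for r in registers]
        cols = [parity(col) for col in zip(*registers)]
        return rows, cols

    def locate_seu(registers, rows, cols):
        """Return (register, bit) of a single upset in the data bits, else None."""
        bad_rows = [i for i, r in enumerate(registers) if parity(r) != rows[i]]
        bad_cols = [j for j, col in enumerate(zip(*registers)) if parity(col) != cols[j]]
        if bad_rows and bad_cols:
            return bad_rows[0], bad_cols[0]
        return None

    regs = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
    rows, cols = encode(regs)
    regs[1][2] ^= 1                       # inject a single event upset
    loc = locate_seu(regs, rows, cols)    # -> (1, 2)
    regs[loc[0]][loc[1]] ^= 1             # correction restores the state
    print("upset located and corrected at", loc)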
13. Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung; Detection of transient faults in circuits with reduced power dissipation. Imhof, Michael E.; Wunderlich, Hans-Joachim; Zoellin, Christian G. 2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08), Vol. 57, Ingolstadt, Germany, 29 September-1 October 2008, pp. 107-114.
Keywords: Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects
Abstract: Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden. For memories, error correcting codes are the method of choice to guarantee acceptable error rates. In many current designs, the number of storage elements in random logic reaches the number of SRAM cells of a few years ago. Clock gating is often employed to reduce the power dissipation of level-sensitive storage elements, so the elements have to retain their state over long periods of time. The necessity to protect storage elements is further amplified by miniaturization, which has increased the susceptibility of the storage elements. This article presents a method for the fault-tolerant organization of level-sensitive storage elements that can locate single errors and detect multiple errors while the clock is gated. With an active clock, single and multiple errors can be detected. The registers can be integrated efficiently into the design flow, similar to scan paths. The diagnostic information can easily be computed and used at module level.
BibTeX:
@inproceedings{ImhofWZ2008a, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.}, title = {{Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung; Detection of transient faults in circuits with reduced power dissipation}}, booktitle = {2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08)}, year = {2008}, volume = {57}, pages = {107--114}, keywords = {Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects} }
12. Integrating Scan Design and Soft Error Correction in Low-Power Applications. Imhof, Michael E.; Wunderlich, Hans-Joachim; Zoellin, Christian G. Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08), Rhodes, Greece, 7-9 July 2008, pp. 59-64.
Keywords: Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.}, title = {{Integrating Scan Design and Soft Error Correction in Low-Power Applications}}, booktitle = {Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08)}, publisher = {IEEE Computer Society}, year = {2008}, pages = {59--64}, keywords = {Robust design; fault tolerance; latch; low power; register; single event effects}, abstract = {Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.} }
11. Scan Chain Clustering for Test Power Reduction. Elm, Melanie; Wunderlich, Hans-Joachim; Imhof, Michael E.; Zoellin, Christian G.; Leenstra, Jens; Maeding, Nicolas. Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08), Anaheim, California, USA, 8-13 June 2008, pp. 828-833.
Keywords: Test; Design for Test; Low Power; Scan Design
Abstract: An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern. In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration. The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits. (See the sketch below.)
BibTeX:
@inproceedings{ElmWIZLM2008, author = {Elm, Melanie and Wunderlich, Hans-Joachim and Imhof, Michael E. and Zoellin, Christian G. and Leenstra, Jens and Maeding, Nicolas}, title = {{Scan Chain Clustering for Test Power Reduction}}, booktitle = {Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08)}, publisher = {ACM}, year = {2008}, pages = {828--833}, keywords = {Test; Design for Test; Low Power; Scan Design}, abstract = {An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern. In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration. The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.} }
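The optimization behind the method can be sketched as follows: given a care-bit matrix recording which pattern specifies which flip-flop, flip-flops are packed greedily into chains so that each pattern keeps as few chains active as possible. The matrix, the chain count and capacity, and the greedy cost function below are invented for illustration; the paper's actual clustering algorithm and cost model are more elaborate.

    # care[p][f] == 1 iff test pattern p specifies (cares about) flip-flop f
    care = [
        [1, 1, 0, 0, 0, 0],
        [1, 0, 0, 0, 1, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 0, 0, 1],
    ]
    n_chains, chain_cap = 3, 2

    patterns_of = [frozenset(p for p in range(len(care)) if care[p][f])
                   for f in range(len(care[0]))]
    chains = [[] for _ in range(n_chains)]
    active = [set() for _ in range(n_chains)]   # patterns that keep a chain active

    # Place flip-flops with many care patterns first; each goes to the chain
    # where it activates the fewest new patterns (ties: least filled chain).
    for f in sorted(range(len(patterns_of)), key=lambda x: -len(patterns_of[x])):
        free = [c for c in range(n_chains) if len(chains[c]) < chain_cap]
        best = min(free, key=lambda c: (len(patterns_of[f] - active[c]), len(chains[c])))
        chains[best].append(f)
        active[best] |= patterns_of[f]

    gated = sum(1 for p in range(len(care)) for c in range(n_chains) if p not in active[c])
    print("chain clusters:", chains)                        # -> [[0, 1], [2, 3], [4, 5]]
    print("deactivatable (pattern, chain) pairs:", gated)   # -> 6 of 12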
10. Selective Hardening in Early Design Steps. Zoellin, Christian G.; Wunderlich, Hans-Joachim; Polian, Ilia; Becker, Bernd. Proceedings of the 13th IEEE European Test Symposium (ETS'08), Lago Maggiore, Italy, 25-29 May 2008, pp. 185-190.
Keywords: Soft error mitigation; reliability
Abstract: Hardening a circuit against soft errors should be performed in early design steps before the circuit is laid out. A viable approach to achieve soft error rate (SER) reduction at a reasonable cost is to harden only parts of a circuit. When selecting which locations in the circuit to harden, priority should be given to critical spots for which an error is likely to cause a system malfunction. The criticality of the spots depends on parameters not all available in early design steps. We employ a selection strategy which takes only gate-level information into account and does not use any low-level electrical or timing information. We validate the quality of the solution using an accurate SER estimator based on the new UGC particle strike model. Although only partial information is utilized for hardening, the exact validation shows that the susceptibility of a circuit to soft errors is reduced significantly. The results of the hardening strategy presented are also superior to known purely topological strategies in terms of both hardware overhead and protection. (See the sketch below.)
BibTeX:
@inproceedings{ZoellWPB2008, author = {Zoellin, Christian G. and Wunderlich, Hans-Joachim and Polian, Ilia and Becker, Bernd}, title = {{Selective Hardening in Early Design Steps}}, booktitle = {Proceedings of the 13th IEEE European Test Symposium (ETS'08)}, publisher = {IEEE Computer Society}, year = {2008}, pages = {185--190}, keywords = {Soft error mitigation; reliability}, abstract = {Hardening a circuit against soft errors should be performed in early design steps before the circuit is laid out. A viable approach to achieve soft error rate (SER) reduction at a reasonable cost is to harden only parts of a circuit. When selecting which locations in the circuit to harden, priority should be given to critical spots for which an error is likely to cause a system malfunction. The criticality of the spots depends on parameters not all available in early design steps. We employ a selection strategy which takes only gate-level information into account and does not use any low-level electrical or timing information. We validate the quality of the solution using an accurate SER estimator based on the new UGC particle strike model. Although only partial information is utilized for hardening, the exact validation shows that the susceptibility of a circuit to soft errors is reduced significantly. The results of the hardening strategy presented are also superior to known purely topological strategies in terms of both hardware overhead and protection.} }
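A minimal sketch of the selection step, assuming a purely structural criticality proxy: the number of primary outputs an error at a gate could reach. The toy netlist and the proxy itself are illustrative stand-ins for the paper's more refined gate-level metric.

    from functools import lru_cache

    # toy netlist: gate -> successor gates; names starting with 'PO' are outputs
    succs = {
        "g1": ["g3"], "g2": ["g3", "g4"],
        "g3": ["PO1"], "g4": ["PO1", "PO2"],
    }

    @lru_cache(maxsize=None)
    def reachable_outputs(node):
        """Set of primary outputs an error at this node could reach."""
        if node.startswith("PO"):
            return frozenset([node])
        out = frozenset()
        for s in succs.get(node, []):
            out |= reachable_outputs(s)
        return out

    budget = 2                                    # how many gates we can afford to harden
    score = {g: len(reachable_outputs(g)) for g in succs}
    hardened = sorted(score, key=score.get, reverse=True)[:budget]
    print("criticality proxy:", score)            # g2 and g4 reach two outputs each
    print("harden:", hardened)                    # -> ['g2', 'g4']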
9. Signature Rollback – A Technique for Testing Robust Circuits. Amgalan, Uranmandakh; Hachmann, Christian; Hellebrand, Sybille; Wunderlich, Hans-Joachim. Proceedings of the 26th IEEE VLSI Test Symposium (VTS'08), San Diego, California, USA, 27 April-1 May 2008, pp. 125-130.
Keywords: Embedded Test; Robust Design; Rollback and Recovery; Test Quality and Reliability; Time Redundancy
Abstract: Dealing with static and dynamic parameter variations has become a major challenge for design and test. To avoid unnecessary yield loss and to ensure reliable system operation, a robust design has become mandatory. However, standard structural test procedures still address classical fault models and cannot deal with the non-deterministic behavior caused by parameter variations and other reasons. Chips may be rejected, even if the test reveals only non-critical failures that could be compensated during system operation. This paper introduces a scheme for embedded test, which can distinguish critical permanent and non-critical transient failures for circuits with time redundancy. To minimize both yield loss and the overall test time, the scheme relies on partitioning the test into shorter sessions. If a faulty signature is observed at the end of a session, a rollback is triggered, and this particular session is repeated. An analytical model for the expected overall test time provides guidelines to determine the optimal parameters of the scheme. (See the sketch below.)
BibTeX:
@inproceedings{AmgalHHW2008, author = {Amgalan, Uranmandakh and Hachmann, Christian and Hellebrand, Sybille and Wunderlich, Hans-Joachim}, title = {{Signature Rollback – A Technique for Testing Robust Circuits}}, booktitle = {Proceedings of the 26th IEEE VLSI Test Symposium (VTS'08)}, publisher = {IEEE Computer Society}, year = {2008}, pages = {125--130}, keywords = {Embedded Test; Robust Design; Rollback and Recovery; Test Quality and Reliability; Time Redundancy}, abstract = {Dealing with static and dynamic parameter variations has become a major challenge for design and test. To avoid unnecessary yield loss and to ensure reliable system operation a robust design has become mandatory. However, standard structural test procedures still address classical fault models and cannot deal with the non-deterministic behavior caused by parameter variations and other reasons. Chips may be rejected, even if the test reveals only non-critical failures that could be compensated during system operation. This paper introduces a scheme for embedded test, which can distinguish critical permanent and non-critical transient failures for circuits with time redundancy. To minimize both yield loss and the overall test time, the scheme relies on partitioning the test into shorter sessions. If a faulty signature is observed at the end of a session, a rollback is triggered, and this particular session is repeated. An analytical model for the expected overall test time provides guidelines to determine the optimal parameters of the scheme.}, url = {http://www.computer.org/csdl/proceedings/vts/2008/3123/00/3123a125-abs.html}, doi = {http://dx.doi.org/10.1109/VTS.2008.34}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/VTS_AmgalHHW2008.pdf} }
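The trade-off captured by the analytical model can be reproduced with a back-of-the-envelope computation, assuming transients hit each cycle independently with rate r, a session with a faulty signature is always repeated, and every signature check costs c cycles. The values of r, c and the total cycle count below are invented.

    def expected_test_time(total_cycles, sessions, r=1e-5, c=200):
        """E[test time] when every faulty session is rolled back and repeated."""
        per = total_cycles / sessions            # cycles per session
        p_fail = 1.0 - (1.0 - r) ** per          # P(a transient hits the session)
        attempts = 1.0 / (1.0 - p_fail)          # mean of the geometric retry count
        return sessions * (per + c) * attempts

    for k in (1, 10, 50, 200, 1000):
        t = expected_test_time(1_000_000, k)
        print(f"{k:5d} sessions -> expected test time {t:>16,.0f} cycles")
    # Very few sessions repeat huge chunks after each transient; very many
    # sessions pay the signature-check overhead too often. The optimum lies
    # in between, which is what the paper's analytical model pins down.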
8. Test Set Stripping Limiting the Maximum Number of Specified Bits. Kochte, Michael A.; Zoellin, Christian G.; Imhof, Michael E.; Wunderlich, Hans-Joachim. Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08), Hong Kong, China, 23-25 January 2008, pp. 581-586. Best paper award.
Keywords: test relaxation; test generation; tailored ATPG
Abstract: This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.
BibTeX:
@inproceedings{KochtZIW2008, author = {Kochte, Michael A. and Zoellin, Christian G. and Imhof, Michael E. and Wunderlich, Hans-Joachim}, title = {{Test Set Stripping Limiting the Maximum Number of Specified Bits}}, booktitle = {Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08)}, publisher = {IEEE Computer Society}, year = {2008}, pages = {581--586}, keywords = {test relaxation; test generation; tailored ATPG}, abstract = {This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.}, url = {http://www.computer.org/csdl/proceedings/delta/2008/3110/00/3110a581-abs.html}, doi = {http://dx.doi.org/10.1109/DELTA.2008.64}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DELTA_KochtZIW2008.pdf} }
7. Programmable Deterministic Built-in Self-test. Hakmi, Abdul-Wahid; Wunderlich, Hans-Joachim; Zoellin, Christian G.; Glowatz, Andreas; Hapke, Friedrich; Schloeffel, Juergen; Souef, Laurent. Proceedings of the International Test Conference (ITC'07), Santa Clara, California, USA, 21-25 October 2007, pp. 1-9.
Keywords: Deterministic BIST; test data compression
Abstract: In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR), if a limited number of conflicting equations is ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern. In contrast to known deterministic BIST schemes based on test set embedding, the embedding logic function is not hardwired. Instead, this information is stored in memory using a special compression and decompression method. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods. (See the sketch below.)
BibTeX:
@inproceedings{HakmiWZGHSS2007, author = {Hakmi, Abdul-Wahid and Wunderlich, Hans-Joachim and Zoellin, Christian G. and Glowatz, Andreas and Hapke, Friedrich and Schloeffel, Juergen and Souef, Laurent}, title = {{Programmable Deterministic Built-in Self-test}}, booktitle = {Proceedings of the International Test Conference (ITC'07)}, publisher = {IEEE Computer Society}, year = {2007}, pages = {1--9}, keywords = {Deterministic BIST, Test data compression}, abstract = {In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR), if a limited number of conflicting equations is ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern. In contrast to known deterministic BIST schemes based on test set embedding, the embedding logic function is not hardwired. Instead, this information is stored in memory using a special compression and decompression method. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.} }
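The encoding step can be sketched as follows: symbolic simulation expresses every LFSR output bit as a GF(2)-linear function of the seed bits, each care bit contributes one linear equation, and equations that turn out inconsistent during Gaussian elimination are dropped, with their positions recorded for separate embedding. LFSR width, taps and the care-bit values below are toy assumptions; with more care bits than seed bits, dropped positions start to appear in the ignored list.

    N, TAPS = 8, (7, 5, 4, 3)                 # toy Fibonacci LFSR (width and taps assumed)

    def symbolic_stream(length):
        """stream[i] = bit mask over seed bits whose XOR yields output bit i."""
        state = [1 << i for i in range(N)]    # state bit i starts as seed bit i
        out = []
        for _ in range(length):
            out.append(state[0])
            fb = 0
            for t in TAPS:
                fb ^= state[t]
            state = state[1:] + [fb]
        return out

    def parity(x):
        return bin(x).count("1") & 1

    def solve_seed(care):                     # care: {output position: 0 or 1}
        stream = symbolic_stream(max(care) + 1)
        pivots, ignored = {}, []              # pivot bit -> (mask, rhs)
        for pos, val in sorted(care.items()):
            a, b = stream[pos], val
            while a and (a & -a) in pivots:   # Gaussian elimination over GF(2)
                m, r = pivots[a & -a]
                a, b = a ^ m, b ^ r
            if a:
                pivots[a & -a] = (a, b)
            elif b:
                ignored.append(pos)           # conflicting equation: drop, embed separately
        seed = 0                              # back-substitution, free variables = 0
        for p in sorted(pivots, reverse=True):
            m, r = pivots[p]
            if r ^ parity((m ^ p) & seed):
                seed |= p
        return seed, ignored

    seed, ignored = solve_seed({0: 1, 3: 0, 5: 1, 9: 1, 12: 0})
    print(f"seed = {seed:08b}, care bits to embed separately: {ignored}")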
6. A Refined Electrical Model for Particle Strikes and its Impact on SEU Prediction. Hellebrand, Sybille; Zoellin, Christian G.; Wunderlich, Hans-Joachim; Ludwig, Stefan; Coym, Torsten; Straube, Bernd. Proceedings of the 22nd IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'07), Rome, Italy, 26-28 September 2007, pp. 50-58.
Abstract: Decreasing feature sizes have led to an increased vulnerability of random logic to soft errors. In combinational logic a particle strike may lead to a glitch at the output of a gate, also referred to as single event transient (SET), which in turn can propagate to a register and cause a single event upset (SEU) there. Circuit level modeling and analysis of SETs provides an attractive compromise between computationally expensive simulations at device level and less accurate techniques at higher levels. At the circuit level particle strikes crossing a pn-junction are traditionally modeled with the help of a transient current source. However, the common models assume a constant voltage across the pn-junction, which may lead to inaccurate predictions concerning the shape of expected glitches. To overcome this problem, a refined circuit level model for strikes through pn-junctions is investigated and validated in this paper. The refined model yields significantly different results than common models. This has a considerable impact on SEU prediction, which is confirmed by extensive simulations at gate level. In most cases, the refined, more realistic, model reveals an almost doubled risk of a system failure after an SET. (See the sketch below.)
BibTeX:
@inproceedings{HelleZWLCS2007, author = {Hellebrand, Sybille and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Ludwig, Stefan and Coym, Torsten and Straube, Bernd}, title = {{A Refined Electrical Model for Particle Strikes and its Impact on SEU Prediction}}, booktitle = {Proceedings of the 22nd IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'07)}, publisher = {IEEE Computer Society}, year = {2007}, pages = {50--58}, abstract = {Decreasing feature sizes have led to an increased vulnerability of random logic to soft errors. In combinational logic a particle strike may lead to a glitch at the output of a gate, also referred to as single event transient (SET), which in turn can propagate to a register and cause a single event upset (SEU) there. Circuit level modeling and analysis of SETs provides an attractive compromise between computationally expensive simulations at device level and less accurate techniques at higher levels. At the circuit level particle strikes crossing a pn-junction are traditionally modeled with the help of a transient current source. However, the common models assume a constant voltage across the pn-junction, which may lead to inaccurate predictions concerning the shape of expected glitches. To overcome this problem, a refined circuit level model for strikes through pn-junctions is investigated and validated in this paper. The refined model yields significantly different results than common models. This has a considerable impact on SEU prediction, which is confirmed by extensive simulations at gate level. In most cases, the refined, more realistic, model reveals an almost doubled risk of a system failure after an SET.} }
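For reference, the sketch below evaluates the traditional double-exponential current source that circuit-level analyses commonly attach to the struck pn-junction; charge and time constants are illustrative values. Note that this is the conventional, constant-junction-voltage baseline whose glitch-shape predictions the paper calls into question, not the refined model itself.

    import math

    def strike_current(t, q=0.15e-12, tau_a=200e-12, tau_b=50e-12):
        """Double-exponential strike current in amperes (t in seconds)."""
        return q / (tau_a - tau_b) * (math.exp(-t / tau_a) - math.exp(-t / tau_b))

    # Sanity check: the integral of I(t) over time equals the deposited charge q.
    dt = 1e-12
    collected = sum(strike_current(i * dt) * dt for i in range(5000))
    peak = max(strike_current(i * dt) for i in range(500))
    print(f"peak ~ {peak * 1e6:.0f} uA, collected charge ~ {collected * 1e15:.1f} fC (q = 150 fC)")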
5. Testing and Monitoring Nanoscale Systems - Challenges and Strategies for Advanced Quality Assurance (Invited Paper). Hellebrand, Sybille; Zoellin, Christian G.; Wunderlich, Hans-Joachim; Ludwig, Stefan; Coym, Torsten; Straube, Bernd. Proceedings of the 43rd International Conference on Microelectronics, Devices and Materials with the Workshop on Electronic Testing (MIDEM'07), Bled, Slovenia, 12-14 September 2007, pp. 3-10.
Abstract: The increased number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design becomes mandatory to ensure dependable systems and acceptable yields. Design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. The RealTest Project addresses these problems for nanoscale CMOS and targets unified design and test strategies to support both a robust design and a coordinated quality assurance after manufacturing and during the lifetime of a system. The paper first gives a short overview of the research activities within the project and then focuses on a first result concerning soft errors in combinational logic. It will be shown that common electrical models for particle strikes in random logic have underestimated the effects on the system behavior. The refined model developed within the RealTest Project predicts about twice as many single event upsets (SEUs) caused by particle strikes as traditional models.
BibTeX:
@inproceedings{HelleZWLCS2007a, author = {Hellebrand, Sybille and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Ludwig, Stefan and Coym, Torsten and Straube, Bernd}, title = {{Testing and Monitoring Nanoscale Systems - Challenges and Strategies for Advanced Quality Assurance (Invited Paper)}}, booktitle = {Proceedings of the 43rd International Conference on Microelectronics, Devices and Materials with the Workshop on Electronic Testing (MIDEM'07)}, publisher = {MIDEM}, year = {2007}, pages = {3--10}, abstract = {The increased number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design becomes mandatory to ensure dependable systems and acceptable yields. Design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. The RealTest Project addresses these problems for nanoscale CMOS and targets unified design and test strategies to support both a robust design and a coordinated quality assurance after manufacturing and during the lifetime of a system. The paper first gives a short overview of the research activities within the project and then focuses on a first result concerning soft errors in combinational logic. It will be shown that common electrical models for particle strikes in random logic have underestimated the effects on the system behavior. The refined model developed within the RealTest Project predicts about twice as many single event upsets (SEUs) caused by particle strikes as traditional models.}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/MIDEM_HelleZWLCS2007a.pdf} }
4. Scan Test Planning for Power Reduction. Imhof, Michael E.; Zoellin, Christian G.; Wunderlich, Hans-Joachim; Maeding, Nicolas; Leenstra, Jens. Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07), San Diego, California, USA, 4-8 June 2007, pp. 521-526.
Keywords: Test planning; power during test
Abstract: Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used to reduce the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that maintains fault coverage and test time while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity. (See the sketch below.)
BibTeX:
@inproceedings{ImhofZWML2007a, author = {Imhof, Michael E. and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Maeding, Nicolas and Leenstra, Jens}, title = {{Scan Test Planning for Power Reduction}}, booktitle = {Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07)}, publisher = {ACM}, year = {2007}, pages = {521--526}, keywords = {Test planning, power during test}, abstract = {Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.}, url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4261239}, doi = {http://dx.doi.org/10.1145/1278480.1278614}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/DAC_ImhofZWML2007a.pdf} }
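The covering view mentioned in the abstract can be sketched directly: for one pattern, a fault isolation table tells which faults are observable in which scan chain, and choosing the fewest chains that still observe all targeted faults is a set-cover instance, here approximated greedily on invented toy data.

    def plan_pattern(target_faults, observes):
        """Greedy set cover: fewest chains that still observe all target faults."""
        uncovered, keep_active = set(target_faults), []
        while uncovered:
            best = max(observes, key=lambda c: len(observes[c] & uncovered))
            if not observes[best] & uncovered:
                raise ValueError("some fault is unobservable in every chain")
            keep_active.append(best)
            uncovered -= observes[best]
        return keep_active

    # toy fault isolation data for one pattern: chain -> faults observable there
    observes = {
        "c0": {"f1", "f2"},
        "c1": {"f2", "f3", "f4"},
        "c2": {"f5"},
        "c3": set(),                      # carries no fault effect for this pattern
    }
    active = plan_pattern({"f1", "f2", "f3", "f4", "f5"}, observes)
    print("keep active:", sorted(active))                      # -> c0, c1, c2
    print("gate off:", sorted(set(observes) - set(active)))    # -> c3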
3. Test und Zuverlässigkeit nanoelektronischer Systeme. Becker, Bernd; Polian, Ilia; Hellebrand, Sybille; Straube, Bernd; Wunderlich, Hans-Joachim. 1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07), Vol. 52, Munich, Germany, 26-28 March 2007, pp. 139-140.
Abstract: Neben der zunehmenden Anfälligkeit gegenüber Fertigungsfehlern bereiten insbesondere vermehrte Parameterschwankungen, zeitabhängige Materialveränderungen und eine erhöhte Störanfälligkeit während des Betriebs massive Probleme bei der Qualitätssicherung für nanoelektronische Systeme. Für eine wirtschaftliche Produktion und einen zuverlässigen Systembetrieb wird einerseits ein robuster Entwurf unabdingbar, andererseits ist damit auch ein Paradigmenwechsel beim Test erforderlich. Anstatt lediglich defektbehaftete Systeme zu erkennen und auszusortieren, muss der Test bestimmen, ob ein System trotz einer gewissen Menge von Fehlern funktionsfähig ist, und die verbleibende Robustheit gegenüber Störungen im Betrieb charakterisieren. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen. Besides the increasing susceptibility to manufacturing defects, it is above all increased parameter variations, time-dependent material changes and an elevated vulnerability to disturbances during operation that cause massive problems for the quality assurance of nano-electronic systems. For economical production and reliable system operation, a robust design becomes indispensable on the one hand, while on the other hand this also requires a paradigm shift in testing. Instead of merely identifying and sorting out defective systems, the test must determine whether a system is functional despite a certain set of faults, and characterize the remaining robustness against disturbances during operation. Within the RealTest project, unified design and test strategies are developed that support both a robust design and a quality assurance tailored to it.
BibTeX:
@inproceedings{BeckeHSW2007, author = {Becker, Bernd and Polian, Ilia and Hellebrand, Sybille and Straube, Bernd and Wunderlich, Hans-Joachim}, title = {{Test und Zuverlässigkeit nanoelektronischer Systeme}}, booktitle = {1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)}, publisher = {VDE VERLAG GMBH}, year = {2007}, volume = {52}, pages = {139--140}, abstract = {Neben der zunehmenden Anfälligkeit gegenüber Fertigungsfehlern bereiten insbesondere vermehrte Parameterschwankungen, zeitabhängige Materialveränderungen und eine erhöhte Störanfälligkeit während des Betriebs massive Probleme bei der Qualitätssicherung für nanoelektronische Systeme. Für eine wirtschaftliche Produktion und einen zuverlässigen Systembetrieb wird einerseits ein robuster Entwurf unabdingbar, andererseits ist damit auch ein Paradigmenwechsel beim Test erforderlich. Anstatt lediglich defektbehaftete Systeme zu erkennen und auszusortieren, muss der Test bestimmen, ob ein System trotz einer gewissen Menge von Fehlern funktionsfähig ist, und die verbleibende Robustheit gegenüber Störungen im Betrieb charakterisieren. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen.}, url = {http://www.vde-verlag.de/proceedings-de/463023018.html}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ZuE_BeckeHSW2007.pdf} }
2. Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute. Imhof, Michael E.; Zöllin, Christian G.; Wunderlich, Hans-Joachim; Mäding, Nicolas; Leenstra, Jens. 1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07), Vol. 52, Munich, Germany, 26-28 March 2007, pp. 69-76.
Abstract: Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert. The strongly increased average and peak power dissipation during the test of integrated circuits can impair yield in production as well as reliability in later operation. We present a test planning approach for circuits with parallel scan chains that reduces the power dissipation during test. Test planning is mapped to a covering problem that can be solved efficiently even for large circuits with a heuristic method. The efficiency of the presented method is demonstrated both for the well-known benchmark circuits and for large industrial circuits.
BibTeX:
@inproceedings{ImhofZWML2007, author = {Imhof, Michael E. and Zöllin, Christian G. and Wunderlich, Hans-Joachim and Mäding, Nicolas and Leenstra, Jens}, title = {{Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute}}, booktitle = {1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)}, publisher = {VDE VERLAG GMBH}, year = {2007}, volume = {52}, pages = {69--76}, abstract = {Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.}, url = {http://www.vde-verlag.de/proceedings-de/463023008.html}, file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ZuE_ImhofZWML2007.pdf} }
1. DFG-Projekt RealTest - Test und Zuverlässigkeit nanoelektronischer Systeme; DFG-Project – Test and Reliability of Nano-Electronic Systems. Becker, Bernd; Polian, Ilia; Hellebrand, Sybille; Straube, Bernd; Wunderlich, Hans-Joachim. it - Information Technology, Vol. 48(5), October 2006, pp. 304-311.
Keywords: Nanoelektronik; Entwurf; Test; Zuverlässigkeit; Fehlertoleranz / Nano-electronics; Design; Test; Dependability; Fault Tolerance
Abstract: Entwurf, Verifikation und Test zuverlässiger nanoelektronischer Systeme erfordern grundlegend neue Methoden und Ansätze. Ein robuster Entwurf wird unabdingbar, um Fertigungsfehler, Parameterschwankungen, zeitabhängige Materialveränderungen und vorübergehende Störungen in gewissem Umfang zu tolerieren. Gleichzeitig verlieren gerade dadurch viele traditionelle Testverfahren ihre Aussagekraft. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen. The increasing number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design is mandatory to ensure dependable systems and acceptable yields. The quest for design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. Within the framework of the RealTest project, unified design and test strategies are developed to support a robust design and a coordinated quality assurance after production and during the lifetime of a system.
BibTeX:
@article{BeckePHSW2006, author = {Becker, Bernd and Polian, Ilia and Hellebrand, Sybille and Straube, Bernd and Wunderlich, Hans-Joachim}, title = {{DFG-Projekt RealTest - Test und Zuverlässigkeit nanoelektronischer Systeme; DFG-Project - Test and Reliability of Nano-Electronic Systems}}, journal = {it - Information Technology}, year = {2006}, month = {October}, volume = {48}, number = {5}, pages = {304--311}, keywords = {Nanoelektronik; Entwurf; Test; Zuverlässigkeit; Fehlertoleranz; Nano-electronics; Design; Test; Dependability; Fault Tolerance} }
Workshop Contributions
4. Integrating Scan Design and Soft Error Correction in Low-Power Applications. Imhof, Michael E.; Wunderlich, Hans-Joachim; Zöllin, Christian. 1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08), Verbania, Italy, 25-29 May 2008.
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods, while miniaturization has led to elevated susceptibility and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With a small addition, single and multiple errors are detected in the clocked mode, too. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian}, title = {{Integrating Scan Design and Soft Error Correction in Low-Power Applications}}, booktitle = {1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08)}, year = {2008}, keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects}, abstract = {In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods while miniaturization has led to elevated susceptibility and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With a small addition, single and multiple errors are detected in the clocked mode, too. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.} }
3. Ein verfeinertes elektrisches Modell für Teilchentreffer und dessen Auswirkung auf die Bewertung der Schaltungsempfindlichkeit. Coym, Torsten; Hellebrand, Sybille; Ludwig, Stefan; Straube, Bernd; Wunderlich, Hans-Joachim; Zöllin, Christian. 20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08), Vienna, Austria, 24-26 February 2008, pp. 153-157.
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
BibTeX:
@inproceedings{CoymHLSWZ2008, author = {Coym, Torsten and Hellebrand, Sybille and Ludwig, Stefan and Straube, Bernd and Wunderlich, Hans-Joachim and Zöllin, Christian}, title = {{Ein verfeinertes elektrisches Modell für Teilchentreffer und dessen Auswirkung auf die Bewertung der Schaltungsempfindlichkeit}}, booktitle = {20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08)}, year = {2008}, pages = {153--157}, keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1} }
2. Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information. Imhof, Michael E.; Wunderlich, Hans-Joachim; Zöllin, Christian; Leenstra, Jens; Maeding, Nicolas. 20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08), Vienna, Austria, 24-26 February 2008, pp. 137-141.
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
Abstract: Der während des Selbsttests von Schaltungen mit deaktivierbaren Prüfpfaden verwendete Testplan entscheidet über die Verlustleistung während des Tests. Bestehende Verfahren zur Erzeugung des Testplans verwenden überwiegend topologische Information, zum Beispiel den Ausgangskegel eines Fehlers. Aufgrund der implizit gegebenen Verknüpfung zwischen Testplan und Mustermenge ergeben sich weitreichende Synergieeffekte durch die Ausschöpfung mustermengenabhängiger Informationen. Die Verwendung von testmengenspezifischer Information im vorgestellten Algorithmus zeigt bei gleichbleibender Fehlererfassungsrate und Testdauer deutliche Einsparungen in der benötigten Verlustleistung. Das Verfahren wird an industriellen und Benchmark-Schaltungen mit bestehenden, überwiegend topologisch arbeitenden Verfahren verglichen. The test plan used during the self-test of circuits with deactivatable scan chains determines the power dissipation during test. Existing methods for test plan generation mostly use topological information, for example the output cone of a fault. Since test plan and pattern set are implicitly linked, exploiting pattern-set-dependent information yields far-reaching synergies. Using test-set-specific information, the presented algorithm shows clear savings in the required power at unchanged fault coverage and test time. The method is compared with existing, mostly topology-based methods on industrial and benchmark circuits.
BibTeX:
@inproceedings{ImhofWZLM2008, author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian and Leenstra, Jens and Maeding, Nicolas}, title = {{Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information}}, booktitle = {20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08)}, year = {2008}, pages = {137--141}, keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1}, abstract = {Der während des Selbsttests von Schaltungen mit deaktivierbaren Prüfpfaden verwendete Testplan entscheidet über die Verlustleistung während des Tests. Bestehende Verfahren zur Erzeugung des Testplans verwenden überwiegend topologische Information, zum Beispiel den Ausgangskegel eines Fehlers. Aufgrund der implizit gegebenen Verknüpfung zwischen Testplan und Mustermenge ergeben sich weitreichende Synergieeffekte durch die Ausschöpfung mustermengenabhängiger Informationen. Die Verwendung von testmengenspezifischer Information im vorgestellten Algorithmus zeigt bei gleichbleibender Fehlererfassungsrate und Testdauer deutliche Einsparungen in der benötigten Verlustleistung. Das Verfahren wird an industriellen und Benchmark-Schaltungen mit bestehenden, überwiegend topologisch arbeitenden Verfahren verglichen.} } |
1. Programmable Deterministic Built-in Self-test. Hakmi, Abdul-Wahid; Wunderlich, Hans-Joachim; Zöllin, Christian; Glowatz, Andreas; Schlöffel, Jürgen; Hapke, Friedrich. 19th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'07), Erlangen, Germany, 11-13 March 2007, pp. 61-65.
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1; Deterministic BIST; test data compression; reseeding
Abstract: In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR) if a limited number of conflicting equations are ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern, but in contrast to bit-flipping BIST, the test set is not embedded by a synthesized logic function. Instead, this information is stored in memory using a special compression architecture. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.
BibTeX:
@inproceedings{HakmiWZGSH2007, author = {Hakmi, Abdul-Wahid and Wunderlich, Hans-Joachim and Zöllin, Christian and Glowatz, Andreas and Schlöffel, Jürgen and Hapke, Friedrich}, title = {{Programmable Deterministic Built-in Self-test}}, booktitle = {19th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'07)}, year = {2007}, pages = {61--65}, keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1; Deterministic BIST; test data compression; reseeding}, abstract = {In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR) if a limited number of conflicting equations are ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern, but in contrast to bit-flipping BIST, the test set is not embedded by a synthesized logic function. Instead, this information is stored in memory using a special compression architecture. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.} }
Project Partners
- Project Home (University Paderborn)
- Fraunhofer IIS-EAS Dresden
- University Freiburg
- University Paderborn
- University Stuttgart
Contacts
- Prof. Dr. rer. nat. habil. Hans-Joachim Wunderlich
Tel.: +49-711-685-88-391
wu@informatik.uni-stuttgart.de
- Dipl.-Inf. Marcus Wagner
Tel.: +49-711-685-88-222
marcus.wagner@informatik.uni-stuttgart.de
- Anusha Kakarala
Tel.: +49-711-685-88-281
anusha.kakarala@informatik.uni-stuttgart.de
- Dipl.-Inf. Eric Schneider
Tel.: +49-711-685-88-370
schneiec@iti.uni-stuttgart.de