EE 651 - Digital System Testing and Design for Testability
List of Typical Project Topics
The choice of project topics is not limited to the following list.
You are free to pick from among these topics, to modify topics, or
to come up with your own ideas.
All term project topics must be approved by the instructor.
Microprocessor Testing:
Microprocessors are undoubtedly the most complex sequential circuits being
manufactured today; hence, microprocessors pose some of the greatest
challenges to the test engineer.
The project here is to survey some of the strategies that are being adopted
to test state-of-the-art microprocessors, such as the IBM/Motorola PowerPC
series and the Intel Pentium series.
It would be useful to also review the evolution of testing strategies that
were developed to test earlier generations of microprocessors.
Are there any clearly developed trends?
How will the microprocessors of the early 2000's be tested?
RAMs with Built-In Self-Test:
Random-access memories are a key building block in most digital systems.
There are two main reasons why it is becoming increasingly desirable to
use RAMs with BIST.
First, RAM chips are getting so large in storage capacity that it is
desirable that each of the chips in a computer system, or even in different
regions within one RAM chip, be tested in parallel.
Having the built-in self-test capability would greatly facilitate
parallel testing, thus offering the potential of greatly reduced test times.
The second reason is that memories are now often embedded on the
same chip along with other logic.
The RAM is not always directly accessible from the chip pins;
this forces the RAM to be tested indirectly, and hence awkwardly and
probably more expensively, via the surrounding logic circuitry.
Adding the BIST capability to an embedded RAM allows the RAM to be tested
independently from the surrounding circuitry, thus simplifying the test
generation problem and reducing the overall testing cost.
This project would be a survey of some of the main schemes that have been
developed for implementing BIST in dynamic and static RAMs.
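As one concrete flavour of what a RAM BIST engine implements, here is a minimal sketch of a March C- style test run against a simulated RAM. The RAM model, the fault injection, and all function names are illustrative assumptions, not any particular published scheme.

```python
def march_c_minus(read, write, n):
    """Apply the six March C- elements to addresses 0..n-1;
    return the addresses where a read mismatched."""
    errors = []

    def check(addr, expected):
        if read(addr) != expected:
            errors.append(addr)

    for a in range(n):            # M0: up (w0)
        write(a, 0)
    for a in range(n):            # M1: up (r0, w1)
        check(a, 0); write(a, 1)
    for a in range(n):            # M2: up (r1, w0)
        check(a, 1); write(a, 0)
    for a in reversed(range(n)):  # M3: down (r0, w1)
        check(a, 0); write(a, 1)
    for a in reversed(range(n)):  # M4: down (r1, w0)
        check(a, 1); write(a, 0)
    for a in range(n):            # M5: up (r0)
        check(a, 0)
    return errors

# A fault-free simulated RAM passes:
ram = [0] * 16
assert march_c_minus(lambda a: ram[a],
                     lambda a, v: ram.__setitem__(a, v), 16) == []

# A cell stuck at 0 (hypothetical fault at address 5) is caught:
def write_sa0(a, v):
    ram[a] = 0 if a == 5 else v

ram = [0] * 16
assert 5 in march_c_minus(lambda a: ram[a], write_sa0, 16)
```

A BIST implementation replaces the software loops with an address counter and a small state machine, but the read/write/compare sequence is the same.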
RAMs with Built-In Self-Diagnosis and Self-Repair:
It has been proposed by several academic and industrial research groups
that memories be given the ability to test themselves, diagnose where
the faults are, and then implement a suitable repair by swapping bad
memory elements with spare memory elements.
This project would survey some of the major published schemes, and then
comment on the practicality and future of the idea.
Memories with Built-In Error Detection and Correction:
A strategy for increasing the reliability and availability of RAMs and
ROMs is to include built-in error detection and correction circuitry.
Often the strategy taken is to store coded data.
Typically the codes allow the user of the memory to determine the presence,
and possibly also the exact location, of bit errors in a retrieved word.
Simple codes that are used include parity bit codes and Hamming codes.
Depending on the implementation, the presence of error detection
(and correction) can complicate the testing problem.
An approach that has been taken for ROMs is to use a special checksum
computed for a good ROM to determine the one word containing an erroneous bit
in a faulty ROM.
Once the error has been located, special error correction circuitry
automatically corrects the ROM output word each time the faulty word is read.
In this way a faulty ROM can be repaired and used as if it were fault-free,
thus raising the effective yield of usable chips.
A project in this topic could survey the major error detection and correction
schemes in RAMs and ROMs.
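To make the coding idea concrete, here is a minimal sketch of the Hamming(7,4) single-error-correcting code, one of the simple codes mentioned above; the function names and bit ordering are illustrative choices.

```python
def hamming74_encode(d):
    """4 data bits -> 7 code bits in positions 1..7 (parity at 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:                    # non-zero syndrome: flip the bad bit
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # return just the data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                        # inject a single-bit error on retrieval
assert hamming74_correct(code) == word
```

A memory using such a code stores 7 bits per 4-bit word; the syndrome logic in the read path performs the correction transparently.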
IEEE 1149.1 - Boundary Scan and Test Access Port Standard:
This standard describes a way in which, with the addition
of four special test pins to each chip, the interconnections between the chips
on a printed circuit board can be easily tested from the board's edge, without
the need to actually make mechanical contact with the pins on each chip.
The 1149.1 standard also provides standard protocols for accessing
various common test modes, such as built-in self test and scan testing.
This project would involve a brief discussion of the 1149.1 standard and
its impact on system testing strategies.
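One classic interconnect-test idea that boundary scan enables is the counting sequence: drive every board net with a unique binary code so that a shorted or stuck net receives a code differing from the one driven. The sketch below, including the wired-AND short model, is an illustration of the idea rather than a prescription from the standard.

```python
import math

def net_codes(num_nets):
    """Assign each net a unique nonzero code; the all-0 and all-1 codes are
    reserved so that a stuck net never receives a legal code."""
    width = math.ceil(math.log2(num_nets + 2))
    return width, [n + 1 for n in range(num_nets)]

# Only `width` serial-scan patterns (one per code bit) are needed,
# regardless of the number of nets:
width, driven = net_codes(8)          # 8 nets -> 4-bit codes 1..8

# Hypothetical wired-AND short between nets 2 and 5: both nets receive the
# AND of the two driven codes, which matches neither driven code.
received = list(driven)
received[2] = received[5] = driven[2] & driven[5]
faulty = [i for i in range(8) if received[i] != driven[i]]
assert faulty == [2, 5]
```

In a real board test, the codes are shifted into the driving chips' boundary-scan registers and the received values are shifted out of the receiving chips' registers through the TAP.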
IEEE 1149.4 - Enhanced Boundary Scan Test Access Port Standard:
This recently approved standard specifies design-for-testability features
that support the testing of embedded analog and mixed analog/digital circuitry.
1149.4 was designed to be an enhancement of IEEE 1149.1.
This project would involve briefly describing the standard as well as
reviewing the developments and negotiations that led to the new standard.
IEEE P1500 - Proposed Embedded Core Test Standard:
Most large digital ICs are not built from scratch.
Instead, much of the circuitry is assembled from pre-designed and/or
pre-verified circuit blocks known as cores.
The use of cores is very attractive to designers because it can greatly
reduce the development time for new "custom" chips.
As libraries of compatible cores have become available, the resulting
market of re-usable intellectual property (IP) designs is helping
to sustain and accelerate the design and introduction of so-called
system-on-a-chip designs.
A challenge of core-based design is how to ensure that the resulting
chips are testable, even if the cores individually are testable.
The proposed P1500 core test standard aims to establish standards that
will improve the testability of core-based designs.
This project would review the elements that are likely to be included in
the new standard, as well as noting the compromises that are going to
be required to achieve an industry-wide consensus.
Very High Speed Interface Testing:
The consumer takes it for granted that microprocessor and memory
speeds will continue to march upward from 500 MHz, to 700 MHz, and
then all the way up to 1 GHz and beyond.
But how will the high-frequency interfaces to complex digital systems
be tested adequately when most automatic test equipment struggles to
operate at faster than 400 MHz?
For example, how will 800 MHz Direct Rambus memory-processor
interfaces be tested?
One possible project topic is to review the major approaches that could
be taken to solve the high-speed testing challenges using
lower speed test equipment in combination with high-speed instrumentation.
Parallel Test Pattern Generation:
A number of approaches have been proposed to exploit the parallelism inherent
in the key problem of deterministic test pattern generation.
This project would involve reviewing and comparing some of the main approaches.
Testable Finite State Machines:
Finite state machines are a fundamental building block in sequential
integrated circuit designs.
Understandably, much effort has been spent developing computer-aided
design tools which automatically synthesize correct IC layouts given
boolean equations describing the desired sequential behaviour.
It is also desirable that the resulting designs be testable.
This project would involve surveying several different strategies
for synthesizing inherently testable FSMs.
Coder/Decoder (CODEC) Testing:
The CODEC is the electronic component which converts the
analog signals expected by conventional telephone sets to and from the
digital signals used internally by modern telephone systems.
Lying at the interface between the analogue and digital worlds, the CODEC
is a particularly difficult part to test.
This project could involve looking at various published CODEC designs and
comparing the testing strategies that were adopted in each case.
What testability features have been provided on recent CODEC chip designs?
IC Tester Architecture:
The rapidly improving performance of advanced integrated circuits places
especially tough demands on the test equipment that must test them.
This project would involve surveying some of the tester architectures that
have been developed to cope with the problem.
What are the main categories of IC testers?
What are the subsystems present in an IC tester?
How are functional, parametric and timing tests actually performed?
Delay Fault Testing:
Delay faults model the phenomenon where logic signal transitions along some
signal paths are delayed beyond the specified limits.
This project would involve reviewing some of the work that has been performed
on modelling and testing such faults.
What is the difference between gate delay faults and path delay faults?
Can conventional test pattern generation algorithms be modified to
test delay faults?
What specialized TPG algorithms for delay faults, if any, have been
described in the literature?
Quality Level versus Stuck-At Fault Coverage:
Integrated circuit quality is usually defined to mean the fraction of
defective chips that are present in the supply of delivered parts.
The goal of testing is to weed out the defective parts and thus ensure
a high quality level in the remaining parts.
However, tests are designed to detect simplified fault models, such as the
single stuck-at fault.
This project would involve reviewing some of the important papers that
have considered the relationship between quality level and the level of
stuck-at fault coverage.
Is it in fact sufficient to only achieve a high level of stuck-at fault
coverage and ignore the problem of explicitly detecting other faults?
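One frequently cited starting point is the Williams-Brown model, which relates the shipped defect level DL to process yield Y and fault coverage T as DL = 1 - Y^(1-T). A quick sketch:

```python
# Williams-Brown model: DL = 1 - Y**(1 - T), with yield Y and coverage T.
# It assumes, roughly, that stuck-at coverage tracks real-defect coverage.

def defect_level(yield_, coverage):
    """Fraction of shipped parts expected to be defective."""
    return 1 - yield_ ** (1 - coverage)

# Perfect coverage ships (in this model) zero defective parts:
assert defect_level(0.5, 1.0) == 0.0

# But even 99% stuck-at coverage at 50% yield leaves roughly 0.7% escapes,
# i.e. thousands of defective parts per million:
assert 0.006 < defect_level(0.5, 0.99) < 0.008
```

The model makes vivid why very high fault coverage is demanded for low-ppm quality targets, and the papers surveyed in this project debate how well its assumptions hold.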
Cellular Automata versus LFSRs for BIST:
The linear feedback shift register (LFSR) is the most common way of
generating parallel or serial pseudo-random bit sequences.
In parallel mode, however, the LFSR can be criticized as a poor pseudo-random
bit source because, being a shift register, there is a high degree of
correlation between adjacent bits (a random source would have no
correlation between adjacent bits).
The cellular automaton (CA) has been proposed as an alternative way of
generating parallel pseudo-random bit streams which are much more ``random'',
according to standard tests of randomness, than the data produced by an LFSR.
This project would survey some of the work that has attempted to demonstrate
the superiority of the CA over the LFSR in testing applications.
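The structural difference is easy to see in a sketch: in an LFSR each stage is a one-cycle delayed copy of its neighbour, while a rule-90/150 hybrid CA updates every cell from its neighbourhood in parallel. The feedback taps and rule assignment below are illustrative choices, not a specific published generator.

```python
def lfsr_step(state, taps=(0, 2)):
    """One clock of a Fibonacci-style LFSR (tap choice is illustrative)."""
    fb = 0
    for t in taps:
        fb ^= state[t]
    return [fb] + state[:-1]          # pure shift: bit i becomes bit i+1

def ca_step(state, rules):
    """One clock of a 1-D CA; rules[i] is 90 or 150, null boundary cells."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        v = left ^ right              # rule 90: XOR of the two neighbours
        if rules[i] == 150:
            v ^= state[i]             # rule 150 also XORs the cell itself
        nxt.append(v)
    return nxt

# Adjacent LFSR outputs are perfectly correlated (just delayed copies):
s = [1, 0, 0, 1]
assert lfsr_step(s)[1:] == s[:-1]

# The CA has no such shift structure; every cell mixes its neighbourhood:
assert ca_step([0, 1, 0], [90, 150, 90]) == [1, 1, 1]
```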
Special-Purpose Simulation Engines:
Simulation and fault simulation typically produce the bulk of the computation
required in the design and test process.
Several special-purpose computers, called ``simulation engines'', have been
designed so as to speed up the simulation computation.
Some of these machines have very impressive performance.
For example, IBM's Yorktown Simulation Engine is supposed to be capable of
simulating gate-level models of the early 8-bit microprocessors at a rate
faster than the original parts.
This project would involve surveying the various machines that have been
proposed and/or designed, and predicting where future developments might lead.
TPG for Sequential Circuits:
In class, we only briefly considered in general terms the structure of
test pattern generation algorithms for sequential circuits.
This project would involve a more detailed discussion of how various practical
sequential TPG algorithms have been designed.
The advantages and disadvantages of each method should be discussed.
Testing for Bridging Faults:
The bridging fault has been closely studied because of its close
relationship to a common physical defect, the short between nearby conductors.
This project would involve a more detailed look at the bridging fault
and the kinds of behaviours that it is expected to produce in various
circuit technologies.
A discussion of algorithms designed specifically to generate tests that
detect bridging faults should also be included.
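The two most common behavioural models are the wired-AND and wired-OR bridges, sketched below; which model applies in practice depends on the circuit technology and relative drive strengths, and the gate-level example is purely illustrative.

```python
def bridge(a, b, model):
    """Resolve a short between two driven nodes; both take the same value."""
    v = a & b if model == 'wired-and' else a | b
    return v, v

# Two NAND gate outputs shorted together, with inputs chosen so the two
# gates drive opposite values:
x = 1 - (1 & 1)                       # NAND(1,1) = 0
y = 1 - (1 & 0)                       # NAND(1,0) = 1
assert bridge(x, y, 'wired-and') == (0, 0)   # the 0 "wins"
assert bridge(x, y, 'wired-or') == (1, 1)    # the 1 "wins"
```

A test for the bridge must both drive the two nodes to opposite values and propagate the resolved (faulty) value to an observable output.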
PLA Testing and Testable PLAs:
The programmable logic array (PLA) is a versatile circuit structure used
to simultaneously implement several boolean functions over a common set
of input variables.
PLAs facilitate the sharing of logic gates and result in conveniently
regular layouts.
Many researchers have considered the related problems of testing PLAs and
of modifying the basic PLA design to produce more testable or self-testable
designs.
There are many sub-areas within this general topic area which would be suitable
for a project:
One project is to examine the interrelationships among the various
fault models that have been considered for PLAs.
Can one safely assume that by looking for crosspoint faults, one is also
going to detect all stuck-at, bridging and transistor faults?
What is the effect, if any, of crosspoint redundancy on the effectiveness of
the various fault models?
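As a concrete illustration of a crosspoint fault, the toy PLA model below (AND and OR planes represented as connection matrices, a simplification for illustration only) shows how a missing AND-plane crosspoint turns a literal into a don't-care and grows a product term:

```python
def pla_eval(and_plane, or_plane, inputs):
    """and_plane[p][i] in '1'/'0'/'-': product p needs input i true/false/either.
    or_plane[o][p] == 1 connects product p to output o."""
    products = []
    for term in and_plane:
        val = 1
        for lit, x in zip(term, inputs):
            if lit == '1':
                val &= x
            elif lit == '0':
                val &= 1 - x
        products.append(val)
    return [int(any(products[p] for p in range(len(products)) if row[p]))
            for row in or_plane]

# f = a.b + a'.c
and_plane = ['11-', '0-1']
or_plane = [[1, 1]]

# A missing AND-plane crosspoint turns the literal b of the first term into
# a don't-care, growing the term from a.b to just a:
faulty_and = ['1--', '0-1']

# The input a,b,c = 1,0,1 distinguishes the good PLA from the faulty one:
assert pla_eval(and_plane, or_plane, [1, 0, 1]) == [0]
assert pla_eval(faulty_and, or_plane, [1, 0, 1]) == [1]
```

Extra (rather than missing) crosspoints shrink a term instead; comparing how the various fault models perturb these matrices is one way to study their interrelationships.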
Concurrent or on-line testing is the process of using coded data to test
the proper functionality of circuits as they are being used.
A project would be to consider the various ways in which concurrent testing
has been proposed for PLAs.
What are some of the ways in which PLAs have been modified?
What codes have been used?
What are the trade-offs among the various approaches?
An attractive way of solving the problem of testing PLAs that are embedded
in large circuits is to make them self-testable.
What are the various approaches that have been taken to designing PLAs
with built-in self-test?
What are the strengths and weaknesses of the main schemes?
Are PLAs with BIST actually being used?
Cache Memory Testing:
Cache memories contribute an increasing proportion of the transistors
on microprocessor ICs.
This project would survey some of the approaches that have been taken
in industrial designs to test the L1 and L2 caches in recent microprocessors.
Content-Addressable Memory (CAM) Testing:
CAMs are a class of memory that includes caches.
However, CAMs are now increasingly used to perform routing functions in
high-speed digital packet switches.
This project would survey some of the design-for-testability approaches that
are being used in large CAMs.
Multi-Port Memory Testing:
Multi-port memories pose a challenge because of the possibility of
erroneous interactions between the different ports.
This project would involve investigating the fault models and new tests
that have been proposed for multi-port memories.
Last updated February 27, 2000