EE 552 Application Notes
DRAM
by Charlene Eriksen, Michael Rivest, and Kelly Lawson
These application notes are intended to give information on DRAM (Dynamic Random Access Memory). Some future projects may require larger amounts of memory than is feasible with SRAM, so hopefully this information will prove useful.
Overview of DRAM and refreshing
DRAM is a type of RAM that holds its data only if it is continuously refreshed. It is made of one transistor and one capacitor per bit. The capacitor holds an electrical charge if the bit is a "1" and no charge if it is a "0". The transistor is used to read the contents of the capacitor. Because the cells are built using capacitors, which leak, a refresh circuit must recharge the capacitors many hundreds of times each second to ensure that the charge, and with it the data, is not lost.
The memory chips are organized into rows and columns, so, to refresh DRAM, each row in memory is read, one row at a time. Because of the internal circuitry of each cell, simply reading the cell recharges the capacitors.
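This read-to-refresh behaviour can be illustrated with a toy software model. The sketch below is Python purely for illustration; the retention time, row count, and class names are all made up, not taken from any real part.

```python
RETENTION_TICKS = 8     # toy retention time: a row loses charge after this many ticks
NUM_ROWS = 4
ROW_BITS = 8

class ToyDRAM:
    """Toy charge model: each row records ticks since it was last recharged."""
    def __init__(self):
        self.rows = [[0] * ROW_BITS for _ in range(NUM_ROWS)]
        self.age = [0] * NUM_ROWS

    def tick(self):
        # Charge leaks away over time; a row left too long decays to all zeros.
        for r in range(NUM_ROWS):
            self.age[r] += 1
            if self.age[r] > RETENTION_TICKS:
                self.rows[r] = [0] * ROW_BITS   # stored 1s have leaked away

    def read_row(self, r):
        # As in real DRAM, sensing a row rewrites (recharges) its cells.
        self.age[r] = 0
        return self.rows[r]

def refresh_all(dram):
    # Refreshing is nothing more than reading every row, one row at a time.
    for r in range(NUM_ROWS):
        dram.read_row(r)

dram = ToyDRAM()
dram.rows[2] = [1, 0, 1, 1, 0, 0, 1, 0]
for t in range(100):
    if t % RETENTION_TICKS == 0:       # refresh faster than the charge leaks
        refresh_all(dram)
    dram.tick()
print(dram.read_row(2))                # data survives: [1, 0, 1, 1, 0, 0, 1, 0]
```

If the refresh call is removed, the stored pattern decays to zeros after the retention time, which is exactly why real DRAM needs its refresh circuit.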
Differences between SRAM and DRAM
The main difference between SRAM (Static RAM) and DRAM is that SRAM retains its contents without intervention, while DRAM must be refreshed. This is a result of the difference in internal components: SRAM is made of four to six transistors per bit and therefore has no capacitors that need recharging. As mentioned before, each DRAM cell consists of one transistor and one capacitor.
This main difference creates advantages and disadvantages for each. DRAM is smaller and less expensive than SRAM because each cell uses fewer internal components; DRAM typically occupies 1/4 the silicon area of SRAM or less. However, SRAM is much simpler to use because it does not require an external refresh circuit, and it is also faster than DRAM. Because SRAM is so much more expensive and larger per bit, DRAM is used for system memory in PCs.
Asynchronous versus synchronous
In the past, DRAM has been asynchronous, meaning that memory access is not coordinated with the system clock. This works fine at lower speeds, but the demands of high-speed applications have led to the development of synchronous DRAM (SDRAM). In SDRAM, all signals are tied to the clock, so timing is much tighter and better controlled.
Different types of DRAM
The many different types of DRAM vary in how they are configured and addressed. This section gives a comparison of some of the major types.
The oldest DRAM technology uses standard memory addressing, where the row address is sent to memory first, followed by the column address. This type of DRAM is now fairly obsolete.
The next DRAM technology to develop was Fast Page Mode (FPM) DRAM, which is slightly faster than conventional DRAM. Instead of sending the row address for each memory access, FPM sends the row address just once for many accesses to nearby memory locations. This improves access time. FPM DRAM is quite slow compared to more modern technologies but is perhaps one of the simplest types.
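As a rough illustration of why sending the row address only once helps, consider counting address transfers for a run of sequential accesses that stay within one row. The access count below is invented for illustration, not taken from a datasheet.

```python
N = 16   # sequential accesses that all fall within one row ("page")

# Conventional DRAM: every access needs both a row and a column address.
conventional_transfers = N * 2

# Fast Page Mode: the row address is sent once; only column addresses
# follow while the accesses stay within the same open row.
fpm_transfers = 1 + N

print(conventional_transfers, fpm_transfers)   # 32 vs 17
```

The saving grows with the length of the sequential run, which is why FPM helps most for access patterns with good locality.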
Extended Data Out (EDO) DRAM, sometimes called Hyper Page Mode DRAM, is the most common type of asynchronous DRAM. It is slightly faster than FPM because of a change in how memory access works: the timing circuits in EDO DRAM are set up so that one access to memory can begin before the last one has finished. This results in about a 3-5% performance boost over FPM and, because of its popularity, EDO DRAM has become less expensive than FPM.
Burst EDO (BEDO) increased the performance of asynchronous DRAM above EDO. The use of pipelining allows much faster access times with little additional cost. Although BEDO is an improvement in speed, it has not caught on in the marketplace, and EDO remains the most common type of asynchronous DRAM.
Synchronous DRAM is relatively new but is rapidly becoming the standard in industry. SDRAM is tied to the system clock and is designed to read or write from memory with zero wait states (after the initial read or write latency) at memory bus speeds up to 100 MHz or even higher. This faster access is accomplished by a number of internal performance improvements, including interleaving, which involves providing simultaneous access to more than one chunk of memory.
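The benefit of interleaving can be sketched with a small timing model. All cycle counts here are invented for illustration; the point is only that with two banks, consecutive accesses alternate banks and overlap in time.

```python
ACCESS = 6   # assumed cycles for one bank to complete a full access
GAP = 3      # assumed minimum cycles between issuing commands to different banks

def cycles(n_accesses, banks):
    """Cycle at which the last of n sequential accesses completes."""
    free_at = [0] * banks   # cycle at which each bank becomes free
    issue = 0
    finish = 0
    for a in range(n_accesses):
        b = a % banks                    # consecutive addresses alternate banks
        start = max(issue, free_at[b])   # wait if the target bank is still busy
        free_at[b] = start + ACCESS
        finish = free_at[b]
        issue = start + GAP              # the next command may issue sooner
    return finish

# With one bank each access waits for the previous; with two banks they overlap.
print(cycles(8, 1), cycles(8, 2))   # 48 vs 27 cycles
```

With one bank, every access must wait the full access time; with two, a new access starts while the other bank is still working, nearly doubling throughput for sequential patterns.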
As bus speeds increase, SDRAM will also be replaced by some of the emerging technologies. Double Data Rate (DDR) SDRAM is similar to regular SDRAM but transfers data twice per clock cycle, on both the rising and falling edges of the clock signal. Direct Rambus (DR) and Synchronous-Link (SL) DRAM are also new technologies that may be part of the competition for faster and faster memory.
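The effect of transferring on both clock edges is easy to check with peak-bandwidth arithmetic. The bus width and clock below are illustrative assumptions, not specifications.

```python
BUS_WIDTH_BYTES = 8    # assumed 64-bit memory bus
CLOCK_MHZ = 100        # assumed bus clock

sdr_peak_mb_s = BUS_WIDTH_BYTES * CLOCK_MHZ        # one transfer per clock
ddr_peak_mb_s = BUS_WIDTH_BYTES * CLOCK_MHZ * 2    # rising and falling edges

print(sdr_peak_mb_s, ddr_peak_mb_s)   # 800 vs 1600 MB/s peak
```

Doubling the transfers per cycle doubles the peak bandwidth without raising the clock frequency itself.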
The table below gives typical timing information for the basic types of DRAM described above:

Memory Technology | Typical System Bus Speeds | Usual DRAM Speed
Conventional      | 4.77 - 40 MHz             | 80 - 150 ns
FPM               | 16 - 66 MHz               | 60 - 80 ns
EDO               | 33 - 75 MHz               | 50 - 60 ns
BEDO              | 60 - 100 MHz              | ??
SDRAM             | 60 - 100+ MHz             | 6 - 12 ns
Source: The PC Guide - DRAM Technologies by Charles M. Kozierok
Control of DRAM
Logic circuits can be used to control the refresh of DRAM. The simplest approach is to refresh all rows at fixed intervals. If speed is a consideration, other techniques can reduce the refresh overhead. One is selective refresh with data allocation optimization: each time data is stored, a flag is set to mark the row it was placed in as valid, and only valid rows are refreshed. By optimizing the allocation of data so that it is packed into as few rows as possible, the number of rows that need to be refreshed can be reduced further. Another possibility is variable period refreshing: since the data retention time of each DRAM cell is different, the refresh period can be varied based on values kept in a table.
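The selective refresh idea can be sketched as a small controller model. This is a toy Python sketch under made-up parameters (row count, cycle count, class name), not a description of a real controller.

```python
NUM_ROWS = 8
REFRESH_CYCLES = 10

class SelectiveRefresher:
    """Toy controller: a valid flag per row, set on every write to that row."""
    def __init__(self):
        self.valid = [False] * NUM_ROWS
        self.row_refreshes = 0

    def write(self, row):
        self.valid[row] = True           # mark the row as holding live data

    def refresh_cycle(self):
        for r in range(NUM_ROWS):
            if self.valid[r]:            # rows with no valid data are skipped
                self.row_refreshes += 1

ctrl = SelectiveRefresher()
ctrl.write(0)
ctrl.write(1)   # good allocation packs data into few rows, cutting refresh work
for _ in range(REFRESH_CYCLES):
    ctrl.refresh_cycle()

# Only 2 valid rows x 10 cycles = 20 row refreshes, versus 8 x 10 = 80
# if every row were refreshed unconditionally.
print(ctrl.row_refreshes)
```

The same structure shows why allocation matters: scattering the same data across all eight rows would mark every row valid and erase the saving.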
Sources
A Complete Illustrated Guide to the PC Hardware - Module 2e by Michael B. Karbo
PC Guide - System Memory by Charles M. Kozierok
Tom's Hardware Guide - RAM Guide by Dean Kent
Optimizing the DRAM Refresh Count by Taku Ohsawa, Koji Kai, and Kazuaki Murakami