Testing new memory technology chips

11 December 2015

The electronics industry is heavily invested in the development of new memory technologies such as PRAM, MRAM and RRAM.

The performance of new memory technology test chips is improving rapidly, but work still needs to be done before these devices can be scaled up to compete with or replace conventional memories.

Generally speaking, by the time a test chip for a new memory technology becomes available, basic tests have already been carried out to check for manufacture-related problems such as stuck-at faults, transition faults and address-decoding faults. But another type of testing is also necessary: performance-related tests that reveal how fast the chip can be reliably accessed, and how much the chip's access speed affects the performance of the whole computing system.
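As a rough illustration, a performance test of this kind might sweep the programmed access time downward and check, at each setting, whether a simple write/read-back pattern still survives. The sketch below is only illustrative: set_access_cycles(), chip_write() and chip_read() are placeholder names (backed here by a plain array) standing in for the real test-chip interface.

#include <stdint.h>
#include <stdio.h>

#define TEST_WORDS 1024u

/* Placeholder model of the test chip: real code would drive the       */
/* FMC-attached device through the memory controller core in the PL.   */
static uint32_t chip[TEST_WORDS];

static void set_access_cycles(unsigned cycles) { (void)cycles; /* program timing */ }
static void chip_write(uint32_t addr, uint32_t data) { chip[addr] = data; }
static uint32_t chip_read(uint32_t addr) { return chip[addr]; }

int main(void)
{
    /* Sweep the access time downward and record the fastest setting   */
    /* at which the write/read-back pattern still comes back intact.   */
    for (unsigned cycles = 20; cycles >= 2; cycles--) {
        set_access_cycles(cycles);

        int errors = 0;
        for (uint32_t i = 0; i < TEST_WORDS; i++)
            chip_write(i, i ^ 0xA5A5A5A5u);
        for (uint32_t i = 0; i < TEST_WORDS; i++)
            if (chip_read(i) != (i ^ 0xA5A5A5A5u))
                errors++;

        printf("access time %2u cycles: %s (%d errors)\n",
               cycles, errors ? "FAIL" : "PASS", errors);
    }
    return 0;
}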

To successfully carry out the planned performance tests, the test environment must be able to generate configurable digital waveforms to access the chip. It must also be able to construct an entire computing environment to measure the impact of chip access speed. There are many ways to create or purchase a test environment to satisfy these needs, but our team at Qualcomm decided to build our own environment based on Xilinx’s ZC706 Evaluation Kit.

Ins and outs of memories  

Conventional memory technologies like DRAM, SRAM and flash store ones and zeros using an electrical charge in each memory cell. DRAM is widely used in PCs and mobile computing devices to run programs and to store temporary data. SRAM is commonly used as cache memory and register files in microprocessors. It is also frequently found in embedded systems when power consumption is a big concern. Unlike DRAM or SRAM, flash memory offers persistent storage after power is removed from the system. Flash memory runs more slowly than the others, and might wear out with excessively high numbers of programming cycles.

In comparison to conventional charge-based memory technologies, new memory technologies are based on other physical properties of their storage elements. As an example, a memory element of magnetoresistive RAM (MRAM) is formed from two ferromagnetic plates separated by a thin layer of insulator. Each plate holds a magnetisation; one is fixed, while the other can be switched by an external field to store data. The stored data is read by measuring the electrical resistance of the element. MRAM is similar in speed to SRAM and similar in density to DRAM. Compared with flash memory, MRAM runs much faster and suffers no degradation from programming.

Requirement analysis
   
When devising a scheme for evaluating the MRAM test chip, we settled on a Zynq SoC approach because of the following considerations: 

• The FPGA Mezzanine Card (FMC) interface on the ZC706 board provides high-speed signaling capability to and from the memory test chip through an FMC daughtercard.
• The programmable logic (PL) portion of the Zynq SoC provides the ability to construct parameterisable memory controller cores. This is essential to meet the requirement that the test chip access speed can be varied.
• The Zynq SoC’s processing system (PS), which consists of two ARM Cortex-A9 cores, provides the ability to modify test chip access speed through software (see the sketch after this list).
• The PS also makes it possible to construct a complete computing system. This is essential to meet the requirement that the test system measure the impact of chip access speed on a full computing environment.
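As a very rough sketch of how that software control might look, the fragment below pokes timing registers of the controller core through a memory-mapped window from Linux. The base address and register offsets are invented for illustration; in a real project they would come from the address map that Vivado exports to the software side.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical base address and register offsets for the PL memory    */
/* controller core; the real values come from the exported Vivado map. */
#define CTRL_BASE        0x43C00000u
#define REG_READ_CYCLES  0x00u   /* read access time, in PL clock cycles  */
#define REG_WRITE_CYCLES 0x04u   /* write access time, in PL clock cycles */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, CTRL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Change the test-chip access speed from software running on the PS. */
    regs[REG_READ_CYCLES  / 4] = 6;
    regs[REG_WRITE_CYCLES / 4] = 8;

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}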

Hardware and system architecture  

The hardware architecture of the chip test environment is illustrated in Figure 1. Software runs on the Zynq SoC’s ARM A9 processors, while the memory controller core is created using the programmable logic. We established a DMA channel between the PS and the controller core to move large blocks of data between them easily. The memory test chip resides on the FMC daughtercard, and it talks with the memory controller core through the FMC interface.
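To give a flavour of how large blocks might move across that DMA channel from user space, the sketch below assumes a small kernel driver exposes the channel as a character device. The device name /dev/memtest_dma and its write-then-read-back semantics are assumptions made for illustration, not part of the actual design.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_BYTES (64 * 1024)

int main(void)
{
    /* Hypothetical char device on top of the PS<->PL DMA channel:      */
    /* write() pushes a block to the test chip, read() pulls it back.   */
    int fd = open("/dev/memtest_dma", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t *tx = malloc(BLOCK_BYTES), *rx = malloc(BLOCK_BYTES);
    for (int i = 0; i < BLOCK_BYTES; i++)
        tx[i] = (uint8_t)(i * 7 + 3);        /* simple deterministic pattern */

    if (write(fd, tx, BLOCK_BYTES) != BLOCK_BYTES ||
        lseek(fd, 0, SEEK_SET) < 0 ||
        read(fd, rx, BLOCK_BYTES) != BLOCK_BYTES) {
        perror("dma transfer");
        return 1;
    }

    printf("block compare: %s\n",
           memcmp(tx, rx, BLOCK_BYTES) == 0 ? "PASS" : "FAIL");

    free(tx); free(rx); close(fd);
    return 0;
}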

The system architecture is illustrated in Figure 2. The three layers on the bottom are hardware layers and the three layers on the top are software layers. We selected Linux as the operating system because it is open source, so the source code can be tweaked if needed. Although no tweaking was done in the current stage of development, it might be necessary to take advantage of some unique properties of new memory chips down the road. 

The software we wrote at the application layer fell into two categories: one for configuring the memory controller core, and the other for profiling the performance of the memory chip and of the whole system.
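A minimal example of the profiling category might time a burst of reads through the memory-mapped test-chip window, as sketched below. The window address is hypothetical, and a real profiling tool would of course cover many more access patterns and report system-level metrics as well.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical address of the test-chip window mapped through the PL. */
#define CHIP_BASE  0x80000000u
#define CHIP_WORDS 4096u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *mem = mmap(NULL, CHIP_WORDS * 4, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, CHIP_BASE);
    if (mem == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    struct timespec t0, t1;
    uint32_t sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint32_t i = 0; i < CHIP_WORDS; i++)
        sink ^= mem[i];                 /* reads through the mapped window */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg read latency: %.1f ns (checksum %08x)\n",
           ns / CHIP_WORDS, sink);

    munmap((void *)mem, CHIP_WORDS * 4);
    close(fd);
    return 0;
}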

Easy migration of hardware and software
   
With help from the local Xilinx FAE, we brought up the test environment within a month. Most of our effort was spent on designing and implementing the interface between software and hardware layers. This is actually one of the reasons that we like the Zynq SoC: It contains both microprocessors and programmable logic in one device, which makes migrating functions between hardware and software fairly easy. In our design, we fine-tuned the software/hardware partition a couple of times and eventually settled on the one we liked. To comfortably work on a Zynq SoC-based system, one needs to understand both hardware and software reasonably well.

Another thing we liked was the Vivado Design Suite tool chain. The Vivado environment shows the design blocks, automatically assigns register addresses and checks for errors before exporting hardware information to the software development process. The Vivado Design Suite also provides in-system signal-level debugging ability, which is a must-have to pinpoint the root cause of any RTL issue.

The final thing we want to mention here is the Linux OS. Our software at the application level is heavily GUI based. The popularity of the Linux OS allowed us to leverage our previous experience on Linux GUI development so that we could get the test programs up and running quickly. 

