Tough decisions made easy for maintaining PCI connectivity
20 October 2010
Rami Sethi explains how PCI Express (PCIe) has become the interconnect of choice for processor complexes, adapter cards, I/O cards and graphics engines.
Applications that have traditionally relied on legacy protocols such as PCI, PCI-X and VME as the device interconnect are rapidly transitioning to new architectures built around PCIe. In particular PCI, once ubiquitous in applications from computing to communications, has seen its support in processors, chipsets and ASICs decline precipitously.
Unfortunately, as is often the case in technology shifts, the PCIe transition has not been uniform across the peripherals that must connect to these ICs. As PCIe entirely supplants legacy PCI as the native interconnect on successive generations of microprocessors, maintaining PCI support for legacy endpoints has become a growing concern for system designers.
The installed base of legacy PCI-based peripherals is still considerable, especially in markets where platforms are constructed in a modular fashion using commercial off-the-shelf (COTS) cards or where technology roadmaps tend to lag between processor complexes and the peripheral devices to which they connect. Examples of this include computing motherboards with PCI adapter card slots for legacy peripherals or debug tools, chassis-based systems with interchangeable blade/mezzanine architectures, such as those based on the Advanced Telecom Computing Architecture (ATCA), and single-board designs where one or more endpoints have not transitioned to PCIe as a native interface.
In applications where PCI support is a requirement, selection and implementation of a proper bridging solution should be made carefully. Bridging solutions are, by their nature, usually something less than elegant, and a proper bridging device should be as unobtrusive as possible in the end application. Consequently, a PCIe bridge’s performance, power, cost, board space and seamless compatibility with the latest standards and an extensive range of legacy devices are often the chief criteria for selection.
Introducing a PCIe-to-PCI bridge where one previously did not exist has performance implications that are generally benchmarked in two ways: latency and throughput. Latency through a bridge is simply the time a transaction needs to traverse it; a lower figure is more advantageous in traffic scenarios that use many transactions with small data payloads. As payload sizes increase, the impact of latency on overall performance diminishes, and throughput, a measure of the data that can pass through the bridge in a given time, begins to dominate. Throughput can be separated into two transaction types: posted (writes) and delayed (reads). The overall throughput limit of a bridge will often converge on the performance of delayed transactions initiated by the legacy PCI device. Commonly implemented features such as read pre-fetching and concurrent upstream requests can improve delayed-transaction throughput, as can unique features such as short-term caching. To maximise aggregate bandwidth, a bridge's management of delayed transactions provides the greatest overall benefit.
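The interplay between latency and throughput described above can be sketched with a small model: each transaction pays a fixed traversal latency plus the wire time for its payload, so effective throughput only approaches the raw figure at large payloads. The numbers below (400 ns latency, 250 MB/s raw bandwidth) are hypothetical, chosen purely for illustration, not the specifications of any real bridge.

```python
# Illustrative model of how per-transaction latency erodes effective
# bridge throughput for small payloads. All figures are hypothetical.

def effective_throughput(payload_bytes, latency_ns, raw_mbps):
    """Effective MB/s for back-to-back transactions of a given payload.

    Each transaction pays a fixed latency plus the wire time for its
    payload; throughput = payload / total time per transaction.
    """
    wire_time_ns = payload_bytes / (raw_mbps * 1e6) * 1e9
    total_ns = latency_ns + wire_time_ns
    return payload_bytes / total_ns * 1e9 / 1e6  # bytes/ns -> MB/s

for size in (4, 64, 512, 4096):
    mbps = effective_throughput(size, latency_ns=400, raw_mbps=250)
    print(f"{size:5d}-byte payload: {mbps:7.1f} MB/s effective")
```

With these assumed figures, a 4-byte transaction achieves under 10 MB/s while a 4-kilobyte transaction comes within a few percent of the raw bandwidth, which is why latency matters most for small-payload traffic.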
With the ever-increasing emphasis on power reduction across the spectrum of applications, it is important for PCIe-to-PCI bridges to take advantage of every mechanism available to lower power. This includes a combination of device architecture, design methodology and proper implementation of all power-saving states defined by the latest standard revisions. Of particular importance are the various low-power link states defined in the PCIe Base Specification, including fast-exit and Active State Power Management (ASPM).
ASPM capabilities are of specific interest in energy-efficient computing applications (such as in Energy Star-labelled computers). System designers seeking low-power PCIe solutions should pay close attention to which PCIe Base Specification revision a given device complies with. For example, although ASPM was defined in Revision 1.0 of the PCIe Base Specification, most 1.0-compliant devices did not implement it correctly. As a result, Microsoft selected PCIe Base Specification 1.1 as the required level of support to enable ASPM for a given device in the operating system.
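A device advertises which ASPM states it supports in the ASPM Support field (bits 11:10) of the Link Capabilities register in its PCIe capability structure, per the PCIe Base Specification. The sketch below decodes that field from a raw register value; the sample values are invented for illustration, and on a real system the register would be read from the device's configuration space (for example via `lspci -vv` on Linux).

```python
# Sketch: decoding the ASPM Support field from a PCIe Link
# Capabilities register value (bits 11:10 of the register).
# The sample register values used below are made up for illustration.

ASPM_SUPPORT = {
    0b00: "no ASPM",
    0b01: "L0s",
    0b10: "L1",
    0b11: "L0s and L1",
}

def aspm_support(link_caps):
    """Return the ASPM states a device advertises in Link Capabilities."""
    return ASPM_SUPPORT[(link_caps >> 10) & 0b11]

print(aspm_support(0x0C00))  # bits 11:10 = 11b -> "L0s and L1"
print(aspm_support(0x0400))  # bits 11:10 = 01b -> "L0s"
```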
Another very direct way to reduce power, as well as cost and board space, is to eliminate as many active external components as possible. Clock buffers, clock generators, bus arbiters and voltage regulators are prime examples of devices that can be readily integrated and easily managed on chip. This does come with special considerations: the clocking modes supported, the number of downstream devices that can be natively supported, and the voltage rails appropriate to the application. For example, some PCIe bridges use on-chip regulators to reduce the number of supply voltages the board must provide. This results in fewer voltage supplies and a smaller footprint, but can increase the power consumed in the bridge.
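The power cost of that integration follows from simple arithmetic: a linear regulator dissipates the full voltage drop times the load current, so deriving a low core voltage from a high external rail on chip burns power that an external switching supply would largely avoid. The voltages and current below are hypothetical, used only to show the calculation.

```python
# Rough arithmetic behind the trade-off above: a linear regulator
# dissipates (Vin - Vout) * Iload. All figures are hypothetical.

def linear_reg_loss_mw(v_in, v_out, load_ma):
    """Power burned in a linear regulator, in milliwatts."""
    return (v_in - v_out) * load_ma

# Deriving a hypothetical 1.2 V core rail from a 3.3 V supply at 100 mA:
loss = linear_reg_loss_mw(3.3, 1.2, 100)
print(f"regulator loss: {loss:.0f} mW")  # prints "regulator loss: 210 mW"
```

In this made-up case the on-chip regulator wastes more power than the core itself might draw, which is why matching the bridge's circuit blocks to rails already present on the board (as described next) can be the lower-power choice.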
Alternatively, the various circuit blocks of the PCIe bridge can be designed around specific voltage rails that match those already supplied to other key components in a given application. In the targeted applications for such a device, this can significantly reduce the power, cost and footprint of a PCIe bridge solution.
Probably one of the most important considerations in the selection of a PCIe-to-PCI bridge is its compatibility with the full range of PCI devices with which it might have to interface in the field. In applications where the bridge connects to one or more PCI slots, such as in PC motherboards, this is especially important because the system manufacturer may have great difficulty restricting what sort of PCI peripheral a user might plug in. A good example of this problem is the incompatibility of many PCIe-to-PCI bridges with older PCI cards that only operate at 5V. Bridges that are subjected to 5V PCI signalling but not properly designed for 5V tolerance can suffer long-term reliability problems as the PCI interface input buffers degrade.
Another consideration comes into play with older PCI peripherals that have fixed base addresses that cannot be translated into the PCIe address space and thus cannot be supported by a standard bridging device. Support for these peripherals requires a feature commonly called PCI Legacy Mode. This enables subtractive decoding on the upstream port of a PCIe-to-PCI bridge whereby transactions received that do not decode to an internal address are automatically forwarded to the PCI interface and can be claimed by devices on the secondary side.
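The decode behaviour described above can be sketched as follows: a positive-decode bridge forwards only transactions that hit one of its programmed address windows, while a subtractive-decode bridge (PCI Legacy Mode) additionally claims any transaction no other agent decoded. The window and addresses below are invented for illustration; real bridges implement this in hardware against their base/limit registers.

```python
# Sketch of positive vs subtractive (PCI Legacy Mode) decoding on a
# bridge's upstream port. Window ranges and addresses are hypothetical.

def forward_downstream(addr, windows, legacy_mode, claimed_elsewhere):
    """Decide whether a bridge forwards a transaction to its PCI side."""
    if any(base <= addr < base + size for base, size in windows):
        return True  # positive decode: address hit a programmed window
    # Subtractive decode: claim the transaction only if no other
    # agent in the system decoded it.
    return legacy_mode and not claimed_elsewhere

windows = [(0xE000_0000, 0x1000_0000)]  # one hypothetical memory window

# A fixed legacy address that falls outside every programmed window:
print(forward_downstream(0x000A_0000, windows, legacy_mode=True,
                         claimed_elsewhere=False))  # True
print(forward_downstream(0x000A_0000, windows, legacy_mode=False,
                         claimed_elsewhere=False))  # False
```

This is why a peripheral with a fixed, untranslatable base address still works behind a Legacy Mode bridge: its transactions reach the PCI bus by falling through the decode rather than by matching a window.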
The previous two examples illustrate some of the ‘thornier’ issues with interoperability of PCIe-to-PCI bridges, but there may be others related to specific device combinations. The first gate for consideration of any PCIe device should always be the PCIe Integrators List maintained by the PCI Special Interest Group. PCI-SIG offers compliance workshops where companies test compliance to the PCIe specification as well as interoperability with other devices, systems, and add-in cards available in the industry. Only devices that have passed this rigorous process are listed on the PCIe Integrators List, making it a valuable reference in PCIe component selection.
From big iron telecom equipment to ultra-mobile PCs to wireless access points, the steady elimination of PCI support from central processing chips makes connectivity to legacy peripherals increasingly difficult without utilising bridging solutions. Such solutions can present a range of headaches to a system designer, from failing to achieve energy-efficient labelling to total system failure, if they are not chosen carefully. Fortunately, new and innovative PCIe bridging solutions, such as the IDT PEB383, are being released to the market as a response to this evolving need.
Rami Sethi is Director of Marketing in the Enterprise Computing Division at IDT