Faster design entry with Vivado IP Integrator and Xilinx IP

12 December 2014

Modern FPGA-based designs use an increasing amount of intellectual property (IP), in both variety and number of instances.

The Vivado Design Suite’s IP Integrator (IPI) tool and Xilinx communications IP make it easier to connect these IP blocks together quickly. 


To illustrate the power of the IPI approach, consider the example of a wireless remote radio head (RRH). Situated near the antenna, the RRHs form part of a cellular communications network. They are normally connected upstream via optical fiber to a base transceiver station and, optionally, downstream to further RRHs, implementing a multihop topology.  


The Common Public Radio Interface (CPRI) protocol is widely used to link these RRHs together. As an example, we will create a design with one uplink CPRI port and three downlink CPRI ports and connect them. The majority of this job can be accomplished with IPI, and the result will form a major component of the overall design. A Kintex-7 device will be used, which is an excellent fit for this application thanks to its low power, low cost and high performance. The GTX transceivers in speed grade -2 Kintex-7 All Programmable FPGAs and Zynq-7000 SoCs make it possible to use the 9.8Gbps CPRI line rate. 


We can now create the block design and add the required IP from the IP catalog. The CPRI cores are available in the standard Xilinx IP catalog and have been optimised for resource sharing where possible and for ease of use in IPI. The switches are custom IP.
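The same block design and CPRI instances can be created from the Tcl console. The sketch below is illustrative only: the design and instance names are placeholders, and the core version in the VLNV will depend on the Vivado release and IP catalog in use.

  # Create the block design and add one uplink plus three downlink CPRI cores.
  # The VLNV version and instance names are placeholders.
  create_bd_design "cpri_rrh"

  create_bd_cell -type ip -vlnv xilinx.com:ip:cpri:8.2 cpri_uplink
  for {set i 0} {$i < 3} {incr i} {
      create_bd_cell -type ip -vlnv xilinx.com:ip:cpri:8.2 cpri_downlink_$i
  }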


IP core resource sharing

One of the challenges customers encounter when using multiple instances of IP is how to share resources efficiently. A number of communication IP cores support the “shared logic” feature. In the case of the CPRI core, we can configure the IP with shareable logic resources inside the core or we can omit these shared resources. If they are included in the core, it will provide the necessary outputs to connect to the cores that have excluded the logic.  


Users with specialised requirements may wish to exclude this logic on all their cores and implement their own. In this design, we have configured the CPRI cores to run at 9.8Gbps. At this line rate, it is necessary to use an LC-tank-based oscillator for the transceiver clock. Transceivers in the Kintex-7 device are arranged in quads, each consisting of four transceiver channels and one LC-tank-based quad phase-locked loop (QPLL). All of the cores must share the QPLL and the clock generated by the uplink clocking. The uplink core is therefore customised with shared logic included, and its QPLL and clock output ports are connected to the corresponding input ports on the downlink CPRI cores, which are customised with shared logic excluded. 
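As a rough Tcl sketch of this arrangement, the uplink core is customised to include the shared logic, the downlink cores to exclude it, and the shared clocking nets are wired between them. The configuration property and pin names below are assumptions for illustration only; the exact names are given in the CPRI product guide for the core version in use.

  # Include shared logic in the uplink core, exclude it in the downlink cores.
  # The CONFIG property and pin names are illustrative placeholders.
  set_property CONFIG.Include_Shared_Logic 1 [get_bd_cells cpri_uplink]

  foreach dl {cpri_downlink_0 cpri_downlink_1 cpri_downlink_2} {
      set_property CONFIG.Include_Shared_Logic 0 [get_bd_cells $dl]
      # Share the QPLL outputs and the clock generated by the uplink clocking
      connect_bd_net [get_bd_pins cpri_uplink/qpll_clk_out]    [get_bd_pins $dl/qpll_clk_in]
      connect_bd_net [get_bd_pins cpri_uplink/qpll_refclk_out] [get_bd_pins $dl/qpll_refclk_in]
      connect_bd_net [get_bd_pins cpri_uplink/clk_out]         [get_bd_pins $dl/clk_in]
  }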


Routing data between CPRI cores  


We have also instantiated the IQ switch and the Ethernet switch to allow data to be routed between the cores. 


Control and management data in the CPRI network is transmitted via an Ethernet subchannel. The Ethernet switch in the system makes it possible to issue firmware updates or commands remotely and transmit them to any node. The IP was designed to use as few logic resources as possible, since a fully featured Ethernet switch in this situation is not necessary.
The IQ switch provides the ability to route any IQ sample between CPRI cores with deterministic latency. An important feature for multihop radio systems is the ability to accurately measure the link delay, and the CPRI standard defines a method to facilitate this measurement.


Connecting interfaces with IPI


IPI bus interfaces map a defined set of logical ports to particular physical ports on the IP. If we use interfaces wherever possible, we move from connecting many signals to connecting a few interfaces. Common bus interfaces on IP are those that conform to the ARM AXI specification, such as AXI4-Lite and AXI4-Stream. This higher level of abstraction makes design entry easier and faster, and also allows you to take advantage of design rule checks for the interface. The Vivado IP Packager allows you to use your own IP within IP Integrator and to take advantage of interfaces in your own designs.
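As a minimal sketch of packaging a custom core (the IQ switch, say) for use in IPI, assuming the project is already open and using placeholder paths and vendor names:

  # Package the custom IP so its AXI interfaces are available in IP Integrator.
  # Paths and vendor/library names are placeholders; standard AXI interfaces
  # are normally inferred from the port naming during packaging.
  ipx::package_project -root_dir ./ip_repo/iq_switch -vendor example.com -library user -taxonomy /UserIP
  ipx::save_core [ipx::current_core]

  # Add the repository to the project so the new core appears in the IP catalog
  set_property ip_repo_paths ./ip_repo [current_project]
  update_ip_catalog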


IPI makes it easy to connect interfaces together. Simply click on the interface and IPI will indicate what it can connect to. Drag the connection line to the desired end point and the connection will be made. This technique allows you to connect many signals with just a couple of clicks.

The Ethernet switch provides a number of AXI4-Stream interfaces, two GMII interfaces and an AXI4-Lite interface. The streaming interfaces allow direct connection to the CPRI cores and this removes the need for internal buffering on the CPRI core. The GMII interfaces allow connection to an Ethernet PHY, which could be useful for an engineer in the field debugging a network issue. The AXI4-Lite management interface provides access to the address table mapping and other configuration options such as the address table aging interval.
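Each of these interface connections corresponds to a single Tcl command. For example, hooking one of the switch’s streaming ports to the Ethernet subchannel interface of a CPRI core might look like the sketch below; the instance and interface pin names are hypothetical.

  # Connect one AXI4-Stream port of the Ethernet switch to the Ethernet
  # subchannel interface on a CPRI core, one connection per direction.
  # Instance and interface names are placeholders.
  connect_bd_intf_net [get_bd_intf_pins cpri_uplink/ETH_M_AXIS]  [get_bd_intf_pins eth_switch/S_AXIS_PORT0]
  connect_bd_intf_net [get_bd_intf_pins eth_switch/M_AXIS_PORT0] [get_bd_intf_pins cpri_uplink/ETH_S_AXIS]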


Continuing in this fashion, we can build up our system, connecting the interfaces within IPI. You have the flexibility to use whatever entry method works best for you. In addition to linking interfaces in the GUI, you can issue commands directly through the Tcl console or source them from a script. Every time you do something in the GUI, the resulting Tcl command is echoed in the console. 


Once the design is complete, you can export it with the write_bd_tcl command. This creates a Tcl file that can be sourced to rebuild the entire block design from scratch, and it can easily be used as part of a scripted build flow. All of the IP in the design provides an AXI4-Lite management interface so the cores can be connected to a host processor. Intelligence built into IPI allows connection automation: IPI recognises that an AXI4-Lite interface on the IP should connect to the AXI interconnect, automatically configures the appropriate address ranges and makes the bus connection. You can then connect this bus to the host processor with the aid of IPI. The host processor in this case is a MicroBlaze, but on a Zynq-7000 SoC it could easily be changed to use the ARM processing system. 
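For illustration, the export and the connection automation might look something like the sketch below. The -config options accepted by apply_bd_automation vary between Vivado releases, and the instance and interface names are placeholders.

  # Export the finished block design to a Tcl script...
  write_bd_tcl ./scripts/cpri_rrh_bd.tcl
  # ...which can later be sourced to rebuild it from scratch:
  # source ./scripts/cpri_rrh_bd.tcl

  # Connection automation: wire an AXI4-Lite slave to the MicroBlaze through
  # the AXI interconnect, assigning address ranges and clocks automatically.
  apply_bd_automation -rule xilinx.com:bd_rule:axi4 \
      -config {Master "/microblaze_0 (Periph)" Clk "Auto"} \
      [get_bd_intf_pins eth_switch/S_AXI]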


Vivado IP Integrator’s capabilities are growing rapidly, and with that growth come further productivity gains. With the right IP, we can put together whole subsystems quickly and reap the rewards.

