Required hardware

16 October 2014

Off-chip trace hardware will be required to interface the processor to the development environment. It should be capable of collecting data with the core running at its final clock rate. Many solutions are available, ranging in capability and price.

Some include important features such as a logic analyser and high-speed serial trace functionality. For older processors without a JTAG port, an in-circuit emulator (ICE) can be used.

Many reference design boards come with low-cost debug solutions. These have the advantage of being easy to set up and use, but their capability can prove very limited as code development progresses. The more powerful solutions may cost more and may even require some training, but the longer-term benefits will quickly prove a good return on investment.

Quality and standards

The big change that debug tool and code analysis developers are seeing is that it is no longer enough simply to test a system. An important trend has been a requirement for engineers to document and prove code behaviour and performance. In some instances, companies require engineers to be certified and qualified in software writing, or mandate a specified level of code quality. With the right choice of debug tools, detailed information about code behaviour can be generated to provide the necessary proof, showing aspects such as the route taken by the code and the time taken. In this respect, program flow trace is ideal for helping companies prove code behaviour and achieve safety standards such as ISO 26262.


Analysis

Long-term trace streams trace data to a hard drive, overcoming the limits on internal buffer size in debug tools. This provides the ability to collect massive amounts of code performance information from a running embedded system, so that the developer can detect and analyse the most unpredictable and transient of bugs.

Imagine, for example, an engine management system. This longer code coverage enables engineers to analyse the software from a cold start, up to temperature, through acceleration and deceleration, and on to shutdown. Using long-term trace technologies, engineers can now capture code over much longer periods, extending to hours or even days: a very important breakthrough.

Trace technology

'High-speed serial' trace technology transmits data at over 6Gbit/s per lane, across up to four lanes from the target core. At this data rate the entire contents of a DVD could be transmitted in around three seconds, making it ideal for collecting data when debugging and developing high-speed multicore embedded systems and those requiring mission-critical analysis.

Traditionally, the trace interface has been a parallel port. In recent years this design of port has struggled to keep up with the growing flood of information as processors have become more complex and faster, and gained multiple cores. Considerable effort by silicon designers and tool vendors has therefore gone into increasing the data rate of the trace interface; the step to high-speed serial solved these problems and had the side benefit of reducing the pin count.

ARM has implemented high-speed serial trace with its High-Speed Serial Trace Port (HSSTP). This has been followed by AMCC with the Titan, Freescale with the QorIQ P4040 and P4080 processors, and Marvell with the SETM3.

Engineers can now source a hardware interface for serial trace: a universal pre-processor has been developed on the basis of the Aurora protocol. Only the firmware and software have to be changed, meaning that existing systems will need minimal reconfiguration.

An important aspect of the technology is that the large volume of trace data generated requires a correspondingly large trace memory.


Multi-Core

Multi-core development is an area of embedded systems which is finding its place in the automotive sector as future emissions and fuel usage specifications demand far greater analysis and control.

With multi-core, the data is extracted through a shared trace port. Two cores are not simply 'twice' as complicated to debug as a single core: because of the interactions between them, they are several times more complicated.

Budget debug technology is not up to the job of multi-core development. For multi-core you will need a powerful trace tool, and you will face a whole new level of problems to avoid, such as race conditions, uncontrolled interactions and out-of-sequence code behaviour. Your toolset will need to collect a large amount of accurate data, giving you the detail and visibility into the code that you need to understand the behaviour and interactions of the processes running on the cores.

Code optimisation

In mission critical systems where a fast and predictable performance is essential, cache analysis has proven to be an important development.
Cache memory is on-chip memory that can be accessed very quickly by the CPU: access can be on the order of 10 to 100 times faster than access to off-chip memory. This fast memory acts as temporary storage for code or data that is accessed repeatedly, thereby enabling better system performance. However, having cache memory does not necessarily equate to better performance. To get the best from a cache-based architecture, it is important to understand how that particular cache works. Trace technology enables analysis tools to confirm the effectiveness of cache usage, which can have an important impact on the overall performance of the software.

Safety-critical equipment may be a wireless portable device, where battery life and predictable power usage are important factors. With this in mind, power management is another area being influenced by the use of trace tools.

An energy profiler can provide a test set-up that measures, records and analyses the program and data flow of the control software, as well as current and voltage gradients. Statistical analyses are run automatically after each program stop, providing minimum, maximum and mean values of the energy consumption of the executed functions. Similarly, the absolute and percentage share of total energy consumption is calculated for each function. This makes it easy to locate the program parts that use the most energy so they can be modified, or simply to predict battery life and system performance reliably.


Engineers are increasingly relying on trace technology not only to locate bugs, but also to create better code and to gain clearer visibility into, and understanding of, code behaviour. It is strongly recommended that such tools are explored and evaluated at the earliest stage of a new project.

