Why should automated test remain challenging?

Author: James D. Voaden, National Instruments

01 August 2016

Figure 1 - In 2013, 60% of 'best-in-class' test organisations identified designing and deploying a next-generation test platform/system as their top test strategy, compared with 18% of average-performing test organisations.

This article is the third instalment of the ‘Trends in Test’ series delivered by National Instruments, focusing on the challenges facing the test industry.

Here, James D. Voaden takes a look at the ever-increasing importance of software within automated test systems, from development environments that abstract the complexity of low-level programming, to test management tools that integrate and sequence code in a language-agnostic manner. 

While automated test has empowered manufacturing lines for decades, the need for it today is greater than ever, with billions of dollars invested globally each year. So, with all this input of money, expertise and time, why should automated test remain challenging? Why do test engineers and test architects still face challenges when deploying test systems? The answer lies in the fact that the industries and technologies that require automated testing are not stagnant. Test requirements are constantly getting tighter and tougher to meet. The test systems designed yesterday do not meet today’s requirements, and to stay competitive there is a need for higher throughput, lower cost and faster set-up. Test engineers, architects and managers need to do more, and there will never be a point where automated test is ‘solved’.

For example, when testing its cellular base station products, Motorola faced a challenge in creating consistency across its integration, test and certification groups. The company had test engineers with the expertise to build world-leading test systems; nevertheless, it wanted to do so better and more efficiently. The situation is easy to relate to: the ATE group wanted to minimise duplicated effort and reduce the man-hours needed to develop each product’s test capability, whilst dramatically reducing future time to test. In other words, they wanted to do more with less.

This is a reflection of us as a society. We are iterative, making small gains and constantly moving forward. If we stopped striving to do better, it would become easier to refine and improve current processes, but there would be no progression. As technically-minded people, we don’t accept the status quo, and this drives us to work at the cutting edge, even if this spawns further challenges to be solved.

Engineers choose not to take the easy option, instead making changes that can instigate progress in the world and society as a whole. That is no small achievement. However, there is an issue with this: how long does it take to ‘build’ expertise that can work at the cutting edge? Our education system is a clear example: we learn from first principles what all previous generations have learnt and discovered. How many times do we ask ourselves why we need to learn basic arithmetic when we could easily use the calculators on our smartphones? By learning how to use tools, rather than starting from basic principles, it’s possible to spend less time in front of a chalkboard and more time doing engineering.

The same applies to setting up an automated test system. It is unwise to waste time by starting from first principles on each application. As a test developer, would you want to focus your efforts here, redoing what someone else has already done and ignoring the tools that can make it easier? Equally, managers would prefer their engineers to have a greater impact by using their knowledge, insight and engineering ability to progress the test systems they are working on. Tools are available that abstract away programming complexity and allow the engineer to focus instead on tasks that deliver real value, like improving measurement techniques or analysis. For example, LabVIEW could be used when developing test modules containing data acquisition, or Python for script-based validation or verification tests. These tools give engineers a step up and increase their reach, allowing them to focus their efforts on developing cutting-edge systems.
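
To make this concrete, the short Python sketch below shows what a script-based verification step might look like; the function names, measurement and limits are purely illustrative placeholders, not taken from any particular test system.

# Illustrative sketch: a reusable limit-check step for a script-based
# verification test. The measurement and limits are placeholders.

def check_limits(name, value, low, high, unit=""):
    """Compare a measured value against lower/upper limits and report."""
    passed = low <= value <= high
    status = "PASS" if passed else "FAIL"
    print(f"{name}: {value:.3f}{unit} (limits {low}-{high}{unit}) -> {status}")
    return passed

def measure_supply_voltage():
    """Placeholder for a real instrument read via whichever driver the team uses."""
    return 3.31  # simulated reading, in volts

if __name__ == "__main__":
    ok = check_limits("3V3 rail", measure_supply_voltage(), 3.25, 3.35, " V")
    raise SystemExit(0 if ok else 1)

In practice, the measurement function would wrap whichever driver or module the team has standardised on, while the limit check stays reusable across products.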

Considering that software is the backbone of automation when building a test system, many organisations prefer to standardise on a single, fairly general language for all aspects of test system design, from individual component tests to test management across the board. The end result is a homogeneous test software approach. The main advantage is that all members of a team work in a single, standardised environment, which makes it easier to share libraries and code modules across the team. Training is also greatly simplified, since the team learns and works in a single environment.

Figure 2 - Best-in-class test organisations believe the two most critical factors in choosing a new test platform are software/application development time and faster test throughput time.

However, standardising on one language does present some disadvantages. Using a single language can limit hiring to candidates with a certain skillset, or force new employees to learn unfamiliar tools. As students graduate, they often have a preference for, and experience in, one or more specific languages. Also, as new managers take over, they commonly opt to implement a language of their choice, which causes organisational whiplash. This can be a costly exercise that often requires code migration, revalidation of the codebase and additional training in the new language.

The best test managers must therefore look to a newer approach to test-system development that builds a heterogeneous system out of multiple languages. Though beneficial, this approach presents a new challenge: different languages now need to work and communicate together to form a single system. To resolve this, test engineers need to understand not just the one environment they specialise in, but also enough of the others to interface with them adequately.
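
One common pattern, sketched below in Python purely as an assumed illustration (the library name and function signature are hypothetical), is to compile low-level measurement code as a C shared library and call it from a higher-level test script:

# Hypothetical interop sketch: a measurement routine built as a C shared
# library, called from a Python test script via ctypes. The library name
# and function signature are illustrative placeholders.
import ctypes

lib = ctypes.CDLL("./libmeasure.so")  # placeholder path to the compiled module

# Declare the assumed C signature: double measure_frequency(int channel);
lib.measure_frequency.argtypes = [ctypes.c_int]
lib.measure_frequency.restype = ctypes.c_double

freq_hz = lib.measure_frequency(0)
print(f"Channel 0 frequency: {freq_hz:.1f} Hz")

Each such boundary is one more interface the team has to define, document and maintain, which is exactly the overhead described above.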

There was a time when the best return on investment lay in developing the low-level test components in-house. However, as off-the-shelf tools became available, they were adopted and became standard industry practice. With time saved on developing test modules, more time could be spent on making test systems more efficient, with a stronger focus on test management.

A need was identified for a common framework that could be repeated and reused across multiple systems, giving rise to off-the-shelf test executive software. Just as with the tools available for test modules, industry acceptance of test management software that simplifies executive tasks, such as sequencing, data logging and report generation, marks another step forward.
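
As a rough illustration of what those executive tasks involve (this is a toy Python sketch, not how any particular test executive is implemented; all step names, values and limits are invented), consider a sequencer that runs steps, records results and prints a report:

# Toy sketch of test executive tasks: sequencing, result logging and
# report generation. Steps, values and limits are invented for illustration.
import datetime

def test_power_rail():
    value = 3.29                      # simulated measurement, in volts
    return value, 3.25 <= value <= 3.35

def test_current_draw():
    value = 0.42                      # simulated measurement, in amps
    return value, value <= 0.50

def run_sequence(steps):
    """Run each (name, function) step in order and collect results."""
    return [(name, *step()) for name, step in steps]

if __name__ == "__main__":
    results = run_sequence([("power rail", test_power_rail),
                            ("current draw", test_current_draw)])
    print(f"Test report - {datetime.datetime.now():%Y-%m-%d %H:%M}")
    for name, value, passed in results:
        print(f"  {name}: {value} -> {'PASS' if passed else 'FAIL'}")
    print("Overall:", "PASS" if all(p for _, _, p in results) else "FAIL")

Off-the-shelf test executives provide hardened versions of exactly these services, so each team does not have to rewrite and revalidate them on every project.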

Motorola took this approach, adopting TestStand software as the platform to provide a test framework. This equipped its test engineers and architects with the right tool to enable and foster innovation, with very tangible results. Motorola’s Jim Morrison explained that, by developing a single modular test application, Motorola reduced its annual maintenance costs from $700,000 to $400,000 USD, cut its projected annual new-product test development costs from roughly $200,000 to $25,000 USD, and achieved combined development and maintenance savings of $475,000 USD a year.

Starting test applications from first principles is like starting a fire with two sticks when you have a lighter in your pocket; ignoring the tools available only stifles progress. Creating and reusing test code modules in environments that map to an engineering team’s skills and preferences helps to increase productivity. Off-the-shelf test management software provides the platform on which to efficiently knit these code modules together, so less time is spent writing code to perform common test executive tasks. Instead, engineering time and resources can be invested in tasks that add value back to the business, like improving the probability of defect detection, feeding insight back into R&D and ultimately increasing the quality of products released to customers.

Read the full Motorola case study ‘Using NI TestStand to Test Cellular Base Stations’ online at ni.com/case-studies and read the full NI Automated Test Outlook 2016 at www.ni.com/ato/.

