The Pursuit of Perfection — An Effective Embedded Unit Test Process for Efficient Testing

by Adam Mackay
15.05.2020 · Unit Testing, Code Coverage


The methods, techniques and tools to achieve robust embedded testing.


“It’ll never happen in the field”


“Optimise later”


“Perfection is the enemy of good”


“It’s good enough”


Every developer has heard at least one of these phrases. I’m sure you have. You may have even used them yourself. They hold some truth for developers of enterprise or front-end software. However, if you have used these in embedded software, you are very wrong.


Flawless embedded software is important in many industries… Embedded code underpins everything; it’s the fundamental layer of all devices. All other software — middleware logic, databases, web servers, user interfaces — everything depends on a functioning bottom layer of software.


And in the top layer of software, with all due respect to those who work hard to get it right, it rarely matters if pixels are misaligned in the user interface or a colour is slightly wrong. Such defects do not affect the functionality of the software.


As embedded software developers, our focus is to accelerate and improve software development in the pursuit of perfection. In this series of articles, I will look at the elements that come together to make a successful, fully tested embedded application. Important aspects include requirements traceability, software metrics, testing frameworks, code coverage and automation.




Define your Process

There’s an unspoken truth in the software industry: a lot of code isn’t well checked. Sections of it are written in a rush without taking the rest of the system into account. Testing is cursory or performed without a full understanding of the code’s function. Developers find testing boring, or they’re afraid of what they might discover. Quality assurance engineers (those whose task it is to test) often work without complete requirements or the same deep understanding of the code, a common source of frustration.


You can address most of these concerns by using a defined process in your test and verification activities. It is important that your team supports the process you define, and that you build in iterative feedback loops to refine and improve it. Whether you are using Agile, Waterfall or a DevOps variation, your process should always feature test design as early as possible. The first iteration of your test plan, in which we document the system test design, must be a “living document”. It will undergo regular updates and must, therefore, be under version control.


Test cases designed in the early stages of a project sometimes cannot be implemented as intended. If the specification is not adjusted, the real tests and the documentation drift out of sync, introducing technical debt into your test set. In the worst case, the tests are left in an unpredictable state.


A revision of the requirements specification is the most common reason for updating tests. To appreciate which of the existing system tests have to be rethought during such a revision of the requirements, an up-to-date traceability table is vital. This table keeps track of which requirement is being tested in which test case. Create and maintain these tables manually or use requirements engineering/management tools.
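As a minimal sketch, a traceability table can even live in code, so that it is version-controlled alongside the tests themselves. The requirement IDs and test-case names below are purely hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* One row of the traceability table: which test case covers which
 * requirement. IDs and names here are illustrative only. */
typedef struct {
    const char *requirement_id;  /* e.g. "REQ-001" from the requirements spec */
    const char *test_case;       /* unit test that verifies the requirement  */
} TraceEntry;

static const TraceEntry trace_table[] = {
    { "REQ-001", "test_motor_starts_on_command" },
    { "REQ-001", "test_motor_start_timing"      },
    { "REQ-002", "test_overcurrent_shutdown"    },
};

/* Count how many test cases cover a given requirement; a count of zero
 * flags an untested requirement during a specification revision. */
int coverage_count(const char *req_id) {
    int count = 0;
    for (size_t i = 0; i < sizeof trace_table / sizeof trace_table[0]; ++i) {
        if (strcmp(trace_table[i].requirement_id, req_id) == 0)
            ++count;
    }
    return count;
}
```

Querying the table for a revised requirement immediately shows which tests need to be revisited, and a zero count exposes a requirement with no test at all.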




Efficient Test Design


The same basic engineering concepts that apply to the design of good software apply to the design of tests. Efficient test design comprises a series of steps in which you progressively increase the testing depth.


The specification of the program must drive the nature of the tests. For unit testing, we design the tests to check that the individual unit meets all design decisions taken in the design specification of the unit. A comprehensive unit test specification should include positive testing, that the unit does what it is supposed to do, and negative testing, that the unit does nothing it is not meant to do.


In the pursuit of perfection, we’re looking for mistakes everywhere, even in cases that will “never” happen.

Non-trivial software can process a vast (effectively infinite) number of different inputs. This is exacerbated in situations where the order and timing of data entry matter. Testers face the difficult task of developing a few discrete cases to test a system that must accommodate an endless number of scenarios. The results of testing activities need to be fed back to the development team: tests do not improve quality; developers do.
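One common way to tame an effectively infinite input space is equivalence partitioning with boundary values: pick one representative per input class, plus the values at each class boundary. A minimal sketch, using a hypothetical sensor range check:

```c
/* Hypothetical unit under test: is a temperature reading within the
 * operating range of -40 to +125 degrees Celsius? */
int temperature_in_range(int celsius) {
    return celsius >= -40 && celsius <= 125;
}

/* Instead of testing millions of values, five cases suffice:
 *   below range (-41), lower boundary (-40), nominal (20),
 *   upper boundary (125), above range (126). */
```

Five well-chosen cases exercise every equivalence class and both boundaries, which is where off-by-one comparison errors typically hide.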



First Test

The object of the first test case in any unit test process should be to execute the test unit in the simplest way possible. Not only does this perform a useful check of the unit under test, but it also verifies the functionality of the build and test toolchain. The confidence gained from knowing that you can execute a simple unit test isolated from the complete system is valuable. It not only provides the tester with a foundation upon which to build but also a route to debug any failures.
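A minimal first test might look like the sketch below; the checksum routine is a hypothetical stand-in for your unit under test:

```c
#include <assert.h>

/* Hypothetical unit under test: a simple additive checksum. */
unsigned char checksum8(const unsigned char *buf, unsigned len) {
    unsigned char sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}

/* First test: execute the unit in the simplest possible way.
 * Passing proves the build and test toolchain work end to end. */
void test_smoke(void) {
    const unsigned char empty[1] = { 0 };
    assert(checksum8(empty, 0) == 0);  /* empty buffer gives zero checksum */
}
```

If this trivial case fails, the problem is almost certainly in the toolchain or test harness rather than the unit, which is exactly the debugging foothold you want.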



Positive vs Negative Testing


Initial testing should show that the software unit under test does what it needs to do. The test model should obey the specifications; each test case will test one or more specification statements. If multiple requirements are involved, it is best to ensure that the sequence of test cases corresponds to the sequence of statements in the unit’s primary specification.


You should first look to improve existing test cases and then add further test cases to show that the program does nothing that is not specified. This depends on error guessing and the expertise of the tester to predict problem areas.
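A sketch of negative testing, assuming a hypothetical PWM driver that must reject out-of-range duty cycles rather than silently accept them:

```c
/* Hypothetical unit: set a PWM duty cycle of 0-100 percent.
 * Returns 0 on success, -1 if the request is rejected. */
static int duty = 0;

int pwm_set_duty(int percent) {
    if (percent < 0 || percent > 100)
        return -1;  /* negative-test target: bad input must be refused */
    duty = percent;
    return 0;
}

int pwm_get_duty(void) {
    return duty;
}
```

The positive cases confirm valid values are stored; the negative cases confirm that 101 or -1 is rejected and, crucially, that the previously stored value is left untouched.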


We should also design functional tests to address issues such as performance, safety and security requirements.



Estimate and Measure the Test Progress (Coverage and Metrics)

Plan the testing aspects as precisely as you would the entire project. Part of this plan is defining the project goal: for example, how many undetected errors of each category the software under test may still contain at delivery. The type and extent of testing will depend on these figures.


Code metrics are measures taken automatically from source code: for example, the number of linearly independent paths. If you know from previous experience how long it takes, on average, to test each path, then you can estimate the time needed to test the complete application.
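The estimate itself is simple arithmetic: total effort is roughly the path count multiplied by the average historical hours per path. A sketch, with hypothetical module names and figures:

```c
/* One entry per module, with its linearly independent path count
 * (a cyclomatic-complexity style metric). Names are illustrative. */
typedef struct {
    const char *module;
    int paths;
} ModuleMetric;

/* Estimate total test effort: sum of (paths * average hours per path),
 * where hours_per_path comes from previous project experience. */
double estimate_test_hours(const ModuleMetric *m, int n, double hours_per_path) {
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += m[i].paths * hours_per_path;
    return total;
}
```

For example, two modules with 4 and 3 independent paths at half an hour per path would yield an estimate of 3.5 hours; refine the per-path figure as real data accumulates.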


By analysing code coverage information, extracted from running the test suite as it is built up, it is possible to refine the initial estimates and monitor the progress of the testing process. Beware, however, that as you approach high levels of code coverage (90% and above), the testing effort can increase exponentially. Difficult-to-test and complex parts of the application often hide in the final 10%.
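Where gcov-style host tooling cannot run on the target, branch coverage can be approximated by hand-instrumenting each branch with a flag, as in this minimal sketch (the `clamp` unit and the branch IDs are illustrative):

```c
/* Lightweight on-target branch coverage: each instrumented branch
 * sets one bit in a bitmap when executed. */
static unsigned long branch_hits = 0;
#define COVER(id) (branch_hits |= 1UL << (id))

/* Unit under test, instrumented with three branch markers. */
int clamp(int v, int lo, int hi) {
    if (v < lo) { COVER(0); return lo; }
    if (v > hi) { COVER(1); return hi; }
    COVER(2);
    return v;
}

/* Percentage of instrumented branches executed so far
 * (integer division, so 1 of 3 branches reports 33). */
int coverage_percent(int total_branches) {
    int hit = 0;
    for (int i = 0; i < total_branches; ++i)
        if (branch_hits & (1UL << i))
            ++hit;
    return 100 * hit / total_branches;
}
```

Reading the percentage after each test run shows exactly which branches the suite has yet to reach.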



Run the tests


After we have selected or designed suitable test cases, we can execute them. This can be manual, semi-automatic or fully automatic. The choice of automation depends on two factors: the liability attached to software errors and the repetition rate of the tests. For security-related applications, the preference should always be automated testing. Well-designed test scripts allow exact repeatability of tests.
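A repeatable automated run can be as simple as a registry of test functions that a runner executes in a fixed order, as in this minimal sketch (the two sample tests are illustrative):

```c
/* Each test returns 0 on pass, nonzero on failure; the runner walks
 * the registry in a fixed order so every run is exactly repeatable. */
typedef int (*test_fn)(void);

static int test_addition(void) {
    return (1 + 1 == 2) ? 0 : 1;
}

static int test_overflow(void) {
    unsigned char c = 255;
    c++;  /* unsigned wrap-around is well defined in C */
    return (c == 0) ? 0 : 1;
}

/* Run every registered test and return the failure count;
 * zero means the whole suite passed. */
int run_all(const test_fn *tests, int n) {
    int failures = 0;
    for (int i = 0; i < n; ++i)
        if (tests[i]() != 0)
            ++failures;
    return failures;
}
```

The same registry drives regression testing later: adding a test is one line in the array, and the runner's exit status plugs directly into a CI pipeline.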


In DevOps environments, the gold standard is to execute a complete suite of automated tests on each code modification. This allows for rapid feedback to developers and the confidence that the code is always in a tested and ‘ready-to-release’ state.


Outside of DevOps, test automation allows for easy regression testing, ensuring that added functionality does not compromise the previous testing effort. Investing in a suitable test automation framework upfront saves effort throughout the project.


Despite their determinism and repeatability, fully automated tests do not remove the need for complete documentation, nor for tuning the test design as the requirements specification changes. Without documentation, the test suite becomes a box of mysteries: no one dares to delete a test because no one understands its function; all anyone knows is that passing it is essential. In this scenario, new requirements force us to add new tests, and the test collection becomes ever more impenetrable.



  • Experience has shown that a conscientious approach to unit testing will detect many bugs at a stage of software development where we can correct them economically.
  • Be humble about what your unit tests can achieve; unless you have extensive requirements documentation for the unit under test, the testing phase will be iterative and exploratory.
  • Fix your embedded software; take the time. It doesn’t have to be perfect, but it helps to be close.
More to consider


Testing against requirements is not a one-off task in your development, but rather a constant effort towards a reliable system. The easier tests are to construct, execute, monitor and report, the more useful they are to development teams. Coverage can guide test case creation, help optimise a set of tests and provide an empirical measure of testing sufficiency. Without measuring how much of the code has been tested, verification activity always risks shipping untested code. While there are some hurdles to overcome when using coverage software in an embedded or continuous integration environment, suitable tools and testing frameworks can solve the problems in almost all instances.


If you found this introductory article useful, QA-Systems have a detailed paper available to download for free here. It looks at the aspects of efficient reporting of test results.


Download the full 26 page white paper as a PDF.


You are also welcome to sign up to the QA-Systems newsletter… You will receive notifications of other useful software development content straight to your inbox.