Pandemics aside, there are increasing pressures on health services around the world, from the needs of larger populations and a focus on preventative medicine, to the difficulties training and recruiting staff.
That’s why being able to automate some of the routine diagnostic tests means that clinicians and technicians can spend more of their precious time elsewhere.
Underpinning the adoption of any diagnostic device in a clinical setting is the need to get it right – namely that the diagnostic assays must generate the correct test results in comparative laboratory testing using “known good” processes, and, when audited, that the device developer can demonstrate to the regulatory authorities that the device has been designed, built and tested in accordance with the relevant standards, including ISO 13485, ISO 14971, IEC 62304 and IEC TR 80002-3.
These standards have been based on hard-won experience, including the lessons learned from other safety-critical sectors.
The verification side should not be underestimated, either. On a safety-critical project, half of the project effort can be dedicated to checking that a product or system does what it is meant to do, does it safely, and fails safely as well.
Our client contacted us part way through a medical product development cycle with an external customer.
The product was a smart diagnostic platform, including a handheld point-of-care device. The device itself was to be a class II medical device, i.e. it was deemed to pose no risk of serious injury to the user, but certain regulatory requirements still needed to be met in the form of verification, validation, and risk assessment and management.
Our client was a medical device consultancy with significant experience in the biomedical engineering field; however, the increasing scope of the project meant that they needed to extend their team in order to keep pace with the development.
We acted as a subcontractor on the verification side of the project.
In order to meet or exceed regulatory requirements, the goal was to achieve 100% path coverage across the product code, particularly in higher-risk areas.
Our first task was to complete the automated unit test environment and framework. Once complete, we began to address the unit test deficit, bringing the original unit test effort in line with the final test framework.
We quickly developed a unit test methodology and process, documented it, and disseminated it amongst the unit test team and then further afield.
We provided training and a seminar to our client to share the best practices and lessons learned about the unit test environment.
We took our brief very seriously. Scrutinising the code to this level meant that we could also perform a detailed code review, including independent recommendations for secure design and coding.
We were also involved in triage sessions with the senior team and technical architect to examine any problems found and agree the best path to resolution/closure between our respective teams.
Once the unit test framework was stable and the unit test cases were largely in a maintenance phase, we adapted the basis of the unit test framework to perform sub-system integration testing as well. This made it possible to verify larger parts of the product code on the target hardware, including more complex concurrency and stress test scenarios.
In summary, the product build environment was:
- the IAR Embedded Workbench for Arm compiler with MISRA extensions, targeting a high-performance STMicroelectronics Cortex-M7 processor with a touchscreen and sensors;
- automated build jobs running on a Jenkins server; and
- product code written in C.
Unit Test Environment & Framework
We took the initial unit test concept and product build environment and built a fully automated unit test framework which combined the capabilities of Unity (test assertions), CMock (stubbing / mocking of functions external to the unit under test), and Ceedling (a build system which is designed to integrate the functionality of Unity and CMock).
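In a Ceedling-based project, this combination is typically wired together through a `project.yml` file. The fragment below is illustrative only; the paths, plugin selection, and options are examples rather than the project's actual configuration.

```yaml
# Illustrative Ceedling project.yml fragment (example values, not the
# project's actual configuration).
:project:
  :build_root: build/

:paths:
  :test:
    - test/**      # Unity test files
  :source:
    - src/**       # product code under test

:cmock:
  :mock_prefix: mock_   # mocks generated from headers, e.g. mock_foo.h
  :plugins:
    - :expect           # record and verify expected calls on mocks

:plugins:
  :enabled:
    - gcov                        # per-file test coverage reporting
    - stdout_pretty_tests_report  # readable console test output
```

With a layout like this, CMock generates a mock for each external header a test pulls in, and the gcov plugin produces the coverage data that fed our command-line and HTML reports.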
We customised this test framework to generate detailed command-line and HTML-based reports of test coverage, highlighting missing coverage and process deviations such as incorrect naming conventions and MISRA warning exclusions.
We integrated the test framework with the project Jenkins server, and provided custom test jobs.
Unit Test Methodology & Process
This was an important part of our brief. Our role was to test the implementation of the design, i.e. intrusive “white box” testing. This meant that not only were we testing that given a set of inputs, a method under test would produce the expected outputs, but also that it arrived at those outcomes in the expected way, and handled errors and boundary cases correctly.
For example, if an assay were performed, the unit under test might be expected to notify other sub-systems as part of its processing. Our unit test would check explicitly that this was the case.
We have a sound process background and are familiar with working in highly regulated environments, such as aerospace and defence, automotive and medical.
A great many of our referrals come through word of mouth – through engineers and senior management who have worked with us on past projects – and this was no exception.
We were effectively a “drop-in” self-managing sub-team, up and running within a day after the initial planning meeting, exploring options for the test framework and allocating unit test tasks.
We were able to increase the quality of the product code while allowing our client to focus on new product features, helping them get the product to market.