Validation of calibration software, as required by ISO 17025, for example, is a topic that few people like to talk about. Almost always there is uncertainty concerning the following: Which software actually has to be validated? If so, who should take care of it? Which requirements must the validation satisfy? How can it be done efficiently, and how is it documented? The following post explains the background and provides a recommendation for implementation in five steps.
In a calibration laboratory, software can be used for anything from supporting the evaluation process up to fully automated calibration. Whatever the degree of automation of the program, validation always refers to the entire process into which the software is integrated. Behind this, therefore, lies the fundamental question of whether the calibration process fulfils its purpose and achieves all its intended goals, that is to say: does it provide the required functionality with sufficient accuracy?
Before carrying out validation tests, you should be aware of two basic principles of software testing:
Full testing is not possible.
Testing is always dependent on the environment.
The former states that testing all possible inputs and configurations of a program cannot be performed, due to the large number of possible combinations. Depending on the application, the user must always decide which functions, configurations and quality features are to be prioritised and which are not relevant.
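To see why exhaustive testing is impractical, consider that the number of input combinations grows multiplicatively with each independent parameter. The parameter counts below are purely hypothetical, chosen only to illustrate the scale:

```python
# Illustration: the number of test combinations grows multiplicatively.
# The parameter value counts below are hypothetical examples, e.g. number
# of measuring ranges, units, sensor types, certificate layouts, etc.
values_per_parameter = [10, 5, 8, 20, 4, 6]

combinations = 1
for n in values_per_parameter:
    combinations *= n

print(combinations)  # 10 * 5 * 8 * 20 * 4 * 6 = 192000
```

Even six modest parameters already yield 192,000 combinations; a real calibration program has far more, which is why the user must prioritise the configurations that matter for their application.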
Which decision is made often depends on the second point: the operating environment of the software. In practice, each application brings different requirements and priorities for software use. There are also customer-specific adjustments to the software, for example regarding the contents of the certificate. In addition, the individual conditions in the laboratory environment, with a wide range of instruments, generate variance. This variety of requirement perspectives, and the sheer, almost endless complexity of software configurations within customer-specific application areas, therefore make it impossible for a manufacturer to test for all the needs of a particular customer.
Correspondingly, taking the above points into account, the validation falls to the user themselves. To make this process as efficient as possible, a procedure following these five steps is recommended:
The data for typical calibration configurations should be defined as "test sets".
At regular intervals, typically once per year, but at least after any software update, these test sets should be entered into the software.
The resulting certificates can be compared with those from the previous version.
For an initial validation, a cross-check, e.g. via MS Excel, can take place.
The validation evidence should be documented and archived.
WIKA provides a PDF documentation of the calculations carried out in the software.
Note
For more information on our calibration software and calibration laboratories, visit the WIKA website.