Measuring and Accounting for Risk in Measurements

ROGER M. BRAUNINGER, Biosafety Program Manager, A2LA

Risk is everywhere, and perceptions of it vary. Yet risk is also a measurable attribute that has long been applied as a means to help manage quality in the food production facility and in the analytical testing laboratory.

Similar actions are taken both to ensure that product specifications are met and to assure that testing methods remain in control. Both apply similar risk-management concepts and approaches in establishing the proper course of action and in decision making regarding the risk of an out-of-specification quality event. In each case, the ability to ensure that the process remains in control, or that proper corrective actions have been implemented, depends on having confidence in a given measurement outcome. However, some degree of risk remains even when an analysis is run by the "best" analyst using the "best" method on the "best" equipment: every measurement result carries an associated uncertainty, and that uncertainty is quantifiable.

In the testing laboratory that is accredited to ISO/IEC 17025:2017 (General Requirements for the Competence of Testing and Calibration Laboratories), the shift to the 2017 revised criteria brings additional opportunities to systematically establish risk-based thinking. A quantified risk/opportunity-based approach helps establish a basis for increasing the effectiveness of the management system, achieving improved results, preventing negative effects, and evaluating laboratory competence.

There are many subcategories of risk. A key to risk management lies in minimizing exposure, that is, in evaluating how risk-reduction strategies change the likelihood and impact of an adverse outcome. Measuring risk may be accomplished by assigning numerical values to these subcategories. To create a logical approach to managing risks, the subcategories can be ranked, and values assigned to each of the rankings. These values can then be combined in a simple formula that takes into account the probability of an adverse event's occurrence, the likelihood of its occurrence being detected, and the degree of impact on operations (severity) were it to occur.

For example, the probability of an adverse outcome could range from remote (a low numerical score) to expected (a high score). Similarly, the detectability score could vary according to whether or not controls are in place and, if they are, how likely they are to detect an adverse outcome when it occurs. Lastly, severity scores could slide from negligible to highly impactful; for a category such as sample handling or data integrity, this might range from no negative impact up to loss of the sample or loss of traceability to the source. From all of this, a risk rating can be calculated to assign an overall level of importance to each identified risk by multiplying the severity, probability, and detectability scores (i.e., Risk Severity x Risk Probability x Risk Detectability = Risk Level).
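As an illustrative sketch of that multiplication, the rating can be computed as below. The 1-to-5 scale ranges and the example scores are assumptions for illustration; the article does not prescribe particular scale values.

```python
def risk_level(severity: int, probability: int, detectability: int) -> int:
    """Overall risk rating = Risk Severity x Risk Probability x Risk Detectability.

    Assumes each score is ranked on a hypothetical 1 (lowest) to 5 (highest) scale.
    """
    for score in (severity, probability, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each score must fall on the assumed 1-5 scale")
    return severity * probability * detectability

# Example: a highly severe outcome (5) that occurs occasionally (3) but that
# existing controls would usually catch (2) yields a rating of 30.
print(risk_level(5, 3, 2))  # 30
```

Ratings computed this way can then be sorted to prioritize which identified risks deserve attention first.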

DECISION RULES. While rating risk allows for prioritizing the actions needed to remedy adverse outcomes, Decision Rules come into play when a laboratory performs an analysis to determine whether a given product quality attribute falls within specified limits. Decision Rules address a special type of risk tied to what ISO/IEC 17025 refers to as "statements of conformity" (previously termed "compliance statements"). A Decision Rule describes how measurement uncertainty (MU) is taken into account when a numerical result is reported as a pass or fail statement, depending on whether or not the result falls within the prescribed specification limits. In other words, it accounts for the likelihood that the true value of the measured attribute lies within a range of values around the reported result.
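A minimal sketch of one such binary decision rule follows, assuming an upper specification limit and an expanded uncertainty already computed for the result. The function name, the three-way outcome, and the example numbers are illustrative assumptions, not requirements of ISO/IEC 17025.

```python
def conformity_statement(result: float, upper_limit: float, expanded_mu: float) -> str:
    """Report conformity with an upper limit, taking measurement uncertainty into account.

    Pass only when the entire uncertainty interval sits inside the specification;
    fail only when it sits entirely outside; otherwise the interval straddles the
    limit and the call is indeterminate under this (assumed) rule.
    """
    if result + expanded_mu <= upper_limit:
        return "pass"
    if result - expanded_mu > upper_limit:
        return "fail"
    return "indeterminate"

# Example: a hypothetical 0.10 mg/kg pesticide limit, a measured result of
# 0.07 mg/kg, and an expanded uncertainty of 0.02 mg/kg.
print(conformity_statement(0.07, 0.10, 0.02))  # pass
```

A laboratory and its customer would agree in advance on how the "indeterminate" zone is handled, since that choice allocates the risk of a wrong call between them.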

Manufacturers have a vested interest in making sure that the products they manufacture meet their own (or the government's) specifications. This could range from having an accurate food label to making sure that a pesticide is not present above a legal limit or that the amount of saturated fat in a serving size is stated accurately. Probably the most common manner in which Decision Rules are used is that of "guard banding." Similar to a margin for error, a guard band is often applied to internal release limits, where the size of the guard band is a multiple of the MU of a given product quality attribute's assigned (measured) value. At the edge of a product's specification-acceptability zone, the guard band tightens the acceptance limit by an amount that reduces the risk that the attribute being measured actually lies outside the specification zone.
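The guard-band arithmetic can be sketched as below. The multiplier of 2 and the numeric values are assumptions chosen for illustration; the appropriate multiple of MU depends on the risk the manufacturer is willing to accept.

```python
def guarded_acceptance_limit(spec_limit: float, mu: float, k: float = 2.0) -> float:
    """Tighten an upper specification limit by a guard band of k x MU (assumed k = 2)."""
    return spec_limit - k * mu

# Example: an upper specification of 0.50 units with a measurement
# uncertainty of 0.02; product is released only when the measured
# value falls below the guarded limit of 0.46.
acceptance = guarded_acceptance_limit(0.50, 0.02)
print(round(acceptance, 2))  # 0.46
```

Measured values between the guarded limit and the specification limit are the cases the guard band deliberately rejects, trading away some good product to cut the chance of releasing out-of-specification product.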

It can be relatively easy for people to make broad judgments about relative risks and probabilities in measurement, but far harder to rank those risks and act on them consistently. Using defined scales for assessment can therefore aid decision making and prioritization related to both product quality and confidence in measurements.

July August 2019