Can AI & IoT Improve Pathogen Detection?

Perhaps the logical evolution of AI is to provide an app on the user’s device that can “look for” potential contaminants.
© stnazkul | Adobe Stock

Continually expanding capabilities in artificial intelligence (AI) and the Internet of Things (IoT) are finding a foothold in many aspects of food production. Following is a Q&A with Advisory Board Member and Lonza Specialty Ingredients Head of Global Quality Michael Burness on innovative applications of these technologies in pathogen detection and prevention.

QA. Artificial intelligence (AI) is seen as holding potential in epidemiology for predicting the spread of infectious diseases before outbreaks occur. How can this be applied to the spread of pathogens in an outbreak?

Burness. AI can be used to monitor changes in the environment — whether it be weather, animal activity, etc. — which can then be used to better understand where “hot spots” could be. Take, for example, a leafy greens farm. If the intrusion of higher numbers of animals (deer, pigs, etc.) is detected, then machine learning can be used to assess whether that area of the crop may have a higher probability of pathogenic contamination.
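To make that idea concrete, the following is a minimal illustrative sketch (not from the interview) of how field-level monitoring data might feed a simple risk classifier. The features, readings, and labels are hypothetical and stand in for whatever an operation actually monitors.

```python
# Minimal sketch: scoring a field section for contamination risk from
# environmental monitoring data. Features, data, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [animal intrusions per week, rainfall (mm), distance to livestock (km)]
X = np.array([
    [0, 5, 3.0],
    [1, 20, 2.5],
    [4, 35, 0.8],
    [7, 60, 0.4],
    [2, 15, 1.5],
    [6, 50, 0.6],
])
# 1 = pathogen found in prior environmental/product testing, 0 = not found
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score a new field section; a high probability could trigger extra
# preventive measures (buffer zones, additional pre-harvest testing, etc.)
new_section = np.array([[5, 40, 0.7]])
risk = model.predict_proba(new_section)[0, 1]
print(f"Estimated contamination risk: {risk:.2f}")
```

In practice, such a score would be one input among many, supporting existing preventive controls rather than replacing them.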

QA. To take it further, how could an AI or Internet of Things (IoT) model be used to predict contamination before an outbreak or recall occurs?

Burness. Following the same theory as in the first response, if higher numbers of potential contaminants are detected, additional or more stringent preventive measures can be applied to reduce or eliminate that potential. So AI can be used to limit the spread and, better yet, potentially prevent contamination at the outset.

QA. The report Deploying Artificial Intelligence Against Infectious Diseases (https://bit.ly/3bVXR2D) cites modeling, specifically the Susceptible-Infectious-Recovered (SIR) model (which categorizes individuals according to their infection status), as being applied to infectious diseases. Can this model be applied to foodborne illness outbreaks as well?

Burness. I think it can. If the model identifies a higher-risk population in a specific area or location, then higher-risk products can be diverted away from that epicenter. One potential application could be similar to the consumer advisories used in current meat and poultry establishments/restaurants. That is, in meat and poultry situations, there is often a note (e.g., at the bottom of the menu) indicating that eating raw or undercooked products can lead to foodborne illness. Would it not make sense to also apply that warning to any raw product that is known to be high risk?
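For reference, the SIR model mentioned in the question is typically written as a set of differential equations (shown here for context, not drawn from the interview), where β is the transmission rate, γ is the recovery rate, and N is the total population:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \qquad
N = S + I + R
\end{aligned}
```

Applying it to a foodborne outbreak would require estimating β and γ from exposure and case data for the implicated product or region.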

QA. Beyond modeling, the report states, “AI holds the promise of capitalizing on varied and divergent incoming data streams and makes ‘intelligent’ inferences based on the vast amounts of raw data accumulated.” How might this be applied in the food industry?

Burness. Relating back to the first question, mounds of monitoring data can be gathered for a particular application (think back to the farm). That data can then be converted into “information” (i.e., intelligence) that can be used to adjust operational requirements in order to minimize or eliminate risk.
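As one illustration of that data-to-information step (again, not part of the interview), the sketch below aggregates hypothetical monitoring records by field and applies a simple decision rule; the column names, readings, and thresholds are invented for the example.

```python
# Minimal sketch: turning raw monitoring records into an actionable summary.
# Field names, readings, and thresholds are hypothetical.
import pandas as pd

readings = pd.DataFrame({
    "field":      ["A", "A", "B", "B", "C", "C"],
    "intrusions": [1, 2, 6, 8, 0, 1],     # animal intrusions detected per day
    "rain_mm":    [4, 0, 30, 22, 2, 5],   # daily rainfall
})

# Convert raw data into per-field "information"
summary = readings.groupby("field").agg(
    avg_intrusions=("intrusions", "mean"),
    total_rain_mm=("rain_mm", "sum"),
)

# Simple decision rule: flag fields that may warrant extra preventive action,
# e.g., additional pre-harvest pathogen testing or crop diversion
summary["action_needed"] = (summary["avg_intrusions"] > 3) | (summary["total_rain_mm"] > 40)
print(summary)
```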

QA. As stated in the report, and as is often the case, “financial investment is key.” Is this a deal breaker in the food industry?

Burness. It is certainly a consideration. That said, as the speed of data processing keeps outpacing virtually everything else, the cost would likely come down, and a good investment opportunity could arise. Given the cost and impact of a recall to an organization, if the technology proves out, AI could end up having a high ROI in the area of prevention/early detection.

QA. QA ran an article on Purdue University researchers’ development (https://bit.ly/38Skxig) of a bioluminescence-based assay coupled with a portable device that works with smartphones and laptops to do on-site testing for harmful E. coli in food samples. Would you see this as an example of IoT or AI?

Burness. I think it is a good example of AI, albeit one that looks at a sample to detect contamination after a phage is added. While this is certainly something to pursue, perhaps the logical evolution is to figure out a way to eliminate the need for the phage and provide an app on the user’s device that can “look for” potential contaminants without having to add or manipulate anything.

This is certainly a utopian idea, but given the advances that have been made in technology, I think it is worth a look. As a reference, think of the whole genome sequencing (WGS) methods used today, which not too long ago were just ideas. We need to reserve the right to get smarter.

QA. You have cited electronic/digital rodent monitoring (https://bit.ly/32cYvEu0) as also being an example of an AI application for pathogen detection/prevention. What do you see as that relationship?

Burness. Rodents are carriers of quite a few pathogens, viruses, etc. Of course, not every rodent gets trapped, but if there is an indication that activity is increasing, the organization can proactively address it and reduce potential contamination sources rather than waiting to find out during routine rodent station inspections. This concept also could be applied to flying insects, glue boards, etc., identifying perhaps even smaller potential contaminant sources before they reach a critical stage. A similar technology has recently been applied to cattle, with the development of “smart” collars replacing fences. These could be used to know when cattle are approaching farm fields, thereby helping prevent potential contamination.

Michael Burness, Head of Global Quality, Lonza Specialty Ingredients

March/April 2020