AI In Quality Management with ION Pharma

A Podcast Episode by ION Pharma and Ennov

The landscape of Quality Management and the role of artificial intelligence (AI) in the Life Sciences industry is evolving. Nowadays, the digitization and automation of processes have become more than just buzzwords.

Interestingly, despite significant investments in artificial intelligence, machine learning and predictive analytics by QMS software providers, the adoption of these technologies remains limited.

In this episode, Chantal van Gorp (Managing Consultant at ION Pharma) and Josh Keliher (Solutions Consultant at Ennov) explore the hurdles preventing widespread adoption and the potential that AI and machine learning hold for revolutionizing Quality Management.

 

Ready to embrace Artificial Intelligence in your Quality Management, or facing other challenges in your QMS? Contact ION Pharma to learn more.

Overview

01:30
Use cases for embracing AI in Quality Management

07:07
Challenges for embracing AI in Quality Management

11:25
The future of AI in Quality Management

16:33
How to overcome the limited adoption of AI in Quality Management

20:50
The balance between people and their interaction with AI

Hosts

Chantal van Gorp - Director at ION Pharma

Chantal van Gorp

QA Expert
Managing Consultant at ION Pharma


Josh Keliher - Solutions Consultant at Ennov

Josh Keliher

Regulatory Expert
Solutions Consultant at Ennov

Transcript

Chet Shemanski
Hello and welcome to today’s episode of our podcast, brought to you by Ennov and our partner, ION Pharma. My name is Chet Shemanski, and I’m the Vice President of Marketing, North America, at Ennov. In this session, we’re diving into the evolving landscape of quality management and the pivotal role of artificial intelligence in the life sciences industry. As organizations navigate the complexities of remote and hybrid working environments, the digitization and automation of processes have become more than just buzzwords. They’re necessities for ensuring employee safety, business continuity and cost optimization. According to the 2023 Gartner QMS Market Guide, the quality management system market has witnessed an impressive 20% year-over-year growth rate. This increase is a clear response to the challenges posed by supply chain disruptions, economic volatility, and a rapidly intensifying regulatory environment. These factors require us to rethink how quality is managed across the board. Interestingly, despite significant investments in artificial intelligence, machine learning and predictive analytics by QMS software providers, the adoption of these technologies remains limited in scope. Today we will explore the hurdles preventing widespread adoption and the untapped potential that AI and machine learning hold for revolutionizing quality management. On today’s podcast, I’m joined by Josh Keliher, Solutions Consultant at Ennov, and Chantal van Gorp, Managing Consultant at ION Pharma.

So let’s unpack these topics with our industry experts and shed some light on the challenges, opportunities and the future of quality management in the life sciences. Let’s start with Chantal. Chantal, with respect to industry challenges, what do you see as the most impactful use cases or opportunities for embracing artificial intelligence technologies in a QMS?

Chantal van Gorp
Well, I would say AI can help us predict what is going to happen in our quality management system, mainly on the shop floor. We actually get a lot of data: we have a lot of analytic opportunities, a lot of trend analysis, and still we often repeat the same deviations, the same trend errors. So I would say that if we could build in AI-based sensors and monitor our data using specific keywords or specific numbers, that would improve our event management. It could even propose root causes, impact assessments and CAPAs when something happens or when we predict it is going to happen. So either you can stop it before it happens, or you can solve it the same way you did before. I often see the same deviation lead to different investigations, different root causes and different corrective actions. So if AI could help us recognize the trend or the recurring deviation, I think that would be an advantage.

Joshua Keliher
Yeah, I definitely think providing insights across the data is useful, because people are so siloed in the specific processes they’ve worked on. They’ve worked on these particular deviations, but they don’t see other deviations, so they can’t really get a good assessment across the board. That’s a good observation.

Chantal van Gorp
And it actually depends on who you’re asking, right?

Joshua Keliher
Hmm hmm.

Chantal van Gorp
It should be a multidisciplinary discussion or investigation all the time, but in practice it’s not.

Joshua Keliher
Mm-hmm. Yep.

Chantal van Gorp
If you ask someone from QA, they think everything is wrong and bad and dangerous, but if you ask someone from, I don’t know, the shop floor, they don’t know what the impact is. But it’s there. So if AI can gather all those insights and keywords and recognize that you’re missing something, or that we had this before and this was the solution, then I think that would be a big improvement.

Joshua Keliher
Even something as simple as telling somebody on the shop floor that they also need to include this other person in the assessment.

Chantal van Gorp
Exactly, yeah.

Joshua Keliher
You know, that alone, and who do you ask to assess what the root cause is? Yeah, I think that could be useful.

Chantal van Gorp
And the data is often there, but it’s not always available to the person you’re asking. That’s where trend analysis and AI-based sensors recognizing keywords come in: you’re using the same five words in your investigation as you did two years ago. Maybe there’s a relation?

Joshua Keliher
You have to draw patterns between things that, yeah, we don’t easily do, because we don’t have time to look at all 200 deviations that happened before. So yeah.
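The keyword-overlap idea discussed above can be sketched very simply: compare the wording of a new deviation against past investigations and flag the ones that look alike. This is a minimal illustration, not a real QMS feature; the deviation texts and the 0.5 similarity threshold are assumptions for the example.

```python
# Minimal sketch: flag a new deviation whose description closely
# resembles a past investigation, using plain token overlap.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a free-text description and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two token-count vectors."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def find_recurring(new_deviation, history, threshold=0.5):
    """Return past deviation texts whose wording overlaps the new one."""
    new_tokens = tokenize(new_deviation)
    return [past for past in history
            if cosine_similarity(new_tokens, tokenize(past)) >= threshold]

# Illustrative data only.
history = [
    "Temperature excursion in cold storage unit B during weekend shutdown",
    "Label misprint on batch 42 packaging line",
]
matches = find_recurring(
    "Temperature excursion in cold storage unit B overnight", history
)
print(matches)
```

A production system would use richer text embeddings, but even this toy version surfaces the "we used the same five words two years ago" pattern the speakers describe.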

Chantal van Gorp
Still, we all want to be the best continuous improvers, but in practice it’s difficult.

Joshua Keliher
Yeah, it’s really difficult. I think there’s definitely a role for AI, because it can handle all that data at scale. Putting data at our fingertips is a key aspect: extracting data, loading data. Say you have a list of findings for an audit; it can auto-load them into the system and auto-classify them, even from different languages. That’s one of the amazing things: it can move information between languages much more easily. I think that’s going to be hugely advantageous for people using quality systems, having AI assist with that. And there are loads of other things, like generating materials, especially non-critical materials. Well, training is critical, but it’s not regulatorily critical; you’re not going to get slapped on the hand if you get a question wrong. So generating that type of thing, or doing the initial generation of reports that somebody will then take over, those are all areas where AI is going to bring some interesting opportunities.

Chantal van Gorp
Yes, I agree. Yeah.

Chet Shemanski
Great, that was enlightening. So Chantal, as I touched upon briefly during the introduction, there are certain hurdles that are preventing the widespread adoption of AI. What do you see as some of the challenges for companies in embracing AI in a QMS?

Chantal van Gorp
Well, first of all, of course, your data. You have to have clean data, I would say based on the ALCOA+ principles. I often see in a lot of digital systems that there are free-text fields and non-mandatory fields. If you give the people who enter data the room to not enter data, or to make something up, then it will be very difficult for AI to recognize it as a certain keyword. And a blank field is no data at all, so it cannot be used in your alerting or your trend analysis. So if you then rely on your AI functionality, can you actually trust that functionality if it has to work with blank fields, for example? Will it recognize the trends if there’s no data?
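The blank-field concern above lends itself to a simple pre-check: screen records for empty mandatory fields before they feed any trend analysis or AI alerting. This is a sketch only; the field names and sample records are illustrative assumptions, not a real QMS schema.

```python
# Sketch: reject deviation records with blank mandatory fields so that
# downstream trend analysis never runs on "no data at all".
MANDATORY_FIELDS = ["description", "root_cause", "impact"]

def screen_records(records):
    """Split records into usable ones and rejected ones, listing the
    blank mandatory fields for each rejected record."""
    usable, rejected = [], []
    for record in records:
        blanks = [f for f in MANDATORY_FIELDS
                  if not record.get(f, "").strip()]
        if blanks:
            rejected.append((record, blanks))
        else:
            usable.append(record)
    return usable, rejected

# Illustrative data only.
sample = [
    {"description": "pressure drop on line 3",
     "root_cause": "seal wear", "impact": "minor"},
    {"description": "pressure drop on line 3",
     "root_cause": "", "impact": "minor"},
]
usable, rejected = screen_records(sample)
print(len(usable), len(rejected))
```

The point is less the code than the policy: data quality gates belong in front of the AI, not behind it.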

Joshua Keliher
Umm.

Chantal van Gorp
I don’t think so. And there’s also defining the alert and action limits: you don’t want AI to go all over the place and alert on or search for trends that aren’t there, or alert you multiple times a week because it has taught itself that something was not correct last time, so it will probably not be correct this time either. I think we will always need a human aspect. I don’t think AI can take over all of this, because it’s living data entered by humans, and there are a lot of variables: cultural differences, even the weather, your emotions, your function and your own values. I don’t think AI can take over everything; we cannot trust AI with everything in our quality management system.

Joshua Keliher
I think you definitely hit on data quality. It’s something I hadn’t thought of from that perspective, the quality of the data that the AI is using. Yeah, if people don’t enter data in a field that they’re supposed to, or if they only use really odd abbreviations, and although AI can interpret some of that, how can it make decisions based on it? And then I was also thinking there’s the problem of AI making things up.

Chantal van Gorp
Yeah.

Joshua Keliher
You know, there are AI hallucinations, where it’s making stuff up. So you have to have built-in checks to make sure that it’s drawing the right conclusions. And how do you build that into your data quality? In the same way that you said about not identifying trends that aren’t there, how do you verify that the judgments it’s making are valid, and how do we put that human layer on it? I was talking to somebody about a key component that AI, that technology, can lack: even just the idea of compassion. When you go to a government office to get some paperwork done, often somebody will, out of compassion, help you and maybe do something they’re not supposed to do, whereas AI is not going to have that human element. And all of our processes are human-implicated: they involve people, and they involve people resolving them. So if the AI doesn’t have that, I think that’s challenging, and challenging for people to accept as well.

Chet Shemanski
OK, Chantal, I have one more for you. With some clear use cases for AI in a QMS, as well as the hurdles in adopting AI, how do you see the future of AI in quality?

Chantal van Gorp
Well, I like wishful thinking. In my ideal world, the industry starts to share its valuable information and valuable data, so we can make the world a better place together. I know that’s not going to happen, obviously; we all have our own products that we’re proud of, our own processes that we’re proud of. But I still think there’s a lot to learn if we work together. In the end, we all have our own quality management system, and it’s more or less the same everywhere: the same procedures, the same forms, even the same fields in those forms, the same approval processes. I would say that if we could get to a single source of truth, even if only for the quality processes, that would be a big advantage. Then we could use AI functionality for real continuous improvement, not for the companies themselves, but for patient safety. I think it would save a lot of money as well.

Joshua Keliher
Yeah, it’ll be interesting to see how quality-specific AIs develop in the end, and what they’re going to be fed across the industry to give good insights that are really applicable and well contextualized to quality. I think there is a big place for making people’s lives easier, for all the tedious tasks they have to do. Just things like data-entry activities, where data is coming from one source and needs to get into another format, or out of one format and into, say, a report. Think of how much time you spend formatting a report, writing out all the preamble and the context, before you even get to the really critical, interesting point, which is the actual assessment that the human does: what’s the meaning of this for our business, for our organization, for our customers, for our patients? So getting rid of some of the things that people don’t like to spend their time on, I think that’s actually going to be a critical point. Also, I went to the doctor recently, and they were doing something really interesting with AI. The AI was listening to our conversation and basically creating a summary of the whole encounter: the interview, our questions and her recommendations. Then it put it into a particular formatted report that she would take and re-edit, changing the language to make sure it really reflected her tone and what she wanted to convey. And I thought, wow, that is very sophisticated, especially in our healthcare industry, which doesn’t always move that quickly. So that type of thing, where you can take narrative, will be interesting to see. Maybe an inspector goes in, and as they’re talking, it’s recording the things they’re saying and then generating assessments out of that. That could be something really interesting. But yeah, there are some neat areas.

Chantal van Gorp
Yeah, but we still have to watch ourselves, because last week I saw my daughter using AI just to generate more words.

Joshua Keliher
Yeah.

Chantal van Gorp
Because she had, I don’t know, a 200-word paragraph in a report for school, and she asked AI to say the same thing but use 400 words.

Joshua Keliher
Yeah, that’s the challenge.

Chantal van Gorp
And we already use a lot of reports and words.

Joshua Keliher
And what was interesting about the doctor’s case is that it reduces things down. It took a half-hour or hour-long consultation and reduced it to the key points and key takeaways, which was really interesting: being able to summarize all that text. But yeah, there are ethical concerns around AI too, right?

Chantal van Gorp
Yeah, there are.

Joshua Keliher
So yeah, my daughter hasn’t quite gotten to the point of using AI yet, luckily. We’ll hold her off from it for the time being.

Chet Shemanski
OK, gotcha. Well, Josh, I’m going to give you one now: what strategies do you think can be employed to overcome the limited and suboptimal adoption of AI in quality?

Joshua Keliher
I mean, one of the challenges with traditional machine learning has been the fact that you need to feed it large pools of data. That’s what Chantal was talking about: the industry pooling some of that knowledge might be helpful. Gen AI is a leap forward in that respect, in that it’s able to make inferences from much smaller volumes of data, so that’s good. But there are still challenges with garbage in, garbage out: if the AI doesn’t have good data, good things to work with, if people haven’t managed their processes consistently. So I think companies need to start systematizing their data generally. They need to start working in a more consistent manner to be able to get some of the benefit of AI in the future; that’s going to be pretty key. I think the initial focus in AI adoption needs to be not to try to boil the ocean, not to try to replace what humans are doing, but instead to free up employees to work on business-critical tasks. All of us probably have 60 hours of work that we have to fit into 40 hours, so try to get rid of the things we’re doing that are repetitive, lower-value tasks: data entry, or updating multiple records with the same piece of information, or things that aren’t really returning value but seem to need to be done. That will help drive adoption in quality management systems: really looking at the areas of high value for an AI to take over. So data entry, data extraction, generating summaries or initial drafts of reports, trying to minimize the amount of time people waste on tasks they don’t want to be doing themselves. I think that will help adoption.
I think there’s still a ways to go in terms of verifying the work that AI is doing and being certain that it is of the requisite quality. At this stage, it’s hard to trust it. I think there’s still enough risk in what it generates, and in knowing whether it’s making a true determination or not, that I don’t see that as an initial use case; that would be a bit further in the future. And then there’s adoption in general. I think there’s a human component that we also need to discuss around AI. There is reticence in adopting AI; people are a little intimidated by it. Oftentimes we don’t know how the machine works, in a sense, and I think that’s intimidating to people, and something we need to navigate pretty carefully.

Chantal van Gorp
And that will depend on the quality you get back from AI. If it actually helps you limit report writing, and the result is better than you could do yourself, or at least quicker, then adoption will be much easier. But people don’t like change anyway, so that will be a challenge, actually, yes.

Joshua Keliher
Yeah. I mean, if they have to rewrite the whole report because there are so many inaccuracies, then they just lose confidence in it.

Chantal van Gorp
Yeah.

Joshua Keliher
You know, if more than 50% of it is wrong or something, and they get a bad impression of it, they’re just going to end up rewriting it, and then you actually haven’t gained anything. So verifying the quality of what’s being generated is going to be a challenge, I think, for companies.

Chet Shemanski
So let’s go a little bit deeper into that one. With human-AI interaction, what do you see as the balance between people and their interaction with AI?

Joshua Keliher
I mean, I think a lot of people have the concern that AI will replace their jobs. You know, we’ve seen movies like Terminator 2, with the machines taking over, and that’s always there in the back of our minds. We do need to be vigilant about AI; there are risks around it. But I think that fear is probably not realistic, especially in our space, because it is so regulated and because ultimately humans have to sign off on things. Someone has to be responsible for what goes out the door to, say, a health authority, or for what is going to be inspectable by a health authority. I really think that people, at least in life sciences, are generally far too busy as it is; it’s not as if they don’t have enough tasks to complete. So AI will be like an assistant: it will let them do their job a little more effectively. Some of those manual tasks will be a lot easier, having some sort of chatbot within systems where you can ask it a question, in the same way that we use Google every day and ask something and it seems to really understand the sense of what we mean. Having the ability to ask, OK, how have we handled this type of deviation in the past? That could be a really useful question to get insight on, and it doesn’t take away the person’s job. It just lets them have a more precise, informed perspective as they manage deviations.

Chantal van Gorp
That would actually be very cool functionality. Instead of using queries and searches to dig through your historical data, you could just ask a question in your quality management system, such as "Did this happen before?", and the AI comes back with a bullet list.
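The "ask your QMS a question" idea can be sketched as a toy search over historical deviation records that answers with a bullet list. The record fields, IDs and sample data below are illustrative assumptions, not a real QMS schema; a real system would use a language model rather than keyword matching.

```python
# Toy sketch: answer "did this happen before?" by matching question
# keywords against past deviation descriptions.

def ask_qms(question, records):
    """Return matching past deviations as a bullet list, or a no-match note."""
    # Ignore short filler words like "did", "we", "see".
    terms = {w.strip("?.,") for w in question.lower().split() if len(w) > 3}
    hits = [r for r in records
            if terms & set(r["description"].lower().split())]
    if not hits:
        return "No similar deviations found."
    return "\n".join(
        f"- {r['id']}: {r['description']} (root cause: {r['root_cause']})"
        for r in hits
    )

# Illustrative data only.
past_deviations = [
    {"id": "DEV-001",
     "description": "filter clogging during sterile filtration",
     "root_cause": "supplier change"},
    {"id": "DEV-002",
     "description": "mislabeled reagent bottle in QC lab",
     "root_cause": "procedure not followed"},
]
print(ask_qms("Did we see filter clogging before?", past_deviations))
```

The follow-up the speakers describe, asking the system to show which records its answer came from, is exactly why each bullet carries the record ID: the human can go verify.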

Joshua Keliher
And then you can ask it to verify its answer too, right?

Chantal van Gorp
Yep.

Joshua Keliher
So you can say, wait, are you sure? How do you know this? It’s really interesting how you can interact with the AI: "What about deviations that are like this?" or "I’m not sure that exactly applies, can you double-check?" That actually becomes a really interesting use case.

Chantal van Gorp
Now I wonder, and I’m talking about the QA people, because those are my people, I’m still wondering if we are ready for an innovation like this.

Joshua Keliher
Hmm hmm.

Chantal van Gorp
We keep on doing what we always did. I still come across hard-copy, printed, wet-signature documents, so I’m not sure if QA is actually ready for this automation, because that’s what it is, and we don’t trust anybody.

Joshua Keliher
And there’s a control aspect there which is important. That is the role of the quality department: to ensure that things are done in a consistent and controlled way across the organization. So that’s where I think AI really has to be an assistant. It can’t replace that point of control; we can’t surrender control of our organizations to AI and just say, oh, we’re just going to let AI manage our quality. But it can make a quality person’s job easier, so they don’t have to look at six different queries and then dive into each data record, or go back to the filing cabinet and look at all the deviation forms from the past 20 years. Instead they can ask a question, and then they can go check whether that answer is right or not. They can say, show me what records you looked at to get that information, and then go check them. In the same way that we’ve come to trust Google: when you want to learn about something new, like AI, you Google it and you check the sources that you find. If it becomes that easy, I could see quality people adopting it. But I think it will still be a slow process, yeah.

Chantal van Gorp
And I’m not sure we understand what the impact could be if AI works 100% correctly, because it could give you back results that you didn’t expect.

Joshua Keliher
Yeah, yeah. You could get some surprises: you think you’re managing total quality, but maybe you’re not.

Chantal van Gorp
We’re used to professionally ignoring stuff. AI will not do that, right?

Joshua Keliher
Yeah, it could really reveal some cracks in the system that people may not like. So people could feel threatened by what the AI is telling them, right?

Chet Shemanski
So what you’re saying is that as a QA person, you’re going to have to relearn how to do your job: instead of coming up with answers, maybe figuring out what the right questions to ask the AI system are going to be. That sounds like it’s going to be a training nightmare. How would you go about retraining these people to change the way they think?

Joshua Keliher
Hmm, I think one part is just getting people familiar with what AI can and can’t do, and being very specific about the use cases in which you’re going to deploy AI. It’s not a catch-all that does everything and can do anything; their job is still important, to make sure that they’re maintaining the level of quality required by regulations and by their organization. But they really need to understand that context and what to expect from it. I also think you need to make sure there are enough checkpoints in the training, because we can be lazy sometimes: we generate some text with AI and we don’t actually read through it. And when you do read through it, maybe there’s something very obviously wrong, or something that makes it very obvious it was generated by AI. So you need to train people that they still have responsibility for the process. They’re still obligated; their name is the one being signed to it, regardless of what AI does. And then I think there has to be something about engaging employees’ concerns about AI: including them in your AI program, taking a consultative approach where you ask them where their pain points are, what the things are that they don’t want to be doing day to day, and then targeting those for AI-based solutions if appropriate. That will help. So even before the training, involve them earlier on, in the discussions about what we want our organization’s engagement with AI to look like.

Chantal van Gorp
So are we talking about standardizing AI usage?

Joshua Keliher
I think you have to have policies around it of some variety. Just as we now have chief data officers or chief information officers, maybe we’re going to need chief AI officers, or maybe it’ll be a function of those other roles: someone who determines what the appropriate use of AI is within different departments. Maybe it’s fine for product development to use it to identify candidates, but maybe in quality there will be a different use case that we, as an organization, want to regulate. Ultimately, humans have to decide how they want to use the tools, so it’s important that organizations don’t walk blindly into these AI discussions and take it on wholesale, because there are impacts. Just like we did with cloud-based systems: we don’t take that on lightly; we have organizations that verify that the cloud providers are up to snuff for what we need to do.

Chet Shemanski
All right. Well, any parting words of wisdom for us today, Josh and Chantal?

Chantal van Gorp
Personally, I think it’s very interesting. I’m starting to use it myself, for example for report generation: I come up with what I want to report, and then I ask AI to help me make it into readable sentences. Often I want to get rid of the industry-specific terminology, and AI can do that for me. So I use it more and more. I hope that we are all open to change, so we can actually make AI better instead of AI making us better.

Joshua Keliher
Yeah, I think it’s just important that we stay in the driving seat with AI, like a good worker who knows the use of their tools and the purpose they serve within the business, and who doesn’t just go buying new tools wholesale because something new and fancy is on the market. I’m not into gadgets for gadgets’ sake. So if AI has a place, if there are tasks it can do that are going to bring value to our organizations, then it should be adopted. And I think we need to ramp it up well for our colleagues. But it is definitely here to stay, so we need to face it head-on and not put our heads in the sand about it.

Chantal van Gorp
Yeah, and it’s going fast. I hope that the industry stays on top of it, because normally we tend to wait and see what will happen. But in this case, I don’t think the industry can wait and see.

Joshua Keliher
Yeah, yeah.

Chantal van Gorp
We need to evolve together with the evolution of AI.

Joshua Keliher
Yep. Yeah, because even quality departments are going to start getting this: they’re going to ask a department head to write the SOP for their process, and it’s going to be a document that was generated by AI.

Chantal van Gorp
Yes.

Joshua Keliher
So it’s going to come to quality, whether quality wants it or not. It’s better for quality to put in place now the controls, requirements and policies that they want to have around it.

Chantal van Gorp
And they can use AI for that, right?

Joshua Keliher
Exactly. We’ll ask ChatGPT: what would be an appropriate model for implementing Gen AI systems within quality departments? Yeah, it’s all very meta.

Chet Shemanski
Well, it sounds like there’s an interesting future ahead in life sciences, with opportunities to redefine quality management by leveraging AI technologies. It was a very interesting conversation. I’d like to extend my gratitude to both Josh and Chantal for participating and contributing today. Thank you.

Chantal van Gorp
You’re welcome.

Chet Shemanski
And we look forward to discussing some more interesting topics with you in the near future.

Chantal van Gorp
Thank you.

Joshua Keliher
Thank you.