Welcome to the final entry in Verto’s blog series on AI in healthcare with Martin Persaud, our Director of AI & Technology Transformation.
To wrap up the series, we present the second episode of Verto Voices, an ongoing conversation where we profile some of Verto’s innovative digital health leaders. For this installment (full interview available here), we interviewed Martin for a deep dive into AI and its application in healthcare.
Q: We’ve been looking forward to this, Martin, so thanks for sitting with us today. Before we jump in, I want to take a minute to talk about what led you to where you are with Verto today. Your original educational and professional background is in finance, which is a good deal different from healthcare. Why the switch? Was there a particular opportunity that sparked your interest in making the pivot from finance to digital health?
A: I think I had a very different way into the space, although many people you meet do have a weird path into AI; it’s not just your typical engineers. There are a lot of those, but there are a lot of other people who found their place here. In my case, I did a commerce degree first. I got my CPA, CA designation, completed my CFA exams, and followed the traditional path of wanting to be an auditor at Deloitte.
Within a very short time, I found a lot of the audit work kind of monotonous, and I wanted to find new ways to automate it. I thought there were a lot of cool statistical ways to show that something was valid versus just brute-force testing. That was my very first experience with automation and applying slightly higher math than counting to achieve the same goal.
I became very interested in finance instead of just basic auditing and accounting. I performed due diligence on venture capital and government funding for about 30 to 40 tech companies and the tech projects they undertook. From there, I thought, “I need to get into this space.” I got to see how transformative it could be, and I started writing about it. So, I got seconded to Deloitte’s consulting team, where they researched how AI would impact the world. I think this was in about 2017.
That gave me a good six months to focus on that space. I realized AI could have a big impact in many facets of the world, but healthcare stood to really benefit if it were applied correctly. Obviously at the time, I didn’t know that much about healthcare, but I thought I needed to get further into the space, and that meant going to industry.
And so I took my first leap into sales at a healthtech scale-up in Toronto. With every step I took, I just wanted to get deeper and deeper, and I definitely ran across some people who said, “You weren’t traditionally trained in the space.” So it took a while to credential, including a master’s degree that was based in AI after that. I feel like I’m now starting to get into it a bit more and growing the competencies I need.
Q: For folks who may not have much background knowledge of AI – can you give us the ‘Coles Notes’ version of your definition and perhaps break down the pieces it’s composed of?
A: AI is not a neat term, and you’re always hearing it used synonymously with machine learning, deep learning, etc. It’s quite confusing for most people to understand what it actually is, other than that it’s doing things that maybe humans cannot do as accurately in very specific cases.
It’s only recently that this has become more generalized with these large language models. Again, another buzzword. So my version of this, which for the most part aligns with what’s online, is that AI is the wide umbrella term.
And then, within that, you have the reason-based expert systems, knowledge graphs, and the actual learning domain of AI (machine learning). That’s what’s getting all the press right now, probably for a good reason. In my view, machine learning is just a subset of AI, and then, within machine learning, you’ve got all of these other key terms being used. So, in a traditional sense, machine learning can be or has been delineated between supervised, unsupervised, and reinforcement learning. I don’t need to go into details on what all of those are, but they’re really just different ways of training an algorithm, depending on the type of data that you have and the task that you’re trying to complete.
Within supervised learning, you’re seeing many of these newer algorithms within this deep learning space, where large language models (LLMs) and all of these bigger models that are going mainstream now typically fit. Again, it’s not a neat fit. This is just trying to make it understandable.
In practice, even when we’re trying to do some of our work at Verto, I can’t classify anything as purely supervised or deep learning, per se. It’s all kind of layered on top of each other, depending on what the need is. So that’s the best way to understand how all these terms fit together without really boxing anything in.
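To make the supervised/unsupervised distinction a little more concrete, here is a minimal Python sketch with made-up patient features, assuming scikit-learn is available (reinforcement learning is omitted for brevity):

```python
# Supervised vs. unsupervised learning on toy data (requires scikit-learn).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy features: [age, number of prior visits] for six hypothetical patients.
X = [[65, 3], [40, 1], [78, 5], [33, 0], [59, 2], [71, 4]]

# Supervised: we have labels (1 = readmitted, 0 = not) and learn to predict them.
y = [1, 0, 1, 0, 0, 1]
classifier = LogisticRegression().fit(X, y)
print("supervised prediction for [70, 4]:", classifier.predict([[70, 4]]))

# Unsupervised: no labels; the algorithm looks for structure (here, two clusters) on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster assignments:", clusters)
```

The same data can flow through both styles of training; what differs is whether labels exist and what task the algorithm is asked to perform.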
Q: Can you share your thoughts about the importance of investing in MLOps relative to the healthcare sector?
A: MLOps is this new space at the intersection between DevOps and machine learning. I think there’s a really good analogy for it, because we’re living this right now as we try to build our algorithms, deploy them at scale, and then make sure that they work over time, because data changes over time. Think about how much effort it took for large companies like Google and Microsoft to build document version control. A document is essentially just a blank piece of paper where you can write your ideas, yet these massive companies are still trying to solve version control.
Well, imagine now we’re building a machine learning model, and I need to determine:
- Which data I used to train it;
- The features that I want to save;
- The embeddings that I want to use;
- The version of the model that’s going to be deployed;
- Then, I need to monitor that the model actually works in all the different domains that I apply it to.
That sounds a lot more complicated than simply writing on paper, albeit digital, right? So, imagine the amount of thought that needs to go into machine learning and versioning operations.
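To make that list concrete, here is a minimal, hypothetical Python sketch of the kind of metadata an MLOps workflow might record for each model version; the field names and example values are illustrative assumptions, not Verto’s actual tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelVersionRecord:
    """Metadata an MLOps pipeline might track for one trained model."""
    model_name: str
    model_version: str           # version of the artifact that gets deployed
    training_data_snapshot: str  # pointer or hash identifying the exact training data
    features: list[str]          # features saved with this version
    embedding_version: str       # version of any embeddings the model depends on
    deployed_domains: list[str]  # populations/sites the model has been validated for
    trained_on: date = field(default_factory=date.today)

def allowed_in_domain(record: ModelVersionRecord, domain: str) -> bool:
    """Refuse to serve predictions in a domain the model was never validated for."""
    return domain in record.deployed_domains

record = ModelVersionRecord(
    model_name="triage-risk",
    model_version="1.3.0",
    training_data_snapshot="datasets/triage/2024-01-snapshot",
    features=["age", "presenting_complaint", "vitals_summary"],
    embedding_version="clinical-notes-embeddings-v2",
    deployed_domains=["hospital_a_emergency"],
)

print(allowed_in_domain(record, "hospital_b_icu"))  # False: not validated for that population
```

Ongoing monitoring would add to this record over time, for example periodic checks that incoming data still resembles the training snapshot, which is exactly the work the document-versioning analogy points at.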
Where this applies in healthcare is that healthcare data is extremely complex. It’s a very large domain with subdomains. It has a lot of different formats, and I would say the tasks are quite far away from a general AI. So you’re doing a lot of task-based things. Most tasks can be enhanced. The performance of the machine learning algorithm conducting those tasks could be changed by the type of data you train it on. It could be changed by the population that it’s being applied to. And so MLOps and the governance that comes with it, if done correctly, will help a lot of these healthcare companies apply this type of algorithm in the right way, monitor it, and make sure that it applies to the population and the task.
MLOps will be a key factor in our success, generally speaking, in healthcare, unless there’s some all-knowing AGI down the road. Even then, you’ll need MLOps.
Q: You have a unique take on the role AI should be playing in healthcare at this current time. You’ve spoken about the need to avoid the temptation of focusing on flashy sectors like generative AI…at least for the moment.
My understanding is you feel one of the first things AI should be used for in healthcare is to focus on improving the integrity and reliability of data. Can you tell us your thoughts on how you’d like to see AI make the world of healthcare data a better place?
A: I fully understand the hype around LLMs and generative AI, and I think they have a place (in healthcare). I’m certainly not a detractor of this new technology. I would actually love to see it make an impact earlier. But in healthcare right now, we’re in a space where moving on from fax is a revolution.
We need to be able to bridge that gap. And this is not new; none of this is novel. I want to be clear that I’m not saying anything groundbreaking in saying that fax is the norm and it’s a big leap to AI. But at the same time, you can’t expect these healthcare providers, who have been so used to applying their knowledge a certain way, to all of a sudden trust a completely automated way of making choices. That’s just one aspect of where AI can be deployed, but it’s the one that most consumers and the general populace understand.
There are also a lot of other behind-the-scenes applications that could work. But there’s going to be a lot that happens before we get to a point where it’s decision-making at the patient level, at the point of care. And so I think one of the things that artificial intelligence can do right now is focus on interoperability, which is a massive problem in healthcare.
It’s one of the reasons we struggle to get data to train our algorithms: there’s so much that has to be done to get this data and transform it into something we can use. So, standards have their place in the health data world to standardize how that data looks, but they’re not enough. Many legacy vendors don’t want to change systems they deployed 20 years ago, because doing so would disrupt day-to-day healthcare operations. I fully understand that. So, we need to think of another way to transform data at the source and make it usable for things like analytics and decision-making, whatever the case may be.
I think AI has its place in that domain right now. AI can be that thing that makes processes a little less manual, and it hasn’t been solved yet. Schema matching and tasks of that nature are still not as well researched as other aspects of AI. That’s a problem that we need to address (not just in healthcare) and will be one that makes a huge impact on healthcare.
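As a toy illustration of what schema matching involves (hypothetical field names and simple string similarity, not Verto’s patented approach), a first pass might score how closely a legacy system’s column names resemble a target standard’s fields and propose mappings for a human to review:

```python
from difflib import SequenceMatcher

# Hypothetical legacy export columns and target standard fields (FHIR-like names).
source_fields = ["pt_dob", "pt_sex", "adm_dt", "dx_code"]
target_fields = ["patient_birth_date", "patient_gender", "admission_datetime", "diagnosis_code"]

# Crude expansion of common clinical abbreviations before comparison.
ABBREVIATIONS = {"pt": "patient", "dob": "birth date", "adm": "admission",
                 "dt": "datetime", "dx": "diagnosis", "sex": "gender"}

def normalize(name: str) -> str:
    """Lowercase, split on underscores, and expand known abbreviations."""
    tokens = name.lower().replace("_", " ").split()
    return " ".join(ABBREVIATIONS.get(tok, tok) for tok in tokens)

def propose_mapping(sources: list[str], targets: list[str]) -> dict[str, tuple[str, float]]:
    """Suggest the closest target field for each source field, with a similarity score."""
    proposals = {}
    for src in sources:
        scores = {tgt: SequenceMatcher(None, normalize(src), normalize(tgt)).ratio()
                  for tgt in targets}
        best = max(scores, key=scores.get)
        proposals[src] = (best, round(scores[best], 2))
    return proposals

for src, (tgt, score) in propose_mapping(source_fields, target_fields).items():
    print(f"{src:8s} -> {tgt:20s} (similarity {score})")
```

In practice the matching would be learned rather than hand-coded, and every proposed mapping would still go through human review, but the shape of the task — propose, score, confirm — is the part that current AI can make less manual.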
Q: How is Verto Health currently using AI to improve the patient and clinician experience?
A: First, I’ll talk about it at the technological level. We do have a couple of patents that we’ve filed. One is specifically what I just spoke about, really homing in on data ingestion, because it comes in all these different formats. I think deploying that at scale and automating data mapping and schema matching will be a huge factor in the interoperability space, which is one of the most frequently cited barriers to better healthcare across all health systems.
The second is actually another patent. Everyone wants to solve the problem by being the “house of master data.” So, you end up with many master data systems with copies of this data, and you don’t know where it originates. Therefore, the end users may not be able to trust it. It’s obviously inefficient, and it creates silos, etc. So, our second patent is very focused on virtualizing data so that we can point back to where it came from and maintain much of its provenance to make choices on it.
Verto is doing both of those things during the data ingestion process, so that when we make this data available to end users, whether they’re administrators, policymakers and decision makers, or clinicians, we’re equipping them with the right information to make choices. In the future, if anything is predictive, we can point back to why those predictions were made. We’re putting interpretation at the forefront right away, instead of trying to prove an algorithm can perform in and of itself, because healthcare is human, and humans need to be able to interpret regardless of how accurate it is.
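A rough sketch of what record-level provenance could look like (illustrative only; the field names and structure here are assumptions, not the patented design): every value exposed to an end user carries a pointer back to the system and field it originated from.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Provenance:
    source_system: str   # which upstream system the value came from
    source_field: str    # the original field name in that system
    retrieved_at: str    # when the value was read (ISO 8601 timestamp)

@dataclass(frozen=True)
class VirtualizedValue:
    value: Any
    provenance: Provenance

# Instead of copying data into yet another "master" store with unknown origins,
# every value handed to an end user keeps metadata about where it came from.
patient_view = {
    "birth_date": VirtualizedValue(
        "1972-03-14",
        Provenance("regional_emr", "pt_dob", "2024-05-01T09:30:00Z"),
    ),
    "diagnosis_code": VirtualizedValue(
        "I21.9",
        Provenance("cardiology_system", "dx_code", "2024-05-01T09:31:12Z"),
    ),
}

for name, item in patient_view.items():
    print(f"{name}: {item.value}  (from {item.provenance.source_system}.{item.provenance.source_field})")
```

Keeping provenance alongside each value is what lets a downstream prediction or policy decision be traced back to its source, which is the trust property described above.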
I would say this is how we’re tackling these problems. There are a lot of other things we’re doing with clients right now, like ingesting data from these healthcare source systems and applying these patents to get it to a point where their policymakers can make choices. We are doing this in real time, and I think it will make an impact.
Q: Where do you see the application of AI in Healthcare over the next ten years? Are there any rumblings in the industry we should keep an ear to the ground for?
A: Ideally, there is some form of generative AI. I’m not opposed to that, but I think what’s going to happen is we need to be aware of the regulation coming down the pipeline and plan for it. It doesn’t necessarily need to be a blocker to innovation if we acknowledge that it’s there for a reason. So, I think people will wake up to that in the next ten years. We will start to see policies coming out. In fact, ONC just released the HTI-1 rule under the Cures Act, which calls for transparency for any algorithms that drive decision-making. I think that’s one of many that are going to come down the pipe. Instead of being reactive, we should be thinking, “Well, there’s a reason this is happening.” The algorithms of the future will be able to address that.
I think if we’re getting a little more excited about things outside of regulation – which has become more mature in the past 24 months – there has been so much attention on language, on large language models, and on how we’re using language as a way to represent and apply knowledge. I don’t have to get into the specifics of how that works, but it’s really just an understanding of language patterns, built up from the billions of documents these algorithms have ingested.
We’ve also started to see things like autonomous vehicles and all these other spaces where vision is being used. So, I can see a world in ten, maybe a little more than ten, years where these modalities come together and start to really drive multimodal decision-making. Not just based on some progress note, but also based on the radiology report or the image itself. That’s already happening, but in isolation.
Bringing these things together to tell the story of a patient is when things become really powerful.
My hope is that we move in that direction. I’m sure people are already working on a multimodal approach, and you’re starting to see architectures based on that. There’s the mixture-of-experts architecture, where, depending on the task, the type of data, and what you’re trying to convey, there’s a mixture of “experts” that you would consult, the experts being these algorithms.
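As a rough, hypothetical illustration of that mixture-of-experts idea (a toy router over stand-in functions, not any production architecture): a routing step decides which specialized model handles each piece of patient data, and the per-modality outputs are then combined into one picture.

```python
# Toy "mixture of experts": a router picks which specialist handles each input.
# The experts here are stand-in functions; real systems would route to trained models.

def notes_expert(item: dict) -> str:
    return f"note summary: {item['text'][:45]}..."

def imaging_expert(item: dict) -> str:
    return f"imaging finding from study {item['study_id']}"

def labs_expert(item: dict) -> str:
    return f"flagged lab: {item['lab']} = {item['value']}"

EXPERTS = {"note": notes_expert, "image": imaging_expert, "lab": labs_expert}

def route(item: dict) -> str:
    """Send each piece of patient data to the expert suited to its modality."""
    return EXPERTS[item["modality"]](item)

patient_data = [
    {"modality": "note", "text": "Patient reports chest pain radiating to the left arm"},
    {"modality": "image", "study_id": "CXR-2024-0173"},
    {"modality": "lab", "lab": "troponin", "value": "0.9 ng/mL"},
]

# Combining the per-modality outputs is where the "story of the patient" comes together.
for finding in (route(item) for item in patient_data):
    print(finding)
```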
I think that’s where we could go, but I want to stress right now that it is a long road to that, and our focus is making the best use of healthcare data, using AI to do so, so that we can forge that future.
*This transcript has been edited for clarity and length.
Stay tuned for future entries in the Verto Voices series where we will dig into population health management.