E149 Amin Madani on AI, Machine Learning, Surgical Data Science, & GoNoGo in Cholecystectomy
Chad Ball 00:15
Welcome to Cold Steel, the Canadian Journal of Surgery podcast, with your hosts Ameer Farooq and Chad Ball. The goal of the CJS podcast is threefold. First is to highlight the best research currently being completed by Canadian surgeons; second is to offer educational topics for both surgeons and trainees alike; and most importantly, the third goal is to inspire discussion, thoughts, creativity and career development in all Canadian surgeons. We hope you enjoy it.
Ameer Farooq 00:49
Dr. Amin Madani is the director of the Surgical AI Research Academy (SARA) at the University Health Network (UHN) in Toronto. He's an endocrine and acute care surgeon at UHN and assistant professor of surgery at the University of Toronto. Dr. Madani talked to us and showed us some of the work he is doing on AI and surgery and, in particular, on computer vision. He really breaks down for us the terms AI, machine learning and surgical data science, and highlighted some of the promise and challenges for AI in surgery. Dr. Madani, thank you so much for joining us today on the Cold Steel podcast. It's truly an honour and a pleasure to have you on the show. Could you tell us a little bit about where you grew up and where you did your training?
Amin Madani 01:31
Thanks for having me. Really exciting. I've heard a lot about this podcast. Great to be here. So I kind of went all over the place. So I started in Toronto; that's where I did my undergrad. Then went to Western [University] for [medical] school, then McGill [University] for residency; that's where I did also my research training and my Ph.D. in surgical education. And then I went to New York for surgical endocrinology fellowship, and then finally back to Toronto, so it's sort of full-circle, and started as staff here for the past few years.
Chad Ball 02:07
That's a pretty awesome voyage. Tell us about your endocrine practice now and what your days and weeks look like and what drew you to that field in particular.
Amin Madani 02:15
Yeah, sure. So, endocrine surgery, I think, a lot of people ask me, like, which organs do you guys deal with? So thyroid, parathyroid and adrenals. I don't do too many neuroendocrine or pancreatic endocrine disorders; that's more for the HPB folks. But basically, we deal with everything from, you know, the full comprehensive care of endocrine disorders, everything from the benign to the more complex endocrine oncology. My bread and butter is things like thyroid nodules, goiters, thyroid cancer is a big part of my practice, parathyroid disorders, everything from primary to renal hyperpara[thyroidism], including secondary and tertiary. Adrenals, we do quite a bit; we're probably one of the higher-volume centres, where we do everything from incidentalomas to, you know, hormonally active tumours, to the more complex adrenocortical cancers, you know, to the big en bloc resections. And, you know, we try to do more innovative things like, here, we do a lot of retroperitoneoscopic adrenals, for instance; we do a lot of minimally invasive focused parathyroidectomies, and also we started radiofrequency ablation of thyroid nodules. So, that's kind of, like, our practice for endocrine surgery. We do a lot of ACS, as well. I would say I'm probably like 85% endocrine, 15% ACS. I would say my practice is more like a 50/50 surgical and surgeon-investigator type thing. So probably like 1 or 2 [OR] days a week, 1 or 2 clinics a week and 2 academic days. Living the dream.
Ameer Farooq 03:50
I'm not very good at math, but somehow that doesn't seem to add up to 5 days a week.
Amin Madani 03:56
We make it fit. We squeeze that extra day.
Ameer Farooq 03:59
Not surprised, not surprised at all. I think what some people would call AI in surgery, I think there's some terms that you maybe like better than that. And I think rightly so, because I feel like this term of AI or artificial intelligence in surgery gets thrown around so much, and it must be [inaudible]. Like, you know, like, when I was in Boston where I did my Master's, I think there were, like, 2 terms that got thrown around all the time, which is blockchain and AI, and it used to drive me insane. So can you talk to us a little bit about, like, when people say AI, what really are we talking about? And in particular, how is that different than machine learning?
Amin Madani 04:37
Yeah, that's a great question. And you're so right; there's so much hype around the term, and the number of times I see the word AI thrown around when it's not actually AI — it's quite incredible. I'll just start off actually by saying my background actually is not — I'm not a computer scientist or engineer. My background has always been in surgical education and trying to understand decision making in the OR, you know, where the gaps in performance happen. And so then I kind of met really smart people who did a lot in computer vision and machine learning, and that's kind of how I got involved with it. But anyways, it's a great field. But AI, just to go back to your question is — so, okay, AI is really, it's a big term. It's really the study of computer science that focuses on machines being able to perform human-like functions, or functions that are traditionally done by humans, like perceptual functions, or decision-making, cognitive functions, and sometimes even surpassing human abilities, like superhuman. That's kind of like the AI, the overall study of all this. Machine learning is a little bit — it's a subset of AI. It's the part of AI that has made it so popular, if you will. If you think about traditional programs: if I want to program something, you know, I have to write a program of code, I have to explicitly program it in detail, I have to say, for this situation, this is what's going to happen; for that situation, that's what's going to happen; and you have to program it for all the different scenarios. Machine learning takes a different approach. It's saying, "Okay, here's a lot of data. I'm going to find patterns in this data to make sense of it and to be able to make predictions on new data that I have never seen before." That's the, kind of, essence of machine learning. So they're very different. Machine learning is a subset of AI; it's a form of algorithms that are able to perform AI functions, essentially.
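The contrast Dr. Madani draws between explicit programming and machine learning can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not code from his lab: the rule-based function has its logic written out by hand, while the "learned" classifier derives its decision boundary from labeled examples.

```python
# Toy contrast between explicit programming and machine learning.
# All numbers here are invented for illustration.

def rule_based(heart_rate):
    # Traditional programming: every scenario is written out by hand.
    return "abnormal" if heart_rate < 50 or heart_rate > 110 else "normal"

def learn_threshold(examples):
    # "Machine learning" in miniature: derive the decision boundary
    # from labeled data (midpoint of the two class means) instead of
    # hard-coding it.
    normal = [x for x, label in examples if label == "normal"]
    abnormal = [x for x, label in examples if label == "abnormal"]
    return (sum(normal) / len(normal) + sum(abnormal) / len(abnormal)) / 2

examples = [(72, "normal"), (80, "normal"), (65, "normal"),
            (130, "abnormal"), (140, "abnormal"), (125, "abnormal")]
threshold = learn_threshold(examples)  # 102.0 for this toy data

def learned(heart_rate):
    # Predicts on new data it has never seen, using the learned boundary.
    return "abnormal" if heart_rate > threshold else "normal"

print(rule_based(150), learned(150))  # -> abnormal abnormal
```

Both flag the unseen value, but only the second one would adapt if it were shown different data; that difference, scaled up to millions of parameters, is what a deep neural network exploits.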
Ameer Farooq 06:33
I think, from reading some of your work — and we'll post links to this. If you're watching this on the YouTube channel, it will be in the description; if you're listening to this on the podcast (audio version), we'll put it in the show notes, some links to your papers. But my sense is that you kind of don't like either of these terms, or you prefer to use the term surgical data science. Tell us why that term is maybe perhaps a bit better and, perhaps also, what is meant by the term surgical data science?
Amin Madani 07:04
No — excellent. And that's a great point, because I think, you know, regardless of what you do, regardless of what branch of research, you really have to be specific in the terms that you use. I try to stay away from using "AI" as much as possible when I write manuscripts or give a talk, because, you know, it's a very big, big, big, you know, sort of, overall overarching term. You need to specify exactly what you mean; like, for instance, you know, if it's a deep neural network, I'm going to describe it as that, you know, as opposed to saying "the AI"; I'm going to specify the algorithm, that specific type of algorithm we used for that specific research question, for instance. And I think it's important to do that, especially if we're going to describe our research and it has to be reproducible, and so on and so forth. Surgical data science is an important term. And I like that term, because that, to me, makes more sense from a surgical standpoint, because it's more — it's bigger than AI. It's got the application of why we want to use AI into the term, so it's much bigger than just AI; it's the overall field of how we use data to improve what we do in the OR. Basically, surgical data science is how we capture this vast amount of data from the OR or around our patients. How do we organize it? How do we curate it? How do we analyze it and create prediction models with it, so that it can help us take care of our surgical patients and, you know, ultimately improve everything, from quality, safety, stewardship, and so on. When I say data, I mean everything from the vital signs, the bloodwork, the pixels on the monitor during MIS, or fluoroscopy, all the information the nurses type in the computer, you know, about the ASA score or whatever — this is all data.
And when you think about today's practice, you know, we have a wealth of data, you know, on patients, and it's up to us to kind of integrate all this data, and kind of use our Gestalt on like, you know, and make decisions around the patient. You know, that's how we decide in the OR, you know what, things don't look right; why don't we divert the patient. I say that because I think you're a colorectal surgeon. So that's kind of how we use data science in our everyday life. But fast forward to the future, we want to be able to tap into those modern computational techniques, like AI, and be able to integrate this, kind of, this avalanche of data, to find patterns that we cannot see as humans and augment our abilities. You know, we're looking at — surgical data science, we're looking at the seamless integration of these kinds of support tools in the OR that'll kind of make us sort of superhuman, improve our situational awareness and things of that nature. And it's, you know, this kind of data-driven augmentation is everything, you know, the decision-making support during surgery, it could be how to tailor the care to a specific patient, you know, perioperative decision-making and things like that. I mean, if you can picture yourself in the OR in the future, you can see like, again, going back to the example of whether to divert or not, you know, you can have an AI algorithm that says, "You know what, based on the pixels I see, based on the lab work, based on all this data, I can make a prediction with 85% probability [that] there's going to be a leak if you anastomose this," and then, you know, help you with the decisions for — so that's how surgical data science, to me, is a little bit different from AI. AI is the actual computational techniques that you use. Surgical data science is the study of using all that for the intended applications.
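The "85% probability of a leak" idea Dr. Madani describes, where many signals are combined into one prediction, is essentially what a logistic model does. A minimal sketch: the features, weights, and numbers below are invented purely for illustration; a real model would learn the weights from patient outcome data.

```python
import math

# Cartoon of intraoperative risk prediction: combine several signals
# into a single probability. Features and weights are invented.

def leak_risk(features, weights, bias):
    # Weighted sum of the evidence, squashed to a 0-1 probability
    # by the logistic (sigmoid) function.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs: [hypotension episodes, lactate (mmol/L),
# a 0-1 tissue-perfusion score derived from the video feed]
features = [2.0, 3.5, 0.4]
weights = [0.6, 0.5, -2.0]  # a real model learns these from outcomes
risk = leak_risk(features, weights, bias=-1.5)
print(f"predicted leak probability: {risk:.2f}")
```

The interesting part is not the arithmetic but where the inputs come from: vitals, bloodwork, and pixels all reduced to numbers the model can weigh together.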
Ameer Farooq 10:38
I'm getting a little bit of PTSD with you talking about whether to divert this patient or not, because it feels like 90% of my life is deciding this question.
Amin Madani 10:50
For me, it's whether or not the parathyroid's there, or the [inaudible].
Ameer Farooq 10:56
Okay, so, it's funny, because, again, you talk about in this paper that you wrote, where you're listed among many of these great surgeon scientists, a paper that I think came out of a workshop, talking about the fact that there's actually not that many success stories when it comes to surgical data science, right? Like, you know, for all the hype, where do we actually see, you know, this type of, you know, risk prediction, or, you know, decision support? We don't really see it. Like, it's interesting that you talked specifically about whether a patient will leak or not; you know, there's been all these papers that have looked at the fact that the decision to divert patients probably is more related to the surgeon's personality type, right? Like, are you a risk-averse type surgeon or are you not, and less to do with the patient in front of you and the clinical situation, right? And, you know, like, that's just one example. And we don't even use very often very simple, basic decision support type tools. So I guess what I'm asking is, why do you think there haven't been that many success stories in terms of surgical data science? And what do you think it's going to take to actually start to really bring some of this work to fruition?
Amin Madani 12:13
Thanks for mentioning that. I mean, that's one of the things that I spend a lot of my time and energy explaining to people because there's — you're right; there's so much hype out there, and there are billions of dollars being poured into this industry. The amount of dollars invested by [venture capital] companies into, like, the hype of, like, AI in surgery, and this and that, is so profound, so disproportionate — I think a lot of people recognize how powerful this technology can be. But to this day, I have yet to see — I mean, you tell me. Have you ever seen AI in the operating room, despite all the hype? I mean, it's very hard to see. The only area that I've seen it, maybe, maybe show a bit of a success is in adenoma detection rates for colonoscopy. And you may be familiar with some of the new platforms that are out there, where, you know, you do your colonoscopy, and then you have a computer vision algorithm that basically detects polyps and kind of highlights them with a square, you know, to tell you there's a polyp, for instance. And even that, even despite all the studies they've done on that, whether or not it improves patient outcomes is still up in the air; like, it may improve your adenoma detection rate, but perhaps these are, you know, things that would never have led to colon cancer or whatnot. So, you know, with AI, we have yet to show the value proposition. And so that's why we need success stories, to actually show that it actually can improve things in the OR, and we're not there yet. The stuff that we've done has been a lot of proof-of-concepts. Like, we know we can use AI to do certain things. Like, I know I can develop a computer vision algorithm to, you know, tell me whether or not a critical view of safety has been achieved, you know? But implementing that into the real world, is it going to improve patient outcomes? I don't know. I'll give you an example.
One of the things that we're working on is creating — so all these proof of concepts, they're just a piece of software right now; like, it's just software. Taking that into the operating room is a big step. Nobody's ever really done implementation, or developed, sort of, like, a seamless tool that people can take into the OR. You can't expect people to buy expensive hardware for every operating room around the world. So one of our projects has been trying to design systems where people can take, like, for instance, their phone or their tablet, and just put it on the monitor and get, like, the AI to kind of, like, give you a, you know, an assessment, for instance. Things like that, I think, are what's going to help disseminate this kind of technology. Making it seamless, making it easy, making it scalable, but even then, we have to be able to show that there is value behind it. And so far, I don't think we've done a good job of that. It's hard to show that, you know, an algorithm that tells you where to go or where not to go during the [laparoscopic cholecystectomy] is going to improve bile duct injury rates. You know, doing a randomized trial on that is quite challenging.
Chad Ball 15:17
That's so interesting. I'm glad you took us there. Let's see if we can go a little bit deeper on [laparoscopic cholecystectomy], in particular, given that it's one of those procedures that's really the glue across general surgery, and a lot of our listeners — you guys wrote a recent paper that was super interesting. Can you talk about go/no-go and how — you know, you framed it beautifully — but how that really works in terms of its mechanics and where you see your vision at the end of the day, maybe?
Amin Madani 15:43
Sure. Yeah, that was — so, I think, just to take a step back, and I think this is where it's really important — I think we can't just go after, like, any innovation, and I emphasize this to everyone who wants to do this kind of research: Don't get sucked into the hype of the technology, you know? First understand what the unmet clinical need is, and then design the solution around that. And that's how we designed GoNoGoNet. So a lot of my background is understanding how, you know, errors in decision making and lapses in judgment lead to injuries, and things like that. And one of the main cognitive behaviours that we know lead to bile duct injuries is this, you know, lack of awareness, perhaps, or errors in judgment, where you dissect in, you know, a territory where there's a high risk of bile duct injuries below. So, sort of this line of safety, you know, Rouviere's sulcus and things like that. So we trained an AI algorithm, and we wanted to keep it very practical; something very practical that helps you during surgery. So we said, okay, it would be great to have an algorithm that kind of tells you: This is the fly zone, this is the no-fly zone. Very simple, binary. Go, no-go. You want to keep your dissection — do whatever you want, but just keep it up there, above this imaginary line of safety. That's important because — and that was a big step, because for the first time, we're not just developing an algorithm that can tell you, "Hey, I see a cystic duct." Great. You know, how's that helping me in the OR? If I've dissected the cystic duct, I don't really need an AI to tell me that. For the first time, we're training something to do something very clinically relevant. The second thing is that there aren't very clear boundaries around a go and a no-go zone; it's a very conceptual thing that takes a lot of experience, a lot of pattern recognition, Gestalt, of understanding what a safe zone is.
So that was one of the challenges in machine learning that hadn't been done before. So that was kind of, like, the reason behind it. And I emphasize that because people don't really recognize that addressing an unmet clinical need is a very big part of designing an AI solution. So okay, so we decided to train an algorithm. We said, "Can it replicate the mind of an expert surgeon to decide where these go and no-go zones are?" So the way we did this was we basically, you know, we used something called supervised machine learning, which is basically, you say, okay, I'm going to teach you how to visualize this part of the anatomy. I'm going to give you hundreds of thousands of different examples. So in this example, this is where the go zone is, this is where the no-go zone is. In this example, this is where it is, and this is where it is. And you give it so many examples that it can learn from. And eventually, after hundreds of different cases, and different scenes from those cases, you know, it's able to figure out that this pattern of pixels, this is where the go-zone typically tends to be, and this is where the no-go zone is. And so that's kind of how we train the AI algorithm. And what we're able to find out is, after enough training, it's more or less able to kind of figure out where these go zones and no-go zones [are]. So it's kind of like your mental model as a surgeon being reflected on the surgical field. So you've got this kind of augmented reality thing where it highlights these safe zones and unsafe zones during surgery. That's kind of the idea and the motivation behind that.
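The supervised-learning loop Dr. Madani describes (show the algorithm many annotated frames, let it learn which pixel patterns belong to the go and no-go zones) can be caricatured in a few lines. This sketch labels each pixel by the nearest class mean; the real GoNoGoNet is a deep neural network trained on annotated surgical video, and everything below is invented toy data.

```python
# A cartoon of supervised segmentation, in the spirit of GoNoGoNet.
# Each training "frame" is a grid of pixel intensities with a
# per-pixel label: "go" (safe zone) or "nogo" (unsafe zone).
train_frames = [
    ([[0.9, 0.8], [0.2, 0.1]], [["go", "go"], ["nogo", "nogo"]]),
    ([[0.85, 0.9], [0.15, 0.2]], [["go", "go"], ["nogo", "nogo"]]),
]

# "Training": learn the average intensity of each class from the
# labeled examples.
sums, counts = {"go": 0.0, "nogo": 0.0}, {"go": 0, "nogo": 0}
for pixels, labels in train_frames:
    for pixel_row, label_row in zip(pixels, labels):
        for p, lab in zip(pixel_row, label_row):
            sums[lab] += p
            counts[lab] += 1
means = {c: sums[c] / counts[c] for c in sums}

def segment(pixels):
    # "Inference": label each pixel of an unseen frame by whichever
    # class mean it is closer to -- the supervised-learning idea,
    # minus the millions of parameters.
    return [[min(means, key=lambda c: abs(p - means[c])) for p in row]
            for row in pixels]

print(segment([[0.95, 0.1], [0.7, 0.3]]))  # -> [['go', 'nogo'], ['go', 'nogo']]
```

A deep network replaces "average intensity" with learned features of texture, colour, and shape, but the contract is the same: annotated frames in, a per-pixel go/no-go mask out.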
Chad Ball 19:04
That's so cool. Not to be negative about it, but what are some of the flaws and some of the places where this approach and this technology falls down? And how do you address that? How do you recognize that and how do you, kind of, engineer around that?
Amin Madani 19:20
No, it's not negative at all. I think you have to be — like, you've got to be negative and critical about your research; otherwise, how do we develop good, robust, you know, things from this? There's lots of deficiencies, and we've got to be honest, because it's not ready for, you know, real-time, you know, prime-time use yet. So there's a lot of problems we've noticed with the algorithm — the GoNoGoNet algorithm — and there's a lot of problems in general with AI and deep learning. So I'll just start with the algorithm itself. One thing that we noticed, which was really interesting, is, as I was analyzing videos, I came across a problem. And this is such a good illustration of one of the deficiencies of deep neural networks. Deep neural networks are designed to, basically, like I said, take big complex datasets and find patterns, and then make predictions. And sometimes it takes the easiest path to find those predictions. So one of the problems we found with this was that every time you had an instrument in the surgical field, it was biasing the results. And so it tended to move the go zone around where those instruments [were]. So a big question became: wait a second, is it telling you where the go zone is based on the underlying anatomy, or based on where the instruments happen to be? You know, it just happens that in most [laparoscopic cholecystectomies], the dissection tends to happen in a safe zone, you know, as opposed to an unsafe zone. So is it just learning the wrong thing? And so we had to kind of accommodate, you know, make adjustments to the model based on that. That was one bias we noticed. Another one we noticed — and this is one of the areas where GoNoGoNet doesn't fully succeed — is whenever you don't have good exposure of the anatomy, it's not able to tell you properly, like, "This is the safe area" or "This is the unsafe area".
When the gallbladder hasn't been retracted properly, it — just like us humans — like, I can't properly assess where it's safe, where it's unsafe, when I don't have good exposure. And this goes back to training an algorithm for a very specific function. Ultimately, it's not going to be 1 silver bullet AI algorithm that's going to solve bile duct injuries; it's going to be a library of algorithms that can replicate many cognitive behaviours, like good retraction, safe zone dissection, critical view of safety — like all the different things and nuances that make you a safe surgeon. So that's one of the other deficiencies we noticed. And then the other one was just — like, again, like a surgeon, whenever it was so zoomed in that it just sees, like, fat and fibrous tissue, it had this sort of tunnel vision where it had no idea where it was, and it was like, "Sure, go there." And it was, like, not in the right place. So the algorithm doesn't work well when it's very zoomed in. It has to, you know, see the anatomical [landmarks], you know, just establish its bearings a little bit. And newer models that we're using are actually integrating memory into it. So it's not looking at individual frames in and of themselves; it's looking at, you know: based on frames I've seen previously, okay, this is not actually a go zone, even though I'm zoomed in. So, deep learning has a lot of deficiencies, a lot of biases, and you just have to be really, really careful in how you use this technology, because it can introduce biases that you never thought of before. And going to, you know, AI in general, not just its biases, there's a lot of problems with it. Like, for instance, you know, how do you under — you know, people want to know: if something tells you, this is what I recommend, you want to know why. Like, why do you think I shouldn't divert the patient? Well, because of this reason and that reason, you know, to help us and to guide our decisions.
This explainability is one of the big gaps in AI. It's kind of like this black box, where it's really, really good at making predictions, but it's not really good at telling you how it arrived at those decisions. And so there's an entire field of Explainable AI that's trying to target that. So that, you know, if I'm a surgeon and I'm using this kind of algorithm, you know, I can have some explainability around that. And there's going to be more trust, you know, around this kind of technology for people who want to adopt it in their OR. And so that's definitely one of the big problems. Another problem is generalizability. So just because I train an AI algorithm on data from my institution does not mean it's going to work as well at your institution, or even across the street. I'll give you an example. You know, let's say you train an AI algorithm to tell you where to go or where not to go during, I don't know, a colorectal operation, and you trained it on — you laugh, but that's actually one of our next projects — but anyways, you train it to do that, and you basically train it only on, you know, University of Calgary data, and you've used only 1 platform, like Storz or Olympus or whatever. You know, the video itself is very different from another platform. So it's guaranteed, actually, that it's not going to perform as well for someone else using a different platform, for instance, or using a different resolution. Different aspect ratios; or somebody likes to have, like, you know, the full view with the black circle around it, and somebody likes it zoomed in — like, all these nuances, you know, are very important. So if you want to train an AI algorithm, it's important to have a very wide breadth of data, real-world data, that you're going to see.
You want data from many institutions, from different health care systems, from different platforms, you know, people from different anatomies, different demographics, people that use different instruments, different techniques, and so on. Because that generalizability is a very big key part of it.
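The generalizability problem Dr. Madani describes has a simple mechanical core: a model learns the statistics of its training data, and data from another site arrives with different statistics. A toy sketch with invented numbers, where a uniform "brightness shift" stands in for a different camera platform, resolution, or aspect ratio:

```python
# Toy domain shift: a threshold learned at one institution misfires
# on another institution's brighter video. All numbers are invented.

# Institution A: pixel intensities on a 0-1 scale, labeled safe/unsafe.
train = [(0.2, "nogo"), (0.25, "nogo"), (0.8, "go"), (0.85, "go")]

# Learn the decision boundary (midpoint of class means) from A's data.
nogo = [x for x, label in train if label == "nogo"]
go = [x for x, label in train if label == "go"]
threshold = (sum(nogo) / len(nogo) + sum(go) / len(go)) / 2  # 0.525

def predict(intensity):
    return "go" if intensity > threshold else "nogo"

# Institution B's camera is brighter: every pixel shifts up by 0.4.
# The same unsafe tissue that read 0.2 at A now reads 0.6 at B.
print(predict(0.2))        # correct at institution A: "nogo"
print(predict(0.2 + 0.4))  # confidently wrong at institution B: "go"
```

Training on a wide breadth of institutions, platforms, and techniques is, in this picture, a way of making sure the learned boundary is not an accident of one site's camera.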
Ameer Farooq 25:25
I mean, like, you're, like you mentioned, your background is in surgical education, and I remember you actually came to Calgary, promoting the FUSE platform, when I was a resident. That's all about [inaudible] energy and stuff. So, you know, I wonder, in your surgical education brain, or with that lens on, where do you see this fitting in from an education perspective? Because, you know, like, the explainability piece, I think, is a big — in some way — I don't know if you can say it's a problem, but it is a challenge, in that, you know, a lot of the things that you're talking about in terms of what the AI has trouble with, like, those are also human challenges that we have to learn as trainees. Like, I'm reminded of Dr. Ball and I, you know, one of our mentors, Francis Sutherland, you know, he talks about this bile duct [inaudible], right? And so whenever you do [laparoscopic cholecystectomies] with him, he makes you, like, have it zoomed out; like, he doesn't allow you to actually move the camera in closer when you're doing a [laparoscopic cholecystectomy] because of this issue. And so I just wonder, like — I guess my question is: How do you see this — Where do you see this being implemented in actual surgical practice? Is this something, like, you switch on at the end of the case, sort of, before you put the clips on, to give you an additional data point? Is it something you have on all the time? Where does that interact with training? Like, where does it behoove us to, like, actually train ourselves to recognize these problems and these issues? And how does that interact with these types of algorithms and platforms?
Amin Madani 27:04
That's a great question, Ameer. I don't know if we have all the answers to that. I think we're just, kind of, seeing this starting to be introduced into the OR. And we've started doing surveys and, you know, qualitative studies and needs assessments, and things like that, for end-users, to kind of figure out how they want this technology integrated. I think that there's, kind of, 2 main themes that, you know, you can kind of see in this. And when I say AI, I think I just want to emphasize, we're talking about a lot of the work that we're doing: computer vision. AI, like you said, can be used in so many ways; you can make prediction models on whatever datasets you want. It can be for perioperative decisions, intraoperative navigation, and things like that. Our area has been really focused on the operating room and algorithms that kind of give you navigation and guidance. So for that, specifically, I think that there's sort of 2 areas, 2 themes that we've noticed. One is intraoperative deployment, and one is postoperative, on videos that have already been recorded. I can see this being used in a number of different ways: for educational purposes, for quality improvement, for feedback, for assessment, for coaching, you know, and it ultimately depends on the end-users and what their use case is. Like, a lot of people are not very keen to use this in the OR, we've noticed, especially since this is a very new technology; you know, there's not a lot of validity evidence behind it and things like that. But they're very happy to take their recorded videos, sitting on their computer, and see what the AI would have suggested, and analyze their performance, and then use that as, sort of, like, an educational experience. Or you can use it for residents, as well. You know, perhaps they can get a safety score of some sort. You can use this for video-based assessment of residents.
And we're actually doing a randomized trial right now, using this with residents to see whether or not it actually improves their performance. Do they become safer? Does it change their practice? And things like that. So I think there's a huge opportunity for postoperative integration. The second piece is intraoperative, and that's been more challenging, obviously, because you have to integrate a technology into the OR; it's got to work with the workflow and the equipment and the nurses and, you know, that's a bit more challenging, and we're not there yet. Like, the way we've kind of done the initial deployment was, I literally take — we designed a pipeline where I can take the laparoscopic tower, plug it into my tablet that I'm talking to you with right now, and basically, like, have the video feed from the tower on my tablet and get the AI on a monitor. But who's going to have, you know, who wants to have another computer in the room guiding them? Like, that's just not really well-integrated into the OR workflow. So the question becomes: how would you use this in the OR? I think, for some people, it may be the situation where, you know, they don't want to invest in extra hardware to have this integrated into their OR, and they just want a quick, like, assessment to say, "Hey, am I on the right path? Yes or no? Okay, I don't need it anymore. Let me continue the surgery." So you can use it that way. Like the example that I gave, where you can take your, like, your iPhone or whatever, take the camera on it and put it on the monitor, and then just get the AI to give you an assessment. That's one of the ways we see this technology being scaled in every operating room around the world.
Another way is, there's going to be a need for some people, especially when algorithms become a lot more powerful, to have a, sort of, plug-and-play solution, where, you know, you literally plug the AI into the monitor or into your integrated OR, and it gives you on-demand, high-performance AI inference in the OR, and you can turn it on and off whenever you want. Now, that begs the question: people who are doing unsafe maneuvers are oftentimes unaware that they are doing unsafe maneuvers, and they may not have the awareness to turn on an AI, you know, algorithm to guide them, either. So a lot of questions remain. I don't know; you know, I don't think I have all the answers. But I think, in my opinion, we're going to see AI, sort of, integrated as, sort of, like, a fly on the wall, kind of watching you. You know, if it sees that you're about to do something, maybe it gives you, like, an alarm of, like, hey, you know, just maybe rethink what you're doing. Try this instead. Or, you know, maybe you should bail or change your approach.
Ameer Farooq 31:48
Yeah. It's fascinating to think how things might look in the future. We have to ask, since you mentioned it: what do you think ChatGPT is going to do to surgery? Like, is this just —
Amin Madani 32:02
I don't know. Look, ChatGPT is a big game changer, but we don't really know where its applications will lie; it's that powerful. But it also has a lot of problems, too. It's really good at generating things. We were joking around with it; I typed into ChatGPT, "Write a song in the style of Taylor Swift about surgical AI," and it wrote me a song, a really funny song. It was really good. Things like that, it's really good at. It's actually changed a lot of things, like programming. One of the engineers I work with made a joke about this. One of our proof-of-concept projects for 3D modelling and computer vision took him 2 years to code. When ChatGPT came along, it took him literally 10 days to do the same thing. So there's actually a joke amongst computer scientists that the new programming language is English, because you just need to type what you want, and it'll give you the computer code for it. You don't have to do it from scratch. You can also see ChatGPT being used for patient care. Patients have questions about their surgery, a quick thing that they need answered; instead of trying to call the hospital to figure out their answers, they get ChatGPT to help them. You can see a lot of potential there. There are a lot of problems, too. As you know, ChatGPT will also give you references that don't exist; it'll make things up, which is a big problem. So it's a very powerful thing; I just don't exactly know where it's going to come into play, to be honest, in terms of clinical applications.
Ameer Farooq 33:53
It's all very exciting, all kind of nebulous, all kind of interesting. So it's an exciting time — What's that?
Amin Madani 34:02
The field is moving way too fast; faster than we can keep up, you know, not just in terms of knowing what to do with this, but also how to regulate it as well.
Ameer Farooq 34:11
That's a whole discussion for another day, about, sort of, the ethics around this and how we roll it out in a way that's safe, and all that kind of stuff. Because I think that is an important consideration, particularly around ChatGPT, but you wonder about the tools that you're developing, how they're going to be implemented. What kind of legal ramifications does it have if your GoNoGo said, "Don't go there," and you went there anyway? You could see some interesting scenarios coming up down the road, and I don't think we know the answers to any of these things. Yeah.
Amin Madani 34:50
Absolutely, and I think we have precedent for that. Ultimately, I think it's going to be just like any technology that you use in the OR: you have to take all of that with a grain of salt and use it at your own discretion, because these prediction algorithms have their confidence intervals and things like that. That's part of what we've integrated into our newer platforms: information about how confident [inaudible] is about its predictions for you.
Ameer Farooq 35:18
Dr. Madani, thank you so much, again, for joining us. It's been an absolute pleasure. One of the questions that we ask our guests at the end of the show is: If you could go back in time and give yourself advice, as a chief resident, or perhaps even as an early attending — although you're probably still a young buck — what advice would that be?
Amin Madani 35:42
Oh, that's — I don't even know. That's a tough question to answer. I think it can be anything, not necessarily related to surgical AI. The number 1 thing I would tell myself is: Trust your training. When you go out into practice, there are a lot of unknowns, and you're not sure of yourself; you question your abilities and things like that. One of the things I would go back and tell myself is, "It's okay. You've got this. You've got the skill set, and you'll be fine." Especially in Canada, we get really good training all around. The other thing is to not be afraid to ask for second opinions. Ask for help. Just, "Hey, I just want to run this case by you. This is what I'm doing. I know it's probably right, but I just want to run it by someone." That kind of helps build your confidence so that you can finally fly from the nest, or whatever that analogy is. That'd be, I guess, the one piece of advice I would give.
Ameer Farooq 36:54
You've been listening to Cold Steel, the official podcast of the Canadian Journal of Surgery. This episode was produced and edited by Kirsten Allen, one of the new members of the Cold Steel team and a medical student at Queen's University. If you have comments or questions, please email us at podcast.[email protected]. Thanks for listening.
Posted February 5, 2024