Artificial Intelligence and Point-of-Care Ultrasound

One of the greatest ongoing challenges of POCUS (point-of-care ultrasound) is educating existing physicians, residents, students, and others. There are simply not enough teachers for everyone who wants to learn. Clinicians would like to get the results from POCUS performed on their patients but have difficulty investing the effort required to learn, practice, and then become credentialed. Further complicating things for some is the dreaded self-doubting period, which could last months or years, where providers worry they may make a mistake and be ridiculed for it, or worse.

One potential answer is thought to be artificial intelligence (AI), as it seems to be for everything in medicine today. What good is AI in POCUS anyway? What if the only education required was to find the correct spot on the body to apply the probe? The algorithm would do the rest, and it would be more accurate than the best POCUS masters. Not only would training be truly minimized, perhaps to minutes, but the examination would be shortened as well. A few sweeps through the organs, whether the liver and gallbladder or the heart, may be enough for the AI algorithm to do its thing. This would mean all those busy clinicians really would get a great return on their time investment. And if the algorithm is that accurate and expert, providers will not be questioned easily when they document an AI ultrasound finding.

AI is an inescapable topic of sensational news stories and movies alike. AI is simply a machine approximation of human-like intelligence in task performance. The type most associated with image interpretation is deep learning. How does it work? Programmers develop software architectures roughly resembling layers of neurons in the cerebral cortex, with multiple connections between them. Each layer has a specific function and transmits signals to the neurons in the next layer via mathematical functions; signals can also flow in reverse as feedback during training. Such a deep network is often termed a convolutional neural network (CNN, or some variant on the name). It can learn to interpret images, whether chest x-ray, head CT, or ultrasound, by scanning each image one tiny part at a time, then pooling all of the neuron-like reactions to those tiny parts and coming up with an answer. Give it enough training data and such a CNN can become very accurate.
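To make the "one tiny part at a time" idea concrete, here is a minimal, illustrative Python sketch of the two building blocks described above: a convolution that slides a small kernel over the image patch by patch, and a pooling step that summarizes neighboring responses. The toy image, kernel values, and function names are invented for illustration only; a real diagnostic CNN stacks many such layers and learns its kernels from millions of labeled images rather than using hand-written ones.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel across the image one small patch at a time,
    recording each patch's weighted response (valid convolution, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def relu(x):
    # Nonlinear "neuronal" activation: keep only positive responses.
    return np.maximum(x, 0)

def max_pool(fmap, size=2):
    """Pool neighboring responses, keeping the strongest in each block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    blocks = fmap[:h * size, :w * size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3))

# A toy 8x8 "image" with a smooth left-to-right gradient,
# and a 3x3 kernel that responds to that gradient.
image = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3): the pooled map of local responses
```

The pooled output is what the next layer of "neurons" would see; stacking these convolve-activate-pool stages and learning the kernel weights from data is, in essence, what lets a CNN turn raw pixels into an answer.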

Well, imagine a CNN algorithm plugged into your favorite POCUS machine. The CNN is trained on the liver and gallbladder; it has seen millions of example images, both normal and abnormal. It can recognize liver anatomy and point it out for you, and the same for every detail around the gallbladder and biliary tree. It’s great at identifying pathology and can make measurements in the correct spots for the gallbladder wall, common bile duct (CBD), and more. Once again, who really cares? I spent 2 decades scanning the gallbladder, performing research studies, and publishing on it. Well, while it may not have been an issue for me, not everyone invests their free time like that. Yet, many would like to be able to put a probe on the abdomen, have the ultrasound machine tell them where to move it, point out pathology, and come up with a likely diagnosis. Did I mention it could happen in real time, at the patient’s bedside, while you are casually speaking to them? How useful would this be? It could substitute for years of training, maybe even more than 2 decades’ worth. There are other subtle benefits too. Although some studies show that CNN CT algorithms can catch pathology that radiologists miss, an individual CNN may not be as good at finding something a rare expert might pick up, at least for now. But the CNN never gets tired. It never gets hit with a massive wave of scans to read late at night or overwhelmed with clinicians calling to discuss imaging studies. Thus, even experts can benefit from such algorithms as an aid.

Not happy with the image quality due to patient body habitus or another factor? It turns out another class of algorithm can artificially improve image clarity and quality, and do so accurately without introducing false data. This has not yet been introduced into clinical POCUS use but is likely just around the corner. The key is to make sure the algorithm does not invent anything that is not actually there.

Imagine incredible ultrasound expertise from a short exam that required minimal training to perform. This scenario will come, but not this year or the next. As some speakers and authors have noted, AI coupled with POCUS is a big step toward the fabled and elusive “tricorder” first depicted in the 1960s Star Trek television series: an incredible hand-held device (one that does not even require body contact) that diagnoses maladies in a few short sweeps over the patient. The eventual outcome of approaching such a device is greatly increased speed, efficiency, confidence, and accuracy of patient assessment and diagnosis. The benefit of significantly decreased skill and training requirements will also pose some challenges for the workforce, but these are likely to appear gradually and may be hardly noticed.

What about combining other data feeds along with the ultrasound images? AI algorithms are already quite good at interpreting EKG tracings and even cardiac and lung auscultation. Studies analyzing digital auscultation signals with deep learning systems suggest they can detect more abnormalities than human listeners can. The result could be synergistic, adding redundancy in diagnosis, such as for abnormal lung or heart sounds detected during an ultrasound evaluation. Maybe other signals could be incorporated as well.

These algorithms just need data, lots of data, and that is the conundrum for people seeking to develop AI apps. What do you think about companies getting de-identified image data without provider and patient awareness? Do you think it would help you to have a smart machine that analyzed the images and made calculations within seconds? What about incorporating other diagnostic signals such as digital auscultation, EKG tracings, or maybe some other signal?
Share your thoughts on AI in ultrasound: comment below, or, AIUM members, continue the conversation on Connect, the AIUM’s online community.


Michael Blaivas, MD, MBA, FACEP, FAIUM, is an Affiliate Professor of Medicine in the Department of Medicine at the University of South Carolina, School of Medicine. He works in the Department of Emergency Medicine at St. Francis Hospital in Columbus, Georgia.