Artificial intelligence (AI)-powered programs are increasingly taking over various medical testing functions. As an LNC, you may have worked on cases where the reliability of a pulse oximeter or other measurement device came into question. The influx of AI into medicine, while it may generate more accurate and certainly faster results, also increases the potential for erroneous readings. That means LNCs need some familiarity with these developments.
I’ve listed here some of the most prominent programs currently in use, mainly to give you a picture of how much AI technologies have become part of medical testing and an idea of the range of functions AI now performs. You can be sure that there will be many additions to these.
This is a good time to mention again the “Garbage In, Garbage Out” (GIGO) truism: the output can never be better than the data that went in.
Arterys: This company has designed a product that reduces the time required for a cardiac scan from about an hour to six to ten minutes. It obtains data about heart anatomy, blood-flow rate, and blood-flow direction.
Enlitic: This program analyzes radiological scans up to 10,000 times faster than a radiologist, and its designers claim that it is 50% faster at classifying tumors, with a zero percent error rate.
K’Watch Glucose: This product provides continuous glucose monitoring.
Qardio: This product provides a wireless ECG that works with a smartphone. The company claims that a person with limited medical knowledge can easily use it.
Sentrian: This product can monitor blood sugar or other chronic disease indicators. Presumably, it allows its user to draw on the data to anticipate a problem. Sentrian recommends changes in patient medications and behavior to prevent a medical crisis, with the goal of reducing hospitalizations and, in turn, medical costs.
What happens when the provider relies on faulty AI-driven equipment?
Your AI tool could be easier to deal with than your doctor
Now we move into a fascinating and perhaps alarming area AI has entered: the physician-patient consultation.
The particular example I am using involves ChatGPT. While the program can’t replace the value of a medical professional, recent research suggests that it may have a better “bedside manner.”
An article in JAMA (Journal of the American Medical Association) Network describes a study that compared how ChatGPT and doctors responded to 200 patients’ questions posted on r/AskDocs, a subreddit (a category within Reddit) with about 474,000 members. Users submit medical questions, and anyone can answer, but verified healthcare professional volunteers include their credentials with their answers.
Professionals in pediatrics, internal medicine, oncology, and infectious diseases scored the human and bot answers on a five-point scale that measured the quality of information and empathy in the responses.
They rated the chatbot’s responses as high in quality 3.6 times as often as the doctors’ responses, and as empathetic 9.8 times as often.
The length of the response may have influenced these ratings. An example cited in the study involved a person who feared going blind after splashing some bleach in the eye. ChatGPT delivered a seven-sentence response. A doctor wrote, “Sounds like you’ll be fine,” and included the phone number for Poison Control.
The clinicians involved in this study suggest further research into using AI assistants. They point out that, to some extent, this is already occurring, with reliance on canned messages or having support staff respond. An AI-assisted approach could give staff time for more involved tasks. They also believe that reviewing the AI responses could help both clinicians and staff improve their communication skills.
If the use of AI results in questions being answered quickly, to a high standard, and with empathy, people could avoid unnecessary visits to the doctor. That could ease the burden on those with mobility limitations, those who can’t take time off from work for an appointment, and those facing high medical bills.
The authors of this report are candid about its limitations. They only considered the elements of empathy and quality in a general way. They didn’t evaluate patient assessments of the AI responses.
They also acknowledge ethical concerns, especially the accuracy of AI responses, including false and/or fabricated information. Learn from the tale of the attorney who relied on ChatGPT to cite cases in a federal court brief, only to find that the cases were fabricated. He sure couldn’t sue AI this time.
In my view, they should have elaborated on this point. Artificial intelligence can never be more accurate or less partisan than the humans who supply its information.
In an evaluation separate from the r/AskDocs study, Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania, describes ChatGPT as, well, chatty. “It didn’t sound like someone talking to me. It sounded like someone trying to be very comprehensive.”
But would AI be able to detect conditions that require prompt attention? And if not, can you sue AI?
Researchers agree that an AI-generated diagnosis should always be backed up by human review. Imagine the legal issues that could arise from a chatbot misdiagnosis. Who would the claimant sue? The bot? Does a bot have a deep pocket?
Obviously, suing the bot would be impossible, but what about the designers? Would the supervising medical professional be held liable?
Another issue is built-in bias. If you have predominantly male and white people programming AI, you will end up with biased results.
The New York Times addressed this issue in an article that pinpointed multiple racial issues with AI. One example: facial recognition systems that could not identify Black faces.
In other studies, researchers found that AI seemed programmed principally to understand “whitespeak.” It gave less coherent answers to questions from Black people.
This poses additional questions. The U.S. has many residents who speak English as a second language. Imagine the difficulty an AI program would have understanding them.
Given increasing data about the discriminatory treatment of people of color in the medical system, the certainty that these inequities will extend to artificial intelligence programming is cause for concern.
Another issue that needs exploring is how people would feel if they knew they were talking to a bot and not a human. I saw no evidence in the JAMA article that questioners knew whether a bot or a human answered them.
How do you feel about this? Would you trust it? Would you feel cheated of a human response? Would you ever sue AI? If you’d like to post your answer here, I’d love to read it.
Pat Iyer is president of The Pat Iyer Group, which develops resources to assist LNCs in obtaining more clients, making more money, and achieving their business goals and dreams. She created the image for this post with AI.
Pat’s related websites include the continuing education provided on LNCEU.com, the podcasts broadcast at podcast.legalnursebusiness.com, and writing tips supplied at patiyer.com.
Get all of Pat’s content in one place by downloading the mobile app, Expert Edu at www.legalnursebusiness.com/expertedu. Watch videos, listen to podcasts, read blogs, watch online courses and training, and more.