Artificial intelligence and fake news: imagining the potential for AI chatbots

Feb. 23, 2023

If you are like me, you are intrigued by the recent hype around ChatGPT and its potential applications in healthcare and the supply chain. I have started using the AI chatbot to learn more about the tool itself, its potential risks and opportunities, and whether it could go beyond informing how the healthcare supply chain operates to actually reinventing it. Here is what I have learned (or surmised) so far, by reading about the experiences of others, through my own direct “conversations” with ChatGPT, and by doing a bit of creative visioning of my own.

It’s impressive

What was most intriguing to me was the experience of Jeremy Faust, MD, editor-in-chief of MedPage Today. In a recent video,1 he described how he gave ChatGPT some clinical factors about a hypothetical patient and asked it to provide the most likely diagnosis, as well as other possible ones. Even when he used medical jargon, Dr. Faust says the tool provided him with what he also considers to be the most likely (and common) diagnosis, costochondritis. ChatGPT also offered a number of other possible (and, Faust adds, accurate) diagnoses, despite the minimal information provided.

It’s concerning

Dr. Faust’s experience also revealed a flaw related to the likely diagnosis. In its readout, ChatGPT said that costochondritis is made worse by oral contraceptives (OCPs), which Dr. Faust had noted the patient was taking. When he asked for the evidence regarding the impact of the OCPs, ChatGPT provided what appeared to be a reference to a peer-reviewed article, which Dr. Faust discovered did not exist. ChatGPT had made it up, using the name of a real journal and even the names of authors who had published in it.

Given that some earlier AI chatbots, from Microsoft and Meta, had allowed racist, antisemitic and/or false information to proliferate, Dr. Faust’s experience raises real concerns about the technology’s potential to spread inaccurate and even life-threatening recommendations, which, as we saw during the pandemic, can go viral and prove deadly.

It’s self-aware, sometimes

When Dr. Faust challenged ChatGPT about the reference, he says it “stood its ground.” But when another researcher asked about ethics and AI, ChatGPT explained that its “ethical behavior” is based on the values “built into the algorithms and decision-making processes it uses.” Problems with algorithmic bias have plagued healthcare before. A commonly used algorithm for care decisions was found to have reduced the amount of care Black patients receive compared to equally sick White patients.2 The algorithm used the amount of money spent on a patient’s care as a proxy for health status; the problem is that spending on care for Black patients has historically been lower, more as a result of access and affordability issues than of actual health status. The ethics of AI algorithms, including those used by tools such as ChatGPT, depend upon the ethics, values and/or (unconscious) biases of the humans who program them. That raises the question of the implications of ChatGPT’s other ability, generating Python computer code, but that’s a topic for another day, and another column.
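To make the proxy problem concrete, here is a minimal sketch in Python of how ranking patients by historical spending rather than by actual need can under-enroll a group whose spending has been suppressed. The groups, numbers and enrollment rule are invented for illustration; this is not the algorithm examined in the study.

import random

random.seed(0)

def simulate_patient(group):
    """Return (true_need, annual_cost) for one simulated patient."""
    true_need = random.uniform(0, 10)  # same severity distribution for both groups
    # Illustrative assumption: at equal need, historical spending runs lower
    # for group B because of access and affordability barriers.
    access_factor = 1.0 if group == "A" else 0.8
    annual_cost = true_need * 1000 * access_factor + random.gauss(0, 500)
    return true_need, annual_cost

patients = [(g, *simulate_patient(g)) for g in ("A", "B") for _ in range(5000)]

# The flawed design: treat spending as "health status," rank by cost,
# and enroll the top 25% in a care-management program.
patients.sort(key=lambda p: p[2], reverse=True)
enrolled = patients[: len(patients) // 4]

for group in ("A", "B"):
    members = [p for p in enrolled if p[0] == group]
    share = len(members) / len(enrolled)
    avg_need = sum(p[1] for p in members) / len(members)
    print(f"Group {group}: {share:.0%} of enrollees, average true need {avg_need:.1f}")

In this toy simulation both groups have identical underlying need, yet the group with historically lower spending wins fewer program slots, and its members must be sicker to qualify, a pattern analogous to what the researchers documented.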

It’s got info, not new ideas

Dr. Faust’s experience offers a clear example of how, when prompts are crafted carefully, ChatGPT can satisfy a long-held request of physicians: access to the most relevant information when and where they need it. As just one example, think how something like ChatGPT could provide real-time access, in the middle of a case, to information often buried in text-heavy instructions for use (IFUs), or how it might change physician reliance on the manufacturer representative in the OR.
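To make that concrete, here is a minimal sketch in Python of the retrieval step such a tool might perform: scoring IFU passages against a clinician’s question by keyword overlap. The device, passages and question are all invented, and a real system would use a language model rather than this toy scoring, but the underlying idea of surfacing buried IFU text on demand is the same.

import re

# Words too common to carry meaning in this toy example.
STOPWORDS = {"is", "the", "a", "to", "in", "this", "for", "do", "not", "and", "with", "what", "are"}

def tokenize(text):
    """Lowercase, strip punctuation, and drop stopwords."""
    return [w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS]

def score(passage, query):
    """Count how many of the query's content words appear in a passage."""
    query_words = set(tokenize(query))
    return sum(1 for w in tokenize(passage) if w in query_words)

# Invented IFU snippets for a fictional device.
ifu_passages = [
    "Do not resterilize. Device is intended for single use only.",
    "Flush the catheter lumen with heparinized saline before insertion.",
    "MRI conditional: safe at 1.5T and 3T under the conditions listed.",
]

question = "What are the MRI safety conditions for this device?"
best = max(ifu_passages, key=lambda p: score(p, question))
print(best)  # prints the MRI-conditional passage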

When I asked ChatGPT how AI could improve the healthcare supply chain or the selection of the best products for patient care, it gave me some very valid responses, such as the ability to identify and mitigate potential supply chain disruptions, or to gather data on product efficacy, cost effectiveness, and clinician and patient satisfaction. I was most interested in its stated ability to analyze large amounts of data to predict demand and supply patterns, but it failed to make the link (at least in the answer I received) to corresponding advances in AI that predict the future healthcare needs of patients. What intrigues me is how that data, when available for large enough patient populations, could support advances in supply chain forecasting, production, and fulfillment.
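As a simplified illustration of what demand prediction can look like, here is a sketch in Python of a basic exponential-smoothing forecast over invented weekly usage figures for a single supply item. A real system would fold in procedure schedules, case mix and, eventually, population-level predictions of patient need; the item and the numbers here are hypothetical.

def exponential_smoothing(history, alpha=0.3):
    """Forecast next-period demand by exponentially weighting recent usage."""
    forecast = history[0]
    for actual in history[1:]:
        # Blend each observed value with the running forecast; a higher alpha
        # reacts faster to recent changes in demand.
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

# Hypothetical weekly usage of a surgical supply item, trending upward.
weekly_usage = [120, 135, 128, 150, 160, 155, 170, 180]

print(f"Next week's forecast: {exponential_smoothing(weekly_usage):.0f} units")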

It’s working on it

Finally, when I asked if ChatGPT could generate new ideas or concepts, it explained that doing so “would require a fundamental shift in the way AI systems are designed and trained ... [and] a better understanding of how the human mind generates new ideas and concepts.” Currently, ChatGPT is trained on large amounts of data, but, interestingly, nothing from after 2021. Given just how much new information has been generated in medical science in that time, ChatGPT clearly has a lot of learning to do, as do I on the risks and opportunities of this game-changing technology.

References
1. https://www.medpagetoday.com/opinion/faustfiles/102723
2. https://www.science.org/doi/10.1126/science.aax2342

About the Author

Karen Conway, CEO, ValueWorks

Karen Conway applies her knowledge of supply chain operations and systems thinking to align data and processes to improve health outcomes and the performance of organizations upon which an effective healthcare system depends. After retiring in 2024 from GHX, where she served as Vice President of Healthcare Value, Conway established ValueWorks to advance the role of supply chain to achieve a value-based healthcare system that optimizes the cost and quality of care, while improving both equity and sustainability in care delivery. Conway is former national chair of AHRMM, the supply chain association for the American Hospital Association, and an honorary member of the Health Care Supplies Association in the UK.