
An extra set of eyes on radiology scans, double-checking for signs of prostate cancer. A green light telling surgeons when it’s safe to operate on trauma patients. A reminder system that finds and flags test results that need to be followed up on.
Artificial intelligence has bounded into the mainstream, into personal lives, classroom assignments and work meetings — so it should be no surprise to find it in doctors’ offices and emergency rooms, too.
Nationwide, according to a federal brief, hospitals’ use of AI tools is growing rapidly. In 2023, 66% of hospitals used predictive AI tools in their electronic record systems. A year later, that number was up to 71%.
As AI saturates nearly every aspect of our modern world, some medical applications run directly parallel to the types of tools we’re already familiar with. Many doctors, for instance, are using AI tools to listen to, transcribe and summarize their patient visits. Medical offices are using automated scheduling tools to navigate patient appointments and cancellations.
These administrative tools, while not the most exciting, are proving to be hugely important. By reducing medical providers’ workloads, these tools can help curb physician burnout, a problem that has plagued the medical field for years.
But in the field of medicine, there are also much more dynamic — and controversial — applications.
Artificial intelligence tools can be used in clinical processes and decision-making, too, interfacing either directly with patients or with those patients’ care plans. The people who are working most closely with the development and implementation of these tools are excited. There are so many backstops that AI can provide, they say, to keep medical providers from making mistakes and to help understaffed emergency rooms respond more effectively to patient needs.
Many of these tools are either in use or under development at North Texas hospitals, too.
The people who are most excited about AI in health care describe the technology as “transformative.”
As AI continues to evolve, day by day, the central question for health care leaders — including those in Texas — is no longer whether they’ll use the technology in their medical facilities. The question, now, is how they’ll make sure the technology is improving patient care instead of imperiling it.
The caveats
AI tools of all sorts come with caveats.
ChatGPT, among the most popular consumer-facing AI tools, has a caveat written at the bottom of the webpage. “ChatGPT can make mistakes,” the message says. “Check important info.” Google’s AI tool contains a caveat, too, in a sidebar. “Generative AI is a work in progress and info quality may vary,” it says.
AI mistakes or hallucinations may not have life-altering consequences when a user is looking for help rewriting emails or optimizing a to-do list. When AI tools are applied to medical diagnosis and decision-making, though, the stakes are significantly higher.
That’s part of why doctors and other health leaders emphasize that AI tools, at this stage in their evolution, are meant to assist medical professionals — not replace them.
Xiaoqian Jiang — a researcher and the director of the Center for Secure Artificial Intelligence for Healthcare at UTHealth Houston — said that many of the existing tools perform well in straightforward medical cases. The same isn’t yet true, though, for complex cases.
“I think we are on the edge, but many of the models we currently have are still not actually to the level of the expert,” Jiang said. “A lot of the time, sophisticated scenarios still need human judgment.”
Even tools that do work well can still make mistakes or erroneous connections, which a human eye may be able to suss out before any damage is done.
AI is evolving rapidly, though, and in many ways it’s developing outside the boundaries of existing rules and regulations.
Dr. Ryan Choudhury, a hospice and geriatrics physician at University of North Texas Health Fort Worth, said he thinks AI has outpaced governmental and safety regulations.
“It feels like the government is probably five years behind on where they need to be in terms of legislating and helping guide what this looks like,” Choudhury said.
A number of health experts pointed to liability law as one protection mechanism.
From a legal perspective, doctors remain responsible for the care they provide, no matter what outside tools they’re using.
Angela Clark is the director of the Urology Research & Education Foundation. The organization was created by Dr. Pat Fulgham, a urologist who practiced at Texas Health Presbyterian Hospital Dallas for 35 years.
Clark and Fulgham said doctors’ legal liability is a built-in protection mechanism, preventing providers from leaning too heavily on AI tools.
“The providers are still held accountable, liable, for whatever they diagnose,” Clark said.
“Or fail to diagnose,” Fulgham added.
Even with those caveats on AI, experts say there are myriad ways the tools can help doctors do their jobs better. And there are some things, AI proponents say, that these tools can do even better than a human doctor can.
The applications
Even just looking at clinical and patient care applications, there are more potential uses of AI tools than could be covered in any one article.
But Dallas-Fort Worth doctors and health care leaders gave some examples of applications they’re focusing on, to give a sense of what role AI could increasingly play in the U.S. health care system.
Fulgham said there are AI tools that double-check radiology scans to identify risk factors that a human radiologist might have missed. That could help to ensure accurate diagnosis of prostate cancer, he said.
“It’s not meant to replace the radiologist,” Fulgham said, “but it may point out something that was inobvious to them.”
Similarly, there are tools that can look over a biopsy and assist a pathologist in determining how aggressive a patient’s cancer is. That information can then be used to inform a treatment plan.
Dr. Brett Moran, the chief health officer at Parkland Health, which is Dallas County’s public hospital system, pointed to another soon-to-be implemented tool, which has its roots in a problem he’s seen firsthand.
Years ago, Moran said, a patient came into the emergency room for chest pain. The medical staff sent the patient for a CT scan, primarily to look for blood clots. The scan turned up no blood clots, but there was a small nodule in the patient’s lung. Separately from the chest pain, the staff told the patient, he should follow up on that nodule.
“In all the hoopla of the ER,” Moran said, “it didn’t sink in.” The patient didn’t go for follow-up scans.
A year later, the patient was admitted to Moran’s care. The patient had cancer and, by then, it had spread through his body.
“It’s a story that we’ve seen too often, and it really bothered me and it stuck with me,” Moran said. “This isn’t a single doctor that failed, this is a system failure.”
Parkland now has a team that follows up manually with patients, based on flags that have been raised by radiologists and other medical providers. But when a provider is treating a specific problem, and trying to juggle a large number of patients, they may forget to go back through scans and flag unrelated issues.
“What we needed was a more automated solution,” Moran said.
Soon, Moran said, Parkland will switch on an AI tool built by the Parkland Center for Clinical Innovation. The tool will look through the interpretations of medical scans and flag potential follow-ups that patients might otherwise miss.
It’s an example of an area where AI can shine.
The tool is not necessarily more accurate or smarter than the medical providers — but it’s indefatigable. It won’t forget to go back through the scans. It won’t be worn down by the tedious work of sorting through a ream of documents. It won’t get tired of the repetition.
It’s also an example of a tool or process evolving to include AI, as the technology has developed.
Joe Longo — chief digital information officer at Parkland Health — and James Gaston — chief data officer at Parkland — say the health system has a wide variety of AI tools already in use.
Some, such as an early warning system that alerts providers when a patient is heading toward coding (going into cardiac or respiratory arrest), have been in use for years. The original version of the system wasn’t called “artificial intelligence” at the time it first rolled out, but it falls into that category now.
The goal of that system, and the other tools that Parkland and the Parkland Center for Clinical Innovation are working on, is to solve an actual problem in the hospital. AI tools are capable of all sorts of things. But if those tools are providing a solution where there is no problem, then they aren’t particularly useful to a health system.
“All the vendors are throwing spaghetti at the wall right now,” Gaston said. “We’re trying not to just spend money and be excited about AI; we’re trying to make sure we’re delivering that value for the organization, for our patients.”
Reality vs. the ‘hype cycle’
When health leaders try to explain the potential impact of AI, they speak in sweeping terms.
Several health experts who talked to The Dallas Morning News compared it to electronic health records, which were adopted on a widespread basis about a decade and a half ago. The transition from paper to digital records was a massive lift for governmental and health organizations.
Longo, at Parkland, went back a bit further.
“I would parallel it to the advent of the internet,” he said.
There are some who worry that the transformative power of AI in health care might be exaggerated, at least based on what’s in use now. Paige Nong, a researcher and assistant professor in the University of Minnesota School of Public Health, said the majority of in-use tools are on the administrative and operational side of things, rather than patient care.
“There is so much public hype and excitement about AI,” Nong said, “and the claims that are made in public-facing ways are often a little misaligned with what’s actually happening in the health care system.”
There are also reasons to be cautious about the tools that are patient-facing: A tool that’s still in development, for instance, might sound like it’d be an amazing help to hospitals. But if it doesn’t actually work, it might make things worse instead of better.
Nong pointed to one notorious instance. The massive health records company Epic Systems rolled out a tool that claimed to be able to predict sepsis. Sepsis is the body’s extreme and life-threatening reaction to a major infection, and it’s a significant threat to hospitalized patients.
But when journalists and researchers looked into the tool, they found that it did not work as advertised. (Epic Systems later revamped the tool, according to a STAT News report that followed up on the outlet’s investigations.)
Jiang, at UTHealth Houston, has seen a similar example firsthand.
At a competition for a sepsis prediction tool, he saw models that used, as a top prediction measure, whether the patient had their blood drawn in the middle of the night. But a blood draw in the middle of the night won’t lead to sepsis; it’s instead an indication that a medical provider is already concerned about the patient’s health.
“This is not a biological feature, but a human artifact,” Jiang said. “Probably the nurses already realized, otherwise nobody goes to the bed and draws blood.”
Longo, for his part, is aware of the possibility of getting swept up in a “hype cycle.” But he thinks the excitement around AI in health care is more than just that.
“I’m not one that gets overexcited about new things,” Longo said, pointing to the recent blockchain frenzy and noting that he pushed back on the excitement about that technology.
“Every year or two, people get amped up over certain new technologies,” he said. “Machine learning and AI, that’s the first one where I’m saying, ‘This has legs to be transformative.’”