Artificial intelligence has been used “behind the scenes” in health care for decades, but with the growing popularity of new technologies such as ChatGPT, it’s now playing a bigger role in patient care — including during routine doctor’s visits.
Physicians may rely on AI to record conversations, manage documentation and create personalized treatment plans. And that raises the question of whether they must get patients’ permission first to use the technology during appointments.
“While regulations may vary by jurisdiction, obtaining informed consent for using AI is often considered best practice and aligns with the principles of medical ethics,” Dr. Harvey Castro, a Dallas, Texas-based board-certified emergency medicine physician and national speaker on artificial intelligence in health care, told Fox News Digital.
“It ensures transparency and respects patient autonomy,” he added.
“Regulatory bodies and health care institutions may provide specific guidelines.”
Augmedix, a medical technology company in San Francisco, offers solutions that allow doctors to capture documentation using ambient AI technology.
“We repurpose the conversation that occurs between a doctor and a patient, and use that as the basis for creating a medical note, which is required for every patient visit,” CEO Manny Krakaris said in an interview with Fox News Digital.
Manual documentation, by contrast, can consume up to a third of a physician's day, Krakaris said.
“That’s a lot of wasted time spent on administrative tasks, which could be applied to spending more time with their patients and interacting with them on a very human level,” Krakaris said.
The AI technology can also help reduce physician burnout, Krakaris noted, as it can allow doctors to reduce their workload and spend more time with their families.
Based on surveys from Augmedix’s clients, patients generally feel a greater sense of satisfaction when the doctor pays full attention and listens to them during the visit, rather than being distracted or diverted by a computer screen.
In terms of HIPAA compliance with AI-generated documentation, things can get a little murky.
“HIPAA does not specifically require patient consent for the use of AI — artificial intelligence wasn’t even a term when HIPAA was created, so it has some catching up to do,” Krakaris said.
AI-generated documentation is permissible under HIPAA, he said, “as long as the intent is to use it to contribute to generalized knowledge — and that’s typically how this is used.”
There are also individual state laws that govern patient privacy, Krakaris noted, and physicians must adhere to those whether they’re using AI or not.
Among Augmedix’s clients, Krakaris said that obtaining AI consent is generally part of the patient intake process.
“It will vary from one enterprise to another in terms of how that is done,” he said.
Some practices require only verbal consent to use ambient technology to help generate the medical note, while others require written consent.
Overall, most patients are open to the use of AI in the doctor’s office, with the typical opt-in rate across all of Augmedix’s customers averaging about 99%, the company said.
“So there hasn’t been any kind of widespread hesitation on the part of patients to use AI or to take advantage of this technology,” Krakaris said.
Some patients, however, may have concerns about privacy, data security or the impersonal nature of AI, Castro pointed out.
“Ethical considerations, mistrust of technology or cultural beliefs may also deter consent,” he told Fox News Digital.
“It’s essential to address these concerns with empathy and integrity, and I always ensure that patients understand that I do not violate HIPAA laws,” he added.
People in different age groups often respond to technology differently, Castro said.
“Education is vital to addressing concerns about AI,” he said. “It’s realistic and ethically responsible for physicians or health care staff to provide clear explanations and education about AI’s role in care.”
“This fosters trust and empowers patients to make informed decisions.”
In Krakaris’ view, physicians are best positioned to explain the technology, since its use occurs during the encounter with patients.
“That’s the perfect time to do it — at the point of care,” he said.
Each of Augmedix’s clients also receives a one-page laminated description for patients that fully explains what the AI does and how patient data is protected.
“After they have a chance to review that, they’re asked whether they opt in or not,” Krakaris said.
There are risks to “blindly relying” on large language models to summarize the doctor-patient encounter, Krakaris said, especially given the current shortage of health care providers.
“The large language models are prone to errors — it’s been widely documented,” he said.
“And so you need to provide guardrails to ensure that those errors are removed from the final medical note,” he said.
In Krakaris’ view, that guardrail is human judgment.
“A human expert needs to apply their expertise to that final product,” he said. “The technology isn’t nearly good enough today to be able to do that.”
As the integration of AI in health care continues, Castro stressed the need for a commitment to “ethics, integrity and patient-centered care.”
“Emphasizing transparency, informed consent and education will ensure that AI can enhance, not replace, the human touch in medicine.”