The next time you read a message in MyChart from your doctor, that message might be from an AI bot.
According to a New York Times report, about 15,000 doctors and assistants across 150 US healthcare systems are using a new MyChart tool that drafts AI-generated responses to patient messages. MyChart is a popular patient portal used by many medical offices and hospitals.
By default, MyChart doesn’t tell the recipient that AI composed the message. However, some doctors are asking patients if it’s OK to use the AI tool, and some healthcare systems have opted to add a disclosure to each message.
Here’s how it works: When a medical professional opens a patient’s question to respond, they see a pre-written response, presented in the doctor’s voice, instead of a blank screen. The AI writes the response based on the question, the patient’s medical records, and the patient’s medication list. MyChart uses GPT-4, the same technology that powers ChatGPT (but a specific version that complies with medical privacy laws). The medical provider can accept the draft as is, edit it, or discard it entirely.
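To make that workflow a little more concrete, here is a minimal sketch in Python of a draft-then-review loop of this kind. Everything in it is hypothetical: the prompt structure, function names, and patient data are illustrative, and it calls OpenAI's public chat API rather than Epic's actual privacy-compliant GPT-4 integration.

```python
# Hypothetical sketch of a "draft, then human review" patient-message workflow.
# Not Epic's implementation; names, prompt, and data are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment


def draft_reply(patient_question: str, chart_summary: str, medications: list[str]) -> str:
    """Ask the model for a reply in the clinician's voice, grounded in chart context."""
    prompt = (
        "You are drafting a patient-portal reply on behalf of a physician.\n"
        f"Relevant chart summary: {chart_summary}\n"
        f"Current medications: {', '.join(medications)}\n"
        f"Patient question: {patient_question}\n"
        "Write a brief, plain-language reply in the physician's voice."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def send_to_patient(message: str) -> None:
    # Placeholder for the real "send to portal" step.
    print("Sending to patient portal:\n" + message)


def review_and_send(draft: str) -> None:
    """The clinician sees the draft and must accept, edit, or discard it."""
    print("--- AI draft ---\n" + draft)
    choice = input("Accept (a), edit (e), or discard (d)? ").strip().lower()
    if choice == "a":
        send_to_patient(draft)
    elif choice == "e":
        send_to_patient(input("Enter edited reply: "))
    else:
        print("Draft discarded; clinician replies manually.")


if __name__ == "__main__":
    draft = draft_reply(
        patient_question="Can I take ibuprofen with my current prescriptions?",
        chart_summary="58-year-old with hypertension, last visit 2024-03-02.",
        medications=["lisinopril 10 mg daily"],
    )
    review_and_send(draft)
```

The design point this sketch mirrors is that nothing reaches the patient until a human explicitly accepts or edits the draft.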
The hope is that editing and approving drafts, rather than writing every message from scratch, will let doctors respond to patients more quickly.
Naturally, there are some concerns. What happens if the AI hallucinates, offers up a bad answer, and a harried doctor approves it? Even though AI has been acing medical school exams and has proven useful to doctors, that risk is a fair worry.
Epic, the company behind MyChart, says doctors send fewer than one-third of AI-drafted messages without any editing, which suggests most are at least reviewing the drafts and catching errors. An Epic representative also noted that the tool is best suited to administrative questions, such as a patient asking when an appointment is or requesting to reschedule.