Artificial intelligence has been transforming healthcare behind the scenes for years. But since ChatGPT launched in November 2022, physicians have gotten creative about using the chatbot to streamline clinical practice. One of the most exciting possibilities is using ChatGPT to write medical notes. 

The possibility of automating medical notes is so exciting because most doctors spend a third of their time on paperwork and administrative tasks, much of it clinical documentation. Physicians recognize the value of good clinical notes, but most hate charting. It often feels like they are spending their working hours satisfying regulatory and reimbursement requirements rather than caring for patients.

Healthcare has tried many solutions to doctors’ documentation problem: better EHR user interfaces, speech-to-text dictation, and simply delegating note-taking to someone else. All improve the situation, but none eliminates the problem. Someone still has to write the note.

What if AI-driven language models could perform doctors’ least favorite clinical tasks? Many companies think they can, and we may be at the precipice of a more extensive AI-driven transformation of clinical documentation.

But in the meantime, some doctors are taking matters into their own hands.

How to use ChatGPT to write medical notes

Many doctors are already using ChatGPT to write medical notes, and they generally report impressive results. The most effective method uses prompt engineering and follows a few simple steps:

  1. Open ChatGPT (or a similar program)
  2. Prompt the AI by specifying: A) the goals of the note; B) specific details about the patient, session, and treatment plan; C) your desired structure
  3. Edit the output and copy it into the EHR

Here’s an example of an effective prompt published in the Journal of Clinical and Translational Medicine, which produced an accurate SOAP note:

(Context) Write a concise and accurate health progress note. (Specific details) 35‐year female X, chronic lumbar pain (7/10) for 5 years, impact sleep and work. Third session. Med: acetaminophen 500 mg PRN—pain, amitriptyline 25 mg pain, sleep. Family history of chronic pain. Nil structure deformity or inflame. Sleep issues increase pain, stress and worry lead to sleep. Target sleep. Med review. Physio review. Appeared normal today with good eye contact and speech. Structure is subjective, objective, assessment, plan, interventions included today

The example above intentionally includes typos, but it still produced a ready-to-use SOAP note, which you can read here.
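
If you would rather keep a reusable template than retype the prompt at every visit, the same three-part prompt (context, specific details, desired structure) can be assembled and sent programmatically. The sketch below uses the OpenAI Python SDK; the model name is an assumption, the session details are the published, de-identified example above, and the output still needs the same physician review and editing as any ChatGPT response.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three prompt components from the steps above. The details string is the
# published, de-identified example; substitute your own de-identified details.
context = "Write a concise and accurate health progress note."
details = (
    "35-year female X, chronic lumbar pain (7/10) for 5 years, impact sleep and work. "
    "Third session. Med: acetaminophen 500 mg PRN pain, amitriptyline 25 mg pain, sleep. "
    "Family history of chronic pain. Nil structural deformity or inflammation. "
    "Sleep issues increase pain; stress and worry disturb sleep. Target sleep. "
    "Med review. Physio review. Appeared normal today with good eye contact and speech."
)
structure = "Structure is subjective, objective, assessment, plan, interventions included today."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": f"{context} {details} {structure}"}],
)

# Review and edit the draft before copying anything into the EHR.
print(response.choices[0].message.content)
```
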

Does this approach still require typing and editing? Yes. But with practice, it might save several minutes per patient encounter, and a few minutes per patient can lead to enormous time and cost savings.

Most importantly, AI-generated notes are surprisingly accurate. The physicians who shared the example above concluded that there is “great potential for integrating ChatGPT in generating health progress notes with effective prompts.” 

Similarly, a recent Stanford study published in JAMA Internal Medicine found that ChatGPT used with a prompt engineering method generated clinical notes “on par with those written by senior internal medicine residents.” 

Precautions for AI-generated clinical notes

Despite the promise of generative AI tools to streamline documentation, it’s essential to recognize the limitations of this technology.

First, ChatGPT’s note will only be as accurate as the information you give it. If the input contains errors or inconsistencies, so will the output.

Similarly, this method requires that you use effective prompting techniques. Users need to provide explicit context (e.g., “Write a concise and accurate health progress note”) as well as their desired structure (e.g., SOAP).

Physicians also need to be especially vigilant about privacy and security. You should never put protected health information into ChatGPT. If you use this prompt engineering method to write notes, strip out patient identifiers first and add them back later during editing, as sketched below.
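
As a rough illustration of that workflow, here is a minimal sketch that swaps real identifiers for placeholders before prompting and restores them while editing the note locally. The names, values, and helper functions are entirely hypothetical, and simple string substitution is not a vetted de-identification tool; it only shows the shape of the process.

```python
# Hypothetical placeholder map: the real identifiers never leave your machine.
identifiers = {
    "[PATIENT]": "Jane Doe",   # illustrative name, not a real patient
    "[MRN]": "0012345",        # illustrative medical record number
}

def strip_identifiers(text: str) -> str:
    """Replace real identifiers with placeholders before sending text to ChatGPT."""
    for placeholder, real_value in identifiers.items():
        text = text.replace(real_value, placeholder)
    return text

def restore_identifiers(text: str) -> str:
    """Put the real identifiers back while editing the generated note locally."""
    for placeholder, real_value in identifiers.items():
        text = text.replace(placeholder, real_value)
    return text

safe_prompt = strip_identifiers("Jane Doe, MRN 0012345, reports improved sleep after medication change.")
# ...send safe_prompt to the model, then, once you have the generated note:
# final_note = restore_identifiers(generated_note)
```
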

Finally, remember that ChatGPT was trained on Internet data, which means it can reproduce racial, gender, or cultural biases. Companies are working to mitigate this, but you should always double-check that a generated note doesn’t carry implicit biases inherited from the training data.

It’s best to use generative AI tools cautiously in any medical context at this early stage. But physicians using ChatGPT to write notes faster report promising results that point to exciting possibilities ahead.
