AI has arrived: My experience using a Medical Artificial Intelligence Scribe in Psychiatric Practice
Documentation in my job, meaning writing my clinical notes and filling out other necessary paperwork (e.g., notes for patients' work or school, government forms, etc.), takes up an additional 1-2 hours of my day. What this looks like in practice is that I typically see patients from 9 am to 5 pm, leaving documentation to the 8-9 am, 12-1 pm, or 5-7 pm time slots. As time-consuming and annoying as it is to bring 1-2 hours of work home with me, I am someone whose clinical care is improved through note-taking. When I write my notes from a clinical encounter, it serves as another pass through the information. It allows me to internalize updates from my patient so that I know the individual better, but it also forces me to review my clinical reasoning behind making diagnoses and treatment suggestions. There have been occasions when I told a patient my impression after the initial encounter and had to change it later because, after writing my note, I realized from the timeline and the symptoms endorsed that I had gotten it wrong the first time.
So there is value in writing my own notes, both for me and for patient care. But what I have long struggled with is whether it is worth it. Is the benefit great enough to justify eating up an additional 5-10 hours of my week? There is no question that paperwork contributes to physician burnout, and I know what that feels like. It is spending up to 2 hours of a given day on a patient's paperwork rather than on my personal relationships and hobbies. It is feeling resentful toward the work and emotionally less resilient.
Into this space steps the long-awaited medical artificial intelligence scribe, with many different companies offering the service and promising to save many hours of my time. Being a slow adopter, I held off. My notes are too much a part of my professional identity and the pride I take in myself as a psychiatrist. Allowing AI to take this over felt like a lazy shortcut, while also adding fuel to the fear that I may be training AI to take over my job one day, as it is already doing in other fields. But many of my colleagues raved about it, and I was offered a free trial, so I started playing around with it too.
One colleague in particular benefited the most. His first language was not English; in fact, it was not his second language either. He had moved to Canada from a war-torn country and begun practicing psychiatry here. Within a week of using AI, he was off to the races. He was using the medical AI scribe with every patient and loving it. It took the translating he had been doing with each note out of his hands and delivered summaries of his clinical encounters in seconds, with a thoroughness and clarity that would have taken him hours. To this day, he thanks me almost daily for introducing him to the program.
My own experience with the medical AI scribe impressed me as well. Rather than my usual "Follow-up in 4 weeks," the AI scribe would write down the exact date and time, and it would detail not only my medication interventions but my psychological interventions too. On average, the medical AI scribe saves me 1-2 hours daily, and even more when it comes to initial consultations. The notes generated are at the level of an excellent final-year psychiatry resident. I am truly grateful for and impressed with this technology, and I feel less dread from the burden of documentation.
Is it worth it? If what the companies say is true, and I have no reason to distrust them yet, that all patient data is gone permanently once I delete it and that the information is not used to train their language models, then yes. But there are some drawbacks that leave me unsure how I feel about using medical AI.
First, I notice that I do not know my patients as well as before. I read the note the AI generated, and because I did not write it, I have a harder time recalling exactly who the patient is and what we discussed. I really value having my patients feel like people I am invested in, and knowing specific details about them, but since using medical AI, I feel more lost in the first few encounters with a new patient, as if I am supposed to know this person but cannot remember anything about them. Because I offer longer-term follow-up in my practice, though, I think this issue becomes less of a problem over time.
Another downside has been the shortcuts it offers in my clinical decision-making. At times, the AI scribe will generate a note that is highly suggestive of a condition when the patient's symptoms have not met the criteria for diagnosis. This has caused me to doubt my own clinical impression and history, which can be healthy, unless you end up deferring to AI, which I could see earlier-career psychiatrists or learners being at risk of doing. You also need to take editing seriously, because sometimes the medical scribe gets even basic facts wrong.
My workaround has been to leave certain parts of the clinical note blank, prompting me to fill them in with my own clinical impression in my own words. This is time-consuming, but vital if I am going to feel confident about my diagnoses and treatment plan. I have also had patients who prefer that I not use the medical scribe, or times when the scribe malfunctions, and in these cases it is back to the "old school" way of doing things.
The last observation I will make is that we are prone to give AI the credit for generating an excellent clinical note when, in fact, AI can only make a note as good as the clinical encounter behind it. I catch myself at times when I glorify AI too much. We all need reminding that it is a person sitting with another person for the 2 hours of a clinical assessment who gets the work done, not AI. It is humans who do the work. This misplacement of credit, from the person doing the craft to AI, has been the default in the many headlines I have been bombarded with about AI, and very much at the expense of us humans and our creativity. So we must be very careful.
So far, the medical AI scribe has been for me and my colleagues what AI should be: a technology that makes my work more efficient and my documentation easier and better. It has not replaced me or the work that I do with the people who see me. Protecting these relationships from AI is the priority for me as we head into this new technological age.