It was December 2022 when I was first told about ChatGPT. "What is it?" I asked. "Serious stuff," I was answered, "not a game; try it yourself and ask it to write a text." I registered and entered the words "Narrative Medicine," and I received a very elegant two-page synthesis, clear and well structured, of what narrative medicine is. I then searched for more recent information, looking up my own work and that of other scholars in the same field. Still, the very polite standard answer was, "I apologize; I cannot retrieve information after 2021," possibly because I was using the free version.
ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched on November 30, 2022. It is notable for enabling users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies are taken into account as context at each stage of the conversation.
By January 2023, it had become the fastest-growing consumer software application, gaining over 100 million users. Some observers expressed concern over ChatGPT's potential to displace or atrophy human intelligence, to enable plagiarism, or to fuel misinformation.
ChatGPT was released as a freely available research preview, but OpenAI now operates the service on a freemium model due to its popularity. It allows users on its free tier to access the GPT-3.5-based version.
ChatGPT was built with a safety system against harmful content: sexual abuse, verbal and physical violence, racism, sexism, and other discriminatory material. Despite some inaccurate information, ChatGPT has a very kind, empathic, and warm narrative register, rejecting any rude text.
ChatGPT in the healthcare system
The promise of ChatGPT was also tested in the healthcare sector, and the results were published in April 2023 in a prestigious journal, JAMA Internal Medicine (6). How do doctors inform their patients about a diagnosis? What is their level of accuracy? And how empathetic are they with the patient's situation?
The study’s premise was this: “Virtual healthcare has caused a surge in patient messages concomitant with more work and burnout among healthcare professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that clinicians could review.” The objective was to evaluate the ability of an AI chatbot assistant (ChatGPT) to provide quality and empathetic responses to patient questions.
In a cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked) on December 22 and 23, 2022. The original question and the anonymized, randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed healthcare professionals. Evaluators chose "which response was better." They judged both "the quality of information provided" (very poor, poor, acceptable, good, or very good) and "the empathy or bedside manner provided" (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Outcomes were ordered on a 1-to-5 scale, and mean ratings were compared between the chatbot and the physicians.
Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for the chatbot than for the physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for the chatbot than for the physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
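For the reader who wants to trace where the two multipliers come from, they follow directly from the reported proportions as simple prevalence ratios:

\[
\frac{78.5\%}{22.1\%} \approx 3.6 \qquad\text{and}\qquad \frac{45.1\%}{4.6\%} \approx 9.8
\]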
In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed online. "Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft replies that physicians could edit. Randomized trials could further assess whether using AI assistants might improve responses, decrease clinician burnout, and improve patient outcomes" (6).
The clinical setting is very different from an open virtual forum. In daily practice, doctors and patients are physically present, with their personalities, competencies, and emotions. Translating these conclusions to real-life care settings will therefore take time and effort. Still, these results may chart the route for the ecosystem to be built between AI and healthcare providers. Beyond the authors' conclusion that AI such as ChatGPT could reduce burnout, burnout is also reduced when more nourishing and empathetic relationships are created with patients. It is precisely the narrative part, which ChatGPT covers with its longer answers, that real-life doctors somehow reject, as "sons and daughters of EBM." The reductionist model of biomedicine also shrinks the possibilities of language and the skills of empathy. This result looks very odd, since we would most likely have expected more accuracy but less relationship from AI.
Why these results? We said "sons and daughters of an EBM-cracy": of lessons in detachment from patients, with the teaching of a "clinical gaze" that should not look into patients' eyes. In which system? In healthcare organizations strangled by continuous downsizing and constrained to hyper-efficiency; this poisons the ecosystem of healthcare professionals and patients and increases the risk of rude answers, verbally violent replies, and, in an escalation, physical violence. Soft skills, or the humanities, are ironically called non-technical skills, so the very competencies most essential to develop are robbed of a proper name, in an ignorant and short-sighted way that overlooks the fact that there is a technique to developing empathy, communication, and teamwork. The outcome of this choice of terms is that doctors, while updating their competence, will preferably read only technical papers, neglecting papers from the humanities and failing to perceive that humanities and technique are intertwined in providing good care: diagnosis is also an art, not a mere technical process. Behind the chosen words lies a narrative competence.
If we do not repair this now, and urgently, the human touch of doctors and carers will be replaced by an Artificial Intelligence that will never suffer from compassion fatigue but will always display lovely compassion, without the needs of a human being.