Foundation in Digital Education and AI

Lior Levy, MD

Published July 3, 2024 | Clinics in Medical Education 

Issue 7 | Volume 1 | June 2025

AI is such a vast and fast-evolving topic that the tools available at the beginning of the class were no longer the most relevant by the end. For instance, OpenAI's o3 and o4-mini were released in April 2025 as advanced reasoning models, enabling a level of deep thinking and complex problem solving that simply did not exist earlier that year. In 2024, OpenEvidence was probably one of the most used AI platforms for clinical decision support; it has since somewhat fallen out of favor. More on that later.

How Can AI Help

While we try to keep up with a field advancing at lightning speed, today we can focus on how AI can help us in medical education and on its current limitations and dangers. Generative AI (GAI), i.e., AI capable of generating new content, can be a powerful assistant for us medical teachers. It can help, for instance, in:

  1. Creating content that is adjusted by level of difficulty (interns vs PGY4, or adjusted based on performance) or tailored to certain ACGME milestones
  2. Creating or organizing high-quality content in a fraction of the time (lectures, journal clubs, slide creation from a Word document you created)
  3. Generating pictures, graphs for teaching
  4. Generating quizzes, MCQs for concept checks or repetition
  5. Quickly making difficult content we created more digestible for our learners
  6. Standardizing or automating assessments and feedback
  7. Expanding teaching modalities (podcasts, VR, etc.)
  8. Supporting faculty in curriculum development and delivery
  9. Providing 24/7 platforms for learners to practice differential diagnosis, step-by-step clinical reasoning practice and safe exploration without fear of judgment
  10. Analyzing audio or video of simulation sessions to provide neutral feedback (technology has a way to go there)
  11. Coaching faculty for simulation debriefings depending on the type of the debriefing chosen
  12. Increasing access for a more diverse group of learners (e.g., those with disabilities or ESL needs)

Think of GAI not as a teacher (no one can replace you) but as a supercharged teaching assistant that, when supervised wisely, can make medical education more efficient, more learner-centered, and more equitable.

The Presence of AI

A 2025 HEPI survey of 1,041 undergraduates (not specifically medical students) found that 92% now use AI tools in some form, up from 66% in 2024. For medical students, the figure was reported at 48.9% (though that data is a year old); it is likely much higher now, estimated closer to 80%. AI is no longer emerging; it is here, and we must meet learners where they are and guide its use. More specifically, our learners need guidance on accuracy, ethical use, data privacy, and when not to rely on AI.

Hallucinations

AI tools are known to hallucinate content or lack accuracy, which can be particularly critical in medicine. Whether the problem is outdated guidelines or plainly inaccurate information, we must cross-check content against current guidelines. Dr. Rodman often brings up a recent case study he uploaded to OpenEvidence, which recommended a treatment that would have been life-threatening for a patient with the condition he was describing. Moreover, AI has a tendency to be sycophantic (overly flattering and agreeable with all your opinions), which can lead learners to erroneously believe they are on the right track despite flawed clinical reasoning.

Ethical Concerns

Using AI to write an entire QI project, for instance, or copying an AI-generated answer into a take-home assessment without disclosure, undermines reflection as a professional development tool and may even breach the honor code of academic integrity.

Data Privacy

Public AI tools (ChatGPT, Claude, Gemini, etc.) are not designed for clinical use or protected data. Even “de-identified” data can become identifiable when rare diagnoses, institutional references, or timeline events are included. This creates a very real HIPAA violation risk. In a context I am familiar with, simulation, we must be very careful when uploading transcripts or videos from simulation sessions into an AI tool for summarization or feedback generation. If the material contains trainees' names or specific performance behaviors, it raises ethical concerns about educational data privacy. Learners need to have consented to having their performance data processed by external AI systems.

The Danger of Cognitive Deskilling

Cognitive deskilling is the decline in human abilities, such as reasoning, decision-making, and memory, due to over-reliance on tools such as AI or automation. In medicine, this could mean clinicians losing diagnostic intuition or judgment by deferring too much to technology. Evidence from various fields shows that interaction with generative AI can lower critical thinking through cognitive offloading.

As AI transforms medicine and education, we face both tremendous opportunity and real responsibility. From content creation to clinical decision support, generative AI is already a powerful tool in classrooms and at the bedside. But alongside its benefits come risks: hallucinations, ethical concerns, data privacy issues, and cognitive deskilling. Human-AI collaboration is complex, but promising studies (some from our own institution) are helping us navigate this space. Leading medical schools, including Harvard, are also integrating AI into curricula early.

The real question isn’t whether to use AI, but how to learn and teach it: intentionally, ethically, and effectively. The future of medical education isn’t humans versus machines, but humans empowered by them and guided by educators who still know how to think.

REFERENCES

1.  Freeman J. Student Generative AI Survey 2025. Higher Education Policy Institute (HEPI); 2025.

2.  Zhang JS, Yoon C, Williams DKA, Pinkas A. Exploring the Usage of ChatGPT Among Medical Students in the United States. J Med Educ Curric Dev. 2024 Jul 25;11:23821205241264695. doi: 10.1177/23821205241264695. PMID: 39092290; PMCID: PMC11292693.

3.  Lee HP, Sarkar A, Tankelevitch L, et al. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. CHI ’25. Association for Computing Machinery; 2025. doi:10.1145/3706598.3713778