Research Article - (2026) Volume 4, Issue 1
Artificial Intelligence as a Strategic Lever for Academic Physicians: Navigating the Triple Burden of Clinical, Administrative, and Managerial Constraints
Received Date: Mar 01, 2026 / Accepted Date: Apr 06, 2026 / Published Date: Apr 14, 2026
Copyright: ©2026 Emmanuel Andres. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation: Andres, E. (2026). Artificial Intelligence as a Strategic Lever for Academic Physicians: Navigating the Triple Burden of Clinical, Administrative, and Managerial Constraints. Int J Med Net, 4(1), 01-07.
Abstract
Academic physicians today face an unprecedented convergence of escalating clinical demands, regulatory complexity, and institutional governance responsibilities that progressively erode the time available for their core scholarly missions. The electronic health record (EHR) alone now consumes more than half the average physician workday. Artificial intelligence (AI), particularly large language models (LLMs) and ambient AI scribes, has emerged as a potentially transformative technology that could partly rebalance these competing obligations. Early evidence suggests that AI-assisted documentation can meaningfully reduce charting time and physician burnout. In medical education, AI tools offer adaptive, interactive learning environments and facilitate rapid generation of pedagogical content. In research, AI can accelerate literature synthesis, manuscript structuring, peer-review assistance, and knowledge dissemination. However, significant concerns remain regarding hallucinations, algorithmic bias, medicolegal accountability, data security, and the risk of cognitive deskilling among trainees. This review critically appraises current evidence across the clinical, educational, and research domains, discusses governance and ethical imperatives, and proposes a framework for the responsible integration of AI into academic medical practice.
Keywords
Artificial Intelligence (AI), Academic Physicians, Patient Care, Hospital, Research, Education, Pedagogy, Management
Introduction
Academic medicine is organized around a foundational tripartite mission: patient care, education, and research. Yet the contemporary academic physician operates in an environment that has fundamentally transformed the allocation of time among these missions. Electronic health records (EHRs), regulatory compliance mandates, quality-reporting frameworks, and the proliferation of asynchronous communications have collectively generated a documentation burden that now consumes more than half of the average physician’s workday, leaving only approximately one quarter of their time for direct patient interaction [1,2].
This shift carries profound consequences for academic medicine specifically. Clinical faculty at academic medical centers are expected to maintain scholarly productivity, supervise trainees, and contribute to institutional governance simultaneously. Yet survey data consistently indicate that non-clinical responsibilities— including administrative work, committee participation, and residency program leadership—are expanding faster than institutional support for them [3,4]. The result is a chronic contraction of protected academic time that threatens the vitality of the research enterprise and the quality of graduate medical education.
Into this landscape, artificial intelligence (AI) has arrived with remarkable speed. Generative AI systems, built on large language models (LLMs) capable of processing and producing complex natural language, have within two years penetrated clinical workflows, medical schools, and research pipelines. The pace of adoption has outrun the evidence base, generating both enthusiasm and legitimate concern. The present review synthesizes peer-reviewed evidence on the applications of AI across the three axes of the academic physician’s mission, evaluates risks and limitations, addresses governance imperatives, and proposes principles for responsible deployment.
The Mounting Burden on Academic Physicians
Clinical and Administrative Overload
Ambulatory physicians across specialties spend, on average, the equivalent of a full additional workday per week on EHR tasks beyond their scheduled patient hours [2]. A study using event-log data from more than 186,000 ambulatory physicians found that the number of patient-scheduled hours compatible with a 40-hour workweek varies substantially by specialty and practice type, underscoring how deeply EHR workload infiltrates what was once considered “off-duty” time [5]. Burnout rates have responded accordingly: a recent national survey reported that 93% of physicians experience burnout regularly, with nearly half describing their workload as unsustainable [6].
For academic physicians, these pressures are compounded by documentation-intensive academic workflows: preparation of didactic content, grant applications, manuscript revisions, committee reporting, and accreditation compliance. A survey of academic emergency medicine faculty found that actual non-clinical hours worked per week exceeded contracted hours by a median of 6 hours, and that misalignment between preferred and actual time allocations—particularly the excess of administrative over research and teaching time—was a key driver of job dissatisfaction [7].
Erosion of Protected Academic Time
Department chairs at academic medical centers acknowledge that protected academic time—typically defined as one half-day per week for non-shift specialties—is increasingly nominal. Clinical production pressure, coverage demands, and documentation overflow have made genuine protected time difficult to guarantee [3]. A 2023 survey of midcareer academic physician-scientists found that many were struggling to maintain the research productivity required for promotion despite holding federally funded grants, reflecting structural tensions between clinical service obligations and scholarly output [8].
These constraints disproportionately affect early-career faculty and those in non-procedural specialties. The erosion of academic time is not merely an individual inconvenience; it threatens the pipeline of academic medicine, as junior faculty discouraged by impossible workloads increasingly opt for community or industry careers. Preserving the academic physician workforce requires structural solutions, among which AI-mediated efficiency gains represent one promising, if partial, intervention.
Artificial Intelligence in the Clinical Domain
Ambient AI Scribes and Documentation Burden
Among AI applications in clinical medicine, ambient AI scribes— systems that record clinical encounters in real time and generate draft clinical notes—have attracted the greatest evidence base and institutional attention. These tools leverage automatic speech recognition coupled with large language models to produce structured documentation that physicians then review and edit.
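The two-stage pipeline described above, speech recognition followed by language-model drafting and mandatory physician review, can be illustrated with a minimal sketch. All names here (`transcribe`, `draft_note`, the speaker-labeled transcript format) are illustrative assumptions, not any vendor's API; real systems replace both functions with ASR and LLM services, but the control flow is the same.

```python
def transcribe(audio_segments):
    """Stand-in for automatic speech recognition: in this sketch the
    'audio' is already text, so we only format speaker-labeled lines."""
    return [f"{speaker}: {utterance}" for speaker, utterance in audio_segments]

def draft_note(transcript_lines):
    """Stand-in for the LLM step: sort utterances into a SOAP-like draft.
    A toy heuristic (speaker role) replaces actual language understanding."""
    subjective, objective = [], []
    for line in transcript_lines:
        speaker, _, utterance = line.partition(": ")
        # Patient speech feeds the Subjective section; clinician speech is
        # treated as Objective/Plan content in this simplified example.
        (subjective if speaker == "PATIENT" else objective).append(utterance)
    return {
        "Subjective": " ".join(subjective),
        "Objective/Plan": " ".join(objective),
        # The draft is never auto-finalized: physician review is required.
        "status": "DRAFT - requires physician review",
    }

encounter = [
    ("PATIENT", "I've had a dry cough for two weeks."),
    ("PHYSICIAN", "Lungs clear to auscultation; will order a chest x-ray."),
]
note = draft_note(transcribe(encounter))
```

The design point the sketch makes explicit is that the system's output is a draft object carrying a review-required status, mirroring the oversight model discussed throughout this review.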
A 2025 quality-improvement study across six health care systems enrolled 263 physicians and advanced practice practitioners who used an ambient AI scribe for 30 days. Burnout rates among ambulatory clinicians decreased significantly from 51.9% to 38.8%, with parallel improvements in cognitive task load, after-hours documentation time, and focused attention on patients [9]. A separate single-center randomized trial conducted at UCLA Health in 2025, the first of its kind, found that physicians using the Nabla scribe reduced average per-note documentation time by approximately 41 seconds, versus an 18-second reduction among controls, and that validated burnout scores improved by 7% in both AI-scribe arms [10].
A large cohort study examining ambient AI scribes across multiple academic centers demonstrated measurable improvements in physician productivity alongside documentation quality, though heterogeneity across specialties and practice settings warranted caution [11]. Physician perspectives on ambient AI scribes, documented across more than 30 academic and community sites, highlighted themes of time recapture, improved patient engagement, and reduced cognitive fragmentation, while flagging concerns about note accuracy and the risk of reducing clinical note writing to a validation exercise [12].
Safety Considerations and Hallucinations
The randomized trial from UCLA reported that AI-generated notes “occasionally” contained clinically significant inaccuracies, most commonly omissions and pronoun errors, with one mild patient-safety event documented during the study period [10]. The phenomenon of AI “hallucination”—confident generation of factually incorrect content—represents a non-trivial risk in clinical documentation. Systematic evaluation of LLM-generated clinical summaries against source EHR data has confirmed that factual errors can occur even in high-performing models, necessitating active physician oversight rather than passive acceptance [13].
Stanford researchers examining whether high LLM performance on medical benchmarks reflects genuine clinical reasoning or pattern recognition found that models achieving near-perfect accuracy on standard benchmarks suffered significant performance drops when questions were restructured to eliminate statistical patterns—raising fundamental questions about the reliability of AI in novel or atypical clinical scenarios [14]. These findings underscore that AI systems should be positioned as decision-support tools, not autonomous agents, and that their deployment must be accompanied by rigorous physician validation.
AI-Assisted Clinical Decision Support
Beyond documentation, LLMs are being evaluated for clinical decision support, differential diagnosis generation, and patient message triage. A randomized clinical trial published in JAMA Network Open in 2024 found that LLM assistance influenced physician diagnostic reasoning, with effects that were beneficial in some contexts and potentially anchoring in others [15]. A subsequent Nature Medicine randomized controlled trial demonstrated that GPT-4 assistance improved physician performance on structured patient-care tasks, suggesting a domain-specific role for AI augmentation of clinical judgment [16].
AI-generated draft replies to patient portal messages have been piloted at several academic medical centers, reducing the asynchronous inbox burden that has become a significant contributor to after-hours workload. Early evidence suggests these tools can produce patient-appropriate responses that require modest editing, though accuracy, tone, and medicolegal accountability remain active concerns [17].
Artificial Intelligence in Medical Education
Generating Educational Content
For academic physicians with teaching responsibilities, AI offers immediate, practical assistance in the production of educational materials. LLMs can generate clinical vignettes, multiple-choice questions, case-based learning modules, and structured summaries of scientific literature at a speed and volume that would require substantial time investment without technological assistance. A 2025 narrative review of AI applications in health-professions education, drawing on structured searches across PubMed, Scopus, and Web of Science, confirmed that AI tools can enhance knowledge retention, personalize learning, and extend simulation-based training [18].
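In practice, much of this time saving comes from reusable, structured prompts rather than ad hoc requests. The helper below is a hypothetical sketch of such a template; the function name, fields, and wording are illustrative assumptions, not any specific tool's interface, and the resulting string would be sent to whichever LLM the institution has approved.

```python
def make_mcq_prompt(topic, learner_level, n_questions=3):
    """Build a reusable, structured prompt for clinical MCQ generation.
    Structured templates make output format and review expectations
    explicit, rather than relying on free-form requests."""
    return (
        f"Write {n_questions} single-best-answer multiple-choice questions "
        f"on {topic} for {learner_level} learners.\n"
        "For each question provide: a clinical vignette, five options (A-E), "
        "the correct answer, and a one-paragraph explanation citing the "
        "key discriminating feature.\n"
        "Flag any content you are uncertain about for faculty review."
    )

prompt = make_mcq_prompt("community-acquired pneumonia",
                         "third-year medical student")
```

Note the final instruction in the template: consistent with the oversight principle discussed in this review, generated items are drafts that faculty must vet before use in assessment.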
Generative AI tutors capable of adapting to learner level, providing immediate feedback, and presenting cases in an interactive Socratic format have been piloted in undergraduate and postgraduate medical education. These tools hold particular promise for facilitating formative assessment at scale—a need that faculty time constraints have historically limited [19].
Risks to Clinical Reasoning Development
The pedagogical benefits of AI in medical education must be weighed against documented and theoretical risks to the development of core clinical competencies. Clinical note writing, historically a cognitive exercise that structures diagnostic reasoning and compels explicit articulation of clinical logic, may be undermined if AI scribes assume the documentation function before trainees have internalized this reasoning process [20].
A 2026 commentary in the Journal of General Internal Medicine articulated guardrails for preserving clinical reasoning when AI scribes are introduced into residency training, recommending structured debriefing, deliberate note-writing exercises at key stages, and competency assessments that cannot be delegated to AI-generated proxies [21]. A viewpoint in JMIR Medical Education drew on principles of cognitive load theory to argue that generative AI, by reducing the effortful processing that drives schema formation, risks producing physicians who are information-fluent but reasoning-fragile [22].
This concern extends to medical students who now have pervasive access to LLMs capable of answering examination-style questions at or near the level of licensed physicians. The challenge for medical education is not to prohibit AI use—a demonstrably futile regulatory ambition—but to design curricula that leverage AI for its productivity advantages while preserving the irreducible cognitive demands that produce competent clinicians.
Assessment and Quality Assurance
AI tools are being explored for the assessment of clinical reasoning quality in resident documentation. A 2025 study across two academic institutions developed and validated LLM-based assessments of clinical reasoning as documented in admission notes, demonstrating that AI-derived scores showed meaningful correlation with expert evaluator ratings [23]. Such tools could provide scalable, near-real-time feedback to trainees—a function that residency programs have struggled to deliver given faculty workload constraints.
Artificial Intelligence in Research and Scientific Dissemination
Literature Synthesis and Manuscript Production
For clinician-investigators constrained by time, AI offers meaningful assistance across the research workflow. LLMs can accelerate systematic and narrative literature reviews by retrieving, summarizing, and synthesizing large bodies of evidence. When deployed with retrieval-augmented generation (RAG) frameworks—architectures that ground model outputs in verified source documents—AI can substantially reduce the labor required for preliminary literature surveying, identify thematic gaps, and propose relevant citation networks [24].
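The grounding mechanism of RAG can be sketched in a few lines. Everything here is an illustrative assumption: the toy corpus, the term-overlap relevance score standing in for embedding search, and the prompt wording. A real deployment would retrieve from an indexed literature database and send the assembled prompt to an LLM; the point of the sketch is the architecture, in which the model is instructed to answer only from retrieved, citable sources rather than from its parametric memory.

```python
def score(query, document):
    """Count query terms appearing in the document (toy relevance score);
    production systems use embedding similarity instead."""
    terms = set(query.lower().split())
    return sum(1 for word in document.lower().split() if word in terms)

def retrieve(query, corpus, k=2):
    """Return the k highest-scoring (source_id, text) pairs."""
    ranked = sorted(corpus.items(),
                    key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query, corpus):
    """Assemble a prompt that forces the model to answer from cited sources."""
    evidence = retrieve(query, corpus)
    context = "\n".join(f"[{src}] {text}" for src, text in evidence)
    return (f"Answer using ONLY the sources below, citing each claim.\n"
            f"{context}\nQuestion: {query}")

# Hypothetical three-abstract corpus for illustration only.
corpus = {
    "ref9":  "ambient AI scribes reduced burnout among ambulatory clinicians",
    "ref18": "generative AI tutors personalize learning in medical education",
    "ref24": "retrieval augmented generation grounds outputs in source documents",
}
prompt = build_grounded_prompt("how do scribes affect burnout", corpus)
```

Because every claim must cite a retrieved source identifier, hallucinated citations become detectable: any bracketed reference in the model's answer that does not appear in the supplied context can be flagged automatically.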
In manuscript preparation, AI tools can assist with structural organization, section drafting, rewriting for scientific register, and language editing for non-native English speakers—a non-trivial equity consideration in global academic medicine. A 2024 analysis of biomedical scientific writing trends noted that AI penetration into the authorship process is accelerating and that medical journals are actively revising their editorial policies to govern this shift [25].
Peer Review and Publication Ethics
A 2024 cross-sectional study of the 100 most influential medical journals found that a majority had developed policies governing the use of AI in peer review, uniformly prohibiting the designation of AI as an author while diverging on disclosure requirements for AI assistance in manuscript preparation [26]. The JAMA Network group published specific guidance in 2024 requiring explicit disclosure of AI and LLM use in research and scholarly publication, reflecting consensus that transparency, rather than prohibition, is the operative principle [27].
The use of AI in peer review itself remains ethically contested. AI systems that review manuscripts may introduce novel biases, fail to appreciate methodological nuance, or—through overuse— contribute to homogenization of scientific discourse. Conversely, AI-assisted peer review could accelerate publication timelines and reduce reviewer burden. This debate has not been resolved, and journal policies continue to evolve [28].
Knowledge Structuring and Dissemination
A distinctive contribution of AI in academic medicine is its capacity to externalize and structure tacit clinical knowledge. Much of what experienced academic physicians know—diagnostic heuristics, procedural nuances, risk-stratification logic—resides in implicit form and is transmitted through apprenticeship rather than formal publication. AI tools can assist in converting these implicit knowledge structures into explicit, transmissible formats: clinical decision pathways, annotated case libraries, and structured competency frameworks.
The dissemination function is equally significant. AI can enable academic physicians to reach broader audiences through translated content, plain-language summaries of technical publications, and interactive educational formats adapted for non-specialist clinicians. These capabilities are particularly relevant in global health contexts and in efforts to reduce health literacy disparities.
Risks, Limitations, and Ethical Imperatives
Algorithmic Bias and Health Equity
AI systems trained on historically biased datasets risk perpetuating or amplifying health disparities. Models trained predominantly on data from high-income, English-speaking health systems may perform poorly for populations underrepresented in training corpora. These performance gaps are not uniformly transparent to end users, creating the risk that AI outputs will be uncritically accepted even when they are systematically less accurate for vulnerable populations [29].
The risk of algorithmic bias extends to physician evaluation: AI tools that assess clinical note quality or research productivity may embed assumptions that disadvantage physicians from underrepresented groups or those practicing in resource-constrained environments. Institutional deployment of AI in these domains requires prospective equity auditing.
Medicolegal Accountability
The question of legal accountability when an AI-generated clinical note contains an error that contributes to patient harm remains insufficiently resolved in most jurisdictions. Current regulatory frameworks attribute responsibility to the physician who reviewed and approved the document, reinforcing the imperative for active rather than passive oversight. However, as AI-generated content becomes increasingly voluminous and rapid, the practical feasibility of comprehensive physician review requires institutional attention—including workflow designs that preserve meaningful human oversight without negating AI efficiency gains [30].
Data Privacy and Security
The processing of protected health information by commercial AI platforms raises significant concerns under existing data-protection frameworks, including HIPAA in the United States and the General Data Protection Regulation (GDPR) in the European Union. Academic physicians who use general-purpose AI tools for tasks involving patient data—including note dictation, case summarization, or referral letter drafting—must ensure that their use complies with institutional and regulatory requirements. Most academic medical centers are in the process of establishing approved AI platforms with appropriate data processing agreements, but policy lags adoption in many settings.
Cognitive Deskilling and Technological Dependency
There is legitimate concern that sustained reliance on AI for documentation and reasoning support could atrophy skills that remain essential in AI-unavailable contexts—including emergencies, resource-limited settings, and situations where AI outputs are incorrect. A scoping review of AI and physician burnout noted that documentation time reductions ranging from 0.7 to 2.1 minutes per note were promising but insufficient to conclude that AI-mediated workflow changes would prevent burnout at a systems level, particularly given the risk of substituting one form of cognitive load for another [31].
Educational institutions face the particular challenge of maintaining competency standards in an environment where AI assistance is omnipresent. Licensing and certification bodies are beginning to evaluate AI-inclusive assessment frameworks—allowing AI tool use while evaluating higher-order competencies such as critical appraisal, ethical reasoning, and the recognition of AI errors.
A Framework for Responsible AI Integration in Academic Medicine
Based on the available evidence, the following principles may guide academic institutions in deploying AI responsibly:
• Target low-value-added tasks first. AI should be directed toward repetitive, high-volume tasks—initial note drafting, bibliographic search, meeting summarization—where automation displaces cognitive labor without compromising irreplaceable human functions. High-stakes clinical reasoning and formative assessment of trainees should remain domains of explicit human engagement.
• Maintain the physician as the authoritative decision-maker. AI outputs should be treated as drafts requiring expert review. Workflow designs that create pressure to accept AI-generated content with minimal scrutiny must be actively resisted.
• Integrate AI governance at the institutional level. Individual improvisation with AI tools creates unacceptable risks of data security violations, inconsistent practice standards, and medicolegal exposure. Institutions should establish AI oversight committees, maintain approved tool inventories, and provide faculty with structured training.
• Preserve pedagogical scaffolding in training environments. Curricula must be redesigned to ensure that AI assistance does not displace the effortful cognitive activities—note writing, differential generation, literature appraisal—that build clinical competence. AI should be introduced progressively, with explicit instructional frameworks governing its role at each stage of training.
• Require transparency and disclosure in research. Consistent with emerging journal policies, all AI assistance in manuscript preparation, data analysis, and peer review should be disclosed in methods sections, acknowledging both the tool used and its specific function.
• Audit for bias and equity prospectively. AI deployments in clinical, educational, and research settings should include pre-specified equity analyses to identify differential performance across patient populations, physician demographics, and institutional contexts.
• Invest in evaluation science. The pace of AI deployment in academic medicine substantially exceeds the evidence base for its effects. Pragmatic randomized trials, quasi-experimental designs, and long-term longitudinal studies are urgently needed to characterize real-world impacts on outcomes, efficiency, equity, and physician well-being.
Conclusions
Artificial intelligence offers academic physicians a set of tools with genuine potential to alleviate the administrative and documentation burdens that have progressively displaced time for scholarly and educational activities. Evidence from ambient AI scribes documents meaningful reductions in documentation time and burnout; evidence from educational contexts suggests AI can extend the reach and interactivity of medical teaching; evidence from research settings indicates AI can accelerate the synthesis and dissemination of knowledge (Table 1).
Yet the evidence base remains nascent, the risks are real, and the governance frameworks are incomplete. Hallucinations, algorithmic bias, medicolegal ambiguity, data security, and the threat of cognitive deskilling demand that AI integration be deliberate, governed, and continuously evaluated. The role of AI in academic medicine is not to displace human judgment but to liberate the time and cognitive resources that allow that judgment to be exercised with full engagement.
The promise of AI for academic physicians will be realized only through the collaborative effort of clinicians, educators, researchers, ethicists, policymakers, and technologists working within transparent, accountable, and equity-informed frameworks. The question is not whether AI will reshape academic medicine— it already has—but whether that reshaping will be directed by principled, evidence-based stewardship or left to the momentum of technology adoption alone.
| Domain | Current Challenge | Potential AI Contribution | Limitations / Risks |
| --- | --- | --- | --- |
| Clinical Care | EHR documentation burden (>50% of workday) | Ambient scribes; draft note generation; patient message triage | Hallucinations; safety events; passive acceptance; medicolegal liability |
| Administration | Prior authorization; quality reporting; scheduling | Automated workflow; intelligent routing; coding support | Technological dependency; data security; equity gaps |
| Education | Reduced protected teaching time; variable formative feedback | Case generation; adaptive tutoring; large-scale assessment of reasoning | Cognitive deskilling; academic integrity; standardization risk |
| Research | Time for literature synthesis, writing, translation | RAG-assisted review; manuscript structuring; peer-review support | Bias amplification; publication integrity; lack of verified sourcing |
| Knowledge Dissemination | Tacit knowledge rarely formalized; language barriers | Clinical pathway generation; plain-language summaries; multilingual translation | Factual accuracy; validation standards; intellectual property |
Table 1: Potential Contributions of Artificial Intelligence to the Missions of the Academic Physician
Acknowledgments and Use of Digital Tools: The authors acknowledge the use of bibliographic databases, including PubMed and Google Scholar, for literature retrieval. Zotero was used for reference management and organization. Artificial intelligence–based tools (ChatGPT and Claude AI) were used to assist with language editing, structuring of the manuscript, and improvement of clarity. All content was critically reviewed and validated by the authors, who take full responsibility for the accuracy and integrity of the work.
Conflicts of Interest: The authors declare no conflicts of interest.
Funding: No specific funding was received for this review.
References
- Sinsky, C. A., Willard-Grace, R., Schutzbank, A. M., Sinsky, T. A., Margolius, D., & Bodenheimer, T. (2013). In search of joy in practice: a report of 23 high-functioning primary care practices. The Annals of Family Medicine, 11(3), 272-278.
- Arndt, B. G., Micek, M. A., Rule, A., Shafer, C. M., Baltus, J. J., & Sinsky, C. A. (2024). More tethered to the EHR: EHR workload trends among academic primary care physicians, 2019-2023. The Annals of Family Medicine, 22(1), 12-18.
- Misra, M., Huang, G. C., Becker, A. E., & Bates, C. K. (2023). Leaders’ perspectives on resources for academic success: defining clinical effort, academic time, and faculty support. The Permanente Journal, 28(1), 33.
- Ringwald, B. A., Auciello, S., Ginty, J., Jefferis, M., & Stacey, S. (2024). Administrative time expectations for residency core faculty: a CERA study. Family Medicine, 56(7), 428.
- Holmgren, A. J., Sinsky, C. A., Rotenstein, L., & Apathy, N. C. (2024). National comparison of ambulatory physician electronic health record use across specialties. Journal of General Internal Medicine, 39(14), 2868-2870.
- athenahealth. (2024). 2024 Physician Sentiment Survey. Fielded by Harris Poll.
- Chapman, M. J., Nguyen, C., Wiler, J., Messman, A., & Swanson, E. (2023). A national survey of job satisfaction and workload among emergency medicine residency faculty. Cureus, 15(3), e36124.
- Pololi, L. H., Evans, A. T., Civian, J. T., Cooper, L. A., Gibbs, B. K., Ninteau, K., ... & Brennan, R. T. (2023). Are researchers in academic medicine flourishing? A survey of midcareer PhD and physician investigators. Journal of Clinical and Translational Science, 7(1), e105.
- Olson, K. D., Sinsky, C. A., McDaniel, S. H., et al. (2025). Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Network Open, 8(7), e2519832.
- Lukac, P. J., Turner, W., Vangala, S., et al. (2025). Ambient AI scribes in clinical practice: a randomized trial. NEJM AI.
- Holmgren, A. J., Fenton, C. L., Thombley, R., Soleimani, H., Croci, R., DeMasi, O., ... & Yazdany, J. (2026). Ambient artificial intelligence scribes and physician financial productivity. JAMA Network Open, 9(1), e2553233.
- Shah, S. J., Crowell, T., Jeong, Y., Devon-Sand, A., Smith, M., Yang, B., ... & Garcia, P. (2025). Physician perspectives on ambient AI scribes. JAMA Network Open, 8(3), e251904.
- Chung, P., Swaminathan, A., Goodell, A. J., Kim, Y., Momsen Reincke, S., Han, L., ... & Aghaeepour, N. (2025). Verifying Facts in Patient Care Documents Generated by Large Language Models Using Electronic Health Records. NEJM AI, 3(1), AIdbp2500418.
- Bedi, S., Jiang, Y., Chung, P., Koyejo, S., & Shah, N. (2025). Fidelity of medical reasoning in large language models. JAMA Network Open, 8(8), e2526021.
- Goh, E., Gallo, R., Hom, J., Strong, E., Weng, Y., Kerman, H., ... & Chen, J. H. (2024). Large language model influence on diagnostic reasoning: a randomized clinical trial. JAMA Network Open, 7(10), e2440969.
- Goh, E., Gallo, R. J., Strong, E., Weng, Y., Kerman, H., Freed, J. A., ... & Rodman, A. (2025). GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial. Nature Medicine, 31(4), 1233-1238.
- Garcia, P., Ma, S. P., Shah, S., Smith, M., Jeong, Y., Devon-Sand, A., ... & Sharp, C. (2024). Artificial intelligence–generated draft replies to patient inbox messages. JAMA Network Open, 7(3), e243201.
- Izquierdo-Condoy, J. S., Arias-Intriago, M., Tello-De-la-Torre, A., Busch, F., & Ortiz-Prado, E. (2025). Generative Artificial Intelligence in Medical Education: Enhancing Critical Thinking or Undermining Cognitive Autonomy?. Journal of Medical Internet Research, 27, e76340.
- Thesen, T., & Park, S. H. (2025). A generative AI teaching assistant for personalized learning in medical education. NPJ Digital Medicine, 8(1), 627.
- Boscardin, C. K., Gin, B., Golde, P. B., & Hauer, K. E. (2024). ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Academic Medicine, 99(1), 22-27.
- Abernethy, J., Shah, A., Chen, B., Reynolds, S., Wright, S. M., & O'Rourke, P. (2026). Integrating AI scribes into medical education: guardrails for preserving clinical reasoning. Journal of General Internal Medicine.
- Schaye, V., & Triola, M. M. (2024). The generative artificial intelligence revolution: How hospitalists can lead the transformation of medical education. Journal of Hospital Medicine, 19(12), 1181-1184.
- Schaye, V., DiTullio, D., Guzman, B. V., Vennemeyer, S., Shih, H., Reinstein, I., ... & Burk-Rafel, J. (2025). Large language model–based assessment of clinical reasoning documentation in the electronic health record across two institutions: Development and validation study. Journal of Medical Internet Research, 27, e67967.
- Zakka, C., Shad, R., Chaurasia, A., Dalal, A. R., Kim, J. L., Moor, M., ... & Hiesinger, W. (2024). Almanac: retrieval-augmented language models for clinical medicine. NEJM AI, 1(2), AIoa2300068.
- Fornalik, M., Makuch, M., Lemanska, A., Moska, S., Wiczewska, M., Anderko, I., ... & Zielińska, A. (2024). Rise of the machines: trends and challenges of implementing AI in biomedical scientific writing. Exploration of Digital Health Technologies, 2(5), 235-248.
- Li, Z. Q., Xu, H. L., Cao, H. J., Liu, Z. L., Fei, Y. T., & Liu, J. P. (2024). Use of artificial intelligence in peer review among top 100 medical journals. JAMA Network Open, 7(12), e2448609.
- Bibbins-Domingo, K., & Hswen, Y. (2024). Reporting use of AI in research and scholarly publication: JAMA Network guidance. JAMA.
- Perlis, R. H., Christakis, D. A., Bressler, N. M., Öngür, D., Kendall-Taylor, J., Flanagin, A., & Bibbins-Domingo, K. (2025). Artificial intelligence in peer review. JAMA, 334(17), 1520-1522.
- Ko, C., Shectman, B., Uy, D., Minars, C., Ingram, B., Chary, N., ... & Jacobs, R. J. (2025). A scoping review of the role of artificial intelligence in physician burnout. Cureus, 17(7), e88580.
- Gandhi, T. K., Classen, D., Sinsky, C. A., et al. (2025). Unburdening patients and clinicians through automation and artificial intelligence: informatics strategies for reducing administrative burden. Journal of Medical Systems.
- Singh, H., Meyer, A. N. D., & Thomas, E. J. (2025). AI and physician burnout: a productivity paradox. Learning Health Systems.

