
International Journal of Forensic Research(IJFR)

ISSN: 2767-2972 | DOI: 10.33140/IJFR

Impact Factor: 1.9

Bangyi Yang

Department of Computer Science, University of Minnesota, United States

Publications
  • Research Article   
    Examining Psychological Influence Risks in Large Language Model Applications: Threat Frameworks and Mitigation Approaches
    Author(s): Bangyi Yang*

    Large Language Models (LLMs) have emerged as powerful tools capable of engaging users in personalized, extended interactions. However, this capability raises significant concerns about potential misuse for psychological manipulation. This paper examines how LLM-based applications could theoretically be exploited to conduct covert influence operations targeting individual cognition over time. We analyze the psychological foundations underlying human decision-making vulnerabilities, including dual-process cognition theory and common cognitive biases. Building on established military cognitive operations frameworks, we present a novel threat model describing how adversarial agents might leverage trusted AI interactions to systematically profile and influence users. We further propose a kill chain specific to individual-targeted cognitive influence campaigns and discuss ongoing research …
