Applying Deep Personal Privacy (DPP): An Empirical Framework for Inference Resistance in Large Language Models
Abstract
This paper introduces an empirical extension of the Deep Personal Privacy (DPP) framework, a novel paradigm that reconceptualizes privacy as resistance to inference rather than mere control over data disclosure. Unlike traditional privacy-preserving approaches, such as k-anonymity, l-diversity, t-closeness, and differential privacy, which primarily focus on data access and identifiability, the DPP framework models privacy as impedance within an inference network.
The core contribution of this work lies in operationalizing DPP within embedding-based systems, particularly large language models (LLMs), where sensitive information can be inferred through semantic alignment rather than explicit disclosure. We establish a formal mathematical relationship between cosine similarity, inference probability, and privacy impedance, demonstrating that reducing semantic alignment systematically increases resistance to inference.
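The paper does not state its exact functional form here, so the following is only a minimal sketch of the kind of relationship described: cosine similarity between embeddings mapped to an inference probability, with impedance as its reciprocal. The logistic mapping and the steepness parameter `alpha` are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def inference_probability(sim, alpha=4.0):
    # Hypothetical logistic mapping from semantic alignment to the
    # probability that an attribute can be inferred; alpha is assumed.
    return 1.0 / (1.0 + np.exp(-alpha * sim))

def privacy_impedance(sim, alpha=4.0):
    # Impedance modeled (illustratively) as the reciprocal of
    # inference probability: lower alignment -> higher impedance.
    return 1.0 / inference_probability(sim, alpha)

# Lower semantic alignment yields lower inference probability
# and higher impedance.
anchor = np.array([1.0, 0.0])
close  = np.array([0.9, 0.1])   # semantically aligned embedding
far    = np.array([0.1, 0.9])   # weakly aligned embedding

s_close, s_far = cosine_similarity(anchor, close), cosine_similarity(anchor, far)
```

Under any monotone mapping of this shape, reducing cosine similarity necessarily lowers inference probability and raises impedance, which is the qualitative behavior the abstract claims.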
Through empirical analysis on medical and social media textual data, we show that DPP-based mechanisms, such as embedding perturbation, abstraction, and dual-shifting transformations, effectively weaken inference pathways while preserving semantic utility. In addition, we introduce a regulatory interpretation of privacy via the parameter K, enabling privacy to be enforced as a measurable and auditable constraint on inference capability.
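As a toy illustration of embedding perturbation (the paper's specific transformations are not reproduced here), one can shift an embedding along a direction orthogonal to itself; this provably lowers cosine similarity with the original vector by a controlled amount, 1/sqrt(1 + strength^2). The function name and the `strength` parameter are hypothetical.

```python
import numpy as np

def perturb_orthogonal(e, strength=1.0):
    """Shift embedding e along a random direction orthogonal to e.

    The resulting cosine similarity with the original is exactly
    1 / sqrt(1 + strength**2), so alignment (and hence inference
    probability) is reduced by a tunable, auditable amount.
    """
    rng = np.random.default_rng(0)          # fixed seed for reproducibility
    r = rng.standard_normal(e.shape)
    # Remove the component of r along e to get an orthogonal direction.
    o = r - (np.dot(r, e) / np.dot(e, e)) * e
    o /= np.linalg.norm(o)
    return e + strength * np.linalg.norm(e) * o
```

With `strength=1.0` the similarity to the original embedding drops to about 0.707, weakening the inference pathway while keeping the perturbed vector a bounded distance from the original, i.e., retaining some semantic utility.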
This work contributes a new layer to privacy protection in the ICT era by shifting the focus from data protection to inference control, offering both a theoretical foundation and a practical framework for designing privacy-preserving AI systems.

