Pascal Bürklin
DHBW Lörrach, Hangstraße 46-50, 79539 Lörrach, Germany
Publications
Research Article
Refinetuning Decentralized Large Language Model for Privacy-Sensitive University Data
Author(s): Kilian Lorenz, Pascal Bürklin, Jay Kim*, Klemens Schnattinger, Sascha Reining, Nathan Peterson and Agha Husain
This work focuses on refining a decentralized large language model (LLM) tailored for fine-tuning on privacy-sensitive university data. Devolved AI models, designed to operate across multiple distributed nodes, offer a promising solution for handling sensitive information: data remains localized at its source while the nodes collaboratively train a global model. The key challenge addressed in this study is adapting and fine-tuning a decentralized LLM to work effectively with the heterogeneous, privacy-restricted datasets typical of university environments, such as student records, research data, and administrative information. Our approach enhances the LLM's ability to handle domain-specific language through targeted fine-tuning on anonymized university datasets. The model is further optimized for efficient decentralized learning, ensuring data privacy while i…
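The core idea in the abstract, keeping raw data on each node while collaboratively training a shared model, is commonly realized by federated averaging (FedAvg): each node takes local training steps on its private data, and only the resulting model weights are averaged into the global model. The sketch below illustrates that scheme on a toy least-squares model; all function names and data are illustrative assumptions, since the listing does not specify the paper's actual training setup.

```python
# Minimal FedAvg sketch (illustrative only): private data never leaves
# a node; only model weights are shared and averaged each round.

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares regression on a node's local data."""
    grads = [0.0] * len(weights)
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grads[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grads)]

def fed_avg(node_datasets, rounds=50, dim=2):
    """Each round: every node trains locally, then weights are averaged."""
    global_w = [0.0] * dim
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in node_datasets]
        global_w = [sum(ws) / len(ws) for ws in zip(*local_ws)]
    return global_w

# Two nodes whose private data follows y = 1*x0 + 2*x1.
node_a = [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0)]
node_b = [([1.0, 1.0], 3.0), ([2.0, 0.0], 2.0)]
print(fed_avg([node_a, node_b]))
```

The averaged weights approach the true coefficients even though neither node ever sees the other's records, which is the privacy property the study builds on; a production system would additionally need secure aggregation and differential-privacy noise, which this sketch omits.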