Dynamic Prompt Mixture Networks: Adaptive In-Context Learning for Robust Reasoning in Large Language Models
Shirmohammad Tavangari and Somayeh Taghavi Kolfati
Abstract
The efficacy of in-context learning in large language models (LLMs) is highly sensitive to the selection and composition of demonstration examples. Static or fixed-ratio prompting strategies fail to adapt to the diverse semantic and logical demands of individual tasks, leading to unstable reasoning performance. In this work, we introduce the Dynamic Prompt Mixture Network (DPMN), a lightweight task-aware router that computes an optimal blend of human-annotated and self-generated demonstrations for each input query. By formulating prompt selection as a Bayesian marginalization problem over a distribution of demonstration ensembles, DPMN reduces variance from noisy examples and improves reasoning robustness. Experiments across GPT-3.5, GPT-4, T5, LLaMA-2, and the newly added Mistral-7B on benchmarks including SuperGLUE, MMLU, and the challenging GPQA dataset demonstrate significant gains in accuracy, logical consistency, reasoning depth, and confidence calibration without increasing inference cost. Our approach offers a pathway toward more autonomous and reliable in-context learning.
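The routing idea summarized above can be sketched minimally: a lightweight scorer maps query features to normalized mixture weights over demonstration sources (here, human-annotated vs. self-generated), and predictions are then averaged over ensembles sampled from that mixture, approximating the Bayesian marginalization. The feature dimensions, weight values, and function names below are illustrative assumptions, not the paper's actual architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_mixture(query_features, source_weights):
    """Toy task-aware router: one logit per demonstration source.

    query_features: toy query embedding (list of floats).
    source_weights: one weight vector per source (hypothetical,
    learned in practice). Returns mixture weights summing to 1.
    """
    logits = [sum(f * w for f, w in zip(query_features, wv))
              for wv in source_weights]
    return softmax(logits)

# Two sources: index 0 = human-annotated, index 1 = self-generated.
source_weights = [[1.0, -0.5], [-0.3, 0.8]]  # illustrative values
mix = route_mixture([0.9, 0.1], source_weights)
# mix[i] is the fraction of in-context demonstrations to draw from
# source i; sampling several ensembles from this mixture and averaging
# the model's predictions approximates the marginalization step.
```

In practice the router would be a small trained network over a real query embedding; the sketch only shows how per-query mixture weights replace a fixed human/self-generated ratio.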
