LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention
Konstantin Kolomeitsev

Abstract
In this paper, we propose an LLM Modules architecture that enables knowledge transfer from a large pre-trained model to a smaller one via an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B model is frozen, and its representations are passed through specially designed attention layers to the GPT-Neo-125M model, which is trained under limited computational resources. Experimental results on the Bespoke-Stratos-17k dataset demonstrate that after 15 epochs of training the combined model generates responses comparable in quality to those of models obtained by distillation. The paper discusses the advantages of the modular approach in detail, provides example input queries with a comparative analysis, and outlines prospects for further extension of the method.
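To make the scheme concrete, the following is a minimal sketch, assuming a PyTorch-style implementation, of how representations from a frozen donor model could be injected into a small trainable model through an external cross-attention block. The class name CrossAttentionAdapter and the dimension choices (768 for GPT-Neo-125M, 1536 for Qwen2-1.5B) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class CrossAttentionAdapter(nn.Module):
    """Hypothetical adapter: small-model states attend to frozen donor states."""

    def __init__(self, small_dim: int, donor_dim: int, num_heads: int = 8):
        super().__init__()
        # Project donor representations into the small model's hidden size.
        self.proj = nn.Linear(donor_dim, small_dim)
        self.attn = nn.MultiheadAttention(small_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(small_dim)

    def forward(self, small_states: torch.Tensor, donor_states: torch.Tensor) -> torch.Tensor:
        # Queries come from the small model; keys/values come from the frozen donor.
        kv = self.proj(donor_states)
        attended, _ = self.attn(small_states, kv, kv)
        # Residual connection preserves the small model's own signal.
        return self.norm(small_states + attended)


# Toy usage with random tensors (assumed hidden sizes: 768 for the small
# model, 1536 for the frozen donor); only the adapter and the small model
# would receive gradient updates in such a setup.
adapter = CrossAttentionAdapter(small_dim=768, donor_dim=1536)
out = adapter(torch.randn(1, 16, 768), torch.randn(1, 32, 1536))
print(out.shape)  # torch.Size([1, 16, 768])
```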