
Journal of Electrical Electronics Engineering (JEEE)

ISSN: 2834-4928 | DOI: 10.33140/JEEE

Impact Factor: 1.2

LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention

Konstantin Kolomeitsev

Abstract

In this paper, we propose an LLM Modules architecture that enables the transfer of knowledge from a pre-trained large model to a smaller model using an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B model is frozen, and its representations are passed through specially designed attention layers to the GPT-Neo-125M model, which is trained on limited computational resources. Experimental results on the Bespoke-Stratos-17k dataset demonstrate that after 15 epochs of training, the combined model generates responses comparable in quality to those of models obtained by distillation. The paper discusses the advantages of the modular approach in detail, provides examples of input queries together with a comparative analysis, and outlines prospects for further extension of the method.
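To make the modular idea concrete, the sketch below shows one plausible form of a cross-attention bridge between a frozen teacher and a trainable student: the student's hidden states act as queries over the teacher's hidden states, which are projected into the student's dimension. The hidden sizes (1536 for Qwen2-1.5B, 768 for GPT-Neo-125M), the module name, and the projection-plus-residual fusion are assumptions for illustration, not the paper's exact layer design.

```python
# Minimal sketch of a cross-attention bridge between a frozen large model and a
# small trainable model. This is an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn

TEACHER_DIM = 1536   # hidden size of Qwen2-1.5B (frozen teacher)
STUDENT_DIM = 768    # hidden size of GPT-Neo-125M (trainable student)

class EnhancedCrossAttention(nn.Module):
    """Student hidden states attend over projected teacher hidden states."""
    def __init__(self, student_dim=STUDENT_DIM, teacher_dim=TEACHER_DIM, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(teacher_dim, student_dim)        # align teacher space with student space
        self.attn = nn.MultiheadAttention(student_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(student_dim)

    def forward(self, student_h, teacher_h):
        kv = self.proj(teacher_h)                              # (B, T_teacher, student_dim)
        attended, _ = self.attn(query=student_h, key=kv, value=kv)
        return self.norm(student_h + attended)                 # residual fusion into the student stream

# Toy usage with random tensors standing in for the two models' layer outputs.
student_h = torch.randn(2, 32, STUDENT_DIM)   # GPT-Neo-125M hidden states (trainable path)
teacher_h = torch.randn(2, 48, TEACHER_DIM)   # frozen Qwen2-1.5B hidden states
bridge = EnhancedCrossAttention()
fused = bridge(student_h, teacher_h)          # same shape as student_h: (2, 32, 768)
print(fused.shape)
```

In a setup like this, only the bridge layers and the small model would receive gradients, while the large model stays frozen, which matches the limited-compute training regime described in the abstract.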
