Vansh Kumar
Lead Researcher, Vispark Research Lab, India
Publications
Research Article
A Self-Improving Experiential AI Model: A Path to Continual Learning
Author(s): Vansh Kumar* and M Tanusri
This paper presents Vision Experiential AI, a 250-billion-parameter multimodal model built on the Vision Transformer architecture. Unlike static large language models, it incorporates continual and experiential learning at inference time, allowing it to update its internal weights dynamically and sustain long-term contextual memory without catastrophic forgetting. The model delivers hyper-personalized, context-rich interactions while maintaining low computational cost and high efficiency. Evaluations, including Humanity's Last Exam (HLE), demonstrate state-of-the-art performance and self-improving behavior, positioning Vision Experiential AI as a major step toward continual, self-evolving intelligence and, ultimately, toward Artificial General Intelligence (AGI).

