Reinforcement Learning–Driven Adaptive Traffic Light Control for Urban Congestion Management and Emission Reduction
Idowu Olugbenga Adewumi and Akintayo Ayoade
Abstract
The research proposes and evaluates an RL-driven adaptive traffic signal control system that aims to alleviate congestion and curb vehicle emissions in Lagos State, Nigeria. The framework was based on a real-world traffic volume dataset obtained from seven major intersections (Ojota, Ikeja-Allen, Maryland, CMS, Lekki Phase 1, Yaba, and Oshodi) between January 2024 and July 2025, with hourly traffic volumes ranging from 20 to 400 vehicles. The system was trained and validated in a SUMO simulation environment. Over 300 training episodes, we assessed three RL algorithms (Q-learning, Deep Q-Networks (DQN), and Advantage Actor-Critic (A2C)) against conventional fixed-time and adaptive heuristic signal control models. Results show performance improvements in single-intersection, corridor, and grid-network runs. At an isolated intersection, DQN reduced average vehicle delay from 120 seconds to 65 seconds, cut queue length from 18 vehicles to 8 vehicles, raised throughput from 250 to 320 vehicles per hour, and lowered CO2 emissions from 8.5 kg to 5.5 kg per hour. On a corridor of three intersections, A2C achieved a 43% reduction in delay, a 25% increase in throughput, and a 31% decrease in emissions; fuel consumption also fell by 37.5% relative to fixed-time control. Statistical analysis confirmed that performance differences across all models were significant (ANOVA, p = 0.001).
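To make the abstract's method concrete, the following is a minimal, illustrative sketch of the tabular Q-learning variant evaluated in the study. The state discretization, action set, learning-rate and reward choices here are assumptions for exposition only, not the authors' exact formulation or their SUMO coupling.

```python
import random
from collections import defaultdict

# Hyperparameters (illustrative values, not the paper's settings).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [0, 1]  # 0: keep current signal phase, 1: switch phase

def discretize(queue_ns, queue_ew):
    """Bucket the north-south and east-west queue lengths (vehicles)
    into a coarse discrete state for the tabular Q-function."""
    return (min(queue_ns // 5, 3), min(queue_ew // 5, 3))

def choose_action(Q, state):
    """Epsilon-greedy action selection over the phase actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(Q, state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy single-step usage: reward is the negative total queue length,
# so shorter queues yield higher reward.
Q = defaultdict(float)
state = discretize(12, 3)
action = choose_action(Q, state)
next_state = discretize(8, 5)
reward = -(8 + 5)
q_update(Q, state, action, reward, next_state)
```

In a full experiment, the queue observations and phase commands would flow through a simulator interface (e.g. SUMO's TraCI API) inside an episode loop rather than the single hand-coded transition shown here.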

