Cooperative Traffic Signal Control Using a Distributed Agent-Based Deep Reinforcement Learning With Incentive Communication

Link to article

Abstract

Deep Reinforcement Learning has shown promise in dynamic traffic signal control by adapting to real-time traffic conditions. However, multi-intersection control presents challenges, primarily the need for efficient information exchange across a growing number of intersections and the importance of capturing the spatiotemporal dynamics of traffic flows. Traditional methods often focus solely on spatial or temporal aspects, leading to suboptimal control strategies. This paper introduces a novel Multi-Agent Incentive Communication Deep Reinforcement Learning (MICDRL) method designed for collaborative control across multiple intersections. MICDRL features an incentive communication mechanism that allows agents to generate customized messages that influence other agents’ policies, thereby enhancing coordination and achieving globally optimal decisions. A key feature of MICDRL is its reliance on local information for message generation, which effectively reduces communication overhead while preserving collaboration. Additionally, MICDRL integrates a teammate module that leverages temporal data to predict other agents’ actions, which is crucial for understanding collective dynamics and spatial environment characteristics. Empirical results show that MICDRL outperforms several state-of-the-art methods on metrics such as queue length and throughput. Furthermore, we introduce a tailored three-layer Internet-of-Things architecture to enhance data collection and transmission.
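To make the abstract's two ideas concrete, the following is a minimal toy sketch (not the authors' implementation) of the data flow it describes: each agent builds an outgoing message from local observations only, incoming messages from other agents shift the local policy, and a "teammate module" predicts other agents' actions from local information. All matrix shapes, the linear encoders, and the additive message influence are illustrative assumptions; in MICDRL these would be trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): 3 intersections, 4-dim local observations,
# 2-dim messages, 5 signal-phase actions per agent.
N_AGENTS, OBS_DIM, MSG_DIM, N_ACTIONS = 3, 4, 2, 5

# Random stand-ins for learned parameters.
W_msg = rng.normal(size=(N_AGENTS, MSG_DIM, OBS_DIM))    # message encoder per agent
W_pol = rng.normal(size=(N_AGENTS, N_ACTIONS, OBS_DIM))  # local policy head
W_in = rng.normal(size=(N_AGENTS, N_ACTIONS, MSG_DIM))   # how incoming messages shift the policy
W_tm = rng.normal(size=(N_AGENTS, N_ACTIONS, OBS_DIM))   # teammate module: predict others' actions

obs = rng.normal(size=(N_AGENTS, OBS_DIM))  # local observation at each intersection

# 1) Each agent generates its message from LOCAL information only,
#    which is what keeps communication overhead low.
msgs = np.einsum('amo,ao->am', W_msg, obs)

# 2) Each agent's policy is influenced (incentivized) by the messages
#    it receives from all other agents.
incoming = msgs.sum(axis=0, keepdims=True) - msgs  # sum of everyone else's messages
logits = np.einsum('ano,ao->an', W_pol, obs) + np.einsum('anm,am->an', W_in, incoming)
actions = logits.argmax(axis=1)

# 3) Teammate module: each agent predicts the other agents' likely actions
#    from its own observation, approximating the collective dynamics.
predicted = np.einsum('ano,ao->an', W_tm, obs).argmax(axis=1)

print(actions, predicted)
```

The sketch only shows one decision step; the paper's contribution lies in how the message encoders and teammate module are trained so that messages genuinely steer neighbors toward globally better signal plans.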

Publication
IEEE Transactions on Intelligent Transportation Systems
周斌
PhD Graduate
周启申
PhD Student
Simon Hu
Assistant Professor