
Human-Like Autonomous Car-Following Planning by Deep Reinforcement Learning


ABSTRACT: This study proposes a framework for human-like autonomous car-following planning based on deep reinforcement learning (RL). In the framework, historical driving data are fed into a simulation environment, where an RL agent learns through trial and interaction, guided by a reward function that signals how much the agent deviates from the empirical data. The result is an optimal policy (car-following model) that maps speed, relative speed, and spacing to following-vehicle acceleration in a human-like way, and that can be continuously updated as more data are fed in. Two thousand car-following periods extracted from the Shanghai Naturalistic Driving Study were used to train the model and to compare its performance with that of four traditional car-following models. Results show that the new model reproduces human-like car-following behavior with significantly higher accuracy than the traditional models, especially in terms of speed replication. Specifically, the model has a validation error of 18% on spacing and 5% on speed, generally 15 and 30 percentage points lower, respectively, than the errors of the traditional models. Moreover, the model generalizes well to different driving situations and can adapt to different drivers through continuous learning. These results can contribute to the development of human-like autonomous driving algorithms. More broadly, this study demonstrates that data-driven modeling and reinforcement learning can contribute to the development of traffic-flow models and offer deeper insight into driver behavior.
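To make the framework concrete, the sketch below shows a minimal car-following simulation environment of the kind the abstract describes: the state is (speed, relative speed, spacing), the action is the following vehicle's acceleration, and the reward penalizes deviation from the empirical human-driven trajectory. The class name, kinematic update, and exact reward form are illustrative assumptions for this sketch, not the authors' formulation.

```python
import numpy as np

class CarFollowingEnv:
    """Illustrative car-following environment (hypothetical sketch).

    State: [follower speed, relative speed, spacing].
    Action: follower acceleration (m/s^2).
    Reward: negative relative deviation from the empirical (human)
    spacing profile, standing in for the paper's deviation-based reward.
    """

    def __init__(self, leader_speeds, empirical_spacings, dt=0.1):
        self.leader_speeds = np.asarray(leader_speeds)          # observed leader speeds
        self.empirical_spacings = np.asarray(empirical_spacings)  # observed human spacings
        self.dt = dt

    def reset(self):
        self.t = 0
        self.speed = self.leader_speeds[0]        # assumed initial condition
        self.spacing = self.empirical_spacings[0]
        return self._state()

    def _state(self):
        rel_speed = self.leader_speeds[self.t] - self.speed
        return np.array([self.speed, rel_speed, self.spacing])

    def step(self, acceleration):
        # Simple kinematic update of spacing and follower speed.
        leader_speed = self.leader_speeds[self.t]
        self.spacing += (leader_speed - self.speed) * self.dt
        self.speed = max(0.0, self.speed + acceleration * self.dt)
        self.t += 1
        # Reward: how closely the agent tracks the human driver's spacing.
        error = abs(self.spacing - self.empirical_spacings[self.t])
        reward = -error / max(self.empirical_spacings[self.t], 1e-6)
        done = self.t >= len(self.leader_speeds) - 1
        return self._state(), reward, done

# Example rollout with a placeholder (random) policy:
env = CarFollowingEnv(leader_speeds=np.full(100, 15.0),
                      empirical_spacings=np.full(100, 20.0))
state, done = env.reset(), False
while not done:
    action = np.random.uniform(-3.0, 3.0)  # stand-in for the learned policy
    state, reward, done = env.step(action)
```

In practice, a continuous-control RL algorithm (for example, an actor-critic method) would be trained against such an environment to obtain the acceleration policy; the abstract does not specify the authors' choice of algorithm, so that detail is left open here.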

Meixin Zhu, Xuesong Wang*, Yinhai Wang. Human-Like Autonomous Car-Following Planning by Deep Reinforcement Learning. Transportation Research Board 97th Annual Meeting, Washington, D.C., USA, January 7-11, 2018.
