Vehicular ad hoc networks (VANETs) have attracted great interest from both industry and academia. The development of VANETs is heavily influenced by information and communication technologies, which have fueled a plethora of innovations in areas including networking, caching, and computing. Nevertheless, these enabling technologies have traditionally been studied separately in existing work on vehicular networks. In this paper, we propose an integrated framework that enables dynamic orchestration of networking, caching, and computing resources to improve the performance of next-generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem in which the gains of networking, caching, and computing are all taken into consideration. Because jointly considering these three technologies makes the system highly complex, we propose a novel deep reinforcement learning approach. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.
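To make the abstract's idea concrete, the sketch below shows a reinforcement-learning agent choosing a joint allocation of networking, caching, and computing resources for a vehicle request. Everything here is an illustrative assumption, not the paper's actual model: the paper trains a deep Q-network, which is replaced by a tabular Q-learner for brevity, and the demand levels, reward function, and hyperparameters are invented for the example.

```python
import itertools
import random

# Each state is a demand level (0-2) per resource type
# (networking, caching, computing); each action is an allocation
# level per type. Both the state space and the reward below are
# toy assumptions for illustration.
LEVELS = range(3)
STATES = list(itertools.product(LEVELS, repeat=3))   # demand per resource
ACTIONS = list(itertools.product(LEVELS, repeat=3))  # allocation per resource

def reward(state, action):
    """Gain for satisfied demand minus a penalty for over-provisioning."""
    return sum(min(a, d) - 0.5 * max(a - d, 0) for d, a in zip(state, action))

def train(episodes=50000, alpha=0.3, eps=0.5, seed=1):
    """Q-learning with discount 0: each vehicle request is served
    independently here, reducing the problem to a contextual bandit."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                         # new vehicle request
        if rng.random() < eps:                         # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])
    return Q

Q = train()
# Greedy policy: for each demand pattern, pick the best learned allocation.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy[(2, 0, 1)])  # learned allocation for demand (2, 0, 1)
```

With enough episodes the learned policy matches allocation to demand exactly, since the toy reward penalizes both shortfall and over-provisioning; the paper's deep Q-network plays the same role over a far larger joint state-action space.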

Additional Metadata
Keywords Security, Software-defined networking, Trust management, Vehicular ad hoc networks
Persistent URL dx.doi.org/10.1145/3132340.3132355
Conference 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications, Co-located with MSWiM 2017
Citation
He, Y. (Ying), Yu, F.R., Zhao, N. (Nan), Yin, H. (Hongxi), & Boukerche, A. (Azzedine). (2017). Deep reinforcement learning (DRL)-based resource management in software-defined and virtualized vehicular ad hoc networks. In DIVANet 2017 - Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications, Co-located with MSWiM 2017 (pp. 47–54). doi:10.1145/3132340.3132355