In this paper, the distributed edge caching problem with dynamic content recommendation is investigated in fog radio access networks (F-RANs). First, the joint caching and recommendation policy is transformed into a single caching policy by incorporating the recommendation policy into the caching policy, thereby halving the training complexity. Since no existing user request dataset involves content recommendation, we propose a time-varying personalized user request model to describe the fluctuating demands of each user after content recommendation. Then, to maximize the long-term net profit of each fog access point (F-AP), we formulate the caching optimization problem and solve it within a reinforcement learning (RL) framework. Finally, to circumvent the curse of dimensionality of RL and speed up convergence, we propose a double deep Q-network (DDQN) based distributed edge caching algorithm to find the optimal caching policy with content recommendation. Simulation results show that the average net profit of our proposed algorithm is nearly 50% higher than that of traditional methods. Moreover, content recommendation indeed accelerates convergence and improves cache efficiency.
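For readers unfamiliar with double DQN, the sketch below illustrates the core target computation that distinguishes it from vanilla DQN and underlies its reduced overestimation bias: the online network selects the best next caching action while the target network evaluates it. This is a minimal, generic sketch, not the paper's implementation; the function names, the discount factor GAMMA, and the use of plain NumPy arrays in place of neural-network outputs are all illustrative assumptions.

```python
import numpy as np

GAMMA = 0.9  # assumed discount factor for the long-term net profit (illustrative)

def ddqn_target(reward, next_q_online, next_q_target, done):
    """Double DQN target for one transition.

    The online network's Q-values select the next action (argmax),
    while the target network's Q-values evaluate it. Decoupling
    selection from evaluation mitigates the Q-value overestimation
    of vanilla DQN.
    """
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # selection: online net
    return reward + GAMMA * next_q_target[best_action]   # evaluation: target net

# Toy usage: 4 hypothetical caching actions, one transition.
next_q_online = np.array([0.2, 1.1, 0.7, 0.3])
next_q_target = np.array([0.25, 0.9, 0.8, 0.4])
print(ddqn_target(reward=1.0, next_q_online=next_q_online,
                  next_q_target=next_q_target, done=False))
```

In a full agent, each F-AP would maintain its own online and target networks, train the online network toward this target over replayed transitions, and periodically copy its weights into the target network.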

Additional Metadata
Keywords Content recommendation, Deep reinforcement learning, Distributed edge caching, Fog radio access networks, User request model
Persistent URL dx.doi.org/10.1109/ICCWorkshops49005.2020.9145039
Conference 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020
Citation
Yan, J. (Jie), Jiang, Y. (Yanxiang), Zheng, F. (Fuchun), Yu, F. R., Gao, X. (Xiqi), & You, X. (Xiaohu). (2020). Distributed edge caching with content recommendation in Fog-RANs via deep reinforcement learning. In 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings. doi:10.1109/ICCWorkshops49005.2020.9145039