Both caching and interference alignment (IA) are promising techniques for future wireless networks. Nevertheless, most existing works on cache-enabled IA wireless networks assume that the channel is invariant, which is unrealistic given the time-varying nature of practical wireless environments. In this paper, we consider realistic time-varying channels; specifically, the channel is modeled as a finite-state Markov channel (FSMC). Because the system complexity becomes very high under realistic FSMC models, we propose a novel big data deep reinforcement learning approach. Deep reinforcement learning is an advanced reinforcement learning algorithm that uses a deep Q-network to approximate the action-value (Q) function; in this paper, it is used to obtain the optimal IA user selection policy in cache-enabled opportunistic IA wireless networks. Simulation results demonstrate the effectiveness of the proposed scheme.
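To illustrate the idea described above, the sketch below trains a tiny one-hidden-layer Q-network by Q-learning to pick which user to serve in each slot of a toy FSMC. This is not the authors' implementation: the state/user counts, the FSMC transition matrix `P`, the per-state rate table `R`, and all hyperparameters are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's implementation): a minimal neural
# Q-network trained by Q-learning to select which user to schedule in each
# slot of a toy finite-state Markov channel (FSMC). All sizes, rewards,
# and the transition matrix below are assumed values for demonstration.

rng = np.random.default_rng(0)

N_STATES = 4   # quantized FSMC channel states (e.g., SNR levels)
N_USERS = 3    # actions: which user to select for IA
HIDDEN = 16
ALPHA = 0.01   # learning rate
GAMMA = 0.9    # discount factor

# Toy FSMC: row-stochastic transition matrix over channel states.
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.3, 0.7]])

# Toy reward: achievable rate when user a is served in channel state s.
R = rng.uniform(0.0, 1.0, size=(N_STATES, N_USERS))

# One-hidden-layer Q-network: one-hot state -> Q-value per user.
W1 = rng.normal(0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_USERS))

def one_hot(s):
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

def q_values(s):
    h = np.maximum(0.0, one_hot(s) @ W1)   # ReLU hidden layer
    return h @ W2, h

def train(steps=2000, eps=0.1):
    global W1, W2
    s = 0
    for _ in range(steps):
        q, h = q_values(s)
        # Epsilon-greedy user selection.
        a = int(rng.integers(N_USERS)) if rng.random() < eps else int(np.argmax(q))
        s_next = rng.choice(N_STATES, p=P[s])   # FSMC transition
        target = R[s, a] + GAMMA * np.max(q_values(s_next)[0])
        td_err = target - q[a]
        # SGD on squared TD error, chosen action only.
        grad_out = np.zeros(N_USERS)
        grad_out[a] = -td_err
        dh = (grad_out @ W2.T) * (h > 0)        # backprop through ReLU
        W2 -= ALPHA * np.outer(h, grad_out)
        W1 -= ALPHA * np.outer(one_hot(s), dh)
        s = s_next

train()
policy = [int(np.argmax(q_values(s)[0])) for s in range(N_STATES)]
print("greedy user per channel state:", policy)
```

A full DQN as used in the paper would add experience replay and a separate target network; this sketch keeps only the core loop of Q-value approximation with epsilon-greedy selection over FSMC states.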

Additional Metadata
Keywords Caching, deep reinforcement learning, interference alignment
Conference 2017 IEEE International Conference on Communications, ICC 2017
He, Y. (Ying), Liang, C. (Chengchao), Yu, F. R., Zhao, N. (Nan), & Yin, H. (Hongxi). (2017). Optimization of cache-enabled opportunistic interference alignment wireless networks: A big data deep reinforcement learning approach. In IEEE International Conference on Communications. doi:10.1109/ICC.2017.7996332