A study on deep reinforcement learning-based dynamic enhanced inter-cell interference coordination scheme in dense heterogeneous networks

As wireless network traffic grows, improving spectral efficiency has approached a fundamental limit, and technologies such as massive multiple-input multiple-output (MIMO), mmWave communication, and beamforming have emerged in response. Cell densification is an equally inevitable trend, and the cells of mobile communication networks are becoming increasingly dense and irregular. A heterogeneous network is one in which low-power nodes (LPNs), called small cells, are deployed on top of an existing macrocell layer. Dense heterogeneous networks with many small cells have attracted attention as an economically practical way to increase network capacity in 5G. However, critical technical problems remain to be solved, such as interference coordination (IC) and self-organizing network (SON) operation, in order to improve spectral efficiency and energy efficiency.

Meanwhile, the 5G network has three core service classes: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). Thirteen performance targets are defined for 5G; the eight core ones are peak data rate, user-experienced data rate, spectral efficiency, mobility, latency, connection density, energy efficiency, and area traffic capacity. Because services with diverse requirements can exist by combining the three service classes, it is necessary not only to satisfy each user's QoS requirements but also to evaluate whether those requirements are actually being met. This thesis therefore studies a load-balancing technique that maximizes the QoS satisfaction ratio in dense heterogeneous networks, and further investigates an enhanced inter-cell interference coordination (eICIC) method that considers both the QoS satisfaction ratio and energy efficiency.

In a dense heterogeneous network, a shorter distance between a base station (BS) and a user improves radio channel quality and data rate; at the same time, a shorter distance between small cells increases inter-cell interference, and load-balancing actions at one BS affect the performance of neighboring BSs. To solve this problem, a cooperative multi-agent load-balancing technique based on online reinforcement learning is proposed. The method aims to maximize the QoS satisfaction ratio, and a QoS satisfaction indicator (QSI) is defined to evaluate it. The QSI indicates whether a user's QoS is sufficiently guaranteed, given requirements expressed as a required data rate, a maximum delay bound, and a delay-violation probability. From this indicator, a QSI utility is defined that represents the network's utility in terms of QoS (both quantities are sketched in the first code example below).

Load balancing is achieved by adjusting the bias offset (BO) parameter of cell range expansion (CRE), a component of eICIC, and the network environment is modeled as a Markov decision process (MDP) for online reinforcement learning. In the MDP, the state of each cell is defined from QSI statistics, each cell's action is its BO value, and the reward is the resulting QSI utility. A coordination graph (CG) represents the cooperative relationships among neighboring cells: the Q function is decomposed along the CG into per-cell and pairwise cooperative Q functions according to each cell's contribution, and a message-passing algorithm is proposed that finds the optimal joint action from these local Q functions (see the max-plus sketch below). Cooperative multi-agent online reinforcement learning thus maximizes the sum of the per-cell rewards, i.e., the sum of QSI utilities, and in the end performs load balancing that maximizes the number of users whose QoS is guaranteed in each cell. Simulations verify the effectiveness of the proposed method in terms of throughput, QoS satisfaction ratio, and fairness.
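The abstract does not give the exact form of the QSI or the QSI utility, so the following Python sketch is only a plausible reading of the definitions above: a user counts as satisfied when its measured rate meets the requirement and its empirical delay-violation frequency stays within the allowed bound. All names and thresholds are illustrative assumptions, not the thesis's actual formulas.

```python
from dataclasses import dataclass

@dataclass
class QoSRequirement:
    min_rate_bps: float        # required data rate
    max_delay_s: float         # maximum delay bound
    max_violation_prob: float  # allowed delay-violation probability

def qsi(measured_rate_bps, packet_delays_s, req):
    """QoS satisfaction indicator (illustrative): 1 if the user's rate
    requirement is met and the fraction of packets exceeding the delay
    bound stays within the allowed violation probability, else 0."""
    if measured_rate_bps < req.min_rate_bps:
        return 0
    if not packet_delays_s:
        return 1
    violations = sum(d > req.max_delay_s for d in packet_delays_s)
    return 1 if violations / len(packet_delays_s) <= req.max_violation_prob else 0

def qsi_utility(users):
    """Network QSI utility (illustrative): here simply the QoS
    satisfaction ratio, i.e. the fraction of users whose QSI is 1.
    `users` is a list of (measured_rate, delays, requirement) tuples."""
    if not users:
        return 0.0
    return sum(qsi(r, d, q) for r, d, q in users) / len(users)
```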
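The abstract names the ingredients of the cooperative step (a CG, a decomposed Q function, message passing) but not the exact algorithm. The sketch below assumes a standard max-plus scheme over per-cell and pairwise Q terms; the function names, the fixed iteration count, and the omission of message normalization are all assumptions.

```python
import itertools

def max_plus(actions, q_local, q_pair, edges, n_iters=10):
    """Max-plus message passing on a coordination graph (sketch).

    actions : dict cell -> list of candidate BO values
    q_local : dict cell -> {a_i: Q_i(a_i)}                    per-cell term
    q_pair  : dict (i, j), i < j -> {(a_i, a_j): Q_ij(...)}   pairwise term
    edges   : list of (i, j) pairs with i < j
    Returns an (approximately) optimal joint action.
    """
    directed = list(itertools.chain(edges, [(j, i) for (i, j) in edges]))
    msgs = {(i, j): {a_j: 0.0 for a_j in actions[j]} for (i, j) in directed}

    def pair_q(i, j, a_i, a_j):
        # q_pair is stored once per undirected edge, keyed with i < j
        if (i, j) in q_pair:
            return q_pair[(i, j)][(a_i, a_j)]
        return q_pair[(j, i)][(a_j, a_i)]

    # (on cyclic graphs, messages are usually normalized; omitted here)
    for _ in range(n_iters):
        for (i, j) in directed:
            for a_j in actions[j]:
                msgs[(i, j)][a_j] = max(
                    q_local[i][a_i] + pair_q(i, j, a_i, a_j)
                    + sum(m[a_i] for (k, t), m in msgs.items() if t == i and k != j)
                    for a_i in actions[i])

    # each cell picks the action maximizing its own term plus incoming messages
    return {i: max(actions[i], key=lambda a_i: q_local[i][a_i]
                   + sum(m[a_i] for (k, t), m in msgs.items() if t == i))
            for i in actions}

# toy usage: two neighboring cells, each choosing a BO of 0 or 6 dB
acts = {0: [0, 6], 1: [0, 6]}
ql = {0: {0: 1.0, 6: 0.2}, 1: {0: 0.5, 6: 0.4}}
qp = {(0, 1): {(0, 0): -0.5, (0, 6): 0.3, (6, 0): 0.2, (6, 6): -0.4}}
print(max_plus(acts, ql, qp, edges=[(0, 1)]))   # -> {0: 0, 1: 6}
```

On a tree-structured CG this recovers the exact optimum of the decomposed Q function; on cyclic graphs it is the usual approximate variant.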
Because energy consumption grows with the number of small cells in a dense heterogeneous network, an energy-efficient eICIC technique based on deep reinforcement learning is also proposed for energy-efficient network operation. The method uses deep reinforcement learning to determine the optimal values of all eICIC parameters, namely the almost-blank-subframe (ABS) ratio, the macro BS (MBS) transmit power during ABSs, the BO, and the channel quality indicator (CQI) threshold used to classify victim UEs, together with the sleep mode of each BS. To find these values, the eICIC parameters, the SINR, the instantaneous service rate under each sleep mode, and the energy consumption of the BSs are first modeled. An energy-utility efficiency function is then defined so that the network's QSI utility and energy consumption can be considered together; a weighting parameter controls the relative influence of QSI utility and energy efficiency. An MDP is designed for deep reinforcement learning: the state is again defined from QSI statistics, the action of each BS consists of its eICIC parameters and sleep mode, and the reward is the energy-utility efficiency. The optimal action of each BS is found with the well-known deep Q-network (DQN). One DQN agent exists per BS; the DQN input is the state vector of all BSs so that the multiple agents share the environment, and training uses the energy-utility efficiency of the entire network as the reward (a sketch follows this abstract). Simulations verify the effectiveness of the proposed method in terms of learning convergence and, across weighting parameters, energy-utility efficiency, energy efficiency, and QoS satisfaction ratio.

Together, the proposed methods realize an eICIC scheme that is energy efficient and maximizes the QoS satisfaction ratio in a dense heterogeneous network, and, by applying reinforcement learning, enable a SON-capable network. Through this thesis, it is expected that energy efficiency can be improved while the QoS requirements of diverse users in a 5G network are satisfied as fully as possible.
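The thesis pairs one DQN agent with each BS and rewards the whole network's energy-utility efficiency, but the abstract gives neither the network architecture nor the exact efficiency formula. The PyTorch-flavored sketch below therefore only illustrates the shape of that setup; the weighted efficiency function, the layer sizes, and all names are assumptions, and the separate target network of standard DQN is omitted for brevity.

```python
import random
from collections import deque

import torch
import torch.nn as nn

def energy_utility_efficiency(qsi_utility, energy_joules, weight=0.5):
    """Illustrative weighted trade-off between network QSI utility and
    energy consumption (the thesis defines its own function; this
    linear form is only an assumption)."""
    return weight * qsi_utility - (1.0 - weight) * energy_joules

class QNetwork(nn.Module):
    """Maps the concatenated state vector of all BSs to Q values over
    one BS's joint (eICIC parameters, sleep mode) action set."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions))
    def forward(self, x):
        return self.net(x)

class BSAgent:
    """One DQN agent per BS; every agent sees the global state and is
    trained with the network-wide energy-utility efficiency as reward."""
    def __init__(self, state_dim, n_actions, gamma=0.99, eps=0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state):
        if random.random() < self.eps:                  # epsilon-greedy
            return random.randrange(self.n_actions)
        with torch.no_grad():
            s = torch.as_tensor(state, dtype=torch.float32)
            return int(self.q(s).argmax())

    def remember(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.buffer, batch_size))
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a).squeeze(1)        # Q(s, a)
        with torch.no_grad():                           # bootstrapped target
            target = r + self.gamma * self.q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()
```

In training, each agent's remember() call would receive the same network-wide energy_utility_efficiency value as r, so that all BSs are rewarded jointly rather than selfishly.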
Advisors
Choi, Jun Kyun; Park, Hong-Shik
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: Department of Information and Communications Engineering, 2021.8, [v, 100 p.]

Keywords

Dense heterogeneous networks; Inter-cell interference coordination; Load balancing; QoS; Energy efficiency; Reinforcement learning

URI
http://hdl.handle.net/10203/295754
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=962486&flag=dissertation
Appears in Collection
ICE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
