Policy Gradient Reinforcement Learning-based Optimal Decoupling Capacitor Design Method for 2.5-D/3-D ICs using Transformer Network

In this paper, we propose, for the first time, a policy gradient reinforcement learning (RL)-based optimal decoupling capacitor (decap) design method for 2.5-D/3-D integrated circuits (ICs) using a transformer network. The proposed method provides an optimal decap design that meets the target impedance. Unlike previous value-based RL methods, which rely on simple value approximators such as a multi-layer perceptron (MLP) or a convolutional neural network (CNN), the proposed method directly parameterizes the policy with an attention-based transformer network. The model is trained with the policy gradient algorithm, which allows it to handle a larger action space, i.e., search space. For verification, we applied the proposed method to a test hierarchical power distribution network (PDN) and compared its convergence against the previous value-based RL method for different action-space sizes. The results validate that the proposed method covers an action space four times larger than that of the previous work.
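The abstract gives no implementation details, but the core idea of parameterizing the policy directly with a transformer and training it with a policy gradient can be illustrated with a minimal sketch. The sketch below assumes a hypothetical setup in which each candidate decap port is described by a feature vector, a transformer encoder scores the ports, and a REINFORCE-style update uses a placeholder reward_fn standing in for the PDN impedance evaluation; the class and function names (DecapPlacementPolicy, reinforce_step) are illustrative and are not the authors' code.

import torch
import torch.nn as nn

class DecapPlacementPolicy(nn.Module):
    """Hypothetical transformer policy that scores candidate decap ports.

    Port features (e.g., port location and local PDN impedance features)
    are embedded, mixed by self-attention, and mapped to one logit per
    port; the logits define a categorical policy over placement actions.
    """
    def __init__(self, feature_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(feature_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, port_features):
        # port_features: (batch, num_ports, feature_dim)
        h = self.encoder(self.embed(port_features))
        return self.head(h).squeeze(-1)  # logits: (batch, num_ports)

def reinforce_step(policy, optimizer, port_features, reward_fn):
    """One REINFORCE (policy gradient) update on sampled placements.

    reward_fn is a stand-in for evaluating the PDN impedance of the
    sampled decap placement and returning a scalar reward per sample.
    """
    logits = policy(port_features)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                       # chosen port per rollout
    reward = reward_fn(action)                   # shape: (batch,)
    loss = -(dist.log_prob(action) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random features and a dummy reward, for illustration only:
policy = DecapPlacementPolicy(feature_dim=8)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
features = torch.randn(16, 40, 8)                # 16 rollouts, 40 candidate ports
reinforce_step(policy, optimizer, features, lambda a: torch.randn(a.shape[0]))

Because the policy is sampled rather than argmax-selected from a learned value table, the number of candidate ports (and hence the action space) can grow without enumerating values for every state-action pair, which is consistent with the larger-action-space claim in the abstract; the single-step selection shown here is a simplification of what would in practice be a sequential placement process.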
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2020-12
Language
English
Citation
2020 IEEE Electrical Design of Advanced Packaging and Systems, EDAPS 2020
ISSN
2151-1225
DOI
10.1109/EDAPS50281.2020.9312908
URI
http://hdl.handle.net/10203/310776
Appears in Collection
EE-Conference Papers (Conference Papers)