Hafnia-based nonvolatile memory devices for deep neural network hardware accelerator

Recently, an intelligent society based on hyper-connectivity has been emerging with the development of digital technologies such as artificial intelligence and big data. In particular, deep neural network computation, a class of artificial intelligence algorithms built from multiple interconnected network layers, is a core technology of the fourth industrial revolution. Deep neural network workloads have conventionally been executed on general-purpose computing devices, because the learning and inference processes consist of a large number of multiplication and addition operations. However, a computing device such as a central processing unit (CPU) is poorly suited to high-speed deep neural network computation, since it performs complex operations serially. Consequently, extensive research has been conducted on deep neural network hardware accelerators, which perform relatively simple calculations in parallel and assist the computing device through a fast interface with a memory device. Nevertheless, in deep neural network systems based on the von Neumann architecture, where the computing device and the memory device are separated, frequent memory accesses for loading and storing the data required for computation create a bottleneck that causes large time delays and power consumption. To minimize this bottleneck, computing-in-memory architectures have recently emerged, with the advantage of performing simple operations such as addition and multiplication directly within the memory device. In this thesis, two hafnia-based nonvolatile memory devices are studied to implement a deep neural network hardware accelerator based on a new computing-in-memory architecture.

In the first chapter of this thesis, we describe the problems of deep neural network computation on the von Neumann architecture and the need for a computing-in-memory architecture. In addition, we analyze neural network hardware accelerator models that utilize various memory devices. Finally, we define the properties required of a nonvolatile memory (NVM) device for this accelerator model: memory window, switching speed, endurance, retention, multilevel operation (and its symmetry), and density.

In the second chapter of this thesis, we study an interfacial dipole switching memory device, which has recently been reported as a promising NVM. A conventional interfacial dipole switching device cannot be applied to a deep neural network hardware accelerator because of its small memory window (≈ 2 V). To make it suitable for such an accelerator, we improved the memory window (to ≈ 8.05 V) by proposing a new device structure that induces bidirectional oxygen atom relocation. Additionally, we performed various electrical and physical analyses to show that the nonvolatile memory characteristic originates from the interfacial dipole switching mechanism.

In the third chapter of this thesis, we present a hafnia ferroelectric field-effect transistor (FeFET) with a floating gate (metal-hafnia ferroelectric-metal-gate oxide-silicon, MFMIS). Employing the MFMIS FeFET, we improved the memory window, endurance, and switching speed by controlling the capacitance ratio of the ferroelectric layer and the SiO2 gate oxide. Furthermore, to achieve stable ferroelectricity in a relatively thick (30 nm) ferroelectric film, we modulated the hafnium-to-zirconium ratio. Electrical and physical analyses confirmed that a stable orthorhombic phase (o-phase) forms even in this relatively thick ferroelectric. We integrated the relatively thick HZO film into a field-effect transistor, which yields a wide memory window (≈ 16 V), fast switching speed (≈ 20 ns), and excellent endurance (> 10^11 cycles).
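
As an illustration of the multiply-accumulate (MAC) step that such a computing-in-memory array carries out, the following minimal Python sketch (not part of the thesis; all array sizes, conductance ranges, level counts, and voltage encodings are assumed values) shows how multilevel NVM cells arranged in a crossbar can compute a neural-network layer in place via Ohm's and Kirchhoff's laws.

import numpy as np

# Illustrative sketch: a crossbar of multilevel nonvolatile memory cells
# performing the multiply-accumulate (MAC) step of a neural-network layer
# in place. Each cell multiplies via Ohm's law (I = G * V) and each column
# sums currents via Kirchhoff's current law, so no weight data has to move
# between a separate memory and a separate processor.
rng = np.random.default_rng(0)

N_IN, N_OUT = 64, 16          # layer dimensions (assumed)
LEVELS = 16                   # multilevel states per cell (assumed)
G_MIN, G_MAX = 1e-6, 1e-4     # cell conductance range in siemens (assumed)

# Trained weights in [0, 1], quantized to the available multilevel states.
weights = rng.random((N_IN, N_OUT))
codes = np.round(weights * (LEVELS - 1)) / (LEVELS - 1)

# Map each quantized weight to a cell conductance between G_MIN and G_MAX.
G = G_MIN + codes * (G_MAX - G_MIN)

# Encode input activations as read voltages on the rows (assumed 0-0.2 V).
x = rng.random(N_IN)
V = 0.2 * x

# Analog MAC: one parallel read yields the column currents, i.e. the
# vector-matrix product of the inputs with the stored weights.
I_column = V @ G

# The same result computed digitally, for comparison.
digital = (0.2 * x) @ (G_MIN + codes * (G_MAX - G_MIN))
print(np.allclose(I_column, digital))  # True: both give the same MAC result

In such a scheme, the NVM properties listed above map directly onto accelerator requirements: the number of stable multilevel states and their symmetry set the weight precision, while endurance and retention determine how often and how reliably the weights can be reprogrammed and stored.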
Advisors
Jeon, Sanghun (전상훈)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2021.2, [v, 42 p.]

Keywords

Artificial Intelligence; Deep Neural Network; Hardware Accelerator; Non-volatile Memory; Interfacial Dipole Modulation; Ferroelectric Field Effect Transistor; Hafnia

URI
http://hdl.handle.net/10203/296051
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948686&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
